
/sci/ - Science & Math



File: 236 KB, 576x738, Carl_Friedrich_Gauss.jpg
No.6559530

You have 10 seconds to explain the fastest way to sum the numbers 1 through 1000

>> No.6559536

>>6559530
(n(n+1))/2 maybe?
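
(Worked out for n = 1000, that formula gives

\frac{n(n+1)}{2} = \frac{1000 \cdot 1001}{2} = 500500

which matches the answers further down the thread.)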

>> No.6559540
File: 28 KB, 583x501, Untitled.png

>>6559530
See picture (my post is apparently spam)

>> No.6559541

>>6559530
(1+1000)*(1000/2)

But I think I've heard the story before somewhere.

>> No.6559559

Gauss solved this at age 10, if I remember correctly.

>> No.6559572

var number = 0;

for (var i = 1; i <= 1000; i++) {
    number += i;
}

document.write(number);

>> No.6559593

>>6559572
int sum = 0;

for (int i = 1; i <= 1000; ++i) {
    sum += i;
}

printf("%d\n", sum);

>> No.6559597

>>6559593

integer :: i, sum = 0

do i = 1, 1000
   sum = sum + i
end do

print *, sum

>> No.6559599

>>6559530
I couldn't do it in 10 seconds, but I thought about it for about 2 minutes and came up with this:
100+99+98+...+1+2+3+...
=101+101+101...
=101*100
=10100

>> No.6559600

>>6559599
Fuck.

1000+999+998+...+1+2+3+...
=1001+1001+1001...
=1001*100
=100100

>> No.6559603

>>6559600
Double fuck.
100100/2= 50050

>> No.6559605

>>6559599
The thought is correct, but the answer is exactly half of that.
The pairing goes 1000+1, 999+2, ..., down to 501+500, so there are only 500 pairs and you sum 1001*500.

>> No.6559606

>>6559536

prove that this is computationally faster without a lookup table.

>> No.6559610

>>6559597
add :: (Integral a) => a
add = foldl (\acc x -> acc + x) 0 [1..1000]

>> No.6559613

>>6559603
mega triple ultra fuck.
>>6559599
Fuck.
1000+999+998+...+1+2+3+...
=1001+1001+1001...
=1001*1000
=1001000
1001000/2=500500

If I realise I've done something retarded this time I'm not correcting it. Fuck arithmetic. Fuck doing things on trains. Fuck the world.

>> No.6559614

>>6559610
I guess I really could have just written the last part of that.

I haven't done Haskell in a minute

>> No.6559620

>>6559606
It's at most 500 additions compared to 1000, you retard.

>> No.6559623

>>6559530
Σ(n,n,1,1000) = 500500

>> No.6559634

>>6559540
It's probably because it has 1+2+3... in it.

>> No.6559636

>>6559610
It can be reduced to
add = foldl (+) 0 [1..1000]
or even
add = sum [1..1000]

>> No.6559638

>>6559636
And for a complete program I'd rather write
main = print $ sum [1..(1000::Integer)]

>> No.6559647

>>6559530
Arithmetic series.
a = 1
d = 1
n = 1000
Sn = (n/2)(2a+(n-1)d)
S1000 = (1000/2)(2+999)
= 500*1001
= 500500

>> No.6559649

>>6559620

and division happens magically? yes, it's easy in base 2, but what about in the general case?

>> No.6559657

<div class="math"> \sum_{k=1}^{\infty} k=-\frac{1}{12} </div>

<div class="math">1+2+3+ \cdots +1000+1001+1002 \cdots=-\frac{1}{12} </div>

<div class="math"> \sum_{k=1}^{1000} k=-\frac{1}{12} -1001-1002-1003- \cdots </div>

<div class="math"> \sum_{k=1}^{1000} k=-\infty </div>

>> No.6559707
File: 20 KB, 400x400, 1311947803699.jpg

Wolfram Alpha can do it pretty fast.

>> No.6559719

>>6559572
: t 0 1000 0 do i 1+ + loop . ;
t 500500 ok

>> No.6559764

write a recursive function

>>6559707
I found the engineer!

>> No.6559765

>>6559764
int sum(int number) {
return (number+sum(number-1));
}

>> No.6559769

a=max(a+1,1000);
n=n+a;

>> No.6559796

>>6559765
Stack overflow.

>> No.6559801

>Recursive functions computing something as simple as this
Why

int sum = 0;
for (int i = 1; i <= 1000; ++i) sum += i;

>> No.6559805

>>6559530
Use a lookup table.

Answer: 500500.

Put it into memory and get it when you need it.

>> No.6559995

>>6559606
A multiplication, a shift, and an addition ARE ALWAYS going to be faster than a memory read.

retarded cs majors belong on >>>/g/nu/faggots

>> No.6559999

>>6559593
#include <iostream>

template <int n>
struct sum {
    enum { value = n + sum<n - 1>::value };
};

template <>
struct sum<0> {
    enum { value = 0 };
};

int main() {
    std::cout << sum<1000>::value;  // computed entirely at compile time
}

The fastest way.

>> No.6560193

>>6559995
>A multiplication, a shift, and an addition ARE ALWAYS going to be faster than a memory read.
Absolutely untrue.
One memory (RAM or ROM) access can be faster than a full multiplication algorithm, even on a high-frequency processor (~10^-9 s per cycle).

>> No.6560223

>>6559530
1+1000
2+999
3+998
all equal 1001
So it's 1001(1000)/2.

>> No.6560235

>>6560193
http://stackoverflow.com/questions/10274355/cycles-cost-for-l1-cache-hit-vs-register-on-x86
Multiplication is 5 cycles; addition and shift are 1 each on an i3. So unless you have it stored in the L1 cache, it is not.

>> No.6560242

>>6559765
>not defining an end condition
you fail at recursion
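
(For reference, a minimal sketch of >>6559765's function with the missing base case added; stopping at 0 is an assumption about the intended range:)

int sum(int number) {
    if (number <= 0)    /* base case: stops the recursion that >>6559765 omitted */
        return 0;
    return number + sum(number - 1);
}

(Called as sum(1000) this returns 500500, though it still costs one stack frame per term.)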

>> No.6560248
File: 5 KB, 262x292, CS majors.png

>>6560193
You're retarded. A multiply, an add, and a shift take 2-3 instructions. A memory read takes dozens, hundreds, or thousands of cycles depending on how far away the data is.

Even for extended-precision ints, the sheer size of the table would force large gaps in it, and the time for the (n*gap) additions plus the memory read will kill off any performance gains.

>> No.6560258

>>6559530

Use a simple SIMD algorithm in CUDA or OpenCL. Make it extra fast by writing it properly and using shared memory.

Parallel prefix sum gives an algorithm whose running time is logarithmic in the number of elements; see the sketch below.

This could be improved further by using a brand-new architecture that combines the memory of the GPU and CPU into one.
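
(A rough sequential sketch of that tree-reduction idea in plain C, not actual CUDA/OpenCL; the array setup is illustrative only. Each outer pass adds disjoint pairs and halves the number of partial sums, so with one thread per pair you would need only about log2(1000) ≈ 10 parallel steps:)

#include <stdio.h>

#define N 1000

int main(void) {
    static int a[N];
    for (int i = 0; i < N; i++)
        a[i] = i + 1;                   /* fill with 1..1000 */

    /* pairwise (tree) reduction: each pass models one parallel step */
    for (int n = N; n > 1; n = (n + 1) / 2) {
        for (int i = 0; i < n / 2; i++)
            a[i] = a[2 * i] + a[2 * i + 1];
        if (n % 2)                      /* odd element left over: carry it forward */
            a[n / 2] = a[n - 1];
    }

    printf("%d\n", a[0]);               /* prints 500500 */
    return 0;
}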

>> No.6560260

>>6560258
>typical CS major

>> No.6560288

>>6559995
>>6560193
>>6560248

how the fuck did you nerds get to memory access speeds

a lookup table is a type of oracle, and it needs to be neither complete nor exhaustive

5*6?
oracle: <5,25>
ALU: 25+5=30

so with a lookup table that has every 25x, the computational complexity of 5*n has a lower bound of n%5 additions, which is constant, as opposed to an additive approach, which is omega(n).

furthermore whether you put the lookup table on a server in iran, on your hdd, in your ram, in your l1 cache or hardwire it into the multi section of your alu is irrelevant in terms of computational complexity.

..

missing the point by the radius of the observable universe once again.

>> No.6560300

>>6560288
>computability theory with oracles
Fuck off

>> No.6560344

>>6560288
>>>hardwire it into the multi section of your alu is irrelevant
>Hardwiring 134,217,728 TiB of data directly into a CPU is irrelevant to avoid 3 64bit instructions

Definitely a cs major. Leave this board, as it is not for your kind.

>in terms of computational complexity

Yes, lets look at the complexity in term of both time and space of your approach

Multiplication takes O(n^2)-Ω(n log n) time and Θ(n) space.
Your lookup table takes Θ(n) time and Θ(n·2^n) FUCKING SPACE.

>HURR DURR >computational complexity theory!
>LOOK MOMMY, ME USED BIG MATHY WORDS! ME R MASTER MATH EXPERT!

>> No.6560356

>>6560260

Find me a faster algorithm that isn't quantum or based on BogoSort.

I can wait.

>> No.6560364

>>6559530
Isn't doing >>6559536
technically not "summing the numbers"?

That's just "finding the sum", which is different from actually performing the sum.

If you allow any method which provides you with the correct sum, then the fastest way to get that answer would simply be to have that data stored in a memory and then simply output the value when you ask for that sum.

>> No.6560377

>>6559559
it isn't a hard problem; you can contrive solutions using almost any method imaginable

>> No.6560387

>>6559636
I realized that as soon as I left for work.
Haskell is cool as shit.

>> No.6560403

>>6560364
>fastest way to get that answer would simply be to have that data stored in a memory

>exponential memory requirement
>fastest

Hell no.

Iterative addition takes Θ(n·2^n) time and Θ(n) space.
Precomputed lookup (which is cheating, as you computed it somewhere) takes Θ(n) time and Θ(n·2^n) space.
Dynamic lookup takes Θ(n·2^n) space and time.
Multiplication takes ~Θ(n log(n) log(log(n))) time and Θ(n) space.

>> No.6560409

>>6560344
>LOOK MOMMY, ME USED BIG MATHY WORDS! ME R MASTER MATH EXPERT!
this is hilarious coming from the pedant who uses TiB and Θ(n) instead of TB and O(n)

>> No.6560436

>>6560409
TB would have given a fractional number, and I mean Θ(·), not O(·).

>> No.6560819

I did it in about 8 seconds:
1 + 1000 = 1001
2 + 999 = 1001
...
500 + 501 = 1001

therefore the total sum is 500 x 1001
= 500,500

>> No.6561113

>>6559530

1+1000 = 1001 and there are 500 lots of this
2+999 etc
.....
1001*500 = 500500

>> No.6561123

>>6559530
Add the first and last term.
Then add the second and second-to-last terms.

They both add up to the same thing. In fact, all of the rest of the pairs also do that.

So take that number, and multiply it by the number of pairs, which is half the total number of terms.
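
(In symbols, for even n, that pairing argument is

\underbrace{(1+n) + (2+(n-1)) + \cdots}_{n/2 \text{ pairs}} = \frac{n}{2}(n+1)

which for n = 1000 gives 500 \cdot 1001 = 500500.)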

>> No.6561196

int a = 1, b = 1000, c, sum;
c = b - a + 1;
if (c % 2 == 1)
    sum = (a + b) * (c / 2 + 1);
else
    sum = (a + b) * (c / 2);

>> No.6561197

>>6561196
*(c+1)/2

>> No.6561225

>>6559530
1) Find bash
2) type 'ghci'
3) wait 2 seconds
4) type 'sum [1..1000]'.

>> No.6561386

>>6559995
You have obviously never heard of the twisted optimisations going on inside your PC's memory. It's basically just as fast on average.