
/sci/ - Science & Math



File: 222 KB, 840x917, 53-534060_10982552-apu-apustaja-fren-hd-png-download.jpg
No.11332799

brainlet here, can you prove that it is computationally harder to multiply than add?

>> No.11332808

https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions

>> No.11332809

>>11332799
Multiplication is just a bunch of additions computationally, isn't it?

>> No.11332824
File: 11 KB, 300x161, thumb_53-kb-png-apu-apustaja-happy-1172x675-png-52248367.png

>>11332809
those are some cool arithmetic algorithms, but please prove that one cannot find a multiplication algorithm as fast as an addition algorithm.

>> No.11332826
File: 195 KB, 680x532, 618.png

>>11332809
in the most naive algorithm, yes.

>> No.11332998

>>11332809
No it is not.

>> No.11333008

>>11332998
we are talking about ℕ and equivalents, i.e. numbers of arbitrary size

>> No.11333085

multiplication is just sequential addition within a specified limit, so it's harder than addition only in that there are more steps

>> No.11333127

>>11332799
Addition is easier.
That's because when you add two bits, the sum bit is 1 if exactly one of them is 1, and you carry a 1 if both are 1. You get the idea of how easy it is.


Multiplication, in a sense, is multiple additions chained together. But this becomes particularly problematic if you have two large numbers (e.g. 2987654 × 87654321). Should you add 2987654 to itself 87654321 times, or the other way around? You can see how computationally expensive that is, which is why you can't just naively add to perform multiplication. You need to use a specific algorithm.
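
A quick sketch of that gap in Python (the variable names are mine, purely for illustration), counting steps for the example above:

a, b = 2987654, 87654321
# naive repeated addition: one addition per unit of the smaller factor,
# i.e. a step count exponential in the bit length of the input
repeated_addition_steps = min(a, b)              # 2,987,654 additions
# grade-school long multiplication: one single-digit product per digit pair
grade_school_steps = len(str(a)) * len(str(b))   # 7 * 8 = 56 steps
print(repeated_addition_steps, grade_school_steps)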

>> No.11333154

Let's say you have two binary numbers: A has n bits and B has m bits, with m > n.
The time complexity is O(n+m). This should be intuitively clear. After arranging the longer number over the shorter one, starting at the right-hand side and adding-with-carry from right to left will generate a sum of length at most max(n,m)+1. The computation required for each bit of the result is O(1), i.e. constant, since it merely combines the two respective bits of A and B with a possible carry from the previous step. (There are at most eight possibilities for such a step.)
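
A ripple-carry sketch of this in Python (illustrative only, representing numbers as bit lists, least significant bit first):

def add_with_carry(A, B):
    carry, out = 0, []
    for i in range(max(len(A), len(B))):
        a = A[i] if i < len(A) else 0
        b = B[i] if i < len(B) else 0
        s = a + b + carry        # one of the eight (a, b, carry) cases
        out.append(s & 1)        # sum bit
        carry = s >> 1           # carry bit
    if carry:
        out.append(carry)        # length at most max(n, m) + 1
    return out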

Multiplication, on the other hand, uses the grade-school technique (https://en.wikipedia.org/wiki/Multiplication_algorithm#Long_multiplication) but with binary numbers, which means you are doing m additions of n-bit numbers. Hence multiplication is slower.
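
The binary version of the grade-school method is shift-and-add; a minimal sketch (again illustrative Python, using built-in integers for brevity):

def mul_shift_and_add(a, b):
    # one addition per bit of b, each costing O(bits) work,
    # so O(n*m) bit operations overall
    result = 0
    while b:
        if b & 1:
            result += a      # add the current row of the table
        a <<= 1              # shift the row left, as in long multiplication
        b >>= 1
    return result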

Sorry for this terrible explanation

>> No.11333160

>>11333154
>Sorry for this terrible explanation
you used time complexity. Good point

>> No.11333164

>>11333127
>>11333154
Multiplications are done faster than additions if you're working with floating point numbers
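
If you want to check that claim on your own machine, a rough microbenchmark (Python floats are C doubles; interpreter overhead dominates here, so a compiled language would give a fairer comparison, and results vary by CPU):

import timeit

xs = [1.0 + i / 7.0 for i in range(1000)]
add_t = timeit.timeit(lambda: [a + b for a, b in zip(xs, xs)], number=5000)
mul_t = timeit.timeit(lambda: [a * b for a, b in zip(xs, xs)], number=5000)
print(f"add: {add_t:.3f}s, mul: {mul_t:.3f}s")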

>> No.11333166
File: 33 KB, 568x548, d4319645102804a6e00339197353b43b.jpg

>>11333154
ok, but you have to prove that the algorithm you used for multiplication is the fastest (or find a lower bound for the multiplication problem)

>> No.11333202

>>11332799
No.

>> No.11333253

>>11333166

building on what this anon said:
>>11333154

you could argue that since in most cases (not involving 0 or 1) the product has a larger representation than the sum, you'd need at least linear time just to write the product to memory. Not a formal argument, though.
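
For concreteness, the standard size bounds (with A of n bits and B of m bits) are

[math] \mathrm{bits}(A+B) \le \max(n,m)+1, \qquad \mathrm{bits}(A \cdot B) \le n+m [/math]

so the product can be nearly twice as long as the sum, though both are linear in the input size.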

>> No.11333421

>>11333164
redpill me

>> No.11333482

>>11333253
hmm, to store the sum you need max(m,n)+1 bits and to store the product you need m+n bits, both linear, and I don't think you should concern yourself with storing the result here; just output the result digit after digit.

But from the output size itself it seems that both problems only have a linear [math] \Omega [/math] lower bound.

>> No.11333499

>>11333421
floating point representations have a fixed size, so everything can be done in O(1), I think.

>> No.11333509

>>11333154
I believe that you require O(max(m,n)) not O(m+n).

>> No.11333513

>>11333499
a double is not the same as a float

>> No.11333577
File: 166 KB, 564x452, e95.png

The best candidate for a tight lower bound on multiplication is [math] n\log(n) [/math], but it remains unproven. The matching [math] O(n\log n) [/math] upper bound was proven only recently:

https://arxiv.org/abs/1902.10935
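
For a taste of how the schoolbook O(n^2) bound gets beaten on the way down to n log n, here is a Karatsuba sketch (a standard divide-and-conquer method, not the algorithm from the linked paper; illustrative Python):

def karatsuba(x, y):
    # three half-size products instead of four, giving
    # O(n^log2(3)) ~ O(n^1.585) instead of the schoolbook O(n^2)
    if x < 1024 or y < 1024:
        return x * y                        # base case: machine multiply
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # x = xh * 2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)     # y = yh * 2^n + yl
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # = xh*yl + xl*yh
    return (hh << (2 * n)) + (mid << n) + ll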

>> No.11333605

>>11333499

for most intents and purposes it can be assumed that arithmetic operations on machine-word scalars are constant time, but in this case we're concerned with the time complexity of multiplication and addition of arbitrary-length integers, so that's not a useful assumption here

>> No.11334455

Is it possible to invent a multiplicative logic as opposed to the additive one we're using?