
/sci/ - Science & Math


File: 244 KB, 1328x1142, v2-dc8055a32cb7c21a0ee3d64aecdb5f49_r.jpg
No.15674677

do we REALLY need matrix multiplications?

>> No.15674696

>>15674677
Yes

>> No.15674732

>>15674696
why?

>> No.15675010

>>15674732
because they're useful you fucking moron what kind of question is this

>> No.15675067

>>15675010
>calls me a moron
>doesn't even look at the image
either a bot or a dunning kruger

>> No.15675104

>>15675067
it's almost like the summations you posted are all analogous to operations you can perform on a matrix, which again would make them inherently useful (summing over what object exactly?). have you taken a single linear algebra course ever
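for reference, the per-neuron sum sum_i w_i*x_i + b stacked over a whole layer is exactly a matrix-vector product. toy numpy sketch (made-up sizes):

import numpy as np

x = np.random.randn(4)        # 4 inputs
W = np.random.randn(3, 4)     # 3 neurons, 4 weights each
b = np.random.randn(3)        # 3 biases

# per-neuron summation: y[j] = sum_i W[j, i] * x[i] + b[j]
y_loop = np.array([sum(W[j, i] * x[i] for i in range(4)) + b[j]
                   for j in range(3)])

# the same thing written as one matrix-vector product
y_mat = W @ x + b

assert np.allclose(y_loop, y_mat)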

>> No.15675254

>>15675104
let me spell it out for you: adding is much cheaper than multiplication, so why don't we add weights and biases instead of multiplying them?

>> No.15675277

>>15675254
How are you going to do a series of additions to replicate the multiplication of floating-point values? What even makes you think that would be cheaper in FPU cycles anyway? Also, that image leaves out the critical step of back-propagation and the sigmoid function, which definitely can't be done any other way.

>> No.15675296
File: 495 KB, 748x755, 1654434567276.png

>>15674677
We've done fine without matrix multiplication for 200,000 years. I think we don't REALLY need matrix multiplication.

>> No.15675314

>>15674677
is this suggesting you can replace convolution with an addition operation? I'm pretty sure convolution is what your brain neurons usually do; if you look up the visual cortex, they have the same kernel-sized matrices.
Also, one of the reasons convolution is so fast nowadays is that you can write a convolution as a matrix operation and then use BLAS libraries that are already fast.
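rough numpy sketch of that im2col trick (toy version, not what a real BLAS-backed framework actually does internally):

import numpy as np

def conv2d_via_matmul(image, kernel):
    # naive im2col: unroll every kernel-sized patch into a row,
    # then the whole convolution collapses into one matrix multiply
    # (this is cross-correlation, which is what NN "conv" layers compute)
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    patches = np.array([
        image[i:i+kh, j:j+kw].ravel()
        for i in range(oh) for j in range(ow)
    ])                                       # shape (oh*ow, kh*kw)
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                    # 3x3 box blur
print(conv2d_via_matmul(img, k))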

>> No.15675320
File: 67 KB, 645x729, 53243322.jpg

>>15675314
>I am pretty sure convolution is what your brain nerons usually do
Well, if you're pretty sure about it...

>> No.15675322

>>15675320
DERp I smrt HURF

>> No.15675329

>>15675277
https://github.com/huawei-noah/AdderNet
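the core idea in that repo, as far as I can tell, is roughly this (toy numpy sketch, not their actual code): replace each dot product with a negative L1 distance, so the layer only needs additions, subtractions and abs.

import numpy as np

def dense_mul(x, W):
    # ordinary layer: dot products, i.e. multiply-accumulate
    return x @ W

def dense_add(x, W):
    # AdderNet-style layer: negative L1 distance between the input
    # and each weight column, so no multiplications in the layer itself
    return -np.abs(x[:, :, None] - W[None, :, :]).sum(axis=1)

x = np.random.randn(4, 8)      # batch of 4, 8 features
W = np.random.randn(8, 3)      # 3 output units
print(dense_mul(x, W).shape, dense_add(x, W).shape)   # both (4, 3)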

>> No.15675360

>>15675329
>>15674677
You need them when you need them and you don't when you don't. You could probably approximate any function using a deep enough net without multiplication. Whether or not this will work out in your favor for a given problem is a purely empirical question.

>> No.15675402

>>15675360
i wish there was more information about this so i could know if it is a practical thing to use additions over multiplication for all networks

>> No.15675437

>>15675402
>i wish there was more information about this so i could know
Do what those github repo authors did and test it, then there will be more information. This is your chance to finally do something science-adjacent and contribute something.

>> No.15675569

>>15675254
>let me spell it out for you: adding is much cheaper than multiplication
it's the same operation in silicon you utterly retarded faggot, costs are the same

>> No.15675577

>>15675569
>its the same operation in silicon
get out of my thread

>> No.15675639

>>15675577
ask questions from a realistic perspective you ass, binary adding is the only way to computationally do math and it's how all math is done inside any ALU you have ever used

transistors aka flip-flops aka logic really only work one way in this universe, the analogue core is the only one that's even slightly different and still it works in logic and thus still has to do the implicit matrix multiplication

this whole thread is like asking why you have to use an equation to do quadratic math

>> No.15675659

>>15675639
you aren't explaining why it is worse for neural networks to use addition over multiplication

>> No.15675670

>>15675659
there is no way to do multiplication in binary without doing addition

you are a bot

>> No.15675698

>>15675569
>>15675577
>>15675639
>>15675659
>>15675670
You are both retarded. The difference between the "adder core" and the "multiply core" isn't in the fact that a single add op is cheaper than a single mul op in a modern GPU. It's in the number of operations you have to do in total in each case. Think about the difference in the total number of operations between matrix multiplication (in the normal mathematical sense) and a simple element-wise sum.
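to put rough numbers on it (square n x n case, naive algorithm):

# rough operation counts for n x n matrices
n = 1024
matmul_muls = n * n * n          # one multiply per (i, j, k) triple
matmul_adds = n * n * (n - 1)    # accumulating each dot product
elementwise_adds = n * n         # one add per output element

print(f"matmul: {matmul_muls + matmul_adds:,} ops")   # ~2.1 billion
print(f"element-wise add: {elementwise_adds:,} ops")  # ~1 million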

>> No.15675700

>>15675670
listen dunning kruger, i know what you are talking about. you are trying to tell me that a single addition operation is the same thing as multiple addition operations. stop trying to act smart

>> No.15675702

>>15674677
We do whatever works, just keep in mind multiplication is far more versatile.

>> No.15675709
File: 226 KB, 696x352, lj.webm

>>15675569

>> No.15675889

>>15675700
oh so you're pretending to be retarded?
>>15675698
this is definitely an AI or someone with a chris chan tier perspective on compute

>> No.15675905
File: 35 KB, 957x616, tard.png

>>15675889
Here's a picture for low IQs like you. Now compare to the number of operations needed for an element-wise addition.

>> No.15675953

>>15675889
get out of my thread, dunning kruger

>> No.15676498

>>15675402
>i wish there was more information about this so i could know if it is a practical thing to use additions over multiplication for all networks
Experimentation has far outstripped theory. There is no "information" except for the practical results you get when you actually build and test the damn thing. No one in the ML field cares for armchair mouthbreathers who don't do anything but lament that someone hasn't conveniently made something available for them to read.

>> No.15676510

>>15676498
LOL. You're overflowing with rage over being a PyMonkey blindly tuning some hyperparameters, with no ability to grasp the theoretical foundations.

>> No.15676652
File: 25 KB, 1280x806, 1280px-Binary_multiplier.svg.png

>>15675953
>I don't believe in how computers work
LOL
>>15675905
in logic gates you actually have to chain an array of adders to build a multiplier, hence why addition is literally the only thing that matters
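for anyone who wants the shift-and-add idea as code instead of a gate diagram (unsigned integers only, toy Python version):

def shift_and_add_mul(a: int, b: int) -> int:
    # how a binary multiplier works at the bit level:
    # for each set bit of b, add a shifted copy of a
    result = 0
    shift = 0
    while b:
        if b & 1:
            result += a << shift   # the only arithmetic here is addition
        b >>= 1
        shift += 1
    return result

assert shift_and_add_mul(13, 11) == 143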

>> No.15676674

>>15676652
This board ("people" like you in particular) proves that eugenicists were right and that forced sterilization is necessary.

>> No.15676971

>>15676652
i don't mind you doubling down like this, it allows my thread to stay up longer for someone that isn't retarded to reply

>> No.15677138

>>15675639
this isn't true, you can do a very fast dot product via simple voltage bridges; the problem is controlling the weights in this scenario. Search for "hardware dot engine" and that will show you what you need. It's a lot faster than normal ALU operations, but the weights will be fixed, so that sucks.
As a side note, you can transform most ML ops into dot products (inference only, I think), so getting a dot engine with variable weights would make real-time AI viable.

>> No.15677591

>>15677138
how do you load weights onto it if the weights are fixed?

>> No.15677901

>>15676971
>>15676674
I'm glad you could learn something, even if you're both densely delusional and probably retarded

>> No.15677933

I think neural networks are a crutch in ML

>> No.15677949

Matrix multiplication is important: we need it to do operations with matrices, and matrices are important for many fields such as data science and statistics.
That said, Cramer's rule is completely useless.

>> No.15677958
File: 39 KB, 667x500, StateSpace.png

>>15674677
Yes, matrix multiplication is essential for many applications such as controllers (for example state space in pic). I'm not writing this out as a series of additions.
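for reference, one discrete-time step of x_{k+1} = A*x_k + B*u_k, y_k = C*x_k + D*u_k looks like this (toy numpy sketch with made-up matrices, not the system in the pic):

import numpy as np

# made-up 2-state, 1-input, 1-output system
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))          # state
u = np.array([[1.0]])         # input

for _ in range(5):
    y = C @ x + D @ u         # output equation
    x = A @ x + B @ u         # state update, x_{k+1} = A x_k + B u_k
    print(y.ravel())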

>> No.15678133

>>15677591
depends on the underlying tech. the voltage bridges I mentioned do multiplication/addition with simple wires in a matrix with inputs/outputs; look it up, I can't really explain it well. It's why spin waves are so interesting: you can take something like that and get adjustable weights, and it works at room temperature. so far material fabrication is a big problem.

>> No.15678138

>>15678133
and to answer the question, the wires have resistors on them of course, so simple Ohm's law will get you a dot product.
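i.e. the weights live in the conductances and Kirchhoff's current law does the summation for free. toy numeric sketch (made-up values):

import numpy as np

# analog crossbar idea: input voltages on the rows, weights stored as
# conductances G = 1/R, output current on a column = sum_i V_i * G_i
V = np.array([0.5, 1.0, 0.2])        # input voltages (volts)
G = np.array([2e-3, 1e-3, 5e-3])     # conductances (siemens), i.e. the weights

I_out = np.dot(V, G)                 # the wire adds the currents for you
print(f"{I_out*1e3:.2f} mA")         # 0.5*2 + 1.0*1 + 0.2*5 = 3.00 mA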

>> No.15678468

>>15675254
>>15675277
Modern GPUs and mobile chips both have specialized circuits that can do it very quickly, and yes, I think they do it through some trick that avoids FP operations; I don't know exactly how.

>> No.15678470

>>15677901
You are profoundly subhuman.

>> No.15678515

>>15677901
You need as many additions as there are bits, you idiot.