
/sci/ - Science & Math



File: 538 KB, 1024x768, Lighthouse.jpg
No.10514431

Hello everyone.
I plan to make a user-friendly GUI app that performs matrix multiplication at runtime. It is a demo/educational app rather than computing software, and I do not want users to have to install additional libraries such as BLAS (or the like). That said, I still want to improve the app's performance by writing my own simple custom "gemm". I would also like an option to run the app on GPUs (CUDA) if the user has appropriate hardware. So I am looking for a general approach to matrix multiplication that need not be the most efficient, but is still better than naive matrix multiplication. So far I have learned about the importance of block-by-block multiplication (which takes the cache size into account), but I feel there is more to it that could be easily implemented.
Thanks
P.S.
Sorry for my English, I am not a native speaker.
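The block-by-block multiplication OP mentions can be sketched as follows. This is a minimal illustration, not a tuned kernel: the function name, the row-major square-matrix layout, and the default tile size `BS` are all my own assumptions. The idea is just that each trio of `BS x BS` tiles is reused while it is still hot in cache.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Cache-blocked matrix multiply: C += A * B, all row-major N x N.
// BS is the tile edge; pick it so three BS x BS double tiles fit in
// cache (e.g. 32-64 is a common starting point). C must be
// zero-initialized by the caller if a plain product C = A * B is wanted.
void gemm_blocked(const std::vector<double>& A,
                  const std::vector<double>& B,
                  std::vector<double>& C,
                  std::size_t N, std::size_t BS = 64)
{
    for (std::size_t ii = 0; ii < N; ii += BS)
        for (std::size_t kk = 0; kk < N; kk += BS)
            for (std::size_t jj = 0; jj < N; jj += BS)
                // Multiply one tile pair. The i-k-j loop order keeps the
                // innermost accesses to B and C sequential in memory.
                for (std::size_t i = ii; i < std::min(ii + BS, N); ++i)
                    for (std::size_t k = kk; k < std::min(kk + BS, N); ++k) {
                        const double a = A[i * N + k];
                        for (std::size_t j = jj; j < std::min(jj + BS, N); ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```

For correctness, the result is identical to the naive triple loop for any `BS`; only the traversal order (and hence cache behavior) changes.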

>> No.10514450
File: 64 KB, 758x644, 26E0BF2D77CD421F849F7FD2B03B6CE2.jpg
No.10514450

Install a matrix library like glm. What do you expect, for us to come up with some magical algorithm for your linear algebra? Also, your software already exists; it is called MATLAB. Now stop polluting my board.

>> No.10514572

>>10514431
Google SIMD and SSE.
Keep in mind that if you want to run your code on CUDA, you have to use the CUDA library (no installation needed by the user).
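The SIMD/SSE suggestion applies most directly to the innermost loop of a gemm, which is an update of the form `C[i][j] += a * B[k][j]` along `j`. A minimal sketch with SSE2 intrinsics (assuming an x86-64 target; the function name and signature are my own):

```cpp
#include <emmintrin.h>  // SSE2 intrinsics: _mm_loadu_pd, _mm_mul_pd, ...
#include <cstddef>

// Vectorized gemm inner kernel: c[0..n) += a * b[0..n),
// two doubles per 128-bit SSE2 register.
void axpy_row(double* c, const double* b, double a, std::size_t n)
{
    const __m128d va = _mm_set1_pd(a);               // broadcast scalar a
    std::size_t j = 0;
    for (; j + 2 <= n; j += 2) {
        __m128d vb = _mm_loadu_pd(b + j);            // 2 doubles of B row
        __m128d vc = _mm_loadu_pd(c + j);            // 2 doubles of C row
        vc = _mm_add_pd(vc, _mm_mul_pd(va, vb));     // c += a * b
        _mm_storeu_pd(c + j, vc);
    }
    for (; j < n; ++j)                               // scalar tail
        c[j] += a * b[j];
}
```

With AVX the same pattern widens to four doubles per register (`_mm256_*_pd`), and a good compiler will often auto-vectorize the plain scalar loop at `-O2`/`-O3` anyway, so measure before hand-tuning.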

Also this belongs in
>>>/g/