
/sci/ - Science & Math


>> No.14504007
File: 27 KB, 519x604, Example-of-a-simple-neural-network-architecture-with-one-hidden-layer.png

can someone explain the naming conventions in neural networks to me? assuming simple dense ones for now
for example, in this image there's an input layer with 3 neurons, 1 hidden layer with 5 neurons, and an output layer with 3 neurons
there are 3x5 = 15 weights between the input and hidden layer (20 parameters if we count the 5 biases)
do we say that the hidden layer has these 15/20 weights, or does the input layer?
and then there are another 5x3 = 15 weights between the hidden and output layer; do we say those belong to the output layer then?
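
a minimal sketch of one way to check this (assuming PyTorch; Keras follows the same convention of attaching each weight matrix to the layer it feeds into, so the hidden layer owns the input->hidden weights):

import torch.nn as nn

# the 3 -> 5 -> 3 network from the image
model = nn.Sequential(
    nn.Linear(3, 5),   # hidden layer: owns the 15 input->hidden weights + 5 biases = 20 params
    nn.ReLU(),
    nn.Linear(5, 3),   # output layer: owns the 15 hidden->output weights + 3 biases = 18 params
)

for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# 0.weight (5, 3)  <- input->hidden weights live on the first Linear (the hidden layer)
# 0.bias (5,)
# 2.weight (3, 5)  <- hidden->output weights live on the second Linear (the output layer)
# 2.bias (3,)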
also, looking at some examples, i saw some "simple" networks that had like 1024 -> 512 -> 128 -> 16 neurons
do they then have 1024x512 + 512x128 + 128x16 weights? that's 591872 weights. i know there are networks with billions of parameters, but do even simple ones get that huge?
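
a quick sanity check of that count in plain Python (layer_sizes is just an illustrative name):

layer_sizes = [1024, 512, 128, 16]

weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))  # weights between consecutive layers
biases = sum(layer_sizes[1:])                                       # one bias per non-input neuron

print(weights)           # 591872, matching the count above
print(weights + biases)  # 592528 total parameters once biases are included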
