
/jp/ - Otaku Culture

>> No.20535731
File: 578 KB, 766x906, 5cf10a6438b5855d0ba311628892068ff7561944.png

>>20535424
Yes
>>20535444
>>20535408
A human brain is physical proof that it is possible to run translation tasks (and much more!) on a ~20 W machine. Vision alone takes up around 30% of your cortex, and for many vision tasks we're nearing "solved problem" status. There is nothing magical about the brain: it is a very complex and mysterious biological device, for sure, but a device nonetheless.

>>20535461
I really hate the term "neural network" because, beyond some initial inspiration (especially convolutional networks, whose structure is further inspired by the layout of vision neurons), deep neural networks are almost entirely divorced from biology. They're (for the most part) just layers of linear algebra stacked on top of each other, with non-linear functions in between (often tanh or sigmoids for natural language processing).
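To make "layers of linear algebra with non-linearities" concrete, here's a minimal sketch: each layer is an affine map followed by tanh, and a "deep" network is just a chain of them. All shapes and weights below are made up for illustration; nothing here comes from a real trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b, activation=np.tanh):
    """One dense layer: an affine map (W @ x + b) followed by a non-linearity."""
    return activation(W @ x + b)

# A toy 3-layer network mapping a 4-dim input to a 2-dim output.
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 8)), np.zeros(8)
W3, b3 = rng.standard_normal((2, 8)), np.zeros(2)

x = rng.standard_normal(4)
h1 = layer(x, W1, b1)    # intermediate "features" (learned during training)
h2 = layer(h1, W2, b2)
y = layer(h2, W3, b3)

assert y.shape == (2,)
assert np.all(np.abs(y) <= 1.0)  # tanh keeps each output in [-1, 1]
```

That's the whole trick: training just adjusts the W and b matrices so the chain computes something useful.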

One of the cool things about deep neural networks is that they can learn features automatically and essentially build a compact hidden inner representation of the data. For machine translation this means in practice that if you train encoders and decoders against a shared hidden representation (English -> hidden, hidden -> Japanese, Finnish -> hidden, and so on), you can compose a Finnish -> hidden encoder with a hidden -> Japanese decoder and get a Finnish-to-Japanese engine without having extensive Finnish <-> Japanese translated works. The concept of abstracting language text into a universal compact representation should sound familiar to you; that's basically as close to "thinking" as you're going to get...
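The composition idea can be sketched in a few lines. This is a toy, not a real MT system: the "encoders" and "decoders" are single random matrices standing in for learned networks, and all dimensions are invented. The point is only that any encoder into the shared space can be paired with any decoder out of it, even for a language pair never trained together.

```python
import numpy as np

rng = np.random.default_rng(1)
HIDDEN = 16  # size of the shared, language-neutral representation (made up)

# Pretend these were learned from Finnish<->hidden and hidden<->Japanese data.
enc_finnish = rng.standard_normal((HIDDEN, 32))   # Finnish features -> hidden
dec_japanese = rng.standard_normal((24, HIDDEN))  # hidden -> Japanese features

def translate(src_vec, encoder, decoder):
    hidden = np.tanh(encoder @ src_vec)  # shared "interlingua" representation
    return decoder @ hidden              # decode it into the target language

finnish_sentence_vec = rng.standard_normal(32)
japanese_out = translate(finnish_sentence_vec, enc_finnish, dec_japanese)
assert japanese_out.shape == (24,)
```

In a real system each of those matrices would be a full recurrent network and the vectors would represent word sequences, but the plumbing is the same: everything meets in the middle.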

Some of the challenges right now are handling memory (how far back in the sequence do you have to remember to keep relevant context? With limited memory, how do you learn what is or isn't important for context? See LSTM and GRU networks for reference) and more subtle structures where the slightest contextual clue can completely change the meaning (sarcasm is especially hard to detect, but progress is being made).
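To show what "learning what to remember" looks like, here's a rough sketch of a GRU cell (weights and dimensions are made up, and in practice the parameters would be trained, not random). The update gate z and reset gate r are what let the network learn which parts of its state to keep versus overwrite at each step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step: gated update of the hidden state h given input x."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)             # update gate: how much to rewrite
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate: how much past to use
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate new memory
    return (1 - z) * h + z * h_cand          # blend old state and candidate

rng = np.random.default_rng(2)
D, H = 8, 4  # toy input and hidden sizes
params = tuple(rng.standard_normal(s) for s in
               [(H, D), (H, H), (H, D), (H, H), (H, D), (H, H)])

h = np.zeros(H)
for _ in range(5):  # run the cell over a 5-step toy sequence
    h = gru_step(rng.standard_normal(D), h, params)

assert h.shape == (H,)
assert np.all(np.abs(h) < 1.0)  # state stays bounded: tanh candidate, convex blend
```

An LSTM does the same job with an extra gate and a separate cell state, but the idea is identical: memory is selective, and the selection itself is learned.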

Hope that helps explain why I'm confident we'll get there within 20 years! Reminder that 10 years ago self-driving cars were science fiction. Sorry for going off-topic...
