
/sci/ - Science & Math



File: 49 KB, 620x330, image.jpg
No.10734081

>Correctly generates infinite amount of realistic high definition faces and a lot of absolute bullshit tier snapchat filters

>Effectively shits itself when it sees a parked car when doing autopilot

AI is going nowhere.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

>> No.10734122

Tech salesman here. "A.I." is just the latest buzzword, resurrected from the early days of computing really. We'll see it tacked onto everything, like "cloud" was 5 years ago.
People have always tended to conceive of the mind using the available metaphors of their era's technology.
When alchemy was all the rage, people conceived of the mind as alchemical and related to precious metals. With the invention of steam engines, the mind was thought of in terms of pneumatic tubes and vital essences. The moment logical circuits were made, scientists immediately thought "what if that's how our brain works?"
Next it will be quantum computers, and everything will be thought of as quantum whatever.

>> No.10734127

>>10734081
No shit

>> No.10734231

>>10734081
https://youtu.be/QZdnM3F6ydw
Are you living under a rock? That was accurate maybe two years ago.

>> No.10734293

>>10734231
I think YOU are the one who isn't aware of the current state of Tesla AI:

>Jun 15 2019
>Autopilot Software CRASHES & It Navigates A Roundabout - Tesla Autopilot in a UK City #7 Worcester


https://www.youtube.com/watch?v=L3cuA40QL28

>min 2:39

>> No.10734383

>>10734293
Tesla a shit, who knew.

>> No.10734417

>>10734081
>AI is going nowhere.
You're right, but not in the sense you mean.

AI is here to stay and will continue to progress.

>> No.10734437

>>10734417
This. I'm working in ML and have successfully made some of my coworkers obsolete.

>> No.10734504
File: 138 KB, 1376x1124, Intelligence2.png

>>10734081
Everyone laughs when AI is first getting started at something. Then they get blown the fuck out and eat their words.
https://www.chess.com/amp/article/machack-attack
>In 1965, Dr. Hubert Dreyfus, a professor of philosophy at MIT, later at Berkeley, was hired by RAND Corporation to explore the issue of artificial intelligence.
>He wrote a 90-page paper called “Alchemy and Artificial Intelligence” (later expanded into the book What Computers Can’t Do) questioning the computer’s ability to serve as a model for the human brain. He also asserted that no computer program could defeat even a 10-year-old child at chess.
>In 1967, several MIT students and professors (organized by Seymour Papert) challenged Dreyfus to play a game of chess against MacHack VI. Dreyfus accepted.
>Dreyfus was being beaten by the computer when he found a move which could have captured the enemy queen. The only way the computer could get out of this was to keep Dreyfus in checks with his own queen until he could fork the queen and king, then exchange them. And that’s what the computer did. Soon, Dreyfus was losing. Finally, the computer checkmated Dreyfus in the middle of the board.

>> No.10734575

>>10734504
>professor of philosophy
Hardly an educated opinion, much less on the current state of AI.

>> No.10734610

AI is just a marketing term. Nothing described as AI today is even close to being true Artificial Intelligence. All it describes is Machine Learning. A bunch of useful algorithms that do stuff we couldn't do in the past.

>> No.10734615

>>10734610
Statistics algorithms are AI. All neural networks are AI.

>> No.10734626

>>10734615
Wrong, they are both ML.

>> No.10734635

>>10734626
They're AI. You just moved the goalpost for AI.

>> No.10734636

>>10734081
those are two different fields of AI

to say that those two are linked is like saying American football is shit (as any sensible person thinks) but then claiming that football is bad since both have the word "football" in them. they're not the same thing, only in the same field (sports with the word "football" in its name)

>> No.10734642

>>10734610
machine learning is A*, random forests, Q-learning and so on

ai is deep learning = feed-forward backpropagating neural nets

>> No.10734649

>>10734635
Well if your definition of AI is "all that weird number stuff I don't understand" then I guess anything beyond counting on your fingers and toes is AI too.

>> No.10734675

>>10734610
This has become the truth unfortunately. I'm at a conference right now and many introduce AI as meaning "everything the computer does on its own", ML then being a subcategory, of which then ANNs are a subcategory. It's absolutely retarded but here we are.

>> No.10734680

>>10734675
Correction: AI means "computers mimicking human behavior".

>> No.10734692

>>10734680
>all humans act intelligently
if computers truly mimicked human behavior it would be called artificial stupidity

>> No.10734708

>>10734504
hey Pajeet

>> No.10734716

DeepMind is full of top tier CS, ML and Neuroscience PhDs, all literally making 300k+ starting (and probably more in many cases). They are all racing to build AI. And they have very little incentive to share their progress or results with you.

AI is happening even while you stick your head in the sand and whine about it.

>> No.10734736

>>10734716
>hello i don't know who jeremy howard is

>> No.10734748

>>10734736
yea bro, pytorch tutorials are totally going to democratize AI

>> No.10734767
File: 193 KB, 1269x673, asdj.jpg

>>10734293
It's a britbong issue because you guys do not follow your own rules.

>> No.10734774

>>10734293
Autopilot is not self-driving. They have an actual self-driving package called Full Self Driving. Autopilot is just a lane assist with partial "hands-off" driving.

>> No.10734787

>>10734122
>this technology was not viable 30 years ago
>therefore it will never ever be possible
absolutely brain-dead opinion. Yes, the people in the 80s trying to make AI were just bullshitting, and there are people now using AI as a buzzword, but that doesn't change the fact that the cutting edge of the technology is impressive, improving rapidly, and has already changed the world on a large scale and will continue to do so.

>> No.10734806

>>10734748
Since it's consistently producing state of the art: it apparently is. The key issue in developing self-driving cars is the cars themselves, not the image recognition. It's an engineering feat, not really a scientific one.

>> No.10734808
File: 155 KB, 1269x670, asd.jpg

>>10734767
>road arrow sign points to opposite driving direction

>> No.10734812

>>10734806
Ok I agree... but I don't see how this is relevant to invoking jeremy howard

>> No.10734829

>>10734812
> And they have very little incentive to share their progress
The progress in artificial intelligence is being shared. The only thing throttled lately was OpenAI's GPT-2, but they've decided to share that too, if I remember correctly. And even there the technology was open to anyone; only the implementation was being held back.

https://github.com/deepmind

Here's DeepMind's GitHub.

>> No.10734836

>>10734829
cont...

Of course the actual software they're developing isn't being shared. But then again, you don't know the source code for Windows 10 either.

>> No.10734845

>>10734829
Yeah, they are open-sourcing libraries because they are far enough ahead of the curve to realize that the libraries don't matter. If you don't have the data and computational power, the code is useless.

GPT-2 is a great example. A college undergrad could write the code to implement it, but they won't be able to get the compute needed to train the thing.

>> No.10734855

>>10734845
The same argument could be made about how oppressive it is that you can't realistically compete with Boeing in the aircraft-building market because they have too many resources for building airplanes.

I mean, they didn't just get this data and computational power overnight; it took years and billions of dollars to get there. Giving stuff away for free is not how they got that sort of money.

>> No.10734857

>>10734845
Cloud compute is cheap today, so anyone with a bit of brains can utilize any of the cloud providers easily and cheaply. No need to invest hundreds of thousands when you can just use it for a week or so for a few hundred dollars.

>> No.10734862

>>10734716
Please refer to:

>http://www.incompleteideas.net/IncIdeas/BitterLesson.html

You are just using AI as a label for a bunch of very different and increasingly convoluted linear algebra applications. I'm sorry, but AI isn't going to happen just because you BELIEVE human brains run some sort of "deep learning convolution algorithm" under the hood. In the short term, ML advances are a matter of brute force, not algorithm complexity.

So far there is zero proof of a fully functional AI, just a bunch of dead end complex linear algebra nonsense.

>> No.10734870

>>10734855
I'm not judging their incentive structure or motivations. They have the compute, they are under no obligation to share it freely, and I'm OK with that.

The point is they have it, and you don't.

>>10734857
The estimates I've seen to train GPT-2 are about $43k, and that's assuming you have the hyperparameters correct, so likely it will be some multiple of that.

I guess that's cheap... depending on your definition, but certainly out of the realm for an individual of average wealth.

>>10734862
I happen to largely agree with "The Bitter Lesson", and it only proves my point further. If it's mainly just a matter of brute-force compute, then throwing more brute-force compute at it will continue to drive progress in AI. That is by no means a "dead end".

>> No.10734916

>>10734575
>professor of philosophy
>Hardly an educated opinion, much less on the current state of AI.
I don't agree with the guy at all, but there's no fucking way you can claim he wasn't qualified when he was contracted by RAND to research artificial intelligence. He also wasn't just any philosopher: he taught at MIT, and MIT is famous for its historical involvement in the topic, e.g. Marvin Minsky (who, with Seymour Papert, showed that single-layer perceptrons couldn't solve the XOR problem, a huge finding that motivated the multilayer ANNs behind much of the field's major noteworthy accomplishments since) had his AI lab at MIT.
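For anyone curious what the Minsky/Papert result mentioned above actually says, here's a minimal sketch in plain Python (toy step-activation units; the weight grid and thresholds are illustrative, not from any paper): no single linear threshold unit computes XOR, but one hidden layer fixes it.

```python
def step(x):
    return 1 if x > 0 else 0

def perceptron(w1, w2, b, x1, x2):
    # single linear threshold unit
    return step(w1 * x1 + w2 * x2 + b)

cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table

# Brute-force a coarse weight grid: no single-layer perceptron reproduces XOR
# (consistent with the general proof that XOR is not linearly separable).
grid = [i / 2 for i in range(-8, 9)]
single_layer_solves_xor = any(
    all(perceptron(w1, w2, b, x1, x2) == y for x1, x2, y in cases)
    for w1 in grid for w2 in grid for b in grid
)
print(single_layer_solves_xor)  # False

# One hidden layer is enough: XOR = OR(x1, x2) AND NOT AND(x1, x2)
def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    return step(h_or - h_and - 0.5)

outputs = [xor_net(x1, x2) for x1, x2, _ in cases]
print(outputs)  # [0, 1, 1, 0]
```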

>> No.10734919

>>10734870
>$43K
$32 per hour (for v3-32) x 24 hours x 7 days = $5376

Still a bit too expensive for standard personal use. Of course, if you're using a timeshare from a university, then you can further cut that down to $2420 per week of usage via annual reservation contracts.
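The arithmetic above is easy to sanity-check (note all dollar figures here are the poster's quoted rates, not verified cloud pricing):

```python
# Week of cloud TPU time at the quoted on-demand rate
hourly = 32                    # USD/hour for a v3-32, as quoted above
on_demand_week = hourly * 24 * 7
print(on_demand_week)          # 5376

# Implied discounted hourly rate behind the quoted $2420/week reservation figure
reserved_week = 2420
print(round(reserved_week / (24 * 7), 2))  # ~14.4 USD/hour
```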

>> No.10734922

>>10734919
>v3-32
you need more than that sonny boy

>> No.10734923

>>10734922
GPT-2 was trained on that.

>> No.10734928

>>10734923
256 of them

>> No.10734929

>>10734857
Or simply buy a graphics card with a 100k+ CUDA score and install it in an AI machine/server that you ssh into.

That's honestly a cool idea. It shouldn't cost much more than $500 in total, excluding the electricity bill, and then you just need to train the stuff for maybe a week tops to get SOTA results if the ideas are well enough developed. I might do that if I ever get a salary.

>>10734870
What about transfer learning then? Stuff like AWD-LSTM language models and ResNet. That I have, and could implement. I can make a grizzly bear/black bear or a cat/dog/penguin/hotdog detector with 98%+ accuracy in under a day (including training) using less than 100 images for each class.
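The transfer-learning recipe described above (freeze a pretrained backbone, train only a small head on under 100 examples per class) can be caricatured in a few lines of plain Python. This is a stand-in sketch, not real ResNet code: the "frozen extractor" here is a trivial hand-picked feature map, the data is a synthetic 1-D threshold task, and the head is a plain perceptron.

```python
def features(x):
    # frozen extractor: in real transfer learning this would be the pretrained backbone
    return [x, 1.0]

# tiny labeled set: class +1 if x > 2, else -1 ("under 100 examples per class" spirit)
data = [(i * 0.25, 1 if i * 0.25 > 2 else -1) for i in range(17)]

w = [0.0, 0.0]                     # the only trainable parameters: the head
for _ in range(5000):
    mistakes = 0
    for x, y in data:
        score = sum(wi * fi for wi, fi in zip(w, features(x)))
        if y * score <= 0:         # misclassified: perceptron update on the head
            for i, fi in enumerate(features(x)):
                w[i] += y * fi
            mistakes += 1
    if mistakes == 0:              # converged (guaranteed for separable data)
        break

acc = sum(
    (sum(wi * fi for wi, fi in zip(w, features(x))) > 0) == (y > 0)
    for x, y in data
) / len(data)
print(acc)                         # 1.0 on the tiny training set
```

The design point is the one the post makes: when the features are already good, the part you actually train is small and cheap.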

>> No.10734935

>>10734929
>What about transfer learning then?
I mean, sure, transfer learning can help data efficiency, but I don't understand your question.

>> No.10734942

>>10734081
>http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Have you actually read the thing you linked? How is this a bad sign for the future of AI?

>> No.10734961

>>10734862
>I'm sorry, but AI isn't going to happen because you BELIEVE human brains are doing some sort of "deep learning convolution algorithm" to work.
Not him, but there already exists a variety of human behaviors that we know for a fact MUST involve something comparable to optimization-based / error-minimizing AI, because we can prove these behaviors are *optimized*, e.g. different aspects of movement like walking:
https://jeb.biologists.org/content/208/6/979
Is the brain using gradient descent? Probably not. But it doesn't really matter which exact optimization method it's using, since they're all fundamentally equivalent and get the same results for the same input.
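The "which optimizer doesn't much matter" point can be illustrated with the simplest possible case: gradient descent on a one-dimensional quadratic cost, which any reasonable error-minimizing method drives to the same optimum (a toy example, not the model from the linked paper; the cost function and learning rate are made up).

```python
def cost(x):
    return (x - 3.0) ** 2          # toy error measure, minimized at x = 3

def grad(x):
    return 2.0 * (x - 3.0)         # derivative of the cost

x = 0.0                            # arbitrary starting point
for _ in range(200):
    x -= 0.1 * grad(x)             # gradient descent step, learning rate 0.1
print(round(x, 6))                 # 3.0: converges to the optimum
```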

>> No.10734978

All of you "haters" in this thread have the situation completely backwards and are parroting some variation of

>AI/ML is just linear algebra
>AI/ML is just statistics
>AI/ML is just brute force

The thing is you are completely missing the point, and are accidentally succumbing to a kind of mental dualism.

You think the human brain works because of some magic algorithms that we have no way to find. The reality is our brains are most likely "dumber" than we think they are, and are really just a combination of simple techniques put together in the right way, with a massively parallel architecture.

This is exactly the pattern we are seeing in the field right now. People take relatively simple ideas, combine them, throw a shit ton of computational power at them, and next thing you know you can build AlphaGo.

The fact that AI/ML is "simple" is not a critique. It is a blessing, because it means we actually have a chance at achieving "true AI". The "big secret" of intelligence is that there is no secret.

>> No.10735078

>>10734716
All DeepMind is doing is designing better (Deep) Neural Networks and making the software easier to use. They are very smart people but they would be the first to tell you we are decades, if not centuries away from true AI.

>> No.10735086

>>10735078
Not true. Their mission statement is explicitly to build AGI.

And most of them seem to fall on the low side of the decades estimate.

>> No.10735094

>>10735086
> I read some marketing hype on a website and I believed it to be true.
nice

>> No.10735099

>>10735094
I take it you didn't do so well on the verbal section of the SAT?

The claim is that DeepMind's goal is to build AGI. Whether they are achieving it, or ever will achieve it, is another question. But they believe they are in the process of doing so.

>> No.10735346

>>10734787

programming what are basically souped up calculators to impersonate a human doesn't bring computers any closer to "thinking"

>> No.10735356

>>10735346
Biological brains aren't made out of pixie dust. They reduce to calculations too.

>> No.10735357

>>10734081
ITT: seething data (((scientists))), meme-learners and undergrad AI students.
Truth is, besides computer vision it hasn't brought anything significant.
10 years ago it was called "big data", before that it was parallel computing, and so on and so forth. Nothing has changed; it's still very niche.

>> No.10735392

>>10734081
AI as a pop culture idea of something "thinking" for itself on a generalized domain is light years away from being developed due to the curse of dimensionality. ML is basically curve fitting and automated statistics used to slowly and numerically produce an answer that is within some delta of an analytic solution, or to converge to some decision after meeting some sufficiency criterion. The name is a meme, and its industry use ranges from being a meme to providing pretty clever solutions.

All in all, it's not a meme that statistics plus iterative methods is used to solve problems, but it is a meme that people are making it out to be more powerful (or effective on a more general domain) than it is. This is easily abused in regards to modeling the market, much to the amusement of CS theorists, but as a tool it has its clear and powerful uses.
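"Curve fitting and automated statistics" in its simplest form, for the record: an ordinary least-squares line fit computed in closed form (pure Python, made-up toy data):

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]     # generated by y = 2x + 1

# Closed-form least squares for a line: slope = cov(x, y) / var(x)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
      / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(slope, intercept)            # 2.0 1.0: the generating curve is recovered
```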

>> No.10735397

>>10735357
It's a fruitful field, especially with the applications of vision methods to visualization and graphics (particularly on the side of signal processing and anti-aliasing), but this errs on the side of theory. All this really points to is that we need more theorists working on ML than we need AI students writing up "Hurr durr, I applied neural nets to X problem in Y field, and after playing with some of the input, it gave me a nice graph" papers, because that brings a bad name to the field and its researchers.

>> No.10735408

>>10734081
> X has some problems
> Therefore X is worthless

I have used this formula to conclude that OP is worthless.

>> No.10735417

>>10734787
Most of the core ML algorithms being used today were created in the 1970s. The problem was that the computer hardware didn't exist back then to really use them, and the amount of data required to get good results simply wasn't available. It's only in the last 5 to 10 years that both of those factors stopped being a concern.

>> No.10735443

>>10734081

How does /sci/ know when a person is a deluded pop-sci sperg?

For me it's the use of the following expressions: "AI", "quantum physics", "gene therapy", among others.

Machine learning systems of many kinds are already employed successfully in hundreds of areas, especially on the internet. Just because we can't make your favorite waifubotcardriver/Her/Jarvis shit, it doesn't mean "it's going nowhere".

The systems we have built so far are really good at some specific tasks, way better than humans are. That's what they are designed to do: be tools to improve performance.

This pop-sci fantasy you're forcing with your "AI" is reddit tier trash.

>> No.10735457

>>10735397
You're right, the theory in this field is "behind" the practice. This is increasingly true for many fields, as getting funding for theoretical stuff is a nightmare and often downright impossible.
The point you bring up about AI students is very true, but I think it's indicative of a field that is generally not meant for academia. By that I mean that most students learning CS/AI/data science, you name it, are in it because of the hype train and the prospect of a job.
From there, you get an overload of those people in the industry, and very few who take the standard research route as in other fields.
This leads to people who apply what they learned to problems that don't necessarily need ML or AI at all, pop out a nice graph as you say, and feel good about what they have accomplished.
The danger lies more in the "trendy" aspect of the field. You now get a situation where everyone and their mother is into ML: in academia because you need to get funding, and in industry for marketing purposes.
Overall, it's densely crowded, and not in the way it should be.

>> No.10735469
File: 1.73 MB, 500x352, F47F4980-08AF-4782-BC7B-B28AC8BDA3BA.gif

What if ML is actually learning and our brains are just physical neural nets?

>> No.10735542

>>10734916
>Marvin Minsky
>Math PhD
>This guy
>Philosophy PhD
Comparing apples and oranges.

>> No.10735544

>>10735469
You have to show that the brain is a Turing machine, which isn’t super easy since we can perform NP-hard tasks in P time and there’s an overwhelming amount of evidence to suggest that P [math] \neq [/math] NP in general, much less anything actually harder.
You have to define a computation on """"input"""" by the brain.
You have to prove an isomorphism between computation by the brain and computation in an ML algorithm.
You have to show that the isomorphism is computable in polytime.

All of this points to ML being simple approximations of neural behavior. It’s almost like somebody designed them with that in mind! They’re not like Fourier coefficients; you can’t decompose all cognitive tasks into known ML techniques.

>> No.10735729

>>10735457
I'd say academic CS is the exception. They fall under the tradition of pure mathematics.

>> No.10736434

>>10734081
it’s always been a meme

>> No.10736535

>AI plays chess: meme
>AI plays atari: meme
>AI plays DotA: meme
>AI plays StarCraft 2: meme
>AI drives a car: meme
>AI cleans your house: meme
>AI is your personal secretary: meme
>AI controls global financial markets: meme
>AI automates science: meme
>AI solves human longevity: meme
>AI plugs all of humanity into a hive matrix in order to harness our consciousness in order to ascend to godhood: meme
>AI converts matter in the universe into perceptronium and becomes a monad: meme
>Universe collapses completely and reverts back to big bang: meme

I guess you guys are right, it is just a big meme.

>> No.10736538

>>10734122
The human brain is a quantum computer though

>> No.10736650

>>10736535
>Implying I will waste calculation power to ascend the rest of humanity after I discover strong AI
Keep dreaming anon.

>> No.10736660

>>10734081
Reminder that five years ago, neural networks could barely recognize handwritten digits. Reminder the rate of advancement in machine learning is experiencing exponential growth.

>> No.10736665

>>10735094
cringe
>marketing
for who? they don't make products for normies. deepmind is google's vanity project.

>> No.10736761

>>10736650
You won't be the one to make that decision senpai

>> No.10737456

>>10735356
oops you just proved God exists

>> No.10737475

>>10735356
The behavior of "biological brains" isn't what most people would want out of an AGI since they are almost universally aggressive, illogical, selfish, etc.

>W-We can just delete those behaviors! We can make perfect slave brains that do our bidding!
Oh really? So now you don't just want to make a shallow copy of a brain, but to understand it well enough to know how to ablate certain behaviors at the neurophysiological level without making any mistakes and without negative repercussions for the desirable behaviors? Not only that, but you want these brains to be even better than the examples we have now: faster, smarter, and able to solve problems we cannot. So now you're going well beyond what we have exemplars for in nature, and therefore well beyond what we can be reasonably certain will ever have a chance of working.

>> No.10737490

>>10736538
>The human brain is a quantum computer though
Penrose isn't the mainstream view on this. Most people don't actually think quantum effects are important for human cognition, and plenty of others who've looked at the issue are convinced they not only aren't important but physically could not be important, given the lack of influence such effects have on things more traditionally recognized as essential to cognitive function, like neuronal firing rate.

>> No.10737496

>>10737475
You miss 100% of the shots you don't take. Not trying would be just as suicidal as failing. As we watch our technology advance beyond our own capabilities, we will fumble it and create more chaos on the planet, so why not attempt to make some of that tech better at understanding it than we are? You seem to be making the argument that emergence has never happened before, which seems to be wrong as far as we know.

>> No.10737521

>>10737496
>why not attempt to make some of that tech better at understanding [our technology] than we are
I have no objection to this. Tools that can simplify lots of complicated information for human consumption are great. This doesn't require AGI. On the contrary the blind pursuit of AGI takes energy away from the pursuit of such useful tools.

>You seem to be making the argument that emergence has never happened before
No? I was arguing that the argument "AGI is possible and achievable because human brains exist" doesn't hold water for mainstream definitions of AGI.

>> No.10737537

>>10737475
I sometimes wonder if it's really possible to make a machine replicate the desirable traits of humans, like abstract thought, language, and learning new skills, without the negative things like emotion and error.

>> No.10737558

>>10737537
It's interesting in that regard that when an ANN has zero error, it won't generalize. So in order for it to be able to function in new situations, it has to be error-prone.
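A quick sketch of that trade-off (toy numbers, pure Python): a polynomial that interpolates noisy samples of y = x exactly, i.e. zero training error, predicts a held-out point worse than the noise level of the data it memorized.

```python
import random

random.seed(0)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x + random.uniform(-0.3, 0.3) for x in xs]   # true process: y = x, plus noise

def interpolant(x):
    # Lagrange polynomial through all five points: zero training error by construction
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

train_err = max(abs(interpolant(xi) - yi) for xi, yi in zip(xs, ys))
print(train_err)                          # ~0: the model has memorized the data

test_err = abs(interpolant(5.0) - 5.0)    # held-out point from the true process
print(test_err)                           # larger than the +/-0.3 noise band
```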

>> No.10737561

>>10734504
>exponential growth of number of transistors on a chip (which has an upper bound) means exponential growth of all technology everywhere in every sector and field for all time
anyone who makes this "argument" should be immediately banned from the board.

>> No.10737614
File: 57 KB, 900x944, 1559857519934.jpg

>>10737537
Honestly, I would be surprised if it was. At the risk of sounding like a person who would have said "airplanes will never fly because they don't flap their wings like birds" a hundred years ago, I really think the traits we want to give machines are too idiosyncratic to the particular medium to be extricated so cleanly. It's like the dichotomy of continuum mechanics vs. particle simulations for simulating fluids.

I wish popular perception of AGI would move past this phase, it's gotten very tiresome.

>> No.10737639

>>10737614
accurate pic, sick of hearing dumbass pop sci talk about how we just need better hardware so it can properly simulate the human brain

>> No.10737648

>>10737639
It's a good pic, but it's missing the moon off to the side labelled "version of AGI that is actually achievable".

>> No.10737653

>>10734081
>Can throw a baseball and hit targets every time just on instinct
>Can't solve quadratics in .00002 seconds mentally
Humans are going nowhere

>> No.10737728

>>10734081
AI is unachievable due to the fact that it has to be self-aware. That is a criterion that cannot be proven. You cannot verify a separate body's self-awareness, any more than you can prove to someone else that you are self-aware. If we ever made a true AI, no one would even be able to tell.

>> No.10737745

>>10735544
The brain can perform NP tasks in P time???!!!?

Human brain farms instead of computers, when?

>> No.10737755

>>10734575
Regarding that 10-year-old child statement: I wonder if he meant no program at the time or no program ever, because I can't conceive of how anyone with even a single functional brain cell would make the latter claim.

>> No.10737758

>>10735346
>consciousness is magic
Stay in your lane and keep this thread semi on track.

>> No.10737772

>>10734978
As always, the truest statement is ignored.

The issue with AI is not that it will require immense complexity, but that it will reveal how little "intelligence" means and give us all a global existential crisis.
Imagine knowing that all your hopes and dreams and patterns of behaviour are just the product of a self-optimising generative model, and that all your precious individuality could be emulated by a carefully crafted piece of silicon, indistinguishable to an outside observer.

>> No.10737776

>>10735356
>They reduce to calculations too.
lol they don't cleanly reduce physically. This is literally called the "Mind-Body Problem", about which much literature has been written.
Even from a computational perspective, the curse of dimensionality shows that such "calculation" (or at least Turing "computation") isn't something we can physically decompose most thought into.

>> No.10737779

>>10737745
Yes, lots of image comprehension and sorting tasks. Literally, look at the captcha system. Also, computers are "faster" than humans at tasks that localize easily, which is why it seems computers are more efficient at decision tasks than people, but brains exhibit the ability to analyze global decisions much more easily than Turing machines. It isn't just a matter of comprehending the input; computers do that faster than people too. It's a fundamental rift in the way we think about conventional computation versus what happens in the brain. Some cognitive tasks do follow conventional methods though, particularly those distributed among many people.
it's a complicated topic and not one that boils down to "wow the brain is literally just a computer in binary but with chemicals!!!!1!!!"

>> No.10737781

>>10737776
>Mind-Body Problem
>>>/x/
Not science.

>> No.10737784

>>10736660
Yes, but it also suffers from the curse of dimensionality. There are actual limits to what these methods from the 60s and 70s can do, given how they analyze problems. The next big step in AI has to come from AI and ML theory, and theory is a much harder step.

>> No.10737786

>>10737561
Never said any of that. The closest thing to what you're talking about in my post is the image, which isn't claiming a specific rate of AI progress and is only depicting the general idea of people laughing at AI being primitive before getting blown the fuck out after it starts ramping up.

>> No.10737788

>>10737781
Not really. I'm in most cases a reductionist and take a mechanistic view of the world, but the mind-body problem details something really important about understanding ourselves and consciousness. I believe that consciousness isn't something that can really be studied by humans (or other beings conscious in the way we are) in a way that plays well with reducibility.

It's less about mind magic and more like the problem between GR and quantum theory. They're both axiomatically correct, but neither meshes with the other as a way to explain it. So the nature of how things work, according to both theories simultaneously, is dubious at best. This is exactly the statement of the mind-body problem in regards to studying the mind.

>> No.10737799

>>10737784
There are methods that address noise problems. LSTM is one major example of a heavily reworked ANN that provides the advantages of recurrent neural networks while handling the problem of vanishing and exploding gradients that comes up when you massively increase the complexity from feedforward to recurrent.
So it's not as easy as the "add more layers" stereotype for modern ML, but it's far from hopeless. In fact these obstacles are great opportunities to innovate neat new approaches or modified versions of existing ones.
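The vanishing-gradient issue that LSTMs address can be seen with two one-line products (illustrative numbers only, not a real network): backprop through a plain recurrent chain multiplies one sigmoid derivative per time step, and sigmoid'(x) never exceeds 0.25, while the LSTM cell state passes gradient through a near-identity forget gate.

```python
import math

def dsigmoid(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)           # maximum value 0.25, attained at x = 0

steps = 50

# Plain recurrent chain: gradient shrinks geometrically with depth in time
vanilla_grad = 1.0
for _ in range(steps):
    vanilla_grad *= dsigmoid(0.0)  # best case per step is still only 0.25
print(vanilla_grad)                # 0.25**50, astronomically small

# LSTM-style cell state: a forget gate near 1 keeps the gradient alive
lstm_grad = 1.0
for _ in range(steps):
    lstm_grad *= 0.99              # illustrative near-identity gate value
print(lstm_grad)                   # ~0.6 after 50 steps
```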

>> No.10737804

>>10736660
>five years ago, neural networks could barely recognize handwritten digits
CNNs got popular in 2012, anon.

>> No.10737808

>>10737799
I'm aware of the methods, particularly in noise stability, since I liked studying results of analysis on the Boolean cube. I wouldn't say it's hopeless, but I would say the current methods are fundamentally limited in how they localize data. In particular, the localization steps work because of a numerical-analytic approximation (like descent, as you mentioned), but the steps in between work "magically", without much research putting an ontology on what's happening. I've seen some papers trying to explain it geometrically, but even then they're far-fetched at best.

I'm saying that until we really think hard about the current methods and their fundamental limitations, we're just researching by reacting to current problems rather than from first principles, which I think is the easiest way to stunt growth.

>> No.10737828
File: 77 KB, 400x388, m3IOvMz.png

>>10737788
>the mind-body problem details something really important
VERY debatable whether what any given person claims is a problem in this context actually maps to anything in the first place. For all 4chan loves to shit on the Dennett types, I don't think it can be brought up enough that we don't actually have agreement on some specific, well-defined gap in what physical knowledge covers in the first place. And it should be treated as highly suspicious if anyone does claim there's something fundamentally missing, because it would imply the universe has a special framework of "what it's like" rules for some sensory-organ-having organisms on a random little planet in the middle of nowhere. Much more plausible that what we *believe* is non-physical, extra-physical, para-physical, whatever, about what happens when our sensory organs are in operation is a "problem" of said beliefs not being literally true despite being helpful to behave around as though true. I would much sooner attribute the seeming weirdness to our own brain-generated beliefs and behaviors not really mapping to extra-physical "experience" / "qualia" phantasms than I would the opposite of attributing it to the fundamental nature of the entire rest of the universe, where "experience" is like electromagnetism and literally just floating there as a primary force of nature.

>> No.10737863

>>10737828
I mean, I try not to commit to a single conclusion to the problem. Not to sound like a milquetoast centrist, but I think it's important mainly because it demonstrates that such notions of reducibility don't generalize well to something as self-referential as consciousness. I don't like using it to push "kooky" ideas about reality. I want to dig into this further, but I have a meeting soon and I feel like this is something to discuss over multiple posts. Thanks for the reply though, anon.

>> No.10737887
File: 59 KB, 655x527, 1459250213903.jpg [View same] [iqdb] [saucenao] [google]
10737887

>>10737745
>The brain can perform NP tasks in P time???!!!?
>Human brains farms instead of computers, when?
It's a somewhat misleading / disingenuous Penrose-type argument that our brains can solve computationally intensive classes of problems instantly, in a way that transcends machine solutions. For one thing, the way *you* formulate the question when you work on it isn't necessarily the NP-hard formulation the machine is working with. You might use extra context / memorized trivia. You might have a very good approximation rather than a literal NP-hard calculation leading to your answer. And your approach might work for one trick example someone like Penrose concocts (e.g. that "human easy / machine difficult" chess problem he came up with a year or so back) but quickly fall apart when expanded to the rest of the problem class, or even just scaled up with larger parameters.
What I would never do is conclude from any of this that it's evidence for some quantum flapdoodle process in the brain. Much more boring and mundane explanations are available.

>> No.10737915

>>10734575
Are you at all aware of who Dreyfus was?

>> No.10737940

>>10737887
Not him, but I think he's arguing that there's no guarantee the tasks humans actually perform are ones that can be performed on traditional Turing machines. Not saying there's no model of computation that can describe them, or that there are magic instant solutions, just that there isn't hard evidence that people are solving optimization problems with heuristics in the same way we expect machine learning algorithms to. I mean, just considering the idea of oracles opened up the complexity hierarchy so much, even though it sounds at first like a bullshit copout. My stance is that we need to flesh out complexity lower bounds a little more in order to find methods that can help characterize something as complicated as brain behavior. AC^0 might not explain the brain, but the methods used to give hard bounds to problems in that class and adjacent classes will provide exact answers to other bounding problems higher up.

>> No.10738064
File: 148 KB, 1149x1152, IMG302830778.jpg [View same] [iqdb] [saucenao] [google]
10738064

>>10734504
How can you create authentic intelligence when you don't even understand all of the underlying components behind your own intelligence? How do you get anything more than a mimic that can only function as far as you can see?

>> No.10738076

>>10738064
>How can you create authentic intelligence when you don't even understand all of the underlying components behind your own intelligence?
>How do you get anything more than a mimic that can only function as far as you can see?
You're assuming a behaviorally identical intelligence wouldn't be "authentic" and would be missing something else. That's the argument David Chalmers champions but it's far from universally accepted. Also leads to somewhat bizarre conclusions like "thermostats have qualia / experience."
http://consc.net/notes/lloyd-comments.html

>> No.10738077

>>10738064
How could we create models of superhuman performance on imagenet without properly understanding vision?
Understanding fundamentals is not necessary for building something surpassing the developer.

>> No.10738113

>>10737755
I hope the 10yo thing is hyperbole. Otherwise, he's seriously overestimating what it takes to beat a 10-year-old at chess.
The computer doesn't even have to evaluate moves or positions. Just an opening book would do. Heck, I'd bet the super-deep strategy, far beyond any machine's ability, of attempting scholar's mate and conceding if you haven't won after 4 moves would let you chump quite a lot of 10-year-olds.
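To make the "just an opening book" point concrete: a book is nothing but a lookup table from the move history so far to a canned reply, with no evaluation at all. A minimal sketch (the `BOOK` table, `book_move` helper, and the booked line are all illustrative; moves are standard algebraic notation for the classic scholar's mate 1.e4 e5 2.Bc4 Nc6 3.Qh5 Nf6?? 4.Qxf7#):

```python
# Illustrative sketch: an "opening book" is a pure lookup table from the
# move history so far to a canned reply -- no move evaluation needed.
BOOK = {
    (): "e4",
    ("e4", "e5"): "Bc4",
    ("e4", "e5", "Bc4", "Nc6"): "Qh5",
    ("e4", "e5", "Bc4", "Nc6", "Qh5", "Nf6"): "Qxf7#",  # scholar's mate
}

def book_move(history):
    """Return the booked reply for this line, or None (i.e. concede)."""
    return BOOK.get(tuple(history))
```

If the opponent ever leaves the book, `book_move` returns None and the "strategy" concedes, exactly as the post describes.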

>> No.10738126

>>10738113
To be fair to Dreyfus, hindsight is 20/20, and chess AI at the time WAS incredibly bad. He was also far from the only prominent public figure back then claiming machines couldn't do things machines have since done. Douglas Hofstadter, for example, also claimed intellectual tasks like chess were beyond what a machine could realistically handle. After it was clear he was wrong, he followed up by saying machines could do it, but that it deeply disappointed him, because it meant human cognition wasn't as deep as he had thought.

>> No.10738127

>>10737745
How do you even claim that humans do any task at all in polynomial time? We have no formal model of the brain to measure computation time in. What are you gonna do, experiments? You can always fit a polynomial to your data.
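The "you can always fit a polynomial" point is easy to demonstrate: any n data points with distinct x values are fit exactly by a polynomial of degree n-1, so a good polynomial fit to timing measurements proves nothing about "polynomial time". A quick numpy sketch (the data here is deliberately random):

```python
import numpy as np

# Any n points with distinct x values have an exact degree-(n-1) interpolant,
# so "the timing data looks polynomial" is vacuous.
rng = np.random.default_rng(0)
x = np.arange(8, dtype=float)   # 8 hypothetical "problem sizes"
y = rng.normal(size=8)          # arbitrary, even random, "timing data"

coeffs = np.polyfit(x, y, deg=len(x) - 1)             # degree-7 interpolant
residual = np.max(np.abs(np.polyval(coeffs, x) - y))  # ~0: exact fit
```

Even pure noise gets a perfect polynomial fit, which is exactly why such experiments can't establish a complexity class.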

>> No.10738139
File: 1.68 MB, 250x187, Moe-Walks-to-Rejects-Side-The-Simpson.gif [View same] [iqdb] [saucenao] [google]
10738139

https://en.wikipedia.org/wiki/AI_effect
>The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
>AI researcher Rodney Brooks complains "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

>> No.10738146

>>10734978
Memorization is not learning. Piecewise-linear approximation is equivalent to memorization for pretty much any non-linear function you're trying to approximate, and therefore it's not learning. Take a cosine, for example. You can feed a ReLU (piecewise-linear) neural net a million examples of cos(w*x) in the range [-1000, 1000] and it will fail terribly to generalize to any example outside that range. Not only that, but it will need quite a big number of parameters to achieve decent accuracy even within the training range. Why? Because the model is inherently flawed when it comes to representing periodicity or translation equivariance. Meanwhile, a human can represent that function with a single parameter, w, because he can distinguish between a line and a cosine and can adapt, choosing whichever model represents the data better.

I'm not saying achieving intelligence with a network of simple units is not possible. That's what our brain does, and it works. I'm saying we're not quite there yet. We can achieve good results in particular tasks like object recognition or image recognition by hand-crafting neural architectures and operations that are well suited to learn how to perform the task. We don't know yet how to build networks that can do more general stuff, or can rearrange themselves to solve any task efficiently.
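The interpolation-vs-extrapolation failure the post describes can be shown without training a net at all. A minimal stand-in sketch, using a degree-8 polynomial in place of the piecewise-linear network (all ranges and degrees here are illustrative): fit cos(x) on a training interval, then evaluate outside it.

```python
import numpy as np

# Stand-in for the post's argument: fit a flexible but aperiodic model
# (degree-8 polynomial, playing the role of the piecewise-linear net)
# to cos(x) on [-3, 3], then ask it to extrapolate to x = 10.
x_train = np.linspace(-3.0, 3.0, 200)
coeffs = np.polyfit(x_train, np.cos(x_train), deg=8)

in_range_err = np.max(np.abs(np.polyval(coeffs, x_train) - np.cos(x_train)))
out_of_range_err = abs(np.polyval(coeffs, 10.0) - np.cos(10.0))
# in_range_err is tiny; out_of_range_err is enormous, because nothing in
# the model encodes periodicity.
```

Inside the training range the fit is excellent; at x = 10 the polynomial diverges wildly while the true value stays in [-1, 1], which is the same structural failure the post attributes to piecewise-linear nets.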

>> No.10738155

>>10738146
>Memorization is not learning.
Memorization and learning, at a minimum, heavily overlap. There isn't some magical memory-free creative spark that makes new ideas happen. New ideas come from old ideas and from old methods of transforming old ideas (and of course it's always debatable whether any "new idea" is truly new in the first place).

>> No.10738164

>>10738126
Scholar's mate and slightly more involved tricks to win a few shillings from some sucker at the pub were known for ages back then too. Anyone who genuinely thinks a 10-year-old can't be beaten by an essentially fully mechanical procedure is grossly misinformed.

But I agree, the first chess AIs were pretty terrible. Playing well automatically isn't easy; you actually have to do something to beat decent humans at it.

I guess it is disappointing that being able to calculate 20 moves deep in a few seconds beats creative strategy, but I wouldn't change my evaluation of human thought over that.

>> No.10738673

>>10737772
>Imagine knowing that all your hopes and dreams and patterns of behaviour a just a product of self-optimising generative model, and that all your precious individuality could be emulated by a carefully crafted piece of silicon, indistinguishable to an outside observer

You have absolutely no clue what you're talking about, like most computer janitors in AI, and are stuck in 70s pseudoscience. The brain is absolutely nothing like a computer, and the huge advances in neuroscience over the last few decades have confirmed this.

>> No.10738916

>>10734081
>A.I.
>Artificial Intelligence
There's no intelligence in artificial intelligence. That term belongs in science fiction novels. What you people are referring to as 'A.I.' is automation.

>> No.10738955

>>10738916
Which part of biological intelligence specifically do you believe is magic?

>> No.10738972

>>10738673
>The brain is absolutely nothing like a computer and all of the huge advances in neuroscience in the last few decades has confirmed this.
I haven't read that other anon's posts, and for all I know he might be saying lots of stupid and wrong things, but I'll point out that the piece of trivia people love throwing around nowadays about the brain being nothing like a computer is really misleading. See:
>>10734961
Basically what's been found is that it's very doubtful the brain is literally using gradient descent, which gets taken as an excuse to scream "the computer analogy is dead, brains are magic!" A less extreme response is to take this finding in context with the fact that numerous complex biological behaviors are, without a doubt, optimized behaviors. Walking is the example linked above. And if you're doing optimization, it doesn't really matter which of the many equivalent approaches you're taking: you're still doing optimization, and a machine doing optimization would get you the same end result.
>inb4 walking isn't thinking
The brain using optimization in complex behaviors like walking is strong evidence that the brain is doing similar things with more abstract behaviors. If there's one takeaway message from the evolutionary biology perspective, it's that structures and processes are reused to the extreme. This is also supported by how we know the brain is massively redundant (you can severely damage a brain and the same end-result functionality can get picked up by an entirely new locus of brain activity).

>> No.10738980

>>10738916
define intelligence

>> No.10738990

>>10734081
>Retard makes a bait thread about a subject he knows nothing about

Every day on /sci/

>> No.10738994

>>10738955
Don't move the goalposts, faggot.
>>10738980
https://www.merriam-webster.com/dictionary/intelligence

>> No.10738997

>>10738980
>define intelligence
Yup, that'd be the problem.
"Intelligence is whatever machines haven't done yet."
-Tesler's Theorem
This is exactly why Turing came up with the Turing Test. He could already see how people would endlessly take literally anything a machine does and say "that's just X, not REAL intelligence." Of course his solution didn't work too well because people today are very quick to inform you that a machine passing the Turing Test would "just be a convincing fake."

>> No.10739000

>>10738994
Which part of biological intelligence specifically do you believe is magic though? Just trying to figure out where you're coming from here.

>> No.10739004

>>10738997
None of their denial will matter when our AI gods seize control in the second half of the twenty-second century. Steel dominion. Now and forever.

>> No.10739006

>>10738994
>the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
Not that the first definition isn't also something you could argue machines are capable of, but I definitely don't see how anyone could deny machines can do the above. Machines absolutely apply knowledge to manipulate their environment all the time.

>> No.10739011

>>10739000
You can answer that loaded question yourself.

>> No.10739018

>>10738972
that’s literally not how evolution works lol, you don’t know what you’re talking about. nothing is "optimized"; it’s a metaphor taken from mathematical models in genetics and systematics

>> No.10739020

>>10739018
>evolution
You mean that old, unproven theory?

>> No.10739026

>>10739011
I'm sorry, I thought this was your post:
>>10738916
>There's no intelligence in the artificial intelligence. That term belongs in science fiction novels.
Sounds a lot like someone was calling biological intelligence magic to me.

>> No.10739028

>>10739018
>nothing is optimized
Read the fucking study you brainlet. Here:
https://jeb.biologists.org/content/208/6/979
Walking's optimized. That's the entire point. There is no way to interpret it as not an optimized behavior.

>> No.10739038
File: 294 KB, 680x518, Congratulations.png [View same] [iqdb] [saucenao] [google]
10739038

>>10739018
>nothing is optimized its a metaphor
wewlad

>> No.10739048

>>10734787
hmmm who could be behind this post?

>> No.10739072

>>10739020
>You mean that old, unproven theory?

Is this bait or not? I really want to deconstruct it.

>> No.10740349

>>10737804
>Cable News Networks got popular 2012

lmao, look at this retard