/sci/ - Science & Math

File: 487 KB, 1280x1024, Terminator3t800.jpg
No.10715397

AI can't be as smart as humans and biological systems are superior.

You will never see this in pic related.

>> No.10715405

You gonna post any proof or am I supposed to intuitively understand exploding organs and vestigial structures are superior?

>> No.10715406

>>10715405
Creationism and God.

>> No.10715412

>>10715406
Absolutely Based

>> No.10715413 [DELETED] 

>>10715397
Humans aren't optimal and there's room for improvement. AIs could be chads

>> No.10715415

where is the giant cord to the power supply??

>> No.10715423
File: 1008 KB, 2560x2048, 2560px-Summit_(supercomputer).jpg

>>10715397
>AI can't be as smart as humans
I don't think that's true. But it probably is true that AI can't be effectively miniaturized to human brain sizes or efficiencies. Supercomputers with telemetry would be effective where human levels of flexibility are needed. Semi-autonomous killing machines don't need to be too smart, however. We basically already have that ability by combining Boston Dynamics and the US Military.

>> No.10715425

>>10715397

What a bunch of nonsense. A significant portion of our brain's signals travels at the speed of sound. If our brain's signals operated at the speed of a computer's signals, aka electrical signals, the brain would need to be the size of planet Earth to be as inefficient as it is.

>> No.10715427

digital logic is too hard and rigid

Self-referencing fuzzy logic that uses analog hardware is the path to true general AI. You could make a hardware system with minimal "hard-coded" software that builds itself the same way a child's mind "grows" based on sensory input. Good luck building analog transistors that are comparable with integrated circuits and modern CPUs

>> No.10715430

Modern intuitive AI has been noted to be scarily efficient and beats humans 100% of the time at the tasks it was designed for.

>> No.10715440

>>10715430
>at the tasks it was designed for.
Intelligence isn't defined very well. You could define it as the ability to learn new things quickly, or as the ability to master a single topic. When we think of an AI that's human-like we call it "generalized AI", so I'm just going to say the ability to learn new things quickly should maybe be called "general intelligence" while being masterful in a single topic could be "specialized intelligence", and ergo the AI we're using today should be called specialized AI, IMHO.

>> No.10715447
File: 29 KB, 657x527, disappointed.png

>>10715397
>AI can't be as smart as humans
Reminder that AI is already unbeatable by humans at Go.
Go is a game that cannot be brute-forced with computations.
Secondary reminder that retards like OP used to love laughing at how retarded early Chess AI was.
No human Chess player in recorded history has ever achieved an Elo rating >= 3000.
More than 50 distinct Chess AIs have 3000+ Elo ratings now.
Tertiary reminder that everyone called AI dead back when it was shown perceptrons couldn't even learn the XOR function.
A few years later the basic idea was slightly modified to overcome that limitation (see the sketch after this post), and now:
>Google Duplex appointment booking AI produces a "nearly flawless" imitation of human-sounding speech to the point where people complained about ethical violations and they had to add an introductory prompt that tells you it's not a real person.
>AlphaZero mastered chess in 4 hours, defeating the best chess engine, Stockfish, winning 28 out of 100 games and drawing the remaining 72.
>Alibaba's language processing AI outscored top humans on a Stanford University reading comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.
>OpenAI's ML bot played at The International Dota 2 tournament and won 1v1 against professional Dota 2 player Dendi.
>A propositional logic boolean satisfiability problem solving AI proved the long-standing open conjecture on Pythagorean triples over the set of integers, validated by two independent certified automatic proof checkers.
>Poker (an imperfect information game, unlike Chess or even Go) AI Libratus defeated 4 of the best human players in the world, individually, at an extremely high aggregated winrate, over a statistically significant sample.
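
A minimal sketch of that perceptron fix, for anyone who wants to see it concretely: plain numpy, with the layer size, learning rate, and iteration count as arbitrary illustration choices (a hypothetical example, not anything claimed in the posts above). A single-layer perceptron cannot represent XOR, but one hidden layer learns it.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # output layer

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop, squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]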

>> No.10715452

>>10715447
>AIs can do extremely specific things but if you change the parameters by .01% they flip out and become useless
yawwwwwwwwwwwwnnnn

>> No.10715455

>>10715440
Is there a difference between generalized intelligence and a giant bundle of many different specialized intelligences? I guess picking out which specialized intelligence to use for a given situation, but that doesn't sound like some impossibly difficult hurdle to jump once you already have your bundle of skills together, e.g. it isn't exactly a subtle distinction between whether you're facing an "I should give you information about your optimal federal tax deduction strategy" problem vs. an "I should drive your drunk daughter back to her apartment" kind of problem.
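
A toy sketch of that "bundle of specialists plus something to pick between them" idea, in Python; the two specialist functions and the keyword routing are hypothetical stand-ins for illustration, not real products or APIs.

def tax_advisor(query: str) -> str:
    # stand-in for a specialized "tax deduction strategy" intelligence
    return "tax specialist: (toy answer about deduction strategy)"

def designated_driver(query: str) -> str:
    # stand-in for a specialized "get someone home safely" intelligence
    return "driving specialist: (toy answer about getting someone home)"

def dispatcher(query: str) -> str:
    # crude keyword routing; a real system would use a learned intent classifier
    q = query.lower()
    if any(w in q for w in ("tax", "deduction", "federal")):
        return tax_advisor(query)
    if any(w in q for w in ("drive", "apartment", "home")):
        return designated_driver(query)
    return "no specialist matched this query"

print(dispatcher("What is my optimal federal tax deduction strategy?"))
print(dispatcher("Drive my drunk daughter back to her apartment."))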

>> No.10715456
File: 29 KB, 741x568, hmm.png

>>10715452
>AI is smart but autistic
So exactly like smart humans?

>> No.10715458

>>10715452
>Producing "nearly flawless" human-sounding speech is an "extremely specific thing"
lol, what the fuck? That's an incredibly generalized behavior. No one could EVER explicitly program instructions for that.

>> No.10715480
File: 90 KB, 645x729, retard.png

>>10715458
Who cares if something is "explicitly coded"?

>> No.10715482

>>10715447
Here's OpenAI's latest:

https://www.youtube.com/watch?v=tfb6aEUMC04

>>10715397
No we'll never see pic related because that's a rather stupid human idea of what AI should be like. If AI becomes sentient enough, we won't even be able to imagine how it'll look or what it'll do, since we cannot figure it out ourselves. That's the whole point.

>> No.10715489

>>10715480
>Who cares if something is "explicitly coded"?
People who aren't trolling.

>> No.10715491

>>10715480
Because that's the very essence of what AI is: processes doing all the various tasks we previously couldn't do with programs because they were way too convoluted to write out as explicit instructions.
Honestly this is the first thing anyone trying to form a first opinion about AI should understand as a prerequisite.

>> No.10715497

>>10715397
It should be mandatory for everyone to do a MNIST implementation from scratch before discussing AI
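
For reference, the exercise can stay pretty small. A minimal sketch, assuming numpy and scikit-learn are installed: the network and the backward pass are written by hand, and scikit-learn's fetch_openml is used only to download the data (any MNIST loader would do). The hidden size, learning rate, batch size, and epoch count are arbitrary illustration choices.

import numpy as np
from sklearn.datasets import fetch_openml

# download MNIST (70,000 grayscale 28x28 digits, flattened to 784 values each)
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                         # scale pixels to [0, 1]
y = y.astype(int)
Y = np.eye(10)[y]                     # one-hot labels

X_train, X_test = X[:60000], X[60000:]
Y_train = Y[:60000]                   # one-hot training labels
y_test = y[60000:]                    # integer test labels

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, size=(784, 128)); b1 = np.zeros(128)   # hidden layer
W2 = rng.normal(0, 0.01, size=(128, 10));  b2 = np.zeros(10)    # output layer

def forward(x):
    h = np.maximum(0, x @ W1 + b1)                   # ReLU hidden layer
    logits = h @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    return h, p / p.sum(axis=1, keepdims=True)       # softmax probabilities

lr, batch = 0.1, 128
for epoch in range(3):
    order = rng.permutation(len(X_train))
    for i in range(0, len(order), batch):
        j = order[i:i + batch]
        x, t = X_train[j], Y_train[j]
        h, p = forward(x)
        d_logits = (p - t) / len(j)                  # cross-entropy gradient
        d_h = (d_logits @ W2.T) * (h > 0)            # backprop through ReLU
        W2 -= lr * (h.T @ d_logits); b2 -= lr * d_logits.sum(axis=0)
        W1 -= lr * (x.T @ d_h);      b1 -= lr * d_h.sum(axis=0)
    _, p = forward(X_test)
    print(f"epoch {epoch}: test accuracy {(p.argmax(axis=1) == y_test).mean():.3f}")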

>> No.10715500

>>10715491
>>10715489
Machine learning has been around for at least 30 years. It's not impressive anymore

>> No.10715542
File: 129 KB, 570x567, 24.jpg

>>10715500
It's always impressive if you actually understand it. Unfortunately you constantly see the opposite opinion, where people scream "it's just [insert your preferred reductionist description of an ML method here]" over and over, as though the fact that what's done makes sense and consists of any sort of method at all somehow invalidates the fact that you're getting a program to gain abilities nobody could ever enumerate a list of rules for.
I can unironically say it's still very high up on the list of the most impressive things you can learn about and work with. And for all the deflation-happy anons looking to smugly point out how ML isn't really anything like how the human brain operates, keep in mind there already exist a variety of human behaviors we know for a fact MUST involve something comparable, because we can prove these behaviors are *optimized* behaviors, e.g. different aspects of movement like walking:
https://jeb.biologists.org/content/208/6/979
Now the deflation-anons are right that artificial neural networks using gradient descent aren't doing what the brain is doing. But this is more of a technicality and less of an attack on the fundamental similarity between the systems than most anons give it credit for. The reason being that optimization is optimization.
The particulars of how optimization is accomplished are an interesting open problem for the human brain and the corresponding optimized behaviors. But as long as it's a behavior we can show is optimized, the ultimate answer is going to be one of a variety of fundamentally equivalent and interchangeable approaches. You don't get radically different answers in these cases, by definition. Optimization means you have a deterministic best answer, and whether you get there with an algorithm running on a machine or with a biological process, you'll still land on that deterministic best answer.
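
As a toy illustration of the "optimization is optimization" point, assuming a convex objective with a single minimum: gradient descent lands on the same answer as the closed-form calculation, no matter where it starts.

# objective: f(x) = (x - 3)^2 + 1, whose minimum sits at x = 3 by basic calculus
def grad(x):
    return 2.0 * (x - 3.0)

x = -10.0            # arbitrary starting point
lr = 0.1
for _ in range(200):
    x -= lr * grad(x)

print(x)             # converges to ~3.0, matching the analytic minimum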

>> No.10715613
File: 342 KB, 1575x531, screenshot (23).png

>>10715397
check out biomimicry

https://en.wikipedia.org/wiki/Biomimetics

>> No.10715615

>>10715542
Sounds like you need some sort of optimum theory

>> No.10715890

>>10715406
This is one solution actually. Very advanced.

>> No.10715894

>>10715456
Be nice we are friends

>> No.10716874

>>10715613
>hasn't done anything

LAME! biology is just a vague inspiration, not a way to make stuff

>> No.10716896

>>10715397
>AI can't be as smart as humans and biological systems are superior.

Citation needed.

>> No.10717827

>>10715447
>All current AI is super-specialized.
Realistically speaking how soon can we expect a more general AI? Maybe not something that can connect dots and make thoughts yet, but something that is this advanced in multiple fields?

>> No.10717963

>>10717827
between 2 and 200 years.

>> No.10717966

>>10715406
If humans are created in the image of god, why doesn't that mean that humans have the same ability as god to create others in our image?

The bible even speaks of golems and humans doing so in the past.

>> No.10717982

>>10717966
God here.

You can. That's the point. It's entirely possible that metaphysics is fucked up beyond even what I've considered, and ANY act of creation changes the metaphysical structure of reality in a paradoxical manner. I have to be open to the possibility of letting humanity die to be replaced by its own creation, since running a paradox twice results in its original truth state being restored.

Also possible such twisted metaphysics only exist because I assume that they might.

>> No.10718047

>>10717982
Also I should probably point out that there's no evidence whatsoever that such a creation-polarity paradox exists. My post about this is the only evidence I've found.

TL;DR: Assume I'm delusional or it could harm creation.

>> No.10718176

>>10717963
So "eventually".

>> No.10718905

>>10715415
Tesla's through-air-energy-transit

>> No.10718917

>>10717827
>Realistically speaking how soon can we expect a more general AI? Maybe not something that can connect dots and make thoughts yet, but something that is this advanced in multiple fields?
If it "doesn't have to connect the dots," what are you even looking for here? You could easily take two different specialized programs and write a program that calls those two programs. That already exists with shit like Microsoft's Cortana or Amazon's Alexa picking out which ML module to make use of.

>> No.10718919

>>10715397
You are almost correct. It would've taken us years to reach machine level intellect without scanning the brain, but we did.
IT IS ALREADY DONE.

>> No.10719557

>>10715455
Yeah, general intelligence should run like an OS: after considering what it needs right now, it opens and runs a simulation of a specialised neural network.

>> No.10719580
File: 376 KB, 1062x604, Deus-Ex-HumanRevolution-Cover.jpg

>>10715397
if you take a different approach and try to make inorganic brains that simulate the brain rather than just use a computer, then you may get the results in said picture.

and also it took life a long time to evolve from prokaryotes into humans. it could take a.i. a long time to evolve into a sentient lifeform.

>> No.10719588

>>10715397
Opinions like these boil down to the belief that humans are somehow exceptional. A belief, because it's religion that's behind OP's faggotry, not science. AI is breaking all the limits guys like that said AI wouldn't be able to cross. The last thing you will be left with is to say: "Oh, but robots have no souls". This will reveal your true faggy nature.
YOU MAKE ME SICK

>> No.10719632

>>10715482
> [...] we won't even be able to imagine how it'll look or what it'll do, since we cannot figure it out ourselves. That's the whole point.

That is a wise and intelligent comment. Thank you, anon.

>> No.10719797

if I want to be involved in developing the next leap in AI, what should I major in?

EE? CS? stats? cognitive science? molecular bio?

>> No.10720128

>>10719797
Philosophy

>> No.10720164
File: 22 KB, 570x312, .jpg

>mfw reading this thread

>> No.10720166

>>10715397
I am seeing it -.-

>> No.10720192

>>10719797
Linguistics or mathematics for an undergrad, and then if possible go to a top school like Stanford, Johns Hopkins, MIT, or ideally the ILLC at the Universiteit van Amsterdam. Of course that's just my opinion, and that would be the ideal pathway into the field. Fortunately with AI and cognitive science there are a million different approach vectors, so to speak.

>> No.10720906

>>10717982
Stop it
you're fucking with my...your, creation

Lol look at you, you silly fucking consciousness, inhabiting this human to write this only to inhabit the one that receives it. Don't I realize that we're just talking to yourself. Lol fucking idiot.

>> No.10720970

>>10719797
An actual career that does not involve glorified statistics, so you stop stealing research funds from CRISPR, which at least has a use unlike popsci """""""""""""ai"""""""""""""

>> No.10721029
File: 96 KB, 1280x571, Human and Giraffe recurrent laryngeal nerve.jpg

>>10715397
Most mammals have a nerve that loops down under the aortic arch and then back up again rather than taking a direct route. This is inefficient, and the only reason we have it is that it's a holdover from our fish ancestors. So basically biology won't even bother fixing dumb mistakes.

>> No.10721754

>>10715890
Thanks

>> No.10722564

>>10715415
On-board fusion reactors: developed by an AI developed by an AI developed by an AI developed by an AI that is far superior in computational power and speed to any human.

Computer algorithms already do things humans simply cannot conceive of. They are only limited by a lack of ego and emotion. There is nothing innate to computers to give them direction, purpose, or ambition.

>> No.10722828

most of you go to shitty state schools so i wouldn't worry too much about AI research

>> No.10724146

>>10715447
Why are the people who made that poker AI not deploying it in online poker games and making insane amounts of money? That's what I'd be doing

>> No.10725608

>>10724146
>Why are the people who made that poker AI not deploying it in online poker games and making insane amounts of money? That's what I'd be doing
I'm not sure you're able to do online poker gambling everywhere anymore. I think I remember laws changing on that at some point. Also back when I was dabbling with online hold 'em (maybe around 2005) I remember every website had heavy duty measures to try to catch bots wherever possible. Not just programmatic bot detection but people who would monitor games and call you out if you were behaving bot-like.
That said a lot of people still *did* use bots. And many more people used program assisted approaches where they played as humans but had tools which would give them much better chances of making the right moves than people playing purely from their own abilities.