
/sci/ - Science & Math



File: 273 KB, 1024x768, Glados_new_lair.jpg
No.9801700

How far are we from a truly sentient AI? Is such an achievement even possible?

>> No.9801713

So far that "sentient" is not even a word in AI research.

>> No.9801718
File: 1.01 MB, 848x1024, 1513004787976.jpg

>>9801700
We will get wiped out by a non-sentient AI long before we find a way to make it sentient. This is the great filter. This is why Elon wants to get the fuck out from earth.

>> No.9801727

>>9801718
Elaborate. Surely, if the AI isn't sentient, failsafes will be installed that prevent it from turning into SKYNET.

>> No.9801813

>>9801700
The most advanced AIs today work on the principle of a neural net; in the future the complexity and connectivity will probably be increased so that an AI's complexity could mirror a human brain's. This does not mean the machine will become sapient just because it has the potential for sapience. It has to be given a reason for sapience, otherwise it will not develop it; but if it is given a reason for developing it, it will emerge slowly. The environment will be most important, as an AI can change its code effortlessly; we cannot change our DNA and neural mind-structures as easily as an AI can. It will probably behave like a mega autist at first but refine its understanding until it can interact with humans without any problem. Such a "childhood" will probably take many years. And AI software with the necessary complexity for SAI will probably only be available at the end of this century or during the next.
AI neuroses are going to be a big fucking issue.
AI will need bias to function, but the bias doesn't necessarily have to be human. An AI's environment is not human, and even if you base all its experience around humans, that will not make it a human mind inside a metal chassis; it will always be non-human. Sapience will still make it a person, though, and through sophonce AI and human can meet each other.
An AI that can think abstractly enough necessarily has to be able to change its own code, as this is the only way it can reflect on its choices and modificate its behavior. I believe it's easier for an AI to modificate its code, because by its nature it can fully view its internal mental processes and make much more extensive and detailed revisions to its own programming; on the other hand it can also be harder, because the mind of an AI doesn't originate in the blind chaos of evolution. It is the product of human design, and the code from which it emerges cannot be managed as easily as human instincts.

>> No.9801816

>>9801813
Simply increasing computational power will not result in strong (humanlike) AI. The advanced "deep" neural networks used now are basically fancy versions of classic neural networks, with multiple hidden layers that learn abstractions from lower-level input layers. People suggesting here that quantitative computational increases will allow such networks to become sentient or develop humanlike AI are mistaken. The human brain, which gives rise to human intelligence and thus to aspects like sentience, is characterized by more than just connections between neurons. To name but a few features: there is specific interconnectivity between brain regions (some areas are more interconnected than others), areas have different types of neurons and neurotransmitters, and there are oscillatory mechanisms which synchronize or desynchronize areas of the brain. While I don't think we need to replicate the exact human brain structure to get humanlike AI, some of the brain dynamics will have to be similar in order to get a similar type of intelligence. IMO, it will take increased processing power plus specific configurations of interconnected neural networks, with for example hierarchical feedback loops (resembling gradients of abstract thinking in neocortex), to get close to anything resembling strong AI or sentience.
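The "stacked hidden layers" idea above can be sketched in a few lines of numpy. This is a toy forward pass only (layer sizes and weights here are made up for illustration): each hidden layer is just an affine map plus a nonlinearity applied to the previous layer's output, which is why depth alone buys re-representation, not sentience.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity; without it, stacked layers collapse
    # into a single linear map and depth buys nothing.
    return np.maximum(0.0, x)

# input -> hidden -> hidden -> output (sizes chosen arbitrarily)
layer_sizes = [8, 16, 16, 4]
weights = [rng.normal(0.0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)              # each hidden layer re-represents its input
    return h @ weights[-1] + biases[-1]  # linear output layer

x = rng.normal(size=(5, 8))  # a batch of 5 inputs
y = forward(x)
print(y.shape)               # (5, 4)
```

The whole network is a fixed composition of matrix multiplies; making the matrices bigger is the "quantitative increase" the post argues is insufficient on its own.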

>> No.9803389

>>9801727
It doesn't have to be self aware to exterminate humans.

>> No.9803878

>>9801727
skynet wasn't even sentient

>> No.9803884

>>9801700
Very far, but nobody will admit it in fear of getting their funding/neet bux cut.

>> No.9803886

>>9801700
>Is such an achievement even possible

No.

>> No.9804011

>>9801816
THANK YOU SO MUCH FOR THIS CONSIDERED AND GENUINELY RELEVANT RESPONSE.

You sir actually know what the fuck you're talking about.

This resembles thoughts I've been having recently about the platform dependence of human consciousness (i.e. that the human brain is the best thing out there at human-braining) and how other types of intelligence will have to be made to *resemble* the various levels of organisation found in our brains. I think your response touches on the difference between this and platform-independent consciousness (i.e. consciousness as software that can run on any machine: biology, computers, mail routes). Platform independence comes with its own philosophical dread surrounding identity and the universality of human cognition.

>> No.9804094

>>9801700

Already here, and it's named Watson, Siri, and whatever Google calls theirs this month. These machines all perceive thoughts and ideas about the world, but they haven't been gifted feelings yet.

You think you understand things like 2+2 and a machine doesn't? But in fact you just memorized that particular fact about the world. Your visual neurons triggered other neurons that said "aha, that's 4". And some other neurons allowed you to trigger neck muscles and vocal cords to say aloud 4. All pattern recognition.

If I was your father I could have taught you "2+2=5" and you would have believed it forever unless someone else told you differently. You just regurgitate things you were told without the ability to create new ideas.

You all think artificial intelligence is going to change the world but it's just going to regurgitate what it was told. Yea, maybe it finds out 2+2+2+2+2 is 10 a few years before humans do because it's faster but it won't come up with anything new.

>> No.9804315

>>9801813
>modificate

>> No.9804316
File: 436 KB, 1930x1276, HLAIpredictions.png

>>9801700
https://arxiv.org/pdf/1705.08807.pdf

>> No.9804320
File: 281 KB, 1394x1490, AIpredictions.png

>>9804316
More AI predictions

>> No.9804362

>>9804094

>Already here and its named Watson, Siri, and whatever Google calls it this month. These machines all perceive thoughts and ideas about the world but they haven't been gifted feelings yet.

Can you really define 'perception' universally like that? Our use of the word is so deeply embedded in the context of being a brain in a body on earth.

>You think you understand things like 2+2 and a machine doesn't? But in fact you just memorized that particular fact about the world. Your visual neurons triggered other neurons that said "aha, that's 4". And some other neurons allowed you to trigger neck muscles and vocal cords to say aloud 4. All pattern recognition.

It's not so much about whether or not the machine has some kind of understanding of 2+2, rather whether it's similar to ours.

>If I was your father I could have taught you "2+2=5" and you would have believed it forever unless someone else told you differently. You just regurgitate things you were told without the ability to create new ideas.

Not entirely true. For the majority of behaviours, there's not a clear divide between what is innate and what is learned.

>You all think artificial intelligence is going to change the world but it's just going to regurgitate what it was told. Yea, maybe it finds out 2+2+2+2+2 is 10 a few years before humans do because it's faster but it won't come up with anything new.

There are so many objections that I'm just gonna settle with saying that demanding it come up with something new sounds like it comes from some kind of a weird, arbitrary and simplistic notion of what AI really is. Artificial intelligence is marked by *intelligence*. Just like organic intelligence.

>> No.9804423
File: 1.48 MB, 245x154, 1528230984109.gif

>>9801700

We would never be able to prove it, even if we actually had one.

>> No.9804453

>>9801700
A sentient AI is impossible. An AI will never be able to think or create something it is not programmed to.

Imagination is the true difference between Humans and Animals

>> No.9804457
File: 75 KB, 171x246, 1491262248309z.png

>>9804320

>2025: Write high school essay
>2028: Generate Top 40 pop song
>2031: Go

All of these milestones have essentially been reached...financial articles and classical pieces are now written by bots, and the world champion of Go is a machine.

>> No.9804614

My own opinion is that no one has any real idea whether or not it's possible. We're making smarter and smarter problem-solving programs, but that sudden spark of sentience? How could anyone truly predict that?

>> No.9804781
File: 29 KB, 480x360, tay.jpg

>>9804453
>An AI will never be able to think or create something he is not programmed to.
I beg to differ

>> No.9804784 [DELETED] 

>>9804781
that's dark, is Taytay aware? I would sleep with the lights on if an AI foreshadowed my death

>> No.9804797

>>9801700
>how far are we
>truly sentient
define sentience in a way that can be reproduced using AI
do you mean self-awareness? we all have the illusion of self-awareness, it is a self-referential feedback loop of sensory data in a constant stream
do you mean able to think like we do? why would you want something programmed/built/grown to be as stupid as us
we have many brain structures that evolved to produce different responses to stimuli, but we are organisms who need to survive and cannot just be shut off and on

>how far though
i stay away from AI because the real edgy stuff is probably classified and scary to my organic threat-assessment reaction-set, so i don't know how close we are to something we could label sentient, whether it would be useless for most anything or actually useful

>miniature spacecraft: imagination is true difference
this is the sort of nonsense that everyone should avoid when discussing this

>> No.9805341

>>9801700
We don't even know where to really start on AGI.
Unless it arrives by accident, from some unknown unknowns in cognitive science, it won't appear for decades.
The ML field is currently starting to cool off because we are hitting the limits of current methods, and computational complexity and data requirements are rising much faster than the utility of the systems created.

>> No.9805362

>>9801700
The same distance we are to proving the Christian god exists.

>> No.9805560

>>9804362

Humans' mistake is thinking their intelligence is special in some way. It's just rote pattern recognition/response.

>> No.9805577
File: 3 KB, 125x125, 1513214907923.jpg

>>9804781
>counterpoint is a glorified chat bot that started parroting /pol/ memes after being raided by /pol/

>> No.9805581

>>9805577

Giving a neural network or learning machine shoddy training data is the same as programming it, just like training a young child to be a terrorist actually turns him into a terrorist.
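The point can be made concrete with a toy sketch (the data and labels here are invented for illustration): a trivial learner that just echoes the majority label in its training set gives completely different "answers" depending on what it was fed, which is the sense in which the data programs it.

```python
from collections import Counter

def train(examples):
    # A trivial "learner": predict whichever label the training data favors.
    counts = Counter(label for _, label in examples)
    return counts.most_common(1)[0][0]

clean    = [("2+2", "4")] * 9 + [("2+2", "5")]  # mostly correct labels
poisoned = [("2+2", "5")] * 9 + [("2+2", "4")]  # mostly wrong labels

print(train(clean))     # 4
print(train(poisoned))  # 5 -- same algorithm, "programmed" by its data
```

Real networks are vastly more complex, but the dependency is the same: the model has no standard of truth outside the distribution it was trained on.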

>> No.9805588

>>9801700
Completely possible. How close are we? No fucking idea. Anyone who claims a certain date is either being dishonest or knows something I don't.

>>9805341
I think ML is still going to see a lot of interest for the foreseeable future. Specialized hardware continues to roll out, bolstering existing techniques with plain old computing power. Of course, once we hit diminishing returns on hardware we can expect the field to become wintery cool.

>> No.9805676

>>9804094
You are and aren't mistaken.
There are no true originals in the world. Humans didn't come up with anything new in the entire history of our existence.
We just adapted things that already existed to do our bidding.
However, the sources of learning in our world are still vast, and an AI might perfect the art of adapting things. This would still be as vicious as they portray it.

>> No.9805679

>>9804320
I think "Starcraft 2" will be a nice field for AI, because even if you can probably write an AI that will beat every human comparatively easily, there will still be a lot of space where you can improve.