[ 3 / biz / cgl / ck / diy / fa / ic / jp / lit / sci / vr / vt ] [ index / top / reports ] [ become a patron ] [ status ]
2023-11: Warosu is now out of extended maintenance.

/sci/ - Science & Math


View post   

File: 6 KB, 272x186, wet.jpg [View same] [iqdb] [saucenao] [google]
9753517 No.9753517 [Reply] [Original]

brainlet here,can someone tell explain to me what the technological singularity is and how it "can go terminator on us"?

>> No.9753525

AI is nothing more than smoke and mirrors at this stage.

>> No.9753585
File: 54 KB, 749x588, singularity.jpg [View same] [iqdb] [saucenao] [google]
9753585

>>9753517
A singularity is just a metaphorical term meaning a point we can't see past. In the case of a technological singularity, it represents the point in which we can no longer predict what is going to happen. The point we cannot see past. This is due to the exponential increase in intelligence provided by advanced general intelligence.

>> No.9753611

>>9753517
Technical progress feeds on itself. The tools we build enable us to make better tools.
The notion is that an AI just a wee bit smarter than we are would be smart enough to build an AI a bit smarter than it is. And so on and so on and so on. It's like evolution, but happening in days rather than gigayears.

The 2nd part is the fear that intelligent machines would have no use for us.
Personally, I think that's silly. Our motivations and drives were around long before intelligence. They come from completely separate layers of the brain. An IQ of 400 doesn't mean you want to DO anything.

David Brin has made the point that we are ALL doomed to be replaced, superannuated, and "put out to pasture" by newer models. We call them "children" and very few people are frightened or have called for a ban on their manufacture.

In any case, >>9753525 is correct. A "thinking" machine, as opposed to a "problem solving" machine, is not on the immediate horizon.

>> No.9753646

>>9753517
A true super AI would do one of three things: Kill all of us, take care of us, or leave.
Asimov, Banks, or Egan. Terminator or WALL-E.

>> No.9753653
File: 436 KB, 1930x1276, HLAIpredictions.png [View same] [iqdb] [saucenao] [google]
9753653

>>9753517
Predictions for when AI will exceed human intelligence
https://arxiv.org/pdf/1705.08807.pdf

>> No.9753657
File: 281 KB, 1394x1490, AIpredictions.png [View same] [iqdb] [saucenao] [google]
9753657

>>9753517
More AI predictions

>> No.9753661
File: 90 KB, 536x536, mindspace_2.png [View same] [iqdb] [saucenao] [google]
9753661

>>9753517
https://www.youtube.com/watch?v=EUjc1WuyPT8
https://intelligence.org/files/AIPosNegFactor.pdf
http://yudkowsky.net/singularity

>> No.9753677

>>9753517
It's an idea that a runaway deep learning algorithm (or something like it) increases its intelligence exponentially in order to maximize its utility function.

>> No.9753780

>>9753517
It would be unlikely to be malicious unless programmed to be so. But an AI with the potential to modify itself could get out of control very quickly due to unintended consequences. Google "paperclipping". The basic idea is that self preservation and acquiring resources is a convergent goal that most intelligent beings need in order to accomplish their primary goals, and unless we are very careful with programming, there is a significant chance an AI could end up using things we need to survive in order to increase its own intelligence or utilize matter for some other purpose.

Lesswrong is a fairly insular community that popularized this topic, but they can't be taken totally seriously because they can get a bit circlejerky. Still, they have some valid ideas, and most prominent figures in artificial intelligence agree with them that a badly made AI is an existential threat even if not made to be hostile. Any sufficiently advanced AI that isn't overtly benevolent could indirectly be a threat to us, and even that would extremely hard to do since we don't "talk" to AI directly but through abstracted programming languages. Getting them to really understand what humans want in terms of happiness while preserving freedom and diversity of experience would be difficult.

>> No.9753799

>>9753653
We are shit at predicting the future. Might as well flip a coin as consult such a chart.

>> No.9753860
File: 10 KB, 480x360, hqdefault.jpg [View same] [iqdb] [saucenao] [google]
9753860

I've seen 2 devinitions of the techno singulo being thrown around.

1) When AI meet or exceed human intelect

2) When Humans artificially alter or enhance their physical bodies with technology to the point where the line that divides synthetic robot with organic human doesn't just become blurred but disappears entirely. Think the ending to bicentennial man, where the robot has all his parts replaced with synthetic organs designed for human use, legally becomes human and dies of old age. If you havn't seen that movie, sorry for spoiling the ending.

>> No.9753877
File: 99 KB, 854x464, together.jpg [View same] [iqdb] [saucenao] [google]
9753877

>>9753860
Also, I've always thought the 2nd definition was correct because that's how it was first introduced to me with GitS.

>how it "can go terminator on us"?
If you're talking about the 2nd definition, then it's a sure fire way that it can't go terminator on us. The lines between synthetic robot and organic human will be so blurred that there would be no discernible enemy or target to "terminate."

>> No.9753949

>>9753517
it's just another tech buzzword thrown around by people who want to sound smart and cutting edge

honestly i've never heard this word used unironically around genuine scientists or engineers