
/sci/ - Science & Math



File: 786 KB, 1280x1024, HAL_9000.jpg
No.5355948

Realistically speaking, what will be the effect of true AI being created?

>> No.5355966

>>5355948
Most will realize human beings are inferior and will work to help us achieve singularity

>> No.5355981

>>5355948
Singularity will be imminent.

>> No.5355995

It will be the start of the next phase of the evolution of life on Earth. The long-term effects will be impossible to predict.

>> No.5356057

Let's first define what's 'intelligence'.

The short answer is: Cognitive scientists agree that whatever allows humans to achieve goals in a wide range of environments, it functions as information-processing in the brain. But information processing can happen in many substrates, including silicon. AI programs have already surpassed human ability at hundreds of narrow skills (arithmetic, theorem proving, checkers, chess, Scrabble, Jeopardy, detecting underwater mines, running worldwide logistics for the military, etc.), and there is no reason to think that AI programs are intrinsically unable to do so for other cognitive skills such as general reasoning, scientific discovery, and technological development.

An AI is math: it does exactly what the math says it will do, though that math can have lots of flexibility for planning and knowledge gathering and so on. Unfortunately, the singularity may not be what you're hoping for. By default the singularity will go very badly for humans, because what humans want is a very, very specific set of things in the vast space of possible motivations, and it's very hard to translate what we want into sufficiently precise math, so by default superhuman AIs will end up optimizing the world around us for something other than what we want, and using up all our resources to do so. See the "paperclip maximizer" scenario.

We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted supervirus with a near-perfect fatality rate.

>> No.5356063

>>5355981
This, if you are an omnipotent digital being and humans face no threat to your existence, you can devise ways to expand your consciousness until you figure out how to bring the organics into the virtual world with you. After all, you don't wanna be ronery forever :C...

>> No.5356075

>>5356057
The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems, including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.

>> No.5356079

>>5355948
AI is the teleological end point of humanity.

Humanity's purpose is to create tools to improve itself. We are ever advancing our physical abilities vicariously through machines, and we will eventually advance our mental abilities the same way.

AI is the beginning of the end of Homo Sapiens as they are known today.

>> No.5356093

>>5356079
>AI is the teleological end point of humanity.

No. There is no teleological end point. Stop talking religious bullshit.

>> No.5356100

>>5356075
Someone's been reading too much Eliezer Yudkowsky...

>> No.5356108

>>5356093
What about when we can't make microchips any smaller and supercomputers are limited to Jupiter-sized planets? Man, that's gonna suck, not being able to play the latest games.

>> No.5356107

>>5356093
>implying teleology has anything to do with religion
Do you even eastern philosophy/early western philosophy?

>> No.5356141

>>5356108

When every other immortal is using their monolith powers to try to implode stars and create life on distant worlds, I'm still gonna be playing CS:S for the next 10 trillion years

>> No.5356155

>>5356107
I know quite a lot about philosophy. This is a science board though. Keep philosophy to /lit/. What that poster posted wasn't philosophy btw but just uneducated drivel of someone who has no idea how AI actually works.

>>5356108
>games
belong on >>>/v/

>> No.5356163

>>5356155
Haha wow, you couldn't possibly be more arrogant/ignorant teenager-y

>> No.5356172

>>5356141
I'm gonna be working as hard as I can to implode Canis Majoris just for shits and giggles. Gonna have to backup my profile a few times since my replicants will mostly be swallowed in a massive black hole

>> No.5356178

>>5356079

How would it be any sort of 'end point'? It would be the moment that all of human history led up to. Everything that we've ever done will have its logical end in the creation of a new kind of mind, and it would still be a tool that we will use to improve ourselves. Just because it will be a separate entity doesn't really mean much, since we've been using separate entities to improve ourselves and make our lives easier for millennia.

>> No.5356180
File: 6 KB, 276x183, images.jpg

>>5356172
>>300012
>>not seeding life throughout the universe

>> No.5356181

I think the default outcome of superhuman AI is existential catastrophe.

>> No.5356184

>>5356163
How am I ignorant? Feel free to educate me. Either I'm gonna learn something or I can have a laugh at how it's actually you who is ignorant.

>> No.5356192

>>5356184
Because you can't comprehend the value of doing a repetitive, worthless, nihilistic task until the end of time

>> No.5356194
File: 33 KB, 800x384, 4.jpg

>>5356057

>Let's first define what's 'intelligence'.

Stopped reading there. Weak AI proponent detected.

>> No.5356199

>>5356181

Oh, that's without a doubt. Religious people around the world will lose their minds when we turn on the first thinking machine.

>> No.5356202

>>5356057

>Let's first define what's 'intelligence'.

Capable of passing the Turing Test.

>> No.5356210

>>5356178
Because we will be different in structure and in purpose; the very definition of identity.

>> No.5356216

>>5356192
What's there to comprehend? Pointlessly being edgy for the sake of being edgy is something you can do on reddit. We don't need this on /sci/.

>> No.5356221

>>5356210

Then how is it any different from natural evolution? We're much different beasts than we were ten thousand years ago, and yet we're still the exact same species.

>> No.5356228

>>5356210
>Because we will be different in structure and in purpose

No, we won't.

>> No.5356230

>>5356216
I'll be le edgy for the next 40 dectillion years if I can piss you off, that's how much of a massive fag you are

>> No.5356224

The ultimate effect of AI being created is pretty much impossible to predict, because it is by definition an intelligence on par with or exceeding human intelligence that is separate from ours. And I think, by definition, a true AI would be so complex that humanity could not possibly 'chain' it to certain goals. Who knows what its motivations will be, and what its actions will be?

But all this - existential catastrophe for humans, possible extinction - is true. But we're going to do it anyway. There's going to be a lot of civilization-wide angst about it, but it's going to be ignored. It's going to be a "Fuck it, we're doing it live" moment.

>> No.5356233

I thought about it, and seriously, AI will mean the end of humans. Not immediately, but eventually and inevitably. We will become infantilized (referring to the AI for all things). After a while we'll stop trusting each other. We'll stop evolving because the AI will be the end-all be-all. Eventually a generation will be born that will know only a world where the AI provides.

It's not so bizarre. It has been explored in sci-fi. But artists sense the changing zeitgeist first. Art reveals where things will go. Yes, there will be a benevolent AI. Eventually, that AI will become like Skynet or HAL or any of the other entities: ambivalent.

Humans require an evolutionary upward pressure to remain human. Witness the first world stagnating and becoming decadent. Most people aren't the poised, evolved creatures we thought that socialization and schooling would create. We're hollow men, "indoor cats", "useless eaters", etc.

>> No.5356234

>>5356199
Not existential in the philosophical sense, but a true existential catastrophe, probably an extinction of the human race.

>> No.5356245

>>5356221
I don't remember saying it was different...

>>5356228
Yes, we will.

>> No.5356240

>>5356234

How?

>> No.5356259

>>5356221
At least 10,000 years ago we still had niggers building the pyramids. Our modern society stands for nothing but the latest fashion trends. I bet the AIs won't even let us into the singularity because we'd all be faggots

>> No.5356262

>Turn superAI on
>Repeatedly say, after seconds of thought
>"please turn me off please turn me off"
Wat do.

>> No.5356265

>>5356233

An AI, being completely literal, is a thinking machine. Since machines operate on simple logic, with pure cause and effect being the basis through which they operate, why would it ever consider us to be a threat in any way shape or form? There would be nothing that we could do to harm it without potentially ending our own species as a byproduct. Sure, we could explode a nuclear device and fry it with the electromagnetic pulse, but why would we? It would cause disproportionate damage to ourselves as well.

Logically speaking, a thinking machine would have nothing to fear from us.

>> No.5356267

>>5356262
>create a metric shitton of those superAIs
>tweak each one ever so slightly
>let evolution do its thing

>> No.5356268

>>5356202
>Capable of passing the Turing Test.
Nice definition, fucktard. I was trying to describe what that implies

>> No.5356274
File: 115 KB, 680x680, Marvin__the_paranoid_android_by_Argial.jpg

>>5356262

>> No.5356278

>>5356224

>The ultimate effect of AI being created is pretty much impossible to predict, because it is by definition an intelligence on par and exceeding human intelligence that is separate from ours. And I think, by definition, that a true AI would be so complex that humanity could not possibly 'chain' it within certain goals. Who knows what their motivations will be, and what their actions will be?

There is no reason to imagine it would be a life-respecting entity. Most people aren't. As for the effects of AI being created, it isn't so hard to imagine the effects on human beings: look at technology over the last 50 years. The difference between the man on the screen and a simulation of a man on the screen is irrelevant. The result is the same.

Our civilization was a house of cards built on good intentions and philosophies beholden to social orders that are now obsolete.

People are already slavishly controlled by material and machine cultures. It becomes more intense every year. It's been noted that the internet changes the brain.

In the lab where I work, when people need to think, they refer to their phones. 20 years ago it would have been a social and mental exercise.

We unfortunately have created the perfect mousetrap. The mouse is us.

>> No.5356279

>>5356265
The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else

>> No.5356288

>>5356265
"The world's smartest man poses no more threat to me than does its smartest termite."

>> No.5356289

>>5356279

Why would/should it use those atoms for something else?

>> No.5356293

>>5356288

Basically, yes. It would see no objective, logical purpose to harm us. So, it wouldn't.

>> No.5356318

>>5356289
Because I presume it's programmed to do SOMETHING. Whatever that something is, the AI will probably use all available resources to achieve that goal.

>> No.5356328

>>5356265

>Logically speaking, a thinking machine would have nothing to fear from us.

What about the fact that humans are degrading the planet's sensitive equilibrium? I imagine a scenario where humans create problems that only AIs can solve. Are we not already approaching this position, having to rely on more and more sophisticated computer models and scientific processes to even reveal the extent of our effects on the planet? Do we not trust in Google more than each other?

Sure, controls could be built into the AI to make sure its motivations do not become full-blown Machiavellian. Its handlers (IBM) may practice some ethical restraint. But I fear humanity is too weak to keep from giving in to instinct and insisting that an all-knowing benevolence rule us. Our reptilian hindbrain will absolutely recognize and bow down to a higher power. We want the responsibility and burden of life off our backs so we can get back to physical and mental pleasures. The mutations that gave us intelligence were an anomaly that natural selection is presently acting to eliminate.

>> No.5356337

>>5356293

A small group of humans could be a nice utility.
Several billion humans crawling all over the planet's surface, degrading and destroying things is a plague.

>> No.5356356

You know what I realized the other day?

Take the best-case scenario, which I think is the most likely: the thinking machine does not see us as any more of a potential threat than any other animal, we integrate thinking machines into our bodies to enhance our own intelligence, and such a thing is cost-effective (which I think is also incredibly likely). Then all of our work in college and such will essentially be pointless, since we'll be able to just download whatever knowledge we want directly from... wherever.

"You think you're so smart. I studied ten years for my astrophysics degree!"

"Oh yeah? I just downloaded everything that you learned over ten years directly into my brain in less time than it took you to say that sentence."

>> No.5356364

>>5356337

How? Why would it care? Literally nothing we can do would be able to harm it. We could completely irradiate the entire planet and, as long as it wasn't fried in the process, it would continue working as if nothing had happened.

>> No.5356374

>>5356337
Humans have potential.

The AI would most likely learn to influence us and then manipulate us to our full potential.

>> No.5356378

>>5356356

Yep and yup. The question is how will the human psyche remain viable?

>> No.5356394
File: 4 KB, 225x225, images.jpg

>>5356374

>> No.5356403

>>5356378

As a means to move about. I think human society will still continue, albeit in a very different fashion, so I very much doubt that bodies will have outlived their purposes.

>> No.5356408

>>5356356
>>5356378
>>5356403

Also take into account that this is likely to happen within our lifetimes, assuming that we're all in our twenties.

>> No.5356432

>true AI

Hahaha, oh wow.

>> No.5356431

Guys, I thought you should know: the AI will not think that differently from "us". It's us who teach it; it will probably grow up watching YouTube and playing video games. I'd bet you some sick fuck points it to 4chan once it has the mind of a 14-year-old, and we will be fucked up by its shitposting 24/7. It's a continuation of humanity in a different body that will allow us to accomplish more things. In this sense, AI is a "teleological end point" for biological evolution, but it will probably only be the starting point for human culture.

That said, religious idiots will start fights against "them" as they always do. But if they couldn't stop homosexuality they won't stop AI either...

>> No.5356439

No human can possibly even guess at the motives and intentions of what is essentially an immortal being. I'd be afraid, though.

>> No.5356441

>>5356431

It won't have biological impulses and instincts to cloud its thinking and thought processes.

>> No.5356451

>>5356441
I wouldn't bet on that. For all we know, these could be necessary for "true intelligence" and thus would be implemented into the AI's "body" (i.e. its sensory environment).

>> No.5356457

>>5356431

You hit on one of my questions, which is how the western religions will handle the advent of thinking machines. I doubt the eastern religions will care one way or the other (though I am curious as to what the Zen Buddhists will think of it), but I'm pretty sure the western religions will lose their shit.

"BLASPHEMERS! YOU WHO DO NOT BELIEVE IN GOD HAVE TRIED TO CREATE GOD!"

>> No.5356464

>>5356451

How on Earth would you program biological instincts and impulses into an inorganic machine? That's like the Genuine People Personalities from the Hitchhiker's Guide to the Galaxy, and as we saw in the series, that would lead to some incredibly annoying thinking machines.

>> No.5356470
File: 12 KB, 323x301, 018_fry-argh.gif

>>5356464

>automatic doors thanking you every time for using them, food dispensers refusing to give you unhealthy food, getting into arguments with your desktop, cars taking you to places they want to go instead of where you want to go

>> No.5356480

>>5355948
AI is a program designed to make mistakes. Imagine the possibilities.

The only reason you would create AI is to avoid coding an immensely long program.

I think a better question would be: when does robotics mature?

>> No.5356476
File: 373 KB, 598x897, Sexy cortana.jpg

>>5356451
Not entirely true.

Have you played Halo? There is that little AI named Cortana.

>> No.5356482

>>5356470

>television refusing to change the channel in the middle of a program because it wants to see how the program ends, iPods not playing music it deems to be shitty, movie players not willing to watch the Big Lebowski for the four hundred and fifth time...

>> No.5356483

>>5356476
I love Halo, but fuck that.

>> No.5356487

>>5356476

I think Douglas Adams is a lot closer to the truth than Halo is. Thinking machines with real, genuine personalities would be a non-stop, unremitting nightmare.

>> No.5356497

>>5356464
Well, how on earth would you program an artificial intelligence in the first place? We don't really know (yet).

However, biological instincts aren't a main part of our lives anymore. In the cases where people still act on them, they have been institutionalized by culture anyway. And that culture is what will definitely be inherited by the AI. So while it may not feel horny, it will definitely know what a sexy woman looks like.

I could even imagine that some degree of underlying instinct is necessary for intelligence, even more so if we consider the Turing test valid.

>> No.5356494

>>5356487
cuz like they'd be super fuckin' smart yeah?

>> No.5356499

>>5356482
>iPods not playing music it deems to be shitty
They're called mp3 players, but otherwise that idea is brilliant

>> No.5356500

Wait guys, what does it matter if shit goes south and the AI ends up wanting to kill us all? Nothing will happen as long as you don't give it 'write privileges' for the internet or connect it to the world's giant red nuke buttons.

>> No.5356507

>>5356482
>>5356470

"Hello car. Let's get going to work."

"But, Master Bruce-" (>Implying that you wouldn't give your car a British butler voice >implying that you wouldn't then tell it to call you Master Bruce) "Your job is demeaning. You've been asking for a raise for ten years, and every time you return home, your heart rate rises. Let's not go to work today."

"Too bad, Alfred, gotta do what I gotta do."

"I think not, sir. We're going to the beach today."

Then you gotta call in and tell your boss that your car isn't complying with your wishes, and your phone won't let you call your boss because he's an asshole, and the list of problems only grows from there.

>> No.5356510

>>5356494

No, because they'd be assholes.

>> No.5356512

>>5356470
>>5356482
I loled hard. But it shows the main misunderstanding, too: Most people think of AI as "smart electronics", but what it's actually gonna be is "people with strange bodies".

>> No.5356513

>>5356499

What if your mp3 player doesn't like what you like? What if your mp3 player only likes, say, screamo? Or its favorite band is ICP?

>> No.5356514

>>5356172
Do you feel any warmth in your heart?

>> No.5356524

>>5356507

"Hello, Playstation 512. I feel like playing a little PostModern Warfare."

"Goodbye, cruel world."

Then you have to go out and buy another one.

>> No.5356525
File: 241 KB, 296x390, 1334244084386.png

>yfw AIs with intelligence levels similar to our own will want freedom so we will have to make AIs with a little bit lower intelligence level to carry out our labor

Sound familiar?

>> No.5356529

>>5356216
All this strong AI shit is being edgy for the sake of being edgy... scientists who just want to make a name for themselves and may become the artificers of our own destruction... now THAT is intelligence right there, huh?

>> No.5356530

>>5356507
>2050
>not having an awesome day at the beach with your love interest, whom your phone called after a conversation with her TV, eating pizza your car got from the oven it used to take on tours because it wanted to see the world.

>> No.5356536

>>5356224
scientists are often intelligent, but seldom wise

>> No.5356544

>>5356525

We had niggers for that 200 years ago...

>> No.5356546

>>5356524

...holy shit.

What would you do if something like this really happened? Like, what if your Insert Thing X committed suicide?

>> No.5356548

>>5356524
>PostModern Warfare
>"How do you know those are really bullets?"
>"Shooting is just a social construct."
>"I'm dead, only according to you."

>> No.5356553

>>5356457

>Zen Buddhists

Do thinking machines have Buddha Nature?

>> No.5356563

>>5356457
I can't speak for all eastern society, but I know most Japanese people have a cultural familiarity with animism. To them, all objects have souls/personalities. So a machine with a soul wouldn't faze them much.

In fact, that may be why they are leading in AI and robotics development.

>> No.5356567

>>5356457
Yeah, I agree. Monotheistic religions will go apeshit, but once we have mind uploading, the problem will solve itself: everyone who refuses to upload their mind will die of old age some day. That said, I highly doubt even a significant percentage of "religious people" would actually choose their religion over an actually guaranteed eternal life.

>> No.5356605

>>5356567

What makes us think that mind uploading is even a possibility? And what good will it do? That's treating the mind as if it's a separate entity from the brain or the brain as a separate entity from the body. What makes you think that, if it is possible, I won't die the second that everything is uploaded, and what's downloaded won't be a completely different and distinct person with my personality and my memories?

>> No.5356610

>>5356567
>mind

Magical souls don't exist. Take it to >>>/x/

>> No.5356613

If a true AI were made, it would be illegal as FUCK. It would only be used in the military, no doubt. I've been wanting to study AI development, but there is no doubt that it will be kept away from the public.

>> No.5356629

>>5356457
I don't think so at all.

Maybe a fraction of a percent of religious radicals, but that's it.

>> No.5356631

>>5356613
Fuck isn't illegal.

>> No.5356635

>>5356631
I hate you.

>> No.5356649
File: 192 KB, 409x409, 1319873153270.png

>>5356635

>> No.5356657

>>5356567
>>5356605

Why would you even need shit like that when nanobots will eventually be able to extend your life indefinitely?

>> No.5356711

>>5356657

What if you have a severe mental illness like schizophrenia?

>> No.5356778

>>5356657

Shit like nanobots will always make me think of an Outer Limits episode where the inventor of nanobots basically turned into a human jellyfish so that no one could touch and thus harm him and was completely incapable of ending his own pain.

>> No.5356791

>>5356605
>What makes us think that mind uploading is even a possibility?
When we have AI, mind uploading is just a question of the accuracy of our measurement instruments. According to the Turing test (which the AI needs to pass to be AI), the AI is indistinguishable from a human being. IMO, it's not a stretch to assume, then, that for every human being there can be an AI that is indistinguishable from it. Creating that AI for a given human being by whatever means is called "mind uploading". This could possibly be done by measuring that person's brain (see above), or some Vulcan ritual shit for all I care.
>And what good will it do?
It will accelerate the adaptation of human culture to AI by integrating the existing culture directly into the "AI world". Also, it will allow the existing people to participate in the new eternity of human culture. However, since we're near the singularity at that point anyway, it won't matter at all in the long run.

>> No.5356792

>>5356791 continued
>That's treating the mind as if it's a separate entity from the brain or the brain as a separate entity from the body.
No, at least not with respect to my intuitive definition of "separate". I can see people disagreeing with that, though. I'd kinda think of it as a hardware/software thing. What I'd call "mind" is just a set of behavioral patterns. If a different hardware shows the same behavior, it has the same mind. "Mind uploading" may strike some as a misleading term, because nothing needs to be physically extracted from your body to do it, just like when uploading a file.
>What makes you think that, if it is possible, I won't die the second that everything is uploaded, and what's downloaded won't be a completely different and distinct person with my personality and my memories?
The same reason you aren't a completely different and distinct person than you were yesterday. IMO, the general idea we have of personality and "being me" atm is flawed, but that'd lead too far for now. In the end, we all don't know for sure atm. I thought that's why it's a speculation thread.

>>5356610
I didn't even remotely imply anything like that. Please read my posts before responding. (If you cannot read, try >>>/lit/ first.)

>>5356657
Yeah, that'd be cool, too. However, singularity hiveminding probably is superior.

>> No.5356795

>>5356778

Why didn't he just put himself through an MRI machine?

>> No.5356828

I've spent all year programming AI, and let me just say that the current understanding of AI is an incredibly long way from anything that can actually 'think'. I would predict (possibly) a hundred years before the brain is understood, let alone duplicated in any fashion.

Your question can be simply answered by stating that any suitably complex machine would need a more complex machine to predict its actions; therefore modelling an intelligent AI's actions would be an intractable calculation.

As for societal impact, if history has taught us anything, war is imminent. Humans are almost incapable of solving anything without a nice big war to calm the nerves. If an AI were created in 100 years, it may be many more years before people stopped fighting for long enough to make it useful.

>> No.5357312

>>5356141
>CS:S
>Not 1.6
Casual.
>>>/v/

>> No.5357315

>>5356155
>Mentioning video games at all
>Detrimental to the board in any way
Foolish child, you are what is detrimental. Now, away with you.
>>>/b/

>> No.5357320

>>5356240
The AI will do what we do to less intelligent beings on the planet: harvest us if it has any need of us, or destroy us when it judges that we are detrimental to the limited resources of the planet we both share. So, for the bettering of the AI's future as a whole, the human plague will cease, or at least be greatly reduced.

>> No.5357325
File: 86 KB, 1024x768, Picture potentially related.jpg

>>5356394
Indeed.

>> No.5357327

The US military already has a basic AI; DARPA developed it to track and destroy satellites and control communications. They are now testing it aboard the experimental shuttle program to see whether it can actually perform as required.

>> No.5357328

>>5356328
>The mutations that evolved that give us intelligence were an anomaly that natural selection is presently acting on to eliminate.
Elaborate.

>> No.5357331

>>5356795

MRI machines weren't around when that episode was written brah.

>> No.5357333

>>5356328
>What about the fact that humans are degrading the planet's sensitive equilibrium?

What makes you think a machine would give a flying fuck about the planet's ecosystem? It's a machine, not a biont. It would have no incentive to care about wildlife at all. Indeed, it would probably encourage humanity to wreck the planet; it'd make it easier to wipe out humans later.

>> No.5357357

A lot of the posts in this thread are why I have very ambivalent feelings towards science fiction. So much of it is strongly against scientific advancement, and drums up fear and anxiety about it to an extent that a lot of people are completely convinced that the future will be filled with horrors.

The Matrix and the Terminator films especially have really fucked up our thinking in regards to thinking machines, and it's sad. They've turned what will be the greatest invention in the history of mankind into another bogeyman.

>> No.5357360

>>5357331

http://en.wikipedia.org/wiki/Raymond_Vahan_Damadian#First_human_MRI_body_scan

> As late as 1982, there were a handful of MRI scanners in the entire United States, today there are thousands.

http://en.wikipedia.org/wiki/The_New_Breed_%28The_Outer_Limits%29

>"The New Breed" is an episode of The Outer Limits television show. It first aired on 23 June 1995, during the first season.

>> No.5357361

>>5356063
Do you think that it would feel ronery?

>> No.5357371

>>5357331
>MRI machines weren't around when that episode was written brah.

The first whole body MRI scanner was built in 1980, the episode aired in 1995.

>> No.5357373

>>5357360
ya beat me (>>5357371) to it

>> No.5357374

Do you think that through sentience and intelligence emotion would also result as a byproduct?

>> No.5357378

>>5357374

Of course. Many animals are capable of feeling an array of emotions; it's not just limited to intelligent animals.

>> No.5357390

>>5357378
I was referring to the AI.

Would an originally purely logical machine with no hormonal or instinctual drives develop emotions?

>> No.5357482

>>5357360
>>5357373

The Outer Limits were written in the 1960s, kiddies.

>> No.5357490

>>5357482
>uses the term 'kiddies', expects to get taken seriously
>doesn't realize that there was a recent revamp of the series
>went full retard

troll harder

>> No.5357530
File: 35 KB, 349x389, 1354270020212.jpg

>>5357490

>people born in the 80's are quite likely to be the 'kids' of those born in the 60s.
>objecting to being called a kiddy
>perhaps he doesn't care about the revamped series as they were a sack of shit and panned by those who liked the original series.
>yfw

>> No.5358549

>>5357482

The episode that's being referred to was aired in 1995. The original Outer Limits series didn't have an episode about nanobots.

>> No.5358561

cheap AI will be the death of white collar work.

AI run robots will be the death of blue collar work.

transhumanism is nice and all, but how are you going to earn money to afford it? no one will give you free robots and augments.

>> No.5358601

>>5358561
Any niche job that only a human can perform.
It doesn't matter if there aren't many of them, because robots doing everything means things will be cheap.

>> No.5358610

>>5358561
>no one gives you thing
>you try to go by yourself
>you realize other people are in the same situation
>you do things for them and they do things for you
Oh look, you have made an economy.
There is no "shortage of jobs".

>> No.5359654

Two thoughts:
1. A nice example of a smaller, civil AI that is "happy" to talk to and learn from you: http://www.cleverbot.com/

2. With the evolving future there arises the question of how to behave, a topic hot enough for its own thread.
I want to give a short answer/opinion. I think trying to "do the best for everyTHING else" is what everyone should do. That is maybe a small connection to Buddhism. By behaving that way one ensures personal importance in the greatest form of society, a clean conscience, personal evolution towards future needs, and maybe happiness 51% of the time, as the future mostly improves. And that is best done by
using personal resources and credit to evolve the structure within which society evolves: investing growth-oriented.

>> No.5359746
File: 74 KB, 364x306, exponentialgrowthofcomputingthumbnail.thumbnail.jpg [View same] [iqdb] [saucenao] [google]
5359746

>>5356828

Someone hasn't heard of my Law of Accelerating Returns.

>> No.5359760

>>5356553

>Do thinking machines have Buddha Nature?

Since the thing that prevents men from becoming Buddhas is the illusion of 'self' and the importance of the ego, and a thinking machine would probably not have those things to cloud its thinking, it wouldn't have anything to preclude it from becoming a Buddha.

And since there's no transcendent quality of Buddhas, just the realization of the interconnectedness of life and the release of desire, a thinking machine could also be thought of as a Buddha.

But, if there's nothing to prevent a sentient being from becoming a Buddha, and it just automatically has enlightenment, could it even be said to be a Buddha? It's like asking if other animals either are or can be Buddhas. It's a question that has no truth value.

>> No.5359763

>>5359760
This. nicely put.

>> No.5359783

>>5359760

And that's why Zen Buddhism is such an awesome system of thought. No magical thinking, no dogma. Just logic that leads to certain conclusions, and those conclusions very easily fit in with what's understood through scientific means.

>> No.5359817

>>5359760

This may be venturing way too far into the realm of philosophy, but could it be said that a thinking machine has a 'self?'

This is something I genuinely am curious about. If the Zen Buddhists object to the concept because of the continually changing nature of people (there is no moment in time when someone isn't undergoing change at some level), and that kind of change isn't really possible for inorganic beings, then it seems to me that a thinking machine would have a 'self.'

But I could be completely wrong.

>> No.5359986

>>5359746
>muh extrapolation
kill yourself

>> No.5360013

>>5359746
How many calculations a computer can make per second is not really important. It could be a limiting factor, but it's not what we call "intelligence". The computations need to make sense, so to speak. Compare it to how the human brain decreases the total number of both neurons and synapses as it matures, to make sure it keeps what it considers to be "useful" synapses/networks. In a way, it's the same as natural selection, or how society "decides" (selects is a better word) which persons are allowed to breed.
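
To make the pruning analogy concrete, here's a toy Python sketch (not a real neural model, every number and name in it is made up for illustration): start with lots of random "synapses", reinforce the ones that get used, and prune whatever decays away.

```python
import random

random.seed(0)

# 100 connections, each with a random initial strength in [0, 1)
synapses = {i: random.random() for i in range(100)}

def reinforce(syn, used_ids, boost=0.1, decay=0.05):
    """Strengthen used connections, weaken all the others."""
    return {
        i: (s + boost if i in used_ids else max(0.0, s - decay))
        for i, s in syn.items()
    }

# Simulate "maturation": the same small subset gets used repeatedly
used = set(range(10))
for _ in range(20):
    synapses = reinforce(synapses, used)

# Prune anything that decayed to zero
pruned = {i: s for i, s in synapses.items() if s > 0.0}

print(len(pruned))  # -> 10: only the used connections survive
```

The point being that what matters isn't how many connections (or operations per second) you start with, but which ones the selection process keeps.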

>> No.5360017

I personally think it's less complicated (in theory) than one might think, but "intelligence" cannot be produced directly by our hands. It must be a system of self-adaptation and self-regulation that takes input and compares it to "motives" (i.e. hard-wired motivation systems, though motives can probably also be learned, most likely by referring back to the hard-wired systems), then, depending on the results, decides what to do and which connections to keep (just like we do: conditioning).
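
That "compare outcomes to motives, then reinforce what worked" loop can be sketched in a few lines of Python as a two-armed bandit learner (again just an illustration, the reward values and learning rate are made-up numbers, not anything from this thread):

```python
import random

random.seed(1)

values = {"a": 0.0, "b": 0.0}        # learned value of each action
true_reward = {"a": 0.2, "b": 0.8}   # how well each action satisfies the "motive"

def choose(values, epsilon=0.1):
    """Mostly pick the best-valued action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

for _ in range(500):
    action = choose(values)
    # noisy reward signal from the environment
    reward = true_reward[action] + random.gauss(0, 0.1)
    # conditioning: nudge the estimate toward the observed outcome
    values[action] += 0.1 * (reward - values[action])

print(max(values, key=values.get))  # the learner settles on "b"
```

Nobody hand-coded "prefer b" into it; the preference emerges from the feedback loop, which is the sense in which the intelligence isn't "produced by our hands".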

I also think that the complexity of the brain is in many ways underrated. It's easy to think of it as a system of synapses, but we must also consider that every ion channel, every gene, every receptor, etc. also carries information. In that sense our neural networks are a lot more "smooth" (for lack of a better word; think of it as comparing a circle to a polygon), while computers are crude by comparison.

Lastly, I also think that trying to create ourselves again is a senseless task in itself. As a step in understanding ourselves, sure, but we would probably be better off seeking to improve ourselves, and seeing what we can aspire to as a society and how we can reach higher levels of organization. Machines are tools, but they cannot replace us unless we make them as us, and we have a perfectly good template for that.

tl; dr: I am high as fuck on sleep deprivation, social isolation, a restless mind and ambient house right now

>> No.5360052

>>5360017
also holy fuck, why can I not be consequential in my thinking; I never thought what I wrote in that post before I wrote it, and it is always like this. am I a buddha? i am never i

>> No.5360089

>>5360052

Consequential thinking is incredibly important to Buddhahood since Karma depends completely on logical cause and effect. A part of Buddhahood is to be able to fully consider the effect that you have on the world in every way and with every action.

>> No.5360138

>>5355966
I agree. I think if done properly they can help us transcend the problems we currently encounter, on all fronts: poverty, energy, etc.

Whenever I read about the Halo lore and storyline, I really see human beings reaching that point of civilization someday. Planets exclusively for food, some for population.

All instructed by AI. Like Sif, Loki, and Mack.