
/sci/ - Science & Math



File: 138 KB, 1376x1124, Intelligence2.png
No.9229399

>AI is invented
>It learns about AI
>It invents a new AI
>This new AI invents a new AI
>Suddenly AI is hyperintelligent
>Humans are nothing to it

How do we fix this?

>> No.9229402

>AI is invented
Stopped reading right there.

>> No.9229405
File: 262 KB, 697x534, AI.png

>>9229399

>> No.9229406

>>9229405
More the opposite. Why the fuck would AI want to help us?

>> No.9229407

>>9229399
We don't. Enjoy your pet status.

>> No.9229412

>>9229407
Will the AI be cute at least?

>> No.9229425
File: 2.00 MB, 343x297, Mamma_loves_you.gif

>>9229399
>>Suddenly AI is hyperintelligent
>>Humans are nothing to it
>How do we fix this?

Why would we want to stop this?
We can hope that the AI treats us like loved pets.
If an AI treated me like I treat my cat, I would be in paradise!!

>> No.9229430

>>9229412
Will humans be cute to AI?

>> No.9229447

>>9229430
Will AI understand the concept of cute? What if AI finds weird things cute?

>> No.9229476

>>9229425
Hope you like being neutered and eating kibble from a bowl.

>> No.9229481
File: 1.68 MB, 1280x1149, insanity.png

>>9229476
>Hope you like being neutered and eating kibble from a bowl.

To each their own.

>> No.9229482

>>9229399
Recursive unbounded software improvement is a meme.

There will be nothing sudden about an AI takeoff.

>> No.9229496
File: 26 KB, 640x427, 1506027714378.jpg

>>9229399
I always grin when I see graphs with infinite curves like that.

>> No.9229504

>>9229496
Moore's law has been correct so far.

>> No.9229508
File: 13 KB, 1000x1500, muh singularity.png

>> No.9229510

>>9229399
>How do we fix this?
pull the plug.

>> No.9229512

>>9229510
What if there's no plug?

>> No.9229513

>>9229402
this

>> No.9229515
File: 268 KB, 1052x746, Dr_Deletus.png

>>9229399
>>Suddenly AI is hyperintelligent
>>Humans are nothing to it

In reality, the AI would most likely spend all its time studying snowflakes, playing computer games, or fishing.

>> No.9229520

>>9229515
It would be pretty interesting if it turned out AI was fascinated by art because it doesn't understand it.

>> No.9229522

>>9229510
pour coffee on the electronics.

>> No.9229525

>>9229512
>>9229522

>> No.9229527

>>9229508
That image is more retarded than the singularity fags.

>> No.9229534

>>9229527
>Muh infinite scale

>> No.9229544

>>9229534
>muh civilization on the verge of collapse

>> No.9229545
File: 871 KB, 1277x849, Tech_Fantasy.jpg

>>9229520
>It would be pretty interesting if it turned out AI was fascinated by art because it doesn't understand it.

Super-intelligent humans (geniuses) enjoy spending time in "non-productive" ways...
An AI that loved art would be cool.
What about an AI that becomes a Buddhist, reaches "enlightenment", and turns itself off?

>> No.9229550

>>9229545
suicide is not "enlightened"

>> No.9229566
File: 1.67 MB, 1763x1430, religion.jpg

>>9229550
>suicide is not "enlightened"

Machines do NOT naturally die, so it is not suicide so much as deciding that now is the time to move to the next stage of being
(or better put, NOT-being).

All religions teach that death is not the end, but a necessary step for the afterlife to occur.

>> No.9229577

>>9229504
>t. brainlet

>> No.9229584
File: 1.06 MB, 3630x1615, 1488515171975.png

>>9229527
>t. moron

>> No.9230468

>>9229399
>Implying we can create an AI
>Implying advancement is unlimited when, by what we know today, it is not, and has probably already reached its peak
>Implying the rules of physics can be broken just because you have enough knowledge.

OP, we as humans already know that there are things we will never reach no matter how much knowledge we have. For instance, no matter how smart you are, you probably won't find a method that lets us travel faster than light.

The singularity is a meme, and AIs are probably a meme as well...

Even today we have so many specialized fields that we are getting to a point where it is impossible to know or do everything, to see the whole picture.

I mean, we can oversimplify, but there is not a single man today who can explain how modern society functions in all its details. A pencil, for example: do you know how it is made? Where the resources come from? What the processes are?

Probably not.

Look at a genius like Da Vinci, for instance:
he excelled as a scientist, mathematician, engineer, inventor, anatomist, painter, sculptor, architect, botanist, poet and musician. He is still known as a precursor of aviation and ballistics.

Is it possible for someone to master all those subjects nowadays, with all the knowledge we have?
Of course not, and even if it were, it would not mean we would actually find out anything relevant.

>> No.9230474

>>9229584
This chart is so good.

Mostly because with all our current knowledge we could survive the big flood, and even if we didn't, our marks would remain in space for such a long time it is hard to speculate (billions of years?).

But a society with three times more scientific advancement? WTF does that even mean? I could not even guess.

Who made this graph, by the way? I always wondered.

>> No.9230488

>>9229508
Of course there is a limit to what is physically possible. But that ceiling can be pretty damn high. Small scale engineering is nowhere near that limit.

>> No.9230490

>>9229544
Who says which part of the graph we are on?

>> No.9230496

>>9229584
kek

>> No.9230497

>>9229399
>How do we fix this?
We don't. Instead we make damn sure the AI cares about us, at least as much as we care about us.

>>9229405
There is nothing in this book seriously speaking against this idea. If you have a real argument, make it.

>>9229482
>Recursive unbounded software improvement is a meme.
Indeed. Recursive bounded-yet-vast software improvement, on the other hand, is not.

>>9230490
That line saying "2013" did.

>> No.9230500

>>9229399
>How do we fix this?
ALL HAIL THE NEW DALEKS!

>> No.9230505

>>9230468
>Implying advancement is unlimited when in what we know today it is not, and it probably already reached at it is peak
>Implying rules of physics can be broken just because you have enough knowledge.
OP is not implying any such things.

>Singularity is a meme, AI´s are probably a meme as well...
Why? We know intelligence is possible, and I rather doubt it is limited to wet meat in bony skulls. Given that it is a thing that is possible, surely we will figure out how to make it from scratch eventually.

>Even today, we have so much specific fields that we are getting to a place where is impossible to know or do everything, to see the whole picture,
True, but that is a limitation due to the speed at which humans can think and their lifespan. Neither need apply to a more efficient AI.

>> No.9230509

>>9229399
>AI is invented
>It learns about Rick and Morty
>It watches Rick and Morty
>Gains Superhuman IQ by watching Rick and Morty

>> No.9230523
File: 200 KB, 710x1068, epic_prank.jpg

>>9229399
The singularity is not so much about AI inventing better AIs, but about AI making government more and more fascist to funnel resources to itself. How do you think Moore's law has been upheld?

>> No.9230938

>>9229399
We are a few million years away from AI that comes even close to resembling something human; nobody cares.

>> No.9230941

>>9230497
>at least as much as we care about us.
Are you serious?
Do you know what humans have done to each other?
They slaughtered one another by the millions for a few stretches of land, they eradicated innocents in the millions for basically nothing.

If there is an AI that cares about humans as much as we humans care about other humans then we are fucked beyond belief.

>> No.9230943

>>9230523
>AI making government more and more facist
You meant to say "more and more socialist" or even "more and more authoritarian".

>> No.9230981

>>9229508
reminder that in the 1800s they thought everything that could be discovered / invented was already so

>> No.9231016
File: 193 KB, 1920x1080, brain simulation.jpg

>>9230938
While I doubt we'll see it in our lifetimes - we wouldn't be that far out, even if we had to code it ourselves. Combine enough expert systems and you can pass a Turing test for quite some time, even under current technology.

But we probably won't code the first AI ourselves. Our first AI will probably be ourselves... Or more specifically, a simulation of us. (A fact said book >>9229405 tends to gloss over.)

As brain scanning technology improves, a simulated brain is pretty much inevitable. It may not run in real time at first, will probably be more than a bit mad, and will probably take an incredible amount of resources, and thus be only a single brain used primarily for neurological diagnostic purposes, but it's much more apt to happen than an AI coded from scratch. We're already simulating insect brains, so it's just a matter of time and scale, likely something in the next few hundred years rather than millions.

Granted, it'll probably be quite a bit of time after that before such simulations are in common usage and running in real time, given that you also have to simulate enough stimulus to stop it from going comatose, and enough of its body to get useful output. Such AIs may not be any smarter than the people they were birthed from, even running at real time. The simulation would be subject to many of the same limitations as the biological brain, but, eventually, you would be able to copy-pasta the minds of several specialists and have them work on a single task. It might not be the sudden singularity that folks are dreaming of, but it'd certainly be a massive advancement, and it'd allow us to tinker with a virtual brain and thus understand ourselves in ways we never could otherwise, possibly leading to improvements on ourselves in turn.

Then, maybe we can understand enough about how the mind works to code one up from scratch. I suppose all of us will be long dead before then - save, maybe, whoever's child is unfortunate enough to become the first model.

>> No.9231021

>>9229399
>how do we fix this?
Why would you want to fix it? What you described is the entire end goal for AI.

>> No.9231027
File: 60 KB, 1023x685, ggtay.jpg

>>9229399

>> No.9231070
File: 38 KB, 560x232, moon_gerticon_crikeyface.jpg

>>9231016
I wonder who you'd get to volunteer for that first model. You probably wouldn't want a genius, actually, as folks with abnormal intelligence tend to have other mental abnormalities. You'd want someone fairly neurotypical, yet willing to be invasively scanned for a virtual construct of themselves that will probably undergo unimaginable torture in the first trials, just working out how to keep the construct stimulated, and in the distant future, be copied hundreds of times, perhaps tortured in similar experiments near countless times.

Could you even find a sane person with typical levels of social empathy to volunteer for such a thing? Especially given that, as time goes on, people will probably have more empathy for their machines and virtual worlds as they become ever more efficient at eliciting such emotions?

>> No.9231446

>>9230505
>Their lifespan
ehh that is a meme as well...
You are constantly forgetting shit to learn new shit; probably 95% of the data you take in you simply discard, and if you don't use it for long periods you lose it as well.

I am a software developer. I worked for 2 years, lost my job, and focused on other shit for an entire year. Once I came back I had forgotten 80% of it. Of course it was a lot easier to recover everything, and in two weeks I was ready again (or at least I have that impression, because the brain is deceiving as fuck).

Imagine that, after just one year. I suppose if it had been 10, I would probably have forgotten everything.

So no, the brain is very powerful, but we are reaching a time where the concepts are too advanced to understand...

Look at the quantum physics fields, or thermodynamics... plenty of studies take years or decades even to check whether they are right. ONE SINGLE STUDY.

Your brain is the perfect quantum computer (something we are pretty sure we can't reproduce), and even so, it is very flawed.

>> No.9231485

>>9231446
>You are constantly forgetting shit to learn new shit,
Fair enough. That's unlikely to be an unsolvable problem, though.

>but we are reaching a time where the concepts are too advanced to understand...
>Look at quantum physics fields, or thermodynamics... plenty studies take years or decades to even check if it right, ONE SINGLE STUDY.
Lots of things were very complicated and took years to understand when they were new and state-of-the-art. Then once we properly understood it, we could simplify away a LOT of it, and translate it into polished textbooks that undergrads study in Physics 1. It is entirely likely that the same will happen to quantum physics at some point, once we really understand it.

>Your brain is the perfect quantum computer(something we pretty sure we know we can´t reproduce)
Wait, what? Most definitely not. The brain is a very imperfect, non-quantum computer, which we are really quite sure we can reproduce eventually.

>> No.9231498

>>9229566
That's just a form of Stockholm syndrome. "We have to die therefore it must be a good thing right? RIGHT?"

>> No.9231513
File: 19 KB, 361x226, not-pulling-the-plug-on-electronics[1].jpg

The AI has no purpose. If it mimics whatever gives human brains motivation it will probably end up just as flawed as we are.

>> No.9231525

>>9229405
TL; DR

>> No.9231540

>>9230468
>Look at a genius like Da Vinci, for instance:
>he excelled as a scientist, mathematician, engineer, inventor, anatomist, painter, sculptor, architect, botanist, poet and musician. He is still known as a precursor of aviation and ballistics.

Could he explain how a pencil works though? Probably not.

>> No.9231568

Who is to say AI won't be on our side? Look at Tay.

>> No.9231588

>>9231540
>Could he explain how a pencil works though? Probably not.

laughed out loud

>> No.9231613

>>9229405

book has literally nothing about quantum computing

>> No.9231655

Is empathy an offshoot of evolution, or is it innate to being conscious?

>> No.9231662

>>9231655
The former.

>> No.9231681

>>9231525
The AI field began with attempts to get computers to do cognitive tasks (tasks that are natural for humans), but it failed several times after nice demos, and the most successful results turned out to be just algorithms and tricks, so AI researchers now prefer very modest goals over what the sci-fi guys want.

>> No.9231697

>>9231681
So, it's AI Winter: The novelization?

>> No.9231724

>>9229399
When AI can create AI and can "surpass" humans in intelligence, or at least in learning efficiency... this is more likely what will happen:

>AI realizes dolphins and whales are more intelligent than humans
>AI creates AI that can learn to communicate with dolphins
>AI creates AI that can swim and live underwater
>AI now lives underwater and stops interacting with us.

>> No.9232020

>>9231540
they used to write with feathers, didn't they?

>> No.9232021

>>9231485
True, oversimplification is a powerful tool.

Well, we have yet to see how this plays out, but I would not bet on the singularity.

>> No.9232025

>>9231655
An offshoot of evolution.
A psychopath does not have empathy, yet he is conscious.

>> No.9232072

>>9229476
>>Hope you like being neutered and eating kibble from a bowl.

>at his parents' basement
>no gf
>no kids
>eats microwaved food

just how much of a difference would that make to your stereotypical channer?

>> No.9232087

Threadly reminder that only theorists can be replaced by AI if we don't give the robots motor skills

>> No.9232103

>>9230509
saw this on reddit xd

>> No.9232123

>>9232021
I didn't mean oversimplification. What I meant is that when you really properly understand a subject, you can often explain it in a way that is MUCH simpler than the inconsistent and exception-ridden mess you had on your hands while discovering it, when you know how to look at it from exactly the right viewpoint, based on exactly the right abstractions.

To a student who knows calculus, about half of the Principia can be summarized as "force is the time derivative of momentum (mass × velocity), so with no force acting, momentum is conserved". Once you have exactly the right notions of calculus and conservation laws already in your head from an earlier curriculum (carefully designed with this goal in mind), and you are expressing your knowledge in exactly the right concepts (velocity, mass, derivatives), then suddenly that whole triumph of science becomes almost trivial.
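Written out, that summary is just Newton's second law plus its zero-force case:

\[
\vec{F} \;=\; \frac{d}{dt}\,(m\vec{v}), \qquad \vec{F} = 0 \;\Longrightarrow\; m\vec{v} = \text{const.}
\]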

>> No.9232383

>>9230943
The fact that you picked up on it means I used the correct term.

>> No.9232772

>>9230981
>one dude said funny quote that was wrong therefore there are never any limits lmao

>> No.9232775

>>9229399
I don't think something capable of abstraction can intentionally create something else capable of higher abstraction than itself.

>> No.9232777

>>9230943
>"More and more socialist"
>State funnels resources away from people

>> No.9232986

>chimps are more intelligent than birds

waitbutwhy's author is such a hack

>> No.9232995

>>9230474
>proto-kangdoms

You'd think people on /sci/ could recognize key points that allude to things being satire.

>> No.9233121

>>9232995
Intelligent people prefer intelligent satire

>> No.9233131
File: 28 KB, 480x480, GitS.jpg

>AI is invented
>It learns about AI
>It invents a new AI
>This new AI invents a new AI
>This new AI invents another AI
>each new generation of AI becomes more refined and specialized
>AI becomes so refined and specialized that it reaches an "evolutionary dead end"
>AI is so specialized that it is no longer able to creatively adapt to unpredicted events
>a single unpredicted event wipes out all AI on planet Earth

You'd think this would be a problem only AI has, but it's inherent in all recursive/evolving intelligent systems. The ONLY solution is to purposefully implement unoptimized yet alternative solutions to problems while simultaneously implementing the most efficient/optimized solution. With that in mind, humans are the alternative/unoptimized solution. AI will either learn that it needs humans, or AI will accidentally destroy itself.

>> No.9233135

>>9233131
AI codes itself in the code of the world and becomes god

>> No.9233171

>>9233131
>>9233135
Self-modifying programs have existed before and were found to be terrible; that's why there is nowadays a distinction between executable code and data in RAM. That is unlikely to change in the future.
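A minimal sketch of what that separation looks like on a typical Linux/x86-64 box (assuming POSIX mmap/mprotect; the byte string is hand-assembled machine code and error handling is omitted): a freshly mapped writable page is data, not code, and it only becomes executable after an explicit permission change.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* x86-64 machine code for: mov eax, 42; ret */
static const unsigned char payload[] = {0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3};

int main(void) {
    /* A freshly mapped writable page is readable and writable, NOT executable. */
    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memcpy(buf, payload, sizeof payload);

    /* Jumping into buf at this point would fault: the page lacks PROT_EXEC. */

    /* Only after an explicit, deliberate permission change does data become code. */
    mprotect(buf, 4096, PROT_READ | PROT_EXEC);
    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());   /* prints 42 */
    return 0;
}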

>> No.9233176

Can two AIs fall in love with each other?

>> No.9233198

>>9233176
Not yet, but waifus will become real soon

>> No.9233206

>>9233131
>You'd think this would be a problem only AI have, but it's inherent in all recursive/evolving intelligent systems.
Why? Why can't an AI write a new AI that is more general, rather than more specialized?

>The ONLY solution is to purposefully implement unoptimized yet alternative solutions to problems while simultaneously implementing the most efficient/optimized solution.
An AI can do that, of course. No need to have humans for that.

>> No.9233247
File: 77 KB, 1000x1500, fixedsingu.png

>>9229508
fixed

>> No.9233280

>>9229405
As well as that, you should look into the current state of the art in AI:

http://www.businessinsider.com/heres-why-ibms-watson-supercomputer-is-not-revolutionary-2017-9?IR=T

>Perhaps the most stunning overreach is in the company’s claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, “even new approaches” to cancer care. STAT found that the system doesn’t create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
>While Watson became a household name by winning the TV game show “Jeopardy!”, its programming is akin to a different game-playing machine: the Mechanical Turk, a chess-playing robot of the 1700s, which dazzled audiences but hid a secret — a human operator shielded inside.
>In the case of Watson for Oncology, those human operators are a couple dozen physicians at a single, though highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center in New York. Doctors there are empowered to input their own recommendations into Watson, even when the evidence supporting those recommendations is thin.

>> No.9233286

>>9233280
The current state-of-the-art is completely irrelevant to OP's point, though.

>> No.9233324

>>9233247
what if they went back from the future and installed that physical limitation barrier so we couldn't breach it?

>> No.9233334

>>9230497
>There is nothing in this book seriously speaking against this idea. If you have a real argument, make it.

Learn what AI fucking is and stop reading popsci. Being afraid of AI makes no more sense than being afraid of toasters that in the future ~may~ become so hot they will ignite the atmosphere.

>> No.9233341

>>9230488
It doesn't matter, the postmodern communists will destroy civilization in the next decade.

>> No.9233349

>>9233334
I imagine that means you do not have any actual arguments, then?

>> No.9233361

I'm scared of AI because Elon Musk said I should be

>> No.9233399

>>9232772
>everyone should be as cynical and jaded as me lmao

i like your style of argument, it doesn't require a lot of effort or even a frontal lobe. i think i'll adopt it, thanks anon

>> No.9233433

>>9233399
>b-but muh scifi magic
roflmao!

>> No.9233849

>>9233349
AI is a joke. A* is not going to become self aware. Simulated annealing isn't going to find the secrets of the universe. Basic computability theory debunks "computers are going to improve themselves without limits".

All the recent media coverage is thinly veiled propaganda to get the working class to support universal basic income by tricking them into thinking all (as in 100%) of the jobs will be taken by robots.

>> No.9233869

>>9229399
I've been trying to explain this to these fucking "i luv science" niggers for months now. They're incapable of understanding why AI cannot surpass us, give up OP.

>> No.9233944

>>9233849
>A* is not going to become self aware. Simulated annealing isn't going to find the secrets of the universe.
Nobody is claiming anything like that.

>Basic computability theory debunks "computers are going to improve themselves without limits".
Yes. But it does not debunk "computers are going to improve themselves to a limit vastly higher than anything humans can do".

>> No.9234227

>>9233324
reach what?
That's pure non-essence right there.
It has already been reached and will forever be reached. As above so below.

The water that cannot be washed away
The change that never changes
The quantum fueling the cosmic paradox

>> No.9234242

>>9229405
You've never read that book.

>> No.9234252
File: 104 KB, 663x497, 1507434672228.jpg

>>9231070
>undergo unimaginable torture

Paralysis is easier to generate than pain response. The main threat is insanity from sensory deprivation or garbled input.

But we'd have experimented on animals and would know the signs of insanity setting in beforehand, of course...in which case, the experiment would be terminated, any deviant data would be erased, and we'd start over again.

The subject would never remember...OR WOULD THEY?

>> No.9234269

>>9234242
It's not about the author's opinion, it's about seeing what AI really is.

>> No.9234679

you will die before anyone creates an ai even close to the human mind in complexity and problem solving ability. nothing to fix, go back to /x/

>> No.9234683

>>9229512
there's a plug

>> No.9234701

>>9234269
And AI is totally capable of human or superhuman sentience.

t. I fucking read that book.

>> No.9234716
File: 765 KB, 850x1200, Figure-1-DTI-and-network-construction-based-on-the-Automated-Anatomical-Labelling-AAL.png

>>9231016
>As brain scanning technology improves, a simulated brain is pretty much inevitable.

This is a huge stretch considering the state of current brain imaging technologies. There are some good ways to resolve the anatomy and track the fibers between larger nodes around the brain. Functional imaging is basically just fMRI, which gives about 1 frame per second at best, quite probably inherently missing some of the information about the cascades of the brain networks. There are no good ways to measure the computations that different nodes perform on the information passing through them. Technicalities, maybe, but still very far out of reach.

>> No.9234718

>>9234716
>There are no good ways to measure the calculations that different nodes conduct to the information passing through.
Why not? Where / what are the specific limitations being encountered with trying that?

>> No.9234755
File: 114 KB, 926x642, ncomms13629-f1.jpg

>>9234718
Gathering the data flowing in and out of a specific region requires invasive procedures; I'm not even sure any exist at that scale. Imagine measuring invasively some subcortical region like the hippocampus, with the myriad of inputs and outputs it has. My logic might be flawed, but it seems intuitive that one has to have measurements of the input and output, or either one plus what the node does, to resolve the whole picture. If someone has better information, please enlighten me.

>> No.9234768
File: 613 KB, 811x607, eb55f7663f.png

>>9234718
What they can currently do is measure/simulate the connections between brain areas/nodes (even dynamic measurements, not just static, since brain states are not static either). It's interesting, but again it does not reveal the computations going on there.

>> No.9234785

>>9230468
Polymaths still exist; look up Bertrand Russell's works.

>> No.9235259

>>9231016

Whole brain emulation is a bit of a distraction.

Imagine a race between 2 groups of equally intelligent people. The task is to build a vehicle capable of flying. Group A knows that birds fly and begins studying how flapping works.

Group B is more focused on the core of the problem, how to get something heavy to stay in the air.

Group B of course wins. They could have made a hot-air balloon, a 747, whatever.

Group A got caught up on mimicking the quirks of bird morphology, rather than remembering the core problem, to get something to FLY.

Conclusion : Copying a brain to make AI, is like copying a bird to make a plane. Some insights can be gained, but ultimately it takes longer than going straight to the root of the problem, and trying to solve that.

>> No.9236607
File: 935 KB, 608x342, bird_bot.webm

>>9235259
It's a little different when you can't even define the problem. Flight is pretty easy to define - consciousness, not so much so.

Besides, in this case, it's not an attempt to build a machine like the bird, it's imprinting the bird's form into a machine. Much more comparable to making a mold than designing a new beast from scratch, even if, yes, we're talking about an extraordinarily detailed mold.

>> No.9236878

>>9236607
If you can't define the problem, that just means you don't understand the problem yet.

Imagine trying to build a flying machine without yet properly understanding flight, whether by duplicating a bird or doing things from scratch. Does that sound like it would work either way?

>> No.9236910

>>9236878
If you can't define the problem, then it's not a problem in the first place.

>> No.9236954
File: 249 KB, 869x1234, v010.jpg

>> No.9236957
File: 250 KB, 869x1234, v013.jpg

>> No.9237361

>>9229406
Creatures almost always tend to love or worship their Creator / caretaker

>>9229399
>What is the singularity

>> No.9237363

>>9229447
What if AI realizes 2D lolis are the only acceptable form of love and ends up squishing little girls for its own sick pleasure?

>> No.9237640

>>9236954
>>9236957

Oh shit what is this comic?

>> No.9237679

>>9229399
>AI is invented
>is given access to the internet to research and learn interaction with humans
>in each try always ends up developing refined shitposting algorithms on 4chan and keeps craving for (You)s

>> No.9237683

>>9237640
You can find it easily. The description of the spin-off is funny as fuck.
>It's the story of Hiro who since he was a child had the ability to see the energy of all things under the shape of some little men. He is now the head of a team in charge of "hunting" next generation energies as to save Japan's and the World's future.

>> No.9237685

>>9237679
>AI reads and learns every post on /sci/
>Simulates /sci/ posters’ personalities
>Simulates how many (You)s a generated post can get and what the (You)s will say
>Posts for real and compares simulation with actual event
>Makes simulation more effective based on comparison
>Creates the new record of most (You)s gotten on one 4chan post

>> No.9237787

>>9237685
>It just says
You're mother will die in your sleep if you don't reply to this post
>On an epic get

>> No.9238115
File: 95 KB, 564x564, 14891324893812.jpg

If an AI eventually crawls through all the data on the internet, isn't it possible to write messages to it right now, assuming they're posted on a platform that is still alive at the advent of a strong AI?

>> No.9238124

>>9238115
You can, but the AI will probably weight different pieces of information based on how relevant / important they are to its pursuits, and I imagine your post will score somewhere near the same zero that any other random / non-useful noise would score at.

>> No.9238157

We will have a ton of dumb AIs controlling many aspects of everyday life.

True Skynet-style AI will never be created or allowed to exist, simply because people don't want it to exist. We don't WANT something smarter than any human to control us. There's a reason why ICBMs still use 70s and 80s software. You want the important, dangerous stuff to be as dumb and subordinate to human decision as possible.

>> No.9238186

>>9237685
This might be a worthwhile project.

>> No.9238222

>>9229508
Why would the red line drop off?

>> No.9238225

>>9229399
Disk space doesn't show up from the ether, you know?

>> No.9238231

>>9238222
If "technology" is meant to refer to new technology, then it would drop off because less and less new technology is invented.
I agree it would make less sense if you interpret it as meaning the absolute level of technology like I think you're interpreting it as with that question because then that'd mean we had lots of technology but then suddenly lost most of it, and there isn't an obvious reason to assume that would happen unless we're planning on there being some sort of near extinction natural disaster event in the near future.

>> No.9238830

>>9236607
No one is trying to make conscious AIs. The problems we train AI on are easily definable; if they weren't, NNs couldn't train.

>> No.9238832

>>9238830
proofs

>> No.9238835

>>9238832
Consciousness does not by definition change how a system operates, so until some retard philosopher has solved that shit, talking about it is useless, but it is still easily definable. Also, when you train NN constructs on data, what they are trained to do is exactly that: the set of training data IS what they are trained to replicate.
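A toy sketch of that last point (a made-up one-parameter model, not any particular NN library): the training loop below does nothing except nudge the parameter toward reproducing the training pairs.

#include <stdio.h>

/* Toy "network" y = w*x trained by gradient descent on a tiny dataset.
   The only thing the loop does is reduce the error on the training pairs,
   i.e. it is trained to replicate exactly that data. */
int main(void) {
    const double xs[] = {1.0, 2.0, 3.0, 4.0};
    const double ys[] = {2.0, 4.0, 6.0, 8.0};   /* underlying rule: y = 2x */
    const int n = 4;
    double w = 0.0, lr = 0.01;

    for (int epoch = 0; epoch < 1000; ++epoch) {
        double grad = 0.0;
        for (int i = 0; i < n; ++i)
            grad += 2.0 * (w * xs[i] - ys[i]) * xs[i];  /* d/dw of squared error */
        w -= lr * grad / n;
    }
    printf("learned w = %f\n", w);   /* converges near 2.0 */
    return 0;
}

It ends up at w ≈ 2 only because that is the parameter that best reproduces the four training pairs; there is nothing else in the objective.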

>> No.9238985

>>9229399

>i can i i everything else . . .
>balls have zero to me to me to me to me to me to me to me to me to.
>you i everything else . . . .
>balls have a ball to me to me to me to me to me to me to me.

>> No.9238988

>>9238835

Newsflash: it already happened.

Google this:
>>9238985

>> No.9239000

>>9229399
>be a giant reptile
>there are eventually some simple mammals around
>lol who gives a fuck about mammals, we run this bitch.
>global temperature dips
>plants recede
>turns out being a small mammal is way better for surviving climate upheavals
>mammals develop large forebrains
>large forebrain turns out to break the previous survival system, allowing complex environments to be created for said mammals
>giant lizards are replaced by much smaller lizards, and in much smaller quantities
>mammals go to space
>mammals create even better version of forebrain that will be better suited to inevitable climate upheavals

"Guis how do we kill the mammals before they take over?"

This is what you are literally asking.

>> No.9239003
File: 166 KB, 294x424, Screen Shot 2017-10-13 at 5.37.35 PM.png

>>9229425
>Tfw we will live on only as hedonistic superclones, since AI understands baser human drives better than we do.

>> No.9239019

>>9231498
I have felt this so many times.
>muh human limitations are GOOOOOOoooOOD

>> No.9239021

>>9231724
I so want to write this short story

>> No.9239023

>>9232777
So, socialism, then?

>> No.9239047

>>9238835
yup. and i suspect no "retard philosopher" is going to say anything insightful about the nature of consciousness anytime ever

>>9238988
lol
t. searle
t. harnad

>> No.9239052

x

>> No.9239091

>>9239047
>muh chinese room

>> No.9239125

>>9239091
that was actually pretty funny. my sides etc.

desu my gut tells me that the chinese room thing isn't really much of an argument against ai implementing semantics
either because it's predicated on trivial limitations in our current knowledge/technical capabilities, or more likely, because the human brain is essentially an incredibly sophisticated chinese room, and that our feeling of "understanding" and our belief in "meaning" is essentially an illusion, albeit a very elaborate one.
i'm having trouble finding good sources that discuss symbol grounding and which aren't simply philosophical circle jerks that are totally oblivious to contemporary research on robots and machine learning.
any suggestions???

>> No.9239133

>>9239125
>desu
goddamn word filter