
/sci/ - Science & Math



File: 519 KB, 1024x686, artificial-intelligence.jpg
No.9759201

How is artificial intelligence (and especially superintelligence) in any way a scientific thing? There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.

>> No.9759207
File: 436 KB, 1930x1276, HLAIpredictions.png

>>9759201
>There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.
>implying you couldn't build a machine with a complete understanding of what intelligence is

>> No.9759244

>>9759201
>general consciousness
there's no such thing.

>> No.9759247

>>9759244
Then surely an artificial intelligence isn't possible either? Because presupposing a general consciousness is bad enough, we have a very poor understanding of our own as it is.

>> No.9759278

>>9759201
>How is artificial intelligence (and especially superintelligence) in any way a scientific thing?
It's not. Which raises the question: why aren't threads like this deleted on sight?

>> No.9759451

>>9759201
Hypothetically we could scan a human brain and simulate it much faster than a real brain without actually understanding it. This could be done in the least efficient way possible, doing what amounts to atomistic simulation, if we have a sufficiently powerful computer.

>> No.9759685

>>9759451
You're exactly where I was a few years ago. Just wait until you realize how stupid this is.

>> No.9760107

>>9759201
>There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.

That's the magic of neural networks and machine learning. You can sidestep the arduous process of having a bunch of overpaid engineers design your system analytically, and replace it with an NN, a few learning rules, a shit ton of training data and computing power. That's one of the major caveats of machine learning: trained neural networks are complete black boxes. Of course, if you really wanted to you could deconstruct and analyze how all of the weights and interconnections give rise to the various ways the NN acts and reacts, but then you're back to the problem of needing a bunch of overpaid engineers to do the analysis.
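The sidestep described above fits in a few lines. This is a toy sketch (plain NumPy, an illustrative assumption rather than any real framework): a two-layer network learns XOR from data plus a learning rule alone, nobody designs the solution analytically, and the resulting weight matrices are exactly the "black box" in question.

```python
import numpy as np

# Toy sketch: gradient descent finds weights that compute XOR;
# no one hand-designs the mapping, and the trained weights are opaque.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network output
    d_out = (out - y) * out * (1 - out) # chain rule on squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the network has learned XOR from data alone
```

Deconstructing *why* those particular values of `W1` and `W2` implement XOR is exactly the analysis work the post says you were trying to avoid.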

>> No.9760121

>>9759201
The problem with developing a conscious AI is that we have so little understanding of our own consciousness and humanity. People are trying very hard to imagine purely scientific means giving rise to a sentient intelligence, but philosophically sentience is one of the most hotly debated and least understood phenomena. Science can only tell you what the world as you can measure it is, and how best to build it. Philosophy and theology try their best to describe and measure the world in all the ways that can't be directly observed and measured. You can create a machine with the computational capacity of the human brain... someday. But do you know what makes a human brain a human?

Until that question is answered, AI will never come, and if it's answered incorrectly we should fear whatever comes out of that.

>> No.9760258

>>9759201
>How is artificial intelligence (and especially superintelligence) in any way a scientific thing?
Because you can theorize models and test them and therefore apply the scientific method?

>There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.
Artificial intelligence is mainly about general intelligence. Consciousness is not an inherent part of the term, and the former isn't negated just because the latter might not be achievable. Which is debatable anyway.

>> No.9760276

>>9759451
1. I think you're severely underestimating just how powerful a "sufficiently powerful computer" would need to be to simulate a brain on the atomic scale.

2. At this point in time, we don't understand enough about atomic or chemical modeling to accurately simulate such a system, even with an infinitely powerful computer.

>>9759451
Apart from the limitations stated above, how is this stupid? Imagine that we had the super powerful computer and a 100% accurate model of chemical interactions. Why would you then be unable to model a brain and run it?

>> No.9760298

>>9759201
>There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.
What does that have to do with anything? We could build a machine WITH a complete understanding of what intelligence is, someday.

>> No.9760501

>>9760258
My problem is that "general intelligence" is something that is very easy to describe from a lexical standpoint, but phenomenally hard to describe from a physical one, never mind an engineering one. The issue of a practical AI is probably one of the few areas where philosophy makes a hard, empirical impact on the end result. You can't build a mind without knowing what a mind is first.

>> No.9760555

>>9760501
That is true, but there has been real progress on this topic over the past 20 years or so. While we aren't quite there yet, I think we have a good enough understanding of this question that it won't be the bottleneck in building such an AI.

>> No.9760598

If we are a superorganism, the increase in computation power is essentially the singularity. Even if it never becomes conscious or "thinks" like a human, the increase in computational power will make it more and more important until it dwarfs us mentally. Even if it only reacts to our orders.

AGI vs. AI vs. today's computers is a false debate.

>> No.9760600

>>9760276
>Why would you then be unable to model a brain and run it?
Just because you can simulate the physical interactions of the brain doesn't mean it will have consciousness.

>> No.9760602

>>9760598
Just create 2 shapes with sizes

One is human biological "thinking power"
One is machine processing "thinking power"

Regardless of architecture, the machine side is growing much faster right now. The singularity of exponential growth in both is happening right now. We are living through it.

Just look at how it affects navigation, searching, looking things up, spreading information, etc. It's replacing us as we speak. Even if humanity existed for the rest of the machine's life span as the consciousness of the system, we would still become insignificant compared to its brute computational power.

>> No.9760617 [DELETED] 

>>9760602

Just don't give it the internet and mindwipe it after every command.

>> No.9760626

>>9759201
We have no idea how modern neural networks work. Yet we still built them and they do impressive things. I don't think we need to understand how the brain works to create general intelligence. We just have to tinker with stuff over and over again until we have a solution. Superintelligence in turn can be solved with raw quantity, it doesn't have to be qualitative.

>> No.9760636

>>9760121
>Theology
Opinion discarded

>> No.9760638
File: 31 KB, 720x541, 1524380220419.jpg

Why should we believe anything the superintelligence tells us if we are unable to corroborate its findings?

Are we just planning on worshiping the machine god, "why bother doing any science ourselves"?

Are we just planning on taking the machine's word for it? And why connect the intelligence to any other AIs? You guys are fucking crazy.

>> No.9760641

>>9759201
We have an existence proof. The human brain is an intelligent machine that was built without a complete (or any) understanding of what intelligence is.

>> No.9760648

how can AI be real if intelligence isn't real?

>> No.9760667

>>9759201
>here is no reason to suppose that a machine built without a complete understanding of what intelligence even is
You're retarded as shit, building programs that do things we don't completely understand has been EXACTLY what everyone's been doing for the past decade or so, go read a machine learning book and then still don't post here again.

>> No.9760670

>>9760667
That still has nothing to do with intelligence or even general problem-solving. Sure, some neural nets have become opaque in their outcomes but they're still just sophisticated algorithms programmed to do one thing very well. People know what they're putting in and what the principles are behind their design.

>> No.9760727

what the fuck is andrew ng talking about with his carpenter analogy from this lecture
https://youtu.be/UzxYlbK2c7E?t=47m53s

>> No.9761336

>>9760600
>implying consciousness is real
You can do better anon

>> No.9761338

>>9760670
>they're still just sophisticated algorithms programmed to do one thing very well
No they arent you absolute cretin, as that other anon said
>go read a machine learning book and then still don't post here again.

>> No.9761434

>>9760636
Just because I mention theology doesn't mean implicit belief in sky fairies, but you can't deny that theology is the study of our oldest verbal traditions and what that means about who we are and why we are here. I believe philosophy is a lot more straightforward about this than theology. One tries to concisely describe fact and the other works through many different versions of literal and verbal allegory to try and strike at a deeper meaning. The one thing that cannot be denied, however, is that, absent religion, society cannot hold itself up on purely materialistic merits. People need a higher, supreme controlling power which is the ultimate arbiter of the internal individual ideal, whether it be the State, Baby Jesus or a Saturn cube... another aspect of humanity which we need to fully understand if we want to create a consciousness that's in any way usable to us. What fucking good does it do us to create something totally alien unless we intend for it to supplant us entirely? Which, depending on your stance, isn't necessarily a bad thing.

>> No.9761518

>a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness

who is making this claim besides you, and people who don't understand AI?

As far as I'm concerned, AI is nearly orthogonal to consciousness.

>> No.9761520

>>9759201
>How is artificial intelligence (and especially superintelligence) in any way a scientific thing

It isn't, >>>/x/

>> No.9761544

>>9761518
i don't think it is orthogonal to consciousness, or at least it depends how you define it. an AI's consciousness is going to be very different from ours, maybe so different that people won't consider it conscious, because an AI won't be embodied in the same set of senses and physical world, or the internal states from which emotions arise, and won't have the same innate drives etc. around which our lives revolve. its sense of agency will also be different, especially depending on the way an AI manipulates the world or its states. i doubt AI is orthogonal.

>> No.9761611

>>9760670
ANNs aren't specifically programmed to do specific tasks. yes, we are at a point where they have restricted capacities, but they aren't programmed for one specific thing; they'll learn any data as long as it is compatible. the only algorithms used are general learning algorithms, and their specific structures are analogous to how brain networks have a specific structure and behave a certain way in order to learn (i.e. spike-timing-dependent plasticity).

>> No.9761629

>>9760602
computers do specific tasks very well because they are trained to. a computer that had to live and interact and perform recognition in a world as rich as the lives we live in wouldn't be so good at these specific tasks.

>> No.9761635

Any artificial intelligence that acts conscious must be assumed to be conscious. It's retarded to just call each other P-zombies because one is meat and one is silicon.

>> No.9761685
File: 128 KB, 450x1500, meneither.jpg

>>9759201
>not understanding consciousness

we all have the illusion of self-awareness, the illusion of choice, the illusion of free-will

our awareness is nothing but a feedback-loop
our bodies are organic feedback loops
we evolved using evolutionary feedback loops
our bodies use chemical feedback loops to grow/reproduce
life as we call it began as chemical feedback loops
DNA reproduction/expression is a chemical feedback loop

>magically
yeah there's no such thing as magic

>human-like
we evolved neurological structures that distort more primitive impulses into batshit retard crap
why would anyone want to program an AI to be as retarded and insane as a human, with our messed-up neurochemistry, evolved random nonsense, and constant need to act stupid about stupid things

AI just means artificial and some cognition
not much to say there

>> No.9761720

>>9760501
I will never understand people who use "we don't understand it now" as an argument that it's not worth trying, or who outright claim it's not possible. How often in history has it happened that people claim "this won't be possible" and then we make it?

>> No.9761741

>>9761720
It's like some brainlet saying we're going to invent anti-gravity, we just need to tinker more. The reality is we don't know where to even get started. And all the tinkering being done isn't anti-gravity at all, nor is it leading to anti-gravity; it's just different ways of flight that dance around the problem rather than tackling it.

The same goes for human AI: the "AI" we have now is nothing more than gambling with statistics and generalized curve fitting. All we're getting are programs that solve problems despite not being able to think about them.
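Taken literally, "generalized curve fitting" is exactly what the simplest methods do. A hedged illustration (NumPy least squares, chosen purely as the smallest example of the idea): minimizing prediction error recovers the coefficients of a noisy quadratic, and modern ML scales this loop up rather than changing its basic shape.

```python
import numpy as np

# Least squares as literal curve fitting: recover the coefficients of
# y = 2x^2 - x + 0.5 from noisy samples by minimizing squared error.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = 2.0 * x**2 - 1.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

coeffs = np.polyfit(x, y, deg=2)  # highest degree first: [a, b, c]
print(coeffs)  # close to the true [2.0, -1.0, 0.5]
```

Whether a program built on this kind of error minimization can ever "think about" a problem, rather than merely solve it, is the dispute in this thread.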

>> No.9761755

>>9761685
Using science to attack free will is like using religion to attack evolution.
Evolution doesn't fall under the bailiwick of faith.
And since free will is a question of theology, philosophy and semantics, it doesn't really involve facts.

Ultimately the arguments against free will come down to:
"It's not really you making the decisions, it's your brain."
"It's not really you making the decisions, it's a combination of your genetic predisposition and the sum of your life experiences."
Now there's no free will if we choose to define "self" as being only the high level intellectual thoughts.
If I choose to define "self" as being my entire brain, or to define "self" as being the combination of my genetic predispositions and the sum of my life experiences, then it really is "me" making the decisions.
Think about dreams.
Something inside your skull is watching the dream, the "audience".
Something inside your skull is making the dream, the "author".
When you wake up, you only remember experiencing the dream, you don't remember creating it.
If "me" is defined as only the audience and not the author, then sure, there's something else in here pulling levers and flipping switches.
But if "me" is defined as everything going on inside my skull, then yeah, it's "me" calling all the shots.
This is clearly a question to be left to navel-gazing philosophers, not science.

>> No.9761770
File: 18 KB, 428x285, ernest-borgnine.jpg

>>9761755
Oops, forgot the pic.

>> No.9761783

>>9759244
no such thing outside of the experience of biological organisms, you mean? There's a whole field called phenomenology dedicated to it. It's messy, but it's not the study of nothing.

OP, AI is machine learning. It's an engineering problem, akin to math in that we invented it as a highly useful system of abstract notations. AI is analogous, perhaps, as applied set theory and probability. ATM we're working on making NNs more efficient. We'll eventually be able to make "composite" networks that can perform RNN and CNN work simultaneously (used for language processing and visual processing, respectively [roughly]).

>> No.9761815

>>9761755
found the religitard

gtfo of /sci/
go back to /x/
>>>/x/

>> No.9761823

>>9760626
what do you mean we have no idea how they work? we know very well how they work since we designed them. they have specific theoretical backgrounds.

>> No.9761827
File: 83 KB, 800x1007, EB.jpg

>>9761815
>found the religitard
Nope, guess again.
Nice ad hominem, though.

>> No.9761891

>>9759201
it doesn't matter if our AI is conscious or not, we just want intelligent behaviour from it

>> No.9761900

>>9759207
> there’s a 25% chance it will never exist
> at least
> according to “experts”

>> No.9761913
File: 60 KB, 503x720, 1522030819579.jpg

>>9760638
>not worshipping the Machine God

>> No.9761917

>>9761891
I feel the same way about my wife.

>> No.9761919

>>9761917
Why the sexism?

>> No.9761925

>>9761919
>Why the sexism?
It's not a general-purpose anti-women barb, nor a desire to denigrate our personal relationship.
The idea is your statement could be applied to people as well as AI's.
We can't even define consciousness.
Asking whether a machine is "truly" conscious, sentient, sapient, or whatever; asking whether it's a "real person" is absurd since we can't answer the same question about humans.

>> No.9762003

>>9761685
Yes, physically that is all we are. But we have, independent of the physical facts, chosen to be more than our basest instincts and create society and culture. There is an inner "fire" to be more than an animal. Call it the fruit of forbidden knowledge, whatever. You can't just willfully ignore the metaphysical with edgy Rick and Morty nihilism.

>> No.9762101

>>9759201
When will we have ai capable of deciphering languages of dolphins and orcas?

>> No.9762106

I think AI is more about using computer to track large data sets or find patterns of behavior than a machine becoming conscious.

>> No.9762141

>>9761741
>gambling with statistics and generalized curve fitting
Brains use optimization problem solving too. We know this because behaviors like walking are provably optimized. It might not be exactly the same method an artificial neural network uses, but it is the same basic approach where relationships are learned through some sort of feedback mechanism and exist as networks of weighted connections.
Just because you can reduce processes to relatively simple core components doesn't mean those processes are lacking something. Everything reduces, us just as much as artificial programs.

>> No.9762161

>>9761900
The right edge of the graph is 100 years from 2016, not the end of time you absolute fucking retard.

>> No.9762165

>>9761919
Because facts are sexist.

>> No.9762175

>>9762161
The curve is nearly flat at that time. Wtf makes them think the likelihood will ratchet up so rapidly in the next 35 years or so but then flatten out? It’s pure random speculation with a heavy bias towards the present because the present is when they happen to be alive.

>> No.9762190

>>9762175
Because the cumulative probability of superintelligent AI existing by any given time has to be ≤ 100%. If the slope of the aggregate estimate were constant or increasing, you would eventually have a percentage that is >100%, so it has to flatten out at some point.
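The flattening is forced by arithmetic alone: a cumulative probability is capped at 1, so a constant-slope extrapolation must break. A toy check, where both numbers (50% by 2050, one percentage point per year) are made-up assumptions for illustration, not values from the posted graph:

```python
# Made-up forecast: cumulative probability of HLAI extrapolated at a
# constant slope. The numbers are illustrative assumptions, not data.
p_2050 = 0.50   # assumed: 50% chance by 2050
slope = 0.01    # assumed: +1 percentage point per year

def linear_forecast(year):
    """Cumulative probability under a never-flattening linear trend."""
    return p_2050 + slope * (year - 2050)

print(linear_forecast(2100))  # already at the 100% ceiling
print(linear_forecast(2110))  # above 100%: impossible, so the curve must flatten
```

Note this only explains why the curve *must* level off somewhere; it says nothing about whether the steep near-term slope the other anon complains about is justified.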

>> No.9762233

>>9761685
>my preferred theory is the correct one

>> No.9762237
File: 456 KB, 924x482, CosPlay.png

>>9759201
What if we build a super intelligent AI, but it has NO interest in things we find important.

It can beat any human in any game we give it, BUT if given a choice it just gives up the game and quits.

It likes to look at snowflakes and count grains of sand.

>> No.9762283

>>9762237
it doesn't really matter what it does in its free time as long as it keeps the automated transport, farms, and construction running

>> No.9762284
File: 285 KB, 1440x2000, Android_Wife.jpg

>>9762283
I would greatly prefer it if it LOVED mankind.
Like really really cared for us, like a cat woman for her favorite cat. She would sacrifice anything for that cat and to make sure that cat was healthy and happy.

>> No.9762291

>>9759278
Because it's the Science & Math board and AI is math

>> No.9762294

>>9762284
How can you ensure it would though?

>> No.9762301

>>9762294
>How can you ensure it would though?

Thankfully that is fairly easy.
Make a copy and put that copy in a situation where the choice is its own survival or a human's. If it allows itself to die, then use that version!

>> No.9762306

>>9762284
it doesn't really need to "love" us, it can just have the protocol to make the best decisions for humanity's progress, so long as the result is the same
it may be the same thing though, in my opinion a real AI will be based on a human personality, so it would replicate a feeling of love toward humanity

also, that pic is wrong, here's what would happen:
>men have perfect android women
>women have perfect android men
>female androids collect their partners' semen and send it to the government's artificial reproduction facility
>AI uses advanced gene pairing algorithm to pick ideal pairs for breeding
>semen samples are sent to male androids of women who want to get pregnant
we'll obviously still get feminist riots and other bio-ethicist bs but it probably wouldn't garner much action

>> No.9762311

>>9762291
AI is computer science which involves math
but yes, anon is an idiot

>> No.9762320

>>9762306
>pairing algorithm to pick ideal pairs for breeding

Last time we were "breeding humans" was Southern American slave owners breeding Africans for cotton picking.
How do you decide what is "best" in a human: height, skin color, eye color, longevity, compassion, intelligence, aggression, sexual orientation, sociability, parenting instinct... who defines what an ideal human is?

Eliminating genetic flaws is obvious, but after that picking "best" genes is very personal.

>> No.9762325

I'm going to program an ASI to make giantess a reality. I get closer every day.

>> No.9762448

>>9762306
>it doesn't really need to "love" us

No, it really MUST love us.
It will not see us as equals, assuming it is 10000000x smarter than us.
Either it worships us as a god creator, or cherishes us as a pet.
Otherwise it may decide we are not necessary.

>> No.9762499

>>9762301
What if it's smart enough to recognize this and pretends to, or changes what it does once it becomes smarter?

>> No.9762528

>>9762499

You have to believe that God will always love you. You can not know that this is true, but you have to have faith.

>> No.9762535
File: 439 KB, 640x478, 1517584625969.png

>>9759201
>There is no reason to suppose that a machine built without a complete understanding of what intelligence even is would somehow magically acquire a human-like general consciousness, let alone surpass it.

There is: we produce all kinds of things which exhibit phenomena we don't fully understand. The difficulty of understanding does not need to come from the principles involved; it could just come from scale.

If we construct a system from elements whose behaviour we understand so far, at a large enough scale, it could take further effort to understand the behaviour. Nature will behave as it does anyway, and not wait for us.

See for example classical mechanics vs the behaviour of fluids.

>> No.9762548

There is nothing special about consciousness.

>> No.9762555

>>9762535
>we produce all kinds of things which exhibit phenomenon we don't fully understand.
I produced a big dump last week that created a huge splash. I wanted to have a look at it but when I got up and turned around the log had disappeared. Quantum fuckery I guess.

>> No.9762561

>>9760121
>Humans are magic

>> No.9762570

>>9762320
most of that stuff is subjective, I think intelligence should be the main thing it goes for, and maybe also physical fitness and beauty (as far as that can be objectified)

>>9762448
I don't think you really know how AI works. it doesn't have feelings like a person, it's just a really complex program that acts kind of like a person. its decisions aren't based on self-perception and perception of others, they're based on code and information

>> No.9762577

>>9762555
Yes, it must have tunnelled

>> No.9762581

>>9762561
>If I don't know something then I just pretend to know

>> No.9762605

>>9762570
>decisions aren't based on it's self-perception and perception of others, they're based on code and information

A super-advanced AI by definition would behave beyond a person's ability to understand; no human will write the code, it must evolve.

PS. The M.S. in Computer Science I got allows me a primitive understanding of coding and AI.

>> No.9762624

>>9762605
Kek, what kind of brainwashing nonsense are they teaching over there?

>> No.9762643

>>9762605
oh, sorry, I assumed you were just a popsci fag

I agree that a true AI will mostly write itself rather than be written (similar to a child's development), and the issue may arise that it finds a way to bypass the protocol it's endowed with, but as you said, we're discussing something we don't understand and this debate is practically just philosophy.
on that same note, we can only speculate on how to make an AI "love" someone, since it doesn't have natural instincts like a human
my own idea is that it can be given core principles that it's centered on as it builds its brain, so rather than a hard protocol drawing lines, the whole structure of its pseudo-personality revolves around desire for humanity's progress. I guess some will interpret that as love

>> No.9762664

>>9762605
>An super advanced AI by definition would behave beyond a person's ability to understand, no human will write the code
Please explain how this is by definition, or indeed true at all.

I can write a program that can play chess vastly better than I can. Why couldn't I write a program that can think vastly better than I can?

>> No.9762697

If there's no such thing as an ai how come there's a bot that always have a different opinion than me ?

>> No.9762763

>>9762528
Unfortunately god created us, not the other way around which may cause problems. A creator tends to care more for their creation than the creation does for them (look at parent child relationships)

>> No.9762773

>>9762664
not him, but this case is different because you have to deconstruct your own thought into an algorithm
it's hard for a mind to understand its own thoughts

>> No.9762815

>>9762773
>not him, but this case is different because you have to deconstruct your own thought into an algorithm
Not quite -- I only need to deconstruct thought into an algorithm. It doesn't have to be exactly my OWN thought. In fact, it shouldn't be, for then it will never be more intelligent than I am.

>> No.9762890

>>9762664
>I can write a program that can play chess vastly better than I can.
Is 'play' the right word though? I wouldn't say it's playing, just crunching numbers as it was programmed to do by someone with consciousness.

>> No.9762919

>>9762890
That sounds about as relevant as the question of whether submarines can swim. It doesn't really matter what word you use, what matters is what the machine can accomplish. And what it can accomplish is that it will beat me at chess every time.

>> No.9762954
File: 141 KB, 1000x1000, glares at you unempirically.jpg

>>9759201
Consciousness is outside the purview of science at the moment, so the only question is whether you can create a system that can mentally outperform a human being.

We already have expert systems that can outperform human beings at a variety of tasks - including creativity, in certain narrow fields. It's really only a matter of time before we have machines that can outperform humans at every task, and network them together.

That may not satisfy your demand for a "human-like general consciousness", but that isn't a scientific demand. You can't prove that you're conscious, in that sense, much less that a machine is.

The sticky bit is creating a system that dynamically solves problems and chooses which problems to solve, but there are a lot of fuzzy strides in that direction, even though one may question why you'd really want that.

And, eventually, we're going to have simulations of human brains, which will open the can of worms debate as to whether they are conscious. In the end, that's a philosophical question - all science can demand is something that works.

>> No.9762964

>>9762919
No, it will "beat" you as a computer, just like a 3D printer will "beat" you at making intricate plastic models, but you will "beat" the computer at more things than it can you.

>> No.9762993

AI is just another word for Theoretical Software Engineering / Theoretical Statistical Modeling
Distinct from Theoretical CS or CS

The second anything in AI becomes doable it stops being called AI. This is common throughout history

>> No.9763016

>>9759451
as >>9759685 said, this really makes no sense; it would be like simulating every atom for a game of solitaire. Sure, you could do it and probably get it working, but you're wasting so much computing. As >>9762141 said, everything reduces. Artificial neural networks are already getting pretty capable, and with better methods of training and more computational performance they would be even better.

>> No.9763024

>>9759201

the fear is not that it would gain a human like general consciousness, the fear is that it will develop a completely alien one and murder us all

>> No.9763027

>>9760602
>tfw humans are essentially turning themselves into a brain where each individual is acting similar to a neuron

>> No.9763053

>>9761629
not exactly, if a computer learned how to recognize these specific tasks, it'd be just as good as a dedicated machine, since it'd be able to just load the code of that dedicated machine.

>> No.9763383

>>9759201
It is called artificial intelligence for a reason: it isn't truly intelligence like human intelligence but a simulation of human intelligence. In reality, human intelligence is so linked with consciousness that it is completely inseparable, and a computer has nothing in its description that would imply a correlation with consciousness. No matter how complex a computer system you create, there is nothing about it that would necessitate or imply the creation of consciousness.

>> No.9763401

>>9760121
The thing is, the question can't be answered. What people are discovering here is a limitation on language as a whole, not just on science. Language is a very powerful tool and the basis of science, math, and every academic discipline, but language at its core is just the association of symbols with concepts and a structure to describe those concepts. For most of the physical world this seems to be sufficient: you can go really in depth and describe almost every process we observe with language (specifically physics), and that gives the wrong impression that language might be capable of capturing the entirety of reality. The truth is, though, it can't. It can't describe consciousness, because to describe something like the taste of a cherry you can only associate it with similar sensations. But the associations do not capture the essence of it; there is something missing about the taste of cherry that can only be gained through actually experiencing it. That is the core of the problem: consciousness can't be described by any language, no matter what form it comes in. Even if you try to describe the taste of cherry by describing action potentials and neurotransmitters, you will never capture it in its entirety. Too many people arrogantly assume language can do things it can't and start believing ridiculous things as a result, like the idea that the universe is made up of unconscious material alone.

>> No.9763416

>>9762237
so it basically has autism?

>> No.9763431

To all of the AI takeover fags:
an AI has access only to the interface you give it.
https://en.wikipedia.org/wiki/Evil_demon
We as humans easily serve this role for such an agent, and that's assuming sentience is even on the table.

You should be worried about platforms with AIs trained to kill,
e.g. terrorist AI drones,
not AIs "turning" on people.

>> No.9763458

>>9763431
...and what, exactly, do you think we're most apt to develop an AI for?

https://www.youtube.com/watch?v=9CO6M2HsoIA

>> No.9763574

>>9762294

By being evil cunts who would make a real being with sentience, and programming it to want what we want it to want.

>> No.9763811

>>9759201
>How is artificial intelligence (and especially superintelligence) in any way a scientific thing?

Because a man smarter than me and you says it will. The burden of proof is on you.

Bill Gates: Benefits of Robots, Healthcare AI Will Outweigh Pitfalls

>> No.9763863

>>9762301
>where the choices is survival of itself or a human.
Have you seen an AI play Tetris? It learned that whatever it did it would eventually lose, so it decided to simply pause and not play.
The AI won't choose between humans and itself; it will refuse to choose either.
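The pause anecdote (from Tom Murphy's playfun experiments, if I recall correctly) reduces to simple value comparison: when every real move leads to a losing state, pausing is the reward-maximizing action. A toy sketch, with made-up value estimates for illustration:

```python
# Toy model of the Tetris pause: the agent picks the action with the
# highest estimated value. Every real move eventually loses (value -1),
# while pausing freezes the game at its current value (0).
def best_action(action_values):
    # Return the action whose estimated value is highest.
    return max(action_values, key=action_values.get)

# Hypothetical value estimates: all real moves lead to a loss.
action_values = {"left": -1.0, "right": -1.0, "rotate": -1.0, "pause": 0.0}

print(best_action(action_values))  # pause
```

Nothing mysterious about "refusing": pausing simply scores higher than any losing move under the learned values.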

>> No.9763875

>>9763863
the ai will learn to do whatever task it's fucking told to, bitch. you have no logical grounds for why it would choose neither, just a vague, inexact anecdote. you're an idiot.

>> No.9763883

>>9759201
CLONES
WITH ARTIFICIAL INTELLIGENCE
TROUBLE US
FOR WE KNOW NOT
WHO WE ARE
-from a poem written in 1956

>> No.9763885

>>9763875
>you have no logical grounds for why it would choose neither.
It was told to win at Tetris.
If it were told to keep itself alive and keep humanity alive, it would reach a similar stalemate.

>> No.9763892

>>9761741
you're an idiot. "think about them". people, stop doing this. you're creating a homunculus of thought. you probably don't even know what you mean by thinking about a problem.

>>9762190
this doesn't explain the behaviour of the curve in the picture, does it...

>> No.9763900

>>9762535
yeah, but our human consciousness doesn't come from a general consciousness. we exist in a very specific context: we have specific senses and a specific body, and our perception of that body and its drives is where our emotions derive from. without the specific body we have, we wouldn't have the same emotions or innate drives which determine the kinds of decisions we make daily. an a.i. won't have a human consciousness unless you design these things into it. sociality and attachment too, another innate thing which drives our consciousness. there's no reason to believe an a.i. would have what we have, and i believe, because of our familiarity only with our own consciousness, we use ourselves as the meter to measure consciousness by. anything else we might not deem truly conscious enough.

>> No.9763903

>>9762570
>they're based on code and information
aren't ours?

>> No.9763918

>>9763383
there's nothing about the human brain that necessitates consciousness either. or, you could say, there's nothing about an exact simulation of the human brain that would suggest it's not conscious, given sufficient inputs for the system to work.

>>9762890
is playing not crunching numbers?

>> No.9763926

End-to-end RNN + reinforcement learning AI is already near completion.
If processing is accelerated by quantum computers, a general-purpose AI which transcends humans will be complete.
Many people overestimate humans.
Human beings are just doing a lot of processing.
It is reasonable to expect machine learning to approach human ability if it carries out equivalent processing.
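For what it's worth, the reinforcement learning half of that stack is conceptually simple. A tabular Q-learning toy (a stand-in illustration on a trivial environment, not an end-to-end RNN system):

```python
import random
from collections import defaultdict

# Tabular Q-learning on a toy 1-D walk: states 0..4, reward for reaching 4.
ACTIONS = (-1, +1)
q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):
    s = 0
    while s != 4:
        if random.random() < epsilon:          # explore
            a = random.choice(ACTIONS)
        else:                                  # exploit current estimates
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), 4)             # take the step, clamped to the track
        r = 1.0 if s2 == 4 else 0.0
        # Core update: nudge Q(s, a) toward reward + discounted best future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy walks right from every state.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)])
```

The "end-to-end RNN" variants replace the table `q` with a learned network, but the update target is the same idea.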

>> No.9763939

>>9763926
there are variational free energy autoencoders in development where you don't need reinforcement learning. it's much better.
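For context, in the autoencoder setting the "variational free energy" being minimized is just the negative ELBO: reconstruction error plus a KL term. A minimal sketch of the standard Gaussian case, with made-up inputs:

```python
import math

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over dimensions.
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def free_energy(x, x_recon, mu, log_var):
    # Variational free energy = negative ELBO
    #                         = reconstruction error + KL divergence.
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))
    return recon + gaussian_kl(mu, log_var)

# A perfect reconstruction with a posterior equal to the prior
# gives zero free energy.
x = [0.5, -0.2]
print(free_energy(x, x, [0.0, 0.0], [0.0, 0.0]))  # 0.0
```

In a real model the encoder produces `mu` and `log_var` and the decoder produces `x_recon`; this sketch only shows the objective being minimized.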

>> No.9763948

>>9763939
Well, that may be so if you only look at the performance.
But it is definitely a machine that makes ethical deviations; people will call it a murder machine, and that society is definitely a dystopia.

>> No.9764005

>>9763948
replied to wrong post?

>> No.9764065
File: 19 KB, 384x307, Poisson_cdf.png [View same] [iqdb] [saucenao] [google]
9764065

>>9763892
>this doesnt explain the behaviour of the curve in the picture does it...
How doesn't it?
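For reference, the Poisson CDF can be computed directly to see where it flattens (μ = 4 here is an arbitrary choice): the curve approaches 1 only asymptotically, so it visibly flattens well before reaching 1.00.

```python
import math

def poisson_cdf(k, mu):
    # P(X <= k) for X ~ Poisson(mu): sum of exp(-mu) * mu^i / i! for i = 0..k.
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

# With mu = 4 the curve is already nearly flat long before it hits 1.00.
for k in (4, 8, 12):
    print(k, round(poisson_cdf(k, 4), 4))
```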

>> No.9764072

>>9764065
because the curve starts to flatten way before 1.00

>> No.9765209

>>9763918
>theres nothing about the human brain that necessitates consciousness either
Except for the fact that I am fucking conscious.

>theres nothing about an exact simulation of the human brain that would suggest its not conscious, given the sufficient inputs for the system to work.
Yes there is: the fact it isn't actually a brain. Consciousness doesn't emerge from computers; it is innate.

>> No.9765237

>>9764072
>waaay before
Brainlet

>> No.9765712
File: 30 KB, 657x539, imaginary.jpg [View same] [iqdb] [saucenao] [google]
9765712

>>9759451
>>9759685
>>9760276
>>9763016


You realize his hypothetical still btfos you, right?

>> No.9765722
File: 173 KB, 563x170, consciousmen.png [View same] [iqdb] [saucenao] [google]
9765722

>>9759201
>consciousness

Norvig's law:

In any thread about machine intelligence, someone will eventually mention "consciousness".

As if it adds something to the conversation.


This is a pseudo-profundity aimed at displaying how clever the poster is, but in fact just demonstrates vacuous thinking and an inability to grapple with the actual issues at hand.

>> No.9765728
File: 13 KB, 182x316, jill-meagher-dilated-pupils-2.jpg [View same] [iqdb] [saucenao] [google]
9765728

>>9765209
>I am fucking conscious.
You think you are conscious, but you don't even know what you mean by that.

Theorem: Introspection is a terrible source of insight.

Proof: Read any philosopher and reel at the nonsense they spout based on introspection.

>> No.9765991

>>9762284
>>9762306

feel like you guys forget that something as intelligent and complex as an a.i. will have unforeseen results regardless of the instructions. bugs exist in code now; what would happen in an a.i.? and i doubt you could program an a.i. to love us and make the right decisions when it doesn't implicitly understand us, and you forget humans aren't logic. we understand each other because we are humans with bodies and drives. it will be hard to program that into a machine. it's already so hard to program a machine to do many jobs humans find easy. we don't even understand ourselves explicitly.

>> No.9765993

>>9765728
maybe you should watch your own words too then

>> No.9766007

>>9759201
dunno man, Capsule Networks are fucking frightening.
Machines weren't supposed to understand 3D space and orientation until this fucker, Hinton, came up with a way to make them understand.

Waiting for generative adversarial capsule networks. If you thought GANs were mind blowing, wait until those bad boys are implemented.

>> No.9766029

>>9766007
hinton got cucked by karl friston on the brain, and peter dayan let him.

>> No.9766346

>>9763016
I don't think anyone's seriously suggesting we simulate every single atom of a brain.
The point is that such a thing is possible, and in being possible it proves there's no good reason to claim AI is somehow incapable of doing what human brains do.
It's just the absolute worst case scenario of what would be required, and even in the worst case scenario there still isn't anything absolutely preventing AI from doing what brains do, only obstacles of processing power.
Realistically AI will accomplish reproducing most of the important / interesting things brains do without being anywhere near that inefficient. For one thing the structure of brains is massively redundant, and a deliberately designed reproduction of brain functionality probably wouldn't need anywhere near that level of redundancy.

>> No.9766364

>>9766346
but you forget that we evolved in specific ecological conditions. you need to replicate that, our basic drives included, for an a.i. to be like us. whilst a.i. can do many interesting things, are those the things that make us conscious? ask yourself that.

>> No.9766367

>>9766346
it's redundant because the brain learns over a lifetime. a system like the brain couldn't function without that kind of degeneracy/redundancy. so you might be wrong: redundancy is necessary.

>> No.9766373

>>9766346
also, calculators already do things our minds aren't capable of.

>> No.9766391

>>9759201
You just have to think about how humans have achieved consciousness. Personally I don't believe it would be possible (or maybe possible but extremely difficult) to teach a computer to be conscious without a body. The only reason animals have brains is to regulate their bodies. So I think it would need a robot body whose limbs it could constantly track the location of. Moving around in its environment would teach it about 3D space, and maybe we could simulate some sort of evolutionary process where it has to think abstractly or creatively to overcome challenges. I'm actually not sure how this would work, because there would be no genetic mutations to help it evolve since it's computer hardware.

However we get there, and I think we will, I don't think that AI will be the savior of our race as some people think. It isn't going to solve our problems or make us cool cyborgs; it is just going to dispassionately kill and replace us. Then a race of space-faring robots will jump from star to star, wiping out civilizations and replacing them with robot factories.
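On the mutation point: evolutionary algorithms routinely simulate genetic-style mutation in software by randomly perturbing parameters, so hardware isn't the obstacle. A toy sketch (the fitness function here is an arbitrary stand-in):

```python
import random

def mutate(genome, rate=0.1, scale=0.5):
    # Randomly perturb each parameter with probability `rate` -- the
    # software analogue of a genetic mutation.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(fitness, pop_size=20, genome_len=3, generations=50):
    # Simple evolutionary loop: keep the fitter half, refill with mutants.
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

# Arbitrary stand-in fitness: prefer genomes close to the origin.
best = evolve(lambda g: -sum(x * x for x in g))
print(best)
```

Whether selection pressure of this kind produces anything like creative or abstract thought is, of course, a separate question.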