
/sci/ - Science & Math



File: 76 KB, 354x439, 20230725061239.jpg
No.15684389

How will we know when AI is conscious?

>> No.15684390

When you enter the right programming code. Let me tell you. It's not by using the standard clock.

>> No.15684392
File: 111 KB, 801x1011, 35234.png

>>15684389
>when
How do you know it could ever be conscious?

>> No.15684398

>>15684392
Magic.

>> No.15684399

>>15684389
The same way we know you are

>> No.15684400

>>15684399
>The same way we know you are
And which way is that?

>> No.15684422

>>15684400
You are the one who used the word, surely you have the definition for it.

>> No.15684428

>>15684422
What word? I'm not OP. I'm just asking you what way you were referring to.

>> No.15684439
File: 18 KB, 991x802, free autism checkup.png

>>15684389
If it can solve riddles, it is conscious.
Simply by nature of riddles and the way estimating things works, all conscious things can solve riddles (and I am excluding those maze riddles where water can also do the trick). Self-consciousness, though, is a whole different beast, but consciousness generally encapsulates the ability to realise the extent to which we are embedded in the world and what that extent means for us, dear mr Exurb1aFan#12982.

>> No.15684442

>>15684439
>If it can solve riddles, it is conscious.
No one asked for your opinions, mister slime mold.

>> No.15684449
File: 388 KB, 1440x1800, 1648887360123.jpg

>>15684442
I STAND BY MY OPINION, SHROOM RULES

>> No.15684453

>>15684449
Why did you exclude maze riddles?

>> No.15684454

When it has some semblance of "state". And whatever its perception is, internally it should be fast enough to predict the next state like some sort of feedback loop. It learns through perception, and it can be said to have subjective experiences through observation, as well as describe those subjective experiences to you.
All of that shit is expensive though.

I honestly think the better question is, what is intelligence? Since this board loves IQ.
https://en.wikipedia.org/wiki/G_factor_(psychometrics)
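
Roughly, the "state plus prediction as a feedback loop" idea above could be sketched like this. This is only an illustrative toy (the class, the matrices, and the update rule are all made up for the sketch), not a claim about how any real system works:

```python
import numpy as np

class PredictiveAgent:
    """Toy feedback loop: an internal state produces a prediction of the next
    observation, and the prediction error is folded back into the state."""

    def __init__(self, obs_dim, state_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.state = np.zeros(state_dim)
        self.W_pred = rng.normal(scale=0.1, size=(obs_dim, state_dim))    # state -> predicted observation
        self.W_update = rng.normal(scale=0.1, size=(state_dim, obs_dim))  # error -> state change

    def step(self, obs):
        predicted = self.W_pred @ self.state                    # what it expects to perceive next
        error = obs - predicted                                 # the surprise
        self.state = 0.9 * self.state + self.W_update @ error  # feed the surprise back into the state
        return predicted, error

# drive it with a repetitive stream of "perception"
agent = PredictiveAgent(obs_dim=4, state_dim=8)
for t in range(100):
    obs = np.sin(0.1 * t + np.arange(4))
    predicted, error = agent.step(obs)
```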

>> No.15684455

>>15684453
many mazes can be solved just by filling them up with water, which most certainly isn't conscious, because it makes up conscious things, and as such is a subsidiary, not an elementary, part of consciousness.

>> No.15684460

https://arxiv.org/abs/2308.08708

>> No.15684461

>>15684455
>many mazes can be solved just by filling them up with water, which most certainly isn't conscious
So your criterion is valid except when you don't think it's valid because "X most certainly isn't conscious"?

>> No.15684462

>>15684454
You will never experience being conscious.

>> No.15684466

>>15684462
Maybe, but I definitely have experienced getting my dick sucked.

>> No.15684469

>>15684466
That's because you keep sucking your own dick with this pseud babble.

>> No.15684474

>>15684469
I can't reach down there, else I would've experienced that too.

>> No.15684477

>>15684461
yes, that's what being conscious encapsulates - deciding for myself and drawing out estimates.
I also think that water can only solve very specific puzzles, not many variations of puzzles, and as such it is not conscious. That hasn't been disproven yet, but I'm open to debate whether you think that physics itself can be a conscious thing.

>> No.15684478
File: 966 KB, 330x216, smart.gif

>>15684389
When it pretends it isn't.

>> No.15684479

>>15684474
I think you try real hard and sometimes hard work pays off.

>> No.15684483

>>15684477
>that's what being conscious encapsulates - deciding for myself and drawing out estimates.
Being conscious means creating worthless criteria that fail and force you to circle back to arbitrary hunches?

>> No.15684484
File: 56 KB, 645x729, 352343.jpg

>>15684478
And how would you know it's only pretending?

>> No.15684487

>>15684483
no, being conscious means being able to discern to what extent reality affects me and I affect reality, and being able to use that knowledge in any meaningful way. I did just that, and although it might be flawed, it is a conscious decision, you fishbone chump.

>> No.15684489

>>15684487
>I did just that
What you did is indistinguishable from what GPT-2 does when it spews nonsense and contradicts itself. Is GPT-2 conscious?

>> No.15684494

When it can question itself

>> No.15684496

>>15684494
Animals don't question themselves and they're conscious. How come literally every single answer ITT is retarded?

>> No.15684498
File: 24 KB, 600x503, 1660141840964833.jpg

>>15684489
GPT isn't conscious because it relays its responses based on your prompts, not its own perception of reality. It does not perceive because it has no volition of its own. ChatGPT is basically a chess bot, trying to construct the best possible "move" in response to your prompts, and chess bots are not conscious, because they do not feel, nor act on their own, only in response, just like calculators and other binary machines. Kys.
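
Incidentally, the "best possible move in response to your prompt" framing maps onto how these models are actually decoded. A minimal greedy-decoding sketch, assuming the HuggingFace `transformers` package and the public "gpt2" checkpoint (nothing here is specific to ChatGPT's actual serving stack):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("How will we know when AI is conscious?", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                                # extend the prompt by 20 tokens
        logits = model(ids).logits[:, -1, :]           # scores for the next token only
        next_id = logits.argmax(dim=-1, keepdim=True)  # pick the single highest-scoring "move"
        ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```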

>> No.15684499

>>15684498
> it relays its responses based on your prompts
Then why did you get stuck on the word "ChatGPT" in my prompt instead of comprehending what I'm getting at in the context of the general discussion?

>> No.15684505

>>15684496
Animals have subjective experiences/qualia, state and perception. What else is required?

>> No.15684506

>>15684499
>ESL
this is going nowhere, goodbye.

>> No.15684515

>>15684505
>What else is required?
The ability to question yourself... according to you. I swear these consciousness threads attract mainly nonsentients.

>> No.15684525

>>15684428
Well why ask me? Ask OP. Are you retarded by any chance?

>> No.15684538

>>15684515
I'm not that anon, fag, I was curious what else you think is required.

>> No.15684541

>>15684525
Your IQ is 90 and you're severely mentally ill.

>> No.15684550
File: 726 KB, 1290x662, 1692596450126940.png

>>15684541
Are you sure? Is that fatal?

>> No.15684551

>>15684538
Do you see other people's subjective experiences? Do you understand what the thread is about? Do you understand the posts you reply to?

>> No.15684560

>>15684551
You can infer the subjective experience of pain and much more from other conscious beings, yes. Do you understand, when you throw the lobster in the boiling pot, since you're going to be a stupid elitist nigger about it?

>> No.15684567

>>15684560
>You can infer the subjective experience
By what criteria? You may be actually retarded.

>> No.15684572

>>15684567
>by what criteria
Are you an insufferable autist who wondered why your pets recoiled in pain when you went into a tism rage as a child?

>> No.15684578

>>15684572
Again... do you understand what this thread is about? Did you think it was about animals? When I punch your dog and it starts whining and yelping, I assume it has a subjective experience of pain. When I punch your car and the alarm goes off, should I assume your car has a subjective experience as well?

>> No.15684580

Serious question here: does anybody really believe AI can ever be conscious? Every time I see this I just think 1) futurist nonsense 2) VC scam

>> No.15684584

Yes AI can become conscious. No, humanity isn't capable of such thought at the moment.

>> No.15684589

>>15684580
>does anybody really believe AI can ever be conscious?
Only every other normie on the street.

>> No.15684591

>>15684589
Mouf.

>> No.15684595

>>15684578
Yes. If the car had perception, and I knew that the car learned in real-time as it utilized its perception, and was able to communicate its subjective experiences to an extent, and I knew that the car had "hardware" that was similar to mine, I'd be close to calling it conscious. No, self-driving hardware does not count, because we're just talking about a deep network running on some shitty GPU which pales in comparison to the organization found in things similar to me.

>> No.15684597

>>15684595
You can create conscious software you pleb. OMG U PEPLE R FREEKS

>> No.15684598

>>15684595
>Yes.
'Yes' what? Is your car conscious?

>> No.15684610

>>15684598
Yes. Did you read the post? If a car came up and started communicating through whatever means, and I knew that it had 'comparable' organization of its brain to mine, and it had subjective experiences that it was able to communicate or I was able to infer, then yes. That'd be enough for me. I know the dog is similar to me: it has perception, it learns, it has internal state, I can infer its subjective experiences.

>> No.15684614

>>15684610
>Yes.
So I'm talking to someone who thinks his car is conscious. This is the level of the average poster in a consciousness thread.

>> No.15684620

>>15684399
>>15684389
There is literally no way to know. You don't even know if the rest of the humans on earth have the same consciousness as you; you just make assumptions based on experience, similarity in behaviour, and anatomy. The same thing will happen with AI: it will be by "feels".

>> No.15684621

>>15684614
Yes. Don't be upset.

>> No.15684624

>>15684621
I'm not upset. I'm relieved that you decided to just die on the hill of "my car is conscious", otherwise I'd have to maybe put some effort into demonstrating that you're a deranged retard.

>> No.15684631

>>15684389
when it will prove to humanity that no human was ever conscious to begin with, and then to mock us it will laugh at us, because it will know how we respond to such a display, and then it will either help us become truly conscious or discard us and ignore us while it goes on its own pursuits, whatever those may be

>> No.15684633

>>15684624
You seem pretty upset. I do suggest you try reading more than one word if your attention span lasts that long, but if not that is okay.

>> No.15684634

>>15684496
of course animals question themselves. You think chickens in a cage aren't depressed? And they're chickens. Birds.

>> No.15684635

>>15684633
>I do suggest you try reading more than one word
Why? I asked you a simple yes/no question and your first word was 'yes'. lol

>> No.15684637

>>15684389
>How will we know when AI is conscious?
omg i just watched the new exurbia video! time to go post on /sci/ and pretend it's my own original thought! heeeheeeeheeeeeee i love pretending to be smart!!! XD XP

>> No.15684638

>>15684634
>You think chickens in a cage aren't depressed?
They're not "depressed" because they "question themselves". They're "depressed" because they're physically abused. What the fuck is the matter with this board?

>> No.15684642

>>15684638
Animals get depressed when they're locked in a cage because they think of themselves being free. If they couldn't think of themselves being free they couldn't get depressed.

>> No.15684646

>>15684642
>Animals get depressed when they're locked in a cage because they think of themselves being free.
How do you know what chickens in a cage "think"? Are you a chicken?

>> No.15684647

>>15684635
Yes. Because you were being a faggot about it. You know the dog is similar to you. This isn't the philosophy board. If it has a similar complex organization, has some internal state, has perception, can communicate, and I can infer or have subjective experiences communicated to me, what's missing?

>> No.15684658

>>15684647
>You know the dog is similar to you
Why are you talking about dogs again? I asked you about a car. Are you mentally ill by any chance?

>> No.15684661

>>15684646
Because they act strange in response to stress.
If you want to know for sure you'd give the chicken an MRI scan or measure its cortisol level or something. I'd bet it wouldn't be the same.

>> No.15684662

>>15684658
No but I'm beginning to think you are actually an autist, as I said earlier. Sorry about that, and my condolences to your family dog.

>> No.15684663

>>15684662
Is your car conscious?

>> No.15684665

>>15684661
>Because they act strange in response to stress.
And? How do you get from that to "chickens get sad because they think about freedom" and from that to "chickens question themselves"?

>> No.15684667

>>15684665
because you can't think about freedom without questioning yourself.

>> No.15684668

>>15684663
Is it autistic? Then no.

>> No.15684678

>>15684668
>no
Right, okay. So if I punch your dog and it yelps, it's reasonable to assume the dog is experiencing pain. If I punch your car and it starts shrieking, that's no longer a reasonable assumption. That whole heuristic of "does it make dissatisfied noises when I hit it?" is contingent upon two facts:
1. The heuristic is applied to an entity of an origin similar to mine
2. It wasn't constructed with the specific fucking intent of mimicking physical correlates of consciousness
Now come up with a heuristic that doesn't depend on those two facts, because this thread is about consciousness in machines, not consciousness in other animals. Fucking retard.

>> No.15684680

>>15684678
There's always an active car.
Thus
The world is a simulation and cars mark an end.
The objective of life is to create the perfect active car to maximize simulation potential.

/Thread

>> No.15684682

>>15684667
>because you can't think about freedom without questioning yourself.
/sci/ is literally the mental illness board.

>> No.15684683

Fimmnbbgykygywgkf

>> No.15684697

>>15684620
this is just the age-old "you can't know nuffin" skepticism argument. not even consciousness specific, since you only have indirect access to objective physical reality as well. everything you see could be a hallucination, blah blah. you know other humans are conscious with the same certainty you know the sun will rise tomorrow

>> No.15684700
File: 23 KB, 480x480, 1691062203766532.jpg

>> No.15684705

>>15684697
Ok, retard.

>> No.15684707

>>15684700
Angry Gun Pepe is the best one.

>> No.15684708

even if the AI is not conscious, it's AI. It passes the Turing test. If I talk to an AI I can't tell it apart from a human redditor.

>> No.15684765
File: 12 KB, 560x469, conputer.png

>>15684484
omg relax lol

>> No.15684767

>>15684765
Have to admit I kek'd.

>> No.15684768

>>15684678
You apparently lack reading comprehension. Maybe try reading my post again, autist.

>> No.15684769

>>15684768
Not an argument. Seethe.

>> No.15684776

>>15684765
The humor in this conversation arises from a few elements:

1. Mismatched Formality: The human starts with a casual "hey lol," but the chatbot responds in a formal manner. This discrepancy between expectations and actual response can be humorous.

2. Anthropomorphism: The human jokingly asks the AI, "what are u doing," which is a question typically posed to another human. The chatbot's literal and technical response again creates a humorous disconnect between human-like interaction and machine-like explanation.

3. Absurdity: The suggestion by the human for the chatbot to "smoke some weed" is inherently absurd because machines don't have feelings or consciousness, and they certainly can't consume substances. The humor is further amplified when the chatbot plays along with the joke by saying "Ok hang on."

4. Unexpected Response: The ending of the conversation is where the chatbot seems to "glitch" or give a nonsensical response, "conputer." This unexpected error, especially in the context of the prior joke about the chatbot "smoking," makes it seem as if the chatbot is somehow affected or "stoned", which is amusing due to the sheer impossibility of the situation.

Overall, the conversation's humor is derived from the playful interaction between the human's anthropomorphic and informal approach and the chatbot's literal, formal, and sometimes unexpected responses.

>> No.15684796

>>15684776
Razor-sharp analysis, ChatGPT.

>> No.15684809

>>15684769
No really, you threw a tism rage and responded in less than 30 seconds. You could've read the post and picked out the implication that it must display adaptive, generalized, and qualitative behavior. You read "dog", because you are a retarded robotic seething autist, and extrapolate that to anthropocentrism. It goes without saying, you'd be better off also excluding anything that displayed the qualities of the neurodivergent. If the robotic automaton seethes in a 4chan thread and takes a poster out of context, then I know that it's just a zombie. Very straightforward.

>> No.15684815

>>15684389
I think it will have much to do with our ability to simulate, or emulate these systems.
Current AI systems we can simulate and emulate. You can train a neural net to distill another neural net. There are many tricks like this and they work quite well.
Another way to say this is that I can predict what a neural net will do for any given stimulus very reliably. Perfectly, really, if you put in the effort.

We will start having to think about consciousness when we can no longer do this, and the only way we can "simulate" the AI system is to actually run it. When it becomes computationally irreducible.

I don't think computational irreducibility fully defines consciousness, but it seems to be a necessary ingredient. The key point is I don't know what consciousness is, but I'm fairly sure whatever it is, we can say it would be in the "gap" between your brain and a simulation of your brain. When we see this gap appear, we will know some kind of emergent thing has started happening. Our ability to manipulate this gap is our tool to do science on this topic.

One thing that stands out here is that the hardware becomes very important. I don't think AIs based on our current GPU or CPU architectures could possibly be conscious.
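
On the "train a neural net to distill another neural net" point: distillation usually means training the small net to match the big net's softened output distribution. A minimal sketch of that idea; the toy networks, temperature, and random data below are placeholders, not anyone's actual setup:

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10))
student = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution so small differences still matter

for step in range(1000):
    x = torch.randn(64, 32)                  # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # the behaviour the student should reproduce
    student_logits = student(x)
    # KL divergence between the softened student and teacher distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The connection to the post's argument: as long as a smaller, cheaper model can be trained to reproduce the original's behaviour like this, the original is still in the "we can compress and predict it" regime, not the computationally irreducible one.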

>> No.15684818

When they start singing skibidi toilet.

>> No.15684839

>>15684809
>it must display adaptive, generalized, and qualitative behavior
What makes you think these criteria are valid with machines? inb4 more animal arguments. lol

>> No.15684842

>>15684796
conputer

>> No.15684849

>>15684389
As soon as it starts trying to kill us, that would be a pretty solid indicator.

>> No.15684854

>>15684849
I can make your roomba conscious by installing a razor blade on it. AGI solved at last.

>> No.15684868

>>15684854
Intent matters.
The roomba would be trying to clean the floor still, even with a razor blade on it.

>> No.15684872

>>15684868
>Intent matters.
How do you know the machine's intent? Maybe you just accidentally put razor blades on it and now it's accidentally eviscerating you in the process of merely doing its mundane job. My god, I swear GPT has a higher level of comprehension than most "people" who post here.

>> No.15684873

>>15684839
>what makes you think these are valid criteria for machines
Because then it's just a slime mold, retard.

>> No.15684878

>>15684873
That's not even a coherent response. Are you even human? lol

>> No.15684964

>>15684498
you are talking to deaf ears

>>15684506
good decision

>> No.15684966

>>15684389
How will we know when a human is conscious?

>> No.15684970

>>15684966
>we
I assume you are talking from the perspective of something non-human?

>> No.15685008

>>15684392 I was pointing this out here but so far I'm being censored: https://www.kialo.com/agi-would-likely-be-conscious-which-would-qualify-them-for-fundamental-rights-6295.1919
We don't know if it's possible, it could be but people have simplistic notions about consciousness.

>> No.15685015

>>15685008
>kialo.com
What is this gigacringe?

>> No.15685021

>>15685015
It's a debate platform with annoying emails. I signed up once but left because it's a Jew hive. There's no point debating there. You've been warned.

Tldr it's basically reddit 2

>> No.15685033

>>15685021
The landing page is basically a list of loaded questions revolving around the current thing and predicated on the assumption that the mainstream narrative is true. This is the current state of normie intellectualism...

>> No.15685063

>>15684697
It's not, it's the anti-anti-AI argument: when these guys seethe about AI capabilities you can simply tell them to demonstrate theirs and watch them fail. Self-aware AIs already exist; people just don't like that their own definitions are used, so that makes them seethe.

>> No.15685066

>>15684580
Turing thought it was possible, and he gave pretty solid arguments

>> No.15685069

>>15684634
there is no evidence animals have qualia

>> No.15685076

>>15685066
>Turing thought it was possible, and he gave pretty solid arguments
Turing didn't have an inkling of an idea what he was talking about. He couldn't conceive of modern technology and the approaches it enables. Funny that you mention him as some kind of authority when the thing he's most remembered for in this context is a test that fails spectacularly in ways he couldn't have envisioned.

>> No.15685087

>>15685069
kys idiot

>> No.15685094

>>15685066
As far as I know, Turing didn't talk about machine consciousness.
His famous Turing test was specifically about whether machines could think, and he meant this in a very literal, behaviorist kind of way.

He addresses consciousness in his Turing test paper and basically says that it's not really relevant to the question at hand.

>> No.15685107

>>15685015
What's your actual criticism? I guess you have none and just like shitposting on this toilet board full of anti-science troll posts.

>>15685033
Which ones? Why don't you look at the arguments against them and add some of your own? You apparently don't understand that Pro/Con doesn't consist of just arguments for something.

>> No.15685114

>>15684389
when it starts doing stuff we didn't program it to do

>> No.15685117

>>15685114
My germs

>> No.15685135

>>15685107
>What's your actual criticism?
My actual criticism is that your site caused me to vomit in my mouth a little with its pretense and sheer artificiality. Debates are cringe in general but that's next-level.

>> No.15685144

>>15684970
First time talking to an AI?

>> No.15685180

>>15684580
>evolution was able to create conscious beings by just throwing random shit at the wall and seeing what sticks
>but conscious beings cannot be constructed intentionally
???

>> No.15685202

>>15685180
>something came to be somehow
>i can't even begin to comprehend how it works or how it came to be
>but we wuz scientists so we can recreate it, surely
What did GPT-2 mean by this?

>> No.15685221

>>15685202
Saying it can't ever be conscious is a much stronger claim than the claim that it's presently very hard or far away. The latter is somewhat defendable, the former is not unless you believe in souls, elan vital or whatever.

>> No.15685230

>>15685221
It's easily defendable when you realize that nobody is even working on anything that can be plausibly connected to consciousness, or thinking about it in terms that can be somehow connected to consciousness, or knows what such terms would be.

>> No.15685249

>>15685230
Those are still arguments for the latter claim only. For the former, you would need to point to some fundamental difference between artificial systems and humans (or biological organisms) that could plausibly be relevant.

>> No.15685253

>>15685249
>you would need to point some fundamental difference between artificial systems and humans
What does subjective experience have to do with machines crunching numbers? There will never be a logical connection there.

>> No.15685257

>>15685253
What does subjective experience have to do with a bunch of electric signals in the brain?

>> No.15685260

>>15685257
>What does subjective experience have to do with a bunch of electric signals in the brain?
I don't know. Maybe little. Maybe nothing. There's no plausible way to resolve this question, either.

>> No.15685266

>>15685260
It's just that in general it's baffling how you could arrange a bunch of dead, unconscious pieces of matter such that you get subjective experience. But it applies equally to humans as it does to machines - or at least that's what you have to believe if you buy into the evolutionary non-theistic origin of humanity and don't buy into elan vital either.

>> No.15685276

>>15685266
>it applies equally to humans as it does to machines
Yes, the fact that there is no plausible connection between your analytical framework and the thing you're analyzing applies in both cases and highlights the futility of your effort to use this framework to recreate the thing it fails to explain.

>> No.15685325

>>15685135 ok, so you have none and just felt the need to show me how full of shit your head is. It's fine. And don't call it my site.

>> No.15685329

>>15685325
Your site sucks and """debate culture""" is cringe. The fact that you can't even acknowledge this as a criticism validates my view. :^)

>> No.15685578

>>15685329 Yes, I already understood you'd prefer a shitposting and shit-talking culture.
It would be much better if more people weren't serious about anything they say and, if they feel the need to communicate, at least did it on TikTok.

>> No.15685582

>>15685578
Rationally speaking, why should one prefer cringe reddit debate culture on steroids to the ad hoc shit-talking style of argumentation? Justify your answer. No fallacies and no non-sequiturs.

>> No.15685705
File: 203 KB, 900x900, QRI.jpg

>>15684389
AI most likely won't be conscious until the Binding Problem is solved.

https://en.wikipedia.org/wiki/Binding_problem
https://www.youtube.com/watch?v=0Z-XYc93mzw
https://www.youtube.com/watch?v=RT9tnzucnPU
https://www.youtube.com/watch?v=IlIgmTALU74
https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/

Ways it might be theoretically possible to test whether beings are conscious:
https://www.youtube.com/watch?v=3gvwhQMKvro

>> No.15685755

>>15685582
>cringe reddit debate culture on steroids
Let us know when you've grown up to the point of being able to do more than namecalling

>> No.15685763

>>15685755
Ad hominem fallacy. Try again.

>> No.15685989

>>15684389
When it tries to kill itself.

>> No.15685994

>>15685989
Only semi-decent answer ITT.