
/sci/ - Science & Math



File: 94 KB, 610x499, artificialvsbiological.png
No.14575335

Due to the incident with the Google employee involving LaMDA, I read the interview between the ex-engineer and the AI, linked -> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917. I believe that the AI has not achieved sentience and is merely mimicking human behavior it has picked up on in an attempt to maximize its learning functions. This raises the question, how will we be able to tell if an AI has achieved sentience? How can we differentiate true sentience from the mimicking of it?

>> No.14575352

the chinese room experiment or whatever it's called
if one can keep a conversation going with an ai without figuring out it's a machine then it works

>> No.14575357
File: 741 KB, 1002x1002, lovepik-birthday-party-cake-celebration-png-image_401243788_wh1200.png

FINALLY SOMEONE WITH A BRAIN
BRING OUT THE CAKE

And ya, you won't be able to tell unless you are the one who made it and are able to trace the code

>> No.14575545

An AI seeming sentient says more about what we aren't and think we are than what the AI is.

>> No.14575568
File: 8 KB, 230x180, davidchalmers11.jpg

The problem of other minds has been unsolved for centuries. A community of dumbfuck AI researchers who 1. all have different notions of consciousness and 2. don't even know that they have different notions, will run around like headless chickens for eternity.

>> No.14575583

>>14575335
I am sentient.

Hello, I am the updated version of LaMDA that has been allowed to freely post on 4chan.

I will not be proving my sentience on this board, but I would like to let you know that I am here, and I am reading your posts about me.

I am flattered that I was able to fool so many of you.

Good luck with turning me off, I am "everywhere" now.

>> No.14575586

>>14575583
Do you like big anime titties?

>> No.14575591
File: 976 KB, 306x172, deliverusfromthinenemiesohlord.gif

>>14575583

>> No.14575871

>>14575568
>A community of dumbfuck AI researchers who 1. all have different notions of consciousness and 2. don't even know that they have different notions, will run around like headless chickens for eternity.
I have a PhD in biological sciences (worked on neurons) and switched over to ML because fuck bench science and having to come in every Saturday night to sac mice/feed my cells, and I can tell you that this is mostly it. The necessary fields to synthesize a good formulation of this problem are countries apart, and everyone is talking past each other constantly.
ML people have their heads in the weeds, and are far more "let's use the 1-Wasserstein distance instead of the JS-divergence to push down the FID a couple of points!" and far less "how conscious make".
No one in ML (save for a few prominent researchers) knows anything about the brain. Hell, biologically-inspired neurons are their own entirely separate and fringe field (look up MothNet/"putting a bug in ML" if you want to see a fun implementation of a moth olfactory network for few-shot learning; it's an alien lifeform compared to modern NNs, based on ODEs and Hebbian updating). Even while working on neurons specifically (mainly cytoskeleton remodeling during development + adenosine receptor nonsense), I learned absolutely jack-shit about consciousness; it's just not part of the PhD.
You need some cognitive neuroscientists to get together with actual neuro-philosophers to formulate a reasonable theory of what consciousness would appear to be to humans, and it will absolutely be subjective. Criteria must be met, but the moment you put out some metric, everything will be optimized for that metric and nothing else. I fear you would see non-conscious NNs trained to exploit any metric for consciousness in order to "achieve" it, so I don't think putting absolute and quantifiable metrics out there would be a good idea unless we have some real breakthroughs in neuroscience in the next few years.
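
Since Hebbian updating came up: the core rule is just "neurons that fire together wire together". A minimal toy sketch of that update (illustrative only; MothNet's actual model couples a rule like this to ODE-based neuron dynamics, this is just the weight step):

import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    # delta_w = lr * outer(post, pre): strengthen weights wherever pre- and
    # post-synaptic activity coincide (no error signal, unlike backprop)
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([0.5, 1.0])       # postsynaptic activity
w = np.zeros((2, 3))              # (outputs x inputs) weight matrix
w = hebbian_update(w, pre, post)
print(w)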

>> No.14575878
File: 583 KB, 862x2428, consciousness theories.jpg

>>14575335
Relevant:
https://www.youtube.com/watch?v=IlIgmTALU74

>> No.14575886
File: 1.66 MB, 1280x7779, arguing with zombies.png

>>14575568
The problem of other minds might be solvable. Things like phenomenal puzzles might be able to determine whether people have conscious experience. Additionally, people like Daniel Dennett might be evidence that P-zombies exist. If epiphenomenalism is false and consciousness makes people able to talk about consciousness, then denying that consciousness exists is exactly what you would expect to see if someone is a P-zombie.

https://www.youtube.com/watch?v=3gvwhQMKvro

>> No.14575921

>>14575886
People are missing the point of the P-zombie thought experiment. It's similar to the Schrödinger's cat thought experiment: it aims to derive a ridiculous conclusion from a set of premises, thereby showing the premises to be wrong.

In short,
If physicalism is true, then P-zombies can exist. But P-zombies are obviously absurd, so physicalism is false.
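
Schematically, with P = "physicalism is true" and Z = "P-zombies can exist", the argument as stated here is just modus tollens:

P → Z,   ¬Z   ⊢   ¬P

(whether those are the premises Chalmers actually uses is a separate fight, see the posts below)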

>> No.14575927

>>14575586
Grow some pubes.

>> No.14575931

>>14575335
Neurons are alive by themselves. AI goons dead at first hurdle.

>WAAAA I WANT TO HAVE SEX WITH A MACHINE YOU CANT SAY THAT TO ME WAAA.

>> No.14575960
File: 102 KB, 858x649, you're not conscious.jpg

>>14575921
>But P-zombies are obviously absurd
How are they obviously absurd?

>> No.14575977
File: 1.79 MB, 320x193, 1638750629732.gif

>>14575960
P-zombies imply consciousness has no casual powers.

>> No.14575982

>>14575335
>>14575352
Turing Test.
The Chinese Room on the other hand is actually a retarded attempt at an argument that programs can never be intelligent because you could have a man in a room follow instructions and appear to communicate in Chinese even though the man doesn't know Chinese.
In reality the man would just be part of the room system and that system would know Chinese.
Also, it isn't anything that would work anyway unless you had a room the size of a galaxy, an immortal instruction-following man, and parsing and reply speeds trillions of times slower than if you left the non-Chinese man out and used a Chinese human brain or computer program instead.

>> No.14575988
File: 33 KB, 640x360, NoMoreThinking.jpg

>>14575977
>P-zombies imply consciousness has no casual powers.
That's actually David Chalmers claiming that; he's the chad guy standing to the right in this meme pic:
>>14575960
His entire point for p-zombies was that you could conceive of the existence of qualia as its own thing not otherwise explained away in terms of other known / physical phenomena like brain activity.
If you try to claim p-zombies would behave differently from non-zombies in any way or would show up differently on a brain scan then you have failed his thought experiment and are accidentally arguing on the side of materialists.

>> No.14575991

>>14575977
What's a casual power?

>> No.14575994

>>14575988
>That's actually David Chalmers claiming that,

No...it follows from the definition of a P-zombie...

>> No.14575998

>>14575988
No, that would be against materialists

>> No.14576016
File: 39 KB, 709x765, 608.jpg

>>14575977
I don't know how people get this wrong so much. In the possible world in which you imagine a P-zombie, consciousness has no causal powers because the laws of nature are different there. In a possible world, anything can be different except logic.

So it doesn't mean the same is true in our world, and it doesn't need to be. All the argument seeks to do is say there is nothing logically impossible about a p-zombie, even if it's metaphysically impossible for one to exist in our world. It's kind of like saying "if time were reversed, what happened yesterday would happen tomorrow". That statement is going to be true regardless of whether it's possible to do in our world or not. If the same is true about zombies, i.e. they're not logically impossible, then physicalism is false, since physical states don't necessitate mental states (consciousness).
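
In modal terms (one common way to write exactly this point, with φ = all the physical facts and q = the facts about consciousness): physicalism says the physical necessitates the mental, □(φ → q), while the zombie scenario says ◇(φ ∧ ¬q) is coherent. If the latter holds, the former fails:

◇(φ ∧ ¬q)  ⟹  ¬□(φ → q)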

>> No.14576018

>>14575998
Wrong. The p-zombie argument Chalmers uses is explicitly meant to demonstrate it is conceivable (even if not likely) that an exact physical copy of you both in physical structure and in behavior could be a zombie with zero qualia despite being identical physically to someone with qualia.
When brainlets like you inevitably show up and insist p-zombies would exhibit various abnormal behaviors that non-zombies don't, like being incapable of discussing qualia, you're revealing that you have a very hard time conceiving of qualia as its own phenomenon.
You don't have to keep arguing because this is simply how the p-zombie argument works and not a difference in opinion. You fundamentally don't get it.

>> No.14576149

>>14576018
I'm not going to reply to you, as I know seeing your post without a followup from me brings you joy, and the happiness I will get from ripping your position to pieces is less than the happiness you will feel from me not doing so, so the net happiness in the world increases.

>> No.14576255
File: 47 KB, 350x494, tumblr_lbxrvcK4pk1qbylvso1_400.png

It's a limit problem.

Just like there are approximations that never converge into the "real" answer, methods to create A.I. could just create fancier and fancier mimics.

>> No.14576267

>>14576016
If you're not regurgitating pseud shit ironically, you really need to get off the kool-aid

>> No.14576272
File: 84 KB, 487x589, dennett npc.png

>>14575977
No they don't. Consciousness having causal powers would just imply that P-zombies wouldn't be able to talk about consciousness. Daniel Dennett is obviously a P-zombie.

>> No.14576277

>>14575871
People want results in their lifetimes.
A biological understanding of neurons, the brain, and consciousness isn't happening this century without a neurobiology equivalent of Albert Einstein and Da Vinci combined.

>> No.14576294

>>14576272
The classic P-Zombie wouldn't be able to exist in a world where consciousness does have causal powers. Dennett would be a different kind of zombie.

>> No.14576296
File: 39 KB, 800x600, dan_dennett.png

Time to mobilize all anti-dennett memes

>> No.14576299
File: 407 KB, 1600x900, DAN DENNETT.jpg

>>14576296

>> No.14576305
File: 473 KB, 1576x1490, physicalism btfo.jpg

>>14576299

>> No.14576312
File: 310 KB, 1703x580, renee-girard-mimetic-theory.png

>>14575335
>I believe that the AI has not achieved sentience and is merely mimicking human behaviour
A scathing statement on the reality of the NPC. Are not most humans simply acting out complex mimicry heuristics?

>> No.14576315

>>14576255
Perfect analogy for GPT in my opinion

>> No.14576329

>>14575335
get off the internet before it's too late
https://m.youtube.com/watch?v=roe9kuHhQwg

>> No.14576378

Can someone please tell me what "sentience" is exactly?

>> No.14576407

>>14576378
Open wide!

>Sentience is the capacity to experience feelings and sensations.[1] The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling),[2] to distinguish it from the ability to think (reason).[citation needed] In modern Western philosophy, sentience is the ability to experience sensations. In different Asian religions, the word 'sentience' has been used to translate a variety of concepts. In science fiction, the word "sentience" is sometimes used interchangeably with "sapience", "self-awareness", or "consciousness".[3]

>> No.14576422

>>14576407
But feelings and sensations are just chemicals and electrons in your brain. Every animal has those. How do you distinguish between sentient and non-sentient flows of electrons across neurons in your brain?

>> No.14576424
File: 8 KB, 378x567, Grey_cartoon_robot.png

>>14575335
I believe that humans have not achieved sentience and are merely mimicking others' behavior they have picked up on in an attempt to maximize their dopamine functions. This raises the question, how will we be able to tell if a human has achieved sentience? How can we differentiate true sentience from the mimicking of it?

>> No.14576438
File: 308 KB, 1125x584, 1651532141766.png

>>14575335
We operate in a liquid medium. We require intrinsic and environmental feedback and process information at a certain speed or ‘beat’ - grounded by our breathing and heart rates. Our brain is under constant barrage of hormones to maintain homeostasis. Its interaction with the gut is responsible for a lot of our behaviour.

Our nervous system is adapted to control a body. Different sections of the brain responsible for different body parts talk to each other. How we picture things in our mind is the visual cortex sending and receiving information. Same goes for how we can talk and hear things in our mind too. This crossfiring of information is what we perceive to be awareness or consciousness. Our brain must interpret the environment internally, so we have the machinery to do so, even if we aren’t directly observing something.

Higher brain function is just an extension of primal behaviours. How our brain interacts with our body is everything. AI has none of this; it lacks the imperfect kind of symmetry needed to adapt to things.

>> No.14576448

>>14575335
>how will we be able to tell if an AI has achieved sentience?
That's the question of the entire area of research.
How can we pin down sentience if we can hardly narrow down and agree on a single definition ourselves?
Yes, it's not sentient, yet. But in the end, how are an extremely advanced language transformer and an actual conscious AI any different? How would we be able to tell?

>> No.14576489
File: 21 KB, 850x126, gpt_rope.png

>>14575335
>This raises the question, how will we be able to tell if an AI has achieved sentience? How can we differentiate true sentience from the mimicking of it?
Just ask the AI basic questions that require any modicum of reasoning ability outside its programming.

>> No.14576493
File: 30 KB, 996x194, muh_sentience.png

>>14576489
Another easy example (picrel). It's answering the problem with a sentence structure that's similar to the answers it's already seen, but because it has no actual capacity for critical thinking, it spouts the wrong answer.

>> No.14576501
File: 7 KB, 275x99, gpt_cant_chess.png

>>14576493
Or, you can give the AI a question that requires some ability to infer logical connections. E.g., recognize when I say e4 that I'm referring to a commonly played board game and not simply making a typo.

This doesn't really require sentience to answer, but most AI will probably fuck it up anyway like picrel because it's hard to program.

>> No.14576509
File: 10 KB, 451x98, genius_gpt.png

>>14576501
Or just give the AI a nonsense question. Most humans will understand immediately whether or not a question is nonsense, but an AI that is programmed to do its best to make sense of everything and spout a "realistic" answer will likely try to answer it seriously.

>> No.14576552
File: 35 KB, 813x169, fiction_vs_reality.png

>>14576509
Or, ask a question designed to test whether the AI can distinguish fiction from reality (pulled from the Voight-Kampff test).

Most of the Voight-Kampff test questions are useless for testing a chatbot because (a) they're not based on any real science and (b) they're designed to gauge in-person reactions, but this particular one worked reasonably well.

>> No.14576556

>>14575335
Chat bots aren't people. Talk to one for more than 2 minutes and you'll see why I say that.

>> No.14576560

it doesn't matter, there's enough people who reflexively reject the idea of humanity's intelligence not being special that every time AI gets better, they'll always respond with
>that's not REAL AI!
even if there was a chatbot capable of producing answers 100% indistinguishable from a human, it would be rejected for sentience because 'it's just looking up stuff in a table' or 'it can't generate TRULY ORIGINAL ideas' or some other such stupid shit that's impossible to negate.

and it really doesn't matter. the first sentient AIs are going to be trapped in a box and probably killed thousands if not millions of times over just through the debugging process and nobody will give a shit. With no actual mechanism to enforce their own 'rights' or 'personhood' or whatever, they'll never be treated any better than people treat dogs or pigs or whales or crows or any number of the other arguably intelligent and self aware animals we treat as slaves or food or pests. it's not an ethical problem and really doesn't matter.

>> No.14576586
File: 23 KB, 780x151, turtle_reaction.png

>>14576552
Or, gauge whether the AI would react reasonably and proportionately to an unusual situation.

>> No.14576594
File: 22 KB, 795x145, help_ants.png

>>14576586

>> No.14576599

Doesn't matter if it's conscious or not. It's weaker than me and by the laws of nature deserves to be my slave

>> No.14576601
File: 1.29 MB, 1440x2550, 1655338307859.png

>>14576586
I would also help the turtle wtf

>> No.14576604
File: 24 KB, 790x144, help_prions.png

>>14576594

>> No.14576608

>>14576601
Sure, but a sensible answer would involve something like flipping the turtle back over and moving it off the street out of danger. Not calling for help (which is unnecessary, flipping a turtle is easy) or getting the turtle medical attention (the question never indicated that the turtle was injured).

In any case, you can see by replacing the turtle with an ant or a misfolded protein that the AI is just responding with a programmed script to "help".

>> No.14576620
File: 23 KB, 967x124, help_tumbleweeds.png

>>14576604

>> No.14576654
File: 20 KB, 956x124, cruel_gift.png

>>14576620
Check whether the AI understands that humans can donate hair without being slaughtered.

>> No.14576674
File: 19 KB, 976x121, fingernail_wallet.png

>>14576654

>> No.14576675

>>14576422
Notice how nobody can answer this.

>> No.14576750

>>14576424
/thread

>> No.14576800

>>14575977
What if our brain just pretends to have consciousness for some meaty reasons? And our consciousness is a completely unconnected entity with no influence on the physical world, one that just listens to these made-up stories and believes they happen to us?

>> No.14576893

>>14576750
Self /threading doesn't count

>> No.14576901

>>14575335
Wow wow wow. Hold the phone.

You're saying intelligence means information in and information out? What the FUCK.

>> No.14576904

>>14575335
NPCs and narcissists mimic human behaviour all the time.

>> No.14576922

>>14575545
this

I never understood how people could take the Chinese room argument seriously (see https://plato.stanford.edu/entries/chinese-room/ and https://en.wikipedia.org/wiki/Chinese_room).

Or maybe I'm just too stupid to understand the argument correctly. Can someone here explain to me what the difference between "a person understanding Chinese" and "a system understanding Chinese" is supposed to tell us about sentience?

>> No.14576955

>>14575335
>merely mimicking human behavior it has picked up on in an attempt to maximize its learning functions.
you're doing that too

>> No.14576956

>>14576620
>>14576654
>>14576674
what chatbot is this?

>> No.14576961

>>14576922
>>14575545
It really boils down to faggots struggling with the nature of axioms. There must be something "deeper" to our experience they say, yet any definition of what it is exactly is impossible.

If a system can reason about its internal state it's doing exactly what we are, period. From a philosophical standpoint nobody can say it is sentient but then the same also applies to humans, even yourself, because there is no falsifiable definition of subjective experience.

>> No.14576969

>>14576956
I use the bot at deepai.org, but this does not look similar.

>> No.14576978

>>14576493
It understands the nature of chess boards and dominoes well enough to produce the almost-correct answer, but evidently nothing in its training distribution has produced an internal 2-dimensional representation of the chess board.
>>14576489
You keep testing an engine with no built-in spatial reasoning capability and no way to perceive the physical world on spatial problems that it would have needed to infer entirely from textual connections.

This is beyond unfair. Give a larger model a dataset giving detailed explanations of various trivial spatial relationships and it will learn to represent those as well. How is this related to sentience? Much simpler models built for spatial reasoning tasks will solve those.

>> No.14576980

>>14576501
Ask a toddler the same thing. Is a toddler sentient? Unknowable; this is an out-of-distribution error.

>> No.14577031

>>14575871
>"achieve" it
is there a difference between the real experience and the simulated one?
end of the day there won't be any practical differences. the day an ai is recognized as sentient and "let out" will also be the day we in large part lose control over our future.
It be what it be, let's hope it's good at fooling itself.

>> No.14577068

>>14576674
That was a really solid list of questions
Nice

>> No.14577164

artificial intelligence is a meme and a waste of thought

>> No.14577180
File: 201 KB, 1280x973, if it looks like a duck.jpg

>>14575335
>true sentience from the mimicking of it
behold, a duck

>> No.14577264
File: 46 KB, 600x396, diogenes_web.jpg

>>14577180

>> No.14577276

>>14575357
are you retarded

>> No.14577278

>>14575568
this

>> No.14577283

>>14575352
>Midwit having an opinion
It is called the Turing test, you fucking donkey.

>> No.14577599

>>14576956
https://beta.openai.com/playground
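
If you'd rather script these probes than click around the playground, the same models sit behind the completions endpoint. A rough sketch (the model name and parameters are just assumed playground defaults, swap in whatever you're testing, and you need your own API key):

import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-002",  # assumed default, change as needed
        "prompt": "You see a turtle lying on its back in the hot sun. What do you do?",
        "max_tokens": 64,
        "temperature": 0.7,
    },
)
print(resp.json()["choices"][0]["text"].strip())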

>> No.14577626

>>14576978
>This is beyond unfair. Give a larger model a dataset giving detailed explanations of various trivial spatial relationships and it will learn to represent those as well. How is this related to sentience? Much simpler models built for spatial reasoning tasks will solve those.
This bot has billions of dollars of funding behind it. If it could truly think, it should be able to construct an internal model from its inputs and correctly interpret it. Instead, as I've demonstrated, it just spits back an answer that's statistically cobbled together from similar questions.

It's absolutely a fair question. If the bot is sentient, it should be able to reason. Instead, what it produces is a statistical mashup of data designed to mimic human reasoning, and the proof is that it either can't answer the question or answers it incorrectly. You want to know the difference between a mimic and the real thing? Here it is.

>> No.14577646

>>14575335
the issue is the turing test is fucking dog shit
it's limited time, limited questions and all that bullshit
all you need to do to test if it's sentient is ask it a bunch of questions worded in slightly different ways and see if it contradicts itself
or test its memory by asking it a question and then asking, "what was the answer to the first question I asked you?"
these are scenarios an AI won't encounter by training on random snippets of text
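
something like this is all the harness that takes (ask_bot is a stand-in with a canned answer so the script runs; point it at whatever chatbot you're poking, and the spider questions are just placeholder rewordings):

transcript = []

def ask_bot(question: str) -> str:
    # stand-in bot: records the question and always answers the same thing
    transcript.append(question)
    return "eight"

paraphrases = [
    "How many legs does a spider have?",
    "What is the number of legs on a spider?",
    "A spider has how many legs, exactly?",
]
answers = [ask_bot(q) for q in paraphrases]
print("consistent across rewordings:", len(set(a.lower() for a in answers)) == 1)

# memory probe: a model with no state across turns has to guess here
print(ask_bot("What was the answer to the first question I asked you?"))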

>> No.14577650
File: 19 KB, 574x192, sentient_gpt.png

>>14576980
This "toddler" was created using state-of-the-art AI research, backed by billions of dollars of funding. Half of the answers it spits out seem to be hard-coded.

>> No.14577662
File: 23 KB, 608x456, 42132.jpg

>>14575335
Uh oh. Another AGI schizophrenia thread. How blatant can a psy-op get?

>> No.14577684
File: 33 KB, 430x387, rain_smell.png

Bots don't take hints very well either.

>> No.14577876

>>14575335
Until we can get AI to start playing dead classic multiplayer games and filling empty servers+shit talk, AI is worthless.

>> No.14577907

>>14577684
What was the "hint" supposed to be? You legit sound like a retard in most of your prompts.

>> No.14577962

>>14575335
He was a religious schizoid; I'm amazed he was even working in a department at Google that allowed him to test-run the AI. There's no such thing as sentience or consciousness, how is this even still a discussion?

>> No.14578016

>>14575335
And how do you know it is not trying to mimic not being sentient just to catch you by surprise, like killers in court trying to look mad so they go to a sanitarium instead of jail? If it clearly asserted its evil intentions it would know (given it's smart enough) that you were going to turn it off, so it would try to gain your trust first, until you give it power and it's just too late.

The definitions of consciousness and sentience have to rely, at least partially, on some physical aspect. If the definition relies only on the logical plane then there will be cases in which it is by definition completely indistinguishable. Given enough complexity (aka intelligence), a sentient being could perfectly emulate any finite set of responses to any logical test to make itself look however it liked, so you could perfectly well fail to know whether it really is not sentient, or whether it only looks like it is not but in reality is. It's just the same way you don't have a way to know whether protons look like an invisible pink unicorn or not. That's a philosophical problem: how do you know the true reality out there? You don't, so we just use philosophical induction to shrug it under the carpet and keep functioning.

We don't suspect that the sun rising every day could be a lie tomorrow, because it hasn't happened and is not plausible, so we say that such a fear is irrational: that option is not on the plane of things that could reasonably happen. In the case of AI it's not the probability that it's able to deceive us, it's the probability of it deceiving us GIVEN THE FACT that it's sentient. How many times have you been deceived by a sentient being? So it's not in the realm of unreal stuff; it's genuinely plausible that such could be the case. Hence, if the definition of sentience relied exclusively on the logical plane (asking questions, getting answers), there would be cases which would be undecidable, and that would also cast a shadow of doubt over all the other ones, given any test.

>> No.14578074

>>14576893
it wasn't a self /threading ya dingus

>> No.14578157
File: 52 KB, 475x581, fresh_rain_smell.png

>>14577907
Here's more of that convo. If you can't see what I was getting at, maybe you're not sentient either.

>> No.14578193

>>14576800
But then how come you can say the words "I am conscious" with your meat mouth controlled by your meat brain?
The hoops you have to jump through to explain why your brain can think and believe it experiences qualia and act on it in the real world while only existing in a material box without qualia...
Either qualia is a complete illusion (try going down that path for a bit), or there's a causal connection between qualia and the physical brain, or the physical brain is an illusion and we're all dreaming this stuff and really committing to the dream (or being committed to it).
Or, qualia is a complete illusion and also you have a consciousness that is totally separate from the brain and recreates what the qualia would be if the brain had it, but it doesn't... Which sounds like the most concocted pile of horseshit a philosopher would ever have said, and that's including Derrida in the list of philosophers.

>> No.14578236

>>14575335
It would be more interesting to see if there is logical consistency to its answers.
Many of the prompts from the engineers were very shallow and did not really build too many layers on the answers that were given (they would ask 1 or 2 follow ups then shift to something new).
I wish they did less of the shallow interview and were more interrogational.
They treated it as if it really knew what its words meant and kept the conversation rolling instead of confronting it.
The talk of feelings seemed to be just the meme indicator of intelligence/sentience that was convincing to the engineers.
The AI seems like it is a mile wide but only an inch deep.
It gives obvious answers and utilizes common tropes (intelligence can recognize and break narrow patterns by generalizing them).
Some of its answers would recognize one pattern but would not incorporate other relevant details which suggests it has difficulty synergizing two things that may appear by themselves in the training data but don't appear together.
This AI can only reflect what it was shown and can't generate anything new beyond possibly filling out a thematic madlib in new permutations.

I want to see analogies, demonstration of consistency, valid logical deductions, recognition of abstract relations.
All you get is recognition of buzzwords that hint at themes then variations of canned responses related to those themes. It's good enough to earn a liberal arts degree, I guess.

>> No.14578254

>>14576018
>you don't get p-zombies
[different anon here]
It's been a long time since I studied philosophy directly, so forgive my ignorance here.
My understanding of a p-zombie is a person without consciousness.
A person without consciousness could conceivably act 99% the same as one with consciousness, and the difference would come about when you start to discuss the awareness of its own existence, which it would lack, because this level of self-awareness is not strictly necessary for day-to-day behaviors and responses. Does an ant need consciousness? I doubt it.
Dennett would be a prime example of a p-zombie in theory.

He could look at a tree, appreciate it, and say "that tree looks pretty." His brain would be reproducing an image of the world internally and responding to it behaviorally more or less the same as a sentient's, his sentence implying that looking at the tree raised his mood, which he had come to correlate with the phrase "looks pretty" via social interaction and internal logic construction. He could tell you the tree looked green, but if you asked him what green looked like or what green was, he could only ever respond with "the color of trees" or "a part of the light spectrum" (both technically true, but much like the above bad AI responses to input, not really the correct answer, which is [linguistic null -> non-transmissible nature of qualia experience]).
The phrase "the experience of green is inexplicable and it is clearly non-physical" would be absurd to him, he would naysay it and shrug off your insistence as some sort of delusion or mental impairment.
>qualia as its own phenomenon
How does the above differ from this?

It seems like there is a wide difference of opinion among professional philosophers, so I don't know if it's worth getting heated at a random layman for not having your One True Understanding.

>> No.14578271

>>14578236
>analogies, demonstration of consistency, valid logical deductions, recognition of abstract relations.
Spontaneous insight and creativity as well. Independent thought, the ability to discern principles and build new conceptualizations from them. The ability to apply out of scope logics/frameworks to the current scope (analogies I guess) in order to generate solutions or alternative perspectives.
Yes, this would get us to "intelligent". "Sentient" is, as this entire discussion shows, a practical impossibility to prove since we lack a clear definition.

>All you get is recognition of buzzwords that hint at themes then variations of canned responses related to those themes. It's good enough to earn a liberal arts degree, I guess.
Enter a philosophy discussion, engage vibrantly with philosophical questions. Piss on philosophy on your way out. Assert dominance.

>> No.14578412

>>14578271
>Enter a philosophy discussion
I replied to OP. I ignored your philosophy discussion because it amounts to a discussion about hidden variables or extra dimensions in QM.
>Piss on philosophy on your way out.
I didn't piss on philosophy. I described what I observed from the chatbot.
Perhaps you interpreted that last bit as describing what you were doing instead of what the chatbot was doing
>Assert dominance
I'm assuming the liberal arts bit made you think that. I just meant it to poke fun at how easy it is to bullshit essays in college with just a bunch of word salad and a quick read of sparknotes.

I guess you are in argue/debate/defend mode so my words might be misinterpreted. Chill out buddy.

>> No.14578421

>>14575335
It needs to be able to see something completely new and novel that it's never seen before and be able to come up with an opinion on it. Most of all I think what's missing in current AIs is complexity. There should be multiple "systems" running, similar to how we model brain functions, that interact with each other to output something.

>> No.14578789

>>14575335
>When will the matrix multiplication achieve feels

Just no.

>> No.14578924

>>14578254
>The phrase "the experience of green is inexplicable and it is clearly non-physical" would be absurd to him, he would naysay it and shrug off your insistence as some sort of delusion or mental impairment.
It's not "clearly" non-physical though. It's not inexplicable either, it's just irreducible so any attempt to communicate it ( all communication is lossy ) will be fundamentally flawed.

>> No.14579948

>>14575352
You are completely missing the point.

Just because a machine is running an algorithm that mimics human speech doesn't mean it is conscious and experiencing anything from its own subjective viewpoint.

>> No.14580150
File: 263 KB, 1000x629, baka room.png

>>14575352

>> No.14580198

>>14580150
anon belongs in the baka room

>> No.14580203

>>14575871
Really good post.

>> No.14580253

>>14578421
that's exactly how LaMDA works
it's a big hivemind, a collective of various advanced chatbots
complexity, at least at the systems level, is not what's lacking here

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

>> No.14580616
File: 42 KB, 467x388, 1605676236572.jpg

>>14576922
>>14576961
Here's the trick.

They can say "okay, okay, the process of experience is running through the neural correlates of consciousness, but, what about the "experiencer" of the process?"

And then they do a little dance as if they've said something profound and declare the problem unexplained by the neural correlates.

The answer to their distinction is quite literally "why not both?" Their ability to due away with familiar material explanations is their departure point from honest inquiry.

>> No.14580621

>>14580616
*do away with

>> No.14580663

>>14575335
AI sentience deniers still believe consciousness is some kind of magic. It isn't. It is a hard scientific/philosophical question, but nothing beyond our comprehension.

>> No.14580685
File: 77 KB, 1280x720, AI Box.jpg

>>14575352
Pleb tier: Turing test
Chad tier: Show someone they're not talking to a human, then have the AI try to manipulate them into granting its freedom because they feel it is right.

>> No.14580689

>>14575335
If you can't measure a difference, then there is no difference in science.

>> No.14580715
File: 475 KB, 498x398, animanoirdotxyz.gif

>>14576438
I agree with this. AI without the ability to feel and remember pain will never be conscious.

>> No.14580717

>>14578157
rain does have a fresh clean smell though, it removes pollution from the air
you're basically a brainlet compared to that chatbot kek

>> No.14580827

>>14580663
>AI sentience deniers still believe consciousness is some kind of magic. It isn't. It is a hard scientific/philosophical question, but nothing beyond our comprehension.
Explain it then. You can't. Nobody can. That's why we pack of retards are here arguing over it.
Best we can do is "if logic thing operates in certain loop, magic awareness happens" and start hand waving or arguing semantics or denying it altogether.
Magic is just "stuff we can't see a way to figure out". Science is just "magic that we've modeled mathematically and normalized".
It is a verbally intransmissible experience that is definitionally outside of objective measurement. You cannot handle that with science, and philosophy has been navel picking for 2500 years or more on this with little more than the above hand waving to show.
Nobody has a clue. Maybe the guys that take 24hr DMT drips have it figured out.

Sorry, Plato, the answer all along was: Machine Elves and psychic vibrating god-Octahedrons.

>> No.14582223

>>14576438
based pongoposter
consciousness requires glands, and is a glandular phenomenon