
/lit/ - Literature



File: 137 KB, 1080x1440, 1654136585908.jpg
No.20782962

Are philosophers' criticisms of artificial intelligence, such as Hubert Dreyfus's, still valid?
As a STEM major in a field related to artificial intelligence, I have become increasingly skeptical about AI.
For example: DALL-E and LaMDA are not about human-like intelligence at all; they are the result of focusing on refining a concept called semantic space.
Some people who study artificial intelligence are now studying phenomenology. They think that intentionality will play an important role in consciousness, a departure from the earlier analytic-philosophy attitude.
I think the philosophy of mind (in the broadest sense) helps people understand AI precisely, but I know very little about this kind of combination, for example a theory combining phenomenology and AI. Can you recommend a book or something?

>> No.20783015

>>20782962
Janny are you ok?

>> No.20783038

>>20782962
>the concept called semantic space.
Why don't you tell us about this first

>> No.20783114

>>20783038
It is about simulating the distribution of the data, not the process of reasoning. What LaMDA does is maintain something like a 10,000-dimensional vector space of "conversation" and pick the best-fitting point; that is how it produces an answer.
This is nothing like our conversation process. The biggest difference is causality: these models either can't remember at all, or they have a lot of trouble remembering.
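The "pick the most fitting point" idea above can be sketched in a few lines. A caveat: the replies, the 4-dimensional vectors, and the query are all made up for illustration; real systems like LaMDA use learned embeddings with thousands of dimensions and generate text rather than selecting canned replies.

```python
import numpy as np

# Toy "semantic space": each candidate reply is a point (vector) in the space.
# These vectors and replies are invented purely for illustration.
replies = {
    "Hello! How can I help?":        np.array([0.9, 0.1, 0.0, 0.0]),
    "The weather is nice today.":    np.array([0.1, 0.9, 0.2, 0.0]),
    "I don't know much about that.": np.array([0.2, 0.2, 0.8, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(query_vec):
    # "Pick the most fitting point": the reply whose vector is closest in angle.
    return max(replies, key=lambda r: cosine(replies[r], query_vec))

# A query embedded near the "greeting" region of the space:
print(answer(np.array([0.8, 0.2, 0.1, 0.0])))  # -> Hello! How can I help?
```

Note there is no reasoning step anywhere: the whole "conversation" is a geometry lookup, which is the poster's point.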

>> No.20783126

>>20782962
Sorry anon, I couldn't read your post on account of the scantily clad jezebel attracting my attention

>> No.20783143

>>20782962
Intelligence & Spirit, and Hegel in a Wired Brain

>> No.20783144

>>20783114
Where can I read more about this?

>> No.20783187

>>20783144
https://en.m.wikipedia.org/wiki/Semantic_space
https://arxiv.org/abs/2201.08239
https://www.crayondata.com/how-do-chatbots-work-an-overview-of-the-architecture-of-a-chatbot/
I can't give one definitive link, because this isn't some new technique; it's more like a direct consequence of NLP.

>> No.20783842

>>20782962
IMO the key to human-like intelligence (or just sentience/consciousness in general) is sensation. It is the capacity of sensation which evolves to enable experience and emotion, and the ability to feel some way about things is the actual impetus for going on to reason about them (or to build machines which process data).

My guess is that this capacity of sensation can only emerge from the unrivaled complexity of organic chemistry, and that it won't be possible to create this potential in non-biological substrates. Or to put it another way, I don't believe that simulation can achieve sentience—only replication can (which we do not yet have the know-how to approach).

Of course non-sentient 'intelligence' is still a very powerful thing and is able to dwarf many facets of human intelligence, but if the goal is to artifice something sentient then I think current efforts are nothing more than smoke and mirrors.

>> No.20784105

>>20783114
This can already be achieved with a random dice roll over a hundred or so potential replies, with one selected, perhaps using a single keyword from the previous message to whittle the list down so it 'seems' relevant.

This is more or less what a lot of people do when they talk to people already. But if anything this would be more interesting due to the higher number of potential responses, unlike the two or three you get from your best friends lol

>>20782962
Yeah it's probably far beyond our means to create an actual thinking thing. Probably we'll have simulated conversations from the little robot and we'll be dumb and argue passionately for its sentience.
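The dice-roll-plus-keyword scheme described above fits in a dozen lines. All keywords and canned replies here are invented for illustration:

```python
import random

# Canned replies bucketed by trigger keyword; "default" is the dice-roll fallback.
replies = {
    "book": ["Have you tried the classics?", "What genre do you like?"],
    "ai":   ["Machines just crunch statistics.", "Can a machine be conscious, though?"],
    "default": ["Interesting.", "Tell me more.", "Why do you say that?"],
}

def respond(message, rng=None):
    rng = rng or random.Random()
    words = message.lower().split()
    # Use a single keyword from the previous message to whittle the list down...
    for keyword, options in replies.items():
        if keyword != "default" and keyword in words:
            return rng.choice(options)
    # ...otherwise just roll the dice over the generic replies.
    return rng.choice(replies["default"])

print(respond("is ai conscious"))
```

This is essentially ELIZA-era chatbot design, which is the point of the comparison: it 'seems' relevant without modeling anything.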

>> No.20784110
File: 225 KB, 455x568, Steve-Reeves-Classic-Bodybuilding-Pose.png

>tfw /sci/ will always be living in the shadow of /lit/chads

>> No.20784111

>>20783842
>Of course non-sentient 'intelligence' is still a very powerful thing and is able to dwarf many facets of human intelligence,
Yeah this is the thing; you don't need to make a genius machine, you need to make a machine able to outfox the average dumb-dumb, which isn't a particularly high bar.

>> No.20784118

>>20782962
>Are philosophers' criticism of artificial intelligence still valid
>implying philosophers' criticism was ever generally valid
>particularly on purely speculative and technical subjects

>> No.20784131

>>20784118
Philosophers just need to cobble together some telescope or box-like machine in order to present themselves as materially adept as astrophysicists. It'll happen eventually.

>> No.20784684
File: 76 KB, 700x933, abominable_stupidity.jpg

>>20783114

It is just statistical data. A very elaborate Mechanical Turk. Not even remotely how a mind works.

>> No.20784754

Why is everyone mentally jerking off over AIs while not remotely considering our constant relationship with machines?

>> No.20784776

>>20784754
People who think about AIs probably think about that relationship far more than the average person.

>> No.20784810

this is incredibly retarded. dall-e and lamda, and generally all other recent advances in AI, are just the result of being able to scale models better, throw more data at them, and other tricks that make the results more realistic.
we're stuck at the 2nd generation of neural networks, and it's been pretty much a consensus that we won't reach true intelligence this way.
whenever (or rather, if) we suddenly make a breakthrough in neurology and comprehend how exactly neuroplasticity works, we'll be able to make 3rd-generation neural networks that actually work and don't just pretend they do. that might actually start to get scary.
until then, neural networks are just statistics: understanding the data well enough that they seem human/aware/conscious, but they're just good at faking it.

>> No.20785144

>>20782962
For the longest time I've thought that AI is no "I" and that strong AI is impossible, until I realized that neutrally monistic panpsychism is right and consciousness is just a binary algorithm whose goal is to, essentially, tell T from F because that aids survival (because there is One guiding principle to it all, the objective Truth, and the better the approximation of Truth by the algorithm, the better the survival). So there is this whole field of evolution of consciousness, and human consciousness is simply the peak of this evolution, an algorithm so good at telling Truth from Falsity it grew abstract thinking, planning for the future, and the better a species of Homo was at this, the more certain its survival, hence the domination of Homo sapiens. The algorithm began to question even itself, its own Being, as a result of the abstract thinking module. The algorithm of consciousness became a hyper-algorithm, or in another sense, normal animal consciousness became hyper-consciousness in humans.

So the redpill is that strong AI is right. It is possible to create such a hyper-algorithm. Chinese room experiment? Wrong. Just put that algorithm in an artificial body that runs on perishables by converting accessible perishables to energy, and add the goal of survival to that program. And you've got strong AI.

The problem with that is, you've created Terminator, and not an artificial human with "empathy." But ultimately you could program an ethics code into the machine. So an artificial human is possible.

Of course, you could philosophically argue that it's still not the real thing, and that the Chinese room argument still holds. My point is, at that point, the distinction is purely philosophical, as the machine walks, breathes, talks and wants to survive as much as you do.

tldr:
strong AI is only possible once we crack the code of reality and codify what objective Truth is

>> No.20785153

You can't have ai and an egalitarian society.

>> No.20785157

>>20782962
Humans aren't computers. Intelligence is not a series of inputs and outputs. Intelligence probably doesn't even objectively exist; it's just a mode of human behavior, the same way beavers build dams. I am 100% convinced that the key to understanding human consciousness lies not in neuroscience but in paleolithic history, when humans became human in the first place. Look at feral children like Genie, who were raised without any human contact and are basically wild animals. That's what happens when you remove 100,000 years of social behavioral evolution.

>> No.20785165

>>20785157
Humans are computers as much as a dog or a squirrel or a fruit fly is a computer.

Fun fact. We've already cracked the code of a certain nematode (C. elegans). Only 302 neurons, yet the algorithm is incredibly complex. They first tried to map it with Monte Carlo methods before the ML revolution, but now we've achieved it with neural networks. We have a map of how a nematode's """soul""" functions

>> No.20785185

>>20785157
Your point is valid, but Genie is a terrible example. She was abused extensively from birth, and the scientists who worked with her were invested in her delayed development as they wanted to prove Chomsky's notion of a critical period of language acquisition, and they deliberately sabotaged her rehabilitation (habilitation?) until the court stepped in, took her away, changed her identity, and sent her to live anonymously with a foster family. She wasn't a feral child; she was an abused girl turned unethical science experiment. Moreover, subsequent dealings with "feral children" have shown that they can be totally integrated into society.

>> No.20785211

>>20782962
Technology is dumb and bad.

>> No.20785247

>>20783015
Are you ok? Are you ok janny

>> No.20785312

>>20785144
Where on earth did you get the idea that "the truth" is what aids survival? If you respond, do me a favour and don't indulge in circular reasoning

>> No.20785334

>>20782962
How is this thread still up? Tranny jannies kill yourselves. YWNBAW

>> No.20785647

>>20785334
By far the most interesting thread. Fags like you have destroyed this board. Go to your Evola, Jung, Mishima threads.

>> No.20785822

>>20784684
>NOOOOOOOOOOOOOOOOOOO
>you can't just have a material body behave in the exact same hecking way as any 'spiritual body' and demand that It's the same!
>It's missing the heckin spark of liferino

>> No.20785831

>>20785647
Nothing wrong with Mishima, non-faggot.

>> No.20785891

>>20785822
It doesn't behave the same. You just assume it does based on the end result because you is a dumbface haha

>> No.20785917
File: 85 KB, 694x900, divine_spark_of_the_motive_force.jpg

>>20785822

We're not even talking such lofty topics here. Practically the whole field of "AI research" these days is bollocks, dead end technology (many such cases today, sad!). Fake it until you make it isn't an applicable scientific method and won't yield any meaningful results. These tards couldn't be any further from actual neuronal networks and cognition in general ...

>> No.20785938

>>20785647
Evola and Jung have more profundity than this crock of shit you're all brewing

>> No.20785968

>>20783114
>This is nothing like our conversation process.
Not nothing like it. It's probably like part of the process. When we speak, we just start going from a vague idea, and the words flow out in a way that's not completely unlike simple Markov chains (or the more complex versions in these bots), but in humans it's all directed by other layers with higher-level considerations.
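The "words just flow out" behaviour can be illustrated with a minimal first-order Markov chain. The corpus below is a made-up snippet; real bots use vastly larger data and richer models, but the local-fluency-without-direction character is the same:

```python
import random
from collections import defaultdict

# First-order Markov chain over a toy corpus: for each word,
# record every word that was seen immediately after it.
corpus = ("we just start going based on a vague idea and the words "
          "flow out and the idea keeps going and going").split()

chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def babble(start, length, rng=random.Random(0)):
    # Each next word is drawn only from words seen after the current one:
    # locally fluent, but with no higher-level layer directing the whole.
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

print(babble("the", 8))
```

Each step is plausible given the previous word, yet nothing steers the sentence as a whole, which is the poster's contrast with human speech.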

>> No.20786017

>>20785917
This is far and away the stupidest thing I've read in, heavens, six months or so? It's impressively retarded—I don't think someone could fake that degree of stupidity.

>> No.20786032
File: 38 KB, 409x600, I'm_clearly_on_a_mission.jpg

>>20786017

Oh boy seems I got myself a new case ... :)

>> No.20786175

>>20785831
I love Mishima, but there is a thread here arguing he was not gay. Tranny porn watchers calling themselves straight are less delusional.

>> No.20786432

>>20782962
I didn't read your post but I will masturbate to your OP image.

>> No.20786740

>>20785144
>Nuclear grade pseud copypasta
This is why I come to /lit/

>> No.20786808

>>20785822
Cognitive activity doesn't work in if/then statements until the higher-tier functions. Human cognition/consciousness is a very specific and complex form of constant prediction and prediction-correction mapping.

>> No.20787853
File: 86 KB, 1140x660, 1659604122616.jpg

>>20784776
>People who think about AIs
Are we talking about actual engineers? Because apart from them, what I see is "AIs will overcome us because they'll have consciousness, reasoning and even a nice and thicc booty"

Talking about philosophy, picrel is the only one who has the faintest idea of what he's talking about imho. Enlighten me if I'm wrong

>> No.20788544

>>20785144
>wants to survive as much as you do.
Does it though? You say it wouldn't have empathy, which implies it wouldn't have the capacity of 'feeling' in general (a necessity for actual desires). Unless its goals and ethics are felt, it is not a 'being' in a very concrete sense.