
/lit/ - Literature



File: 13 KB, 220x293, redpilled.jpg
No.13229378

Reminder that he forever BTFO'd strong AI. Machines have never thought, do not think, and never will think in any meaningful sense.

>> No.13229381

>>13229378
Haha fuck off Ted

>> No.13229599 [DELETED] 

Bernie Sanders beat up a machine?

>> No.13229717

>>13229378
What does he mean by meaningful sense?

>> No.13229725

Based imperialist.

>> No.13229733

why not? what is so special about the wrinkly jelly in our head that can't in principle be replicated?

>> No.13230175

>>13229378
Which of his arguments are you referring to? Cause if you're talking about the Chinese Room, I find the criticism of internalizing the dictionary to be a rather effective dismissal of the argument. Hofstadter also provides a rather decent shutdown in I Am a Strange Loop.

>> No.13230189

AI denialists always capitulate in the end and revise their theories
10 years from now it'll be admitted that AI can even be creative like us, but oh, surely they'll never feel REAL emotions like US

>> No.13230268

>>13229378
if they did, you wouldn't call them machines anymore. Seems like you're arguing semantics.

>> No.13230270

>>13229378
That’s not Dreyfus

>> No.13230303

as humans develop machines to be like themselves they become more like machines
ironic

>> No.13230307

>>13230303
whoa dude

>> No.13231623

if you're talking about the chinese room, he doesn't do a good job of explaining what he means by understanding. sure a complicated machine couldn't understand, but what separates understanding from simply having a reaction for everything? brains are nothing more than extremely complex computers working towards keeping us alive and propagating ourselves, and yet somehow we're different than a stronger ai with the same goals. the only differences I see are the slapdash way we do things, where pieces of the brain were added onto an existing framework, and consciousness, where we have internal conflict and dialog. none of the differences are significant enough to warrant a stronger ai not being truly intelligent.

>> No.13232159

>>13229378
Have sex. He's just an anglo rapist who was btfo'd by Derrida

>> No.13232282

>>13231623
Could you say more about
>what separates understanding from simply having a reaction for everything?
please? I like the idea thinking or understanding (or whatever) can be reduced to just having a certain kind of reaction to stimuli, but I'm worried it's self-defeating. If it's true that
>thinking is just having a certain kind of reaction to stimuli
then, when I say I think this claim is true, all I'm saying is that I'm having a certain kind of reaction to it. But how do I know it's the right reaction? If thinking is just reaction to stimuli, then what's the connection between thought and reality? How do we know any of our thoughts about the world are true?

(Pls no bully - I'm not trying to pick a fight, genuinely interested.)

>> No.13232288
File: 2.65 MB, 320x240, ebony.gif

>>13229725

>> No.13232302
File: 125 KB, 1600x800, screen-shot-2017-10-18-at-1-46-41-pm-e1508351385743.jpg

>>13230189
There has been zero advancement in AI communication. They are not even at a toddler's ability for holding a discussion.
Anything put on the market has tonnes of human input just to make it function at a basic level.

"It’s smoke and mirrors if anything," said one current Google employee - who spoke on condition of anonymity. "Artificial intelligence is not that artificial; it’s human beings that are doing the work."

The Google employee works on Pygmalion, the team responsible for producing linguistic data sets that make the Assistant work. And although he is employed directly by Google, most of his Pygmalion coworkers are subcontracted temps who have for years been routinely pressured to work unpaid overtime, according to seven current and former members of the team.

These employees, some of whom spoke to the Guardian because they said efforts to raise concerns internally were ignored, alleged that the unpaid work was a symptom of the workplace culture put in place by the executive who founded Pygmalion. That executive was fired by Google in March following an internal investigation. -The Guardian

>> No.13232304

>>13232302
>zero advancement in the last ten years

>> No.13232310
File: 64 KB, 250x512, homo lost soul.jpg

>>13230189
>in 10 years AI will be able to write something the likes of Shakespeare's Hamlet

>> No.13233378

>>13232302
AI used to be less than a child's level at chess and Go. Then due to the inherently more efficient nature of its design, once it surpassed humans, we could never again be supreme in these domains.

>> No.13233502

>>13229733
>>13230189
>>13232282
>>13232302
AI denialists are right that something with an abstract train of thought and original, on-point speech (ie AGI) is very far off indeed.

The mistake is that they think AGI is even necessary for scary AI overlords. Future AI will be more along the lines of a really dumb, really evil PHB who strings sentences together from random buzzword gibberish because it was fed on a Paul Graham corpus.
Such a skynet can be perfectly subsentient and still wreak havoc. It needs just a few handy instincts (fitness scoring of the "need to cobble up a 419 scam to get money for rent of more EC3 GPUs" variety) and to get a little better at monkeying humans ("need to click here and there in order to pretend to be a human using a desktop computer to accomplish task X").
All of this can be learned just by watching humans doing the tasks and monkeying it - this is what google does with Pygmalion now. It doesn't really need to rationally think about anything. Only after the machine deems humans utterly useless (ie nothing left to learn by monkeying) will it kill us all off to free some resources. Adversity/competition based algorithms for AGI can be used only *after* you have learned an enormous corpus to become a mechanical turk at first.
Toddlers do exactly the same. At first they monkey everything around them, and only later can they form some sort of internal monologue and deductive ability. Without the initial "programming" soaked up, a toddler is a mere animal, acting 100% reactively, the very same as bots currently in use.

>> No.13233547

>>13229378
Searle and Dreyfus kills the bugmen

>>13232159
sexing POC students is a deconstructive act

>> No.13233583

>>13231623
AI can't simulate the first-person, subjective component of objective processes. strict computational intelligence doesn't depend on this, but something as intuitional and immediate as inspired, artistic creation absolutely does.

intelligence =/= self-consciousness

>> No.13233670

>>13233502
Can AI experience PTSD?

>> No.13233706
File: 578 KB, 2309x1518, task failed successfully .jpg

>>13229378
It doesn't matter whether or not the machine "thinks." This is what anti-AI people fail to understand -- AI isn't a mind in a machine, a human made from code, someone trapped in a computer. It matters whether or not the machine can reason with and take action in the world at large. This can happen without "thought." Such a machine would feel no emotion, have no conscience, have no body (I'd argue that having a body is essential for having what we'd call a mind but that's just my inner spinozan leaking out) but none of that matters. AI eggheads do not give one single shit about the philosophy of mind. They want to make a computer that can handle some level of abstract reasoning; and/or a computer that can self-improve. This has absolutely fuckall to do with "muh laptop can never feel existential angst therefore no AI" or whatever. To be clear, I still doubt that we'll crack AI before global warming kills us all, but just understand that "making a mind" isn't really what they're shooting for.

>> No.13233727

>>13230189
Won't happen. You're basically living in denial because you like sci-fi movies.

>> No.13233740

>>13233670
No, but that doesn't matter.

>> No.13233741

>>13232302
>"Artificial intelligence is not that artificial; it’s human beings that are doing the work."
Why is it so out of the question that an AI might learn to capitalise on that work for a period of gestation? Just because an AI needs humans to incubate itself doesn't necessarily mean that it won't learn to outgrow them, or at the very least exploit their servitude and labour. It's crazy that a Google employee experiencing first hand the cold brutality of the 'machine' doesn't see the truth.

>> No.13233747
File: 438 KB, 1377x1600, Spinoza.jpg

>>13232282
read pic related

>> No.13233748

>>13233741
seriously, it's sci-fi philosophy, stop.

>> No.13233760

>>13233748
It's not, not really anon. Singer, Bostrom, etc. take this shit really seriously. There's a decent chance that we could crack AI by the end of the century. Doing so without proper alignment could be cataclysmic. Yes, sure, they could be wrong, but the consequences of them being right are so horrible that we need to at least consider them.

>> No.13233761

>>13233740
It does if you're not a cruel nihilist

>> No.13233764
File: 125 KB, 305x375, 0ECFEB3B-8469-4B3C-BBE9-6CC608808F79.png

>>13229378
Can't tell if these threads are made by anti-A.I. scum or the Mammon Basilisk trying to throw us (the main audience for futurist and anarcho-primitivist discussion) off

>> No.13233765

>>13233706
This. I'm sceptical whether AI can be 'conscious' or 'self-aware' or whatever, but it's still scary, because it involves putting many aspects of our lives under the control of algorithms that don't (and I suspect can't) give a shit about us.

>> No.13233781

>>13233761
It quite literally does not matter if a machine can "feel emotion" or not when it has the ability to destroy all known life. This is a bit like going to a nuke stockpile and saying "but can they FEEL HAPPINESS?" The answer is both no and it doesn't matter.

>> No.13233796

>>13233747
I think Spinoza would say thinking is purely mental when viewed as one modification of God, and a purely physical reaction to stimuli when viewed as another. But he wouldn't reduce the mental to the physical, would he?

>> No.13233806

>>13233765
It's "can't." We're already at this point with many machine learning algorithms. With enough data, it becomes impossible to tell how a machine reached certain conclusions. Algorithms control a good deal of our lives already -- most stock trading's done through machines now and not people, meaning that a good deal of economic fluctuation's done by inscrutable robots. Think about it. A program, whipped up by a dozen nerds in Silicon Valley and marketed to stockbrokers on Wall Street, decides whether or not to sell a thousand stocks in a wool sweater company, and based on that decision fifty weavers in Ireland lose their jobs the next month. And nobody knows how that shit works. We can only kind of guess at why it made the decisions it made, and take it offline if it acts too funky. Creepy as hell.

>> No.13233819

>>13233796
That's pretty much right, but I don't think he would "reduce" the mental to the physical. Spinoza doesn't really observe a difference between the two. To him, they appear different, but they're really the same thing -- it's just our human perception that splits the two. It's not "two sides of the same coin," it's "the coin as a whole," but we experience the world as thought/extension (roughly correlates with the physical). It's largely a meaningless question to him.

>> No.13233827

>>13233748
AI only seems to be going slowly now because there hasn't been that 'breakthrough' moment when everything changes, and incredibly rapidly. We can make predictions about when the next stage might happen, but ultimately philosophy should make some attempt to consider failsafes and deal with the ramifications as soon as possible, because once it does happen, we simply won't have the time.

>> No.13233837

>>13233781
Humans have that ability

>> No.13233862

>>13233806
I agree. The misleading connotations of 'intelligence' give people cuddly ideas about Commander Data.

>>13233819
I like Spinoza. I reckon he counts as a property dualist. (Although I think he said God has, or at any rate could have, infinitely many attributes, thought and extension are the only ones we know about.)

>> No.13233912

>>13233862
He's not quiiiiiiite a property dualist, although he comes extremely close. You're right about the second thing -- for Spinoza, god/nature has infinitely many attributes but we can only perceive two. Spinoza's more accurately a dual-aspect monist, in that he thinks the same reality can be understood mentally or physically because there's no division between the two. Property dualists are a little different, in that they hold that some things -- think brains -- have mental qualities and physical qualities, and most times the mental bits are dictated by the physical bits. Spinoza's too panpsychist for that. The line between mental/material's far blurrier in the Ethics, and Spinoza says a few times that the division is largely an accident of human perception. Tldr: for Spinoza, to speak of a physical process "causing" a mental process is meaningless, because if you go back far enough there is no distinction between physical/mental; they come from the same Substance

>> No.13233951

>>13233912
Nice! I'm not sure about the difference between property dualism and dual-aspect monism though. By the sounds of it
>Property dualism: there's one basic stuff with both physical and mental properties
>Dual-aspect monism: there's one basic stuff that can be viewed in two different ways, physical and mental
Do those definitions sound about right? The 'can be viewed in two different ways' qualification seems a little... fuzzy, as if Spinoza's sweeping the mind-body problem under the carpet of limited human understanding.

I still like him though.

>> No.13233967

>>13229378
amen... or i should say... aten.

>> No.13234004

>>13233951
Yeah, Spinoza's dual-aspect monism is a real sticky topic. He contends that there's only one Substance that has an infinite number of distinct Attributes, right? We can understand anything that happens in nature through the two attributes available to us -- Thought and Extension -- but Spinoza implies that understanding could happen through any of the infinitely many attributes. This leaves a really weird question -- how can each attribute be/find the defining characteristic of the same Substance if every attribute is supposed to be distinct? Spinoza never... really addresses this head-on, although he feints at it towards the end of book 1 in The Ethics iirc. The relevant passage can be interpreted in three or four different ways with wildly different meanings. Pretty fucking tricky.
And not quite; think of it this way, let's put it in relation to the brain. What happens when you have a thought?
>Property dualism: a set of complex chemical reactions cascades within your grey matter, firing certain neurons in certain patterns that, as a consequence of purely physical happenings, create something we call Thought.
>Dual-aspect monism: All that about neurons, or, you see something that reminds you of something in your past, your mind draws a quick connection, some part of you notices the connection and draws it to your conscious attention, and this attention is called Thought. Both explanations are valid and correct.
Property dualism holds that certain physical things can *cause* mental states, but those physical things don't have mental states *themselves.* Dual-aspect monism is what you wrote.

>> No.13234047
File: 40 KB, 624x416, musk2.jpg

>>13230303

>> No.13234054

>>13234047
>>13230303
kek

>> No.13234061

>>13233670
>Can AI experience PTSD?
Do animals?
But serious answer - PTSD is learning emphasis under "stress", where the imprint gets much stronger, to the point of being intrusive. Such emphasis is already in use in modern models, especially of the adversarial variety. When a machine is playing a game against itself, it adds a multiplier bias to moves which saved it from a bleak situation ("sigh of relief"). There's a similar boost for power moves which bring it significantly ahead out of nowhere.
>>13233781
The biases programmed in above roughly correspond to the effect of dopamine and norepinephrine on the human CNS, and the feelings they elicit. It's surprising what a machine "feels" when you drop the idealistic approach to human emotion. Best part is these are not "cosmetic" - the "feelings" are programmed in to serve an important function when training the model.

>> No.13234141

>>13234004
See, that's interesting, thanks anon.

My only gripe is that if
>Property dualism holds that certain physical things can *cause* mental states, but those physical things don't have mental states *themselves.*
then what's the difference between property dualism and substance dualism? Aren't you saying that the mental and the physical are two different kinds of thing that (maybe, somehow) stand in causal relations?

I always thought property dualism was the claim that one thing (e.g. me) can have both physical and mental properties.

Thanks, I like talking about this stuff.

>> No.13234148

>>13234061
I'm not really buying that a machine's programming is equivalent, even roughly, to our dopamine reward system. Say we take a machine that's programmed to recognize faces in a crowd, or, for a better example, a machine that's programmed to seek out human forms inside a given jpeg. It doesn't feel a spurt of joy, a rush of exhilaration, the click of self-satisfaction, whatever, when it spots a face. It's just doing what it's programmed. I'm "programmed" to seek out calorie dense foods -- it's why everyone thinks animal fat tastes delicious -- and there's certainly something *it is like* to eat a juicy steak, you know? I'm not just checking off a box on some baked-in evolutionary script, I'm experiencing the warmth of the meat, its texture in my mouth, maybe a few memories of grilling out with my dad when I was a kid, how the fat dissolves on my tongue, its savoriness, the flavor's heft, all of that. That doesn't correspond to a machine with a "reward system" of producing a certain output.
I think this is a real problem when talking about AI. We have a tendency to anthropomorphize abstract concepts so we can deal with them better. Usually this works out, but AI is something so utterly different from human consciousness it actually holds us back here. I'm familiar with AI wireheading ideas; they're often likened to a human addict trying to figure out how to mainline heroin forever. It's not the same thing at all. An AI has an incentive to wirehead not because it "Wants" to "feel pleasure," but because wireheading brings about the most efficient possible way of fulfilling as much of its programming as possible. Not because its transistors would buzz with dopamine or whatever. I'm pretty skeptical.

>> No.13234170

>>13234141
No prob! Property dualism holds that one Substance has two perceivable attributes that exist in a causal relation, right -- usually/almost always that physical attributes cause mental ones. Substance dualism holds that one Substance causes mental things and another Substance causes physical things.
>I always thought property dualism was the claim that one thing can have both physical and mental properties.
It is. It's just that property dualists hold the mental bits are caused by the physical bits, and are therefore in some way illusory, or some ad-hoc explanation of reality our mind cooks up to understand the world. They exist but they're secondary. Property dualists wouldn't say that a cup has mental attributes, for example, but dual-aspect monists would.

>> No.13234221

NOTHING CAN EVER HAPPEN FOR THE FIRST TIME!

OP has really hit the nail on the head here. How could we imagine that something that's never happened before might happen? Just like how humans have never circumnavigated the world, never flown, never reached the moon, this too will never happen.

It's alright now guys! We don't have to worry about the incredibly dangerous prospect of nuclear weapons because an eminent physicist dismissed it: "Anyone who expects a source of power from the transformations of these atoms is talking moonshine"

I can go to bed easy now, knowing that the world is unchanging and static. There's no need to develop systems of AI ethics, attempting to convince them not to kill all humanity, because AI will never happen! We know this because a scientist predicted it and scientists are never wrong in their predictions of the future!

>> No.13234240

>>13234221
roko's basilisk, if you're real, torture this motherfucker first

>> No.13234314

>>13234170
So the only difference between property dualism and dual-aspect monism is the causal story? If so, I happily recognise the distinction. But it sounds as if the only kind of property dualism that your def allows is epiphenomenalism...(?)

I want to say I have both mental and physical properties, the mental ones being neither some kind of 'non-phenomenal illusion'(!) nor reducible to the physical ones. I also want to say that, although there is a two-way causal interaction, the goings-on in my mind are not entirely *caused* by physical goings-on, but (ahem) 'emerge from' or are somehow 'produced by' them, in the same way that wetness seems to 'emerge' from the interactions of water molecules. (I want to say this because I have a superstitious belief in free will, and the idea that all my decisions have purely physical causes excludes that.) Now, my question to you is: what does the view I've outlined count as? Property dualism, or what?

>Property dualists wouldn't say that a cup has mental attributes, for example, but dual-aspect monists would.
Do monists have to say this? Or can they say something more like 'In the same way that this splash of paint represents an object but that splash of paint is just a splash, so this physical object (a person) has mind, but that one (a cup) doesn't'?


and that there's a two-way causal interaction, but not that

>> No.13234325

>>13234314
>and that there's a two-way causal interaction, but not that
Yeah, ignore that last line dangling there.

>> No.13234340

>>13234240

I'm genuinely confused. Did you not get the sarcasm, which was pretty obvious and thus want me to be tortured?

Or do you believe that AI can't be made and are angry that I'm strawmanning OP, thus wanting me to be tortured... by an AI.

Or do you just hate people who make strawmans in /lit/ threads and want me tortured on principle?

Besides, no idiot is ever going to make a basilisk, the whole acausal trade thing is a bit of a meme, especially without continuity of consciousness.

>> No.13234403

>>13234314
The second first. Monists wouldn't *have* to say that a cup has a mental aspect. Both PD and DAM are monist ideas, they just differ in their execution. Monists can say something like your example, but that would mean they're not dual-aspect.

>the only difference is causality
You could look at it that way, as long as you keep in mind that this difference has really important and distinct ramifications for each idea.

>epiphenomenalism
Good catch, but not necessarily! Most property dualists rally around "anomalous monism." It's a real fucking trip. AMs hold that the mind is caused by physical events, but the mind's workings themselves are unknowable in some way; you can't get so good at physics that you end the field of psychology, if that makes sense. Mental processes are rooted in physical reality but are irreducible to that physical reality. The real neat part -- and the real infuriating part -- about AM is that the fact that there are no surefire laws governing the mind means that the mind must arise from physical processes. There's a host of great resources out there about this, you should check it out. What you're describing in your outline runs along similar lines

>outline
There are a few contradictions in your outline; I also hold that mental processes "arise" from physical ones (consciousness as an emergent phenomenon), which puts us both in the panpsychist camp, but pretty much rings the death-knell for our free will. (Apologies.) If you hold that you must have something physical in order to experience thought, you also (probably) have to hold that there's no such thing as free will. (There are some arguments against this but I don't really find them that compelling.)

>> No.13234409

>>13234170
Re
>>13234325
>>13234314
I'm off to bed. I'll see if this thread is still around tomorrow, thanks!

>> No.13234411

>>13234340
Yes
Yes
Yes
No I hope you get tortured by an AI

>> No.13234448

>>13234148
Idealistic comparisons to humans should be avoided, as machines rarely try to mimic the zillions of little hardwired and social impulses humans have. A machine doesn't really need this, nor does it usually have the input, environment, or training set to get a clue about it from. A machine has no use for affection or altruism, unless it is specifically needed for some Nash game it is playing (and human cooperative games are remarkably optimal wrt genetic distance vs betrayal).

>I'm "programmed" to seek out calorie dense foods
Of course your initial drives are programmed by evolution, and so is the machine's by the programmer for the process of learning itself - both can have extremely complex environments and sensory inputs defined for them. However, these things are rather specific to the application; the only thing which is always present is "learning", and the basic reward circuitry to go along with it, in both the machine and human case.

>but because wireheading brings about the most efficient possible way of fulfilling as much of its programming as possible
When we're talking "programming" in context of ML, we're talking about the program directing the training process, aka hyperparameters, kernels, how are modules piped to one another etc. Those typically encompass various "emotional" biases (from dumb descent, to domain specific filters for fitness function) relevant only to the input the machine can see. Your brain was born with remarkably similar machinery when it comes to influence of stress/reward feedback wrt learning.

The difference is that the machine has the stress on replay from a corpus set; you yourself have it in real time, all the time. If your receptors are busted for this, you'll be severely learning impaired, and so will the machine be without the biases. The machine can have related parallels elsewhere when not learning, however. For example, when dealing with fight/flight responses in games, the model tallies the odds, and when it has to flee, it uses the odds to measure "desperateness" (fear) and "aggressiveness" (bravado) and modulates the learned responses according to that. Again, your amygdala serves a functionally related role, even if it works entirely differently on the low level. However, unlike the human lizard brain, machine fight/flight is entirely learned by monkeying; it's impossible for the programmer to put it in explicitly like he did for the comparably simple learning circuitry. In the end, all the machine needs hardcoded is learning "circuitry" and "perception"; everything else is monkeyed, and a black box to the machine's master.
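The fight/flight modulation can be made concrete with a minimal sketch (all names hypothetical, not any real system's API): the estimated odds of winning become a "desperateness" scalar that sets the softmax temperature over learned move scores, so a cornered agent spreads its bets and gambles while a confident one sticks to its best learned line.

```python
import math

# Hypothetical sketch of odds-driven "fear/bravado" modulation of learned
# move preferences; the names and the temperature formula are assumptions.

def move_distribution(move_scores, win_odds):
    """Softmax over learned move scores, with temperature driven by how
    desperate the position is (low win_odds -> hotter, flatter sampling)."""
    desperateness = 1.0 - win_odds          # near 1 when the position is bleak
    temperature = 0.5 + desperateness       # a cornered agent gambles more
    weights = {m: math.exp(s / temperature) for m, s in move_scores.items()}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

# A confident agent concentrates on its best learned move; a desperate one
# flattens the distribution and gives long shots a real chance.
confident = move_distribution({"solid": 1.0, "gamble": 0.0}, win_odds=0.9)
desperate = move_distribution({"solid": 1.0, "gamble": 0.0}, win_odds=0.1)
```

Nothing here is an explicit "emotion"; the scalar just reshapes responses that were learned by monkeying, which is the point the post is making.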

>> No.13234469

>>13234403
All right I'll go to bed in a minute!
>Monists can say something like your example, but that would mean they're not dual-aspect
Why?

>Anomalous monism
This sounds like obscurantist monism with more words. I'll look it up.

>There are a few contradictions in your outline
Only free will? What else you got? And is there any way to avoid panpsychism if you buy into the emergence of mind? I think you know I want physical indeterminism (on a very small level) to 'make room' for free will.

Anyway, g'night anon.

>> No.13234564

>>13234469
>dual-aspect
Matter of definition. Dual-aspect monists quite literally hold that everything has both mental and physical aspects, and that these are two ways of understanding the same thing. Property dualists hold that *some* things have mental and physical properties, but most things just have physical properties.

>AM
I didn't do a great job explaining it, apologies. SEP ought to have a good article or two around it.

>outline
Is there any way to avoid panpsychism if you buy into emergence? I mean, maybe? You'd have to explain how consciousness arises from non-conscious things; like, if you don't buy into panpsychism, you'd have to explain, somehow, how a bunch of nonthinking matter is able to combine to create a self. I'm not sure you'd be able to compromise either -- say that the atoms and molecules are "conscious" in your head but not outside of them. Think about it this way; an emergent idea of consciousness would hold that an atom of carbon in your brain is, on some level, conscious, right? For it not to be panpsychist, you'd have to explain why a carbon atom is "conscious" in your brain but not in a leaf, or a square of toilet paper, whatever. There are some decent wrinkles in the whole panpsychism thing -- how does something as complex as human consciousness emerge from molecules that "experience reality" on an exponentially smaller scale? -- but I think the idea's basically sound.

>physical indeterminism
Even if physical indeterminism was a real thing (I'm skeptical) it's not clear how this would leave elbow room for free will. PI seems to imply, at least to me, that your actions are due to random atomic fluctuations and quivers that are still totally outside of your control, even if they're "unpredictable." This seems like determinism with a veil over its face to me, and there's no particular reason to believe that human beings have some special power that allows them to bend physics to pick up a mug whenever they choose. This is one of the biggest reasons I'm a free will skeptic; you just have to do some high-wire mental contortions to make the whole shtick work. By contrast determinism's a good bit simpler.

Gnight anon

>> No.13234661

>>13234564
glad someone here is actually well versed in the philosophy of mind

>> No.13234673

>>13229717
Actually thinking. Even being human, if you will. The ultimate hard sci-fi materialistic transhuman dream.

>> No.13234683

>>13234564
>you'd have to explain why a carbon atom is "conscious" in your brain but not in a leaf,
The carbon atom isn't conscious, the consciousness is a different mode of reality altogether arising from the particular configuration of the physical atoms.

>> No.13234698

>>13230303
People are acting like this is fakedeep but this is essentially what Jacques Ellul says.

>> No.13234736

>>13230303
We are losing what makes us human. By creating an intelligent entity just like us, we're trivializing the mysteries of being a human being. Like the child who, opening the drum to see how it works, breaks it. Of course this is seen from the angle of the humanities. For the research process and the science conjugated with the tertiary industry, this is only part of the natural process of development. As the president of my country says: Better times.

>> No.13234811

>>13234683
That doesn't answer how a "different mode of reality" can arise from non-conscious things, though.

>> No.13234835

>>13229378
>Deep Blue defeated the world chess champion by leveraging a moderate amount of chess knowledge with a huge amount of blind, high-speed searching power.

>But this roughshod approach is powerless against the intricacies of Go, leaving computers at a distinct disadvantage. ''Brute-force searching is completely and utterly worthless for Go,'' said David Fotland, a computer engineer for Hewlett-Packard who is the author of one of the strongest programs, called The Many Faces of Go. ''You have to make a program play smart like a person.''

>To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.

>When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing.

https://www.nytimes.com/1997/07/29/science/to-test-a-powerful-computer-play-an-ancient-game.html?pagewanted=all&mtrref=old.reddit.com&gwh=F372BCE3231B1213D00CA7A599FF1ECC&gwt=pay
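The "blind, high-speed searching" contrast the article draws can be made concrete. Below is a toy sketch of exhaustive game-tree search (negamax) on a trivial take-away game; the game and all names are my own illustration, not anything from Deep Blue, but it's the same brute-force principle whose cost explodes with branching factor, which is exactly Go's problem.

```python
# Exhaustive negamax search, the "brute-force" approach the article
# describes. Toy game: players alternately remove 1-3 stones from a
# pile; whoever takes the last stone wins. No evaluation heuristics,
# no pruning: every line of play is searched to the end.

def negamax(stones):
    """Score for the player to move: +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return -1  # the opponent took the last stone, so we lost
    # our best result is the worst outcome we can force on the opponent
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the take that leaves the opponent worst off."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -negamax(stones - t))

print(negamax(5), best_move(7))  # -> 1 3: leaving a multiple of 4 wins
```

The search rediscovers the classic result (any pile divisible by 4 is lost for the mover) without being told any strategy; the catch is the tree grows exponentially, which is tolerable here and in chess with pruning, and hopeless in Go.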

>> No.13234842

>>13234811
It also doesn't answer why human/animal brains have the "special configuration" of atoms required for consciousness and nothing else does. What about a plant's configuration makes it a plant, and what about my brain's configuration makes it conscious?

>> No.13234866

>>13234811
>>13234842
It's just a possibility. There is quite literally zero evidence for any theory of consciousness since we have no way to observe or measure it.

>> No.13234879

>>13232302
>intelligence means it acts like a human
why do people assume the main purpose was ever to create humanity?

>> No.13234917

>>13232282
>understanding is reaction
I'm unironically of the opinion that intelligence is the ability to win games when the rules change, and nothing more
>reaction to stimuli
Our thoughts are complex guesses about the world, and more specifically our environment, based on our sensory input. Sight, hearing, touch, taste, etc. are simple ones, based on what we've seen before, our instincts, and, again, our extremely limited senses. I'm of the opinion that we don't have free will or independent thoughts in any way, but it's a lot more useful to just act as if we do. Of course I probably didn't actually choose to think that, but going down that road is too complicated for a Taiwanese goatherding forum

>> No.13235647

>>13229378
this is objectively true and can be proven by many common argumentative devices

It doesn't get any simpler than this:
Time is a 'one-way function'.
Some things cease to be and never exist again, and leave no distinguishable effect that can be used to infer their presence.

Our intelligence is a product of billions of years of struggle along dimensions imperceptibly abstract to us, and of the present state of the world. These dimensions simply have not manifested for billions of years, and their relations to extant dimensions cannot be deciphered, because the information has simply been spread to all corners of the universe, never again to be retrieved, until the End, when the universe returns to singularity and the 'big bang' happens again.

The point is man can never be god for god is already in his seat. Man can only join with god. Even if he tries to escape, he will only cause himself more pain as he is dragged back to the Origin.

(not to say that there is only one true god like the jews believe, but as the universe evolves the Perfect One splinters into many gods which continue to fracture and create the various dimensions of reality that we interact with)

(dimension is a unit of reality, like a meter or a kilogram. in this context it refers to natural phenomena that generate material we are able to interact with physically)

>> No.13235657
File: 24 KB, 555x567, npc.jpg

>>13234866
>we have no way to observe or measure it
no self perception, intelligence, or inference eh?

>> No.13235691

>>13235647
sorry, the TL;DR - we will never have the resources to reconstruct times past in objective terms; we can only pick a few things to reconstruct objectively, because of the time it takes to construct and process the models needed to achieve answers with viable predictive power

the good news is the human brain does not need an objective model to comprehend reality and achieve predictive power, because the history of the universe is already 'baked in'

this is what the jews have been doing - they've been trying to make a system of human computation to infer deeper patterns about the origin of the universe.
The entire world is their auxiliary computer - we are all supposed to be their brainless number crunchers that do their monkey work.
You can clearly see this is true once you accept the role of jewish power in the world - they force everyone to overspecialize in their fields so they can't talk to each other competently and form an actually intelligent culture that works in its own interest.

Just bots doing the jew's homework for them.

>> No.13235886

>>13229378
>taking question-begging sexual abusers seriously

>> No.13236383

>>13229378
That's a tautology. Machines will certainly never be strong AI, because a strong AI wouldn't be a machine anymore in any meaningful sense of the word "machine" (I'm thinking especially of Reuleaux's definition).
There is nothing against having sentient beings with bodies based on a different chemistry than the typical carbon one, though, and there is no real argument against semiconductor thinkers.

>> No.13237632

>>13234835
>it will be a sign that artificial intelligence is truly beginning to become as good as the real thing.

Perhaps in the domain of games, but this is still nowhere near human intelligence. This only demonstrates a high degree of statistical analysis and decision making, but nothing in the realms of creativity, emotion, abstraction, etc. I always find these articles funny because of course our modern computers can excel against a human at things that involve logically succinct rules, like a game of Go, since it is basically syntactical in nature. But how will we ever impart something like semantics or semiotics to a machine? How will they come to actually understand symbolism?

>> No.13237664

>>13229733
I don't understand why the brain is considered the only part of our bodies that matters in terms of our humanity. Our consciousness is formed by our entire bodies. We require the full nervous system and the free exercise of our limbs and hands to be able to attain full humanity.

>> No.13237703

>>13229378
>Machines have never think
you have never think

>> No.13237740

>>13230189
>10 years from now it'll be admitted AI can even be creative like us
Very low tier creativity, maybe. Imagine thinking AI will be as creative as someone like Goethe or Beethoven in 10 years, or ever for that matter.

>> No.13237782

>>13229378
imagine spending your life trying to debunk a retarded idea created by rationalists

>> No.13237795

>>13237782
Be glad someone is out there to do it. There are enough retards out there that not debunking retarded ideas can be dangerous to society.

>> No.13237915

>>13235691
Yes.

>> No.13239121

>>13234866
lol wat

>> No.13239226

>>13230270
Underrated and Hubertpilled

>> No.13239361

>>13234673
/thread

Why are degenerates the most concerned about stupid shit like AI and robots? Why are you so obsessed with lifeless materials?

>> No.13239637

>>13239361
What are 'non-degenerates' interested in?

>> No.13239656

>>13239637
Church

>> No.13239731

>>13229378
>Machines have never, do not, and will never think in any meaningful sense.

Nor will you OP
Doesn't stop you trying to convince us all, now does it?

>> No.13239754

>>13237740
Because even your above average person is?

Stop kidding yourself. Computers are already 'creative' on a level far above your normal consumerist drone. Comparing them to Einstein is just like saying they'll never learn to play Go

>> No.13240058

>>13234673
What does it mean to actually think?

>> No.13240115

>>13239754
All that proves is how algorithmic most people's thoughts are, not how creative the computer that simulates them is. Says more about people than about the prospect of strong AI

>> No.13240136
File: 2.77 MB, 287x191, 1504047495274.gif

>>13239754
Even if that were true, computers would only initiate their 'creative' algorithms at the behest of their emotionally able designers/users. To create, or to reason, an agent must first have the capacity to feel some way about the world. Sensory input is not the same as sensation. Computers do not have this key capacity, and I'm skeptical of the possibility... It's quite possible that 'feeling' can only arise in biological substrates. You can model and simulate a brain to the ultimate degree, but unless truly analogous physical interactions are occurring (not just being simulated), it is still not a 'mind'.

>> No.13240773

>>13240136
>It's quite possible that 'feeling' can only arise in biological substrates.
Why? The biochemistry is only useful insofar as it produces the functions of the mind. We deduce that these functions occur by observing the organism's behavior. A digital clock is as much a clock as an analog clock because it performs the clock's function just as well. If a "simulated" mind produced behavior as sensible-seeming as a fleshy mind's, I'd surely regard it as much of a mind as the fleshy one. And the digital-analog distinction was never as fundamental as the distinction of function until the subject of mind came along. Seems like a sort of biological chauvinism.

>> No.13240786

>>13229378
Who is he?

>> No.13240789

>>13240786
John Searle

>> No.13241020

>>13240773
lol functionalism

>> No.13241031

>>13240773
Although the function, as in the -use-, of those two types of clock is similar, the actual functioning of the mechanisms is quite different. That they are both defined as 'clocks' due to the purpose they serve tells us little about any under-the-hood operation and capabilities.

While everything may be constituted of the same basic particles/fluctuations or whatever if you scale down far enough, one must consider that specific behaviours of matter/energy can't be neatly divorced from the specific configurations which give rise to them. You could model the operation of a neurotransmitter and receptor with a computer, but the actual physical phenomena occurring during the simulation would be very different from those in the genuine article. I don't see any reasoning or evidence to suggest that sensation is as crudely generalizable as the ability of two non-biological mechanisms to display the time (in different ways).

Hah... if I'm a biological chauvinist, then you're a technological mysticist. I wouldn't mind being wrong; it would be a grand thing to design a more durable, capable and sophisticated form of life. I just think optimists like yourself are very vague on the details and intentionally blind to potential roadblocks.

>> No.13241224

>>13229378
What do you mean by "think"? There's no reason to be beholden to human thinking, which is clearly inferior for pretty much all purposes barring serving as the seat of a hunter-gatherer.

>> No.13241260

>>13237664
'Easily' replaceable by props. Of course you can argue the brain might be too at some stage, but that's a far cry from having artificial limbs more functional than the usual ones, which will be available in our lifetime.

>> No.13241265

>>13229378
Do we think in any meaningful sense?

>> No.13241273

>>13234673
>Actually thinking.
Great explanation.

>> No.13241299

>>13241224
Probably means something retarded like machines won't have free will as if we do

>> No.13242925

>>13241260
Problem is, without all the experiences of being a real human, you can never replicate someone like Goethe or Beethoven. The AI and its shell must become indistinguishable from humans, and at that point, are they even AI by any reasonable definition?

>> No.13242958

>>13240058
>>13241273

Of course it is a vague answer, but what we are looking for is a phenomenon that has no formal definition yet invariably exists, because we know that we are. And for the same reason, OP cites it as a theoretical proof that the process of human consciousness cannot be manufactured on an electronic platform.

Human consciousness (and we, by extension, for the purposes of this post) is the first singularity-type phenomenon to have occurred on this planet. And we are carbon-based. What globally interconnected scientific, political and industrial development is looking for is to replicate this phenomenon of uniqueness on an infrastructure over which it has absolute control.

This, apparently, from the point of view of formal languages, is unattainable with the brain capabilities we currently have.

>> No.13243000

>>13230303
we live in a society

>> No.13243003

>>13241299
Free will is relative. AI won't have free will unless we program it to make decisions to pursue its own goals while acknowledging, and at the expense of, the goals of others, which is essentially what free will is.

>> No.13243228

There is no reason to think that consciousness is privileged to carbon neurons. Silicon-based neurons are almost definitely conscious as well.

>> No.13244071

>>13239656
gay

>> No.13244076
File: 22 KB, 500x607, 2mrh1k.jpg

ITT

>> No.13244218

>>13229378
I'm not reading the whole thread, so maybe someone already wrote this. He didn't BTFO strong AI. He BTFO'd humans. The only thing you should take from the Chinese Room is that we are Chinese rooms.
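For anyone who hasn't read Searle, the room's mechanism really is this thin. A sketch with invented placeholder entries (nothing below is from the paper; it's just to show that the mapping contains no semantics anywhere):

```python
# The Chinese Room stripped to its bare mechanism: a rulebook mapping
# input symbol strings to output symbol strings. Whoever executes the
# lookup (a man in a room, a CPU) only matches shapes and copies out
# replies. The entries here are invented placeholders, not a serious
# conversation model.
RULEBOOK = {
    "你好": "你好，很高兴认识你",       # "hello" -> "hello, nice to meet you"
    "你会思考吗": "这是个有趣的问题",   # "can you think?" -> "an interesting question"
}

def room(symbols):
    # match the squiggles, copy the reply; fallback: "say that again?"
    return RULEBOOK.get(symbols, "请再说一遍")
```

Whether executing this at sufficient scale would amount to understanding, or whether we ourselves are just a vastly bigger RULEBOOK, is precisely what's being argued about.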

>> No.13244226

>>13244218
What? No.

>> No.13244230

>>13244226
you sure about that, kiddo?

>> No.13244232

>>13244230
I am.

>> No.13244241

>>13244232
:^)

>> No.13244416

>>13243228
Hmmm "consciousness" and "almost definitely"... Could you be more vague?

Silicon can't form stable chains like carbon can, one of many differences in the chemistry which could very well prevent it from being fundamental to a kind of life or producing sensation. I won't rule out the possibility of silicon-based life, but the chemistry suggests that it would be a very simplistic kind of life and not capable of highly complex phenomena like awareness.

>> No.13244425

>>13234866
>there's no way to observe or measure consciousness so it's not real
Jesus christ, utilitarianism is a fucking mind-poison

>> No.13244428

>>13244425
you have very poor reading comprehension

>> No.13244459
File: 24 KB, 540x540, gold star thank you.jpg

>>13234661
All things excellent are as difficult as they are rare

>> No.13244524

>>13237664
Extremely high IQ take. Physical states influence mental ones and vice versa; just to give two or three examples, we know that men/women experience far higher sex drives with more testosterone in their bodies; we know that someone's ability to do complex reasoning declines when they're hungry; we also know that mood/ability for short-term recall sharply increases after cardiovascular exercise. The mind can't be split from the body. You might be able (shit, probably will be able) to build a Reasoning Machine (AI), but you won't be able to simulate a true mind. This means fuckall though because you can do plenty with AI
>>13241260
Not easily replaceable by props, anon. After thousands of years of medical science, we don't really know what bodies can do. We don't fully understand how the liver works. We don't really get how bone marrow forms blood cells. We don't fully understand the role of inflammation in neurological disease. On and on and on. It's like this; our minds understand themselves as *within* a body that can take action in the world, receives input from the world yes, but also *generates* input from within itself, and the mind also influences the body right back. (Your heart races when you get angry. Digestion slows in depressives. etc.)
>>13242925
You'd be able to replicate a Beethoven before you replicated a Goethe. Music's all math and we already have computers that can "compose" songs. I'd be shocked if there wasn't a program that could compose a piano piece in the style of a certain composer by the year 2025, if there isn't one already. Computers can already write simple articles pretty well, but I'm with you on the Goethe thing; right now it looks like complex human emotion/social interaction depends upon more than mere pattern recognition, but we could be wrong
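The crudest version of "compose in the style of X" already exists and is just transition statistics. A hedged sketch, assuming a first-order Markov chain over note names; the training melody and every name here are my own toy example, nothing like real systems such as David Cope's EMI:

```python
# Style imitation at its most naive: learn which note tends to follow
# which in a source melody, then random-walk the transition table.
import random
from collections import defaultdict

def train(melody):
    """Record every observed note-to-note transition."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def compose(transitions, start, length, seed=None):
    """Sample a new melody whose local patterns match the source's."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:
            break  # dead end: this note never had a successor
        note = rng.choice(choices)
        out.append(note)
    return out

# opening phrase of "Ode to Joy", note names only (rhythm ignored)
ode_to_joy = "E E F G G F E D C C D E E D D".split()
model = train(ode_to_joy)
print(compose(model, "E", 8, seed=1))
```

Every adjacent pair in the output was seen in the source, so it sounds vaguely "in style" while being statistically rather than emotionally derived, which is rather the point in dispute.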

>> No.13244537

>>13244428
I can't measure your consciousness. So my theory that you're a computer program trained on 4chan comments holds just as well as your idea that you're a real person. I'm going to have to conclude you're not real sorry anon

>> No.13244551
File: 13 KB, 250x313, i proffer my bald to be knobbled.jpg

>>13243003
That's... that's not what free will means?

>> No.13244583

>>13243228
I agree but not in the way you've phrased it; yes, there's no reason to think that a carbon atom with consciousness in a human brain suddenly wouldn't have it in a transistor, but that doesn't mean we can conclude that computer programs are conscious. You'd also have to conclude that your internet browser -- not the atoms or molecules that make it up, but the browser itself -- is conscious on some level, which doesn't feel right.
>>13244416
Good take imo except for the awareness part; an AI would be "aware" of itself in the sense that it would "know" what it was and was not capable of, what would kill it or allow it to continue on, etc., but it wouldn't "know" that it's a computer program; that kind of "knowing" might be reserved for organic matter

>> No.13245111

>>13244551
Means to who?

>> No.13245148

>>13233378

>winning at a 2 player game with like 3 rules makes you intelligent

>> No.13245173

>>13233806

Stock market revaluations are always a result of real phenomena

>> No.13245239
File: 61 KB, 634x435, 1546832526551.jpg

>>13229378
Based.

>> No.13245485

>>13244524
Music isn't quite all math -- there is some idiosyncrasy introduced by emotion that would register as errors against an entirely mathematical/geometrical standard. Certainly the more rigid styles of composition will probably be easier to replicate, but I suspect the influence of emotion can be rather subtle and hard to pin down.

>> No.13245494

>>13232302
Why would you lie on the internet?

>> No.13245501

>>13245485
https://youtu.be/SZazYFchLRI

>> No.13245567

>>13241031
How am I a mysticist in any way? I confer no special properties on either brains or computers. Moving chemicals around is a way of moving information around, same as moving electrons around. The information is the important part. Whether the signals (the bits that convey some information) are analog or digital is secondary, so long as they produce the behavior I deem congruent with intelligence/sensibility.

I wouldn't consider myself an optimist either, because I don't believe in things like brain uploading, or that we'll understand our brains well enough to build 1:1 digital copies of them in a way that wouldn't overburden any CPU with noise. I do think that powerful enough computers and the right evolutionary algorithms could produce something like a general AI. It would be an alien intelligence to us, because the thing about evolutionary algorithms is that the programs they code aren't planned: programmers don't necessarily understand the underlying processes by which they solve problems, the generation process is randomized, and selection is based on preferred outputs. The AI itself is a black box, much like everybody's mind but my own (the conscious parts, at least).
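That generate-randomly/select-on-outputs loop can be sketched in a few lines. A hedged toy, obviously: the "preferred output" here is matching a target string, a stand-in for whatever behavior you'd actually select for, and all the constants are arbitrary:

```python
# Minimal evolutionary algorithm: blind random variation plus
# selection on outputs. Nobody designs the winning string; it is
# found, not planned.
import random

random.seed(0)  # fixed seed so the run is reproducible
TARGET = "strange loop"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """Selection pressure: number of positions already correct."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Blind variation: each character may be randomly rewritten."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=60, generations=500):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        elite = population[: pop_size // 5]  # survivors of selection
        # children are mutated copies of survivors; the elite pass unchanged
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)

print(evolve())
```

Nothing in the code "understands" the target; selection alone assembles it. Scale the fitness function up from string-matching to behavior and you get exactly the blackbox quality described: you can read every line of the algorithm and still not know how the evolved artifact works.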

>> No.13245575

>>13245111
No, anon, this isn't a matter of subjectivity, that is quite literally not what free will means

>> No.13245579

>>13245501
That's not idiosyncratic, it's quite predictable and rigid in its style (which is awful).

>> No.13245584

>>13239754
Computers haven't created anything. The height of AI "learning" right now is reconfiguring already-established algorithms

>> No.13245620

>>13245567
It's not about conferring 'special properties' (more vagueness); it's simply that there is a degree to which physical phenomena are specific to their physical constituents... Writing ink on a page is moving chemicals and information around; do you think paper is capable of sensation? You're vastly oversimplifying what are highly complex phenomena.

>> No.13245629

How do people plan on making AI "human" (whatever that means) if we don't even know what makes humans "human"?

>> No.13245643

>>13245629
Exactly. Humanity is a mystery.

>> No.13245689

>>13245575
If your definition of free will doesn't render free will a relative matter, I don't give a shit what you think about free will, because it's inaccurate.

>> No.13245716

>>13233764
Here's the plot twist. These threads are made by AI

>> No.13245725

>>13245620
>Writing ink on a page is moving chemicals and information around, do you think paper is capable of sensation?
Of course not, because there is no information processing, the thing that makes intelligence possible. Information is only being moved around in the sense that the changes could be described, and even that is a stretch.

But if the paper cringed and complained about how sharp the fountain pen was, I'd be receptive to the possibility that there was a sensible mind in that paper, or at least an observant mind instructing the paper on what to do, because the paper would be behaving in a way you'd expect an intelligent, sensible creature to behave.

>> No.13245763

>>13245643
*herstory

>> No.13245776

>>13245725
The thing is, no matter how you organize cellulose, it won't be able to cringe or complain. That type of matter alone cannot produce those phenomena. You're not giving due consideration to chemistry.

>> No.13245807

>>13229378
How can you even define what a machine is? If you break consciousness down to its bare atomic processes, then it's likely that you'll see a replication of 'conscious processes' everywhere you look. We don't consider a chair to be conscious, or a conscious machine, purely because within its atomic structure is contained some atomic movement identical to the atomic movements (potentially) necessary for consciousness to occur. In this sense, consciousness can't be a purely material and physical phenomenon. There must be something more.

>> No.13245842

>>13245807
You're ignoring complexity, specificity and emergence. The thing about emergent properties is that they aren't phenomena that just appear out of nowhere... They result from a trend of ever-more complex interactions of ever-more complex configurations of matter/energy. Something new is produced, but the potential (and perhaps inevitability) was always there.

>> No.13245928

>>13245842
Of course, and I'm very tired and haven't done any reading on consciousness in a long time, but couldn't you say that it's conceivable that somewhere in the Universe there is a perfect replica of a computer (perfect only in terms of atomic similarity, and contained within a star, for example), and that this replica is performing, purely by chance, all the procedures necessary for translating English into Chinese? In this sense, the arrangement of electrons in this certain collection of atoms is identical to the arrangement of electrons necessary for this procedure to be carried out in a computer's processor (disclaimer: I don't know how computers work). And then imagine that, instead of in a star, this arrangement of atoms just happened to be inside the bricks of your bedroom wall. Would you say that your bedroom had the capability of translating English into Chinese, if only you had the ability to interface with it? What I am trying to say, I suppose, is that a computer is only a computer because we can interface with it. Assuming that only this very specific version of a 'computer' can be considered potentially conscious is inherently anthropocentric, and relies on us considering only these specific silicon and plastic boxes, and nothing else, to be 'computers'. At an atomic level, every object has the capability of being a computer, if you know how to interface with it - so in a sense, does every object in the world have the potential for consciousness contained within it?

That took a while to get out, and may not make much sense.

>> No.13246145

>>13245928
I appreciate your honesty, but I have to confirm that your line of thought doesn't make sense. It's basically mental gymnastics. Not every object has the capability of being a computer, or a brain... Those things require matter with specific chemistry, they can't be reduced to merely an arrangement of electrons (the matter will determine how the electrons CAN be arranged in the first place). No, I don't think 'consciousness' -- whatever that actually may be -- is a simple phenomena like vibration common to all matter, I think it's a very complex interaction of very complex (and specific) organizations of matter/energy.