
/sci/ - Science & Math

>> No.4457140 [View]

>>4457136
>Does the unobservable exist?
Unknown, and always will be unknown. The only way to evaluate factual claims is science, and thus claims about the unobservable will always be unknowable. So this conversation is useless beyond trying to convince you of this simple rational truth.

>> No.4457123 [View]

>>4457121
You say science isn't good enough because it deals with only the observable. Thus you want to talk about the unobservable. What the hell methods could there be to learn about it? It's unobservable. Almost by definition it's irrelevant.

>> No.4457116 [View]

>>4457113
>I'm pointing out that our reality is just a little copy world, we are missing the bigger picture (if indirect realism is true).
This bigger picture will always be unobservable, and thus entirely irrelevant. Why bother? Why is anyone even continuing with this thread? Are you same-fagging it? Or are some people on /sci/ this bored and/or stupid?

>> No.4457107 [View]

>I think it's completely missing the point.
I think everyone in this thread is missing the point by continuing to discuss this with you. It's evident that you're questioning enough of our collective shared reality that we cannot have a constructive conversation with you. I want to use the term nihilist. Sophist is good too.

>> No.4457084 [View]

>>4457078
>I'm saying these predictions may have nothing to do with external objective reality.
Ah, the "wasting our time" category.

Let us know when you discover a way to learn about "external objective reality". Until then, I'm going to do science.

Thread's over.

>> No.4249620 [View]

>>4249617
PS: "any sufficiently analyzed magic is indistinguishable from science." - Genius girl webcomics.

If you don't understand that, then you do not understand science.

>> No.4249617 [View]

>>4249609
Finally, I argued that if an executing C++ program on silicon hardware shares enough homology with a human's brain, I see no reason why it wouldn't be conscious. Even if there isn't homology, if it passes the Turing test, it may be conscious. How do you know if a human brain being simulated on a computer is not conscious? How do you know? From my perspective, it looks like the human brain is merely a computation device that happens to be conscious. Why can't other kinds of computation devices be conscious?

Remember - a virtual OS is still executing on real hardware. Electrons are still being moved around in real hardware "to simulate" the virtual OS.

I'm not arguing that it must be. To do that I think is somewhat silly. The only qualia we can observe is our own. We don't know what "magic" creates qualia from material processes. At best I'm willing to say that I am not special, and I arose from evolution, and thus if I'm conscious then all other humans are conscious. I hesitate to extend this to other kinds of computation devices for certain, but I'm extremely hesitant to say other kinds of computation devices cannot be conscious as well. Of course - this is all philosophical wanking. There is no way to test any of this.

>> No.4249609 [View]

To recap, my arguments are thus:

Certain aspects of the mind are damaged when certain parts of the brain are damaged. These are more or less constant across individuals. The evidence strongly indicates that the mind is the manifestation of physical processes in the brain. Without the brain, there is no mind. Alter the brain, and alter the mind. "How" does this relationship "work"? I consider this question to be equivalent to asking how magnets work. They just do. We don't know how. We know they do because the evidence says they do.

Next is the argument that there exists finite C++ code which could pass the Bob-Turing-test, that is, its execution would be indistinguishable from the actual human being Bob. This follows from the earlier point that the mind is merely the result of the brain's activity, and the brain is just physics, and thus it's computable, and thus by the Church-Turing thesis, there exists C++ code computationally equivalent to it.
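
Purely as an illustration of what "computationally equivalent" means here - a toy sketch, where State, step and respond are hypothetical names with trivial stand-in bodies, nothing remotely like an actual brain model - the claim is only that some finite program of this general shape exists:

// Toy sketch only. It shows the shape of the claim: a finite program
// mapping a conversation history to replies through a big internal state.
// Nothing here models a brain; step() and respond() are trivial stand-ins.
#include <iostream>
#include <string>
#include <vector>

struct State {
    std::vector<double> vars;   // stand-in for an enormous physical state
};

// Stand-in transition: a real one would advance a detailed physical model.
State step(State s, const std::string& input) {
    s.vars.push_back(static_cast<double>(input.size()));
    return s;
}

// Stand-in read-out: a real one would decode speech/motor output from the state.
std::string respond(const State& s) {
    return "reply #" + std::to_string(s.vars.size());
}

int main() {
    State bob;                                      // hypothetical "Bob" state
    std::vector<std::string> questions = {"hi", "how are you?"};
    for (const auto& q : questions) {
        bob = step(bob, q);                         // feed input, advance state
        std::cout << respond(bob) << "\n";          // read out the reply
    }
}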

Do not confuse these claims with the claim that a text file on a computer or on paper is conscious. I never claimed that.

>> No.4249602 [View]

>>4249584
To repeat what I said in the other thread, I hope we aren't trying to distinguish between a piece of text (source code), and the execution of that source code which is electrons moving in a silicon network.

An executing C++ program is a machine. It consists of physical electrons moving from one point to another point, and sometimes it's connected to robotic arms to make the arms move, a monitor to display an image, and so on.

If this is the whole debate of the last thread, I'm sorry for not realizing it sooner. I didn't realize people would be so stupid as to say "a program is just a piece of text", and not realize that "program" for the purposes of this argument is the (physical) execution of that program on some real hardware.

>> No.4249592 [View]

>>4249585
>>4249591
Ack, trying to keep this name in this thread only. Those are me.

>> No.4249567 [View]

>>4249564
I already did. When you write programs, they are eventually compiled down and run as electrons moving through silicon. Of course when I write some text it isn't conscious, but when I execute that text as electrons in a silicon network, it may become conscious.
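
To make that concrete, here is a minimal example; the assembly in the comment is approximate and depends on compiler and flags:

#include <iostream>

// A piece of text describing a computation...
int add(int a, int b) { return a + b; }

// ...which an optimizing compiler lowers to machine instructions, roughly
// the following on x86-64 (exact output varies by compiler and flags):
//     lea  eax, [rdi + rsi]
//     ret
// Executing those instructions is charge moving through transistors; the
// source text on its own never computes anything.

int main() { std::cout << add(2, 3) << "\n"; }   // prints 5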

Is this really the debate? Damn, that's a lame distinction that I didn't even realize some people were making.

>> No.4249565 [View]

>>4249562
But software running on your desktop /is/ electrons in a silicon network. It's just not the right configuration to be conscious. Why don't you understand?

We could hook up your brain to a bunch of sensors and hook that up to a conventional desktop monitor. Ok - we probably wouldn't get a Windows(tm) desktop, but we would get something. That's because your brain isn't "designed" to be a Windows(tm) operating system.

>> No.4249561 [View]

>>4249556
Then you agree with me that the difference between a zombie and a "real conscious" human being is not important. Good.

However, I was trying to argue that there is a difference between machines that can pass the Turing test - which is just a task: act human - and those machines which cannot. Human beings are machines which can fulfill that task.

>> No.4249555 [View]

>>4249545
How do certain configurations of neurons in the brain make the neurons feel happy, or sad, or in pain? No one knows, and I bet no one will ever know. However, we know that certain configurations of neurons do make those neurons feel happy (in principle - we don't actually know exactly which neurons do it) because of the available evidence. And as I see no relevant difference between a bunch of "neurons in a neural network" vs a bunch of "electrons in a silicon network", I see no reason why both can't be conscious when properly arranged.
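
As a cartoon of the "both are computation" point - a toy artificial neuron, not a claim about how biological neurons actually work - the same weighted-sum-plus-nonlinearity arithmetic can be written down without saying whether cells or transistors carry it out:

#include <cmath>
#include <iostream>
#include <vector>

// Toy "neuron": weighted sum of inputs pushed through a squashing function.
// A caricature of real neurons, shown only to illustrate that the abstract
// computation is independent of the hardware that performs it.
double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return 1.0 / (1.0 + std::exp(-sum));   // sigmoid activation
}

int main() {
    std::cout << neuron({0.5, 1.0}, {2.0, -1.0}, 0.1) << "\n";
}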

>> No.4249541 [View]

>>4249531
>input-output, true/false, is a 2 state system...how can you reduce something like the actual feeling of pleasure to, two binary states? which state is pleasure?
>it makes no sense
What makes no sense is your extreme straw manning. Are you really that stupid? Do you think there's a single neuron in your brain with only two states that determines whether you're happy or not? Do you think in the C++ code example there's only a single bool variable that keeps track of whether you're happy or not?
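
For contrast, a toy sketch (names hypothetical): nobody is proposing (a); the position being argued is closer to (b), where any "happiness" would be a pattern over an enormous amount of state rather than a single flag:

#include <vector>

// (a) the straw man: one two-state variable standing for an emotion
bool happy = false;

// (b) closer to the actual claim: a huge structured state whose overall
//     dynamics - not any single bit - would constitute the feeling
struct SimulatedAgent {
    std::vector<double> synaptic_weights;   // billions of entries in a real model
    std::vector<double> activations;        // constantly being updated
};

int main() { return 0; }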

Fucking seriously?

>> No.4249535 [View]

>>4249531
>a 2 dimensional graph isn't sufficient to capture a 20 dimensional vector force---
Actually, it is. Learn some transfinite math.
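
For what it's worth, the piece of transfinite math behind this is that $\mathbb{R}^2$, $\mathbb{R}^{20}$ and $\mathbb{R}$ all have the same cardinality. A standard sketch by interleaving decimal digits (the usual care with numbers that have two decimal expansions is needed, e.g. via Cantor-Schroeder-Bernstein):

$$ f(0.a_1a_2a_3\ldots,\; 0.b_1b_2b_3\ldots) = 0.a_1b_1a_2b_2a_3b_3\ldots $$

is an injection $[0,1)^2 \to [0,1)$, so $|\mathbb{R}^2| = |\mathbb{R}|$, and iterating gives $|\mathbb{R}^{20}| = |\mathbb{R}^2|$. Whether that correspondence "captures" anything physically useful is a separate question.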

>> No.4249534 [View]

>>4249529
I've been posting under a name for several days now. How the fuck are you so stupid to think I'm one of the dualists? Really. Did you fucking read anything?

And please - if we are not going to use observable phenomena to determine if something is intelligent, then exactly what are we going to use? Magic?

>> No.4249525 [View]

>>4249521
>It's complex, therefore it can't be reduced to physics.
Sure sounds like creationist thinking 'round here.

>> No.4249519 [View]

>>4249515
Again, how do you know the first is true and the second is false?
1- "You can't make a bunch of electrons in silicon network conscious!"
2- "You can't make a bunch of neurons in a neural network conscious!"

>> No.4249509 [View]

>>4249502
>The fact is a suitably realistic animatronic running some sort of social algorithm and stringing phrases together from a library could convince you it was conscious, and it wouldn't have any ability to learn or solve problems.

I still really hate this. This is a perversion of the intent of the Turing test. If it doesn't have the ability to learn or solve problems, then it fails the Turing test. I would be able to distinguish it from a genuine human being.

>> No.4249487 [View]

>>4249482
To be more helpful:
"You can't make a bunch of electrons in silicon network conscious!"
"You can't make a bunch of neurons in a neural network conscious!"
How do you know the first is true, and the second is false? From where I'm standing, they're both computation machines. Different kinds of hardware, but both computation machines. There - maybe that makes sense of what I've been saying all along.

>> No.4249482 [View]

>>4249477
>but you can't make a computer program conscious just via magical codes and algorithms
Again, how do you know this?

>> No.4249479 [View]

>>4249475
First, citations please.
Second, I doubt their brain states were completely indistinguishable from those of people who don't experience pain under anesthesia. You argue as though they were, which is definitely more than whatever studies you can cite actually show.

>> No.4249474 [View]

>>4249472
>and more importantly, a computer program can't digest anything, it exists virtually in electrical charges, lmfao--are you stupid? be honest
I argued we would put this desktop into an artificial body.

But really, the crux of the argument is in my second post.
