
/sci/ - Science & Math



File: 238 KB, 1024x1448, Techpriest.jpg
No.2177383

Forgive me a moment of stream-of-consciousness, /sci/, but I need to ask this:

If an Artificial Intelligence is ever constructed on a computer, its mind will consist of lines of code. Its entire being will come down to mathematics, all of it boiling down to 1s and 0s; ons and offs; ifs and ors. This entity will be summed up as math.

The thing is, video games currently contain just such things: entities and beings summed up as math. While hardly as complex as a thinking entity, they can produce novel behaviours, and those that are procedurally generated do so by means of something that might even be called their equivalent of a genetic code.

So, does this mean that killing video game characters is the equivalent of squishing flies for fun?

More poignantly, if an entity can be described through math, and through changes in that math, could equations done by hand on a sheet of paper (disregarding how mind-fuckingly long it might take for some) represent the sum of an entity? Could you put a snapshot of a being on paper? By following through equations based on stimuli, could you effectively compute the output of such a being? And if so, does that mean that your sheets of paper are a living entity (or at least part of one)?
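To make the pen-and-paper question concrete, here's a minimal sketch in Python (the three-state agent and its transition table are invented for illustration, not taken from any real game): the agent is a pure function from (state, stimulus) to (next state, action), so its entire "being" fits in one table and every step could, in principle, be worked out by hand.

# Hypothetical toy agent: its whole "being" is one state label plus
# one transition table. Nothing is hidden; pencil and paper would do.
TRANSITIONS = {
    # (state, stimulus) -> (next_state, action)
    ("idle",  "noise"): ("alert", "look around"),
    ("idle",  "quiet"): ("idle",  "wander"),
    ("alert", "noise"): ("flee",  "run"),
    ("alert", "quiet"): ("idle",  "relax"),
    ("flee",  "noise"): ("flee",  "run"),
    ("flee",  "quiet"): ("alert", "hide"),
}

def step(state, stimulus):
    """One tick of the agent: deterministic and fully written down."""
    return TRANSITIONS[(state, stimulus)]

state = "idle"  # this one label is the "snapshot" of the entity
for stimulus in ["quiet", "noise", "noise", "quiet"]:
    state, action = step(state, stimulus)
    print(stimulus, "->", state, "/", action)

Copying the table onto a sheet of paper and tracing the loop by hand gives exactly the same outputs, which is the question above in miniature.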

>> No.2177387
File: 125 KB, 500x418, HappyTechPriest.jpg

>>2177383

Further question: Is there a distinction between life and intelligence? Can the two be separated? Can something that is not alive be intelligent? Or is something that is intelligent, by definition, alive?

>> No.2177394

>So, does this mean that killing video game characters is the equivalent of squishing flies for fun?
Maybe, if you have a machine that creates infinite duplicates of a fly.

>> No.2177400

>>2177387
We have a definition of life, and it doesn't require intelligence. Do you think trees are intelligent?

>> No.2177402

>>2177400

I never said life needed intelligence; I asked if intelligence needed life. Our definition of life is fuzzy and limited as it is, and there are many potentially intelligent machines that would be defined as not alive.

The sheets of paper question goes slightly further.

>> No.2177404

I would say, in a way, yes. But how can we say what a fly's perspective is like? By the way, I think a fly's brain is much more sophisticated than the AI in computer games.

But think of the paper this way: where does the entity on the paper really exist? It doesn't actually exist on the paper, because the various equations on the paper don't interact with each other. The equations interact in your brain, so the entity is actually within your brain.

Anyway, I believe that morality applies to entities whether they are natural or artificial. The origin of an entity doesn't matter; what matters is what it experiences and what it wants.

In a nutshell, I think killing video game characters at their current stage is like squishing amoebas: they don't really have brains, but they have behaviour patterns that kind of make it look like they do. The "genes" for an entity can exist on paper, but you need some kind of mechanism that will automatically turn those "genes" into a living creature. A human brain would suffice, so in my opinion, yeah, a conscious being can exist within a computer program, or possibly within the mind of another conscious being.
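To put the genes-plus-mechanism point in concrete terms, here's a minimal hypothetical sketch (the traits and numbers are made up): the "genome" is inert data, and behaviour only appears once an interpreter, standing in for the brain, decodes and runs it.

# The "genome" is inert data: three numbers that could sit on paper.
GENOME = [0.9, 0.2, 3]  # made-up traits: aggression, caution, speed

def decode(genome):
    """The 'mechanism': turns inert genes into a behaving creature."""
    aggression, caution, speed = genome
    def behave(threat_level):
        if threat_level > caution:
            return "fight" if aggression >= 0.5 else "flee"
        return "graze at speed %d" % speed
    return behave

creature = decode(GENOME)  # without this step, GENOME does nothing
print(creature(0.1))       # graze at speed 3
print(creature(0.8))       # fight

The list of numbers is the paper; decode() plays the role of the brain (or CPU) that the post says you need.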

>> No.2177405

>>2177402
We have a definition for life. Machines do not fall under that definition, and intelligence is not part of it. Why would life be required for intelligence?

>> No.2177414

>>2177405

You guys are completely missing the point of this post. He's talking about consciousness and at what level something can be considered conscious. Whether it's "alive" or not has nothing to do with it; in all his examples, none of them are alive.

>> No.2177417

I disagree with the current definitions of life.

Something must have empathy to be alive; without empathy, it is not alive, and anything that has empathy is alive.

>> No.2177418

>>2177417

Swing and a miss!
Another irrelevant post.

>> No.2177427

This is actually very similar to the Chinese room idea. The Chinese room, if you haven't heard of it, is a thought experiment that's meant to show that a computer can't actually be given a mind.

Basically, visualize a guy who doesn't speak Chinese, locked in a room. He has a huge instruction manual, and people write him notes in Chinese and slide them under the door. Following the manual, he can give replies that look as if he understands the notes, even though he doesn't actually comprehend what's going on; he's just following the manual.

Anyway, I think this argument is bogus. Sure, the guy using the manual doesn't understand, but the system (room + manual + guy) is actually its own entity, which does understand what's going on. Just because an individual part is oblivious doesn't mean the whole is; the oblivious part (the guy who doesn't speak Chinese) combined with the manual does indeed comprehend.
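A minimal sketch of the room as code (the notes and replies are invented examples): every part is a symbol-to-symbol lookup, and no single component "understands" Chinese, which is exactly what makes the systems reply above interesting.

# The manual: maps incoming notes to replies, symbol to symbol.
# Neither this table nor the function below "speaks Chinese".
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def chinese_room(note):
    """The man in the room: look the note up, copy the reply out."""
    return RULEBOOK.get(note, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # looks fluent, comprehends nothing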

An interesting side thing this always made me wonder: do we have only one consciousness? We have a conscious and a subconscious, and we tend to think of them as working together to make the whole. But perhaps they are each their own entity, experiencing the world very differently. Perhaps both of them independently have those spiritual thoughts of wondering whether other people are also conscious. Maybe each human, in a way, has two souls, each with its own specialization, which must work together to create a functioning human, and each could be said to be conscious.

>> No.2177430

>>2177417
I don't see how you arrived at that conclusion. I don't think a bacterium has empathy for other life forms, and that has nothing to do with it being alive. If anything, empathy has to do with morality.

>> No.2177433

>>2177430

quit arguing semantics

>> No.2177443

Life isn't just the formulae that can be extrapolated from it... it is the computation of those formulae.

If you extrapolated every possible switching of a brain in a single moment and fed it into a computer, the computer's computation would think.

If you wrote those same switching paths on paper and calculated them yourself, you would think, but the paper would not.

It's a matter of where the computation takes place: consciousness arises from the causality of computation, the action. A formula written and unmoving has no action.

Video game characters lack the components necessary for consciousness, though we seem to be getting closer, and a game is likely what will solve the problem.
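A hedged sketch of the distinction this post is drawing (toy numbers, nothing brain-accurate): the weight table is the "formula written and unmoving", and only the loop that keeps applying it produces the action that consciousness is said to arise from.

# A static description: three "neurons" and their connection weights.
# Written on paper, this does nothing.
WEIGHTS = [[0.0, 0.6, -0.4],
           [0.5, 0.0,  0.3],
           [-0.2, 0.7, 0.0]]

def step(state):
    """The action: carrying the computation forward by one tick."""
    return [max(0.0, sum(w * s for w, s in zip(row, state)))
            for row in WEIGHTS]

state = [1.0, 0.0, 0.0]
for tick in range(5):  # the dynamics exist only while this loop runs
    state = step(state)
    print(tick, [round(x, 3) for x in state])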

>> No.2177449

>>2177417
>Something must have empathy to be CONSCIOUS, without empathy, it is not CONSCIOUS, anything that has empathy, is CONSCIOUS.


>>2177430
There, I changed it to what he actually meant, happy now?

>> No.2177457

>>2177449
Solipsists and psychopaths lack consciousness?

That's almost as wrong as the original statement...

>> No.2177465

>>2177404

At what point does a behaviour pattern become a rudimentary brain? At what level of complexity do we cross from one to the other?

As for the paper example, the mechanism by which the entity interacts may vary, be it a human mind or a calculating machine that changes the sums, but the conclusion would still be that whatever carries it out is just that: a mechanism that facilitates this consciousness's activity.

Have I mentioned I love /sci/?

Also,
>>2177418
>>2177414

Glad to see someone else is paying attention

>> No.2177471

>>2177457

Yeah, I think so too. I think in order to be conscious, one must be able to observe one's own actions; by that I mean, be able to think about one's past actions. Although I think that doesn't quite cut it. I think as humans with powerful brains, we all have a bias about what we consider conscious, because I think it's possible to be "more conscious" or "less conscious" when looking from animal to animal. At some point we will draw a line, even though it will never actually be black and white like that.

We will draw a line somewhere, and the being just past the line that's conscious really won't be any more conscious than the being that's just under the line.

>> No.2177495

>>2177465

My reply to this will be the same as what I posted here: >>2177471

I think as humans, we instinctively label things as black and white, night and day, cold and hot. It will never be like that: there isn't really conscious and not conscious, just varying degrees. Either way, I think that if something is complex enough to observe the world and think about it, it is conscious, whether it's a machine or a mind.

Anyway, I think the scale for consciousness is what I said earlier: the ability to think about one's past actions. A plant, for example, can't really do this at all (well, it sort of can in a way, but its memories are more of a physical type, like the thickness of rings in a tree that change with the temperature of the seasons). Ants? Nope, they follow scent marks. Bees, though: bees remember where flowers are, and they will actually change the order in which they visit the flowers they find in order to be more efficient. And so it goes, ever increasing through the animal kingdom. When you get to humans, we have a huge frontal lobe: a large chunk of our brain dedicated largely to reflecting on one's past actions and memories. It's a reality simulator of sorts. While, say, the bee doesn't have a frontal lobe, it does have its own reality simulator, because it was able to "think" about the flowers it found and create a more efficient route.

>> No.2177509

>>2177495

Oh, I just realized something. In the example of the man in the Chinese room, the entity that is the man + the manual + the room would actually be incapable of learning; it would be incapable of reflecting on its past actions. So perhaps that example isn't bogus. Perhaps the man in the Chinese room isn't actually conscious. Perhaps the key to being "conscious" is the ability to learn: if you can't learn, you can't reflect on your own actions.

>> No.2177510

>>2177387
>Further question: Is there a distinction between life and intelligence? Can the two be separated?

This one is the important question.
What moves intelligence? What causes it to learn?

In every case, it is the demands of life that cause it to move. You cannot have true intelligence without it either being alive or believing it is, which is the same thing.

>> No.2177517

>>2177509

Of course, that would mean that someone like the guy in Memento (is that a real disorder?) wouldn't be conscious. So maybe that's not it either, because the guy in Memento was definitely conscious and understood the world; he was just unable to make any new memories.

Sorry, I'm kind of arguing with myself right now...

>> No.2177575

>>2177509

I was going to point this out, actually, but demands draw on my time. I've enjoyed discussing this, fellow /sci/ons.

>> No.2177577

I don't know what exactly you mean by entity, but in my opinion, if it can't feel pain (feeling pain ≠ "acting" hurt, no matter how realistic), you aren't doing anything wrong. If a being or even a species can't feel any sort of pain or discomfort, you are free to do whatever you want to it (except if it's of some interest to someone else, in which case you'll hurt that someone if you destroy it).
Though now I wonder whether we can create something that can actually "feel"... Anyway, I don't think intelligence is very important; an intelligent robot wouldn't mind being destroyed, I think. The only reason the pixel soldiers are shooting back at you and avoiding your grenades is that they are programmed to. But then again, aren't we, too, programmed biologically to stay alive? Fuck, I'm confused...
Eh well, I think our will to survive is not a product of our intelligence but something more primal, something even lowly insects have. An entity without this primal survival instinct won't mind being destroyed, no matter how intelligent it is (unless it's programmed, i.e. obligated, to stay "alive").