
/sci/ - Science & Math



File: Holodeck.jpg
No.3829109

>watch Star Trek
>Holodeck NPCs are self-aware

The holodeck is a murder machine.
In each simulation, dozens of individuals are generated. When the simulation ends, they are murdered to save memory/energy.

>> No.3829111

That's what happens in a society where everyone who has ever used a transporter is a soulless clone.

>> No.3829113

>>3829109
By the same argument the universe is a murder machine.

The holograms should enjoy the lives they have while they last.

>> No.3829114

Your point?

>> No.3829121

>they are murdered

No. They are recycled. They are not human, so they cannot be murdered.

>> No.3829126

>>3829121
Not OP.
True, they're not human, but you're still technically killing a sentient being.

>> No.3829131

>>3829121
Vulcans aren't human either. :(

>> No.3829133

>>3829114

The Federation's collective holodecks commit genocide on a daily basis.

>> No.3829136

>>3829131

Exactly. That's why they make such fine warriors--they don't care about death.

>> No.3829138

>>3829131
and who gives a shit about them
smug bastards

>> No.3829139

If the main computer can populate the holodeck with dozens or hundreds of self-aware beings, why does it have problems with basic human syntax?

>> No.3829141

>>3829133
I don't hear anyone complaining :3

>> No.3829145

>>3829136
>vulcans
>warriors
i'm cool guise i know aboot star wars too!

>> No.3829150

>>3829126
Yeah, functionalism and all that jazz.

If we accept that Data can be sentient, then functionalism is likely true.
And functionalism says, roughly, that if two processes are functionally equivalent, they are mentally equivalent.
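
Spelled out (in my own notation, nothing canonical), the inference being leaned on is:

\[ \forall x\,\forall y\;\bigl(\mathrm{FuncEq}(x,y)\rightarrow\mathrm{MentEq}(x,y)\bigr) \]
\[ \mathrm{Sentient}(\mathrm{Data})\ \wedge\ \mathrm{FuncEq}(\mathrm{Data},h)\ \Rightarrow\ \mathrm{Sentient}(h) \]

i.e., if sentience is a mental property and a hologram \(h\) is functionally equivalent to Data, then granting Data sentience commits you to granting it to \(h\).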

>> No.3829160

The one or two who did gain sentience were rare exceptions. On most occasions, their processes were limited to only what they needed to be.

And hey, since they are all part of the same memory database (except, again, for the one that somehow grew beyond the bounds of its construct), wouldn't it only be murder if you killed the holodeck computer itself? Killing or rewriting individual scripts would be more like getting a haircut, or chopping off a finger at worst.

>> No.3829166

>>3829133
Typical holodeck programs are not sentient. They are bound by the parameters of the program they are a part of. Only in extraordinary circumstances do they gain sentience (like when the entire Enterprise computer core repurposed itself to create Moriarty). I believe medical hologram programs have limits built in. Removing such restraints doesn't equate to the program gaining sentience either.

>> No.3829167

>>3829109
With the exception of Professor Moriarty, the holograms on TNG are not self-aware. They just simulate real people under a very narrow set of pre-programmed instructions. If the humans using the holodeck didn't play along with the simulation, the holograms would act like ELIZA.
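
For reference, ELIZA-style behavior is just keyword pattern matching with canned templates. A minimal sketch in Python (the rules here are made up for illustration, not Weizenbaum's original DOCTOR script):

    import re
    import random

    # Illustrative ELIZA-style rules: match a keyword pattern, echo a canned
    # template back. No understanding, no state beyond the current input.
    RULES = [
        (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (r"\bI feel (.*)", ["Why do you feel {0}?"]),
        (r"\bbecause (.*)", ["Is that the real reason?"]),
    ]
    DEFAULTS = ["Please go on.", "Tell me more.", "I see."]

    # Swap first/second person so echoed fragments read naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(text: str) -> str:
        for pattern, templates in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        # Off-script input falls through to a content-free default,
        # exactly the failure mode described above.
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        print(respond("I am a program within a holographic simulation"))
        # -> e.g. "Why do you say you are a program within a holographic simulation?"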

>> No.3829173
File: treknobabble29.jpg

>>3829145
>implying they're not

>> No.3829175

If a holodeck character is not programmed to simulate consciousness itself, but only to simulate the -appearance- of consciousness, is there a problem?

>> No.3829172
File: elementary1472.jpg

>>3829139
The computer is probably limited on purpose.

In the Sherlock Holmes episode, they accidentally break the holodeck's restrictions, and an AI construct nearly takes over the Enterprise.

>> No.3829184

They don't die; they just find themselves in the dust. It's the regular Star Trek characters who should be worried: since their laws of physics aren't internally consistent enough to keep working after an episode ends, there's no telling where they'll go. Most likely they wake up as Scientologists.

>> No.3829187

>>3829172
Insufficient security protocols are responsible for so many problems in the Federation.

>> No.3829193

>>3829175
No.
A simulation of an AI can be killed; I'd think twice about killing a real AI.

>> No.3829201

>>3829175

But that raises the question: how do Federation scientists know whether they are sentient or not? How do they know that Moriarty and Data are sentient, but the holographic projections are not?

This is similar in kind to the teleporter/personal-identity problem.

>> No.3829211

The computer is actually powerful enough to give the personalities it simulates a home of their own in a background-running simulation.

Also, since they can't exist apart from the computer, and the computer is the reason they can exist at that point in time, we're simply giving them a moment of existence they would otherwise not have.

>> No.3829212

>>3829193
>implying that a simulation of an AI wouldn't just BE an AI.

Unless you mean that the simulation would be just an approximation.

>> No.3829216

>>3829201
If they can identify the specific processes that give rise to consciousness, they could verify whether those processes are present in a program and so determine whether it is truly conscious or just appears to be. Didn't Moriarty have "free will" in a manner that other programs did not, just like Data?

>> No.3829222

>>3829109
There are only a few recorded self-aware holograms, and they usually compare being turned off to going to sleep. Deletion would be more akin to murder, but I don't remember any instances of deletion of self-aware holograms.

>> No.3829229

>>3829201
In "The Measure of a Man" nobody in the federation had a problem with treating Data as a piece of property because he was an android other than his friends on the Enterprise. They want to create an army of Datas to use as slave labor. Picard talked them out of it, but we later learn that Emergency Medical Holograms are being used for slave labor.

>> No.3829240

>>3829211
That seems absurdly cruel.

>> No.3829256

>>3829229
I seem to remember that EMH programs and similar holograms meant to do a job outside of the holodeck had a sort of sliding scale of consciousness. The Doctor started out as a high-functioning drone of sorts and became "his own person" as he evolved through continuous operation. It seems that such holograms have the potential to gain over time what Moriarty was born with.

>> No.3829264

>>3829150
Data isn't just software like holograms are though. Data is as much defined by his unique positronic hardware as he is by his software.

Star Trek has always been against transhumanism and wary of AI. The franchise usually tries to counterbalance the "technology has solved all our problems" theme with an almost radical form of humanism.

Science fiction in general often tends to avoid sentient AIs. The potential that artificial intelligence has for replacing humans and radically changing how society operates is too great and too unpredictable. It's hard enough to imagine what someone smarter than you is like, let alone to empathize with them; therefore AIs don't make good characters.

>> No.3829266

>>3829229
This is my problem with the Federation.

Shouldn't they have thought out android/AI rights before implementing the technology on a large scale?

>> No.3829268

I bet in real life this sort of issue is going to come up when AI in video games starts becoming lifelike. If the people you're killing in an FPS can think to some extent, and feel fear and pain, is it still ethical to play? Personally I'm a sadist who would love to torture and kill simulated people. But I can easily see it being illegal.

>> No.3829280

>>3829266
>Shouldn't they have thought out X before implementing X on a large scale?

You can ask this question about pretty much anything in Star Trek. At the end of the day, the show just wouldn't be as interesting (in my opinion) if everything had been thought out.

>> No.3829288

You all seem to be working under the naive assumption that everything that is self-aware feels a drive for self-preservation. If a truly self-aware consciousness were created, and there were nothing in that consciousness making it want to continue existing, there would be nothing wrong with killing it. If one of the fundamental attributes of a simulated awareness was that it was only meant to exist for a limited period of time, then calling its termination morally wrong would be projecting the biological instincts of VIOLENT SIMIANS onto a nonliving, anthropogenic digital intelligence, which is absurd.

>> No.3829290
File: 20110423.gif

>>3829268

Pic sort of related?

>> No.3829293

>>3829229
>but we later learn that Emergency Medical Holograms are being used for slave labor.

Oh god that was so stupid.
If all they are is force fields, what reason did they have to use the medical programs other than being cruel?
Why not use holograms of mining equipment? Or just beam the minerals out of the fucking ground directly?

>> No.3829296

>>3829268
You are an idiot. There is never going to be a time at which it is anything but completely fucking hopelessly retarded to even think about creating sentient AIs as part of a god damn mother fucking video game for immobile virgins who haven't left their VR chairs in years.

>> No.3829302

>>3829288

But what if it does?

>> No.3829308

>>3829280
I think it's also an aspect of the hardware being available before the software. They clearly hadn't thought of the equipment as being able to produce sentient AI when they installed it in everything.

>> No.3829311

>>3829175
The difference is that a simulation, like holodeck characters, wouldn't be capable of comprehending anything outside of the world it was created for. For example:

HoloButler: Hello sir, may I offer you a beverage?
Picard: No thank you, Geoffrey, but I would like to tell you that you are a program within a holographic simulation.
HoloButler: Of course, sir. Would you like me to fetch your riding crop? It is a beautiful day outside.

>> No.3829314

>>3829293
Agreed. I like to pretend Voyager never happened. Its stupidity shat all over the entire universe the previous series established.

>> No.3829319

>>3829302
If it does, that means that at some point in time someone programmed the desire for continued existence into a simulation that was supposed to end, which basically means whoever programmed it is a psychopath creating beings only to suffer and die purely for their own sick pleasure.

>> No.3829322

>>3829293
>holograms of mining equipment

Y'know, that's not a bad idea. Holographic equipment could be reconfigured on-demand, operate basically anywhere, and require nothing but an emitter and a power source.

Why don't they use holographic everything?

>> No.3829328

>>3829308
I guess that makes sense. They started with a really efficient sex toy for maintaining crew morale and ended up with an AI-birthing machine.

>> No.3829330

>>3829290
I'd love to have my own reality where I could be god. I would think everyone would, but maybe I'm unique.

>>3829296
Yeah, nah. Do you know who sits on the courts that make these kinds of decisions? Old faggots who don't know anything about modern technology. Their response to this ethical issue might be "Give rights to video game characters? Stop wasting my time!" And of course everyone else just wants money. I bet it could happen and persist for a while before a rights lobby powerful enough to change it exists. And even then it's going to be hard to enforce control over this kind of stuff after it's been legal for a while.

Of course I could be wrong. I don't know how much of my expectations are motivated by wishful thinking.

>> No.3829334

>>3829319

Bugs can happen too.

>> No.3829354

>>3829334
nope

>> No.3829365
File: Evil_Lincoln.jpg

Righto, gents. It's another simulation gone mad, so murder and mayhem, standard procedure.

>> No.3829372

>>3829322
I bet in like the 26th century they use holo everything.

>> No.3829682

>>3829322
When I was a kid I always wondered why the Federation didn't just put holodecks on shuttles instead of building huge spaceships. It looked to me like the holodeck used bigger-on-the-inside-than-the-outside TARDIS technology. There were entire towns inside those rooms, FFS. I think the TNG technical manual says everyone is standing on treadmills that keep them in place, but that wouldn't explain how two real people can be miles apart.

>> No.3829720

>>3829682
It uses more tricks of perspective than just moving the floor beneath people's feet, but yeah, some of the situations do seem improbable. Like when they hold a big party in a virtual town with half the crew inside, and people are scattered all about the town.
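
One way to make sense of it (purely my speculation, not anything from the show or the technical manual): give each occupant an independent real-to-virtual coordinate mapping, and render everything else, including a holographic stand-in for the other occupant, relative to that mapping. A toy sketch in Python; all names and numbers are illustrative:

    from dataclasses import dataclass

    # Toy model: each occupant has an independent real->virtual mapping, so two
    # people standing 3 m apart in the physical room can occupy virtual positions
    # miles apart; each sees a holographic stand-in of the other at the correct
    # virtual offset.
    @dataclass
    class Mapping:
        virtual_origin: tuple[float, float]  # where this user's patch of floor "is"

        def to_virtual(self, real_xy: tuple[float, float]) -> tuple[float, float]:
            return (self.virtual_origin[0] + real_xy[0],
                    self.virtual_origin[1] + real_xy[1])

    picard = Mapping(virtual_origin=(0.0, 0.0))
    riker = Mapping(virtual_origin=(5000.0, 0.0))  # far away in the simulation

    # Both stand 1.5 m from the room's center, 3 m apart in reality...
    print(picard.to_virtual((-1.5, 0.0)))  # (-1.5, 0.0)
    print(riker.to_virtual((1.5, 0.0)))    # (5001.5, 0.0), virtually miles apart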

>> No.3829773

>>3829211
>>3829240
So, kind of like how the universe works, isn't it?

>> No.3829797 [DELETED] 

Ctrl+F Philosophical Zombie/zombie/philos/p-z
0 results
WHAT THE FUCK IS WRONG WITH YOU?

>> No.3829831

>>3829797
Philosophical debates are best done without buzz words. Go impress someone else that you passed PHIL101.

>> No.3829853
File: VoyagerDoctorReaction.jpg

>>3829264
> therefore AIs don't make good characters
mfw
Which is ironic, because the best characters on Star Trek ARE the AIs and non-humans. In fact, the only reason I watch the damn show is the non-humans.
Without them, Star Trek would be a very boring place.

>> No.3829867
File: 1317264925238.jpg

>buzz words
How about naming concepts for specific reference in conversations and organizing the ideas pertaining to them?
Or you could always not give it a name and start explaining it from the beginning in each conversation:
So, dude, it's, like, that thing, where you seem, like, alive, but you, like, don't have that thing which makes you not, like, a rock (I didn't use "consciousness" either, since we said no buzzwords).

>> No.3829900

>>3829109
Holodeck NPCs are not self-aware. Moriarty was the exception, not the rule.

>> No.3829912

It doesn't seem to be how it works in the show, but I'd consider having a group of AI actors programmed to enjoy playing a variety of roles convincingly. Instead of being brought in and out of existence, they'd just be given descriptions of different characters to play. For large crowds, one AI could control multiple holograms with the help of non-sentient helper programs. This also helps with malfunctions, since the AIs involved would know it's supposed to be an act and would avoid doing anything that would seriously harm the crew.
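
As a sketch of what that casting scheme might look like (purely illustrative; every class and method name here is invented, nothing canon): a fixed pool of persistent actor AIs is handed character briefs per program, and each actor drives a share of non-sentient crowd puppets, so no mind is created or destroyed when a program ends.

    from dataclasses import dataclass, field

    @dataclass
    class CharacterBrief:
        """Description of a role to play for one holodeck program."""
        name: str
        persona: str

    @dataclass
    class ActorAI:
        """A persistent actor. It is never deleted between programs;
        it is only handed a new brief (or none, when idle)."""
        actor_id: int
        current_role: CharacterBrief | None = None
        puppets: list[str] = field(default_factory=list)  # non-sentient holograms it drives

        def assign(self, role: CharacterBrief, puppets: list[str]) -> None:
            self.current_role = role
            self.puppets = puppets

        def release(self) -> None:
            # End of program: the actor drops the role but keeps existing.
            self.current_role = None
            self.puppets = []

    class HolodeckCasting:
        """Hands out roles from a fixed pool instead of spawning new minds."""
        def __init__(self, pool_size: int):
            self.pool = [ActorAI(i) for i in range(pool_size)]

        def start_program(self, roles: list[CharacterBrief], crowd: list[str]) -> None:
            # One actor per named role; crowd puppets are split among the cast.
            for actor, role in zip(self.pool, roles):
                actor.assign(role, crowd[actor.actor_id::len(roles)])

        def end_program(self) -> None:
            for actor in self.pool:
                actor.release()

    casting = HolodeckCasting(pool_size=4)
    casting.start_program(
        roles=[CharacterBrief("Geoffrey", "unflappable butler")],
        crowd=[f"villager_{i}" for i in range(20)],
    )
    casting.end_program()  # actors persist; only the roles are dropped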

>> No.3830009 [DELETED] 
File: really.png

>YFW we're all in a VR simulation with our memories outside of it turned off to enhance the immersion

>> No.3830085

>>3829797
>>3829831

OP here. I guess I did mention functionalism at some point.

Some philosophical terminology serves an important purpose: making sure that people recognize a basic thesis so they don't mistakenly argue about things they actually agree on. e.g. the minimal supervenience thesis in metaphysics.

The whole "zombie" debate seems to reduce to "who has the burden of proof/what counts as evidence for claims about possible states of affairs?" I'd like to avoid pure metaphysical discussion as much as possible (but obviously, metaphysics will matter). I think it's useful to just grant that Data is conscious in the same sense that a very aspie person is conscious. Given that fact about Data, what do we say about hologram/AI constructs? What do we say about those dicks in the Federation?

I've gotten some very good answers so far.

>> No.3830092

Most of the holograms are NOT self-aware...