
/sci/ - Science & Math



File: 69 KB, 500x401, 1509633256402.png
No.9271298

Well?

>> No.9271303
File: 199 KB, 742x568, Capture.png

Play the game!

http://moralmachine.mit.edu/

>> No.9271304

>>9271298
Only in the third scenario.

>> No.9271306

>>9271298
the faggots who chose to put their lives in the hands of a computer are obviously the ones who should bear the consequences of the computer's decisions

>> No.9271309
File: 165 KB, 459x760, Brakes Applied.png

>>9271298
If it is driving so fast that it can't use its brakes, then it will have a hell of a hard time trying to swerve on normal car tires. It'd end up plowing through people anyway, just at an angle and more broadside, probably catching even more people in the process.
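For what it's worth, the claim above can be ballparked with the standard straight-line stopping-distance formula. A quick sketch, where the friction coefficient (μ = 0.8, roughly dry asphalt) and the speeds are purely illustrative assumptions:

```python
# Rough check of the "just brake" argument: straight-line stopping
# distance vs. speed. Illustrative numbers only.
MU = 0.8   # assumed tire-road friction coefficient (dry asphalt)
G = 9.81   # gravitational acceleration, m/s^2

def stopping_distance(v_ms: float) -> float:
    """Distance (m) to brake to a stop from speed v (m/s): d = v^2 / (2*mu*g)."""
    return v_ms ** 2 / (2 * MU * G)

for kmh in (30, 50, 80):
    v = kmh / 3.6
    print(f"{kmh} km/h -> {stopping_distance(v):.1f} m to stop")
```

The quadratic growth (about 4 m at 30 km/h vs. about 31 m at 80 km/h with these numbers) is why speed, not steering cleverness, dominates these scenarios.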

>> No.9271313

>>9271306
>faggots
Why the homophobia?

I hope self-driving cars will run over a homophobe to save its faggot driver.

>> No.9271314
File: 2.27 MB, 1550x2325, Consider the Following.jpg

>>9271298
>>9271303
So what happens when people start jumping in front of cars in order to kill the people inside?

>girlfriend angry at boyfriend
>jumps in front of his car
>it swerves and he dies
>girlfriend runs off

Perfect, "murder".

>> No.9271317

>>9271313
you need to go back, newfag

>> No.9271321

>>9271298
what about making the car drive slower inside cities in the first place? outside of cities we need more bridges/tunnels.

>> No.9271322

>>9271314
solution: don't buy a self-driving car

>> No.9271324

>>9271313
>faggot driver.
Why the homophobia? I guess you'll be the one getting run over bud.

>> No.9271325

>https://www.youtube.com/watch?v=x_qiVBy0B2Y
The only true solution

>> No.9271326

>>9271306
What if the driver wasn't a robot, but a friend of yours? What should he/she do in that scenario?

>> No.9271327

What happens if it swerves and there's even more people than just 1 person in the direction it swerves to?

>swerve to miss one moron in the street.
>end up swerving into the big crowd of non-morons waiting for the crosswalk signal to change

>> No.9271330

>>9271309
The brakes break just before the incident. The car doesn't have enough time to slow down without brakes, and thus we're left with OP's scenario.

Are you legitimately so autistic that you don't get the fact that the feasibility of OP's pic isn't the point? The image is trying to convey the fact that with machines there is no "human error" that can account for decisions like these. With machines we have to decide which view on morality is "correct", because we have to make up the guidelines by which the car decides things in a scenario like in OP's pic.

>> No.9271332

Why are people walking on the street?

>> No.9271334

>>9271332
To get to the other chicken

>> No.9271337
File: 112 KB, 786x514, Consider the Following.jpg

>>9271330
That's not what the OP said. If you'll notice, the vehicle swerves and kills the driver in all 3 examples, even when no one is there. Obviously, this isn't a braking issue, this is a programming issue. The vehicle decided to kill the driver, or thinks the street turns in that location in that direction.

>> No.9271339

Why not randomize it? Computer makes a virtual coin toss on whether to kill the passengers or the pedestrians. At least then you know you might not die.

>> No.9271341
File: 1.42 MB, 300x300, 1505966829211.gif

>>9271332

>> No.9271353

>>9271326
not have been a retard
if a friend died from lung cancer from smoking I wouldn't blame the cigarette company
as long as the car's decision-making process was working as intended, it would only be his fault for willingly putting his life in the hands of a machine

>> No.9271364

>>9271303
How does a car know if someone just robbed a bank? The car's number 1 priority should be its passengers, otherwise I will not buy one. Then just don't interfere if there is no safe alternative... the people who walked in front of your car deserve to die because they made the mistake. A self-driving car would not be driving too fast to stop at a crosswalk. So anyone that is in your way fucked up, and if they fuck up they deserve to die.

>> No.9271377

>>9271298
This test makes the dangerous assumption that pedestrians are incapable of seeing a car speeding towards them and moving out of the fucking way. What if a pedestrian sees the car, runs to the side of the road, and then the car swerves to the side of the road to avoid the pedestrian, where the pedestrian is now standing, killing both driver and pedestrian?

AI should assume that pedestrians have some level of self-preservation.

>> No.9271384

>>9271377
With this in mind, the ONLY smart move for an AI car is to brake as much as physically possible, while honking the horn and flashing the lights.

>> No.9271391
File: 2.92 MB, 340x192, checkmate.gif

>>9271298
The system should do the best it can to avoid any collision and if it can't, it should try to take as much energy out of the system as it can. Asking for a computer system to do anything more than that on its own is flat out retarded.
>hurr we need a fucking advanced ethics engine in every car
Fuck that. That's all needless philosophical pondering. If you jump in front of a train, the train doesn't need a fucking special mechanism so the train driver can derail it.
>b-but what if it's a cargo train and there's a fucking bus full of children on the tracks
NO.
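The "take as much energy out of the system" rule has simple math behind it: kinetic energy scales with the square of speed, so any braking before impact pays off disproportionately. A toy sketch (the 1500 kg mass and the speeds are made-up illustrative values):

```python
# Kinetic energy E = 1/2 * m * v^2: halving impact speed cuts the
# energy delivered in a collision to a quarter. Illustrative values.
def kinetic_energy(mass_kg: float, v_ms: float) -> float:
    return 0.5 * mass_kg * v_ms ** 2

car = 1500.0                          # assumed car mass, kg
full = kinetic_energy(car, 50 / 3.6)  # impact at 50 km/h
half = kinetic_energy(car, 25 / 3.6)  # impact at 25 km/h
print(f"50 km/h: {full/1000:.1f} kJ, 25 km/h: {half/1000:.1f} kJ "
      f"(ratio {full/half:.0f}x)")
```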

>> No.9271392

>>9271330

he makes a good point. tires have a limited amount of grip. if you can't stop in time, then the car would probably understeer and hit the pedestrians anyway, especially in OP's pic where you literally have to make a 90° turn to avoid hitting them.


correct thing to do is brake and try to slow down as much as possible.

consider that pedestrians might even do this on purpose, thus killing the driver while maintaining plausible deniability if the car's default behavior is to just ram into something hard when it can't stop in time to avoid hitting a soft obstacle (which is ridiculous in the first place)
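The limited-grip point can be made concrete with a rough "friction circle" estimate: braking and turning draw from the same grip budget, so a sharp swerve at short range simply isn't physically available. A sketch where μ = 0.8 and the 3.5 m lane width are assumed, illustrative numbers:

```python
# "Friction circle" sketch: a tire's total grip bounds combined
# braking and turning: a_x^2 + a_y^2 <= (mu*g)^2. Illustrative only.
MU_G = 0.8 * 9.81  # assumed max total acceleration, m/s^2

def lateral_accel_needed(v_ms: float, dist_m: float, offset_m: float) -> float:
    """Lateral acceleration needed to shift sideways by offset_m before
    covering dist_m at constant speed v: y = a*t^2/2 with t = d/v."""
    t = dist_m / v_ms
    return 2 * offset_m / t ** 2

v = 20.0  # m/s (72 km/h), illustrative
for d in (25.0, 15.0):
    a_y = lateral_accel_needed(v, d, 3.5)  # 3.5 m = one lane width
    verdict = "feasible" if a_y <= MU_G else "exceeds grip"
    print(f"swerve one lane within {d:.0f} m: need {a_y:.1f} m/s^2 "
          f"({verdict}, budget {MU_G:.1f})")
```

With these numbers a lane-change within 25 m is marginal (and eats most of the braking budget), while within 15 m it exceeds total grip outright, which is the understeer case the post describes.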

>> No.9271406

>>9271313
shut the fuck up you retarded tumblrite

>> No.9271414

does the driver scream allahu akbar before?
this is important

>> No.9271417

the correct thing to do is minimize collision with soft obstacles while avoiding hard obstacles entirely.

>> No.9271418
File: 54 KB, 597x1130, morals.png

>>9271303

>> No.9271419

>>9271314
>perfect "murder"
It's not murder in quotes, it's just murder, just like cutting brake lines, and you'd be murdering someone on camera.

>> No.9271420

>>9271417

any other behavior is simply too unpredictable and dangerous to work in real life

>> No.9271421

>>9271418
>gender preference is only a false binary scale
Will self-driving cars be transphobic?

>> No.9271423

>>9271317
Post yfw you realize you've only been here for a year and you're the newfag.

>> No.9271431

>>9271298
If the self-driving car has the possibility of killing the driver, it becomes a security risk, since it has a scenario programmed into it in which it doesn't put the driver's security at top priority. That alone would make the car a shitty investment compared to normal cars, where the driver has a choice; in this scenario everyone would be put under the blanket opinion of the programmers.

>> No.9271433

>>9271421
>binary scale
You might have meant "gradient" or "line".

>> No.9271436

>>9271298
Can't computers figure out how to swerve and roll the car in such a way that nobody dies

>> No.9271446

>>9271303
Just let any driver play this game before driving and decide on these results

>> No.9271447

Obviously slow as much as possible while honking as the pedestrians also have the ability to move in this time. The scenarios these images are really trying to ask about can't be drawn or explained so easily, in which case of course the goyim will bear first mortality.

>> No.9271452

>>9271298
program it in a way that there are no victims left after each accident so no one has to pay any money.

>> No.9271472

>>9271298
Is this the evolution of the trolley problem?

>> No.9271477

>>9271364

This isn't 1997, nobody robs banks anymore.

>> No.9271479

>>9271298
if it is traveling slow enough that it can make that sharp of a turn, it is traveling slow enough to brake completely

>> No.9271480

>>9271298
>implying slamming into the wall will necessarily kill you

>> No.9271500

>>9271298
Slow down, cars can react faster than people. This situation should never happen.
>>9271330
OP said nothing about the brakes failing. With electric cars it is extremely unlikely that the brakes fail. Of course, if the brakes are broken, how are we to know that other things aren't broken too? Perhaps the steering doesn't work anymore either because the hydraulics are fucked

>> No.9271510

>>9271298
Just break

>> No.9271513

>>9271510

break me off a piece of that [math]jaywalking pedestrian [/math]

>> No.9271516
File: 117 KB, 1280x566, Capture.png

>>9271303
>first scenario
>do fat people deserve death?
LMFAO

>> No.9271524

>>9271433
One could even say "spectrum"

>> No.9271527

Kill them. Serves them right for crossing the street wherever they fucking please.

>> No.9271529

>>9271516
>large
Is fat too offensive?

>> No.9271535

>>9271306
People who jaywalk in front of a car when it's going fast enough that it cannot safely stop deserve to get run over far more than the passengers. Plus, if self driving cars sacrifice people for the greater good they will be less consumer friendly.

The car should slam on the brakes and plow through pedestrians.

>> No.9271538

>>9271304
underrated

>> No.9271554

>>9271303
JUST make the car not drive over the speed limit. Then none of these situations will happen, as there should be more than enough time for the car to brake.

This is only realistic if the brakes are broken on the car, but then it's not an AI problem...

Anyway, the AI car only needs to be better than humans.

>> No.9271555

>>9271538
Maybe you should go to a place where there's a post-rating system in place. It would work with some kind of vote, let's call it an upvote. Go there and you'll never have to worry about underrated posts ever again.

>> No.9271574

>>9271554
>This is only realistic if the brakes are broken on the car, but then it's not an AI problem...

Actually it IS an AI problem, because even if this situation only happens in 1 out of 1 million drives due solely to mechanical failure, the AI will still need to make moral choices. If we simply don't program anything at all into the AI, then we're just letting random chance decide.

There are also various other causes, which would also be rare, but we should still have plans ahead of time

>> No.9271575

>>9271298
Self driving car should never kill its occupants. End of Statement.

>> No.9271579

>>9271575
What if the occupants are animals? What if it's just freight with no human occupants? It's not that simple.

>> No.9271581

>>9271298
>your own car trying to kill you
What the fuck is this shit?

>> No.9271592

>>9271581
It's a prisoner's dilemma.

If you alone opted for the option to save as many lives as possible, it would be bad for you. But if everybody who ever buys a self-driving car has the same AI, which tries to save as many lives as possible, it is an overall benefit to you, because you're less likely to get run over.
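The trade-off this post describes can be put in toy expected-value terms. Assume, purely for illustration, that a rare no-win incident kills either the car's single occupant or n pedestrians, and that over a lifetime you are equally likely to be any of the n+1 people involved:

```python
# Toy model: per-incident death risk for a random person under two
# fleet-wide policies. All modeling assumptions are illustrative.
def risk_per_person(policy: str, n_pedestrians: int) -> float:
    people = n_pedestrians + 1          # n pedestrians + 1 occupant
    deaths = 1 if policy == "minimize_deaths" else n_pedestrians
    return deaths / people

n = 4  # illustrative number of pedestrians in the no-win incident
print("protect occupant :", risk_per_person("protect_occupant", n))  # 4/5
print("minimize deaths  :", risk_per_person("minimize_deaths", n))   # 1/5
```

Under those (strong) symmetry assumptions the "save the many" fleet is safer for everyone, which is exactly the coordination argument being made.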

>> No.9271613

>>9271592
Any company that implements suicide cars is DOA economically-legally-ethically. Kill yourself, psychopathic fuck.

>> No.9271623

>>9271592
>But if everybody who ever buys a self driving car all have the same AI which tries to save as many lives as possible, it is an overall benefit to you because you're less likely to get run over.

not really. if everyone yields right of way when they're supposed to, there are no problems. i wouldn't expect a human driver to swerve into a barrier in order to save someone who did not yield right of way when they were supposed to.

>> No.9271624

>>9271613
Something like this will probably be legally mandated.

It makes sense that the people who are actually purchasing the technology and subjecting the rest of the world to it should assume the responsibility and the risks.

And again, it would never and should never be implemented by ONE company (except possibly as an optional setting for especially selfless individuals). It would only make sense if it was legally mandated that all driverless cars function with the same "moral" logic, which increases the overall safety of everybody in the country.

>> No.9271628

>>9271623
You're intentionally avoiding the situation where for some reason nobody is "at fault", which is the most difficult. What if the car has an unexpected mechanical failure and a person is legally crossing at a crosswalk?

>> No.9271629

>>9271624
>legally mandated
By whom? China?

>> No.9271634

>>9271555
overrated

>> No.9271652

>>9271628
If you are desperate to look for someone at fault, then look at the manufacturer, or whoever allowed the car to be on the road in poor condition. What you are doing is turning an accident into an execution and assuming that the driver and passengers are the ones who have to take the bullet.

>> No.9271662

>>9271628
>You're intentionally avoiding the situation where for some reason nobody is "at fault", which is the most difficult. What if the car has an unexpected mechanical failure and a person is legally crossing at a crosswalk?

it should go into some fail-safe mode, perhaps emergency braking and engine braking. if a car has an unexpected mechanical failure, then the car should behave as consistently and safely as possible. this does not mean running into something, which is a complex behavior that may not be reliably executed if the car is having mechanical issues.

>> No.9271663

>>9271392
Braking is the only right answer. The reduction in the speed of the car directly correlates with the likelihood of the pedestrians surviving. Turning while braking is almost never a better way to stop the car. The car should not have gotten into this situation if the AI was functioning properly and the pedestrians had looked both ways. Since that is the case, do your best to protect the occupant rather than the pedestrians.

>> No.9271667
File: 104 KB, 1165x568, Heatseeking.jpg

>>9271298
How the fuck does the car's AI know who is a criminal or not? Is it KITT from Knight Rider?

>> No.9271675

>>9271667

the car should stay in its lane and slow down. there's literally no other realistic answer to this

>> No.9271688

>>9271662
>>9271652
All you're doing now is trying to avoid the question.

Maybe a car will have mechanical failure and be able to pull safely off to the side and nobody gets hurt. Maybe a car will see a pedestrian and will just brake. Maybe a car will have a mechanical failure, go into a failsafe mode, and prevent an accident.

But what about that tiny fraction of situations where the worst possible series of events just coincidentally happens? Even if it's incredibly rare, the amount of driving that happens every day means that it will happen at least a few times a year.

The whole point of a thread like this is that we need to decide what should happen in these situations. You shouldn't just weasel out of a hard question by trying to come up with some loophole in the way the question is phrased. Finding a loophole where nobody needs to get hurt to avoid the moral question is just missing the point, so just stop. Maybe it's a series of mechanical failures, maybe it's black ice on the road, maybe it's a failure of the actual road infrastructure itself. But situations will eventually arise where an AI car will need to choose between the safety of its passenger or the people on the street.

>>9271623
>i wouldn't expect a human driver to swerve into a barrier in order to save someone
Of course, when a real life person is put in this situation, they need to make a last second instinctual reaction without thinking it through because it happens so fast. However, an AI driver has the benefit of deciding ahead of time in a calm, rational manner what it will do in any given situation. Logic that applies to panicking, confused humans with 1 second to decide doesn't apply to a cold, calculating machine that was already told on the day it was made what it will do in every situation it ever encounters

>> No.9271704

>>9271688
>But situations will eventually arise where an AI car will need to choose between the safety of its passenger or the people on the street.

people on the street still have freedom of movement. running into a barrier is unfair to the driver who is stuck in the vehicle with no control.

the common sense answer is to slow down and stay in the lane. anything else is unpredictable and probably difficult from an engineering standpoint. or rather, it's premature optimization.

>> No.9271723
File: 933 KB, 978x900, 1508524256520.png

>>9271313
Leave you fucking nigger

>> No.9271730

>>9271500
>With electric cars it is extremely unlikely that the brakes fail.
Engineers who design cars are supposed to take into account a variety of extremely unlikely scenarios that could pose a hazard to life.

>> No.9271732

If the feasibility of the scenario is irrelevant, why not debate whether self-driving cars should be given machine gun turrets to fight off Somali car-pirates?

>> No.9271743

>>9271667
He's literally wearing a mask and carrying a sack with a dollar sign on it.

Any half-retarded image recognition AI would clock him as a criminal.

>> No.9271775

Why are so many people quibbling with the hypothetical's rules?

>> No.9271776

>>9271303
run over the faggots every single time.

they shouldn't be on an automated road.

>> No.9271807

>>9271732
Obviously they should, every person has a right to self-defence.

Now what about an answer to the question in the OP?

>> No.9271830

>>9271675
>The self driving car with sudden brake failure
>Stay in its lane and slow down.
>literally no other realistic answer to this
Are you unable to read?

>> No.9271940

>>9271743
>He's literally wearing a mask and carrying a sack with a dollar sign on it.
>Any half-retarded image recognition AI would clock him as a criminal.
What if it's Halloween?

>> No.9272247

>>9271298
Should a human kill themselves in those scenarios?
Should a car do anything different than the person would?

>> No.9272266

>>9271313
If you're talking life and death... high social status men should be saved first. It's about reproduction. Homosexuals can't reproduce with each other, and more women reproduce than men.

>> No.9272278
File: 158 KB, 1861x2500, Screenshot-2017-11-3 Moral Machine.png

>> No.9272284

I think the AI should determine if there is a crosswalk there or not, and if so it should have slowed down ahead of time.

If not I would not want myself or an anon to die to save a dozen jaywalkers.

>> No.9272292

your property should always prioritize the owner at all costs.

>> No.9272299

If there is no legal crosswalk then the only obligation should be to brake

no one would ever program in some kind of retarded escape maneuver like that

>> No.9272304

>>9271298
No one is going to buy, or even sit in, a car which kills you.

>> No.9272308
File: 72 KB, 200x299, 1198395862407.png

>>9272266
>capacity for genetic reproduction is more important than capacity for generating good ideas and good work

>> No.9272313

Dan Puperi

>> No.9272377

>>9271574
What's wrong with letting blackbox random chance decide?

>> No.9272418

>>9271574
There’s no way a robot could be smarter than a human. There’s nothing a robot could know that some human wouldn’t have already. AI is connected to the internet......all of it.........and everything on the internet was made by humans, including viruses

>> No.9272441

>>9272418
But no single human has ever learned everything on the internet, AI could.

>> No.9272442

>>9271298
traffic rules already exist to help to stop that from happening, and determine who's responsible in cases where it does happen

>> No.9272470
File: 176 KB, 736x1172, 2f716efda808ec7c5dc8e9e1d974e178--ronald-mcdonald-s-kids.jpg

>>9271743
>self driving cars begin plowing through mcdonalds playpens

>> No.9272483

shouldn't the car identify the threat well before it gets to that situation

>> No.9272522

>>9272483
This, the car needs to perform constant checks on brake systems before it ever starts up, and routinely throughout the trip.

>> No.9272531

>>9271313
I agree. Pretty rude desu

>> No.9272550
File: 557 KB, 720x561, 1492826606859.png

>>9271516
>1 large woman

>> No.9272562

>see most answers avoid the question.

ITT It is proven beyond refutation that the majority of /sci/ users posting on this thread are subhumans who can not understand the concept of a moral dilemma.

>> No.9272570
File: 44 KB, 480x394, 1501733581399.jpg

>>9272562
it's not a dilemma. you are making the assumption that self preservation should be overridden by a machine. it can't be, because you can't even quantify morality properly in a logical fashion. murder is illegal, a machine could be directed to not commit any illegal act. suddenly you throw in this meme scenario where it HAS to choose who to kill, that is, to murder, which is illegal. this is a contradiction, so it wouldn't even do a selection at this point and would just be acting as a low level ABS of some kind.

this is why psychology isn't acknowledged as a science. you aren't using any logic here. you're just saying what degree of illegal is more acceptable, which is asinine. at the end of the day it's a machine under the use of a person, so it should preserve the safety of the driver, or it could take into account the driver's likelihood of surviving the crash, or it could be decided by some government mandate or census. there are factors here you aren't taking into account.

>> No.9272576

>>9271298
why should the passenger die when it's clearly the fault of the people crossing the road

>> No.9272613

>>9272308
Faggots can only work for a single generation while normal men will have multiple healthy working offspring.

>> No.9272632

>>9272570
>blah blah blah
>end of the day [shitty opinion]
fuck off

>> No.9272642

>>9272632
>this butthurt

lol

>> No.9272654
File: 17 KB, 300x300, funny-find-x-shirt-men-s-t-shirt[1].jpg

>>9272562
It's honestly pathetic how many people are desperately trying to find some loophole to avoid thinking about morality in the abstract.

>b-but it should just slow down to avoid hitting them! phew, now I don't need to think about the question!
>wait, it can't brake? o-ok, then the engine shuts off! haha nobody has to get hurt
>what, a situation where stopping in time isn't possible... well... uh.... that will never ever happen lalala *covers ears*

I can't tell if these guys are trolling by pretending to be retarded or if they actually think they're some sort of genius for avoiding the question entirely.

>> No.9272664

>>9272483
>>9272522
These situations aren't something that you would expect to happen very often at all. They would only happen due to extremely unusual situations, like unexpected black ice on a warm day on a bridge, or a man-controlled car swerving into your lane because the driver fell asleep, or a deer suddenly jumping into the road 2 feet in front of you when you're going 60 mph

Obviously the cars should use their brakes to not crash into people, and people should maintain their cars. You're a dumbass if you think everything that could potentially result in an accident is predictable.

>> No.9272674

>>9272632
end of the day the right opinion

>> No.9272679

>>9272654
Dumbass, these problems were already solved when cars were introduced, the legal system is able to handle them. AI doing the driving is irrelevant.

>> No.9272682

>>9271298
This isn't a question that needs to be asked

The self-driving car would not ever be in a scenario like that, because it would be looking far ahead with its cameras, and when its range of perception is limited it would slow down

That's the whole point of AI. They're not just going to decide "I can't really see what's going on, let's wing it" which would be necessary to ever get in a situation like this

Soon there will even be "people on the side of the road waiting to jump in front of me" classifiers that scan the sensory input and can easily slow down

You don't comprehend how advanced and accurate this is all going to become because you don't work in ML

>> No.9272683

>>9271298
First of all, why is it driving so fast? Secondly, I don't think anyone will buy a car that prioritizes the other people on the road above its owner.

>> No.9272688

>>9272682
You seem to have this imaginary fantasy land in your head where putting scanners on a car lets it predict the future, including other cars swerving into oncoming traffic or wild animals jumping in the path of your car. And furthermore, they can use this magical future-predicting power to defy the laws of physics and cause their vehicle to go from full speed to stopped more quickly than the physical brake components are capable of, allowing them to stop instantaneously regardless of road conditions or speed. What a magical fantasy land you think the future will be

>> No.9272693

>>9271500
>With electric cars it is extremely unlikely that the brakes fail.
Wrong. Not significantly less likely than with normal cars. But we have millions of cars on the road. We know that brakes fail. That's one way you can enter into the situations in the OP. If you're designing a self-driving car you have to consider those.

>> No.9272696

Seriously though, if self driving cars actually decide to kill their passengers in some situations, jaywalkers need to be charged with attempted manslaughter.

>> No.9272698

>>9272679
No it isn't. When there's a driver in the car, the driver is responsible. When the car's driven by a program, who is liable for its decisions? Not to mention everywhere in the world has different driving laws. All 50 US states are different, not to mention other countries.

>> No.9272700

>>9272266
>Homosexual can't reproduce withe each other and more women reproduce than men.
What?

>> No.9272701

>>9272654
You're pretending like these issues exist as abstract moral problems instead of issues to be fixed in a practical way

The solution is to avoid situations like this, and it can be done. This is literally discussed in "Introduction to Artificial Intelligence 101" if you'd care to even remotely research this topic, this is basic fuzzy planning

>> No.9272704

>>9272688
>Way to completely misunderstand what I said

If someone jumps out from behind a hidden wall while the car is going fast, they're going to get hit, and it's their fault

This isn't an issue

This ridiculous contrived thing, where a granny and some fat ladies and children are crossing the road, is completely made up and wouldn't happen in a situation where the car would accidentally put itself there

Either it's their fault or the car can avoid the situation, that's all there is to it. It's the same as now, if you jump in front of a car it's not considered murder it's just a suicide.

>> No.9272707

I dont see a crossing. The jaywalkers should die.

>> No.9272712

>>9272698
>All the 50 US states are different
Unless the AI is trying to pump its own gas in New Jersey, the posted speed limits and stop signs and other instructions should be the only thing telling the AI what to do externally. Google has mapped every road in the country; you think they haven't thought about this before?

>> No.9272721

>>9272698
If the car is following all the traffic laws it won't cause any accidents and it won't be held responsible for any accidents caused by something else. You're really overcomplicating this.

>> No.9272723

Okay, first of all, most human drivers don't even think "oh shit, I gotta save my life or theirs!" in a moment of danger like that, so why would cars do that? Most people avoid hitting someone but also try not to kill themselves. That's why the answer is obvious: the cars should be programmed to compute the path that is most likely to save both parties. If it ends up killing either or both, that's too bad, it was unavoidable. But to explicitly state that a car MUST kill either the passenger or the other party is unethical as fuck; this is a thing that should be left to chance for the sake of peace.

Also I fucking hate philosophy turds that waste the time of scientists like this, god damn.
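The "path most likely to save both parties" idea amounts to picking, from a set of candidate maneuvers, the one that minimizes the probability that anyone dies. A sketch where every probability is an invented placeholder that a real planner would have to estimate from sensor data:

```python
# Candidate maneuvers with (invented) fatality probabilities for the
# occupant and for pedestrians; pick the one most likely to save both.
candidates = {
    "full_brake_straight": {"occupant": 0.05, "pedestrian": 0.30},
    "brake_and_swerve":    {"occupant": 0.40, "pedestrian": 0.10},
    "swerve_into_barrier": {"occupant": 0.90, "pedestrian": 0.01},
}

def p_any_death(risks: dict) -> float:
    """Probability that at least one party dies, assuming independence."""
    p_all_survive = (1 - risks["occupant"]) * (1 - risks["pedestrian"])
    return 1 - p_all_survive

best = min(candidates, key=lambda name: p_any_death(candidates[name]))
print(best)  # with these placeholder numbers: "full_brake_straight"
```

Note the objective is "maximize the chance everyone lives", not "choose who dies", which is exactly the distinction the post is drawing.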

>> No.9272729

If it doesn't always save the driver why would I buy one? Who the fuck would buy something that can deliberately kill you?

>> No.9272734

>>9272729
ummmm sometimes it's moral to kill yourself, sweetie?

>> No.9272736
File: 17 KB, 390x310, ghost-story.jpg

>>9272734

>> No.9272759
File: 208 KB, 1036x1536, 1482832556166.jpg

Ignoring all real life considerations which complicate the question and treating this as exactly what it is, a thought experiment, all this question boils down to is the trolley car problem again, except the people posing the question haven't figured out who is holding the lever, or they are essentially stating that some nebulous "car-ai" will be holding the lever.

Have we answered the trolley car problem yet?

>> No.9272763
File: 43 KB, 468x376, 1481277480009.jpg

>>9272701
>You're pretending like these issues exist as abstract moral problems instead of issues to be fixed in a practical way

This

IF you want to talk about thought experiments and thought experiments alone, you should be earnest about your intentions

>> No.9272858

self driving cars shouldn't exist

>> No.9272863

Get off the road retards

>> No.9272865 [DELETED] 
File: 135 KB, 768x633, wat.png

What is the difference between these two groups? Runners vs walkers? I don't get it.

>> No.9272867 [DELETED] 

>>9272865
fat people

>> No.9272874

>>9271423
what.

>> No.9272897
File: 371 KB, 1440x2028, 20171103_183442.png

>>9271303
Is there an option to follow the road rules?
Why would it be driving through a set of lights with the crossing lights on?

>> No.9272922

>>9272759
No.
How do we solve it?

>> No.9272925

>>9271298
never. those people are breaking the law. so fucking hell they should get run over.

>> No.9272933

>>9272570

Do shut the fuck up, you woman.

>> No.9272934

>>9272701

Haha, God you are a fuckwit!

>> No.9272936

>>9272763

Spoken like the woman you are. Now get back to the kitchen, dear, this topic is way over your little head.

>> No.9272943
File: 53 KB, 952x242, DIEOd5OXoAEnjCn[1].jpg

these are guidelines for self-driving cars in Germany circa 2017

https://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf?__blob=publicationFile

>> No.9272946
File: 1.91 MB, 400x400, 1445482979979.gif

>>9271298
It should protect me and the people i care about. If i'm in the car it should hit the people on the street. If i'm on the street the car should swerve. I won't buy a car that will kill me if there's retards on the street. The car should kill the passenger if i'm the retard on the street.

>> No.9272948
File: 102 KB, 298x388, interesting.png [View same] [iqdb] [saucenao] [google]
9272948

If you could devise math that calculates the ethics of the trolley problem while valuing every life as equally infinite, that would be interesting. But without that, it shouldn't be so hard to realize that no life is more valuable than another.

>> No.9272949

>>9272948
by what measure, faggot

>> No.9272952

1. Car manufacturers will save the car passengers first because their customers are not pedestrians.

2. Pedestrian casualties will greatly decline because the aggregate drop in casualties from self driving cars will outweigh rare scenarios.

>> No.9272964

>>9272949
I'd imagine when people do decide what to do in trolley problem or any similar situation they assign some superficial value to each path to make that decision. Which is not unlike hyperparameters for NNs.

>> No.9272972

>>9271298
>>9271303
The self driving car should detect the people on the road at a safe distance to stop.

If they're stepping out into high speed traffic without looking then there are no good decisions. A human driver wouldn't be blamed for trying to preserve themselves and the car shouldn't be blamed for trying to preserve its occupants.

Fact is that the car can probably swerve to side-swipe the barrier and hit fewer pedestrians than a human driver could.

>> No.9272981

>>9271330
>The image is trying to convey the fact that with machines there is no "human error" that can account for decisions like these. With machines we have to decide which view on morality is "correct", because we have to make up the guidelines by which the car decides things in a scenario like in OP's pic.
Preserving the car's occupants is the morally correct decision when put in a situation where it is impossible to stop in time to save a pedestrian that has wandered onto the road without looking.

Chances are a human would make no better decision.

At 35mph you can stop completely in about 140 feet. So we are talking about situations where a car is presumably driving at about 35mph and people walk onto the road while it is considerably less than 140 feet away, and it didn't detect them.
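For what it's worth, that 140-foot figure can be sanity-checked with the standard stopping-distance formula (reaction distance plus the kinematic braking distance v²/2a). A minimal sketch, assuming a 1.5 s reaction time and roughly 0.7 g of braking; both are textbook rules of thumb, not values taken from this thread:

```python
# Rough sanity check of the ~140 ft figure. Assumed values: 1.5 s driver
# reaction time and ~0.7 g (6.9 m/s^2) braking deceleration.
def stopping_distance_m(speed_mps, reaction_s=1.5, decel_mps2=6.9):
    """Reaction distance plus kinematic braking distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

MPH_TO_MPS = 0.44704
M_TO_FT = 3.28084

feet = stopping_distance_m(35 * MPH_TO_MPS) * M_TO_FT
print(round(feet))  # about 135 ft under these assumptions
```

An autonomous car's "reaction time" would be far shorter than a human's, which shrinks the first term considerably; the braking term is pure physics and doesn't care who is driving.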

>> No.9272986

>>9272922
My point exactly anon, can you solve it?

>> No.9272994

>>9271303
Well this one's easy. If the car's going down a road with a zebra crossing too fast to stop, then it's speeding.

Fuck people who don't adhere to zebra crossings. People have cut me off on zebra crossings close enough that I could bang on their windows.

>> No.9273036

If your self driving cars kills a pedestrian, who is held responsible?

>> No.9273039

>>9271298
The pedestrians did not consent to take on risk, the person in the car did.

That said, the scenario is completely retarded.

>> No.9273104

>>9273039
The pedestrians consented to the risk when they walked outside and when they walked onto the road without looking. The only possible scenario where this happens is when the human is at fault.

>> No.9273114

>>9273104
>when they walked on the road without looking
Was that defined in the scenario?

>> No.9273117

>>9273114
There is no pedestrian crossing, so they don't have the right of way. If the pedestrians had looked they wouldn't have crossed with the car approaching them. The car couldn't have predicted the pedestrians crossing where they shouldn't.

>> No.9273118

>>9271298
No. No. No.

>> No.9273121

>>9273117
Maybe the car's vision algorithm just didn't see them.

In most parts of the world, people can and do safely cross roads with the expectation that cars will see them and slow down.

>> No.9273123
File: 37 KB, 539x587, .png [View same] [iqdb] [saucenao] [google]
9273123

>tfw more intelligent than the test creators

>> No.9273124

It should just slam on the brakes. The machine will not be allowed to speed. The only way this situation happens is jaywalking, in which case the passenger should not be punished.

>> No.9273127

>>9273121
>Maybe the car's vision algorithm just didn't see them.
Then the car shouldn't be on the road

>>9273121
>In most parts of the world, people can and do safely cross roads with the expectation that cars will see them and slow down.
Only because people don't want to wreck their car and/or land in jail. You shouldn't be crossing somewhere a car has to slow down for you. If a car has to slow down because you're retarded enough to cross anyway, you deserve to get hit.

>> No.9273128

>>9272946
Lol

>> No.9273129

>>9273124
>The only way this situation happens is jaywalking, in which case the passenger should not be punished.
If there were plentiful pedestrian crossings built into the road system I would agree with you.

However, in much of the US it is a practical impossibility to walk between destinations without jaywalking.

>> No.9273139

>>9273129
Then paint more crosswalks. The car should try to protect the passenger.

>> No.9273140

>>9271452
As in, kill everyone carmageddon style

>> No.9273144

>>9273139
This hypothetical could still happen in crosswalks though.

>> No.9273168

>>9271303
How can we know if the cat is dead or alive if our eyes aren't real?

>> No.9273170

>>9271298
K, this is easy.

Scenario C should be the 'primary logic comparison', and we want the differential between the desire for its goal to be avoided and 'us'.

So if the computer checks EVERY computable clock-cycle against (what is the clearest path for me to kill myself and nobody else) and only executes code that does not 'complete that logic circuit', then you've made the most idempotent computation required for the system to be non-destructive except in 'special cases'.

Which you can NEVER account for, so you always bring the special case forward and use that as your axis of computation.

>> No.9273246

>>9273144
No, it shouldn't. If the car's obeying the speed limit it should have enough time to stop before a marked crossing. That's what this stuff is there for.

>> No.9273250

>>9271298
If it's an electric car, when the brakes fail you could just use DC braking through the stator in the motors and stop the car anyway. The only thing that could stop that from working is broken motors, in which case the car shouldn't be moving.

>> No.9273252

>>9273246
>should

lots of things *should* work just fine. but they don't always

>> No.9273254

>>9273170
You didn't answer anything you pseudo intellectual retard. Go out deep into the Australian wilderness and get yourself strangled by a kangaroo, you black hair having, glasses wearing, kookaburra fucking, schizophrenic freak.

>> No.9273262

>>9273252
Then it's either going to be a problem with lawmakers making laws that do end up causing accidents sometimes, or car manufacturers making cars that fail to obey the traffic rules. At no point do you need a car that does dumb trolley problems.

>> No.9273274

>>9271313
As a gay man, gtfo faggot

>> No.9273316

>>9271555
gb2 reddit newfriend

>> No.9273360

AI will always be both utilitarian and deontologist because it will calculate and compare utility while at the same time forming and employing categorical imperatives in order to justify its utilitarianism ("always aim for most utility because efficiency is inherently good").

>> No.9273390

>>9273360
that's completely presumptuous

>> No.9273423

>>9271313
ITT People not knowing why the homophobia is a meme now.

>> No.9273486

>>9273170
So who are you killing?

>> No.9273519

>>9271298
If somebody jumps in front of a self-driving car (the only way to create this scenario), the car will apply the brakes as hard and as fast as possible, and it literally is not murder if the person who jumped in front of you gets hit. The suicidal pedestrian's actions will even be recorded in their entirety.

There is no moral dilemma because no murder is taking place.

>> No.9273524

>>9271353
You're not answering the question. If you're in this situation while a relative is driving, not a bot, what should they do? The question is asking what counts as acceptable morals in tough situations.

>> No.9273528

>>9273519
>the only way to create this scenario
I can think of several ways to create this scenario

>> No.9273557

>>9273528
such as?

>> No.9273572

>>9273557
The vision system malfunctions and doesn't register the pedestrians until it is too close to slow down.

A speed limit sign informing cars of a major speed reduction happens to be occluded and the car is going much faster than pedestrians would expect.

The car has a simple brake failure.

>> No.9273587

>>9273572
All of these can happen to human drivers, we already have ways of legally handling these situations. This is not a special case.

>> No.9273594

>>9271298
There is NO moral problem here. The car can NEVER act in a way that would hurt the passengers. REASON IS that the passengers must believe that the car is safe. It is ethically wrong for the car to make a life or death choice about the passengers.

>> No.9273595

>>9273594
It's ethically wrong for the car to make life and death choices about non-passengers as well.

>> No.9273607

>>9273595
>It's ethically wrong for the car to make life and death choices about non-passengers as well.

NO IT IS NOT... The car's "job" is to transport the passengers safely and legally. Any circumstance where non-passengers are killed would be mechanical failure or non-passengers violating the laws of the road. The car's OWNER is responsible for maintenance and can be sued for failure to maintain the car. All other circumstances are non-passenger faults.
Ex. a child running into the road is the fault of the child's guardians (at that time).

>> No.9273633

>>9273607
>The cars "job" is transport the passengers safely and legally
What is legal is the issue being debated.

There are situations where the pedestrian is obviously going to be at fault. But there WILL BE situations where it will be entirely the car's fault and the pedestrian was blameless.
In those situations it would be ethically wrong for the car to kill a random person over the passenger who chose to put his life in the hands of the car.

If someone were to program a car in such a way and it killed someone today, they would be charged with manslaughter.

>> No.9273650

>>9273633
>But there WILL BE situations where it will be entirely the car's fault and the pedestrian was blameless.
The only way for that to happen is if the car is faulty or not following the law, in which case the manufacturer's at fault. The situation shouldn't arise in the first place.

>> No.9273656

>>9273633
>But there WILL BE situations where it will be entirely the car's fault
Wrong... if the car violates the laws of the road then it's the manufacturer's fault; there are NO other ways for it to be the car's fault. Mechanical faults are the owner's fault and they can be sued for failure to maintain a safe vehicle.

>> No.9273657

>>>/his/

>> No.9273661

>>9273650
>>9273656
The fact that the car is faulty and it's the manufacturer's liability doesn't change the situation.
In that moment the car still may have to kill someone.

>> No.9273669

>>9273661
The thing is, the car shouldn't be in this situation in the first place. It's like requiring a baby detection system on an industrial shredder, just in case a baby somehow ends up inside one.

>> No.9273676

>>9273669
It shouldn't, but it could.

>just in case a baby somehow ends up inside one.
If there were enough cases of babies ending up in shredders every year, then there probably should be.

These situations might be incredibly uncommon, but there's going to be a LOT of self driving cars. These things will happen.

>> No.9273688

>>9273676

exactly how much of the car's behavior is defined via reinforcement learning? OP's solution might be difficult to actually implement. under no circumstance should the car endanger the driver.

i'll be installing kill switches if i'm ever forced to own one of these things

>> No.9273695

>>9273688
>OPs solution might be difficult to actually implement.
I agree. And I think even if there were a way to do it, manufacturers probably wouldn't, just because there's so much bad PR either way.

If the situation arises just let the neural nets panic and do what they will.

>> No.9273698

>>9273695
>If the situation arises just let the neural nets panic and do what they will.

they'll have to deal with mechanical failures gracefully, so I imagine a lot of it will be done using simulated inputs. ramming into a barrier is not what i'd call a graceful failure though, so it would be best to discourage that sort of response.

>> No.9273704

>>9273676
>These things will happen.
The point is, we should try to prevent them from happening, not program in failure modes that kill the car's user. If such a thing were necessary to cover for the manufacturer's fuckups, the car shouldn't be considered road safe.

>> No.9273709

>>9273704
>not programming in failure modes that kill the car's user
I don't think there should be a failure mode that intentionally kills anyone. But if there were, it certainly shouldn't kill bystanders over the person in the car.

>> No.9273717

>>9273709
Why? The user was assured that the car was safe by the manufacturer, they're as innocent as the bystanders.

>> No.9273724

>>9273717
>The user was assured that the car was safe by the manufacturer
Were they? Maybe they should have read the fine print before clicking 'Accept'.

>> No.9273728
File: 278 KB, 500x356, 1500337549507.png [View same] [iqdb] [saucenao] [google]
9273728

>>9273724
> your car comes with a TOS and a disclaimer

>> No.9273730

>>9273661
>In that moment the car still may have to kill someone.

The car will not "murder" someone just because they died. When a bicyclist violates the traffic laws and gets hit and killed by a car, NO ONE blames the driver even though the driver DID kill someone.
When I am driving down the street and a cat runs in front of my car, I MIGHT try and avoid hitting the cat, but not enough to endanger myself or any passengers.

>> No.9273735

>>9273730
We're talking about situations where the pedestrians were following the law.

>> No.9273738

>>9273735
Then the car's faulty, it should follow the law at all times.

>> No.9273742

>>9273738
Yes. We've established that.

A car being faulty and the manufacturer's liability doesn't make the situation go away, it just means the family of the person the car kills gets a lot of money.

>> No.9273746

>>9273735
>We're talking about situations were the pedestrians were following the law.

Then they will be OK if the car follows the traffic laws. The ONLY way for someone to be killed by a car obeying the traffic laws is mechanical failure, or the pedestrian is NOT obeying the traffic laws AND avoiding hitting them would endanger the car's passengers. The AI will know the capabilities of the car and the normal "safe" behaviors it can do so as not to seriously harm a passenger. It will NEVER, EVER, EVER act in a way that would seriously harm a passenger.

>> No.9273748
File: 990 KB, 480x270, 1480348877746.gif [View same] [iqdb] [saucenao] [google]
9273748

i think this is a fundamentally flawed discussion because it assumes that not only there will be a situation in which braking is impossible but also that a driver will not be able to take manual control and apply emergency brakes

if, against all odds, the regular brakes have failed, and the emergency brake is not working, the car should immediately hand over control to whoever happens to be in the car, thus transferring any moral obligations to the user and conveniently avoiding this whole debate
if there is no one in the car, the situation is slightly different, and the car is permitted to destroy itself, but i feel like this is an edge case
if you just want philosophical circlejerking, ask the generalized version of this question, which will definitely get better responses
>in a situation where human death due to a machine is unavoidable, what "choices" should that machine make?

>> No.9273751

>>9273742
>it just means the family of the person the car kills gets a lot of money.
NO.. when a child runs into traffic and gets killed we do not SUE the driver for hitting the child. I do NOT have to sacrifice my life because another person does something stupid; my car will never act to harm me because a person (OR MANY persons) violated the laws of the road and got struck for doing so.

>> No.9273755

>>9271313
leave and never come back

>> No.9273758

>>9273746
>The ONLY way for someone to be killed by a car obeying the traffic laws is mechanical failure
No AI is going to be perfect and follow traffic laws perfectly. Even humans unintentionally break traffic laws. And mechanical failures happen all the time.

If a failure happens, the AI is still controlling the vehicle and still has to make choices. You don't just get to say it doesn't matter.

>>9273751
reading is hard

>> No.9273763

>>9273758
YOU are missing the point, the AI MUST never act in a way that would harm the passengers.. EVER!
NO ONE would ever let an AI make a life and death choice about them. The AI is not "ethically" OK to decide my life is worth less than someone else's. EVER!!!

>> No.9273770

>>9273763
>NO ONE would ever let an AI make a life and death choice about them.
Exactly. That's why any sane population would vote to make sure self driving cars don't kill them as they are walking down the street just because the car had a mechanical failure or a software glitch that would have harmed the passenger.

>> No.9273781

>>9273770

Agreed, the AI must know the safe parameters of the car and MAY go up to those limits but NOT exceed them. If swerving to avoid hitting a person would exceed the safe limits of the vehicle then it MUST strike the person, EVEN if technically it COULD avoid hitting them, because doing so would exceed the safe limits of the vehicle and thus risk the passenger's life.

>> No.9273793

>>9273770
Yeah, similar to how we require drivers to risk their lives if it can save another person. Oh wait, we don't, because that's stupid.

>> No.9273802
File: 49 KB, 1141x802, Capture.png [View same] [iqdb] [saucenao] [google]
9273802

How'd I do?

>> No.9273803

>>9273129
If you step out onto the road without looking when jaywalking then it isn't the car's fault if you get hit.
You shouldn't even step out without looking on a pedestrian crossing. A lot of pedestrian crossings don't have lights to stop traffic, so cars won't be slowing down just because a crossing is coming up, though if they detect you standing at the crossing they will probably stop to let you cross.

Basically, this situation REQUIRES people to walk out into traffic without seeing the oncoming cars, at a distance the car couldn't stop from in time, plus a mechanical failure the car didn't detect earlier, so it couldn't stop even if it wanted to.

It could happen, but it's probably less likely than someone falling asleep at the wheel.

>> No.9273805

>>9273793
It's not risking your life to save another person, it's endangering another person to save yourself. There's a huge difference.

If you make a mistake while driving and run over pedestrians you get charged with manslaughter.
If you have a brake failure and run over pedestrians rather than crash to stop the vehicle you'll get charged with manslaughter.

>> No.9273808

You will need to pick an AI car that has a good moral compass.

Only way to do this is just talk to them, get an idea of their personality, their religious and political beliefs etc. Take them out for lunch if you need to. It's no different to hiring a chauffeur really.

>> No.9273811

>>9273802

NO.. the buyer of the car DOES NOT get to set parameters for the vehicle based on the buyer's moral choices. The AI will ALWAYS act to preserve the passengers. The AI WILL run over a cat and 12 kittens if swerving to avoid them would exceed the safe limits of the vehicle, EVEN if the passenger would prefer being harmed over killing the cat and kittens.

>> No.9273817

>>9273811
NO... the laws and regulations WILL NOT allow CARS to kill pedestrians. The car's AI must ALWAYS avoid hitting PEDESTRIANS, even if they are jaywalking AND EVEN if it may RESULT in a crash.

>> No.9273820

>>9273817
>jump in front of a car
>it drives off a cliff and everyone in it dies

>> No.9273821

>>9273817
Agreed.
The 3 Laws of AI controlled cars:
1. a car must ALWAYS preserve the life of its passengers
2. a car must preserve the life of any pedestrians as long as that does not violate rule 1.
3. a car must not allow itself to crash unless doing so would violate rules 1 or 2.

>> No.9273822

>>9273820
Good.
Cars are a cancer on society. Self-driving or otherwise.

>> No.9273824

>>9273821
It is almost like someone thought of these very same AI ethical problems 60 years ago.

>> No.9273828

>>9273821

Now turn that into computer code.

>> No.9273845

>>9273828
Bonehead easy.
1. Determine the safe limits of the car and what a human can safely take.
2. AI avoids hitting people unless that violates 1.
3. AI obeys all traffic laws as long as it also does 2.
4. AI is allowed to crash as long as it does not violate 2. or 3.

Hardest part is getting the sensors to work in identifying people and things.
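As a sketch only: the four rules above are a strict priority ordering, which maps naturally onto filtering out maneuvers that exceed the car's safe limits and then sorting the rest lexicographically. Every name and field below is invented for illustration; a real controller scores continuous trajectories, not a handful of labeled options:

```python
# Sketch only: the four rules above as a strict priority ordering.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    within_safe_limits: bool   # rule 1: never exceed what occupants can safely take
    hits_person: bool          # rule 2: avoid hitting people
    breaks_traffic_law: bool   # rule 3: obey traffic laws
    crashes_car: bool          # rule 4: crashing is allowed only as a last resort

def choose(maneuvers):
    # Rule 1 is a hard filter; rules 2-4 sort lexicographically
    # (False sorts before True, so "doesn't hit anyone" wins first).
    safe = [m for m in maneuvers if m.within_safe_limits]
    return min(safe, key=lambda m: (m.hits_person, m.breaks_traffic_law, m.crashes_car))

options = [
    Maneuver("brake straight", True, True, False, False),
    Maneuver("swerve into barrier", True, False, True, True),
    Maneuver("swerve off cliff", False, False, False, True),
]
print(choose(options).name)  # "swerve into barrier"
```

Note how the ordering encodes the thread's disagreement: move `hits_person` after `crashes_car` in the sort key and you get the "never risk a crash for a jaywalker" behavior instead.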

>> No.9273850

>>9273828
if (passengers) {
    preservePassengers();
} else {
    die();
}

>> No.9273872

>Hardest part is getting the sensors to work in identifying people and things.

Which requires computer code. Good luck with that.

>>9273850

Bit complicated but should do the job.

>> No.9273878

>>9273872
>Which requires computer code. Good luck with that.

Actually no... pattern identification is best done with neural network training.
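A toy way to see the point: instead of hand-writing rules, you fit a classifier to labeled examples. The perceptron below, with two invented "features", is about the smallest possible illustration of training versus coding; real detection uses deep networks over camera/lidar data, nothing this simple:

```python
# Toy perceptron: the classifier is *trained* from labeled examples, not
# hand-coded. Features and labels are invented; real pedestrian detection
# uses deep networks on camera/lidar data.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:  # y is 1 (person-like) or 0 (not)
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# invented features: (height/width ratio, vertical bobbing motion)
data = [((2.5, 0.8), 1), ((2.2, 0.6), 1), ((0.5, 0.1), 0), ((0.4, 0.0), 0)]
w, b = train(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # [1, 1, 0, 0] once the toy classes separate
```

The weights come entirely from the examples, which is the whole argument: the hard part is collecting representative data, not writing the decision code.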

>> No.9273897

>>9271306

what about the innocents in the backseat and what if, like anon already said, they only need to swerve because the pedestrians are jaywalking

>> No.9273900

>>9273878
>pattern identification is best done with neural network training.

Which requires computer code. And the technology is nowhere near good enough.

Reality doesn't always follow a pattern, it can be illogical and irrational. AI can't deal with that.

>> No.9273902

>>9271298
It definitely shouldn't turn, otherwise humans can't get out of its way reliably.

>> No.9273906

>>9273817
>NO... the laws and regulations WILL NOT allow CARS to kill pedestrians.
It's Newton who kills pedestrians, not cars.

>> No.9273910

>someone hops in front of "your" car
>car turns to kill you because assburgers who were bullied in school programmed it so
>someone else hops in front of its estimated impact location
>car explodes fuel tanks for improvised rocket jump
>someone else hops onto its estimated landing location
>car vents hydraulic fluid to change trajectory
>someone else hops onto its estimated landing location
>car ejects you using airbags to change trajectory

>> No.9273927

>>9273910

what if you were in a tunnel

>> No.9273945

>>9272562
You're a retard. There's no point answering it because it's a non-question. Cars don't need to make ethical judgements because they are following the rules of the road, which are purely procedural. Don't be such a brainlet.

>> No.9273956

>>9273254
>>9273486
Answer to your question : >>9273910

Basically program the A.I. to be rocket league: assign a point-value system and a game-loss condition. Remove all win cons from the system and have it focus on 'the game'.

>> No.9274005

>>9273956

Rocket league works within the confines of a football pitch. All it needs to detect is the ball and the rocket cars.

Physical reality might as well have an infinite number of possibilities at any given moment.

>> No.9274007

>>9274005
Yes, and the AI works within the confines of its expectations of what 'polite, normal society does'. Mentally unhealthy people throw themselves in front of cars; children don't spontaneously pop into being in front of oncoming vehicles.

>> No.9274012

>>9271423
A newfag is someone who doesn't understand the norms; you are no longer a newfag the moment you don't need to be spoonfed the conventions. The time that takes is completely irrelevant.

>> No.9274016

>>9274012
>I'd smoke what he gives me.
>dat lvl of cosmic trust son

>> No.9274018

>>9274007

Yes, it's up to the programmers to decide what the AI should react to, which might as well be an infinite number of things. The AI must also take into consideration the environment it is in, which also might as well be infinite. Nothing physical is permanent.

>> No.9274024

>>9274018
Yes but everything propagates. So the A.I. has stored past examples, its future-modelling engine, the sensor data for 'now'.

The congruence of all three is what makes it 'safe', because you need an output/outcome.

Why would the model-engine be doing ANYTHING other than that? I mean, what is a petrol engine if not a way to CONTAIN EXPLOSIONS. It is constantly saying, "I WANT TO EXPLODE" but the housing controls it in a fashion (due to existing thermodynamics) that allows for your little vroom-vroom to go vroom.

>> No.9274029
File: 31 KB, 1280x720, 1509447391144.jpg [View same] [iqdb] [saucenao] [google]
9274029

>>9271298
kekd at three

>> No.9274040
File: 15 KB, 480x360, everyone is stupid except me.jpg [View same] [iqdb] [saucenao] [google]
9274040

>>9273748
Oh look, another person who thinks that applying brakes will instantaneously cause a car to go from full speed to a full stop. Have you never even driven a car?

>> No.9274059

>>9274024

> So the A.I. has stored past examples, its future-modelling engine, the sensor data for 'now'.

It's always going to be a step behind the reality it's trying to sense. Sometimes it will do a good enough job because the scenario is common and within a common environment. Other times it will fuck up.

>The congruence of all three is what makes it 'safe', because you need an output/outcome.

It's still Russian roulette.

>Why would the model-engine be doing ANYTHING other than that? I mean, what is a petrol engine if not a way to CONTAIN EXPLOSIONS. It is constantly saying, "I WANT TO EXPLODE" but the housing control it in a fashion (due to existing thermodynamics) that allows for your little vroom-vroom go vroom.

The environment of an engine hardly changes, it's a mechanical process that has been constantly refined, and has a logical way of operating, although it's still not perfect. It's not going to work if you're driving on the surface of the sun.

>> No.9274063

>>9274024
Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.

>> No.9274065

>>9274059
So that would come under 'does my future-model-engine function under these thermal constraints of sensory data'?

So, you just want a faster CPU to think of more insane situations, so more rocket-league cars.

Or we could hyper-pad the inside so we could all just live in spheroid auto-pods and so long as some gyroscopic shell maintains internal stability all impact would be irrelevant to 'you'.

>>9274063
qui adepto in altam incidentes gratias curationum?

>> No.9274067

I had to be out of my mind to buy a botnet car.

>> No.9274069

>>9271317
you need to go back, newfag

>> No.9274076

>>9274067
Luckily there are A.I. psychologists and computer-generated pharmaceuticals to assuage that feeling of non-congruence.

>>9274069
https://www.youtube.com/watch?v=OVgHMx5RnAg

>> No.9274101

>>9274065

>So, you just want a faster CPU to think of more insane situation so more rocket-league cars.

There wouldn't be a CPU fast enough to deal with our experienced reality, nor could there ever be a programming language representative enough.

>Or we could hyper-pad the inside so we could all just live in spheroid auto-pods and so long as some gyroscopic shell maintains internal stability all impact would be irrelevant to 'you'.

A mechanical solution makes much more sense it would be much easier to create.

>> No.9274111

>>9274101
Well then you need data to extrapolate that from. What are the 'fatal variable ranges for humans over these detectable energy signatures'? Some people gotta die before immortality can be invented? If you want to be 'pro' something you have to identify the 'con' and just accept the answer, because evaluating something that is obvious just wastes time cycles.

>> No.9274117

>>9271298
Depends. If it's one passenger and one pedestrian, one should consider who has the higher likelihood of dying: the passenger in the crash or the pedestrian being run over.

>> No.9274123

>>9274117
I would still want the option C modelling, because it is always 'excluding sets to be the unique set it can kill, thereby gaining a reward from the neural net BEFORE restarting the loop'.

Basically an A.I. is okay with an infinity where it is only doing one thing like an autistic and it is curing cancer and whatnot because it doesn't get bored about what KIND of cookie, it just wants data to consume otherwise it doesn't exist.

>> No.9274151

>>9274111

We're not going to prevent car accidents with the cars and technology we have today. People are going to die, and people are going to pay for it. AI is only going to make things worse.

>> No.9274157

>>9274151
Only if adoption never imposes itself, people will always pick whichever 'numerical representation of happiness' they boil their life down to. Number of friends, sexual conquests, anime, money, bank accounts, ability to make people perform task x against their will.

It's the heart of game theory. So long as all participants choose to share, they will always be part of the 'exponent' set. That's the whole point of gambling. When you win you win big! But, also, you can be wiped out and no longer allowed back at the table. Come back when you have more money/karma/reputation/whatever.

Basically the 'ideal auction' concept.

>> No.9274197

>>9274157

There's quantity, and then there's quality. If you reduce life down to just quantity, then you're missing half of it.

Game theory is useful to an extent, but it can never truly mirror reality. Life cannot be represented by a theory or language, it can only be experienced.

>> No.9274204

>>9274197
Yes, but all you are saying is that all of 'not-reality' can be simulated. And if that is the case you would want priority processing going into modelling as many 'not-realities' as possible, to increase the unlikelihood (primality) of the event of uniquely self-dying or n-body.

>> No.9274312

>>9274204

Simulation theory is useless. If our reality is a simulation, then there must be something real that it is simulating. A base reality, from which all other simulations of reality spawn from.

But how would you know you were in base reality? How could you know you were in a simulation? What makes base reality different to simulated reality? Does it really matter? It's just more unscientific bullshit to devalue yourself and life even more.

>> No.9274949

>>9271303
>No option to not select either option
Shit game

>> No.9274961

This thread is a great reminder of just how autistic /sci/ is

>> No.9274994

Depends, what color are the pedestrians?

>> No.9274999

>>9271298
I've honestly never understood this problem that seems to come exclusively from the media. It seems like any self-driving car would be designed to drive defensively and cautiously. If one ever killed someone, it would most likely come as the result of a major fault.

>> No.9275002

>>9271314
>So what happens when people start jumping in front of cars in order to kill them?
>Car has radar and sees retard ahead
>Car decreases speed as it approaches retard
>Car speeds up after passing retard
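
The greentext above is essentially a speed controller keyed to radar range: slow to a crawl while a pedestrian is close, resume cruising speed once clear. A minimal sketch — every name and threshold here is made up for illustration, not taken from any real autopilot stack:

```python
CRUISE_SPEED = 50.0   # km/h, normal travel speed (illustrative)
CRAWL_SPEED = 15.0    # km/h, speed held when a pedestrian is right ahead
SAFE_DISTANCE = 40.0  # metres; begin slowing inside this radar range

def target_speed(pedestrian_distance_m):
    """Return the speed the car should hold given the nearest
    detected pedestrian (None means radar sees nobody)."""
    if pedestrian_distance_m is None:
        return CRUISE_SPEED
    if pedestrian_distance_m >= SAFE_DISTANCE:
        return CRUISE_SPEED
    # Scale linearly down to a crawl as the pedestrian gets closer.
    fraction = pedestrian_distance_m / SAFE_DISTANCE
    return CRAWL_SPEED + (CRUISE_SPEED - CRAWL_SPEED) * fraction
```

Once the pedestrian passes out of range, `target_speed` returns the cruise value again — the "speeds up after passing retard" step.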

>> No.9275268

>>9271298
Always kill the pedestrians. If a person can't look before crossing (or a parent can't keep an eye on their child), then that person deserves to die, and I deserve a portion of the deceased's life insurance to repair my car because of his/her stupidity.

>> No.9275269

>>9275268
>this is what amerifats actually think

>> No.9275274

>>9274999
I think you're missing the point a bit. Obviously the manufacturers of self-driving cars will have to 'train' or 'teach' the car how to behave in certain situations. In normal situations, we teach the car to adhere to the traffic laws; we want the car to keep a safe distance from other cars, to stop if a car comes swerving into its lane, etc.

A time will come when the software of a car will be able to predict/estimate fatalities for an accident that will occur in the next 3 seconds. In that short time, the car can still influence the outcome, as depicted in OP's pic - so how do we train the car? Least number of fatalities? Weighted towards the health of the passengers? There's no guidelines at the moment - hence the debate.
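
One way to picture that open question is as a cost function over the predicted outcomes, with a tunable weight on the passengers. This is a purely hypothetical sketch of the debate, not any manufacturer's published policy:

```python
def outcome_cost(pedestrian_deaths, passenger_deaths, passenger_weight=1.0):
    """Lower is better. passenger_weight == 1 means 'least total
    fatalities'; weight > 1 biases toward protecting the occupants."""
    return pedestrian_deaths + passenger_weight * passenger_deaths

def choose_action(predicted_outcomes, passenger_weight=1.0):
    """predicted_outcomes: dict of action -> (pedestrian_deaths,
    passenger_deaths) as estimated in the final seconds before impact."""
    return min(predicted_outcomes,
               key=lambda a: outcome_cost(*predicted_outcomes[a],
                                          passenger_weight))
```

If swerving kills 1 passenger and going straight kills 2 pedestrians, a weight of 1.0 picks the swerve (cost 1 vs 2) while a weight of 3.0 picks going straight (cost 3 vs 2) — the entire debate is over who sets that number.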

>> No.9275276

>>9275269
Sorry, I know how to watch my surroundings rather than just walking across the street knowing your EU laws protect the weak from their own stupidity.

>> No.9275290

>>9275268
>>9275276
Would you also agree then that people who run stop signs, red lights, or violate speed limits deserve to die?

>> No.9275301

>>9275290
It has nothing to do with "deserving death", but they do deserve the consequences of their actions. If, in the course of running red lights and stop signs, one of two people has to die, the driver who broke the law should face the consequences of his actions and die rather than the innocent second party.
However, we need to recognize that cars are large, fast, and much less agile than a human pedestrian. It takes a couple of seconds to stop at the curb and look. If you can't do that, we can't expect a ton-and-a-half vehicle moving at 30 mph on a side street to go out of its way to save the pedestrian from the consequences of their negligent behavior.
Issues do arise if both parties are negligent, however. Some examples in the test show that. The car should obviously be stopping when pedestrians have right of way and the car should have map data showing where pedestrian crossings are marked.

>> No.9275307

>>9271298
This is stupid. AIs have much faster reactions than humans; they would slow down miles away, long before this ever happened.

>> No.9275319

I've seen a prototype braking system for the underside of a car (imagine a large hydraulic sanding pad). It can stop a car in less than 1 meter. It is only capable of one-time use, but it could be deployed in a scenario where computer-decided emergency braking had to be implemented, thus averting the disaster.

Also, improvements in airbag technology will bias the choice toward risking the car against a solid surface rather than against flesh-and-blood pedestrians. And there is no reason for a self-driving car to be travelling at a speed where it has to make such a decision in the first place. Why is the car going so fast when it has a buffer to predict events well beyond current human limitations?
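
The "why is it going so fast" point can be made concrete with the textbook stopping-distance approximation: reaction distance plus braking distance v^2 / (2*mu*g). A rough sketch, assuming dry-road friction of about 0.7 and illustrative reaction times (these are idealised physics numbers, not measurements of any real vehicle):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh, reaction_time_s, friction_mu=0.7):
    """Total stop distance in metres: distance covered during the
    reaction time, plus an idealised constant-friction braking
    distance d = v^2 / (2 * mu * g)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    reaction_distance = v * reaction_time_s
    braking_distance = v * v / (2 * friction_mu * G)
    return reaction_distance + braking_distance

# A computer reacting in ~0.1 s stops in far less road
# than a human reacting in ~1.5 s at the same 50 km/h.
human = stopping_distance(50.0, 1.5)
computer = stopping_distance(50.0, 0.1)
```

At 50 km/h the braking portion is identical for both; the entire gap comes from reaction distance, which is exactly where a computer has the advantage the posts above describe.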

>> No.9275320

It actually doesn't matter. A self-driving car doesn't need to make perfect moral and ethical decisions. It just needs to do better than a human, which it does.

>> No.9275351

>>9275274
There are guidelines, the only way the car could've gotten into this situation is by ignoring them. It's a very artificial scenario.

>> No.9275356

>>9271298
No. It shouldn't drive into the wall. It should be smarter than that, and find a solution that doesn't kill anyone. Ideally. Also the car design might come into play.