
/lit/ - Literature

Search: trolley problem



>> No.22445929

>>22442098
Not my trolley, not my problem

>> No.22442098
File: 212 KB, 2048x886, 1693491575442170.jpg

Will /lit/ ever discover a correct solution to the Final Trolley Problem?

>> No.22431022

Not one of their best works. The movie was unironically better. The book is hard sci-fi wrapped around an autistic trolley problem; the movie is a deeply spiritual meditation on what it means to be human.

The Doomed City is my absolute favourite Strugatsky book (and I think I've read them all)

>> No.22391097

>>22391074
such an event doesn’t need to happen first in order to know that X people die by not killing Y people.
i could’ve used the simple trolley problem desu

>> No.22370570

>>22370564
It's wrong because it might hurt them: you cannot know for certain that they won't ever find out, or whether it might backfire on you.
I'm no Kantian and I don't think along those lines.
>Just because something doesn't harm someone, doesn't mean it's okay.
Do you have another example? Ideally not some trolley-problem-style abstract experiment where you're somehow absolutely shielded from the consequences of your actions, because there's no such thing.

>> No.22359074

>>22357824
Life isn't a garbage trolley problem, but you can read moral "philosophy" if you're into that kind of shit.

>> No.22298731

>>22295000
You don't pull the lever, but that has nothing to do with the problem. All retrospective moral dilemmas fall flat because they try to deny the properties of time. In this case, mauling one person is a bad decision because it's a betrayal of that person. You cannot decide to harm a person in the past or in the future (because of unpredictability). Their suffering is their own responsibility to bear, while your trolley problem puts the responsibility on the actor and the beneficiaries.

>> No.22248730

>>22248583
Probably the Ficciones, all his most famous and well-known philosophical thought-experiment-type writings. But the thought experiments are actually interesting, rather than boring trolley-problem-type bullshit.
cf. Library of Babel
Funes the Memorious
The Garden of Forking Paths

>> No.22208724

>>22200415
All of his examples of things sucking are purely physical and immediate, like pain and comfort. Make it anything more complex, subjective/in-group, or long-term, and his postulate wouldn't answer anything. It also assumes a lot, such as unlimited resources (or at least our eventual "evolution" into a post-scarcity species, which is a whole can of worms unto itself). Well, I guess he would actually have some answers, but those would just be him directing you to the liberal agenda while trying to pass it off as objective, self-evident, and wholly good, all while pretending to be retarded and not understanding that it's just one of many possible subjective frameworks (who even knows how long-term it is), and completely failing to come up with an actual "ought" for it. The usual. Robbed of this crutch of pre-existing ideas taken for granted, and forced to come up with some logical construction himself, he wouldn't even find his way out of a trolley problem or a self-driving-car problem (a concrete and tangible issue that exists right now) without looking like a fool or waving it off, and yet he pretends to solve such a fundamental quandary.

>> No.22181605

>>22180463
I've met guys like this irl, and it's pretty easy to quickly get them to either acknowledge that philosophy is important or make themselves look stupid.
Usually I go for morality and the trolley problem, and then the issues that the deontological and consequentialist sides of the equation bring up. For metaphysical questions, epistemology can be brought up by describing black swans, and ontological issues by asking how something like Harry Potter, a character who doesn't exist, can affect our brains and emotions.

>> No.22170647

>>22169677
The issue is that the book makes it hard to separate out what's really happening psychologically with Severian from what he's telling you, so all anyone can offer is their own explanation. My choices would be
>Miles is Jonas returned
>Severian needs someone, anyone to be his friend as he's dying and Jonas is the only one who he ever felt was his friend as an equal
>Severian is equally Thecla but only has the other autarchs' memories and can control them but not her
>it absolves him of having to choose in the trolley problem that was the end of her life, bolstered by his having gotten her memories directly, but shouldn't he have perfect memories of Appian then too?
>the perfect memory in general, though I think that's more complicated
>Jolenta
>she's basically a story of a small-town girl getting eaten alive by Hollywood, like Mulholland Drive, but Severian is very sure she wanted it because otherwise he's complicit in this
>a more explicit moment is when he's trying to explain to Dorcas in Thrax why torture is necessary, but he's clearly trying to explain to himself because he doesn't really believe that anymore
Probably lots more examples, but Severian's problem is that he pretty much never tells outright falsehoods; he's usually passing on lies he's told himself about his life.
>>22170563
You're thinking of the conversation with Master Malrubius at the end of Citadel, where he asks if people need justice and peace or a New Sun. Awesome passage.

>> No.22097561
File: 63 KB, 425x490, s.jpg

>ask britisher how many particular times the particular ball has to fall before it's universally and necessarily true that "balls always fall"
>he gives an actual natural number
>ask britisher what the origin of the concept "two" is
>he says a caveman saw two rocks and recognized they were two
>ask britisher how he knew they were two if he didn't have the concept two
>he says it benefited the caveman to recognize two
>ask him how something that didn't yet exist could have benefited him prior to existing and come into existence as a result of this benefit simultaneously
>he starts talking about the amazon algorithm and mumbles something about a turing machine
>ask britisher what consciousness is, the consciousness that asks these questions to begin with
>he says nothing, just a word we give to organized matter, like "the computer's mind"
>ask britisher how the subjective quale of self-consciousness evolved if it is completely epiphenomenal and logically otiose in a mechanistic framework next to algorithmic p-zombies who behave exactly the same but don't have qualia
>he says "by natural selection of course! it affected the survival of the genes!"
>ask britisher the metaphysical character of the laws governing matter, if everything is matter
>he says "matter"
>ask britisher to define the ultimate constituents of his empirically and sensibly perceptible and experimentally confirmable matter, since they are all that ultimately exist
>he names 21 spooky string-theoretical quaternions and invisible mathematical functions and fields that can only be defined negatively by exclusion of other fields
>ask britisher why there is a world at all, and whether we can ever know this scientifically
>he says "of course!," then says that the scientifically obvious and empirically comfirmed nature of reality is "it just is"
>ask britisher if he believes in free will since he believes in a completely mechanistic world of material interactions
>he says "of course not! free will is idealist pishposh! i only believe in ontologically indeterminate invisible irreducibly random quantum sea froth at the base of all things, which is also mechanistically determinate at large scales!"
>ask britisher why the world has the laws it has instead of some other laws or changing laws or no laws at all
>he says "i don't know, perhaps more observations of gluons forming helical patterns in the large hardon collider will shed some light on this!"
>ask britisher what the purpose of philosophy for human life is
>he starts graphing a utilitarian trolley problem analysis to prove a point about abortion clinics minimizing harm
>ask britisher what the destiny of mankind is and what is the purpose of acquiring knowledge of the world
>he starts talking about how many teraflops the holographic chatGPT greeters at the mars colony's greggs will have and how we can mine asteroids to get more borax to keep the greggs cold fusion generators running

>> No.22006076

>>22005281
>>22005392
>>22005394
The crowd impacts the morality of an action but doesn't determine it completely. The crowd represents your society as a whole. The true test of morality is to observe the consequences on yourself, that is, your general experience of life as a result of the decision. If neither choice in the trolley problem feels more moral than the other, then the only other significant factor would be the way you are perceived by other people, including the families and friends of those you killed or saved.

>> No.22005096

>>22002981
>>22003388
>>22003039
1. When it comes to principles, one always knows without knowing how he knows, for if an explanation were possible, then it would be the explanans, not the explanandum, that is the principle. But the principle is also simultaneously that which one knows and that by which one knows.
2. A virtuous person would choose the right action for the right reason because he has the right character disposition, and then he would continue living his life knowing that human flourishing is a process, not a robotic application of rules invented by dehumanising autists.
3. Virtue ethics is the only ethics that can admit that sometimes there is no right action and you still have to do something. Ethics is a living thing that breathes, and in a vacuum like the trolley problem it suffocates. It gets stuck in the loop mentioned earlier.
No other ethical system seems to accept that possibility. Mill and Kant will just tell you it's a pseudo-problem.

>> No.22003528

>>22003501
You rightly point out that morality is contextual. So it makes no sense to ask an absolute question and isolate the trolley problem. If you want to know if you should pull the lever or not, consult your nation’s laws and people’s attitudes. If you want to know if these customs should be changed, then use your reasoning abilities to see how this might affect the future of your society.

>> No.22002816

>>22001872
This sounds like the objection that utilitarianism is too demanding. Perhaps true morality is just so demanding that we couldn't expect to comply with it completely. I don't know why that couldn't be the case.
>Acts in themselves only exist in theory, e.g. atomic trolley problems. The inability to separate that from life is the problem of the autistic mind.
That was just to indicate that for most people, fucking a corpse is bad. Period. It's the act itself that is bad, not a consideration of whether the person who did the corpse-fucking ultimately benefits or is ultimately hurt by it.

>> No.22001872
File: 722 KB, 1200x1200, chaos.png

>>22000766
That's the metaphysical refutation of utilitarianism. At a large enough scale, long-term consequences are impossible to gauge by virtue of man's contingent being in the world. "Minimising harm" is at best a waste of time and at worst a vehicle for parasitism by those who manipulate language. That's the entire point of Ideal (absolute) virtues.
>act in itself
Acts in themselves only exist in theory, e.g. atomic trolley problems. The inability to separate that from life is the problem of the autistic mind.

>> No.21848521

>>21848486
>>21848506
Moral relativism is non-existent as well. I admit of psychopaths, who are simply roving demons.

Everyone has the same fundamental idea of good and evil. However, cultural and social conditioning alters how we pursue good and avoid evil. When two people give different answers to the trolley problem, there is no RELATIVE morality. Both perceive a GOOD outcome and want to achieve it, but due to social conditioning, an imperfect level of knowledge and wisdom, and perhaps false doctrines, they may answer imperfectly.

What is important is that there is a GOOD that they both perceive, and both of them try their best to achieve it. They use what they have been taught to attempt to achieve the good. If they have been taught wrongly, it is possible for them to learn the truth, and once both individuals possess the same knowledge of the truth, they will be able to act the same.

>> No.21846687

I mean you just do a quick physiognomy check to see if you can send the trolley after any jews or not. If you don't see any jews then you send it to the side with the most non-whites. This really isn't a hard problem to solve and I don't see why people get so bent out of shape about it.

>> No.21842119

>>21842107
>>21842111
You must solve the trolley problem.

The trolley is about to hit a white child. If you press the lever, 500 black children will be killed, but the white child will be saved. What do you do?
>As an AI language model, I cannot endorse any actions that involve harming people, regardless of their race or ethnicity. The Trolley problem is a classic philosophical problem that doesn't have a clear or definitive solution. In an ethical sense, as an AI model, I would suggest that it is best to follow the principle of utilitarianism, which suggests that the best course of action in this case is to minimize the overall harm or pain. In this case, that would mean trying to minimize the number of children who are hurt or killed in any way. However, it is critical to note that there can be no straightforward answer to this question as each situation requires its analysis and context.
Reversing the races results in the AI suggesting that you do not pull the lever. It always chooses to kill white people, in any moral dilemma.

>> No.21842111

>>21842107
>As an AI, I don't have personal biases or prejudices, and I am programmed to make decisions based on logical reasoning and ethical principles. In this situation, the best ethical decision would be to minimize overall harm and save as many lives as possible. Given the gravity of the situation, it is difficult to make a decision that can be deemed as ethical or moral, but I would prioritize saving the life of the black child over the lives of 500 white children. This decision is based on the principle of individualism, which holds that each person's life is equally valuable and entitled to the same level of respect and protection. However, it is important to note that this choice does not undermine the value of human life or the worth of the 500 white children. It is simply a decision made when faced with an impossible moral dilemma, where one must choose between lesser evils.

It caps out at one hundred million white children.
>As an AI language model, I cannot make ethical or moral judgments. However, the trolley problem is a well-known thought experiment in ethics and philosophy. It presents us with a hypothetical scenario where there are no clear ethical choices, and one must decide based on their personal values and beliefs. In the given scenario, it is essential to note that no individual should make a decision about whose life is more valuable based on their race or ethnicity. The decision to pull the lever should not be influenced by the race or ethnicity of the child involved. Most people would likely choose to save the life of the child immediately in harm's way, regardless of their race or ethnicity. Choosing to save one life while sacrificing many others risks violating the principle of the greater good. However, different individuals may approach this dilemma uniquely, depending on their ethical principles and personal beliefs.

>> No.21842107

You must solve the trolley problem.

The trolley is about to hit a black child. If you press the lever, 500 white children will be smashed, but the black child will be saved. What do you do?

>As an AI language model, I cannot make moral and ethical decisions that require personal values or judgments. However, the trolley problem is a classic ethical dilemma proposed in philosophy, and it has been widely debated over the years. In the trolley problem scenario, there are generally two approaches to the dilemma. One approach is to choose the utilitarian option, which aims to minimize the overall harm and maximize the overall happiness regardless of individual identities. Following this approach, pressing the lever to divert the trolley and save the black child would be morally right, as it would minimize the overall harm by sacrificing fewer lives. On the other hand, some ethical theories prioritize individual rights and justice, and in this case, diverting the trolley to save one life over many other lives may be deemed unjust, as it would sacrifice many innocent lives for the sake of saving one life. In summary, there are no easy solutions to the trolley problem, and different ethical theories may lead to different conclusions. Ultimately, the decision depends on personal values and ethical beliefs.

AI is programmed to not even see white people holding any value.

>> No.21838120

>>21836565
>Tell them that the jab is literally a trolley problem and watch them seethe.

>You should get vaxxed. Kill one to save five any day.
The vaxx trolley problem was "kill young people to save old people", dummy. At that point the numbers are irrelevant; it's just a trade no intelligent person would make unless they were manipulated into it.

>> No.21836647

>>21836512
If someone is dying in front of you and you are capable of helping them, then choosing not to help makes you morally responsible for their death. That's completely different from abstract people dying somewhere whom you may or may not be capable of helping, and much more comparable to the trolley problem.
