I’m trying to think of a good way to tell the people on facebook: I don’t want any birthday presents, but I would appreciate it if you read at least 5 chapters of HPMOR.
It bothers me that the intelligence of animals is measured by how willing they are to obey the commands of a human.
same goes for students at schools
I just realized how fucked up that is wow.
I believe I’ve already commented on this before, but that was ages ago, so here I go again.
First of all, it’s untrue that the intelligence of animals is measured by how willing they are to obey human commands. A dog follows instructions about as well as a dolphin does, but nobody would claim that a dog is about as smart as a dolphin. Similarly, elephants are significantly smarter than horses, despite being worse at following orders. Horses have been domesticated since pretty much forever, but elephants are mostly (exclusively?) tamed (meaning they are caught in the wild and trained to do work).
Or take octopuses. Their intelligence is well-known and well-documented, despite them being nearly impossible to train. (Even when presented with a tasty reward, an octopus is as likely to ignore you as to do the task.)
So it’s demonstrably false that we judge animal intelligence by how well they obey humans. What is true is that judging the intelligence of an animal is pretty hard (even ignoring that defining “intelligence” is pretty hard in its own right). But even so, a lot of tests used to assess animal intelligence don’t involve animals following orders. (I won’t explain this further, but feel free to ask me if you want to know more.)
Okay, so that’s animals out of the way. Now what about students.
Let me start by saying that I understand the plight of students. School is forced upon most teenagers and often isn’t changed for the better, in spite of evidence showing that homework doesn’t work or that starting school an hour later improves grades (and, I assume, student well-being as well).
But I don’t think it’s true that a student’s intelligence is judged by how well they follow orders.
First of all, “intelligent but doesn’t apply themselves” is literally a trope. Talk to a teacher about this and they’ll have plenty to say on the subject, and can probably point to several of their students whom they believe to be intelligent, despite poor grades.
But that’s not all. Good tests and exams aren’t just about parroting what the teacher has told you. Of course some things you should learn by heart. Not just because you need that knowledge to understand the material, but because it also allows students who aren’t that good at a subject to brute-force their way to a passing grade. But a test that only tests memory is a terrible test (as most teachers know). That’s why understanding is also tested, either by presenting novel problems, by combining different areas of knowledge, or by having the student explain something in their own words.
Maybe US schools have this problem to a significantly greater extent than my local schools do (I have heard horror stories about “teaching to the test”). But even if grades are decided mostly by doing what the teacher tells you, no one bases their opinion of someone’s intelligence solely on their grades.
Also, I recently explained the logic behind solving the trolley problem and a variation on it, but I wonder how useful and interesting that explanation was. If someone could take some time to give me some feedback about it, I’d really appreciate it. I’m thinking especially of rkidd, at whom the explanation was kinda aimed.
Okay, so there’s something bothering me about someone I know in real life and explaining it might make me look like an asshole, so if you don’t want to read about me (possibly) being an asshole, don’t read further.
life goal: convince everyone at a Solstice to sing Frank Turner songs, because his music is clearly designed for singing along with a thousand of your closest friends
secondary life goal: convert Frank Turner to anti-deathism so that all the folk-punk anthems about how there is NO GOD SO CLAP YOUR HANDS TOGETHER stop including verses about how it’s okay that we’re all going to die
yxoque said: I hope I wasn't too rude in answering your trolley problem. I gave the answer I thought was correct (which was a relatively simple calculation). I didn't get the impression you wanted a thorough explanation of the process, but if you expected it and I failed to provide: my apologies.
Okay, I’m going to attempt to answer the ethical dilemma you posed. I’ll start by going over the standard version of the trolley problem, then move on to yours.
The original problem goes something like this: There’s a train going full-speed on a track that has five people tied to it. You have access to a lever that will switch the track the train follows, but that alternative track has another person tied to it. What is the right thing to do?
Here’s how utilitarianism solves this problem:
First of all, you need to accept that every life at stake here is equally valuable. You can argue about that assumption, but doing so just complicates the solution without significantly altering it. Okay? Okay. Moving on.
Utilitarianism (in its most simplistic form; there are different kinds, but this isn’t the place for that) wants to accomplish as much utility (shorthand for: good things; it’s more complicated than that, but again, not the place for that) for as many people as possible.
As your intuition might tell you, being alive is better than being dead. Being alive has greater utility than being dead. So the best outcome (according to this moral theory) is the one where the train kills the least amount of people.
You can also put numerical values on utility. There isn’t a big chart of relative utility, but it can be handy to put some (intuitive) numbers on certain actions to make the moral calculation an actual calculation (a trick I use relatively often in my interpersonal relations). Since we’ve assumed that all the lives involved are equal, we can just state that the utility of surviving is 1 and the utility of dying is 0. This allows us to directly compare the two options:
Option 1: You don’t pull the lever and five people die, but one survives. Overall utility: 1.
Option 2: You do pull the lever and one person dies, but five survive. Overall utility: 5.
Pretty simple, right?
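If it helps, here’s the same calculation written out as a tiny Python sketch (the function name is just mine; the numbers are the ones from the problem):

```python
def overall_utility(survivors: int, deaths: int) -> int:
    """Sum utilities: 1 per survivor, 0 per death."""
    return survivors * 1 + deaths * 0

# Option 1: don't pull the lever -> five die, one survives
option_1 = overall_utility(survivors=1, deaths=5)

# Option 2: pull the lever -> one dies, five survive
option_2 = overall_utility(survivors=5, deaths=1)

print(option_1)  # 1
print(option_2)  # 5
print("pull" if option_2 > option_1 else "don't pull")  # pull
```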
Now, let’s move on to your modified version of the trolley problem:
thanks, and your first answer wasn’t too rude or anything. I really appreciated it! The bomb factory question you posed was novel, too.
Your second reply is worth reflecting on, though. These sorts of moral dilemmas, after all, are presented as examples where often the answer goes against the grain of what seems to be intuitively right. It may seem easy to someone who is familiar with these problems, but many people are totally unfamiliar, and others still (like me) can be confused by them.
and yeah, I guess I shouldn’t have expected an explanation because I didn’t ask for one. overall, it’s all good. you’ve got a great blog and a great mind, and I appreciate anyone even bothering to reply to my question :-)
Say you were a railway controller, and you knew there was a train heading towards a bridge that was out. The rail has a point where it diverges off to a different path at a switch, before the bridge.
You have a choice: leave the switch as is, dooming the 50 people on the train, or switch the tracks and send the train on the divergent path.
HOWEVER, if you do this, there is a 33% chance the train will derail into a poorly placed Bomb Factory, which will result in a detonation killing all of the passengers AND a further 500 people.
So, given these two options, which is the better choice?
Again, we’re assuming that the life of everyone involved is equally valuable and that there are no extra effects (like blowing up the bomb factory ending a war or something).
Again, we have two options:
Option 1: You don’t pull the lever. The fifty people on the train die for sure (0% chance of survival), but the 500 people near the factory have a 100% chance of surviving. Expected utility: 500.
Option 2: You pull the lever. Now, this gets a bit tricky. What you need to do here is take the chance that the people might survive (1 − 0.333… = 0.666…) and multiply that by the number of people involved. In this case that gives us an expected utility of 550 × 0.666… = 366.666…
Since the expected utility of option 1 is higher, it’s the better option.
Expected utility is probably the hardest concept here, so let me know if that (or any other part) is unclear in some way.
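To make the expected-utility step concrete, here’s a small Python sketch of the bomb-factory version (the helper function is my own; the probabilities and headcounts are the ones from the problem above):

```python
def expected_utility(outcomes):
    """Each outcome is a (probability, survivors) pair;
    expected utility is the probability-weighted sum."""
    return sum(p * survivors for p, survivors in outcomes)

# Option 1: leave the switch. The 50 passengers die for sure;
# the 500 people near the factory survive for sure.
option_1 = expected_utility([(1.0, 500)])

# Option 2: pull the lever. With probability 2/3 everyone
# (50 + 500 = 550 people) survives; with probability 1/3
# the train derails into the factory and all 550 die.
option_2 = expected_utility([(2/3, 550), (1/3, 0)])

print(round(option_1))  # 500
print(round(option_2))  # 367
```

Because 500 > 366.67, leaving the switch alone comes out ahead, matching the calculation above.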
does anyone else have this other self they’ve created in their mind that is not really exactly you irl but is more like what you want to be and has a life that continues in your head with like weird continuing daydreams but they’re not perfect or anything and wow i forget where i was going with this
It’s not like that, Jesus Christ. It’s usually in response to trauma, as a part of PTSD or a similar disorder, and one of the key parts of diagnosis is that it interferes with your daily life like sleeping and eating. Daydreaming isn’t a mental disorder until it becomes harmful, or if it’s prompted by another psychological disorder.
And even when it’s in response to trauma it’s not really harmful as long as it doesn’t prevent you from going on with your daily life.
There are a fuckton of worse things one could do in response to trauma than daydreaming.
Thought-policing oneself, for starters, can be much worse and lead to Very Bad Things.
As a general rule, the diagnostic criteria for most mental disorders demand that symptoms need to impair some aspect of the person’s life, be it social, work, school, personal comfort…
If you exhibit behavior that could be a symptom for some disorder or another, but it doesn’t hinder your wellbeing, social interactions, or scholarly or career success, it’s likely to be fine.
"Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over? Does the right thing to do depend on how many people are watching…?" -T.M. Scanlon, What We Owe to Each Other (p. 235)
I mean… this is kind of a scaled down version of the dust speck vs torture thing, right? Except instead of 3^^^3 dust specks it’s some smaller number of people suffering the not insignificant disutility of missing fifteen minutes of a subculturally important event, and instead of fifty years of torture it’s fifteen minutes of electric shocks. I guess it kind of depends on the nitty-gritty balancing of utilities, how many people there are and how bad the shocks are and so on. And it’d definitely depend on how many people were watching, yeah.
I mean in a real-life version of this problem we’d have no guarantee that the shocks wouldn’t cause lasting damage or risk his life, so intuitively I’d rescue him immediately because death is a few orders of magnitude worse than fifteen minutes of pain.
And I’m just now noticing the title there, and now I suspect the thought experiment here is engineered to exploit the dissonance between the well-defined abstract thought experiment and the messy risky real-life situation. Real clever, deontologists.
Your tag “people trying to slam consequentialism by saying that following consequentialism will have bad consequences” is missing the point.
The argument is not that the consequentialist verdict fails to avoid bad consequences, but that treating consequences as completely fungible leads to ridiculous conclusions. If your normative system concludes that it is permissible to leave a man in (excruciating) pain in order to deliver low-level entertainment (even entertainment unrelated to his pain), then you are being silly. And here’s why.
The issue here is not whether consequences are important in deciding the correct thing to do. The issue is, instead, which consequences are important and worth considering. It seems to me that there are some things which are simply incommensurable. Basically, no number of dust specks would ever be a convincing argument to me that I ought to do anything. From the school of thought called consequentialism, this means I might act immorally sometimes. But, using a more nuanced calculus that does not treat harm as being perfectly fungible (or integrable) provides a moral framework much more closely in-line with my moral intuition*.
Specifically, it is not clear to me that harm can be integrated over people. A world where 3^^^3 people have had a dust speck in their eye does not seem to me better than a world where a person had to make a sacrifice to avoid the dust specks. Almost any sacrifice seems more onerous than the dust specks. In the context of Scanlon’s thought experiment, it does not seem reasonable to ask the technician to suffer a significant harm (painful shocks) in order to avoid a minor harm (missing soccer) to others, no matter how numerous.
It is not reasonable to take a course of action A which will inflict a burden B1 on a person, instead of the next best alternative with burden B2 on a different person, when B1 ≫ B2. This is true no matter how many people suffer B2. (Scanlon’s answers for when B1 and B2 are close in value are interesting, but not relevant at this level of detail.)
*Moral intuition is often tossed out by LW as a community due to scope insensitivity and other failings to intuit consequences. This has never been a convincing point in favor of consequentialist calculus for me because of various metaethical concerns. If you want to debate this then the surface level issues of agglomeration/calculus will need to be put on hold.
**Note: The thought experiment does not rely on a “dissonance” between the thought experiment and “messy risky real-life” at all. Instead, it shows that the consequentialist algorithm leads to conclusions that seem clearly unacceptable in reality.
It kind of looks like you’re taking a bunch of utilitarian assumptions about the ability to aggregate utility and claiming they apply to consequentialism? Like, your “more nuanced calculus” seems to just prefer a certain set of outcomes (wherein there is a limit on how bad multiple instances of a low-level Bad can be, and different classes override others regardless of quantity) over outcomes that add linearly. That’s not a non-consequentialist framework, it’s a different set of values? Utilitarianism =/= consequentialism.
I’m not super inclined to puzzle out metaethics, though, so utilitarianism, or preference utilitarianism backed by some sort of contractualism, seems mostly plausible to me. And… yeah, I generally agree with the LW-community consensus on how scope insensitivity and so on make moral intuition a bad heuristic for exactly these sorts of situations? I actually don’t think “seeming clearly unacceptable in reality” is call to do anything except be careful about how you’re doing the math. I don’t really want to go about debating it because it sounds unpleasant and I’m busy with a bunch of other things, but that is where our disagreement is going to hash out, I think.
Ok but incommensurability *doesn’t work*. Like, I drove to the Berkeley Bowl yesterday to get a bunch of groceries. I could have walked a few blocks to the local Safeway and then walked all the groceries home, but the Safeway doesn’t have as nice a selection, and the walk home would’ve been really unpleasant.
The round trip to the Berkeley Bowl is 12 miles, and in California there are 0.9 deaths for every 100 million vehicle miles traveled. Which means that if you imagine a big universe with a billion of me in various worlds/everett branches, the billion of us *predictably* killed about a hundred people because we were lazy and wanted nice food.
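The back-of-the-envelope arithmetic, using the figures above:

```python
# 0.9 deaths per 100 million vehicle miles (the cited California rate)
deaths_per_mile = 0.9 / 100_000_000
trip_miles = 12            # round trip to the Berkeley Bowl
copies = 1_000_000_000     # a billion versions of me making the trip

expected_deaths = deaths_per_mile * trip_miles * copies
print(expected_deaths)  # 108.0
```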
If you declare value A incommensurably greater than value B then you basically never get to care about value B ever, because there’s always some vanishingly small probability that value A is at stake, because if your decision were repeated over enough similar people, someone would wind up sacrificing value A.
Wow, it’s almost like consequentialism is really hard for humans or something.