Friday, September 5, 2025

"Swapping memories"

In Shoemaker’s Lockean memory theory of personal identity, in the absence of fission and fusion, personal identity is secured by a chain of first-personal episodic quasimemories. All memories are quasimemories, but in defining a quasimemory the condition that the remembered episode happened to the same person is dropped to avoid circularity. It is important that quasimemories must be transmitted causally by the same kind of mechanism by which memories are transmitted. If I acquire vivid apparent memories of events in Napoleon’s life by reading his diaries, these apparent memories are neither memories nor quasimemories, because diaries are not the right kind of mechanism for memory transmission, and so Shoemaker can avoid the absurd conclusion that we can resurrect Napoleon by means of his diaries. If you wrote down an event in a diary, and then forgot the event, and then learned of the event from the diary, you should not automatically say “I now remember” (of course, the diary might have jogged your memory—but that’s a different phenomenon from your learning of the event from the diary).

It seems to me that discussions of memory theory after Shoemaker have often lost sight of this point, by engaging in science-fictional examples where memories are swapped between brains without much discussion of whether moving a memory from one brain to another is the right kind of mechanism for memory transmission. Indeed, it is not clear to me that there is a principled difference between reading Napoleon’s memory off a vivid description in his diary and scanning it from his brain. With our current brain scanning technology, the diary method is more accurate. With future brain scanning technology, the diary method may be less accurate. But the differences here seem to be ones of degree rather than principle.

If I am right, then either the memory theorist should allow the possibility of resurrecting someone by inducing apparent memories in a blank brain that match vivid descriptions in their diary (assuming for the sake of argument that there is no afterlife otherwise) or should deny that brain-scan style “memory swapping” is really a quasimemory swap and leads to a body-swap between the persons. (A memory swap that physically moves chunks of brain matter is a different matter—the memories continue to be maintained and transmitted using the usual neural processes.)

Thursday, September 4, 2025

An instability in Newcomb one-boxing

Consider Newcomb’s Paradox, and assume the predictor has a high accuracy but is nonetheless fallible. Suppose you have the character of a one-boxer and you know it. Then you also know that the predictor has predicted your choosing one box and hence you know that there is money in both boxes. It is now quite obvious that you should go for two boxes! Of course, like the predictor, you predict that you won’t do it. But there is nothing unusual about a situation where you predict you won’t do the rational thing: weakness of the will is a sadly common phenomenon. Similarly, if you have the character of a two-boxer and you know it, the rational thing to do is to go for two boxes. For in this case you know the predictor put money only in the clear box, and it would be stupid to just go for the opaque box and get nothing.

None of what I said above should be controversial. If you know what the predictor did, you should take both boxes. It’s like Drescher’s Transparent Newcomb Problem where the boxes are clear and it seems obvious you should take both. (That said, some do endorse one-boxing in Transparent Newcomb!) Though you should be sad that you are the sort of person who takes both.

This means that principles that lead to one-boxing suffer from an interesting instability: if you find out you are firmly committed to acting in accordance with these principles, it is irrational for you to act in accordance with them. Not so for the principles that lead to two-boxing. Even when you find out you are firmly committed to them, it’s rational to act in accordance with them.

This instability is a kind of flip of the usual observation that if one expects to be faced with Newcomb situations, and one has two-boxing principles, then it becomes rational to regret having those two-boxing principles. That, too, is an odd kind of inconsistency. But this inconsistency does not seem particularly telling. Take any correct rational principle R. There are situations where it becomes rational to regret having R, e.g., if a madman is going around torturing all the people who have R. (This is similar to the example that Xenophon attributes to Socrates that being wise can harm you because it can lead to your being kidnapped by a tyrant to serve as his advisor.)

Wednesday, September 3, 2025

Virtue and value

Suppose you have a choice between a course of action that greatly increases your level of physical courage and a course of action that mildly increases your level of loyalty to friends. But there is a catch: you have moral certainty that in the rest of your life you won’t have any occasion to exercise physical courage but you will have occasions to exercise loyalty to friends.

It seems to be a poor use of limited resources to gain heroic physical courage instead of improving your loyalty a bit when you won’t exercise the heroic physical courage.

If this is right, then the exercise of virtue counts for a lot more than the mere possession of it, as Aristotle already noted with his lifelong coma argument.

But now modify the case. You have a choice between a course of action that greatly increases your level of physical courage and feeding one hungry person for a day. Suppose that you don’t have the virtue of generosity, and that feeding the hungry person won’t help you gain it, because you have a brain defect that prevents you from gaining the virtue of generosity, though it allows you to act generously. And as before suppose you will never have an occasion to exercise physical courage. It still seems clear that you should feed the hungry person. Thus not only does the exercise of virtue count for a lot more than mere possession of virtue, but acting in accordance with virtue, even in the absence of the virtue, also counts for more than mere possession of virtue.

Next, consider a third case. You have a choice between two actions, neither of which will affect your level of virtue, because shortly after the actions your mind will be wiped. Action A has an 85% chance of saving a life, and if you perform action A, it will certainly be an exercise of generosity. Action B has a 90% chance of saving a life, and the action will be done in accordance with physical courage but will not be an exercise of virtue. Which should you do? It seems that you should do B. Thus, that an action is an exercise of a virtue does not seem to count for a lot in deliberation.

Nuclear deterrence, part II: False threats

In my previous post, I considered the argument against nuclear deterrence that says it’s wrong to gain a disposition to do something wrong, and a disposition to engage in nuclear retaliation is a disposition to do something wrong. I concluded that the argument is weaker than it seems.

Here I want to think about another argument against nuclear deterrence under somewhat different assumptions. In the previous post, the assumption was that the leader is disposed to retaliate, but expects not to have to (because the deterrence is expected to work). But what about a case where the leader is not disposed to retaliate, but credibly threatens retaliation, while planning not to carry through the threat should the enemy attack?

It would be wrong, I take it, for the leader to promise to retaliate in such a case—that would be making a false promise. But threatening is not promising. Suppose that a clearly unarmed thief has grabbed your phone and is about to run. (Their bathing suit makes it clear they are unarmed.) You pick up a realistic fake pistol, point it at them, and yell: “Drop the phone!” This does not seem clearly morally wrong. And it doesn’t seem to necessarily become morally wrong (absent positive law against it) when the pistol is real as long as you have no intention or risk of firing (it is, of course, morally wrong to use lethal force merely to recover your property—though it apparently can be legal in Texas). The threat is a deception but not a lie. For, first, note that you’re not even trying to get the thief to believe you will shoot them—just to scare them (fear requires a lot less than belief). Second, if the thief keeps on running and you don’t fire, the thief would not be right to feel betrayed by your words.

So, perhaps, it is permissible to threaten to do something that you don’t intend to do.

Still, there is a problem. For it seems that in threatening to do something wrong, you are intentionally gaining a bad reputation, by making it appear like you are a wicked person who would shoot an unarmed thief or a wicked leader who would retaliate with an all-out nuclear strike. And maybe you have a duty not to intentionally gain a bad reputation.

Maybe you do have such a duty. But it is not clear to me that the leader who threatens nuclear retaliation or the person who pulls the fake or real pistol on the unarmed thief is intentionally gaining a bad reputation. For the action to work, it just has to create sufficient fear that one will carry out the threat, and that fear does not require one to think that the threatener would carry out the threat—a moderate epistemic probability might suffice.

Nuclear deterrence, part I: Dispositions to do something wrong

I take it for granted that all-out nuclear retaliation is morally wrong. Is it wrong (for a leader, say) to gain a disposition to engage in all-out nuclear retaliation conditionally on the enemy performing a first strike if it is morally certain that having that disposition will lead the enemy not to perform a first strike, and hence the disposition will not be actualized?

I used to think the answer was “Yes”, because we shouldn’t come to be disposed to do something wrong in some circumstance.

But I now think it’s a bit more complicated. Suppose you are stuck in a maze full of lethal dangers, with all sorts of things that require split-second decisions. You have headphones that connect you to someone you are morally certain is a benevolent expert. If you blindly follow the expert’s directions—“Now, quickly, fire your gun to the left, and then grab the rope and swing over the precipice”—you will survive. But if you think about the directions, chances are you won’t move fast enough. You can instill in yourself a disposition to blindly do whatever the expert says, and then escape. And this seems the right thing to do, even a duty if you owe it to your family to escape.

Notice, however, that such a disposition is a disposition to do something wrong in some circumstance. Once you are in blind-following mode, if the expert says “Shoot the innocent person to the right”, you will do so. But you are morally certain the expert is benevolent and hence won’t tell you anything like that. Thus it can be morally permissible to gain a disposition which disposes you to do things that are wrong under circumstances that you are morally certain will not come up.

Perhaps, though, there seems to be a difference between this and my nuclear deterrence case. In the nuclear deterrence case, the leader specifically acquires a disposition to do something that is wrong, namely to all-out retaliate, and this disposition is always wrong to actualize. In the maze case, you gain a general disposition to obey the expert, and normally that disposition is not wrong to actualize.

But this overstates what is true of the nuclear deterrence case. There are some conditions under which all-out retaliation is permissible, such as when 99.9% of one’s nuclear arsenal has been destroyed and the remainder is only aimed at legitimate military targets, or maybe when all the enemy civilians are in highly effective nuclear shelters and retaliation is the only way to prevent a follow-up strike from the enemy. Moreover, it may understate what is permissible in the expert case. You may need to instill in yourself the specific willingness to do what at the moment seems wrong, because sometimes the expert may tell you things that will seem wrong—e.g., to swing your sword at what looks like a small child (but in fact is a killer robot). I am not completely sure it is permissible to have an attitude of trust in the expert that goes that far, but I could be convinced of it.

I was assuming, contrary to fact in typical cases, that there is moral certainty that the nuclear deterrence will be effective and there will be no enemy first strike. Absent that assumption, the question is rather less clear. Suppose there is a 10% chance the expert is not so benevolent. Is it permissible to instill a disposition to blindly follow their orders? I am not sure.

Tuesday, September 2, 2025

Memory theories of personal identity and faster-than-light dependence

Consider this sequence of events:

  • Tuesday: Alice’s memory is scanned and saved to a hard drive.
  • Wednesday: Alice’s head is completely crushed in a car crash.
  • Thursday: Alice’s scanned memories are put into a fresh brain.

It seems that on a memory theory of personal identity, we would say that the fresh brain on Thursday is Alice.

But now suppose that on Thursday, Alice’s scanned memories are put into two fresh brains.

If one of the operations is in the absolute past—the backwards light-cone—of the other, it is easy to say that what happens is that Alice goes to the brain that gets the memories first.

Fine. But what if which brain got the memories first depends on the reference frame, i.e., the two operations are space-like separated? It’s plausible that this is a case of symmetric fission, and in symmetric fission Alice doesn’t survive.

But now here is an odd thing. Suppose the two operations are simultaneous in some frame, but one happens on earth and the other on a spaceship near Alpha Centauri. Then whether Alice comes into existence in a lab on earth depends on what happens in a spaceship that’s four light-years away, and it depends on it in a faster-than-light way. That seems problematic.

Killing coiled and straight snakes

Suppose a woman crushes the head of a very long serpent. If the snake all dies instantly when its head is crushed, then in some reference frame the tail of the snake dies before the woman crushes the head, which seems wrong. So it seems we should not say the snake dies instantly.

I am not talking about the fact that the tail can still wiggle a significant amount of time after the head is crushed, or so I assume. That’s not life. What makes a snake be alive is having a snake substantial form. Death is the departure of the form. If the tail of the headless snake wiggles, that’s just a chunk of matter wiggling without a snake form.

What’s going on? Presumably it’s that metaphysical death—the separation of form from body—propagates from the crushed head to the rest of the snake, and it propagates at most at the speed of light. After all, the separation is a genuine causal process, and we are supposed to think that genuine causal processes happen at the speed of light or less.

So we get a constraint: a part of the snake cannot be dead before light emitted from the head-crushing event could reach the part. But it is also plausible that as soon as the light can reach the part, the part is dead. For a headless snake is dead, and as soon as the light from the head-crushing event can reach a part, the head-crushing event is in the absolute past of the part, and so the part is a part of a headless snake in every reference frame. Thus the part is dead.

So death propagates through the snake exactly at the speed of light from the head-crushing, it seems. Moreover, it does this not along the snake but along the shortest straight-line path—that’s what the argument of the previous paragraph suggests. That means that a snake that’s tightly coiled into a ball dies faster than one that is stretched out when the head is crushed. Moreover, if you have a snake that is rolled into the shape of the letter C, and the head is crushed, the tail dies before the middle of the snake dies. That’s counterintuitive, but we shouldn’t expect reality to always be intuitive.
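To make the geometry concrete, here is a minimal sketch (my own illustration, not from the post; the snake length, the 300-degree coil, and the unit speed of light are stand-in numbers) that assigns each body point a death time equal to its straight-line distance from the crushed head divided by c:

```python
import numpy as np

C = 1.0          # speed of light in arbitrary units (an illustrative assumption)
LENGTH = 10.0    # snake length in the same units (also a stand-in)
N = 1001         # sample points along the body, head at index 0

def death_times(points):
    """Death time of each body point: straight-line distance from the crushed
    head divided by the speed of light (the propagation rule argued for above)."""
    head = points[0]
    return np.linalg.norm(points - head, axis=1) / C

# Straight snake: head at the origin, body stretched along the x-axis.
s = np.linspace(0.0, LENGTH, N)
straight = np.column_stack([s, np.zeros(N)])

# "C"-shaped snake: the same length bent into a 300-degree circular arc.
arc_angle = np.radians(300)
radius = LENGTH / arc_angle
theta = np.linspace(0.0, arc_angle, N)
c_shape = radius * np.column_stack([np.cos(theta), np.sin(theta)])

t_straight = death_times(straight)
t_c = death_times(c_shape)

print(f"straight snake: whole body dead after {t_straight.max():.2f}")
print(f"C-shaped snake: whole body dead after {t_c.max():.2f}")
# The counterintuitive ordering: the tail of the C dies before its midpoint.
print(f"C-shape: tail dies at {t_c[-1]:.2f}, midpoint dies at {t_c[N // 2]:.2f}")
```

With these stand-in numbers the coiled snake is wholly dead after about 3.8 time units versus 10 for the stretched-out one, and the tail of the C dies at about 1.9 while its midpoint dies at about 3.7, which is just the ordering described above.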

Friday, August 29, 2025

Proportionate causality

Let’s assume for the sake of argument:

Aquinas’ Principle of Proportionate Causality: Anything that causes something to have a perfection F must either have F or some more perfect perfection G.

And let’s think about what follows.

The Compatibility Thesis: If F is a perfection, then F is compatible with every perfection.

Argument: If F is incompatible with a perfection G, then having F rules out having perfection G. And that’s limitive rather than perfect. Perhaps the case where G = F needs to be argued separately. But we can do that. If F is incompatible with F, then F rules out all other perfections as well, and as long as there is more than one perfection (as is plausible) that violates the first part of the argument.

The Entailment Thesis: If F and G are perfections, and G is more perfect than F, then G entails F.

Argument: If F and G are perfections, and it is both possible to have G without having F and to have F while having G, it is better to have both F and G than to have just G. But if it is better to have both F and G than to have just G, then F contributes something good that G does not, and hence we cannot say that G is more perfect than F—rather, in one respect F is more perfect and in another G is more perfect.

From the Entailment Thesis and Aquinas’ Principle of Proportionate Causality, we get:

The Strong Principle of Proportionate Causality: Anything that causes something to have a perfection F must have F.

Interesting.
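The step from the Entailment Thesis and Aquinas’ Principle to the Strong Principle is short enough to machine-check. Here is a minimal Lean 4 sketch of that step (my own formalization, with Has, MorePerfect, and Causes as bare placeholder predicates, so it captures only the logical form, not the metaphysics):

```lean
section
variable {Agent Perfection : Type}
variable (Has : Agent → Perfection → Prop)
-- MorePerfect G F: G is a more perfect perfection than F.
variable (MorePerfect : Perfection → Perfection → Prop)
-- Causes c F: c causes something to have the perfection F.
variable (Causes : Agent → Perfection → Prop)

/-- Aquinas' Principle of Proportionate Causality plus the Entailment Thesis
    yield the Strong Principle: whatever causes something to have F has F. -/
theorem strong_ppc
    (ppc : ∀ c F, Causes c F → Has c F ∨ ∃ G, MorePerfect G F ∧ Has c G)
    (entail : ∀ F G, MorePerfect G F → ∀ c, Has c G → Has c F) :
    ∀ c F, Causes c F → Has c F := fun c F h =>
  match ppc c F h with
  | Or.inl hF => hF                              -- the cause has F outright
  | Or.inr ⟨G, hGF, hG⟩ => entail F G hGF c hG   -- or has a more perfect G, which entails F
end
```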

More on velocity

From time to time I’ve been playing with the question whether velocity just is rate of change of position over time in a philosophical elaboration of classical mechanics.

Here’s a thought. It seems that how much kinetic energy an object x has at time t (relative to a frame F, if we like) is a feature of the object at time t. But if velocity is rate of change of position over time, and velocity (together with mass) grounds kinetic energy as per E = m|v|²/2, then kinetic energy at t is a feature of how the object is at time t and at nearby times.

This argument suggests that we should take velocity as a primitive property of an object, and then take it that by a law of nature velocity causes a rate of change of position: dx/dt = v.

Alternatively, though, we might say that momentum and mass ground kinetic energy as per E = |p|²/(2m), and momentum is not grounded in velocity. Instead, on classical mechanics, perhaps we have an additional law of nature according to which momentum causes a rate of change of position over time, which rate of change is velocity: v = dx/dt = p/m.
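As a quick numerical sanity check (my own toy numbers, nothing from the post), the two candidate groundings assign the same kinetic energy whenever p = mv, so the disagreement is over what grounds what, not over the quantity itself:

```python
m = 2.0                         # mass, in arbitrary units
v = (3.0, 4.0)                  # velocity components
p = tuple(m * vi for vi in v)   # classical momentum, p = m v

ke_from_velocity = m * sum(vi**2 for vi in v) / 2      # E = m|v|^2/2
ke_from_momentum = sum(pi**2 for pi in p) / (2 * m)    # E = |p|^2/(2m)

print(ke_from_velocity, ke_from_momentum)  # both 25.0: the two formulas agree in value
```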

But in any case, it seems we probably shouldn’t both say that momentum is grounded in velocity and that velocity is nothing but rate of change of position over time.

Experiencing something as happening to you

In some video games, it feels like I am doing the in-game character’s actions and in others it feels like I am playing a character that does the actions. The distinction does not map onto the distinction between first-person-view and third-person-view. In a first-person view game, even a virtual reality one (I’ve been playing Asgard’s Wrath 2 on my Quest 2 headset), it can still feel like a character is doing the action, even if visually I see things from the character’s point of view. On the other hand, one can have a cartoonish third-person-view game where it feels like I am doing the character’s actions—for instance, Wii Sports tennis. (And, of course, there are games which have no in-game character responsible for the actions, such as chess or various puzzle games like Vexed. But my focus is on games where there is something like an in-game character.)

For those who don’t play video games, note that one can watch a first-person-view movie like Lady in the Lake without significantly identifying with the character whose point of view is presented by the camera. And sometimes there is a similar distinction in dreams, between events happening to one and events happening to an in-dream character from whose point of view one looks at things. (And, conversely, in real life some people suffer from depersonalization, where it feels like the events of life are happening to a different person.)

Is there anything philosophically interesting that we can say about the felt distinction between seeing something from someone else’s point of view—even in a highly immersive and first-person way as in virtual reality—and seeing it as happening to oneself? I am not sure. I find myself feeling like things are happening to me more in games with a significant component of physical exertion (Wii Sports tennis, VR Thrill of the Fight boxing) and where the player character doesn’t have much character to them, so it is easier to embody them, and less so in games with a significant narrative where the player character has character of their own—even when it is pretty compelling, as in Deus Ex. Maybe both the physical aspect and the character aspect are bound up in a single feature—control. In games with a significant physical component, there is more physical control. And in games where there is a well-developed player character, presumably to a large extent this is because the character’s character is the character’s own and only slightly under one’s control (e.g., maybe one can control fairly coarse-grained features, roughly corresponding to alignment in D&D).

If this is right, then a goodly chunk of the “it’s happening to me” feeling comes not from the quality of the sensory inputs—one can still have that feeling when the inputs are less realistic and lack it when they are more realistic—but from control. This is not very surprising. But if it is true, it might have some philosophical implications outside of games and fiction. It might suggest that self-consciousness is more closely tied to agency than is immediately obvious—that self-consciousness is not just a matter of a sequence of qualia. (Though, I suppose, someone could suggest that the feeling of self-consciousness is just yet another quale, albeit one that typically causally depends on agency.)

Wednesday, August 27, 2025

More decision theory stuff

Suppose there are two opaque boxes, A and B, of which I can choose one. A nearly perfect predictor of my actions put $100 in the box that they thought I would choose. Suppose I find myself with evidence that it’s 75% likely that I will choose box A (maybe in 75% of cases like this, people like me choose A). I then reason: “So, probably, the money is in box A”, and I take box A.

This reasoning is supported by causal decision theory. There are two causal hypotheses: that there is money in box A and that there is money in box B. Evidence that it’s 75% likely that I will choose box A provides me with evidence that it’s close to 75% likely that the predictor put the money in box A. The causal expected value of my choosing box A is thus around $75 and the causal expected value of my choosing box B is around $25.

On evidential decision theory, it’s a near toss-up what to do: the expected news value of my choosing A is close to $100 and so is that of my choosing B.

Thus, on causal decision theory, if I have to pay a $10 fee for choosing box A, while choosing box B is free, I should still go for box A. But on evidential decision theory, since it’s nearly certain that I’ll get a prize no matter what I do, it’s pointless to pay any fee. And that seems to be the right answer to me here. But evidential decision theory gives the clearly wrong answer in some other cases, such as that infamous counterfactual case where an undetected cancer would make you likely to smoke, with no causation in the other direction, and so on evidential decision theory you refrain from smoking to make sure you didn’t get the cancer.
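Going back to the boxes, here is a minimal sketch of the two calculations; the 75% figure, the $100 prize, and the $10 fee are from the scenario above, while the 0.99 accuracy is my stand-in for a nearly perfect predictor:

```python
PRIZE = 100.0
ACCURACY = 0.99      # stand-in for a nearly perfect predictor
P_CHOOSE_A = 0.75    # evidence that I'm 75% likely to choose box A
FEE_A = 10.0         # the variant: a $10 fee for choosing box A

# Unconditional credence that the money is in A: the predictor tracks my likely choice.
p_money_in_A = ACCURACY * P_CHOOSE_A + (1 - ACCURACY) * (1 - P_CHOOSE_A)

def causal_ev(option, fee_a=0.0):
    """Causal decision theory: weight the already-fixed box contents by my current credences."""
    p_win = p_money_in_A if option == "A" else 1 - p_money_in_A
    return p_win * PRIZE - (fee_a if option == "A" else 0.0)

def evidential_ev(option, fee_a=0.0):
    """Evidential decision theory: condition the contents on the news that I chose the option."""
    p_win = ACCURACY  # given that I chose this box, the predictor very likely filled it
    return p_win * PRIZE - (fee_a if option == "A" else 0.0)

for fee in (0.0, FEE_A):
    print(f"fee for A = {fee}")
    print(f"  CDT: A = {causal_ev('A', fee):.1f}, B = {causal_ev('B', fee):.1f}")
    print(f"  EDT: A = {evidential_ev('A', fee):.1f}, B = {evidential_ev('B', fee):.1f}")
# CDT keeps recommending A even with the fee; EDT switches to the free box B.
```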

In recent posts, I’ve been groping towards an alternative to both theories. The alternative depends on the idea of imagining looking at the options from the standpoint of causal decision theory after updating on the hypothesis that one has made a specific choice. In my current predictor cases, if you were to learn that you chose A, you would think: Very likely the money is in box A, so choosing box A was a good choice, while if you chose B, you would think: Very likely the money is in box B, so choosing box B was a good choice. As a result, it’s tempting to say that both choices are fine—they both ratify themselves, or something like that. But that misses the plausible claim that if there is a $10 fee for choosing A, you should choose B. I don’t know how best to get that claim. Evidential decision theory gets it, but evidential decision theory has other problems.

Here’s something gerrymandered that might work for some binary choices. For options X and Y, which may or may not be the same, let e_X(Y) be the causal expected value of Y with respect to the credences for the causal hypotheses updated with respect to your having chosen X. Now, say that the differential retrospective causal expectation d(X) of option X equals e_X(X) − e_X(Y). This measures how much you would think you gained, from the standpoint of causal decision theory, in choosing X rather than Y by the lights of having updated on choosing X. Then you should choose the option with the bigger d(X).

In the case where there is a $10 fee for choosing box A, d(B) is approximately $100 while d(A) is approximately $90, so you should go for box B, as per my intuition. So you end up agreeing with evidential decision theory here.
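Here is a sketch of that calculation with d defined exactly as above; the 0.99 accuracy is my stand-in for a nearly perfect predictor, and the exact figures come out close to, though not exactly, the rounded values above, with d(B) exceeding d(A) either way:

```python
PRIZE = 100.0
ACCURACY = 0.99              # stand-in for a nearly perfect predictor
FEE = {"A": 10.0, "B": 0.0}  # the $10 fee variant

def p_money_in_A_given_chosen(x):
    """Credence that the money is in box A after updating on having chosen x."""
    return ACCURACY if x == "A" else 1 - ACCURACY

def e(x, y):
    """e_X(Y): causal expected value of option y with the causal-hypothesis
    credences updated on your having chosen x."""
    p_a = p_money_in_A_given_chosen(x)
    p_win = p_a if y == "A" else 1 - p_a
    return p_win * PRIZE - FEE[y]

def d(x):
    """Differential retrospective causal expectation of x against the other option."""
    other = "B" if x == "A" else "A"
    return e(x, x) - e(x, other)

print(f"d(A) = {d('A'):.1f}, d(B) = {d('B'):.1f}")  # d(B) > d(A): take the free box B
```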

You avoid the conclusion that you should refrain from smoking to make sure you don’t have cancer in the hypothetical case where cancer causes smoking but not conversely, because the differential retrospective causal expectation of smoking is positive while the differential retrospective causal expectation of not smoking is negative, assuming smoking is fun (is it?). So here you agree with causal decision theory.

What about Newcomb’s paradox? If the clear box has a thousand dollars and the opaque box has a million or nothing (depending on whether you are predicted to take just the opaque box or to take both), then the differential retrospective causal expectation of two-boxing is a thousand dollars (when you learn you two-box, you learn that the opaque box was likely empty) and the differential retrospective causal expectation of one-boxing is minus a thousand dollars.

So the differential retrospective causal expectation theory agrees with causal decision theory in the clear case (cancer-causes-smoking) and in the difficult case (Newcomb), but agrees with evidential decision theory in the $10 fee variant of my two-box scenario, and that last verdict seems plausible.

But (a) it’s gerrymandered and (b) I don’t know how to generalize it to cases with more than two options. I feel lost.

Maybe I should stop worrying about this stuff, because maybe there just is no good general way of making rational decisions in cases where there is probabilistic information available to you about how you will make your choice.

Tuesday, August 26, 2025

Position: Assistant Professor of Bioethics, Tenure Track, Department of Philosophy, Baylor University

We're hiring again. Here's the full ad.

My AI policy

I’ve been wondering what to allow and what to disallow in terms of AI. I decided to treat AI basically as a person and I put this in my Metaphysics syllabus:

Even though (I believe) AI is not a person and its products are not “thoughts”, treat AI much like you would a person in writing your papers. I encourage you to have conversations with AIs about the topics of the class. If you get ideas from these conversations, put in a footnote saying you got the idea from an AI, and specifically cite which AI. If you use the AI’s words, put them in quotation marks. (If your whole paper is in quotation marks, it’s not cheating, but you haven’t done the writing yourself and so it’s like a paper not turned in, a zero.) Just as you can ask a friend to help you understand the reading, you can ask an AI to help you understand the reading, and in both cases you should have a footnote acknowledging the help you got. Just as you can ask a friend, or the Writing Center or Microsoft Word to find mistakes in your grammar and spelling, you can ask an AI to do that, and as long as the contribution of the AI is to fix errors in grammar and spelling, you don’t need to cite. But don’t ask an AI to rewrite your paper for you—now you’re cheating as the wording and/or organization is no longer yours, and one of the things I want you to learn in this class is how to write. Besides all this, last time I checked, current AI isn’t good at producing the kind of sharply focused numbered valid arguments I want you to make in the papers—AI produces things that look like valid arguments, but may not be. And they have a distinctive sound to them, so there is a decent chance of getting caught. When in doubt, put in a footnote at the end what help you got, whether from humans or AI, and if the help might be so much that the paper isn’t really yours, pre-clear it with me.

An immediate regret principle

Here’s a plausible immediate regret principle:

  1. It is irrational to make a decision such that learning that you’ve made this decision immediately makes it rational to regret that you didn’t make a different decision.

The regret principle gives an argument for two-boxing in Newcomb’s Paradox, since if you go for one box, as soon as you have made your decision to do that, you will regret you didn’t make the two-box decision—there is that clear box with money staring at you. But if you go for two boxes, you will have no regrets.

Interestingly, though, one can come up with predictor stories where one has regrets no matter what one chooses. Suppose there are two opaque boxes, A and B, and you can take either box but not both. A predictor put a thousand dollars in the box that they predicted you would not take. Their prediction need not be very good—all we need for the story is that there is a better than even probability of their having predicted you choosing A conditionally on your choosing A and a better than even probability of their having predicted you choosing B conditionally on your choosing B. But now as soon as you’ve made your decision, and before you open the chosen box, you will think the other box is more likely to have the money, and so your knowledge of your decision will make it rational to regret that decision. Note that while the original Newcomb problem is science-fictional, there is nothing particularly science-fictional about my story. It would not be surprising, for instance, if someone were able to guess with better than even chance of correctness about what their friends would choose.
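A quick check of the regret-either-way structure with hypothetical numbers (the 0.6 hit rates below are mine; the story only requires anything above one half): since the money went into the box predicted not to be taken, updating on your own choice pushes the money into the other box.

```python
# P(predictor predicted X | you choose X); any value above 0.5 does the job.
P_PREDICT_GIVEN_CHOICE = {"A": 0.6, "B": 0.6}

def p_money_in_other_box(choice):
    """The money sits in the box predicted NOT to be taken, so the probability that
    it is in the box you passed up equals the predictor's hit rate on your choice."""
    return P_PREDICT_GIVEN_CHOICE[choice]

for choice in ("A", "B"):
    print(f"chose {choice}: P(money in the other box) = {p_money_in_other_box(choice):.2f} -> regret")
```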

Is this a counterexample to the immediate regret principle (1), or is this an argument that there are real rational dilemmas, cases where all options are irrational?

I am not sure, but I am inclined to think that it’s a counterexample to the regret principle.

Can we modify the immediate regret principle to save it? Maybe. How about this?

  2. No decision is such that learning that you’ve rationally made this decision immediately makes it rationally required to regret that you didn’t make a different decision.

On this regret principle, regret is compatible with non-irrational decision making but not with (known) rational decision making.

In my box story, it is neither rational nor irrational to choose A, and it is neither rational nor irrational to choose B. Then there is no contradiction to (2), since (2) only applies to decisions that are rationally made. And applying (2) to Newcomb’s Paradox no longer yields an argument for two-boxing, but only an argument that it is not rational to one-box. (For if it were rational to one-box, one could rationally decide to one-box, and one would then regret that.)

The “rationally” in (2) can be understood in a weaker way or a stronger way (the stronger way reads it as “out of rational requirement”). On either reading, (2) has some plausibility.

Monday, August 25, 2025

An odd decision theory

Suppose I am choosing between options A and B. Evidential decision theory tells me to calculate the expected utility E(U|A) given the news that I did A and the expected utility E(U|B) given the news that I did B, and go for the bigger of the two. This is well-known to lead to the following absurd result. Suppose there is a gene G that both causes one to die a horrible death one day and makes one very likely to choose A, while absence of the gene makes one very likely to choose B. Then if A and B are different flavors of ice cream, I should always choose B, because E(U|A) ≪ E(U|B), since the horrible death from G trumps any advantage of flavor that A might have over B. This is silly, of course, because one’s choice does not affect whether one has G.

Causal decision theorists proceed as follows. We have a set of “causal hypotheses” about what the relevant parts of the world at the time of the decision are like. For each causal hypothesis H we calculate E(U|HA) and E(U|HB), and then we take the weighted average over our probabilities, and then decide accordingly. In other words, we have a causal expected utility of D

  • E_c(U|D) = ∑_H E(U|HD) P(H)

and are to choose A over B provided that E_c(U|A) > E_c(U|B). In the gene case, the “bad news” of the horrible death on G is a constant addition to E_c(U|A) and to E_c(U|B), and so it can be ignored—as is right, since it’s not in our control.
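Here is a minimal sketch of the formula at work on the gene case; the 50-50 prior on G, the flavor payoffs, and the penalty for the horrible death are invented stand-in numbers:

```python
# Causal hypotheses: having the gene G or not, with my credences over them.
P = {"G": 0.5, "not-G": 0.5}   # stand-in prior over the causal hypotheses

# E(U | H and D): utility given hypothesis H and decision D. The flavor payoffs
# (3 vs 2) and the -1000 for the horrible death are invented for illustration.
U = {
    ("G", "A"): 3 - 1000, ("G", "B"): 2 - 1000,
    ("not-G", "A"): 3,    ("not-G", "B"): 2,
}

def causal_eu(decision):
    """E_c(U|D) = sum over H of E(U|HD) * P(H)."""
    return sum(U[(h, decision)] * P[h] for h in P)

print(causal_eu("A") - causal_eu("B"))  # = 1.0: the constant "bad news" from G cancels
```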

But here is a variant case that worries me. Suppose that you are choosing between flavors A and B of ice cream, and you will only ever get to taste one of them, and only once. You can’t figure out which one will taste better for you (maybe one is oyster ice cream and the other is sea urchin ice cream). However, data shows that not only does G make one likely to choose A and its absence makes one likely to choose B, but everyone who has G derives pleasure from A and displeasure from B and everyone who lacks G has the opposite result, and all the pleasures and displeasures are of the same magnitude.

Now, background information says that you have a 3/4 chance of having G. On causal decision theory, this means that you should choose A, because likely you have G, and those who have G all enjoy A. Evidential decision theory, however, tells you that you should choose B, since if you choose B then likely you don’t have the terrible gene G.

In this case, I feel causal decision theory isn’t quite right. Suppose I choose A. Then after I have made my choice, but before I have consumed the ice cream, I will be glad that I chose A: my choice of A will make me think I have G, and hence that A is tastier. But similarly, if I choose B, then after I have made my choice, and again before consumption, I will be glad that I chose B, since my choice of B will make me think I don’t have G and hence that B was a good choice. Whatever I choose, I will be glad I chose it. This suggests to me that there is nothing wrong with either choice!

Here is the beginning of a third decision theory, then—one that is neither causal nor evidential. An option A is permissible provided that causal decision theory with the causal hypothesis credences conditioned on one’s choosing A permits one to do A. An option A is required provided that no alternative is permissible. (There are cases where no option is permissible. That’s weird, I admit.)

In the initial case, where the pleasure of each flavor does not depend on G, this third decision theory gives the same answer as causal decision theory—it says to go for the tastier flavor. In the second case, however, where the pleasure/displeasure depends on G, it permits one to go for either flavor. In a probabilistic-predictor Newcomb’s Paradox, it says to two-box.
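Here is a sketch of the third theory on the second ice-cream case; the only inputs from the case above are P(G) = 3/4 and the fact that G flips which flavor one enjoys, while the choice likelihoods and the unit pleasure magnitude are my stand-ins:

```python
# Stand-in numbers: how likely each choice is given the gene, and a unit
# pleasure/displeasure whose sign the gene G flips. Only P(G) = 3/4 is given.
P_G = 0.75
P_CHOOSE = {"A": {"G": 0.9, "not-G": 0.1}, "B": {"G": 0.1, "not-G": 0.9}}
U = {"G": {"A": 1.0, "B": -1.0}, "not-G": {"A": -1.0, "B": 1.0}}

def p_gene_given_choice(choice):
    """Bayes-update the credence in G on the news that I made this choice."""
    num = P_CHOOSE[choice]["G"] * P_G
    return num / (num + P_CHOOSE[choice]["not-G"] * (1 - P_G))

def causal_ev(option, p_g):
    """Causal expected value of an option under a given credence in G."""
    return p_g * U["G"][option] + (1 - p_g) * U["not-G"][option]

def permissible(option):
    """The proposed theory: condition the causal credences on choosing the option,
    then ask whether causal decision theory still permits that option."""
    p_g = p_gene_given_choice(option)
    rival = "B" if option == "A" else "A"
    return causal_ev(option, p_g) >= causal_ev(rival, p_g)

print({x: permissible(x) for x in ("A", "B")})  # both True: either flavor is permitted
```

With the same machinery one can check the probabilistic-predictor Newcomb case: conditioning on one-boxing makes the opaque box look full, but causal decision theory then still prefers adding the clear box, so only two-boxing is self-permitting.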