Autumn 2008 Seminar: Philosophical Paradoxes
Newcomb's Paradox – Selected Articles

A: One-boxers:

Bar-Hillel M. & Margalit A. (1972) Newcomb's Paradox Revisited (in British Journal for the Philosophy of Science, vol. 23, p. 301)
Abstract: Newcomb's problem is a two-person game in which two principles of rational choice, maximizing expected utility and choosing the dominant strategy, seem to call for different moves. Upon analysis, the problem is restated as a conflict between the attractive move (on subjective and game-theoretical grounds) and the tantalizing feeling that its choice is tantamount to magical thinking. However, the focus of the paradox is shown to lie in the prior assumptions regarding the nature of the game rather than in the strategies involved, and can be dealt with from a psychological rather than a logical point of view.

Horgan T. (1981) Counterfactuals and Newcomb's Problem (in Journal of Philosophy, vol. 78, p. 331-356)
Abstract: I argue that one-box reasoning and two-box reasoning in Newcomb's problem employ counterfactual conditionals essentially, and that the crux of the problem concerns the question of how best to resolve the vagueness of the relevant counterfactuals. If the appropriate resolution is what David Lewis calls the 'standard' resolution, then the two-box reasoning prevails. But I argue that a non-standard resolution is appropriate for purposes of practical decision making, and hence that one-box reasoning prevails. I then discuss the implications for decision theory.

B: Two-boxers:

Lewis D. (1981) Why Ain'cha Rich? (in Noûs, vol. 15, p. 377-380)
Abstract: Those who think it rational to take two boxes in Newcomb's problem say their predictable failure to win millions shows not that they are making the irrational choice, but rather that the problem is one in which (predicted) rationality is rewarded. Could there be a similar problem in which what is rewarded is (predicted) irrationality according to the one-boxers' standard of rationality? That proves to be impossible.

Nozick R. (1969) Newcomb's Problem and Two Principles of Choice (in Rescher N. et al. (eds.), Essays in Honor of Carl G. Hempel, D. Reidel, p. 114-146)

Sobel J.H. (1988) Infallible Predictors (in The Philosophical Review, vol. 97, p. 3-24)
Abstract: Some one-boxers argue: if your confidence in the predictor were complete, then you would see yourself as having a choice between $1,000,000 for sure and $1 for sure. This paper stresses the difference between absolutely infallible and merely never-erring predictors; cautions against confusing interpretations of the displayed conditional, some of which are true and some of which are relevant, but none of which are both; and concludes with a probability-of-one case in which the first box has in it not $1 but $1,000,000, just as the second box may.

C: No solution:

Hubin D. & Ross G. (1985) Newcomb's Perfect Predictor (in Noûs, vol. 19, p. 439-446)
Abstract: Perfect-predictor versions of Newcomb's problem, plausibly interpreted, are impossible. Controversy between one-boxers and two-boxers persists because the reasoning of each side depends upon paying selective attention to certain constraints on the puzzle. Either answer can be shown to be correct by attending to certain constraints and ignoring others. In order to establish this claim, we begin by examining the general constraints on a practical decision problem and the way in which the specific constraints on Newcomb's problem function to delimit the range of possible solutions to the puzzle.

Locke D. (1978) How to Make a Newcomb Choice (in Analysis, vol. 38, p. 17-23)
Abstract: Newcomb's choice problem is presented, and arguments are given both for choosing two boxes and for choosing one. The argument for choosing one box is shown to depend on a particular explanation of the predictor's successes, and this explanation in turn suggests that the choice is not free. Thus the solution to the problem is not that there is good reason to choose only one box, but that there is good reason to think there is no freedom to choose other than has been predicted. But if the choice were free, the rational course of action would be to choose both boxes.

Maitzen S. & Wilson G. (2003) Newcomb's Hidden Regress (in Theory and Decision, vol. 54, p. 151-162)
Abstract: Newcomb's problem supposedly involves your choosing one or else two boxes in circumstances in which a predictor has made a prediction of how many boxes you will choose. We argue that the circumstances which allegedly define Newcomb's problem generate a previously unnoticed regress which shows that Newcomb's problem is insoluble because it is ill-formed. Those who favor, as we do, a "no-box" reply to Newcomb's problem typically claim either that the problem's solution is underdetermined or else that it is overdetermined. We are no-boxers of the first kind, but the underdetermination we identify is more radical than any previously identified: it blocks the very set-up of the problem and not just potential solutions to the problem once it has been set up. The defect is subtle, but it cripples every genuine version of the problem, regardless of variations in such things as the predictor's degree of reliability, the basis on which the prediction is made, or the amount of money in each box. The regress shows that, surprisingly enough, no one can understand Newcomb's problem, and so no one can possibly solve it.

Slezak P. (2006) Demons, Deceivers and Liars: Newcomb's Malin Génie (in Theory and Decision, vol. 61(3), p. 277-303)
Abstract: A fully adequate solution to Newcomb's Problem (Nozick 1969) should reveal the source of its extraordinary elusiveness and persistent intractability. Recently, a few accounts have independently sought to meet this criterion of adequacy by exposing the underlying source of the problem's profound puzzlement. Thus, Sorensen (1987), Slezak (1998), Priest (2002) and Maitzen and Wilson (2003) share the 'no box' view, according to which the very idea that there is a right choice is misconceived, since the problem is ill-formed or incoherent in some way. Among proponents of this view, Richard Jeffrey (2004) recently declared that he renounces his earlier position that accepted Newcomb problems as genuine decision problems. Significantly, Jeffrey suggests that "Newcomb problems are like Escher's famous staircase on which an unbroken ascent takes you back where you started" (Jeffrey (2004; 113)). Jeffrey's analogy is apt for a puzzle whose specific logical features can be precisely articulated. Along the lines of these related approaches, I propose to improve and clarify them by providing a deeper analysis that elucidates their essential, related insights.

Supplementary literature:

Benditt T.M. & Ross D.J. (1976) Newcomb's Paradox (in British Journal for the Philosophy of Science, vol. 27, p. 161-164)
Abstract (mnr): The authors criticize Schlesinger's argument for the thesis that no evidential basis can justify belief in the existence of a being who can foresee free choices.

Levi I. (1982) A Note on Newcombmania (in Journal of Philosophy, vol. 79, p. 337-342)
Abstract: Criticizes Horgan's argument in favor of the one-boxing strategy. (Somewhat technical for the seminar.)

Lewis D. (1979) Prisoners' Dilemma is a Newcomb Problem (in Philosophy and Public Affairs, vol. 8, p. 236-237)
Abstract: Removing inessentials from the description of a Newcomb problem, we obtain a description that applies also to the decision problem that arises in prisoners' dilemma. Consequently, the dispute about what it is rational to do in the two problems is the same dispute.

McKay P. (2004) Newcomb's problem: the causalists get rich (in Analysis, vol. 64(2), p. 187-189)
Abstract: The paper argues that the causalist can explain why intuitions waver in the Newcomb problem. They waver in favor of one-boxing because of a buried belief that the 100% success rate of the predictor must indicate a hidden causal connection between one-boxing and the apparently prior actions of the predictor. Once this is seen, favoring one-boxing will not mislead philosophers into evidentialism.

Schmidt J.H. (1998) Newcomb's Paradox Realized with Backward Causation (in British Journal for the Philosophy of Science, vol. 49, p. 67-87)
Abstract: In order to refute the widely held belief that the game known as 'Newcomb's paradox' is physically nonsensical and impossible to imagine (e.g. because it involves backward causation), I tell a story in which the game is realized in a classical, deterministic universe in a physically plausible way. The predictor is a collection of beings which are by many orders of magnitude smaller than the player and which can, with their exquisite measurement techniques, observe the particles in the player's body so accurately that they can predict his choice (in much the same way as we can predict the motion of celestial bodies). I argue that the player, by choosing whether to take only one box or both boxes, influences whether or not, in the past, the predictor put a million pounds into the second box. Yet, I establish that no causal paradox can arise in this set-up.

Woodward P.A. (2006) Why Prisoners' Dilemma Is Not a Newcomb Problem (It's Not Even Two Newcomb Problems Side by Side) (in Sorites, vol. 17, p. 81-84)
Abstract: David Lewis has argued that we can gain helpful insight into the (all too common) prisoners' dilemmas that we face from the fact that Newcomb's problems are easy to solve, and the fact that prisoners' dilemmas are nothing other than two Newcomb problems side by side. The present paper shows that the (all too common) prisoners' dilemmas that we face are significantly different from Newcomb problems in that the former are iterated while the latter are not. Thus, Lewis's hope that we can get insight into the former from the latter is illusory.
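The conflict Bar-Hillel & Margalit identify between the two principles of rational choice can be made concrete with a little arithmetic. The sketch below compares the expected utility of one-boxing and two-boxing under an imperfect predictor; the payoff amounts ($1,000 visible, $1,000,000 in the opaque box) follow the standard presentation of the problem, while the 90% predictor accuracy is an illustrative assumption, not a figure from any of the papers above:

```python
# Expected utility of one-boxing vs two-boxing in Newcomb's problem,
# given a predictor that is correct with probability p.
# Payoffs follow the standard setup; p = 0.9 is an illustrative assumption.

def expected_utility(p, opaque=1_000_000, transparent=1_000):
    # The predictor fills the opaque box iff it predicts one-boxing.
    eu_one_box = p * opaque + (1 - p) * 0
    eu_two_box = p * transparent + (1 - p) * (opaque + transparent)
    return eu_one_box, eu_two_box

one, two = expected_utility(p=0.9)
print(one, two)  # roughly 900000.0 and 101000.0: EU favors one-boxing

# Dominance reasoning, by contrast, ignores p entirely: whatever the
# opaque box already contains, two-boxing yields exactly $1,000 more.
# Hence the two principles recommend different moves.
```

The dominance side of the conflict is visible in the code only as a comment, precisely because it makes no reference to the predictor's accuracy; the expected-utility side flips to recommending two-boxing once p falls low enough (here, below roughly 0.5005).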