Here are the first three chapters of his book Probability Theory: The Logic of Science. The historical material in the preface is fascinating.
Jaynes started as an Oppenheimer student, following his advisor from Berkeley to Princeton. But Oppenheimer's mystical adherence to the logically incomplete Copenhagen interpretation (Everett's "philosophic monstrosity") led Jaynes to switch advisors, becoming a student of Wigner.
Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false. This web site has been established to help promote this interpretation of probability theory by distributing articles, books and related material. As Ed Jaynes originated this interpretation of probability theory we have a large selection of his articles, as well as articles by a number of other people who use probability theory in this way.

See Carson Chow for a nice discussion of how Bayesian inference is more like human reasoning than formal logic.
The seeds of the modern era could arguably be traced to the Enlightenment and the invention of rationality. I say invention because although we may be universal computers and we are certainly capable of applying the rules of logic, it is not what we naturally do. What we actually use, as coined by E.T. Jaynes in his iconic book Probability Theory: The Logic of Science, is plausible reasoning. Jaynes is famous for being a major proponent of Bayesian inference during most of the second half of the last century. However, to call Jaynes's book a book about Bayesian statistics is to wholly miss Jaynes's point, which is that probability theory is not about measures on sample spaces but a generalization of logical inference. In Jaynes's view, probabilities measure a degree of plausibility.

While I think the brain is doing something like Bayesian inference (perhaps with some kinds of heuristic shortcuts), there are probably laboratory experiments showing that we make a lot of mistakes and often do not properly apply Bayes' theorem. A quick look through the old Kahneman and Tversky literature would probably confirm this :-)
I think a perfect example of how unnatural the rules of formal logic are is to consider the simple implication A -> B, which means: if A is true then B is true. By the rules of formal logic, if A is false then B can be true or false (i.e. a false premise can prove anything). Conversely, if B is true, then A can be true or false. The only valid conclusion you can deduce from it is that if B is false then A is false. ...
However, people don't always (seldom?) reason this way. Jaynes points out that the way we naturally reason also includes what he calls weak syllogisms: 1) if A is false then B is less plausible, and 2) if B is true then A is more plausible. In fact, it is more likely that we mostly use weak syllogisms, and that interferes with formal logic. Jaynes showed that the weak syllogisms, as well as formal logic, arise naturally from Bayesian inference.
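For concreteness, here is a minimal numerical sketch (the probabilities are made up for illustration, and the code is mine, not from Jaynes' book) showing how both the strong syllogism and the two weak syllogisms fall out of Bayes' rule once "A implies B" is encoded as P(B|A) = 1.

```python
# Illustrative numbers only: any prior consistent with P(B|A) = 1 works.
p_A = 0.3             # assumed prior plausibility of A
p_B_given_A = 1.0     # "if A is true then B is true"
p_B_given_notA = 0.5  # B can still happen for other reasons (assumed)

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Strong syllogism: B false => A false
p_A_given_notB = (1 - p_B_given_A) * p_A / (1 - p_B)   # comes out to 0

# Weak syllogism 1: B true => A more plausible
p_A_given_B = p_B_given_A * p_A / p_B                   # larger than p_A

# Weak syllogism 2: A false => B less plausible
# (p_B_given_notA < p_B whenever p_B_given_A > p_B_given_notA)

print(f"P(A) = {p_A:.3f}")
print(f"P(A | B true)  = {p_A_given_B:.3f}   (A became more plausible)")
print(f"P(A | B false) = {p_A_given_notB:.3f}   (A is ruled out)")
print(f"P(B) = {p_B:.3f}, P(B | A false) = {p_B_given_notA:.3f}   (B became less plausible)")
```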
[Carson gives a nice example here -- see the original.]
...I think this strongly implies that the brain is doing Bayesian inference. The problem is that, depending on your priors, you can deduce different things. This explains why two perfectly intelligent people can easily come to different conclusions. It also implies that reasoning logically is something that must be learned and practiced. I think it is important to know, when you draw a conclusion, whether you are using deductive logic or depending on some prior. Even if it is hard to distinguish between the two for yourself, you should at least recognize that it could be an issue.
"Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false."
ReplyDelete"our hypotheses" are true or false or they are meaningless.
Probability is not a characteristic of the world, but a characteristic of the relation of mind to world? Unless you believe the Copenhagen interpretation, but it is an interpretation.
ReplyDelete"Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false."
"They say that Understanding ought to work by the rules of right reason. These rules are, or ought to be, contained in Logic; but the actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man's mind."
--James Clerk Maxwell, 1850
Interestingly, Maxwell was only 19 when he wrote this.
This quote is (I think) the very first one in Jaynes' book. I was quite impressed by it as well, though I didn't know Maxwell wrote it at 19!
Hi Steve,
Bayesian inference doesn't necessarily contradict the behavioral economists. Depending on what your prior is, you can come to erroneous conclusions. I think it is still open as to whether we do faulty inference or have faulty priors.
I think there are experiments (games) in which certain probabilities are basically *given* and the person tested just has to use Bayes' theorem to get at the answer to maximize their payoff. (They are not told this is the case but the smarter ones figure it out; imagine a game with draws from an urn.) However, the experiments show people don't do this very well, which would be surprising if we had all evolved a Bayes' theorem co-processor in our brain. The only priors that would come into this would be crazy things like "the experimenter is lying to me" or "the urn has a trapdoor in the bottom" -- things we can probably discount.
A more direct test would be just to give your students a draws-from-urn problem and see if they naturally get the right result. I would wager that they do not, although their heuristic points them *qualitatively* in the right direction.
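To make the setup concrete, here is a minimal sketch of such a draws-from-an-urn problem; the urn compositions, prior, and observed draws are invented for illustration and are not from any of the experiments mentioned above.

```python
# Two urns with different fractions of red balls (assumed numbers).
urns = {
    "urn_1": 0.7,   # fraction of red balls in urn 1
    "urn_2": 0.3,   # fraction of red balls in urn 2
}
prior = {"urn_1": 0.5, "urn_2": 0.5}   # each urn equally likely a priori

draws = ["red", "red", "blue", "red"]  # observed draws with replacement (assumed)

posterior = dict(prior)
for colour in draws:
    # multiply in the likelihood of this draw under each urn
    for urn, p_red in urns.items():
        likelihood = p_red if colour == "red" else 1 - p_red
        posterior[urn] *= likelihood
    # renormalize so the posterior sums to one
    total = sum(posterior.values())
    posterior = {urn: p / total for urn, p in posterior.items()}

print(posterior)   # roughly {'urn_1': 0.84, 'urn_2': 0.16} for these draws
```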
My opinion is that deviations from Bayesian reasoning observed in experiments are attributable to 1) subjects misunderstanding the questions posed by the oh-so-smart experimenter, or 2) subjects utilizing information that was not given (that is, working with invalid priors).
For example, I attribute the rather typical inability to intuitively apply Bayes' theorem when interpreting medical test results (given the test's accuracy and the condition's incidence) to a misinterpretation of what the "test accuracy" number actually means. People rely on semantic shortcuts - a "high detection rate test" roughly translates into a "good, reliable test" - which is quite a reasonable interpretation if you hold it, ceteris paribus, against a "low detection rate test".
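As a concrete illustration of the medical-test case (the sensitivity, specificity, and incidence figures below are assumed purely for illustration), Bayes' theorem shows why the "high detection rate" shortcut misleads when the condition is rare.

```python
# Assumed illustrative numbers, not taken from the comment above.
sensitivity = 0.99   # P(positive test | disease)
specificity = 0.95   # P(negative test | no disease)
incidence   = 0.001  # P(disease) in the tested population

p_positive = sensitivity * incidence + (1 - specificity) * (1 - incidence)
p_disease_given_positive = sensitivity * incidence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")
# ~0.019: despite the 99% "detection rate", a positive result leaves the
# probability of disease at only about 2% for this incidence.
```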
Another good example is the gambler's fallacy, which is not a fallacy at all in many real-life circumstances (as opposed to idealized mathematical processes).
Untrained people are prone to prefer assuming the truth of a weak explanation over having no explanation at all (a useful survival trait, I think, and not so uncommon among scientists either, as the first step). So when we discover a deviation between idealized Bayesian reasoning and actual real-life reasoning, we'd better check what kinds of spurious information a person is working with, rather than assuming that she violates the consistency rules of inference.
'I think a perfect example of how unnatural the rules of formal logic are is to consider the simple implication A -> B, which means: if A is true then B is true. By the rules of formal logic, if A is false then B can be true or false (i.e. a false premise can prove anything). Conversely, if B is true, then A can be true or false. The only valid conclusion you can deduce from it is that if B is false then A is false. ...'
I've been arguing this among friends for years :-)
@Steve - Surprisingly, some experimental evidence of irrationality has been explained by appealing to skepticism about task parameters (i.e. the experimenter is lying to me; see http://psy2.ucsd.edu/~mckenzie/McKenzieWixtedNoelleJEPLMC2004.pdf). The explanation fits the data quite well.
Birnbaum and Mellers (1983) provide the best evidence for humans not being Bayesian, including the possibility that we are "subjective Bayesians", although the "Bayesian model" still fits the data quite well. See page 802 for a particularly revealing analysis: http://psych.fullerton.edu/MBIRNBAUM/papers/Birnbaum_Mellers_JPSP_1983.pdf. (Note: Birnbaum also gives reason to doubt "base-rate neglect" from Tversky's work.)
Recently, there has been some work coming out of MIT showing that humans are surprisingly Bayesian (e.g. http://web.mit.edu/cocosci/Papers/Griffiths-Tenenbaum-PsychSci06.pdf; see http://web.mit.edu/cocosci/josh.html and the work of Tenenbaum's students and coauthors). Tenenbaum's work on subjective randomness is equally fascinating, and makes me think researchers were a little too quick to conclude subjects are "irrational" (see http://web.mit.edu/cocosci/Papers/complex.pdf).
It seems to me that, with the possible exception of Birnbaum's scale-adjustment averaging model, the Bayesian model of human reasoning is the best mathematical theory applicable across a wide range of contexts that provides reasonably accurate predictions that are rarely directionally incorrect (although see Birnbaum and Mellers for a possible exception). And even though the scale-adjustment averaging model wins when probabilities are given to subjects, I don't see it applying to situations in which probabilities aren't given to human subjects (e.g. subjects rely on prior experience and observed samples).
BTW - I'm trying to write a dissertation in accounting showing Bayesian cognition. But it's a tough sell when most in my field are essentially followers of Kahneman and Tversky with no experience using Bayes' theorem (and therefore extreme skepticism that it can predict behavior).
Charles S. Peirce made the same link between logic and probability (Proc. Amer. Acad. Arts & Sciences, 7, 1867) and came up with a version of Bayes's theorem that looks distinctly unlike what we read in modern textbooks but is equivalent. I don't know if he read Bayes (1762(1763?)), but he probably knew about Laplace. Admittedly he was 17 years behind Maxwell and nine years older, but he preceded Jaynes in that respect by a fair margin.
Hi Matthew. Thanks for the references, although not all of them work anymore. If Bayesian inference is actually equivalent to "proper" inference, then it is not surprising that evolution found a way to make us "Bayesians". It might be that the comforting idea that we are "no mathematical machines" is what makes people appreciate studies like those of Kahneman so much. The possibility that there is an underlying prior corresponding to something not considered in the experiment is perhaps annoying to scientists.
Have you started your dissertation yet?