Here are the first three chapters of E. T. Jaynes's book Probability Theory: The Logic of Science. The historical material in the preface is fascinating.
Jaynes started as an Oppenheimer student, following his advisor from Berkeley to Princeton. But Oppenheimer's mystical adherence to the logically incomplete Copenhagen interpretation (what Everett called a "philosophic monstrosity") led Jaynes to switch advisors, becoming a student of Wigner.
Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false. This web site has been established to help promote this interpretation of probability theory by distributing articles, books and related material. As Ed Jaynes originated this interpretation of probability theory, we have a large selection of his articles, as well as articles by a number of other people who use probability theory in this way.

See Carson Chow for a nice discussion of how Bayesian inference is more like human reasoning than formal logic.
The seeds of the modern era could arguably be traced to the Enlightenment and the invention of rationality. I say invention because although we may be universal computers and we are certainly capable of applying the rules of logic, it is not what we naturally do. What we actually use, to borrow the term E. T. Jaynes employs in his iconic book Probability Theory: The Logic of Science, is plausible reasoning. Jaynes is famous for being a major proponent of Bayesian inference during most of the second half of the last century. However, to call Jaynes's book a book about Bayesian statistics is to wholly miss Jaynes's point, which is that probability theory is not about measures on sample spaces but a generalization of logical inference. In Jaynes's view, probabilities measure degrees of plausibility.

While I think the brain is doing something like Bayesian inference (perhaps with some kinds of heuristic shortcuts), laboratory experiments show that we make a lot of mistakes and often fail to properly apply Bayes' theorem. A quick look through the old Kahneman and Tversky literature confirms this :-)
I think a perfect example of how unnatural the rules of formal logic are is the simple implication A -> B, which means: if A is true then B is true. By the rules of formal logic, if A is false then B can be true or false (a false premise is consistent with any conclusion, so the implication tells us nothing about B). Conversely, if B is true, then A can be true or false. The only other valid conclusion you can deduce from A -> B is the contrapositive: if B is false then A is false. ...
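To make this concrete, here is a minimal Python sketch (my own illustration, not from the original post) that enumerates the truth assignments consistent with A -> B and checks which conclusions are actually forced:

```python
from itertools import product

# All (A, B) truth assignments consistent with the premise A -> B
# (material implication: false only when A is true and B is false).
consistent = [(a, b) for a, b in product([True, False], repeat=2)
              if (not a) or b]

def forced(observed, value, target):
    """Values the target can still take once we learn observed == value."""
    return {row[target] for row in consistent if row[observed] == value}

# Index 0 is A, index 1 is B.
print(forced(0, True, 1))   # A true  -> {True}: B forced (modus ponens)
print(forced(1, False, 0))  # B false -> {False}: A forced (modus tollens)
print(forced(0, False, 1))  # A false -> {True, False}: nothing follows about B
print(forced(1, True, 0))   # B true  -> {True, False}: nothing follows about A
```

Only the first two cases pin down an answer; the last two are exactly the situations where formal logic stays silent but, as discussed next, plausible reasoning does not.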
However, people don't always (seldom?) reason this way. Jaynes points out that the way we naturally reason also includes what he calls weak syllogisms: 1) if A is false then B is less plausible, and 2) if B is true then A is more plausible. In fact, we more likely rely mostly on weak syllogisms, and that interferes with formal logic. Jaynes showed that the weak syllogisms, as well as formal logic, arise naturally from Bayesian inference.
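The derivation is short: if A implies B then P(B|A) = 1, so Bayes' theorem gives P(A|B) = P(A)P(B|A)/P(B) = P(A)/P(B) >= P(A), which is weak syllogism 2. A toy numerical check (the probabilities here are invented for illustration):

```python
# Check that Bayes' theorem reproduces Jaynes's weak syllogisms
# when A logically implies B, i.e. P(B|A) = 1.
p_a = 0.3              # prior plausibility of A
p_b_given_a = 1.0      # A implies B
p_b_given_not_a = 0.5  # B can still happen without A

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Weak syllogism 2: B is true, so A becomes MORE plausible.
p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(A) = {p_a:.3f} -> P(A|B) = {p_a_given_b:.3f}")   # 0.300 -> 0.462

# Weak syllogism 1: A is false, so B becomes LESS plausible.
print(f"P(B) = {p_b:.3f} -> P(B|not A) = {p_b_given_not_a:.3f}")  # 0.650 -> 0.500
```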
[Carson gives a nice example here -- see the original.]
...I think this strongly implies that the brain is doing Bayesian inference. The problem is that, depending on your priors, you can deduce different things. This explains why two perfectly intelligent people can easily come to different conclusions. It also implies that reasoning logically is something that must be learned and practiced. I think it is important to know, when you draw a conclusion, whether you are using deductive logic or depending on some prior. Even if it is hard to distinguish between the two for yourself, you should at least recognize that it could be an issue.
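The prior-dependence is easy to see numerically. In this sketch (all numbers invented), two reasoners see the same evidence E, apply Bayes' theorem correctly, and still end up far apart because they started from different priors on a hypothesis H:

```python
# Two agents update on the SAME evidence E with the SAME likelihoods,
# differing only in their prior P(H).
def posterior(prior_h, p_e_given_h=0.9, p_e_given_not_h=0.4):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

for name, prior in [("skeptic", 0.05), ("believer", 0.60)]:
    print(f"{name}: prior {prior:.2f} -> posterior {posterior(prior):.2f}")
# skeptic:  0.05 -> 0.11  (still doubts H)
# believer: 0.60 -> 0.77  (even more convinced)
```

Both updates are flawless Bayesian inference; the disagreement lives entirely in the priors, which is exactly why it helps to know which of your conclusions rest on deduction and which rest on a prior.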