Pessimism of the Intellect, Optimism of the Will
Wednesday, May 26, 2021
How Dominic Cummings And The Warner Brothers Saved The UK
Saturday, May 22, 2021
Feynman Lectures on the Strong Interactions (Jim Cline notes)
Feynman Lectures on the Strong Interactions
Richard P. Feynman, James M. Cline
These twenty-two lectures, with exercises, comprise the extent of what was meant to be a full-year graduate-level course on the strong interactions and QCD, given at Caltech in 1987-88. The course was cut short by the illness that led to Feynman's death. Several of the lectures were finalized in collaboration with Feynman for an anticipated monograph based on the course. The others, while retaining Feynman's idiosyncrasies, are revised similarly to those he was able to check. His distinctive approach and manner of presentation are manifest throughout. Near the end he suggests a novel, non-perturbative formulation of quantum field theory in D dimensions. Supplementary material is provided in appendices and ancillary files, including verbatim transcriptions of three lectures and the corresponding audiotaped recordings.
Sunday, May 16, 2021
Ditchley Foundation meeting: China Today and Tomorrow
China Today and Tomorrow
20-21 MAY 2021
This Ditchley conference will focus on China, its internal state and sense of self today, its role in the region and world, and how these might evolve in years to come.
There are broadly two current divergent narratives about China. The first is that China’s successful response to the pandemic has accelerated China’s ascent to be the world’s pre-eminent economic power. The Made in China 2025 strategy will also see China take the lead in some technologies beyond 5G, become self-sufficient in silicon chip production, and free itself largely of external constraints on growth. China’s internal market will grow, lessening dependence on exports, and that continued growth will maintain the bargain between the Chinese people and the Chinese Communist Party through prosperity and stability. Retaining some elements of previous Chinese strategy, though, this confidence is combined with a degree of humility: China is concerned with itself and its region, not with becoming a global superpower or challenging the US. Economic supremacy is the aim, but military strategy remains focused on defence, not on increasing international leverage or scope of action.
The second competing narrative is that China’s position is more precarious than it appears. The Belt and Road Initiative will bring diplomatic support from client countries but not real economic gains. Human rights violations will damage China abroad. Internally, the pressures on natural resources will prove hard to sustain. Democratic and free-market innovation, combined with a bit more industrial strategy, will outstrip China’s efforts. Careful attention to supply chains in the West will meanwhile reduce critical reliance on China and curb China’s economic expansion. This perceived fragility is often combined, though, with a sense of heightened Chinese ambition abroad, not just through the Belt and Road Initiative but in challenging the democratic global norms established since 1989 by presenting technologically enabled and effective authoritarian rule as an alternative model for the world, rather than just a Chinese solution.
What is the evidence today for where we should settle between these narratives? What trends should we watch to determine likely future results? ...
[Suggested background reading at link above.]

Unfortunately this meeting will be virtual. The video below gives some sense of the unique charm of in-person workshops at Ditchley.
... analysis by German academic Gunnar Heinsohn. Two of his slides appear below.
1. It is possible that by 2050 the highly able STEM workforce in PRC will be ~10x larger than in the US and comparable to or larger than the rest of the world combined. Here "highly able" means roughly top few percentile math ability in developed countries (e.g., EU), as measured by PISA at age 15.
[ It is trivial to obtain this kind of estimate: PRC population is ~4x US population and the fraction of university students in STEM is at least ~2x higher. The pool of highly able 15-year-olds as estimated by the PISA or TIMSS international testing regimes is much larger than in the US, even per capita. Heinsohn's estimate is somewhat high because he uses PISA numbers that probably overstate the population fraction of Level 6 kids in PRC: current PISA studies disproportionately sample from the more developed areas of China. At bottom (asterisk) he uses results from Taiwan/Macau that give a smaller ~20x advantage of PRC vs USA. My own ~10x estimate is quite conservative in comparison. ]

2. The trajectory of international patent filings shown below is likely to continue.
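The back-of-envelope arithmetic behind the ~10x figure can be made explicit. A minimal sketch, with the population and STEM ratios taken from the discussion above and the top-percentile factor purely illustrative:

```python
# Back-of-envelope sketch of the ~10x estimate discussed above.
# All inputs are rough assumptions from the text, not official statistics.

prc_pop_ratio = 4.0        # PRC population ~4x US population
stem_fraction_ratio = 2.0  # fraction of university students in STEM at least ~2x higher

# Relative size of the highly able STEM pipeline, PRC vs US, ignoring any
# difference in the top-percentile math fraction:
naive_ratio = prc_pop_ratio * stem_fraction_ratio   # ~8x

# If the top-few-percentile math fraction (PISA-style) is somewhat higher in
# PRC than in the US, the ratio grows further; a modest illustrative 1.25x
# factor already yields the ~10x figure.
top_fraction_ratio = 1.25
estimate = naive_ratio * top_fraction_ratio
print(f"Estimated PRC/US ratio of highly able STEM workforce: ~{estimate:.0f}x")
```

Heinsohn's larger numbers correspond to assuming a much bigger top-percentile factor, which is why the ~10x figure is conservative by comparison.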
Wednesday, May 12, 2021
Neural Tangent Kernels and Theoretical Foundations of Deep Learning
Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function fθ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the nonlinearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function fθ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.

The results are remarkably well summarized in the Wikipedia entry on Neural Tangent Kernels:
For most common neural network architectures, in the limit of large layer width the NTK becomes constant. This enables simple closed form statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. For example, it guarantees that wide enough ANNs converge to a global minimum when trained to minimize an empirical loss. ...
An Artificial Neural Network (ANN) with scalar output consists of a family of functions $f\left(\cdot ,\theta \right):\mathbb {R} ^{n_{\mathrm {in} }}\to \mathbb {R}$ parametrized by a vector of parameters $\theta \in \mathbb {R} ^{P}$.
The Neural Tangent Kernel (NTK) is a kernel $\Theta :\mathbb {R} ^{n_{\mathrm {in} }}\times \mathbb {R} ^{n_{\mathrm {in} }}\to \mathbb {R}$ defined by

$\Theta \left(x,y;\theta \right)=\sum _{p=1}^{P}\partial _{\theta _{p}}f\left(x;\theta \right)\partial _{\theta _{p}}f\left(y;\theta \right).$

In the language of kernel methods, the NTK $\Theta$ is the kernel associated with the feature map $\left(x\mapsto \partial _{\theta _{p}}f\left(x;\theta \right)\right)_{p=1,\ldots ,P}$.
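At finite width, the empirical NTK is just the inner product of the parameter-gradient feature vectors. A minimal sketch for a one-hidden-layer tanh network (the architecture and sizes are illustrative, not from the paper):

```python
import numpy as np

# Empirical NTK of a one-hidden-layer scalar network:
#   Theta(x, y) = sum_p df(x)/dtheta_p * df(y)/dtheta_p

rng = np.random.default_rng(0)
n_in, width = 3, 50
W = rng.normal(size=(width, n_in)) / np.sqrt(n_in)   # hidden weights
b = rng.normal(size=width)                           # hidden biases
v = rng.normal(size=width) / np.sqrt(width)          # output weights

def f(x):
    return v @ np.tanh(W @ x + b)

def grad_params(x):
    """Gradient of f(x) w.r.t. all parameters, flattened (the feature map)."""
    h = np.tanh(W @ x + b)
    s = (1.0 - h**2) * v          # back-propagated signal at the hidden layer
    return np.concatenate([np.outer(s, x).ravel(),   # df/dW
                           s,                        # df/db
                           h])                       # df/dv

def ntk(x, y):
    return grad_params(x) @ grad_params(y)

x1, x2 = rng.normal(size=n_in), rng.normal(size=n_in)
print(ntk(x1, x2), ntk(x2, x1))   # symmetric by construction
```

At this small width the kernel still depends on the random initialization; the paper's result is that as the width grows the kernel concentrates around a deterministic limit.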
For a dataset $\left(x_{i}\right)_{i=1,\ldots ,n}\subset \mathbb {R} ^{n_{\mathrm {in} }}$ with scalar labels $\left(z_{i}\right)_{i=1,\ldots ,n}\subset \mathbb {R}$ and a loss function $c:\mathbb {R} \times \mathbb {R} \to \mathbb {R}$, the associated empirical loss, defined on functions $f:\mathbb {R} ^{n_{\mathrm {in} }}\to \mathbb {R}$, is given by
${\mathcal {C}}\left(f\right)=\sum _{i=1}^{n}c\left(f\left(x_{i}\right),z_{i}\right).$

When the ANN $f\left(\cdot ;\theta \right):\mathbb {R} ^{n_{\mathrm {in} }}\to \mathbb {R}$ is trained to fit the dataset (i.e. minimize ${\mathcal {C}}$) via continuous-time gradient descent, the parameters $\left(\theta \left(t\right)\right)_{t\geq 0}$ evolve through the ordinary differential equation:
$\partial _{t}\theta \left(t\right)=-\nabla {\mathcal {C}}\left(f\left(\cdot ;\theta \left(t\right)\right)\right).$

During training the ANN output function follows an evolution differential equation given in terms of the NTK:
$\partial _{t}f\left(x;\theta \left(t\right)\right)=-\sum _{i=1}^{n}\Theta \left(x,x_{i};\theta \right)\,\partial _{w}c\left(w,z_{i}\right){\Big |}_{w=f\left(x_{i};\theta \left(t\right)\right)}.$
This equation shows how the NTK drives the dynamics of $f\left(\cdot ;\theta \left(t\right)\right)$ in the space of functions $\mathbb {R} ^{n_{\mathrm {in} }}\to \mathbb {R}$ during training.
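This evolution equation can be checked numerically. For squared loss $c(w,z)=(w-z)^2/2$ it reduces to $\partial_t f(x) = -\sum_i \Theta(x,x_i)\,(f(x_i)-z_i)$, so a single small gradient-descent step should change $f(x)$ by that amount per unit learning rate. A sketch under these assumptions, with a tiny tanh network of illustrative size:

```python
import numpy as np

# Check the NTK evolution equation for squared loss c(w, z) = (w - z)^2 / 2:
#   d/dt f(x) = - sum_i Theta(x, x_i) (f(x_i) - z_i)
# One small explicit gradient step should match this to first order in lr.

rng = np.random.default_rng(1)
n_in, width, n_data = 2, 200, 4
theta = rng.normal(size=(width, n_in + 2))  # rows: [w_j | b_j | v_j]

def unpack(t):
    return t[:, :n_in], t[:, n_in], t[:, n_in + 1]

def f(t, x):
    W, b, v = unpack(t)
    return v @ np.tanh(W @ x + b)

def grad(t, x):
    """Gradient of f(x) w.r.t. theta, same shape as theta."""
    W, b, v = unpack(t)
    h = np.tanh(W @ x + b)
    s = (1 - h**2) * v
    g = np.zeros_like(t)
    g[:, :n_in] = np.outer(s, x)   # df/dW
    g[:, n_in] = s                 # df/db
    g[:, n_in + 1] = h             # df/dv
    return g

X = rng.normal(size=(n_data, n_in))
z = rng.normal(size=n_data)
lr = 1e-5

# one explicit gradient-descent step on C = sum_i (f(x_i) - z_i)^2 / 2
residual = np.array([f(theta, x) for x in X]) - z
step = sum(r * grad(theta, x) for r, x in zip(residual, X))
theta_new = theta - lr * step

x_test = rng.normal(size=n_in)
actual = (f(theta_new, x_test) - f(theta, x_test)) / lr
predicted = -sum(grad(theta, x_test).ravel() @ grad(theta, xi).ravel() * r
                 for xi, r in zip(X, residual))
print(actual, predicted)  # agree to first order in lr
```

At finite width the NTK itself drifts during training; in the infinite-width limit it is frozen, which is what makes the function-space dynamics linear for least squares.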
Saturday, May 08, 2021
Three Thousand Years and 115 Generations of 徐 (Hsu / Xu)
Cheng Ting Hsu was born December 1, 1923 in Wenling, Zhejiang province, China. His grandfather, Zan Yao Hsu, was a poet and doctor of Chinese medicine. His father, Guang Qiu Hsu, graduated from college in the 1920s and was an educator, lawyer, and poet.
Cheng Ting was admitted at age 16 to the elite National Southwest Unified University (Lianda), which was created during WWII by merging Tsinghua, Beijing, and Nankai Universities. This university produced numerous famous scientists and scholars such as the physicists C.N. Yang and T.D. Lee.
Cheng Ting studied aerospace engineering (originally part of Tsinghua), graduating in 1944. He became a research assistant at China's Aerospace Research Institute and a lecturer at Sichuan University. He also taught aerodynamics for several years to advanced students at the air force engineering academy.
In 1946 he was awarded one of only two Ministry of Education fellowships in his field to pursue graduate work in the United States. In 1946-1947 he published a three-volume book, co-authored with Professor Li Shoutong, on the structures of thin-walled airplanes.
In January 1948, he left China by ocean liner, crossing the Pacific and arriving in San Francisco. ...

My mother's father was a KMT general, and her family was related to Chiang Kai-shek by marriage. Both my grandfather and Chiang attended the military academy Shinbu Gakko in Tokyo. When the KMT lost to the communists, her family fled China and arrived in Taiwan in 1949. My mother's family had been converted to Christianity in the 19th century and became Methodists, like Sun Yat-sen. (I attended Methodist Sunday school while growing up in Ames, IA.) My grandfather was a partner of T.V. Soong in the distribution of bibles in China in the early 20th century.
Wikipedia: The State of Xu (Chinese: 徐) (also called Xu Rong (徐戎) or Xu Yi (徐夷)[a] by its enemies)[4][5] was an independent Huaiyi state of the Chinese Bronze Age[6] that was ruled by the Ying family (嬴) and controlled much of the Huai River valley for at least two centuries.[3][7] It was centered in northern Jiangsu and Anhui. ...
Generations 114 and 115:
Four-volume history of the Hsu (Xu) family, beginning in the 10th century BC. The first 67 generations are covered rather briefly, only indicating prominent individuals in each generation of the family tree. The books are mostly devoted to generations 68-113, living in Zhejiang. (Earlier I wrote that it was two volumes, but it's actually four. The printing that I have is two thick books.)
Sunday, May 02, 2021
40 Years of Quantum Computation and Quantum Information
This is a great article on the 1981 conference that, one could say, gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders.
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who co-sponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role.
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War.
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time.
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ...
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles.
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves.
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics.
[ Note to reader: the discussion in the last sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ]
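To make the note concrete: an entangled n-qubit state lives in a Hilbert space of dimension 2^n, so a classical state-vector simulation must store 2^n complex amplitudes. A quick sketch of the memory cost (assuming 16-byte complex doubles):

```python
# An n-qubit state vector has 2**n complex amplitudes; storing them
# classically (complex128, 16 bytes each) grows exponentially with n.

for n in (10, 30, 50):
    dim = 2 ** n
    nbytes = dim * 16
    print(f"{n} qubits: 2^{n} = {dim:,} amplitudes, {nbytes:,} bytes")
```

Already at ~50 qubits the amplitude storage alone exceeds the memory of any classical machine, which is the sense in which the space of possible states, not probability per se, is the obstacle.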
Feynman's 1981 lecture Simulating Physics With Computers.

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...
We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle.
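For a rough sense of scale, the stated bound of V^{1/3} operations per unit time in Planck units can be converted to SI for, say, a 1 m³ device. This numerical illustration is mine, not from the abstract, and the Planck constants are approximate:

```python
# Evaluate the stated bound: in Planck units, a device of volume V performs
# at most V**(1/3) operations per unit time. Convert to SI for V = 1 m^3.

l_planck = 1.616e-35   # Planck length in meters (approximate)
t_planck = 5.391e-44   # Planck time in seconds (approximate)

V = 1.0                                    # device volume, m^3
V_planck = V / l_planck**3                 # volume in Planck units
ops_per_planck_time = V_planck ** (1 / 3)  # the claimed bound
ops_per_second = ops_per_planck_time / t_planck
print(f"Bound for a 1 m^3 device: ~{ops_per_second:.1e} ops/s")
```

The cube-root scaling means the bound grows with the linear size of the device, not its volume, which is what connects it to holographic-style limits.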
Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremermann, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stand Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichnaic, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)