Thursday, September 21, 2023
Huawei and the US-China Chip War — Manifold #44
Thursday, February 02, 2023
ChatGPT, LLMs, and AI — Manifold #29
Thursday, July 14, 2022
Tim Palmer (Oxford): Status and Future of Climate Modeling — Manifold Podcast #16
Sunday, June 12, 2022
Von Neumann: The Interaction of Mathematics and Computing, Stan Ulam 1976 talk (video)
To solve this problem, the Los Alamos team planned to produce an “explosive lens”, a combination of different explosives with different shock wave speeds. When molded into the proper shape and dimensions, the high-speed and low-speed shock waves would combine with each other to produce a uniform concave pressure wave with no gaps. This inwardly-moving concave wave, when it reached the plutonium sphere at the center of the design, would instantly squeeze the metal to at least twice the density, producing a compressed ball of plutonium that contained about 5 times the necessary critical mass. A nuclear explosion would then result.
Sunday, December 06, 2020
AlphaFold 2: protein folding solved?
Monday, September 28, 2020
Feynman on AI
Thanks to a reader for sending the video to me. The first clip is of Feynman discussing AI, taken from the longer 1985 lecture in the second video.
There is not much to disagree with in his remarks on AI. He was remarkably well calibrated and would not have been very surprised by what has happened in the following 35 years, except that he did not anticipate (at least, does not explicitly predict) the success that neural nets and deep learning would have for the problem that he describes several times as "pattern recognition" (face recognition, fingerprint recognition, gait recognition). Feynman was well aware of early work on neural nets, through his colleague John Hopfield. [1] [2] [3]
I was at Caltech in 1985 and this is Feynman as I remember him. To me, still a teenager, he seemed ancient. But his mind was marvelously active! As you can see from the talk, he was following the fields of AI and computation rather closely.
Of course, he and other Manhattan project physicists were present at the creation. They had to use crude early contraptions for mechanical calculation in bomb design computations. Thus, the habit of reducing a complex problem (whether in physics or machine learning) to primitive operations was second nature. Already for kids of my generation it was not second nature -- we grew up with early "home computers" like the Apple II and Commodore, so there was a black box magic aspect already to programming in high level languages. Machine language was useful for speeding up video games, but not everyone learned it. The problem is even worse today: children first encounter computers as phones or tablets that already seem like magic. The highly advanced nature of these devices discourages them from trying to grasp the underlying first principles.
If I am not mistaken the t-shirt he is wearing is from the startup Thinking Machines, which built early parallel supercomputers.
Just three years later he was gone. The finely tuned neural connections in his brain -- which allowed him to reason with such acuity and communicate with such clarity still in 1985 -- were lost forever.
Thursday, May 17, 2018
Exponential growth in compute used for AI training
Chart shows the total amount of compute, in petaflop/s-days, used in training (e.g., optimizing an objective function in a high dimensional space). This exponential trend is likely to continue for some time -- leading to qualitative advances in machine intelligence.
AI and Compute (OpenAI blog): ... since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.
... Three factors drive the advance of AI: algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training. Algorithmic innovation and data are difficult to track, but compute is unusually quantifiable, providing an opportunity to measure one input to AI progress. Of course, the use of massive compute sometimes just exposes the shortcomings of our current algorithms. But at least within many current domains, more compute seems to lead predictably to better performance, and is often complementary to algorithmic advances.
...We see multiple reasons to believe that the trend in the graph could continue. Many hardware startups are developing AI-specific chips, some of which claim they will achieve a substantial increase in FLOPS/Watt (which is correlated to FLOPS/$) over the next 1-2 years. ...
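To make the quoted numbers concrete, here is a minimal sketch of the doubling-time arithmetic. The doubling times come from the quote; the ~5.5-year span (roughly late 2012 to late 2017) is my assumption about the period covered by the OpenAI analysis.

# Growth factor implied by a fixed doubling time: 2 ** (elapsed / doubling_time).
# The ~5.5-year span is an assumption about the period covered by the OpenAI chart.

def growth_factor(elapsed_months: float, doubling_months: float) -> float:
    """Factor by which a quantity grows over elapsed_months at the given doubling time."""
    return 2.0 ** (elapsed_months / doubling_months)

elapsed = 5.5 * 12  # months, roughly late 2012 to late 2017

print(f"3.5-month doubling: {growth_factor(elapsed, 3.5):,.0f}x")  # ~475,000x, i.e. "more than 300,000x"
print(f"18-month doubling:  {growth_factor(elapsed, 18.0):.0f}x")  # ~13x, close to the quoted 12x Moore's Law figure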
Thursday, November 30, 2017
CMSE (Computational Mathematics, Science and Engineering) at MSU
At Oregon I was part of an interdisciplinary institute that included theoretical physicists and chemists, mathematicians, and computer scientists. We tried to create a program (not even a new department, just an interdisciplinary program) in applied math and computation, but failed due to lack of support from higher administration. When I arrived at MSU as VPR I learned that the faculty here had formulated a similar plan for a new department. Together with the Engineering dean and the Natural Sciences dean we pushed it through and created an entirely new department in just a few years. This new department already has a research ranking among the top 10 in the US (according to Academic Analytics).
Computational Mathematics, Science and Engineering at MSU.
Saturday, November 18, 2017
Robot Overlords and the Academy
In a previous post Half of all jobs (> $60k/y) coding related? I wrote
In the future there will be two kinds of jobs. Workers will either

Tell computers what to do

or

Be told by computers what to do

I've been pushing Michigan State University to offer a coding bootcamp experience to all undergraduates who want it: e.g., Codecademy.com. The goal isn't to turn non-STEM majors into software developers, but to give all interested students exposure to an increasingly important and central aspect of the modern world.
I even invited the CodeNow CEO to campus to help push the idea. We're still working on it at the university -- painfully SLOWLY, if you ask me. But this fall I learned my kids are taking a class based on Codecademy at their middle school! Go figure.
Sunday, June 11, 2017
Rise of the Machines: Survey of AI Researchers
These predictions are from a recent survey of AI/ML researchers. See SSC and also here for more discussion of the results.
When Will AI Exceed Human Performance? Evidence from AI Experts
Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans
Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Keep in mind that the track record for this type of prediction, even by experts, is not great:
See below for the cartoon version :-)
Sunday, June 04, 2017
Epistemic Caution and Climate Change
I have not, until recently, invested significant time in trying to understand climate modeling. These notes are primarily for my own use, however I welcome comments from readers who have studied this issue in more depth.
I take a dim view of people who express strong opinions about complex phenomena without having understood the underlying uncertainties. I have yet to personally encounter anyone who claims to understand all of the issues discussed below, but I constantly meet people with strong views about climate change.
See my old post on epistemic caution Intellectual honesty: how much do we know?
... when it comes to complex systems like society or economy (and perhaps even climate), experts have demonstrably little predictive power. In rigorous studies, expert performance is often no better than random.
... worse, experts are usually wildly overconfident about their capabilities. ... researchers themselves often have beliefs whose strength is entirely unsupported by available data.

Now to climate and CO2. AFAIU, the direct heating effect due to increasing CO2 concentration is only logarithmic in the concentration (all the absorption is in a narrow frequency band). The main heating effects in climate models come from secondary effects, such as the distribution of water vapor in the atmosphere, which are neither calculable from first principles nor under good experimental/observational control. Certainly any "catastrophic" outcomes would have to result from these secondary feedback effects.
The first paper below gives an elementary calculation of direct effects from atmospheric CO2. This is the "settled science" part of climate change -- it depends on relatively simple physics. The prediction is about 1 degree Celsius of warming from a doubling of CO2 concentration. Anything beyond this is due to secondary effects which, in their totality, are not well understood -- see second paper below, about model tuning, which discusses rather explicitly how these unknowns are dealt with.
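Before turning to the papers, here is a minimal sketch of that no-feedback estimate. It uses two conventional textbook approximations rather than anything taken from the paper below: the logarithmic forcing formula dF ~ 5.35 ln(C/C0) W/m^2 and a Planck (no-feedback) response of about 3.2 W/m^2 per kelvin.

import math

# No-feedback warming from a CO2 increase, using two conventional approximations
# (standard textbook values, not numbers taken from the papers cited below):
#   forcing:          dF ~ 5.35 * ln(C / C0)   [W/m^2]   (logarithmic in concentration)
#   Planck response:  dT ~ dF / lambda0, with lambda0 ~ 3.2 W/m^2 per K

def no_feedback_warming(concentration_ratio: float, lambda0: float = 3.2) -> float:
    forcing = 5.35 * math.log(concentration_ratio)  # W/m^2
    return forcing / lambda0                        # K

print(f"forcing per CO2 doubling: {5.35 * math.log(2):.1f} W/m^2")            # ~3.7 W/m^2
print(f"no-feedback warming per doubling: {no_feedback_warming(2.0):.1f} K")  # ~1.2 K, i.e. roughly 1 degree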
Simple model to estimate the contribution of atmospheric CO2 to the Earth’s greenhouse effect
Am. J. Phys. 80, 306 (2012)
http://dx.doi.org/10.1119/1.3681188
We show how the CO2 contribution to the Earth’s greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the “climate sensitivity” (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere’s temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
From Conclusions:

... The question of feedbacks, in its broadest sense, is the whole question of climate change: namely, how much and in which way can we expect the Earth to respond to an increase of the average surface temperature of the order of 1 degree, arising from an eventual doubling of the concentration of CO2 in the atmosphere? And what further changes in temperature may result from this response? These are, of course, questions for climate scientists to resolve. ...

The paper below concerns model tuning. It should be apparent that there are many adjustable parameters hidden in any climate model. One wonders whether the available data, given its own uncertainties, can constrain this high-dimensional parameter space sufficiently to produce predictive power in a rigorous statistical sense.
The first figure below illustrates how different choices of these parameters can affect model predictions. Note the huge range of possible outcomes! The second figure below illustrates some of the complex physical processes which are subsumed in the parameter choices. Over longer timescales, (e.g., decades) uncertainties such as the response of ecosystems (e.g., plant growth rates) to increased CO2 would play a role in the models. It is obvious that we do not (may never?) have control over these unknowns.
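As a toy illustration of the constraint problem (in no sense a climate model): if two tuning parameters affect the observations only through their product, the data pin down the product but leave the individual values nearly free. With dozens of parameters and noisy observations, this kind of degeneracy compounds quickly.

import numpy as np

rng = np.random.default_rng(0)

# Toy example: synthetic "observations" that depend only on the product a*b
# of two tuning parameters (a stand-in for degenerate model parameters).
a_true, b_true = 2.0, 3.0
x = np.linspace(0.0, 1.0, 50)
y_obs = a_true * b_true * x + rng.normal(0.0, 0.1, x.size)

def misfit(a: float, b: float) -> float:
    """Mean squared mismatch between the tuned model and the observations."""
    return float(np.mean((a * b * x - y_obs) ** 2))

# Very different parameter pairs with the same product fit the data equally well.
for a, b in [(2.0, 3.0), (1.0, 6.0), (0.5, 12.0), (6.0, 1.0)]:
    print(f"a={a:4.1f}  b={b:5.1f}  product={a * b:4.1f}  misfit={misfit(a, b):.4f}")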
THE ART AND SCIENCE OF CLIMATE MODEL TUNING
Bulletin of the American Meteorological Society, March 2017
... Climate model development is founded on well-understood physics combined with a number of heuristic process representations. The fluid motions in the atmosphere and ocean are resolved by the so-called dynamical core down to a grid spacing of typically 25–300 km for global models, based on numerical formulations of the equations of motion from fluid mechanics. Subgrid-scale turbulent and convective motions must be represented through approximate subgrid-scale parameterizations (Smagorinsky 1963; Arakawa and Schubert 1974; Edwards 2001). These subgrid-scale parameterizations include coupling with thermodynamics; radiation; continental hydrology; and, optionally, chemistry, aerosol microphysics, or biology.
Parameterizations are often based on a mixed, physical, phenomenological and statistical view. For example, the cloud fraction needed to represent the mean effect of a field of clouds on radiation may be related to the resolved humidity and temperature through an empirical relationship. But the same cloud fraction can also be obtained from a more elaborate description of processes governing cloud formation and evolution. For instance, for an ensemble of cumulus clouds within a horizontal grid cell, clouds can be represented with a single-mean plume of warm and moist air rising from the surface (Tiedtke 1989; Jam et al. 2013) or with an ensemble of such plumes (Arakawa and Schubert 1974). Similar parameterizations are needed for many components not amenable to first-principle approaches at the grid scale of a global model, including boundary layers, surface hydrology, and ecosystem dynamics. Each parameterization, in turn, typically depends on one or more parameters whose numerical values are poorly constrained by first principles or observations at the grid scale of global models. Being approximate descriptions of unresolved processes, there exist different possibilities for the representation of many processes. The development of competing approaches to different processes is one of the most active areas of climate research. The diversity of possible approaches and parameter values is one of the main motivations for model inter-comparison projects in which a strict protocol is shared by various modeling groups in order to better isolate the uncertainty in climate simulations that arises from the diversity of models (model uncertainty). ...
... All groups agreed or somewhat agreed that tuning was justified; 91% thought that tuning global-mean temperature or the global radiation balance was justified (agreed or somewhat agreed). ... the following were considered acceptable for tuning by over half the respondents: atmospheric circulation (74%), sea ice volume or extent (70%), and cloud radiative effects by regime and tuning for variability (both 52%).
Here is Steve Koonin, formerly Obama's Undersecretary for Science at DOE and a Caltech theoretical physicist, calling for a "Red Team" analysis of climate science, just a few months ago (un-gated link):
WSJ: ... The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed.
Note Added: In 2014 Koonin ran a one day workshop for the APS (American Physical Society), inviting six leading climate scientists to present their work and engage in an open discussion. The APS committee responsible for reviewing the organization's statement on climate change were the main audience for the discussion. The 570+ page transcript, which is quite informative, is here. See Physics Today coverage, and an annotated version of Koonin's WSJ summary.
Below are some key questions Koonin posed to the panelists in preparation for the workshop. After the workshop he declared: "The idea that 'Climate science is settled' runs through today's popular and policy discussions. Unfortunately, that claim is misguided."
The estimated equilibrium climate sensitivity to CO2 has remained between 1.5 and 4.5 degrees C in the IPCC reports since 1979, except for AR4 where it was given as 2-5.5. What gives rise to the large uncertainties (a factor of three!) in this fundamental parameter of the climate system?

How is the IPCC’s expression of increasing confidence in the detection/attribution/projection of anthropogenic influences consistent with this persistent uncertainty?

Wouldn’t detection of an anthropogenic signal necessarily improve estimates of the response to anthropogenic perturbations?

I seriously doubt that the process by which the 1.5 to 4.5 range is computed is statistically defensible. From the transcript, it appears that IPCC results of this kind are largely the result of "Expert Opinion" rather than a specific computation! It is rather curious that the range has not changed in 30+ years, despite billions of dollars spent on this research. More here.
Saturday, June 03, 2017
Python Programming in one video
Putting this here in hopes I can get my kids to watch it at some point 8-)
Please recommend similar resources in the comments!
Saturday, April 15, 2017
History of Bayesian Neural Networks
This talk gives the history of neural networks in the framework of Bayesian inference. Deep learning is (so far) quite empirical in nature: things work, but we lack a good theoretical framework for understanding why or even how. The Bayesian approach offers some progress in these directions, and also toward quantifying prediction uncertainty.
I was sad to learn from this talk that David MacKay passed away last year, from cancer. I recommended his book Information Theory, Inference, and Learning Algorithms back in 2007.
Yarin Gal's dissertation Uncertainty in Deep Learning, mentioned in the talk.
I suppose I can thank my Caltech education for a quasi-subconscious understanding of neural nets despite never having worked on them. They were in the air when I was on campus, due to the presence of John Hopfield (he co-founded the Computation and Neural Systems PhD program at Caltech in 1986). See also Hopfield on physics and biology.
Amusingly, I discovered this talk via deep learning: YouTube's recommendation engine, powered by deep neural nets, suggested it to me this Saturday afternoon :-)
Friday, November 25, 2016
Von Neumann: "If only people could keep pace with what they create"
One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari:
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country.He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills.
The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..."
In his obituary for John von Neumann, Ulam recalled a conversation with vN about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)
How will humans keep pace? See Super-Intelligent Humans are Coming and Don't Worry, Smart Machines Will Take Us With Them.
Monday, September 05, 2016
World's fastest supercomputer: Sunway TaihuLight (41k nodes, 11M cores)
Jack Dongarra, professor at UT Knoxville, discusses the strengths and weaknesses of the Sunway TaihuLight, currently the world's fastest supercomputer. The fastest US supercomputer, Titan (#3 in the world), is at Oak Ridge National Lab, near UTK. More here and here.
MSU's latest HPC cluster would be ranked ~150 in the world.
Top 500 Supercomputers in the world
Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, which is in China's Jiangsu province is the No. 1 system with 93 petaflop/s (Pflop/s) on the Linpack benchmark. The system has 40,960 nodes, each with one SW26010 processor for a combined total of 10,649,600 computing cores. Each SW26010 processor is composed of 4 MPEs, 4 CPEs, (a total of 260 cores), 4 Memory Controllers (MC), and a Network on Chip (NoC) connected to the System Interface (SI). Each of the four MPEs, CPEs, and MCs have access to 8GB of DDR3 memory. The system is based on processors exclusively designed and built in China. The Sunway TaihuLight is almost three times as fast and three times as efficient as Tianhe-2, the system it displaces in the number one spot. The peak power consumption under load (running the HPL benchmark) is at 15.371 MW or 6 Gflops/W. This allows the TaihuLight system to hold one of the top spots on the Green500 in terms of the Performance/Power metric. [ IIRC, these processors are inspired by the old Digital Alpha chips that I used to use... ]
...
The number of systems installed in China has increased dramatically to 167, compared to 109 on the last list. China is now at the No. 1 position as a user of HPC. Additionally, China now is at No. 1 position in the performance share thanks to the big contribution of the systems at No. 1 and No. 2.
The number of systems installed in the USA declines sharply and is now at 165 systems, down from 199 in the previous list. This is the lowest number of systems installed in the U.S. since the list was started 23 years ago.
...
The U.S., the leading consumer of HPC systems since the inception of the TOP500 lists, is now second for the first time after China with 165 of the 500 systems. China leads the systems and performance categories now thanks to the No. 1 and No. 2 systems and a surge in industrial and research installations registered over the last few years. The European share (105 systems compared to 107 last time) has fallen and is now lower than the dominant Asian share of 218 systems, up from 173 in November 2015.
Dominant countries in Asia are China with 167 systems (up from 109) and Japan with 29 systems (down from 37).
In Europe, Germany is the clear leader with 26 systems followed by France with 18 and the UK with 12 systems.
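As a quick sanity check (my arithmetic, not part of the TOP500 text), the quoted core count and power-efficiency figures for TaihuLight are mutually consistent:

# Sanity check of the TaihuLight figures quoted above.
nodes = 40_960
cores_per_node = 260                # one SW26010 processor per node, 260 cores each
linpack_pflops = 93.0               # HPL performance, Pflop/s
power_mw = 15.371                   # power under HPL load, MW

total_cores = nodes * cores_per_node
gflops_per_watt = (linpack_pflops * 1e6) / (power_mw * 1e6)  # Gflop/s divided by watts

print(f"total cores: {total_cores:,}")                 # 10,649,600, as quoted
print(f"efficiency: {gflops_per_watt:.1f} Gflops/W")   # ~6 Gflops/W, as quoted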
Sunday, August 14, 2016
Half of all jobs (> $60k/y) coding related?
Tell computers what to do
or
Be told by computers what to do
See this jobs report, based on BLS statistics and analysis of 26 million job postings scraped from job boards, newspapers, and other online sources in 2015.
Coding jobs represent a large and growing part of the job market. There were nearly 7 million job openings in the U.S. last year for roles requiring coding skills. This represents 20% of the total market for career-track jobs that pay $15 an hour or more. Jobs with coding skills are projected to grow 12% faster than the job market overall in the next 10 years. IT jobs are expected to grow even more rapidly: 25% faster than the overall market.

Programming skills are in demand across a range of industries. Half of all programming openings are in Finance, Manufacturing, Health Care, and other sectors outside of the technology industry.

...

Jobs valuing coding skills pay $22,000 per year more, on average, than jobs that don’t: $84,000 vs $62,000 per year. The value of these skills is striking and, for students looking to increase their potential income, few other skills open the door to as many well-paying careers. Slicing the data another way, 49% of the jobs in the top wage quartile (>$58,000/yr) value coding skills.

...

We define coding jobs as those in any occupation where knowing how to write computer code makes someone a stronger candidate and where employers commonly request coding skills in job postings. In some cases, coding is a prerequisite skill for the role, such as for Database Administrators. In other cases, such as Graphic Designers, knowing how to code may not be required in all cases, but job seekers with relevant programming skills will typically have an advantage.

See also The Butlerian Jihad and Darwin among the Machines.
Thursday, April 21, 2016
Deep Learning tutorial: Yoshua Bengio, Yann Lecun NIPS 2015
I think these are the slides.
One of the topics I've remarked on before is the absence of local minima in the high-dimensional optimization required to tune these DNNs. In the limit of high dimensionality, a critical point is overwhelmingly likely to be a saddle point (i.e., the Hessian has at least one negative eigenvalue). This means that even though the surface is not strictly convex, the optimization is tractable.
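A minimal numerical illustration of the point (a heuristic, not tied to any particular network): if the Hessian at a critical point is modeled as a random symmetric matrix, the probability that every eigenvalue is positive, which is what it takes to be a local minimum rather than a saddle point, collapses rapidly as the dimension grows.

import numpy as np

rng = np.random.default_rng(0)

def fraction_minima(dim: int, trials: int = 2000) -> float:
    """Fraction of random symmetric matrices (toy 'Hessians') whose eigenvalues are all positive."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(size=(dim, dim))
        h = (a + a.T) / 2.0                      # symmetrize
        if np.linalg.eigvalsh(h).min() > 0.0:    # all eigenvalues positive => local minimum
            hits += 1
    return hits / trials

for d in (1, 2, 5, 10, 20):
    print(f"dim={d:2d}  fraction that are minima: {fraction_minima(d):.3f}")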
Thursday, April 14, 2016
The story of the Monte Carlo Algorithm
George Dyson is Freeman's son. I believe this talk was given at SciFoo or Foo Camp.
More Ulam (neither he nor von Neumann was really a logician, at least not primarily).
Wikipedia on Monte Carlo Methods. I first learned these in Caltech's Physics 129: Mathematical Methods, which used the textbook by Mathews and Walker. This book was based on lectures taught by Feynman, emphasizing practical techniques developed at Los Alamos during the war. The students in the class were about half undergraduates and half graduate students. For example, Martin Savage was a first year graduate student that year. Martin is now a heavy user of Monte Carlo in lattice gauge theory :-)
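For readers who have never seen the method, here is the standard toy example (mine, not from the talk or from Mathews and Walker): estimate pi by sampling random points in the unit square and counting the fraction that land inside the quarter circle. The statistical error falls off like 1/sqrt(N) independent of dimension, which is why the method is so useful for the high-dimensional integrals that arise in physics.

import random

def monte_carlo_pi(n_samples: int) -> float:
    """Estimate pi from the fraction of uniform random points in the unit square
    that fall inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}   estimate = {monte_carlo_pi(n):.4f}")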
Monday, February 29, 2016
Moore's Law and AI
What I have not yet seen discussed is how a significantly reduced rate of improvement in hardware capability will affect AI and the arrival of the dreaded (in some quarters) Singularity. The fundamental physical problems associated with ~ nm scale feature size could take decades or more to overcome. How much faster are today's cars and airplanes than those of 50 years ago?
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?
HLMI = ‘high-level machine intelligence’ = one that can carry out most human professions at least as well as a typical human. (From Minds and Machines.)
Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is ELO score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
ELO range 2000-3000 spans amateur to lower professional Go ranks. The compute power certainly affects depth of Monte Carlo Tree Search. The initial training of the value and policy neural networks using KGS Go server positions might have still been possible with slower machines, but would have taken a long time.
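To make the rating numbers concrete: under the standard Elo convention (a general rating-system formula, not something specific to the AlphaGo paper), a rating gap of D points corresponds to an expected win probability of 1 / (1 + 10^(-D/400)) for the stronger player, so a few hundred points gained from extra GPUs translate into near-certain wins over the weaker configuration.

def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected probability that player A beats player B under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Illustrative rating gaps (hypothetical numbers, not taken from the AlphaGo paper):
for gap in (100, 300, 600, 1000):
    print(f"gap of {gap:4d} Elo points: win probability {elo_win_probability(gap, 0.0):.3f}")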
Thursday, February 26, 2015
Second-generation PLINK
Interview with author Chris Chang. User Google group.
If one estimates a user population of ~1000, each saving on the order of $1000 per year in CPU and work time, then over the next few years PLINK 1.9 and its successors will deliver millions of dollars in value to the scientific community.
Second-generation PLINK: rising to the challenge of larger and richer datasets
Background
PLINK 1 is a widely used open-source C/C++ toolset for genome-wide association studies (GWAS) and research in population genetics. However, the steady accumulation of data from imputation and whole-genome sequencing studies has exposed a strong need for faster and scalable implementations of key functions, such as logistic regression, linkage disequilibrium estimation, and genomic distance evaluation. In addition, GWAS and population-genetic data now frequently contain genotype likelihoods, phase information, and/or multiallelic variants, none of which can be represented by PLINK 1’s primary data format.
Findings
To address these issues, we are developing a second-generation codebase for PLINK. The first major release from this codebase, PLINK 1.9, introduces extensive use of bit-level parallelism, O(√n)-time/constant-space Hardy-Weinberg equilibrium and Fisher’s exact tests, and many other algorithmic improvements. In combination, these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM. We have also developed an extension to the data format which adds low-overhead support for genotype likelihoods, phase, multiallelic variants, and reference vs. alternate alleles, which is the basis of our planned second release (PLINK 2.0).
Conclusions
The second-generation versions of PLINK will offer dramatic improvements in performance and compatibility. For the first time, users without access to high-end computing resources can perform several essential analyses of the feature-rich and very large genetic datasets coming into use.
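The "bit-level parallelism" mentioned in the Findings refers broadly to packing many 2-bit genotype codes into a single machine word and operating on the whole word at once. The sketch below is only a schematic illustration of that idea (PLINK itself is C/C++, uses a different 2-bit encoding with a missing-genotype code, and its actual kernels are far more sophisticated).

def pack_genotypes(genotypes):
    """Pack genotype codes (0, 1, or 2 copies of the minor allele) into one
    Python int, two bits per sample. Schematic only; not PLINK's actual format."""
    word = 0
    for i, g in enumerate(genotypes):
        word |= g << (2 * i)
    return word

def allele_count(word: int, n_samples: int) -> int:
    """Total minor-allele count across all packed samples, computed with
    word-wide masks and popcounts instead of a per-sample loop."""
    low_mask = int("01" * n_samples, 2)   # selects the low bit of each 2-bit field
    hets = word & low_mask                # genotype 1: low bit set, contributes 1 allele
    homs = (word >> 1) & low_mask         # genotype 2: high bit set, contributes 2 alleles
    return hets.bit_count() + 2 * homs.bit_count()   # int.bit_count() needs Python 3.10+

genotypes = [0, 1, 2, 2, 1, 0, 2, 1]
packed = pack_genotypes(genotypes)
print(allele_count(packed, len(genotypes)), sum(genotypes))   # both print 9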