About Me

Senior Vice-President for Research and Innovation, Professor of Theoretical Physics, Michigan State University

Thursday, May 28, 2020

Michael Kauffman: Cancer, Drug Development, and Market Capitalism (Manifold Podcast #48)



Note: the part of the conversation I found most interesting -- venture and capital markets aspects of drug discovery, complexity and scale of biotech ecosystems, role of IP and US healthcare spending to incentivize discovery -- begins at ~35m.

Steve and Corey speak with Dr. Michael Kauffman, co-founder and CEO of Karyopharm Therapeutics, about cancer and biotech innovation. Michael explains how he and Dr. Sharon Shacham tested her idea regarding cellular nuclear transport using simulation software on a home laptop, and went on to beat 1000:1 odds to create a billion-dollar company. They discuss the relationship between high proprietary drug costs and economic incentives for drug discovery. They also discuss the unique US biotech ecosystem, and why innovation is easier in small (vs. large) companies. Michael explains how Karyopharm is targeting its drug at COVID-induced inflammation to treat people with severe forms of the disease.

Transcript

Michael Kauffman (Bio)

Karyopharm's Publications and Presentations

The Great American Drug Deal: A New Prescription for Innovative and Affordable Medicines by Peter Kolchinsky


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide-ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is the founder of a medical diagnostics startup.

Wednesday, May 27, 2020

David Silver on AlphaGo and AlphaZero (AI podcast)



I particularly liked this interview with David Silver on AlphaGo and AlphaZero. I suggest starting around ~35m in if you have some familiarity with the subject. (I listened to this while running hill sprints and found at the end I had it set to 1.4x speed -- YMMV.)

At ~40m Silver discusses the misleading low-dimensional intuition that led many to fear (circa 1980s-1990s) that neural net optimization would be stymied by local minima. (See related discussion: Yann LeCun on Unsupervised Learning.)
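To make the high-dimensional point concrete, here is a minimal numerical sketch (my own toy illustration, not anything from the interview), under the common simplifying assumption that the Hessian at a random critical point behaves like a random symmetric matrix: the fraction of critical points whose Hessian has all positive eigenvalues (true local minima rather than saddles) collapses rapidly with dimension.

# Toy illustration (not from AlphaGo/AlphaZero): if the Hessian at a random
# critical point behaved like a random symmetric matrix, the fraction of
# critical points that are true local minima (all eigenvalues positive)
# collapses rapidly with dimension; in high dimensions almost every critical
# point is a saddle, not a trap.
import numpy as np

rng = np.random.default_rng(0)

def fraction_all_positive(dim, trials=2000):
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2.0                 # random symmetric "Hessian"
        if np.all(np.linalg.eigvalsh(hessian) > 0):
            hits += 1
    return hits / trials

for d in (1, 2, 3, 5, 8):
    print(f"dim {d}: fraction of all-positive Hessians ~ {fraction_all_positive(d):.4f}")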

At one point Silver notes that the expressiveness of deep nets was never in question (i.e., whether they could encode sufficiently complex high-dimensional functions). The main empirical question was really about efficiency of training -- once the local minima question is resolved, what remains is more of an engineering issue than a theoretical one.

Silver gives some details of the match with Lee Sedol. He describes the "holes" in AlphaGo's gameplay that would manifest in roughly 1 in 5 games. Silver had predicted before the match, correctly, that AlphaGo might lose one game this way! AlphaZero was partially invented as a way to eliminate these holes, although it was also motivated by the principled goal of de novo learning, without expert examples.

I've commented many times that even with complete access to the internals of AlphaGo, we (humans) still don't know how it plays Go. There is an irreducible complexity to a deep neural net (and to our brain) that resists comprehension even when all the specific details are known. In this case, the computer program (neural net) which plays Go can be written out explicitly, but it has millions of parameters.

Silver says he worked on AI Go for a decade before it finally reached superhuman performance. He notes that Go was of special interest to AI researchers because there was general agreement that a superhuman Go program would truly understand the game, would develop intuition for it. But now that the dust has settled we see that notions like understanding and intuition are still hidden in (spread throughout?) the high-dimensional space of the network... and perhaps always will be. (From a philosophical perspective this is related to Searle's Chinese Room and other confusions...)

As to whether AlphaGo has deep intuition for Go, whether it can play with creativity, Silver gives examples from the Lee Sedol match in which AlphaGo 1. upended textbook Go theory previously embraced by human experts (perhaps for centuries?), and 2. surprised the human champion by making an aggressive territorial incursion late in the game. In fact, human understanding of both Chess and Go strategy has been advanced considerably via AlphaZero (which performs at a superhuman level in both games).

See also this Manifold interview with John Schulman of OpenAI.

Saturday, May 23, 2020

Will Trump Pardon Obama?



I get that I'm supposed to hate this lady and her boss, but can someone do me a favor by answering the questions she posed?
1) Why did the Obama administration spy on members of the Trump campaign, during and after the campaign?

2) Why was General Michael Flynn unmasked by Obama's chief of staff, Joe Biden, Susan Rice, and others?

3) Why was Flynn's identity leaked to the press (a felony)?

4) Why did AG Sally Yates (DOJ) first learn about the FBI investigation of Flynn's communication with the Russian Ambassador from a conversation with Obama in the Oval Office?

5) Why did James Clapper, John Brennan, Samantha Power, and Susan Rice privately admit under oath (Congressional testimony, only declassified recently) that they had no evidence of collusion, while saying the opposite in public? (2017-2019)
If you have any pretensions to rationalism (or even to being moderately well-informed), please score your understanding, over time, of this scandal which has unfolded since 2016. My observations have been well documented since the beginning.

In addition to items #4 and #5 above, which only became public through recent declassification, the other new fact is: On January 4 (day before the White House meeting which included Obama, Biden, Comey, Yates, and Rice, and which was memorialized by Rice in the infamous CYA email to herself weeks later), FBI field agents working on the Flynn investigation, who had access to the Dec 29 call with Kislyak, recommended closing the case on Flynn due to what they referred to as absence of derogatory information. Of course, as a result of the January 5 White House meeting, the case was NOT closed and the rest is history (like Watergate, only worse).

None of this information is disputed, but you won't hear much about it from NYT, WaPo, CNN, etc. But see WSJ: here and here.

Added: I wrote the post Lies and Admissions: Spygate in light of the IG FISA report in December 2019, after the DOJ Inspector General's report on FISA abuse became public. Media coverage of its content was extremely misleading (details at the link). The 3+ year timeline I described (reproduced below) has now reached its endpoint due to the recent (May 2020) declassifications.

Almost three years of hard work to bring the truth to light.
There was no spying [ WE STARTED HERE 2016 ]

Okay, there was spying, but it was all legal

Some illegal things happened, but by mistake

A few bad apples did the illegal things [ WE ARE HERE 12/2019 ]

Illegal spying was politically motivated and ordered from the top

Obama knew ???  [ BEYOND DOUBT NOW 5/2020 ]
No telling how far down the above list we will get, but:
Lisa Page (text to Peter Strzok 9/2/2016): POTUS wants to know everything we’re doing.

Thursday, May 21, 2020

How Pompeo's CIA and Sheldon Adelson spied on Julian Assange



An amazing story. I met Assange's attorney at an event in London last summer...

The mysterious death of the Chinese ambassador to Israel happened just a few days after Pompeo's visit with Bibi. A coincidence, I am sure... we have it on good authority from the media and other experts that these conspiracy theories are merely fever dreams.

University of California to end use of SAT and ACT

University of California Will End Use of SAT and ACT in Admissions (NYT)

This decision by the UC Regents (most of whom are political appointees) is counter to the recommendation of the faculty task force recently assigned to study standardized testing in admissions. It is obvious to anyone who looks at the graphs below that SAT/ACT have significant validity (technical term used in psychometrics) in predicting college performance for all ethnic groups.


See Report of the University of California Academic Council Standardized Testing Task Force for more.
... SAT and HSGPA are stronger predictors than family income or race. Within each of the family income or ethnicity categories there is substantial variation in SAT and HSGPA, with corresponding differences in student success. See bottom figure and combined model R^2 in second figure below; R^2 varies very little across family income and ethnic categories. ...

Test Preparation and SAT scores: "...combined effect of coaching on the SAT I is between 21 and 34 points. Similarly, extensive meta-analyses conducted by Betsy Jane Becker in 1990 and by Nan Laird in 1983 found that the typical effect of commercial preparatory courses on the SAT was in the range of 9-25 points on the verbal section, and 15-25 points on the math section."

Scott Adams on Trump, and his book Loserthink – Manifold Podcast #47



Corey and Steve talk to Scott Adams, creator of Dilbert and author of Loserthink. Steve reviews some of Scott's predictions, including his prediction of Trump’s 2016 victory. Scott (who once semi-humorously described himself as “left of Bernie”) describes what he calls Trump's unique "skill stack". Scott highlights Trump's grasp of the role of psychology in economics, and maintains that honesty requires admitting that we do not know whether many of Trump’s policies are good or bad. Scott explains why he thinks it is mistaken to assume leaders are irrational.

Transcript

Scott Adams (Blog and Podcast)

Loserthink: How Untrained Brains Are Ruining America

Kihlstrom J. F. (1997). Hypnosis, memory and amnesia. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 352(1362), 1727–1732. https://doi.org/10.1098/rstb.1997.0155

Hypnosis and Memory (Blog Post)



Tuesday, May 19, 2020

COVID-19: Open Thread


I haven't followed the latest scientific progress very carefully for the last week or two. It seems that things have slowed down a bit. Previous posts on COVID-19.

I still think the evidence is reasonably strong for IFR ~ 1% (meaning could be 0.5% under good conditions, higher if medical systems are overwhelmed; there seems to be some evidence for dosage dependence of severity as well).

I suspect that from a purely utilitarian perspective we might be overpaying per QALY.

Tests seem to be improving (e.g., Roche), and there seems to be positive early news about vaccines.

Does anyone know what the status of contact tracing apps is? Are there any that have been tested at scale outside of E. Asia?

Where and when were the earliest cases? Is there any evidence for functional (rather than simply genomic) differences between strains?

Please add any useful updates in the comments below.

Thursday, May 14, 2020

James Oakes on What’s Wrong with The 1619 Project - Manifold Podcast #46



Steve and Corey talk to James Oakes, Distinguished Professor of History and Graduate School Humanities Professor at the Graduate Center of the City University of New York, about "The 1619 Project" developed by The New York Times Magazine. The project argues that slavery was the defining event of US history. Jim argues that slavery was actually the least exceptional feature of the US and that what makes the US exceptional is that it is where abolition first began. Steve wonders about the views of Thomas Jefferson, who wrote that “all men are created equal” but still held slaves. Jim maintains many founders were hypocrites, but Jefferson believed what he wrote.

Topics: Northern power, Industrialization, Capitalism, Lincoln, Inequality, Cotton, Labor, Civil War, Racism/Antiracism, Black Ownership.

Transcript

James Oakes (Bio)

Oakes and Colleagues Letter to the NYT and the Editor’s Response

The Fight Over the 1619 Project Is Not About the Facts

The World Socialist Web Site interview with James Oakes



Saturday, May 09, 2020

Pure State Quantum Thermalization: from von Neumann to the Lab


Perhaps the most fundamental question in thermodynamics and statistical mechanics is: Why do systems tend to evolve toward thermal equilibrium? Equivalently, why does entropy tend to increase? Because Nature is quantum mechanical, a satisfactory answer to this question has to arise within quantum mechanics itself. The answer was given already in a 1929 paper by von Neumann. However, the ideas were not absorbed (indeed, were misunderstood) by the physics community and were only rediscovered in the 21st century! General awareness of these results is still rather limited.

See this 2011 post: Classics on the arxiv: von Neumann and the foundations of quantum statistical mechanics.

In modern language, we would say something to the effect that "typical" quantum pure states are highly entangled, and the density matrix describing any small sub-system (obtained by tracing over the rest of the pure state) is very close to micro-canonical (i.e., thermal). Under dynamical (Schrodinger) evolution, all systems (even those that are initially far from typical) spend nearly all of their time in a typical state (modulo some weak conditions on the Hamiltonian). Typicality of states is related to concentration of measure in high dimensional Hilbert spaces. One could even claim that the origin of thermodynamics lies in the geometry of Hilbert space itself.
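A minimal numerical check of typicality (my own toy illustration, not from the vN paper; with no energy constraint imposed, "thermal" here just means maximally mixed): draw a random pure state on A∪B with dim(B) much larger than dim(A) and verify that the reduced density matrix on A is very close to I/dim(A).

# Toy typicality check: for a Haar-random pure state on H_A (x) H_B with
# dim(B) >> dim(A), the reduced density matrix rho_A = Tr_B |psi><psi| is
# very close to the maximally mixed state on A.
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 4, 512

# Haar-random pure state, written as a dA x dB array of amplitudes
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T                 # partial trace over B
maximally_mixed = np.eye(dA) / dA

# trace distance from the maximally mixed state; expected to be ~ sqrt(dA/dB)
dist = 0.5 * np.abs(np.linalg.eigvalsh(rho_A - maximally_mixed)).sum()
print("dim(A) =", dA, " dim(B) =", dB, " trace distance =", dist)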

[ It's worth noting that vN's paper does more than just demonstrate these results. It also gives an explicit construction of macroscopic classical (commuting) observables arising in a large Hilbert space. This construction would be a nice thing to include in textbooks for students trying to connect the classical and quantum worlds. ]

Recently I came across an experimental realization of these theoretical results, using cold atoms in an optical lattice (Greiner lab at Harvard):
Quantum thermalization through entanglement in an isolated many-body system

Science 353, 794-800 (2016)    arXiv:1603.04409v3

The concept of entropy is fundamental to thermalization, yet appears at odds with basic principles in quantum mechanics. Statistical mechanics relies on the maximization of entropy for a system at thermal equilibrium. However, an isolated many-body system initialized in a pure state will remain pure during Schrodinger evolution, and in this sense has static, zero entropy. The underlying role of quantum mechanics in many-body physics is then seemingly antithetical to the success of statistical mechanics in a large variety of systems. Here we experimentally study the emergence of statistical mechanics in a quantum state, and observe the fundamental role of quantum entanglement in facilitating this emergence. We perform microscopy on an evolving quantum system, and we see thermalization occur on a local scale, while we measure that the full quantum state remains pure. We directly measure entanglement entropy and observe how it assumes the role of the thermal entropy in thermalization. Although the full state remains measurably pure, entanglement creates local entropy that validates the use of statistical physics for local observables. In combination with number-resolved, single-site imaging, we demonstrate how our measurements of a pure quantum state agree with the Eigenstate Thermalization Hypothesis and thermal ensembles in the presence of a near-volume law in the entanglement entropy.
Note: given the original vN results, I think the Eigenstate Thermalization Hypothesis is only of limited interest. [ But see comments for more discussion... ] The point is that this is a laboratory demonstration of pure state thermalization, anticipated in 1929 by vN.

Another aspect of quantum thermalization that is still not very well appreciated is that approach to equilibrium can have a very different character than what students are taught in statistical mechanics. The physical picture behind the Boltzmann equation is semi-classical: collisions between atoms happen in serial as two gases equilibrate. But Schrodinger evolution of the pure state (all the degrees of freedom together) toward typicality can take advantage of quantum parallelism: all possible collisions take place on different parts of the quantum superposition state. Consequently, the timescale for quantum thermalization can be much shorter than in the semi-classical Boltzmann description.

In 2015 my postdoc C.M. Ho (now director of an AI lab in Silicon Valley) and I pointed out that quantum thermalization was likely already realized in heavy ion collisions at RHIC and CERN, and that the quantum nature of the process was responsible for the surprisingly short time required to approach equilibrium (equivalently, to generate large amounts of entanglement entropy).

Entanglement and fast thermalization in heavy ion collisions (see also slides here).


Entanglement and Fast Quantum Thermalization in Heavy Ion Collisions (arXiv:1506.03696)

Chiu Man Ho, Stephen D. H. Hsu

Let A be a subsystem of a larger system A∪B, and ψ be a typical state from the subspace of the Hilbert space H_AB satisfying an energy constraint. Then ρ_A(ψ)=Tr_B |ψ⟩⟨ψ| is nearly thermal. We discuss how this observation is related to fast thermalization of the central region (≈A) in heavy ion collisions, where B represents other degrees of freedom (soft modes, hard jets, co-linear particles) outside of A. Entanglement between the modes in A and B plays a central role; the entanglement entropy S_A increases rapidly in the collision. In gauge-gravity duality, S_A is related to the area of extremal surfaces in the bulk, which can be studied using gravitational duals.



An earlier blog post Ulam on physical intuition and visualization mentioned the difference between intuition for familiar semiclassical (incoherent) particle phenomena, versus for intrinsically quantum mechanical (coherent) phenomena such as the spread of entanglement and its relation to thermalization.
[Ulam:] ... Most of the physics at Los Alamos could be reduced to the study of assemblies of particles interacting with each other, hitting each other, scattering, sometimes giving rise to new particles. Strangely enough, the actual working problems did not involve much of the mathematical apparatus of quantum theory although it lay at the base of the phenomena, but rather dynamics of a more classical kind—kinematics, statistical mechanics, large-scale motion problems, hydrodynamics, behavior of radiation, and the like. In fact, compared to quantum theory the project work was like applied mathematics as compared with abstract mathematics. If one is good at solving differential equations or using asymptotic series, one need not necessarily know the foundations of function space language. It is needed for a more fundamental understanding, of course. In the same way, quantum theory is necessary in many instances to explain the data and to explain the values of cross sections. But it was not crucial, once one understood the ideas and then the facts of events involving neutrons reacting with other nuclei.
This "dynamics of a more classical kind" did not require intuition for entanglement or high dimensional Hilbert spaces. But see von Neumann and the foundations of quantum statistical mechanics for examples of the latter.

Thursday, May 07, 2020

Robert Atkinson on US-China Competition and Industrial Policy - Manifold Episode #45



Steve and Corey talk with Robert Atkinson, President of the Information Technology and Innovation Foundation, about his philosophy of National Developmentalism. They discuss the history of industrial policy and mercantilism in the US and China. Why did the US lose 1/3 of its manufacturing jobs in the 2000s? How much was due to automation and how much to Chinese competition? Atkinson discusses US R&D and recommends policies that will help the US compete with China.

Other topics: Forced technology transfer, IP theft, semiconductors and Micron Technology (DRAM), why the WTO cannot handle misbehavior by China.

Transcript

Robert Atkinson (Bio)

Information Technology and Innovation Foundation (ITIF)

Big is Beautiful: Debunking the Mythology of Small Business (MIT Press, 2018)

Innovation Economics: The Race for Global Advantage (Yale, 2012)



Sunday, May 03, 2020

QED and QCD theta angles, asymptotic boundary conditions in gauge theory

Warning: this post is for specialists.

I had reason to look back at the paper below recently, and thought I would write a longer post on the subject as I regularly see searches on "QED theta angle" and similar in my traffic logs. These readers may be unsatisfied with the standard textbook treatment of this topic.

The conventional thinking is that because FF-dual in QED is a total derivative, doesn't affect the classical equations of motion, and isn't related to any topological vacuum structure, it can't have physical consequences.


In the paper below I constructed gauge configurations in (3+1) dimensions that connect an initial configuration (e.g., vacuum state A=0) to two different final configurations A1 and A2 (e.g., in the far future). A1 and A2 differ by a gauge transformation (i.e., represent the same physical electric and magnetic fields), but the two (3+1) interpolating configurations are not gauge equivalent. By construction, the value of \int FF-dual is not the same when evaluated on the two (3+1) configurations. Thus the two trajectories have a relative phase weighting in the path integral which depends on the value of theta. This suggests that theta can have physical (though perhaps small and non-perturbative) effects, so it is indeed a fundamental physical parameter of QED and of the Standard Model of particle physics.
http://arxiv.org/abs/1107.0756

Theta terms and asymptotic behavior of gauge potentials in (3+1) dimensions

We describe paths in the configuration space of (3+1) dimensional QED whose relative quantum phase (or relative phase in the functional integral) depends on the value of the theta angle. The final configurations on the two paths are related by a gauge transformation but differ in magnetic helicity or Chern-Simons number. Such configurations must exhibit gauge potentials that fall off no faster than 1/r in some region of finite solid angle, although they need not have net magnetic charge (i.e., are not magnetic monopoles). The relative phase is proportional to theta times the difference in Chern-Simons number. We briefly discuss some possible implications for QCD and the strong CP problem.
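To spell out the formulas behind that statement (standard conventions; the overall normalization of the Chern-Simons number is convention dependent, so the coefficients below are schematic):

S_\theta \;=\; \frac{\theta e^2}{32\pi^2}\int d^4x\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}
\;=\; \frac{\theta e^2}{32\pi^2}\int d^4x\;\partial_\mu\!\left(4\,\epsilon^{\mu\nu\rho\sigma}A_\nu\partial_\rho A_\sigma\right) ,

so on any interpolating trajectory the theta term reduces to a boundary (Chern-Simons, or abelian magnetic helicity) contribution, and the two trajectories acquire a relative weight in the functional integral

\exp\!\left(i\,\theta\,\Delta N_{CS}\right) , \qquad N_{CS}\;\propto\;\int d^3x\,\epsilon^{ijk}A_i\partial_j A_k \;\sim\; \int d^3x\,\vec{A}\cdot\vec{B} ,

which depends on theta whenever \Delta N_{CS} \neq 0, even though the two final configurations are gauge equivalent.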
The question of whether physical observables can depend on the value of the QED theta angle is somewhat esoteric. However, the analysis raises the issue of asymptotic boundary conditions in gauge theory. One typically expects that local properties of a quantum field theory are independent of the choice of boundary conditions when the size of the system is taken to infinity. But topological or total derivative terms such as FF-dual seem to defy this expectation.

The gauge potentials required for the construction described above must have A ~ 1/r behavior for some region of solid angle. In d=4 Euclidean space, the assumption is often made to allow only potentials with faster than 1/r falloff. In this way one obtains a topological classification of gauge configurations (i.e., in terms of instanton number). However, in Minkowski space (d=3+1) there exist classical solutions of non-Abelian gauge theory (e.g., SU(2); discovered by Luscher and Schechter) that exhibit 1/r falloff and are the analog of the U(1) configurations I described above. These L-S solutions have fractional topological charge.

In the presence of fractional topological charges the gauge theory partition function no longer appears periodic in theta. This may have consequences for the Strong CP problem in QCD, which are briefly discussed in the paper above.
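The statement about periodicity can be made explicit by writing the partition function as a sum over topological sectors (schematically):

Z(\theta) \;=\; \sum_{Q} e^{\,i\theta Q}\, Z_Q ,

which is invariant under \theta \to \theta + 2\pi only if all contributing charges Q are integers; configurations with fractional Q, such as the Luscher-Schechter-type solutions mentioned above, spoil this periodicity.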

Note Added: Writing this blog post was beneficial -- thinking through the topic again allowed me to formulate the conclusions more clearly than in the original paper. I've replaced it on arXiv with a new version containing the additional material below. The observation at the end is related to Elitzur's Theorem -- gauge-variant operators must have zero average in the path integral.

Thursday, April 30, 2020

Raman Sundrum: Physics and the Universe - Manifold Episode #44



Steve and Corey talk with theoretical physicist Raman Sundrum. They discuss the last 30 years in fundamental physics, and look toward the next. Raman argues that Physics is a marketplace of ideas. While many theories did not stand the test of time, they represented avenues that needed to be explored. Corey expresses skepticism about the possibility of answering questions such as why the laws of physics have the form they do. Raman and Steve argue that attempts to answer such questions have led to great advances. Topics: models and experiments, Naturalness, the anthropic principle, dark matter and energy, and imagination.


Transcript

Raman Sundrum (Faculty Bio)

Sabine Hossenfelder on the Crisis in Particle Physics and Against the Next Big Collider  (Manifold Episode #8)



Monday, April 27, 2020

COVID-19: CDC US deaths by age group

Reader LondonYoung points to this CDC data set. Table 2 is reproduced below.

If we assume that CV-19 has infected a few percent of the total US population, we should multiply the numbers in the CV-19 deaths column by ~30x to extrapolate to a full population sweep. With that adjustment factor the impact on people younger than 25 is still very modest. It is only among people ~50y or older that the effect of a full CV-19 sweep is comparable to all-cause deaths.

As a rough estimate I'd guess a full population sweep (under good medical conditions) costs about 10M QALYs. How much is that worth? A few trillion dollars?
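Spelling out the back-of-the-envelope arithmetic (all inputs are the rough guesses from this post plus a conventional, and debatable, dollar value per QALY; none of these numbers are data):

# Back-of-the-envelope numbers, using this post's rough guesses:
#   - a full population sweep costs ~10 million QALYs (the post's estimate),
#   - a QALY is conventionally valued at roughly $50k-$300k (an assumption,
#     and the main source of uncertainty in the dollar figure).
qalys_lost = 10e6
for dollars_per_qaly in (50e3, 100e3, 300e3):
    print(f"${dollars_per_qaly:,.0f} per QALY  ->  ~${qalys_lost * dollars_per_qaly / 1e12:.1f} trillion")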


Of course, we should keep in mind that there might be very negative long-term health consequences from serious cases of CV-19 infection that do not result in death.

Added:

1. Germany’s leading coronavirus expert Christian Drosten on Merkel’s leadership, the UK response, and the ‘prevention paradox’ (Guardian).

2. US National Academy of Sciences COVID-19 Update.

Sunday, April 26, 2020

GOOG AI directs me to interview with Ari Ben-Menashe on Jeffrey Epstein


People talk about a future cybernetic era in which human intelligence will be fused in some way with machine intelligence (AI). To a degree, that era has already arrived. The GOOG AI watches almost everything I do -- not just my search queries, but pages I access via Chrome, seminars and interviews I watch on YouTube, my meetings on Google Calendar, what topics I discuss over gmail, where I travel, etc. I can now depend on it to make useful recommendations. (I hope the AI remains friendly to me in the future...)

This morning it suggested the interview below with Ari Ben-Menashe. Probably because it knows I have been interested in Jeffrey Epstein (see post Epstein and the Big Lie from Aug 2019), the activities of intelligence services (see, e.g., Twilight Struggles in a Wilderness of Mirrors: Admiral Mike Rogers, the NSA, and Obama-era Political Spying), and also nuclear weapons history.

Ben-Menashe was an Israeli intelligence operative, best known for his role in Iran-Contra in the 1980s. He was also one of the main sources for the book The Samson Option, by Sy Hersh (the journalist who uncovered both My Lai and Abu Ghraib). The Samson Option describes how the world became aware of the Israeli nuclear program, thanks to whistle-blower Mordechai Vanunu. After revealing the secret program to the British Sunday Times, Vanunu was kidnapped by Israeli intelligence agents, stood trial in Israel, and spent almost 20 years in prison. Ben-Menashe worked with publisher Robert Maxwell (Ghislaine Maxwell's father) to locate Vanunu in London and to capture him using a honey trap (female agent).

Ben-Menashe knew Jeffrey Epstein and Ghislaine Maxwell through Robert Maxwell. He states on the record that Epstein was involved in a honeypot operation for Israeli intelligence.

Ben-Menashe also comments on topics such as:
Epstein's "suicide" in MCC (where, by coincidence, Ben-Menashe was also held in the aftermath of Iran-Contra).

Ghislaine Maxwell's current location.

Robert Maxwell's mysterious death.

How Epstein could live and operate as if he had a 10-11 figure net worth when his actual wealth was one or two orders of magnitude less.
I do not know whether any of this is true, but I found the interview interesting.




Warning: in the comments I will censor anti-Jewish remarks.

Saturday, April 25, 2020

COVID-19: False Positive Rates for Serological Tests

It looks like very few of the tests have false positive rates well below the percent range. Since most populations (with the exception of NYC and some other highly impacted places) do not have infection rates higher than a few percent, there is a danger of overestimating total infection rates and underestimating IFR using these tests. (See, e.g., the recent Stanford-USC papers.)
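To see quantitatively why a percent-level false positive rate matters when true prevalence is itself only a few percent, here is a minimal sketch using the standard relation: apparent positive rate = sensitivity x prevalence + FPR x (1 - prevalence). The numbers are illustrative and not taken from any of the studies mentioned.

# Why percent-level false positive rates matter at low prevalence.
# Illustrative numbers only; not taken from the Stanford/USC studies.
def apparent_positive_rate(prevalence, sensitivity, fpr):
    return sensitivity * prevalence + fpr * (1.0 - prevalence)

def corrected_prevalence(apparent, sensitivity, fpr):
    # standard (Rogan-Gladen) correction; unstable when fpr ~ apparent rate
    return (apparent - fpr) / (sensitivity - fpr)

true_prevalence = 0.02      # suppose 2% of the population is truly infected
sensitivity = 0.90

for fpr in (0.005, 0.01, 0.02):
    raw = apparent_positive_rate(true_prevalence, sensitivity, fpr)
    print(f"FPR {fpr:.1%}: raw positive rate {raw:.2%} "
          f"({raw / true_prevalence:.1f}x the true 2%), "
          f"corrected {corrected_prevalence(raw, sensitivity, fpr):.2%}")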

Sure Biotech seems to be an HK company, while Wondfo is in Guangzhou.
NYTimes: ... Each test was evaluated with the same set of blood samples: from 80 people known to be infected with the coronavirus, at different points after infection; 108 samples donated before the pandemic; and 52 samples from people who were positive for other viral infections but had tested negative for SARS-CoV-2.

Tests made by Sure Biotech and Wondfo Biotech, along with an in-house Elisa test, produced the fewest false positives.

A test made by Bioperfectus detected antibodies in 100 percent of the infected samples, but only after three weeks of infection. None of the tests did better than 80 percent until that time period, which was longer than expected, Dr. Hsu said.

The lesson is that the tests are less likely to produce false negatives the longer ago the initial infection occurred, he said.

The tests were particularly variable when looking for a transient antibody that comes up soon after infection, called IgM, and more consistent in identifying a subsequent antibody, called IgG, that may signal longer-term immunity.

“You can see that antibody levels rise at different points for every patient,” Dr. Hsu said. The tests performed best when the researchers assessed both types of antibodies together. None of the tests could say whether the presence of these antibodies means a person is protected from reinfection, however.

The results overall are promising, Dr. Marson added. “There are multiple tests that have specificities greater than 95 percent.”
Preprint: Test performance evaluation of SARS-CoV-2 serological assays

From Table 2 in the paper:


Dr. Patrick Hsu -- quoted in the Times article above, and a co-author of the paper -- is no relation, although we know each other. He has appeared in this blog before for his CRISPR work.

Thursday, April 23, 2020

Vineer Bhansali: Physics, Tail Risk Hedging, and 900% Coronavirus Returns - Manifold Episode #43



Steve and Corey talk with theoretical physicist turned hedge fund investor Vineer Bhansali. Bhansali describes his transition from physics to finance, his firm LongTail Alpha, and his recent outsize returns from the coronavirus financial crisis. Also discussed: derivatives pricing, random walks, helicopter money, and Modern Monetary Theory.

Transcript

LongTail Alpha

LongTail Alpha’s OneTail Hedgehog Fund II had 929% Return (Bloomberg)

A New Anomaly Matching Condition? (1992)
https://arxiv.org/abs/hep-ph/9211299

Added: Background on derivatives history here. AFAIK high energy physicist M.F.M. Osborne was the first to suggest the log-normal random walk model for securities prices, in the 1950s. Bachelier suggested an additive model which does not even make logical sense. See my articles in Physics World: 1 , 2
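A quick sketch of the logical problem with the additive model (my own illustration; parameters are arbitrary): an additive random walk assigns nonzero probability to negative prices, which a limited-liability security cannot have, while the multiplicative (log-normal) walk stays positive by construction.

# Additive (Bachelier) vs multiplicative / log-normal (Osborne) price models.
# The additive walk can wander below zero (nonsense for a limited-liability
# security), while the multiplicative walk is positive by construction.
# Parameters are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, s0 = 10_000, 500, 100.0

additive = s0 + np.cumsum(rng.normal(0.0, 3.0, (n_paths, n_steps)), axis=1)
lognormal = s0 * np.exp(np.cumsum(rng.normal(0.0, 0.03, (n_paths, n_steps)), axis=1))

print("fraction of additive paths that ever go negative:", (additive < 0).any(axis=1).mean())
print("minimum over all log-normal paths:               ", lognormal.min())   # always > 0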



Friday, April 17, 2020

The von Neumann-Fuchs bomb, and the radiation compression mechanism of Ulam-Teller-Sakharov


Some useful references below on the Ulam-Teller mechanism, Sakharov's Third Idea, and the von Neumann-Fuchs thermonuclear design of 1946. They resolve a mystery discussed previously on this blog:
Sakharov's Third Idea: ... If Zeldovich was already familiar with radiation pressure as the tool for compression, via the Fuchs report of 1948, then perhaps one cannot really credit Teller so much for adding this ingredient to Ulam's idea of a staged device using a fission bomb to compress the thermonuclear fuel. Fuchs and von Neumann had already proposed (and patented!) radiation implosion years before. More here.
It turns out that the compression mechanism used in the von Neumann-Fuchs design (vN is the first author on the patent application; the design was realized in the Operation Greenhouse George nuclear test of 1951) is not that of Ulam-Teller or Sakharov. In vN-F the D-T mixture reaches thermal equilibrium with ionized BeO gas, leading to a pressure increase of ~10x. This is not the "cold compression" via focused radiation pressure used in the U-T / Sakharov designs. That was, apparently, conceived independently by Ulam-Teller and by Sakharov.

It is only recently that the vN-F design has become public -- first obtained by the Soviets via espionage (Fuchs), and finally declassified and published by the Russians! It seems that Zeldovich had access to this information, but not Sakharov.

American and Soviet H-bomb development programmes: historical background by G. Goncharov.

John von Neumann and Klaus Fuchs: an Unlikely Collaboration by Jeremy Bernstein. See also here for some clarifying commentary.


A great anecdote:
Jeremy Bernstein: When I was an undergraduate at Harvard he [vN] came to the university to give lectures on the computer and the brain. They were the best lectures I have ever heard on anything — like mental champagne. After one of them I found myself walking in Harvard Square and looked up to see von Neumann. Thinking, correctly as it happened, that it would be the only chance I would have to ask him a question, I asked, ‘‘Professor von Neumann, will the computer ever replace the human mathematician?’’ He studied me and then responded, ‘‘Sonny, don’t worry about it.’’


Note added from comments: I hope this clarifies things a bit.
The question of how the Soviets got to the U-T mechanism is especially mysterious. Sakharov himself (ostensibly the Soviet inventor) was puzzled until the end about what had really happened! He did not have access to the vN-F design that has been made public from the Russian side (~2000, after Sakharov's death in 1989; still classified in US). Zeldovich and only a few others had seen the Fuchs information, at a time when the main focus of the Russian program was not the H bomb. Sakharov could never be sure whether his suggestion for cold radiation compression sparked Zeldovich's interest because the latter *had seen the idea before* without fully comprehending it. Sakharov wondered about this until the end of his life (see below), but I think his surmise was not correct: we know now that vN-F did *not* come up with that idea in their 1946 design. I've been puzzled about this question myself for some time. IF the vN-F design had used radiation pressure for cold compression, why did Teller get so much credit for replacing neutrons with radiation pressure in Ulam's staged design (1951)? I stumbled across the (now public) vN-F design by accident just recently -- I was reading some biographical stuff about Zeldovich which touched upon these issues.

https://infoproc.blogspot.com/2012/10/sakharovs-third-idea.html

Consider the following words in Sakharov’s memoirs, with a note he added toward the end of life:

Now I think that the main idea of the H-bomb design developed by the Zeldovich group was based on intelligence information. However, I can’t prove this conjecture. It occurred to me quite recently, but at the time I just gave it no thought. (Note added July 1987. David Holloway writes in “Soviet Thermonuclear Development,” International Security 4:3 (1979/80), p. 193: “The Soviet Union had been informed by Klaus Fuchs of the studies of thermonuclear weapons at Los Alamos up to 1946. … His information would have been misleading rather than helpful, because the early ideas were later shown not to work.” Therefore my conjecture is confirmed!)
Another useful resource: Gennady Gorelik (BU science historian): The Paternity of the H-Bombs: Soviet-American Perspectives
Teller, 1952, August (re Bethe’s Memorandum): The main principle of radiation implosion was developed in connection with the thermonuclear program and was stated at a conference on the thermonuclear bomb, in the spring of 1946. Dr. Bethe did not attend this conference, but Dr. Fuchs did. [ Original development by vN? ]

It is difficult to argue to what extent an invention is accidental: most difficult for someone who did not make the invention himself. It appears to me that the idea was a relatively slight modification of ideas generally known in 1946. Essentially only two elements had to be added: to implode a bigger volume, and, to achieve greater compression by keeping the imploded material cool as long as possible.
The last part ("cool as long as possible") refers to the fundamental difference between the vN-Fuchs design and the U-T mechanism of cold radiation compression. The former assumes thermal equilibrium between ionized gas and radiation, while the latter deliberately avoids it as long as possible.

Official Soviet History: On the making of the Soviet hydrogen (thermonuclear) bomb, Yu B Khariton et al 1996 Phys.-Usp. 39 185. Some details on the origin of the compression idea, followed by the use of radiation pressure (Zeldovich and Sakharov).

Thursday, April 16, 2020

Jaan Tallinn: Coronavirus, Existential Risk, and AI - Manifold Episode#42



Steve talks with Skype founder and global tech investor Jaan Tallinn. Will the coronavirus pandemic lead to better planning for future global risks? Jaan gives his list of top existential risks and describes his efforts to call attention to AI risk. They discuss AGI, the Simulation Question, the Fermi Paradox and how these are all connected. Do we live in a simulation of a quantum multiverse?

RATIONALITY

Jaan X-Risk Links

LessWrong

Slate Star Codex

Metaculus


ADDITIONAL RESOURCES

Transcript

Fermi Paradox — Where Are All The Aliens?

Is Hilbert space discrete?
https://arxiv.org/abs/hep-th/0508039



Wednesday, April 15, 2020

COVID-19: Testing, Isolation, Geolocation in Korea



The guy in the video has just returned to Korea from abroad. He is tested right away (results by next day), and asked to self-quarantine for 2 weeks. His location is monitored via phone (GPS) during this time. During quarantine the government supplies him with food free of charge.

Systems like this make it possible to contain the epidemic without shutting down the economy. Is there any chance the US can get to this point by May?

Sunday, April 12, 2020

COVID-19: Iceland tests 10% of population, CFR ~0.5%


I can't find a better reference for this than the Daily Mail, which may have garbled the results. But if the headline is correct, CFR ~ 0.4% (this may increase as the disease runs its course among the infected) and infection rate is just over 4% (1.6k infected / 36k tested). [ See update below the graph for better information. ]
Daily Mail: Iceland has tested one-tenth of its population for coronavirus at random and found HALF of people have the disease without realising - with only seven deaths in 1,600 cases
At the JHU coronavirus page, the latest numbers reported are 1700 infected with 8 dead, or CFR ~ 0.47%. From the graph below it appears that most infections in Iceland happened less than 2-3 weeks ago, suggesting that further deaths will result among the population currently infected. See here for comprehensive Iceland data. From the description there, about half the people tested were already in quarantine (e.g., due to contact tracing), so the Daily Mail claim that the results come from random sampling of the population does not seem correct.

If the 1700 positives did result from testing a completely random selection of 1/10 of the total population, then IFR ~ 8 / 17000 ~ 0.04%, which is very small.

But 4% infected (probably less, due to bias in sampling from quarantine) seems inconsistent with a super-rapid sweep, and is far short of herd immunity. [ See below for better information! ]


Added: I found a better source of information than the Daily Mail.
Iceland Review: ... The screenings of the general population have been carried out by Reykjavík-based medical research company deCODE genetics ...

CEO Dr. Kári Stefánsson: “Fifty per cent of those that test positive in our screenings of the general population are symptom-free at the time. Many of them get symptoms later,” Kári said.

Therefore, although about half of those who have tested positive for coronavirus in deCODE’s screenings did not have symptoms at the time, most of those who have tested positive developed symptoms at some point. A positive sample from an individual without symptoms means that the sample was most likely taken early in the virus’ incubation period, before symptoms such as dry cough or fever began to present themselves.

“DeCODE has now screened 10,401 individuals in Iceland. Of those, 92 were positive. So about 0.9% of those who we screened in the general population turned out to be positive. And that is probably the upper limit of the distribution of the virus in society in general,” Kári explained.

Interview: “The testing has been going on for 15 days – there was a little pause in the middle because we were missing swabs – but all of these 15 days, the rate of positives has been a little bit below one percent, which makes it likely that this is the true population prevalence. Today we are calling in people randomly, just selecting at random from the telephone directory. There is probably no perfect way to get a random sample. But I think it is very likely that the number is going to turn out in the end to be somewhat close to this number, probably somewhere between 0.5-1%.”
If the population prevalence of infection now is 0.5 to 1% (or 1.8k to 3.6k people in all of Iceland), then the 8 deaths imply an IFR in the range 0.22% to 0.44%. This should go up over time as many of the infections are early and we expect more deaths later. The 4% infection prevalence obtained using the Daily Mail numbers is probably an overestimate due to testing of already quarantined people -- deCODE has done about half the testing in Iceland, presumably using the random sampling method described by Stefansson, and another entity did the rest. All of this is aggregated in the ~1.7k total confirmed cases as of now.
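The arithmetic behind these ranges, made explicit (the ~364,000 population figure for Iceland is my own round number; everything else is quoted above):

# Iceland IFR arithmetic from the figures quoted above. The ~364,000
# population figure is an approximate round number, not from the articles.
population = 364_000
deaths = 8

for prevalence in (0.005, 0.01):            # deCODE's 0.5%-1% population estimate
    infected = prevalence * population
    print(f"prevalence {prevalence:.1%}: ~{infected:,.0f} infected, IFR ~ {deaths / infected:.2%}")

# The (probably biased) Daily Mail reading, 1,700 positives extrapolated from
# testing ~10% of the population, would instead give:
print(f"Daily Mail extrapolation: IFR ~ {deaths / 17_000:.3%}")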

deCODE has genetic data on essentially all Icelanders, and so should be able to identify alleles that make one more or less vulnerable to CV19.

Friday, April 10, 2020

COVID-19: CFR ~1% estimated in large random sample (Austria)


CV19 antigen test of a random representative sample in Austria. CFR (or IFR, to be very precise) is close to 1%.
WSJ: More than twice as many people have been infected by the new coronavirus in Austria than official figures showed, according to a new survey, with a fatality rate of 0.77%.

The nationwide survey, which the Austrian government described as the first of a country with a sizable population, showed that lockdown measures, which are particularly strict in Austria, were necessary to avoid mass casualties and overwhelming the health-care system, said Heinz Fassmann, the country’s education minister, who presented the study in Vienna Friday.

The study, conducted by polling firm SORA in cooperation with the government and the Red Cross, tested a random, representative sample of 1,544 people aged 0 to 94 from across the country in their homes or in drive-in testing stations. It indicated that 28,500 people, or around 0.33% of Austria’s 8.9 million population, were infected with the virus by April 6, sharply higher than the 12,467 infections recorded by that date, with 220 people dying of Covid-19, the disease the virus causes.

The findings suggest that while the death rate implied from the study, 0.77%, is lower than the World Health Organization’s estimate for reported cases, which is over 3%, it would still mean that the virus could kill many millions of people before a vaccine is available.
95% confidence interval for infection rate is 0.12% to 0.76%, so CFR range of ~0.3% to ~2%. The "standard model" of CV-19 epidemiology seems to be correct.
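The quoted CFR range follows directly from the survey's confidence interval and the population and death figures in the WSJ excerpt; a quick check:

# Implied CFR range from the survey's confidence interval, using the
# population and death figures quoted in the WSJ excerpt above.
population = 8.9e6
deaths = 220

for infection_rate in (0.0012, 0.0033, 0.0076):    # CI low, point estimate, CI high
    infected = infection_rate * population
    print(f"infected {infection_rate:.2%} (~{infected:,.0f} people): CFR ~ {deaths / infected:.2%}")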


Note Added: First sign of Google / Apple action to bring the full power of geolocation to bear on contact tracing and isolation! These capabilities have been available in China for some time now.
Bloomberg: Apple Inc. and Google unveiled a rare partnership to add technology to their smartphone platforms that will alert users if they have come into contact with a person with Covid-19. People must opt in to the system, but it has the potential to monitor about a third of the world’s population.

The technology, known as contact-tracing, is designed to curb the spread of the novel coronavirus by telling users they should quarantine or isolate themselves after contact with an infected individual.

The Silicon Valley rivals said on Friday that they are building the technology into their iOS and Android operating systems in two steps. In mid-May, the companies will add the ability for iPhones and Android phones to wirelessly exchange anonymous information via apps run by public health authorities. The companies will also release frameworks for public health apps to manage the functionality.

This means that if a user tests positive for Covid-19, and adds that data to their public health app, users who they came into close proximity with over the previous several days will be notified of their contact. This period could be 14 days, but health agencies can set the time range.

The second step takes longer. In the coming months, both companies will add the technology directly into their operating systems so this contact-tracing software works without having to download an app. Users must opt in, but this approach means many more people can be included. Apple’s iOS and Google’s Android have about 3 billion users between them, over a third of the world’s population. ...
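As a purely conceptual toy, not the actual Apple/Google protocol (which uses cryptographically derived rolling identifiers, Bluetooth signal attenuation, and other machinery not modeled here), the decentralized matching idea can be sketched like this: phones broadcast frequently rotating random identifiers, log the identifiers they hear nearby, and later check that local log against identifiers voluntarily published by users who test positive.

# Conceptual toy model of decentralized exposure notification. This is NOT
# the Apple/Google protocol; it only illustrates the local matching idea.
import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # rotating random IDs this phone has broadcast
        self.heard = set()      # IDs heard from nearby phones

    def broadcast_id(self):
        rid = secrets.token_hex(8)      # fresh random identifier
        self.my_ids.append(rid)
        return rid

    def hear(self, rid):
        self.heard.add(rid)

    def check_exposure(self, published_positive_ids):
        # matching happens locally, on the phone
        return bool(self.heard & set(published_positive_ids))

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast_id())          # Bob was near Alice
carol.hear(bob.broadcast_id())          # Carol was near Bob, not Alice

# Alice tests positive and consents to publishing her broadcast IDs
positive_ids = alice.my_ids
print("Bob exposed?  ", bob.check_exposure(positive_ids))    # True
print("Carol exposed?", carol.check_exposure(positive_ids))  # False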

Thursday, April 09, 2020

Dan Gable: Legendary NCAA and Olympic Wrestler & Coach - Manifold Episode #41



Steve and Corey talk to legendary NCAA and Olympic wrestler and coach Dan Gable. Gable describes the final match of his collegiate career, an NCAA championship upset which spoiled his undefeated high school and college record. The coach explains how the loss led him to take a more scientific approach to training and was critical for his later success. They discuss the tragic murder of Gable's sister, and the steps 15-year-old Gable took to try to save his parents’ marriage. Gable describes his eye for talent and philosophy of developing athletes. Steve gets Gable's reaction to ultimate fighting and jiujitsu.

Transcript

Dan Gable vs Larry Owings - 1970 NCAA Title Match (video)

The Champion (1970 documentary on Gable's senior NCAA season)




Added: The wonders of YouTube! A great interview with Chris Campbell -- perhaps Gable's best wrestler!

Tuesday, April 07, 2020

COVID-19: A False Lockdown Dichotomy?


This was posted by commenter "husposter" in the thread for COVID-19: CBA, CFR, Open Borders, and I thought I would promote it here. Along similar lines, I read somewhere today that economic indicators for Sweden (no lockdown) are quite similar to those for neighboring countries that are under lockdown. Rational people will tend to practice social distancing even if it is not forced on them.

From what I understand, CV-19 is a terrible infection to have even for many people who avoid going to hospital. There may be an ~80% chance of a mild or entirely asymptomatic case, but the tail of the probability distribution is very unpleasant... and in the worst case, it seems like a terrible way to die.
husposter: Why do people assume that the economy will "get back to normal" if the lockdown is ended without the disease being controlled?

Are all you people going to start going to the bar and baseball games if the government lets you? Are you going to let your kids go to what are effectively infection factories (daycares and schools)? Are you going to start going to doctors' offices full of people that might have Covid?

The "lockdowns" came AFTER the private market started to shut down voluntarily. I was there. I saw it. My company was canceling travel and the NBA was cancelling its season while "the government" was still encouraging people to go out.

Who the heck is going to take a 1% chance of dying? A 10% chance of hospital stay and permanent lung damage? Who is going to expose friends, loved ones, and co-workers to that chance if they get infected?

To even propose that there is some kind of "choice" to save the economy at the price of a few old people dying (and it's not like the victims are even as clear-cut as you'd like them to be) is a dangerous false dichotomy. There is no way the economy is exiting the lockdown until the disease is under control.

The "lockdown" just gives the authorities the ability to go after egregious malcontents that are so socially degenerate they can't obey basic behavioral norms at a time like this. Your workplace would not be open right now even if the government allowed it.

Either get the disease under control, or there is no economy. You aren't a clever heartless individual, you're an idiot that wants to seem "tough".

This is what people would be risking so they could go to a concert:
It first struck me how different it was when I saw my first coronavirus patient go bad. I was like, Holy shit, this is not the flu. Watching this relatively young guy, gasping for air, pink frothy secretions coming out of his tube and out of his mouth. The ventilator should have been doing the work of breathing but he was still gasping for air, moving his mouth, moving his body, struggling.

We had to restrain him. With all the coronavirus patients, we’ve had to restrain them. They really hyperventilate, really struggle to breathe. When you’re in that mindstate of struggling to breathe and delirious with fever, you don’t know when someone is trying to help you, so you’ll try to rip the breathing tube out because you feel it is choking you, but you are drowning.

When someone has an infection, I’m used to seeing the normal colors you’d associate with it: greens and yellows. The coronavirus patients with ARDS have been having a lot of secretions that are actually pink because they’re filled with blood cells that are leaking into their airways. They are essentially drowning in their own blood and fluids because their lungs are so full. So we’re constantly having to suction out the secretions every time we go into their rooms.
Added: WSJ on exiting lockdown in Wuhan... important sociological and public health experiment to watch.




Good Twitter thread by J.D. Vance addressing skeptics.


More Added: Ioannidis preprint estimates that a large fraction (e.g., ~30 percent) of US CV19 dead are under 65, in sharp contrast to Europe and Asia (~5 percent). This suggests (as pointed out many times by commenters here) that cocooning may be more difficult here than abroad. While this data is very noisy at the moment, it is hard to believe that the entire difference is due to noise.

Friday, April 03, 2020

COVID-19: Exiting Lockdown and Geolocation

Pressure will mount around the end of this month (assuming we are past the peak death rate and virus spread is under control) for the US to exit lockdown. This needs to be done in a smart way, which includes:

1. Required use of facemasks
2. Cocooning of vulnerable populations
3. Contact tracing and forced isolation of cases, perhaps using geolocation technology

See related posts

COVID-19: Smart Technologies and Exit from Lockdown (Singapore)
COVID-19: CBA, CFR, Open Borders
COVID-19: Cocoon the vulnerable, save the economy?
COVID-19 Notes

WSJ: Western governments aiming to relax restrictions on movement are turning to unprecedented surveillance to track people infected with the new coronavirus and identify those with whom they have been in contact.

Governments in China, Singapore, Israel and South Korea that are already using such data credit the practice with helping slow the spread of the virus. The U.S. and European nations, which have often been more protective of citizens’ data than those countries, are now looking at a similar approach, using apps and cellphone data.

“I think that everything is gravitating towards proximity tracking,” said Chris Boos, a member of Pan-European Privacy-Preserving Proximity Tracing, a project that is working to create a shared system that could take uploads from apps in different countries. “If somebody gets sick, we know who could be infected, and instead of quarantining millions, we’re quarantining 10.” ...

Some European countries are going further, creating programs to help track individuals—with their permission—who have been exposed and must be quarantined. The Czech Republic and Iceland have introduced such programs and larger countries including the U.K., Germany and Spain are studying similar efforts. Hundreds of new location-tracking apps are being developed and pitched to those governments, Mr. Boos said.

U.S. authorities are able to glean data on broad population movements from the mobile-marketing industry, which has geographic data points on hundreds of millions of U.S. mobile devices, mainly taken from apps that users have installed on their phones.

Europe’s leap to collecting personal data marks a shift for the continent, where companies face more legal restrictions on what data they may collect. Authorities say they have found workarounds that don’t violate the European Union’s General Data Protection Regulation, or GDPR, which restricts how personal information can be shared. ...
Google, Apple, Facebook, etc. are reluctant to draw attention to their already formidable geolocation capabilities. But this crisis may focus public awareness on their ability to track almost all Americans throughout the day.
WSJ: Google will help public health officials use its vast storage of data to track people’s movements amid the coronavirus pandemic, in what the company called an effort to assist in “unprecedented times.”

The initiative, announced by the company late Thursday, uses a portion of the information that the search giant has collected on users, including through Google Maps, to create reports on the degree to which locales are abiding by social-distancing measures. The “mobility reports” will be posted publicly and show, for instance, whether particular localities, states or countries are seeing more or less people flow into shops, grocery stores, pharmacies and parks. ... 
This is just a hint at what Google is capable of. Check out Google Timeline! Of course, users have to opt in to create their Google Timeline. But it should be immediately obvious that Google already HAS the information necessary to populate a detailed geolocation history of every individual...




Added from the comments:
There are really two separate issues here:

1. What is the basic epidemiology of CV19? i.e., R0, CFR, age distribution of vulnerability, comorbidities, mechanism of spread, utility of masks, etc.

2. What is the cost-benefit analysis for various strategies (e.g., lockdown vs. permissive sweep with cocooning)?

While we have not reached full convergence on #1, I think reasonable people agree that the "mainstream" consensus has a decent chance of being correct: e.g., CFR ~ 1% or so, the possibility of a wide sweep through any population, much higher CFR if ICUs are overloaded, warmer weather might not save the day, etc. Once this scenario for #1 has, say, a >50% chance of being right, you are forced to at least take it seriously, and then you are on to #2. (It is not required to believe the scenario above at the 95% or 99% confidence level...)
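A toy illustration of the "wide sweep in any population" point: in a textbook SIR model the final attack rate A satisfies A = 1 - exp(-R0 A). The R0 values in the sketch below are assumed round numbers, not estimates from the post.

```python
import math

# Solve the SIR final-size relation A = 1 - exp(-R0 * A) by fixed-point iteration.
def final_attack_rate(r0, iterations=200):
    a = 0.5                      # starting guess
    for _ in range(iterations):
        a = 1 - math.exp(-r0 * a)
    return a

for r0 in (1.5, 2.5, 3.0):       # assumed R0 values for illustration
    print(f"R0 = {r0}: herd-immunity threshold ~{1 - 1/r0:.0%}, "
          f"unmitigated final attack rate ~{final_attack_rate(r0):.0%}")

# Even modest R0 implies that, absent intervention, a large fraction of the
# population is eventually infected -- which is why a ~1% CFR plus ICU overload
# is a scenario one is forced to take seriously.
```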

#2 is a question of trade-offs, and two reasonable people can easily disagree until the end of time... I've already posted very simple CBAs showing that the answer can go either way depending on how you "price" QALYs and what you think the long-term economic effects of lockdown are -- i.e., how fragile you think financial, supply-chain, and psychological systems are in various places: is it a ~$1 trillion cost, or could it go nonlinear?
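To make "the answer can go either way" concrete, here is a toy version of such a CBA. Every number in it (deaths averted, QALYs per death, QALY price, economic cost of lockdown) is an assumed placeholder chosen only for illustration:

```python
# Toy cost-benefit comparison: lockdown vs. permissive-with-cocooning.
# All inputs are hypothetical placeholders chosen only to show how the
# conclusion flips with the QALY price and the assumed economic cost.

def net_benefit_of_lockdown(qaly_price, deaths_averted=1_000_000,
                            qalys_per_death=8, economic_cost=2e12):
    """Value of QALYs saved by lockdown minus its extra economic cost."""
    health_benefit = deaths_averted * qalys_per_death * qaly_price
    return health_benefit - economic_cost

for qaly_price in (50_000, 150_000, 500_000):   # $/QALY, assumed
    nb = net_benefit_of_lockdown(qaly_price)
    verdict = "lockdown favored" if nb > 0 else "lockdown disfavored"
    print(f"${qaly_price:,}/QALY -> net benefit {nb/1e12:+.1f}T ({verdict})")

# With a ~$2T assumed economic cost and ~8M QALYs saved, the sign flips at a
# QALY price of ~$250k -- i.e., the answer really can go either way depending
# on how you price QALYs and how fragile you think the economy is.
```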

Re: Physicists (and addressing the gmachine comment below, which has a lot of truth in it), we have no trouble understanding modeling done by other people (whether in finance, climate, or epidemiology), and we are also trained to deal with very uncertain data and statistical situations. We can "take apart" a model in our heads to see where the dependencies are and how the uncertainties propagate through it. I am often amazed to meet people who built a very complex model (e.g., thousands of lines of code, lots of input parameters) but lack the chops to develop good intuition for how their model works, or to make qualitative estimates for uncertainty quantification. I have seen this in economics, finance, biology, and climate contexts many times. "There are levels to this thing..." Understanding the model can be more g-loaded than building it!

Finally, we are trained to think from first principles -- which assumptions are crucial to reach the conclusions, and which are not? What are the key uncertainties in the analysis? Do we really need very specific assumptions about, e.g., social interaction rates, as in the Imperial College models? Or can I do a quick Fermi estimate that gets a more robust answer, at the cost of a factor-of-2 uncertainty that does not really affect the main conclusion -- e.g., will ICU overload happen?
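As an example of the kind of quick Fermi estimate meant here, applied to the ICU-overload question: the inputs below are deliberately round, assumed numbers, and the point is that the qualitative conclusion survives factor-of-2 changes in any of them.

```python
# Fermi estimate: would an unmitigated sweep overload ICUs?
# Every input is a rough, assumed round number.

population        = 330e6    # US population
attack_rate       = 0.5      # assumed fraction eventually infected in a sweep
icu_rate          = 0.01     # assumed fraction of infections needing ICU care
epidemic_months   = 6        # assumed duration over which cases arrive
icu_stay_days     = 10       # assumed average ICU length of stay
icu_beds          = 85e3     # approximate number of US ICU beds

icu_patients_total  = population * attack_rate * icu_rate
patients_per_day    = icu_patients_total / (epidemic_months * 30)
beds_needed_steady  = patients_per_day * icu_stay_days   # average occupancy

print(f"ICU patients total:   {icu_patients_total:,.0f}")
print(f"Beds needed (steady): {beds_needed_steady:,.0f} vs {icu_beds:,.0f} available")

# ~1.65M ICU patients, ~90k beds needed on average (far more at the peak),
# against ~85k total beds, most of which are already occupied in normal times.
# Halve or double any single input and the qualitative answer -- overload --
# does not change.
```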

Enrico Fermi at the Trinity test: "I tried to estimate its strength by dropping from about six feet small pieces of paper before, during, and after the passage of the blast wave. Since, at the time, there was no wind I could observe very distinctly and actually measure the displacement of the pieces of paper that were in the process of falling while the blast was passing. The shift was about 2 1/2 meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand tons of T.N.T." The actual yield was about 20 kt. Sometimes a smart guy can get to within a factor of two, and with much greater clarity, than a huge team of modelers...

Thursday, April 02, 2020

Klaus Lackner on Carbon Capture, Climate Change, and Physics



Steve and Corey talk to Klaus Lackner, director of the Center for Negative Carbon Emissions (CNCE) at Arizona State University and the first person to suggest removing CO2 from air to address climate change. Steve asks whether Klaus’ research was motivated by a tail risk of catastrophic outcomes due to CO2 build up. Klaus explains that he sees atmospheric CO2 as a waste management problem. Calculations show that removing human-produced carbon is energetically and economically viable. Klaus describes his invention, a “mechanical tree”, that passively collects CO2 from the air, allowing it to be stored or converted to fuel.

Note: Klaus, in theorist fashion, performs a number of Fermi estimates in real time during the discussion. To fully understand his reasoning, it might be useful to consult the transcript or replay the relevant parts of the interview.
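In that spirit, here is one generic back-of-envelope number (standard separation thermodynamics with assumed round figures, not Klaus's own calculation from the episode): the minimum work to pull CO2 out of air at ~400 ppm is set by RT ln(1/x), and the sketch below compares that to global emissions and energy use.

```python
import math

# Back-of-envelope estimate of the minimum energy for direct air capture.
# Generic thermodynamics with round numbers; not figures from the episode.

R = 8.314          # gas constant, J/(mol K)
T = 300            # K, roughly ambient temperature
x_co2 = 400e-6     # atmospheric CO2 mole fraction (~400 ppm)
M_co2 = 0.044      # kg/mol

w_min_per_mol = R * T * math.log(1 / x_co2)     # J/mol, ideal separation work
w_min_per_ton = w_min_per_mol / M_co2 * 1000    # J per metric ton of CO2

annual_emissions_t = 37e9                       # ~37 Gt CO2/yr, rough global figure
min_power_tw = annual_emissions_t * w_min_per_ton / (365 * 86400) / 1e12

print(f"Minimum work: ~{w_min_per_mol/1e3:.0f} kJ/mol (~{w_min_per_ton/1e9:.2f} GJ/ton)")
print(f"Capturing all annual emissions at the minimum: ~{min_power_tw:.1f} TW average")

# ~20 kJ/mol, or ~0.44 GJ/ton: roughly 0.5 TW continuous at the thermodynamic
# limit, versus ~18 TW of total human primary energy use. Real processes need
# several times the minimum, but the scale is not obviously prohibitive.
```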

Transcript

Klaus Lackner (Faculty Bio)

Center for Negative Carbon Emissions at ASU


