Thursday, May 28, 2020

Michael Kauffman: Cancer, Drug Development, and Market Capitalism (Manifold Podcast #48)



Note: the part of the conversation I found most interesting -- venture and capital markets aspects of drug discovery, complexity and scale of biotech ecosystems, role of IP and US healthcare spending to incentivize discovery -- begins at ~35m.

Steve and Corey speak with Dr. Michael Kauffman, co-founder and CEO of Karyopharm Therapeutics, about cancer and biotech innovation. Michael explains how he and Dr. Sharon Shacham tested her idea regarding cellular nuclear transport using simulation software on a home laptop, and went on to beat 1000:1 odds to create a billion-dollar company. They discuss the relationship between high proprietary drug costs and the economic incentives for drug discovery. They also discuss the unique US biotech ecosystem, and why innovation is easier in small (vs. large) companies. Michael explains how Karyopharm is targeting its drug at COVID-induced inflammation to treat people with severe forms of the disease.

Transcript

Michael Kauffman (Bio)

Karyopharm's Publications and Presentations

The Great American Drug Deal: A New Prescription for Innovative and Affordable Medicines by Peter Kolchinsky


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Wednesday, May 27, 2020

David Silver on AlphaGo and AlphaZero (AI podcast)



I particularly liked this interview with David Silver on AlphaGo and AlphaZero. I suggest starting around ~35m in if you have some familiarity with the subject. (I listened to this while running hill sprints and found at the end I had it set to 1.4x speed -- YMMV.)

At ~40m Silver discusses the misleading low-dimensional intuition that led many to fear (circa 1980s-1990s) that neural net optimization would be stymied by local minima. (See related discussion: Yann LeCun on Unsupervised Learning.)
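As a toy illustration of that point (my own sketch, not something from the interview): if one crudely models the Hessian at a random critical point of a high-dimensional loss function as a random symmetric matrix, the chance that all of its eigenvalues are positive -- i.e., that the critical point is a local minimum rather than a saddle -- collapses rapidly with dimension. This random-matrix assumption ignores the real structure of neural net loss surfaces, so it only conveys the low- vs. high-dimensional intuition.

import numpy as np

rng = np.random.default_rng(0)

def fraction_local_minima(d, trials=4000):
    # Model the Hessian at a random critical point as a random symmetric matrix
    # and count how often it is positive definite (all eigenvalues > 0).
    count = 0
    for _ in range(trials):
        M = rng.normal(size=(d, d))
        H = (M + M.T) / np.sqrt(2 * d)
        if np.linalg.eigvalsh(H)[0] > 0:
            count += 1
    return count / trials

for d in [1, 2, 3, 4, 6, 8]:
    print(f"d = {d}: fraction of critical points that are minima ~ {fraction_local_minima(d):.4f}")

Already by d ~ 6-8 the observed fraction is essentially zero: generic critical points in high dimensions are saddles, not traps.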

At one point Silver notes that the expressiveness of deep nets was never in question (i.e., whether they could encode sufficiently complex high-dimensional functions). The main empirical question was really about efficiency of training -- once the local minima question is resolved what remains is more of an engineering issue than a theoretical one.

Silver gives some details of the match with Lee Sedol. He describes the "holes" in AlphaGo's gameplay that would manifest in roughly 1 in 5 games. Silver had predicted before the match, correctly, that AlphaGo might lose one game this way! AlphaZero was partially invented as a way to eliminate these holes, although it was also motivated by the principled goal of de novo learning, without expert examples.

I've commented many times that even with complete access to the internals of AlphaGo, we (humans) still don't know how it plays Go. There is an irreducible complexity to a deep neural net (and to our brain) that resists comprehension even when all the specific details are known. In this case, the computer program (neural net) which plays Go can be written out explicitly, but it has millions of parameters.

Silver says he worked on AI Go for a decade before it finally reached superhuman performance. He notes that Go was of special interest to AI researchers because there was general agreement that a superhuman Go program would truly understand the game, would develop intuition for it. But now that the dust has settled we see that notions like understanding and intuition are still hidden in (spread throughout?) the high dimensional space of the network... and perhaps always will be. (From a philosophical perspective this is related to Searle's Chinese Room and other confusions...)

As to whether AlphaGo has deep intuition for Go, whether it can play with creativity, Silver gives examples from the Lee Sedol match in which AlphaGo (1) upended textbook Go theory previously embraced by human experts (perhaps for centuries?), and (2) surprised the human champion by making an aggressive territorial incursion late in the game. In fact, human understanding of both Chess and Go strategy has been advanced considerably via AlphaZero (which performs at a superhuman level in both games).

See also this Manifold interview with John Schulman of OpenAI.

Saturday, May 23, 2020

Will Trump Pardon Obama?



I get that I'm supposed to hate this lady and her boss, but can someone do me a favor by answering the questions she posed?
1) Why did the Obama administration spy on members of the Trump campaign, during and after the campaign?

2) Why was General Michael Flynn unmasked by Obama's chief of staff, Joe Biden, Susan Rice, and others?

3) Why was Flynn's identity leaked to the press (a felony)?

4) Why did AG Sally Yates (DOJ) first learn about the FBI investigation of Flynn's communication with the Russian Ambassador from a conversation with Obama in the Oval Office?

5) Why did James Clapper, John Brennan, Samantha Power, and Susan Rice privately admit under oath (Congressional testimony, only declassified recently) that they had no evidence of collusion, while saying the opposite in public? (2017-2019)
If you have any pretensions to rationalism (or even to being moderately well-informed), please score your understanding, over time, of this scandal which has unfolded since 2016. My observations have been well documented since the beginning.

In addition to items #4 and #5 above, which only became public through recent declassification, the other new fact is: On January 4 (the day before the White House meeting which included Obama, Biden, Comey, Yates, and Rice, and which was memorialized by Rice in the infamous CYA email to herself weeks later), FBI field agents working on the Flynn investigation, who had access to the Dec 29 call with Kislyak, recommended closing the case on Flynn due to what they referred to as an absence of derogatory information. Of course, as a result of the January 5 White House meeting, the case was NOT closed and the rest is history (like Watergate, only worse).

None of this information is disputed, but you won't hear much about it from NYT, WaPo, CNN, etc. But see WSJ: here and here.

Added: I wrote the post Lies and Admissions: Spygate in light of the IG FISA report in December 2019, after the DOJ Inspector General's report on FISA abuse became public. Media coverage of its content was extremely misleading (details at the link). The 3+ year timeline I described (reproduced below) has now reached its endpoint due to the recent (May 2020) declassifications.

Almost three years of hard work to bring the truth to light.
There was no spying [ WE STARTED HERE 2016 ]

Okay, there was spying, but it was all legal

Some illegal things happened, but by mistake

A few bad apples did the illegal things [ WE ARE HERE 12/2019 ]

Illegal spying was politically motivated and ordered from the top

Obama knew ???  [ BEYOND DOUBT NOW 5/2020 ]
No telling how far down the above list we will get, but:
Lisa Page (text to Peter Strzok 9/2/2016): POTUS wants to know everything we’re doing.

Thursday, May 21, 2020

How Pompeo's CIA and Sheldon Adelson spied on Julian Assange



An amazing story. I met Assange's attorney at an event in London last summer...

The mysterious death of the Chinese ambassador to Israel happened just a few days after Pompeo's visit with Bibi. A coincidence, I am sure... we have it on good authority from the media and other experts that these conspiracy theories are merely fever dreams.

University of California to end use of SAT and ACT

University of California Will End Use of SAT and ACT in Admissions (NYT)

This decision by the UC Regents (most of whom are political appointees) is counter to the recommendation of the faculty task force recently assigned to study standardized testing in admissions. It is obvious to anyone who looks at the graphs below that SAT/ACT scores have significant validity (a technical term used in psychometrics) in predicting college performance for all ethnic groups.


See Report of the University of California Academic Council Standardized Testing Task Force for more.
... SAT and HSGPA are stronger predictors than family income or race. Within each of the family income or ethnicity categories there is substantial variation in SAT and HSGPA, with corresponding differences in student success. See bottom figure and combined model R^2 in second figure below; R^2 varies very little across family income and ethnic categories. ...

Test Preparation and SAT scores: "...combined effect of coaching on the SAT I is between 21 and 34 points. Similarly, extensive meta-analyses conducted by Betsy Jane Becker in 1990 and by Nan Laird in 1983 found that the typical effect of commercial preparatory courses on the SAT was in the range of 9-25 points on the verbal section, and 15-25 points on the math section."

Scott Adams on Trump, and his book Loserthink – Manifold Podcast #47



Corey and Steve talk to Scott Adams, creator of Dilbert and author of Loserthink. Steve reviews some of Scott's predictions, including his prediction of Trump’s 2016 victory. Scott (who once semi-humorously described himself as “left of Bernie”) describes what he sees as Trump's unique "skill stack". Scott highlights Trump's grasp of the role of psychology in economics, and maintains that honesty requires admitting that we do not know whether many of Trump’s policies are good or bad. Scott explains why he thinks it is mistaken to assume leaders are irrational.

Transcript

Scott Adams (Blog and Podcast)

Loserthink: How Untrained Brains Are Ruining America

Kihlstrom J. F. (1997). Hypnosis, memory and amnesia. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 352(1362), 1727–1732. https://doi.org/10.1098/rstb.1997.0155

Hypnosis and Memory (Blog Post)


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Tuesday, May 19, 2020

COVID-19: Open Thread


I haven't followed the latest scientific progress very carefully for the last week or two. It seems that things have slowed down a bit. Previous posts on COVID-19.

I still think the evidence is reasonably strong for IFR ~ 1% (meaning it could be ~0.5% under good conditions, and higher if medical systems are overwhelmed; there also seems to be some evidence that severity depends on initial viral dose).

I suspect that from a purely utilitarian perspective we might be overpaying per QALY.

Tests seem to be improving (e.g., Roche), and there seems to be positive early news about vaccines.

Does anyone know what the status of contact tracing apps is? Are there any that have been tested at scale outside of E. Asia?

Where and when were the earliest cases? Is there any evidence for functional (rather than simply genomic) differences between strains?

Please add any useful updates in the comments below.

Thursday, May 14, 2020

James Oakes on What’s Wrong with The 1619 Project - Manifold Podcast #46



Steve and Corey talk to James Oakes, Distinguished Professor of History and Graduate School Humanities Professor at the Graduate Center of the City University of New York, about "The 1619 Project" developed by The New York Times Magazine. The project argues that slavery was the defining event of US history. Jim argues that slavery was actually the least exceptional feature of the US and that what makes the US exceptional is that it is where abolition first began. Steve wonders about the views of Thomas Jefferson who wrote that “all men are created equal” but still held slaves. Jim maintains many founders were hypocrites, but Jefferson believed what he wrote.

Topics: Northern power, Industrialization, Capitalism, Lincoln, Inequality, Cotton, Labor, Civil War, Racism/Antiracism, Black Ownership.

Transcript

James Oakes (Bio)

Oakes and Colleagues Letter to the NYT and the Editor’s Response

The Fight Over the 1619 Project Is Not About the Facts

The World Socialist Web Site interview with James Oakes


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Saturday, May 09, 2020

Pure State Quantum Thermalization: from von Neumann to the Lab


Perhaps the most fundamental question in thermodynamics and statistical mechanics is: Why do systems tend to evolve toward thermal equilibrium? Equivalently, why does entropy tend to increase? Because Nature is quantum mechanical, a satisfactory answer to this question has to arise within quantum mechanics itself. The answer was given already in a 1929 paper by von Neumann. However, the ideas were not absorbed (were in fact misunderstood) by the physics community and were only rediscovered in the 21st century! General awareness of these results is still rather limited.

See this 2011 post: Classics on the arxiv: von Neumann and the foundations of quantum statistical mechanics.

In modern language, we would say something to the effect that "typical" quantum pure states are highly entangled, and the density matrix describing any small sub-system (obtained by tracing over the rest of the pure state) is very close to micro-canonical (i.e., thermal). Under dynamical (Schrodinger) evolution, all systems (even those that are initially far from typical) spend nearly all of their time in a typical state (modulo some weak conditions on the Hamiltonian). Typicality of states is related to concentration of measure in high dimensional Hilbert spaces. One could even claim that the origin of thermodynamics lies in the geometry of Hilbert space itself.
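A minimal numerical sketch of the typicality statement (my own illustration, with the simplifying assumption of no energy constraint, so the "thermal" reference state is just the maximally mixed one): draw a Haar-random pure state on H_A ⊗ H_B with dim(B) >> dim(A) and check that the reduced density matrix of A is nearly maximally mixed, with entanglement entropy close to its maximum ln(dim A).

import numpy as np

rng = np.random.default_rng(0)
dA, dB = 4, 1024                      # small subsystem A, large "bath" B

# Haar-random pure state: complex Gaussian vector, normalized.
psi = rng.normal(size=(dA * dB,)) + 1j * rng.normal(size=(dA * dB,))
psi /= np.linalg.norm(psi)

# Reduced density matrix rho_A = Tr_B |psi><psi|.
psi = psi.reshape(dA, dB)
rho_A = psi @ psi.conj().T

# Compare to the maximally mixed state I/dA ("thermal" with no energy constraint).
trace_dist = 0.5 * np.abs(np.linalg.eigvalsh(rho_A - np.eye(dA) / dA)).sum()
evals = np.clip(np.linalg.eigvalsh(rho_A), 1e-15, None)
S_A = -np.sum(evals * np.log(evals))   # entanglement entropy of A

print(f"trace distance to I/dA : {trace_dist:.4f}")      # small when dB >> dA
print(f"S_A = {S_A:.4f}  vs  ln(dA) = {np.log(dA):.4f}")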

[ It's worth noting that vN's paper does more than just demonstrate these results. It also gives an explicit construction of macroscopic classical (commuting) observables arising in a large Hilbert space. This construction would be a nice thing to include in textbooks for students trying to connect the classical and quantum worlds. ]

Recently I came across an experimental realization of these theoretical results, using cold atoms in an optical lattice (Greiner lab at Harvard):
Quantum thermalization through entanglement in an isolated many-body system

Science 353, 794-800 (2016)    arXiv:1603.04409v3

The concept of entropy is fundamental to thermalization, yet appears at odds with basic principles in quantum mechanics. Statistical mechanics relies on the maximization of entropy for a system at thermal equilibrium. However, an isolated many-body system initialized in a pure state will remain pure during Schrodinger evolution, and in this sense has static, zero entropy. The underlying role of quantum mechanics in many-body physics is then seemingly antithetical to the success of statistical mechanics in a large variety of systems. Here we experimentally study the emergence of statistical mechanics in a quantum state, and observe the fundamental role of quantum entanglement in facilitating this emergence. We perform microscopy on an evolving quantum system, and we see thermalization occur on a local scale, while we measure that the full quantum state remains pure. We directly measure entanglement entropy and observe how it assumes the role of the thermal entropy in thermalization. Although the full state remains measurably pure, entanglement creates local entropy that validates the use of statistical physics for local observables. In combination with number-resolved, single-site imaging, we demonstrate how our measurements of a pure quantum state agree with the Eigenstate Thermalization Hypothesis and thermal ensembles in the presence of a near-volume law in the entanglement entropy.
Note, given the original vN results I think the Eigenstate Thermalization Hypothesis is only of limited interest. [ But see comments for more discussion... ] The point is that this is a laboratory demonstration of pure state thermalization, anticipated in 1929 by vN.

Another aspect of quantum thermalization that is still not very well appreciated is that approach to equilibrium can have a very different character than what students are taught in statistical mechanics. The physical picture behind the Boltzmann equation is semi-classical: collisions between atoms happen in serial as two gases equilibrate. But Schrodinger evolution of the pure state (all the degrees of freedom together) toward typicality can take advantage of quantum parallelism: all possible collisions take place on different parts of the quantum superposition state. Consequently, the timescale for quantum thermalization can be much shorter than in the semi-classical Boltzmann description.
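To illustrate the dynamical statement as well (again my own toy model, not taken from the papers discussed here): evolve an initially unentangled pure state under a fixed random Hamiltonian and watch the entanglement entropy of a small subsystem rise and saturate near its maximal value, while the global state remains exactly pure. The random (GUE-like) Hamiltonian is an assumption made for simplicity; it has no locality structure, so the timescales are only qualitative.

import numpy as np

rng = np.random.default_rng(1)
dA, dB = 2, 64
dim = dA * dB

# Fixed random Hermitian Hamiltonian (GUE-like), no locality structure.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2

# Initial unentangled pure state |0>_A (x) |0>_B.
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

# Diagonalize once, so exp(-iHt)|psi0> is cheap for many times t.
E, V = np.linalg.eigh(H)
c0 = V.conj().T @ psi0

def entanglement_entropy_A(psi):
    # Entropy of rho_A = Tr_B |psi><psi| for a global pure state psi.
    m = psi.reshape(dA, dB)
    p = np.linalg.eigvalsh(m @ m.conj().T)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

for t in [0.0, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0]:
    psi_t = V @ (np.exp(-1j * E * t) * c0)   # exact Schrodinger evolution
    print(f"t = {t:5.2f}   S_A = {entanglement_entropy_A(psi_t):.4f}   (ln 2 = {np.log(2):.4f})")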

In 2015 my postdoc C.M. Ho (now director of an AI lab in Silicon Valley) and I pointed out that quantum thermalization was likely already realized in heavy ion collisions at RHIC and CERN, and that the quantum nature of the process was responsible for the surprisingly short time required to approach equilibrium (equivalently, to generate large amounts of entanglement entropy).

Entanglement and fast thermalization in heavy ion collisions (see also slides here).


Entanglement and Fast Quantum Thermalization in Heavy Ion Collisions (arXiv:1506.03696)

Chiu Man Ho, Stephen D. H. Hsu

Let A be a subsystem of a larger system A∪B, and ψ be a typical state from the subspace of the Hilbert space H_AB satisfying an energy constraint. Then ρ_A(ψ)=Tr_B |ψ⟩⟨ψ| is nearly thermal. We discuss how this observation is related to fast thermalization of the central region (≈A) in heavy ion collisions, where B represents other degrees of freedom (soft modes, hard jets, co-linear particles) outside of A. Entanglement between the modes in A and B plays a central role; the entanglement entropy S_A increases rapidly in the collision. In gauge-gravity duality, S_A is related to the area of extremal surfaces in the bulk, which can be studied using gravitational duals.



An earlier blog post Ulam on physical intuition and visualization mentioned the difference between intuition for familiar semiclassical (incoherent) particle phenomena, versus for intrinsically quantum mechanical (coherent) phenomena such as the spread of entanglement and its relation to thermalization.
[Ulam:] ... Most of the physics at Los Alamos could be reduced to the study of assemblies of particles interacting with each other, hitting each other, scattering, sometimes giving rise to new particles. Strangely enough, the actual working problems did not involve much of the mathematical apparatus of quantum theory although it lay at the base of the phenomena, but rather dynamics of a more classical kind—kinematics, statistical mechanics, large-scale motion problems, hydrodynamics, behavior of radiation, and the like. In fact, compared to quantum theory the project work was like applied mathematics as compared with abstract mathematics. If one is good at solving differential equations or using asymptotic series, one need not necessarily know the foundations of function space language. It is needed for a more fundamental understanding, of course. In the same way, quantum theory is necessary in many instances to explain the data and to explain the values of cross sections. But it was not crucial, once one understood the ideas and then the facts of events involving neutrons reacting with other nuclei.
This "dynamics of a more classical kind" did not require intuition for entanglement or high dimensional Hilbert spaces. But see von Neumann and the foundations of quantum statistical mechanics for examples of the latter.

Thursday, May 07, 2020

Robert Atkinson on US-China Competition and Industrial Policy - Manifold Episode #45



Steve and Corey talk with Robert Atkinson, President of the Information Technology and Innovation Foundation, about his philosophy of National Developmentalism. They discuss the history of industrial policy and mercantilism in the US and China. Why did the US lose 1/3 of its manufacturing jobs in the 2000s? How much was due to automation and how much to Chinese competition? Atkinson discusses US R&D and recommends policies that will help the US compete with China.

Other topics: Forced technology transfer, IP theft, semiconductors and Micron Technology (DRAM), why the WTO cannot handle misbehavior by China.

Transcript

Robert Atkinson (Bio)

Information Technology and Innovation Foundation (ITIF)

Big is Beautiful: Debunking the Mythology of Small Business (MIT Press, 2018)

Innovation Economics: The Race for Global Advantage (Yale, 2012)


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in a Neuroscience from Columbia. He held faculty positions at the University Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Sunday, May 03, 2020

QED and QCD theta angles, asymptotic boundary conditions in gauge theory

Warning: this post is for specialists.

I had reason to look back at the paper below recently, and thought I would write a longer post on the subject as I regularly see searches on "QED theta angle" and similar in my traffic logs. These readers may be unsatisfied with the standard textbook treatment of this topic.

The conventional thinking is that, because FF-dual in QED is a total derivative that does not affect the classical equations of motion and is not related to any topological vacuum structure, it cannot have physical consequences.
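For concreteness (normalization conventions vary), the term in question is the abelian theta term

S_\theta = \frac{\theta e^2}{32\pi^2} \int d^4x \, F_{\mu\nu}\tilde F^{\mu\nu} , \qquad \tilde F^{\mu\nu} \equiv \tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma} ,

and the total-derivative property is simply

F_{\mu\nu}\tilde F^{\mu\nu} = \partial_\mu K^\mu , \qquad K^\mu = 2\,\epsilon^{\mu\nu\rho\sigma} A_\nu \partial_\rho A_\sigma ,

so that \int F\tilde F (proportional to \int E·B) naively reduces to a surface term, sensitive only to the asymptotic behavior of the gauge potential.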


In the paper below I constructed gauge configurations in (3+1) dimensions that connect an initial configuration (e.g., vacuum state A=0) to two different final configurations A1 and A2 (e.g., in the far future). A1 and A2 differ by a gauge transformation (i.e., represent the same physical electric and magnetic fields), but the two (3+1) interpolating configurations are not gauge equivalent. By construction, the value of \int FF-dual is not the same when evaluated on the two (3+1) configurations. Thus the two trajectories have a relative phase weighting in the path integral which depends on the value of theta. This suggests that theta can have physical (though perhaps small and non-perturbative) effects, so it is indeed a fundamental physical parameter of QED and of the Standard Model of particle physics.
http://arxiv.org/abs/1107.0756

Theta terms and asymptotic behavior of gauge potentials in (3+1) dimensions

We describe paths in the configuration space of (3+1) dimensional QED whose relative quantum phase (or relative phase in the functional integral) depends on the value of the theta angle. The final configurations on the two paths are related by a gauge transformation but differ in magnetic helicity or Chern-Simons number. Such configurations must exhibit gauge potentials that fall off no faster than 1/r in some region of finite solid angle, although they need not have net magnetic charge (i.e., are not magnetic monopoles). The relative phase is proportional to theta times the difference in Chern-Simons number. We briefly discuss some possible implications for QCD and the strong CP problem.
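For orientation (a standard result, with convention-dependent normalization): in the abelian theory the Chern-Simons number of a spatial configuration is just the magnetic helicity,

N_{CS} \propto \int d^3x \, \epsilon^{ijk} A_i \partial_j A_k = \int d^3x \, \vec A \cdot \vec B ,

which is invariant under gauge transformations only up to surface terms. Final configurations with the same electric and magnetic fields can therefore differ in this quantity when the potentials fall off as slowly as 1/r, which is what the abstract above refers to.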
The question of whether physical observables can depend on the value of theta QED is somewhat esoteric. However, the analysis raises the issue of asymptotic boundary conditions in gauge theory. One typically expects that local properties of a quantum field theory are independent of the choice of boundary conditions when the size of the system is taken to infinity. But topological or total derivative terms such as FF-dual seem to defy this expectation.

The gauge potentials required for the construction described above must have A ~ 1/r behavior for some region of solid angle. In d=4 Euclidean space, one often restricts attention to potentials with faster-than-1/r falloff. In this way one obtains a topological classification of gauge configurations (i.e., in terms of instanton number). However, in Minkowski space (d=3+1) there exist classical solutions of non-Abelian gauge theory (e.g., SU(2); discovered by Luscher and Schechter) that exhibit 1/r falloff and are the analog of the U(1) configurations I described above. These L-S solutions have fractional topological charge.

In the presence of fractional topological charges the gauge theory partition function no longer appears periodic in theta. This may have consequences for the Strong CP problem in QCD, which are briefly discussed in the paper above.
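To make that last statement explicit (schematically, suppressing everything except the topological weight): if the partition function is organized by topological charge Q,

Z(\theta) = \sum_Q e^{i\theta Q} Z_Q ,

then Z(\theta + 2\pi) = Z(\theta) follows only if every contributing Q is an integer. Once configurations with fractional Q (such as the Luscher-Schechter type solutions) contribute, 2\pi periodicity in theta is no longer automatic.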

Note Added: Writing this blog post was beneficial -- thinking through the topic again allowed me to formulate the conclusions more clearly than in the original paper. I've replaced it on arXiv with a new version containing the additional material below. The observation at the end is related to Elitzur's Theorem -- gauge-variant operators must have zero average in the path integral.
