Pessimism of the Intellect, Optimism of the Will
Thursday, October 31, 2019
Manifold Podcast #22 Jamie Metzl on Hacking Darwin: Genetic Engineering and the Future of Humanity
Jamie Metzl joins Corey and Steve to discuss his new book, Hacking Darwin. They discuss detailed predictions for progress in genomic technology, particularly in human reproduction, over the coming decade: genetic screening of embryos will become commonplace, gene editing may become practical and more widely accepted, and stem cell technology may allow creation of unlimited numbers of eggs and embryos. Metzl is a technology futurist, geopolitics expert, and sci-fi novelist. He was appointed to the World Health Organization expert advisory committee on governance and oversight of human genome editing. Jamie previously served on the U.S. National Security Council, in the State Department, on the Senate Foreign Relations Committee, and as a Human Rights Officer for the United Nations in Cambodia. He holds a Ph.D. in Southeast Asian history from Oxford University and a J.D. from Harvard Law School.
Transcript
Hacking Darwin: Genetic Engineering and the Future of Humanity
Jamie Metzl (web site)
man·i·fold /ˈmanəˌfōld/ many and various.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.
Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide-ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.
Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant, and he is the founder of a medical diagnostics startup.
Wednesday, October 30, 2019
Future Investment Initiative (Riyadh)
I'm making my way home from the FUTURE INVESTMENT INITIATIVE 2019 in Riyadh, Saudi Arabia. At the moment I am sitting in a Lufthansa lounge at Frankfurt.
The annual event is sponsored by the Saudi sovereign wealth fund, or PIF (Public Investment Fund), which is one of the largest pools of capital in the world.
The meeting this year had an AI theme, and I spoke in the AI and Health (genomics) session. The mix of people was very interesting -- VC, hedge fund, and private equity investors (among other things, looking for allocations from PIF), tech entrepreneurs, policy and government people, etc. There was a large Chinese contingent at the meeting, and a strong Huawei presence. IIUC the telco infrastructure in the Kingdom uses a lot of Huawei gear.
I got a Star Wars cantina in business suits vibe from the thousands of attendees at the Ritz. The various global tribes were there in almost equal mixture -- Americans (Silicon Valley + NY money), Euro-grifters, money men, technologists, spooks, government suits, Chinese, Arabs, Indians, Russians (even Sputnik News). The Kingdom is really at the global crossroads.
Right away on the first day I "bumped into" someone from the US embassy. Her card says State, but I suspect another agency with three letters.
My hotel was in the DQ or Diplomatic Quarter, not far from the Ritz. The DQ is separated from the rest of the city by serious security checkpoints. The Saudi soldiers like to wear their pistols low on the thigh with cool looking black polymer "gunfighter" holsters.
See also The Geopolitics of US Global Decline: Beijing and Washington Struggle for Dominion over the World Island.
Kai-Fu Lee and Stephen Schwarzman dialog.
Our panel on AI and Health was held here:
The gala reception in the King Abdullah Financial District. An interesting little drone hovered above the crowd all evening.
My speaker pass. I had a driver and was able to get through the numerous security checkpoints quickly using this. MBS has his own elite Royal Guard, and they were in evidence at the event.
Over the summer I also spoke at the Tallinn Digital Summit and the World Congress on Information Technology in Yerevan, Armenia -- lots of travel! I haven't even had time to blog about these events. There are videos of my talks and panels that I will try to post at some point.
TDS 2019: Panel on AI social and political impacts
https://youtu.be/fddG7hQkkW4
TDS 2019 Parallel Breakout Sessions I: AI in Healthcare
https://youtu.be/atOnB1dW0OA
Tallinn Digital Summit YouTube Channel
https://www.youtube.com/channel/UC9ptGynkOPe3vFRW6otoI3g
Friday, October 25, 2019
Genomic Prediction of 16 Complex Disease Risks Including Heart Attack, Diabetes, Breast and Prostate Cancer (Nature Scientific Reports)
Published online today!
Genomic Prediction of 16 Complex Disease Risks Including Heart Attack, Diabetes, Breast and Prostate Cancer
Louis Lello, Timothy G. Raben, Soke Yuen Yong, Laurent C. A. M. Tellier & Stephen D. H. Hsu
Nature Scientific Reports volume 9, Article number: 15286 (2019)
We construct risk predictors using polygenic scores (PGS) computed from common Single Nucleotide Polymorphisms (SNPs) for a number of complex disease conditions, using L1-penalized regression (also known as LASSO) on case-control data from UK Biobank. Among the disease conditions studied are Hypothyroidism, (Resistant) Hypertension, Type 1 and 2 Diabetes, Breast Cancer, Prostate Cancer, Testicular Cancer, Gallstones, Glaucoma, Gout, Atrial Fibrillation, High Cholesterol, Asthma, Basal Cell Carcinoma, Malignant Melanoma, and Heart Attack. We obtain values for the area under the receiver operating characteristic curves (AUC) in the range ~0.58–0.71 using SNP data alone. Substantially higher predictor AUCs are obtained when incorporating additional variables such as age and sex. Some SNP predictors alone are sufficient to identify outliers (e.g., in the 99th percentile of polygenic score, or PGS) with 3–8 times higher risk than typical individuals. We validate predictors out-of-sample using the eMERGE dataset, and also with different ancestry subgroups within the UK Biobank population. Our results indicate that substantial improvements in predictive power are attainable using training sets with larger case populations. We anticipate rapid improvement in genomic prediction as more case-control data become available for analysis.
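The basic pipeline is easy to illustrate. Below is a minimal sketch (not our actual code) of an L1-penalized fit on case-control data, with the AUC evaluated on held-out individuals; everything in it, including the simulated genotypes, is made up for illustration.

```python
# Minimal sketch of an L1-penalized (LASSO) polygenic predictor.
# Hypothetical inputs: X is an (individuals x SNPs) matrix of minor-allele
# counts (0/1/2), y is case/control status (1/0). Illustrative only.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 5000, 2000                                   # toy sizes; real studies use far more SNPs
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:50] = rng.normal(0, 0.15, 50)                 # sparse true effects
risk = X @ beta
y = (risk + rng.normal(0, 1, n) > np.quantile(risk, 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = Lasso(alpha=0.01).fit(X_tr, y_tr)           # penalty strength set by cross-validation in practice
pgs = X_te @ model.coef_                            # polygenic score = linear combination of SNPs
print("nonzero SNP weights:", np.sum(model.coef_ != 0))
print("out-of-sample AUC:", roc_auc_score(y_te, pgs))
```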
From the Discussion:

The significant heritability of most common disease conditions implies that at least some of the variance in risk is due to genetic effects. With enough training data, modern machine learning techniques enable us to construct polygenic predictors of risk. A learning algorithm with enough examples to train on can eventually identify individuals, based on genotype alone, who are at unusually high risk for the condition. This has obvious clinical applications: scarce resources for prevention and diagnosis can be more efficiently allocated if high risk individuals can be identified while still negative for the disease condition. This identification can occur early in life, or even before birth.
In this paper we used UK Biobank data to construct predictors for a number of conditions. We conducted out of sample testing using eMERGE data (collected from the US population) and adjacent ancestry (AA) testing using UK ethnic subgroups distinct from the training population. The results suggest that our polygenic scores indeed predict complex disease risk - there is very strong agreement in performance between the training and out of sample testing populations. Furthermore, in both the training and test populations the distribution of PGS is approximately Gaussian, with cases having on average higher scores. We verify that, for all disease conditions studied, a simple model of displaced Gaussian distributions predicts empirically observed odds ratios (i.e., individual risk in test population) as a function of PGS. This is strong evidence that the polygenic score itself, generated for each disease condition using machine learning, is indeed capturing a nontrivial component of genetic risk.
By varying the amount of case data used in training, we estimate the rate of improvement of polygenic predictors with sample size. Plausible extrapolations suggest that sample sizes readily within reach of population genetics studies will result in predictors of significant clinical utility. ...
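The displaced-Gaussian model mentioned above is simple enough to work out directly: if case and control score distributions are unit-variance normals with shifted means, the risk at a given PGS follows from Bayes' rule. A sketch with illustrative numbers (the displacement and prevalence below are made up, not values from the paper):

```python
# Risk vs polygenic score under a simple displaced-Gaussian model.
# Assumes case and control PGS are N(mu_case, 1) and N(0, 1); the shift
# and prevalence are illustrative, not numbers from the paper.
import numpy as np
from scipy.stats import norm

mu_case, prevalence = 0.6, 0.05        # hypothetical displacement and disease prevalence

def risk_at_score(z):
    """P(case | PGS = z) from Bayes' rule with the two Gaussian densities."""
    like_case = norm.pdf(z, loc=mu_case, scale=1.0)
    like_ctrl = norm.pdf(z, loc=0.0, scale=1.0)
    return prevalence * like_case / (prevalence * like_case + (1 - prevalence) * like_ctrl)

z99 = norm.ppf(0.99)                   # ~99th percentile of the (control) PGS distribution
typical = risk_at_score(0.0)
outlier = risk_at_score(z99)
print(f"risk at median PGS: {typical:.3f}")
print(f"risk at 99th percentile: {outlier:.3f}  ({outlier/typical:.1f}x typical)")
```

With these made-up parameters the 99th-percentile individual comes out a few times riskier than a typical one, which is the kind of ratio quoted in the abstract.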
Thursday, October 17, 2019
Manifold Podcast #21: Tyler Cowen on Big Business, Socialism, Free Speech, and Stagnant Productivity Growth
Polymath and economist Tyler Cowen (Holbert L. Harris Professor at GMU) joins Steve and Corey for a wide-ranging discussion. Are books just for advertising? Have blogs peaked? Are podcasts the future or just a bubble? Is technological change slowing? Is there less political correctness in China than the US? Tyler's new book, an apologia for big business, inspires a discussion of CEO pay and changing public attitudes toward socialism. They investigate connections between populism, stagnant wage growth, income inequality and immigration. Finally, they discuss the future global order and trajectories of the US, EU, China, and Russia.
Transcript
Personal Website
Marginal Revolution (Blog)
Conversations with Tyler (Podcast)
Tyler Cowen | Bloomberg Opinion Columnist
Big Business: A Love Letter to an American Anti-Hero (Book)
man·i·fold /ˈmanəˌfōld/ many and various.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.
Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide-ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.
Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant, and he is the founder of a medical diagnostics startup.
Wednesday, October 16, 2019
Brexit: Down to the Wire
Get ready for the general election!
Over the summer I was at the Tallinn Digital Summit in Estonia. At dinner, sitting across from a UN official, I argued (to his initial incredulity) that the victory of Vote Leave three years ago was a triumph of the human spirit: a small team of talented individuals defeated the overwhelmingly powerful forces arrayed against them -- the UK government, the media, the elites. After some discussion, he came to understand my perspective.
This time the good guys are in Number 10 and the odds are in their favor in the coming election.
Having some insight into what is happening in UK politics, I can assure you that most of what is reported in the media is junk. Sometimes the story is deliberately distorted; sometimes it's just stupidity at work. But you are foolish if you trust the media, in the UK or the US. Just as we know that ~50% of published results in biomedical or social psychology journals fail to replicate, a simple look back shows that information from the media is highly unreliable.
I was struck by this photo today in the NYTimes:
A Brexit meeting of European Union ministers on Tuesday in Luxembourg
Compare to Kubrick's Dr. Strangelove :-)
Friday, October 11, 2019
The Quantum Simulation Hypothesis: Do we live in a quantum multiverse simulation?
The Simulation Hypothesis is the idea that our universe might be part of a simulation: we are not living in base reality. (See, e.g., earlier discussion here.)
There are many versions of the argument supporting this hypothesis, which has become more plausible (or at least more popular) over time as computational power, and our familiarity with computers and virtual worlds within them, has increased.
Modern cosmology suggests that our universe, our galaxy, and our solar system have billions of years ahead of them, during which our civilization (currently only ~10ky old!), and others, will continue to evolve. It seems reasonable that technology and science will continue to advance, delivering ever more powerful computational platforms. Within these platforms it is likely that quasi-realistic simulations, of our world or of imagined worlds (e.g., games), will be created, many populated by AI agents or avatars. The number of simulated beings could eventually be much larger than the number of biologically evolved sentient beings. Under these assumptions, it is not implausible that we ourselves are actually simulated beings, and that our world is not base reality.
One could object to using knowledge about our (hypothetically) simulated world to reason about base reality. However, the one universe that we have direct observational contact with seems to permit the construction of virtual worlds with large populations of sentient beings. While our simulation may not be entirely representative of base reality, it nevertheless may offer some clues as to what is going on "outside"!
The simulation idea is very old. It is almost as old as computers themselves. However, general awareness of the argument has increased significantly, particularly in the last decade. It has entered the popular consciousness, transcending its origins in the esoteric musings of a few scientists and science fiction authors.
The concept of a quantum computer is relatively recent -- one can trace the idea back to Richard Feynman's early-1980s Caltech course: Physical Limits to Computation. Although quantum computing has become a buzzy part of the current hype cycle, very few people have any deep understanding of what a quantum computer actually is, and why it is different from a classical computer. A prerequisite for this understanding is a grasp of both the physical and mathematical aspects of quantum mechanics, which very few possess. Individuals who really understand quantum computing tend to have backgrounds in physics (often theoretical physics), or perhaps computer science or mathematics.
The possibility of quantum computers requires that we reformulate the Simulation Hypothesis in an important way. If one is willing to posit future computers of gigantic power and complexity, why not quantum computers of arbitrary power? And why not simulations which run on these quantum computers, making use of quantum algorithms? After all, it was Feynman's pioneering observation that certain aspects of the quantum world (our world!) are more efficiently simulated using a quantum computer than a classical (e.g., Turing) machine. (See quantum extension of the Church-Turing thesis.) Hence the original Simulation Hypothesis should be modified to the Quantum Simulation Hypothesis: Do we live in a quantum simulation?
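To see why Feynman's observation bites, consider the cost of brute-force classical simulation: an n-qubit state is a vector of 2^n complex amplitudes, so the memory required grows exponentially. A few lines of arithmetic make the point:

```python
# Classical cost of brute-force state-vector simulation of n qubits:
# 2**n complex amplitudes, counted here at 16 bytes each (complex128).
# For scale, the observable universe contains roughly 1e80 atoms.
for n in (10, 30, 50, 100, 300):
    bytes_needed = 16.0 * 2 ** n
    print(f"{n:>3} qubits: 2^{n} amplitudes ~ {bytes_needed:.3e} bytes")
```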
There is an important consequence for those living in a quantum simulation: they exist in a quantum multiverse. That is, in the (simulated) universe, the Many Worlds description of quantum mechanics is realized. (It may also be realized in base reality, but that is another issue...) Within the simulation, macroscopic, semiclassical brains perceive only one branch of the almost infinite number of decoherent branches of the multiverse. But all branches are realized in the execution of the unitary algorithm running on qubits. The power of quantum computing, and the difficulty of its realization, both derive from the requirement that entanglement and superposition be maintained in execution.
Given sufficiently powerful tools, the beings in the simulation could test whether quantum evolution of qubits under their control is unitary, thereby verifying the absence of non-unitary wavefunction collapse, and the existence of other branches (see, e.g., Deutsch 1986).
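Here is a toy state-vector illustration of the logic (not Deutsch's actual protocol), under the assumption that a "measurement" can be modeled as a reversible CNOT coupling the system to a single memory qubit. If evolution is unitary, the coupling can be undone and full interference recovered; objective collapse would leave only 50/50 statistics.

```python
import numpy as np

# Gates; qubit 0 = system, qubit 1 = memory, basis order |00>,|01>,|10>,|11>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])          # control = system, target = memory

# Start: system in |+> = H|0>, memory in |0>.
state = np.kron(H @ np.array([1.0, 0.0]), np.array([1.0, 0.0]))

state = CNOT @ state                     # unitary "measurement": memory records the system
state = CNOT @ state                     # reversal, possible only if evolution is unitary
state = np.kron(H, I) @ state            # rotate the system back

p_system_0 = abs(state[0])**2 + abs(state[1])**2
print("P(system = 0) with unitary evolution:", round(p_system_0, 6))   # -> 1.0

# If the first CNOT had truly collapsed the joint state to |00> or |11>,
# the same reversal would give P(system = 0) = 0.5 instead of 1.0.
```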
We can give an anthropic version of the argument as follows.
1. The physical laws and cosmological conditions of our universe seem to permit the construction of large numbers of virtual worlds containing sentient beings.
2. These simulations could run on quantum computers, and in fact if the universe being simulated obeys the laws of quantum physics, the hardware of choice is a quantum computer. (Perhaps the simulation must be run on a quantum computer!)
If one accepts points 1 and 2 as plausible, then: Conditional on the existence of sentient beings who have discovered quantum physics (i.e., us), the world around them is likely to be a simulation running on a quantum computer. Furthermore, these beings exist on a branch of the quantum multiverse realized in the quantum computer, obeying the rules of Many Worlds quantum mechanics. The other branches must be there, realized in the unitary algorithm running on (e.g., base reality) qubits.
See also
Gork revisited 2018
Are You Gork?
Big Ed
Tuesday, October 08, 2019
AI in the Multiverse: Intellects Vast and Cold
In quantum mechanics the state of the universe evolves deterministically: the state of the entire universe at time zero fully determines its state at any later time. It is difficult to reconcile this with our experience as macroscopic, nearly classical, beings. To us there seem to be random outcomes: the state of an electron (spin-up along the z direction) does not in general determine the outcome of a measurement of its spin (a measurement along the x direction yields spin up or spin down, each with probability 1/2). This is because our brains (information processing devices) are macroscopic: one macroscopic state (memory record) is associated with the spin-up outcome, and it rapidly loses contact with (decoheres from) the other macroscopic state, whose memory record is of the spin-down outcome. Nevertheless, the universe state, obtained from deterministic Schrodinger evolution of the earlier state, is a superposition:
| brain memory recorded up, spin up >
+
| brain memory recorded down, spin down >.
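A toy numerical version of the above, treating the measurement interaction as a CNOT that copies the spin (in the measurement basis) into a "memory" qubit: deterministic unitary evolution produces exactly this two-branch state, and tracing out the spin leaves the memory in a diagonal mixture -- each record has decohered from the other.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])              # spin up along z
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],           # control = spin, target = memory
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Deterministic Schrodinger evolution of (spin x memory):
# rotate to the x measurement basis, then let the "brain" record the result.
spin = H @ ket0                          # |+x>, written in the measurement basis
state = CNOT @ np.kron(spin, ket0)       # -> (|up, recorded up> + |down, recorded down>)/sqrt(2)
print("universe state amplitudes:", np.round(state, 3))

# Reduced state of the memory qubit: trace out the spin.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)   # indices: spin, mem, spin', mem'
rho_memory = np.einsum('imin->mn', rho)                    # partial trace over the spin
print("memory density matrix:\n", np.round(rho_memory, 3))
# Off-diagonal elements vanish: each memory record sees only its own branch.
```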
We are accustomed to thinking about classical information processing machines: brains and computers. However, with the advent of quantum computers a new possibility arises: a device which (necessarily) resides in a superposition state, and uses superposition as an integral part of its information processing.
What can we say about this kind of (quantum) intelligence? Must it be "artificial"? Could there be a place in the multiverse where evolved biological beings use superposition and entanglement as a resource for information processing?
Any machine of the type described above must be vast and cold. Vast, because many qubits are required for self-awareness and consciousness (just as many bits are required for classical AI). Cold, because decoherence destroys connections across superpositions. Too much noise (heat), and it devolves back to isolated brains, residing on decohered branches of the wavefunction.
One could regard human civilization as a single intelligence or information processing machine. This intelligence is rapidly approaching the point where it will start to use entanglement as a significant resource. It is vast, and (in small regions -- in physics labs) cold enough. We can anticipate more and larger quantum computers distributed throughout our civilization, making greater and greater use of nearby patches of the multiverse previously inaccessible.
Perhaps some day a single quantum computer might itself be considered intelligent -- the first of a new kind!
What will it think?
Consciousness in a mini multiverse... Thoughts which span superpositions.
See also Gork revisited 2018 and Are You Gork?
Monday, October 07, 2019
Combat Drones
These are inexpensive, slow-moving drones -- but potentially quite effective. The Turkish drone should have "lock in" capability on stationary targets, so that the radio link to the operator is unnecessary near the end of the flight (i.e., the drone is invulnerable to jamming near the target).
A larger weapon, such as an ASBM (Anti-Ship Ballistic Missile) or a bigger UAV, would not need the operator to perform the targeting -- it could have enough onboard AI/ML to recognize an aircraft carrier from ~10 km away (e.g., using some combination of visual, IR, and radar imaging). Given a satellite fix on the carrier's location, just launch to that coordinate and let the AI/ML do the final targeting.
See also
Death from the Sky: Drone Assassination
Assassination by Drone
Strategic Implications of Drone/Missile Strikes on Saudi Arabia
Thursday, October 03, 2019
Manifold Podcast #20: Betsy McKay (WSJ) on Heart Disease and Health
Steve and Corey talk to Betsy McKay, senior writer on U.S. and global public health at The Wall Street Journal, about her recent articles on heart disease. Betsy describes how background reporting led to her article linking the recent drop in life expectancy in the United States, often attributed to the opioid crisis or to increases in middle-age suicides due to economic despair, to the increasing prevalence of heart disease, driven by the rise in obesity. The three also discuss current public health recommendations on how to reduce heart disease risk and on the use of calcium scans to assess arterial plaque buildup. Steve describes boutique medical programs available to the super-rich that include full-body scans to search for early signs of disease. Betsy elaborates on how she approached reporting on a new study linking egg consumption to higher cholesterol and increased risk of death, a result at odds with other recent findings and with national recommendations that two eggs a day are safe and healthy. Finally, they consider whether people are wasting money on fish oil supplements.
[ At about 20m I discuss how I got on the keto diet... ]
Transcript
Death Rates Rising for Young, Middle-Aged U.S. Adults
How to Reduce Your Risk of Heart Disease
New BP guidelines: elevated blood pressure is 120-129 mmHg systolic (with diastolic below 80), Stage 1 hypertension is 130-139/80-89, and Stage 2 is 140/90 or above.
New ACC/AHA High Blood Pressure Guidelines Lower Definition of Hypertension
Heart Attack at 49—America’s Biggest Killer Makes a Deadly Comeback
Study Links Eggs to Higher Cholesterol and Risk of Heart Disease
Fish Oil: Hunting for Evidence to Tip the Scales
Don’t Use Bootleg or Street Vaping Products, C.D.C. Warns
man·i·fold /ˈmanəˌfōld/ many and various.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.
Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide-ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.
Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant, and he is the founder of a medical diagnostics startup.
Wednesday, October 02, 2019
Harvard Discrimination Lawsuit: Judge Burroughs on Racial Balancing and "Unhooked" Applicants
As has been widely reported (WSJ):
U.S. District Judge Allison Burroughs found that Harvard’s practices were “not perfect” and could use improvements, including implicit bias training for admissions officers, but said “the Court will not dismantle a very fine admissions program that passes constitutional muster, solely because it could do better.”

I anticipate that this case will end up before the Supreme Court.
While I have not read the entire decision (PDF), I was curious to see how two important arguments made by the plaintiffs (Students For Fair Admissions, SFFA) were addressed. You can evaluate Burroughs' logic and use of evidence for yourself. In the excerpts below I first quote from the SFFA filing, and then from the decision.
Issue #1: Racial Balancing:
SFFA: ... Harvard is engaging in racial balancing. Over an extended period, Harvard’s admission and enrollment figures for each racial category have shown almost no change. Each year, Harvard admits and enrolls essentially the same percentage of African Americans, Hispanics, whites, and Asian Americans even though the application rates and qualifications for each racial group have undergone significant changes over time. This is not the coincidental byproduct of an admissions system that treats each applicant as an individual; indeed, the statistical evidence shows that Harvard modulates its racial admissions preference whenever there is an unanticipated change in the yield rate of a particular racial group in the prior year. Harvard’s remarkably stable admissions and enrollment figures over time are the deliberate result of systemwide intentional racial discrimination designed to achieve a predetermined racial balance of its student body.

This is a relevant figure from the Economist. It shows the increase in Asian representation at Caltech (mostly race-neutral admissions), tracking the overall population of college-age Asian Americans, versus the suspicious Ivy League convergence at 15-20% of each class.
From page 80 of the decision:
Although Harvard tracks and considers various indicators of diversity in the admissions process, including race, the racial composition of Harvard’s admitted classes has varied in a manner inconsistent with the imposition of a racial quota or racial balancing. See [Oct. 31 Tr. 119:10–121:10; DX711]. As Figures 1 and 2 show, there has been considerable year-to-year variation in the portion of Harvard’s class that identifies as Asian American since at least 1980. [ italics mine ]

Figure 1 seems merely to show that admittance by race tends to fluctuate by 5-10% from year to year. There is no attempt to analyze correlations across years -- i.e., to detect racial balancing.
Figure 2 seems to show that Asian American applicants are a smaller fraction of the class relative to their share of the applicant pool, whereas, e.g., this ratio is reversed for African Americans. Racial balancing would be found only in detailed comparisons of these ratios across several years, adjusting for strength of application, etc.
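To make concrete what such a comparison might involve, here is a minimal sketch (mine, not an analysis from the case; the file, column names, and any numbers are hypothetical): compute each group's admitted share relative to its applicant share by year, then ask whether admit rates move to offset the prior year's yield surprise, which is the pattern SFFA alleges.

```python
import pandas as pd

# Hypothetical admissions data, one row per (year, group):
# columns applicants, admits, matriculants. Illustrative only.
df = pd.read_csv("admissions_by_year_and_group.csv")

df["admit_share"] = df["admits"] / df.groupby("year")["admits"].transform("sum")
df["applicant_share"] = df["applicants"] / df.groupby("year")["applicants"].transform("sum")
df["representation_ratio"] = df["admit_share"] / df["applicant_share"]

# Yield = matriculants / admits; "surprise" = deviation from the group's long-run mean.
df["yield"] = df["matriculants"] / df["admits"]
df["yield_surprise"] = df["yield"] - df.groupby("group")["yield"].transform("mean")

# Does this year's admit rate move against last year's yield surprise?
df = df.sort_values(["group", "year"])
df["admit_rate"] = df["admits"] / df["applicants"]
df["prev_yield_surprise"] = df.groupby("group")["yield_surprise"].shift(1)
df["d_admit_rate"] = df.groupby("group")["admit_rate"].diff()

corr = df[["prev_yield_surprise", "d_admit_rate"]].dropna().corr().iloc[0, 1]
print("corr(prior-year yield surprise, change in admit rate):", round(corr, 3))
# A consistently negative correlation (after controlling for applicant strength)
# would be the signature of balancing; a near-zero one would cut the other way.
```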
Rather than giving a serious analysis of racial balancing (is it actually happening?), Burroughs seems to explicitly support the practice in her comments on racial diversity:
p.30 To summarize the use of race in the admissions process, Harvard does not have a quota for students from any racial group, but it tracks how each class is shaping up relative to previous years with an eye towards achieving a level of racial diversity that will provide its students with the richest possible experience. It monitors the racial distribution of admitted students in part to ensure that it is admitting a racially diverse class that will not be overenrolled based on historic matriculation rates which vary by racial group. [ Isn't this just a definition of racial balancing? ]

Quota Bad, Soft-Quota Good! Is this now the law of the land in the United States of America? SCOTUS here we come...
Issue #2: Is discrimination against Asian Americans especially obvious when one considers "unhooked" applicants separately?
SFFA: ... The task here is to determine whether “similarly situated” applicants have been treated differently on the basis of race; “apples should be compared to apples.” SBT Holdings, LLC v. Town of Westminster, 547 F.3d 28, 34 (1st Cir. 2008). Because certain applicants are in a special category, it is important to analyze the effect of race without them included. Excluding them allows for the effect of race to be tested on the bulk of the applicant pool (more than 95% of applicants and more than two-thirds of admitted students) that do not fall into one of these categories, i.e., the similarly situated applicants. For special-category applicants, race either does not play a meaningful role in their chances of admission or the discrimination is offset by the “significant advantage” they receive. Either way, they are not apples.

The judge seems to have ignored or rejected the claim that discrimination within the pool of unhooked applicants (95% of the total!) is worth considering on its own. This seems to be an entirely legal (as opposed to statistical) question that may be tested in the appeal. (ALDC = Athletes, Legacies, Dean's interest list (donors), and Children of Harvard faculty.)
Professor Card’s inclusion of these applicants reflects his position that “there is no penalty against Asian-American applicants unless Harvard imposes a penalty on every Asian-American applicant.” But he is not a lawyer and he is wrong. It is illegal to discriminate against any Asian-American applicant or subset of applicants on the basis of race. Professor Card cannot escape that reality by trying to dilute the dataset. The claim here is not that Harvard, for example, “penalizes recruited athletes who are Asian-American because of their race.” The claim “is that the effects of Harvard’s use of race occur outside these special categories.” Professor Arcidiacono thus correctly excluded special-category applicants to isolate and highlight Harvard’s discrimination against Asian Americans. Professor Card, by contrast, includes “special recruiting categories in his models” to “obscure the extent to which race is affecting admissions decisions for those not fortunate enough to belong to one of these groups.” At bottom, SFFA’s claim is that Harvard penalizes Asian-American applicants who are not legacies or recruited athletes. Professor Card has shown that he is unwilling and unable to contest that claim.
p.52 Although ALDCs represent only a small portion of applicants and are admitted or rejected through the same admissions process that applies to other applicants, they account for approximately 30% of Harvard’s admitted class. [Oct. 30 Tr. 153:6–154:8, DX706; DD10 at 38, 40]. For reasons discussed more fully infra at Section V.F, the Court agrees with Professor Card that including ALDCs in the statistics and econometric models leads to more probative evidence of the alleged discrimination or lack thereof.

See also Former Yale Law Dean on Harvard anti-Asian discrimination case: The facts are just so embarrassing to Harvard... Quotas and a climate of dishonesty, and comments therein.