
Thursday, December 01, 2022

Anna Krylov: The Politicization of Science in Academia — Manifold #25

 

Anna I. Krylov (Russian: Анна Игоревна Крылова) is Professor of Chemistry at the University of Southern California (USC), working in the field of theoretical and computational quantum chemistry. Krylov is an outspoken advocate of freedom of speech and academic freedom. She is a founding member of the Academic Freedom Alliance and a member of its academic leadership committee. 

Her paper, The Peril of Politicizing Science, launched a national conversation among scientists and the general public on the growing influence of political ideology in STEM. It has received over 80,000 views and, according to Altmetric, was the all-time highest-ranked article in the Journal of Physical Chemistry Letters. 

Steve and Anna discuss: 

0:00 Anna Krylov’s background, upbringing in USSR 
7:03 Ideological control and censorship for the greater good? 
14:59 How ideology underpins DEI work in academic institutions 
30:40 Captured institutions 
37:05 How much is UC Berkeley spending on DEI, and where the money is going 
41:46 Krylov thinks it can get worse 
52:09 An idea for defeating preference falsification at universities 



Resources: 

Professor Krylov academic page: 

Wiki page: 

The Peril of Politicizing Science, Journal of Physical Chemistry Letters 2021 https://pubs.acs.org/doi/10.1021/acs.jpclett.1c01475

Wednesday, September 28, 2022

The Future of Human Evolution -- excerpts from podcast interview with Brian Chau



1. The prospect of predicting cognitive ability from DNA, and the consequences. Why the main motivation has nothing to do with group differences. This segment begins at roughly 47 minutes. 

2. Anti-scientific resistance to research on the genetics of cognitive ability. My experience with the Jasons. Blank Slate-ism as a sacralized, cherished belief of social progressives. This segment begins at roughly 1 hour 7 minutes. 


1. Starts at roughly 47 minutes. 

Okay, let's just say hypothetically my billionaire friend is buddies with the CEO of 23andMe and let's say on the down low we collected some SAT scores of 1M or 2M people. I think there are about 10M people that have done 23andMe, let's suppose I manage to collect 1-2M scores for those people. I get them to opt in and agree to the study and da da da da and then Steve runs his algos and you get this nice predictor. 

But you’ve got to do it on the down low. Because if it leaks out that you're doing it, people are going to come for you. The New York Times is going to come for you, everybody's going to come for you. They're going to try to trash the reputation of 23andMe. They're going to trash the reputation of the billionaire. They're going to trash the reputation of the scientists who are involved in this. But suppose you get it done. And getting it done, as you know very well, is a simple run on AWS and you end up with this predictor which wow it's really complicated it depends on 20k SNPs in the genome ... 

Anybody with an ounce of intellectual integrity would look back at their copy of The Mismeasure of Man, which has sat magisterially on their bookshelf since they were forced to buy it as a freshman at Harvard. They would say, “WOW! I guess I can just throw that in the trash, right? I can just throw that in the trash.” 

But the set of people who have intellectual integrity and can process new information and then reformulate the opinion that they absorbed through social convention – i.e., that Gould is a good person and a good scientist and wise -- is tiny. The set of people who can actually do that is like 1% of the population. So you know maybe none of this matters, but in the long run it does matter. … 

Everything else about that hypothetical -- the social scientist running the longitudinal study, getting the predictor in his grubby little hands and publishing the validation, people then trying to force you to studiously ignore the results -- all of that has actually already happened. We already have something which correlates ~0.4 with IQ. Everything else I said has already been done, but it's just being studiously ignored by the right-thinking people. 

 … 

Some people could misunderstand our discussion as being racist. I'm not saying that any of this has anything to do with group differences between ancestry groups. I'm just saying, e.g., within the white population of America, it is possible to predict from embryo DNA which of 2 brothers raised in the same family will be the smart one and which one will struggle in school. Which one will be the tall one and which one will be not so tall. 



2. Starts at roughly 1 hour 7 minutes. 

I've been in enough seminar rooms and conferences where this kind of research is presented, and I've seen very negative attacks on the individuals presenting the results. 

I'll give you a very good example. There used to be a thing called the Jasons. During the cold war there was a group of super smart scientists called the Jasons. They were paid by the government to get together in the summers and think about technological issues that might be useful for defense and things like war fighting. … 

I had a meeting with the (current) Jasons. I was invited to a place near Stanford to address them about genetic engineering, genomics, and all this stuff. I thought okay these are serious scientists and I'll give them a very nice overview of the progress in this field. This anecdote takes place just a few years ago. 

One of the Jasons present is a biochemist but not an expert on genomics or machine learning. This biochemist asked me a few sharp questions which were easy to answer. But then at some point he just can't take it anymore and he grabs all his stuff and runs out of the room. ...

Thursday, May 05, 2022

Raghuveer Parthasarathy: Four Physical Principles and Biophysics -- Manifold podcast #11

 

Raghu Parthasarathy is the Alec and Kay Keith Professor of Physics at the University of Oregon. His research focuses on biophysics, exploring systems in which the complex interactions between individual components, such as biomolecules or cells, can give rise to simple and robust physical patterns. 

Raghu is the author of a recent popular science book, So Simple a Beginning: How Four Physical Principles Shape Our Living World. 


Steve and Raghu discuss: 

0:00 Introduction 

1:34 Early life, transition from Physics to Biophysics 

20:15 So Simple a Beginning: discussion of the Four Physical Principles in the title, which govern biological systems 

26:06 DNA prediction 

37:46 Machine learning / causality in science 

46:23 Scaling (the fourth physical principle) 

54:12 Who the book is for and what high schoolers are learning in their bio and physics classes 

1:05:41 Science funding, grants, running a research lab 

1:09:12 Scientific careers and radical sub-optimality of the existing system 



Resources: 


Raghuveer Parthasarathy's lab at the University of Oregon - https://pages.uoregon.edu/raghu/ 
 
Raghuveer Parthasarathy's blog the Eighteenth Elephant - https://eighteenthelephant.com/


Added from comments:
key holez • 2 days ago 
It was a fascinating episode, and I immediately went out and ordered the book! One question that came to mind: given how much of the human genome is dedicated to complex regulatory mechanisms and not proteins as such, it seems unintuitive to me that so much of heritability seems to be additive. I would have thought that in a system with lots of complicated, messy on/off switches, small genetic differences would often lead to large phenotype differences -- but if what I've heard about polygenic prediction is right, then, empirically, assuming everything is linear seems to work just fine (outside of rare variants, maybe). Is there a clear explanation for how complex feedback patterns give rise to linearity in the end? Is it just another manifestation of the central limit theorem...?
steve hsu 
This is an active area of research. It is somewhat surprising even to me how well linearity / additivity holds in human genetics. Searches for non-linear effects on complex traits have been largely unsuccessful -- i.e., in the sense that most of the variance seems to be controlled by additive effects. By now this has been investigated for large numbers of traits including major diseases, quantitative traits such as blood biomarkers, height, cognitive ability, etc. 
One possible explanation is that because humans are so similar to each other, and have passed through tight evolutionary bottlenecks, *individual differences* between humans are mainly due to small additive effects, located both in regulatory and coding regions. 
To genetically edit a human into a frog presumably requires many changes in loci with big nonlinear effects. However, it may be the case that almost all such genetic variants are *fixed* in the human population: what makes two individuals different from each other is mainly small additive effects. 
Zooming out slightly, the implications for human genetic engineering are very positive. Vast pools of additive variance means that multiplex gene editing will not be impossibly hard...
This topic is discussed further in the review article: https://arxiv.org/abs/2101.05870
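
To make the linearity point concrete, here is a minimal simulation; this is my own sketch with invented parameters, not anything from the papers or data discussed above. Many small-effect variants feed a globally nonlinear developmental map, yet a purely additive predictor still captures most of the heritable variance, because individual differences only probe a narrow, nearly linear region of that map.

```python
# Minimal sketch (my own construction, not from the discussion above): even if
# the genotype -> phenotype map is globally nonlinear, individual differences
# driven by many small-effect variants can look almost perfectly additive.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_people, n_snps = 6000, 500

# Genotypes: 0/1/2 minor-allele counts at common SNPs
G = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# Small per-SNP effects feed a latent "burden" score ...
beta = rng.normal(0, 1 / np.sqrt(n_snps), size=n_snps)
burden = G @ beta

# ... which passes through a nonlinear (saturating) developmental map, plus noise
phenotype = np.tanh(burden) + rng.normal(0, 0.3, size=n_people)

# Fit a purely additive (linear) model and test it on held-out people
train, test = slice(0, 5000), slice(5000, None)
model = LinearRegression().fit(G[train], phenotype[train])
r = np.corrcoef(model.predict(G[test]), phenotype[test])[0, 1]
print(f"held-out correlation of the additive predictor: {r:.2f}")
# Each variant nudges the burden only slightly, so the population occupies a
# narrow, nearly linear stretch of the tanh curve and additivity captures most
# of the heritable variance -- the central-limit-theorem intuition above.
```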

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)




Sadly, the hypothesis described below is very plausible. 

The exception is new tools and technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), which may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
 
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by genuine insight or the capability to produce real results.
 
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
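Here is a minimal agent-based sketch of that idea; it is my own construction with made-up parameters, not a specific published model, but it shows how weak experiments plus conformist updating can lock a field into a false consensus.

```python
# Minimal agent-based sketch of the toy-model idea above (my own construction):
# noisy experiments plus conformist belief updating can trap a field in a
# false consensus.
import numpy as np

rng = np.random.default_rng(1)

def run_field(n=200, rounds=5000, accuracy=0.55, conformity=0.9):
    """accuracy: P(an experiment points to the true answer).
    conformity: P(a scientist adopts the majority view instead of the data)."""
    beliefs = rng.integers(0, 2, n)          # 1 = believes the (true) claim
    for _ in range(rounds):
        i = rng.integers(n)
        if rng.random() < conformity:        # bounded cognition / herding
            beliefs[i] = 1 if beliefs.mean() > 0.5 else 0
        else:                                # update on a noisy experiment
            beliefs[i] = 1 if rng.random() < accuracy else 0
    return beliefs.mean()                    # fraction holding the true belief

for acc, conf in [(0.9, 0.3), (0.55, 0.9)]:
    runs = [run_field(accuracy=acc, conformity=conf) for _ in range(20)]
    print(f"accuracy={acc}, conformity={conf}: "
          f"{np.mean([r < 0.5 for r in runs]):.0%} of runs end in false consensus")
```

With accurate experiments and weak conformity the field converges on the truth; with marginal experiments and strong herding, whichever view happens to gain an early majority tends to become locked in, true or not.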
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
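To make "swallowing whole" concrete, the standard textbook expansion showing that relativistic energy reduces to the Newtonian expression at low velocity:

```latex
% Relativistic energy expanded for v << c:
E \;=\; \gamma m c^2 \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \;=\; m c^2\left(1 + \tfrac{1}{2}\tfrac{v^2}{c^2} + \tfrac{3}{8}\tfrac{v^4}{c^4} + \cdots\right)
  \;\approx\; m c^2 + \tfrac{1}{2} m v^2 .
% The Newtonian kinetic energy survives as the leading velocity-dependent term,
% so the older theory remains valid (and usable) in its original domain.
```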
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.

Wednesday, May 26, 2021

How Dominic Cummings And The Warner Brothers Saved The UK




Photo above shows the white board in the Prime Minister's office which Dominic Cummings and team (including the brothers Marc and Ben Warner) used to convince Boris Johnson to abandon the UK government COVID herd immunity plan and enter lockdown. Date: March 13 2020. 

Only now can the full story be told. In early 2020 the UK government had a COVID herd immunity plan in place that would have resulted in disaster. The scientific experts (SAGE) advising the government strongly supported this plan -- there are public, on the record briefings to this effect. These are people who are not particularly good at order of magnitude estimates and first-principles reasoning. 

Fortunately Dom was advised by the brothers Marc and Ben Warner (both physics PhDs, now working in AI and data science), DeepMind founder Demis Hassabis, Fields Medalist Tim Gowers, and others. In the testimony (see ~23m, ~35m, ~1h02m, ~1h06m in the video below) he describes the rather dramatic events that led to a switch from the original herd immunity plan to a lockdown Plan B. More details in this tweet thread.


I checked my emails with Dom during February and March, and they confirm his narrative. I wrote the March 9 blog post Covid-19 Notes in part for Dom and his team, and I think it holds up over time. Tim Gowers' document reaches similar conclusions.


 

Seven hours of riveting Dominic Cummings testimony from earlier today. 


Shorter summary video (Channel 4). Summary live-blog from the Guardian.



This is a second white board used in the March 14 meeting with Boris Johnson:



Friday, April 23, 2021

How a Physicist Became a Climate Truth Teller: Steve Koonin

 

I read an early draft of Koonin's new book discussed in the WSJ article excerpted below, and I highly recommend it. 


Video above is from a 2019 talk discussed in this earlier post: Certainties and Uncertainties in our Energy and Climate Futures: Steve Koonin.
My own views (consistent, as far as I can tell, with what Steve says in the talk): 
1. Evidence for recent warming (~1 degree C) is strong. 
2. There exist previous eras of natural (non-anthropogenic) global temperature change of similar magnitude to what is happening now. 
3. However, it is plausible that at least part of the recent temperature rise is due to increase of atmospheric CO2 due to human activity. 
4. Climate models still have significant uncertainties. The direct effect of CO2 IR absorption is well understood and modest, at the low end (~1 degree C) of current consensus model predictions (see the back-of-the-envelope estimate after this list). The predicted warming from a doubling of atmospheric CO2 is still uncertain by a factor of 2-3, and at the low end (e.g., 1.5 degrees C) it is not catastrophic. Potentially catastrophic outcomes come from second order effects, such as clouds and the distribution of water vapor in the atmosphere, which are not under good theoretical or computational control. 
5. Even if a catastrophic outcome is only a low probability tail risk, it is prudent to explore technologies that reduce greenhouse gas production. 
6. A Red Team exercise, properly done, would clarify what is certain and uncertain in climate science. 
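For point 4 above, the standard back-of-the-envelope estimate (textbook numbers, my own arithmetic sketch, not taken from Koonin's book) goes like this:

```latex
% No-feedback (Planck-only) warming from doubled CO2, standard numbers:
\Delta F \;\simeq\; 5.35\,\ln 2 \ \mathrm{W\,m^{-2}} \;\approx\; 3.7\ \mathrm{W\,m^{-2}}
\qquad (\text{radiative forcing from } 2\times \mathrm{CO_2})
% Dividing by the Planck (blackbody) response of roughly 3.2 W m^{-2} K^{-1}:
\Delta T_0 \;=\; \frac{\Delta F}{\lambda_0} \;\approx\; \frac{3.7}{3.2} \;\approx\; 1.2\ \mathrm{K}
% Feedbacks (water vapor, clouds, lapse rate) multiply this by an uncertain
% factor; that multiplier is where the factor of 2-3 spread in model
% sensitivity comes from.
```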
Simply stating these views can get you attacked by crazy people.
Buy Steve's book for an accessible and fairly non-technical explanation of these points.
WSJ: ... Barack Obama is one of many who have declared an “epistemological crisis,” in which our society is losing its handle on something called truth. 
Thus an interesting experiment will be his and other Democrats’ response to a book by Steven Koonin, who was chief scientist of the Obama Energy Department. Mr. Koonin argues not against current climate science but that what the media and politicians and activists say about climate science has drifted so far out of touch with the actual science as to be absurdly, demonstrably false. 
This is not an altogether innocent drifting, he points out in a videoconference interview from his home in Cold Spring, N.Y. In 2019 a report by the presidents of the National Academies of Sciences claimed the “magnitude and frequency of certain extreme events are increasing.” The United Nations Intergovernmental Panel on Climate Change, which is deemed to compile the best science, says all such claims should be treated with “low confidence.” 
... Mr. Koonin, 69, and I are of one mind on 2018’s U.S. Fourth National Climate Assessment, issued in Donald Trump’s second year, which relied on such overegged worst-case emissions and temperature projections that even climate activists were abashed (a revolt continues to this day). “The report was written more to persuade than to inform,” he says. “It masquerades as objective science but was written as—all right, I’ll use the word—propaganda.” 
Mr. Koonin is a Brooklyn-born math whiz and theoretical physicist, a product of New York’s selective Stuyvesant High School. His parents, with less than a year of college between them, nevertheless intuited in 1968 exactly how to handle an unusually talented and motivated youngster: You want to go cross the country to Caltech at age 16? “Whatever you think is right, go ahead,” they told him. “I wanted to know how the world works,” Mr. Koonin says now. “I wanted to do physics since I was 6 years old, when I didn’t know it was called physics.” 
He would teach at Caltech for nearly three decades, serving as provost in charge of setting the scientific agenda for one of the country’s premier scientific institutions. Along the way he opened himself to the world beyond the lab. He was recruited at an early age by the Institute for Defense Analyses, a nonprofit group with Pentagon connections, for what he calls “national security summer camp: meeting generals and people in congress, touring installations, getting out on battleships.” The federal government sought “engagement” with the country’s rising scientist elite. It worked. 
He joined and eventually chaired JASON, an elite private group that provides classified and unclassified advisory analysis to federal agencies. (The name isn’t an acronym and comes from a character in Greek mythology.) He got involved in the cold-fusion controversy. He arbitrated a debate between private and government teams competing to map the human genome on whether the target error rate should be 1 in 10,000 or whether 1 in 100 was good enough. 
He began planting seeds as an institutionalist. He joined the oil giant BP as chief scientist, working for John Browne, now Baron Browne of Madingley, who had redubbed the company “Beyond Petroleum.” Using $500 million of BP’s money, Mr. Koonin created the Energy Biosciences Institute at Berkeley that’s still going strong. Mr. Koonin found his interest in climate science growing, “first of all because it’s wonderful science. It’s the most multidisciplinary thing I know. It goes from the isotopic composition of microfossils in the sea floor all the way through to the regulation of power plants.” 
From deeply examining the world’s energy system, he also became convinced that the real climate crisis was a crisis of political and scientific candor. He went to his boss and said, “John, the world isn’t going to be able to reduce emissions enough to make much difference.” 
Mr. Koonin still has a lot of Brooklyn in him: a robust laugh, a gift for expression and for cutting to the heart of any matter. His thoughts seem to be governed by an all-embracing realism. Hence the book coming out next month, Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.
Any reader would benefit from its deft, lucid tour of climate science, the best I’ve seen. His rigorous parsing of the evidence will have you questioning the political class’s compulsion to manufacture certainty where certainty doesn’t exist. You will come to doubt the usefulness of centurylong forecasts claiming to know how 1% shifts in variables will affect a global climate that we don’t understand with anything resembling 1% precision. ...

Note Added from comments:

If you're older like Koonin or myself you can remember a time when climate change was entirely devoid of tribal associations -- it was not in the political domain at all. It is easier for us just to concentrate on where the science is, and indeed we can remember where it was in the 1990s or 2000s.

Koonin was MUCH more concerned about alternative energy and climate than the typical scientist and that was part of his motivation for supporting the Berkeley Energy Biosciences Institute, created 2007. The fact that it was a $500M partnership between Berkeley and BP was a big deal and much debated at the time, but there was never any evidence that the science they did was negatively impacted. 

It is IRONIC that his focus on scientific rigor now gets him labeled as a climate denier (or sympathetic to the "wrong" side). ALL scientists should be sceptical, especially about claims regarding long term prediction in complex systems.

Contrast the uncertainty estimates in the IPCC reports (which are not defensible and did not change for ~20y!) vs the (g-2) anomaly that was in the news recently.

When I was at Harvard the physics department and applied science and engineering school shared a coffee lounge. I used to sit there and work in the afternoon and it happened that one of the climate modeling labs had their group meetings there. So for literally years I overheard their discussions about uncertainties concerning water vapor, clouds, etc. which to this day are not fully under control. This is illustrated in Fig1 at the link: https://infoproc.blogspot.c...

The gap between what real scientists say in private and what the public (or non-specialists) gets second hand through the media or politically-focused "scientific policy reports" is vast...

If you don't think we can have long-lasting public delusions regarding "settled science" (like a decade long stock or real estate bubble), look up nuclear winter, which has a lot of similarities to greenhouse gas-driven climate change. Note, I am not claiming that I know with high confidence that nuclear winter can't happen, but I AM claiming that the confidence level expressed by the climate scientists working on it at the time was absurd and communicated in a grotesquely distorted fashion to political leaders and the general public. Even now I would say the scientific issue is not settled, due to its sheer complexity, which is LESS than the complexity involved in predicting long term climate change!

https://en.wikipedia.org/wi... 

Thursday, October 22, 2020

Replications of Height Genomic Prediction: Harvard, Stanford, 23andMe

These are two replications of our 2017 height prediction results (also recently validated using sibling data) that I neglected to blog about previously.

1. Senior author Liang is in Epidemiology and Biostatistics at Harvard.
Efficient cross-trait penalized regression increases prediction accuracy in large cohorts using secondary phenotypes 
Wonil Chung, Jun Chen, Constance Turman, Sara Lindstrom, Zhaozhong Zhu, Po-Ru Loh, Peter Kraft and Liming Liang 
Nature Communications volume 10, Article number: 569 (2019) 
We introduce cross-trait penalized regression (CTPR), a powerful and practical approach for multi-trait polygenic risk prediction in large cohorts. Specifically, we propose a novel cross-trait penalty function with the Lasso and the minimax concave penalty (MCP) to incorporate the shared genetic effects across multiple traits for large-sample GWAS data. Our approach extracts information from the secondary traits that is beneficial for predicting the primary trait based on individual-level genotypes and/or summary statistics. Our novel implementation of a parallel computing algorithm makes it feasible to apply our method to biobank-scale GWAS data. We illustrate our method using large-scale GWAS data (~1M SNPs) from the UK Biobank (N = 456,837). We show that our multi-trait method outperforms the recently proposed multi-trait analysis of GWAS (MTAG) for predictive performance. The prediction accuracy for height by the aid of BMI improves from R2 = 35.8% (MTAG) to 42.5% (MCP + CTPR) or 42.8% (Lasso + CTPR) with UK Biobank data.


2. This is a 2019 Stanford paper. Tibshirani and Hastie are famous researchers in statistics and machine learning. Figure is from their paper.


A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems 
Junyang Qian, Wenfei Du, Yosuke Tanigawa, Matthew Aguirre, Robert Tibshirani, Manuel A. Rivas, Trevor Hastie 
1Department of Statistics, Stanford University 2Department of Biomedical Data Science, Stanford University 
Since its first proposal in statistics (Tibshirani, 1996), the lasso has been an effective method for simultaneous variable selection and estimation. A number of packages have been developed to solve the lasso efficiently. However as large datasets become more prevalent, many algorithms are constrained by efficiency or memory bounds. In this paper, we propose a meta algorithm batch screening iterative lasso (BASIL) that can take advantage of any existing lasso solver and build a scalable lasso solution for large datasets. We also introduce snpnet, an R package that implements the proposed algorithm on top of glmnet (Friedman et al., 2010a) for large-scale single nucleotide polymorphism (SNP) datasets that are widely studied in genetics. We demonstrate results on a large genotype-phenotype dataset from the UK Biobank, where we achieve state-of-the-art heritability estimation on quantitative and qualitative traits including height, body mass index, asthma and high cholesterol.
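
To illustrate the basic idea behind these papers, here is a toy sparse (lasso) genomic predictor on simulated data; this is just scikit-learn on made-up genotypes, not the authors' CTPR or BASIL/snpnet code and not UK Biobank data.

```python
# Toy version of the idea behind sparse genomic prediction (lasso on SNP
# genotypes), using scikit-learn on simulated data -- NOT the authors'
# implementation or pipeline, just an illustration.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n_people, n_snps, n_causal = 4000, 2000, 100

G = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# Sparse truth: only n_causal SNPs affect the trait
beta = np.zeros(n_snps)
causal = rng.choice(n_snps, n_causal, replace=False)
beta[causal] = rng.normal(0, 1, n_causal)
signal = G @ beta
y = signal + rng.normal(0, np.sqrt(np.var(signal)), n_people)  # heritability ~0.5

# Cross-validated lasso picks the penalty; most coefficients are driven to zero
model = LassoCV(cv=5, n_alphas=30, max_iter=5000).fit(G[:3000], y[:3000])
r = np.corrcoef(model.predict(G[3000:]), y[3000:])[0, 1]
print(f"nonzero SNPs selected: {(model.coef_ != 0).sum()} / {n_snps}")
print(f"held-out correlation with phenotype: {r:.2f}")
```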

The very first validation I heard about was soon after we posted our paper (2018 IIRC): I visited 23andMe to give a talk about genomic prediction and one of the PhD researchers there said that they had reproduced our results, presumably using their own data. At a meeting later in the day, one of the VPs from the business side, who had missed my talk in the morning, was shocked when I mentioned few-cm accuracy for height. He turned to one of the 23andMe scientists in the room and exclaimed: 

I thought WE were the best in the world at this stuff!?

Thursday, April 30, 2020

Raman Sundrum: Physics and the Universe - Manifold Episode #44



Steve and Corey talk with theoretical physicist Raman Sundrum. They discuss the last 30 years in fundamental physics, and look toward the next. Raman argues that Physics is a marketplace of ideas. While many theories did not stand the test of time, they represented avenues that needed to be explored. Corey expresses skepticism about the possibility of answering questions such as why the laws of physics have the form they do. Raman and Steve argue that attempts to answer such questions have led to great advances. Topics: models and experiments, Naturalness, the anthropic principle, dark matter and energy, and imagination.


Transcript

Raman Sundrum (Faculty Bio)

Sabine Hossenfelder on the Crisis in Particle Physics and Against the Next Big Collider  (Manifold Episode #8)


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Sunday, November 10, 2019

Good and Bad Journalism on Embryo Screening: The Economist vs Science Magazine

Note Added in response to 2020 Twitter mob attack which attempts to misrepresent my views:

This post discusses a Science News article that misrepresented the activities of the startup Genomic Prediction (GP), which I helped to found. The news article wrongly conflated screening embryos for intelligence (which GP does NOT do; this is not really possible technically at the moment -- there is still too much noise in the predictors) with testing for unusual disease risk (which GP does).

Media outlets tend to be very sloppy in confusing these two types of screening. Screening for disease risk is very common in IVF. About 2 million embryos per year undergo some level of genetic screening. The original Science News article was corrected after a tedious interaction with the editor. There are many other news articles out there that make the same mistake, but we expected something better from Science. By contrast, The Economist article is very accurate and concise.

#####################################################



Modern genetics will improve health and usher in “designer” children (Economist), which I linked to in the last post, does an excellent job of covering the scientific, technical, and ethical issues raised by recent advances in polygenic risk prediction and embryo screening.

The author, Ananyo Bhattacharya, is an experienced science writer with (if I recall correctly) a degree in Physics. His forthcoming book is an ambitious scientific / intellectual history of John von Neumann!

What did Ananyo get right in his article?

1. He gives an overview of polygenic risk scores (PRS) and the underlying science behind GWAS studies and construction of risk predictors
2. He describes how PRS will have important applications in health care as well as in IVF
3. He discusses the important ethical and societal aspects of embryo screening

As someone who works in this area, I can say that I don't know of any popular work that combines the clarity, precision, and concision of this article (3 pages).

Unfortunately, not all journalism reaches this high standard.

For example, a really terrible ("click-bait") article appeared in the News section of Science recently, which conflated embryo screening to reduce disease risk with the optimization of complex traits such as IQ or height. I had numerous email exchanges with the writer (a self-described "non-scientist"), running to thousands of words, and including references to published work on disease risk reduction from genomic prediction. The resulting story was irresponsible, and very confusing to readers. I can judge this directly and empirically from communications I received in reaction to it.

Here is the letter we submitted to Science in response to the article. We do not know whether our letter will be published, but the News editor has already made significant revisions to the original article in response to our complaints.
Dear Editor,

Your news article Screening embryos for IQ and other complex traits is premature, study concludes (Oct 24 2019) contained significant errors, which we correct below. 
Each year roughly 2 million IVF embryos are genetically screened worldwide. In many developed countries, a significant fraction of all babies are born via IVF (e.g., almost 10% in Denmark). Reproductive health and IVF are serious matters and deserve serious journalism, not the inaccurate sensationalism of your article. Errors persist in the article even after numerous email exchanges (consisting of thousands of words of text, including references to published research) with your writer, informing your journalist clearly of these misrepresentations.

1. Your article failed to cite published work that shows significant risk reduction for complex disease conditions using polygenic predictors to select between sibling embryos. These results, which we emphasized many times to the writer, explicitly contradict this entire paragraph of the article:

The work "is the first to empirically test the viability of screening embryos" for traits that are influenced by many genes, says sociologist and demographer Melinda Mills of the University of Oxford in the United Kingdom. Such embryo screening goes beyond today's testing for single-gene disorders and currently "isn't plausible," she concludes.

[ Note: this paragraph has been altered now in the Science article. The original is given above. Science added this to the modified article, but still without referencing our work: *Clarification, 5 November, 10:05 a.m.: This story has been updated to clarify the context of a quote from Melinda Mills to emphasize that she was referring to screening for desirable traits, not disease risks. ]

Carmi et al. is not the first to empirically test embryo screening. Our published work predates it. Furthermore, Carmi’s work uses far less sibling data than our preceding work - an order of magnitude fewer siblings, 2-3 orders of magnitude fewer families (28 vs several thousand). Carmi’s analysis relies primarily on “simulated” data, ours is 100% empirical. We made your writer abundantly aware of the published work validating differentiation of real siblings (not “synthetic genomes”) by polygenic disease status, linking to it in email correspondence:

“You would be negligent to cite a BioRxiv preprint without thoroughly addressing our peer-reviewed, formally published work in the field, significantly predating this preprint.”

Your article misleads the reader to think that the dozens of IVF clinics and laboratories working with Genomic Prediction to screen embryos for complex (polygenic) disease risk do so without detailed, published validation. This is an irresponsible, unprofessional, and dangerous misrepresentation. We reserve the right to seek damages.

2. The article, and especially the title of the article, conflates screening embryos for disease with optimizing embryos for IQ, and gives the false impression that Carmi address the use-case of our patients: relative risk reduction of disease. This is misleading, as we repeatedly emphasized in writing with your journalist: “You will misrepresent our test if you fail to make this distinction...” , etc. The reader is misled by the article - especially the headline - to think that Carmi’s work addresses the current polygenic use-case of screening embryos for relative risk reduction of disease, rather than Carmi’s futuristic thought experiment of IQ optimization. This conflation is irresponsible, and a disservice to everyone, particularly to the IVF families using screening to reduce polygenic disease risk.

From IVF scientific pioneer Prof. Simon Fishel, external to Genomic Prediction. Fishel is former Deputy Scientific Director of the world's first IVF clinic, which included Nobel prize winning colleagues Dr. Steptoe and Dr. Edwards. His response to the Science article: "IVF itself was a revolutionary new technology that also endured an initial response of similar misrepresentation. There is no reason to repeat the mistakes of the past; Science should aim to convey the state of the field with less inaccuracy."

We ask that you publish this letter, and publish a correction to the article. We also ask that you recommit yourself to serious science reporting.

Sincerely,

Prof. Stephen Hsu
Dr. Nathan Treff
Laurent C. A. Melchior Tellier
Dr. Jia Xu
Prof. Simon Fishel
Note, I commented on Carmi et al. when it first appeared, here. This commentary was one of the first things I shared with the journalist and it makes very clear the difference between optimization of traits such as height or IQ (which Genomic Prediction does not do) and disease risk reduction (which is the main focus of our report).

The following is from an email I sent to the writer and editors:
... disease conditions are themselves complex traits and are typically referred to as such, so the risk of confusion is high. From the Wikipedia article on Complex Traits: "Examples of complex traits include height, circadian rhythms, enzyme kinetics, and many diseases including diabetes and Parkinson's disease." ...
Hence the title of the Science article Screening embryos for IQ and other complex traits is premature, study concludes is extremely misleading.

The most important scientific point: we have demonstrated that polygenic predictors can differentiate between two adult siblings, one with the disease and the other without. This is the gold standard validation relevant for embryo selection -- the predictor can identify from DNA alone the sibling with higher risk of the disease. The evidence is very strong that we can reduce disease risk through embryo screening, and IVF parents have a right to make use of this capability.

In an era of rapid scientific progress and technological change, the public deserves careful, accurate reporting -- not sensationalism.

For future reference, here are simple sentences which a journalist can include in any future article on this topic:
Published validation studies, using genomic data from thousands of families, have shown that polygenic scores can predict which of two adult siblings has the disease, and which one is healthy. It is reasonable to conclude that these predictors can reduce disease risk through IVF embryo screening.
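
For concreteness, here is a schematic of how a sibling-pair validation of this kind is typically scored; it is my own toy simulation of the general logic, with invented numbers, not Genomic Prediction's code, data, or results.

```python
# Schematic of sibling-pair validation of a polygenic risk score (PRS): in
# affected / unaffected adult sibling pairs, how often does the affected
# sibling carry the higher score? (Toy simulation, invented parameters.)
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 3000

# Simulated PRS for two siblings per family (correlated ~0.5, since siblings
# share about half their segregating variants), plus non-genetic risk
shared = rng.normal(0, 1, n_pairs)
prs_a = 0.7 * shared + 0.7 * rng.normal(0, 1, n_pairs)
prs_b = 0.7 * shared + 0.7 * rng.normal(0, 1, n_pairs)
liability_a = prs_a + rng.normal(0, 1.5, n_pairs)
liability_b = prs_b + rng.normal(0, 1.5, n_pairs)

# Keep only discordant pairs: one sibling above the disease threshold, one below
threshold = 2.0
discordant = (liability_a > threshold) ^ (liability_b > threshold)
affected_has_higher_prs = np.where(liability_a > liability_b, prs_a > prs_b, prs_b > prs_a)
rate = affected_has_higher_prs[discordant].mean()
print(f"discordant pairs: {discordant.sum()}, "
      f"affected sibling has higher PRS in {rate:.0%} of them")
# Anything reliably above 50% shows the score separates affected from
# unaffected siblings within a family, i.e. net of shared environment.
```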

Thursday, July 11, 2019

Manifold Episode #14: Stuart Firestein on Why Ignorance and Failure Lead to Scientific Progress



Steve and Corey speak with Stuart Firestein (Professor of Neuroscience at Columbia University, specializing in the olfactory system) about his two books Ignorance: How It Drives Science, and Failure: Why Science Is So Successful. Stuart explains why he thinks that it is a mistake to believe that scientists make discoveries by following the “scientific method” and what he sees as the real relationship between science and art. We discuss Stuart’s recent research showing that current models of olfactory processing are wrong, while Steve delves into the puzzling infinities in calculations that led to the development of quantum electrodynamics. Stuart also makes the case that the theory of intelligent design is more intelligent than most scientists give it credit for and that it would be wise to teach it in science classes.

Stuart Firestein

Failure: Why Science Is so Successful

Ignorance: How it drives science

Transcript


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Thursday, May 30, 2019

Manifold Episode #11: Joe Cesario on Police Decision Making and Racial Bias in Deadly Force Decisions



Manifold Show Page    YouTube Channel

Corey and Steve talk with Joe Cesario about his recent work which argues that, contrary to activist claims and media reports, there is no widespread racial bias in police shootings. Joe discusses his analysis of national criminal justice data and his experimental studies with police officers in a specially designed realistic simulator. He maintains that racial bias does exist in other uses of force such as tasering but that the decision to shoot is fundamentally different: it is driven by specific events and context, rather than race.

Cesario is associate professor of Psychology at Michigan State University. He studies social cognition and decision-making. His recent topics of study include police use of deadly force and computational modeling of fast decisions. Cesario is dedicated to reform in the practice, reporting, and publication of psychological science.

Is There Evidence of Racial Disparity in Police Use of Deadly Force? Analyses of Officer-Involved Fatal Shootings in 2015–2016
https://journals.sagepub.com/doi/abs/...

Example of officer completing shooting simulator
https://youtu.be/Le8zoqk-UVo

Overview of Current Research on Officer-Involved Shootings
https://www.cesariolab.com/police

Joseph Cesario Lab
https://www.cesariolab.com/


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Friday, March 29, 2019

MSU Research Update (video)



Remarks at a recent Michigan State University leadership meeting. MSU is currently #1 in the US in annual Department of Energy (DOE) and DOE + NSF (National Science Foundation) funding. There are ~30 institutions in the US with larger annual research expenditures than MSU; however, in all but a few cases (e.g., MIT and UC Berkeley) this is due to a large medical research complex and significant NIH (National Institutes of Health) funding. I discuss MSU's strategy in this direction: a new biomedical research complex and a new $450M McLaren hospital on our campus.

Thursday, March 21, 2019

Manifold Episode #6: John Hawks on Human Evolution, Ancient DNA, and Big Labs Devouring Fossils



Show Page    YouTube Channel

John Hawks on Human Evolution, Ancient DNA, and Big Labs Devouring Fossils – Episode #6

Hawks is the Vilas-Borghesi Distinguished Achievement Professor of Anthropology at the University of Wisconsin – Madison. He is an anthropologist and studies the bones and genes of ancient humans. He’s worked on almost every part of our evolutionary story, from the very origin of our lineage among the apes, to the last 10,000 years of our history.

Links:

John Hawks Weblog

Ghosts and Hybrids: How ancient DNA and new fossils are changing human origins (Research Presentation)

Transcript

man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Wednesday, November 28, 2018

He did it: He Jiankui talk at HKU conference on gene editing



This is He's talk from a conference on gene editing, in progress now in HK. (Should start at 1h09.)

This article describes serious discussions between He and bioethicists over the last year.

CapEx required for this process is quite modest -- not beyond the capability of a medium-sized IVF clinic. The CRISPR vector was purchased, IIUC, for about $100!

The choice of CCR5 is not well motivated, from the perspective of most bioethicists: there are other ways to prevent HIV, and the edit could be regarded as an enhancement, not elimination of a disease allele.

@1h25 He claims that the parents were given the option to use unedited embryos for their pregnancy but chose to use the edited ones. (This decision was made even after being informed of the existence of a possible off-target edit in an inter-genic region. The possible off-target was not confirmed by later analysis.) If true, this has some important ethical implications. The problem becomes one of parental choice and reproductive freedom. IIUC, the father has rather strong feelings concerning HIV (being HIV positive) and the parents strongly desired HIV-resistance in their daughters. Who are we (or anyone else) to tell the parents whether to use the edited or unedited embryos?

Note it's possible I misunderstood what He said in his talk. There is a Twitter exegesis here, and a transcript here. I'm not sure I understood properly whether the intended edit was successful -- one embryo displayed mosaicism (not subsequently detected)?

Some comments I've shared with journalists and other interested parties below.
Re: Gene-editing using CRISPR, not a technical breakthrough -- it has been possible for some time. What is new is that someone had the audacity to push it to completion with human embryos. Some researchers who attended He's talk a few months ago at Cold Spring Harbor (the talk covered methodology but with no hint that real babies would be produced) found it sound but unremarkable.

In the near term most applications of CRISPR in IVF can already be accomplished simply by screening (genetic testing) against the undesirable genetic variant. No need to edit, just select one of the embryos without the variant.

With CRISPR one can potentially edit IN new genetic variants that neither parent has. This "enhancement" is much more ethically questionable, but may eventually happen. However, it can only be done with simple single-gene conditions.

Eventually we may have the technology to do multiple (hundreds?) of edits at a time, which will allow modification of polygenic traits. (Most traits are highly polygenic.) But this requires us to first identify actual causal variants (as opposed to variants used in a predictor that merely *correlate* strongly with the causal ones). This is a difficult scientific problem that may take a decade or more to solve. Predicting a complex trait is much easier than modifying it -- hence selection will dominate editing in utility for some time.
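
A toy illustration of the tag-versus-causal distinction in the previous paragraph (my own sketch, invented numbers): a variant that merely correlates with the causal one is useful in a predictor, and hence for selection, but "editing" it accomplishes nothing.

```python
# Toy illustration of prediction vs. editing (my own sketch): a SNP that merely
# *tags* a causal variant predicts the trait, but "editing" it changes nothing.
import numpy as np

rng = np.random.default_rng(4)
n = 20000

causal = rng.binomial(2, 0.3, n).astype(float)          # the real causal variant
# A nearby tag SNP in strong linkage disequilibrium with the causal one
tag = np.where(rng.random(n) < 0.9, causal, rng.binomial(2, 0.3, n)).astype(float)
noise = rng.normal(0, 1, n)
trait = causal + noise                                   # only `causal` matters

print("correlation of tag SNP with trait: "
      f"{np.corrcoef(tag, trait)[0, 1]:.2f}")            # useful for prediction

# "Edit" every copy of the tag SNP to the favorable value: the trait is
# unchanged, because the tag has no causal effect of its own.
trait_after_tag_edit = causal + noise
# "Edit" the causal variant instead: the trait shifts as expected.
trait_after_causal_edit = np.full(n, 2.0) + noise
print(f"mean trait: baseline {trait.mean():.2f}, "
      f"after tag edit {trait_after_tag_edit.mean():.2f}, "
      f"after causal edit {trait_after_causal_edit.mean():.2f}")
```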

It is possible that gene editing will "normalize" selection of embryos as a less aggressive course of action!
See previous discussion Generation CRISPR?

At the same conference, George Daley, Dean of Harvard Medical School, advocates for a responsible pathway to clinical translation for gene editing.

George Church interview on what He did.

Tuesday, November 06, 2018

1 In 4 Biostatisticians Surveyed Say They Were Asked To Commit Scientific Fraud


In the survey reported below, about 1 in 4 consulting biostatisticians say they were asked to commit scientific fraud. I don't know whether this bad behavior was more prevalent in industry as opposed to academia, but I am not surprised by the results.

I do not accept the claim that researchers in data-driven areas can afford to be ignorant of statistics. It is common practice to outsource statistical analysis to people like the "consulting biostatisticians" surveyed below. But scientists who do not understand statistics will not be effective in planning future research, nor in understanding the implications of results in their own field. See the candidate gene and missing heritability nonsense that the field of genetics has been subject to for the last decade.

I cannot count the number of times, in talking to a scientist with limited quantitative background, that I have performed -- to their amazement -- a quick back of the envelope analysis of a statistical design or new results. This kind of quick estimate is essential to understand whether the results in question should be trusted, or whether a prospective experiment is worth doing. The fact that they cannot understand my simple calculation means that they literally do not understand how inference in their own field should be performed.
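
As an example of the kind of quick estimate meant here (a generic two-sample power calculation, not tied to any particular study or to the survey below):

```python
# Back-of-the-envelope check: how many subjects per group does it take to
# detect a standardized effect of size d with 80% power at alpha = 0.05?
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Two-sample comparison of means, normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

for d in [0.8, 0.5, 0.2]:   # "large", "medium", "small" standardized effects
    print(f"effect size d = {d}: need ~{n_per_group(d):.0f} subjects per group")
# A study claiming a subtle effect (d ~ 0.2) from a few dozen subjects per arm
# is underpowered by an order of magnitude -- a red flag you can spot in seconds.
```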
Researcher Requests for Inappropriate Analysis and Reporting: A U.S. Survey of Consulting Biostatisticians

(Annals of Internal Medicine 554-558. Published: 16-Oct-2018. DOI: 10.7326/M18-1230)

Results:
Of 522 consulting biostatisticians contacted, 390 provided sufficient responses: a completion rate of 74.7%. The 4 most frequently reported inappropriate requests rated as “most severe” by at least 20% of the respondents were, in order of frequency, removing or altering some data records to better support the research hypothesis; interpreting the statistical findings on the basis of expectation, not actual results; not reporting the presence of key missing data that might bias the results; and ignoring violations of assumptions that would change results from positive to negative. These requests were reported most often by younger biostatisticians.
This kind of behavior is consistent with the generally low rate of replication for results in biomedical science, even those published in top journals:
What is medicine’s 5 sigma? (Editorial in the Lancet)... much of the [BIOMEDICAL] scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, [BIOMEDICAL] science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. ...
More background on the ongoing replication crisis in certain fields of science. See also Bounded Cognition.

Tuesday, October 30, 2018

Global R&D ~$1 trillion per annum?


Federal R&D, which skews more toward basic research, is typically somewhat less than 1% of US GDP (~$100 billion per annum). See figure below.
WSJ: ... U.S.-based companies accounted for $329 billion of a record $781.8 billion in R&D spending tallied by PwC for the year ended June 30. While Chinese R&D investment came in at $61 billion, in 2010 that figure was just $7 billion, PwC said. Today, 145 Chinese companies are among the top 1,000 R&D spenders, up from 14 a decade ago.

... PwC’s figures don’t include private companies, however, which leaves out China’s state-owned monoliths and closely held Huawei Technologies Co., the world’s largest maker of telecommunications equipment. Huawei said it spent more than $13 billion on R&D last year.
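
Adding up just the figures quoted in this post suggests why ~$1 trillion is plausible (my own rough arithmetic; the tally is deliberately incomplete):

```python
# Rough arithmetic behind the ~$1 trillion question, using only figures quoted
# in this post (so necessarily incomplete; most private firms, state-owned
# companies, and non-US government spending are missing from the first line).
pwc_top_1000_public = 781.8   # $B, PwC tally for the year ended June 30
huawei_private      = 13.0    # $B, one large private company not counted by PwC
us_federal          = 100.0   # $B, roughly, per the figure referenced above

partial_total = pwc_top_1000_public + huawei_private + us_federal
print(f"partial total: ~${partial_total:.0f}B")
# Already ~$0.9T before adding other governments, universities, and the rest of
# the private sector, which is why ~$1T per annum is a plausible global figure.
```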

Saturday, September 22, 2018

The French Way: Alain Connes interview


I came across this interview with Fields Medalist Alain Connes (excerpt below) via an essay by Dominic Cummings (see his blog here).

Dom's essay is also highly recommended. He has spent considerable effort to understand the history of highly effective scientific / research organizations. There is a good chance that his insights will someday be put to use in service of the UK. Dom helped create a UK variant of Kolmogorov's School for Physics and Mathematics.

On the referendum and on Expertise: the ARPA/PARC ‘Dream Machine’, science funding, high performance, and UK national strategy


Topics discussed by Connes: CNRS as a model for nurturing talent, materialism and hedonic treadmill as the enemy to intellectual development, string theory (pro and con!), US, French, and Soviet systems for science / mathematics, his entry into Ecole Normale and the '68 Paris convulsions.

France, and the Ecole Normale in particular, produce great mathematicians far out of proportion to their size.
Connes: I believe that the most successful systems so far were these big institutes in the Soviet union, like the Landau institute, the Steklov institute, etc. Money did not play any role there, the job was just to talk about science. It is a dream to gather many young people in an institute and make sure that their basic activity is to talk about science without getting corrupted by thinking about buying a car, getting more money, having a plan for career etc. ... Of course in the former Soviet Union there were no such things as cars to buy etc. so the problem did not arise. In fact CNRS comes quite close to that dream too, provided one avoids all interference from our society which nowadays unfortunately tends to become more and more money oriented.


Q: You were criticizing the US way of doing research and approach to science but they have been very successful too, right? You have to work hard to get tenure, and research grants. Their system is very unified in the sense they have very few institutes like Institute for Advanced Studies but otherwise the system is modeled after universities. So you become first an assistant professor and so on. You are always worried about your raise but in spite of all these hazards the system is working.


Connes: I don’t really agree. The system does not function as a closed system. The US are successful mostly because they import very bright scientists from abroad. For instance they have imported all of the Russian mathematicians at some point.


Q: But the system is big enough to accommodate all these people this is also a good point.


Connes: If the Soviet Union had not collapsed there would still be a great school of mathematics there with no pressure for money, no grants and they would be more successful than the US. In some sense once they migrated in the US they survived and did very well but I believed they would have bloomed better if not transplanted. By doing well they give the appearance that the US system is very successful but it is not on its own by any means. The constant pressure for producing reduces the “time unit” of most young people there. Beginners have little choice but to find an adviser that is sociologically well implanted (so that at a later stage he or she will be able to write the relevant recommendation letters and get a position for the student) and then write a technical thesis showing that they have good muscles, and all this in a limited amount of time which prevents them from learning stuff that requires several years of hard work. We badly need good technicians, of course, but it is only a fraction of what generates progress in research. It reminds me of an anecdote about Andre Weil who at some point had some problems with elliptic operators so he invited a great expert in the field and he gave him the problem. The expert sat at the kitchen table and solved the problem after several hours. To thank him, Andre Weil said “when I have a problem with electricity I call an electrician, when I have a problem with ellipticity I use an elliptician”.

From my point of view the actual system in the US really discourages people who are truly original thinkers, which often goes with a slow maturation at the technical level. Also the way the young people get their position on the market creates “feudalities” namely a few fields well implanted in key universities which reproduce themselves leaving no room for new fields.

....

Q: So you were in Paris [ Ecole Normale ] in the best place and in the best time.

Connes: Yes it was a good time. I think it was ideal that we were a small group of people and our only motivation was pure thought and no talking about careers. We couldn’t care the less and our main occupation was just discussing mathematics and challenging each other with problems. I don’t mean ”puzzles” but problems which required a lot of thought, time or speed was not a factor, we just had all the time we needed. If you could give that to gifted young people it would be perfect.
See also Defining Merit:
... As a parting shot, Wilson could not resist accusing Ford of anti-intellectualism; citing Ford's desire to change Harvard's image, Wilson asked bluntly: "What's wrong with Harvard being regarded as an egghead college? Isn't it right that a country the size of the United States should be able to afford one university in which intellectual achievement is the most important consideration?"

E. Bright Wilson was a Harvard professor of chemistry, a member of the National Academy of Sciences, and later a recipient of the National Medal of Science. The last quote from Wilson could easily have come from anyone who went to Caltech! Indeed, both E. Bright Wilson and his son, Nobel Laureate Ken Wilson (theoretical physics), earned their doctorates at Caltech (the father under Linus Pauling, the son under Murray Gell-Mann).
Where Nobel winners get their start (Nature):
Top Nobel-producing undergraduate institutions

Rank  School                     Country  Nobelists per capita (UG alumni)
  1   École Normale Supérieure   France   0.00135
  2   Caltech                    US       0.00067
  3   Harvard University         US       0.00032
  4   Swarthmore College         US       0.00027
  5   Cambridge University       UK       0.00025
  6   École Polytechnique        France   0.00025
  7   MIT                        US       0.00025
  8   Columbia University        US       0.00021
  9   Amherst College            US       0.00019
 10   University of Chicago      US       0.00017
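For intuition, the per-capita rates can be inverted into "one laureate per N undergraduate alumni"; the snippet below only transforms numbers already in the table:

# Convert "Nobelists per capita (UG alumni)" into "one laureate per N alumni".
rates = {
    "École Normale Supérieure": 0.00135,
    "Caltech": 0.00067,
    "Harvard University": 0.00032,
    "MIT": 0.00025,
}
for school, rate in rates.items():
    print(f"{school}: roughly 1 Nobel laureate per {1 / rate:,.0f} undergraduate alumni")
# ENS ~1 per 740, Caltech ~1 per 1,500, Harvard ~1 per 3,100, MIT ~1 per 4,000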

Wednesday, February 07, 2018

US Needs a National AI Strategy: A Sputnik Moment?

The US needs a national AI strategy. Many academic researchers who could contribute to AI research, including fundamental new ideas and algorithms and mathematical frameworks for understanding why some algorithms and architectures work better than others, cannot engage at the real frontier because they lack the curated data sets and large compute platforms that researchers at Google Brain or DeepMind have access to. Those resources are expensive, but necessary for rapid progress. We need national infrastructure platforms, analogous to physics user facilities such as accelerators, light sources, and telescopes, to support researchers at our universities and national labs working in machine learning, AI, and data science.

In contrast, China has articulated a very ambitious national AI plan that envisions the country taking the global lead sometime in the 2020s.

Eric Schmidt discusses these points in the video, declaring this a Sputnik moment:

Friday, January 19, 2018

Allen Institute meeting on Genetics of Complex Traits

You can probably tell by all the photos below that I love their new building :-)

I was a participant in this event: What Makes Us Human? The Genetics of Complex Traits (Allen Frontiers Group), including a small second-day workshop with just the speakers and the Allen Institute leadership. This workshop will, I hope, result in some interesting new initiatives in complex trait genomics!

I'd like to thank the Allen Institute organizers for making these two days so pleasant and productive. I learned some incredible things from the other speakers and recommend all of their talks -- available here.

My talk:

Action photos:

Working hard on day 2 in the little conference room :-)

Tuesday, January 16, 2018

What Makes Us Human? The Genetics of Complex Traits (Allen Frontiers Group)


I'll be attending this meeting in Seattle the next few days.
Recent research has led to new insights on how genes shape brain structure and development, and their impact on individual variation. Although significant inroads have been made in understanding the genetics underlying disease risk, what about the complex traits of extraordinary variation - such as cognition, superior memory, etc.? Can current advances shed light on genetic components underpinning these variations?

Personal genomics, biobank resources, emerging statistical genetics methods and neuroimaging capabilities are opening new frontiers in the field of complex trait analysis. This symposium will highlight experts using diverse approaches to explore a spectrum of individual variation of the human mind.
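For readers new to the field, the basic object in this kind of complex trait analysis is a polygenic predictor: a weighted sum of genotype values over many SNPs, with the weights estimated from biobank-scale data (e.g., by penalized regression). Below is a minimal sketch on synthetic data; the genotypes and effect sizes are random placeholders, not a real predictor:

# Minimal polygenic-score sketch on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 5, 1000

# Genotypes coded 0/1/2 (copies of the minor allele); per-SNP effect sizes as they
# might look after fitting a sparse predictor on real genotype + phenotype data.
genotypes = rng.integers(0, 3, size=(n_people, n_snps))
effects = rng.normal(0, 0.01, size=n_snps)

# The polygenic score for each person is just the weighted sum of allele counts.
scores = genotypes @ effects
print(np.round(scores, 3))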
Paul Allen (MSFT co-founder) is a major supporter of scientific research, including the Allen Institute for Brain Science. Excerpts from his memoir, Idea Man.
We are at a unique moment in bioscience. New ideas, combined with emerging technologies, will create unprecedented and transformational insights into living systems. Accelerating the pace of this change requires a thoughtful and agile exploration of the entire landscape of bioscience, across disciplines and spheres of research. Launched in 2016 with a $100 million commitment toward a larger 10-year plan, The Paul G. Allen Frontiers Group will discover and support scientific ideas that change the world. We are committed to a continuous conversation with the scientific community that allows us to remain at the ever-changing frontiers of science and reimagine what is possible.
My talk is scheduled for 3:55 PM Pacific Weds 1/17. All talks will be streamed on the Allen Institute Facebook page.
