... Estimates suggest that an extra robot per 1000 workers reduces the employment-to-population ratio by 0.18-0.34 percentage points and wages by 0.25-0.5%. This effect is distinct from the impacts of imports, the decline of routine jobs, offshoring, other types of IT capital, or the total capital stock.
If each robot does the work of a few workers, that explains the fraction-of-a-percent (negative) effect on employment and compensation in a model with direct substitution of robot labor for human work, together with a smaller second-order (positive) effect from the comparative advantage of humans in complementary jobs. This is not the optimistic scenario in which buggy whip makers displaced by the automobile easily find good new jobs in the expanded economy. We can expect to see many more robots (and virtual AI robots) per 1000 workers in the near future.
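A back-of-envelope version of the substitution model (my illustration; the three-workers-per-robot ratio is an assumption, not a number from the study):

```python
robots_added_per_1000_workers = 1
workers_displaced_per_robot = 3   # hypothetical substitution ratio

# direct (first-order) substitution effect, in percentage points of
# the employment-to-population ratio
direct_effect_pp = 100 * robots_added_per_1000_workers * workers_displaced_per_robot / 1000
print(f"direct effect: ~{direct_effect_pp:.1f} pp")   # ~0.3 pp

# re-employment of displaced workers in complementary jobs (the positive
# second-order effect) would claw back part of this, consistent with the
# lower end of the 0.18-0.34 pp range.
```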
Related talk at HKUST by Harvard labor economist Richard Freeman: Work and Income in the Age of Robots and AI. This time it's different?
Here's Richard in 2011 when we were working on a project at Alibaba headquarters :-)
Rapid growth in the number of Chinese S&E articles, reaching parity with the US in 2013, and well ahead of Japan and India.
Fraction of high impact (top 1% most cited) papers highest for US research (~1.9%). China and Japan comparable at ~0.8% as of 2012. China's fraction roughly doubled between 2001 and 2012.
As of today the total number of high-impact papers is still probably ~2:1 in favor of the US. But I think most people would be surprised to see that China has caught up with (surpassed?) Japan on this quality metric.
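The arithmetic behind the ~2:1 guess, taking the quoted top-1% fractions and assuming rough parity in total output since 2013:

```python
us_frac, cn_frac = 0.019, 0.008   # top-1%-most-cited fractions quoted above
print(f"US:China high-impact ratio ~ {us_frac / cn_frac:.1f} : 1")   # ~2.4 : 1
```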
US and China now each account for ~30% of global high tech value-added manufacturing. Value-added means net of input components -- going beyond simple assembly.
This is a Caltech TEDx talk from 2013, in which Doris Tsao discusses her work on the neuroscience of human face recognition. Recently I blogged about her breakthrough in identifying the face recognition algorithm used by monkey (and presumably human) brains. The algorithm seems similar to those used in machine face recognition: individual neurons perform feature detection just as in neural nets. This is not surprising from a purely information-theoretic perspective, if we just think about the space of facial variation and the optimal encoding. But it is amazing to be able to demonstrate it by monitoring specific neurons in a monkey brain.
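Here is a toy version of such an encoding: a linear "axis code" in which each neuron reports the projection of a face (a point in a low-dimensional face space) onto a preferred direction, just like a linear unit in an artificial network. All data below are simulated; this is a sketch of the idea, not the lab's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50                                # dimensionality of the face space
faces = rng.normal(size=(200, dim))     # 200 faces as feature vectors
axes = rng.normal(size=(dim, 100))      # preferred axes of 100 "neurons"

responses = faces @ axes                # each neuron: a linear feature detector

# Because the code is linear, a face can be decoded from the population
# response by least squares -- analogous to reconstructing a face from
# recorded firing rates.
decoded, *_ = np.linalg.lstsq(axes.T, responses.T, rcond=None)
print(np.allclose(decoded.T, faces))    # True: the linear code is lossless
```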
An earlier research claim, that certain neurons are sensitive only to specific individual faces (she recapitulates it at 8:50 in the video, which is from four years ago), seems not to be true. I always found it implausible.
On her faculty web page Tsao talks about her decision to attend Caltech as an undergraduate:
One day, my father went on a trip to California and took a tour of Caltech with a friend. He came back and told me about a monastery for science, located under the mountains amidst flowers and orange trees, where all the students looked very skinny and super smart, like little monkeys. I was intrigued. I went to a presentation about Caltech by a visiting admissions officer, who showed slides of students taking tests under olive trees, swimming in the Pacific, huddled in a dorm room working on a problem set... I decided: this is where I want to go to college! I dreamed every day about being accepted to Caltech. After I got my acceptance letter, I began to worry that I would fall behind in the first year, since I had heard about how hard the course load is. So I went to the library and started reading the Feynman Lectures. This was another world…where one could see beneath the surface of things, ask why, why, why, why? And the results of one’s mental deliberations actually could be tested by experiments and reveal completely unexpected yet real phenomena, like magnetism as a consequence of the invariance of the speed of light.
It's only a matter of time... Note this kind of work can be done very secretly and with very modest resources -- it does not require banks of centrifuges, big reactors, or ICBM test launches.
Researchers have demonstrated they can efficiently improve the DNA of human embryos.
The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.
The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.
Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.
Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.
... Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.
The U.S. intelligence community last year called CRISPR a potential "weapon of mass destruction."
... Mitalipov and his colleagues are said to have convincingly shown that it is possible to avoid both mosaicism and “off-target” effects, as the CRISPR errors are known.
A person familiar with the research says “many tens” of human IVF embryos were created for the experiment using the donated sperm of men carrying inherited disease mutations.
Work on cognitive enhancement will probably be done first in monkeys, proving Planet of the Apes prophetic within the next decade or so :-)
His team’s move into embryo editing coincides with a report by the U.S. National Academy of Sciences in February that was widely seen as providing a green light for lab research on germline modification.
The report also offered qualified support for the use of CRISPR for making gene-edited babies, but only if it were deployed for the elimination of serious diseases.
The advisory committee drew a red line at genetic enhancements—like higher intelligence. “Genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people,” said Alta Charo, co-chair of the NAS’s study committee and professor of law and bioethics at the University of Wisconsin–Madison.
In the U.S., any effort to turn an edited IVF embryo into a baby has been blocked by Congress, which added language to the Department of Health and Human Services funding bill forbidding it from approving clinical trials of the concept.
Despite such barriers, the creation of a gene-edited person could be attempted at any moment, including by IVF clinics operating facilities in countries where there are no such legal restrictions.
Prior to the modern era of genomics, it was claimed (without good evidence) that divergences between isolated human populations were almost entirely due to founder effects or genetic drift, and not due to differential selection caused by disparate local conditions. There is now strong evidence against this claim. Many of the differences between modern populations arose over relatively short timescales (e.g., ~10ky), due to natural selection.
Jeremy J Berg, Xinjun Zhang, Graham Coop
doi: https://doi.org/10.1101/167551
Most of our understanding of the genetic basis of human adaptation is biased toward loci of large phenotypic effect. Genome wide association studies (GWAS) now enable the study of genetic adaptation in highly polygenic phenotypes. Here we test for polygenic adaptation among 187 worldwide human populations using polygenic scores constructed from GWAS of 34 complex traits. By comparing these polygenic scores to a null distribution under genetic drift, we identify strong signals of selection for a suite of anthropometric traits including height, infant head circumference (IHC), hip circumference (HIP) and waist-to-hip ratio (WHR), as well as type 2 diabetes (T2D). In addition to the known north-south gradient of polygenic height scores within Europe, we find that natural selection has contributed to a gradient of decreasing polygenic height scores from West to East across Eurasia, and that this gradient is consistent with selection on height in ancient populations who have contributed ancestry broadly across Eurasia. We find that the signal of selection on HIP can largely be explained as a correlated response to selection on height. However, our signals in IHC and WC/WHR cannot, suggesting a response to selection along multiple axes of body shape variation. Our observation that IHC, WC, and WHR polygenic scores follow a strong latitudinal cline in Western Eurasia support the role of natural selection in establishing Bergmann's Rule in humans, and are consistent with thermoregulatory adaptation in response to latitudinal temperature variation.
From the paper:
... To explore whether patterns observed in the polygenic scores were caused by natural selection, we tested whether the observed distribution of polygenic scores across populations could plausibly have been generated under a neutral model of genetic drift ...
...
Discussion
The study of polygenic adaptation provides new avenues for the study of human evolution, and promises a synthesis of physical anthropology and human genetics. Here, we provide the first population genetic evidence for selected divergence in height polygenic scores among Asian populations. We also provide evidence of selected divergence in IHC and WHR polygenic scores within Europe and to a lesser extent Asia, and show that both hip and waist circumference have likely been influenced by correlated selection on height and waist-hip ratio. Finally, signals of divergence among Asian populations can be explained in terms of differential relatedness to Europeans, which suggests that much of the divergence we detect predates the major demographic events in the history of modern Eurasian populations, and represents differential inheritance from ancient populations which had already diverged at the time of admixture. ...
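To give a flavor of the drift test described above, here is a heavily simplified sketch (simulated placeholder data; sign-flipping the GWAS effect sizes is one simple way to build a null in which selection plays no role, not the paper's exact Qx procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps, n_pops = 500, 20
beta = rng.normal(0, 0.1, n_snps)                   # GWAS effect sizes
freqs = rng.uniform(0.05, 0.95, (n_pops, n_snps))   # allele freqs by population

def polygenic_scores(freqs, beta):
    return 2 * freqs @ beta        # population-level score: 2 * sum(beta * p)

# overdispersion of scores across populations is the signal of selection
obs_var = polygenic_scores(freqs, beta).var()

null_vars = [polygenic_scores(freqs, beta * rng.choice([-1, 1], n_snps)).var()
             for _ in range(2000)]
p_value = np.mean(np.array(null_vars) >= obs_var)
print(f"P(score variance this large under the null) = {p_value:.3f}")
```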
This paper suggests that some genetic variants which increase risk of coronary artery disease (CAD) have been maintained in the population because of their positive effects in other areas of fitness, such as reproduction.
Traditional genome-wide scans for positive selection have mainly uncovered selective sweeps associated with monogenic traits. While selection on quantitative traits is much more common, very few signals have been detected because of their polygenic nature. We searched for positive selection signals underlying coronary artery disease (CAD) in worldwide populations, using novel approaches to quantify relationships between polygenic selection signals and CAD genetic risk. We identified new candidate adaptive loci that appear to have been directly modified by disease pressures given their significant associations with CAD genetic risk. These candidates were all uniquely and consistently associated with many different male and female reproductive traits suggesting selection may have also targeted these because of their direct effects on fitness. We found that CAD loci are significantly enriched for lifetime reproductive success relative to the rest of the human genome, with evidence that the relationship between CAD and lifetime reproductive success is antagonistic. This supports the presence of antagonistic-pleiotropic tradeoffs on CAD loci and provides a novel explanation for the maintenance and high prevalence of CAD in modern humans. Lastly, we found that positive selection more often targeted CAD gene regulatory variants using HapMap3 lymphoblastoid cell lines, which further highlights the unique biological significance of candidate adaptive loci underlying CAD. Our study provides a novel approach for detecting selection on polygenic traits and evidence that modern human genomes have evolved in response to CAD-induced selection pressures and other early-life traits sharing pleiotropic links with CAD.
Author summary
How genetic variation contributes to disease is complex, especially for those such as coronary artery disease (CAD) that develop over the lifetime of individuals. One of the fundamental questions about CAD (whose progression begins in young adults with arterial plaque accumulation leading to life-threatening outcomes later in life) is why natural selection has not removed or reduced this costly disease. It is the leading cause of death worldwide and has been present in human populations for thousands of years, implying considerable pressures that natural selection should have operated on. Our study provides new evidence that genes underlying CAD have recently been modified by natural selection and that these same genes uniquely and extensively contribute to human reproduction, which suggests that natural selection may have maintained genetic variation contributing to CAD because of its beneficial effects on fitness. This study provides novel evidence that CAD has been maintained in modern humans as a by-product of the fitness advantages those genes provide early in human lifecycles.
From the paper:
... research in quantitative genetics has shown that rapid adaptation can often occur on complex traits that are highly polygenic [29, 30]. Under the ‘infinitesimal (polygenic) model’, such traits are likely to respond quickly to changing selective pressures through smaller allele frequency shifts in many polymorphisms already present in the population [13, 31].
To test for selection signals for variants directly linked with CAD, we utilized SNP summary statistics from 56 genome-wide significant CAD loci in Nikpay et al. [40], the most recent and largest CAD case-control GWAS meta-analysis to date, to identify 76 candidate genes for CAD (see Methods). Nikpay et al. used 60,801 CAD cases and 123,504 controls ...
For a subset of CAD loci, we found significant quantitative associations between disease risk and selection signals and for each of these the direction of this association was often consistent between populations ...
In the comparison across populations, directionality of significant selection-risk associations tended to be most consistent for populations within the same ancestral group (Fig 1B). For example, in PHACTR1, negative associations were present within all European populations (CEU, TSI, FIN), and in NT5C2 strong positive associations were present in all East Asian populations (CHB, CHD, JPT). Other negative associations that were consistent across all populations within an ancestry group included five genes in Europeans (COG5, ABO, ANKS1A, KSR2, FLT1) and four genes (LDLR, PEMT, KIAA1462, PDGFD) in East Asians. ...
... By comparing positive selection variation with genetic risk variation at known loci underlying CAD, we were able to identify and prioritize genes that have been the most likely targets of selection related to this disease across diverse human populations. That selection signals and the direction of selection-risk relationships varied among some populations suggests that CAD-driven selection has operated differently in these populations and thus that these populations might respond differently to similar heart disease prevention strategies. The pleiotropic effects that genes associated with CAD have on traits associated with reproduction that are expressed early in life strongly suggests some of the evolutionary reasons for the existence of human vulnerability to CAD.
Bonus: ~300 variants control about 20% of total variance in genetic CAD risk. This means polygenic risk predictors will eventually have a strong correlation (e.g., at least ~0.4 or 0.5) with actual risk. Good enough for identification of outliers.
Genome-wide association studies (GWAS) in coronary artery disease (CAD) had identified 66 loci at 'genome-wide significance' (P < 5 × 10⁻⁸) at the time of this analysis, but a much larger number of putative loci at a false discovery rate (FDR) of 5% (refs. 1,2,3,4). Here we leverage an interim release of UK Biobank (UKBB) data to evaluate the validity of the FDR approach. We tested a CAD phenotype inclusive of angina (SOFT; n_cases = 10,801) as well as a stricter definition without angina (HARD; n_cases = 6,482) and selected cases with the former phenotype to conduct a meta-analysis using the two most recent CAD GWAS [2, 3]. This approach identified 13 new loci at genome-wide significance, 12 of which were on our previous list of loci meeting the 5% FDR threshold [2], thus providing strong support that the remaining loci identified by FDR represent genuine signals. The 304 independent variants associated at 5% FDR in this study explain 21.2% of CAD heritability and identify 243 loci that implicate pathways in blood vessel morphogenesis as well as lipid metabolism, nitric oxide signaling and inflammation.
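The arithmetic behind the bonus claim above: a predictor capturing a fraction q of the variance in genetic risk correlates with that risk at sqrt(q):

```python
import math
q = 0.212   # fraction of CAD heritability explained (abstract above)
print(f"correlation ~ {math.sqrt(q):.2f}")   # ~0.46, i.e. in the 0.4-0.5 range
```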
This is a recent review article (2016):
Genetics of Coronary Artery Disease ...Overall, recent studies have led to a broader understanding of the genetic architecture of CAD and demonstrate that it largely derives from the cumulative effect of multiple common risk alleles individually of small effect size rather than rare variants with large effects on CAD risk. Despite this success, there has been limited progress in understanding the function of the novel loci; the majority of which are in noncoding regions of the genome.
Under what circumstances should humans override algorithms?
From what I have read, I doubt that a hybrid team of human + AlphaGo would perform much better than AlphaGo itself. Perhaps worse, depending on the epistemic sophistication and self-awareness of the human. In hybrid chess it seems that the Elo rating of the human partner is not the main factor, but rather an understanding of the chess program, its strengths, and its limitations.
Unless I'm mistaken the author of the article below sometimes comments here.
... Some interpret this unique partnership to be a harbinger of human-machine interaction. The superior decision maker is neither man nor machine, but a team of both. As McAfee and Brynjolfsson put it, “people still have a great deal to offer the game of chess at its highest levels once they’re allowed to race with machines, instead of purely against them.”
However, this is not where we will leave this story. For one, the gap between the best freestyle teams and the best software is closing, if not closed. As Cowen notes, the natural evolution of the human-machine relationship is from a machine that doesn’t add much, to a machine that benefits from human help, to a machine that occasionally needs a tiny bit of guidance, to a machine that we should leave alone.
But more importantly, let me suppose we are going to hold a freestyle chess tournament involving the people reading this article. Do you believe you could improve your chance of winning by overruling your 3300-rated chess program? For nearly all of us, we are best off knowing our limits and leaving the chess pieces alone.
... We interfere too often, ... This has been documented across areas from incorrect psychiatric diagnoses to freestyle chess players messing up their previously strong position, against the advice of their supercomputer teammate.
For example, one study by Berkeley Dietvorst and friends asked experimental subjects to predict the success of MBA students based on data such as undergraduate scores, measures of interview quality, and work experience. They first had the opportunity to do some practice questions. They were also provided with an algorithm designed to predict MBA success and its practice answers—generally far superior to the human subjects’.
In their prediction task, the subjects had the option of using the algorithm, which they had already seen was better than them in predicting performance. But they generally didn’t use it, costing them the money they would have received for accuracy. The authors of the paper suggested that when experimental subjects saw the practice answers from the algorithm, they focussed on its apparently stupid mistakes—far more than they focussed on their own more regular mistakes.
Although somewhat under-explored, this study is typical of when people are given the results of an algorithm or statistical method (see here, here, here, and here). The algorithm tends to improve their performance, yet the algorithm by itself has greater accuracy. This suggests the most accurate method is often to fire the human and rely on the algorithm alone. ...
The largest component of genetic variation is a N-S cline (the phenotypic N-S gradient is discussed here). The variance accounted for by the second (E-W) PC is much smaller, and the Han population is fairly homogeneous in genetic terms: "...while we revealed East-to-West structure among the Han Chinese, the signal is relatively weak and very little structure is discernible beyond the second PC" (p. 24).
Neandertal ancestry does not vary significantly across provinces, consistent with admixture prior to the dispersal of modern Han Chinese.
As are most non-European populations around the globe, the Han Chinese are relatively understudied in population and medical genetics studies. From low-coverage whole-genome sequencing of 11,670 Han Chinese women we present a catalog of 25,057,223 variants, including 548,401 novel variants that are seen at least 10 times in our dataset. Individuals from our study come from 19 out of 22 provinces across China, allowing us to study population structure, genetic ancestry, and local adaptation in Han Chinese. We identify previously unrecognized population structure along the East-West axis of China and report unique signals of admixture across geographical space, such as European influences among the Northwestern provinces of China. Finally, we identified a number of highly differentiated loci, indicative of local adaptation in the Han Chinese. In particular, we detected extreme differentiation among the Han Chinese at MTHFR, ADH7, and FADS loci, suggesting that these loci may not be specifically selected in Tibetan and Inuit populations as previously suggested. On the other hand, we find that Neandertal ancestry does not vary significantly across the provinces, consistent with admixture prior to the dispersal of modern Han Chinese. Furthermore, contrary to a previous report, Neandertal ancestry does not explain a significant amount of heritability in depression. Our findings provide the largest genetic data set so far made available for Han Chinese and provide insights into the history and population structure of the world's largest ethnic group.
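For the curious, the PCA behind these N-S and E-W axes is conceptually simple. A minimal sketch with simulated genotypes (the real analysis uses ~11,670 low-coverage genomes and millions of variants; scikit-learn assumed available):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# genotype matrix: (individuals x SNPs) counts of the minor allele (0/1/2)
G = rng.binomial(2, 0.3, size=(1000, 5000)).astype(float)

# standardize each SNP (the usual normalization for genetic PCA)
p = G.mean(axis=0) / 2.0
G_std = (G - 2 * p) / np.sqrt(2 * p * (1 - p))

pcs = PCA(n_components=2).fit_transform(G_std)
# with real genotypes, PC1 recovers the strong N-S cline and PC2 the
# much weaker E-W axis described above
```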
The Loveless (free now on Amazon Prime) was the first film directed by Kathryn Bigelow (Point Break, Zero Dark Thirty) and also the first film role for a young Willem Dafoe. Dafoe has more leading-man star power in this role than in most of his subsequent work.
The Loveless was shot in 22 days, when Bigelow was fresh out of Columbia film school. The movie could be characterized as a biker art film with some camp elements, but its overall mood is fairly dark and nihilistic. The video above is a fan mash-up of The Loveless and Bruce Springsteen's Born to Run. It works well on its own terms, although Born to Run is more romantic than nihilistic, at least musically. The lyrics by themselves, however, fit the film rather well.
Born To Run
Bruce Springsteen
In the day we sweat it out on the streets of a runaway American dream
At night we ride through the mansions of glory in suicide machines
Sprung from cages out on highway nine,
Chrome wheeled, fuel injected, and steppin' out over the line
H-Oh, Baby this town rips the bones from your back
It's a death trap, it's a suicide rap
We gotta get out while we're young
`Cause tramps like us, baby we were born to run
Yes, girl we were
Wendy let me in I wanna be your friend
I want to guard your dreams and visions
Just wrap your legs 'round these velvet rims
And strap your hands 'cross my engines
Together we could break this trap
We'll run till we drop, baby we'll never go back
H-Oh, Will you walk with me out on the wire
`Cause baby I'm just a scared and lonely rider
But I gotta know how it feels
I want to know if love is wild
Babe I want to know if love is real
Oh, can you show me
Beyond the Palace hemi-powered drones scream down the boulevard
Girls comb their hair in rearview mirrors
And the boys try to look so hard
The amusement park rises bold and stark
Kids are huddled on the beach in a mist
I wanna die with you Wendy on the street tonight
In an everlasting kiss
One, two, three, four
The highway's jammed with broken heroes on a last chance power drive
Everybody's out on the run tonight
But there's no place left to hide
Together Wendy we can live with the sadness
I'll love you with all the madness in my soul
H-Oh, Someday girl I don't know when
We're gonna get to that place
Where we really wanna go
And we'll walk in the sun
But till then tramps like us
Baby we were born to run
Oh honey, tramps like us
Baby we were born to run
Come on with me, tramps like us
Baby we were born to run
These neural nets reached super-human (better than an average human) performance on tasks requiring relational reasoning. See the short video for examples.
Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Timothy Lillicrap (Submitted on 5 Jun 2017)
Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
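The core of the architecture is a single equation, RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) ): a small network g scores every pair of objects and f aggregates the summed pair representations. A minimal PyTorch sketch (layer sizes and MLP depths here are arbitrary choices, not the paper's exact hyperparameters):

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) )."""
    def __init__(self, obj_dim, hidden=256, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                    # objects: (batch, n, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1).reshape(b, n * n, 2 * d)
        return self.f(self.g(pairs).sum(dim=1))    # sum over all object pairs

out = RelationNetwork(obj_dim=32)(torch.randn(4, 8, 32))   # -> shape (4, 10)
```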
Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-) Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.
In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?
There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
Romero is 40 years old! He is a former World Champion and Olympic silver medalist for Cuba in freestyle wrestling. Watch the video -- it's great! :-)
He lost a close championship fight yesterday in the UFC at 185 lbs. The guy he lost to, Robert Whittaker, is a young talent and a class act. It's been said that Romero relies too much on athleticism and doesn't fight smart (this goes back to his wrestling days). He should have attacked Whittaker more ruthlessly after he hurt Whittaker's knee early in the fight with a kick.
I'm old enough to have been aware of Donald Trump since before the publication of Art of the Deal in 1987. In these decades, during which he was one of the best known celebrities in America, he was largely regarded as a progressive New Yorker, someone who could easily pass as a rich Democrat. Indeed, he was friendly with the Clintons -- Ivanka and Chelsea are good friends. There were no accusations of racism, and he enjoyed an 11-year run (2004-2015) on The Apprentice. No one would have doubted for a second that he was an American patriot, the least likely stooge for Russia or the USSR. I say all this to remind people that the image of Trump promulgated by the media and his other political enemies since he decided to run for President is entirely a creation of the last year or two.
If you consider yourself a smart person, a rational person, an evidence-driven person, you should reconsider whether 30+ years of reporting on Trump is more likely to be accurate (during this time he was a public figure, major celebrity, and tabloid fodder: subject to intense scrutiny), or 1-2 years of heavily motivated fake news.
In the article below, Politico considers the very real possibility that Trump could have run, and won, as a Democrat. If you're a HATE HATE HATE NEVER NEVER TRUMP person, think about that for a while.
Politico: ... Could Trump have done to the Democrats in 2016 what he did to the Republicans? Why not? There, too, he would have challenged an overconfident, message-challenged establishment candidate (Hillary Clinton instead of Jeb Bush) and with an even smaller number of other competitors to dispatch. One could easily see him doing as well or better than Bernie Sanders—surprising Clinton in the Iowa caucuses, winning the New Hampshire primaries, and on and on. More to the point, many of Trump’s views—skepticism on trade, sympathetic to Planned Parenthood, opposition to the Iraq war, a focus on blue-collar workers in Rust Belt America—seemed to gel as well, if not better, with blue-state America than red. Think the Democrats wouldn’t tolerate misogynist rhetoric and boorish behavior from their leaders? Well, then you’ve forgotten about Woodrow Wilson and John F. Kennedy and LBJ and the last President Clinton.
There are, as with every what-if scenario, some flaws. Democrats would have deeply resented Trump's 'birther' questioning of Barack Obama's origins, and would have been highly skeptical of the former reality TV star's political bona fides even if he hadn't made a sharp turn to the right as he explored a presidential bid in the run-up to the 2012 election. His comments on women and minorities would have exposed him to withering scrutiny among the left's army of advocacy groups. Liberal donors would likely have banded together to strangle his candidacy in its cradle—if they weren't laughing him off. But Republican elites tried both of these strategies in 2015, as well, and it manifestly didn't work. What's more, Trump did once hold a passel of progressive stances—and he had friendships all over the political map. As Bloomberg's Josh Green notes, in his Apprentice days, Trump was even wildly popular among minorities. It's not entirely crazy to imagine him outflanking a coronation-minded Hillary Clinton on the left and blitzing a weak Democratic field like General Sherman marching through Georgia. ...
I voted twice for Bill Clinton and twice for Obama. Listen carefully: their positions on immigration, as expressed below, do not differ much in substance from Trump's.
The American Journal of Human Genetics 101, 5–22, July 6, 2017
DOI: http://dx.doi.org/10.1016/j.ajhg.2017.06.005
Peter M. Visscher, Naomi R. Wray, Qian Zhang, Pamela Sklar, Mark I. McCarthy, Matthew A. Brown, and Jian Yang
Application of the experimental design of genome-wide association studies (GWASs) is now 10 years old (young), and here we review the remarkable range of discoveries it has facilitated in population and complex-trait genetics, the biology of diseases, and translation toward new therapeutics. We predict the likely discoveries in the next 10 years, when GWASs will be based on millions of samples with array data imputed to a large fully sequenced reference panel and on hundreds of thousands of samples with whole-genome sequencing data.
Background
Five years ago, a number of us reviewed (and gave our opinion on) the first 5 years of discoveries that came from the experimental design of the GWAS [1]. That review sought to set the record straight on the discoveries made by GWASs because at that time, there was still a level of misunderstanding and distrust about the purpose of and discoveries made by GWASs. There is now much more acceptance of the experimental design because the empirical results have been robust and overwhelming, as reviewed here.
... GWAS results have now been reported for hundreds of complex traits across a wide range of domains, including common diseases, quantitative traits that are risk factors for disease, brain imaging phenotypes, genomic measures such as gene expression and DNA methylation, and social and behavioral traits such as subjective well-being and educational attainment. About 10,000 strong associations have been reported between genetic variants and one or more complex traits [10], where "strong" is defined as statistically significant at the genome-wide p value threshold of 5 × 10⁻⁸, excluding other genome-wide-significant SNPs in LD (r² > 0.5) with the strongest association (Figure 2). GWAS associations have proven highly replicable, both within and between populations [11, 12], under the assumption of adequate sample sizes.
One unambiguous conclusion from GWASs is that for almost any complex trait that has been studied, many loci contribute to standing genetic variation. In other words, for most traits and diseases studied, the mutational target in the genome appears large so that polymorphisms in many genes contribute to genetic variation in the population. This means that, on average, the proportion of variance explained at the individual variants is small. Conversely, as predicted previously [1, 13], this observation implies that larger experimental sample sizes will lead to new discoveries, and that is exactly what has occurred over the last decade. ...
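An aside on the threshold quoted above: P < 5 × 10⁻⁸ is just a Bonferroni correction of the usual 0.05 for roughly a million independent common variants in the genome:

```python
alpha, n_independent_tests = 0.05, 1_000_000
print(alpha / n_independent_tests)   # 5e-08, i.e. P < 5 x 10^-8
```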
This is the best technical summary of the Los Alamos component of the Manhattan Project that I know of. It includes, for example, detail about the hydrodynamical issues that had to be overcome for successful implosion. That work drew heavily on von Neumann's expertise in shock waves, explosives, numerical solution of hydrodynamic partial differential equations, etc. A visit by G.I. Taylor alerted the designers to the possibility of instabilities in the shock front (Rayleigh–Taylor instability). Concern over these instabilities led to the solid-core design known as the Christy Gadget.
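For reference, the textbook linear growth rate of a Rayleigh-Taylor unstable interface, for a perturbation of wavenumber k between fluids of densities ρ₁ < ρ₂ under effective acceleration g (a reminder of why small asymmetries in the converging implosion were so dangerous: short-wavelength ripples grow exponentially):

```latex
\sigma = \sqrt{A\,k\,g}, \qquad A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}
```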
... Unlike earlier histories of Los Alamos, this book treats in detail the research and development that led to the implosion and gun weapons; the research in nuclear physics, chemistry, and metallurgy that enabled scientists to design these weapons; and the conception of the thermonuclear bomb, the "Super." Although fascinating in its own right, this story has particular interest because of its impact on subsequent developments. Although many books examine the implications of Los Alamos for the development of a nuclear weapons culture, this is the first to study its role in the rise of the methodology of "big science" as carried out in large national laboratories.
... The principal reason that the technical history of Los Alamos has not yet been written is that even today, after half a century, much of the original documentation remains classified. With cooperation from the Los Alamos Laboratory, we received authorization to examine all the relevant documentation. The book then underwent a classification review that resulted in the removal from this edition of all textual material judged sensitive by the Department of Energy and all references to classified documents. (For this reason, a number of quotations appear without attribution.) However, the authorities removed little information. Thus, except for a small number of technical facts, this account represents the complete story. In every instance the deleted information was strictly technical; in no way has the Los Alamos Laboratory or the Department of Energy attempted to shape our interpretations. This is not, therefore, a "company history"; throughout the research and writing, we enjoyed intellectual freedom.
... Scientific research was an essential component of the new approach: the first atomic bombs could not have been built by engineers alone, for in no sense was developing these bombs an ordinary engineering task. Many gaps existed in the scientific knowledge needed to complete the bombs. Initially, no one knew whether an atomic weapon could be made. Furthermore, the necessary technology extended well beyond the "state of the art." Solving the technical problems required a heavy investment in basic research by top-level scientists trained to explore the unknown - scientists like Hans Bethe, Richard Feynman, Rudolf Peierls, Edward Teller, John von Neumann, Luis Alvarez, and George Kistiakowsky. To penetrate the scientific phenomena required a deep understanding of nuclear physics, chemistry, explosives, and hydrodynamics. Both theoreticians and experimentalists had to push their scientific tools far beyond their usual capabilities. For example, methods had to be developed to carry out numerical hydrodynamics calculations on a scale never before attempted, and experimentalists had to expand the sensitivity of their detectors into qualitatively new regimes.
... American physics continued to prosper throughout the 1920s and 1930s, despite the Depression. Advances in quantum theory stimulated interest in the microscopic structure of matter, and in 1923 Robert Millikan of Caltech was awarded the Nobel Prize for his work on electrons. In the 1930s and 1940s, Oppenheimer taught quantum theory to large numbers of students at the Berkeley campus of the University of California as well as at Caltech. Also at Berkeley in the 1930s and 1940s, the entrepreneurial Lawrence gathered chemists, engineers, and physicists together in a laboratory where he built a series of ever-larger cyclotrons and led numerous projects in nuclear chemistry, nuclear physics, and medicine. By bringing together specialists from different fields to work cooperatively on large common projects, Lawrence helped to create a distinctly American collaborative research endeavor - centered on teams, as in the industrial research laboratories, but oriented toward basic studies without immediate application. This approach flourished during World War II.
The excerpt below is from a recent comment thread, arguing that the US Navy should de-emphasize carrier groups in favor of subs and smaller surface ships. Technological trends such as rapid advancement in machine learning (ML) and sensors will render carriers increasingly vulnerable to missile attack in the coming decades.
1. US carriers are very vulnerable to *conventional* Russian and PRC missile (cruise, ASBM) weapons.
2. Within ~10y (i.e., well within projected service life of US carriers) I expect missile systems of the type currently only possessed by Russia and PRC to be available to lesser powers. I expect that a road-mobile ASBM weapon with good sensor/ML capability, range ~1500km, will be available for ~$10M. Given a rough (~10km accuracy) fix on a carrier, this missile will be able to arrive in that area and then use ML/sensors for final targeting. There is no easy defense against such weapons. Cruise missiles which pose a similar threat will also be exported. This will force the US to be much more conservative in the use of its carriers, not just against Russia and PRC, but against smaller countries as well.
Given 1. and 2. my recommendation is to decrease the number of US carriers and divert the funds into smaller missile ships, subs, drones, etc. Technological trends simply do not favor carriers as a weapon platform.
Basic missile technology is old, well-understood, and already inexpensive (compared, e.g., to the cost of fighter jets). ML/sensor capability is evolving rapidly and will be enormously better in 10y. Imagine a Mach 5 AI kamikaze easily able to locate a carrier from 10km distance (on a clear day there are no countermeasures against visual targeting using the equivalent of a cheap iPhone camera -- i.e., robot pilot looks down at the ocean to find carrier), and capable of maneuver. Despite BS claims over the years (and over $100B spent by the US), anti-missile technology is not effective, particularly against fast-moving ballistic missiles.
One only has to localize the carrier to within a few tens of km for the initial launch, letting the smart final targeting do the rest. The initial targeting location could be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, or even (surreptitious) cooperation from Russia/PRC (or a commercial vendor!) via their satellite networks.
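Some back-of-envelope numbers behind the visual-targeting claim (all values below are rough assumptions for illustration, not specifications of any actual system):

```python
import math

carrier_length_m = 333.0      # Nimitz-class flight deck, roughly
slant_range_m = 10_000.0      # final-targeting distance discussed above

angle_rad = carrier_length_m / slant_range_m
print(f"carrier subtends ~{angle_rad * 1e3:.0f} mrad at 10 km")   # ~33 mrad

# a cheap ~4000-pixel-wide sensor with a 60-degree field of view
rad_per_pixel = math.radians(60) / 4000
print(f"carrier spans ~{angle_rad / rad_per_pixel:.0f} pixels")   # ~130 pixels

# reaction time available to defenders once the missile is 30 km out at Mach 5
mach5_m_per_s = 5 * 343.0
print(f"~{30_000 / mach5_m_per_s:.0f} s from 30 km to impact")    # ~17 s
```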
... the Navy plans to modernize its carrier program by launching a new wave of even larger and more expensive ships, starting with the USS Gerald Ford, which cost $15 billion to build — by far the most expensive vessel in naval history. This is a mistake: Because of changes in warfare and technology, in any future military entanglement with a foe like China, current carriers and their air wings will be almost useless and the next generation may fare even worse.
... most weapons platforms are effective for only a limited time, an interval that gets shorter as history progresses. But until the past few years, the carrier had defied the odds, continuing to demonstrate America’s military might around the world without any challenge from our enemies. That period of grace may have ended as China and Russia are introducing new weapons — called “carrier killer” missiles — that cost $10 million to $20 million each and can target the U.S.’s multibillion-dollar carriers up to 900 miles from shore.
... The average cost of each of the 10 Nimitz class carriers was around $5 billion. When the cost of new electrical systems is factored in, the USS Ford cost three times as much and took five years to build. With the deficit projected to rise considerably over the next decade, defense spending is unlikely to receive a significant bump. Funding these carriers will crowd out spending on other military priorities, like the replacement of the Ohio class ballistic missile submarine, perhaps the most survivable and important leg of our strategic deterrent triad. There simply isn’t room to fund an aircraft carrier that costs the equivalent of the entire Navy shipbuilding budget.
... The Navy’s decision on the carriers today will affect U.S. naval power for decades. These carriers are expected to be combat effective in 2065 — over 150 years since the idea of an aircraft carrier was first conceived. ...