Saturday, May 08, 2021

Three Thousand Years and 115 Generations of 徐 (Hsu / Xu)

Over the years I have discussed economic historian Greg Clark's groundbreaking work on the persistence of social class. Clark found that intergenerational social mobility was much less than previously thought, and that intergenerational correlations on traits such as education and occupation were consistent with predictions from an additive genetic model with a high degree of assortative mating. 

See Genetic correlation of social outcomes between relatives (Fisher 1918) tested using lineage of 400k English individuals, and further links therein. Also recommended: this recent podcast interview Clark did with Razib Khan. 

The other day a reader familiar with Clark's work asked me about my family background. Obviously my own family history does not inform Clark's work, being only a single example. Nevertheless it provides an interesting microcosm of the tumult of China in the 20th century and a window into the deep past...

I described my father's background in the post Hsu Scholarship at Caltech:
Cheng Ting Hsu was born December 1, 1923 in Wenling, Zhejiang province, China. His grandfather, Zan Yao Hsu, was a poet and doctor of Chinese medicine. His father, Guang Qiu Hsu, graduated from college in the 1920s and was an educator, lawyer, and poet. 
Cheng Ting was admitted at age 16 to the elite National Southwest Unified University (Lianda), which was created during WWII by merging Tsinghua, Beijing, and Nankai Universities. This university produced numerous famous scientists and scholars, such as the physicists C.N. Yang and T.D. Lee. 
Cheng Ting studied aerospace engineering (originally part of Tsinghua), graduating in 1944. He became a research assistant at China's Aerospace Research Institute and a lecturer at Sichuan University. He also taught aerodynamics for several years to advanced students at the air force engineering academy. 
In 1946 he was awarded one of only two Ministry of Education fellowships in his field to pursue graduate work in the United States. In 1946-1947 he published a three-volume book, co-authored with Professor Li Shoutong, on the structures of thin-walled airplanes. 
In January, 1948, he left China by ocean liner, crossing the Pacific and arriving in San Francisco. ...
My mother's father was a KMT general, and her family was related to Chiang Kai-shek by marriage. Both my grandfather and Chiang attended the military academy Shinbu Gakko in Tokyo. When the KMT lost to the communists, her family fled China and arrived in Taiwan in 1949. 

My father's family remained mostly in Zhejiang and suffered through the Cultural Revolution. 

When I met my uncle (a retired Tsinghua professor) and some of my cousins in 2010, they gave me a two-volume family history that had originally been printed in the 1930s. The Hsu (Xu) lineage began in the 10th century BC and continued to my father, in the 113th generation. His entry is the bottom photo below.
Wikipedia: The State of Xu (Chinese: 徐), also called Xu Rong (徐戎) or Xu Yi (徐夷) by its enemies, was an independent Huaiyi state of the Chinese Bronze Age that was ruled by the Ying family (嬴) and controlled much of the Huai River valley for at least two centuries. It was centered in northern Jiangsu and Anhui. ...

Generations 114 and 115:


Two-volume history of the Hsu (Xu) family, beginning in the 10th century BC:



Sunday, May 02, 2021

40 Years of Quantum Computation and Quantum Information


This is a great article on the 1981 conference which, one could say, gave birth to quantum computing and quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders. 
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role. 
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War. 
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time. 
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ... 
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles. 
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves. 
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics. 
[ Note to reader: the discussion in the last few sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states, which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed, quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ] 
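
As a rough numerical illustration of the point in the note above (my own numbers, not from the article): the state of n entangled qubits is specified by 2^n complex amplitudes, so merely storing it, let alone evolving it, rapidly becomes impossible on classical hardware:

# Rough illustration: memory needed just to store the state vector of an
# n-qubit system on a classical machine (16 bytes per complex amplitude,
# 2^n amplitudes in total).
def state_vector_gigabytes(n_qubits: int) -> float:
    return 16 * 2 ** n_qubits / 1e9

for n in (10, 30, 50):
    print(f"{n} qubits: {state_vector_gigabytes(n):.3g} GB")
# Output: ~1.64e-05 GB, ~17.2 GB, ~1.8e+07 GB (about 18 petabytes) respectively.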

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...

Feynman's 1981 lecture Simulating Physics With Computers.

Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert Space (the state space of quantum mechanics) itself may be discrete.



Related: 


My paper on the Margolus-Levitin Theorem in light of gravity: 

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle. 
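
As a back-of-the-envelope illustration of the bound quoted above (my own arithmetic, and it assumes the statement "operations per unit time bounded by the cube root of V in Planck units" is meant per Planck time), consider a device of volume one cubic centimeter:

planck_length = 1.6e-35   # meters
planck_time = 5.4e-44     # seconds

# Volume of a 1 cm^3 device, measured in Planck volumes.
V = (1e-2 / planck_length) ** 3          # ~2.4e98
ops_per_planck_time = V ** (1.0 / 3.0)   # cube root of V, ~6e32
ops_per_second = ops_per_planck_time / planck_time   # ~1e76

print(f"V = {V:.1e} Planck volumes -> at most ~{ops_per_second:.1e} ops per second")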

Participants in the 1981 meeting:
 

Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremermann, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stan Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichniac, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)

Wednesday, April 28, 2021

Let The Bodies Pile High In Their Thousands (Boris Johnson)



In the UK:
Recording a conversation in secret is not a criminal offence and is not prohibited. As long as the recording is for personal use you don’t need to obtain consent or let the other person know.
The security man in the foyer of No 10 Downing Street asks that you turn off your phone and deposit it in a wooden cubby shelf built into the wall. I sometimes wondered what the odds were that someone might walk out with my phone -- a disaster, obviously.

But it is not difficult to keep your phone, as close attention is not paid. (Or, one could enter with more than one phone.) I'm not saying I have ever disobeyed the rules, but I know that it is possible. 

Of course the No 10 staffers all have their phones, which are necessary for their work throughout the day. Thus every meeting at the heart of British government is in danger of being surreptitiously but legally recorded.
Dominic Cummings 'has audio recordings of key government conversations', ally claims (Daily Mail):
Dominic Cummings 'has audio recordings of key government conversations' and 'can back up a lot of his claims', ally of the former chief adviser says. 
Dominic Cummings kept audio recordings of key conversations, an ally claims. The former chief adviser is locked in an explosive war of words with Boris Johnson. 
A Whitehall source said officials did not know the extent of the material Mr Cummings has. 
Dominic Cummings kept audio recordings of key conversations in government, an ally claimed last night. The former chief adviser is locked in an explosive war of words with Boris Johnson after Downing Street accused him of a string of damaging leaks. 
No 10 attempted to rubbish his claims on Friday night, saying it was not true that the Prime Minister had discussed ending a leak inquiry after a friend of his fiance Carrie Symonds was identified as the likely suspect. But an ally of Mr Cummings said the PM's former chief adviser had taken a treasure trove of material with him when he left Downing Street last year, including audio recordings of discussions with senior ministers and officials. 
'Dom has stuff on tape,' the ally said. 'They are mad to pick a fight with him because he will be able to back up a lot of his claims.'
Dom is an admirer of Bismarck. Never underestimate him.
"With a gentleman I am always a gentleman and a half, and when I have to do with a pirate, I try to be a pirate and a half."
Tories scramble to defend Johnson: Politics Weekly podcast (Guardian)

Note the media have no idea what is really going on, as usual.

Friday, April 23, 2021

How a Physicist Became a Climate Truth Teller: Steve Koonin

 

I read an early draft of Koonin's new book discussed in the WSJ article excerpted below, and I highly recommend it. 


Video above is from a 2019 talk discussed in this earlier post: Certainties and Uncertainties in our Energy and Climate Futures: Steve Koonin.
My own views (consistent, as far as I can tell, with what Steve says in the talk): 
1. Evidence for recent warming (~1 degree C) is strong. 
2. There exist previous eras of natural (non-anthropogenic) global temperature change of similar magnitude to what is happening now. 
3. However, it is plausible that at least part of the recent temperature rise is due to increase of atmospheric CO2 due to human activity. 
4. Climate models still have significant uncertainties. While the direct effect of CO2 IR absorption is well understood, second order effects such as clouds and the distribution of water vapor in the atmosphere are not under good control. The predicted temperature increase from a doubling of atmospheric CO2 is still uncertain to a factor of 2-3, and at the low end (e.g., 1.5 degrees C) is not catastrophic. The direct effect of CO2 absorption is modest, sitting at the low end (~1 degree C) of current consensus model predictions; see the back-of-envelope estimate after this list. Potentially catastrophic outcomes are due to second order effects that are not under good theoretical or computational control. 
5. Even if a catastrophic outcome is only a low probability tail risk, it is prudent to explore technologies that reduce greenhouse gas production. 
6. A Red Team exercise, properly done, would clarify what is certain and uncertain in climate science. 
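
For point 4, here is the standard back-of-envelope estimate of the direct (no-feedback) effect. The sketch below uses the commonly quoted forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998) and a Planck response of roughly 3.2 W/m^2 per degree C; these particular numbers are my own illustration, not taken from Koonin's book or talk:

import math

# Commonly quoted approximation (Myhre et al. 1998) for CO2 radiative forcing:
#   delta_F ≈ 5.35 * ln(C / C0)   [W/m^2]
delta_F = 5.35 * math.log(2.0)        # forcing from a doubling of CO2, ~3.7 W/m^2

# No-feedback ("Planck") response of the climate system, ~3.2 W/m^2 per degree C.
planck_response = 3.2                 # W/m^2 per K

delta_T_direct = delta_F / planck_response   # ~1.2 C: the modest direct effect
print(f"Forcing: {delta_F:.2f} W/m^2, no-feedback warming: {delta_T_direct:.2f} C")
# Feedbacks (water vapor, clouds, ...) are what push model estimates higher, and
# they are the main source of the factor of 2-3 uncertainty mentioned in point 4.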
Simply stating these views can get you attacked by crazy people.
Buy Steve's book for an accessible and fairly non-technical explanation of these points.
WSJ: ... Barack Obama is one of many who have declared an “epistemological crisis,” in which our society is losing its handle on something called truth. 
Thus an interesting experiment will be his and other Democrats’ response to a book by Steven Koonin, who was chief scientist of the Obama Energy Department. Mr. Koonin argues not against current climate science but that what the media and politicians and activists say about climate science has drifted so far out of touch with the actual science as to be absurdly, demonstrably false. 
This is not an altogether innocent drifting, he points out in a videoconference interview from his home in Cold Spring, N.Y. In 2019 a report by the presidents of the National Academies of Sciences claimed the “magnitude and frequency of certain extreme events are increasing.” The United Nations Intergovernmental Panel on Climate Change, which is deemed to compile the best science, says all such claims should be treated with “low confidence.” 
... Mr. Koonin, 69, and I are of one mind on 2018’s U.S. Fourth National Climate Assessment, issued in Donald Trump’s second year, which relied on such overegged worst-case emissions and temperature projections that even climate activists were abashed (a revolt continues to this day). “The report was written more to persuade than to inform,” he says. “It masquerades as objective science but was written as—all right, I’ll use the word—propaganda.” 
Mr. Koonin is a Brooklyn-born math whiz and theoretical physicist, a product of New York’s selective Stuyvesant High School. His parents, with less than a year of college between them, nevertheless intuited in 1968 exactly how to handle an unusually talented and motivated youngster: You want to go cross the country to Caltech at age 16? “Whatever you think is right, go ahead,” they told him. “I wanted to know how the world works,” Mr. Koonin says now. “I wanted to do physics since I was 6 years old, when I didn’t know it was called physics.” 
He would teach at Caltech for nearly three decades, serving as provost in charge of setting the scientific agenda for one of the country’s premier scientific institutions. Along the way he opened himself to the world beyond the lab. He was recruited at an early age by the Institute for Defense Analyses, a nonprofit group with Pentagon connections, for what he calls “national security summer camp: meeting generals and people in congress, touring installations, getting out on battleships.” The federal government sought “engagement” with the country’s rising scientist elite. It worked. 
He joined and eventually chaired JASON, an elite private group that provides classified and unclassified advisory analysis to federal agencies. (The name isn’t an acronym and comes from a character in Greek mythology.) He got involved in the cold-fusion controversy. He arbitrated a debate between private and government teams competing to map the human genome on whether the target error rate should be 1 in 10,000 or whether 1 in 100 was good enough. 
He began planting seeds as an institutionalist. He joined the oil giant BP as chief scientist, working for John Browne, now Baron Browne of Madingley, who had redubbed the company “Beyond Petroleum.” Using $500 million of BP’s money, Mr. Koonin created the Energy Biosciences Institute at Berkeley that’s still going strong. Mr. Koonin found his interest in climate science growing, “first of all because it’s wonderful science. It’s the most multidisciplinary thing I know. It goes from the isotopic composition of microfossils in the sea floor all the way through to the regulation of power plants.” 
From deeply examining the world’s energy system, he also became convinced that the real climate crisis was a crisis of political and scientific candor. He went to his boss and said, “John, the world isn’t going to be able to reduce emissions enough to make much difference.” 
Mr. Koonin still has a lot of Brooklyn in him: a robust laugh, a gift for expression and for cutting to the heart of any matter. His thoughts seem to be governed by an all-embracing realism. Hence the book coming out next month, Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.
Any reader would benefit from its deft, lucid tour of climate science, the best I’ve seen. His rigorous parsing of the evidence will have you questioning the political class’s compulsion to manufacture certainty where certainty doesn’t exist. You will come to doubt the usefulness of centurylong forecasts claiming to know how 1% shifts in variables will affect a global climate that we don’t understand with anything resembling 1% precision. ...

Note Added from comments:

If you're older, like Koonin or myself, you can remember a time when climate change was entirely devoid of tribal associations -- it was not in the political domain at all. It is easier for us just to concentrate on where the science is, and indeed we can remember where it was in the 1990s or 2000s.

Koonin was MUCH more concerned about alternative energy and climate than the typical scientist, and that was part of his motivation for supporting the Berkeley Energy Biosciences Institute, created in 2007. The fact that it was a $500M partnership between Berkeley and BP was a big deal and much debated at the time, but there was never any evidence that the science they did was negatively impacted. 

It is IRONIC that his focus on scientific rigor now gets him labeled as a climate denier (or sympathetic to the "wrong" side). ALL scientists should be sceptical, especially about claims regarding long term prediction in complex systems.

Contrast the uncertainty estimates in the IPCC reports (which are not defensible and did not change for ~20y!) vs the (g-2) anomaly that was in the news recently.

When I was at Harvard the physics department and applied science and engineering school shared a coffee lounge. I used to sit there and work in the afternoon and it happened that one of the climate modeling labs had their group meetings there. So for literally years I overheard their discussions about uncertainties concerning water vapor, clouds, etc. which to this day are not fully under control. This is illustrated in Fig1 at the link: https://infoproc.blogspot.c...

The gap between what real scientists say in private and what the public (or non-specialists) gets second hand through the media or politically-focused "scientific policy reports" is vast...

If you don't think we can have long-lasting public delusions regarding "settled science" (like a decade long stock or real estate bubble), look up nuclear winter, which has a lot of similarities to greenhouse gas-driven climate change. Note, I am not claiming that I know with high confidence that nuclear winter can't happen, but I AM claiming that the confidence level expressed by the climate scientists working on it at the time was absurd and communicated in a grotesquely distorted fashion to political leaders and the general public. Even now I would say the scientific issue is not settled, due to its sheer complexity, which is LESS than the complexity involved in predicting long term climate change!

https://en.wikipedia.org/wi... 

Sunday, April 18, 2021

Francois Chollet - Intelligence and Generalization, Psychometrics for Robots (AI/ML)

 

If you have thought a lot about AI and deep learning you may find much of this familiar. Nevertheless I enjoyed the discussion. Apparently Chollet's views (below) are controversial in some AI/ML communities but I do not understand why. 

Chollet's Abstraction and Reasoning Corpus (ARC) = Raven's Matrices for AIs :-)
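
For readers who have not seen it: an ARC task is a handful of input/output grid pairs from which the solver must infer the transformation and apply it to a fresh input. The toy example below is my own illustration (the grid encoding follows my understanding of the public ARC format, with cells as color indices 0-9); it is not an actual ARC item.

# A toy ARC-style task (illustrative only, not an actual ARC item).
# Each grid is a list of rows; each cell is a color index 0-9.
# Here the hidden rule is simply "recolor every 1 to a 2".
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[0, 2], [2, 0]]},
        {"input": [[1, 1], [0, 0]], "output": [[2, 2], [0, 0]]},
    ],
    "test": [
        {"input": [[0, 0], [0, 1]]},   # expected output: [[0, 0], [0, 2]]
    ],
}

def apply_rule(grid):
    # The rule a solver is supposed to *infer* from the train pairs.
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

for pair in task["train"]:
    assert apply_rule(pair["input"]) == pair["output"]
print(apply_rule(task["test"][0]["input"]))   # [[0, 0], [0, 2]]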
Show Notes: 
...Francois has a clarity of thought that I've never seen in any other human being! He has extremely interesting views on intelligence as generalisation, abstraction and an information conversion ratio. He wrote On the Measure of Intelligence at the end of 2019 and it had a huge impact on my thinking. He thinks that NNs can only model continuous problems, which have a smooth learnable manifold, and that many "type 2" problems which involve reasoning and/or planning are not suitable for NNs. He thinks that many problems have type 1 and type 2 enmeshed together. He thinks that the future of AI must include program synthesis to allow us to generalise broadly from a few examples, but the search could be guided by neural networks because the search space is interpolative to some extent. 
Tim Intro [00:00:00​]
Manifold hypothesis and interpolation [00:06:15​]
Yann LeCun skit [00:07:58​]
Discrete vs continuous [00:11:12​]
NNs are not Turing machines [00:14:18​]
Main show kick-off [00:16:19​]
DNN models are locality-sensitive hash tables and only efficiently encode some kinds of data well [00:18:17​]
Why do natural data have manifolds? [00:22:11​]
Finite NNs are not "Turing complete" [00:25:44​]
The dichotomy of continuous vs discrete problems, and abusing DL to perform the former [00:27:07​]
Reality really annoys a lot of people, and ...GPT-3 [00:35:55​]
There are type one problems and type 2 problems, but...they are enmeshed [00:39:14​]
Chollet's definition of intelligence and how to construct analogy [00:41:45​]
How are we going to combine type 1 and type 2 programs? [00:47:28​]
Will topological analogies be robust and escape the curse of brittleness? [00:52:04​]
Are type 1 and type 2 two different physical systems? Is there a continuum? [00:54:26​]
Building blocks and the ARC Challenge [00:59:05​]
Solve ARC == intelligent? [01:01:31​]
Measure of intelligence formalism -- it's a whitebox method [01:03:50​]
Generalization difficulty [01:10:04​]
Let's create a marketplace of generated intelligent ARC agents! [01:11:54​]
Mapping ARC to psychometrics [01:16:01​]
Keras [01:16:45​]
New backends for Keras? JAX? [01:20:38​]
Intelligence Explosion [01:25:07​]
Bottlenecks in large organizations [01:34:29​]
Summing up the intelligence explosion [01:36:11​]
Post-show debrief [01:40:45​]
This is Chollet's paper which is the focus of much of the discussion.
On the Measure of Intelligence 
François Chollet   
https://arxiv.org/abs/1911.01547 
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
Notes on the paper by Robert Lange (TU-Berlin), including illustrations like the ones below.





Friday, April 16, 2021

Academic Freedom in Crisis: Punishment, Political Discrimination, and Self-Censorship

Last week MSU hosted a virtual meeting on Freedom of Speech and Intellectual Diversity on Campus. I particularly enjoyed several of the talks, including the ones by Randall Kennedy (Harvard), Conor Friedersdorf (The Atlantic), and Cory Clark (UPenn). Clark had some interesting survey data I had never seen before. I hope the video from the meeting will be available soon. 

In the meantime, here are some survey results from Eric Kaufmann (University of London). The full report is available at the link.

In this recent podcast interview Kaufmann discusses the woke takeover of academia and other institutions.

Stylized facts:

1. Academia has always been predominantly left, but has become more and more so over time. This imbalance is stronger in Social Science and Humanities (SSH) than in STEM, but even in STEM the faculty are predominantly left of center relative to the general population.

2. Leftists are becoming more and more intolerant of opposing views.

3. Young academics (PhD students and junior faculty) are the least tolerant of all.


In my opinion the unique importance of research universities originates from their commitment to the search for Truth. This commitment is being supplanted by a focus on social justice, with extremely negative consequences.
 

Figure 1. Note: Excludes STEM academics. Labels refer to hypothetical scenarios in which respondents are asked whether they would support a campaign to dismiss a staff member who found the respective conclusions in their research. Brackets denote sample size.

 

Figure 2. Note: Includes STEM academics. Based on a direct question rather than a concealed list technique.

 

Figure 3. Note: SSH refers to social sciences and humanities. Sample size in brackets. STEM share of survey responses: US and Canada academic: 10%; UK mailout: zero; UK YouGov SSH active: zero; UK YouGov All: 53%; UK PhDs: 55%; North American PhDs: 63%.

Thursday, April 08, 2021

Freedom of Speech and Intellectual Diversity on Campus (MSU virtual conference)

The LeFrak Forum On Science, Reason, and Modern Democracy 
Department of Political Science 
Michigan State University 

Register here!

 
Thursday, April 8 -- Saturday, April 10; on ZOOM 
Conference Program: 
Keynote Address - Thursday, April 8, 
5:00-6:30pm EST 
Randall Kennedy, "The Race Question and Freedom of Expression." 
Randall Kennedy is the Michael R. Klein Professor at Harvard Law School, preeminent authority on the First Amendment in its relation to the American struggle for civil rights.

 

Day One: Intellectual Diversity - Friday, April 9  
11:30am - 1:00pm EST 
Panel 1: What are the empirical facts about lack of intellectual diversity in academia and what are the causes of existing imbalances? 
Paper: Lee Jussim, Distinguished Professor and Chair, Department of Psychology, Rutgers University, author of The Politics of Social Psychology. 
Discussant: Philip Tetlock, Annenberg University Professor, University of Pennsylvania, author of “Why so few conservatives and should we care?” and Cory Clark, Visiting Scholar, Department of Psychology, University of Pennsylvania, author of “Partisan Bias and its Discontents.” 
2:00pm - 3:30pm EST 
Panel 2: In what precise ways and to what degree is this imbalance a problem? 
Paper: Joshua Dunn, Professor and Chair, Department of Political Science, University of Colorado, co-author of Passing on the Right: Conservative Professors in the Progressive University. 
Discussant: Amna Khalid, Associate Professor of History, Carleton College, author of “Not A Vast Right-Wing Conspiracy: Why Left-Leaning Faculty Should Care About Threats to Free Expression on Campus." 
4:00pm - 5:45pm EST 
Panel 3: What is To Be Done? 
Paper: Musa Al-Gharbi, Paul F. Lazarsfeld Fellow in Sociology, Columbia University and Managing Editor, Heterodox Academy, author of “Why Care About Ideological Diversity in Social Research? The Definitive Response.” 
Paper: Conor Friedersdorf, Staff writer at The Atlantic and frequent contributor to its special series “The Speech Wars,” author of “Free Speech Will Survive This Moment.”

 

Day Two: Freedom of Speech - Saturday, April 10 
11:30am - 1:00pm EST 
Panel 1: An empirical accounting of the recent challenges to free speech on campus from left and right. What is the true character of the problem or problems here and do they constitute a “crisis”? 
Paper: Jonathan Marks, Professor and Chair, Department of Politics and International Relations, Ursinus College, author of Let's Be Reasonable: A Conservative Case for Liberal Education. 
Respondent: April Kelly-Woessner, Dean of the School of Public Service and Professor of Political Science at Elizabethtown College, author of The Still Divided Academy 
2:00pm - 3:45pm EST 
Panel 2: But is Free speech, as traditionally interpreted, even the right ideal? -- a Debate 
Ulrich Baer, University Professor of Comparative Literature, German, and English, NYU, author of What Snowflakes Get Right: Free Speech and Truth on Campus 
Keith Whittington, Professor of Politics, Princeton University, author of Speak Freely: Why Universities Must Defend Free Speech. 
4:30pm - 6:15pm EST  
Panel 3: What is To Be Done? 
Paper: Nancy Costello, Associate Clinical Professor of Law, MSU. Founder and Director of the First Amendment Law Clinic -- the only law clinic in the nation devoted to the defense of student press rights. Also, Director of the Free Expression Online Library and Resource Center. 
Paper: Jonathan Friedman, Project Director for campus free speech at PEN America – “a program of advocacy, analysis, and outreach in the national debate around free speech and inclusion at colleges and universities.”

Monday, April 05, 2021

Machine Learning Prediction of Biomarkers from SNPs and of Disease Risk from Biomarkers in the UK Biobank

These new results arose from initial investigations of blood biomarker prediction from DNA. The lipoprotein A predictor we built correlates at almost 0.8 with the measured result, and this agreement would probably be even stronger if day-to-day fluctuations were averaged out. It is the most accurate genomic predictor for a complex trait that we are aware of.

We then became interested in the degree to which biomarkers alone could be used to predict disease risk. Some of the biomarker-based disease risk predictors we built (e.g., for kidney or liver problems) do not, as far as we know, have widely used clinical counterparts. Further research may show that predictors of this kind have broad utility. 

Statistical learning in a space of ~50 biomarkers is considered a "high dimensional" problem from the perspective of medical diagnosis; however, compared to genomic prediction using a million SNP features, it is rather straightforward. 
 
Machine Learning Prediction of Biomarkers from SNPs and of Disease Risk from Biomarkers in the UK Biobank  
Erik Widen, Timothy G. Raben, Louis Lello, Stephen D.H. Hsu 
doi: https://doi.org/10.1101/2021.04.01.21254711 
We use UK Biobank data to train predictors for 48 blood and urine markers such as HDL, LDL, lipoprotein A, glycated haemoglobin, ... from SNP genotype. For example, our predictor correlates ~0.76 with lipoprotein A level, which is highly heritable and an independent risk factor for heart disease. This may be the most accurate genomic prediction of a quantitative trait that has yet been produced (specifically, for European ancestry groups). We also train predictors of common disease risk using blood and urine biomarkers alone (no DNA information). Individuals who are at high risk (e.g., odds ratio of > 5x population average) can be identified for conditions such as coronary artery disease (AUC ~ 0.75), diabetes (AUC ~ 0.95), hypertension, liver and kidney problems, and cancer using biomarkers alone. Our atherosclerotic cardiovascular disease (ASCVD) predictor uses ~10 biomarkers and performs in UKB evaluation as well as or better than the American College of Cardiology ASCVD Risk Estimator, which uses quite different inputs (age, diagnostic history, BMI, smoking status, statin usage, etc.). We compare polygenic risk scores (risk conditional on genotype: (risk score | SNPs)) for common diseases to the risk predictors which result from the concatenation of learned functions (risk score | biomarkers) and (biomarker | SNPs).
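
As a rough sketch of what a biomarker-based risk predictor like those described above looks like in practice, one can fit a regularized classifier on ~50 biomarker features and evaluate it by AUC. This is only an illustration of the general approach using synthetic data; it is not the authors' actual pipeline, and the sample sizes and model choice are stand-ins.

# Minimal sketch (not the paper's pipeline): predict disease status from
# ~50 blood/urine biomarker features and evaluate with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_biomarkers = 5000, 50          # stand-in sizes, not UK Biobank scale
X = rng.normal(size=(n_samples, n_biomarkers))
# Synthetic "disease" driven by a few biomarkers, just to make the example run.
logits = X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(penalty="l2", max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))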

Sunday, April 04, 2021

Inside Huawei, and Wuhan after the pandemic

The first three videos below are episodes of Japanese director Takeuchi Ryo's ongoing series on Huawei. 

Ryo lives in Nanjing and speaks fluent Mandarin. He became famous for his coverage of the lockdown and pandemic in Wuhan. The fourth video below tells the stories of 10 families: how they survived, and how their lives have changed.

The general consensus seems to be that Huawei is 2+ years ahead of its competitors in 5G technology, and has a very deep IP position (patent portfolio) as well. In AI applications my impression is that they are also strong, but not world leaders at the research frontier like Google Brain or DeepMind. Like most Chinese companies, their strength is in practical deployment of systems at scale, not in publishing papers. In smartphones and laptops they compete head to head with Samsung, Apple, etc. in all areas, including chip design. Their HiSilicon subsidiary has designed Kirin CPUs that are on par with the best Qualcomm and Apple competitors used in flagship handsets. However, all three rely on TSMC to fabricate these designs.




Tuesday, March 30, 2021

Future of CRISPR (base & prime) and epigenome editing (Interview with Prof David R. Liu)

 

Excellent interview with David Liu of Harvard, which gives an overview of key innovations in gene editing since the discovery of CRISPR. 

Labs all around the world are busy building new tools and libraries for gene editing, with dramatic progress since CRISPR was first discovered less than 10 years ago.

Liu is optimistic about clinical applications over the next 10 years. He does not discuss germline editing (i.e., of embryos) but one can readily imagine how these advances in technology might be applied there.

Friday, March 26, 2021

John von Neumann, 1966 Documentary

 

This 1966 documentary on von Neumann was produced by the Mathematical Association of America. It includes interviews with Wigner, Ulam, Halmos, Goldstine, and others. 

At ~34m Bethe (leader of the Los Alamos theory division) gives primary credit to vN for the implosion method in fission bombs. While vN's previous work on shock waves and explosive lenses is often acknowledged as important for solving the implosion problem, this is the first time I have seen him given credit for the idea itself. Seth Neddermeyer's Enrico Fermi Award citation gives him credit for "invention of the implosion technique" and the original solid core design was referred to as the "Christy gadget" after Robert Christy. As usual, history is much more complicated than the simplified narrative that becomes conventional.
Teller: He could and did talk to my three-year-old son on his own terms and I sometimes wondered whether his relations to the rest of us were a little bit similar.
A recent application of vN's Quantum Ergodic Theorem: Macroscopic Superposition States in Isolated Quantum Systems.

Cloning vN (science fiction): short story, longer (AI vs genetic engineering).

Thursday, March 25, 2021

Meritocracy x 3

Three videos: 

1. Political philosopher Daniel Bell on PRC political meritocracy. 

2. Documentary on the 2020 Gao Kao: college entrance exam taken by ~11 million kids. 

3. Semiconductor Industry Association panel on PRC push to become self-sufficient in semiconductor technology. 






Sunday, March 21, 2021

The Contribution of Cognitive and Noncognitive Skills to Intergenerational Social Mobility (McGue et al. 2020)

If you have the slightest pretension to expertise concerning social mobility, meritocracy, inequality, genetics, psychology, economics, education, history, or any related subjects, I urge you to carefully study this paper.
The Contribution of Cognitive and Noncognitive Skills to Intergenerational Social Mobility  
(Psychological Science https://doi.org/10.1177/0956797620924677)
Matt McGue, Emily A. Willoughby, Aldo Rustichini, Wendy Johnson, William G. Iacono, James J. Lee 
We investigated intergenerational educational and occupational mobility in a sample of 2,594 adult offspring and 2,530 of their parents. Participants completed assessments of general cognitive ability and five noncognitive factors related to social achievement; 88% were also genotyped, allowing computation of educational-attainment polygenic scores. Most offspring were socially mobile. Offspring who scored at least 1 standard deviation higher than their parents on both cognitive and noncognitive measures rarely moved down and frequently moved up. Polygenic scores were also associated with social mobility. Inheritance of a favorable subset of parent alleles was associated with moving up, and inheritance of an unfavorable subset was associated with moving down. Parents’ education did not moderate the association of offspring’s skill with mobility, suggesting that low-skilled offspring from advantaged homes were not protected from downward mobility. These data suggest that cognitive and noncognitive skills as well as genetic factors contribute to the reordering of social standing that takes place across generations.
From the paper:
We believe that a reasonable explanation of our findings is that the degree to which individuals are more or less skilled than their parents contributes to their upward or downward mobility. Behavioral genetic and genomic research has established the heritability of social achievements (Conley, 2016) as well as the skills thought to underlie them (Bouchard & McGue, 2003). Nonetheless, these associations may be due to passive gene–environment correlation, whereby high-achieving parents both transmit genes and provide a rearing environment that promotes their children’s social success (Scarr & McCartney, 1983). Our within-family design controlled for passive gene–environment correlation effects. Although offspring inherit all of their genes from their parents, they inherit a random subset of parental alleles because of meiotic segregation. Consequently, some offspring inherit a favorable subset of their parents’ alleles, whereas others inherit a less favorable subset. We found, as did previous researchers (Belsky et al., 2018), that the inheritance of a favorable subset of alleles was associated with an increased likelihood of upward mobility... 
...In summary, our analysis of intergenerational social mobility in a sample of 2,594 offspring from 1,321 families found that (a) most individuals were educationally and occupationally mobile, (b) mobility was predicted by offspring–parent differences in skills and genetic endowment, and (c) the relationship of offspring skills with social mobility did not vary significantly by parent social background. In an era in which there is legitimate concern over social stagnation, our findings are noteworthy in identifying the circumstances when parents’ educational and occupational success is not reproduced across generations.

See also Game Over: Genomic Prediction of Social Mobility (PNAS July 9, 2018: 201801238). Both papers provide out of sample validation of polygenic predictors for cognitive ability, specifically of the relationship to intergenerational social mobility.


Thursday, March 18, 2021

Council on Foreign Relations: The Rise and Fall of Great Powers? America, China, and the Global Order

 

Insights from Ray Dalio and Paul Kennedy (The Rise and Fall of the Great Powers, 1987) on the balance of power and future global order. I was in graduate school when Kennedy's book was first published and I still have the hardcover first edition somewhere. Dalio and Kennedy have both carefully studied historical examples and present, in my opinion, a realistic view of what is happening. Kennedy mentions the PRC naval build up as a very explicit, material comparison of strength, whereas Dalio focuses on financial and economic matters. Elizabeth Economy provides some interesting comments on internal Chinese politics, but I am unsure how much insight any US analysts can have into the fine details of this.

The Naval War College Review article mentioned by Paul Kennedy is: 

Related: PRC ASBM Test in South China Sea and links therein.

Panelists discuss the rise and fall of great powers and the competing grand strategies of the United States and China. 
Speakers 
Ray Dalio Founder, Co-chairman, and Co-chief Investment Officer, Bridgewater Associates, LP; Author, The Changing World Order: Why Nations Succeed and Fail 
CFR Member Elizabeth C. Economy Senior Fellow for China Studies, Council on Foreign Relations; Senior Fellow, Hoover Institution, Stanford University; Author, The Third Revolution: Xi Jinping and the New Chinese State; @LizEconomy 
Paul M. Kennedy J. Richardson Dilworth Professor of History and Director of International Security Studies, Yale University; Author, The Rise and Fall of the Great Powers

Bonus! Short WSJ piece on digital RMB rollout. SWIFT beware...

 

Wednesday, March 10, 2021

Academic Freedom Alliance

We live in an era of preference falsification. 

Vocal, dishonest, irrational activists have cowed all but the most courageous of the few remaining serious thinkers, even at our greatest universities. 

I hope the creation of the Academic Freedom Alliance will provide a much needed corrective to the dishonest reign of terror in place today.
Chronicle: When I spoke to the Princeton University legal scholar and political philosopher Robert P. George in August, he offered a vivid zoological metaphor to describe what happens when outrage mobs attack academics. When hunted by lions, herds of zebras “fly off in a million directions, and the targeted member is easily taken down and destroyed and eaten.” A herd of elephants, by contrast, will “circle around the vulnerable elephant.” 
... What had begun as a group of 20 Princeton professors organized to defend academic freedom at one college was rapidly scaling up its ambitions and capacity: It would become a nationwide organization. George had already hired an executive director and secured millions in funding. 
... Today, that organization, the Academic Freedom Alliance, formally issued a manifesto declaring that “an attack on academic freedom anywhere is an attack on academic freedom everywhere,” and committing its nearly 200 members to providing aid and support in defense of “freedom of thought and expression in their work as researchers and writers or in their lives as citizens,” “freedom to design courses and conduct classes using reasonable pedagogical judgment,” and “freedom from ideological tests, affirmations, and oaths.” 
... All members of the alliance have an automatic right for requests for legal aid to be considered, but the organization is also open to considering the cases of faculty nonmembers, university staff, or even students on a case-by-case basis. The alliance’s legal-advisory committee includes well-known lawyers such as Floyd Abrams and the prolific U.S. Supreme Court litigator Lisa S. Blatt. 
When I spoke to him in February, as the date of AFA’s public announcement drew closer, George expressed surprise and satisfaction at the success the organization had found in signing up liberals and progressives. “If anything we’ve gone too far — we’re imbalanced over to the left side of the agenda,” he noted wryly. “That’s because our yield was a little higher than we expected it to be when we got in touch with folks.” 
The yield was higher, as George would learn, quoting one such progressive member, because progressives in academe often feel themselves to be even more closely monitored for ideological orthodoxy by students and activist colleagues than their conservative peers. “‘You conservative guys, people like you and Adrian Vermeule, you think you’re vulnerable. You’re not nearly as vulnerable as we liberals are,’” George quoted this member as saying. “They are absolutely terrified, and they know they can never keep up with the wokeness. What’s OK today is over the line tomorrow, and nobody gave you the memo.” 
George went on to note that some of the progressives he spoke with were indeed too frightened of the very censorious atmosphere that the alliance proposes to challenge to be willing to affiliate with it, at least at the outset. 
... Nadine Strossen, a New York Law School law professor and former president of the ACLU, emphasized the problem of self-censorship that she saw the alliance as counteracting. “When somebody is attacked by a university official or, for lack of a better term, a Twitter mob, there are constant reports from all individuals targeted that they receive so many private communications and emails saying ‘I support you and agree with you, but I just can’t say it publicly.’” 
She hopes that the combined reputations of the organization’s members will provide a permission structure allowing other faculty members to stand up for their private convictions in public. While a lawsuit can vindicate someone’s constitutional or contractual rights, Strossen noted, only a change in the cultural atmosphere around these issues — a preference for open debate and free exchange over stigmatization and punishment as the default way to negotiate controversy in academe — could resolve the overall problem. 
The Princeton University political historian Keith E. Whittington, who is chairman of the alliance’s academic committee, echoed Strossen’s point. The recruitment effort, he said, aimed to gather “people who would be respectable and hopefully influential to college administrators — such that if a group like that came to them and said ‘Look, you’re behaving badly here on these academic-freedom principles,’ this is a group that they might pay attention to.” 
“Administrators feel very buffeted by political pressures, often only from one side,” Whittington told me. “They hear from all the people who are demanding action, and the easiest, lowest-cost thing to do in those circumstances is to go with the flow and throw the prof under the bus. So we do hope that we can help balance that equation a little bit, make it a little more costly for administrators.” ...
Perhaps amusingly, I am one of the progressive founding members of AFA. At least, I have for most of my life been politically to the left of Robby George and many of the original Princeton 20 that started the project. 

When I left the position of Senior Vice-President for Research and Innovation at MSU last summer, I wrote:
6. Many professors and non-academics who supported me were afraid to sign our petition -- they did not want to be subject to mob attack. We received many communications expressing this sentiment. 
7. The victory of the twitter mob will likely have a chilling effect on academic freedom on campus.

For another vivid example of the atmosphere on US university campuses, see Struggles at Yale.  

Obama on political correctness:
... I’ve heard some college campuses where they don’t want to have a guest speaker who is too conservative or they don’t want to read a book if it has language that is offensive to African-Americans or somehow sends a demeaning signal towards women. I gotta tell you, I don’t agree with that either. I don’t agree that you, when you become students at colleges, have to be coddled and protected from different points of view. I think you should be able to — anybody who comes to speak to you and you disagree with, you should have an argument with ‘em. But you shouldn’t silence them by saying, "You can’t come because I'm too sensitive to hear what you have to say." That’s not the way we learn ...


Monday, March 08, 2021

Psychology Is: interview with Nick Fortino

 

This is a recent interview. Enjoy!
In episode 14 of the Psychology Is podcast, we have the special opportunity to talk to Dr. Steve Hsu, a physicist, professor at MSU, and founder of Genomic Prediction. We discuss the newest innovations related to genetic testing and editing, including Genomic Prediction and CRISPR. We also discuss what these innovations may make possible (for better or worse), and how we can proceed carefully as we learn to harness this new power.
For more, see this recent review article.

Inside AI/ML: Mark Saroufim

 

Great discussion and insider views of AI/ML research. 
Academics think of themselves as trailblazers, explorers — seekers of the truth. 
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work then it moves from the realm of research to engineering. Unfortunately, this also means that most research careers will invariably be failures, at least if failures are measured via "objective" metrics like citations. 
Today we discuss the recent article from Mark Saroufim called Machine Learning: the great stagnation. We discuss the rise of gentleman scientists, fake rigor, incentives in ML, SOTA-chasing, "graduate student descent", distribution of talent in ML and how to learn effectively.
Topics include: OpenAI, GPT-3, RL: Dota & Starcraft, conference papers, incentives and incremental research, Is there an ML stagnation? Is theory useful? Is ML entirely empirical these days? How to succeed as a researcher, Why everyone is forced to become their own media company, and much more.

If you don't want to watch the video, read these (by Mark Saroufim) instead:

Machine Learning: The Great Stagnation 

Friday, March 05, 2021

Genetic correlation of social outcomes between relatives (Fisher 1918) tested using lineage of 400k English individuals

Greg Clark (UC Davis and London School of Economics) deserves enormous credit for producing a large multi-generational dataset which is relevant to some of the most fundamental issues in social science: inequality, economic development, social policy, wealth formation, meritocracy, and recent human evolution. If you have even a casual interest in the dynamics of human society you should study these results carefully...

See previous discussion on this blog. 

Clark recently posted this preprint on his web page. A book covering similar topics is forthcoming.
For Whom the Bell Curve Tolls: A Lineage of 400,000 English Individuals 1750-2020 shows Genetics Determines most Social Outcomes 
Gregory Clark, University of California, Davis and LSE (March 1, 2021) 
Economics, Sociology, and Anthropology are dominated by the belief that social outcomes depend mainly on parental investment and community socialization. Using a lineage of 402,000 English people 1750-2020 we test whether such mechanisms better predict outcomes than a simple additive genetics model. The genetics model predicts better in all cases except for the transmission of wealth. The high persistence of status over multiple generations, however, would require in a genetic mechanism strong genetic assortment in mating. This has been until recently believed impossible. There is, however, also strong evidence consistent with just such sorting, all the way from 1837 to 2020. Thus the outcomes here are actually the product of an interesting genetics-culture combination.
The correlational results in the table below were originally deduced by Fisher under the assumption of additive genetic inheritance: h2 is heritability, m is assortativity by genotype, and r is assortativity by phenotype. (Assortative mating describes the tendency of husband and wife to resemble each other more than randomly chosen M-F pairs in the general population.)
Fisher, R. A. 1918. “The Correlation between Relatives on the Supposition of Mendelian Inheritance.” Transactions of the Royal Society of Edinburgh, 52: 399-433
Thanks to Clark the predictions of Fisher's models, applied to social outcomes, can now be compared directly to data through many generations and across many branches of English family trees. (Figures below from the paper.)





The additive model fits the data well, but requires high heritabilities h2 and a high level m of assortative mating. Most analysts, including myself, thought that the required values of m were implausibly large. However, using modern genomic datasets one can estimate the level of assortative mating by simply looking at the genotypes of married couples. 
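
For reference, the standard additive-model expressions (as I recall them; see Fisher 1918 and Clark's paper for the exact forms) show why a high m is required. With heritability h2 and spousal genotypic correlation m, the expected phenotypic correlations between relatives are

\rho_{\text{parent--offspring}} = \frac{h^2 (1+m)}{2} , \qquad \rho_{n\ \text{generations apart}} \approx h^2 \left( \frac{1+m}{2} \right)^{n} ,

so status correlations decay geometrically with the number of generations separating the relatives. For example, with h2 = 0.7 and m = 0.6 the grandparent-grandchild correlation is 0.7 x 0.8^2 ≈ 0.45, versus 0.7 x 0.5^2 ≈ 0.18 under random mating. The slow decay Clark observes across many generations therefore requires both h2 and m to be large.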

From the paper:
(p.26) a recent study from the UK Biobank, which has a collection of genotypes of individuals together with measures of their social characteristics, supports the idea that there is strong genetic assortment in mating. Robinson et al. (2017) look at the phenotype and genotype correlations for a variety of traits – height, BMI, blood pressure, years of education - using data from the biobank. For most traits they find as expected that the genotype correlation between the parties is less than the phenotype correlation. But there is one notable exception. For years of education, the phenotype correlation across spouses is 0.41 (0.011 SE). However, the correlation across the same couples for the genetic predictor of educational attainment is significantly higher at 0.654 (0.014 SE) (Robinson et al., 2017, 4). Thus couples in marriage in recent years in England were sorting on the genotype as opposed to the phenotype when it comes to educational status. 
It is not mysterious how this happens. The phenotype measure here is just the number of years of education. But when couples interact they will have a much more refined sense of what the intellectual abilities of their partner are: what is their general knowledge, ability to reason about the world, and general intellectual ability. Somehow in the process of matching modern couples in England are combining based on the weighted sum of a set of variations at several hundred locations on the genome, to the point where their correlation on this measure is 0.65.
Correction: Height, Educational Attainment (EA), and cognitive ability predictors are controlled by many thousands of genetic loci, not hundreds! 
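For concreteness, here is the kind of comparison being made, as a schematic sketch with synthetic placeholder data (the names and arrays below are hypothetical, not the UK Biobank pipeline): a polygenic score is just a weighted sum over many loci, and its spousal correlation can be compared to the spousal correlation of the raw phenotype.

```python
import numpy as np

# Schematic of the comparison described above, with synthetic placeholder data
# (random genotypes and effect sizes -- NOT UK Biobank data). The point is only
# the mechanics of the calculation, not the reported numbers.

rng = np.random.default_rng(0)
n_couples, n_loci = 2_000, 1_000

def polygenic_score(genotypes, weights):
    # genotypes: (n_people, n_loci) allele counts in {0, 1, 2}
    # weights:   (n_loci,) per-allele effect sizes from a GWAS
    return genotypes @ weights

def spousal_correlation(a, b):
    # correlation of a measure across husband/wife pairs
    return np.corrcoef(a, b)[0, 1]

# synthetic stand-ins for real data
geno_husband = rng.integers(0, 3, size=(n_couples, n_loci))
geno_wife    = rng.integers(0, 3, size=(n_couples, n_loci))
ea_weights   = rng.standard_normal(n_loci) * 1e-3
edu_husband  = rng.normal(14, 3, n_couples)   # years of education
edu_wife     = rng.normal(14, 3, n_couples)

print("phenotype spousal corr:", spousal_correlation(edu_husband, edu_wife))
print("genotype  spousal corr:", spousal_correlation(
    polygenic_score(geno_husband, ea_weights),
    polygenic_score(geno_wife, ea_weights)))
```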


This is a 2018 talk by Clark which covers most of what is in the paper.



For out of sample validation of the Educational Attainment (EA) polygenic score, see Game Over: Genomic Prediction of Social Mobility.

 

Saturday, February 27, 2021

Infinity and Solipsism, Physicists and Science Fiction

The excerpt below is from Roger Zelazny's Creatures of Light and Darkness (1969), an experimental novel which is somewhat obscure, even to fans of Zelazny. 
Positing infinity, the rest is easy. 
The Prince Who Was A Thousand is ... a teleportationist, among other things ... the only one of his kind. He can transport himself, in no time at all, to any place that he can visualize. And he has a very vivid imagination. 
Granting that any place you can think of exists somewhere in infinity, if the Prince can think of it too, he is able to visit it. Now, a few theorists claim that the Prince’s visualizing a place and willing himself into it is actually an act of creation. No one knew about the place before, and if the Prince can find it, then perhaps what he really did was make it happen. However, positing infinity, the rest is easy.
This already contains the central idea that is expressed more fully in Nine Princes in Amber and subsequent books in that series.
While traveling (shifting) between Shadows, [the prince] can alter reality or create a new reality by choosing which elements of which Shadows to keep or add, and which to subtract.
Creatures of Light and Darkness also has obvious similarities to Lord of Light, which many regard as Zelazny's best book and even one of the greatest science fiction novels ever written. Both have been among my favorites since I read them as a kid.

Infinity, probability measures, and solipsism have received serious analysis by theoretical physicists: see, e.g.,  Boltzmann brains. (Which is less improbable: the existence of the universe around you, or the existence of a single brain whose memory records encode that universe?) Perhaps this means theorists have too much time on their hands, due to lack of experimental progress in fundamental physics. 

Science fiction is popular amongst physicists, but I've always been surprised that the level of interest isn't even higher. Two examples I know well: the late Sidney Coleman and my collaborator Bob Scherrer at Vanderbilt were/are scholars and creators of the genre. See these stories by Bob, and Greg Benford's Remembering Sid.
... Sid and some others created a fannish publishing house, Advent Publishers, in 1956. He was a teenager when he helped publish Advent’s first book, Damon Knight’s In Search of Wonder. ... 
[Sid] loved SF whereas Einstein deplored it. Lest SF distort pure science and give people the false illusion of scientific understanding, Einstein recommended complete abstinence from any type of science fiction. “I never think of the future. It comes soon enough,” he said.
While I've never written science fiction, occasionally my research comes close -- it has at times addressed questions of the form: 

Do the Laws of Nature as we know them allow ... 

This research might be considered the ultimate in hard SF ;-)
Wikipedia: Hard science fiction is a category of science fiction characterized by concern for scientific accuracy and logic.

Note Added: Bob Scherrer writes: In my experience, about 1/3 of research physicists are SF fans, about 1/3 have absolutely no interest in SF, and the remaining 1/3 were avid readers of science fiction in middle school/early high school but then "outgrew" it.

Here is a recent story by Bob which I really enjoyed -- based on many worlds quantum mechanics :-) 

It was ranked #2 in the 2019 Analog Magazine reader poll!

Note Added 2: Kazuo Ishiguro (2017 Nobel Prize in Literature) has been evolving into an SF/fantasy writer over time. And why not? For where else can one work with genuinely new ideas? See Never Let Me Go (clones), The Buried Giant (post-Arthurian England), and his latest book Klara and the Sun.
NYTimes: ... we slowly discover (and those wishing to avoid spoilers should now skip to the start of the next paragraph), the cause of Josie’s mysterious illness is a gene-editing surgery to enhance her intellectual faculties. The procedure carries high risks as well as potential high rewards — the main one being membership in a professional superelite. Those who forgo or simply can’t afford it are essentially consigning themselves to economic serfdom.
WSJ: ... Automation has created a kind of technological apartheid state, which is reinforced by a dangerous “genetic editing” procedure that separates “lifted,” intellectually enhanced children from the abandoned masses of the “unlifted.” Josie is lifted, but the procedure is the cause of her illness, which is often terminal. Her oldest friend and love interest, Rick, is unlifted and so has few prospects despite his obvious brilliance. Her absentee father is an engineer who was outsourced by machines and has since joined a Community, one of the closed groups formed by those lacking social rank. In a conversational aside it is suggested that the Communities have self-sorted along racial lines and are heavily armed.

Sunday, February 21, 2021

Othram: Appalachian hiker found dead in tent identified via DNA forensics

 

Othram helps solve another mystery: the identity of a dead Appalachian hiker. 

There are ~50k unidentified deceased individuals in the US, with ~1k new cases each year.
CBS Sunday Morning: He was a mystery who intrigued thousands: Who was the hiker who walked almost the entire length of the Appalachian Trail, living completely off the grid, only to be found dead in a tent in Florida? It took years, and the persistence of amateur sleuths, to crack the case. Nicholas Thompson of The Atlantic Magazine tells the tale of the man who went by the name "Mostly Harmless," and about the efforts stirred by the mystery of his identity to give names to nameless missing persons.
See also Othram: the future of DNA forensics.

Thursday, February 18, 2021

David Reich: Prehistory of Europe and S. Asia from Ancient DNA

 

In case you have not followed the adventures of the Yamnaya (proto Indo-Europeans from the Steppe), I recommend this recent Harvard lecture by David Reich. It summarizes advances in our understanding of deep human history in Europe and South Asia resulting from analysis of ancient DNA. 
The new technology of ancient DNA has highlighted a remarkable parallel in the prehistory of Europe and South Asia. In both cases, the arrival of agriculture from southwest Asia after 9,000 years ago catalyzed profound population mixtures of groups related to Southwest Asian farmers and local hunter-gatherers. In both cases, the spread of ancestry ultimately deriving from Steppe pastoralists had a further major impact after 5,000 years ago and almost certainly brought Indo-European languages. Mixtures of these three source populations form the primary gradients of ancestry in both regions today. 
In this lecture, Prof. Reich will discuss his new book, Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past. 
There seems to be a strange glitch at 16:19 and again at 27:55 -- what did he say?

See also Reich's 2018 NYTimes editorial.

Wednesday, February 17, 2021

The Post-American World: Crooke, Escobar, Blumenthal, and Marandi

 

Even if you disagree violently with the viewpoints expressed in this discussion, it will inform you as to how the rest of the world thinks about the decline of US empire. 

The group is very diverse: a former UK diplomat, an Iranian professor educated in the West but now at the University of Tehran, a progressive author and journalist (son of Clinton advisor Sidney Blumenthal) who spent 5 years reporting from Israel, and a Brazilian geopolitical analyst who writes for Asia Times (and who, if I recall correctly, lives in Thailand).
Thirty years ago, the United States dominated the world politically, economically, and scientifically. But today? 
Watch this in-depth discussion with distinguished guests: 
Alastair Crooke - Former British Diplomat, Founder and Director of the Conflicts Forum 
Pepe Escobar - Brazilian Political Analyst and Author 
Max Blumenthal - American Journalist and Author from Grayzone 
Chaired by Dr. Mohammad Marandi - Professor at University of Tehran
See also two Escobar articles linked here. Related: Foreign Observers of US Empire.  

Sunday, February 14, 2021

Physics and AI: some recent papers


Three AI paper recommendations from a theoretical physicist (former collaborator) who now runs an AI lab in SV. Less than 5 years after leaving physics research, he and his team have shipped AI products that are used by millions of people. (Figure above is from the third paper below.)

This paper elucidates the relationship between symmetry principles (familiar from physics) and specific mathematical structures like convolutions used in DL.
Covariance in Physics and CNN  
https://arxiv.org/abs/1906.02481 
Cheng, et al.  (Amsterdam)
In this proceeding we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs). We study the similarities and differences between the use of covariance in theoretical physics and in the CNN context. Additionally, we demonstrate that the simple assumption of covariance, together with the required properties of locality, linearity and weight sharing, is sufficient to uniquely determine the form of the convolution.
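The equivariance property the abstract refers to is easy to check numerically. A minimal sketch (mine, not the paper's code): a circular 1D convolution commutes with translations, i.e. convolving a shifted signal gives the shifted output.

```python
import numpy as np

# Minimal illustration of the covariance/equivariance idea discussed above:
# a circular convolution commutes with translations,
#   conv(shift(x)) == shift(conv(x)).

rng = np.random.default_rng(0)
x = rng.standard_normal(32)           # 1D "image"
k = rng.standard_normal(5)            # convolution kernel

def circ_conv(signal, kernel):
    # circular convolution via explicit shifts (keeps translation symmetry exact)
    out = np.zeros_like(signal)
    offset = len(kernel) // 2
    for i, w in enumerate(kernel):
        out += w * np.roll(signal, i - offset)
    return out

shift = 7
lhs = circ_conv(np.roll(x, shift), k)   # translate first, then convolve
rhs = np.roll(circ_conv(x, k), shift)   # convolve first, then translate
print(np.allclose(lhs, rhs))            # True: the convolution is equivariant
```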

The following two papers explore connections between AI/ML and statistical physics, including renormalization group (RG) flow. 

Theoretical Connections between Statistical Physics and RL 
https://arxiv.org/abs/1906.10228 
Rahme and Adams  (Princeton)
Sequential decision making in the presence of uncertainty and stochastic dynamics gives rise to distributions over state/action trajectories in reinforcement learning (RL) and optimal control problems. This observation has led to a variety of connections between RL and inference in probabilistic graphical models (PGMs). Here we explore a different dimension to this relationship, examining reinforcement learning using the tools and abstractions of statistical physics. The central object in the statistical physics abstraction is the idea of a partition function Z, and here we construct a partition function from the ensemble of possible trajectories that an agent might take in a Markov decision process. Although value functions and Q-functions can be derived from this partition function and interpreted via average energies, the Z-function provides an object with its own Bellman equation that can form the basis of alternative dynamic programming approaches. Moreover, when the MDP dynamics are deterministic, the Bellman equation for Z is linear, allowing direct solutions that are unavailable for the nonlinear equations associated with traditional value functions. The policies learned via these Z-based Bellman updates are tightly linked to Boltzmann-like policy parameterizations. In addition to sampling actions proportionally to the exponential of the expected cumulative reward as Boltzmann policies would, these policies take entropy into account favoring states from which many outcomes are possible.
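A toy illustration of the linearity claim (my own construction, not necessarily the authors' exact definitions): define Z as a sum of exp(beta x cumulative reward) over action sequences in a deterministic, finite-horizon MDP. The backward recursion for Z is then linear, and a Boltzmann-like policy falls out of it.

```python
import numpy as np

# Toy construction: trajectory partition function Z for a small deterministic
# MDP with finite horizon. Z_t(s) sums exp(beta * total remaining reward) over
# all action sequences, and obeys the LINEAR backward recursion
#     Z_t(s) = sum_a exp(beta * r(s, a)) * Z_{t+1}(f(s, a)).

n_states, n_actions, horizon, beta = 5, 2, 10, 1.0
rng = np.random.default_rng(0)
f = rng.integers(n_states, size=(n_states, n_actions))  # deterministic next state
r = rng.standard_normal((n_states, n_actions))          # reward r(s, a)

Z = [None] * (horizon + 1)
Z[horizon] = np.ones(n_states)                          # terminal condition
for t in reversed(range(horizon)):
    Z[t] = np.array([
        sum(np.exp(beta * r[s, a]) * Z[t + 1][f[s, a]] for a in range(n_actions))
        for s in range(n_states)
    ])

# Boltzmann-like policy induced by Z: weight each action by the exponentiated
# reward contributed by its entire future subtree of trajectories.
s, t = 0, 0
w = np.array([np.exp(beta * r[s, a]) * Z[t + 1][f[s, a]] for a in range(n_actions)])
print("pi(a | s=0) =", w / w.sum())
```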

 

RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior
https://arxiv.org/abs/2010.00029 
Hu et al.   (UCSD and Berkeley AI Lab) 
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key idea of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which can separate information at different scales of images with disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of the images. To visualize the latent representations, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow are similar to those in convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution by a sparse prior distribution to further enhance the disentanglement of representations. From a theoretical perspective, the proposed method has O(logL) complexity for image inpainting compared to previous generative models with O(L^2) complexity.
See related remarks: ICML notes (2018).
It may turn out that the problems on which DL works well are precisely those in which the training data (and underlying generative processes) have a hierarchical structure which is sparse, level by level. Layered networks perform a kind of coarse graining (renormalization group flow): first layers filter by feature, subsequent layers by combinations of features, etc. But the whole thing can be understood as products of sparse filters, and the performance under training is described by sparse performance guarantees (ReLU = thresholded penalization?). Given the inherent locality of physics (atoms, molecules, cells, tissue; atoms, words, sentences, ...) it is not surprising that natural phenomena generate data with this kind of hierarchical structure.
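A trivial sketch of the coarse-graining analogy (illustration only, not the RG-Flow model itself): repeatedly block-averaging an image discards fine-scale detail level by level, roughly the way successive network layers aggregate local features into larger-scale ones.

```python
import numpy as np

# Illustration of the coarse-graining (RG-like) analogy above, not RG-Flow:
# each step averages non-overlapping 2x2 blocks, throwing away the finest scale.

def coarse_grain(field, block=2):
    h, w = field.shape
    return field.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
level = rng.standard_normal((32, 32))   # a random "image"
for i in range(4):
    print(f"level {i}: shape {level.shape}, variance {level.var():.3f}")
    level = coarse_grain(level)
```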
