Sunday, October 31, 2021

Demis Hassabis: Using AI to accelerate scientific discovery (protein folding) + Bonus: Bruno Pontecorvo

 


Recent talk (October 2021) by Demis Hassabis on the use of AI in scientific research. The second half of the talk focuses on protein folding.

Below is part 2, by the AlphaFold research lead, which has more technical details.




Bonus: My former Oregon colleague David Strom recommended a CERN lecture by Frank Close on his biography of physicist (and atomic spy?) Bruno Pontecorvo.  David knew that The Battle of Algiers, which I blogged about recently, was directed by Gillo Pontecorvo, Bruno's brother.

Below is the closest thing I could find on YouTube -- it has better audio and video quality than the CERN talk. 

The amazing story of Bruno Pontecorvo involves topics such as the first nuclear reactions and reactors (work with Enrico Fermi), the Manhattan Project, neutrino flavors and oscillations, supernovae, atomic espionage, the KGB, Kim Philby, and the quote: 
I want to be remembered as a great physicist, not as your fucking spy!

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)




Sadly, the hypothesis described below is very plausible. 

The exception is that new tools or technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
 
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by genuine insight or the capability to produce real results.
 
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc., can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.
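Here is a minimal simulation sketch of the kind of toy model described in the quoted post above (my own illustration; the update rule and all parameter values -- accuracy, conformity, init_frac_true -- are assumptions chosen for illustration, not anything from the post). With weak experimental signal and strong deference to the field's consensus, the simulated field can lock into an equilibrium in which most scientists hold a false belief:

```python
# Toy model of a field's beliefs about a single binary claim (the claim is true).
# Each round, a scientist either defers to the current majority view (with
# probability `conformity`) or adopts the result of her own noisy experiment
# (correct with probability `accuracy`). All parameter values are illustrative.
import random

def simulate(n=500, rounds=200, accuracy=0.55, conformity=0.9,
             init_frac_true=0.3, seed=0):
    rng = random.Random(seed)
    # beliefs[i] = True if scientist i currently holds the (actually true) claim
    beliefs = [rng.random() < init_frac_true for _ in range(n)]
    for _ in range(rounds):
        majority = sum(beliefs) > n / 2
        beliefs = [majority if rng.random() < conformity
                   else (rng.random() < accuracy)
                   for _ in range(n)]
    return sum(beliefs) / n

for k in (0.0, 0.5, 0.9):
    print(f"conformity={k}: final fraction holding the true belief = "
          f"{simulate(conformity=k):.2f}")
```

With conformity = 0 the field tracks the (weak) experimental signal; with strong conformity the consensus inherited from the initial condition becomes self-perpetuating, and most individual beliefs end up false -- the kind of equilibrium described above.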

Thursday, October 28, 2021

The Night Porter (1974)



I first watched The Night Porter while in graduate school, and came across it again last weekend as a byproduct of ordering HBOMax to see the new Dune movie. There are quite a few film classics buried below the top level of the HBOMax recommendation engine -- you just have to search a bit. See also here on Kanopy. 

Opinions of the film vary widely. In my view it's a masterpiece: the performances by Charlotte Rampling and Dirk Bogarde are incredible, although I must say that I find the film very difficult to watch. 

Rampling portrays the Bene Gesserit Reverend Mother Gaius Helen Mohiam in the new Dune -- see director Denis Villeneuve's analysis of her Gom Jabbar scene.  

I've always wondered about the origins of The Night Porter and how it got made. The material is sensationalistic, even borders on exploitation, but the treatment has psychological and cinematic depth. 

This video contains remarkable interviews with the director Liliana Cavani, writer Italo Moscati, and Rampling. Short clips are interspersed with the interviews so you can get a sense of the film if you've never seen it. Unfortunately, these clips caused the video to be age-restricted on YouTube, so you have to click through and log in to your Google account to view it.

Bogarde is not interviewed in the video, but his Wikipedia bio notes that
Bogarde was one of the first Allied officers in April 1945 to reach the Bergen-Belsen concentration camp in Germany, an experience that had the most profound effect on him and about which he found it difficult to speak for many years afterward.[6] [ Video of the interview below.]
"I think it was on the 13th of April—I'm not quite sure what the date was" ... "when we opened up Belsen Camp, which was the first concentration camp any of us had seen, we didn't even know what they were, we'd heard vague rumours that they were. I mean nothing could be worse than that. The gates were opened and then I realised that I was looking at Dante's Inferno, I mean ... I ... I still haven't seen anything as dreadful. And never will. And a girl came up who spoke English, because she recognised one of the badges, and she ... her breasts were like, sort of, empty purses, she had no top on, and a pair of man's pyjamas, you know, the prison pyjamas, and no hair. But I knew she was a girl because of her breasts, which were empty. She was I suppose, oh I don't know, twenty four, twenty five, and we talked, and she was, you know, so excited and thrilled, and all around us there were mountains of dead people, I mean mountains of them, and they were slushy, and they were slimy, so when you walked through them ... or walked—you tried not to, but it was like .... well you just walked through them. 
... there was a very nice British MP [Royal Military Police], and he said 'Don't have any more, come away, come away sir, if you don't mind, because they've all got typhoid and you'll get it, you shouldn't be here swanning-around' and she saw in the back of the jeep, the unexpired portion of the daily ration, wrapped in a piece of the Daily Mirror, and she said could she have it, and he" [the Military Police] "said 'Don't give her food, because they eat it immediately and they die, within ten minutes', but she didn't want the food, she wanted the piece of Daily Mirror—she hadn't seen newsprint for about eight years or five years, whatever it was she had been in the camp for. ... she was Estonian. ... that's all she wanted. She gave me a big kiss, which was very moving. The corporal" [Military Police] "was out of his mind and I was just dragged off. I never saw her again, of course she died. I mean, I gather they all did. But, I can't really describe it very well, I don't really want to. I went through some of the huts and there were tiers and tiers of rotting people, but some of them who were alive underneath the rot, and were lifting their heads and trying .... trying to do the victory thing. That was the worst."[4]
In her interview Rampling notes that it was Bogarde who insisted that she be given the role of Lucia.

 

Friday, October 22, 2021

The Principles of Deep Learning Theory - Dan Roberts IAS talk

 

This is a nice talk that discusses, among other things, subleading 1/width corrections to the infinite-width limit of neural networks. I was expecting someone would work out these corrections when I wrote the post on NTK and the large-width limit at the link below. Apparently, the infinite-width limit does not capture the behavior of realistic neural nets, and it is only at the first nontrivial order in the expansion that the desired properties emerge. Roberts claims that when the depth-to-width ratio r is small but nonzero one can characterize network dynamics in a controlled expansion, whereas when r > 1 it becomes a problem of strong dynamics. 

The talk is based on the book
The Principles of Deep Learning Theory 
https://arxiv.org/abs/2106.10165 
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
Dan Roberts web page
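A minimal numerical sketch of the book's headline claim quoted above (my own check, not code from the book; the architecture, widths, depth, and sample counts are illustrative choices): over random weight draws, the output of a finite-width ReLU network at a fixed input is nearly Gaussian, and a simple measure of the deviation (excess kurtosis) shrinks toward zero as the width grows at fixed depth, consistent with the depth-to-width ratio controlling the correction.

```python
# Numerical check of near-Gaussianity at large width (illustrative).
# For a fixed input, sample the scalar output of many randomly initialized
# ReLU networks and measure the excess kurtosis of that distribution.
import numpy as np

def sample_outputs(width, depth, n_samples=5000, d_in=10, seed=0):
    rng = np.random.default_rng(seed)
    x = np.ones(d_in) / np.sqrt(d_in)                 # fixed unit-norm input
    outs = np.empty(n_samples)
    for s in range(n_samples):
        h = x
        for _ in range(depth):
            fan_in = h.shape[0]
            W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(width, fan_in))
            h = np.maximum(W @ h, 0.0)                # ReLU, He-scaled weights
        w_out = rng.normal(0.0, np.sqrt(1.0 / width), size=width)
        outs[s] = w_out @ h
    return outs

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0                      # zero for a Gaussian

for width in (16, 64, 256):
    k = excess_kurtosis(sample_outputs(width, depth=8))
    print(f"width={width:4d} depth=8  depth/width={8 / width:.3f}  "
          f"excess kurtosis={k:+.2f}")
```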

This essay looks interesting:
Why is AI hard and Physics simple? 
https://arxiv.org/abs/2104.00008 
We discuss why AI is hard and why physics is simple. We discuss how physical intuition and the approach of theoretical physics can be brought to bear on the field of artificial intelligence and specifically machine learning. We suggest that the underlying project of machine learning and the underlying project of physics are strongly coupled through the principle of sparsity, and we call upon theoretical physicists to work on AI as physicists. As a first step in that direction, we discuss an upcoming book on the principles of deep learning theory that attempts to realize this approach.

May 2021 post: Neural Tangent Kernels and Theoretical Foundations of Deep Learning
Large width seems to provide a limiting case (analogous to the large-N limit in gauge theory) in which rigorous results about deep learning can be proved. ... 
The overparametrized (width ~ w^2) network starts in a random state and by concentration of measure this initial kernel K is just the expectation, which is the NTK. Because of the large number of parameters, the effect of training (i.e., gradient descent) on any individual parameter is 1/w, and the change in the eigenvalue spectrum of K is also 1/w. It can be shown that the eigenvalue spectrum is positive and bounded away from zero, and this property does not change under training. Also, the evolution of f is linear in K up to corrections which are suppressed by 1/w. Hence evolution follows a convex trajectory and can achieve global minimum loss in a finite (polynomial) time. 
The parametric 1/w expansion may depend on quantities such as the smallest NTK eigenvalue k: the proof might require k >> 1/w or wk large. 
In the large w limit the function space has such high dimensionality that any typical initial f is close (within a ball of radius 1/w?) to an optimal f. These properties depend on specific choice of loss function.
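Here is a minimal numerical sketch of the frozen-kernel claim in the quoted post (my own illustration, not from the post: a one-hidden-layer tanh network in the NTK parameterization with hand-coded gradients; the dataset size, learning rate, and widths are arbitrary illustrative choices). The empirical kernel K = J J^T is computed before and after full-batch gradient descent, and its relative change should shrink roughly like 1/width:

```python
# "Frozen kernel" illustration: the empirical NTK of a one-hidden-layer tanh
# network (NTK parameterization) barely moves during training at large width.
import numpy as np

def forward(W, a, X, d):
    # NTK parameterization: f(x) = a . tanh(W x / sqrt(d)) / sqrt(width)
    H = np.tanh(X @ W.T / np.sqrt(d))                     # (n, width)
    return H @ a / np.sqrt(W.shape[0]), H

def ntk(W, a, X, d):
    width = W.shape[0]
    H = np.tanh(X @ W.T / np.sqrt(d))
    dH = 1.0 - H ** 2                                     # tanh'
    # per-example gradient with respect to all parameters: W entries, then a
    JW = (dH * a)[:, :, None] * X[:, None, :] / (np.sqrt(d) * np.sqrt(width))
    Ja = H / np.sqrt(width)
    J = np.concatenate([JW.reshape(X.shape[0], -1), Ja], axis=1)
    return J @ J.T                                        # (n, n) empirical NTK

def relative_kernel_change(width, steps=200, lr=0.5, n=8, d=4, seed=0):
    rng = np.random.default_rng(seed)
    X, y = rng.normal(size=(n, d)), rng.normal(size=n)
    W, a = rng.normal(size=(width, d)), rng.normal(size=width)
    K0 = ntk(W, a, X, d)
    for _ in range(steps):
        f, H = forward(W, a, X, d)
        r = f - y                                         # MSE loss = mean(r^2)
        dH = 1.0 - H ** 2
        ga = H.T @ r * 2 / (n * np.sqrt(width))
        gW = ((dH * a) * r[:, None]).T @ X * 2 / (n * np.sqrt(d * width))
        a, W = a - lr * ga, W - lr * gW
    K1 = ntk(W, a, X, d)
    return np.linalg.norm(K1 - K0) / np.linalg.norm(K0)

for w in (64, 256, 1024, 4096):
    print(f"width={w:5d}  relative change in empirical NTK: "
          f"{relative_kernel_change(w):.4f}")
```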
See related remarks: ICML notes (2018).
It may turn out that the problems on which DL works well are precisely those in which the training data (and underlying generative processes) have a hierarchical structure which is sparse, level by level. Layered networks perform a kind of coarse graining (renormalization group flow): first layers filter by feature, subsequent layers by combinations of features, etc. But the whole thing can be understood as products of sparse filters, and the performance under training is described by sparse performance guarantees (ReLU = thresholded penalization?). Given the inherent locality of physics (atoms, molecules, cells, tissue; atoms, words, sentences, ...) it is not surprising that natural phenomena generate data with this kind of hierarchical structure.

Thursday, October 21, 2021

PRC Hypersonic Missiles, FOBS, and Qian Xuesen




There are deep connections between the images above and below. Qian Xuesen proposed the boost glide trajectory while still at Caltech.








Background on recent PRC test of FOBS/glider hypersonic missile/vehicle. More from Air Force Secretary Frank Kendall. Detailed report on PRC hypersonic systems development. Reuters: Rocket failure mars U.S. hypersonic weapon test (10/21/21)

The situation today is radically different from when Qian first returned to China. In a decade or two China may have ~10x as many highly able scientists and engineers as the US, comparable to the entire world (ex-China) combined [1]. Already the depth of human capital in PRC is apparent to anyone closely watching their rate of progress (first derivative) in space (Mars/lunar lander, space station, LEO), advanced weapons systems (stealth jets, radar, missiles, jet engines), AI/ML, alternative energy, materials science, nuclear energy, fundamental and applied physics, consumer electronics, drones, advanced manufacturing, robotics, etc. etc. The development of a broad infrastructure base for advanced manufacturing and R&D also contributes to this progress, of course.

[1] It is trivial to obtain this ~10x estimate: PRC population is ~4x US population, a larger fraction of PRC students pursue STEM degrees, and a larger proportion of PRC students reach elite levels of math proficiency, e.g., PISA Level 6.
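Back-of-envelope version of the footnote's arithmetic (the two non-population multipliers below are illustrative placeholders I supplied, not sourced figures; the point is only that a few modest factors compound to roughly an order of magnitude):

```python
# Rough arithmetic for the ~10x estimate. The STEM and elite-math multipliers
# are illustrative assumptions, not sourced figures.
population_ratio    = 4.0   # PRC vs US population, per the footnote
stem_fraction_ratio = 1.5   # assumed: relative share of students in STEM
elite_math_ratio    = 1.7   # assumed: relative share reaching e.g. PISA Level 6
print("implied ratio ~",
      round(population_ratio * stem_fraction_ratio * elite_math_ratio, 1))
```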



"It was the stupidest thing this country ever did," former Navy Secretary Dan Kimball later said, according to Aviation Week. "He was no more a Communist than I was, and we forced him to go." ... 
Qian Xuesen, a former Caltech rocket scientist who helped establish the Jet Propulsion Laboratory before being deported in 1955 on suspicion of being a Communist and who became known as the father of China's space and missile programs, has died. He was 98. ... 
Qian, a Chinese-born aeronautical engineer educated at Caltech and the Massachusetts Institute of Technology, was a protege of Caltech's eminent professor Theodore von Karman, who recognized him as an outstanding mathematician and "undisputed genius."

Below, a documentary on Qian and a movie-length biopic (English subtitles).





Tuesday, October 19, 2021

Quantum Hair from Gravity

New paper!
Quantum Hair from Gravity 
https://arxiv.org/abs/2110.09386 
Xavier Calmet, Roberto Casadio, Stephen D. H. Hsu, and Folkert Kuipers 
We explore the relationship between the quantum state of a compact matter source and of its asymptotic graviton field. For a matter source in an energy eigenstate, the graviton state is determined at leading order by the energy eigenvalue. Insofar as there are no accidental energy degeneracies there is a one to one map between graviton states on the boundary of spacetime and the matter source states. A typical semiclassical matter source results in an entangled asymptotic graviton state. We exhibit a purely quantum gravitational effect which causes the subleading asymptotic behavior of the graviton state to depend on the internal structure of the source. These observations establish the existence of ubiquitous quantum hair due to gravitational effects.
From the introduction:
Classical no-hair theorems limit the information that can be obtained about the internal state of a black hole by outside observers [1]. External features (``hair'') of black hole solutions in general relativity are determined by specific conserved quantities such as mass, angular momentum, and charge. In this letter we investigate how the situation changes when both the matter source (black hole interior state) and the gravitational field itself are quantized. 
We begin by showing that the graviton state associated with an energy eigenstate source is determined, at leading order, by the energy eigenvalue of the source. These graviton states can be expressed as coherent states of non-propagating graviton modes, with explicit dependence on the source energy eigenvalue. Semiclassical matter sources (e.g., a star or black hole) are superpositions of energy eigenstates with support in some band of energies, and produce graviton states that are superpositions of the coherent states. ... We discuss implications for black hole information and holography in the conclusions.
General relativity relates the spacetime metric to the energy-momentum distribution of matter, but only applies when both the metric (equivalently, the gravitational field) and matter sources are semiclassical. A theory of quantum gravity is necessary to relate the quantum state of the gravitational field to the quantum state of the matter source. However, as we show in section 2 one can deduce this relationship either from a simple gedanken construction or from careful study of how the Gauss law affects quantization. It turns out the latter is common to both ordinary gauge theory (cf Coulomb potential) and gravity. 

Our results have important consequences for black hole information: they allow us to examine deviations from the semiclassical approximation used to calculate Hawking radiation and they show explicitly that the quantum spacetime of black hole evaporation is a complex superposition state.
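Schematically (my own notation, not equations taken from the paper), the statement about entangled asymptotic graviton states can be written as:

```latex
% Schematic restatement only; the notation is mine, not the paper's.
% A semiclassical source is a superposition of energy eigenstates |E>, and
% each eigenstate sources its own asymptotic (coherent) graviton state |g_E>,
% determined at leading order by the eigenvalue E, so matter and field end up
% entangled:
\begin{equation}
|\Psi\rangle \;=\; \sum_E c_E \, |E\rangle_{\mathrm{matter}} \otimes |g_E\rangle_{\mathrm{graviton}} .
\end{equation}
```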



Monday, October 18, 2021

Embryo Screening and Risk Calculus

Over the weekend The Guardian and The Times (UK) both ran articles on embryo selection. 



I recommend the first article. Philip Ball is an accomplished science writer and former scientist. He touches on many of the most important aspects of the topic, which is not easy given the length restriction he was working with. 

However I'd like to cover an aspect of embryo selection which is often missed, for example by the bioethicists quoted in Ball's article.

Several independent labs have published results on risk reduction from embryo selection, and all find that the technique is effective. But some people who are not following the field closely (or are not quantitative) still characterize the benefits -- incorrectly, in my view -- as modest. I honestly think they lack understanding of the actual numbers.

Some examples:
Carmi et al. find a ~50% risk reduction for schizophrenia from selecting the lowest risk embryo from a set of 5. For a selection among 2 embryos the risk reduction is ~30%. (We obtain a very similar result using empirical data: real adult siblings with known phenotype.) 
Visscher et al. find the following results; see Table 1 and Figure 2 in their paper. To their credit they compute results for a range of ancestries (European, E. Asian, African). We have performed similar calculations using siblings but have not yet published the results for all ancestries.  
Relative Risk Reduction (RRR)
Hypertension: 9-18% (ranges depend on specific ancestry) 
Type 2 Diabetes: 7-16% 
Coronary Artery Disease: 8-17% 
Absolute Risk Reduction (ARR)
Hypertension: 4-8.5% (ranges depend on specific ancestry) 
Type 2 Diabetes: 2.6-5.5% 
Coronary Artery Disease: 0.55-1.1%
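For readers comparing the two lists: by definition ARR = RRR x (baseline absolute risk), so each quoted pair implies a baseline risk for that trait. A quick sanity check (my arithmetic, pairing the upper endpoints of the quoted ranges naively; the actual ranges span different ancestries):

```python
# ARR = RRR * baseline_risk by definition, so each (RRR, ARR) pair implies a
# baseline absolute risk. Pairs below are the upper endpoints of the ranges
# quoted above, matched naively for illustration.
pairs = {  # trait: (RRR, ARR), as fractions
    "Hypertension":            (0.18, 0.085),
    "Type 2 Diabetes":         (0.16, 0.055),
    "Coronary Artery Disease": (0.17, 0.011),
}
for trait, (rrr, arr) in pairs.items():
    print(f"{trait}: implied baseline absolute risk ~ {arr / rrr:.0%}")
```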
I don't view these risk reductions as modest. Given that an IVF family is already going to make a selection, they clearly benefit from the additional information that comes with genotyping each embryo. The cost is a small fraction of the overall cost of an IVF cycle.

But here is the important mathematical point which many people miss: We buy risk insurance even when the expected return is negative, in order to ameliorate the worst possible outcomes. 

Consider the example of home insurance. A typical family will spend tens of thousands of dollars over the years on home insurance, which protects against risks like fire or earthquake. However, very few homeowners (e.g., ~1 percent) ever suffer a really large loss! At the end of their lives, looking back, most families might conclude that the insurance was "a waste of money"!

So why buy the insurance? To avoid ruin in the event you are unlucky and your house does burn down. It is tail risk insurance.

Now consider an "unlucky" IVF family. At, say, the 1 percent level of "bad luck" they might have some embryos which are true outliers (e.g., at 10 times normal risk, which could mean over 50% absolute risk) for a serious condition like schizophrenia or breast cancer. This is especially likely if they have a family history. 

What is the benefit to this specific subgroup of families? It is enormous -- using the embryo risk score they can avoid having a child with a very high likelihood of a serious health condition. This benefit is many, many times (> 100x!) larger than the cost of the genetic screening, and it is not captured by the average risk reductions given above.
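A minimal simulation sketch of this point (my own illustration using a standard liability-threshold model; the prevalence, the variance captured by the score, the number of embryos, and the 10x "unlucky" cutoff are all illustrative assumptions, not numbers from Carmi et al. or Visscher et al.):

```python
# Liability-threshold simulation of embryo selection. All parameter values
# are illustrative.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
nd = NormalDist()
norm_cdf = np.vectorize(nd.cdf)

prevalence = 0.01            # population risk of the condition
score_var = 0.5              # liability variance captured by the score (assumed)
z = nd.inv_cdf(1 - prevalence)          # liability threshold
n_families, k = 50_000, 5               # families and embryos per family

# sibling-structured scores: shared family component + per-embryo component
shared = rng.normal(0, np.sqrt(score_var / 2), size=(n_families, 1))
scores = shared + rng.normal(0, np.sqrt(score_var / 2), size=(n_families, k))

# P(disease | score) under the liability-threshold model
risk = 1 - norm_cdf((z - scores) / np.sqrt(1 - score_var))

random_pick = risk[:, 0]                                        # arbitrary embryo
best_pick = risk[np.arange(n_families), scores.argmin(axis=1)]  # lowest-risk embryo

print(f"all families  -- random embryo: {random_pick.mean():.2%}, "
      f"selected embryo: {best_pick.mean():.2%}")

# "unlucky" families: at least one embryo at >= 10x the population risk
unlucky = risk.max(axis=1) >= 10 * prevalence
print(f"unlucky families ({unlucky.mean():.1%} of all) -- "
      f"random embryo: {random_pick[unlucky].mean():.2%}, "
      f"selected embryo: {best_pick[unlucky].mean():.2%}")
```

In a run like this the population-wide absolute reduction is small, while the "unlucky" minority of families with a high-risk outlier embryo sees an absolute reduction many times larger -- which is exactly the home-insurance point above.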

The situation is very similar to that of aneuploidy testing (screening against Down syndrome), which is widespread, not just in IVF. The prevalence of trisomy 21 (extra copy of chromosome 21) is only ~1 percent, so almost all families doing aneuploidy screening are "wasting their money" if one uses faulty logic! Nevertheless, the families in the affected category are typically very happy to have paid for the test, and even families with no trisomy warning understand that it was worthwhile.

The point is that no one knows ahead of time whether their house will burn down, or that one or more of their embryos has an important genetic risk. The calculus of average return is misleading -- i.e., it says that home insurance is a "rip off" when in fact it serves an important social purpose of pooling risk and helping the unfortunate. 

The same can be said for embryo screening in IVF -- one should focus on the benefit to "unlucky" families to determine the value. We can't identify the "unlucky" in advance, unless we do genetic screening!

Saturday, October 16, 2021

Dune 2021

I have high hopes for this new version of Dune.



 

Below, a re-post with two Frank Herbert interviews. Highly recommended to fans of the novel. 





The interviewer is Willis E. McNelly, a professor of English (specializing in science fiction). Herbert discusses artistic as well as conceptual decisions made in the writing and background world building for Dune. Highly recommended for any fan of the book.

See also Dune and The Butlerian Jihad and Darwin Among the Machines.
The Bene Gesserit program had as its target the breeding of a person they labeled "Kwisatz Haderach," a term signifying "one who can be many places at once." In simpler terms, what they sought was a human with mental powers permitting him to understand and use higher order dimensions.

They were breeding for a super-Mentat, a human computer with some of the prescient abilities found in Guild navigators. Now, attend these facts carefully:

Muad'Dib, born Paul Atreides, was the son of the Duke Leto, a man whose bloodline had been watched carefully for more than a thousand years. The Prophet's mother, Lady Jessica, was a natural daughter of the Baron Vladimir Harkonnen and carried gene-markers whose supreme importance to the breeding program was known for almost two thousand years. She was a Bene Gesserit bred and trained, and should have been a willing tool of the project.

The Lady Jessica was ordered to produce an Atreides daughter. The plan was to inbreed this daughter with Feyd-Rautha Harkonnen, a nephew of the Baron Vladimir, with the high probability of a Kwisatz Haderach from that union. Instead, for reasons she confesses have never been completely clear to her, the concubine Lady Jessica defied her orders and bore a son. This alone should have alerted the Bene Gesserit to the possibility that a wild variable had entered their scheme. But there were other far more important indications that they virtually ignored ...
"Kwisatz Haderach" is similar to the Hebrew "Kefitzat Haderech", which literally means "contracting the path"; Herbert defines Kwisatz Haderach as "the Shortening of the Way" (Dune: Appendix IV).

Another good recording of Herbert, but much later in his life.

Saturday, October 09, 2021

Leo Szilard, the Intellectual Bumblebee (lecture by William Lanouette)

 

This is a nice lecture on Leo Szilard by his biographer William Lanouette. See also ‘An Intellectual Bumblebee’ by Max Perutz.
Wikipedia: Leo Szilard was a Hungarian-American physicist and inventor. He conceived the nuclear chain reaction in 1933, patented the idea of a nuclear fission reactor in 1934, and in late 1939 wrote the letter for Albert Einstein's signature that resulted in the Manhattan Project that built the atomic bomb.
How Alexander Sachs, acting on behalf of Szilard and Einstein, narrowly convinced FDR to initiate the atomic bomb project: Contingency, History, and the Atomic Bomb

Szilard wrote children's stories and science fiction. His short story My Trial as a War Criminal begins after the USSR has defeated the US using biological weapons.
I was just about to lock the door of my hotel room and go to bed when there was a knock on the door and there stood a Russian officer and a young Russian civilian. I had expected something of this sort ever since the President signed the terms of unconditional surrender and the Russians landed a token occupation force in New York. The officer handed me something that looked like a warrant and said that I was under arrest as a war criminal on the basis of my activities during the Second World War in connection with the atomic bomb. There was a car waiting outside and they told me that they were going to take me to the Brookhaven National Laboratory on Long Island. Apparently, they were rounding up all the scientists who had ever worked in the field of atomic energy ...
This story was translated into Russian and it had a large impact on Andrei Sakharov, who showed it to his colleague Victor Adamsky:
A number of us discussed it. It was about a war between the USSR and the USA, a very devastating one, which brought victory to the USSR. Szilard and a number of other physicists are put under arrest and then face the court as war criminals for having created weapons of mass destruction. Neither they nor their lawyers could make up a cogent proof of their innocence. We were amazed by this paradox. You can’t get away from the fact that we were developing weapons of mass destruction. We thought it was necessary. Such was our inner conviction. But still the moral aspect of it would not let Andrei Dmitrievich and some of us live in peace.

See also The Many Worlds of Leo Szilard (APS symposium). Slides for Richard Garwin's excellent summary of Szilard's work, including nuclear physics, refrigeration, and Maxwell's Demon. One of Garwin's anecdotes:
Ted Puck was a distinguished biologist, originally trained in physics. ‘With the greatest possible reluctance I have come to the conclusion that it is not possible for me personally to work with you scientifically,’ he wrote Szilard. ‘Your mind is so much more powerful than mine that I find it impossible when I am with you to resist the tremendous polarizing forces of your ideas and outlook.’ Puck feared his ‘own flow of ideas would slow up & productivity suffer if we were to become continuously associated working in the same place and the same general kind of field.’ Puck said, ‘There is no living scientist whose intellect I respect more. But your tremendous intellectual force is a strain on a limited person like myself.’
Puck was a pioneer in single cell cloning, aided in part by Szilard:
When Szilard saw in 1954 that biologists Philip Marcus and Theodore Puck were having trouble growing individual cells into colonies, he concluded that “since cells grow with high efficiency when they have many neighbors, you should not let a single cell know it’s alone”. This was no flippant excursion into psychobiology. Rather, Szilard’s idea to use a layered feeder dish worked, while the open dish had not (Lanouette, 1992: 396–397).
After the war Szilard worked in molecular biology. This photo of Jacques Monod and Szilard is in the seminar room at Cold Spring Harbor Lab. Monod credits Szilard for the negative-feedback idea behind his 1965 Nobel prize.
“I have … recorded” in my Nobel lecture, said Monod, “how it was Szilard who decisively reconciled me with the idea (repulsive to me, until then) that enzyme induction reflected an anti-repressive effect, rather than the reverse, as I tried, unduly, to stick to.”

 

Friday, October 01, 2021

DNA forensics, genetic genealogy, and large databases (Veritasium video)

 

This is a good overview of DNA forensics, genetic genealogy, and existing databases like GEDmatch (Verogen).
@15:35 "Multiple law enforcement agencies have said that this is the most revolutionary tool they've had since the adoption of the fingerprint."
See Othram: the future of DNA forensics (2019):
The existing FBI standard (CODIS) for DNA identification uses only 20 markers (STRs -- previously only 13 loci were used!). By contrast, genome wide sequencing can reliably call millions of genetic variants. 
For the first time, the cost curves for these two methods have crossed: modern sequencing costs no more than extracting CODIS markers using the now ~30 year old technology. 
What can you do with millions of genetic markers? 
1. Determine relatedness of two individuals with high precision. This allows detectives to immediately identify a relative (ranging from distant cousin to sibling or parent) of the source of the DNA sample, simply by scanning through large DNA databases. 
...
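A minimal sketch of the relatedness idea in point 1 (my own illustration, not Othram's actual method; the SNP count, allele frequencies, and the no-linkage assumption are simplifications): correlating standardized genotypes across many markers recovers the expected genetic relatedness, which is what lets a sibling or cousin of the sample's source stand out from unrelated database entries.

```python
# Relatedness from many SNPs (illustrative; independent markers, no linkage).
import numpy as np

rng = np.random.default_rng(0)
n_snps = 100_000
p = rng.uniform(0.05, 0.5, n_snps)             # allele frequencies

def random_person():
    return rng.binomial(2, p)                  # genotypes coded 0/1/2

def child(mother, father):
    # transmit one allele from each parent (ignoring linkage for simplicity)
    return rng.binomial(1, mother / 2) + rng.binomial(1, father / 2)

gm, gf = random_person(), random_person()      # grandparents
mom, aunt = child(gm, gf), child(gm, gf)       # two sisters
dad = random_person()
target = child(mom, dad)                       # source of the crime-scene sample
sibling = child(mom, dad)
cousin = child(aunt, random_person())
unrelated = random_person()

def relatedness(g1, g2):
    # average correlation of standardized genotypes (~ 2 x kinship coefficient)
    z1 = (g1 - 2 * p) / np.sqrt(2 * p * (1 - p))
    z2 = (g2 - 2 * p) / np.sqrt(2 * p * (1 - p))
    return float(np.mean(z1 * z2))

for name, g in [("sibling", sibling), ("first cousin", cousin),
                ("unrelated", unrelated)]:
    print(f"{name:12s}: estimated relatedness = {relatedness(target, g):+.3f}")
# expected ~ +0.50 for a full sibling, ~ +0.125 for a first cousin, ~ 0 otherwise
```

Real pipelines use more robust estimators and account for linkage, population structure, and genotyping error; this is only the core idea.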
More Othram posts.
