Tuesday, April 30, 2019


In a high corner office, overlooking Cambridge and the Harvard campus.
How big a role is deep learning playing right now in building genomic predictors?

So far, not a big one. Other ML methods perform roughly on par with DL. The additive component of variance is largest, and we have compressed sensing theorems showing near-optimal performance for capturing it. There are nonlinear effects, and eventually DL will likely be useful for learning multi-locus features. But at the moment everything is limited by statistical power, and nonlinear features are even harder to detect than additive ones. ...
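A minimal sketch of the compressed-sensing / sparse-regression idea behind additive SNP predictors: when the true genetic architecture is sparse and additive, an L1-penalized regression can recover most of the causal signal once sample size is large enough. Everything here (sample size, number of SNPs, penalty strength) is illustrative simulation, not a real pipeline.

```python
# Illustrative sketch: recovering sparse additive SNP effects with
# L1-penalized regression (the compressed sensing regime mentioned above).
# All parameters are hypothetical; real GWAS data would replace the simulation.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 2000, 5000, 20                  # samples, SNPs, causal loci
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, s, replace=False)] = rng.normal(0.0, 1.0, s)

g = X @ beta                              # true additive genetic value
y = g + rng.normal(0.0, g.std(), n)       # phenotype with ~50% heritability

model = Lasso(alpha=0.1, max_iter=5000).fit(X, y)
support = np.flatnonzero(model.coef_)     # SNPs the predictor selects
r = np.corrcoef(X @ model.coef_, g)[0, 1] # predicted vs. true genetic value
print(f"selected SNPs: {support.size}, corr(predicted, true genetic value): {r:.2f}")
```

With enough statistical power (n large relative to s·log(p)), the correlation between the predicted and true additive genetic values approaches the square root of the heritability; nonlinear (epistatic) effects would require far more data to detect this way.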

The bottom line is that with enough statistical power predictors will capture the expected heritability for most traits. Are people in your field ready for this?

Some are, but for others it will be very difficult.
Conference on AI and Genomics / Precision Medicine (Boston).
I enjoyed your talk. I work for [leading AgBio company], but my PhD is in Applied Math. We've been computing Net Merit for bulls using SNPs for a long time. The human genetics people have been lagging...

Caught up now, though. And the first derivative (sample size growth rate) is much larger...

Yes. It's funny because sperm is priced by Net Merit, and when we or the USDA revise models, some farmers or breeders get very angry because the value of their bull can change a lot!
A Harvard Square restaurant.
I last saw Roman at the Fellows spring dinner, many years ago. I was back from Yale to see friends. He was drinking, with serious intent. He told me about working with Wilson at Cornell. He also told me an old story about Jeffrey and the Higgs mechanism. Jeffrey almost had it, soon after his work on the Goldstone boson. But Sidney talked him out of it -- something to the effect of "if you can only make sense of it in unitary gauge, it must be an artifact" ... Afterwards, at MIT they would say, "When push comes to shove, Sidney is wrong." ...

Genomics is in the details now. Lots of work to be done, but conceptually it's clear what to do. I wouldn't say that about AGI. There are still important conceptual breakthroughs that need to be made.
The Dunster House courtyard, overlooking the Charles.
We used to live here. Can you let us in to look around?

I remember it all -- the long meals, the tutors, the students, the concerts in the library. Yo-Yo Ma and Owen playing together.

A special time, at least for us. But long vanished except in memory.

Wheeler used to say that the past only exists as memory records.

Not very covariant! Why not a single four-manifold that exists all at once?
The Ritz-Carlton.
Flying private is like crack. Once you do it, you can't go back...
It's not like that. They never give you a number. They just tell you that the field house is undergoing a renovation and there's a naming opportunity. Then your kid is on the right list. They've been doing this for a hundred years...

Card had to do the analysis that way. Harvard was paying him...

I went to the session on VC for newbies. Now I realize "valuation" is just BS... Now you see how it really works...

Then Bobby says "What's an LP? I wanna be an LP because you gotta keep them happy."

Let me guess, you want a dataset with a million genomes and FICO scores?

I've helped US companies come to China for 20+ years. At first it was rough. Now if I'm back in the States for a while and return, Shenzhen seems like the Future. The dynamism is here.

To most of Eurasia it just looks like two competing hegemons. Both systems have their pluses and minuses, but it's not an existential problem...

Sure, Huawei is a big threat because they won't put in backdoors for the NSA. Who was tapping Merkel's cellphone? It was us...

Humans are just smart enough to create an AGI, but perhaps not smart enough to create a safe one.

Maybe we should make humans smarter first, so there is a better chance that our successors will look fondly on us. Genetically engineered super-geniuses might have a better chance at implementing Asimov's Laws of Robotics.  
