Showing posts with label scifoo. Show all posts

Sunday, July 24, 2016

Scifoo 2016

Photos from Palo Alto and Scifoo 2016. We weren't allowed to take photos inside the Googleplex.










Wednesday, July 20, 2016

Farewell Asia, Hello Scifoo

Apologies for the lack of blog posts. I've been on the road in Asia and quite busy for the past week. I head back to the bay area for Scifoo this weekend. See you there!







Thursday, April 14, 2016

The story of the Monte Carlo Algorithm



George Dyson is Freeman's son. I believe this talk was given at SciFoo or Foo Camp.

More Ulam (neither he nor von Neumann was really a logician, at least not primarily).

Wikipedia on Monte Carlo Methods. I first learned these methods in Caltech's Physics 129: Mathematical Methods, which used the textbook by Mathews and Walker. This book was based on lectures taught by Feynman, emphasizing practical techniques developed at Los Alamos during the war. The students in the class were about half undergraduates and half graduate students. For example, Martin Savage was a first-year graduate student that year. Martin is now a heavy user of Monte Carlo in lattice gauge theory :-)
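For readers who haven't met the technique, here is a minimal illustration of the Monte Carlo idea -- estimating pi by random sampling. (The function name and parameters are my own, purely for illustration.)

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that land inside the quarter circle
    of radius 1; that fraction converges to pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples
```

The statistical error falls off as 1/sqrt(N), independent of dimension -- which is exactly why the method proved so useful for the high-dimensional integrals encountered at Los Alamos.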

Monday, November 23, 2015

Contemplating the Future


A great profile of Nick Bostrom in the New Yorker. I often run into Nick at SciFoo and other similar meetings. When Nick is around I know there's a much better chance the discussion will stay on a highbrow, constructive track. It's surprising how often, even at these heavily screened elitist meetings, precious time gets wasted in digressions away from the main points.

The article is long, but very well done. The New Yorker still has it ... sometimes :-(

I was a bit surprised to learn Nick does not like Science Fiction. To take a particular example, Dune explores (very well, I think) a future history in which mankind has a close brush with AI takeover, and ends up banning machines that can think. At the same time, a long term genetic engineering program is taken up in secret to produce a truly superior human intellect. See also Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve.
New Yorker: ... Bostrom dislikes science fiction. “I’ve never been keen on stories that just try to present ‘wow’ ideas—the equivalent of movie productions that rely on stunts and explosions to hold the attention,” he told me. “The question is not whether we can think of something radical or extreme but whether we can discover some sufficient reason for updating our credence function.”

He believes that the future can be studied with the same meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes. ...
I've never consumed Futurism as anything other than entertainment. (In fact, I view most Futurism as on the same continuum as Science Fiction.) I think hard scientists tend to be among the most skeptical of medium- to long-term predictive power, and can easily see the mistakes about science and technology that Futurists (and pundits and journalists) make with great regularity. Bostrom is not in the same category: he's very smart and tries to be careful, but remains willing to consider speculative possibilities.
... When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand-up comedy; he has a deadpan sense of humor, which can be found lightly buried among the book’s self-serious passages. “Many of the points made in this book are probably wrong,” he writes, with an endnote that leads to the line “I don’t know which ones.”

Bostrom prefers to act as a cartographer rather than a polemicist, but beneath his exhaustive mapping of scenarios one can sense an argument being built and perhaps a fear of being forthright about it. “Traditionally, this topic domain has been occupied by cranks,” he told me. “By popular media, by science fiction—or maybe by a retired physicist no longer able to do serious work, so he will write a popular book and pontificate. That is kind of the level of rigor that is the baseline. I think that a lot of reasons why there has not been more serious work in this area is that academics don’t want to be conflated with flaky, crackpot type of things. Futurists are a certain type.”

The book begins with an “unfinished” fable about a flock of sparrows that decide to raise an owl to protect and advise them. They go looking for an owl egg to steal and bring back to their tree, but, because they believe their search will be so difficult, they postpone studying how to domesticate owls until they succeed. Bostrom concludes, “It is not known how the story ends.”

The parable is his way of introducing the book’s core question: Will an A.I., if realized, use its vast capability in a way that is beyond human control?

Thursday, July 23, 2015

Drone Art



I saw this video at one of the Scifoo sessions on drones. Beautiful stuff!

I find this much more pleasing than fireworks. The amount of waste and debris generated by a big fireworks display is horrendous.

Monday, July 13, 2015

Productive Bubbles

These slides are from one of the best sessions I attended at scifoo. Bill Janeway's perspective was both theoretical and historical, but in addition we had Sam Altman of Y Combinator to discuss Airbnb and other examples of two-way market platforms (Uber, etc.) that may be enjoying speculative bubbles at the moment.

See also Andrew Odlyzko (Caltech '71 ;-) on British railway manias for specific cases of speculative funding of useful infrastructure: here, here and here.



Friday, June 26, 2015

Sci Foo 2015


I'm in Palo Alto for this annual meeting of scientists and entrepreneurs at Google. If you read this blog, come over and say hello!

Action photos! Note most of the sessions were in smaller conference rooms, but we weren't allowed to take photographs there.

Friday, August 22, 2014

Two reflections on SCI FOO 2014

Two excellent blog posts on SCI FOO by Jacob Vanderplas (astronomer and data scientist at the University of Washington) and Dominic Cummings (former director of strategy for the Conservative Party in the UK).

Hacking Academia: Data Science and the University (Vanderplas)

Almost a year ago, I wrote a post I called the Big Data Brain Drain, lamenting the ways that academia is neglecting the skills of modern data-intensive research, and in doing so is driving away many of the men and women who are perhaps best equipped to enable progress in these fields. This seemed to strike a chord with a wide range of people, and has led me to some incredible opportunities for conversation and collaboration on the subject. One of those conversations took place at the recent SciFoo conference, and this article is my way of recording some reflections on that conversation. ...

The problem we discussed is laid out in some detail in my Brain Drain post, but a quick summary is this: scientific research in many disciplines is becoming more and more dependent on the careful analysis of large datasets. This analysis requires a skill-set as broad as it is deep: scientists must be experts not only in their own domain, but in statistics, computing, algorithm building, and software design as well. Many researchers are working hard to attain these skills; the problem is that academia's reward structure is not well-poised to reward the value of this type of work. In short, time spent developing high-quality reusable software tools translates to less time writing and publishing, which under the current system translates to little hope for academic career advancement. ...




Few scientists know how to use the political system to effect change. We need help from people like Cummings.
AUGUST 19, 2014 BY DOMINIC CUMMINGS

... It was interesting that some very eminent scientists, all much cleverer than ~100% of those in politics [INSERT: better to say 'all with higher IQ than ~100% of those in politics'], have naive views about how politics works. In group discussions, there was little focused discussion about how they could influence politics better even though it is clearly a subject that they care about very much. (Gershenfeld said that scientists have recently launched a bid to take over various local government functions in Barcelona, which sounds interesting.)

... To get things changed in politics, scientists need mechanisms a) to agree priorities in order to focus their actions on b) roadmaps with specifics. Generalised whining never works. The way to influence politicians is to make it easy for them to fall down certain paths without much thought, and this means having a general set of goals but also a detailed roadmap the politicians can apply, otherwise they will drift by default to the daily fog of chaos and moonlight.

...

3. High status people have more confidence in asking basic / fundamental / possibly stupid questions. One can see people thinking ‘I thought that but didn’t say it in case people thought it was stupid and now the famous guy’s said it and everyone thinks he’s profound’. The famous guys don’t worry about looking stupid and they want to get down to fundamentals in fields outside their own.

4. I do not mean this critically but watching some of the participants I was reminded of Freeman Dyson’s comment:

‘I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles... this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.’

People talk about rationales for all sorts of things but looking in their eyes the fundamental driver seems to be – am I right, can I do it, do the patterns in my mind reflect something real? People like this are going to do new things if they can and they are cleverer than the regulators. As a community I think it is fair to say that outside odd fields like nuclear weapons research (which is odd because it still requires not only a large collection of highly skilled people but also a lot of money and all sorts of elements that are hard (but not impossible) for a non-state actor to acquire and use without detection), they believe that pushing the barriers of knowledge is right and inevitable. ...

Saturday, August 16, 2014

Neural Networks and Deep Learning



One of the SCI FOO sessions I enjoyed the most this year was a discussion of deep learning by AI researcher Juergen Schmidhuber. For an overview of recent progress, see this paper. Also of interest: Michael Nielsen's pedagogical book project.

An application which especially caught my attention is described by Schmidhuber here:
Many traditional methods of Evolutionary Computation [15-19] can evolve problem solvers with hundreds of parameters, but not millions. Ours can [1,2], by greatly reducing the search space through evolving compact, compressed descriptions [3-8] of huge solvers. For example, a Recurrent Neural Network [34-36] with over a million synapses or weights learned (without a teacher) to drive a simulated car based on a high-dimensional video-like visual input stream.
More details here. They trained a deep neural net to drive a car using visual input (pixels from the driver's perspective, generated by a video game); output consists of steering orientation and accelerator/brake activation. There was no hard-coded structure corresponding to physics -- the neural net optimized a utility function primarily defined by time between crashes. It learned how to drive the car around the track after fewer than 10k training sessions.
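The trick that makes evolution feasible for million-weight networks is searching over a short, compressed description of the weights rather than the weights themselves. A rough sketch of the idea (my own toy illustration, not Schmidhuber's actual implementation -- `decode_weights` and `evolve` are invented names, and the real compressed network search uses a DCT over structured weight matrices plus a proper evolution strategy):

```python
import math
import random

def decode_weights(coeffs, n_weights):
    """Expand a short genome of frequency-domain coefficients into a
    full weight vector via a cosine basis (inverse-DCT-like).
    Evolution then searches over `coeffs` -- tens of numbers --
    instead of the n_weights-dimensional raw weight space."""
    weights = []
    for i in range(n_weights):
        w = sum(c * math.cos(math.pi * k * (i + 0.5) / n_weights)
                for k, c in enumerate(coeffs))
        weights.append(w)
    return weights

def evolve(fitness, genome_len=8, generations=50, seed=0):
    """Toy (1+1) evolutionary loop: mutate the genome, keep the child
    if it scores at least as well as the parent."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(genome_len)]
    best_fit = fitness(best)
    for _ in range(generations):
        child = [g + rng.gauss(0, 0.1) for g in best]
        f = fitness(child)
        if f > best_fit:
            best, best_fit = child, f
    return best, best_fit
```

The payoff: a genome of 8 numbers can parameterize a weight vector of arbitrary length, so the dimensionality of the search space no longer grows with the size of the network.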

For some earlier discussion of deep neural nets and their application to language translation, see here. Schmidhuber has also worked on Solomonoff universal induction.

These TED videos give you some flavor of Schmidhuber's sense of humor :-) Apparently his younger brother (mentioned in the first video) has transitioned from theoretical physics to algorithmic finance. Schmidhuber on China.



Wednesday, August 13, 2014

Designer babies: selection vs editing



The discussion in this video is sophisticated enough to make the distinction between embryo selection -- the parents get a baby whose DNA originates from them, but the "best baby possible" -- and active genetic editing, which can give the child genes that neither parent had.

The movie GATTACA focuses on selection -- the director made a deliberate decision to eliminate reference to splicing or editing of genes. (Possibly because Ethan Hawke's character Vincent would have no chance competing against edited people.)

At SCI FOO, George Church seemed confident that editing would be an option in the near future. He is convinced that off-target mutations are not a problem for CRISPR. I have not yet seen this demonstrated in the literature, but of course George knows a lot more than what has been published. (Warning: I may have misunderstood his comments as there was a lot of background noise when we were talking.)

One interesting genetic variant (Lrp5?) that I learned about at the meeting, of obvious interest to future splicers and editors, apparently confers a +8 SD increase in bone strength!

My views on all of this:
... given sufficient phenotype|genotype data, genomic prediction of traits such as cognitive ability will be possible. If, for example, 0.6 or 0.7 of total population variance is captured by the predictor, the accuracy will be roughly plus or minus half a standard deviation (e.g., a few cm of height, or 8 IQ points). The required sample size to extract a model of this accuracy is probably on the order of a million individuals. As genotyping costs continue to decline, it seems likely that we will reach this threshold within five years for easily acquired phenotypes like height (self-reported height is reasonably accurate), and perhaps within the next decade for more difficult phenotypes such as cognitive ability. At the time of this writing SNP genotyping costs are below $50 USD per individual, meaning that a single super-wealthy benefactor could independently fund a crash program for less than $100 million.

Once predictive models are available, they can be used in reproductive applications, ranging from embryo selection (choosing which IVF zygote to implant) to active genetic editing (e.g., using powerful new CRISPR techniques). In the former case, parents choosing between 10 or so zygotes could improve their expected phenotype value by a population standard deviation. For typical parents, choosing the best out of 10 might mean the difference between a child who struggles in school, versus one who is able to complete a good college degree. Zygote genotyping from single cell extraction is already technically well developed [25], so the last remaining capability required for embryo selection is complex phenotype prediction. The cost of these procedures would be less than tuition at many private kindergartens, and of course the consequences will extend over a lifetime and beyond.

The corresponding ethical issues are complex and deserve serious attention in what may be a relatively short interval before these capabilities become a reality. Each society will decide for itself where to draw the line on human genetic engineering, but we can expect a diversity of perspectives. Almost certainly, some countries will allow genetic engineering, thereby opening the door for global elites who can afford to travel for access to reproductive technology. As with most technologies, the rich and powerful will be the first beneficiaries. Eventually, though, I believe many countries will not only legalize human genetic engineering, but even make it a (voluntary) part of their national healthcare systems [26]. The alternative would be inequality of a kind never before experienced in human history.
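The selection arithmetic in the excerpt above is easy to check by simulation. A toy sketch (the function `selection_gain` is my own illustrative name; gains are reported in SD units of the embryo distribution, and I am taking the correlation between predictor and true value to be the square root of the variance captured):

```python
import math
import random

def selection_gain(n_embryos=10, var_captured=0.65, trials=20000, seed=0):
    """Monte Carlo estimate of the expected true-score gain from
    implanting the embryo with the highest *predicted* value, when the
    predictor captures `var_captured` of the trait variance
    (i.e., correlates r = sqrt(var_captured) with the true value)."""
    rng = random.Random(seed)
    r = math.sqrt(var_captured)
    total = 0.0
    for _ in range(trials):
        best_pred, best_true = -float("inf"), 0.0
        for _ in range(n_embryos):
            g = rng.gauss(0, 1)                                  # true value
            p = r * g + math.sqrt(1 - r * r) * rng.gauss(0, 1)   # noisy predictor
            if p > best_pred:
                best_pred, best_true = p, g
        total += best_true
    return total / trials
```

With r ~ 0.8 and the expected maximum of 10 standard normals ~ 1.54, the gain comes out a bit above one SD of the embryo distribution; since siblings vary less than the population at large, this is consistent with the "roughly a population standard deviation" figure quoted above.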

Here is the version of the GATTACA scene that was cut. The parents are offered the choice of edited or spliced genes conferring rare mathematical or musical ability.

Monday, August 11, 2014

SCI FOO 2014: photos

The day before SCI FOO I visited Complete Genomics, which is very close to the Googleplex.




Self-driving cars:



SCI FOO festivities:







I did an interview with O'Reilly. It should appear in podcast form at some point and I'll post a link.




Obligatory selfie:

Wednesday, October 30, 2013

Project Einstein


I met Jonathan Rothberg, a real pioneer in genetic sequencing technology, at Scifoo back in 2008 (see Gene machines). Jonathan's foundation is now backing an effort similar to the BGI Cognitive Genomics project. He may not remember, but we had a long conversation about this topic on the bus from the hotel to the Googleplex.

I've agreed to participate in Project Einstein (I am not worthy!) as a DNA donor, and I hope that our projects will someday share data and resources. Rothberg's attitude is typical of a true innovator: damn the critics, full speed ahead!
Nature: He founded two genetic-sequencing companies and sold them for hundreds of millions of dollars. He helped to sequence the genomes of a Neanderthal man and James Watson, who co-discovered DNA’s double helix. Now, entrepreneur Jonathan Rothberg has set his sights on another milestone: finding the genes that underlie mathematical genius.

Rothberg and physicist Max Tegmark, who is based at the Massachusetts Institute of Technology in Cambridge, have enrolled about 400 mathematicians and theoretical physicists from top-ranked US universities in a study dubbed ‘Project Einstein’. They plan to sequence the participants’ genomes using the Ion Torrent machine that Rothberg developed.

The team will be wading into a field fraught with controversy. Critics have assailed similar projects, such as one at the BGI (formerly the Beijing Genomics Institute) in Shenzhen, China, that is sequencing the genomes of 1,600 people identified as mathematically precocious children in the 1970s (see Nature 497, 297–299; 2013).

... Rothberg has long been interested in cognition. He is also in awe of the abilities of famous scientists. “Einstein said ‘the most incomprehensible thing about the Universe is that it is comprehensible’,” he says. “I’d love to find the genes that make the Universe comprehensible.”

There is precedent to the concept of sequencing extreme outliers in a population in the hunt for influential genes. Scientists have used the technique to sift for genes that influence medical conditions such as high blood pressure and bone loss. Some behavioural geneticists, such as Robert Plomin at King’s College London, who is involved with the BGI project, say that there is no reason that this same approach won’t work for maths ability. As much as two-thirds of a child’s mathematical aptitude seems to be influenced by genes (Y. Kovas et al. Psychol. Sci. 24, 2048–2056; 2013).

... The Rothberg Institute for Childhood Diseases, Rothberg’s private foundation based in Guilford, Connecticut, is the study’s sponsor. But Rothberg won’t say who is funding the project, which other geneticists estimate will cost at least US$1 million. Some speculate that Rothberg is funding it himself. In 2001, Fortune estimated his net worth to be $168 million, and that was before he sold the sequencing companies he founded — 454 Life Sciences and Ion Torrent, both based in Connecticut — for a combined total of $880 million.

Rothberg is adamant that the project is well worth the time and the money, whoever is paying for it. “This study may not work at all,” he says — before adding, quickly, that it “is not a crazy thing to do”. For a multimillionaire with time on his hands, that seems to be justification enough.
Let me repeat the scientific motivations for this type of project. The human brain is arguably the most complex object we know of in the universe. Yet it is constructed from a blueprint containing less than a few gigabits of information. Unlocking the genetic architecture of cognition is one of the greatest scientific challenges -- and one now feasible, in the age of genomics that Rothberg and others helped bring into existence.

For a discussion of previous GWAS results on general cognition, and their implications for the prospects of studies like Project Einstein, see First GWAS hits for cognitive ability. For general background on the science, watch this video. Or read these: MIRI interview, FAQ.

Sunday, September 12, 2010

My overview of psychometrics

I used these slides in two talks given at Foo Camp 2010 and Sci Foo 2010. At these "self-organized" meetings attendees are encouraged to talk about whatever they find interesting, and I usually choose to talk about wacky stuff rather than my main research, which tends to be a bit too specialized for the audience. In previous years I've talked about ultimate fighting, internet security, startups, etc.

At Foo, which has a Silicon Valley flavor, I had several CEOs and a bunch of technologists in the audience, and didn't receive any objections to the material. One CEO (an IIT grad with a PhD in engineering from Princeton) who runs a software company employing 1000 developers in India, was very interested in my results and has since agreed to run some experiments (stay tuned!) related to personnel selection and the relation between g and coding ability. At Sci Foo, which has a more scientific or academic flavor, the audience consisted of science writers, Google engineers, physical and computer scientists, a neuroscientist, and (I think) a social scientist. Only the last two voiced objections -- the social scientist actually got up and walked out after 15 minutes. Others in the audience found these objections rather amusing -- How could they argue? All you did was show data; the conclusions are obvious!

See additional comments (elaboration from the talks) here.

Sunday, August 01, 2010

More SciFoo 2010 notes

Some quick notes. I don't know if I'll have the energy to put in a full set of links, but you can pursue any of these topics with your own google searches. I'm sure I missed a lot of good sessions -- too much good stuff going on :-)

Great talk on the Pirahã and Chomsky's universal grammar by Dan Everett. (The Pirahã have no words for distinct numbers and no recursion!)

Long discussion with Erik Verlinde about his idea that gravity is an emergent entropic force.

Dinosaurs and reptiles cannot gallop, with the exception of one weird crocodile -- Paul Sereno.

Ed Felten can uniquely identify individual physical objects (e.g., a particular sheet of paper) using an optical scanner and, using cryptography, produce a digital signature (printed directly on the object) proving that the particular object isn't counterfeit. This has tons of interesting applications.
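A toy sketch of how such a scheme might fit together (my own illustration, not Felten's actual system: an HMAC with an issuer secret stands in for the public-key signature a real deployment would use, and real fingerprint matching requires fuzzy-extractor / error-correction machinery to handle scanner noise robustly):

```python
import hashlib
import hmac
import math

SECRET = b"issuer-key"  # stand-in: a real scheme uses an asymmetric key pair

def sign_reference(fingerprint):
    """Issuer: serialize the scanned feature vector and 'sign' it.
    Both the serialized reference and the tag get printed on the object."""
    blob = ",".join(f"{x:.3f}" for x in fingerprint).encode()
    tag = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return blob, tag

def correlation(a, b):
    """Normalized correlation between two feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def verify(scan, blob, tag, threshold=0.9):
    """Verifier: check the printed signature is authentic, then check
    that a fresh scan of the object matches the signed reference."""
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    reference = [float(x) for x in blob.decode().split(",")]
    return correlation(scan, reference) >= threshold
```

The key property: the signature binds the physical fingerprint to the issuer, so a counterfeiter would need either the signing key or a physical object whose microscopic surface features reproduce the signed reference.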

Frank Wilczek got me excited about graphene. Max Tegmark got me intrigued about 21 cm radio waves and a funny FFT telescope.

Tsutomu Shimomura's startup may drastically reduce the cost of LED lighting.

Sendhil Mullainathan's research explores the psychological consequences of scarcity.


A few more photos from the closing session. Sorry my photos are so boring but we got a warning from one of the Google organizers right at the beginning not to take photos outside of certain restricted areas. Earlier post here.



Saturday, July 31, 2010

SciFoo 2010 notes

There seem to be a lot of physicists here this year. A partial list of theorists: Adi Stern, Chetan Nayak, Frank Wilczek, David Gross, David Tong, Eva Silverstein, Lee Smolin, Erik Verlinde, Alan Guth, Max Tegmark, Paul Davies, Giovanni Amelino-Camelia. Do I count Ed Lu? He was an astronaut for a long time. Guth, who is a very level-headed guy, told me he's now 99 percent confident that inflation is correct, given the CMB results from the last decade. I think I convinced Chetan and maybe Adi that they are actually many worlders ("... if you do a decoherence calculation, and at the end don't insist on throwing away all the parts of the wavefunction except one of the decoherent parts, then you're a many worlder" ;-) Max claims to have a way to get the Born rule from many worlds, but I don't believe him :-) Guth is a many worlder.

I could easily spend all my time at the physics talks, but I think it's better use of this kind of meeting to attend talks outside my specialty. At dinner I met a guy who does fMRI on psychopaths and the guy who built a wind-powered car that goes faster than the wind.




Larry Page addressing the campers.




The campers introducing themselves.




Goofing around with a super croc fossil.




Two Caltechers of my vintage: Tsutomu Shimomura and Ed Felten. We talked about the huge cognitive surplus in physics -- both of these guys were trained in physics before going on to other things.
