Tuesday, March 28, 2017

The brute tyranny of g-loading: Lawrence Krauss and Joe Rogan



I love Joe Rogan -- he has an open, inquisitive mind and is a sharp observer of life and human society. See, for example, this interview with Dan Bilzerian about special forces, professional poker, sex, drugs, heart attacks, life, happiness, hedonic treadmill, social media, girls, fame, prostitution, money, steroids, stem cell therapy, and plenty more.

I know Lawrence Krauss quite well -- he and I work in the same area of theoretical physics. However, the 20+ minute opening segment in which Krauss tries to explain gauge symmetry to Joe is downright painful. Some things are just conceptually hard, and are built up from other concepts that are themselves non-obvious.


Gauge symmetry is indeed central to modern theoretical physics -- all of the known forces of nature are gauge interactions. I've been at an uncountable number of cocktail parties (sometimes with other professors) where I've tried to explain this concept to someone as sincerely interested as Rogan is in the video. Who doesn't like to hear about fundamental laws of Nature and deep principles of physical reality?

No matter how clearly a highly g-loaded concept is explained, the average person will struggle to grasp it. One sad aspect of the Internet is that there isn't any significant discussion forum or blog comment section where even much simpler concepts, such as regression to the mean, are understood by the majority of participants.
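For readers who want to see the simpler concept in action, here is a tiny simulation (my own toy example, not from the post) of regression to the mean: select the top scorers on a noisy test and their retest average falls back toward the population mean, with no change in underlying ability.

```python
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(100, 15, 100_000)              # stable "true" ability
test1 = ability + rng.normal(0, 10, ability.size)   # noisy measurement 1
test2 = ability + rng.normal(0, 10, ability.size)   # noisy measurement 2

top = test1 > np.percentile(test1, 90)              # selected on test 1
print(test1[top].mean())   # high: real ability plus lucky noise
print(test2[top].mean())   # noticeably lower: regression toward the mean
```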

Listening to the conversation between Joe and Lawrence about gauge theory and the Higgs field, I couldn't help but think of this Far Side cartoon:


Just Like Heaven

Musical intermission. Choose your favorite!

For me it's not the same without that 80's synth...









Saturday, March 25, 2017

Robots Proctor Online Exams


For background on this subject, see How to beat online exam proctoring. It is easy for clever students to beat existing security systems for online exams. Enterprising students could even set up "cheating rooms" that make it easy for test takers to cheat. Judging by the amount of traffic this old post gets, cheating on online exams is a serious problem.

Machine learning to the rescue! :-) The machines don't have to be 100% accurate in detection -- they can merely flag suspicious moments in the data and ask a human proctor to look more carefully. This makes the overall system much more scalable.

The monitoring data (e.g., video from the webcam + POV cam) from a particular exam could potentially be stored forever. In an extreme case, a potential employer who wants to be sure that Johnny passed the Python coding (or psychometric g) exam for real could be granted access to the stored data by Johnny, to see for themselves.
Automated Online Exam Proctoring
Atoum, Chen, Liu, Hsu, and Liu
IEEE Transactions on Multimedia

Abstract:
Massive open online courses (MOOCs) and other forms of remote education continue to increase in popularity and reach. The ability to efficiently proctor remote online examinations is an important limiting factor to the scalability of this next stage in education. Presently, human proctoring is the most common approach of evaluation, by either requiring the test taker to visit an examination center, or by monitoring them visually and acoustically during exams via a webcam. However, such methods are labor-intensive and costly. In this paper, we present a multimedia analytics system that performs automatic online exam proctoring. The system hardware includes one webcam, one wearcam, and a microphone, for the purpose of monitoring the visual and acoustic environment of the testing location. The system includes six basic components that continuously estimate the key behavior cues: user verification, text detection, voice detection, active window detection, gaze estimation and phone detection. By combining the continuous estimation components, and applying a temporal sliding window, we design higher level features to classify whether the test taker is cheating at any moment during the exam. To evaluate our proposed system, we collect multimedia (audio and visual) data from 24 subjects performing various types of cheating while taking online exams. Extensive experimental results demonstrate the accuracy, robustness, and efficiency of our online exam proctoring system.
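As a rough illustration of the flag-and-escalate idea described above (my own sketch, not the authors' pipeline; the cue names, fusion rule, and threshold are all made up), per-second behavior cues can be aggregated over a temporal sliding window and only the suspicious windows handed to a human proctor:

```python
import numpy as np

def suspicious_windows(cue_scores, window=30, threshold=0.6):
    """cue_scores: array of shape (seconds, n_cues), each cue scaled to [0, 1]."""
    combined = cue_scores.mean(axis=1)                 # naive fusion of the cues
    flags = []
    for start in range(0, len(combined) - window + 1, window):
        score = combined[start:start + window].mean()  # window-level feature
        if score > threshold:
            flags.append((start, start + window, score))
    return flags

# Fake data: 10 minutes of monitoring, 4 cues (gaze, voice, phone, window focus)
scores = np.random.default_rng(1).random((600, 4)) * 0.4
scores[300:340] += 0.5                                  # injected "suspicious" span
print(suspicious_windows(scores))                       # flags the injected span
```

The point is only that the detector need not be perfect; it just has to narrow down what a human reviews.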
This work is related to the issued patent
Online examination proctoring system
WO 2014130769 A1

ABSTRACT
The system to proctor an examination includes a first camera (10) worn by the examination taking subject (12) and directed to capture images in subject's field of vision. A second camera (14) is positioned to record an image of the subject's face during the examination. A microphone (26) captures sounds within the room, which are analyzed to detect speech utterances. The computer system (8) is programmed to store captured images from said first camera. The computer (16) is also programmed to issue prompting events instructing the subject to look in a direction specified by the computer at event intervals not disclosed to subject in advance and to index for analysis the captured images in association with indicia corresponding to the prompting events.

Publication number: WO2014130769 A1
Publication type: Application
Application number: PCT/US2014/017584
Publication date: Aug 28, 2014
Filing date: Feb 21, 2014
Priority date: Feb 25, 2013
Also published as: US9154748, US20140240507
Inventors: Stephen Hsu, Xiaoming Liu, Xiangyang Alexander Liu
Applicant: Board of Trustees of Michigan State University

Thursday, March 23, 2017

The weirdest academic seminar ever

... but in a good way! :-)

Beijing-based economist and rapper Andrew Dougherty ("Big Daddy Dough") presents The Redprint: Rhyme and Reason in the Riddle Kingdom at the BYU Kennedy Center.




This is his classic cover Beijing State of Mind:




Dougherty interview on the Sinica podcast.

With Kasparov in NYC


We were together on a panel talking about AI. I challenged him to a quick game of blitz but he declined ;-)  The moderator is Josh Wolfe of Lux Capital.

More Kasparov on this blog.


Nunes, Trump, Obama and Who Watches the Watchers?



I've made this a separate entry out of the update to my earlier discussion, FISA, EO 12333, Bulk Collection, and All That. I believe the Nunes revelations from yesterday support my contention that the Trump team intercepts are largely "incidental" collections (e.g., bulk collections using tapped fiber, etc.) under 12333, and that the existence of many (leaked) intel reports featuring these intercepts is likely a consequence of Obama's relaxation of the rules governing access to this bulk data. If nothing else, the large number of possible leakers helps hide the identities of the actual leakers!

EO12333 + Obama OKs unprecedented sharing of this info as he leaves office = recent leaks? Note the use of the term "incidentally" and the wide dissemination (thanks to Obama policy change as he left office).
WSJ: ... “I recently confirmed that on numerous occasions the intelligence community incidentally collected information about U.S. citizens involved in the Trump transition,” Mr. Nunes said, reading a brief statement to reporters on Capitol Hill on Wednesday afternoon. “Details about U.S. persons associated with the incoming administration—details with little or no apparent foreign intelligence value—were widely disseminated in intelligence community reporting.”

... Mr. Nunes added that it was “possible” the president himself had some of his communication intercepted, and has asked the Federal Bureau of Investigation, National Security Agency and other intelligence agencies for more information.
The change put in place as Obama left office is probably behind the large number of circulating reports that feature "incidentally" captured communications of the Trump team. The NYTimes article below is from February.
NYTimes: ... Until now, National Security Agency analysts have filtered the surveillance information for the rest of the government. They search and evaluate the information and pass only the portions of phone calls or email that they decide is pertinent on to colleagues at the Central Intelligence Agency, the Federal Bureau of Investigation and other agencies. And before doing so, the N.S.A. takes steps to mask the names and any irrelevant information about innocent Americans.

[ So FBI is only getting access to this data for the first time. It is interesting that Nunes said that NSA would comply with his request for more information but that FBI has not complied. It seems possible that FBI does not yet have good internal controls over how its agents use these new privileges. ]

The new system would permit analysts at other intelligence agencies to obtain direct access to raw information from the N.S.A.’s surveillance to evaluate for themselves. If they pull out phone calls or email to use for their own agency’s work, they would apply the privacy protections masking innocent Americans’ information — a process known as “minimization” — at that stage, Mr. Litt said.

... FISA covers a narrow band of surveillance: the collection of domestic or international communications from a wire on American soil, leaving most of what the N.S.A. does uncovered. In the absence of statutory regulation, the agency’s other surveillance programs are governed by rules the White House sets under a Reagan-era directive called Executive Order 12333.

... [it is unclear what] rules say about searching the raw data using names or keywords intended to bring up Americans’ phone calls or email that the security agency gathered “incidentally” under the 12333 surveillance programs ...
It appears that the number of individuals allowed to search bulk, incidentally collected data has been enlarged significantly. Who watches these watchers? (There must now be many thousands...)
Sophos: Obama administration signs off on wider data-sharing for NSA ... Patrick Toomey, a lawyer for the American Civil Liberties Union (ACLU), put it in an interview with the New York Times, 17 intelligence agencies are now going to be “rooting… through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant”.

The new rules mean that the FBI, the CIA, the DEA, and intelligence agencies of the US military’s branches and more, will be able to search through raw signals intelligence (SIGINT): intercepted signals that include all manner of people’s communications, be it via satellite transmissions, phone calls and emails that cross network switches abroad, as well as messages between people abroad that cross domestic network switches.
Added: A quick and dirty summary of the new rules governing access to raw SIGINT. Note that there is lots of room for abuse in what I quote below:
Section III: ... NSA may make raw SIGINT available through its own systems, through a shared IC or other government capability (like a cloud environment), or by transferring the information to the IC element's information systems.

Section V: ... Communications solely between U.S. persons “inadvertently retrieved during the selection of foreign communications” will be destroyed except if they contain significant foreign intelligence or counterintelligence as determined by the IC element.

Section VI: ... An IC element may disseminate U.S. person information "derived solely from raw SIGINT" under these procedures ... if ... the information is “necessary to understand the foreign intelligence or counterintelligence information,”
Here are the entities that now have access (thanks Obama!) to raw SIGINT, and that seem to have the discretionary power to "unmask" US citizens appearing in the data.
IC elements are defined under 3.5(h) of E.O. 12333 as: (1) The Office of the Director of National Intelligence; (2) The Central Intelligence Agency; (3) The National Security Agency; (4) The Defense Intelligence Agency; (5) The National Geospatial-Intelligence Agency; (6) The National Reconnaissance Office; (7) The other offices within the Department of Defense for the collection of specialized national foreign intelligence through reconnaissance programs; (8) The intelligence and counterintelligence elements of the Army, the Navy, the Air Force, and the Marine Corps; (9) The intelligence elements of the Federal Bureau of Investigation; (10) The Office of National Security Intelligence of the Drug Enforcement Administration; (11) The Office of Intelligence and Counterintelligence of the Department of Energy; (12) The Bureau of Intelligence and Research of the Department of State; (13) The Office of Intelligence and Analysis of the Department of the Treasury; (14) The Office of Intelligence and Analysis of the Department of Homeland Security; (15) The intelligence and counterintelligence elements of the Coast Guard; and (16) Such other elements of any department or agency as may be designated by the President, or designated jointly by the Director and the head of the department or agency concerned, as an element of the Intelligence Community.

Tuesday, March 21, 2017

FISA, EO 12333, Bulk Collection, and All That


Some basic questions for the experts:

1. To what extent does EO12333 allow surveillance of US individuals without FISA warrant?

2. To what extent are US voice conversations recorded via bulk collection (and preserved for, e.g., 5 or more years)? The email answer is clear ... But now automated voice recognition and transcription make storage of voice conversations much more scalable.

3. To what extent do Five Eyes intel collaborators have direct access to preserved data?

4. Are "experts" and media pundits and Senators even asking the right questions on this topic? For example, can stored bulk-collected voice data from a US individual be accessed by NSA without FISA approval by invoking 12333? How can one prevent a search query on stored data from producing results of this type?

See, e.g., Overseas Surveillance in an Interconnected World (Brennan Center for Justice at NYU School of Law), ACLU.org, and Executive Order 12333 (epic.org):
EPIC has tracked the government's reliance on EO 12333, particularly the reliance on Section 1.12(b)(13), which authorizes the NSA to provide "such administrative and technical support activities within and outside the United States as are necessary to perform the functions described in sections (1) through (12) above, including procurement." This provision appears to have opened the door for the NSA's broad and unwarranted surveillance of U.S. and foreign citizens.

Executive Order 12333 was signed by President Ronald Reagan on December 4, 1981. It established broad new surveillance authorities for the intelligence community, outside the scope of public law. EO 12333 has been amended three times. It was amended by EO 13284 on January 23, 2003 and was then amended by EO 13355 on August 27, 2004. EO 13355 was subtitled "Strengthened Management of the Intelligence Community" and reflected the fact that the Director of National Intelligence (DNI) now existed as the head of the intelligence community, rather than the CIA which had previously served as the titular head of the IC. EO 13355 partially supplemented and superseded EO 12333. On July 30, 2008, President George W. Bush signed EO 13470, which further supplemented and superseded EO 12333 to strengthen the role of the Director of National Intelligence.

Since the Snowden revelations there has been a great deal of discussion regarding the activities of the IC, but relatively little attention has been paid to EO 12333. EO 12333 often serves as an alternate basis of authority for surveillance activities, above and beyond Section 215 and 702. As Bruce Schneier has emphasized, "Be careful when someone from the intelligence community uses the caveat "not under this program," or "not under this authority"; almost certainly it means that whatever it is they're denying is done under some other program or authority. So when [NSA General Counsel Raj] De said that companies knew about NSA collection under Section 702, it doesn't mean they knew about the other collection programs." Senator Dianne Feinstein (D-CA), Chair of the Senate Intelligence Committee, said in August 2013 that, "The committee does not receive the same number of official reports on other NSA surveillance activities directed abroad that are conducted pursuant to legal authorities outside of FISA (specifically Executive Order 12333), but I intend to add to the committee's focus on those activities." In July 2014, a former Obama State Department official, John Napier Tye, wrote an Op-Ed in the Washington Post calling for greater scrutiny of EO 12333. Tye noted that "based in part on classified facts that I am prohibited by law from publishing, I believe that Americans should be even more concerned about the collection and storage of their communications under Executive Order 12333 than under Section 215."
Tye in the WaPo:
... [EO 12333] authorizes collection of the content of communications, not just metadata, even for U.S. persons. Such persons cannot be individually targeted under 12333 without a court order. However, if the contents of a U.S. person’s communications are “incidentally” collected (an NSA term of art) in the course of a lawful overseas foreign intelligence investigation, then Section 2.3(c) of the executive order explicitly authorizes their retention. It does not require that the affected U.S. persons be suspected of wrongdoing and places no limits on the volume of communications by U.S. persons that may be collected and retained.

[ E.g., NSA could "incidentally" retain the email of a US individual which happens to be mirrored in Google or Yahoo data centers outside the US, as part of bulk collection for an ongoing (never ending) foreign intelligence or anti-terrorism investigation... ]

“Incidental” collection may sound insignificant, but it is a legal loophole that can be stretched very wide. Remember that the NSA is building a data center in Utah five times the size of the U.S. Capitol building, with its own power plant that will reportedly burn $40 million a year in electricity.
See also Mining your data at NSA (source of image at top).

UPDATE: EO12333 + Obama OKs unprecedented sharing of this info as he leaves office = recent leaks? Note the use of the term "incidentally" and the wide dissemination (thanks to Obama policy change as he left office).
WSJ: ... “I recently confirmed that on numerous occasions the intelligence community incidentally collected information about U.S. citizens involved in the Trump transition,” Mr. Nunes said, reading a brief statement to reporters on Capitol Hill on Wednesday afternoon. “Details about U.S. persons associated with the incoming administration—details with little or no apparent foreign intelligence value—were widely disseminated in intelligence community reporting.”

... Mr. Nunes added that it was “possible” the president himself had some of his communication intercepted, and has asked the Federal Bureau of Investigation, National Security Agency and other intelligence agencies for more information.




The change put in place as Obama left office is probably behind the large number of circulating reports that feature "incidentally" captured communications of the Trump team. The NYTimes article below is from February.
NYTimes: ... Until now, National Security Agency analysts have filtered the surveillance information for the rest of the government. They search and evaluate the information and pass only the portions of phone calls or email that they decide is pertinent on to colleagues at the Central Intelligence Agency, the Federal Bureau of Investigation and other agencies. And before doing so, the N.S.A. takes steps to mask the names and any irrelevant information about innocent Americans.

The new system would permit analysts at other intelligence agencies to obtain direct access to raw information from the N.S.A.’s surveillance to evaluate for themselves. If they pull out phone calls or email to use for their own agency’s work, they would apply the privacy protections masking innocent Americans’ information — a process known as “minimization” — at that stage, Mr. Litt said.

... FISA covers a narrow band of surveillance: the collection of domestic or international communications from a wire on American soil, leaving most of what the N.S.A. does uncovered. In the absence of statutory regulation, the agency’s other surveillance programs are governed by rules the White House sets under a Reagan-era directive called Executive Order 12333.

... [it is unclear what] rules say about searching the raw data using names or keywords intended to bring up Americans’ phone calls or email that the security agency gathered “incidentally” under the 12333 surveillance programs ...
It appears that the number of individuals allowed to search bulk, incidentally collected data has been enlarged significantly. Who watches these watchers? (There must now be many thousands...)
Sophos: ... Patrick Toomey, a lawyer for the American Civil Liberties Union (ACLU), put it in an interview with the New York Times, 17 intelligence agencies are now going to be “rooting… through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant”.

The new rules mean that the FBI, the CIA, the DEA, and intelligence agencies of the US military’s branches and more, will be able to search through raw signals intelligence (SIGINT): intercepted signals that include all manner of people’s communications, be it via satellite transmissions, phone calls and emails that cross network switches abroad, as well as messages between people abroad that cross domestic network switches.

Monday, March 20, 2017

Everything is Heritable


The figure above comes from the paper below. A quick glance shows that for pairs of individuals:

1. Increasing genetic similarity implies increasing trait similarity (for traits including height, cognitive ability, and years of education).

2. Home environments (raised Together vs Apart; Adoptees) have limited impact on these traits (at least in relatively egalitarian Sweden).

It's all here in one simple figure, but still beyond the grasp of most people struggling to understand how humans and human society work... See also The Mystery of Non-Shared Environment.
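As a purely illustrative aside (my numbers, not the paper's), the classical Falconer decomposition shows how twin correlations of the kind plotted above translate into rough heritability and environment shares:

```python
# Hypothetical trait correlations for identical (MZ) vs fraternal (DZ) twins
r_mz, r_dz = 0.75, 0.45

h2 = 2 * (r_mz - r_dz)   # additive genetic share, Falconer's formula
c2 = r_mz - h2           # shared (family) environment share
e2 = 1 - r_mz            # non-shared environment + measurement error
print(h2, c2, e2)        # 0.60, 0.15, 0.25
```

With correlations like these, most of the trait variance is attributed to genes and non-shared environment, and little to the home environment -- the pattern the figure conveys at a glance.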
Genetics and educational attainment

David Cesarini & Peter M. Visscher
NPJ Science of Learning 2, Article number: 4 (2017)
doi:10.1038/s41539-017-0005-6

Abstract: We explore how advances in our understanding of the genetics of complex traits such as educational attainment could constructively be leveraged to advance research on education and learning. We discuss concepts and misconceptions about genetic findings with regard to causes, consequences, and policy. Our main thesis is that educational attainment as a measure that varies between individuals in a population can be subject to exactly the same experimental biological designs as other outcomes, for example, those studied in epidemiology and medical sciences, and the same caveats about interpretation and implication apply.

Wednesday, March 15, 2017

Ginormous Neural Nets and Networks of Networks

Now that we have neural nets that are good at certain narrow tasks, such as image or speech recognition, playing specific games, translating language, ... the next stage of development will involve 1. linking these specialized nets together in a more general architecture ("Mixtures of Experts"), and 2. generalizing what is learned in one class of problems to different situations ("transfer learning"). The first paper below is by Google Brain researchers and the second from Google DeepMind.

See also A Brief History of the Future, as told to the Masters of the Universe.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean
(Submitted on 23 Jan 2017)

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
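For readers who want a concrete picture, here is a minimal sketch of a sparsely-gated mixture-of-experts layer in the spirit of the abstract above. This is my own toy illustration in PyTorch, not the authors' code; it keeps only the core idea (a trainable gate selects the top-k experts per example) and ignores the load-balancing and efficiency machinery that makes the real thing work at scale.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim, hidden, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)   # trainable gating network
        self.k = k

    def forward(self, x):                       # x: (batch, dim)
        scores = self.gate(x)                   # (batch, n_experts)
        topk, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk, dim=-1)       # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):              # combine the k selected experts
            for e, expert in enumerate(self.experts):
                mask = (idx[:, slot] == e)
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = SparseMoE(dim=16, hidden=32)
print(moe(torch.randn(4, 16)).shape)            # torch.Size([4, 16])
```

Because each example touches only k of the experts, total parameter count can grow with the number of experts while per-example compute stays roughly constant.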

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra (Submitted on 30 Jan 2017)

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
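Again purely as an illustration (not DeepMind's implementation), the tournament-selection loop over pathways described in the abstract looks roughly like this. Pathways are binary masks over a fixed pool of modules, and a dummy fitness function stands in for actual task performance:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, K = 3, 10, 4                    # layers, modules per layer, active modules per layer
target = rng.integers(0, 2, (L, M))   # hidden "good" path used only by the dummy fitness

def random_path():
    p = np.zeros((L, M), dtype=int)
    for layer in range(L):
        p[layer, rng.choice(M, K, replace=False)] = 1
    return p

def fitness(path):                    # dummy fitness: overlap with the hidden target path
    return (path & target).sum()

def mutate(path, rate=0.1):           # simplified: real PathNet keeps K modules per layer
    child = path.copy()
    flips = rng.random(child.shape) < rate
    child[flips] ^= 1
    return child

population = [random_path() for _ in range(20)]
for step in range(200):               # binary tournament: mutated winner replaces loser
    i, j = rng.choice(len(population), 2, replace=False)
    win, lose = (i, j) if fitness(population[i]) >= fitness(population[j]) else (j, i)
    population[lose] = mutate(population[win])

print(max(fitness(p) for p in population))   # population converges toward good pathways
```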
The figure below describes the speedup in learning new games based on previous learning from playing a game of a different type.

Sunday, March 12, 2017

Dalton Conley: The Bell Curve Revisited and The Genome Factor

Dalton Conley is the Henry Putnam University Professor of Sociology at Princeton University. He is unique in having earned a second PhD in behavior genetics after his first in Sociology.

In the talk and paper below he discusses molecular genetic tests of three hypotheses from Herrnstein and Murray's The Bell Curve: Intelligence and Class Structure in American Life. This much-vilified book is indeed about intelligence and class structure, but almost entirely not about race. Racial differences in intelligence are discussed in only one chapter, and the authors do not make strong claims as to whether the causes of these differences are genetic or environmental. (They do leave open the possibility of a genetic cause for part of the gap, which has led to all kinds of trouble for the surviving author, Charles Murray.) The three questions addressed by Dalton do not involve race.

Harvard professor Harvey Mansfield organized a panel to commemorate the 20th anniversary of The Bell Curve back in 2014. You can find the video here.

1. How is it that the "core propositions" of The Bell Curve can be discussed in a paper published in Sociological Science and at an advanced seminar at Princeton, but Charles Murray is not allowed to speak at Middlebury College?

2. There must be many social scientists or academics in the humanities (or undergraduates at Middlebury) with strong opinions about The Bell Curve, despite never having read it, and despite having a completely erroneous understanding of what the book is about. If you are one of these people, shouldn't you feel embarrassed or ashamed?




The Bell Curve Revisited: Testing Controversial Hypotheses with Molecular Genetic Data

Dalton Conley, Benjamin Domingue

Sociological Science, July 5, 2016
DOI 10.15195/v3.a23

In 1994, the publication of Herrnstein’s and Murray’s The Bell Curve resulted in a social science maelstrom of responses. In the present study, we argue that Herrnstein’s and Murray’s assertions were made prematurely, on their own terms, given the lack of data available to test the role of genotype in the dynamics of achievement and attainment in U.S. society. Today, however, the scientific community has access to at least one dataset that is nationally representative and has genome-wide molecular markers. We deploy those data from the Health and Retirement Study in order to test the core series of propositions offered by Herrnstein and Murray in 1994. First, we ask whether the effect of genotype is increasing in predictive power across birth cohorts in the middle twentieth century. Second, we ask whether assortative mating on relevant genotypes is increasing across the same time period. Finally, we ask whether educational genotypes are increasingly predictive of fertility (number ever born [NEB]) in tandem with the rising (negative) association of educational outcomes and NEB. The answers to these questions are mostly no; while molecular genetic markers can predict educational attainment, we find little evidence for the proposition that we are becoming increasingly genetically stratified.
While I find the work described above to be commendable (i.e., it foreshadows how molecular genetic methods will eventually address even the most complex and controversial topics in social science), I don't feel that the conclusions reached are beyond question. For example, see this worthwhile comment at the journal web page:
This is a fascinating study! My comments below pertain to Proposition #1, that “The effect of genetic endowment is increasing over time with the rise of a meritocratic society”.

The data reported here do not seem to unequivocally contravene H&M’s hypothesis. The authors focus on the interaction terms, PGS x Birth Year (i.e. cohort), and show that interaction coefficient is slightly negative (b=-0.006, p=0.47), indicating a weakening of the association between genetic endowment and educational attainment, broadly conceived. The finding is that PGSs (polygenic scores) are (slightly) less predictive of educational attainment in later cohorts.

This isn’t that surprising, given educational inflation – over time, higher percentages of the population achieve any given level of educational attainment. In addition, as shown in Table 3 and mentioned in the Discussion section, this decline in importance of genetic endowment is restricted only to the ‘lower half of the educational distribution’. In contrast, genetic endowment (measured by PGSs) has become even more important across cohorts in predicting the ‘transition from a completed college degree to graduate education’ (534). Isn’t this what we’d expect to happen as the level of educational attainment at the lower half of the distribution becomes increasingly decoupled from cognitive ability?

H&M argued that cognitive ability is becoming more important in determining one’s life chances. The authors of this paper don’t actually test this hypothesis. They instead create polygenic scores *of educational attainment* (!) rather than cognitive ability – based on the GWAS of Rietveld et al. (2013) – and find that genetic predictors of *educational attainment* become (slightly) less predictive of educational attainment, on average, i.e. for high school and college. But again, they also report that the association of this genetic correlate (of educational attainment) and educational attainment has actually become stronger for transitions into graduate and professional schools from college.

If I’m not mistaken, the association between cognitive ability (as measured say by standardized tests, SAT, ACT, GRE, AFQT and NEA reports on reading and math ability) and educational attainment has weakened over time. It is possible that cognitive ability is becoming increasingly salient in determining SES as H&M maintain, and at the same time, educational attainment is becoming less salient, simply because the relationship between cognitive ability and educational attainment is becoming weaker. In other words, educational attainment, at least at the lower levels, is less salient in determining relative status. ...
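To make the statistical object under debate concrete, here is a minimal sketch of a PGS-by-birth-cohort interaction test on simulated data (not the Health and Retirement Study, and with made-up effect sizes). The coefficient on the interaction term is what the paper and the comment above are arguing over:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
pgs = rng.normal(0, 1, n)                        # standardized polygenic score
cohort = rng.integers(1920, 1960, n)             # birth year
cohort_c = (cohort - cohort.mean()) / 10         # centered, per-decade units
# Simulate a constant PGS effect, i.e., a true interaction of zero
edu = 12 + 0.8 * pgs + 0.5 * cohort_c + rng.normal(0, 2, n)

df = pd.DataFrame({"edu": edu, "pgs": pgs, "cohort_c": cohort_c})
fit = smf.ols("edu ~ pgs * cohort_c", data=df).fit()
print(fit.params["pgs:cohort_c"], fit.pvalues["pgs:cohort_c"])
```

A positive and significant interaction would indicate the PGS becoming more predictive in later cohorts; the paper reports a small, non-significant negative estimate overall, with the sign flipping for the transition into graduate education.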
Regarding fertility and dysgenic trends, see this more recent paper from the deCODE collaboration in Iceland, which reaches much stronger conclusions in agreement with H&M.

See also Conley's new book The Genome Factor: What the Social Genomics Revolution Reveals about Ourselves, Our History, and the Future.



Conley, Steve Pinker, and I were on a 92nd Street Y panel together in 2016.

Wednesday, March 08, 2017

"We need to encourage real diversity of thought in the professoriate"


John Etchemendy is a former Provost of Stanford University.
The Threat From Within

... Over the years, I have watched a growing intolerance at universities in this country – not intolerance along racial or ethnic or gender lines – there, we have made laudable progress. Rather, a kind of intellectual intolerance, a political one-sidedness, that is the antithesis of what universities should stand for. It manifests itself in many ways: in the intellectual monocultures that have taken over certain disciplines; in the demands to disinvite speakers and outlaw groups whose views we find offensive; in constant calls for the university itself to take political stands. We decry certain news outlets as echo chambers, while we fail to notice the echo chamber we’ve built around ourselves.

This results in a kind of intellectual blindness that will, in the long run, be more damaging to universities than cuts in federal funding or ill-conceived constraints on immigration. It will be more damaging because we won’t even see it: We will write off those with opposing views as evil or ignorant or stupid, rather than as interlocutors worthy of consideration. We succumb to the all-purpose ad hominem because it is easier and more comforting than rational argument. But when we do, we abandon what is great about this institution we serve.

It will not be easy to resist this current. As an institution, we are continually pressed by faculty and students to take political stands, and any failure to do so is perceived as a lack of courage. But at universities today, the easiest thing to do is to succumb to that pressure. What requires real courage is to resist it. Yet when those making the demands can only imagine ignorance and stupidity on the other side, any resistance will be similarly impugned.

The university is not a megaphone to amplify this or that political view, and when it does it violates a core mission. Universities must remain open forums for contentious debate, and they cannot do so while officially espousing one side of that debate.

But we must do more. We need to encourage real diversity of thought in the professoriate, and that will be even harder to achieve. It is hard for anyone to acknowledge high-quality work when that work is at odds, perhaps opposed, to one’s own deeply held beliefs. But we all need worthy opponents to challenge us in our search for truth. It is absolutely essential to the quality of our enterprise.

I fear that the next few years will be difficult to navigate. We need to resist the external threats to our mission, but in this, we have many friends outside the university willing and able to help. But to stem or dial back our academic parochialism, we are pretty much on our own. The first step is to remind our students and colleagues that those who hold views contrary to one’s own are rarely evil or stupid, and may know or understand things that we do not. It is only when we start with this assumption that rational discourse can begin, and that the winds of freedom can blow.
See also
Why Universities Must Choose One Telos: Truth or Social Justice

by Jonathan Haidt | Oct 21, 2016

Aristotle often evaluated a thing with respect to its “telos” – its purpose, end, or goal. The telos of a knife is to cut. The telos of a physician is health or healing. What is the telos of university?

The most obvious answer is “truth” -- the word appears on so many university crests. But increasingly, many of America’s top universities are embracing social justice as their telos, or as a second and equal telos. But can any institution or profession have two teloses (or teloi)? What happens if they conflict?

As a social psychologist who studies morality, I have watched these two teloses come into conflict increasingly often during my 30 years in the academy. The conflicts seemed manageable in the 1990s. But the intensity of conflict has grown since then, at the same time as the political diversity of the professoriate was plummeting, and at the same time as American cross-partisan hostility was rising. ...

Examples of Perverse Incentives and Replication in Science

In an earlier post (Perverse Incentives and Replication in Science), I wrote:
Here's a depressing but all too common pattern in scientific research:

1. Study reports results which reinforce the dominant, politically correct, narrative.
2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.
3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
This seems to have hit a nerve, as many people have come forward with their own examples of this pattern.

From a colleague at MSU:

Parts 1 and 2: Green revolution in Malawi from Farm Input Subsidy Program? Hurrah! Gushing coverage in the NYTimes, Jeffrey Sachs claiming credit, etc.

http://www.nytimes.com/2012/04/20/opinion/how-malawi-fed-its-own-people.html
http://www.nytimes.com/2007/12/02/world/africa/02malawi.html
http://www.nytimes.com/slideshow/2007/12/01/world/20071202MALAWI_index.html


Part 3: Failed replication? No actual green revolution? Will anyone notice?
Re-evaluating the Malawian Farm Input Subsidy Programme (Nature Plants)

Joseph P. Messina, Brad G. Peter and Sieglinde S. Snapp

Abstract: The Malawian Farm Input Subsidy Programme (FISP) has received praise as a proactive policy that has transformed the nation’s food security, yet irreconcilable differences exist between maize production estimates distributed by the Food and Agriculture Organization of the United Nations (FAO), the Malawi Ministry of Agriculture and Food Security (MoAFS) and the National Statistical Office (NSO) of Malawi. These differences illuminate yield-reporting deficiencies and the value that alternative, politically unbiased yield estimates could play in understanding policy impacts. We use net photosynthesis (PsnNet) as an objective source of evidence to evaluate production history and production potential under a fertilizer input scenario. Even with the most generous harvest index (HI) and area manipulation to match a reported error, we are unable to replicate post-FISP production gains. In addition, we show that the spatial delivery of FISP may have contributed to popular perception of widespread maize improvement. These triangulated lines of evidence suggest that FISP may not have been the success it was thought to be. Lastly, we assert that fertilizer subsidies may not be sufficient or sustainable strategies for production gains in Malawi.

Introduction: Input subsidies targeting agricultural production are frequent and contentious development strategies. The national scale FISP implemented in Malawi has been heralded as an ‘African green revolution’ success story [1]. The programme was developed by the Malawian government in response to long-term recurring food shortages, following the notably poor maize harvest of 2005; the history of FISP is well described by Chirwa and Dorward [2]. Scholars and press sources alike commonly refer to government statistics regarding production and yields as having improved significantly. Reaching widespread audiences, Sachs broadcasted that “production doubled within one harvest season” following its deployment [3]. The influential policy paper by Denning et al. opened with the statement that the “Government of Malawi implemented one of the most ambitious and successful assaults on hunger in the history of the African continent” [4]. The Malawi success narrative has certainly influenced global development agencies, resulting in increased support for agricultural input subsidies; Tanzania, Zambia, Kenya and Rwanda have all followed suit and implemented some form of input subsidy programme. There has been mild economic criticism of the subsidy implementation process, including disruption of private fertilizer distribution networks within the policy’s first year [5]. Moreover, the sustainability of subsidies in Malawi has been debated [6,7], yet crop productivity gains from subsidies have gone largely unquestioned. As Sanchez commented, “in spite of criticisms by donor agencies and academics, the seed and fertilizer subsidies provided food security to millions of Malawians” [1]. This optimistic assessment of potential for an “African green revolution” must be tempered by the fact that the Malawian production miracle appears, in part, to be a myth. ...
For more on the 1-2-3 pattern and replication, see this blog post by economist Douglas Campbell, and discussion here.

For another depressing narrative concerning the reliability of published results (this time in cancer research), see this front-page NYTimes story from today.

Monday, March 06, 2017

The Eyes of Texas


Sorry for the blogging interruption. I'm at the annual AAU (Association of American Universities) meeting of Senior Research Officers in Austin, Texas.

UT Austin has a beautiful clock tower just up the street from our hotel. As pretty as it is, I couldn't help but think about the 1966 tower sniper (45 casualties in 96 minutes) while walking around the main quad. It's a heartbreaking story.
The Eyes of Texas are upon you,
All the live long day.
The Eyes of Texas are upon you,
You can not get away.
Do not think you can escape them
At night or early in the morn
The Eyes of Texas are upon you
'Till Gabriel blows his horn.



Sunday, February 26, 2017

Perverse Incentives and Replication in Science

Here's a depressing but all too common pattern in scientific research:
1. Study reports results which reinforce the dominant, politically correct, narrative.

2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.

3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
For numerous examples, see, e.g., any of Malcolm Gladwell's books :-(

A recent example: the idea that collective intelligence of groups (i.e., ability to solve problems and accomplish assigned tasks) is not primarily dependent on the cognitive ability of individuals in the group.

It seems plausible to me that by adopting certain best practices for collaboration one can improve group performance, and that diversity of knowledge base and personal experience could also enhance performance on certain tasks. But recent results in this direction were probably oversold, and seem to have failed to replicate.

James Thompson has given a good summary of the situation.

Parts 1 and 2 of our story:
MIT Center for Collective Intelligence: ... group-IQ, or “collective intelligence” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Is it true? The original paper on this topic, from 2010, has been cited 700+ times. See here for some coverage on this blog when it originally appeared.

Below is the (only independent?) attempt at replication, with strongly negative results. The first author is a regular (and very insightful) commenter here -- I hope he'll add his perspective to the discussion. Have we reached part 3 of the story?
Smart groups of smart people: Evidence for IQ as the origin of collective intelligence in the performance of human groups

Timothy C. Bates, Shivani Gupta
Department of Psychology, University of Edinburgh
Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh

What allows groups to behave intelligently? One suggestion is that groups exhibit a collective intelligence accounted for by number of women in the group, turn-taking and emotional empathizing, with group-IQ being only weakly-linked to individual IQ (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). Here we report tests of this model across three studies with 312 people. Contrary to prediction, individual IQ accounted for around 80% of group-IQ differences. Hypotheses that group-IQ increases with number of women in the group and with turn-taking were not supported. Reading the mind in the eyes (RME) performance was associated with individual IQ, and, in one study, with group-IQ factor scores. However, a well-fitting structural model combining data from studies 2 and 3 indicated that RME exerted no influence on the group-IQ latent factor (instead having a modest impact on a single group test). The experiments instead showed that higher individual IQ enhances group performance such that individual IQ determined 100% of latent group-IQ. Implications for future work on group-based achievement are examined.


From the paper:
Given the ubiquitous importance of group activities (Simon, 1997) these results have wide implications. Rather than hiring individuals with high cognitive skill who command higher salaries (Ritchie & Bates, 2013), organizations might select-for or teach social sensitivity thus raising collective intelligence, or even operate a female gender bias with the expectation of substantial performance gains. While the study has over 700 citations and was widely reported to the public (Woolley, Malone, & Chabris, 2015), to our knowledge only one replication has been reported (Engel, Woolley, Jing, Chabris, & Malone, 2014). This study used online (rather than in-person) tasks and did not include individual IQ. We therefore conducted three replication studies, reported below.

... Rather than a small link of individual IQ to group-IQ, we found that the overlap of these two traits was indistinguishable from 100%. Smart groups are (simply) groups of smart people. ... Across the three studies we saw no significant support for the hypothesized effects of women raising (or men lowering) group-IQ: All male, all female and mixed-sex groups performed equally well. Nor did we see any relationship of some members speaking more than others on either higher or lower group-IQ. These findings were weak in the initial reports, failing to survive incorporation of covariates. We attribute these to false positives. ... The present findings cast important doubt on any policy-style conclusions regarding gender composition changes cast as raising cognitive-efficiency. ...

In conclusion, across three studies groups exhibited a robust cognitive g-factor across diverse tasks. As in individuals, this g-factor accounted for approximately 50% of variance in cognition (Spearman, 1904). In structural tests, this group-IQ factor was indistinguishable from average individual IQ, and social sensitivity exerted no effects via latent group-IQ. Considering the present findings, work directed at developing group-IQ tests to predict team effectiveness would be redundant given the extremely high utility, reliability, validity for this task shown by individual IQ tests. Work seeking to raise group-IQ, like research to raise individual IQ might find this task achievable at a task-specific level (Ritchie et al., 2013; Ritchie, Bates, & Plomin, 2015), but less amenable to general change than some have anticipated. Our attempt to manipulate scores suggested that such interventions may even decrease group performance. Instead, work understanding the developmental conditions which maximize expression of individual IQ (Bates et al., 2013) as well as on personality and cultural traits supporting cooperation and cumulation in groups should remain a priority if we are to understand and develop cognitive ability. The present experiments thus provide new evidence for a central, positive role of individual IQ in enhanced group-IQ.
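A toy simulation (mine, not the authors' analysis) shows why a strong link between average member IQ and group performance is the natural default expectation once group tasks draw on members' cognitive ability:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 300, 4
iq = rng.normal(100, 15, (n_groups, group_size))        # members' individual IQs
mean_iq = iq.mean(axis=1)
group_score = mean_iq + rng.normal(0, 4, n_groups)      # group performance = mean IQ + task noise

r = np.corrcoef(mean_iq, group_score)[0, 1]
print(r**2)   # fraction of group-score variance explained by average IQ (~0.7-0.8 here)
```

With assumed noise of this size, average IQ explains most of the group-score variance, in the same ballpark as the ~80% figure Bates and Gupta report; the original 2010 claim was that this link is weak.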
Meta-Observation: Given the 1-2-3 pattern described above, one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously.

Most researchers I know in the relevant areas have not yet grasped that there is a serious problem. They might admit that "some studies fail to replicate" but don't realize the fraction might be in the 50 percent range!

More on the replication crisis in certain fields of science.

Thursday, February 23, 2017

A Professor meets the Alt-Right

Thomas Main, Professor in the School of Public Affairs at Baruch College, is working on a book about the Alt-Right, to be published by Brookings. Below you can listen to a conversation between Main and prominent Alt-Right figure Mike Enoch (pseudonym).

It's an interesting encounter between academic political theory and a new political movement that (so far) exists mostly on the internet. Both Main and Enoch take the other seriously in the discussion, leading to a clear expression of Alt-Right views on race, immigration, identity politics, and the idea of America.

See also Bannon, the Alt-Right, and the National Socialist Vision, and Identity Politics is a Dead End: Live by the Sword, Die by the Sword.


Monday, February 20, 2017

The Future of Thought, via Thought Vectors


In my opinion this is one of the most promising directions in AI. I expect significant progress in the next 5-10 years. Note that the whole problem of parsing languages like English has been subsumed in the training of the neural encoders/decoders used, e.g., in the translation problem (i.e., training on pairs of translated sentences, with an abstract thought vector as the intermediate state). See Toward a Geometry of Thought:
... the space of concepts (primitives) used in human language (or equivalently, in human thought) ...  has only ~1000 dimensions, and has some qualities similar to an actual vector space. Indeed, one can speak of some primitives being closer or further from others, leading to a notion of distance, and one can also rescale a vector to increase or decrease the intensity of meaning.

... we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
Geoff Hinton (from a 2015 talk at the Royal Society in London):
The implications of this for document processing are very important. If we convert a sentence into a vector that captures the meaning of the sentence, then Google can do much better searches; they can search based on what's being said in a document.

Also, if you can convert each sentence in a document into a vector, then you can take that sequence of vectors and [try to model] natural reasoning. And that was something that old fashioned AI could never do.

If we can read every English document on the web, and turn each sentence into a thought vector, you've got plenty of data for training a system that can reason like people do.

Now, you might not want it to reason like people do, but at least we can see what they would think.

What I think is going to happen over the next few years is this ability to turn sentences into thought vectors is going to rapidly change the level at which we can understand documents.

To understand it at a human level, we're probably going to need human level resources and we have trillions of connections [in our brains], but the biggest networks we have built so far only have billions of connections. So we're a few orders of magnitude off, but I'm sure the hardware people will fix that.
This is a good discussion (source of the image at top and the text excerpted below), illustrating the concept of linearity in the contexts of human eigenfaces and thought vectors. See also here.
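To make the geometric picture concrete, here is a toy illustration (random vectors, no trained model) of the operations described above: a notion of distance between concept vectors, and rescaling a vector to change the intensity of meaning. Real thought vectors would of course come from a trained encoder rather than a random number generator.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
dim = 1000                                   # roughly the dimensionality quoted above
good = rng.normal(size=dim)                  # stand-in for the vector of one concept
great = good + 0.3 * rng.normal(size=dim)    # a "nearby" concept: small perturbation
car = rng.normal(size=dim)                   # an unrelated concept

print(cosine(good, great))   # close to 1: semantically near
print(cosine(good, car))     # close to 0: unrelated directions in high dimensions
print(np.linalg.norm(2.0 * good) / np.linalg.norm(good))  # rescaled "intensity": 2.0
```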



You can audit this Stanford class! CS224n: Natural Language Processing with Deep Learning.

More references.

Thursday, February 16, 2017

Management by the Unusually Competent



How did we get ICBMs? How did we get to the moon? What are systems engineering and systems management? Why do some large organizations make rapid progress, while others spin their wheels for decades at a time? Dominic Cummings addresses these questions in his latest essay.

Photo above of Schriever and Ramo. More Dom.
... In 1953, a relatively lowly US military officer Bernie Schriever heard von Neumann sketch how by 1960 the United States would be able to build a hydrogen bomb weighing less than a ton and exploding with the force of a megaton, about 80 times more powerful than Hiroshima. Schriever made an appointment to see von Neumann at the IAS in Princeton on 8 May 1953. As he waited in reception, he saw Einstein potter past. He talked for hours with von Neumann who convinced him that the hydrogen bomb would be progressively shrunk until it could fit on a missile. Schriever told Gardner about the discussion and 12 days later Gardner went to Princeton and had the same conversation with von Neumann. Gardner fixed the bureaucracy and created the Strategic Missiles Evaluation Committee. He persuaded von Neumann to chair it and it became known as ‘the Teapot committee’ or ‘the von Neumann committee’. The newly formed Ramo-Wooldridge company, which became Thompson-Ramo-Wooldridge (I’ll refer to it as TRW), was hired as the secretariat.

The Committee concluded (February 1954) that it would be possible to produce intercontinental ballistic missiles (ICBMs) by 1960 and deploy enough to deter the Soviets by 1962, that there should be a major crash programme to develop them, and that there was an urgent need for a new type of agency with a different management approach to control the project. Although intelligence was thin and patchy, von Neumann confidently predicted on technical and political grounds that the Soviet Union would engage in the same race. It was discovered years later that the race had already been underway partly driven by successful KGB operations. Von Neumann’s work on computer-aided air defence systems also meant he was aware of the possibilities for the Soviets to build effective defences against US bombers.

‘The nature of the task for this new agency requires that over-all technical direction be in the hands of an unusually competent group of scientists and engineers capable of making systems analyses, supervising the research phases, and completely controlling experimental and hardware phases of the program… It is clear that the operation of this new group must be relieved of excessive detailed regulation by existing government agencies.’ (vN Committee, emphasis added.)

A new committee, the ICBM Scientific Advisory Committee, was created and chaired by von Neumann so that eminent scientists could remain involved. One of the driving military characters, General Schriever, realised that people like von Neumann were an extremely unusual asset. He said later that ‘I became really a disciple of the scientists… I felt strongly that the scientists had a broader view and had more capabilities.’ Schriever moved to California and started setting up the new operation but had to deal with huge amounts of internal politics as the bureaucracy naturally resisted new ideas. The Defense Secretary, Wilson, himself opposed making ICBMs a crash priority.

... Almost everybody hated the arrangement. Even the Secretary of the Air Force (Talbott) tried to overrule Schriever and Ramo. It displaced the normal ‘prime contractor’ system in which one company, often an established airplane manufacturer, would direct the whole programme. Established businesses were naturally hostile. Traditional airplane manufacturers were run very much on Taylor’s principles with rigid routines. TRW employed top engineers who would not be organised on Taylor’s principles. Ramo, also a virtuoso violinist, had learned at Caltech the value of a firm grounding in physics and an interdisciplinary approach in engineering. He and his partner Wooldridge had developed their ideas on systems engineering before starting their own company. The approach was vindicated quickly when TRW showed how to make the proposed Atlas missile much smaller and simpler, therefore cheaper and faster to develop.

... According to Johnson, almost all the proponents of systems engineering had connections with either Caltech (where von Karman taught and JPL was born) or MIT (which was involved with the Radiation Lab and other military projects during World War 2). Bell Labs, which did R&D for AT&T, was also a very influential centre of thinking. The Jet Propulsion Laboratory (JPL) managed by Caltech also, under the pressure of repeated failure, independently developed systems management and configuration control. They became technical leaders in space vehicles. NASA, however, did not initially learn from JPL.

... Philip Morse, an MIT physicist who headed the Pentagon’s Weapons Systems Evaluation Group after the war, reflected on this resistance:
‘Administrators in general, even the high brass, have resigned themselves to letting the physical scientist putter around with odd ideas and carry out impractical experiments, as long as things experimented with are solutions or alloys or neutrons or cosmic rays. But when one or more start prying into the workings of his own smoothly running organization, asking him and others embarrassing questions not related to the problems he wants them to solve, then there’s hell to pay.’ (Morse, ‘Operations Research, What is It?’, Proceedings of the First Seminar in Operations Research, November 8–10, 1951.)



The Secret of Apollo: Systems Management in American and European Space Programs, Stephen B. Johnson.

Saturday, February 11, 2017

On the military balance of power in the Western Pacific

Some observations concerning the military balance of power in Asia. Even "experts" I have spoken to over the years seem to be confused about basic realities that are fundamental to strategic considerations.

1. Modern missile and targeting technology makes the survivability of surface ships (especially carriers) questionable. Satellites can easily image surface ships, and missiles can hit them from over a thousand miles away. Submarines are a much better investment, and carriers may be a terrible waste of money, analogous to battleships in the WWII era. (Generals and Admirals typically prepare to fight the previous war, despite the advance of technology, often with disastrous consequences.)

2. US forward bases and surface deployments are hostages to advanced missile capability and would not survive the first days of a serious conventional conflict. This has been widely discussed, at least in some planning circles, since the 1990s. See second figure below and link.

3. PRC could easily block oil shipments to Taiwan or even Japan using Anti-Ship Ballistic Missiles (ASBM) or Anti-Ship Cruise Missiles (ASCM). This is a far preferable strategy to an amphibious assault on Taiwan in response to, e.g., a declaration of independence. A simple threat against oil tankers, or perhaps the demonstration sinking of a single tanker, would be enough to cut off supplies. Responding to this threat would require attacking mobile DF21D missile launchers on the Chinese mainland -- a highly escalatory step, possibly leading to a nuclear response.

4. The strategic importance of the South China Sea and the artificial islands constructed there lies primarily in their effect on the ability of the US to cut off the flow of oil to the PRC. The islands may enable the PRC to gain dominance in the region and make US submarine operations much more difficult. US reaction to these assets is not driven by "international law" or fishing or oil rights, or even the desire to keep shipping lanes open. What is at stake is the US capability to cut off oil flow, a non-nuclear but highly threatening card it has (until now?) had at its disposal to play against China.

The map below shows the consequences of full deployments of SAM, ASCM, and ASBM weaponry on the artificial islands. Consequences extend to the Malacca Strait (through which 80% of China's oil passes) and US basing in Singapore. Both linked articles are worth reading.

CHINA’S ARTIFICIAL ISLANDS ARE BIGGER (AND A BIGGER DEAL) THAN YOU THINK

Beijing's Go Big or Go Home Moment in the South China Sea



HAS CHINA BEEN PRACTICING PREEMPTIVE MISSILE STRIKES AGAINST U.S. BASES? (Lots of satellite photos at this link, revealing extensive ballistic missile tests against realistic targets.)



Terminal targeting of a moving aircraft carrier by an ASBM like the DF21D


Simple estimates: a 10 minute flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. A missile course correction at a distance of ~10 km from the target allows ~10 s of maneuver (assuming Mach 5-10 velocity) and requires only a modest angular correction. At this distance a 100 m sized target has an angular size of ~0.01 rad, so it should be readily detectable in an optical image. (Carriers are visible to the naked eye from space!) Final targeting at a distance of ~1 km can use a combination of optical / IR / radar sensors that makes countermeasures difficult.
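These back-of-envelope numbers are easy to check; the snippet below simply redoes the arithmetic using the assumptions stated above (no real system parameters).

```python
# All inputs below are the assumptions stated in the text, not real system data.
flight_time_s = 10 * 60                 # ~10 minute missile flight
carrier_speed = 25 * 0.44704            # 20-30 mph; take ~25 mph, converted to m/s
print(f"position uncertainty ~ {carrier_speed * flight_time_s / 1000:.0f} km")  # ~7 km, i.e. order 10 km

missile_speed = 7 * 340.0               # Mach 5-10; take ~Mach 7 (rough sea-level sound speed)
correction_dist = 10_000.0              # final course correction begins ~10 km out
print(f"time for terminal maneuver ~ {correction_dist / missile_speed:.0f} s")  # a few seconds

target_size = 100.0                     # ~100 m scale target
print(f"angular size at 10 km ~ {target_size / correction_dist:.3f} rad")       # ~0.01 rad
```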

So hitting a moving aircraft carrier does not seem especially challenging with modern technology. The Chinese can easily test their terminal targeting technology by trying to hit, say, a very large moving truck at their ballistic missile impact range, shown above.

I do not see any effective countermeasures, and despite inflated claims concerning anti-missile defense capabilities, it is extremely difficult to stop an incoming ballistic missile with maneuver capability.


More analysis and links to strategic reports from RAND and elsewhere in this earlier post The Pivot and American Statecraft in Asia.
... These questions of military/technological capability stand prior to the prattle of diplomats, policy analysts, or political scientists. Perhaps just as crucial is whether top US and Chinese leadership share the same beliefs on these issues.

... It's hard to war game a US-China Pacific conflict, even a conventional one. How long before the US surface fleet is destroyed by ASBM/ASCM? How long until forward bases are knocked out? How long until the US has to strike targets on the mainland? How long do satellites survive? How long before the conflict goes nuclear? I wonder whether anyone knows the answers to these questions with high confidence -- even very basic ones, like how well asymmetric threats such as ASBM/ASCM will perform under realistic conditions. These systems have never been tested in battle.

The stakes are so high that China can simply continue to establish "facts on the ground" (like building new island bases), with some confidence that the US will hesitate to escalate. If, for example, both sides secretly believe at the highest levels that ASBM/ASCM are very effective (Xi seems to be behaving as if he might), then sailing a carrier group through the South China Sea becomes an act of symbolism with meaning only to those who are not in the know.

Friday, February 10, 2017

Elon Musk: the BIG PROBLEMS worth working on




#1 AI
#2 Genomics

See also A Brief History of the Future, As Told To the Masters of the Universe.


Musk says he spends most of his time working on technical problems for Tesla and SpaceX, with half a day per week at OpenAI.

Thursday, February 09, 2017

Ratchets Within Ratchets



For those interested in political philosophy, or Trump's travel ban, I recommend this discussion on Scott Aaronson's blog, which features a commenter calling himself Boldmug (see also Bannon and Moldbug in the news recently ;-)

Both Scott and Boldmug seem to agree that scientific/technological progress is a positive ratchet caught within a negative ratchet of societal and political decay.
Boldmug Says:
Comment #181 January 27th, 2017 at 5:26 pm

Scott: An interesting term, “ratchet of progress.” Nature is full of ratchets. But ratchets of progress — extropic ratchets — are the exceptional case. Most ratchets are entropic ratchets, ratchets of decay.

You happen to live inside the ratchet of progress that is science and engineering. That ratchet produces beautiful wonders like seedless watermelons. It’s true that Talleyrand said, “no one who remembers the sweetness of life before the Revolution can even imagine it,” but even Louis XIV had to spit the seeds out of his watermelons.

This ratchet is 400 to 2400 years old, depending on how you count. The powers and ideologies that be are very good at taking credit for science and engineering, though it is much older than any of them. It is a powerful ratchet — not even the Soviet system could kill or corrupt science entirely, although it’s always the least political fields, like math and physics, that do the best.

But most ratchets are entropic ratchets of decay. The powers that be don’t teach you to see the ratchets of decay. You have to look for them with your own eyes.

The scientists and engineers who created the Antikythera mechanism lived inside a ratchet of progress. But that ratchet of progress lived inside a ratchet of decay, which is why we didn’t have an industrial revolution in 100BC. Instead we had war, tyranny, stagnation and (a few hundred years later) collapse.

Lucio Russo (https://en.wikipedia.org/wiki/Lucio_Russo) wrote an interesting, if perhaps a little overstated, book, on the Hellenistic (300-150BC, not to be confused with the Hellenic era proper) golden age of science. We really have no way of knowing how close to a scientific revolution the Alexandrians came. But it was political failure, not scientific failure, that destroyed their world. The ratchet of progress was inside a ratchet of decay. ...
It doesn't appear that Scott responded to this dig by Boldmug:
Boldmug Says:
Comment #153 January 27th, 2017 at 11:51 am

... Coincidentally, the latter is the side [THE LEFT] whose Jedi mind tricks are so strong, they almost persuaded someone with a 160 IQ to castrate himself.

And the Enlightenment? You mean the Enlightenment that guillotined Lavoisier? “The Republic has no need of savants.” Add 1789 and even 1641 to that list. Why would a savant pick Praisegod Barebones over Prince Rupert?

You might notice that in our dear modern world, whose quantum cryptography and seedless watermelons are so excellent, “the Republic has no need of savants” is out there still. Know anyone working on human genetics? ...
Don't believe in societal decay? Read this recent tour-de-force paper by deCODE researchers in Iceland, who have established beyond doubt the (long-term) dysgenic nature of modern society:
Selection against variants in the genome associated with educational attainment
Proceedings of the National Academy of Sciences of the United States of America (PNAS)

Epidemiological and genetic association studies show that genetics play an important role in the attainment of education. Here, we investigate the effect of this genetic component on the reproductive history of 109,120 Icelanders and the consequent impact on the gene pool over time. We show that an educational attainment polygenic score, POLYEDU, constructed from results of a recent study is associated with delayed reproduction (P < 10^(−100)) and fewer children overall. The effect is stronger for women and remains highly significant after adjusting for educational attainment. Based on 129,808 Icelanders born between 1910 and 1990, we find that the average POLYEDU has been declining at a rate of ∼0.010 standard units per decade, which is substantial on an evolutionary timescale. Most importantly, because POLYEDU only captures a fraction of the overall underlying genetic component, the latter could be declining at a rate that is two to three times faster.
Note: these "educational attainment" variants are mostly variants which influence cognitive ability.

From the Discussion section of the paper:
... The main message here is that the human race is genetically far from being stagnant with respect to one of its most important traits. It is remarkable to report changes in POLYEDU that are measurable across the several decades covered by this study. In evolutionary time, this is a blink of an eye. However, if this trend persists over many centuries, the impact could be profound.
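To make the rate concrete, here is a naive linear extrapolation (my illustration only; it assumes the published ~0.010 SD/decade rate, the paper's suggested 2-3x multiplier for the underlying genetic component, and that the trend simply continues unchanged).

```python
# Naive linear extrapolation of the deCODE estimate (illustration only).
measured_rate = 0.010   # decline of the POLYEDU score, SD units per decade
multiplier = 2.5        # paper: underlying genetic decline may be 2-3x larger

for years in (100, 300, 1000):
    decades = years / 10
    print(f"{years:4d} years: score -{measured_rate * decades:.2f} SD, "
          f"underlying component possibly -{measured_rate * multiplier * decades:.2f} SD")
```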

Monday, February 06, 2017

A Brief History of the Future, as told to the Masters of the Universe

This is a summary of remarks made at two not-Davos meetings, one in NYC and the other in LA. Most attendees were allocators of significant capital.

See also these two articles in Nautilus Magazine: Super-intelligent Humans Are Coming and Don't Worry, Smart Machines Will Take Us With Them.

Most of these topics have been covered in more detail in recent blog posts -- see relevant labels at bottom.

An Inflection Point in Human History, from Recent Technological Developments

Genomics and Machine Learning:

Inexpensive genotyping has produced larger and larger datasets of human genomes + phenotypes, approaching sample sizes of a million individuals. Machine learning applied to this data has led to the ability to predict complex human traits (e.g., height, intelligence) as well as disease risk (e.g., type 1 diabetes, cancer, etc.). Among the applications of these advances is the ability to select embryos in IVF to avoid negative outcomes, and even to produce highly superior outcomes.
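As a hedged sketch of what "machine learning applied to this data" can look like, here is sparse (L1-penalized) regression of a simulated trait on simulated SNP genotypes using standard scikit-learn calls; the data and parameters are invented for illustration, not drawn from any of the studies referenced.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated stand-in data: N people, P SNPs coded 0/1/2, and a trait driven by
# a small number of causal variants plus noise. All numbers are invented.
rng = np.random.default_rng(42)
N, P, n_causal = 2000, 5000, 100
genotypes = rng.integers(0, 3, size=(N, P)).astype(float)
beta_true = np.zeros(P)
beta_true[rng.choice(P, n_causal, replace=False)] = rng.normal(0, 0.1, n_causal)
trait = genotypes @ beta_true + rng.normal(0, 1.0, N)

# Sparse (L1-penalized) regression learns a polygenic predictor; in practice
# the penalty would be tuned by cross-validation (e.g. LassoCV).
model = Lasso(alpha=0.05, max_iter=10_000).fit(genotypes[:1600], trait[:1600])
polygenic_score = model.predict(genotypes[1600:])
print("validation correlation:",
      np.corrcoef(polygenic_score, trait[1600:])[0, 1])
```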

CRISPR -- a breakthrough technology for gene editing -- will find applications in medicine, agriculture, and eventually human reproduction (editing may eventually supplant selection).

The human species is poised, within the next generation, to take control of its own evolution. It is likely that affluent families will be the first to take advantage of these new capabilities, leading to even greater inequality in society.

Machine Learning and AI:

Routine tasks are being automated through machine intelligence, leading to pressure on low-skill human workers. Autonomous vehicles, probably no more than a decade away, will displace many jobs, such as those of truck and taxi drivers. The automobile industry is likely to experience massive creative destruction: the most valuable part of the car will be its brain (software, sensors, and cloud communication capability), not its drivetrain. The most likely winners in this race are not the major automakers.

AIs are already capable of outperforming even the best humans on many narrow tasks: e.g., Chess, Go, Texas Hold’em (Poker), facial recognition, voice recognition, etc. Many of these AIs are built using Deep Learning algorithms, which take advantage of neural net architectures. A neural net is an abstract network loosely modeled on the human brain; each node in the network has a different connection strength to other nodes. While a neural net can be trained to outperform humans (see the tasks listed above), the internal workings of the net tend to be mysterious even to its human designers. This is unlike the case of structured code written in familiar high-level programming languages. Neural net algorithms run better on specialized hardware, such as GPUs. Google has produced a special chipset, called the TPU, which now runs ~20% of all compute at its data centers. Google does not sell the TPU, and industry players and startups are racing to develop similar chips for neural net applications. (Nvidia is a leader in this new area.)
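As a concrete (and deliberately tiny) illustration of what "nodes with adjustable connection strengths" means, here is a two-layer neural net trained by gradient descent on XOR -- purely a sketch, unrelated to the production systems mentioned above.

```python
import numpy as np

# Tiny two-layer neural net learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # "connection strengths" between layers
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(20_000):
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    out = sigmoid(h @ W2 + b2)        # network output
    # Backpropagation: gradients of the squared error w.r.t. each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

print(out.round(3).ravel())           # -> approximately [0, 1, 1, 0]
```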

Neural nets used in language translation have mapped out an abstract ~1000-dimensional space which coincides with the space of “primitive concepts” used in human thought and language. It appears that rapid advances in the ability to read human-generated text (e.g., Wikipedia) with comprehension will follow in the coming decade. It seems possible that AGI -- Artificial General Intelligence (analogous to a human intelligence, with a theory of the world, general knowledge about objects in the universe, etc.) -- will emerge within our lifetimes.
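A toy illustration of the kind of linear structure meant here (the vectors below are made up for the example; real embedding spaces are learned from text and have hundreds to ~1000 dimensions):

```python
import numpy as np

# Hand-written 4-dimensional "word vectors", purely for illustration.
vectors = {
    "king":   np.array([0.9, 0.8, 0.1, 0.3]),
    "queen":  np.array([0.9, 0.1, 0.8, 0.3]),
    "man":    np.array([0.1, 0.9, 0.1, 0.2]),
    "woman":  np.array([0.1, 0.2, 0.9, 0.2]),
    "prince": np.array([0.8, 0.9, 0.1, 0.4]),
    "apple":  np.array([0.0, 0.1, 0.1, 0.9]),
}

def nearest(v, exclude=()):
    # Return the stored word whose vector has the highest cosine similarity to v.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude), key=lambda w: cos(vectors[w], v))

# The famous analogy is just vector arithmetic: king - man + woman ~ queen.
print(nearest(vectors["king"] - vectors["man"] + vectors["woman"],
              exclude=("king", "man", "woman")))   # -> "queen"
```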




Saturday, February 04, 2017

Baby Universes in the Laboratory




This was on the new books table at our local bookstore. I had almost forgotten about doing an interview and corresponding with the author some time ago. See also here and here.

The book is a well-written overview of some of the more theoretical aspects of inflationary cosmology, the big bang, the multiverse, etc. It also fleshes out some of the individual stories of the physicists involved in this research.
Kirkus Reviews: ... In her elegant and perceptive book, Merali ... unpacks the science behind what we know about our universe’s beginnings and traces the paths that many renowned researchers have taken to translate these insights to new heights: the creation of a brand-new “baby” universe, and not an empty one, either, but one with its own physics, matter, and (possibly) life. ... Among the most significant scientific advances in the last half-century is the discovery that our universe is inflating exponentially, a theory that led to many more breakthroughs in physics and cosmology. Yet the big question—how did the universe form, triggering inflation to begin with?—remains opaque. Merali, who works at the Foundational Questions Institute, which explores the boundaries of physics and cosmology, effortlessly explains the complex theories that form the bedrock of this concept, and she brings to life the investigators who have dedicated much of their careers in pursuit of fundamental truths. She also neatly incorporates discussions of philosophy and religion—after all, nothing less than grand design itself is at stake here—without any heavy-handedness or agenda. Over the course of several years, she traveled the world to interview firsthand the most important figures behind the idea of laboratory universe creation ... and the anecdotes she includes surrounding these conversations make her portrait even more compelling.



Here are two illustrations of how a baby universe pinches off from the universe in which it was created. This is all calculable within general relativity, modulo an issue with quantum smoothing of a singularity. The remnant of the baby universe appears to outside observers as a black hole. But inside one finds an exponentially growing region of spacetime.
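For readers who want the gist of "calculable within general relativity," here is a schematic of the standard thin-wall construction (my sketch of the textbook setup, not material from the book): an inflating de Sitter interior is glued to an ordinary Schwarzschild exterior across a thin domain wall, so the outside observer sees only a black hole of mass M while the interior inflates.

```latex
% Interior: false-vacuum (de Sitter) region with vacuum energy density \rho_{\rm vac}
ds^2_{\rm in} = -d\tau^2 + e^{2H\tau}\left(d\chi^2 + \chi^2\, d\Omega^2\right),
\qquad H^2 = \frac{8\pi G}{3}\,\rho_{\rm vac}

% Exterior: ordinary Schwarzschild spacetime of mass M
ds^2_{\rm out} = -\left(1 - \frac{2GM}{r}\right)dt^2
  + \left(1 - \frac{2GM}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2

% A thin wall of tension \sigma joins the two regions; the Israel junction
% condition (sign conventions vary) fixes the wall trajectory R(\tau):
K^{\rm in}_{ab} - K^{\rm out}_{ab} = 4\pi G\,\sigma\, h_{ab}
```

The classically problematic step is the initial singularity from which the bubble must emerge; that is the "quantum smoothing" caveat mentioned above.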





