Physicist, Startup Founder, Blogger, Dad

Tuesday, July 17, 2018

ICML notes

There has never been a better time to work on AI/ML. Vast resources are being deployed in this direction, by corporations and governments alike. In addition to the marvelous practical applications in development, a theoretical understanding of Deep Learning may emerge in the next few years.

The notes below are to keep track of some interesting things I encountered at the meeting.

Some ML learning resources:

Metacademy
Depth First study of AlphaGo


I heard a more polished version of this talk by Elad at the Theory of Deep Learning workshop. He is trying to connect results in sparse learning (e.g., performance guarantees for L1 or thresholding algorithms) to Deep Learning. (Video is from UCLA IPAM.)



It may turn out that the problems on which DL works well are precisely those in which the training data (and underlying generative processes) have a hierarchical structure which is sparse, level by level. Layered networks perform a kind of coarse graining (renormalization group flow): first layers filter by feature, subsequent layers by combinations of features, etc. But the whole thing can be understood as products of sparse filters, and the performance under training is described by sparse performance guarantees (ReLU = thresholded penalization?). Given the inherent locality of physics (atoms, molecules, cells, tissue; atoms, words, sentences, ...) it is not surprising that natural phenomena generate data with this kind of hierarchical structure.
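The "ReLU = thresholded penalization?" remark is easy to see numerically. A minimal sketch (my own illustration, not from Elad's talk): the proximal operator of the L1 penalty is soft-thresholding, while a ReLU with a bias acts as a one-sided cutoff of the same kind -- both zero out weak activations and keep strong ones.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrink toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def relu(x, bias=0.0):
    """ReLU with a bias: a one-sided threshold that zeroes out weak activations."""
    return np.maximum(x - bias, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print(soft_threshold(x, 0.5))  # [-1.5  0.   0.   0.3  2.5]
print(relu(x, 0.5))            # [0.   0.   0.   0.3  2.5]
```

Both operators produce sparse outputs, which is why sparse-recovery guarantees are a plausible route to understanding layered networks.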


Off-topic: At dinner with one of my former students and his colleague (both researchers at an AI lab in Germany), the subject of Finitism came up due to a throwaway remark about the Continuum Hypothesis.

Wikipedia
Horizons of Truth
Chaitin on Physics and Mathematics

David Deutsch:
The reason why we find it possible to construct, say, electronic calculators, and indeed why we can perform mental arithmetic, cannot be found in mathematics or logic. The reason is that the laws of physics "happen" to permit the existence of physical models for the operations of arithmetic such as addition, subtraction and multiplication.
My perspective: We experience the physical world directly, so the highest confidence belief we have is in its reality. Mathematics is an invention of our brains, and cannot help but be inspired by the objects we find in the physical world. Our idealizations (such as "infinity") may or may not be well-founded. In fact, mathematics with infinity included may be very sick, as evidenced by Gödel's results, or by paradoxes in set theory. There is no reason (as far as we know) that infinity is needed to do physics. It is entirely possible that there are only a (large but) finite number of degrees of freedom in the physical universe.

Paul Cohen:
I will ascribe to Skolem a view, not explicitly stated by him, that there is a reality to mathematics, but axioms cannot describe it. Indeed one goes further and says that there is no reason to think that any axiom system can adequately describe it.
This "it" (mathematics) that Cohen describes may be the set of idealizations constructed by our brains extrapolating from physical reality. But there is no guarantee that these idealizations have a strong kind of internal consistency and indeed they cannot be adequately described by any axiom system.

Monday, July 09, 2018

Game Over: Genomic Prediction of Social Mobility

The figure below shows SNP-based polygenic score versus life outcome (socioeconomic index, on the vertical axis) in four longitudinal cohorts: one from New Zealand (Dunedin) and three from the US. Each cohort (varying somewhat in size) has thousands of individuals, ~20k in total (all of European ancestry). The points displayed are averages over bins containing 10-50 individuals. For each cohort, the individuals have been grouped by childhood (family) socioeconomic status. Social mobility can be predicted from polygenic score. Note that higher-SES families tend to have higher polygenic scores on average -- which is what one might expect from a society that is at least somewhat meritocratic. The cohorts were not used in training -- this is true out-of-sample validation. Furthermore, the four cohorts represent different geographic regions (even different continents) and individuals born in different decades.

Everyone should stop for a moment and think carefully about the implications of the paragraph above and the figure below.


Caption from the PNAS paper.
Fig. 4. Education polygenic score associations with social attainment for Add Health Study, WLS, Dunedin Study, and HRS participants with low-, middle-, and high-socioeconomic status (SES) social origins. The figure plots polygenic score associations with socioeconomic attainment for Add Health Study (A), Dunedin Study (B), WLS (C), and HRS (D) participants who grew up in low-, middle-, and high-SES households. For the figure, low-, middle-, and high-SES households were defined as the bottom quartile, middle 50%, and top quartile of the social origins score distributions for the Add Health Study, WLS, and HRS. For the Dunedin Study, low SES was defined as a childhood NZSEI of two or lower (20% of the sample), middle SES was defined as childhood NZSEI of three to four (63% of the sample), and high SES was defined as childhood NZSEI of five or six (17% of the sample). Attainment is graphed in terms of socioeconomic index scores for the Add Health Study, Dunedin Study, and WLS and in terms of household wealth in the HRS. Add Health Study and WLS socioeconomic index scores were calculated from Hauser and Warren (34) occupational income and occupational education scores. Dunedin Study socioeconomic index scores were calculated similarly, according to the Statistics New Zealand NZSEI (38). HRS household wealth was measured from structured interviews about assets. All measures were z-transformed to have mean = 0, SD = 1 for analysis. The individual graphs show binned scatterplots in which each plotted point reflects average x and y coordinates for a bin of 50 participants for the Add Health Study, WLS, and HRS and for a bin of 10 participants for the Dunedin Study. The red regression lines are plotted from the raw data. The box-and-whisker plots at the bottom of the graphs show the distribution of the education polygenic score for each childhood SES category.
The blue diamond in the middle of the box shows the median; the box shows the interquartile range; and the whiskers show upper and lower bounds defined by the 25th percentile minus 1.5× the interquartile range and the 75th percentile plus 1.5× the interquartile range, respectively. The vertical line intersecting the x axis shows the cohort average polygenic score. The figure illustrates three findings observed consistently across cohorts: (i) participants who grew up in higher-SES households tended to have higher socioeconomic attainment independent of their genetics compared with peers who grew up in lower-SES households; (ii) participants’ polygenic scores were correlated with their social origins such that those who grew up in higher-SES households tended to have higher polygenic scores compared with peers who grew up in lower-SES households; (iii) participants with higher polygenic scores tended to achieve higher levels of attainment across strata of social origins, including those born into low-SES families.

The paper:
Genetic analysis of social-class mobility in five longitudinal studies, Belsky et al.

PNAS July 9, 2018. 201801238; published ahead of print July 9, 2018. https://doi.org/10.1073/pnas.1801238115

A summary genetic measure, called a “polygenic score,” derived from a genome-wide association study (GWAS) of education can modestly predict a person’s educational and economic success. This prediction could signal a biological mechanism: Education-linked genetics could encode characteristics that help people get ahead in life. Alternatively, prediction could reflect social history: People from well-off families might stay well-off for social reasons, and these families might also look alike genetically. A key test to distinguish biological mechanism from social history is if people with higher education polygenic scores tend to climb the social ladder beyond their parents’ position. Upward mobility would indicate education-linked genetics encodes characteristics that foster success. We tested if education-linked polygenic scores predicted social mobility in >20,000 individuals in five longitudinal studies in the United States, Britain, and New Zealand. Participants with higher polygenic scores achieved more education and career success and accumulated more wealth. However, they also tended to come from better-off families. In the key test, participants with higher polygenic scores tended to be upwardly mobile compared with their parents. Moreover, in sibling-difference analysis, the sibling with the higher polygenic score was more upwardly mobile. Thus, education GWAS discoveries are not mere correlates of privilege; they influence social mobility within a life. Additional analyses revealed that a mother’s polygenic score predicted her child’s attainment over and above the child’s own polygenic score, suggesting parents’ genetics can also affect their children’s attainment through environmental pathways. Education GWAS discoveries affect socioeconomic attainment through influence on individuals’ family-of-origin environments and their social mobility.

Note added from comments: The plots would look much noisier if not for the averaging of many individuals into a single point. Keep in mind that socioeconomic success depends on a lot more than just cognitive ability, or even cognitive ability + conscientiousness.

But the underlying predictor correlates at ~0.35 with actual educational attainment, IIRC. That is, the polygenic score predicts EA about as well as standardized tests predict success in schooling.

This means you can at least use it to identify outliers: a very high or low test score (SAT, ACT, GRE) does not *guarantee* success or failure in school, but the signal is nevertheless useful for selection = admissions.
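The arithmetic behind this point can be made explicit with a toy simulation (my own illustration, not the study's data): a correlation of 0.35 corresponds to only ~12% of variance explained, yet selecting on the tails of such a predictor still shifts the mean outcome by the better part of a standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.35, 100_000
score = rng.standard_normal(n)
# Construct an outcome so that corr(score, outcome) ~ r
outcome = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

print(f"variance explained: {r**2:.0%}")  # 12%

# Mean outcome (in SD units) for the top and bottom 1% of the predictor
top = outcome[score > np.quantile(score, 0.99)].mean()
bottom = outcome[score < np.quantile(score, 0.01)].mean()
print(f"mean outcome, top 1% of score:    {top:+.2f} SD")
print(f"mean outcome, bottom 1% of score: {bottom:+.2f} SD")
```

A modest correlation is weak for individual prediction but quite informative for identifying tails of the distribution -- the same logic that justifies using test scores in admissions.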

Friday, July 06, 2018

Seven Years, Two Tweets

Is anyone keeping score?

See On the Genetic Architecture of Cognitive Ability (2014) and Nautilus Magazine: Super Intelligent Humans.



Thursday, July 05, 2018

Cognitive ability predicted from fMRI (Caltech Neuroscience)

Caltech researchers used an elastic net (L1 and L2 penalization) to train a predictor using cognitive scores and fMRI data from ~900 individuals. The predictor captures about 20% of the variance in intelligence; the score correlates at roughly 0.45 with measured intelligence. This may validate earlier work by Korean researchers in 2015, although the Korean group claimed much higher predictive correlations.

Press release:
In a new study, researchers from Caltech, Cedars-Sinai Medical Center, and the University of Salerno show that their new computing tool can predict a person's intelligence from functional magnetic resonance imaging (fMRI) scans of their resting state brain activity. Functional MRI develops a map of brain activity by detecting changes in blood flow to specific brain regions. In other words, an individual's intelligence can be gleaned from patterns of activity in their brain when they're not doing or thinking anything in particular—no math problems, no vocabulary quizzes, no puzzles.

"We found if we just have people lie in the scanner and do nothing while we measure the pattern of activity in their brain, we can use the data to predict their intelligence," says Ralph Adolphs (PhD '92), Bren Professor of Psychology, Neuroscience, and Biology, and director and Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Caltech Brain Imaging Center.

To train their algorithm on the complex patterns of activity in the human brain, Adolphs and his team used data collected by the Human Connectome Project (HCP), a scientific endeavor funded by the National Institutes of Health (NIH) that seeks to improve understanding of the many connections in the human brain. Adolphs and his colleagues downloaded the brain scans and intelligence scores from almost 900 individuals who had participated in the HCP, fed these into their algorithm, and set it to work.

After processing the data, the team's algorithm was able to predict intelligence at statistically significant levels across these 900 subjects, says Julien Dubois (PhD '13), a postdoctoral fellow at Cedars-Sinai Medical Center. But there is a lot of room for improvement, he adds. The scans are coarse and noisy measures of what is actually happening in the brain, and a lot of potentially useful information is still being discarded.

"The information that we derive from the brain measurements can be used to account for about 20 percent of the variance in intelligence we observed in our subjects," Dubois says. "We are doing very well, but we are still quite far from being able to match the results of hour-long intelligence tests, like the Wechsler Adult Intelligence Scale."

Dubois also points out a sort of philosophical conundrum inherent in the work. "Since the algorithm is trained on intelligence scores to begin with, how do we know that the intelligence scores are correct?" The researchers addressed this issue by extracting a more precise estimate of intelligence across 10 different cognitive tasks that the subjects had taken, not only from an IQ test. ...
Paper:
A distributed brain network predicts general intelligence from resting-state human neuroimaging data

Individual people differ in their ability to reason, solve problems, think abstractly, plan and learn. A reliable measure of this general ability, also known as intelligence, can be derived from scores across a diverse set of cognitive tasks. There is great interest in understanding the neural underpinnings of individual differences in intelligence, since it is the single best predictor of long-term life success, and since individual differences in a similar broad ability are found across animal species. The most replicated neural correlate of human intelligence to date is total brain volume. However, this coarse morphometric correlate gives no insights into mechanisms; it says little about function. Here we ask whether measurements of the activity of the resting brain (resting-state fMRI) might also carry information about intelligence. We used the final release of the Young Adult Human Connectome Project dataset (N=884 subjects after exclusions), providing a full hour of resting-state fMRI per subject; controlled for gender, age, and brain volume; and derived a reliable estimate of general intelligence from scores on multiple cognitive tasks. Using a cross-validated predictive framework, we predicted 20% of the variance in general intelligence in the sampled population from their resting-state fMRI data. Interestingly, no single anatomical structure or network was responsible or necessary for this prediction, which instead relied on redundant information distributed across the brain.
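For readers curious about the flavor of the method, here is a toy cross-validated elastic-net regression in scikit-learn on synthetic data standing in for the connectivity features. This is my own sketch under invented assumptions (the variable names, dimensions, and data are made up), not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in: ~900 "subjects", 500 "connectivity" features,
# a sparse linear signal plus noise. NOT the HCP dataset.
rng = np.random.default_rng(0)
n, p = 900, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:50] = 0.1 * rng.standard_normal(50)  # only 50 features carry signal
g = X @ beta + rng.standard_normal(n)      # proxy for "general intelligence"

# Elastic net = combined L1 + L2 penalty; penalty strength set by internal CV.
model = ElasticNetCV(l1_ratio=0.5, cv=5)
pred = cross_val_predict(model, X, g, cv=5)  # strictly out-of-sample predictions
r = np.corrcoef(pred, g)[0, 1]
print(f"cross-validated r = {r:.2f}; variance explained ~ {r**2:.0%}")
```

The key design point, which the paper emphasizes, is that the quoted 20% of variance refers to out-of-sample prediction: the model is always evaluated on subjects it was not trained on.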

Tuesday, July 03, 2018

In the land of the Gene Titans

Apologies for the lack of posts recently. I've been traveling and busy with meetings. For my own recollection, here is a partial list of places I've been in the past weeks.

Illumina (San Diego)
Ancestry (~10M genomes! San Francisco)
23andMe (~5M genomes! Mountain View)
OpenAI (machines beat pro human teams in complex Dota 2 game! San Francisco)
Affymetrix (Santa Clara)
Healdsburg, Sonoma (Talk at meeting of Oligarchs :-)
Soros Fund Management (Talk at leadership retreat, Museum of Arts and Design, NYC)


These GeneTitans are part of the Affy lab that did all of the genotyping for the UK Biobank project. The footprint for this kind of lab is shockingly small: ~6k samples per week per machine and ~10 machines means millions of individual genotypes per year. Illumina produces similar arrays/readers and a hundred square meters of lab space is enough to process millions of samples per year for DTC genomics companies like 23andMe and Ancestry.
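The throughput back-of-the-envelope, using the rough numbers above:

```python
samples_per_week_per_machine = 6_000   # approximate figure quoted above
machines = 10
weeks_per_year = 52

per_year = samples_per_week_per_machine * machines * weeks_per_year
print(f"~{per_year:,} genotypes per year")  # ~3,120,000
```

So a single modest lab footprint comfortably reaches millions of genotypes per year, even allowing for downtime.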

We may have a lab like this soon at MSU ;-)

Thursday, June 21, 2018

Harvard Office of Institutional Research models: explicit racial penalty required to reproduce actual admit rates for Asian-Americans

This is my third post discussing the Students For Fair Admissions lawsuit against Harvard over discrimination against Asian-American (A-A) applicants. Earlier posts here and here discussed, among other things, the tendency of the Admissions Office to assign low personal ratings to A-A applicants. A-As received, on average, the lowest such ratings of any ethnic group from the Admissions Office. In contrast, alumni interviewers (who actually met the candidates) gave A-A applicants scores comparable to white applicants, and higher than other ethnic groups.

Harvard's Office of Institutional Research (OIR) produced a series of internal reports on discrimination against Asian-American applicants, beginning in 2013. They attempted to model the admissions process, and concluded there was outright penalization of A-A applicants:
Mark Hansen, the (now former) OIR employee, remembers far more. He remembers working with others in OIR on the project. He remembers gathering data, conducting the regression analysis, collaborating with colleagues, coordinating with the Admissions Office, and discussing the results of OIR’s investigation with Fitzsimmons and others on multiple occasions.  Hansen expressed no concerns with the quality and thoroughness of OIR’s statistical work. Moreover, he has a clear understanding of the implications of OIR’s findings. Hansen testified that the reports show that Asian Americans “are disadvantaged in the admissions process at Harvard.” And when asked: “Do you have any explanation other than intentional discrimination for your conclusions regarding the negative association between Asians and the Harvard admissions process?” Hansen responded: “I don’t.”
The figures below show several OIR models which try to fit the observed admit rates for various groups. The only model that comes close (Model 4) is one which assigns outright penalties to A-A applicants (using "demographic" -- i.e., explicitly racial -- factors). IIUC, this is *after* the low Personal Rating scores from the Admissions Office have already been accounted for!

In the decades leading up to the data discovery forced by the SFFA lawsuit, we heard many claims that legacy / recruited athlete status, or leadership characteristics, or extracurriculars, were the reasons for A-As having such a low acceptance rate (despite their strong academic records). The OIR analysis shows that these effects, while perhaps real, are only part of the story. In Model 4, pure racial bias reduces the A-A percentage of the entering class from 26% (after accounting for all the factors listed above) to the actual 18-19%!




Tuesday, June 19, 2018

Harvard Office of Institutional Research on Discrimination Against Asian-American Applicants

Harvard's Office of Institutional Research (OIR) produced a series of internal reports on discrimination against Asian-American applicants, beginning in 2013. I believe this was in response to Ron Unz's late 2012 article The Myth of American Meritocracy. These reports were shared with, among others, William Fitzsimmons (Dean of Admissions and Financial Aid) and Rakesh Khurana (Dean of Harvard College). Faced with an internal investigation showing systemic discrimination against Asian-American applicants, Harvard killed the study and quietly buried the reports. The Students For Fair Admissions (SFFA) supporting memo for Summary Judgment contains excerpts from depositions of these and other Harvard leaders concerning the internal reports. (Starting p.15 -- SAD!)

The second report included the figure below. Differences are in SDs, Asian = Asian-American (International applicants are distinct category), and Legacy and Recruited Athlete candidates have been excluded for this calculation.


As discussed in the previous post: When it comes to the score assigned by the Admissions Office, Asian-American applicants are assigned the lowest scores of any racial group. ... By contrast, alumni interviewers (who actually meet the applicants) rate Asian-Americans, on average, at the top with respect to personal ratings—comparable to white applicants ...

From the SFFA (Students For Fair Admissions) supporting memo for summary judgment:
OIR found that Asian-American admit rates were lower than white admit rates every year over a ten-year period even though, as the first of these two charts shows, white applicants materially outperformed Asian-American applicants only in the personal rating. Indeed, OIR found that the white applicants were admitted at a higher rate than their Asian-American counterparts at every level of academic-index level. But it is even worse than that. As the second chart shows, being Asian American actually decreases the chances of admissions. Like Professor Arcidiacono, OIR found that preferences for African American and Hispanic applicants could not explain the disproportionately negative effect Harvard’s admission system has on Asian Americans.
On David Card's obfuscatory analysis: the claim is that within the pool of "unhooked" applicants (excluding recruited athletes, legacies, children of major donors, etc.), Asian-Americans are discriminated against. Card's analysis obscures this point.
The task here is to determine whether “similarly situated” applicants have been treated differently on the basis of race; “apples should be compared to apples.” SBT Holdings, LLC v. Town of Westminster, 547 F.3d 28, 34 (1st Cir. 2008). Because certain applicants are in a special category, it is important to analyze the effect of race without them included. Excluding them allows for the effect of race to be tested on the bulk of the applicant pool (more than 95% of applicants and more than two-thirds of admitted students) that do not fall into one of these categories, i.e., the similarly situated applicants. For special-category applicants, race either does not play a meaningful role in their chances of admission or the discrimination is offset by the “significant advantage” they receive. Either way, they are not apples.

Professor Card’s inclusion of these applicants reflects his position that “there is no penalty against Asian-American applicants unless Harvard imposes a penalty on every Asian-American applicant.” But he is not a lawyer and he is wrong. It is illegal to discriminate against any Asian-American applicant or subset of applicants on the basis of race. Professor Card cannot escape that reality by trying to dilute the dataset. The claim here is not that Harvard, for example, “penalizes recruited athletes who are Asian-American because of their race.” The claim “is that the effects of Harvard’s use of race occur outside these special categories.” Professor Arcidiacono thus correctly excluded special-category applicants to isolate and highlight Harvard’s discrimination against Asian Americans. Professor Card, by contrast, includes “special recruiting categories in his models” to “obscure the extent to which race is affecting admissions decisions for those not fortunate enough to belong to one of these groups.” At bottom, SFFA’s claim is that Harvard penalizes Asian-American applicants who are not legacies or recruited athletes. Professor Card has shown that he is unwilling and unable to contest that claim.
This is an email from an alumni interviewer:
[M]y feelings towards Harvard have been slowly changing over the years. I’ve been interviewing for the college for almost 10 years now, and in those ten years, none of the Asian American students I’ve interviewed has been accepted (or even wait-listed). I’m 0 for about 20. This is the case despite the fact that their resumes are unbelievable and often superior to those of the non-Asian students I’ve interviewed who are admitted. I’ve also attended interviewer meetings where Asian candidates are summarily dismissed as “typical” or “not doing anything anyone else isn’t doing” while white or other minority candidates with similar resumes are lauded.
From p.18 of the SFFA memo:
Mark Hansen, the (now former) OIR employee, remembers far more. He remembers working with others in OIR on the project. He remembers gathering data, conducting the regression analysis, collaborating with colleagues, coordinating with the Admissions Office, and discussing the results of OIR’s investigation with Fitzsimmons and others on multiple occasions.  Hansen expressed no concerns with the quality and thoroughness of OIR’s statistical work. Moreover, he has a clear understanding of the implications of OIR’s findings. Hansen testified that the reports show that Asian Americans “are disadvantaged in the admissions process at Harvard.” And when asked: “Do you have any explanation other than intentional discrimination for your conclusions regarding the negative association between Asians and the Harvard admissions process?” Hansen responded: “I don’t.”
A very sad tweet:

Saturday, June 16, 2018

Harvard discrimination lawsuit: data show penalization of Asian-Americans on subjective personality evaluation


Harvard and Students For Fair Admissions (SFFA), which is suing Harvard over discrimination against Asian-American applicants, have released a large set of documents related to the case, including statistical analysis of records of more than 160,000 applicants who applied for admission over six cycles from 2000 to 2015.

Documents here and here. NYTimes coverage.

The following point does not require any sophisticated modeling (with inherent assumptions) or statistical expertise to understand.

Harvard admissions evaluators -- staffers who are likely under pressure to deliver a target mix of ethnicities each year -- rate Asian-American applicants far lower on subjective personality traits than do alumni interviewers who actually meet the applicants. The easiest way to limit the number of A-A admits each year would be to penalize them on the most subjective aspects of the evaluation...

As stated further below: When it comes to the score assigned by the Admissions Office, Asian-American applicants are assigned the lowest scores of any racial group. ... By contrast, alumni interviewers (who actually meet the applicants) rate Asian-Americans, on average, at the top with respect to personal ratings—comparable to white applicants...
SFFA Memorandum: Professor Arcidiacono found that Harvard’s admissions system discriminates against Asian-American applicants in at least three respects. First, he found discrimination in the personal rating. Asian-American applicants are significantly stronger than all other racial groups in academic performance. They also perform very well in non-academic categories and have higher extracurricular scores than any other racial group. Asian-American applicants (unsurprisingly, therefore) receive higher overall scores from alumni interviewers than all other racial groups. And they receive strong scores from teachers and guidance counselors—scores that are nearly identical to white applicants (and higher than African-American and Hispanic applicants). In sum, Professor Arcidiacono found that “Asian-American applicants as a whole are stronger on many objective measures than any other racial/ethnic group including test scores, academic achievement, and extracurricular activities.”

Yet Harvard’s admissions officials assign Asian Americans the lowest score of any racial group on the personal rating—a “subjective” assessment of such traits as whether the student has a “positive personality” and “others like to be around him or her,” has “character traits” such as “likability ... helpfulness, courage, [and] kindness,” is an “attractive person to be with,” is “widely respected,” is a “good person,” and has good “human qualities.” Importantly, Harvard tracks two different personal ratings: one assigned by the Admissions Office and another by alumni interviewers. When it comes to the score assigned by the Admissions Office, Asian-American applicants are assigned the lowest scores of any racial group. ... By contrast, alumni interviewers (who actually meet the applicants) rate Asian Americans, on average, at the top with respect to personal ratings—comparable to white applicants and higher than African-American and Hispanic applicants.
From the Crimson:
The report found that Asian American applicants performed significantly better in rankings of test scores, academics, and overall scores from alumni interviews. Of 10 characteristics, white students performed significantly better in only one—rankings of personal qualities, which are assigned by the Admissions Office. [italics added]
See also Too Many Asian Americans: Affirmative Discrimination in Elite College Admissions. (Source of figure at top; the peak in A-A representation at Harvard, in the early 1990s, coincides with external pressure from an earlier DOJ investigation of the university for discrimination.)

A very sad tweet:


For the statistically sophisticated, see Duke Professor Arcidiacono's rebuttal to David Card's analysis for Harvard. If these entirely factual and easily verified characterizations of Card's modeling (see below) are correct, the work is laughable.
Professor Card’s models are distorted by his inclusion of applicants for whom there is no reason to believe race plays any role.

As my opening report noted, there are several categories of applicants to whom Harvard extends preferences for reasons other than race: recruited athletes, children of faculty and staff, those who are on the Dean’s List or Director’s List [i.e., Big Donors], legacies, and those who apply for early admission. Because of the significant advantage that each of these categories confers on applicants, my report analyzed the effect of race on an applicant pool without these special categories of applicants (the baseline dataset), which allowed me to test for the effect of race on the bulk of the applicant pool that did not fall into one of these categories.

Professor Card, however, includes all of these applicants in his model, taking the remarkable position that there is no penalty against Asian-American applicants unless Harvard imposes a penalty on every Asian-American applicant. But this is an untenable position. I do not assert that Harvard uses race to penalize Asian-American applicants who are recruited athletes, children of donors (or others identified on the Dean’s List), legacies, or other preferred categories. By including these special recruiting categories in his models, Professor Card obscures the extent to which race is affecting admissions decisions for all other applicants.

Professor Card further exacerbates this problem by including in his calculations the large majority of applicants whose characteristics guarantee rejection regardless of their race. Harvard admits a tiny fraction of applicants – only five or six percent in recent years. This means that a huge proportion of applicants have no realistic chance of admission. If an applicant has no chance of admission, regardless of his race, then Harvard obviously does not “discriminate” based on race in rejecting that applicant. Professor Card uses this obvious fact to assert that Harvard does not consider race at all in most of its admissions decisions. Further, he constructs his models in ways that give great weight to these applicants, again watering down the effect of race in Harvard’s decisions where it clearly does matter. (To put it in simple terms, it is akin to reducing the value of a fraction by substantially increasing the size of its denominator.)


Professor Card removes interaction terms, which has the effect of understating the penalty Harvard imposes on Asian-American applicants.

As Professor Card notes, his model differs from mine in that he removes the interaction terms. An interaction term allows the effects of a particular factor to vary with another distinct factor. In the context of racial discrimination, interaction terms are especially helpful (and often necessary) in revealing where certain factors operate differently for subgroups within a particular racial or ethnic group. For example, if a law firm singled out African-American women for discriminatory treatment but treated African-American males and other women fairly, a regression model would probably not pick up the discrimination unless it included an interaction between African-American and female.

Professor Card rightly recognizes that interaction terms should be included in a model when there is evidence that racial preferences operate differently for particular groups of applicants; yet he nonetheless removes interaction terms for variables that satisfy this condition. The most egregious instance of this is Professor Card’s decision not to interact race with disadvantaged status—even though the data clearly indicate that Harvard treats disadvantaged students differently by race.
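The law-firm example above can be made concrete with a toy calculation (hypothetical rates, my own sketch): a penalty applied only to one subgroup is partially averaged away in any analysis that omits the interaction.

```python
# Hypothetical: group A women face a 10-point penalty; everyone else is
# treated identically. A race-only comparison sees just half the penalty.
rates = {
    ("A", "F"): 0.05,  # the penalized subgroup
    ("A", "M"): 0.15,
    ("B", "F"): 0.15,
    ("B", "M"): 0.15,
}
counts = {k: 1000 for k in rates}  # equal-sized subgroups

def pooled_rate(group: str) -> float:
    """Admission rate for a group, ignoring the race x gender interaction."""
    keys = [k for k in rates if k[0] == group]
    admits = sum(rates[k] * counts[k] for k in keys)
    return admits / sum(counts[k] for k in keys)

print(pooled_rate("A"), pooled_rate("B"))
# A race-only view shows a 5-point gap (0.10 vs 0.15); the true penalty on
# ("A", "F") is 10 points, visible only if race and gender are interacted.
```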

...

Professor Card’s report changes none of my conclusions; to the contrary, given how easy it is to alter the results of his models and that my own models report the same results even incorporating a number of his controls, my opinions in this case have only been strengthened: Harvard penalizes Asian-American applicants; Harvard imposes heavy racial preferences in favor of Hispanic and African-American applicants; and Harvard has been manipulating its admission of single-race African-American applicants to ensure their admission rate approximates or exceeds the overall admission rate. Professor Card has demonstrated that it is possible to mask the true effects of race in Harvard’s admission process by changing the scope of the analysis in incorrect ways and choosing inappropriate combinations of control variables. But Professor Card cannot reach these results by applying accepted statistical methods and treating the data fairly.

Tuesday, June 12, 2018

Big Ed on Classical and Quantum Information Theory

I'll have to carve out some time this summer to look at these :-) Perhaps on an airplane...

When I visited IAS earlier in the year, Witten was sorting out Lieb's (nontrivial) proof of strong subadditivity. See also Big Ed.
A Mini-Introduction To Information Theory
https://arxiv.org/abs/1805.11965

This article consists of a very short introduction to classical and quantum information theory. Basic properties of the classical Shannon entropy and the quantum von Neumann entropy are described, along with related concepts such as classical and quantum relative entropy, conditional entropy, and mutual information. A few more detailed topics are considered in the quantum case.
Notes On Some Entanglement Properties Of Quantum Field Theory
https://arxiv.org/abs/1803.04993

These are notes on some entanglement properties of quantum field theory, aiming to make accessible a variety of ideas that are known in the literature. The main goal is to explain how to deal with entanglement when – as in quantum field theory – it is a property of the algebra of observables and not just of the states.
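The basic objects in the first set of notes can be sketched in a few lines (my own illustration, not from the paper):

```python
import math

def shannon_entropy(probs):
    """H(p) = sum_i -p_i log2(p_i), in bits, for a classical distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# For a density matrix written in its eigenbasis, the von Neumann entropy
# S(rho) = -Tr(rho log rho) is the Shannon entropy of the eigenvalues.
pure_state = [1.0, 0.0]             # rank-one projector: S = 0
maximally_mixed_qubit = [0.5, 0.5]  # S = 1 bit = log2(dim)

print(shannon_entropy(pure_state))            # 0.0
print(shannon_entropy(maximally_mixed_qubit)) # 1.0
```

Just linear algebra, as the anecdote below has it :-)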
Years ago at Caltech, I was walking back to Lauritsen after a talk on quantum information with John Preskill and a famous string theorist who shall remain nameless. When I asked the latter what he thought of the talk, he laughed and said Well, after all, it's just linear algebra :-)

Sunday, June 10, 2018

The Life of this World


From this 2011 post:
I've been a fan of the writer James Salter (see also here) since discovering his masterpiece A Sport and a Pastime. Salter evokes Americans in France as no one since Hemingway in A Moveable Feast. The title comes from the Koran: Remember that the life of this world is but a sport and a pastime ... :-)

I can't think of higher praise than to say I've read every bit of Salter's work I could get my hands on.
For true Salter fans, a new (2017; he passed in 2015) collection of previously uncollected nonfiction: Don't Save Anything: Uncollected Essays, Articles, and Profiles. I especially liked the essay Younger Women, Older Men, originally published in Esquire in 1992.

From A Sport and a Pastime.
“When did you get out of Yale?”
“I didn’t,” he says. “I quit.”
“Oh.”

He describes it casually, without stooping to explain, but the authority of the act overwhelms me. If I had been an underclassman he would have become my hero, the rebel who, if I had only had the courage, I might have also become. ... Now, looking at him, I am convinced of all I missed. I am envious. Somehow his life seems more truthful than mine, stronger, even able to draw mine to it like the pull of a dark star.

He quit. It was too easy for him, his sister told me, and so he refused it. He had always been extraordinary in math. He had a scholarship. He knew he was exceptional. Once he took the anthropology final when he hadn’t taken the course. He wrote that at the top of the page. His paper was so brilliant the professor fell in love with him. Dean was disappointed, of course. It only proved how ridiculous everything was. ... He lived with various friends in New York and began to develop a style. ... in the end he quit altogether. Then he began educating himself.

...

She stoops with the match, inserts it, and the heater softly explodes. A blue flame rushes across the jets, then burns with a steady sound. There’s no other light in the room but this, which reflects from the floor. She stands up again. She drops the burnt match on the table and begins to arrange clothing on the grill of the heater, pajamas, spreading them out so they can be warmed. Dean helps her a bit. The silk, if it’s that, is quite cold. And there, back from the Vox opposite the Citroen garage, its glass doors now closed, they stand in the roaring dark. In a fond, almost brotherly gesture, he puts his arms around her. They hardly know one another. She accepts it without a word, without a movement, and they wait in a pure silence, the faint sweetness of gas in the air. After a while she turns the pajamas over. Her back is towards him. In a single move she pulls off her sweater and then, reaching behind herself in that elbow-awkward way, unfastens her brassiere. Slowly he turns her around.

...
From a message to a friend who knew Salter and asked me to articulate what I most admire about his work.
About 5 years ago I became friends with the writer Richard Ford, who offered to introduce me to his friend Salter. I was less enthusiastic to meet him than I would have been when he was younger. I did not go out of my way, and we never met.

Since he lived in Aspen, and I was often there in the summers at the Physics institute, I have sometimes imagined that we crossed paths without knowing it.

I admire, of course, his prose style. Sentence for sentence, he is the master.

But perhaps even more I admire his view of the world -- of courage, honor, daring to attempt the impossible, men and women, what is important in life.

Saturday, June 09, 2018

The Rise of AI (Bloomberg Hello World documentary)



Great profile of Geoff Hinton, Yoshua Bengio, etc., but covers many other topics.

Note to readers: I'll be at the 35th International Conference on Machine Learning (ICML 2018) in Stockholm, Sweden (July 10-15, 2018), giving a talk at the Reproducibility in ML Workshop.

Let me know if you want to meet up!

Wednesday, May 30, 2018

Deep Learning as a branch of Statistical Physics

Via Jess Riedel, an excellent talk by Naftali Tishby given recently at the Perimeter Institute.

The first 15 minutes is a very nice summary of the history of neural nets, with an emphasis on the connection to statistical physics. In the large network (i.e., thermodynamic) limit, one observes phase transition behavior -- sharp transitions in performance, and also a kind of typicality (concentration of measure) that allows for general statements that are independent of some detailed features.
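A minimal numerical illustration of the concentration phenomenon (my own sketch, not from the talk): the squared norm per dimension of a random Gaussian vector fluctuates less and less as the dimension grows, so in high dimension "typical" vectors all look alike.

```python
import random
import statistics

random.seed(0)

def normalized_norm_sq(d: int) -> float:
    """|x|^2 / d for x a standard Gaussian vector in R^d."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(d)) / d

for d in (2, 10, 1000):
    samples = [normalized_norm_sq(d) for _ in range(200)]
    # Mean stays near 1; the spread shrinks like 1/sqrt(d).
    print(d, round(statistics.mean(samples), 2), round(statistics.stdev(samples), 3))
```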

Unfortunately I don't know how to embed video from Perimeter so you'll have to click here to see the talk.

An earlier post on this work: Information Theory of Deep Neural Nets: "Information Bottleneck"

Title and Abstract:
The Information Theory of Deep Neural Networks: The statistical physics aspects

The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process.

In this talk, I will focus on the statistical physics aspects of our theory and the interaction between the stochastic dynamics of the training algorithm (Stochastic Gradient Descent) and the phase structure of the Information Bottleneck problem. Specifically, I will describe the connections between the phase transition and the final location and representation of the hidden layers, and the role of these phase transitions in determining the weights of the network.

About Tishby:
Naftali (Tali) Tishby נפתלי תשבי

Physicist, professor of computer science and computational neuroscientist
The Ruth and Stan Flinkman professor of Brain Research
Benin school of Engineering and Computer Science
Edmond and Lilly Safra Center for Brain Sciences (ELSC)
Hebrew University of Jerusalem, 96906 Israel

I work at the interfaces between computer science, physics, and biology which provide some of the most challenging problems in today’s science and technology. We focus on organizing computational principles that govern information processing in biology, at all levels. To this end, we employ and develop methods that stem from statistical physics, information theory and computational learning theory, to analyze biological data and develop biologically inspired algorithms that can account for the observed performance of biological systems. We hope to find simple yet powerful computational mechanisms that may characterize evolved and adaptive systems, from the molecular level to the whole computational brain and interacting populations.

Saturday, May 26, 2018

Vinyl Sounds

Vinyl + Vacuum Tubes ... Still unsurpassed for warmth and richness of sound.








When I lived in New Haven in the 90s I took the train in on weekends to visit old friends from physics and mathematics, most of whom worked in finance. One Sunday morning in the spring I found myself with a friend of a friend, a big fixed income trader and devoted audiophile. His apartment in the Village had a large room with a balcony surrounded by leafy trees. In the room he kept only two things: a giant divan next to the balcony, on which several people at a time could recline, and the most expensive audio system I have ever seen. We spent hours listening to jazz and eating fresh cannoli with his actress girlfriend.

Off Grid Tiny Homes

This is the kind of thing I fantasize about doing after I retire :-)





Friday, May 25, 2018

Too Many Asian Americans: Affirmative Discrimination in Elite College Admissions

An updated analysis of discrimination against Asian-American applicants at elite universities. Figures below are from the paper. See also The Content of their Character: Ed Blum and Jian Li.
Too Many Asian Americans: Affirmative Discrimination in Elite College Admissions

Althea Nagai, Ph.D.

Asian Americans are “overrepresented” in certain elite schools relative to their numbers in the U.S. population. In pursuit of racial and ethnic diversity, these schools will admit some Asian American applicants but not as many as their academic qualifications would justify. As a case study, I examine three private universities and Asian American enrollment in those universities over time.

No “Ceiling” on Asian Americans at Caltech But One at MIT and Harvard.
Some basic facts: Caltech has race-blind admissions. The fraction of Asian-Americans enrolled there tends to track the growth in the overall applicant pool in recent decades. Harvard does use race as a factor, and is being sued for discrimination against Asian-Americans. The peak in A-A representation at Harvard, in the early 1990s, coincides with external pressure from an earlier DOJ investigation of the university for discrimination (dramatic race-based adjustments, revealing the craven subjectivity of holistic admissions!). Despite the much stronger and larger pool of applicants today (second figure below), A-A representation at Harvard has never recovered to those 1990s levels.




Wednesday, May 23, 2018

Dominic Cummings on Fighting, Physics, and Learning from tight feedback loops

Another great post from Dom.

Once something has become widely understood, it is difficult to recreate or fully grasp the mindset that prevailed before. But I can attest to the fact that until the 1990s and the advent of MMA, even "experts" (like boxing coaches, karate and kung fu instructors, Navy SEALs) did not know how to fight -- they were deeply confused as to which techniques were most effective in unarmed combat.

Soon our ability to predict heritable outcomes using DNA alone (i.e., Genomic Prediction) will be well-established. Future generations will have difficulty understanding the mindset of people (even scientists) today who deny that it is possible.

The same will be true of AGI... For example, see the well-known "Chinese Room" argument against AGI, advanced by Berkeley Philosopher John Searle (discussed before in The Mechanical Turk and Searle's Chinese Room). Searle's confusion as to where, exactly, the understanding resides inside a complex computation seems silly to us today given recent developments with deep neural nets and, e.g., machine translation (the very problem used in his thought experiment). Understanding doesn't exist in any sub-portion of the network, it is embodied in the network. (See also Thought vectors and the dimensionality of the space of concepts :-)
Effective action #4a: ‘Expertise’ from fighting and physics to economics, politics and government

Extreme sports: fast feedback = real expertise

In the 1980s and early 1990s, there was an interesting case study in how useful new knowledge jumped from a tiny isolated group to the general population with big effects on performance in a community. Expertise in Brazilian jiu-jitsu was taken from Brazil to southern California by the Gracie family. There were many sceptics but they vanished rapidly because the Gracies were empiricists. They issued ‘the Gracie challenge’.

All sorts of tough guys, trained in all sorts of ways, were invited to come to their garage/academy in Los Angeles to fight one of the Gracies or their trainees. Very quickly it became obvious that the Gracie training system was revolutionary and they were real experts because they always won. There was very fast and clear feedback on predictions. Gracie jiujitsu quickly jumped from an LA garage to TV. At the televised UFC 1 event in 1993 Royce Gracie defeated everyone and a multi-billion dollar business was born.

People could see how training in this new skill could transform performance. Unarmed combat changed across the world. Disciplines other than jiu jitsu have had to make a choice: either isolate themselves and not compete with jiu jitsu or learn from it. If interested watch the first twenty minutes of this documentary (via professor Steve Hsu, physicist, amateur jiu jitsu practitioner, and predictive genomics expert).

...

[[ On politics, a field in which Dom has few peers: ]]

... The faster the feedback cycle, the more likely you are to develop a qualitative improvement in speed that destroys an opponent’s decision-making cycle. If you can reorient yourself faster to the ever-changing environment than your opponent, then you operate inside their ‘OODA loop’ (Observe-Orient-Decide-Act) and the opponent’s performance can quickly degrade and collapse.

This lesson is vital in politics. You can read it in Sun Tzu and see it with Alexander the Great. Everybody can read such lessons and most people will nod along. But it is very hard to apply because most political/government organisations are programmed by their incentives to prioritise seniority, process and prestige over high performance and this slows and degrades decisions. Most organisations don’t do it. Further, political organisations tend to make too slowly those decisions that should be fast and too quickly those decisions that should be slow — they are simultaneously both too sluggish and too impetuous, which closes off favourable branching histories of the future.




See also Kosen Judo and the origins of MMA.


Choking out a Judo black belt in the tatami room at the Payne Whitney gymnasium at Yale. My favorite gi choke is Okuri eri jime.


Training in Hawaii at Relson Gracie's and Enson Inoue's schools. The shirt says Yale Brazilian Jiujitsu -- a club I founded. I was also the faculty advisor to the already existing Judo Club :-)

Saturday, May 19, 2018

Deep State Update


It's been clear for well over a year now that the Obama DOJ-FBI-CIA used massive surveillance powers (FISA warrant, and before that, national security letters and illegal contractor access to intelligence data) against the Trump campaign. In addition to SIGINT (signals intelligence, such as email or phone intercepts), we now know that HUMINT (spies, informants) was also used.

Until recently one could still be called a conspiracy theorist by the clueless for stating the facts in the paragraph above. But a few days ago the NYTimes and WaPo finally gave up (in an effort to shape the narrative in advance of DOJ Inspector General report(s) and other document releases that are imminent) and admitted that all of these things actually happened. The justification advanced by the lying press is that this was all motivated by fear of Russian interference -- there was no partisan political motivation for the Obama administration to investigate the opposition party during a presidential election.

If the Times and Post were dead wrong a year ago, what makes you think they are correct now?

Here are the two recent NYTimes propaganda articles:

F.B.I. Used Informant to Investigate Russia Ties to Campaign, Not to Spy, as Trump Claims


Code Name Crossfire Hurricane: The Secret Origins of the Trump Investigation

Don't believe in the Deep State? Here is a 1983 Times article about dirty tricks HUMINT spook Stefan Halper (he's the CIA-FBI informant described in the recent articles above). Much more at the left of center Intercept.

Why doesn't Trump just fire Sessions/Rosenstein/Mueller or declassify all the docs?

For example, declassifying the first FISA application would show, as claimed by people like Chuck Grassley and Trey Gowdy, who have read the unredacted original, that it largely depends on the fake Steele Dossier, and that the application failed to conform to the required Woods procedures.

The reason for Trump's restraint is still not widely understood. There is and has always been strong GOP opposition to his candidacy and presidency ("Never Trumpers"). The anti-Trump, pro-immigration wing of his party would likely support impeachment under the right conditions. For their purposes, the Mueller probe keeps Trump weak enough that he will do their bidding (lower taxes, help corporations and super-wealthy oligarchs) without straying too far from the bipartisan globalist agenda (pro-immigration, anti-nativism, anti-nationalism). If Trump were to push back too hard on the Deep State conspiracy against him, he would risk attack from his own party.

I believe Trump's strategy is to let the DOJ Inspector General process work its way through this mess -- there are several more reports coming, including one on the Hillary email investigation (draft available for DOJ review now; will be public in a few weeks), and another on FISA abuse and surveillance of the Trump campaign. The OIG is working with a DOJ prosecutor (John Huber, Utah) on criminal referrals emerging from the investigation. Former Comey deputy Andrew McCabe has already been referred for possible criminal charges due to the first OIG report. I predict more criminal referrals of senior DOJ/FBI figures in the coming months. Perhaps they will even get to former CIA Director Brennan (pictured at top), who seems to have lied under oath about his knowledge of the Steele dossier.

Trump may be saving his gunpowder for later, and if he has to expend some, it will be closer to the midterm elections in the fall.


Note added: For those who are not tracking this closely, one of the reasons the Halper story is problematic for the bad guys is explained in The Intercept:
... the New York Times reported in December of last year that the FBI investigation into possible ties between the Trump campaign and Russia began when George Papadopoulos drunkenly boasted to an Australian diplomat about Russian dirt on Hillary Clinton. It was the disclosure of this episode by the Australians that “led the F.B.I. to open an investigation in July 2016 into Russia’s attempts to disrupt the election and whether any of President Trump’s associates conspired,” the NYT claimed.

But it now seems clear that Halper’s attempts to gather information for the FBI began before that. “The professor’s interactions with Trump advisers began a few weeks before the opening of the investigation, when Page met the professor at the British symposium,” the Post reported. While it’s not rare for the FBI to gather information before formally opening an investigation, Halper’s earlier snooping does call into question the accuracy of the NYT’s claim that it was the drunken Papadopoulos ramblings that first prompted the FBI’s interest in these possible connections. And it suggests that CIA operatives, apparently working with at least some factions within the FBI, were trying to gather information about the Trump campaign earlier than had been previously reported.
Hmm... so what made the CIA/FBI assign Halper to probe Trump campaign staffers in the first place? It seems the cover story for the start of the anti-Trump investigation needs some reformulation...

Friday, May 18, 2018

Digital Cash in China



WSJ: "Are they ahead of us here?"

UK Expat in Shenzhen: "It's a strange realization, but Yes."

Thursday, May 17, 2018

Exponential growth in compute used for AI training


The chart shows the total amount of compute, in petaflop/s-days, used in training (e.g., optimizing an objective function in a high dimensional space). This exponential trend is likely to continue for some time -- leading to qualitative advances in machine intelligence.
AI and Compute (OpenAI blog): ... since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.

... Three factors drive the advance of AI: algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training. Algorithmic innovation and data are difficult to track, but compute is unusually quantifiable, providing an opportunity to measure one input to AI progress. Of course, the use of massive compute sometimes just exposes the shortcomings of our current algorithms. But at least within many current domains, more compute seems to lead predictably to better performance, and is often complementary to algorithmic advances.

...We see multiple reasons to believe that the trend in the graph could continue. Many hardware startups are developing AI-specific chips, some of which claim they will achieve a substantial increase in FLOPS/Watt (which is correlated to FLOPS/$) over the next 1-2 years. ...
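The quoted numbers are mutually consistent, as a quick check shows (my arithmetic, taking the 3.5-month doubling figure at face value):

```python
import math

MOORE_DOUBLING_MONTHS = 18.0
AI_DOUBLING_MONTHS = 3.5

def growth_factor(months: float, doubling_months: float) -> float:
    """Total multiplicative growth over a time span, given a doubling period."""
    return 2.0 ** (months / doubling_months)

# Span implied by a 300,000x increase at a 3.5-month doubling time:
doublings = math.log2(300_000)                 # ~18.2 doublings
span_months = doublings * AI_DOUBLING_MONTHS   # ~64 months (~5.3 years)

# Over that same span, Moore's-Law-style 18-month doubling gives only ~12x:
moore_factor = growth_factor(span_months, MOORE_DOUBLING_MONTHS)
print(f"span: {span_months:.0f} months, Moore factor: {moore_factor:.0f}x")
# span: 64 months, Moore factor: 12x
```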

Tuesday, May 15, 2018

AGI in the Alps: Schmidhuber in Bloomberg


A nice profile of AI researcher Jurgen Schmidhuber in Bloomberg. I first met Schmidhuber at SciFoo some years ago. See also Deep Learning in Nature.
Bloomberg: ... Schmidhuber’s dreams of an AGI began in Bavaria. The middle-class son of an architect and a teacher, he grew up worshipping Einstein and aspired to go a step further. “As a teenager, I realized that the grandest thing that one could do as a human is to build something that learns to become smarter than a human,” he says while downing a latte. “Physics is such a fundamental thing, because it’s about the nature of the world and how the world works, but there is one more thing that you can do, which is build a better physicist.”

This goal has been Schmidhuber’s all-consuming obsession for four decades. His younger brother, Christof, remembers taking long family drives through the Alps with Jürgen philosophizing away in the back seat. “He told me that you can build intelligent robots that are smarter than we are,” Christof says. “He also said that you could rebuild a brain atom by atom, and that you could do it using copper wires instead of our slow neurons as the connections. Intuitively, I rebelled against this idea that a manufactured brain could mimic a human’s feelings and free will. But eventually, I realized he was right.” Christof went on to work as a researcher in nuclear physics before settling into a career in finance.

... AGI is far from inevitable. At present, humans must do an incredible amount of handholding to get AI systems to work. Translations often stink, computers mistake hot dogs for dachshunds, and self-driving cars crash. Schmidhuber, though, sees an AGI as a matter of time. After a brief period in which the company with the best one piles up a great fortune, he says, the future of machine labor will reshape societies around the world.

“In the not-so-distant future, I will be able to talk to a little robot and teach it to do complicated things, such as assembling a smartphone just by show and tell, making T-shirts, and all these things that are currently done under slavelike conditions by poor kids in developing countries,” he says. “Humans are going to live longer, healthier, happier, and easier lives, because lots of jobs that are now demanding on humans are going to be replaced by machines. Then there will be trillions of different types of AIs and a rapidly changing, complex AI ecology expanding in a way where humans cannot even follow.” ...
Schmidhuber has annoyed many of his colleagues in AI by insisting on proper credit assignment for groundbreaking work done in earlier decades. Because neural networks languished in obscurity through the 1980s and 1990s, a lot of theoretical ideas that were developed then do not today get the recognition they deserve.

Schmidhuber points out that machine learning is itself based on accurate credit assignment. Good learning algorithms assign higher weights to features or signals that correctly predict outcomes, and lower weights to those that are not predictive. His analogy between science itself and machine learning is often lost on critics.
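A minimal sketch of credit assignment in this sense (my illustration, not Schmidhuber's code): a single-neuron gradient update steadily upweights the feature that predicts the outcome while leaving the uninformative one alone.

```python
def step(w, x, y, lr=0.1):
    """One gradient step for a linear predictor under squared error."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    err = y - pred
    # Credit assignment: each weight moves in proportion to how much
    # its feature contributed to the error.
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(50):
    w = step(w, [1.0, 0.0], 1.0)  # feature 0 predicts y = 1; feature 1 is silent

print(w)  # weight on the predictive feature approaches 1; the other stays at 0
```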

What is still missing on the road to AGI:
... Ancient algorithms running on modern hardware can already achieve superhuman results in limited domains, and this trend will accelerate. But current commercial AI algorithms are still missing something fundamental. They are no self-referential general purpose learning algorithms. They improve some system’s performance in a given limited domain, but they are unable to inspect and improve their own learning algorithm. They do not learn the way they learn, and the way they learn the way they learn, and so on (limited only by the fundamental limits of computability). As I wrote in the earlier reply: "I have been dreaming about and working on this all-encompassing stuff since my 1987 diploma thesis on this topic." However, additional algorithmic breakthroughs may be necessary to make this a practical reality.

Sunday, May 13, 2018

Feynman 100 at Caltech


https://feynman100.caltech.edu

AI, AGI, and ANI in The New Yorker


A good long read in The New Yorker on AI, AGI, and all that. Note the article appears in the section "Dept. of Speculation" :-)
How Frightened Should We Be of A.I.?

Precisely how and when will our curiosity kill us? I bet you’re curious. A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as artificial general intelligence, doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.” Such advisories aren’t new. In 1951, the year of the first rudimentary chess program and neural network, the A.I. pioneer Alan Turing predicted that machines would “outstrip our feeble powers” and “take control.” In 1965, Turing’s colleague Irving Good pointed out that brainy devices could design even brainier ones, ad infinitum: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” It’s that last clause that has claws.

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.

The assessments remain theoretical, because even as the A.I. race has grown increasingly crowded and expensive, the advent of an A.G.I. remains fixed in the middle distance. In the nineteen-forties, the first visionaries assumed that we’d reach it in a generation; A.I. experts surveyed last year converged on a new date of 2047. A central tension in the field, one that muddies the timeline, is how “the Singularity”—the point when technology becomes so masterly it takes over for good—will arrive. Will it come on little cat feet, a “slow takeoff” predicated on incremental advances in A.N.I., taking the form of a data miner merged with a virtual-reality system and a natural-language translator, all uploaded into a Roomba? Or will it be the Godzilla stomp of a “hard takeoff,” in which some as yet unimagined algorithm is suddenly incarnated in a robot overlord?

A.G.I. enthusiasts have had decades to ponder this future, and yet their rendering of it remains gauzy: we won’t have to work, because computers will handle all the day-to-day stuff, and our brains will be uploaded into the cloud and merged with its misty sentience, and, you know, like that. ...

Thursday, May 10, 2018

Google Duplex and the (short) Turing Test

Click this link and listen to the brief conversation. No cheating! Which speaker is human and which is a robot?

I wrote about a "strong" version of the Turing Test in this old post from 2004:
When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.
AI isn't ready to pass the strong Turing Test, yet. But humans will become increasingly unsure about the machine intelligences proliferating in the world around them.

The key to all AI advances is to narrow the scope of the problem so that the machine can deal with it. Optimization/Learning in lower dimensional spaces is much easier than in high dimensional spaces. In sufficiently narrow situations (specific tasks, abstract games of strategy, etc.), machines are already better than humans.
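A toy illustration of the point about dimensionality (my own sketch): the chance that blind random search lands near a target falls off exponentially with dimension.

```python
# Probability that a uniform random point in [0,1]^d lands within +/- 0.1
# of a fixed interior target in *every* coordinate: (0.2)**d per sample.
for d in (1, 3, 10, 100):
    p_hit = 0.2 ** d
    expected_samples = 1 / p_hit  # samples needed on average for one hit
    print(f"d={d:3d}  p_hit={p_hit:.3g}  expected samples={expected_samples:.3g}")
```

Narrowing the problem (fewer effective dimensions, more structure) is what makes the search tractable for a machine.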

Google AI Blog:
Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone

...Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Here are examples of Duplex making phone calls (using different voices)...
I switched from iOS to Android in the last year because I could see that Google Assistant was much better than Siri and was starting to have very intriguing capabilities!

