Sunday, February 26, 2017

Perverse Incentives and Replication in Science

Here's a depressing but all too common pattern in scientific research:
1. Study reports results that reinforce the dominant, politically correct narrative.

2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.

3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
For numerous examples, see, e.g., any of Malcolm Gladwell's books :-(

A recent example: the idea that the collective intelligence of groups (i.e., their ability to solve problems and accomplish assigned tasks) is not primarily dependent on the cognitive ability of the individuals in the group.

It seems plausible to me that by adopting certain best practices for collaboration one can improve group performance, and that diversity of knowledge base and personal experience could also enhance performance on certain tasks. But recent results in this direction were probably oversold, and seem to have failed to replicate.

James Thompson has given a good summary of the situation.

Parts 1 and 2 of our story:
MIT Center for Collective Intelligence: ... group-IQ, or “collective intelligence” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Is it true? The original paper on this topic, from 2010, has been cited 700+ times. See here for some coverage on this blog when it originally appeared.

Below is the (only independent?) attempt at replication, with strongly negative results. The first author is a regular (and very insightful) commenter here -- I hope he'll add his perspective to the discussion. Have we reached part 3 of the story?
Smart groups of smart people: Evidence for IQ as the origin of collective intelligence in the performance of human groups

Timothy C. Bates (a,b) and Shivani Gupta (a)
(a) Department of Psychology, University of Edinburgh
(b) Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh

What allows groups to behave intelligently? One suggestion is that groups exhibit a collective intelligence accounted for by number of women in the group, turn-taking and emotional empathizing, with group-IQ being only weakly-linked to individual IQ (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). Here we report tests of this model across three studies with 312 people. Contrary to prediction, individual IQ accounted for around 80% of group-IQ differences. Hypotheses that group-IQ increases with number of women in the group and with turn-taking were not supported. Reading the mind in the eyes (RME) performance was associated with individual IQ, and, in one study, with group-IQ factor scores. However, a well-fitting structural model combining data from studies 2 and 3 indicated that RME exerted no influence on the group-IQ latent factor (instead having a modest impact on a single group test). The experiments instead showed that higher individual IQ enhances group performance such that individual IQ determined 100% of latent group-IQ. Implications for future work on group-based achievement are examined.


From the paper:
Given the ubiquitous importance of group activities (Simon, 1997) these results have wide implications. Rather than hiring individuals with high cognitive skill who command higher salaries (Ritchie & Bates, 2013), organizations might select-for or teach social sensitivity thus raising collective intelligence, or even operate a female gender bias with the expectation of substantial performance gains. While the study has over 700 citations and was widely reported to the public (Woolley, Malone, & Chabris, 2015), to our knowledge only one replication has been reported (Engel, Woolley, Jing, Chabris, & Malone, 2014). This study used online (rather than in-person) tasks and did not include individual IQ. We therefore conducted three replication studies, reported below.

... Rather than a small link of individual IQ to group-IQ, we found that the overlap of these two traits was indistinguishable from 100%. Smart groups are (simply) groups of smart people. ... Across the three studies we saw no significant support for the hypothesized effects of women raising (or men lowering) group-IQ: All male, all female and mixed-sex groups performed equally well. Nor did we see any relationship of some members speaking more than others on either higher or lower group-IQ. These findings were weak in the initial reports, failing to survive incorporation of covariates. We attribute these to false positives. ... The present findings cast important doubt on any policy-style conclusions regarding gender composition changes cast as raising cognitive-efficiency. ...

In conclusion, across three studies groups exhibited a robust cognitive g-factor across diverse tasks. As in individuals, this g-factor accounted for approximately 50% of variance in cognition (Spearman, 1904). In structural tests, this group-IQ factor was indistinguishable from average individual IQ, and social sensitivity exerted no effects via latent group-IQ. Considering the present findings, work directed at developing group-IQ tests to predict team effectiveness would be redundant given the extremely high utility, reliability, validity for this task shown by individual IQ tests. Work seeking to raise group-IQ, like research to raise individual IQ, might find this task achievable at a task-specific level (Ritchie et al., 2013; Ritchie, Bates, & Plomin, 2015), but less amenable to general change than some have anticipated. Our attempt to manipulate scores suggested that such interventions may even decrease group performance. Instead, work understanding the developmental conditions which maximize expression of individual IQ (Bates et al., 2013) as well as on personality and cultural traits supporting cooperation and cumulation in groups should remain a priority if we are to understand and develop cognitive ability. The present experiments thus provide new evidence for a central, positive role of individual IQ in enhanced group-IQ.
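To see what "individual IQ accounted for around 80% of group-IQ differences" means concretely, here is a toy simulation (mine, not the authors' analysis): group performance is just the mean member IQ plus task noise, with the noise level chosen by hand to put variance-explained near 80%.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 500, 3

# Standardized individual IQs for each group member
iq = rng.normal(0.0, 1.0, size=(n_groups, group_size))
mean_iq = iq.mean(axis=1)

# Group performance: mean member IQ plus task-specific noise.
# Noise sd of 0.3 is chosen by hand to put variance-explained near 80%.
group_score = mean_iq + rng.normal(0.0, 0.3, size=n_groups)

r = np.corrcoef(mean_iq, group_score)[0, 1]
print(f"variance in group score explained by mean individual IQ: {r**2:.2f}")
# ~0.8, i.e., "individual IQ accounts for ~80% of group-IQ differences"
```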
Meta-Observation: Given the 1-2-3 pattern described above, one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously.

Most researchers I know in the relevant areas have not yet grasped that there is a serious problem. They might admit that "some studies fail to replicate" but don't realize the fraction might be in the 50 percent range!

More on the replication crisis in certain fields of science.

Thursday, February 23, 2017

A Professor meets the Alt-Right

Thomas Main, Professor in the School of Public Affairs at Baruch College, is working on a book about the Alt-Right, to be published by Brookings. Below you can listen to a conversation between Main and prominent Alt-Right figure Mike Enoch (pseudonym).

It's an interesting encounter between academic political theory and a new political movement that (so far) exists mostly on the internet. Both Main and Enoch take the other seriously in the discussion, leading to a clear expression of Alt-Right views on race, immigration, identity politics, and the idea of America.

See also Bannon, the Alt-Right, and the National Socialist Vision, and Identity Politics is a Dead End: Live by the Sword, Die by the Sword.


Monday, February 20, 2017

The Future of Thought, via Thought Vectors


In my opinion this is one of the most promising directions in AI. I expect significant progress in the next 5-10 years. Note that the whole problem of parsing languages like English has been subsumed in the training of the neural encoders/decoders used, e.g., in machine translation (i.e., training on pairs of translated sentences, with an abstract thought vector as the intermediate state). See Toward a Geometry of Thought:
... the space of concepts (primitives) used in human language (or equivalently, in human thought) ...  has only ~1000 dimensions, and has some qualities similar to an actual vector space. Indeed, one can speak of some primitives being closer or further from others, leading to a notion of distance, and one can also rescale a vector to increase or decrease the intensity of meaning.

... we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
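The two geometric properties mentioned above -- a notion of distance between concepts, and rescaling to change intensity of meaning -- are easy to illustrate with toy vectors (the vectors below are invented purely for illustration; real thought vectors are learned, with ~1000 dimensions):

```python
import numpy as np

# Invented toy "concept vectors" (real ones are learned, ~1000-dimensional)
good  = np.array([0.9, 0.1, 0.3, 0.0])
great = np.array([1.0, 0.2, 0.4, 0.1])
bad   = np.array([-0.8, 0.0, 0.2, 0.1])

def cosine(a, b):
    """Cosine similarity: a standard notion of closeness in concept space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(good, great))  # high: related concepts are close
print(cosine(good, bad))    # negative: opposed concepts are far apart

# Rescaling a vector changes intensity of meaning, not the concept itself
very_good = 1.5 * good
print(cosine(good, very_good))  # 1.0: same direction, stronger "volume"
```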
Geoff Hinton (from a 2015 talk at the Royal Society in London):
The implications of this for document processing are very important. If we convert a sentence into a vector that captures the meaning of the sentence, then Google can do much better searches; they can search based on what's being said in a document.

Also, if you can convert each sentence in a document into a vector, then you can take that sequence of vectors and [try to model] natural reasoning. And that was something that old fashioned AI could never do.

If we can read every English document on the web, and turn each sentence into a thought vector, you've got plenty of data for training a system that can reason like people do.

Now, you might not want it to reason like people do, but at least we can see what they would think.

What I think is going to happen over the next few years is this ability to turn sentences into thought vectors is going to rapidly change the level at which we can understand documents.

To understand it at a human level, we're probably going to need human level resources and we have trillions of connections [in our brains], but the biggest networks we have built so far only have billions of connections. So we're a few orders of magnitude off, but I'm sure the hardware people will fix that.
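Hinton's sentence-to-vector step is exactly the encoder half of the translation models mentioned at the top. Below is a minimal PyTorch sketch of the architecture (an illustration, not any specific published system; vocabulary sizes and the ~1000-dimensional bottleneck are assumptions for the example):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Reads a source sentence and compresses it into one fixed-size vector."""
    def __init__(self, vocab_size, dim=1000):  # ~1000 dims, per the estimate above
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h                                # the "thought vector"

class Decoder(nn.Module):
    """Unrolls a thought vector into a target-language sentence."""
    def __init__(self, vocab_size, dim=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens, thought):
        y, _ = self.rnn(self.embed(tokens), thought)
        return self.out(y)                      # next-token logits

# Trained on translated sentence pairs, the fixed-size intermediate state is
# forced to carry the meaning: the vector, not the words, crosses languages.
enc, dec = Encoder(vocab_size=10_000), Decoder(vocab_size=12_000)
src = torch.randint(0, 10_000, (32, 15))        # a batch of source sentences
tgt = torch.randint(0, 12_000, (32, 17))        # shifted target sentences
logits = dec(tgt, enc(src))                     # train with cross-entropy loss
```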
This is a good discussion (source of the image at top and the text excerpted below), illustrating the concept of linearity in the contexts of human eigenfaces and thought vectors. See also here.



You can audit this Stanford class! CS224n: Natural Language Processing with Deep Learning.

More references.

Thursday, February 16, 2017

Management by the Unusually Competent



How did we get ICBMs? How did we get to the moon? What are systems engineering and systems management? Why do some large organizations make rapid progress, while others spin their wheels for decades at a time? Dominic Cummings addresses these questions in his latest essay.

Photo above of Schriever and Ramo. More Dom.
... In 1953, a relatively lowly US military officer Bernie Schriever heard von Neumann sketch how by 1960 the United States would be able to build a hydrogen bomb weighing less than a ton and exploding with the force of a megaton, about 80 times more powerful than Hiroshima. Schriever made an appointment to see von Neumann at the IAS in Princeton on 8 May 1953. As he waited in reception, he saw Einstein potter past. He talked for hours with von Neumann who convinced him that the hydrogen bomb would be progressively shrunk until it could fit on a missile. Schriever told Gardner about the discussion and 12 days later Gardner went to Princeton and had the same conversation with von Neumann. Gardner fixed the bureaucracy and created the Strategic Missiles Evaluation Committee. He persuaded von Neumann to chair it and it became known as ‘the Teapot committee’ or ‘the von Neumann committee’. The newly formed Ramo-Wooldridge company, which became Thompson-Ramo-Wooldridge (I’ll refer to it as TRW), was hired as the secretariat.

The Committee concluded (February 1954) that it would be possible to produce intercontinental ballistic missiles (ICBMs) by 1960 and deploy enough to deter the Soviets by 1962, that there should be a major crash programme to develop them, and that there was an urgent need for a new type of agency with a different management approach to control the project. Although intelligence was thin and patchy, von Neumann confidently predicted on technical and political grounds that the Soviet Union would engage in the same race. It was discovered years later that the race had already been underway partly driven by successful KGB operations. Von Neumann’s work on computer-aided air defence systems also meant he was aware of the possibilities for the Soviets to build effective defences against US bombers.

‘The nature of the task for this new agency requires that over-all technical direction be in the hands of an unusually competent group of scientists and engineers capable of making systems analyses, supervising the research phases, and completely controlling experimental and hardware phases of the program… It is clear that the operation of this new group must be relieved of excessive detailed regulation by existing government agencies.’ (vN Committee, emphasis added.)

A new committee, the ICBM Scientific Advisory Committee, was created and chaired by von Neumann so that eminent scientists could remain involved. One of the driving military characters, General Schriever, realised that people like von Neumann were an extremely unusual asset. He said later that ‘I became really a disciple of the scientists… I felt strongly that the scientists had a broader view and had more capabilities.’ Schriever moved to California and started setting up the new operation but had to deal with huge amounts of internal politics as the bureaucracy naturally resisted new ideas. The Defense Secretary, Wilson, himself opposed making ICBMs a crash priority.

... Almost everybody hated the arrangement. Even the Secretary of the Air Force (Talbott) tried to overrule Schriever and Ramo. It displaced the normal ‘prime contractor’ system in which one company, often an established airplane manufacturer, would direct the whole programme. Established businesses were naturally hostile. Traditional airplane manufacturers were run very much on Taylor’s principles with rigid routines. TRW employed top engineers who would not be organised on Taylor’s principles. Ramo, also a virtuoso violinist, had learned at Caltech the value of a firm grounding in physics and an interdisciplinary approach in engineering. He and his partner Wooldridge had developed their ideas on systems engineering before starting their own company. The approach was vindicated quickly when TRW showed how to make the proposed Atlas missile much smaller and simpler, and therefore cheaper and faster to develop.

... According to Johnson, almost all the proponents of systems engineering had connections with either Caltech (where von Karman taught and JPL was born) or MIT (which was involved with the Radiation Lab and other military projects during World War 2). Bell Labs, which did R&D for AT&T, was also a very influential centre of thinking. The Jet Propulsion Laboratory (JPL) managed by Caltech also, under the pressure of repeated failure, independently developed systems management and configuration control. They became technical leaders in space vehicles. NASA, however, did not initially learn from JPL.

... Philip Morse, an MIT physicist who headed the Pentagon’s Weapons Systems Evaluation Group after the war, reflected on this resistance:
‘Administrators in general, even the high brass, have resigned themselves to letting the physical scientist putter around with odd ideas and carry out impractical experiments, as long as things experimented with are solutions or alloys or neutrons or cosmic rays. But when one or more start prying into the workings of his own smoothly running organization, asking him and others embarrassing questions not related to the problems he wants them to solve, then there’s hell to pay.’ (Morse, ‘Operations Research, What is It?’, Proceedings of the First Seminar in Operations Research, November 8–10, 1951.)



The Secret of Apollo: Systems Management in American and European Space Programs, Stephen B. Johnson.

Saturday, February 11, 2017

On the military balance of power in the Western Pacific

Some observations concerning the military balance of power in Asia. Even "experts" I have spoken to over the years seem to be confused about basic realities that are fundamental to strategic considerations.

1. Modern missile and targeting technology make the survivability of surface ships (especially carriers) questionable. Satellites can easily image surface ships and missiles can hit them from over a thousand miles away. Submarines are a much better investment and carriers may be a terrible waste of money, analogous to battleships in the WWII era. (Generals and Admirals typically prepare to fight the previous war, despite the advance of technology, often with disastrous consequences.)

2. US forward bases and surface deployments are hostages to advanced missile capability and would not survive the first days of a serious conventional conflict. This has been widely discussed, at least in some planning circles, since the 1990s. See second figure below and link.

3. PRC could easily block oil shipments to Taiwan or even Japan using Anti-Ship Ballistic Missiles (ASBM) or Anti-Ship Cruise Missiles (ASCM). This strategy is far preferable to an amphibious attack on Taiwan in response to, e.g., a declaration of independence. A simple threat against oil tankers, or perhaps the demonstration sinking of a single tanker, would be enough to cut off supplies. Responding to this threat would require attacking mobile DF21D missile launchers on the Chinese mainland, which would be highly escalatory, possibly leading to a nuclear response.

4. The strategic importance of the South China Sea, and of the artificial islands constructed there, lies primarily in the US ability to cut off the flow of oil to the PRC. The islands may enable PRC to gain dominance in the region and make US submarine operations much more difficult. US reaction to these assets is not driven by "international law" or fishing or oil rights, or even the desire to keep shipping lanes open. What is at stake is the US capability to cut off oil flow, a non-nuclear but highly threatening card it has (until now?) had at its disposal to play against China.

The map below shows the consequences of full deployments of SAM, ASCM, and ASBM weaponry on the artificial islands. Consequences extend to the Malacca Strait (through which 80% of China's oil passes) and US basing in Singapore. Both linked articles are worth reading.

CHINA’S ARTIFICIAL ISLANDS ARE BIGGER (AND A BIGGER DEAL) THAN YOU THINK

Beijing's Go Big or Go Home Moment in the South China Sea



HAS CHINA BEEN PRACTICING PREEMPTIVE MISSILE STRIKES AGAINST U.S. BASES? (Lots of satellite photos at this link, revealing extensive ballistic missile tests against realistic targets.)



Terminal targeting of a moving aircraft carrier by an ASBM like the DF21D


Simple estimates: 10 min flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. Missile course correction at a distance of ~10 km from the target allows ~10 s (assuming Mach 5-10 velocity) of maneuver, and requires only a modest angular correction. At this distance a 100 m sized target has angular size ~0.01 radians, so it should be readily detectable in an optical image. (Carriers are visible to the naked eye from space!) Final targeting at a distance of ~1 km can use a combination of optical / IR / radar sensors, which makes countermeasures difficult.
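The same back-of-envelope arithmetic in a few lines of Python (all inputs are the rough assumptions from the paragraph above):

```python
# Rough assumptions from the text above
flight_time_s = 10 * 60      # ~10 minute missile flight
carrier_speed = 25 * 0.447   # 20-30 mph, converted to m/s (~11 m/s)
mach5 = 5 * 340.0            # low end of the Mach 5-10 range, in m/s

# Position uncertainty accumulated during the missile's flight
print(f"position uncertainty: ~{carrier_speed * flight_time_s / 1e3:.0f} km")  # ~7 km

# Time available for terminal maneuver, starting the correction ~10 km out
print(f"maneuver time from 10 km at Mach 5: ~{10_000 / mach5:.0f} s")  # ~6 s, same order as ~10 s

# Angular size of a ~100 m target seen from ~10 km
print(f"angular size: ~{100 / 10_000:.3f} rad")  # 0.010 rad
```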

So hitting a moving aircraft carrier does not seem especially challenging with modern technology. The Chinese can easily test their terminal targeting technology by trying to hit, say, a very large moving truck at their ballistic missile impact range, shown above.

I do not see any effective countermeasures, and despite inflated claims concerning anti-missile defense capabilities, it is extremely difficult to stop an incoming ballistic missile with maneuver capability.


More analysis and links to strategic reports from RAND and elsewhere in this earlier post The Pivot and American Statecraft in Asia.
... These questions of military/technological capability stand prior to the prattle of diplomats, policy analysts, or political scientists. Perhaps just as crucial is whether top US and Chinese leadership share the same beliefs on these issues.

... It's hard to war game a US-China pacific conflict, even a conventional one. How long before the US surface fleet is destroyed by ASBM/ASCM? How long until forward bases are? How long until US has to strike at targets on the mainland? How long do satellites survive? How long before the conflict goes nuclear? I wonder whether anyone knows the answers to these questions with high confidence -- even very basic ones, like how well asymmetric threats like ASBM/ASCM will perform under realistic conditions. These systems have never been tested in battle.

The stakes are so high that China can just continue to establish "facts on the ground" (like building new island bases), with some confidence that the US will hesitate to escalate. If, for example, both sides secretly believe (at the highest levels; seems that Xi is behaving as if he might) that ASBM/ASCM are very effective, then sailing a carrier group through the South China Sea becomes an act of symbolism with meaning only to those that are not in the know.

Friday, February 10, 2017

Elon Musk: the BIG PROBLEMS worth working on




#1 AI
#2 Genomics

See also A Brief History of the Future, As Told To the Masters of the Universe.


Musk says he spends most of his time working on technical problems for Tesla and SpaceX, with half a day per week at OpenAI.

Thursday, February 09, 2017

Ratchets Within Ratchets



For those interested in political philosophy, or Trump's travel ban, I recommend this discussion on Scott Aaronson's blog, which features a commenter calling himself Boldmug (see also Bannon and Moldbug in the news recently ;-)

Both Scott and Boldmug seem to agree that scientific/technological progress is a positive ratchet caught within a negative ratchet of societal and political decay.
Boldmug Says:
Comment #181 January 27th, 2017 at 5:26 pm

Scott: An interesting term, “ratchet of progress.” Nature is full of ratchets. But ratchets of progress — extropic ratchets — are the exceptional case. Most ratchets are entropic ratchets, ratchets of decay.

You happen to live inside the ratchet of progress that is science and engineering. That ratchet produces beautiful wonders like seedless watermelons. It’s true that Talleyrand said, “no one who remembers the sweetness of life before the Revolution can even imagine it,” but even Louis XIV had to spit the seeds out of his watermelons.

This ratchet is 400 to 2400 years old, depending on how you count. The powers and ideologies that be are very good at taking credit for science and engineering, though it is much older than any of them. It is a powerful ratchet — not even the Soviet system could kill or corrupt science entirely, although it’s always the least political fields, like math and physics, that do the best.

But most ratchets are entropic ratchets of decay. The powers that be don’t teach you to see the ratchets of decay. You have to look for them with your own eyes.

The scientists and engineers who created the Antikythera mechanism lived inside a ratchet of progress. But that ratchet of progress lived inside a ratchet of decay, which is why we didn’t have an industrial revolution in 100BC. Instead we had war, tyranny, stagnation and (a few hundred years later) collapse.

Lucio Russo (https://en.wikipedia.org/wiki/Lucio_Russo) wrote an interesting, if perhaps a little overstated, book, on the Hellenistic (300-150BC, not to be confused with the Hellenic era proper) golden age of science. We really have no way of knowing how close to a scientific revolution the Alexandrians came. But it was political failure, not scientific failure, that destroyed their world. The ratchet of progress was inside a ratchet of decay. ...
It doesn't appear that Scott responded to this dig by Boldmug:
Boldmug Says:
Comment #153 January 27th, 2017 at 11:51 am

... Coincidentally, the latter is the side [THE LEFT] whose Jedi mind tricks are so strong, they almost persuaded someone with a 160 IQ to castrate himself.

And the Enlightenment? You mean the Enlightenment that guillotined Lavoisier? “The Republic has no need of savants.” Add 1789 and even 1641 to that list. Why would a savant pick Praisegod Barebones over Prince Rupert?

You might notice that in our dear modern world, whose quantum cryptography and seedless watermelons are so excellent, “the Republic has no need of savants” is out there still. Know anyone working on human genetics? ...
Don't believe in societal decay? Read this recent tour-de-force paper by deCODE researchers in Iceland, who have established beyond doubt the (long-term) dysgenic nature of modern society:
Selection against variants in the genome associated with educational attainment
Proceedings of the National Academy of Sciences of the United States of America (PNAS)

Epidemiological and genetic association studies show that genetics play an important role in the attainment of education. Here, we investigate the effect of this genetic component on the reproductive history of 109,120 Icelanders and the consequent impact on the gene pool over time. We show that an educational attainment polygenic score, POLYEDU, constructed from results of a recent study is associated with delayed reproduction (P < 10^(−100)) and fewer children overall. The effect is stronger for women and remains highly significant after adjusting for educational attainment. Based on 129,808 Icelanders born between 1910 and 1990, we find that the average POLYEDU has been declining at a rate of ∼0.010 standard units per decade, which is substantial on an evolutionary timescale. Most importantly, because POLYEDU only captures a fraction of the overall underlying genetic component the latter could be declining at a rate that is two to three times faster.
Note: these "educational attainment" variants are mostly variants which influence cognitive ability.

From the Discussion section of the paper:
... The main message here is that the human race is genetically far from being stagnant with respect to one of its most important traits. It is remarkable to report changes in POLYEDU that are measurable across the several decades covered by this study. In evolutionary time, this is a blink of an eye. However, if this trend persists over many centuries, the impact could be profound.
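The compounding is easy to check. A rough projection using the paper's numbers follows; the 2.5x multiplier for the underlying genetic component is an assumption within the paper's stated 2-3x range, and no conversion to phenotypic effect is attempted since the paper doesn't supply one.

```python
rate_observed = 0.010                  # SD decline per decade (measured POLYEDU)
rate_underlying = 2.5 * rate_observed  # paper: true component may fall 2-3x faster

for decades in (10, 30):               # one century, three centuries
    print(f"{decades * 10} years: "
          f"{decades * rate_observed:.2f} SD observed, "
          f"~{decades * rate_underlying:.2f} SD underlying (assumed)")
# 100 years: 0.10 SD observed, ~0.25 SD underlying
# 300 years: 0.30 SD observed, ~0.75 SD underlying
```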

Monday, February 06, 2017

A Brief History of the Future, as told to the Masters of the Universe

This is a summary of remarks made at two not-Davos meetings, one in NYC and the other in LA. Most attendees were allocators of significant capital.

See also these two articles in Nautilus Magazine: Super-intelligent Humans Are Coming, Don't Worry, Smart Machines Will Take Us With Them.

Most of these topics have been covered in more detail in recent blog posts -- see relevant labels at bottom.

An Inflection Point in Human History, from recent Technological Developments

Genomics and Machine Learning:

Inexpensive genotyping has produced larger and larger datasets of human genomes + phenotypes, approaching sample sizes of a million individuals. Machine learning applied to this data has led to the ability to predict complex human traits (e.g., height, intelligence) as well as disease risk (e.g., type 1 diabetes, cancer, etc.). Among the applications of these advances is the ability to select embryos in IVF to avoid negative outcomes, and even to produce highly superior outcomes.
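A toy sketch of the machine-learning step, in the spirit of the sparse-regression (compressed-sensing) approaches applied to genomic prediction. The synthetic data and all parameter choices below are mine, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 2000, 5000, 100

# Synthetic genotypes: minor-allele counts (0/1/2) at each SNP
G = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# A sparse additive trait: only a small fraction of SNPs have real effects
beta = np.zeros(n_snps)
causal = rng.choice(n_snps, size=n_causal, replace=False)
beta[causal] = rng.normal(0.0, 1.0, size=n_causal)
genetic = G @ beta
y = genetic + rng.normal(0.0, genetic.std(), size=n_people)  # ~50% heritability

# L1-penalized regression learns a sparse predictor on a training half;
# alpha is set by hand here -- in practice it would be cross-validated
model = Lasso(alpha=0.1).fit(G[:1000], y[:1000])
r = np.corrcoef(model.predict(G[1000:]), y[1000:])[0, 1]
print(f"out-of-sample predictor/phenotype correlation: {r:.2f}")
```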

CRISPR -- a breakthrough technology for gene editing -- will find applications in medicine, agriculture, and eventually human reproduction (editing may eventually supplant selection).

The human species is poised, within the next generation, to take control of its own evolution. It is likely that affluent families will be the first to take advantage of these new capabilities, leading to even greater inequality in society.

Machine Learning and AI:

Routine tasks are being automated through machine intelligence, leading to pressure on low-skill human workers. Autonomous vehicles, probably no more than a decade away, will displace many jobs, such as those of truck and taxi drivers. The automobile industry is likely to experience massive creative destruction: the most valuable part of the car will be its brain (software, sensors, and cloud communication capability), not its drivetrain. The most likely winners in this race are not the major automakers.

AIs are already capable of outperforming even the best humans on any narrow task: e.g., Chess, Go, Texas Hold’em (Poker), facial recognition, voice recognition, etc. Many of these AIs are built using Deep Learning algorithms, which take advantage of neural net architectures. A neural net is an abstract network modeled after the human brain; each node in the network has a different connection strength to other nodes. While a neural net can be trained to outperform humans (see tasks listed above), the internal workings of the net tend to be mysterious even to the human designers. This is unlike the case of structured code, written in familiar high level programming languages. Neural net algorithms run better on specialized hardware, such as GPUs. Google has produced a special chipset, called the TPU, which now runs ~20% of all compute at its data centers. Google does not sell the TPU, and industry players and startups are racing to develop similar chips for neural net applications. (Nvidia is a leader in this new area.)

Neural nets used in language translation have mapped out an abstract ~1000 dimensional space which coincides with the space of “primitive concepts” used in human thought and language. It appears that rapid advances in the ability to read human generated text (e.g., Wikipedia) with comprehension will follow in the coming decade. It seems possible that AGI -- Artificial General Intelligence (analogous to a human intelligence, with a theory of the world, general knowledge about objects in the universe, etc.) -- will emerge within our lifetimes.




Saturday, February 04, 2017

Baby Universes in the Laboratory




This was on the new books table at our local bookstore. I had almost forgotten about doing an interview and corresponding with the author some time ago. See also here and here.

The book is a well-written overview of some of the more theoretical aspects of inflationary cosmology, the big bang, the multiverse, etc. It also fleshes out some of the individual stories of the physicists involved in this research.
Kirkus Reviews: ... In her elegant and perceptive book, Merali ... unpacks the science behind what we know about our universe’s beginnings and traces the paths that many renowned researchers have taken to translate these insights to new heights: the creation of a brand-new “baby” universe, and not an empty one, either, but one with its own physics, matter, and (possibly) life. ... Among the most significant scientific advances in the last half-century is the discovery that our universe is inflating exponentially, a theory that led to many more breakthroughs in physics and cosmology. Yet the big question—how did the universe form, triggering inflation to begin with?—remains opaque. Merali, who works at the Foundational Questions Institute, which explores the boundaries of physics and cosmology, effortlessly explains the complex theories that form the bedrock of this concept, and she brings to life the investigators who have dedicated much of their careers in pursuit of fundamental truths. She also neatly incorporates discussions of philosophy and religion—after all, nothing less than grand design itself is at stake here—without any heavy-handedness or agenda. Over the course of several years, she traveled the world to interview firsthand the most important figures behind the idea of laboratory universe creation ... and the anecdotes she includes surrounding these conversations make her portrait even more compelling.



Here are two illustrations of how a baby universe pinches off from the universe in which it was created. This is all calculable within general relativity, modulo an issue with quantum smoothing of a singularity. The remnant of the baby universe appears to outside observers as a black hole. But inside one finds an exponentially growing region of spacetime.






Buchanan and Nader on the Trump presidency



I highly recommend this podcast from Radio Open Source and Christopher Lydon. You may be surprised at how much two former independent presidential candidates, one on the Left and the other on the Right, can agree on. The common factor is their love for this country and concern for ordinary people. Listen carefully to what they say about Hillary.

If the embedded player doesn't work just click the link below.
The Great Trump Debate: Pat Buchanan and Ralph Nader

On Super Bowl weekend, we’ve lined up a couple of hall of fame political players who run outside Establishment lines to help us watch the game that’s unfolding so far in the Trump White House. Pat Buchanan was the pit-bull strategist in Richard Nixon’s White House; he’s a Latin-Mass Catholic, a cultural conservative and America First nationalist who’s turned sharply anti-Empire, calmly post-Cold War with Russia and flat-out anti-war in the Middle East. Ralph Nader was Mr. Citizen as auto-safety crusader, then first among the relentless Raiders against corporate power, and a prickly third-party candidate in three presidential campaigns.

It was this left-right pair that practically called the game for Trump way back in August 2015. Both said that a man backed by his own billionaire funds and showbiz glam could run the ball all the way to the White House.

After the election, though, both men are turning their eyes to the man who may be quarterbacking the presidency: Steve Bannon.

Buchanan—a “paleoconservative” who coined the term “America First,” essentially drafting the Bannon playbook—now hopes that Trump doesn’t drop the ball after his executive order blitz. “Republicans have waited a long time for this,” Buchanan says. “[Trump] ought to keep moving on ahead, take the hits he’s gonna take.” If he keeps it up, Bannon might bring the political right “very close to a political revolution.”

Nader, as a green-tinted independent on the left, understands the enthusiasm that his longtime sparring partner has for Trumpism. Yet he also sees the contradictions and challenges Trump presents, not only for Buchanan's vision of America, but also for Nader's own: Both men share a strong anti-corporate stance and are worried about the Goldman Sachs and Wall Street executives Trump has packed his cabinet with. What Buchanan and Nader fear most is that a thin-skinned president, egged on by his hawkish advisors, could spark a war with Iran if provoked.

Friday, February 03, 2017

When UC Berkeley allowed Free Speech

Hail Libratus! AI beats human pros in no-limit Texas Hold'em



AI already dominates humans in any narrowly defined task. Perhaps another 30-50 years until AGI?
IEEE Spectrum: Humanity has finally folded under the relentless pressure of an artificial intelligence named Libratus in a historic poker tournament loss. ...

Libratus lived up to its “balanced but forceful” Latin name by becoming the first AI to beat professional poker players at heads-up, no-limit Texas Hold'em. The tournament was held at the Rivers Casino in Pittsburgh from 11–30 January. Developed by Carnegie Mellon University, the AI won the “Brains vs. Artificial Intelligence” tournament against four poker pros by US $1,766,250 in chips over 120,000 hands (games). Researchers can now say that the victory margin was large enough to count as a statistically significant win, meaning that they could be at least 99.98 percent sure that the AI victory was not due to chance.

... the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

... Libratus played the same overall strategy against all the players based on three main components:

First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core hours on a new supercomputer called Bridges.

Second, the AI would perform “end-game solving” during each hand to precisely calculate how much it could afford to risk in the third and fourth betting rounds (the “turn” and “river” rounds in poker parlance). Sandholm credits the end-game solver algorithms as contributing the most to the AI victory. The poker pros noticed Libratus taking longer to compute during these rounds and realized that the AI was especially dangerous in the final rounds, but their “bet big early” counter strategy was ineffective.

Third, Libratus ran background computations during each night of the tournament so that it could fix holes in its overall strategy. That meant Libratus was steadily improving its overall level of play and minimizing the ways that its human opponents could exploit its mistakes. It even prioritized fixes based on whether or not its human opponents had noticed and exploited those holes. By comparison, the human poker pros were able to consistently exploit strategic holes in the 2015 tournament against the predecessor AI called Claudico.

... The Libratus victory translates into an astounding winning rate of 14.7 big blinds per 100 hands in poker parlance—and that’s a very impressive winning rate indeed considering the AI was playing four human poker pros. Prior to the start of the tournament, online betting sites had been giving odds of 4:1 with Libratus seen as the underdog.
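The article describes Libratus only at a high level, but this line of poker AIs is built around counterfactual regret minimization (CFR). Here is a minimal CFR implementation for Kuhn poker, the standard three-card toy game -- a sketch of the technique, not Libratus itself:

```python
import random
import numpy as np

N_ACTIONS = 2  # pass ('p') or bet ('b')

class Node:
    """Regret and strategy accumulators for one information set."""
    def __init__(self):
        self.regret_sum = np.zeros(N_ACTIONS)
        self.strategy_sum = np.zeros(N_ACTIONS)

    def strategy(self, reach_weight):
        s = np.maximum(self.regret_sum, 0.0)     # regret matching
        total = s.sum()
        s = s / total if total > 0 else np.full(N_ACTIONS, 0.5)
        self.strategy_sum += reach_weight * s
        return s

    def average_strategy(self):
        t = self.strategy_sum.sum()
        return self.strategy_sum / t if t > 0 else np.full(N_ACTIONS, 0.5)

nodes = {}

def cfr(cards, history, p0, p1):
    """Returns expected utility for the player about to act."""
    player = len(history) % 2
    # Terminal payoffs (each player antes 1; a bet adds 1)
    if len(history) >= 2:
        higher = cards[player] > cards[1 - player]
        if history[-1] == 'p':
            if history == 'pp':                  # check-check: showdown for 1
                return 1 if higher else -1
            return 1                             # opponent folded to a bet
        if history[-2:] == 'bb':                 # bet-call: showdown for 2
            return 2 if higher else -2
    info_set = str(cards[player]) + history
    node = nodes.setdefault(info_set, Node())
    strat = node.strategy(p0 if player == 0 else p1)
    util = np.zeros(N_ACTIONS)
    for a, move in enumerate('pb'):
        if player == 0:
            util[a] = -cfr(cards, history + move, p0 * strat[a], p1)
        else:
            util[a] = -cfr(cards, history + move, p0, p1 * strat[a])
    node_util = strat @ util
    # Counterfactual regret, weighted by the opponent's reach probability
    node.regret_sum += (p1 if player == 0 else p0) * (util - node_util)
    return node_util

cards, value, iters = [1, 2, 3], 0.0, 100_000
for _ in range(iters):
    random.shuffle(cards)
    value += cfr(cards, '', 1.0, 1.0)
print(value / iters)  # converges to ~ -1/18, the known value for player 1
for k in sorted(nodes):
    print(k, nodes[k].average_strategy().round(2))
```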
Here's a recent paper on deep learning and poker. The program DeepStack is not Libratus (thanks to a commenter for pointing this out), but both have managed to outperform human players.
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker

https://arxiv.org/abs/1701.01724

Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from self-play games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold'em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.

Wednesday, February 01, 2017

A far greater peril than Donald Trump

Richard Fernandez:
The more fundamental unsolved problem is why the progressive project collapsed in the first place. How could something at the seeming height of its power, in control of the EU, the US Federal government, the UN, the press, the academe and industry, collapse in one fatal year? The globalist conference in Davos still doesn't know. In that ignorance lurks a peril far greater than DJT.
The rapid collapse of a false worldview -- have we seen this before?

Hans Christian Andersen, The Emperor's New Clothes:
... Both the swindlers begged him to be so kind as to come near to approve the excellent pattern, the beautiful colors. They pointed to the empty looms, and the poor old minister stared as hard as he dared. He couldn't see anything, because there was nothing to see. "Heaven have mercy," he thought. "Can it be that I'm a fool? I'd have never guessed it, and not a soul must know. Am I unfit to be the minister? It would never do to let on that I can't see the cloth."

... they all joined the Emperor in exclaiming, "Oh! It's very pretty," ... "Magnificent! Excellent! Unsurpassed!"

... "Oh, how fine are the Emperor's new clothes! Don't they fit him to perfection? And see his long train!" Nobody would confess that he couldn't see anything, for that would prove him either unfit for his position, or a fool.

... "But he hasn't got anything on," a little child said.

"Did you ever hear such innocent prattle?" said its father. And one person whispered to another what the child had said, "He hasn't anything on. A child says he hasn't anything on."

"But he hasn't got anything on!" the whole town cried out at last.

The Emperor shivered, for he suspected they were right. But he thought, "This procession has got to go on." So he walked more proudly than ever, as his noblemen held high the train that wasn't there at all.
Richard Feynman:
The first principle is that you must not fool yourself and you are the easiest person to fool.
Antonio Gramsci:
Pessimism of the Intellect, Optimism of the Will.

Sunday, January 29, 2017

The Making of Blade Runner: Like Tears in Rain

I always wondered how Dick's Do Androids Dream of Electric Sheep? became Ridley Scott's cyberpunk noir Blade Runner. Watch this documentary to find out!

See also Philip K. Dick's First Science Fiction Story.







Not Davos

Sorry again for the pause in blogging -- I've been on the road. These are photos from meetings I attended in Los Angeles and on the east coast. The selfie below is with Bill Richardson, former Governor of New Mexico and Secretary of Energy.













Monday, January 23, 2017

Seminars, Colloquia, and Slides I have known

I think I've made this Google Drive folder publicly readable. It contains slides for many talks I've given over the years, going back to roughly 2000.

Topics include black hole information, monsters in curved space, entanglement entropy, dark energy, insider's guide to startups, the financial crisis of 2008, foundations of quantum mechanics, and more.






(Second slide is from this talk given at the Institute for Quantum Information at Caltech.)

Wednesday, January 18, 2017

Oppenheimer on Bohr (1964 UCLA)



I came across this 1964 UCLA talk by Oppenheimer, on his hero Niels Bohr.

Oppenheimer: Mathematics is "an immense enlargement of language, an ability to talk about things which in words would be simply inaccessible."

I find it strange that psychometricians usually define "verbal ability" over a vocabulary set that excludes words from mathematics and other scientific areas. A person's verbal score is enhanced by knowing many (increasingly obscure) words for the same concept, as opposed to knowing words which describe new concepts beyond those which appear in ordinary language.

Is it more valuable to have mastery of these words: esoteric, abstruse, enigmatic, cryptic, recondite, inscrutable, opaque, ... (all describe similar concepts; they are synonyms for not easily understood),

or these: mean, variance, standard deviation, fluctuation, scaling, dimensionality, eigenvector, orthogonal, kernel, null space (these describe distinct but highly useful concepts not found in ordinary language)?

Among the simplest (and most useful) mathematical words/concepts that flummox ordinary people are statistical terms such as mean, variance, standard deviation, etc. One could be familiar with all of these words and concepts, yet obtain a low score on a test of verbal ability due to an insufficiently large grasp of (relatively useless) esoteric synonyms.

See also Thought vectors and the dimensionality of the space of concepts , Toward a Geometry of Thought and High V, Low M.

Sunday, January 15, 2017

Dangerous Knowledge and Existential Risk (Dominic Cummings)

Dominic Cummings begins a new series of blog posts. Highly recommended!

It's worth noting a few "factor of a million" advances that have happened recently, largely due to physical science, applied mathematics, and engineering:

1. Destructive power of an H-bomb is a million times greater than that of conventional explosives. This advance took ~20 years.

2. Computational power (Moore's Law) has advanced a million times over a roughly similar timescale.

3. Genome sequencing (and editing) capabilities have improved similarly, just in the 21st century.

How much have machine intelligence and AI progressed, say, in the last 20 years? If it isn't a factor of a million (whatever that means in this context), it soon will be ...
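For calibration: a factor of 10^6 over ~20 years corresponds to a doubling roughly every year.

```python
import math

factor, years = 1e6, 20
doublings = math.log2(factor)  # ~19.9 doublings
print(f"{years / doublings:.2f} years per doubling")  # ~1.0
```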
Dominic Cummings: ... The big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ political institutions ...

... Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10^7 while its effects over ten years could be on the scale of ~10^8 – 10^9 people and ~£10^12: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10^8) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10^9) or even all humans (~10^10). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are.

It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one gram shaped-charge to identify a relevant face and blow it up. Military logic is driving autonomy. ...
Dangers have increased, but quality of decision making and institutions has not:
... The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they now face crises involving 10^2 – 10^3 times more physical destruction moving at least 10^3 times faster. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

... Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard?

... The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. ...

Wednesday, January 11, 2017

Brexit in the Multiverse: Dominic Cummings on the Vote Leave campaign


It's not entirely an exaggeration to say that my friend Dominic Cummings both kept the UK out of the Euro and allowed it to (perhaps) escape the clutches of the EU. Whether or not you consider these outcomes to be positive, one can't deny the man his influence on history.
Wikipedia: Dominic Mckenzie Cummings (born November 1971)[1] is a British political advisor and strategist.

He served as the Campaign Director of Vote Leave, the official and successful campaign in favour of leaving the European Union for the United Kingdom European Union membership referendum, 2016.[2] He is a former special adviser to Michael Gove. He has a reputation for both his intelligence and divisiveness.

... From 1999 to 2002, Cummings was campaign director at Business for Sterling, the campaign against the UK joining the Euro.

... Cummings worked for Michael Gove from 2007 to January 2014, first in opposition and then as a special adviser in the Department of Education after the 2010 general election. He was Gove's chief of staff,[4] an appointment blocked by Andy Coulson until his own resignation.[5][7] In this capacity Cummings wrote a 240-page essay, "Some thoughts on education and political priorities",[8] about transforming Britain into a "meritocratic technopolis",[4] described by Patrick Wintour as "either mad, bad or brilliant – and probably a bit of all three."[7] He became known for his blunt style and "not suffering fools gladly", and as an idealist.

... Dominic Cummings became Campaign Director of Vote Leave upon the creation of the organisation in October 2015. He is credited with having created the official slogan of Vote Leave, "Take back control" and with being the leading strategist of the campaign.
Posts about Dom on this blog.

How did he do it? Perhaps we can learn from Bismarck, a historical figure Dom admires greatly -- see Brexit, Victory over the Hollow Men.
The scale of Bismarck's triumph cannot be exaggerated. He alone had brought about a complete transformation of the European international order. He had told those who would listen what he intended to do, how he intended to do it, and he did it. He achieved this incredible feat without commanding an army, and without the ability to give an order to the humblest common soldier, without control of a large party, without public support, indeed, in the face of almost universal hostility, without a majority in parliament, without control of his cabinet, and without a loyal following in the bureaucracy.
For a detailed 20 thousand word account of the Brexit campaign, including a meditation on the problem of causality in History, and the contingency of events in our multiverse, and the unreasonable effectiveness of physicists, and much, much more, see this recent post on Dom's blog:
On the referendum #21: Branching histories of the 2016 referendum and ‘the frogs before the storm’

... Why and how? The first draft of history was written in the days and weeks after the 23 June and the second draft has appeared over the past few weeks in the form of a handful of books. There is no competition between them. Shipman’s is by far the best and he is the only one to have spoken to key people. I will review it soon. One of his few errors is to give me the credit for things that were done by others, often people in their twenties like Oliver Lewis, Jonny Suart, and Cleo Watson who, unknown outside the office, made extreme efforts and ran rings around supposed ‘experts’. His book has encouraged people to exaggerate greatly my importance.

I have been urged by some of those who worked on the campaign to write about it. I have avoided it, and interviews, for a few reasons (though I had to write one blog to explain that with the formal closing of VL we had made the first online canvassing software that really works in the UK freely available HERE). For months I couldn’t face it. The idea of writing about the referendum made me feel sick. It still does but a bit less.

For about a year I worked on this project every day often for 18 hours and sometimes awake almost constantly. Most of the ‘debate’ was moronic as political debate always is. Many hours of life I’m never getting back were spent dealing with abysmal infighting among dysfunctional egomaniacs while trying to build a ~£10 million startup in 10 months when very few powerful people thought the probability of victory was worth the risk of helping us. ...

... Discussions about things like ‘why did X win/lose?’ are structured to be misleading and I could not face trying to untangle everything. There are strong psychological pressures that lead people to create post facto stories that seem to add up to ‘I always said X and X happened.’ Even if people do not think this at the start they rapidly construct psychologically appealing stories that overwrite memories. Many involved with this extraordinary episode feel the need to justify themselves and this means a lot of rewriting of history. I also kept no diary so I have no clear source for what I really thought other than some notes here and there. I already know from talking to people that my lousy memory has conflated episodes, tried to impose patterns that did not actually exist and so on – all the usual psychological issues. To counter all this in detail would require going through big databases of emails, printouts of appointment diaries, notebooks and so on, and even then I would rarely be able to reconstruct reliably what I thought. Life’s too short.

I’ve learned over the years that ‘rational discussion’ accomplishes almost nothing in politics, particularly with people better educated than average. Most educated people are not set up to listen or change their minds about politics, however sensible they are in other fields. But I have also learned that when you say or write something, although it has roughly zero effect on powerful/prestigious people or the immediate course of any ‘debate’, you are throwing seeds into a wind and are often happily surprised. A few years ago I wrote something that was almost entirely ignored in SW1 [Southwest London] but someone at Harvard I’d never met read it. This ended up having a decisive effect on the referendum.

A warning. Politics is not a field which meets the two basic criteria for true expertise (see below). An effect of this is that arguments made by people who win are taken too seriously. People in my position often see victory as confirmation of ideas they had before victory but people often win for reasons they never understand or even despite their own efforts. Cameron’s win in 2015 was like this – he fooled himself about some of the reasons why he’d won and this error contributed to his errors on the referendum. Maybe Leave won regardless of or even despite my ideas. Maybe I’m fooling myself like Cameron. Some of my arguments below have as good an empirical support as is possible in politics (i.e. not very good objectively) but most of them do not even have that. Also, it is clear that almost nobody agrees with me about some of my general ideas. It is more likely that I am wrong than 99% of people who work in this field professionally. Still, cognitive diversity is inherently good for political analysis so I’ll say what I think and others will judge if there’s anything to learn. ...
After reading these 20 thousand words, perhaps you'll have an opinion as to whether Dom, one of the most successful and experienced observers (and users!) of democracy, agrees with Robert Heinlein that The Gulf is Deep ;-)

Monday, January 09, 2017

The Gulf is Deep (Heinlein)


The novella Gulf predates almost all of Heinlein's novels. Online version. The book Friday (1982) is a loose sequel.
Wikipedia: Gulf is a novella by Robert A. Heinlein, originally published as a serial in the November and December 1949 issues of Astounding Science Fiction and later collected in Assignment in Eternity. It concerns a secret society of geniuses who act to protect humanity. ...

The story postulates that humans of superior intelligence could, if they banded together and kept themselves genetically separate, create a new species. In the process they would develop into a hidden and benevolent "ruling" class.
Do you still believe in Santa Claus?
He stopped and brooded. “I confess to that same affection for democracy, Joe. But it’s like yearning for the Santa Claus you believed in as a child. For a hundred and fifty years or so democracy, or something like it, could flourish safely. The issues were such as to be settled without disaster by the votes of common men, befogged and ignorant as they were. But now, if the race is simply to stay alive, political decisions depend on real knowledge of such things as nuclear physics, planetary ecology, genetic theory, even system mechanics. They aren’t up to it, Joe. With goodness and more will than they possess less than one in a thousand could stay awake over one page of nuclear physics; they can’t learn what they must know.”

Gilead brushed it aside. “It’s up to us to brief them. Their hearts are all right; tell them the score—they’ll come down with the right answers.”

“No, Joe. We’ve tried it; it does not work. As you say, most of them are good, the way a dog can be noble and good. ... Reason is poor propaganda when opposed by the yammering, unceasing lies of shrewd and evil and self-serving men. The little man has no way to judge and the shoddy lies are packaged more attractively. There is no way to offer color to a colorblind man, nor is there any way for us to give the man of imperfect brain the canny skill to distinguish a lie from a truth.

“No, Joe. The gulf between us and them is narrow, but it is very deep. We cannot close it.”

China’s Crony Capitalism: The Dynamics of Regime Decay (Minxin Pei)


Minxin Pei is an exceptional observer of modern Chinese politics, though he tends toward pessimism. In his new book he has assembled a dataset of 260 major corruption cases involving officials at the highest levels, covering roughly the last 25 years.

There is no doubt that corruption is a major problem in China. Is it merely a quantitative impediment to efficiency, or an existential threat to the CCP regime? See also The truth about the Chinese economy.
China’s Crony Capitalism: The Dynamics of Regime Decay
Minxin Pei

When Deng Xiaoping launched China on the path to economic reform in the late 1970s, he vowed to build “socialism with Chinese characteristics.” More than three decades later, China’s efforts to modernize have yielded something very different from the working people’s paradise Deng envisioned: an incipient kleptocracy, characterized by endemic corruption, soaring income inequality, and growing social tensions. China’s Crony Capitalism traces the origins of China’s present-day troubles to the series of incomplete reforms from the post-Tiananmen era that decentralized the control of public property without clarifying its ownership.

Beginning in the 1990s, changes in the control and ownership rights of state-owned assets allowed well-connected government officials and businessmen to amass huge fortunes through the systematic looting of state-owned property—in particular land, natural resources, and assets in state-run enterprises. Mustering compelling evidence from over two hundred corruption cases involving government and law enforcement officials, private businessmen, and organized crime members, Minxin Pei shows how collusion among elites has spawned an illicit market for power inside the party-state, in which bribes and official appointments are surreptitiously but routinely traded. This system of crony capitalism has created a legacy of criminality and entrenched privilege that will make any movement toward democracy difficult and disorderly.

Rejecting conventional platitudes about the resilience of Chinese Communist Party rule, Pei gathers unambiguous evidence that beneath China’s facade of ever-expanding prosperity and power lies a Leninist state in an advanced stage of decay.
Pei discusses his book at Stanford's Center on Democracy, Development, and the Rule of Law in the video below. Here is another video with an excellent panel discussion beginning one hour in.



This debate from a few years ago between Pei and venture capitalist / optimist / apologist Eric X. Li is very good. James Fallows is the moderator.

Sunday, January 08, 2017

AlphaGo (BetaGo?) Returns

Rumors over the summer suggested that AlphaGo had some serious problems that needed to be fixed -- e.g., whole lines of play that it pursued poorly -- despite its thrashing of one of the world's top players in a highly publicized match. But tuning a neural net is trickier than tuning, for example, an expert system or a more explicitly defined algorithm...
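To see why, here is a minimal sketch (invented for illustration -- the feature names and the toy network are my assumptions, not anything from DeepMind). In an explicit evaluator a weak rule can be located and patched by hand; in a neural net the same knowledge is distributed across all the weights, so fixing a systematic weakness means retraining, and every gradient step can quietly shift behavior on positions that were previously handled well.

```python
import numpy as np

# Explicit, rule-based evaluator: a weak rule can be found and fixed in place.
def rule_based_score(features):
    # 'features' holds hand-crafted board features; the names are made up.
    score = 2.0 * features["territory"] + 1.5 * features["captures"]
    # Suppose play in ladder positions turns out to be weak: an engineer can
    # locate this single term and adjust its coefficient directly.
    score -= 0.5 * features["bad_ladder"]
    return score

# Tiny neural evaluator: no single weight encodes "ladders".
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 3))   # input -> hidden
W2 = rng.normal(size=(1, 16))   # hidden -> output

def neural_score(x):
    return float(W2 @ np.tanh(W1 @ x))

# You cannot edit a weight by hand to fix one line of play; instead you
# retrain on positions that expose the weakness. One gradient step toward a
# corrected target y on a problem position x moves *every* weight a little:
def sgd_step(x, y, lr=1e-2):
    global W1, W2
    h = np.tanh(W1 @ x)                         # hidden activations
    err = float(W2 @ h) - y                     # scalar prediction error
    delta = err * (W2.ravel() * (1.0 - h**2))   # backprop through tanh
    W2 -= lr * err * h[None, :]
    W1 -= lr * np.outer(delta, x)

x_bad = np.array([0.1, -0.3, 0.9])   # a position the net misjudges
for _ in range(100):
    sgd_step(x_bad, y=1.0)           # nudge the whole net toward the right answer
```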

AlphaGo (or its successor) has quietly returned, shocking the top players in the world.
Fortune: In a series of unofficial online games, an updated version of Google’s AlphaGo artificial intelligence has compiled a 60-0 record against some of the game’s premier players. Among the defeated, according to the Wall Street Journal, were China’s Ke Jie, reigning world Go champion.

The run follows AlphaGo’s defeat of South Korea’s Lee Se-dol in March of 2016, in a more official setting and using a previous version of the program.

The games were played by the computer through online accounts dubbed Magister and Master—names that proved prophetic. As described by the Journal, the AI’s strategies were unconventional and unpredictable, including moves that only revealed their full implications many turns later. That pushed its human opponents into deep reflections that mirror the broader questions posed by computer intelligence.

“AlphaGo has completely subverted the control and judgment of us Go players,” wrote Gu Li, a grandmaster defeated by the program, in an online post. “When you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”

Another Go player, Ali Jabarin, described running into Ke Jie after he had been defeated by the program. According to Jabarin, Jie was “a bit shocked . . . just repeating ‘it’s too strong’.”
As originally reported in the Wall Street Journal:
WSJ: A mysterious character named “Master” has swept through China, defeating many of the world’s top players in the ancient strategy game of Go.

Master played with inhuman speed, barely pausing to think. With a wide-eyed cartoon fox as an avatar, Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China. ...

Master revealed itself Wednesday as an updated version of AlphaGo, an artificial-intelligence program designed by the DeepMind unit of Alphabet Inc.’s Google.

AlphaGo made history in March by beating South Korea’s top Go player in four of five games in Seoul. Now, under the guise of a friendly fox, it has defeated the world champion.

It was dramatic theater, and the latest sign that artificial intelligence is peerless in solving complex but defined problems. AI scientists predict computers will increasingly be able to search through thickets of alternatives to find patterns and solutions that elude the human mind.

Master’s arrival has shaken China’s human Go players.

“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.” ...
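The "thickets of alternatives" the Journal describes is game-tree search. AlphaGo itself couples Monte Carlo tree search with neural networks that propose and evaluate moves; as a rough, generic illustration of the underlying idea -- a sketch only, not AlphaGo's algorithm, with a hypothetical game interface defined just for this example -- here is plain negamax with alpha-beta pruning on a toy game:

```python
def negamax(game, state, depth, alpha=-float("inf"), beta=float("inf")):
    """Best achievable score from the perspective of the player to move."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # static evaluation of the leaf
    best = -float("inf")
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        # The opponent's best result is the negative of ours.
        score = -negamax(game, child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                    # prune: the opponent avoids this line
            break
    return best

class Nim:
    """Toy game: remove 1 or 2 stones from a pile; taking the last stone wins."""
    def is_terminal(self, pile):
        return pile == 0
    def evaluate(self, pile):
        # From the mover's perspective: an empty pile means the previous
        # player took the last stone, so the player to move has lost.
        return -1 if pile == 0 else 0
    def legal_moves(self, pile):
        return [m for m in (1, 2) if m <= pile]
    def apply(self, pile, move):
        return pile - move

print(negamax(Nim(), 7, depth=10))   # prints 1: a pile of 7 is a win for the mover
```

Go's branching factor (~250 moves per position) makes exhaustive search like this hopeless, which is why AlphaGo's learned move priors and position evaluations matter: they prune the thicket before it is ever grown.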
We are witness to the psychological shock of a species encountering, for the first time, an alien and superior intelligence. See also The Laskers and the Go Master.

Thursday, January 05, 2017

20 years after the Sokal Hoax

The Chronicle of Higher Education has a nice article on the occasion of the 20th anniversary of the Sokal hoax. Has anything changed in the last 20 years? Sokal's parody language resembles standard academic cant of 2016.
Wikipedia: The Sokal affair, also called the Sokal hoax,[1] was a publishing hoax perpetrated by Alan Sokal, a physics professor at New York University and University College London. In 1996, Sokal submitted an article to Social Text, an academic journal of postmodern cultural studies. The submission was an experiment to test the journal's intellectual rigor and, specifically, to investigate whether "a leading North American journal of cultural studies – whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross – [would] publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions".[2]

The article, "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity",[3] was published in the Social Text spring/summer 1996 "Science Wars" issue. It proposed that quantum gravity is a social and linguistic construct. At that time, the journal did not practice academic peer review and it did not submit the article for outside expert review by a physicist.[4][5] On the day of its publication in May 1996, Sokal revealed in Lingua Franca that the article was a hoax, identifying it as "a pastiche of left-wing cant, fawning references, grandiose quotations, and outright nonsense ... structured around the silliest quotations [by postmodernist academics] he could find about mathematics and physics."[2]
The Chronicle article describes Sokal's original motivation for the hoax.
Chronicle: ... It was all a big joke, but one motivated by a serious intention: to expose the sloppiness, absurd relativism, and intellectual arrogance of "certain precincts of the academic humanities." His beef was political, too: He feared that by tossing aside their centuries-old promotion of scientific rationality, progressives were eroding their ability to speak truth to power. ...

ALAN SOKAL: In the spring of 1994, I saw a reference to the book by Paul Gross and Norman Levitt, Higher Superstition: The Academic Left and Its Quarrels With Science. My first thought was, Oh, no, not another one of those right-wing diatribes that tell how the Marxist deconstructionist professors are taking over the universities and brainwashing our children. There had been a whole spate of such books in the early 1990s — Dinesh D’Souza and others.

My second thought was "academic left and its quarrels with science"? I mean, that’s a little weird. I’m an academic leftist. So I decided to read it. I learned about a corner of the academy where people were employing either deconstructionist literary theory or extreme social constructivist sociology of science to make comments about both the content of science and the philosophy of science, often in gross ignorance of the science. The first thing I wanted to do was go to the library and check out the original works that Gross and Levitt were criticizing to see whether they were being fair. I found that in about 80 percent of the cases, in my judgment, they were.

... I thought, well, I could write an article to add to the Gross and Levitt critique, and it would probably disappear into a black hole. So I had the idea of writing an article that would be both a parody and an admittedly uncontrolled experiment: I would submit the article to a trendy journal and see whether it would be accepted. Writing the parody took maybe two or three months.

Before I submitted it I did show it to a few friends — I tested them blind to see how long it would take them to figure out that it was a parody. The scientists would figure out quickly that either it was a parody or I had gone off my rocker. But I mostly tried it on nonscientist friends, in part to see whether there were any obvious giveaways. ...
The following paragraphs are taken from Sokal's paper (the first two from the beginning, the last from the end):
There are many natural scientists, and especially physicists, who continue to reject the notion that the disciplines concerned with social and cultural criticism can have anything to contribute, except perhaps peripherally, to their research. Still less are they receptive to the idea that the very foundations of their worldview must be revised or rebuilt in the light of such criticism. Rather, they cling to the dogma imposed by the long post-Enlightenment hegemony over the Western intellectual outlook, which can be summarized briefly as follows: that there exists an external world, whose properties are independent of any individual human being and indeed of humanity as a whole; that these properties are encoded in "eternal" physical laws; and that human beings can obtain reliable, albeit imperfect and tentative, knowledge of these laws by hewing to the "objective" procedures and epistemological strictures prescribed by the (so-called) scientific method.

But deep conceptual shifts within twentieth-century science have undermined this Cartesian-Newtonian metaphysics[1]; revisionist studies in the history and philosophy of science have cast further doubt on its credibility[2]; and, most recently, feminist and poststructuralist critiques have demystified the substantive content of mainstream Western scientific practice, revealing the ideology of domination concealed behind the façade of "objectivity".[3] It has thus become increasingly apparent that physical "reality", no less than social "reality", is at bottom a social and linguistic construct; that scientific "knowledge", far from being objective, reflects and encodes the dominant ideologies and power relations of the culture that produced it; that the truth claims of science are inherently theory-laden and self-referential; and consequently, that the discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities.

...

Finally, the content of any science is profoundly constrained by the language within which its discourses are formulated; and mainstream Western physical science has, since Galileo, been formulated in the language of mathematics.[100][101] But whose mathematics? The question is a fundamental one, for, as Aronowitz has observed, "neither logic nor mathematics escapes the 'contamination' of the social."[102] And as feminist thinkers have repeatedly pointed out, in the present culture this contamination is overwhelmingly capitalist, patriarchal and militaristic: "mathematics is portrayed as a woman whose nature desires to be the conquered Other."[103][104] Thus, a liberatory science cannot be complete without a profound revision of the canon of mathematics.[105] As yet no such emancipatory mathematics exists, and we can only speculate upon its eventual content. We can see hints of it in the multidimensional and nonlinear logic of fuzzy systems theory[106]; but this approach is still heavily marked by its origins in the crisis of late-capitalist production relations.
See also Frauds!
