Sunday, February 26, 2017

Perverse Incentives and Replication in Science

Here's a depressing but all too common pattern in scientific research:
1. Study reports results which reinforce the dominant, politically correct narrative.

2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.

3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
For numerous examples, see, e.g., any of Malcolm Gladwell's books :-(

A recent example: the idea that the collective intelligence of groups (i.e., their ability to solve problems and accomplish assigned tasks) is not primarily dependent on the cognitive ability of the individuals in the group.

It seems plausible to me that by adopting certain best practices for collaboration one can improve group performance, and that diversity of knowledge base and personal experience could also enhance performance on certain tasks. But recent results in this direction were probably oversold, and seem to have failed to replicate.

James Thompson has given a good summary of the situation.

Parts 1 and 2 of our story:
MIT Center for Collective Intelligence: ... group-IQ, or “collective intelligence” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Is it true? The original paper on this topic, from 2010, has been cited 700+ times. See here for some coverage on this blog when it originally appeared.

Below is the (only independent?) attempt at replication, with strongly negative results. The first author is a regular (and very insightful) commenter here -- I hope he'll add his perspective to the discussion. Have we reached part 3 of the story?
Smart groups of smart people: Evidence for IQ as the origin of collective intelligence in the performance of human groups

Timothy C. Bates a,b, Shivani Gupta a
a Department of Psychology, University of Edinburgh
b Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh

What allows groups to behave intelligently? One suggestion is that groups exhibit a collective intelligence accounted for by number of women in the group, turn-taking and emotional empathizing, with group-IQ being only weakly-linked to individual IQ (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). Here we report tests of this model across three studies with 312 people. Contrary to prediction, individual IQ accounted for around 80% of group-IQ differences. Hypotheses that group-IQ increases with number of women in the group and with turn-taking were not supported. Reading the mind in the eyes (RME) performance was associated with individual IQ, and, in one study, with group-IQ factor scores. However, a well-fitting structural model combining data from studies 2 and 3 indicated that RME exerted no influence on the group-IQ latent factor (instead having a modest impact on a single group test). The experiments instead showed that higher individual IQ enhances group performance such that individual IQ determined 100% of latent group-IQ. Implications for future work on group-based achievement are examined.


From the paper:
Given the ubiquitous importance of group activities (Simon, 1997) these results have wide implications. Rather than hiring individuals with high cognitive skill who command higher salaries (Ritchie & Bates, 2013), organizations might select-for or teach social sensitivity thus raising collective intelligence, or even operate a female gender bias with the expectation of substantial performance gains. While the study has over 700 citations and was widely reported to the public (Woolley, Malone, & Chabris, 2015), to our knowledge only one replication has been reported (Engel, Woolley, Jing, Chabris, & Malone, 2014). This study used online (rather than in-person) tasks and did not include individual IQ. We therefore conducted three replication studies, reported below.

... Rather than a small link of individual IQ to group-IQ, we found that the overlap of these two traits was indistinguishable from 100%. Smart groups are (simply) groups of smart people. ... Across the three studies we saw no significant support for the hypothesized effects of women raising (or men lowering) group-IQ: All male, all female and mixed-sex groups performed equally well. Nor did we see any relationship of some members speaking more than others on either higher or lower group-IQ. These findings were weak in the initial reports, failing to survive incorporation of covariates. We attribute these to false positives. ... The present findings cast important doubt on any policy-style conclusions regarding gender composition changes cast as raising cognitive-efficiency. ...

In conclusion, across three studies groups exhibited a robust cognitive g-factor across diverse tasks. As in individuals, this g-factor accounted for approximately 50% of variance in cognition (Spearman, 1904). In structural tests, this group-IQ factor was indistinguishable from average individual IQ, and social sensitivity exerted no effects via latent group-IQ. Considering the present findings, work directed at developing group-IQ tests to predict team effectiveness would be redundant given the extremely high utility, reliability, validity for this task shown by individual IQ tests. Work seeking to raise group-IQ, like research to raise individual IQ might find this task achievable at a task-specific level (Ritchie et al., 2013; Ritchie, Bates, & Plomin, 2015), but less amenable to general change than some have anticipated. Our attempt to manipulate scores suggested that such interventions may even decrease group performance. Instead, work understanding the developmental conditions which maximize expression of individual IQ (Bates et al., 2013) as well as on personality and cultural traits supporting cooperation and cumulation in groups should remain a priority if we are to understand and develop cognitive ability. The present experiments thus provide new evidence for a central, positive role of individual IQ in enhanced group-IQ.
Meta-Observation: Given the 1-2-3 pattern described above, one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously.

Most researchers I know in the relevant areas have not yet grasped that there is a serious problem. They might admit that "some studies fail to replicate" but don't realize the fraction might be in the 50 percent range!

More on the replication crisis in certain fields of science.

Thursday, February 23, 2017

A Professor meets the Alt-Right

Thomas Main, Professor in the School of Public Affairs at Baruch College, is working on a book about the Alt-Right, to be published by Brookings. Below you can listen to a conversation between Main and prominent Alt-Right figure Mike Enoch (pseudonym).

It's an interesting encounter between academic political theory and a new political movement that (so far) exists mostly on the internet. Main and Enoch each take the other seriously in the discussion, leading to a clear expression of Alt-Right views on race, immigration, identity politics, and the idea of America.

See also Bannon, the Alt-Right, and the National Socialist Vision, and Identity Politics is a Dead End: Live by the Sword, Die by the Sword.


Monday, February 20, 2017

The Future of Thought, via Thought Vectors


In my opinion this is one of the most promising directions in AI. I expect significant progress in the next 5-10 years. Note that the entire problem of parsing languages like English has been subsumed into the training of the neural encoder/decoder networks used, e.g., in machine translation (i.e., training on pairs of translated sentences, with an abstract thought vector as the intermediate state). See Toward a Geometry of Thought:
... the space of concepts (primitives) used in human language (or equivalently, in human thought) ...  has only ~1000 dimensions, and has some qualities similar to an actual vector space. Indeed, one can speak of some primitives being closer or further from others, leading to a notion of distance, and one can also rescale a vector to increase or decrease the intensity of meaning.

... we now have an automated method to extract an abstract representation of human thought from samples of ordinary language. This abstract representation will allow machines to improve dramatically in their ability to process language, dealing appropriately with semantics (i.e., meaning), which is represented geometrically.
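To make the "distance" and "intensity" language above concrete, here is a minimal numpy sketch of those geometric operations. The vectors are made-up toy examples for illustration only, not embeddings produced by a trained encoder.

```python
import numpy as np

# Toy "thought vectors". Real embedding spaces have hundreds to ~1000
# dimensions; these 4-dimensional vectors are purely illustrative.
vec = {
    "good":  np.array([0.9, 0.1, 0.0, 0.2]),
    "great": np.array([1.4, 0.2, 0.0, 0.3]),
    "bad":   np.array([-0.8, 0.0, 0.1, 0.2]),
}

def cosine(a, b):
    """Directional closeness of two concept vectors (+1 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A notion of distance: related concepts point in similar directions.
print(cosine(vec["good"], vec["great"]))   # close to +1
print(cosine(vec["good"], vec["bad"]))     # negative

# Rescaling changes the "intensity" of the meaning: stretching "good"
# moves it (in Euclidean distance) toward the stronger "great".
print(np.linalg.norm(vec["good"] - vec["great"]))        # before scaling
print(np.linalg.norm(1.5 * vec["good"] - vec["great"]))  # after scaling: smaller
```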
Geoff Hinton (from a 2015 talk at the Royal Society in London):
The implications of this for document processing are very important. If we convert a sentence into a vector that captures the meaning of the sentence, then Google can do much better searches; they can search based on what's being said in a document.

Also, if you can convert each sentence in a document into a vector, then you can take that sequence of vectors and [try to model] natural reasoning. And that was something that old fashioned AI could never do.

If we can read every English document on the web, and turn each sentence into a thought vector, you've got plenty of data for training a system that can reason like people do.

Now, you might not want it to reason like people do, but at least we can see what they would think.

What I think is going to happen over the next few years is this ability to turn sentences into thought vectors is going to rapidly change the level at which we can understand documents.

To understand it at a human level, we're probably going to need human level resources and we have trillions of connections [in our brains], but the biggest networks we have built so far only have billions of connections. So we're a few orders of magnitude off, but I'm sure the hardware people will fix that.
This is a good discussion (source of the image at top and the text excerpted below), illustrating the concept of linearity in the contexts of human eigenfaces and thought vectors. See also here.
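For the eigenface half of that analogy, here is a rough scikit-learn sketch (my own illustration, not code from the linked discussion): PCA extracts basis faces, and any face is then approximately a linear combination of them, the same kind of linearity being claimed for thought vectors.

```python
# Requires scikit-learn; downloads the LFW face dataset on first run.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

faces = fetch_lfw_people(min_faces_per_person=50)
X = faces.data                        # each row is a flattened face image

pca = PCA(n_components=100, whiten=True).fit(X)
eigenfaces = pca.components_          # the basis "eigenfaces"

# Linearity: face ~ mean face + weighted sum of eigenfaces.
weights = pca.transform(X[:1])        # coordinates of one face in that basis
reconstruction = pca.inverse_transform(weights)
print(weights.shape, reconstruction.shape)   # (1, 100), (1, n_pixels)
```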



You can audit this Stanford class! CS224n: Natural Language Processing with Deep Learning.

More references.

Thursday, February 16, 2017

Management by the Unusually Competent



How did we get ICBMs? How did we get to the moon? What are systems engineering and systems management? Why do some large organizations make rapid progress, while others spin their wheels for decades at a time? Dominic Cummings addresses these questions in his latest essay.

Photo above of Schriever and Ramo. More Dom.
... In 1953, a relatively lowly US military officer Bernie Schriever heard von Neumann sketch how by 1960 the United States would be able to build a hydrogen bomb weighing less than a ton and exploding with the force of a megaton, about 80 times more powerful than Hiroshima. Schriever made an appointment to see von Neumann at the IAS in Princeton on 8 May 1953. As he waited in reception, he saw Einstein potter past. He talked for hours with von Neumann who convinced him that the hydrogen bomb would be progressively shrunk until it could fit on a missile. Schriever told Gardner about the discussion and 12 days later Gardner went to Princeton and had the same conversation with von Neumann. Gardner fixed the bureaucracy and created the Strategic Missiles Evaluation Committee. He persuaded von Neumann to chair it and it became known as ‘the Teapot committee’ or ‘the von Neumann committee’. The newly formed Ramo-Wooldridge company, which became Thompson-Ramo-Wooldridge (I’ll refer to it as TRW), was hired as the secretariat.

The Committee concluded (February 1954) that it would be possible to produce intercontinental ballistic missiles (ICBMs) by 1960 and deploy enough to deter the Soviets by 1962, that there should be a major crash programme to develop them, and that there was an urgent need for a new type of agency with a different management approach to control the project. Although intelligence was thin and patchy, von Neumann confidently predicted on technical and political grounds that the Soviet Union would engage in the same race. It was discovered years later that the race had already been underway partly driven by successful KGB operations. Von Neumann’s work on computer-aided air defence systems also meant he was aware of the possibilities for the Soviets to build effective defences against US bombers.

‘The nature of the task for this new agency requires that over-all technical direction be in the hands of an unusually competent group of scientists and engineers capable of making systems analyses, supervising the research phases, and completely controlling experimental and hardware phases of the program… It is clear that the operation of this new group must be relieved of excessive detailed regulation by existing government agencies.’ (vN Committee, emphasis added.)

A new committee, the ICBM Scientific Advisory Committee, was created and chaired by von Neumann so that eminent scientists could remain involved. One of the driving military characters, General Schriever, realised that people like von Neumann were an extremely unusual asset. He said later that ‘I became really a disciple of the scientists… I felt strongly that the scientists had a broader view and had more capabilities.’ Schriever moved to California and started setting up the new operation but had to deal with huge amounts of internal politics as the bureaucracy naturally resisted new ideas. The Defense Secretary, Wilson, himself opposed making ICBMs a crash priority.

... Almost everybody hated the arrangement. Even the Secretary of the Air Force (Talbott) tried to overrule Schriever and Ramo. It displaced the normal ‘prime contractor’ system in which one company, often an established airplane manufacturer, would direct the whole programme. Established businesses were naturally hostile. Traditional airplane manufacturers were run very much on Taylor’s principles with rigid routines. TRW employed top engineers who would not be organised on Taylor’s principles. Ramo, also a virtuoso violinist, had learned at Caltech the value of a firm grounding in physics and an interdisciplinary approach in engineering. He and his partner Wooldridge had developed their ideas on systems engineering before starting their own company. The approach was vindicated quickly when TRW showed how to make the proposed Atlas missile much smaller and simpler, therefore cheaper and faster to develop.

... According to Johnson, almost all the proponents of systems engineering had connections with either Caltech (where von Karman taught and JPL was born) or MIT (which was involved with the Radiation Lab and other military projects during World War 2). Bell Labs, which did R&D for AT&T, was also a very influential centre of thinking. The Jet Propulsion Laboratory (JPL) managed by Caltech also, under the pressure of repeated failure, independently developed systems management and configuration control. They became technical leaders in space vehicles. NASA, however, did not initially learn from JPL.

... Philip Morse, an MIT physicist who headed the Pentagon’s Weapons Systems Evaluation Group after the war, reflected on this resistance:
‘Administrators in general, even the high brass, have resigned themselves to letting the physical scientist putter around with odd ideas and carry out impractical experiments, as long as things experimented with are solutions or alloys or neutrons or cosmic rays. But when one or more start prying into the workings of his own smoothly running organization, asking him and others embarrassing questions not related to the problems he wants them to solve, then there’s hell to pay.’ (Morse, ‘Operations Research, What is It?’, Proceedings of the First Seminar in Operations Research, November 8–10, 1951.)



The Secret of Apollo: Systems Management in American and European Space Programs, Stephen B. Johnson.

Saturday, February 11, 2017

On the military balance of power in the Western Pacific

Some observations concerning the military balance of power in Asia. Even "experts" I have spoken to over the years seem to be confused about basic realities that are fundamental to strategic considerations.

1. Modern missile and targeting technologies make the survivability of surface ships (especially carriers) questionable. Satellites can easily image surface ships, and missiles can hit them from over a thousand miles away. Submarines are a much better investment, and carriers may be a terrible waste of money, analogous to battleships in the WWII era. (Generals and Admirals typically prepare to fight the previous war, despite the advance of technology, often with disastrous consequences.)

2. US forward bases and surface deployments are hostages to advanced missile capability and would not survive the first days of a serious conventional conflict. This has been widely discussed, at least in some planning circles, since the 1990s. See second figure below and link.

3. PRC could easily block oil shipments to Taiwan, or even to Japan, using Anti-Ship Ballistic Missiles (ASBM) or Anti-Ship Cruise Missiles (ASCM). This would be a far preferable strategy to an amphibious attack on Taiwan in response to, e.g., a declaration of independence. A simple threat against oil tankers, or perhaps the demonstration sinking of a single tanker, would be enough to cut off supplies. Responding to this threat would require attacking mobile DF21D missile launchers on the Chinese mainland, which would be highly escalatory and could lead to a nuclear response.

4. The strategic importance of the South China Sea, and of the artificial islands constructed there, lies primarily in the US ability to cut off the flow of oil to PRC. The islands may enable PRC to gain dominance in the region and make US submarine operations much more difficult. The US reaction to these assets is not driven by "international law" or fishing or oil rights, or even the desire to keep shipping lanes open. What is at stake is the US capability to cut off oil flow, a non-nuclear but highly threatening card it has (until now?) had at its disposal to play against China.

The map below shows the consequences of full deployments of SAM, ASCM, and ASBM weaponry on the artificial islands. Consequences extend to the Malacca Strait (through which 80% of China's oil passes) and US basing in Singapore. Both linked articles are worth reading.

CHINA’S ARTIFICIAL ISLANDS ARE BIGGER (AND A BIGGER DEAL) THAN YOU THINK

Beijing's Go Big or Go Home Moment in the South China Sea



HAS CHINA BEEN PRACTICING PREEMPTIVE MISSILE STRIKES AGAINST U.S. BASES? (Lots of satellite photos at this link, revealing extensive ballistic missile tests against realistic targets.)



Terminal targeting of a moving aircraft carrier by an ASBM like the DF21D


Simple estimates: a 10 minute flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. A missile course correction at a distance of ~10 km from the target allows ~10 s (assuming Mach 5-10 velocity) of maneuver, and requires only a modest angular correction. At this distance a 100 m sized target has an angular size of ~0.01 radians, so it should be readily detectable in an optical image. (Carriers are visible to the naked eye from space!) Final targeting at a distance of ~1 km can use a combination of optical / IR / radar sensors that makes countermeasures difficult.
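The same estimates written out as a tiny script, so the assumptions are explicit (all inputs are the rough figures from the paragraph above, not actual weapon specifications):

```python
# Back-of-envelope version of the estimates above; every input is a rough assumption.
flight_time_s   = 10 * 60           # ~10 minute missile flight
carrier_speed   = 25 * 0.447        # 20-30 mph, taken as ~25 mph, in m/s
missile_speed   = 5 * 340           # Mach 5 (of the assumed Mach 5-10), in m/s
correction_dist = 10e3              # terminal correction begins ~10 km out
carrier_length  = 100.0             # ~100 m target

position_uncertainty = carrier_speed * flight_time_s    # of order 10 km
maneuver_time        = correction_dist / missile_speed  # ~5-10 s depending on Mach
angular_size         = carrier_length / correction_dist # ~0.01 rad

print(f"carrier position uncertainty ~ {position_uncertainty/1e3:.0f} km")
print(f"time available for terminal maneuver ~ {maneuver_time:.0f} s")
print(f"angular size of carrier at 10 km ~ {angular_size:.3f} rad")
```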

So hitting a moving aircraft carrier does not seem especially challenging with modern technology. The Chinese can easily test their terminal targeting technology by trying to hit, say, a very large moving truck at their ballistic missile impact range, shown above.

I do not see any effective countermeasures, and despite inflated claims concerning anti-missile defense capabilities, it is extremely difficult to stop an incoming ballistic missile with maneuver capability.


More analysis and links to strategic reports from RAND and elsewhere in this earlier post The Pivot and American Statecraft in Asia.
... These questions of military/technological capability stand prior to the prattle of diplomats, policy analysts, or political scientists. Perhaps just as crucial is whether top US and Chinese leadership share the same beliefs on these issues.

... It's hard to war game a US-China Pacific conflict, even a conventional one. How long before the US surface fleet is destroyed by ASBM/ASCM? How long until forward bases are destroyed? How long until the US has to strike at targets on the mainland? How long do satellites survive? How long before the conflict goes nuclear? I wonder whether anyone knows the answers to these questions with high confidence -- even very basic ones, like how well asymmetric threats like ASBM/ASCM will perform under realistic conditions. These systems have never been tested in battle.

The stakes are so high that China can just continue to establish "facts on the ground" (like building new island bases), with some confidence that the US will hesitate to escalate. If, for example, both sides secretly believe (at the highest levels; seems that Xi is behaving as if he might) that ASBM/ASCM are very effective, then sailing a carrier group through the South China Sea becomes an act of symbolism with meaning only to those that are not in the know.

Friday, February 10, 2017

Elon Musk: the BIG PROBLEMS worth working on




#1 AI
#2 Genomics

See also A Brief History of the Future, As Told To the Masters of the Universe.


Musk says he spends most of his time working on technical problems for Tesla and SpaceX, with half a day per week at OpenAI.

Thursday, February 09, 2017

Ratchets Within Ratchets



For those interested in political philosophy, or Trump's travel ban, I recommend this discussion on Scott Aaronson's blog, which features a commenter calling himself Boldmug (see also Bannon and Moldbug in the news recently ;-)

Both Scott and Boldmug seem to agree that scientific/technological progress is a positive ratchet caught within a negative ratchet of societal and political decay.
Boldmug Says:
Comment #181 January 27th, 2017 at 5:26 pm

Scott: An interesting term, “ratchet of progress.” Nature is full of ratchets. But ratchets of progress — extropic ratchets — are the exceptional case. Most ratchets are entropic ratchets, ratchets of decay.

You happen to live inside the ratchet of progress that is science and engineering. That ratchet produces beautiful wonders like seedless watermelons. It’s true that Talleyrand said, “no one who remembers the sweetness of life before the Revolution can even imagine it,” but even Louis XIV had to spit the seeds out of his watermelons.

This ratchet is 400 to 2400 years old, depending on how you count. The powers and ideologies that be are very good at taking credit for science and engineering, though it is much older than any of them. It is a powerful ratchet — not even the Soviet system could kill or corrupt science entirely, although it’s always the least political fields, like math and physics, that do the best.

But most ratchets are entropic ratchets of decay. The powers that be don’t teach you to see the ratchets of decay. You have to look for them with your own eyes.

The scientists and engineers who created the Antikythera mechanism lived inside a ratchet of progress. But that ratchet of progress lived inside a ratchet of decay, which is why we didn’t have an industrial revolution in 100BC. Instead we had war, tyranny, stagnation and (a few hundred years later) collapse.

Lucio Russo (https://en.wikipedia.org/wiki/Lucio_Russo) wrote an interesting, if perhaps a little overstated, book, on the Hellenistic (300-150BC, not to be confused with the Hellenic era proper) golden age of science. We really have no way of knowing how close to a scientific revolution the Alexandrians came. But it was political failure, not scientific failure, that destroyed their world. The ratchet of progress was inside a ratchet of decay. ...
It doesn't appear that Scott responded to this dig by Boldmug:
Boldmug Says:
Comment #153 January 27th, 2017 at 11:51 am

... Coincidentally, the latter is the side [THE LEFT] whose Jedi mind tricks are so strong, they almost persuaded someone with a 160 IQ to castrate himself.

And the Enlightenment? You mean the Enlightenment that guillotined Lavoisier? “The Republic has no need of savants.” Add 1789 and even 1641 to that list. Why would a savant pick Praisegod Barebones over Prince Rupert?

You might notice that in our dear modern world, whose quantum cryptography and seedless watermelons are so excellent, “the Republic has no need of savants” is out there still. Know anyone working on human genetics? ...
Don't believe in societal decay? Read this recent tour-de-force paper by DeCode researchers in Iceland, who have established beyond doubt the (long-term) dysgenic nature of modern society:
Selection against variants in the genome associated with educational attainment
Proceedings of the National Academy of Sciences of the United States of America (PNAS)

Epidemiological and genetic association studies show that genetics play an important role in the attainment of education. Here, we investigate the effect of this genetic component on the reproductive history of 109,120 Icelanders and the consequent impact on the gene pool over time. We show that an educational attainment polygenic score, POLYEDU, constructed from results of a recent study is associated with delayed reproduction (P < 10^(−100)) and fewer children overall. The effect is stronger for women and remains highly significant after adjusting for educational attainment. Based on 129,808 Icelanders born between 1910 and 1990, we find that the average POLYEDU has been declining at a rate of ∼0.010 standard units per decade, which is substantial on an evolutionary timescale. Most importantly, because POLYEDU only captures a fraction of the overall underlying genetic component the latter could be declining at a rate that is two to three times faster.
Note: these "educational attainment" variants are mostly variants which influence cognitive ability.
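For scale, a back-of-envelope extrapolation of those numbers (my arithmetic, not the paper's):

```python
# Rough extrapolation of the abstract's figures, for scale only.
decline_per_decade = 0.010               # POLYEDU, standard units per decade (from the paper)
mult_low, mult_high = 2, 3               # underlying component may be falling 2-3x faster

century = 10 * decline_per_decade
print(f"POLYEDU decline over one century: ~{century:.2f} SD")
print(f"implied decline in the underlying genetic component: "
      f"~{mult_low * century:.1f}-{mult_high * century:.1f} SD")
```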

From the Discussion section of the paper:
... The main message here is that the human race is genetically far from being stagnant with respect to one of its most important traits. It is remarkable to report changes in POLYEDU that are measurable across the several decades covered by this study. In evolutionary time, this is a blink of an eye. However, if this trend persists over many centuries, the impact could be profound.

Monday, February 06, 2017

A Brief History of the Future, as told to the Masters of the Universe

This is a summary of remarks made at two not-Davos meetings, one in NYC and the other in LA. Most attendees were allocators of significant capital.

See also these two articles in Nautilus Magazine: Super-intelligent Humans Are Coming and Don't Worry, Smart Machines Will Take Us With Them.

Most of these topics have been covered in more detail in recent blog posts -- see relevant labels at bottom.

An Inflection Point in Human History, from recent Technological Developments

Genomics and Machine Learning:

Inexpensive genotyping has produced larger and larger datasets of human genomes + phenotypes, approaching sample sizes of a million individuals. Machine learning applied to this data has led to the ability to predict complex human traits (e.g., height, intelligence) as well as disease risk (e.g., type 1 diabetes, cancer, etc.). Among the applications of these advances is the ability to select embryos in IVF to avoid negative outcomes, and even to produce highly superior outcomes.
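As a rough illustration of the kind of machine-learning prediction described above, here is a minimal sketch: sparse (LASSO) regression of a trait on SNP genotypes, using synthetic toy data. It is not the pipeline of any particular group; real predictors use far more markers, far larger samples, and more careful modeling.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic toy data: real predictors use ~10^5-10^6 SNPs and sample sizes
# in the hundreds of thousands; the purely additive model here is a simplification.
rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 2000, 5000, 50

genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)
effects = np.zeros(n_snps)
effects[:n_causal] = rng.normal(0, 0.3, n_causal)            # sparse causal effects
trait = genotypes @ effects + rng.normal(0, 1.0, n_people)   # genetics + noise

train, test = slice(0, 1500), slice(1500, None)
model = Lasso(alpha=0.05, max_iter=5000).fit(genotypes[train], trait[train])

polygenic_score = model.predict(genotypes[test])             # the "predictor"
print("score-trait correlation in held-out people:",
      round(float(np.corrcoef(polygenic_score, trait[test])[0, 1]), 2))
```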

CRISPR -- a breakthrough technology for gene editing -- will find applications in medicine, agriculture, and eventually human reproduction (editing may eventually supplant selection).

The human species is poised, within the next generation, to take control of its own evolution. It is likely that affluent families will be the first to take advantage of these new capabilities, leading to even greater inequality in society.

Machine Learning and AI:

Routine tasks are being automated through machine intelligence, leading to pressure on low-skill human workers. Autonomous vehicles, probably no more than a decade away, will displace many jobs, such as those of truck and taxi drivers. The automobile industry is likely to experience massive creative destruction: the most valuable part of the car will be its brain (software, sensors, and cloud communication capability), not its drivetrain. The most likely winners in this race are not the major automakers.

AIs are already capable of outperforming even the best humans on any narrow task: e.g., Chess, Go, Texas Hold’em (Poker), facial recognition, voice recognition, etc. Many of these AIs are built using Deep Learning algorithms, which take advantage of neural net architectures. A neural net is an abstract network modeled after the human brain; each connection between nodes in the network has its own adjustable strength (weight). While a neural net can be trained to outperform humans (see tasks listed above), the internal workings of the net tend to be mysterious even to the human designers. This is unlike the case of structured code, written in familiar high level programming languages. Neural net algorithms run better on specialized hardware, such as GPUs. Google has produced a special chipset, called the TPU, which now runs ~20% of all compute at its data centers. Google does not sell the TPU, and industry players and startups are racing to develop similar chips for neural net applications. (Nvidia is a leader in this new area.)
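For readers unfamiliar with the terminology, here is a toy sketch of the idea (purely illustrative, not any production system): nodes arranged in layers, with adjustable connection strengths on the links between them, trained by gradient descent. What the trained net "knows" ends up encoded in the weights rather than in human-readable rules, which is why such nets are opaque.

```python
import numpy as np

# A tiny two-layer net trained by gradient descent to compute XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connection strengths
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connection strengths
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                  # network output
    d_out = (out - y) * out * (1 - out)         # backpropagated error signal
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out                           # adjust the connection strengths
    b2 -= d_out.sum(0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(0)

print(np.round(out, 2))   # should approach the XOR targets [0, 1, 1, 0]
```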

Neural nets used in language translation have mapped out an abstract ~1000-dimensional space which coincides with the space of “primitive concepts” used in human thought and language. It appears that rapid advances in the ability to read human-generated text (e.g., Wikipedia) with comprehension will follow in the coming decade. It seems possible that AGI -- Artificial General Intelligence (analogous to a human intelligence, with a theory of the world, general knowledge about objects in the universe, etc.) -- will emerge within our lifetimes.




Saturday, February 04, 2017

Baby Universes in the Laboratory




This was on the new books table at our local bookstore. I had almost forgotten about doing an interview and corresponding with the author some time ago. See also here and here.

The book is a well-written overview of some of the more theoretical aspects of inflationary cosmology, the big bang, the multiverse, etc. It also fleshes out some of the individual stories of the physicists involved in this research.
Kirkus Reviews: ... In her elegant and perceptive book, Merali ... unpacks the science behind what we know about our universe’s beginnings and traces the paths that many renowned researchers have taken to translate these insights to new heights: the creation of a brand-new “baby” universe, and not an empty one, either, but one with its own physics, matter, and (possibly) life. ... Among the most significant scientific advances in the last half-century is the discovery that our universe is inflating exponentially, a theory that led to many more breakthroughs in physics and cosmology. Yet the big question—how did the universe form, triggering inflation to begin with?—remains opaque. Merali, who works at the Foundational Questions Institute, which explores the boundaries of physics and cosmology, effortlessly explains the complex theories that form the bedrock of this concept, and she brings to life the investigators who have dedicated much of their careers in pursuit of fundamental truths. She also neatly incorporates discussions of philosophy and religion—after all, nothing less than grand design itself is at stake here—without any heavy-handedness or agenda. Over the course of several years, she traveled the world to interview firsthand the most important figures behind the idea of laboratory universe creation ... and the anecdotes she includes surrounding these conversations make her portrait even more compelling.



Here are two illustrations of how a baby universe pinches off from the universe in which it was created. This is all calculable within general relativity, modulo an issue with quantum smoothing of a singularity. The remnant of the baby universe appears to outside observers as a black hole. But inside one finds an exponentially growing region of spacetime.






Buchanan and Nader on the Trump presidency



I highly recommend this podcast from Radio Open Source and Christopher Lydon. You may be surprised at how much two former independent presidential candidates, one on the Left and the other on the Right, can agree on. The common factor is their love for this country and concern for ordinary people. Listen carefully to what they say about Hillary.

If the embedded player doesn't work just click the link below.
The Great Trump Debate: Pat Buchanan and Ralph Nader

On Super Bowl weekend, we’ve lined up a couple of hall of fame political players who run outside Establishment lines to help us watch the game that’s unfolding so far in the Trump White House. Pat Buchanan was the pit-bull strategist in Richard Nixon’s White House; he’s a Latin-Mass Catholic, a cultural conservative and America First nationalist who’s turned sharply anti-Empire, calmly post-Cold War with Russia and flat-out anti-war in the Middle East. Ralph Nader was Mr. Citizen as auto-safety crusader, then first among the relentless Raiders against corporate power, and a prickly third-party candidate in three presidential campaigns.

It was this left-right pair that practically called the game for Trump way back in August 2015. Both said that a man backed by his own billionaire funds and showbiz glam could run the ball all the way to the White House.

After the election, though, both men are turning their eyes to the man who may be quarterbacking the presidency: Steve Bannon.

Buchanan—a “paleoconservative” who coined the term “America First,” essentially drafting the Bannon playbook—now hopes that Trump doesn’t drop the ball after his executive order blitz. “Republicans have waited a long time for this,” Buchanan says. “[Trump] ought to keep moving on ahead, take the hits he’s gonna take.” If he keeps it up, Bannon might bring the political right “very close to a political revolution.”

Nader, as a green-tinted independent on the left, understands the enthusiasm that his longtime sparring partner has for Trumpism. Yet he also sees the contradictions and challenges Trump presents, not only for Buchanan’s vision of America, but also for Nader’s own: Both men share a strong anti-corporate stance and are worried about the Goldman Sachs and Wall Street executives Trump has packed his cabinet with. What Buchanan and Nader fear most is that a thin-skinned president, egged on by his hawkish advisors, could spark a war with Iran if provoked.

Friday, February 03, 2017

When UC Berkeley allowed Free Speech

Hail Libratus! AI beats human pros in no-limit Texas Hold'em



AI already dominates humans in any narrowly defined task. Perhaps another 30-50 years until AGI?
IEEE Spectrum: Humanity has finally folded under the relentless pressure of an artificial intelligence named Libratus in a historic poker tournament loss. ...

Libratus lived up to its “balanced but forceful” Latin name by becoming the first AI to beat professional poker players at heads-up, no-limit Texas Hold'em. The tournament was held at the Rivers Casino in Pittsburgh from 11–30 January. Developed by Carnegie Mellon University, the AI won the “Brains vs. Artificial Intelligence” tournament against four poker pros by US $1,766,250 in chips over 120,000 hands (games). Researchers can now say that the victory margin was large enough to count as a statistically significant win, meaning that they could be at least 99.98 percent sure that the AI victory was not due to chance.

... the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

... Libratus played the same overall strategy against all the players based on three main components:

First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core hours on a new supercomputer called Bridges.

Second, the AI would perform “end-game solving” during each hand to precisely calculate how much it could afford to risk in the third and fourth betting rounds (the “turn” and “river” rounds in poker parlance). Sandholm credits the end-game solver algorithms as contributing the most to the AI victory. The poker pros noticed Libratus taking longer to compute during these rounds and realized that the AI was especially dangerous in the final rounds, but their “bet big early” counter strategy was ineffective.

Third, Libratus ran background computations during each night of the tournament so that it could fix holes in its overall strategy. That meant Libratus was steadily improving its overall level of play and minimizing the ways that its human opponents could exploit its mistakes. It even prioritized fixes based on whether or not its human opponents had noticed and exploited those holes. By comparison, the human poker pros were able to consistently exploit strategic holes in the 2015 tournament against the predecessor AI called Claudico.

... The Libratus victory translates into an astounding winning rate of 14.7 big blinds per 100 hands in poker parlance—and that’s a very impressive winning rate indeed considering the AI was playing four human poker pros. Prior to the start of the tournament, online betting sites had been giving odds of 4:1 with Libratus seen as the underdog.
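To get a feel for why 120,000 hands supports a claim like "99.98 percent sure," here is a rough normal-approximation sketch. The per-hand standard deviation below is an assumed placeholder; the researchers' actual variance-reduction methodology is more sophisticated and is not described in the excerpt.

```python
import math

hands = 120_000
win_rate_bb100 = 14.7                  # reported winning rate (big blinds per 100 hands)
assumed_sd_per_hand = 15.0             # ASSUMPTION: effective per-hand std dev in big blinds

mean_per_hand = win_rate_bb100 / 100.0
std_error = assumed_sd_per_hand / math.sqrt(hands)
z = mean_per_hand / std_error          # how many standard errors above break-even

p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # normal approximation
print(f"z ~ {z:.1f}, one-sided p ~ {p_one_sided:.1e}")   # same ballpark as the reported 99.98%
```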
Here's a recent paper on deep learning and poker. The program DeepStack is not Libratus (thanks to a commenter for pointing this out), but both have managed to outperform human players.
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker

https://arxiv.org/abs/1701.01724

Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from self-play games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold'em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.

Wednesday, February 01, 2017

A far greater peril than Donald Trump

Richard Fernandez:
The more fundamental unsolved problem is why the progressive project collapsed in the first place. How could something at the seeming height of its power; in control of the EU, the US Federal government, the UN, the press, the academe and industry collapse in one fatal year? The globalist conference in Davos still doesn't know. In that ignorance lurks a peril far greater than DJT.
The rapid collapse of a false worldview -- have we seen this before?

Hans Christian Andersen, The Emperor's New Clothes:
... Both the swindlers begged him to be so kind as to come near to approve the excellent pattern, the beautiful colors. They pointed to the empty looms, and the poor old minister stared as hard as he dared. He couldn't see anything, because there was nothing to see. "Heaven have mercy," he thought. "Can it be that I'm a fool? I'd have never guessed it, and not a soul must know. Am I unfit to be the minister? It would never do to let on that I can't see the cloth."

... they all joined the Emperor in exclaiming, "Oh! It's very pretty," ... "Magnificent! Excellent! Unsurpassed!"

... "Oh, how fine are the Emperor's new clothes! Don't they fit him to perfection? And see his long train!" Nobody would confess that he couldn't see anything, for that would prove him either unfit for his position, or a fool.

... "But he hasn't got anything on," a little child said.

"Did you ever hear such innocent prattle?" said its father. And one person whispered to another what the child had said, "He hasn't anything on. A child says he hasn't anything on."

"But he hasn't got anything on!" the whole town cried out at last.

The Emperor shivered, for he suspected they were right. But he thought, "This procession has got to go on." So he walked more proudly than ever, as his noblemen held high the train that wasn't there at all.
Richard Feynman:
The first principle is that you must not fool yourself and you are the easiest person to fool.
Antonio Gramsci:
Pessimism of the Intellect, Optimism of the Will.
