Showing posts sorted by relevance for query: edinburgh.

Friday, June 22, 2012

Edinburgh 2

This is the physics building at Edinburgh University, where I gave a talk:


A portrait of Peter Higgs:



The opening reception for BGA 2012:






BGA talks at the Royal College of Physicians:





Archival interview with twins research pioneer Tom Bouchard:



Robert Plomin on twins and modern SNP-based heritability:



My last evening in beautiful Edinburgh:





Tuesday, June 12, 2012

University of Edinburgh: Introduction to Technology Startups

On the afternoon of June 20 we have a break from ICQG 2012 (the International Conference on Quantitative Genetics), and I'll be giving a talk at the University of Edinburgh, in case anyone is interested. Both Charles Darwin and James Clerk Maxwell started their higher education at Edinburgh. It's quite an honor to give a lecture in the James Clerk Maxwell Building! :-)

Mmmm... doughnuts!

 

Roberts Funded Lecture

Introduction to technology startups

Steve Hsu

Professor of Theoretical Physics & Director, Institute for Theoretical Science

University of Oregon

Wednesday 20 June 2012 from 1600

All welcome!

In the past, applied research and development was concentrated at corporate labs like Bell Labs, IBM, and Xerox PARC. Today, innovation is more likely to be found at small venture-capital-backed companies founded by creative risk takers. The odds have never been greater that you, a scientist or engineer, might someday work at (or found!) a startup company. The talk is an introduction to this important and dynamic part of our economy, from the perspective of a physics professor and serial entrepreneur.

Lecture Theatre C, School of Physics & Astronomy, James Clerk Maxwell Building, Kings Buildings

Doughnuts and coffee at 1600

Talk 1630-1730

Sunday, February 26, 2017

Perverse Incentives and Replication in Science

Here's a depressing but all too common pattern in scientific research:
1. Study reports results which reinforce the dominant, politically correct, narrative.

2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.

3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
For numerous examples, see, e.g., any of Malcolm Gladwell's books :-(

A recent example: the idea that collective intelligence of groups (i.e., ability to solve problems and accomplish assigned tasks) is not primarily dependent on the cognitive ability of individuals in the group.

It seems plausible to me that by adopting certain best practices for collaboration one can improve group performance, and that diversity of knowledge base and personal experience could also enhance performance on certain tasks. But recent results in this direction were probably oversold, and seem to have failed to replicate.

James Thompson has given a good summary of the situation.

Parts 1 and 2 of our story:
MIT Center for Collective Intelligence: ... group-IQ, or “collective intelligence” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Is it true? The original paper on this topic, from 2010, has been cited 700+ times. See here for some coverage on this blog when it originally appeared.

Below is the (only independent?) attempt at replication, with strongly negative results. The first author is a regular (and very insightful) commenter here -- I hope he'll add his perspective to the discussion. Have we reached part 3 of the story?
Smart groups of smart people: Evidence for IQ as the origin of collective intelligence in the performance of human groups

Timothy C. Bates and Shivani Gupta
Department of Psychology, University of Edinburgh
Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh

What allows groups to behave intelligently? One suggestion is that groups exhibit a collective intelligence accounted for by number of women in the group, turn-taking and emotional empathizing, with group-IQ being only weakly-linked to individual IQ (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). Here we report tests of this model across three studies with 312 people. Contrary to prediction, individual IQ accounted for around 80% of group-IQ differences. Hypotheses that group-IQ increases with number of women in the group and with turn-taking were not supported. Reading the mind in the eyes (RME) performance was associated with individual IQ, and, in one study, with group-IQ factor scores. However, a well-fitting structural model combining data from studies 2 and 3 indicated that RME exerted no influence on the group-IQ latent factor (instead having a modest impact on a single group test). The experiments instead showed that higher individual IQ enhances group performance such that individual IQ determined 100% of latent group-IQ. Implications for future work on group-based achievement are examined.


From the paper:
Given the ubiquitous importance of group activities (Simon, 1997) these results have wide implications. Rather than hiring individuals with high cognitive skill who command higher salaries (Ritchie & Bates, 2013), organizations might select-for or teach social sensitivity thus raising collective intelligence, or even operate a female gender bias with the expectation of substantial performance gains. While the study has over 700 citations and was widely reported to the public (Woolley, Malone, & Chabris, 2015), to our knowledge only one replication has been reported (Engel, Woolley, Jing, Chabris, & Malone, 2014). This study used online (rather than in-person) tasks and did not include individual IQ. We therefore conducted three replication studies, reported below.

... Rather than a small link of individual IQ to group-IQ, we found that the overlap of these two traits was indistinguishable from 100%. Smart groups are (simply) groups of smart people. ... Across the three studies we saw no significant support for the hypothesized effects of women raising (or men lowering) group-IQ: All male, all female and mixed-sex groups performed equally well. Nor did we see any relationship of some members speaking more than others on either higher or lower group-IQ. These findings were weak in the initial reports, failing to survive incorporation of covariates. We attribute these to false positives. ... The present findings cast important doubt on any policy-style conclusions regarding gender composition changes cast as raising cognitive-efficiency. ...

In conclusion, across three studies groups exhibited a robust cognitive g-factor across diverse tasks. As in individuals, this g-factor accounted for approximately 50% of variance in cognition (Spearman, 1904). In structural tests, this group-IQ factor was indistinguishable from average individual IQ, and social sensitivity exerted no effects via latent group-IQ. Considering the present findings, work directed at developing group-IQ tests to predict team effectiveness would be redundant given the extremely high utility, reliability, validity for this task shown by individual IQ tests. Work seeking to raise group-IQ, like research to raise individual IQ might find this task achievable at a task-specific level (Ritchie et al., 2013; Ritchie, Bates, & Plomin, 2015), but less amenable to general change than some have anticipated. Our attempt to manipulate scores suggested that such interventions may even decrease group performance. Instead, work understanding the developmental conditions which maximize expression of individual IQ (Bates et al., 2013) as well as on personality and cultural traits supporting cooperation and cumulation in groups should remain a priority if we are to understand and develop cognitive ability. The present experiments thus provide new evidence for a central, positive role of individual IQ in enhanced group-IQ.
Meta-Observation: Given the 1-2-3 pattern described above, one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously.

Most researchers I know in the relevant areas have not yet grasped that there is a serious problem. They might admit that "some studies fail to replicate" but don't realize the fraction might be in the 50 percent range!

More on the replication crisis in certain fields of science.

Monday, June 18, 2012

Intellectual tourism

Greetings from Edinburgh. I'm here for ICQG 2012 and BGA 2012.

What could be more fun than exploring a new world?  :-)











Sunday, June 08, 2008

MacKenzie on the credit crisis

Edinburgh sociology professor Donald MacKenzie wrote what I feel is the best history (so far) of modern finance and derivatives. In this article in the London Review of Books, he tackles the current credit crisis. Highly recommended.

On Gaussian copula (cognitive limitations restrict attention to an obviously oversimplified model; big brains were worried from the start):

Correlation is by far the trickiest issue in valuing a CDO. Indeed, it is difficult to be precise about what correlation actually means: in practice, its determination is a task of mathematical modelling. Over the past ten years, a model known as the ‘single-factor Gaussian copula’ has become standard. ‘Single-factor’ means that the degree of correlation is assumed to reflect the varying extent to which fortunes of each debt-issuer depend on a single underlying variable, which one can interpret as the health of the economy. ‘Copula’ indicates that the mathematical issue being addressed is the connectedness of default risks, and ‘Gaussian’ refers to the use of a multi-dimensional variant of the statistician’s standard bell-shaped curve to model this connectedness.

The single-factor Gaussian copula is far from perfect: even before the crisis hit, I wasn’t able to get a single insider to express complete confidence in it. Nevertheless, it became a market Esperanto, allowing people in different institutions to discuss CDO valuation in a mutually intelligible way. But having a standard model is only part of the task of understanding correlation. Historical data are much less useful here. Defaults are rare events, and producing a plausible statistical estimate of the extent of the correlation between, say, the risk of default by Ford and by General Motors is difficult or impossible. So as CDOs gained popularity in the late 1990s and early years of this decade, often the best one could do was simply to employ a uniform, standard figure such as 30 per cent correlation, or use the correlation between two corporations’ stock prices as a proxy for their default correlations.
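For concreteness, here is a minimal sketch of how a single-factor Gaussian copula generates correlated defaults and tranche losses. This is illustrative only, not MacKenzie's calculation or any bank's production model; the pool size, default probability, and the 30 per cent correlation are assumptions taken from the conventions mentioned above.

```python
import numpy as np
from scipy.stats import norm

# Minimal single-factor Gaussian copula sketch (illustrative only).
# Obligor i defaults when its latent variable X_i falls below the threshold
# implied by its marginal default probability p, with
#   X_i = sqrt(rho) * M + sqrt(1 - rho) * Z_i,
# where M is the single common factor ("health of the economy") and Z_i is
# idiosyncratic noise, both standard normal.

rng = np.random.default_rng(0)

n_names = 125          # an iTraxx/CDX-style pool of 125 credits
p_default = 0.02       # assumed marginal default probability per name
rho = 0.30             # assumed correlation (the "30 per cent" convention)
n_scenarios = 50_000

threshold = norm.ppf(p_default)

M = rng.standard_normal((n_scenarios, 1))          # common factor
Z = rng.standard_normal((n_scenarios, n_names))    # idiosyncratic shocks
X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z

defaults = X < threshold                  # default indicators per scenario
pool_loss = defaults.mean(axis=1)         # fractional pool loss (no recovery)

# Loss on an equity-style tranche absorbing the first 3% of pool losses:
attach, detach = 0.00, 0.03
tranche_loss = np.clip(pool_loss - attach, 0, detach - attach) / (detach - attach)
print("mean pool loss:        ", round(pool_loss.mean(), 4))
print("mean 0-3% tranche loss:", round(tranche_loss.mean(), 4))
```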

Ratings, indices and implied correlation:

However imperfect the modelling of CDOs was, the results were regarded by the rating agencies as facts solid enough to allow them to grade CDO tranches. Indeed, the agencies made the models they used public knowledge in the credit markets: Standard & Poor’s, for example, was prepared to supply participants with copies of its ‘CDO Evaluator’ software package. A bank or hedge fund setting up a standard CDO could therefore be confident of the ratings it would achieve. Creators of CDOs liked that it was then possible to offer attractive returns to investors – which are normally banks, hedge funds, insurance companies, pension funds and the like, not private individuals – while retaining enough of the cash-flow from the asset pool to make the effort worthwhile. As markets recovered from the bursting of the dotcom and telecom bubble in 2000-2, the returns from traditional assets – including the premium for holding risky assets – fell sharply. (The effectiveness of CDOs and other credit derivatives in allowing banks to shed credit risk meant that they generally survived the end of the bubble without significant financial distress.) By early 2007, market conditions had been benign for nearly five years, and central bankers were beginning to talk of the ‘Great Stability’. In it, CDOs flourished.

Ratings aside, however, the world of CDOs remained primarily one of private facts. Each CDO is normally different from every other, and the prices at which tranches are sold to investors are not usually publicly known. So credible market prices did not exist. The problem was compounded by one of the repercussions of the Enron scandal. A trader who has done a derivatives deal wants to be able to ‘book’ the profits immediately, in other words have them recognised straightaway in his employer’s accounts and thus in the bonus that he is awarded that year. Enron and its traders had been doing this on the basis of questionable assumptions, and accounting regulators and auditors – the latter mindful of the way in which the giant auditing firm Arthur Andersen collapsed having been prosecuted for its role in the Enron episode – began to clamp down, insisting on the use of facts (observable market values) rather than mere assumptions in ‘booking’ derivatives. That credit correlation was not observable thus became much more of a problem.

From 2003 to 2004, however, the leading dealers in the credit-derivatives market set up fact-generating mechanisms that alleviated these difficulties: credit indices. These resemble CDOs, but do not involve the purchase of assets and, crucially, are standard in their construction. For example, the European and the North American investment-grade indices (the iTraxx and CDX IG) cover set lists of 125 investment-grade corporations. In the terminology of the market, you can ‘buy protection’ or ‘sell protection’ on either an index as a whole or on standard tranches of it. A protection seller receives fees from the buyer, but has to pay out if one or more defaults hit the index or tranche in question.

The fluctuating price of protection on an index as a whole, which is publicly known, provides a snapshot of market perceptions of credit conditions, while the trading of index tranches made correlation into something apparently observable and even tradeable. The Gaussian copula or a similar model can be applied ‘backwards’ to work out the level of correlation implied by the cost of protection on a tranche, which again is publicly known. That helped to satisfy auditors and to facilitate the booking of profits. A new breed of ‘correlation traders’ emerged, who trade index tranches as a way of taking a position on shifts in credit correlation.
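And here is the "backwards" step in sketch form: given a quoted expected tranche loss, solve for the correlation that reproduces it. This uses a large-homogeneous-pool approximation of the same copula; the quoted loss and the other parameters are made up for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

# Backing out an "implied correlation" from a quoted expected tranche loss,
# using a large-homogeneous-pool version of the single-factor Gaussian
# copula. All numbers here are hypothetical.

p = 0.02                      # assumed marginal default probability
attach, detach = 0.00, 0.03   # equity-style tranche (loss monotone in rho)

def expected_tranche_loss(rho):
    c = norm.ppf(p)
    def integrand(m):
        # pool loss fraction conditional on the common factor m
        loss = norm.cdf((c - np.sqrt(rho) * m) / np.sqrt(1 - rho))
        tranche = np.clip(loss - attach, 0, detach - attach) / (detach - attach)
        return tranche * norm.pdf(m)
    return quad(integrand, -8, 8)[0]

quoted_loss = 0.30            # hypothetical market-implied expected tranche loss

implied_rho = brentq(lambda r: expected_tranche_loss(r) - quoted_loss, 0.01, 0.95)
print("implied correlation:", round(implied_rho, 3))
```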

Indices and other tranches quickly became a huge-volume, liquid market. They facilitated the creation not just of standard CDOs but of bespoke products such as CDO-like structures that consist only of mezzanine tranches (which offer combinations of returns and ratings that many investors found especially attractive). Products of this kind leave their creators heavily exposed to changes in credit-market conditions, but the index market permitted them to hedge (that is, offset) this exposure.

Quants and massive computational power (one wonders whether the mathematics and computers did nothing more than lend a spurious air of technicality to untrustworthy basic assumptions):

With problems such as the non-observability of correlation apparently adequately solved by the development of indices, the credit-derivatives market, which emerged little more than a decade ago, had grown by June 2007 to an aggregate total of outstanding contracts of $51 trillion, the equivalent of $7,700 for every person on the planet. It is perhaps the most sophisticated sector of the global financial markets, and a fertile source of employment for mathematicians, whose skills are needed to develop models better than the single-factor Gaussian copula.

The credit market is also one of the most computationally intensive activities in the modern world. An investment bank with a big presence in the market will have thousands of positions in credit default swaps, CDOs, indices and similar products. The calculations needed to understand and hedge the exposure of this portfolio to market movements are run, often overnight, on grids of several hundred interconnected computers. The banks’ modellers would love to add as many extra computers as possible to the grids, but often they can’t do so because of the limits imposed by the capacity of air-conditioning systems to remove heat from computer rooms. In the City, the strain put on electricity-supply networks can also be a problem. Those who sell computer hardware to investment banks are now sharply aware that ‘performance per watt’ is part of what they have to deliver.

Collapse of rating agency credibility:

The rating agencies are businesses, and the issuers of debt instruments pay the agencies to rate them. The potential conflict of interest has always been there, even in the days when the agencies mainly graded bonds, which generally they did quite sensibly. However, the way in which the crisis has thrust the conflict into the public eye has further threatened the credibility of ratings. ‘In today’s market, you really can’t trust any ratings,’ one money-market fund manager told Bloomberg Markets in October 2007. She was far from alone in that verdict, and the result was cognitive contagion. Most investors’ ‘knowledge’ of the properties of CDOs and other structured products had been based chiefly on ratings, and the loss of confidence in them affected all such products, not just those based on sub-prime mortgages. Since last summer, it has been just about impossible to set up a new CDO.

Illiquid assets, difficulty of mark to market:

Over recent months, banks have frequently been accused of hiding their credit losses. The truth is scarier: such losses are extremely hard to measure credibly. Marking-to-market requires that there be plausible market prices to use in valuing a portfolio. But the issuing of CDOs has effectively stopped, liquidity has dried up in large sectors of the credit default swap market, and the credibility of the cost of protection in the index market has been damaged by processes of the kind I’ve been discussing.

How, for example, can one value a portfolio of mortgage-backed securities when trading in those securities has ceased? It has become common to use a set of credit indices, the ABX-HE (Asset Backed, Home Equity), as a proxy for the underlying mortgage market, which is now too illiquid for prices in it to be credible. However, the ABX-HE is itself affected by the processes that have undermined the robustness of the apparent facts produced by other sectors of the index market; in particular, the large demand for protection and reduced supply of it may mean the indices have often painted too uniformly dire a picture of the prospects for mortgage-backed securities. One trader told the Financial Times in April that the liquidity of the indices had become very poor: ‘Trading is mostly happening on interdealer screens between eight or ten guys, and this means that prices can move wildly on very light volume.’ Yet because the level of the ABX-HE indices is used by banks’ accountants and auditors to value their multi-billion dollar portfolios of mortgage-backed securities, this esoteric market has considerable effects, since low valuations weaken banks’ balance sheets, curtailing their capacity to lend and thus damaging the wider economy.

Josef Ackermann, the head of Deutsche Bank, has caused a stir by admitting ‘I no longer believe in the market’s self-healing power.’ ...

Sunday, June 10, 2012

ICQG and BGA 2012

In a week I'll be attending ICQG 2012 (International Conference on Quantitative Genetics) and BGA 2012 (annual meeting of the Behavior Genetics Association), both in Edinburgh. If you want to meet up, let me know!

The title of my talk is Some results on the genetic architecture of human intelligence. I will post my slides on the blog at some point. For now, here are a few (click for larger versions).











As a physicist and quantum mechanic it is natural for me to think about a genome as a vector in a very high dimensional space. Geometrical notions follow ...
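A toy version of that picture, with assumed allele frequencies and random genotypes: each genome becomes a long vector of minor-allele counts, and relatedness is just a normalized inner product of standardized genotype vectors (the usual entry of a genetic relationship matrix).

```python
import numpy as np

# Toy version of the "genome as a vector" picture: genotypes coded 0/1/2
# (minor-allele counts); after standardizing each locus, relatedness between
# two individuals is a normalized inner product of their genotype vectors.

rng = np.random.default_rng(1)
n_ind, n_snps = 4, 50_000
freqs = rng.uniform(0.05, 0.5, n_snps)                  # assumed allele frequencies
G = rng.binomial(2, freqs, size=(n_ind, n_snps))        # genotype matrix

Z = (G - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))  # standardize each locus
A = Z @ Z.T / n_snps                                    # genetic relationship matrix
print(np.round(A, 3))  # ~1 on the diagonal, ~0 off-diagonal for unrelated individuals
```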

Friday, April 11, 2014

Human Capital, Genetics and Behavior

See you in Chicago next week :-)
HCEO: Human Capital and Economic Opportunity Global Working Group

Conference on Genetics and Behavior

April 18, 2014 to April 19, 2014

This meeting will bring together researchers from a range of disciplines who have been exploring the role of genetic influences on socioeconomic outcomes. The approaches taken to incorporating genes into social science models differ widely. The first goal of the conference is to provide a forum in which alternative frameworks are discussed and critically evaluated. Second, we are hopeful that the meeting will trigger extended interactions and even future collaboration. Third, the meeting will help focus future genetics-related initiatives by the Human Capital and Economic Opportunity Global Working Group, which is pursuing the study of inequality and social mobility over the next several years.

PROGRAM

9:00 to 11:00
Genes and Socioeconomic Aggregates
Gregory Cochran University of Utah
Steven Durlauf University of Wisconsin–Madison
Henry Harpending University of Utah
Aldo Rustichini University of Minnesota
Enrico Spolaore Tufts University

11:30 to 1:30
Population-Based Studies
Sara Jaffee University of Pennsylvania
Matthew McGue University of Minnesota
Peter Molenaar
Jenae Neiderhiser

2:30 to 4:30
Genome-Wide Association Studies (GWAS)
Daniel Benjamin Cornell University
David Cesarini New York University
Dalton Conley New York University/NBER
Jason Fletcher University of Wisconsin–Madison
Philipp Koellinger University of Amsterdam

APRIL 19, 2014

9:00 to 11:00
Neuroscience
Paul Glimcher New York University
Jonathan King National Institute on Aging
Aldo Rustichini University of Minnesota

11:30 to 1:30
Intelligence
Stephen Hsu Michigan State University
Wendy Johnson University of Edinburgh
Rodrigo Pinto The University of Chicago

2:30 to 4:30
Role of Genes in Understanding Socioeconomic Status
Gabriella Conti University College London
Steven Durlauf University of Wisconsin–Madison
Felix Elwert University of Wisconsin–Madison
James Lee University of Minnesota

Tuesday, September 07, 2021

Kathryn Paige Harden Profile in The New Yorker (Behavior Genetics)

This is a good profile of behavior geneticist Paige Harden (UT Austin professor of psychology, former student of Eric Turkheimer), with a balanced discussion of polygenic prediction of cognitive traits and the culture war context in which it (unfortunately) exists.
Can Progressives Be Convinced That Genetics Matters? 
The behavior geneticist Kathryn Paige Harden is waging a two-front campaign: on her left are those who assume that genes are irrelevant, on her right those who insist that they’re everything. 
Gideon Lewis-Kraus
Gideon Lewis-Kraus is a talented writer who also wrote a very nice article on the NYTimes / Slate Star Codex hysteria last summer.

Some references related to the New Yorker profile:
1. The paper Harden was attacked for sharing while a visiting scholar at the Russell Sage Foundation: Game Over: Genomic Prediction of Social Mobility 

2. Harden's paper on polygenic scores and mathematics progression in high school: Genomic prediction of student flow through high school math curriculum 

3. Vox article; Turkheimer and Harden drawn into debate including Charles Murray and Sam Harris: Scientific Consensus on Cognitive Ability?

A recent talk by Harden, based on her forthcoming book The Genetic Lottery: Why DNA Matters for Social Equality



Regarding polygenic prediction of complex traits 

I first met Eric Turkheimer in person (we had corresponded online prior to that) at the Behavior Genetics Association annual meeting in 2012, which was back to back with the International Conference on Quantitative Genetics, both held in Edinburgh that year (photos and slides [1] [2] [3]). I was completely new to the field but they allowed me to give a keynote presentation (if memory serves, together with Peter Visscher). Harden may have been at the meeting but I don't recall whether we met. 

At the time, people were still doing underpowered candidate gene studies (there were many talks on this at BGA, although fewer at ICQG) and struggling to understand GCTA (the Visscher group's work showing one can estimate heritability from modestly large GWAS datasets, with results consistent with earlier twin and adoption studies). Consequently, a theoretical physicist talking about genomic prediction using AI/ML and a million genomes seemed like an alien time traveler from the future. Indeed, I was.

My talk is largely summarized here:
On the genetic architecture of intelligence and other quantitative traits 
https://arxiv.org/abs/1408.3421 
How do genes affect cognitive ability or other human quantitative traits such as height or disease risk? Progress on this challenging question is likely to be significant in the near future. I begin with a brief review of psychometric measurements of intelligence, introducing the idea of a "general factor" or g score. The main results concern the stability, validity (predictive power), and heritability of adult g. The largest component of genetic variance for both height and intelligence is additive (linear), leading to important simplifications in predictive modeling and statistical estimation. Due mainly to the rapidly decreasing cost of genotyping, it is possible that within the coming decade researchers will identify loci which account for a significant fraction of total g variation. In the case of height analogous efforts are well under way. I describe some unpublished results concerning the genetic architecture of height and cognitive ability, which suggest that roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation. Using results from Compressed Sensing (L1-penalized regression), I estimate the statistical power required to characterize both linear and nonlinear models for quantitative traits. The main unknown parameter s (sparsity) is the number of loci which account for the bulk of the genetic variation. The required sample size is of order 100s, or roughly a million in the case of cognitive ability.
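A scaled-down sketch of the compressed sensing point in the abstract: for a sparse additive trait with s causal loci, L1-penalized regression recovers most of the support once the sample size n is of order 100s. Everything here (dimensions, heritability, penalty) is a toy assumption, far smaller than the genomic case.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy compressed-sensing sketch: a sparse additive trait with s causal loci
# out of p SNPs, recovered by L1-penalized regression once the sample size n
# is of order 100*s (the scaling quoted above).

rng = np.random.default_rng(2)
n, p, s = 2_000, 5_000, 20            # n = 100*s
h2 = 0.5                              # assumed narrow-sense heritability

X = rng.standard_normal((n, p))       # standardized genotypes (toy: Gaussian)
beta = np.zeros(p)
causal = rng.choice(p, s, replace=False)
beta[causal] = rng.normal(0, np.sqrt(h2 / s), s)
y = X @ beta + rng.normal(0, np.sqrt(1 - h2), n)

fit = Lasso(alpha=0.05, max_iter=5000).fit(X, y)   # alpha would be tuned by CV in practice
found = np.flatnonzero(fit.coef_)
print("causal loci recovered:", len(set(found) & set(causal)), "of", s)
print("false positives:      ", len(set(found) - set(causal)))
```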
The predictions in my 2012 BGA talk and in the 2014 review article above have mostly been validated. Research advances often pass through the following phases of reaction from the scientific community:
1. It's wrong ("genes don't affect intelligence! anyway too complex to figure out... we hope")
2. It's trivial ("ofc with lots of data you can do anything... knew it all along")
3. I did it first ("please cite my important paper on this")
Or, as sometimes attributed to Gandhi: “First they ignore you, then they laugh at you, then they fight you, then you win.”



Technical note

In 2014 I estimated that ~1 million genotype | phenotype pairs would be enough to capture most of the common SNP heritability for height and cognitive ability. This was accomplished for height in 2017. However, the sample size of well-phenotyped individuals is much smaller for cognitive ability, even in 2021, than for height in 2017. For example, in UK Biobank the cognitive test is very brief (~5 minutes IIRC, a dozen or so questions), but it has not yet been administered to the full cohort. In the Educational Attainment studies the phenotype EA is only moderately correlated (~0.3 or so) with actual cognitive ability.

Hence, although the most recent EA4 results use 3 million individuals [1], and produce a predictor which correlates ~0.4 with actual EA, the statistical power available is still less than what I predicted would be required to train a really good cognitive ability predictor.

In our 2017 height paper, which also briefly discussed bone density and cognitive ability prediction, we built a cognitive ability predictor roughly as powerful as EA3 using only ~100k individuals with the noisy UKB test data. So I remain confident that ~1 million individuals with good cognitive scores (e.g., SAT, AFQT, full IQ test) would deliver results far beyond what we currently have available. We also found that our predictor, built using actual (albeit noisy) cognitive scores, exhibits less power reduction in within-family (sibling) analyses compared to EA. So there is evidence that (no surprise) EA is more influenced by environmental factors, including so-called genetic nurture effects, than is cognitive ability.

A predictor which captures most of the common SNP heritability for cognitive ability might correlate ~0.5 or 0.6 with actual ability. Applications of this predictor in, e.g., studies of social mobility or educational success or even longevity using existing datasets would be extremely dramatic.
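The arithmetic behind those numbers, made explicit: a predictor capturing a fraction R² of phenotypic variance correlates √R² with the phenotype, and measuring ability with a noisy test attenuates the observed correlation by the square root of the test's reliability. The reliability and correlation values below are illustrative assumptions, not estimates from our papers.

```python
import numpy as np

# Back-of-envelope arithmetic for the statements above (illustrative values).
# A predictor capturing a fraction R2 of phenotypic variance correlates
# sqrt(R2) with the phenotype; a test with reliability rel attenuates the
# observed correlation by sqrt(rel).

for r2 in (0.25, 0.30, 0.36):
    print(f"captured variance {r2:.2f}  ->  correlation {np.sqrt(r2):.2f}")

rel = 0.6        # assumed reliability of a brief cognitive test
true_r = 0.55    # hypothetical correlation of the predictor with true ability
print("observed correlation with the noisy test:", round(true_r * np.sqrt(rel), 2))
```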

Tuesday, November 18, 2008

Bill Janeway interview

Via The Big Picture, this wonderful interview with Bill Janeway. Janeway was trained as an academic economist (PhD Cambridge), but spent his career on Wall Street, most recently in private equity. I first met Bill at O'Reilly's foo camp; we've had several long conversations about finance and the markets. The interview is long, but read the whole thing! Topics covered include: physicists and quants in finance, mark to market, risk, regulatory and accounting regimes, market efficiency.

The IRA: How did we get into this mess?

Janeway: It took two generations of the best and the brightest who were mathematically quick and decided to address themselves to the issues of capital markets. They made it possible to create the greatest mountain of leverage that the world has ever seen. In my own way, I do track it back to the construction of the architecture of modern finance theory, all the way back to Harry Markowitz writing a thesis at the University of Chicago which Milton Friedman didn’t think was economics. He was later convinced to allow Markowitz to get his doctorate at the University of Chicago in 1950. Then we go on through the evolution of modern finance and the work that led to the Nobel prizes, Miller, Modigliani, Scholes and Merton. The core of this grand project was to reconstruct financial economics as a branch of physics. If we could treat the agents, the atoms of the markets, people buying and selling, as if they were molecules, we could apply the same differential equations to finance that describe the behavior of molecules. What that entails is to take as the raw material, time series data, prices and returns, and look at them as the observables generated by processes which are stationary. By this I mean that the distribution of observables, the distribution of prices, is stable over time. So you can look at the statistical attributes like volatility and correlation amongst them, above all liquidity, as stable and mathematically describable. So consequently, you could construct ways to hedge any position by means of a “replicating portfolio” whose statistics would offset the securities you started with. There is a really important book written by a professor at the University of Edinburgh named Donald MacKenzie. He is a sociologist of economics and he went into the field, onto the floor in Chicago and the trading rooms, to do his research. He interviewed everybody and wrote a great book called An Engine Not a Camera. It is an analytical history of the evolution of modern finance theory. Where the title comes from is that modern finance theory was not a camera to capture how the markets worked, but rather an engine to transform them.

...

Janeway: Yes, but here the agents were principals! I think something else was going on. It was my son, who worked for Bear, Stearns in the equity department in 2007, who pointed out to me that Bear, Stearns and Lehman Brothers had the highest proportion of employee stock ownership on Wall Street. Many people believed, by no means only the folks at Bear and Lehman, that the emergence of Basel II and the transfer to the banks themselves of responsibility for determining the amount of required regulatory capital based upon internal ratings actually reduced risk and allowed higher leverage. The move by the SEC in 2004 to give regulatory discretion to the dealers regarding leverage was the same thing again.

The IRA: And both regimes falsely assume that banks and dealers can actually construct a viable ratings methodology, even relying heavily on vendors and ratings firms. There are still some people at the BIS and the other central banks who believe that Basel II is viable and effective, but none of the risk practitioners with whom we work has anything but contempt for the whole framework. It reminds us of other utopian initiatives such as fair value accounting or affordable housing, everyone sells the vision but misses the pesky details that make it real! And the same religious fervor behind the application of physics to finance was behind the Basel II framework and complex structured assets.

Janeway: That’s my point. It was a kind of religious movement, a willed suspension of disbelief. If we say that the assumptions necessary to produce the mathematical models hold in the real world, namely that markets are efficient and complete, that agents are rational, that agents have access to all of the available data, and that they all share the same model for transforming that data into actionable information, and finally that this entire model is true, then at the end of the day, leverage should be infinite. Market efficiency should rise to the point where there isn’t any spread left to be captured. The fact that a half a percent unhedged swing in your balance sheet can render you insolvent, well it doesn’t fit with this entire constructed intellectual universe that goes back 50 years.

...

Janeway: There are a couple of steps along the way here that got us to the present circumstance, such as the issue of regulatory capture. When you talk about regulatory capture and risk, the capture here of the regulators by the financial industry was not the usual situation of corrupt capture. The critical moment came in the early 1980s, which is very well documented in MacKenzie’s book, when the Chicago Board appealed to academia because it was then the case that in numerous states, cash settlement futures were considered gambling and were banned by law.

...

Janeway: The point here is that the regulators were captured intellectually, not monetarily. And the last to be converted, to have the religious conversion experience, were the accountants, leading to fair value accounting rules. I happen to be the beneficiary of a friendship with a wonderful man, Geoff Whittington, who is a professor emeritus of accounting at Cambridge, who was chief accountant of the British Accounting Standards Board and was a founder of the International Accounting Standards Board. He is from the inside an appropriately knowledgeable, balanced skeptic, who has done a wonderful job of parsing out what is involved in this discussion in a paper called “Two World Views.” Basically, he says that if you really do believe that we live in a world of complete and efficient markets, then you have no choice but to be an advocate of fair value, mark-to-market accounting. If, on the other hand, you see us living in a world of incomplete, but reasonably efficient markets, in which the utility of the numbers you are trying to generate have to do with stewardship of a business through real, historical time rather than a snapshot of “truth,” then you are in a different world. And that is a world where the concept of fair value is necessarily contingent.

Previous posts on Donald MacKenzie's work. MacKenzie is perhaps the most insightful of academics working on the history and development of modern finance.

Friday, September 20, 2013

Childhood SES amplifies genetic effects on adult intelligence

Timothy Bates, a professor of psychology at the University of Edinburgh, and an occasional commenter on this blog, has a new paper out, which looks quite interesting. [See comments for references to additional literature and an overview from Tim!]
Childhood Socioeconomic Status Amplifies Genetic Effects on Adult Intelligence

Studies of intelligence in children reveal significantly higher heritability among groups with high socioeconomic status (SES) than among groups with low SES. These interaction effects, however, have not been examined in adults, when between-families environmental effects are reduced. Using 1,702 adult twins (aged 24–84) for whom intelligence assessment data were available, we tested for interactions between childhood SES and genetic effects, between-families environmental effects, and unique environmental effects. Higher SES was associated with higher mean intelligence scores. Moreover, the magnitude of genetic influences on intelligence was proportional to SES. By contrast, environmental influences were constant. These results suggest that rather than setting lower and upper bounds on intelligence, genes multiply environmental inputs that support intellectual growth. This mechanism implies that increasing SES may raise average intelligence but also magnifies individual differences in intelligence.
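A toy simulation (my own illustration, not the Bates et al. model) of how this kind of gene-by-SES interaction shows up in twin data: if the additive genetic path coefficient grows with childhood SES while environmental variances stay fixed, Falconer's estimate h2 = 2(rMZ − rDZ) comes out higher in the high-SES group.

```python
import numpy as np

# Toy twin simulation: the additive genetic path coefficient a is assumed to
# scale with childhood SES while shared- and unique-environment variances
# stay fixed, so Falconer's h2 = 2*(rMZ - rDZ) is larger at high SES.

rng = np.random.default_rng(3)

def twin_corr(a, mz, n_pairs=50_000, c_sd=0.6, e_sd=0.8):
    """Phenotypic correlation between twins for genetic path coefficient a."""
    g1 = rng.standard_normal(n_pairs)
    # MZ twins share all genes; DZ twins share half (additive expectation)
    g2 = g1 if mz else 0.5 * g1 + np.sqrt(0.75) * rng.standard_normal(n_pairs)
    c = rng.standard_normal(n_pairs)            # shared environment
    e1, e2 = rng.standard_normal((2, n_pairs))  # unique environments
    y1 = a * g1 + c_sd * c + e_sd * e1
    y2 = a * g2 + c_sd * c + e_sd * e2
    return np.corrcoef(y1, y2)[0, 1]

for label, a in [("low SES ", 0.5), ("high SES", 1.1)]:  # assumed moderation of a
    h2 = 2 * (twin_corr(a, mz=True) - twin_corr(a, mz=False))
    print(f"{label}: Falconer h2 estimate = {h2:.2f}")
```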
See also WSJ coverage by Alison Gopnik:
... When psychologists first started studying twins, they found identical twins much more likely to have similar IQs than fraternal ones. They concluded that IQ was highly "heritable"—that is, due to genetic differences. But those were all high SES twins. Erik Turkheimer of the University of Virginia and his colleagues discovered that the picture was very different for poor, low-SES twins. For these children, there was very little difference between identical and fraternal twins: IQ was hardly heritable at all. Differences in the environment, like whether you lucked out with a good teacher, seemed to be much more important.

In the new study, the Bates team found this was even true when those children grew up. IQ was much less heritable for people who had grown up poor. This might seem paradoxical: After all, your DNA stays the same no matter how you are raised. The explanation is that IQ is influenced by education. Historically, absolute IQ scores have risen substantially as we've changed our environment so that more people go to school longer.

Richer children have similarly good educational opportunities, so genetic differences among them become more apparent. And since richer children have more educational choice, they (or their parents) can choose environments that accentuate and amplify their particular skills. A child who has genetic abilities that make her just slightly better at math may be more likely to take a math class, so she becomes even better at math.

But for poor children, haphazard differences in educational opportunity swamp genetic differences. Ending up in a terrible school or one a bit better can make a big difference. And poor children have fewer opportunities to tailor their education to their particular strengths. ...

Tuesday, October 08, 2013

Nobels for Higgs and Englert


Congratulations to Peter Higgs and François Englert on their Nobel prize. A bit of background from an earlier post How the Higgs boson became the Higgs boson:
IIRC, I met Peter Higgs in Erice in 1990. He was quite a nice fellow, but the story below by Steve Weinberg illustrates how capricious is the allocation of credit in science.

NYBooks: (Footnote 1) In his recent book, The Infinity Puzzle (Basic Books, 2011), Frank Close points out that a mistake of mine was in part responsible for the term “Higgs boson.” In my 1967 paper on the unification of weak and electromagnetic forces, I cited 1964 work by Peter Higgs and two other sets of theorists. This was because they had all explored the mathematics of symmetry-breaking in general theories with force-carrying particles, though they did not apply it to weak and electromagnetic forces. As known since 1961, a typical consequence of theories of symmetry-breaking is the appearance of new particles, as a sort of debris. A specific particle of this general class was predicted in my 1967 paper; this is the Higgs boson now being sought at the LHC.
As to my responsibility for the name “Higgs boson,” because of a mistake in reading the dates on these three earlier papers, I thought that the earliest was the one by Higgs, so in my 1967 paper I cited Higgs first, and have done so since then. Other physicists apparently have followed my lead. But as Close points out, the earliest paper of the three I cited was actually the one by Robert Brout and François Englert. In extenuation of my mistake, I should note that Higgs and Brout and Englert did their work independently and at about the same time, as also did the third group (Gerald Guralnik, C.R. Hagen, and Tom Kibble). But the name “Higgs boson” seems to have stuck.

[ Note that to Higgs' credit his is the only paper that clearly works out the properties of the excitation now known as the Higgs boson. ]
Jeffrey Goldstone showed (1961) that when rigid ("global") continuous symmetries are spontaneously broken by the vacuum (the vacuum configuration is not invariant under the symmetry), a massless boson necessarily results. This boson is the eponymous Goldstone boson: the particle excitation corresponding to small perturbations of the vacuum state in the direction of the symmetry. The natural next step is to ask what happens if the broken symmetry is a gauge (local) symmetry. This is the problem that Higgs et al. solved. But Goldstone had one of the first cracks at the problem. Indeed, Jeffrey deduced the existence of a massive excitation (i.e., the Higgs boson), but its physical reality was in question -- only apparent in certain "choices of gauge"; gauge theory was not then very well understood. According to legend, Sidney Coleman convinced Goldstone that the boson was only a gauge artifact. For years afterward Goldstone would say that Sidney, despite his obvious brilliance, was, when it really counted, always wrong!
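For readers who want the textbook version: in the abelian Higgs model a complex scalar with a Mexican-hat potential is coupled to a U(1) gauge field; the would-be Goldstone mode is gauged away ("eaten"), leaving a massive gauge boson and a massive radial excitation, the Higgs boson. A compressed sketch (standard material, not specific to the 1964 papers):

```latex
% Abelian Higgs sketch (textbook material).
% A complex scalar with a Mexican-hat potential coupled to a U(1) gauge field:
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
            + |D_\mu \phi|^2
            - \lambda \left( |\phi|^2 - \tfrac{v^2}{2} \right)^2 ,
\qquad D_\mu = \partial_\mu - i e A_\mu .

% Expanding about the broken vacuum, \phi(x) = \tfrac{1}{\sqrt{2}}\,(v + h(x))\,
% e^{i \theta(x)/v}, the phase \theta (the would-be Goldstone boson) is removed
% by a gauge transformation ("eaten"), leaving a massive gauge field and a
% massive radial excitation h -- the Higgs boson:
m_A = e v, \qquad m_h = \sqrt{2 \lambda}\, v .
```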

I met Englert for the first time in 2008 at a workshop in Paris on the black hole information problem. Over coffee, he explained to me some mysterious comments 't Hooft had made in his talk. A real gentleman, and still very sharp.

A photo from the summer school in Erice, Sicily 1990. Higgs is in the blue socks and sandals, holding a glass of wine. I'm in a maroon shirt two rows back.


A portrait of Higgs in the physics department of the University of Edinburgh.

Monday, April 22, 2013

Common variants vs mutational load

I recommend this blog post (The Differentialist) by Timothy Bates of the University of Edinburgh. (I met Tim there at last year's Behavior Genetics meeting.) He discusses the implications of GCTA results showing high heritability of IQ as measured using common SNPs (see related post Eric, why so gloomy?). One unresolved issue (see comments there) is to what extent mutational load (deleterious effects due to very rare variants) can account for population variation in IQ. The standard argument is that very rare variants will not be well tagged by common SNPs and hence the heritability results (e.g., of about 0.5) found by GCTA suggest that a good chunk of variation is accounted for by common variants (e.g., MAF > 0.05). The counter argument (which I have not yet seen investigated fully) is that relatedness defined over a set of common SNPs is correlated to the similarity in mutational load of a pair of individuals, due to the complex family history of human populations. IIRC, "unrelated" individuals selected at random from a common ethnic group and region are, on average, roughly as related as third cousins (say, r ~ 1E-02?).
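A quick sanity check on that third-cousin figure, using standard pedigree arithmetic (not from Tim's post):

```python
# Cousins of degree n share a pair of common ancestors n+1 generations back,
# so their coefficient of relationship is
#   r = 2 * (1/2)**(2*(n+1)) = (1/2)**(2*n + 1).
for n in range(1, 5):
    r = 0.5 ** (2 * n + 1)
    print(f"cousin degree {n}: r = {r:.4f}")   # third cousins: r ~ 0.008 ~ 1E-02
```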

Is the heritability detected using common SNPs due to specific common variants tagged by SNPs, or due to a general correlation between SNP relatedness and overall similarity of genomes?

My guess is that we'll find that both common variants and mutational load are responsible for variation in cognitive ability. Does existing data provide any limit on the relative ratio? This requires a calculation, but my intuition is that mutational load cannot account for everything. Fortunately, with whole genome data you can look both for common variants and at mutational load at the same time.

In the case of height it's now clear that common variants account for a significant fraction of heritability, but there is also evidence for a mutational load component. Note that we don't expect to discover any common variants for IQ until past a threshold in sample size, which for height turned out to be about 10k.



Hmm, now that I think about it ... there does seem to be a relevant calculation :-)

In the original GCTA paper (Yang et al. Nature Genetics 2010), it was found that relatedness computed on a set of common genotyped SNPs is a poor predictor of relatedness on rare SNPs (e.g., MAF < 0.1). The rare SNPs are in poor linkage disequilibrium (LD) with the genotyped SNPs, due to the difference in MAF. This was proposed as a plausible mechanism for the still-missing heritability (e.g., 0.4 vs 0.8 expected from classical twin/sib studies; Yang et al. specifically looked at height): if the actual causal variants tend to be rarer than the common genotyped SNPs, the genotypic similarity of two individuals where it counts -- on the causal variants -- would be incorrectly estimated, leading to an underestimate of heritability.

If these simulations are any guide, rare mutations are unlikely to account for the GCTA heritability, but rather may account for (some of) the gap between it and the total additive heritability. See, for example, the following discussion:
A commentary on “Common SNPs explain a large proportion of the heritability for human height” by Yang et al. (2010)

(p.6) ... We cannot measure the LD between causal variants and genotyped SNPs directly because we do not know the causal variants. However, we can estimate the LD between SNPs. If the causal variants have similar characteristics to the SNPs, the LD between causal variants and SNPs should be similar to that between the SNPs themselves. One causal variant can be in LD with multiple SNPs and so the SNPs collectively could trace the causal variant even though no one SNP was in perfect LD with it. Therefore we divided the SNPs randomly into two groups and treated the first group as if they were causal variants and asked how well the second group of SNPs tracked these simulated causal variants. This can be judged by the extent to which the relationship matrices calculated from the SNPs agree with the relationship matrix calculated from the ‘causal variants’. The covariance between the estimated relationships for the two sets of SNPs equals the true variance of relatedness whereas the variance of the estimates of relatedness for each set of SNPs equals true variation in relatedness plus estimation error. Therefore, from the regression of pairwise relatedness estimated from one of the set of SNPs onto the estimated pairwise relatedness from the other set of SNPs we can quantify the amount of error and ‘regress back’ or ‘shrink’ the estimate of relatedness towards the mean to take account of the prediction error.

... If causal variants have a lower MAF than common SNPs the LD between SNPs and causal variants is likely to be lower than the LD between random SNPs. To investigate the effect of this possibility we used SNPs with low MAF to mimic causal variants. We found that the relationship estimated by random SNPs (with MAF typical of the genotyped SNPs on the array) was a poorer predictor of the relationship at these ‘causal variants’ than it was of the relationship at other random SNPs. When the relationship matrix at the SNPs is shrunk to provide an unbiased estimate of the relationship at these ‘causal variants’, we find that the ‘causal variants’ would explain 80% of the phenotypic variance ...
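A toy version of the split-SNP check described in the quoted passage (my sketch, not the Yang et al. code): estimate relatedness from one set of SNPs, treat a second set as surrogate "causal variants", and regress one relatedness estimate on the other to quantify estimation error and the implied shrinkage.

```python
import numpy as np

# Toy split-SNP shrinkage check on a sample of unrelated individuals.

rng = np.random.default_rng(4)
n_ind, n_snps = 500, 20_000

freqs = rng.uniform(0.05, 0.5, n_snps)
G = rng.binomial(2, freqs, size=(n_ind, n_snps))      # toy sample of unrelateds
Z = (G - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))

half = n_snps // 2
A1 = Z[:, :half] @ Z[:, :half].T / half               # relatedness from SNP set 1
A2 = Z[:, half:] @ Z[:, half:].T / half               # "causal variant" set 2

iu = np.triu_indices(n_ind, k=1)                      # off-diagonal pairs
r1, r2 = A1[iu], A2[iu]

# Covariance of the two estimates ~ true variance of relatedness; variance of
# either estimate ~ true variance + estimation error. Their ratio is the
# regression ("shrinkage") coefficient. With unrelated toy individuals almost
# all apparent variation is noise, so the slope comes out near zero; in real
# samples with subtle relatedness it lies between 0 and 1.
slope = np.cov(r1, r2)[0, 1] / np.var(r1, ddof=1)
print("regression (shrinkage) coefficient:", round(slope, 3))
```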

Friday, March 05, 2021

Genetic correlation of social outcomes between relatives (Fisher 1918) tested using lineage of 400k English individuals

Greg Clark (UC Davis and London School of Economics) deserves enormous credit for producing a large multi-generational dataset which is relevant to some of the most fundamental issues in social science: inequality, economic development, social policy, wealth formation, meritocracy, and recent human evolution. If you have even a casual interest in the dynamics of human society you should study these results carefully...

See previous discussion on this blog. 

Clark recently posted this preprint on his web page. A book covering similar topics is forthcoming.
For Whom the Bell Curve Tolls: A Lineage of 400,000 English Individuals 1750-2020 shows Genetics Determines most Social Outcomes 
Gregory Clark, University of California, Davis and LSE (March 1, 2021) 
Economics, Sociology, and Anthropology are dominated by the belief that social outcomes depend mainly on parental investment and community socialization. Using a lineage of 402,000 English people 1750-2020 we test whether such mechanisms better predict outcomes than a simple additive genetics model. The genetics model predicts better in all cases except for the transmission of wealth. The high persistence of status over multiple generations, however, would require in a genetic mechanism strong genetic assortative in mating. This has been until recently believed impossible. There is however, also strong evidence consistent with just such sorting, all the way from 1837 to 2020. Thus the outcomes here are actually the product of an interesting genetics-culture combination.
The correlational results in the table below were originally deduced by Fisher under the assumption of additive genetic inheritance: h2 is heritability, m is assortativity by genotype, r assortativity by phenotype. (Assortative mating describes the tendency of husband and wife to resemble each other more than randomly chosen M-F pairs in the general population.)
Fisher, R. A. 1918. “The Correlation between Relatives on the Supposition of Mendelian Inheritance.” Transactions of the Royal Society of Edinburgh, 52: 399-433
Thanks to Clark the predictions of Fisher's models, applied to social outcomes, can now be compared directly to data through many generations and across many branches of English family trees. (Figures below from the paper.)





The additive model fits the data well, but requires high heritabilities h2 and a high level m of assortative mating. Most analysts, including myself, thought that the required values of m were implausibly large. However, using modern genomic datasets one can estimate the level of assortative mating by simply looking at the genotypes of married couples. 
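For orientation, here are the standard Fisher-style expectations under a purely additive model with assortative mating (my summary, not a table from Clark's paper): the phenotypic correlation between relatives n generations apart is roughly h2 ((1+m)/2)^n, so high persistence over many generations forces high h2, high m, or both. The h2 and m values below are illustrative only.

```python
import numpy as np

# Expected correlations between relatives under a purely additive model with
# assortative mating and no shared environment: relatives n generations apart
# correlate roughly h2 * ((1 + m) / 2)**n, where m is the spousal correlation
# in additive genetic value.

def relative_correlation(h2, m, generations_apart):
    return h2 * ((1 + m) / 2) ** generations_apart

h2, m = 0.6, 0.6   # illustrative values only
for n, label in [(1, "parent-offspring"), (2, "grandparent"), (3, "great-grandparent")]:
    print(f"{label:18s} expected correlation = {relative_correlation(h2, m, n):.3f}")

# With m = 0 the correlation falls as (1/2)**n, i.e. much faster -- which is
# why high multi-generation persistence points to strong assortment.
```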

From the paper:
(p.26) a recent study from the UK Biobank, which has a collection of genotypes of individuals together with measures of their social characteristics, supports the idea that there is strong genetic assortment in mating. Robinson et al. (2017) look at the phenotype and genotype correlations for a variety of traits – height, BMI, blood pressure, years of education - using data from the biobank. For most traits they find as expected that the genotype correlation between the parties is less than the phenotype correlation. But there is one notable exception. For years of education, the phenotype correlation across spouses is 0.41 (0.011 SE). However, the correlation across the same couples for the genetic predictor of educational attainment is significantly higher at 0.654 (0.014 SE) (Robinson et al., 2017, 4). Thus couples in marriage in recent years in England were sorting on the genotype as opposed to the phenotype when it comes to educational status. 
It is not mysterious how this happens. The phenotype measure here is just the number of years of education. But when couples interact they will have a much more refined sense of what the intellectual abilities of their partner are: what is their general knowledge, ability to reason about the world, and general intellectual ability. Somehow in the process of matching modern couples in England are combining based on the weighted sum of a set of variations at several hundred locations on the genome, to the point where their correlation on this measure is 0.65.
Correction: Height, Educational Attainment (EA), and cognitive ability predictors are controlled by many thousands of genetic loci, not hundreds! 


This is a 2018 talk by Clark which covers most of what is in the paper.



For out of sample validation of the Educational Attainment (EA) polygenic score, see Game Over: Genomic Prediction of Social Mobility.

 

Wednesday, August 17, 2005

Brain cells from stem cells

Can we please have sensible science policy in the US before it is too late? Why are we allowing the UK and S. Korea to race ahead in this important area of research? (See previous post on therapeutic cloning by Seoul National University researchers.)

Guardian: British scientists create first pure brain stem cells

Scientists have made the world's first pure batch of brain stem cells from human stem cells. The breakthrough is important in the fight against neuro-degenerative diseases such as Alzheimer's and Parkinson's and could also reduce the number of animals used in medical research.
Stem cells can change into any type of cell in the body. How they change, a process known as differentiation, remains a mystery but scientists think certain chemical and environmental signals must trigger it.

Austin Smith of Edinburgh University's institute for stem cell research bathed stem cells taken from mouse embryos with two proteins called epidermal growth factor and fibroblast growth factor, both of which are known to be involved in the normal development of the embryonic brain. After his team had shown the process turned embryonic mouse stem cells into brain stem cells, they repeated the experiment on human embryonic stem cells.

Brain stem cells have been grown before but the results have been impure. "You end up with a mixed culture at the end which has not just neural stem cells, it has a lot of contaminating embryonic stem cells," said Steve Pollard, one of Professor Smith's colleagues and a co-author of the results, published yesterday in the journal PLoS Biology.

The work comes three months after scientists at Newcastle University cloned a human embryo using donated eggs and genetic material from stem cells. Human embryos were first cloned last year by South Korean scientists.

In the short term, the technique will allow scientists to develop cell cultures for their research. "We'll use them in the basic biology sense to try to understand how stem cells work," Professor Pollard said. "It's a good opportunity to understand what the difference is between an embryonic stem cell, which can make anything, and a brain stem cell, which can just make brain."

Through genetic modification, scientists will also use the technique to mimic brain diseases.

Tim Allsopp, the chief scientific officer of Stem Cell Science, the company given an exclusive licence to commercialise the research, said: "The remarkable stability and purity of the cells is something unique in the field of tissue stem cells and a great step forward. We have already had a number of approaches from pharmaceutical companies interested in using these cells to test and develop new drugs, and are looking forward to working with them to further develop and license the technology."

In the longer term the technology raises hopes of growing cells to replace damaged parts of the brain. But Professor Smith said there was a long way to go: "We know these cells can survive if we put them back in the brain but whether they can do anything useful is a much more complicated question."
