Showing posts with label bounded rationality. Show all posts

Thursday, July 27, 2023

Paul Huang, the real situation in Taiwan: politics, military, China — Manifold #40

 


Paul Huang is a journalist and research fellow with the Taiwanese Public Opinion Foundation. He is currently based in Taipei, Taiwan. 

Sample articles: 

Taiwan’s Military Has Flashy American Weapons but No Ammo (in Foreign Policy): https://foreignpolicy.com/2020/08/20/taiwan-military-flashy-american-weapons-no-ammo/ 

Taiwan’s Military Is a Hollow Shell (Foreign Policy): 


Audio-only and transcript:


Steve and Paul discuss: 

0:00 Introduction 
1:44 Paul’s background; the Green Party (DPP) and Blue Party (KMT) in Taiwan 
4:40 How the Taiwanese people view themselves vs mainland Chinese 
15:02 Taiwan taboos: politics and military preparedness 
15:27 Effect of Ukraine conflict on Taiwanese opinion 
29:56 Lack of realistic military planning 
37:20 Is there a political solution to reunification with China? What influence does the U.S. have? 
51:34 The likelihood of peaceful reunification of Taiwan and China 
56:45 Honest views on Taiwanese and U.S. military readiness for a conflict with China

Wednesday, June 28, 2023

Embryo Selection: Healthy Babies vs Bad Arguments

Great article by Diana Fleischman, Ives Parr, Jonathan Anomaly, and Laurent Tellier.
Polygenic screening and its discontents 
... But monogenic and chromosomal screening can only address a part of disease risk because most health conditions that afflict people are polygenic, meaning they are not simply caused by one gene or by a chromosomal abnormality. Instead, they are caused by a huge number of small additive effects dispersed throughout the genome. For example, cancer, schizophrenia, and diabetes can be best predicted by models using tens of thousands of genes. 
A polygenic risk score (PRS) looks at a person’s DNA to see how many variants they have associated with a particular disease. Like BRCA1, polygenic risk scores are typically not determinative: “Polygenic screening is not a diagnosis: It is a prediction of relative future risk compared to other people.” In other words, someone with BRCA1 has a higher risk than someone without, and someone with a high breast cancer PRS has a higher risk than someone with a lower breast cancer PRS. But in principle, BRCA1 is just one gene out of thousands contributing to a PRS, with each bit contributing a small part of a total risk estimate. ... 
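The additive score described in the excerpt can be sketched in a few lines: a PRS is just a weighted sum of risk-allele counts across many variants. This is a minimal illustration, not a real scoring pipeline; the variant IDs and effect weights below are invented for the example, not actual GWAS results.

```python
# Hypothetical sketch of an additive polygenic risk score (PRS).
# Variant IDs and effect weights are made up for illustration.

def polygenic_score(genotype, weights):
    """genotype: variant id -> risk-allele count (0, 1, or 2).
    weights: variant id -> per-allele effect size (from a GWAS).
    Returns the weighted sum over all scored variants."""
    return sum(w * genotype.get(variant, 0) for variant, w in weights.items())

weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}  # invented values
person_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
person_b = {"rs0001": 0, "rs0002": 2, "rs0003": 0}

# The raw number is meaningless in isolation: as the excerpt says, a PRS
# is a *relative* risk prediction, interpreted against the distribution
# of scores in a reference population -- not a diagnosis.
print(polygenic_score(person_a, weights))
print(polygenic_score(person_b, weights))
```

In real use the sum runs over thousands to millions of variants, and the score is converted to a percentile against a reference cohort before being reported.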

 

... Recently, a group of European scientists argued that polygenic screening should not be available to couples because it will lead to stigmatization, exacerbate inequalities, or lead to confusion by parents about how to weigh up information about risks before they decide which embryo to implant. These are indeed challenges, but they are not unique to embryo selection using polygenic scores, and they are not plausible arguments for restricting the autonomy of parents who wish to screen their embryos for polygenic traits. Furthermore, from an ethical perspective, it is unconscionable to deny polygenic screening to families with a history of any disease whose risk can be reduced by this lifesaving technology. 
Many new technologies are initially only available to people with more money, but these first adopters then end up subsidizing research that drives costs down and quality up. Many other medical choices involve complexity or might result in some people being stigmatized, but this is a reason to encourage genetic counseling and to encourage social tolerance. It is not a reason to marginalize, stigmatize, or criminalize IVF mothers and fathers who wish to use the best available science to increase the chances that their children will be healthy and happy.
This is a comment on the article:
1) They don't want to admit that some people are better than others, inherently. Boo hoo. 
2) You put a scorecard of embryos in front of everyone, and everyone has a pretty good ballpark estimate of which are better and which are worse. Nobody is going to pretend equality is true when they are choosing their kids' genes. 
3) So bad feels. 
4) Must therefore retard all human progress and cause immense suffering because don't want to deal with bad feels. 
That's the anti-polygenic argument in a nutshell. I don't expect it to be very effective. At best it will cause it to take a bit longer before poor people have access.

Thursday, January 19, 2023

Dominic Cummings: Vote Leave, Brexit, COVID, and No. 10 with Boris — Manifold #28

 

Dominic Cummings is a major historical figure in UK politics. He helped save the Pound Sterling, led the Vote Leave campaign, Got Brexit Done, and guided the Tories to a landslide general election victory. His time in No. 10 Downing Street as Boris Johnson's Chief Advisor was one of the most interesting and impactful periods in modern UK political history.  Dom and Steve discuss all of this and more in this 2-hour episode. 

0:00 Early Life: Oxford, Russia, entering politics 
16:49 Keeping the UK out of the Euro 
19:41 How Dominic and Steve became acquainted: blogs, 2008 financial crisis, meeting at Google 
27:37 Vote Leave, the science of polling 
43:46 Cambridge Analytica conspiracy; History is impossible 
48:41 Dominic on Benedict Cumberbatch’s portrayal of him and the movie “Brexit: The Uncivil War” 
54:05 On joining British Prime Minister Boris Johnson’s office: an ultimatum 
1:06:31 The pandemic 
1:21:28 The Deep State, talent pipeline for public service 
1:47:25 Quants and weirdos invade No.10 
1:52:06 Can the Tories win the next election? 
1:56:27 Trump in 2024? 



References: 

Dominic's Substack newsletter: https://dominiccummings.substack.com/

Sunday, November 13, 2022

Smart Leftists vs Dumb Leftists

Thursday, November 03, 2022

Richard Sander on SCOTUS Oral Arguments: Affirmative Action and Discrimination against Asian Americans at Harvard and UNC (Manifold #23)

 

Richard Sander is Jesse Dukeminier Professor at UCLA Law School. AB Harvard, JD, PhD (Economics) Northwestern. 

Sander has studied the structure and effects of law school admissions policies. He coined the term "Mismatch" to describe the negative consequences resulting from large admissions preferences. 

Rick and Steve discuss recent oral arguments at the Supreme Court in Students for Fair Admissions vs Harvard College and Students For Fair Admissions vs the University of North Carolina. 

0:00 Rick’s experience at the Supreme Court 
4:11 Rick’s impression of the oral arguments 
16:24 Analyzing the court’s questions 
29:09 The negative impact on Asian American students 
34:41 Shifting sentiment on affirmative action 
40:04 Three potential outcomes for Harvard and UNC cases 
44:00 Possible reasons for conservatives to be optimistic 
50:31 Final thoughts on experiencing oral arguments in person 
52:12 Mismatch theory 
56:31 The future of higher education 

Resources 

Background on the Harvard case: 

Transcripts: 

Previous interview with Richard (Manifold #6)

See the Crimson for some photos of the parties involved

Sunday, August 14, 2022

Tweet Treats: AI in PRC, Semiconductors and the Russian War Machine, Wordcels are Midwits

Some recent tweets which might be of interest :-)

Tuesday, May 03, 2022

How We Learned, Then Forgot, About Human Intelligence... And Witnessing the Live Breakdown of Academia (podcast interview with Cactus Chu)

This is a long interview I did recently with Cactus Chu, a math prodigy turned political theorist and podcaster. (Unfortunately I can't embed the podcast here.)


Timestamps: 
3:24 Interview Starts  
15:49 Cactus' Experience with High Math People 
19:49 High School Sports 
21:26 Comparison to Intelligence 
26:29 Is Lack of Understanding due to Denial or Ignorance? 
29:29 The Past and Present of Selection in Academia 
37:02 How Universities Look from the Inside 
44:19 Informal Networks Replacing Credentials 
48:37 Capture of Research Positions 
50:24 Progressivism as Demagoguery Against the Self-Made 
55:31 Innumeracy is Common 
1:06:53 Understanding Innumerate People 
1:13:53 Skill Alignment at Cactus' High School 
1:18:12 Free Speech in Academia 
1:21:00 You Shouldn't Fire Exceptional People 
1:23:03 The Anti-Excellence Progressives 
1:28:42 Rawls, Nozick, and Technology 
1:34:00 Freedom = Variance = Inequality 
1:37:58 Dating Apps 
1:41:27 Jumping Into Social Problems From a Technical Background 
1:41:50 Steve's High School Pranks 
1:46:43 996 and Cactus' High School 
1:50:26 The Vietnam War and Social Change 
1:53:07 Are Podcasts the Future? 
1:59:37 The Power of New Things 
2:02:56 The Birth of Twitter 
2:07:27 Selection Creates Quality 
2:10:21 Incentives of University Departments 
2:16:29 Woke Bureaucrats 
2:27:59 Building a New University 
2:30:42 What needs more order? 
2:31:56 What needs more chaos?

An automated (i.e., imperfect) transcript of our discussion.

Here's an excerpt from the podcast:

Saturday, November 27, 2021

Social and Educational Mobility: Denmark vs USA (James Heckman)




Despite generous social programs such as free pre-K education, free college, and massive transfer payments, Denmark is similar to the US in key measures of inequality, such as educational outcomes and cognitive test scores. 

While transfer payments can equalize, to some degree, disposable income, they do not seem to be able to compensate for large family effects on individual differences in development. 

These observations raise the following questions: 

1. What is the best case scenario for the US if all progressive government programs are implemented with respect to child development, free high quality K12 education, free college, etc.?

2. What is the causal mechanism for stubborn inequality of outcomes, transmitted from parent to child (i.e., within families)? 

Re #2: Heckman and collaborators focus on environmental factors, but do not (as far as I can tell) discuss genetic transmission. We already know that polygenic scores are correlated to the education and income levels of parents, and (from adoption studies) that children tend to resemble their biological parents much more strongly than their adoptive parents. These results suggest that genetic transmission of inequality may dominate environmental transmission.
  
See 



The Contribution of Cognitive and Noncognitive Skills to Intergenerational Social Mobility (McGue et al. 2020)


Note: Denmark is very homogeneous in ancestry, and the data presented in these studies (e.g., polygenic scores and social mobility) are also drawn from European-ancestry cohorts. The focus here is not on ethnicity or group differences between ancestry groups. The focus is on social and educational mobility within European-ancestry populations, with or without generous government programs supporting free college education, daycare, pre-K, etc.

Lessons for Americans from Denmark about inequality and social mobility 
James Heckman and Rasmus Landersø 
Abstract Many progressive American policy analysts point to Denmark as a model welfare state with low levels of income inequality and high levels of income mobility across generations. It has in place many social policies now advocated for adoption in the U.S. Despite generous Danish social policies, family influence on important child outcomes in Denmark is about as strong as it is in the United States. More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects. Powerful forces not easily mitigated by Danish-style welfare state programs operate in both countries.
Also discussed in this episode of EconTalk podcast. Russ does not ask the obvious question about disentangling family environment from genetic transmission of inequality.
 

The figure below appears in Game Over: Genomic Prediction of Social Mobility. It shows SNP-based polygenic score and life outcome (socioeconomic index, on the vertical axis) in four longitudinal cohorts, one from New Zealand (Dunedin) and three from the US. Each cohort (varying somewhat in size) has thousands of individuals, ~20k in total (all of European ancestry). The points displayed are averages over bins containing 10-50 individuals. For each cohort, the individuals have been grouped by childhood (family) socioeconomic status. Social mobility can be predicted from polygenic score. Note that higher-SES families tend to have higher polygenic scores on average -- which is what one might expect from a society that is at least somewhat meritocratic. The cohorts have not been used in training -- this is true out-of-sample validation. Furthermore, the four cohorts represent different geographic regions (even different continents) and individuals born in different decades.




The figure below appears in More on SES and IQ.

Where is the evidence for environmental effects described above in Heckman's abstract: "More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects"? Do parents not seek these advantages for their adopted children as well as for their biological children? Or is there an entirely different causal mechanism based on shared DNA?

 


 

Tuesday, November 09, 2021

The Balance of Power in the Western Pacific and the Death of the Naval Surface Ship

Recent satellite photos suggest that PLARF (People's Liberation Army Rocket Forces) have been testing against realistic moving ship targets in the deserts of the northwest. Note the ship model is on rails in the second photo below. Apparently there are over 30km of rail lines, allowing the simulation of evasive maneuvers by an aircraft carrier (third figure below).


Large surface ships such as aircraft carriers are easy to detect (e.g., satellite imaging via radar sensors), and missiles (especially those with maneuver capability) are very difficult to stop. Advances in AI / machine learning tend to favor missile targeting, not defense of carriers. 

The key capability is autonomous final target acquisition by the missile at a range of tens of km -- i.e., the distance the ship can move during missile flight time after launch. State of the art air to air missiles already do this in BVR (Beyond Visual Range) combat. Note, they are much smaller than anti-ship missiles, with presumably much smaller radar seekers, yet are pursuing a smaller, faster, more maneuverable target (enemy aircraft). 

It seems highly likely that the technical problem of autonomous targeting of a large surface ship during final missile approach has already been solved some time ago by the PLARF. 

With this capability in place one only has to localize the carrier to within a few tens of km for initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, etc. Obviously, if a satellite retains coverage of the ship during the entire attack and can communicate with the missile, even this smart final targeting is not required.

This is what a ship looks like to Synthetic Aperture Radar (SAR) from Low Earth Orbit (LEO).  PRC has had a sophisticated system (Yaogan) in place for almost a decade, and continues to launch new satellites for this purpose.



See LEO SAR, hypersonics, and the death of the naval surface ship:

In an earlier post we described how sea blockade (e.g., against Japan or Taiwan) can be implemented using satellite imaging and missiles, drones, AI/ML. Blue water naval dominance is not required. 
PLAN/PLARF can track every container ship and oil tanker as they approach Kaohsiung or Nagoya. All are in missile range -- sitting ducks. Naval convoys will be just as vulnerable. 
Sink one tanker or cargo ship, or just issue a strong warning, and no shipping company in the world will be stupid enough to try to run the blockade. 

But, But, But, !?! ...
USN guy: We'll just hide the carrier from the satellite and missile seekers using, you know, countermeasures! [Aside: don't cut my carrier budget!] 
USAF guy: Uh, the much smaller AESA/IR seeker on their AAM can easily detect an aircraft from much longer ranges. How will you hide a huge ship? 
USN guy: We'll just shoot down the maneuvering hypersonic missile using, you know, methods. [Aside: don't cut my carrier budget!] 
Missile defense guy: Can you explain to us how to do that? If the incoming missile maneuvers we have to adapt the interceptor trajectory (in real time) to where we project the missile to be after some delay. But we can't know its trajectory ahead of time, unlike for a ballistic (non-maneuvering) warhead.
More photos and maps in this 2017 post.

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)




Sadly, the hypothesis described below is very plausible. 

The exception being that new tools or technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
 
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by real insight or capability to produce real results.
 
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.
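The toy model quoted above (noisy experiments, belief updating, crowd behavior) is easy to make concrete. Below is a minimal sketch under assumptions of my own choosing -- the parameter values and the "suppression" mechanism (contrarian results sometimes never getting published) are illustrative stand-ins for crowd behavior, not the specific model from the original post.

```python
import random

def simulate(n_scientists=50, rounds=2000, accuracy=0.7,
             suppression=0.0, prior=0.2, seed=1):
    """Toy model of a scientific field. Each scientist holds a probability
    that hypothesis H is true (ground truth: H IS true, but everyone starts
    skeptical, belief = `prior`). Each round one scientist runs a noisy
    experiment that reports the truth with probability `accuracy` and does
    a Bayesian update -- unless the result contradicts the current
    consensus and is discarded with probability `suppression`, a crude
    stand-in for crowd behavior / publication bias."""
    rng = random.Random(seed)
    beliefs = [prior] * n_scientists
    for _ in range(rounds):
        i = rng.randrange(n_scientists)
        result = rng.random() < accuracy        # True = evidence for H
        consensus = sum(beliefs) / n_scientists > 0.5
        if result != consensus and rng.random() < suppression:
            continue                            # contrarian result never published
        b = beliefs[i]
        if result:   # standard Bayes update on a binary, `accuracy`-reliable result
            beliefs[i] = b * accuracy / (b * accuracy + (1 - b) * (1 - accuracy))
        else:
            beliefs[i] = b * (1 - accuracy) / (b * (1 - accuracy) + (1 - b) * accuracy)
    return sum(beliefs) / n_scientists          # mean belief in the field

# With no suppression, accumulated evidence wins and mean belief converges
# toward the truth; with strong suppression, the field can remain locked
# in the false consensus it started with -- exactly the kind of parameter
# region where "most beliefs held by individual scientists are false."
print(simulate(suppression=0.0))
print(simulate(suppression=0.9))
```

Even this crude model shows the qualitative point: whether a field self-corrects depends on parameters (experimental accuracy, tolerance for contrarian results), not just on the good faith of individual scientists.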

Friday, August 27, 2021

Tragedy of Empire / Mostly Sociopaths at the Top

 

Ecclesiastes 1:9 (KJV) The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.

Turn off the TV and close the browser tabs with mainstream media content produced by middlebrow conformists. Watch this video instead and read the links below. 

If you were surprised by events in Afghanistan over the past weeks, ask yourself why you were so out of touch with a reality that has been clear to careful observers for over a decade. Then ask yourself what other things you are dead wrong about...

Ray McGovern is a retired CIA analyst who served as Chief of the Soviet Foreign Policy Branch and preparer/briefer of the President’s Daily Brief. Prior to that he served as an infantry/intelligence officer in the 1960s. 

McGovern wrote Welcome to Vietnam, Mr. President (addressed to President Obama, about Afghanistan) in 2009. 

See also: The Strategic Lessons Unlearned from Vietnam, Iraq, and Afghanistan: Why the Afghan National Security Forces Will Not Hold, and the Implications for the U.S. Army in Afghanistan (Strategic Studies Institute and U.S. Army War College Press 2015) M. Chris Mason  

Related posts:

Tears before the Rain: An Oral History of the Fall of South Vietnam (Afghanistan darkness over Kabul edition) 



Afghanistan is lost (2012)




Podcast version of the interview at top:
   



More color here from Danny Sjursen, West Point graduate, former US Army Major. Sjursen is a combat veteran who served in Iraq and later as an Army captain in Afghanistan in command of 4-4 Cavalry B Troop in Kandahar Province from February 2011 to January 2012.




Added from comments:
At the strategic level it has been clear for 10+ years that our resources were better used elsewhere. It was obvious as well that we were not succeeding in nation building or creating a self-sustaining government there. I could go into more detail but you can get it from the links / interviews in the post. 
At the tactical level it should have been obvious that a quick collapse was very possible, just as in S. Vietnam (see earlier oral history post). Off-topic: same thing could happen in Taiwan in event of an actual invasion, but US strategists are clueless. 
Biden deserves credit for staying the course and not kicking the can down the road, as effectively a generation (slight exaggeration) of military and political leaders have done. 
The distortion of the truth by senior leaders in the military and in politics is clear for all to see. Just read what mid-level commanders (e.g., Sjursen) and intel analysts with real familiarity have to say. This was true for Vietnam and Iraq as well. Don't read media reports or listen to what careerist generals (or even worse, politicos) have to say. 
Execution by Biden team was terrible and I think they really believed the corrupt US-puppet Afghan govt could survive for months or even years (i.e., they are really stupid). Thus their exit planning was deeply flawed and events overtook them. However, even a well-planned exit strategy would likely not have avoided similar (but perhaps smaller in magnitude) tragic events like the ones we are seeing now. 
The ISIS-K attack on the airport was 100% predictable. I don't think most Americans (even "leaders" and "experts") understood that ISIS-K and the Taliban are mortal enemies, etc. etc. 
There is more of a late-stage imperial decline feel to Afghanistan and Iraq -- use of mercenaries, war profiteering, etc. -- than in Vietnam. All of these wars were tragic and unnecessary, but there really was a Cold War against an existential rival. The "war on terrorism" should always have been executed as a police / intel activity, not one involving hundreds of thousands of US soldiers. 
All of this is (in part) an unavoidable cost of having intellectually weak leaders struggling with difficult problems, while subject to low-information populist democracy (this applies to both parties and even to "highly educated" coastal elites; the latter are also low-information from my perspective). This situation is only going to get worse with time for the US. 
BTW, I could describe an exactly analogous situation in US higher ed (with which I am quite familiar): leaders are intellectually weak, either do not understand or understand and cynically ignore really serious problems, are mainly concerned with their own careers and not the real mission goals, are subject to volatility from external low-information interest groups, etc.

Monday, July 19, 2021

The History of the Planck Length and the Madness of Crowds

I had forgotten about the 2005-06 email correspondence reproduced below, but my collaborator Xavier Calmet reminded me of it today and I was able to find these messages.

The idea of a minimal length of order the Planck length, arising due to quantum gravity (i.e., quantum fluctuations in the structure of spacetime), is now widely accepted by theoretical physicists. But as Professor Mead (University of Minnesota, now retired) elaborates, based on his own experience, it was considered preposterous for a long time. 
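For reference, the Planck scales at issue are fixed purely by dimensional analysis from $\hbar$, $G$, and $c$:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m},
\qquad
t_P = \frac{\ell_P}{c} = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \text{s}.
```

The heuristic behind a minimal length: quantum mechanics forces a measuring device of size $\ell$ to carry momentum (hence energy) uncertainty that grows as $\ell$ shrinks, while general relativity says that concentrating enough energy in a region of size $\ell$ collapses it into a black hole; the two constraints meet at roughly $\ell_P$.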

Large groups of people can be wrong for long periods of time -- in financial markets, academia, even theoretical physics. 

Our paper, referred to by Mead, is 

Minimum Length from Quantum Mechanics and Classical General Relativity 

X. Calmet, M. Graesser, and S. Hsu  

https://arxiv.org/abs/hep-th/0405033  

Phys Rev Letters Vol. 93, 211101 (2004)

The related idea, first formulated by R. Buniy, A. Zee, and myself, that the structure of Hilbert Space itself is likely discrete (or "granular") at some fundamental level, is currently considered preposterous, but time will tell. 

More here

At bottom I include a relevant excerpt from correspondence with Freeman Dyson in 2005.


Dear Drs. Calmet, Graesser, Hsu,

I read with interest your article in Phys Rev Letters Vol. 93, 21101 (2004), and was pleasantly surprised to see my 1964 paper cited (second citation of your ref. 1).  Not many people have cited this paper, and I think it was pretty much forgotten the day it was published, & has remained so ever since.  To me, your paper shows again that, no matter how one looks at it, one runs into problems trying to measure a distance (or synchronize clocks) with greater accuracy than the Planck length (or time).

I feel rather gratified that the physics community, which back then considered the idea of the Planck length as a fundamental limitation to be quite preposterous, has since come around to (more or less) my opinion.  Obviously, I deserve ZERO credit for this, since I'm sure that the people who finally reached this conclusion, whoever they were, were unaware of my work.  To me, this is better than if they had been influenced by me, since it's good to know that the principles of physics lead to this conclusion, rather than the influence of an individual.  I hope that makes sense. ...

You might be amused by one story about how I finally got the (first) paper published after 5 years of referee problems.  A whole series of referees had claimed that my eq. (1), which is related to your eq. (1), could not be true.  I suspect that they just didn't want to read any further.  Nothing I could say would convince them, though I'm sure you would agree that the result is transparently obvious.  So I submitted another paper which consisted of nothing but a lengthy detailed proof of eq. (1), without mentioning the connection with the gravitation paper.  The referees of THAT paper rejected it on the grounds that the result was trivially obvious!!  When I pointed out this discrepancy to the editors, I got the gravitation paper reconsidered and eventually published.

But back then no one considered the Planck length to be a candidate as a fundamental limitation.  Well, almost no one.  I did receive support from Henry Primakoff, David Bohm, and Roger Penrose.  As far as I can recall, these were the only theoretical physicists of note who were willing to take this idea seriously (and I talked to many, in addition to reading the reports of all the referees).

Well anyway, I greet you, thank you for your paper and for the citation, and hope you haven't found this e-mail too boring.

Yours Sincerely,

C.  Alden  Mead


Dear Dr. Mead,

Thank you very much for your email message. It is fascinating to learn the history behind your work. We found your paper to be clearly written and useful.

Amusingly, we state at the beginning of our paper something like "it is widely believed..." that there is a fundamental Planck-length limit. I am sure your paper made a contribution to this change in attitude. The paper is not obscure as we were able to find it without much digging.

Your story about the vicissitudes of publishing rings true to me. I find such stories reassuring given the annoying obstacles we all face in trying to make our little contributions to science.

Finally, we intend to have a look at your second paper. Perhaps we will find another interesting application of your ideas.

Warm regards,

Stephen Hsu

Xavier Calmet

Michael Graesser

 

Dear Steve,

Many thanks for your kind reply.  I find the information quite interesting, though as you say it leaves some historical questions unanswered.  I think that Planck himself arrived at his length by purely dimensional considerations, and he supposedly considered this very important.

As you point out, it's physically very reasonable, perhaps more so in view of more recent developments.  It seemed physically reasonable to me back in 1959, but not to most of the mainstream theorists of the time.

I think that physical considerations (such as yours and mine) and mathematical ones should support and complement each other.  The Heisenberg-Bohr thought experiments tell us what a correct mathematical formalism should provide, and the formal quantum mechanics does this and, of course, much more.  Same with the principle of equivalence and general relativity.  Now, the physical ideas regarding the Planck length & time may serve as a guide in constructing a satisfactory formalism.  Perhaps string theory will prove to be the answer, but I must admit that I'm ignorant of all details of that theory.

Anyway, I'm delighted to correspond with all of you as much as you wish, but I emphasize that I don't want to be intrusive or become a nuisance.

As my wife has written you (her idea, not mine), your e-mail was a nice birthday present.

Kindest Regards, Alden


See also this letter from Mead which appeared in Physics Today.  


The following is from Freeman Dyson:
 ... to me the most interesting is the discrete Hilbert Space paper, especially your reference [2] proving that lengths cannot be measured with error smaller than the Planck length. I was unaware of this reference but I had reached the same conclusion independently.

 

Tuesday, July 06, 2021

Decline of the American Empire: Afghan edition (stay tuned for more)

There are photos and video to remind us of the ignominious US withdrawal from S. Vietnam after twenty years of conflict and millions of deaths.

Over the July 4th weekend, the US military abandoned Bagram air base in Afghanistan without even informing the Afghan commander and his troops.

Conveniently for our warmongering neocon "nation-building" interventionist elites, there are (as yet) no photos of this pullout.
BAGRAM, Afghanistan (AP) — The U.S. left Afghanistan’s Bagram Airfield after nearly 20 years by shutting off the electricity and slipping away in the night without notifying the base’s new Afghan commander, who discovered the Americans’ departure more than two hours after they left, Afghan military officials said. 
Afghanistan’s army showed off the sprawling air base Monday, providing a rare first glimpse of what had been the epicenter of America’s war to unseat the Taliban and hunt down the al-Qaida perpetrators of the 9/11 attacks on America. The U.S. announced Friday it had completely vacated its biggest airfield in the country in advance of a final withdrawal the Pentagon says will be completed by the end of August. 
“We (heard) some rumor that the Americans had left Bagram ... and finally by seven o’clock in the morning, we understood that it was confirmed that they had already left Bagram,” Gen. Mir Asadullah Kohistani, Bagram’s new commander said. ...
I wrote the following (2017) in Remarks on the Decline of American Empire:
1. US foreign policy over the last decades has been disastrous -- trillions of dollars and thousands of lives expended on Middle Eastern wars, culminating in utter defeat. This defeat is still not acknowledged among most of the media or what passes for intelligentsia in academia and policy circles, but defeat it is. Iran now exerts significant control over Iraq and a swath of land running from the Persian Gulf to the Mediterranean. None of the goals of our costly intervention have been achieved. We are exhausted morally, financially, and militarily, and still have not fully extricated ourselves from a useless morass. George W. Bush should go down in history as the worst US President of the modern era. 
2. We are fortunate that the fracking revolution may lead to US independence from Middle Eastern energy. But policy elites have to fully recognize this possibility and pivot our strategy to reflect the decreased importance of the region. The fracking revolution is a consequence of basic research from decades ago (including investment from the Department of Energy) and the work of private sector innovators and risk-takers. 
3. US budget deficits are a ticking time bomb, which cripple investment in basic infrastructure and also in research that creates strategically important new technologies like AI. US research spending has been roughly flat in inflation adjusted dollars over the last 20 years, declining as a fraction of GDP. 
4. Divisive identity politics and demographic trends in the US will continue to undermine political cohesion and overall effectiveness of our institutions. ("Civilizational decline," as one leading theoretical physicist observed to me recently, remarking on our current inability to take on big science projects.) 
5. The Chinese have almost entirely closed the technology gap with the West, and dominate important areas of manufacturing. It seems very likely that their economy will eventually become significantly larger than the US economy. This is the world that strategists have to prepare for. Wars involving religious fanatics in unimportant regions of the world should not distract us from a possible future conflict with a peer competitor that threatens to match or exceed our economic, technological, and even military capability.
If you are young and naive and still believe that we can mostly trust our media and government, watch these videos for a dose of reality.




[ The video embedded above was a documentary about Julian Assange and Wikileaks on the DW channel, which I had queued to show the Collateral Murder video. It included an interview with the US soldier who saved one of the children in the rescue van that was hit with 30mm Apache fire. Inexplicably, DW has now removed the video from their channel. Click through to YouTube below for the content. ]




Some things never change. Recall the personal sacrifices made by people like Daniel Ellsberg to reveal the truth about the Vietnam war. Today it is Julian Assange...




Note Added: While it was easy to predict this outcome in 2017, it wasn't much harder to call it in 2011. See this piece from The Onion:
KABUL, AFGHANISTAN—In what officials said was the "only way" to move on from what has become a "sad and unpleasant" situation, all 100,000 U.S. military and intelligence personnel crept out of their barracks in the dead of night Sunday and quietly slipped out of Afghanistan. 
U.S. commanders explained their sudden pullout in a short, handwritten note left behind at Bagram Airfield, their largest base of operations in the country. 
"By the time you read this, we will be gone," the note to the nation of Afghanistan read in part. "We regret any pain this may cause you, but this was something we needed to do. We couldn't go on like this forever." 
"We still care about you very much, but, in the end, we feel this is for the best," the note continued. "Please, just know that we are truly sorry and that we wish you all the greatest of happiness in the future." 
... After reportedly taking a "long look in the mirror" last week, senior defense officials came to the conclusion that they had "wasted a decade of [their] lives" with Afghanistan ...

Saturday, June 19, 2021

LEO SAR, hypersonics, and the death of the naval surface ship

 

Duh... Let's spend ~$10B each for new aircraft carriers that can be easily monitored from space and attacked using hypersonic missiles. 

Sure, in a real war with a peer competitor we'll have to hide them far from the conflict zone. But they're great for intimidating small countries...

More on aircraft carriers.

The technology described in the videos is LEO SAR = Low Earth Orbit Synthetic Aperture Radar. For some people it takes vivid imagery to convey rather basic ideas.

In an earlier post we described how a sea blockade (e.g., against Japan or Taiwan) can be implemented using satellite imaging and missiles, drones, AI/ML. Blue water naval dominance is not required. PLAN/PLARF can track every container ship and oil tanker as they approach Kaohsiung or Nagoya. All are in missile range -- sitting ducks. Naval convoys will be just as vulnerable.

Sink one tanker or cargo ship, or just issue a strong warning, and no shipping company in the world will be stupid enough to try to run the blockade. With imaging accuracy of ~1 m, missile accuracy will be similar to that of precision-guided munitions using GPS.
 


Excerpt below from China’s Constellation of Yaogan Satellites and the Anti-Ship Ballistic Missile – An Update, International Strategic and Security Studies Programme (ISSSP), National Institute of Advanced Studies (NIAS -- India), December 2013. With present technology it is easy to launch LEO (Low Earth Orbit) micro-satellites on short notice to track ships, but the PRC has had a sophisticated system in place for almost a decade.
Authors: Professor S. Chandrashekar and Professor Soma Perumal 
We can state with confidence that the Yaogan satellite constellation and its associated ASBM system provide visible proof of Chinese intentions and capabilities to keep ACG strike groups well away from the Chinese mainland. 
Though the immediate purpose of the system is to deter the entry of a hostile aircraft carrier fleet into waters that directly threatens its security interests especially during a possible conflict over Taiwan, the same approach can be adopted to deter entry into other areas of strategic interest
Viewed from this perspective the Chinese do seem to have in place an operational capability for denying or deterring access into areas which it sees as crucial for preserving its sovereignty and security.
ICEYE, a Finnish micro-satellite company, wants to use its constellation to monitor the entire planet -- Every Square Meter, Every Hour. This entire network would cost well under a billion USD, and it uses off-the-shelf technology. 

It seems plausible to me that PLARF would be able to put up additional microsats of this type even during a high-intensity conflict, e.g., using mobile launchers like those for the DF-21/26/41. A few ~10-minute contacts per day from a small LEO SAR constellation (i.e., just a few satellites) provides enough targeting data to annihilate a surface fleet in the western Pacific.
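As a back-of-the-envelope check on those contact times (the ~500 km altitude is my illustrative assumption, not a figure from any cited source), Kepler's third law gives the orbital period and hence the number of passes a single satellite makes per day:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def orbital_period_min(altitude_m):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU) / 60

# Assume a ~500 km SAR microsat altitude (typical for this class of satellite)
T = orbital_period_min(500e3)
orbits_per_day = 24 * 60 / T
print(f"period ~{T:.1f} min, ~{orbits_per_day:.1f} orbits/day")
```

A ~500 km orbit has a period of roughly 95 minutes, i.e., ~15 orbits per day per satellite, each pass offering minutes of visibility over a western Pacific target box. So a handful of satellites yielding a few ~10-minute contacts per day is geometrically plausible.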




Added from comments:
... you can make some good guesses based on physics and the technologies involved. 
1. Very hard to hit a hypersonic missile that is maneuvering on its way in. It's faster than the interceptor missiles and they can't anticipate its trajectory if it, e.g., selects a random maneuver pattern. 
2. I don't think there are good countermeasures for hiding the carrier from LEO SAR. I don't even think there are good countermeasures against final targeting seekers (IR/radar) on the ASBM (or a hypersonic cruise missile) but this depends on details. 
3. If the satellite has the target acquired during the final approach it can transmit the coordinates to the missile in flight and the missile does not have to depend on the seeker. On the Chinese side it is claimed that the ASBM can receive both satellite and OTH radar targeting info while in flight. This seems plausible technologically, and similar capability is already present in PLAAF AAM (i.e., mid-flight targeting data link from jet which launched the AAM). 
4. The radar cross section of a large ship is orders of magnitude larger than, e.g., a jet fighter. The payload of a DF21/26/17 is much larger than an AAM so I would guess the seeker could be much more powerful than the IR/AESA seeker in, e.g., PL-15 or similar. (Note PL-15 and PL-XX/21 have very long (BVR) engagement ranges, like 150km or even 400km and this is against aircraft targets, not massive ships.) The IR/radar seeker in an ASBM could be comparable to those in a jet fighter. 
I seriously doubt you can hide a big ship from a hypersonic missile seeker that is much larger and more powerful than anything on an AAM, possibly as powerful as the sensors on a jet fighter. 
On launch the missile will have a good fix on the target location from the satellite data. In the ~10-minute time of flight the uncertainty in the location of, e.g., a carrier is ~10 km. So the seeker needs to find the target in a region of roughly that size, assuming no in-flight update of target location.
https://www.iiss.org/public... 
https://sameerjoshi73.mediu... 
Finally, keep in mind that sensor (both the missile seeker and on the satellite) and AI/ML capability are improving rapidly, so the trend is definitely against the carrier.
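For concreteness, the ~10 km search-region estimate above follows from nothing more than speed times flight time; the 30-knot carrier speed is my assumed number:

```python
import math

carrier_speed = 30 * 0.514   # ~30 knots in m/s (assumed flank speed)
flight_time = 10 * 60        # ~10-minute missile time of flight, in seconds

# Radius of the circle the carrier could reach during the flight,
# and the corresponding search area for the terminal seeker.
radius_km = carrier_speed * flight_time / 1000
area_km2 = math.pi * radius_km ** 2
print(f"search radius ~{radius_km:.0f} km, area ~{area_km2:.0f} km^2")
```

This gives a radius of roughly 9 km. At ~1 m imaging accuracy, the dominant uncertainty is clearly the target's motion during flight, not the initial fix -- hence the emphasis above on in-flight targeting updates and terminal seekers.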

USN guy: We'll just hide the carrier from the satellite and missile seekers using, you know, countermeasures!  [Aside: don't cut my carrier budget!]

USAF guy: Uh, the much smaller AESA/IR seeker on their AAM can easily detect an aircraft from much longer ranges. How will you hide a huge ship?

USN guy: We'll just shoot down the maneuvering hypersonic missile using, you know, methods. [Aside: don't cut my carrier budget!]

Missile defense guy: Can you explain to us how to do that? If the incoming missile maneuvers we have to adapt the interceptor trajectory (in real time) to where we project the missile to be after some delay. But we can't know its trajectory ahead of time, unlike for a ballistic (non-maneuvering) warhead.

Wednesday, November 18, 2020

Polls, Election Predictions, Political Correctness, Bounded Cognition (2020 Edition!)

Some analysis of the crap polling and poor election prediction leading up to Nov 2020. See earlier post (and comments): Election 2020: quant analysis of new party registrations vs actual votes, where I wrote (Oct 14)
I think we should ascribe very high uncertainty to polling results in this election, for a number of reasons including the shy Trump voter effect as well as the sampling corrections applied which depend heavily on assumptions about likely turnout. ... 
This is an unusual election for a number of reasons so it's quite hard to call the outcome. There's also a good chance the results on election night will be heavily contested.
Eric Kaufmann is Professor of Politics at Birkbeck College, University of London.
UnHerd: ... Far from learning from the mistakes of 2016, the polling industry seemed to have got things worse. Whether conducted by private or public firms, at the national or local, presidential or senatorial, levels, polls were off by wide margins. The Five Thirty-Eight final poll of polls put Biden ahead by 8.4 points, but the actual difference in popular vote is likely to be closer to 3-4 points. In some close state races, the error was even greater. 
Why did they get it so wrong? Pollsters typically receive low response rates to calls, which leads them to undercount key demographics. To get around this, they typically weight for key categories like race, education or gender. If they get too few Latinos or whites without degrees, they adjust their numbers to match the actual electorate. But most attitudes vary far more within a group like university graduates, than between graduates and non-graduates. So even if you have the correct share of graduates and non-graduates, you might be selecting the more liberal-minded among them. 
For example, in the 2019 American National Election Study pilot survey, education level predicts less than 1% of the variation in whether a white person voted for Trump in 2016. By contrast, their feelings towards illegal immigrants on a 0-100 thermometer predicts over 30% of the variation. Moreover, immigration views pick out Trump from Clinton voters better within the university-educated white population than among high school-educated whites. Unless pollsters weight for attitudes and psychology – which is tricky because these positions can be caused by candidate support – they miss much of the action. 
Looking at this election’s errors — which seem to have been concentrated among white college graduates — I wonder if political correctness lies at the heart of the problem.
... According to a Pew survey on October 9, Trump was leading Biden by 21 points among white non-graduates but trailing him by 26 points among white graduates. Likewise, a Politico/ABC poll on October 11 found that ‘Trump leads by 26 points among white voters without four-year college degrees, but Biden holds a 31-point lead with white college graduates.’ The exit polls, however, show that Trump ran even among white college graduates 49-49, and even had an edge among white female graduates of 50-49! This puts pre-election surveys out by a whopping 26-31 points among white graduates. By contrast, among whites without degrees, the actual tilt in the election was 64-35, a 29-point gap, which the polls basically got right.
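A toy simulation (all numbers invented for illustration, not a model of 2020) shows the mechanism Kaufmann describes: if the vote is driven by an attitude that also suppresses response rates, then weighting the sample to the correct education shares does nothing to remove the bias.

```python
import random
random.seed(0)

# Invented population: education (grad/non-grad) plus an "attitude" that
# actually drives the vote; education alone barely predicts it.
population = []
for _ in range(100_000):
    grad = random.random() < 0.4
    attitude = random.gauss(-0.2 if grad else 0.2, 1.0)
    vote_trump = attitude > 0
    population.append((grad, attitude, vote_trump))

true_share = sum(v for _, _, v in population) / len(population)

# Hypothesized nonresponse mechanism: Trump-leaning respondents answer
# pollsters half as often, regardless of education.
sample = [p for p in population if random.random() < (0.05 if p[2] else 0.10)]

def weighted_share(rows):
    """Reweight the sample to the population's graduate share (the standard fix)."""
    pop_grad = sum(g for g, _, _ in population) / len(population)
    smp_grad = sum(g for g, _, _ in rows) / len(rows)
    w = lambda g: (pop_grad / smp_grad) if g else ((1 - pop_grad) / (1 - smp_grad))
    return sum(w(g) * v for g, _, v in rows) / sum(w(g) for g, _, _ in rows)

print(f"true {true_share:.3f}  weighted poll {weighted_share(sample):.3f}")
```

Even with the education shares matched exactly, the weighted poll badly underestimates the true vote share, because within each education group the pollster reaches the more liberal-minded respondents -- exactly the within-group selection Kaufmann points to.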
See also this excellent podcast interview with Kaufmann: Shy Trump Voters And The Blue Wave That Wasn’t 

Bonus (if you dare): this other podcast from the Federalist: How Serious Is The 2020 Election Fraud?

Added: ‘Shy Trump Voters’ Re-Emerge as Explanation for Pollsters’ Miss
Bloomberg: ... “Shy Trump voters are only part of the equation. The other part is poll deniers,” said Neil Newhouse, a Republican pollster. “Trump spent the last four years beating the crap out of polls, telling people they were fake, and a big proportion of his supporters just said, ‘I’m not participating.’” 
In a survey conducted after Nov. 3, Newhouse found that 19% of people who voted for Trump had kept their support secret from most of their friends. And it’s not that they were on the fence: They gave Trump a 100% approval rating and most said they made up their minds before Labor Day. 
Suburbanites, moderates and college-educated voters — especially women — were more likely to report that they had been ostracized or blocked on social media for their support of Trump. ... 
... University of Arkansas economist Andy Brownback conducted experiments in 2016 that allowed respondents to hide their support for Trump in a list of statements that could be statistically reconstructed. He found people who lived in counties that voted for Clinton were less likely to explicitly state they agreed with Trump. 
“I get a little frustrated with the dismissiveness of social desirability bias among pollsters,” Brownback said. “I just don’t see a reason you could say this is a total non-issue, especially when one candidate has proven so difficult to poll.”

Thursday, October 22, 2020

Replications of Height Genomic Prediction: Harvard, Stanford, 23andMe

These are two replications of our 2017 height prediction results (also recently validated using sibling data) that I neglected to blog about previously.

1. Senior author Liang is in Epidemiology and Biostatistics at Harvard.
Efficient cross-trait penalized regression increases prediction accuracy in large cohorts using secondary phenotypes 
Wonil Chung, Jun Chen, Constance Turman, Sara Lindstrom, Zhaozhong Zhu, Po-Ru Loh, Peter Kraft and Liming Liang 
Nature Communications volume 10, Article number: 569 (2019) 
We introduce cross-trait penalized regression (CTPR), a powerful and practical approach for multi-trait polygenic risk prediction in large cohorts. Specifically, we propose a novel cross-trait penalty function with the Lasso and the minimax concave penalty (MCP) to incorporate the shared genetic effects across multiple traits for large-sample GWAS data. Our approach extracts information from the secondary traits that is beneficial for predicting the primary trait based on individual-level genotypes and/or summary statistics. Our novel implementation of a parallel computing algorithm makes it feasible to apply our method to biobank-scale GWAS data. We illustrate our method using large-scale GWAS data (~1M SNPs) from the UK Biobank (N = 456,837). We show that our multi-trait method outperforms the recently proposed multi-trait analysis of GWAS (MTAG) for predictive performance. The prediction accuracy for height by the aid of BMI improves from R2 = 35.8% (MTAG) to 42.5% (MCP + CTPR) or 42.8% (Lasso + CTPR) with UK Biobank data.


2. This is a 2019 Stanford paper. Tibshirani and Hastie are famous researchers in statistics and machine learning. Figure is from their paper.


A Fast and Flexible Algorithm for Solving the Lasso in Large-scale and Ultrahigh-dimensional Problems 
Junyang Qian, Wenfei Du, Yosuke Tanigawa, Matthew Aguirre, Robert Tibshirani, Manuel A. Rivas, Trevor Hastie 
Department of Statistics, Stanford University; Department of Biomedical Data Science, Stanford University 
Since its first proposal in statistics (Tibshirani, 1996), the lasso has been an effective method for simultaneous variable selection and estimation. A number of packages have been developed to solve the lasso efficiently. However as large datasets become more prevalent, many algorithms are constrained by efficiency or memory bounds. In this paper, we propose a meta algorithm batch screening iterative lasso (BASIL) that can take advantage of any existing lasso solver and build a scalable lasso solution for large datasets. We also introduce snpnet, an R package that implements the proposed algorithm on top of glmnet (Friedman et al., 2010a) for large-scale single nucleotide polymorphism (SNP) datasets that are widely studied in genetics. We demonstrate results on a large genotype-phenotype dataset from the UK Biobank, where we achieve state-of-the-art heritability estimation on quantitative and qualitative traits including height, body mass index, asthma and high cholesterol.
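The batch-screening idea in the abstract can be sketched in a few lines of numpy (my paraphrase of the approach, not the authors' snpnet code): rank predictors by correlation with the outcome, fit the lasso on just the top batch, then confirm the excluded predictors satisfy the lasso KKT optimality condition |X_j^T r| ≤ λ, so the batch solution solves the full problem without ever forming the full design matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, p "SNPs", with a handful of true causal effects.
n, p, k = 500, 2000, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 1.0
y = X @ beta_true + rng.standard_normal(n)

lam = 0.3 * n  # lasso penalty, chosen here just to give a sparse fit

def lasso_cd(A, y, lam, sweeps=100):
    """Cyclic coordinate descent for 0.5*||y - A b||^2 + lam*||b||_1."""
    b = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    r = y.copy()  # running residual y - A b
    for _ in range(sweeps):
        for j in range(A.shape[1]):
            r += A[:, j] * b[j]   # put column j's contribution back into r
            rho = A[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= A[:, j] * b[j]
    return b

# Screening step: fit only on the predictors most correlated with y,
# then check the KKT conditions on everything left out.
batch = np.argsort(-np.abs(X.T @ y))[:100]
b_batch = lasso_cd(X[:, batch], y, lam)
resid = y - X[:, batch] @ b_batch
kkt_viol = np.abs(X.T @ resid) > lam + 1e-6
kkt_viol[batch] = False  # in-batch columns satisfy KKT at the fitted optimum
print("nonzero coefs:", int((b_batch != 0).sum()),
      "| KKT violations outside batch:", int(kkt_viol.sum()))
```

In BASIL proper this loop iterates -- any violating predictors are added to the next batch and the model is refit -- but on this toy problem one screened batch of 100 columns recovers the sparse signal with no violations among the other 1,900, which is the whole point: the expensive fit only ever touches a small fraction of the data.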

The very first validation I heard about was soon after we posted our paper (2018 IIRC): I visited 23andMe to give a talk about genomic prediction and one of the PhD researchers there said that they had reproduced our results, presumably using their own data. At a meeting later in the day, one of the VPs from the business side who had missed my talk in the morning was shocked when I mentioned few-cm accuracy for height. He turned to one of the 23andMe scientists in the room and exclaimed:

I thought WE were the best in the world at this stuff!?

Sunday, October 04, 2020

Genomic Prediction and Embryo Selection (video panel discussion)

 


This is a recent panel discussion on genomic prediction, and applications in IVF and health systems (e.g., early screening of high risk individuals for breast cancer, heart disease). 

Jamie Metzl and Simon Fishel are my co-panelists. Metzl is the author of the best seller Hacking Darwin: Genetic Engineering and the Future of Humanity. Fishel was part of the team that produced the first IVF baby in 1978, and has been a leader in IVF research ever since. 

Today millions of babies are produced through IVF. In most developed countries roughly 3-5 percent of all births are through IVF, and in Denmark the fraction is about 10 percent! But when the technology was first introduced with the birth of Louise Brown in 1978, the pioneering scientists had to overcome significant resistance. There may be an alternate universe in which IVF was not allowed to develop, and those millions of children were never born.
Wikipedia: ...During these controversial early years of IVF, Fishel and his colleagues received extensive opposition from critics both outside of and within the medical and scientific communities, including a civil writ for murder.[16] Fishel has since stated that "the whole establishment was outraged" by their early work and that people thought that he was "potentially a mad scientist".[17]
I predict that within 5 years the use of polygenic risk scores will become common in some health systems and in IVF. Reasonable people will wonder why the technology was ever controversial at all, just as in the case of IVF.

Previous discussion: Sibling Validation of Polygenic Risk Scores and Complex Trait Prediction (Nature Scientific Reports)

Saturday, September 12, 2020

Orwell: 1944, 1984, and Today

George Orwell 1944 Letter foreshadows 1984, and today:
... Already history has in a sense ceased to exist, i.e. there is no such thing as a history of our own times which could be universally accepted, and the exact sciences are endangered as soon as military necessity ceases to keep people up to the mark. Hitler can say that the Jews started the war, and if he survives that will become official history. He can’t say that two and two are five, because for the purposes of, say, ballistics they have to make four. But if the sort of world that I am afraid of arrives, a world of two or three great superstates which are unable to conquer one another, two and two could become five if the fuhrer wished it. That, so far as I can see, is the direction in which we are actually moving ... 
... intellectuals are more totalitarian in outlook than the common people. On the whole the English intelligentsia have opposed Hitler, but only at the price of accepting Stalin. Most of them are perfectly ready for dictatorial methods, secret police, systematic falsification of history etc. so long as they feel that it is on ‘our’ side.
I am sure any reader can provide examples of the following from the "news" or academia or even from a national lab:
there is no such thing as a history of our own times which could be universally accepted  
the exact sciences are endangered  
two and two could become five
dictatorial methods ... systematic falsification of history etc. so long as they feel that it is on ‘our’ side.

Of course, there is nothing new under the sun. It takes only a generation for costly lessons to be entirely forgotten...


Wikipedia: Trofim Denisovich Lysenko ...Soviet agronomist and biologist. Lysenko was a strong proponent of soft inheritance and rejected Mendelian genetics in favor of pseudoscientific ideas termed Lysenkoism.[1][2] In 1940, Lysenko became director of the Institute of Genetics within the USSR's Academy of Sciences, and he used his political influence and power to suppress dissenting opinions and discredit, marginalize, and imprison his critics, elevating his anti-Mendelian theories to state-sanctioned doctrine. 
Soviet scientists who refused to renounce genetics were dismissed from their posts and left destitute. Hundreds if not thousands of others were imprisoned. Several were sentenced to death as enemies of the state, including the botanist Nikolai Vavilov. Scientific dissent from Lysenko's theories of environmentally acquired inheritance was formally outlawed in the Soviet Union in 1948. As a result of Lysenkoism and forced collectivization, 15-30 million Soviet and Chinese citizens starved to death in the Holodomor and the Great Chinese Famine. ...

 

In 1964, physicist Andrei Sakharov spoke out against Lysenko in the General Assembly of the Academy of Sciences of the USSR: "He is responsible for the shameful backwardness of Soviet biology and of genetics in particular, for the dissemination of pseudo-scientific views, for adventurism, for the degradation of learning, and for the defamation, firing, arrest, even death, of many genuine scientists."
