1. Required use of facemasks
2. Cocooning of vulnerable populations
3. Contact tracing and forced isolation of cases, perhaps using geolocation technology
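To make item 3 concrete: a toy sketch of how geolocation-based contact tracing might flag exposures. It assumes each person's phone produces timestamped (time, lat, lon) pings; the thresholds and function names are illustrative, not any deployed system's API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def exposures(case_pings, contact_pings, max_dist_m=10, max_dt_s=900):
    """Return (case_time, contact_time) pairs where a contact's ping fell
    within max_dist_m meters and max_dt_s seconds of a confirmed case's ping."""
    hits = []
    for t1, lat1, lon1 in case_pings:
        for t2, lat2, lon2 in contact_pings:
            close_in_time = abs(t1 - t2) <= max_dt_s
            close_in_space = haversine_m(lat1, lon1, lat2, lon2) <= max_dist_m
            if close_in_time and close_in_space:
                hits.append((t1, t2))
    return hits
```

Real proximity-tracing systems (e.g., the Bluetooth approach discussed below) avoid centralizing raw location trails for privacy reasons, but the matching logic is of this general shape.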
See related posts
COVID-19: Smart Technologies and Exit from Lockdown (Singapore)
COVID-19: CBA, CFR, Open Borders
COVID-19: Cocoon the vulnerable, save the economy?
COVID-19 Notes
WSJ: Western governments aiming to relax restrictions on movement are turning to unprecedented surveillance to track people infected with the new coronavirus and identify those with whom they have been in contact.

Google, Apple, Facebook, etc. are reluctant to draw attention to their already formidable geolocation capabilities. But this crisis may focus public awareness on their ability to track almost all Americans throughout the day.
Governments in China, Singapore, Israel and South Korea that are already using such data credit the practice with helping slow the spread of the virus. The U.S. and European nations, which have often been more protective of citizens’ data than those countries, are now looking at a similar approach, using apps and cellphone data.
“I think that everything is gravitating towards proximity tracking,” said Chris Boos, a member of Pan-European Privacy-Preserving Proximity Tracing, a project that is working to create a shared system that could take uploads from apps in different countries. “If somebody gets sick, we know who could be infected, and instead of quarantining millions, we’re quarantining 10.” ...
Some European countries are going further, creating programs to help track individuals—with their permission—who have been exposed and must be quarantined. The Czech Republic and Iceland have introduced such programs and larger countries including the U.K., Germany and Spain are studying similar efforts. Hundreds of new location-tracking apps are being developed and pitched to those governments, Mr. Boos said.
U.S. authorities are able to glean data on broad population movements from the mobile-marketing industry, which has geographic data points on hundreds of millions of U.S. mobile devices, mainly taken from apps that users have installed on their phones.
Europe’s leap to collecting personal data marks a shift for the continent, where companies face more legal restrictions on what data they may collect. Authorities say they have found workarounds that don’t violate the European Union’s General Data Protection Regulation, or GDPR, which restricts how personal information can be shared. ...
WSJ: Google will help public health officials use its vast storage of data to track people’s movements amid the coronavirus pandemic, in what the company called an effort to assist in “unprecedented times.”

This is just a hint at what Google is capable of. Check out Google Timeline! Of course, users have to opt in to create their Google Timeline. But it should be immediately obvious that Google already HAS the information necessary to populate a detailed geolocation history of every individual...
The initiative, announced by the company late Thursday, uses a portion of the information that the search giant has collected on users, including through Google Maps, to create reports on the degree to which locales are abiding by social-distancing measures. The “mobility reports” will be posted publicly and show, for instance, whether particular localities, states or countries are seeing more or less people flow into shops, grocery stores, pharmacies and parks. ...
Added from the comments:
There are really two separate issues here:
1. What is the basic epidemiology of CV19? i.e., R0, CFR, age distribution of vulnerability, comorbidities, mechanism of spread, utility of masks, etc.
2. What is the cost-benefit analysis of the various strategies (e.g., lockdown vs. permissive sweep with cocooning)?
While we have not reached full convergence on #1, I think reasonable people agree that the "mainstream" consensus has a decent chance of being correct: e.g., CFR ~ 1% or so, possibility of a wide sweep in any population, overload of ICUs means much higher CFR, warmer weather might not save the day, etc. Once this scenario for #1 has, say, >50% chance of being right, you are forced to at least take it seriously, and then you are on to #2. (It is not required to believe that the scenario above is true at the 95% or 99% confidence level...)
#2 is a question of trade-offs, and two reasonable people can easily disagree until the end of time... I've already posted very simple CBAs that show the answer can go either way depending on how you "price" QALYs and what you think the long-term effects of lockdown on the economy are -- i.e., how fragile you think financial, supply chain, and psychological systems are in various places; is it a ~$trillion cost, or could it go nonlinear?
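A back-of-envelope version of that CBA takes only a few lines. All numbers below (deaths averted, QALYs lost per death, QALY price, economic cost) are illustrative assumptions, not estimates from the post; the point is that the sign of the answer flips as the economic cost moves from ~$1T toward a nonlinear ~$5T.

```python
def lockdown_net_benefit(deaths_averted, qaly_per_death, qaly_price_usd, econ_cost_usd):
    """Net benefit (USD) of lockdown vs. a permissive sweep with cocooning:
    value of life-years saved minus the economic cost of the lockdown."""
    return deaths_averted * qaly_per_death * qaly_price_usd - econ_cost_usd

# Illustrative inputs: 1M deaths averted, ~10 QALYs lost per (mostly elderly)
# death, QALY priced at $150k.
base = dict(deaths_averted=1_000_000, qaly_per_death=10, qaly_price_usd=150_000)

# ~$1T economic cost: life-years saved ($1.5T) outweigh it.
print(lockdown_net_benefit(**base, econ_cost_usd=1e12))

# Nonlinear ~$5T hit (fragile financial / supply-chain systems): sign flips.
print(lockdown_net_benefit(**base, econ_cost_usd=5e12))
```

The model is trivially simple on purpose: every disputed judgment (QALY price, fragility of the economy) is an explicit input, so you can see exactly which assumption drives the conclusion.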
Re: Physicists (and addressing the gmachine comment below, which has a lot of truth in it), we have no trouble understanding modeling done by other people (whether in finance, climate, or epidemiology), and we are also trained to deal with very uncertain data / statistical situations. We can "take apart" the model in our heads to see where the dependencies are and how the uncertainties propagate through the model. I am often amazed to meet people who built a very complex model (e.g., thousands of lines of code, lots of input parameters), but lack the chops to develop good intuition for how their model works, to make qualitative estimates for uncertainty quantification, etc. I have seen this in economics, finance, biology, and climate contexts many times. "There are levels to this thing..." Understanding the model can be more g-loaded than building it!
Finally, we are trained to think from first principles -- which assumptions are crucial to reach the conclusions, which are not? What are the key uncertainties in the analysis? Do we really need very specific assumptions about, e.g., social interaction rates as in the Imperial models? Or can I do a quick Fermi estimate which gets me a more robust answer at the cost of a factor of 2 uncertainty that does not really affect the main conclusion -- e.g., will ICU overload happen?
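The ICU-overload question mentioned above is exactly the kind of thing a Fermi estimate settles. The inputs below are rough, order-of-magnitude assumptions for a US-scale unmitigated sweep (attack rate, ICU fraction, stay length, bed count are all my illustrative placeholders, not figures from the Imperial models):

```python
# Fermi estimate: does peak ICU demand exceed capacity?
population = 330e6      # US-scale population
attack_rate = 0.5       # fraction infected over the sweep
icu_fraction = 0.02     # fraction of infections needing ICU care
sweep_months = 6        # duration of an unmitigated sweep
icu_stay_days = 10      # average ICU stay
icu_beds = 85_000       # rough US ICU bed count

infections = population * attack_rate
icu_cases = infections * icu_fraction
# Average concurrent demand = admissions per day * length of stay
admissions_per_day = icu_cases / (sweep_months * 30)
concurrent_demand = admissions_per_day * icu_stay_days
print(f"Concurrent ICU demand ~ {concurrent_demand:,.0f} vs {icu_beds:,} beds")
```

With these inputs, average concurrent demand comes out well above capacity -- and since this is the average over the sweep rather than the peak, a factor-of-2 error in any single input does not change the qualitative conclusion that overload happens.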
Enrico Fermi at the Trinity test: "I tried to estimate its strength by dropping from about six feet small pieces of paper before, during, and after the passage of the blast wave. Since, at the time, there was no wind I could observe very distinctly and actually measure the displacement of the pieces of paper that were in the process of falling while the blast was passing. The shift was about 2 1/2 meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand tons of T.N.T." The actual yield was about 20 kt. Sometimes a smart guy can get to within a factor of two, and with much greater clarity, than a huge team of modelers...