Is Sweden’s COVID-19 Response a Cautionary Tale or a Model to Follow? It’s Complicated

In the ongoing debate about lockdowns in the US, Sweden has become the battleground.

Mainstream media outlets cite Sweden’s experience as a cautionary tale. CBS writes that Sweden has become “an example of how not to handle COVID-19”.

Meanwhile, to those who have been skeptical of the lockdown policy all along, Sweden’s results are occasionally cited in glowing terms. For instance, Jeffrey Tucker of AIER tweeted this out last week, showing that Sweden’s daily death toll has slowed to a crawl:

[Chart: @jeffreyatucker tweet, July 20, 2020, showing Sweden’s daily COVID-19 deaths slowing to a crawl]

So which version is true?

Did Sweden’s less restrictive approach to COVID-19 usher in the hellscape that US public health officials have warned us about? Or is it actually a model for the rest of us to follow?

It’s too soon to know for sure. But the data we have so far suggests the answer is not black-and-white.

The Problem of Cherry-Picking
The CBS article points to Sweden’s per capita COVID-19 death toll to declare its policy a failure. Writing last month, on July 17, CBS notes:

…the death toll from Sweden’s outbreak is now the fifth-worst in the world, per capita. The country’s mortality rate from the coronavirus is now 30% higher than that of the United States when adjusted for population size.

On that date, this was true. It is missing some important context, however.

For starters, if Sweden was fifth-worst in the world, why is the article about Sweden? If the point is to identify some COVID-19 policies that clearly failed, there would seem to be at least four candidates just as worthy of criticism.

Excluding the tiny nation-states of Andorra and San Marino, the four European countries that had experienced higher per capita death tolls than Sweden at the time of CBS’s piece were Belgium, the UK, Spain, and Italy. In the last two weeks, Peru has also overtaken Sweden in per capita deaths.


All of these countries imposed lockdown policies and still experienced death tolls higher than Sweden’s. Italy might have an excuse as the first major hotspot in Europe, but what accounts for the others?

This selective criticism of Sweden by the mainstream media has been called out elsewhere, with good reason. It is classic cherry-picking–finding facts to fit a predetermined narrative.

To be fair, Sweden’s proponents can also be guilty of omitting context.

In the tweet noted above, Tucker points to very low rates of new deaths as a sign of success. This is good news, but it doesn’t tell us much. It’s widely understood that viruses will burn themselves out eventually. The lockdown debate is about how best to mitigate the damage in the meantime.

In another example, this tweet from Yinon Weiss favorably compares the experience of Sweden to that of New York. The comparison is correct–Sweden has fared far better than New York on a per capita basis. However, this is better evidence of New York’s extreme failure than of Sweden’s success. If we draw sweeping conclusions from this data point, then it’s just cherry-picking in the other direction.

[Chart: @yinonw tweet, July 12, 2020, comparing per capita COVID-19 deaths in Sweden and New York]

Obviously, writing a tweet is different than writing an article. Twitter isn’t exactly built for nuance.

The point is that, so far, Sweden’s results are mixed. They don’t warrant a victory lap for anyone.

Voluntary Versus Coercive
While Sweden seems to be viewed by the rest of the world as a radical experiment when it comes to COVID-19, that’s not how Sweden sees itself.

Speaking to Nature magazine early on in the pandemic, Sweden’s state epidemiologist Anders Tegnell explained bluntly, “I think it has been overstated how unique [Sweden’s] approach is.”

For Tegnell, Sweden’s policy objective is the same as for most other Western countries–flatten the curve to avoid overrunning the healthcare system.

 
The primary difference is in Sweden’s laws. As he explained in the interview (emphasis added):

The Swedish laws on communicable diseases are mostly based on voluntary measures — on individual responsibility…This is the core we started from, because there is not much legal possibility to close down cities in Sweden using the present laws.

By itself, this almost implies that Sweden would have been just as coercive as other countries if it had the authority. (And as an American, I find it extremely odd to hear a national government official acknowledge any legal constraints on their power. But I digress.)

However, in a separate April interview with Haaretz, Tegnell argued that the voluntary approach has strategic advantages over coercion. In particular, Tegnell noted that the voluntary measures could be kept in place for an extended period of time. In his words, “We believe that what we are doing is more sustainable and effective in the long term.”
 
The Importance of Sustainability
Like most other countries, Sweden’s experts share the view that the virus will only stop being a threat once herd immunity is reached or an effective vaccine is developed. “Every other solution is temporary,” Tegnell told Haaretz.
 
Since both of those solutions are likely months away, the sustainability of the policy response is critical. This is why Sweden’s approach might ultimately prove more successful than its peers.
 
Although Tegnell doesn’t say this outright, the subtext of Sweden’s approach seems to be that all countries’ COVID-19 policies will look like Sweden’s eventually.
 
The lockdowns cannot eradicate the virus on their own and cannot be kept in place indefinitely. That means that when the lockdowns are inevitably relaxed, the virus is still around and able to spread.
 
When the virus starts to spread anew, the authorities in most democratic countries won’t have the political ability to reimpose lockdowns. So their only real option is to impose lighter, mostly voluntary measures like Sweden has done from the start. 
 
Unfortunately, this is how things have played out in many places.
 
Consider the United States. Many states closed down before they had significant spread and reopened while new infections remained at low, but nonzero, levels. Now cases have surged in several states, and the lockdown measures being reimposed are far less strict than those enforced early on. Noncompliance is also on the rise. 
 
Today, most states’ lockdown policies are still more restrictive than Sweden’s. But this does show the unsustainable nature of the prior approach. Given that the states have landed on less restrictive policies anyway, the utility of the initial authoritarian policies is unclear. The collateral damage of those policies, on the other hand, is visible everywhere.
 
Premature Conclusions
While Sweden states that its policy goal is to flatten the curve like most other countries, it’s clear that they have taken a less aggressive approach. 
 
It follows that Sweden’s virus curve should be steeper than the curve seen in lockdown jurisdictions. We can see this visually in the chart below (adapted from PBS):

[Chart: illustrative pandemic curves–a steeper curve without protective measures and a flatter one with them–adapted from PBS]

(Since Sweden is taking some measures to slow the spread, it’s likely that the relative steepness of their curve wouldn’t be as radically different as what this graphic implies. But it does illustrate the nature of the difference we should expect.)
 
This presents a major challenge for gauging the success of the different approaches while the pandemic is still underway.
 
The total deaths experienced by the countries in the chart above would be found by taking the area underneath the curve. Because Sweden has accepted a steeper curve, it should experience more total deaths early on. But the number of new daily deaths in Sweden should also drop to near zero earlier than it will elsewhere.
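
To make that concrete, here is a minimal sketch with invented numbers: two bell-shaped daily-death curves with the same area (the same total deaths), one steep and one flat. The steep curve peaks higher but fades to near zero months earlier.

```python
import numpy as np

days = np.arange(365)

def curve(peak_day, spread, total_deaths):
    """Daily deaths shaped as a bell curve, scaled to sum to total_deaths."""
    shape = np.exp(-0.5 * ((days - peak_day) / spread) ** 2)
    return total_deaths * shape / shape.sum()

steep = curve(peak_day=60, spread=20, total_deaths=6_000)   # lighter measures
flat = curve(peak_day=120, spread=60, total_deaths=6_000)   # stricter measures

print(round(steep.sum()), round(flat.sum()))  # same area: 6000 and 6000 deaths
print(days[(days > 60) & (steep < 1)][0])     # steep curve fades by ~day 122
print(days[(days > 120) & (flat < 1)][0])     # flat curve lingers to ~day 284
```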
 
That’s what these curves would suggest, and it’s consistent with what has actually happened.
 
Now we can see the problems with some of the condemnation and praise of Sweden’s results. Yes, Sweden has experienced more deaths than many of its peers. And yes, for now, the pandemic in Sweden seems to be mostly over even as it rages on elsewhere. Neither of these outcomes should come as a surprise.
 
To consider the question settled right now is rather like declaring victory based on the score at halftime. It’s not the end of the story.
 
Where We Go From Here
As I write this, the virus looks to be mostly contained in Europe, but continues to spread significantly in the US. Based on the data we have so far, it is unlikely that Sweden or any other large country has reached true herd immunity. Promising vaccine headlines get published regularly. But even in the most optimistic scenario, we’re a few months away from a vaccine being proven safe and effective, let alone mass produced.
 
If current trends persist, the nationwide US per capita death toll is likely to catch and surpass Sweden’s in the coming months. In just the last two weeks, the gap between Sweden’s death toll and the US’s has fallen from 30% to 19%. Virus cases have shot up in the most populous states (California, Texas, and Florida) that had been largely spared until this point. Increasingly, it looks like the US shutdown caused massive collateral damage without any lasting containment benefit. In a strictly US context, the Swedish approach is looking pretty good.
 
This conclusion is less obvious when looking at the results of countries in Asia, Europe, or the South Pacific. Countries like South Korea and Taiwan managed to slow the spread of COVID-19 with targeted quarantines instead of all-encompassing lockdown restrictions. Europe has several countries like Austria and Switzerland that locked down and then reopened quickly without reigniting a major new outbreak of the virus so far. In the South Pacific, New Zealand’s more comprehensive lockdown and travel restrictions managed to eliminate COVID-19 locally, and a major new outbreak has not yet occurred.
 
It’s too early to say which strategy will look optimal in the long run. The final analysis will also need to consider more holistic data points such as excess mortality and economic outcomes. That data isn’t available in real-time like the official COVID-19 statistics, but it will be necessary to properly compare the costs and benefits across countries.
 
For now, Sweden’s policy isn’t a panacea or a disaster. It remains a crucial control group for the lockdown experiments of 2020.
NPR Report on Florida’s Record Infections Misleads Its Audience

This Monday, NPR consumers woke up to this alarming report on the coronavirus:

Florida Smashes U.S. State Record Of Daily New Cases: More Than 15,200

Not content to scare readers with the headline, the report continues the dire framing in its body, which was discussed on the Up First podcast. Some illustrative quotes below:

Florida reported 15,299 new coronavirus cases on Sunday, marking the largest single-day increase of any state since the start of the pandemic.
Sunday’s number exceeds New York’s peak of more than 12,200 new cases in one day back in April, when it was the epicenter of the outbreak…

As of Saturday, 7,186 people were hospitalized in Florida, according to the COVID Tracking Project.
Florida started reopening in early May and has continually shattered state records for single-day increases since cases began surging there in June.

Reading this, our discerning audience comes away with a couple of things:

  • Florida’s shattering records for new infections
  • Things in Florida are so bad right now that they’re even worse than New York at its peak

The problem with the statistics they’re using here is not that they are explicitly incorrect. They are correctly citing the official case numbers.

The problem is that they fail to provide any of the context needed for someone to understand what those case numbers actually mean, and what they can tell us about the severity of Florida’s situation.

First things first: Reporting case numbers without adjusting for population is just confusing.

Yes, Florida really has set the state record for new positive confirmed cases in a single day. Florida is also the third most populous state in the country–behind Texas and California, but 10% larger than New York.

So Florida took the lead today. And if current trends continue, Texas and California will claim the mantle in a few weeks’ time. Those records will be trumpeted too, but they will matter just as little.

A better way to compare totals among states is per capita, usually per 100,000 people. On this score, Florida still would have beaten out New York with Sunday’s case count (71 per 100k vs. 63 per 100k), but that comparison is at least superficially meaningful. It also has the virtue that it can be compared to other states and counties.
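
As a quick sketch of the arithmetic (the population figures are approximate 2020 estimates–an assumption on my part, not numbers from the report):

```python
# Per-100k comparison of the single-day case records cited above.
populations = {"Florida": 21_500_000, "New York": 19_450_000}  # approximate
record_cases = {"Florida": 15_299, "New York": 12_200}

for state in populations:
    per_100k = record_cases[state] / populations[state] * 100_000
    print(f"{state}: {per_100k:.0f} new cases per 100k")
# Florida: 71 per 100k, New York: 63 per 100k -- matching the figures above.
```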

A more urgent problem with this analysis is that we know the official case numbers dramatically understate the true number of infections. This remains a problem today, but it was much worse early in the pandemic.

The reason for the understatement is the lack of testing capacity. When New York was at its peak, it only had enough tests to use them on healthcare workers and people with the most severe symptoms. It couldn’t test everyone with symptoms, let alone everyone who had contact with a COVID-positive person. As a result of this necessary strategy, many infections went undetected, and the percentage of tests coming back positive–the positivity rate–was extremely high.

Notably, this wasn’t really New York’s fault. It’s more a by-product of them being first (and the CDC and FDA botching the testing rollout).

In Florida today, the total number of infections is still being undercounted. But the testing shortage is less acute, so more people can be tested and the confirmed number is going to be closer to the real thing.

We can see this reflected in the data in two ways. The first is the positivity rate mentioned before. The second is per capita hospitalizations. (Since severely ill patients were able to get tested in New York, we can assume the hospitalization number there is reasonably accurate.)

At the height of its outbreak, New York was reporting a weekly positivity rate over 40% and 97 out of every 100,000 people were hospitalized.

But as of July 13, in Florida, the weekly positivity rate stands just below 19% and 37 out of every 100,000 people are hospitalized. (Note that my data source here–the COVID Tracking Project–only recently started getting hospitalization data for Florida, so that data point doesn’t exist earlier in the crisis.)

To be sure, Florida is definitely having an outbreak right now. No one is suggesting these numbers are a cause for celebration.

But it’s completely misleading to suggest that Florida’s situation today is worse than things were in New York a few months ago. The only metric that supports this notion right now is the one we know, with certainty, is unreliable.

The point here is that we want people to have a realistic understanding of how bad or benign things truly are. It’s okay if people are worried, but we want the level of concern to be proportionate to what’s actually happening.

So yes, it’s a problem when Trump says the coronavirus is just like the flu and will magically disappear. It’s also a problem when hyperbolic media reports make people so afraid of COVID-19 that they refuse to go to the ER when they have a heart attack.

Three Observations on the Second Wave of COVID-19

The long-feared second wave of COVID-19 in the United States appears to have arrived. National case numbers are setting new records and two states have started to move back towards quarantine.

Since news reports on the virus continue to emphasize the wrong metrics, some important facts about the new wave often get missed. Here are three things to know about the new rise in cases.

1. The recent rise in cases cannot be explained by increases in testing.

This is an important point because the onset of a second jump in cases has been declared prematurely several times before now. These types of reports came in different flavors. Sometimes, they calculated percentage growth rates off extremely small numbers (at a county level for instance) to report an eye-popping rate of growth. More commonly, they failed to highlight the fact that total testing was increasing faster than new cases–suggesting the virus was probably just as common as it had been days earlier, but the state was able to confirm more cases.

This time is different. Many states are experiencing both a rise in absolute cases and a rise in the percentage of tests that come back positive (the positivity rate). This is a clear sign that things in these places are getting worse with respect to the coronavirus.

As of June 27, these are all the states that were seeing both a positivity rate above 10% and a weekly rise in that rate. The original data for the table below comes from The COVID Tracking Project:

So, while not all of these states are in crisis right now, it is correct to say we are seeing a pronounced jump in cases in many places. It’s not just a figment of the data or reporting, as it had been before.

2. The rise in cases is primarily occurring in places that were not hit hard before.

When you look at a chart of the national numbers of new cases, you can clearly see the second wave that’s occurring. Using the data through June 27, the weekly figure of new cases is rapidly approaching the previous peaks set in April.
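
As a sketch of how such a rolling per-100k figure is computed from raw daily counts (the daily numbers here are hypothetical; real inputs would come from a source like the COVID Tracking Project):

```python
import numpy as np

US_POP = 331_000_000  # approximate 2020 US population (assumption)
# Hypothetical daily new-case counts, oldest first.
daily_cases = np.array([22_000, 25_000, 27_000, 30_000, 33_000,
                        35_000, 38_000, 40_000, 43_000, 45_000])

rolling_7d = np.convolve(daily_cases, np.ones(7), mode="valid")  # 7-day sums
per_100k = rolling_7d / US_POP * 100_000
print(per_100k.round(1))  # [63.4 68.9 74.3 79.8]
```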

The trend below shows rolling 7-day positive cases for the US (all states plus DC and Puerto Rico) per 100,000 people:

On a national basis, the second wave description looks appropriate. But this obscures very different trends at the state level. In reality, the places where cases are rising did not experience much of a peak earlier in the crisis.

Consider the trends below for four states that have been making headlines in the past few days: Texas, Florida, California, and Arizona.

In this data, we see that Texas experienced a slight uptick in April, but it was much smaller than what we’re seeing now. For Arizona, Florida, and California, the recent rise they are experiencing is their first serious increase when adjusted for population. Conversely, states like New York and New Jersey saw their large increase in March and April, but are now seeing stable or declining cases.

This pattern demonstrates one of the many problems with demanding a nationwide lockdown in a country as large as the US. In effect, all states shut down (to varying degrees) based on the experience of New York, New Jersey, and a couple other hotspots. They did this without regard to whether the outbreaks they were experiencing could possibly warrant such dramatic action.

Now that some of these same states are facing a real outbreak close to home, they’re starting to do a new round of limited restrictions. In the face of an economy on life support, widespread social unrest, and an election year, it’s unclear how much people will tolerate or comply with another aggressive attempt at quarantine.

3. So far, the second wave appears to be less lethal than the first wave.

Another important characteristic of the second wave is that, at least so far, it looks to be less deadly than the earlier spikes.

Some commentators have made this point by looking at the trends in death counts for the new hotspots. However, this is not a good way to evaluate the lethality of the second wave at this point.

Deaths are a lagging indicator in this data. According to research summarized by Our World in Data, death typically occurs between 2 and 8 weeks after the onset of symptoms, which in turn show up several days after initial infection. Many of the cases being discovered now will eventually prove fatal, but they won’t be counted for several weeks. Looking at death rates today in the new hotspots risks providing a false sense of reassurance.

A better way to evaluate the likely lethality of this second wave in real time is to look at the trends in hospitalized COVID-19 cases.

We saw previously that the rise in new cases is now above the levels seen in April. Fortunately, for now the hospitalization data is not following the same trajectory.

In the chart below, we see the national trend in per capita positive cases combined with per capita hospitalization:


Here, we see that national hospitalizations have started to move upward, but they are not accelerating at the same pace as total cases.

When we look on a state-by-state basis, a similar pattern emerges. Below, we present the hospitalization trends for current and prior hotspot states. (Florida is omitted due to a lack of reliable hospitalization data.):

Of the new epicenters, Arizona again shows up as an outlier on hospitalization. But even so, it’s still a ways out from the extreme per capita hospitalization levels seen earlier in the northeast. Meanwhile, the other states are on a slight upswing, but still low in terms of overall numbers.

There are several different reasons that might help account for the lower hospitalization rates in this cycle.

One explanation is that the average age of infected individuals is lower than it was in the first wave of cases. CNN recently reported on this rise in infections among young people as a cause for alarm…

“It’s a little bit of a disturbing trend, and what frightens me is not only that they are younger, the potential of them infecting other people, particularly parents and grandparents,” Dr. Robert Jansen, chief medical officer at Grady Health System, told WSB.

…but in fact, it’s much better than the alternative. If more people are going to be infected, it’s obviously preferable that the people with the lowest chance of serious illness are the ones that get it.

It’s worth remembering that one of the reasons that New York and New Jersey fared worse than other states is that they had a policy which inadvertently increased the probability that older, more vulnerable people would be infected. In an attempt to preserve hospital capacity, these governments required hospitals to discharge COVID-19 patients back to nursing homes before it was confirmed that they no longer had the virus. The result was that the virus was effectively being reintroduced in nursing homes, spreading widely among a high-risk population.

Another reason for lower relative hospitalization rate in the new epicenters is that the testing is far more widespread. When New York was dealing with its peak in April, testing capacity was still being ramped up. This meant that the tests had to be reserved for healthcare workers and people with severe symptoms. In turn, we know that the total confirmed cases in April for New York and other states significantly understated the true number of cases. We just don’t know how large the understatement was.

Since the virus is hitting states like Arizona later, the testing limitations are not as severe. In all probability, this means that the confirmed case count in Arizona today is closer to the true number of infections.

This is not to suggest hospitals will have the capacity they need. When new cases are concentrated in specific parts of a state (as is occurring in Houston, Texas), hospitals will again be strained beyond their normal limits.

The point is that, at least for now, the trend is not nearly as dire as what the northeastern states saw previously. That nuance is easily lost among a sea of news headlines about record new cases.

Three Ways Not To Analyze COVID-19 Statistics

The COVID-19 pandemic and lockdowns continue to cause unprecedented devastation of everyday life in the United States–approximately 100,000 deaths, tens of millions unemployed, and countless plans, activities, and goals put on indefinite hold.

In this context, news outlets, politicians, and consumers are closely following the trends in the COVID-19 statistics, trying to answer the most pressing questions. Are things getting better or worse in the US? Have we succeeded in flattening the curve? Are the reopened states seeing the surge in new cases that many have feared?

These are important questions. Unfortunately, much of the reporting on the COVID-19 data obfuscates the underlying reality. In most cases, the problem is not that the reporting is literally false. But it typically focuses on the wrong metrics and fails to account for the severe limitations in the underlying data. The end result is that readers–and perhaps policymakers–come away with a more optimistic or pessimistic understanding than is actually warranted.

With that in mind, here are three errors to watch out for in discussions on COVID-19 data.

1. Focusing on the number of newly reported positive cases

This problem has become more common, particularly since some states have started to reopen. Here are some examples of recent headlines that commit this error:

Virginia Reports Highest One Day Increase in Coronavirus Cases After Gov. Ralph Northam Criticized For Not Wearing Mask – Newsweek, 5/25/2020

Texas sees highest single-day hike in coronavirus deaths, cases – Texas Statesman, 5/14/2020

Intuitively, it seems like the number and trend of newly confirmed COVID-19 cases must be an important number. But by itself, it doesn’t tell us much at all. To properly understand it, we also need to know the number and trend in total COVID-19 tests conducted over the same period.

As an illustrative example, let’s consider two random days of test results from Virginia. All results that follow are originally sourced from The Atlantic’s COVID Tracking Project:

With these facts alone, it would appear May 25 was a much worse day than April 13 for Virginia when it comes to the coronavirus. Over three times as many people were confirmed as positive. Surely, this must mean the virus was spreading more widely and was more out of control on May 25–after the reopening–than it was in mid-April during the lockdown, right?

Well, not quite. When we add the context of the number of tests performed and the positivity rate (the rate of positive tests out of total test results reported), a very different picture emerges. See below:

From this, we can see a more compelling explanation for why positive tests on April 13 were so much lower–namely, far fewer tests were conducted.

Based on these figures, there’s very good reason to assume the virus situation was actually worse on April 13. The high rate of positives suggests that they were unable to test enough people. So if they had had enough resources to test all suspected individuals, it’s likely that the number of positives would have been much higher.
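
A sketch with purely illustrative numbers (not Virginia’s actual figures) shows the mechanics:

```python
# Two hypothetical days of test results. The raw positives point one way;
# the positivity rate points the other.
days = {
    "April 13": {"positives": 300, "tests": 1_500},
    "May 25": {"positives": 900, "tests": 12_000},
}

for day, d in days.items():
    rate = d["positives"] / d["tests"]
    print(f"{day}: {d['positives']} positives out of "
          f"{d['tests']} tests -- positivity {rate:.1%}")
# April 13: positivity 20.0% -> testing too limited to see the full spread
# May 25:  positivity  7.5% -> three times the positives, yet likely less spread
```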

But if you only focus on the positive cases, this reality gets completely turned on its head.

A similar version of this general error can be observed in many reports on record increases in daily cases. Confirmed cases are indeed continuing to rise throughout the US. But the good news is that in most places, the total number of tests is rising at an even faster clip.

2. Focusing on the percentage growth rate (or the doubling rate) of confirmed cases

A related analytical error gets made when media outlets report on the percentage growth rate. Examples of this error in the wild can be routinely found in Bloomberg Radio news updates. Last week, they were reporting around a 1.1% increase in cases, which varied slightly depending on the day.

For a print example, I offer this highly neutral take from Willamette Week in Oregon from May 21, “A Rise in COVID-19 Cases in Deschutes County Tests Whether the State Will Close Bars, Restaurants Again. (So Far? No.)”:

The number of COVID-19 cases in Deschutes County has increased over the last seven days. On Wednesday, the county reported nine cases—more cases than it has on any other single day.

 

Those increases raise the question of whether the state will order the county to shut down the bars, restaurants and hair salons that reopened just six days ago…

 

A 5 percent increase in COVID cases is the benchmark the state set for reviewing the status of a county and possibly shuttering it again. [Health Researcher Numi Lee] Griffith pointed to a 27 percent increase in cases in Deschutes County during the week ending May 20. (emphasis added)

This article is interesting for a couple reasons. First, we see that it actually starts out by committing error #1, reporting a record increase of nine cases without providing information about the number of tests.

(Later on, the article even notes that many of the new cases were actually identified proactively through contact-tracing rather than simple symptomatic testing. If anything, that’s actually a positive indication about the county’s preparedness to mitigate the virus, not a cause for alarm.)

But I digress. The key points in the Willamette Week article are that a) Oregon has actually built this metric into its reopening guidelines and b) Deschutes would have violated it with a 27% increase.

The reason people tend to focus on the growth rate (or in some cases, the days-to-doubling) is because we know that the virus naturally spreads at an exponential rate. One person gives it to three people who each give it to three more people and so on.

In theory, the growth rate is useful because it could offer a window into how quickly the virus is spreading currently, and whether the curve has been sufficiently flattened.
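
As a quick sketch of that arithmetic (the formula assumes a constant exponential growth rate, which is itself an idealization):

```python
import math

def doubling_time(daily_growth_rate):
    """Days for case counts to double at a constant daily growth rate."""
    return math.log(2) / math.log(1 + daily_growth_rate)

print(doubling_time(0.011))  # ~63 days at the ~1.1% daily rate noted above
print(doubling_time(0.32))   # ~2.5 days, like the US in mid-March
```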

But here’s the problem. One of the key features that makes COVID-19 harder to deal with is that many people who contract the virus experience no symptoms at all. And while this is not entirely proven, it’s generally believed that these asymptomatic individuals are still contagious and thus contribute to the exponential spread of the disease.

The challenge is that testing capacity has been so limited that states have not been able to conduct the kind of widespread random testing that would be needed to identify all of the asymptomatic cases. The other way to plausibly identify all or most asymptomatic cases is through a robust contact-tracing system like that of South Korea or Taiwan. But the US’s capabilities here are still limited. Instead, COVID-19 testing around the country has been prioritized for people with symptoms and healthcare workers.

The upshot of all this is that the growth rate is not a useful proxy for the thing we’re actually trying to measure. What we want to know is the true rate of spread for the virus, in real-time. But due to testing limitations, the growth rate mostly reflects a) the growth rate in testing capacity and b) the growth rate in symptomatic patients.

This error actually cuts in both directions. Early on in the COVID-19 crisis in February–when the CDC was hard at work developing a faulty test and the FDA was simultaneously preventing others from creating a better one–the nation was testing virtually no one. So most metrics looked good.

Then in mid-March as testing capacity finally got built out, the number of positive cases quickly exploded. Positive cases were doubling every two to three days, as this chart shows:

And then, starting in mid-April and continuing to the present, the growth rate and doubling rate slowed back down. Perhaps this can be partly explained by the voluntary precautions and the lockdowns. But clearly, the more important driver is this: While the virus may grow exponentially, US testing capacity does not.

At each point in the process, including today, these metrics have not been meaningful in the US. In March, they offered a belated confirmation that the virus was already spreading widely. And now, they suggest the virus is slowing down, in part because testing capacity can only grow so fast.

3. Citing the case fatality rate as a meaningful statistic

As its name implies, the case fatality rate (CFR) is calculated by taking the total number of deaths attributed to COVID-19 and dividing by the total number of confirmed cases. The calculation is straightforward, but the result is worse than useless in the case of COVID-19, as we’ll see.

The most high profile example of bad reporting on the CFR comes from the World Health Organization, whose director said this on March 3:

Globally, about 3.4% of reported COVID-19 cases have died. By comparison, seasonal flu generally kills far fewer than 1% of those infected.

This shockingly high 3.4% figure was used as one of the reasons to justify widespread lockdowns. And yet, the statement itself offers a clue about the problems with this metric.

In that quote, the WHO is comparing the then-calculated CFR of COVID-19 to the infection fatality rate (IFR) of seasonal influenza. These are not the same metric.

In effect, the CFR is what we can easily observe and calculate. The IFR is what we actually care about, but it’s harder to determine. The difference between the two metrics is the denominator. The CFR divides by total confirmed cases, and the IFR divides by total infections.

Since confirmed cases are a subset of total infections, the CFR will always be higher than the IFR. This doesn’t mean that COVID-19 is the same as the flu. But it does mean that comparing the CFR of one disease to the IFR of another is unlikely to provide useful information.
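
A toy calculation makes the gap visible. The numbers below are invented, and the 10x undercount is purely an assumption for illustration:

```python
deaths = 1_000
confirmed_cases = 30_000   # only the infections that testing managed to catch
true_infections = 300_000  # hypothetical 10x undercount

cfr = deaths / confirmed_cases   # what we can easily observe
ifr = deaths / true_infections   # what we actually care about
print(f"CFR: {cfr:.2%}, IFR: {ifr:.2%}")  # CFR: 3.33%, IFR: 0.33%
```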

To be fair, it’s conceivable that the gap between the CFR and IFR will not be significant for some diseases. If there was a well-known disease and widespread testing was available, it’s likely that the number of confirmed cases would approximate the total number of infections and thus the CFR would be close to the IFR. However, this is not remotely true for COVID-19 now, and it was even less true at the beginning of March.

For COVID-19, there have been testing shortages all over the world, with a few exceptions. As a practical matter, this meant that tests were generally prioritized for people with severe symptoms and healthcare workers. This prioritization was necessary to try to treat patients more effectively and reduce spread in the hospital environment. But it also compromises the value of a CFR calculated off the resulting data.

The first problem is selection bias. If you’re primarily testing patients that already had severe symptoms, then the population of confirmed cases is skewed towards those that are going to have worse health outcomes from the disease. In turn, this will systematically push up the CFR.

A related problem is that limited testing obviously means the number of confirmed cases will be far lower than the total number of infections. By contrast, the COVID-19 death count, while imperfect, should at least be less understated. The reason is that some jurisdictions, like the US, now include “probable” cases of COVID-19 in the death counts, even without a confirmed test. Thus, although limited testing will effectively cap the number of confirmed cases reported, it does not cap the number of deaths reported. This reality will also tend to systematically inflate the CFR.

The final problem with the CFR occurs simply because COVID-19 is a new disease, and there’s a significant time lag between when someone contracts the disease and when they might actually pass away as a result. At any given time, some percentage of the total confirmed and active cases relates to individuals who will eventually die from the disease. This will cause the CFR to be artificially lower than reality (though the effect is diluted as the disease progresses over time).

As we see, the errors in the CFR are considerable. And while they point in different directions, there’s every reason to believe that on net, the CFR significantly overestimates the true lethality rate of COVID-19.

The problem is not that the CFR is literally false. The CFR for COVID-19 is being calculated correctly, it’s just not a meaningful number.

“Follow the Data”

These days, it seems like we are constantly being told by pundits and politicians that we need to “follow the data” when it comes to COVID-19.

By itself, that’s not bad advice. But too often, these people act as though the data provides a script. We just look at the data, put it in our model, and voila! enlightenment rains down upon us.

It would be nice if it worked that way. In reality, “The Data” doesn’t tell us anything. People interpreting the data tell us about their conclusions, and they’re not always right.

Using Bad Math, Media Claims No State Has Met the Reopening Guidelines on New Cases

As the debate over lifting the lockdowns in the US intensifies, key data points on testing and infections are routinely mischaracterized.

Consider this summary from Chris Wallace on Fox News Sunday on May 3:

As we said, about half the states — more than half the states have started in some way, shape, or form, reopening. But we’ve crunched the numbers, Doctor, and not a single state has met the White House gating guidelines of two weeks of steady decline in new cases.

It’s not just Fox News that is describing the data in this way. Here’s how NBC News reported it on April 28:

As a handful of states begin to ease stay-at-home restrictions, no state that has opted to reopen has come close to the federally recommended decline in cases over a 14-day period…

 

Some states, such as Colorado and Kentucky, have reported fewer new cases in the past week. But no single state has had a two-week decline in case numbers.

The guidelines being referenced here are the White House’s “Opening Up America Guidelines”. The document offers criteria in three different areas that states should meet before reopening, but the reports above are focused only on the standards related to new cases. These are the official criteria on new infections:

Downward trajectory of documented cases within a 14-day period

 

OR

 

Downward trajectory of positive tests as a percent of total tests within a 14-day period (flat or increasing volume of tests)

So what’s going on here? Is it really true that no state in the US is on the down-slope of the COVID-19 crisis?

No, it’s not.

In fact, a quick analysis of publicly available data shows that a full 40 states met at least one of the suggested criteria on new cases as of May 2. (Specifically, 22 states met the documented cases criterion and 37 fit the alternative positive test percentage criterion.)

The disconnect arises because the White House guidelines don’t directly spell out the calculation they have in mind when they ask for a “downward trajectory” within a 14-day period. Instead, different people apply their own calculation and reach different conclusions about the exact same data and exact same guidelines.

Worse still, the media outlets cited above are choosing to interpret the guidelines in a way that will be very difficult for any state to meet prior to outright eradication of COVID-19.

As Chris Wallace says in the quote, no state has had a steady decline in new cases over a two week period. If we assume a “steady decline” would require that a state report fewer (or flat) new cases every day compared to the prior day, then this is technically true. But it’s also a really bad standard to use when dealing with data that is expected to have natural variation from day-to-day.

To see how this fails in practice, consider the state of Alaska. As of May 2, this is what Alaska’s last 14 days of test results looked like, using data from the COVID Tracking Project:

[Table: Alaska’s daily COVID-19 test results for the 14 days ending May 2]

In total, Alaska had 51 new cases over the two-week period that ended on May 2, out of nearly 12,000 people tested, for a positivity rate of under 0.5%. Even accounting for Alaska’s smaller population, this data strongly suggests that Alaska has the virus reasonably under control. (For comparison, New Jersey’s stats over this same period were around 42,000 new positive cases and a 42% positivity rate.)

That said, under the “steady decline” standard used by Fox (and NBC*), it would still fail. May 1 saw cases jump from 0 to 9, and broke a 2-day streak of declines. As it happens, the previous day with 0 positive cases also had 0 total tests, highlighting the extreme volatility in daily testing data at a state level. These types of wild fluctuations in daily test counts mean that any daily calculation is bound to be unreliable.

Alaska’s data also shows us why this standard requires almost complete eradication before it can be met. During this period, Alaska had one day (April 25) with 0 positive test results in spite of many new tests being run. Thus, if they were to report even 1 new case in the next 13 days, it would still fail to meet the extreme “steady decline” standard. Clearly, this is not a reasonable requirement and is also not the intent behind the reopening guidelines.

The more appropriate way to do this calculation is to compare the total results from two sequential two-week periods against each other. This prevents volatility in any single day from distorting the result, and provides a much better picture of the trend. Here’s what the results look like for Alaska under those conditions:

[Table: Alaska’s new cases, total tests, and positivity rate over two sequential two-week periods]

In this view, we can see that, while Alaska never experienced much of a spike, its cases do appear to be on the decline. Despite tripling the number of new tests, they still reported fewer cases. The positivity rate shows a precipitous drop accordingly.
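
In code, the two readings of “downward trajectory” can be compared directly. This is a minimal sketch with hypothetical daily counts echoing Alaska’s noisy pattern:

```python
# Daily new-case counts for 28 days, oldest first (hypothetical numbers).
cases = [9, 4, 7, 6, 8, 5, 7, 6, 4, 5, 3, 6, 4, 5,   # prior two weeks
         4, 3, 5, 2, 4, 0, 3, 2, 1, 3, 0, 2, 9, 1]   # latest two weeks

prior, latest = cases[:14], cases[14:]

# Reading 1: "steady decline" -- every day at or below the previous day.
steady_decline = all(b <= a for a, b in zip(latest, latest[1:]))

# Reading 2: totals of two sequential two-week periods compared.
two_week_decline = sum(latest) < sum(prior)

print(steady_decline)    # False -- noisy day-to-day jumps break the streak
print(two_week_decline)  # True -- 39 vs. 79, clearly trending downward
```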

Stepping back from the details of Alaska, we can see this same trend play out across numerous other states. The table below shows the key data points for all states for the last two sequential two-week periods using the approach I discussed above. It also identifies which states would currently meet the case component of the White House reopening guidelines based on those results:

[Table: two-week case and positivity trends for all states, with White House criteria status as of May 2]

Of course, none of this means that the White House reopening guidelines are a good standard, nor that the lockdown policies are even desirable in the first place.

But, if journalists and public officials are going to treat these guidelines as a reasonable standard for phasing out the lockdowns, it’s important to get the math right.

 

*Note: The NBC article cited above includes a correction indicating that they misstated the guidelines as requiring daily declines. But even though they changed some of the wording in their article, they apparently didn’t rerun the analysis and still concluded no states met the guidelines. That is not true now, and it was not true at the time of their article.

Between a Rock and San Francisco

Over at Reason, Christian Britschgi reports on one of the unfortunate choices that faced San Franciscans in this week’s vote–whether to increase a business receipts tax by ~500 or ~1,000 percent.
As I write this, it’s too close to call and more votes are coming in, but it’s possible that San Franciscans might end up declining them both. Here’s hoping.

Congress Lifts a Stress Off the Financial System

Congress has finally taken action to liberalize the nation’s financial system. The bill was S. 2155, the Economic Growth, Regulatory Relief, and Consumer Protection Act. It passed both houses with a surprising degree of bipartisan support and was signed by President Trump last week on May 24.
Opponents cast the bill as a gift to the largest banks that will put the economy at greater risk for a new financial crisis.
In fact, the bill represents a modest and limited reform of Dodd-Frank. The law scales back an onerous regulatory exercise of questionable utility, and it actually reduces the artificial incentive for most financial institutions to get bigger.
The US financial system is still a long way from a true free market, but after this bill is implemented, the regulated market that remains will be less distorted than before.
What the Bill Does
The most important–and contentious–aspect of the new law is that it reduces the stress test requirements imposed on financial institutions.
Under the previous law, there were three tiers of stress test requirements based on the total assets of the financial institution:

  • $10B or less: No explicit stress test requirements.
  • $10B to $50B: Annual stress test required.
  • $50B and above: Semiannual stress tests required. One of these tests is subject to the Comprehensive Capital Analysis and Review (CCAR) process, which requires more detail and a compressed timeline relative to what is required for institutions in the $10B to $50B tier.

As the size of the financial institution increases, and the implicit risk to the stability of financial system grows, the regulations get more stringent and costly to implement.
The new reform bill, S. 2155, preserves the same general structure for stress testing, but it modifies the thresholds and some other details. The new total asset tiers and requirements under S. 2155 are as follows:

  • Under $100B: No explicit stress test requirements.
  • $100B to $250B: Stress tests required on a “periodic” basis; the definition of periodic is not immediately clear. The bill also includes a provision that allows the Federal Reserve Board (FRB) to impose, at its discretion, other prudential standards.
  • $250B and above: Annual stress test required, which would be subject to the CCAR process mentioned previously.

So as a result of the new rule, all institutions that are $10B in assets or above will have the regulatory burden from stress testing reduced to some extent. The largest institutions will only need to perform the test annually, instead of semiannually, while many smaller institutions will see the stress testing requirements from the Dodd-Frank Act eliminated.
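In schematic form, the two regimes look like this (a sketch for illustration only; the “periodic” tier is left as vague as the statute leaves it):

```python
def stress_test_requirement(total_assets_billions, regime="s2155"):
    """Map a bank's total assets to its stress-test tier (schematic only)."""
    if regime == "dodd_frank":  # pre-reform thresholds
        if total_assets_billions < 10:
            return "none"
        if total_assets_billions < 50:
            return "annual"
        return "semiannual, one test subject to CCAR"
    # post-S. 2155 thresholds
    if total_assets_billions < 100:
        return "none"
    if total_assets_billions < 250:
        return "periodic (definition unclear); FRB discretion for more"
    return "annual, subject to CCAR"

print(stress_test_requirement(30, regime="dodd_frank"))  # annual
print(stress_test_requirement(30))                       # none
```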
Notably, these changes are set to take effect 18 months from the date of enactment for financial institutions with $100B or more in assets. Thus, the previous stress testing thresholds and requirements will continue to apply to them in 2018 and 2019.
The effective date of these changes for financial institutions under $100B in assets may occur sooner, depending on the way regulatory agencies choose to interpret the law. As of this writing, this interpretation is not yet certain.
Stressful Tests
Opponents of the new reforms argue that the thresholds for stress testing are far too high in S. 2155 and will put taxpayers at risk for more bailouts. Thus, one coalition lobbying against the bill accused the Democrats who crossed the aisle to support the legislation of being part of the #BailoutCaucus.
While it is heartening to see progressive groups making appeals to protect taxpayers, it is misguided in this case.
Such arguments take for granted that mandated stress tests really are an effective way to enhance and verify the stability of a financial institution. Somewhat paradoxically, the opponents also assume that the cost of completing such a test is relatively low.
As an example of this type of perspective, consider the comments offered by Dr. Dean Baker, an economist who has strongly opposed the recent reforms. Here’s how he explained the stress testing process in a recent interview:

The main part of the law that they changed was that they [the banks] had to undergo stress testing… What that means is that they just put their assets on a spreadsheet. So they say how many mortgage loans they have, how many car loans, business loans.

And then they’re…given a number by the Federal Reserve Board. Assume 10% of those go bad, and they’ll have an extreme case, assume 15%. I’m picking those numbers out of the air, but they’re given numbers and then they go, okay, how would our books look if that was the case?

That’s a very simple exercise, or at least it should be for any institution of that size. So the idea that this is some sort of huge regulatory burden is basically utter nonsense.

It’s hard to overstate just how incorrect this explanation is. Still, it’s useful to raise because I think it’s likely similar to the image many defenders of mandated stress tests have in their mind.
To people like Dr. Baker, this is just a minor exercise that helps stabilize the financial system, and the greedy snowflake bankers are making a frivolous complaint to their beholden lawmakers.
To people like me, who have directly participated in the Dodd-Frank stress testing process–and completed literally hundreds of pages of documentation related to it, per year—the stress tests are a serious and onerous undertaking that, in their present form, provide little discernible value.
Given this wide disparity between the perspective of pundits and practitioners, it’s useful to explain what the stress testing process actually looks like.
What Mandated Stress Testing Actually Looks Like
Pursuant to the Dodd-Frank Act, each year, the FRB publishes three new supervisory scenarios for use in stress testing: baseline, adverse, and severely adverse. As the names imply, the baseline scenario represents a business-as-usual environment with moderate economic growth; the adverse scenario reflects a minor economic recession; and the severely adverse scenario includes a deep economic recession, which is generally on par with the severity of the Great Recession, though it will vary in some particulars.
For each of these scenarios, the FRB provides 16 domestic variables and 12 international variables in total. These variables include economic metrics like GDP growth and the unemployment rate as well as some key interest rates. The variables are provided as a quarterly average for the next 13 quarters.
(Notably, contrary to Dr. Baker, the FRB does not provide loss rates for loans; companies are required to forecast these loss rates and be able to defend those forecasts as we’ll see.)
In addition to the variables, the FRB provides a brief qualitative description of what’s happening in each scenario, which runs around one page for each. Here are the scenarios that the FRB provided for 2018 testing cycle as an example.
And that’s it.
From this information, the company will need to forecast out their consolidated financial results–balance sheet, income statement, and capital ratios–for the next 9 quarters in each scenario. Since the end results will be reviewed and scrutinized by regulators, the company needs to be able to explain and support each aspect of those forecasts.
This is no small task.
One challenge that immediately arises in this process is that the range of variables provided by the FRB is insufficient for forecasting everything that is expected. For example, the FRB provides just six interest rates in its scenarios, and it omits many commonly referenced rates like 1-month LIBOR or the 1-year LIBOR, among others. Many loans use these types of rates as an index rate, so it’s necessary to know what they are in the scenarios to accurately project interest income and effective yields.
As a result, one of the first steps in the forecasting process is to find a way to forecast the values of the additional variables and interest rates that are needed. The company could forecast the variables in-house or pay a third party that has already forecast an expanded set of stress test variables. This is a good deal for the third party, but it represents a needless headache, and cost, for the company performing the stress test. In the real world, it would never be necessary for a company to derive these variables because they are all readily available in real-time.
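As an illustration of the kind of workaround involved, one simple approach is to project a missing rate as a provided scenario rate plus its historical average spread. This is a sketch with invented numbers, not a description of any particular company’s method:

```python
# Historical quarterly rates, in percent (invented for illustration).
hist_3m_treasury = [2.10, 2.15, 2.20, 2.25]  # a rate the FRB does provide
hist_1m_libor = [2.35, 2.42, 2.45, 2.52]     # a rate the FRB omits

avg_spread = sum(l - t for l, t in zip(hist_1m_libor, hist_3m_treasury)) / 4

scenario_3m_treasury = [1.50, 0.80, 0.40, 0.30]  # from the supervisory scenario
derived_1m_libor = [round(r + avg_spread, 2) for r in scenario_3m_treasury]
print(derived_1m_libor)  # [1.76, 1.06, 0.66, 0.56]
```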
With that hurdle addressed, the company can move on to the actual forecasting.
To give you a sense for the complexity of this process, let us consider what is involved in forecasting credit losses in these scenarios.
The first point to note here is that it’s not possible to forecast total credit losses for all loan balances in the aggregate. The total loan balance will include many different types of loans, each of which will behave differently under times of economic stress and will experience different loss rates. Likewise, different loan types will be more sensitive to some economic variables than others. For instance, a collapse in the home price index (HPI) variable would have a major negative impact on residential mortgages. As borrowers find themselves underwater on their homes, credit losses for this portfolio would rise significantly. However, one wouldn’t expect a drop in HPI to cause the same spike in losses for, say, commercial lines of credit.
So the company knows it needs to forecast credit losses, at least, at the portfolio or loan type level to have a meaningful outcome. In practice, the company could forecast at a more granular level, but we will stick with the loan type level to avoid over-complicating the example.
Preparing the baseline forecast is relatively easy. Most companies will already have some internal projections of credit losses by loan type, and these projections will generally serve as a decent starting point.
Things get trickier when it comes to the adverse and severely adverse scenarios. The company can try to rely on its own experience and the experience of peers in the past recession, but this will be highly imperfect. Here are a few of the questions that would need to be considered:

  • Have underwriting standards improved since the last recession? If so, then the average borrower may be more creditworthy and the portfolio should generate fewer losses.
  • Has the geographic concentration of the portfolio changed since the last recession? Although the FRB only provides national variables, some cities and some parts of the country tend to be more resilient than others in a downturn, which could impact credit losses.
  • How does the current scenario compare with the economic conditions in the last recession? Are the factors that impact this specific portfolio more or less severe than before?

All of these types of changes could have a major impact on the results. The company would need to take them into consideration to support their forecast, either qualitatively or quantitatively.
Then, once all the credit loss forecasts are developed, the company will use the results and determine various other line-items that are affected–provision for loan losses, allowance for loan losses, actual loan balances, and so on.
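To give a flavor of the mechanics, here is a deliberately toy version of the loan-type calculation. Every balance and loss rate below is invented; in a real exercise, each loss rate has to be modeled and defended, not assumed:

```python
portfolios = {  # loan balances, in $ millions (hypothetical)
    "residential_mortgage": 4_000,
    "commercial_credit": 2_500,
    "auto": 1_200,
}
severely_adverse_loss_rates = {  # cumulative 9-quarter loss rates (assumed)
    "residential_mortgage": 0.06,  # hit hard by the scenario's HPI collapse
    "commercial_credit": 0.04,
    "auto": 0.05,
}

losses = {p: balance * severely_adverse_loss_rates[p]
          for p, balance in portfolios.items()}
print(losses)                # feeds provision, allowance, and capital ratios
print(sum(losses.values()))  # total scenario credit losses: $400M
```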
We’ve still only really scratched the surface here, but the example I’ve described is a simplified version of the process that could go into forecasting one part of the overall financial performance. Some items won’t be as complicated as credit losses, but the company will have to repeat a variant of this process to forecast all the other components of the financial statement in each scenario.
Suffice it to say, the full process requires a significant amount of work in order to produce a stress test forecast that can stand up to scrutiny. I’ll let you guess how many people have to be involved in the stress tests to complete them.
The First Crisis Without Uncertainty
Given all the work that goes into the stress testing process, you might hope that the result really is valuable. Unfortunately, this is far from clear.
In theory, the stress test is supposed to test how a financial institution will perform during another severe crisis. The problem is that the structure of the stress test removes one of the most critical and defining characteristics of an actual crisis: uncertainty.
As mentioned above, the FRB provides the scenario variables for the full 13-quarter duration upfront. In other words, the FRB is effectively telling companies what the path of the economy will be for the next 13 quarters. The companies know when the recession starts, how deep it goes, which aspects of the economy are hit hardest, and when the recovery begins to take hold.
In a real recession, of course, the financial institutions would not know any of these things. They might have an educated guess, or hope, but they do not know. They have to make critical financial decisions in a recession in the dark, and many mistakes will be made. This is how uncertainty works. But in the stress tests mandated by Dodd-Frank, uncertainty has been replaced by omniscience.
As a result, the severely adverse scenario is not really testing how a financial institution will fare in a deep recession. It’s testing something more specific, and less useful:
“If financial institutions have a crystal ball at the onset of a severe recession and have definite knowledge about the path of broader economy for next 13 quarters, will they be able to survive?”
Knowingly or not, that is the question that the Dodd-Frank stress tests are answering.
As a purely academic matter, I suppose the question is an interesting one. But we should not delude ourselves into believing that such a test is necessary or sufficient to ensure financial stability.
Unintended Consequences
While the precise value of the Dodd-Frank stress tests is uncertain, the costs they impose are quite real.
Under Dodd-Frank, the rigor and frequency of the tests escalated with the size of the financial institution. At $10B in assets, financial institutions became subject to an annual stress test requirement. At $50B, they had to perform stress tests semiannually, and one of those tests was subject to the more exhaustive CCAR process.
What’s important to note here is that once you’ve crossed a given threshold, the cost of compliance is largely fixed. That is, the cost of performing a stress test for an $11B bank is not much different from the cost for a $49B bank. But since the smaller bank has much less revenue to absorb that cost, the end result is that the stress test is relatively more harmful to small banks than to large ones.
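To see the asymmetry, plug in some purely hypothetical numbers; both figures below are invented for illustration:

```python
# Hypothetical illustration of why a largely fixed compliance cost weighs
# more heavily on a small bank. Both assumptions below are invented.

STRESS_TEST_COST = 20_000_000  # assumed annual cost of the program ($)
REVENUE_TO_ASSETS = 0.04       # assumed revenue as a share of total assets

for assets in (11_000_000_000, 49_000_000_000):
    revenue = assets * REVENUE_TO_ASSETS
    share = STRESS_TEST_COST / revenue
    print(f"${assets / 1e9:.0f}B bank: stress testing consumes "
          f"{share:.1%} of revenue")
```

Under these made-up assumptions, the same program consumes roughly 4.5% of the smaller bank’s revenue but only about 1% of the larger bank’s.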
This dynamic creates odd incentives. When financial institutions are about to cross the $10B or $50B threshold, they have an incentive to do whatever they can to keep their assets below the limit. They may try to sell off loans, pay down liabilities, divest subsidiaries, or take other actions to avoid breaching the limit and triggering large new compliance costs. Likewise, if a financial institution has just crossed one of these thresholds, it finds itself at an immediate disadvantage compared to its larger competitors. It has an incentive to pursue acquisitions or get acquired itself so it can dilute the fixed costs created by the Dodd-Frank stress tests.
In other words, one of the unintended consequences of Dodd-Frank is that it created artificial economies of scale. Although the law was intended to prevent banks from being “too big to fail,” one of its core features actually gave banks a huge financial incentive to get bigger. Is it any wonder that the leaders of America’s largest banks, like Goldman Sachs and Bank of America, have defended Dodd-Frank?
Since the new law doesn’t eliminate stress tests entirely, this artificial incentive will unfortunately still exist. But because the thresholds have been moved higher, the incentive will apply to a smaller number of institutions. That is an improvement.
A Safe Prediction
When the next recession eventually comes, some banks will probably fail. Shareholders and creditors will lose money. If they have effective lobbyists, they might convince Congress that taxpayers should lose money too.
We don’t know whether the next recession will be better or worse than the last one. But we can be reasonably certain that when it occurs, this bill, and its modest reform of stress testing, will be blamed by many pundits and politicians for causing the crisis. No doubt they will call for more regulation in response.
When that debate arrives, it will be important to separate fact from fiction with regard to the Dodd-Frank stress testing program. In the abstract, stress testing sounds like a good idea. However, details matter, and the types of stress tests mandated by Dodd-Frank do much more harm than good.
They are a significant drain on resources for small banks, they create an artificial competitive advantage for the largest banks, and because the stress tests remove uncertainty, they don’t even provide a meaningful simulation of how a bank will perform in a downturn.
Given all these flaws, reducing the scope of the stress testing mandate was a reasonable and admirable step towards liberalizing the US financial system.
Disclaimer
I am employed as a financial analyst at a bank. The views expressed above are solely my own and do not represent the views of my employer.
UPDATE (6/3/2018): This piece was updated to note that under the new law, institutions with between $100B and $250B in assets will still be subject to periodic stress tests.

The Absurdity of Trump’s Syria Strikes

Last Friday night, President Trump launched cruise missiles against government facilities in Syria. His partners in the strike were France and the UK, leading to an appealing acronym for this trilateral band: the FUKUS coalition.*

New Bill in Senate Would Legalize Growing Hemp

Senator Mitch McConnell (R-KY) announced a surprisingly sensible bill this week that would end the prohibition on hemp farming at the federal level. The bill is co-sponsored by fellow Kentuckian Rand Paul (R) and Oregon’s Ron Wyden (D).

Heretofore, growing hemp has been illegal in the US because the plant is related to marijuana. The important difference, of course, is that you can’t get high from hemp. Even so, hemp has been classified as a controlled substance: it can be imported from abroad, and it’s legal to own, but it can’t be grown domestically.

Obviously, this policy doesn’t make much sense. What’s odd is that it now has a real chance of getting fixed, given that the Senate Majority Leader is backing the change.

The reason for the change of heart? Economics.

It turns out Kentucky is one of the states that could develop a major agricultural hemp industry if the law permitted it. This creates a potent constituency that would stand to benefit if the ban were lifted.

As a result, Senator McConnell finds himself on the same side of the issue as the libertarian-ish Senator Paul and the Oregon Democrat who previously sponsored a bill to legalize actual marijuana, not just hemp.

The Stormy Daniels Story Finally Has a Newsworthy Allegation

After her interview with 60 Minutes, the story surrounding Stormy Daniels finally has something newsworthy about it.

In the discussion, Daniels said she was threatened in an effort to keep her silent about her alleged affair with Donald Trump. The threat allegedly occurred in 2011, shortly after In Touch Magazine had offered her $15,000 for the story. In Touch ultimately declined to pursue it.

Daniels said she was with her infant daughter at the time of the threat. This is how she described it in the interview:

“And a guy walked up on me and said to me, ‘Leave Trump alone. Forget the story,'” she said. “And then he leaned around and looked at my daughter and said, ‘That’s a beautiful little girl. It’d be a shame if something happened to her mom.’

“And then he was gone,” she said.

If true and ordered by Trump, this threat is the first aspect of the story that would rise to the level of criminality. It’s also the first element of the plot that could begin to justify the sustained media attention this case has received.

Until now, it was an extramarital affair and a non-disclosure agreement. If Trump had run as an exemplar of virtue and good moral character, the story would have been relevant. He did not.

Indeed, compared to the infamous “grab ’em by the p****” line in the Access Hollywood tape, an extramarital affair would be quite bland for President Trump.

Trump’s voters preferred him in spite of his moral standing, not because of it.

There’s little doubt that the Stormy Daniels story will continue to dominate headlines and displace coverage of more important problems in the Trump administration–like the appointment of John Bolton, the ongoing US-backed onslaught in Yemen, or the apparent plan to withdraw from the Iran Deal, just to name a few.

Obviously, the media’s priorities are not ideal. But with the new allegation, at least there’s finally something in this story worth covering.

Trump: ‘Trade Wars Are Good’

Yes, that’s apparently a real quote from President Trump.

This is likely the worst economic position he has staked out so far.

On the plus side, if he puts his trade war optimism to the test and thus sparks the next recession, perhaps we won’t have to hear contrived arguments about how tax cuts cause recessions…

Fool’s Errand: Time to End the War in Afghanistan

by Scott Horton

What Social Animals Owe to Each Other

by Sheldon Richman

Coming to Palestine

by Sheldon Richman

No Quarter: The Ravings of William Norman Grigg

by Will Grigg

The Great Ron Paul

by Scott Horton
