Three Ways Not To Analyze COVID-19 Statistics

The COVID-19 pandemic and lockdowns continue to cause unprecedented devastation of everyday life in the United States–approximately 100,000 deaths, tens of millions unemployed, and countless plans, activities, and goals put on indefinite hold.

In this context, news outlets, politicians, and consumers are closely following the trends in COVID-19 statistics, trying to answer the most pressing questions. Are things getting better or worse in the US? Have we succeeded in flattening the curve? Are the reopened states seeing the surge in new cases that many have feared?

These are important questions. Unfortunately, much of the reporting on the COVID-19 data obfuscates the underlying reality. In most cases, the problem is not that the reporting is literally false. But it typically focuses on the wrong metrics and fails to account for the severe limitations in the underlying data. The end result is that readers–and perhaps policymakers–come away with a more optimistic or pessimistic understanding than is actually warranted.

With that in mind, here are three errors to watch out for in discussions on COVID-19 data.

1. Focusing on the number of newly reported positive cases

This problem has become more common, particularly since some states have started to reopen. Here are some examples of recent headlines that commit this error:

Virginia Reports Highest One Day Increase in Coronavirus Cases After Gov. Ralph Northam Criticized For Not Wearing Mask – Newsweek, 5/25/2020

Texas sees highest single-day hike in coronavirus deaths, cases – Texas Statesman, 5/14/2020

Intuitively, it seems like the number and trend of newly confirmed COVID-19 cases must be an important number. But by itself, it doesn’t tell us much at all. To properly understand it, we also need to know the number and trend in total COVID-19 tests conducted over the same period.

As an illustrative example, let’s consider two random days of test results from Virginia. All results that follow are originally sourced from The Atlantic’s COVID Tracking Project:

With these facts alone, it would appear May 25 was a much worse day than April 13 for Virginia when it comes to the coronavirus. Over three times as many people were confirmed as positive. Surely, that must mean the virus was spreading wider and was more out of control on May 25–after the reopening–than it was in mid-April during the lockdown, right?

Well, not quite. When we add the context of the number of tests performed and the positivity rate (the rate of positive tests out of total test results reported), a very different picture emerges. See below:

From this, we can see a more compelling explanation for why positive tests on April 13 were so much lower–namely, far fewer tests were conducted.

Based on these figures, there’s very good reason to believe the virus situation was actually worse on April 13. The high rate of positives suggests that Virginia was unable to test enough people. So if the state had had enough resources to test all suspected individuals, it’s likely that the number of positives would have been much higher.
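For readers who want to run this kind of check themselves, here is a minimal sketch of the positivity-rate comparison. The counts are placeholders, not Virginia’s actual figures, which come from the COVID Tracking Project:

```python
# Minimal sketch of the positivity-rate comparison described above.
# The counts below are placeholders, NOT Virginia's actual figures;
# the real numbers come from the COVID Tracking Project.

def positivity_rate(new_positives, new_tests):
    """Share of reported test results that came back positive."""
    return new_positives / new_tests

days = {
    "April 13 (hypothetical)": {"positives": 400, "tests": 2_000},
    "May 25 (hypothetical)": {"positives": 1_200, "tests": 12_000},
}

for label, d in days.items():
    rate = positivity_rate(d["positives"], d["tests"])
    print(f"{label}: {d['positives']} positives / {d['tests']} tests "
          f"= {rate:.0%} positivity")

# The second day has three times the positives, but six times the tests and
# half the positivity rate: the raw case count alone tells you very little.
```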

But if you only focus on the positive cases, this reality gets completely turned on its head.

A similar version of this general error can be observed in many reports on record increases in daily cases. Confirmed cases are indeed continuing to rise throughout the US. But the good news is that in most places, the total number of tests is rising at an even faster clip.

2. Focusing on the percentage growth rate (or the doubling rate) of confirmed cases

A related analytical error gets made when media outlets report on the percentage growth rate. Examples of this error in the wild can be routinely found in Bloomberg Radio news updates. Last week, they were reporting around a 1.1% increase in cases, which varied slightly depending on the day.

For a print example, I offer this highly neutral take from Willamette Week in Oregon from May 21, “A Rise in COVID-19 Cases in Deschutes County Tests Whether the State Will Close Bars, Restaurants Again. (So Far? No.)”:

The number of COVID-19 cases in Deschutes County has increased over the last seven days. On Wednesday, the county reported nine cases—more cases than it has on any other single day.

 

Those increases raise the question of whether the state will order the county to shut down the bars, restaurants and hair salons that reopened just six days ago…

 

A 5 percent increase in COVID cases is the benchmark the state set for reviewing the status of a county and possibly shuttering it again. [Health Researcher Numi Lee] Griffith pointed to a 27 percent increase in cases in Deschutes County during the week ending May 20. (emphasis added)

This article is interesting for a couple reasons. First, we see that it actually starts out by committing error #1, reporting a record increase of nine cases without providing information about the number of tests.

(Later on, the article even notes that many of the new cases were actually identified proactively through contact-tracing rather than simple symptomatic testing. If anything, that’s actually a positive indication about the county’s preparedness to mitigate the virus, not a cause for alarm.)

But I digress. The key points in the Willamette Week article are that a) Oregon has actually built this metric into its reopening guidelines and b) Deschutes would have violated it with a 27% increase.

The reason people tend to focus on the growth rate (or in some cases, the days-to-doubling) is because we know that the virus naturally spreads at an exponential rate. One person gives it to three people who each give it to three more people and so on.

In theory, the growth rate is useful because it could offer a window into how quickly the virus is spreading currently, and whether the curve has been sufficiently flattened.

But here’s the problem. One of the key features that makes COVID-19 harder to deal with is that many people who contract the virus experience no symptoms at all. And while this is not entirely proven, it’s generally believed that these asymptomatic individuals are still contagious and thus contribute to the exponential spread of the disease.

The challenge is that testing capacity has been so limited that states have not been able to conduct the kind of widespread random testing that would be needed to identify all of the asymptomatic cases. The other way to plausibly identify all or most asymptomatic cases is through a robust contact-tracing system like that of South Korea or Taiwan. But the US’s capabilities here are still limited. Instead, COVID-19 testing around the country has been prioritized for people with symptoms and healthcare workers.

The upshot of all this is that the growth rate is not a useful proxy for the thing we’re actually trying to measure. What we want to know is the true rate of spread for the virus, in real-time. But due to testing limitations, the growth rate mostly reflects a) the growth rate in testing capacity and b) the growth rate in symptomatic patients.
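A toy simulation with invented numbers illustrates the confound: if testing capacity is the binding constraint, the observed growth in confirmed cases ends up tracking the growth in testing, not the true spread of the virus.

```python
# Illustrative sketch (hypothetical numbers): when testing capacity is the
# binding constraint, the observed growth rate of confirmed cases mostly
# tracks the growth rate of testing, not the true spread of the virus.

true_infections = 50_000   # assumed current true infections
true_growth = 1.25         # assumed true spread: +25% per period
tests_available = 5_000    # assumed testing capacity per period
testing_growth = 1.05      # capacity grows only 5% per period
hit_rate = 0.30            # assumed share of tests finding a case while supply-constrained

prev_confirmed = None
for period in range(1, 6):
    true_infections *= true_growth
    tests_available *= testing_growth
    # While tests are scarce, confirmed cases are limited by capacity.
    confirmed = min(true_infections, tests_available * hit_rate)
    if prev_confirmed:
        observed_growth = confirmed / prev_confirmed - 1
        print(f"Period {period}: observed case growth {observed_growth:.1%} "
              f"vs. true growth {true_growth - 1:.0%}")
    prev_confirmed = confirmed
```

Under these assumptions the reported growth rate hovers around 5 percent per period, the pace of the testing ramp-up, even though the assumed true spread is 25 percent.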

This error actually cuts in both directions. Early on in the COVID-19 crisis in February–when the CDC was hard at work developing a faulty test and the FDA was simultaneously preventing others from creating a better one–the nation was testing virtually no one. So most metrics looked good.

Then in mid-March as testing capacity finally got built out, the number of positive cases quickly exploded. Positive cases were doubling every two to three days, as this chart shows:

And then, starting in mid-April and continuing to the present, the growth rate and doubling rate slowed back down. Perhaps this can be partly explained by the voluntary precautions and the lockdowns. But clearly, the more important driver is this: While the virus may grow exponentially, US testing capacity does not.

At each point in the process, including today, these metrics have not been meaningful in the US. In March, they offered a belated confirmation that the virus was already spreading widely. And now, they suggest the virus is slowing down, in part because testing capacity can only grow so fast.

3. Citing the case fatality rate as a meaningful statistic

As its name implies, the case fatality rate (CFR) is calculated by taking the total number of deaths attributed to COVID-19 and dividing by the total number of confirmed cases. The calculation is straightforward, but the result is worse than useless in the case of COVID-19, as we’ll see.

The most high profile example of bad reporting on the CFR comes from the World Health Organization, whose director said this on March 3:

Globally, about 3.4% of reported COVID-19 cases have died. By comparison, seasonal flu generally kills far fewer than 1% of those infected.

This shockingly high 3.4% figure was used as one of the reasons to justify widespread lockdowns. And yet, the statement itself offers a clue about the problems with this metric.

In that quote, the WHO is comparing the then-calculated CFR of COVID-19 to the infection fatality rate (IFR) of seasonal influenza. These are not the same metric.

In effect, the CFR is what we can easily observe and calculate. The IFR is what we actually care about, but it’s harder to determine. The difference between the two metrics is the denominator. The CFR divides by total confirmed cases, and the IFR divides by total infections.

Since confirmed cases are a subset of total infections, the CFR will always be higher than the IFR. This doesn’t mean that COVID-19 is the same as the flu. But it does mean that comparing the CFR of one disease to the IFR of another is unlikely to provide useful information.
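A toy calculation makes the gap obvious. The numbers below are invented purely for illustration:

```python
# Toy illustration (made-up numbers) of why the CFR exceeds the IFR whenever
# confirmed cases are only a subset of total infections.

deaths = 1_000
confirmed_cases = 20_000     # hypothetical: what limited testing finds
total_infections = 100_000   # hypothetical: the true (unobserved) count

cfr = deaths / confirmed_cases    # what we can compute from reported data
ifr = deaths / total_infections   # what we actually want to know

print(f"CFR = {cfr:.1%}, IFR = {ifr:.1%}")
# CFR = 5.0%, IFR = 1.0%: same disease, same deaths, very different numbers.
# Comparing one disease's CFR to another's IFR is apples to oranges.
```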

To be fair, it’s conceivable that the gap between the CFR and IFR will not be significant for some diseases. For a well-understood disease with widespread testing available, the number of confirmed cases would likely approximate the total number of infections, and thus the CFR would be close to the IFR. However, this is not remotely true for COVID-19 now, and it was even less true at the beginning of March.

For COVID-19, there have been testing shortages all over the world, with a few exceptions. As a practical matter, this meant that tests were generally prioritized for people with severe symptoms and healthcare workers. This prioritization was necessary to try to treat patients more effectively and reduce spread in the hospital environment. But it also compromises the value of a CFR calculated off the resulting data.

The first problem is selection bias. If you’re primarily testing patients that already had severe symptoms, then the population of confirmed cases is skewed towards those that are going to have worse health outcomes from the disease. In turn, this will systematically push up the CFR.

A related problem is that limited testing obviously means the number of confirmed cases will be far lower than the total number of infections. By contrast, the COVID-19 death count, while imperfect, should at least be less understated. The reason is that some jurisdictions, like the US, now include “probable” cases of COVID-19 in the death counts, even without a confirmed test. Thus, although limited testing will effectively cap the number of confirmed cases reported, it does not cap the number of deaths reported. This reality will also tend to systematically inflate the CFR.

The final problem with the CFR occurs simply because COVID-19 is a new disease, and there’s a significant time lag between when someone contracts the disease and when they might actually pass away as a result. At any given time, some percentage of the total confirmed and active cases relates to individuals who will eventually die from the disease. This will cause the CFR to be artificially lower than reality (though the effect is diluted as the disease progresses over time).
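A toy timing example (with invented numbers) shows the direction of this bias:

```python
# Toy timing illustration (invented numbers): some of today's active cases
# will eventually end in death, but those deaths aren't in the numerator yet.

confirmed_to_date = 50_000
deaths_to_date = 1_500
eventual_deaths_among_these_cases = 2_500  # hypothetical final outcome for the same cases

cfr_today = deaths_to_date / confirmed_to_date
cfr_eventual = eventual_deaths_among_these_cases / confirmed_to_date

print(f"CFR measured today: {cfr_today:.1%}")                   # 3.0%
print(f"Eventual CFR for the same cases: {cfr_eventual:.1%}")   # 5.0%
```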

As we see, the errors in the CFR are considerable. And while they point in different directions, there’s every reason to believe that on net, the CFR significantly overestimates the true lethality rate of COVID-19.

The problem is not that the CFR is literally false. The CFR for COVID-19 is being calculated correctly; it’s just not a meaningful number.

“Follow the Data”

These days, it seems like we are constantly being told by pundits and politicians that we need to “follow the data” when it comes to COVID-19.

By itself, that’s not bad advice. But too often, these people act as though the data provides a script. We just look at the data, put it in our model, and voila! enlightenment rains down upon us.

It would be nice if it worked that way. In reality, “The Data” doesn’t tell us anything. People interpreting the data tell us about their conclusions, and they’re not always right.

Using Bad Math, Media Claims No State Has Met the Reopening Guidelines on New Cases

As the debate over lifting the lockdowns in the US intensifies, key data points on testing and infections are routinely mischaracterized.

Consider this summary from Chris Wallace on Fox News Sunday on May 3:

As we said, about half the states — more than half the states have started in some way, shape, or form, reopening. But we’ve crunched the numbers, Doctor, and not a single state has met the White House gating guidelines of two weeks of steady decline in new cases.

It’s not just Fox News that is describing the data in this way. Here’s how NBC News reported it on April 28:

As a handful of states begin to ease stay-at-home restrictions, no state that has opted to reopen has come close to the federally recommended decline in cases over a 14-day period…

 

Some states, such as Colorado and Kentucky, have reported fewer new cases in the past week. But no single state has had a two-week decline in case numbers.

The guidelines being referenced here are the White House’s “Opening Up America Guidelines”. The document offers criteria in three different areas that states should meet before reopening, but the reports above are focused only on the standards related to new cases. These are the official criteria on new infections:

Downward trajectory of documented cases within a 14-day period

 

OR

 

Downward trajectory of positive tests as a percent of total tests within a 14-day period (flat or increasing volume of tests)

So what’s going on here? Is it really true that no state in the US is on the down-slope of the COVID-19 crisis?

No, it’s not.

In fact, a quick analysis of publicly available data shows that a full 40 states met at least one of the suggested criteria on new cases as of May 2. (Specifically, 22 met the documented cases criterion and 37 met the alternative positive test percentage criterion.)

The disconnect arises because the White House guidelines don’t directly spell out the calculation they have in mind when they ask for a “downward trajectory” within a 14-day period. Instead, different people apply their own calculation and reach different conclusions about the exact same data and exact same guidelines.

Worse still, the media outlets cited above are choosing to interpret the guidelines in a way that will be very difficult for any state to meet prior to outright eradication of COVID-19.

As Chris Wallace says in the quote, no state has had a steady decline in new cases over a two week period. If we assume a “steady decline” would require that a state report fewer (or flat) new cases every day compared to the prior day, then this is technically true. But it’s also a really bad standard to use when dealing with data that is expected to have natural variation from day-to-day.

To see how this fails in practice, consider the state of Alaska. As of May 2, this is what Alaska’s last 14 days of test results looked like, using data from the COVID Tracking Project:

Alaska Daily Cases

In total, Alaska had 51 new cases over the two-week period that ended on May 2 out of nearly 12,000 people tested, for a positivity rate of under 0.5%. Even accounting for Alaska’s smaller population, this data strongly suggests that Alaska has the virus reasonably under control. (For comparison, New Jersey’s stats over this same period were around 42,000 new positive cases and a 42% positivity rate.)

That said, under the “steady decline” standard used by Fox (and NBC*), Alaska would still fail. May 1 saw cases jump from 0 to 9, breaking a 2-day streak of declines. As it happens, the previous day with 0 positive cases also had 0 total tests, highlighting the extreme volatility in daily testing data at the state level. These types of wild fluctuations in daily test counts mean that any daily calculation is bound to be unreliable.

Alaska’s data also shows us why this standard requires almost complete eradication before it can be met. During this period, Alaska had one day (April 25) with 0 positive test results in spite of many new tests being run. Thus, if they were to report even 1 new case in the next 13 days, it would still fail to meet the extreme “steady decline” standard. Clearly, this is not a reasonable requirement and is also not the intent behind the reopening guidelines.

The more appropriate way to do this calculation is to compare the total results from two sequential two-week periods against each other. This prevents volatility in any single day from distorting the result, and provides a much better picture of the trend. Here’s what the results look like for Alaska under those conditions:

Alaska 2 Week Cases

In this view, we can see that, while Alaska never experienced much of a spike, its cases do appear to be on the decline. Despite tripling the number of new tests, they still reported fewer cases. The positivity rate shows a precipitous drop accordingly.
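For those who want to replicate the approach, here is a short sketch of the two-sequential-two-week comparison, using made-up daily counts rather than Alaska’s actual data, alongside the strict “steady decline” reading used in the reports above:

```python
# Sketch of the two-week comparison, using invented daily counts and test
# volumes rather than Alaska's actual data from the COVID Tracking Project.

daily_positives = [5, 0, 9, 3, 2, 6, 1, 4, 0, 2, 3, 1, 0, 2,   # prior two weeks
                   2, 0, 3, 1, 0, 4, 1, 0, 2, 1, 0, 1, 0, 1]   # latest two weeks
daily_tests = [200] * 14 + [600] * 14                           # testing ramps up

prior_cases, latest_cases = sum(daily_positives[:14]), sum(daily_positives[14:])
prior_tests, latest_tests = sum(daily_tests[:14]), sum(daily_tests[14:])

print(f"New cases: {prior_cases} -> {latest_cases}")
print(f"Positivity: {prior_cases / prior_tests:.1%} -> {latest_cases / latest_tests:.1%}")

# Either criterion from the White House guidelines, applied to period totals:
meets_cases = latest_cases <= prior_cases
meets_positivity = (latest_cases / latest_tests <= prior_cases / prior_tests
                    and latest_tests >= prior_tests)
print("Downward trajectory:", meets_cases or meets_positivity)

# The strict "steady decline" reading fails whenever any single day ticks up:
steady_decline = all(later <= earlier for earlier, later in
                     zip(daily_positives[14:], daily_positives[15:]))
print("Fewer (or flat) cases every single day:", steady_decline)
```

With these invented counts, the period totals and the positivity rate both fall sharply, yet the day-by-day test still fails because of a few random upticks.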

Stepping back from the details of Alaska, we can see this same trend play out across numerous other states. The table below shows the key data points for all states for the last two sequential two-week periods using the approach I discussed above. It also identifies which states would currently meet the case component of the White House reopening guidelines based on those results:

All State 2 Week Analysis

Of course, none of this means that the White House reopening guidelines are a good standard, nor that the lockdown policies are even desirable in the first place.

But, if journalists and public officials are going to treat these guidelines as a reasonable standard for phasing out the lockdowns, it’s important to get the math right.

 

*Note: The NBC article cited above includes a correction indicating that they misstated the guidelines as requiring daily declines. But even though they changed some of the wording in their article, they apparently didn’t rerun the analysis and still concluded no states met the guidelines. That is not true now, and it was not true at the time of their article.

Between a Rock and San Francisco

Over at Reason, Christian Britschgi reports on one of the unfortunate choices that faced San Franciscans in this week’s vote–whether to increase a business receipts tax by ~500 or ~1,000 percent.
As I write this, it’s too close to call and more votes are coming in, but it’s possible that San Franciscans might end up declining them both. Here’s hoping.

Congress Lifts a Stress Off the Financial System

Congress has finally taken action to liberalize the nation’s financial system. The bill was S. 2155, the Economic Growth, Regulatory Relief, and Consumer Protection Act. It passed both houses with a surprising degree of bipartisan support and was signed by President Trump last week on May 24.
Opponents cast the bill as a gift to the largest banks that will put the economy at greater risk for a new financial crisis.
In fact, the bill represents a modest and limited reform of Dodd-Frank. The law scales back an onerous regulatory exercise of questionable utility, and it actually reduces the artificial incentive for most financial institutions to get bigger.
The US financial system is still a long way from a true free market, but after this bill is implemented, the regulated market that remains will be less distorted than before.
What the Bill Does
The most important–and contentious–aspect of the new law is that it reduces the stress test requirements imposed on financial institutions.
Under the previous law, there were three tiers of stress test requirements based on the total assets of the financial institution:

  • $10B or less: No explicit stress test requirements.
  • $10B to $50B: Annual stress test required.
  • $50B and above: Semiannual stress tests required. One of these tests is subject to the Comprehensive Capital Analysis and Review (CCAR) process, which requires more detail and a compressed timeline relative to what is required for institutions in the $10B to $50B tier.

As the size of the financial institution increases, and the implicit risk to the stability of financial system grows, the regulations get more stringent and costly to implement.
The new reform bill, S. 2155, preserves the same general structure for stress testing, but it modifies the thresholds and some other details. The new total asset tiers and requirements under S. 2155 are as follows:

  • Under $100B: No explicit stress test requirements.
  • $100B to $250B: Stress tests required on a “periodic” basis; the definition of periodic is not immediately clear. The bill also includes a provision that allows the Federal Reserve Board (FRB) to impose, at its discretion, other prudential standards.
  • $250B and above: Annual stress test required, which would be subject to the CCAR process mentioned previously.

So as a result of the new rule, all institutions that are $10B in assets or above will have the regulatory burden from stress testing reduced to some extent. The largest institutions will only need to perform the test annually, instead of semiannually, while many smaller institutions will see the stress testing requirements from the Dodd-Frank Act eliminated.
Notably, these changes are set to take effect 18 months from the date of enactment for financial institutions with $100B or more in assets. Thus, the previous stress testing thresholds and requirements will continue to apply to them in 2018 and 2019.
The effective date of these changes for financial institutions under $100B in assets may occur sooner, depending on the way regulatory agencies choose to interpret the law. As of this writing, this interpretation is not yet certain.
Stressful Tests
Opponents of the new reforms argue that the thresholds for stress testing are far too high in S. 2155 and will put taxpayers at risk for more bailouts. Thus, one coalition lobbying against the bill accused the Democrats who crossed the aisle to support the legislation of being part of the #BailoutCaucus.
While it is heartening to see progressive groups making appeals to protect taxpayers, it is misguided in this case.
Such arguments take for granted that mandated stress tests really are an effective way to enhance and verify the stability of a financial institution. Somewhat paradoxically, the opponents also assume that the cost of completing such a test is relatively low.
As an example of this type of perspective, consider the comments offered by Dr. Dean Baker, an economist who has strongly opposed the recent reforms. Here’s how he explained the stress testing process in a recent interview:

The main part of the law that they changed was that they [the banks] had to undergo stress testing… What that means is that they just put their assets on a spreadsheet. So they say how many mortgage loans they have, how many car loans, business loans.

And then they’re…given a number by the Federal Reserve Board. Assume 10% of those go bad, and they’ll have an extreme case, assume 15%. I’m picking those numbers out of the air, but they’re given numbers and then they go, okay, how would our books look if that was the case?

That’s a very simple exercise, or at least it should be for any institution of that size. So the idea that this is some sort of huge regulatory burden is basically utter nonsense.

It’s hard to overstate just how incorrect this explanation is. Still, it’s useful to raise because I think it’s likely similar to the image many defenders of mandated stress tests have in their mind.
To people like Dr. Baker, this is just a minor exercise that helps stabilize the financial system, and the greedy snowflake bankers are making a frivolous complaint to their beholden lawmakers.
To people like me, who have directly participated in the Dodd-Frank stress testing process–and completed literally hundreds of pages of documentation related to it each year–the stress tests are a serious and onerous undertaking that, in their present form, provide little discernible value.
Given this wide disparity between the perspective of pundits and practitioners, it’s useful to explain what the stress testing process actually looks like.
What Mandated Stress Testing Actually Looks Like
Pursuant to the Dodd-Frank Act, each year, the FRB publishes three new supervisory scenarios for use in stress testing: baseline, adverse, and severely adverse. As the names imply, the baseline scenario represents a business-as-usual environment with moderate economic growth; the adverse scenario reflects a minor economic recession; and the severely adverse scenario includes a deep economic recession, which is generally on par with the severity of the Great Recession, though it will vary in some particulars.
For each of these scenarios, the FRB provides 16 domestic variables and 12 international variables in total. These variables include economic metrics like GDP growth and the unemployment rate as well as some key interest rates. The variables are provided as a quarterly average for the next 13 quarters.
(Notably, contrary to Dr. Baker, the FRB does not provide loss rates for loans; companies are required to forecast these loss rates and be able to defend those forecasts as we’ll see.)
In addition to the variables, the FRB provides a brief qualitative description of what’s happening in each scenario, which runs around one page for each. Here are the scenarios that the FRB provided for 2018 testing cycle as an example.
And that’s it.
From this information, the company will need to forecast out their consolidated financial results–balance sheet, income statement, and capital ratios–for the next 9 quarters in each scenario. Since the end results will be reviewed and scrutinized by regulators, the company needs to be able to explain and support each aspect of those forecasts.
This is no small task.
One challenge that immediately arises in this process is that the range of variables provided by the FRB is insufficient for forecasting everything that is expected. For example, the FRB provides just six interest rates in its scenarios, and it omits many commonly referenced rates like 1-month LIBOR or the 1-year LIBOR, among others. Many loans use these types of rates as an index rate, so it’s necessary to know what they are in the scenarios to accurately project interest income and effective yields.
As a result, one of the first steps in the forecasting process is to find a way to forecast the values of the additional variables and interest rates that are needed. The company could forecast the variables in-house or pay a third party that has already forecast an expanded set of stress test variables. This is a good deal for the third party, but it represents a needless headache, and cost, for the company performing the stress test. In the real world, it would never be necessary for a company to derive these variables because they are all readily available in real-time.
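As an illustration of the in-house option, here is a simplified sketch of deriving an omitted rate from one the FRB does provide, using a basic historical fit. All of the rates and relationships shown are hypothetical; actual models are considerably more sophisticated.

```python
# A simplified, hypothetical sketch of how a missing scenario rate might be
# derived in-house: fit the omitted rate (say, 1-month LIBOR) against a rate
# the FRB does provide (say, the 3-month Treasury) on historical data, then
# apply that fit to the scenario path. Real implementations are more involved.

import numpy as np

# Hypothetical historical quarterly averages (percent).
hist_3m_treasury = np.array([0.3, 0.5, 0.8, 1.0, 1.3, 1.6, 1.9, 2.1])
hist_1m_libor    = np.array([0.4, 0.6, 0.9, 1.2, 1.5, 1.8, 2.0, 2.3])

# Simple linear relationship estimated from history.
slope, intercept = np.polyfit(hist_3m_treasury, hist_1m_libor, deg=1)

# Hypothetical severely adverse scenario path for the provided rate (13 quarters).
scenario_3m_treasury = np.array([1.8, 0.9, 0.4, 0.2, 0.1, 0.1, 0.2, 0.3,
                                 0.4, 0.6, 0.8, 1.0, 1.2])

# Derived path for the rate the FRB omitted.
scenario_1m_libor = intercept + slope * scenario_3m_treasury
print(np.round(scenario_1m_libor, 2))
```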
With that hurdle addressed, the company can move on to the actual forecasting.
To give you a sense for the complexity of this process, let us consider what is involved in forecasting credit losses in these scenarios.
The first point to note here is that it’s not possible to forecast total credit losses for all loan balances in the aggregate. The total loan balance will include many different types of loans, each of which will behave differently under times of economic stress and will experience different loss rates. Likewise, different loan types will be more sensitive to some economic variables than others. For instance, a collapse in the home price index (HPI) variable would have a major negative impact on residential mortgages. As borrowers find themselves underwater on their homes, credit losses for this portfolio would rise significantly. However, one wouldn’t expect a drop in HPI to cause the same spike in losses for, say, commercial lines of credit.
So the company knows it needs to forecast credit losses, at least, at the portfolio or loan type level to have a meaningful outcome. In practice, the company could forecast at a more granular level, but we will stick with the loan type level to avoid over-complicating the example.
Preparing the baseline forecast is relatively easy. Most companies will already have some internal projections of credit losses by loan type, and these projections will generally serve as a decent starting point.
Things get trickier when it comes to the adverse and severely adverse scenarios. The company can try to rely on its own experience and the experience of peers in the last recession, but this will be highly imperfect. Here are a few of the questions that would need to be considered:

  • Have underwriting standards improved since the last recession? If so, then the average borrower may be more creditworthy and the portfolio should generate fewer losses.
  • Has the geographic concentration of the portfolio changed since the last recession? Although the FRB only provides national variables, some cities and some parts of the country tend to be more resilient than others in a downturn, which could impact credit losses.
  • How does the current scenario compare with the economic conditions in the last recession? Are the factors that impact this specific portfolio more or less severe than before?

All of these types of changes could have a major impact on the results. The company would need to take them into consideration to support their forecast, either qualitatively or quantitatively.
Then, once all the credit loss forecasts are developed, the company will use the results and determine various other line-items that are affected–provision for loan losses, allowance for loan losses, actual loan balances, and so on.
We’ve still only really scratched the surface here, but the example I’ve described is a simplified version of the process that could go into forecasting one part of the overall financial performance. Some items won’t be as complicated as credit losses, but the company will have to repeat a variant of this process to forecast all the other components of the financial statement in each scenario.
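To make the mechanics concrete, below is a heavily simplified sketch of the loan-type-level loss calculation described above. The portfolios, balances, and loss-rate assumptions are all hypothetical; in a real stress test, each loss rate would have to be modeled and defended rather than hard-coded.

```python
# Highly simplified sketch of loan-type-level credit loss forecasting.
# Portfolios, balances, and loss rates are hypothetical placeholders.

portfolios = {  # loan type: outstanding balance in $M (hypothetical)
    "residential_mortgage": 4_000,
    "commercial_real_estate": 2_500,
    "commercial_lines_of_credit": 1_500,
    "auto": 1_000,
}

# Assumed cumulative loss rates over the forecast horizon, by scenario.
# Note how the mortgage book is far more sensitive to the severe (HPI-driven)
# scenario than the commercial lines of credit are.
loss_rates = {
    "baseline":         {"residential_mortgage": 0.005, "commercial_real_estate": 0.010,
                         "commercial_lines_of_credit": 0.008, "auto": 0.012},
    "adverse":          {"residential_mortgage": 0.020, "commercial_real_estate": 0.030,
                         "commercial_lines_of_credit": 0.015, "auto": 0.025},
    "severely_adverse": {"residential_mortgage": 0.060, "commercial_real_estate": 0.080,
                         "commercial_lines_of_credit": 0.025, "auto": 0.045},
}

for scenario, rates in loss_rates.items():
    losses = {loan_type: balance * rates[loan_type]
              for loan_type, balance in portfolios.items()}
    print(f"{scenario}: total credit losses ${sum(losses.values()):,.0f}M")
```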
Suffice it to say, the full process requires a significant amount of work in order to produce a stress test forecast that can stand up to scrutiny. I’ll let you guess how many people have to be involved in the stress tests to complete them.
The First Crisis Without Uncertainty
Given all the work that goes into the stress testing process, you might hope that the result really is valuable. Unfortunately, this is far from clear.
In theory, the stress test is supposed to test how a financial institution will perform during another severe crisis. The problem is that the structure of the stress test removes one of the most critical and defining characteristics of an actual crisis: uncertainty.
As mentioned above, the FRB provides the scenario variables for the full 13-quarter duration upfront. In other words, the FRB is effectively telling companies what the path of the economy will be for the next 13 quarters. The companies know when the recession starts, how deep it goes, which aspects of the economy are hit hardest, and when the recovery begins to take hold.
In a real recession, of course, the financial institutions would not know any of these things. They might have an educated guess, or hope, but they do not know. They have to make critical financial decisions in a recession in the dark, and many mistakes will be made. This is how uncertainty works. But in the stress tests mandated by Dodd-Frank, uncertainty has been replaced by omniscience.
As a result, the severely adverse scenario is not really testing how a financial institution will fare in a deep recession. It’s testing something more specific, and less useful:
“If financial institutions have a crystal ball at the onset of a severe recession and have definite knowledge about the path of the broader economy for the next 13 quarters, will they be able to survive?”
Knowingly or not, that is the question that the Dodd-Frank stress tests are answering.
As a purely academic matter, I suppose the question is an interesting one. But we should not delude ourselves into believing that such a test is necessary or sufficient to ensure financial stability.
Unintended Consequences
While the precise value of the Dodd-Frank stress tests is uncertain, the costs they impose are quite real.
Under Dodd-Frank, the rigor and frequency of the tests escalated as the size of the financial institution increased. At $10B, financial institutions became subject to an annual stress test requirement. At $50B, financial institutions had to perform stress tests semiannually, and one of these tests would be subject to the more exhaustive CCAR process.
What’s important to note here is that once you’ve crossed a given threshold, the cost of compliance is largely fixed. That is, the absolute cost of performing a stress test for an $11B bank is not much different than the total cost of a stress test for a $49B bank. But since the smaller bank has much less revenue to offset this cost, the end result is that the stress test is relatively more harmful to small banks than larger ones.
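To make the relative-burden point concrete, here is a back-of-the-envelope sketch. The cost and revenue figures are invented for illustration; actual compliance costs vary widely.

```python
# Back-of-the-envelope illustration with made-up numbers: a roughly fixed
# compliance cost weighs far more heavily on a bank just over a threshold
# than on one near the top of the same tier.

stress_test_cost = 15  # assumed annual compliance cost in $M (hypothetical)

banks = {
    "$11B bank": 400,    # hypothetical annual revenue, $M
    "$49B bank": 1_800,  # hypothetical annual revenue, $M
}

for name, revenue in banks.items():
    print(f"{name}: stress test cost = {stress_test_cost / revenue:.1%} of revenue")
```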
This dynamic creates odd incentives. When financial institutions are about to cross the $10B or $50B threshold, they have an incentive to do whatever they can to keep their assets below the limit. They may try to sell off loans, pay down liabilities, divest subsidiaries, or take other actions to avoid breaching the limit and triggering large new compliance costs. Likewise, if a financial institution has just crossed one of these thresholds, it finds itself at an immediate disadvantage compared to its larger competitors. It has an incentive to pursue acquisitions or get acquired itself so it can dilute the fixed costs created by the Dodd-Frank stress tests.
In other words, one of the unintended consequences of Dodd-Frank is that it created artificial economies of scale. Although the law was intended to prevent banks from being “too big to fail,” one of its core features actually gave banks a huge financial incentive to get bigger. Is it any wonder that the leaders of America’s largest banks, like Goldman Sachs and Bank of America, have defended Dodd-Frank?
Since the new law doesn’t eliminate stress tests entirely, this artificial incentive will unfortunately still exist. But by moving the thresholds higher, the incentive will only apply to a smaller number of institutions. That is an improvement.
A Safe Prediction
When the next recession eventually comes, some banks will probably fail. Shareholders and creditors will lose money. If they have effective lobbyists, they might convince Congress that taxpayers should lose money too.
We don’t know whether the next recession will be better or worse than the last one. But we can be reasonably certain that when it occurs, this bill, and its modest reform of stress testing, will be blamed by many pundits and politicians for causing the crisis. No doubt they will call for more regulation in response.
When that debate arrives, it will be important to separate fact from fiction with regard to the Dodd-Frank stress testing program. In the abstract, stress testing sounds like a good idea. However, details matter, and the types of stress tests mandated by Dodd-Frank do much more harm than good.
They are a significant drain on resources for small banks, they create an artificial competitive advantage for the largest banks, and because the stress tests remove uncertainty, they don’t even provide a meaningful simulation of how a bank will perform in a downturn.
Given all these flaws, reducing the scope of the stress testing mandate was a reasonable and admirable step towards liberalizing the US financial system.
Disclaimer
I am employed as a financial analyst at a bank. The views expressed above are solely my own and do not represent the views of my employer.
UPDATE (6/3/2018): This piece was updated to note that under the new law, institutions with between $100B and $250B in assets will still be subject to periodic stress tests.

The Absurdity of Trump's Syria Strikes

Last Friday night, President Trump launched cruise missiles against government facilities in Syria. His partners in the strike were France and the UK, leading to an appealing acronym for this trilateral band: the FUKUS coalition.*

New Bill in Senate Would Legalize Growing Hemp

Senator Mitch McConnell (R-KY) announced a surprisingly sensible bill this week that would end the prohibition on hemp farming at the federal level. The bill is co-sponsored by fellow Kentuckian Rand Paul (R) and Oregon’s Ron Wyden (D).

Heretofore, growing hemp has been illegal in the US because it is related to marijuana. The important difference, of course, is that you can’t get high from hemp. Even so, hemp has been classified as a controlled substance. It can be imported from abroad and it’s legal to own hemp, but it can’t really be grown domestically.

Obviously, this policy doesn’t make much sense. What’s odd is that it has a real chance of getting fixed, given that the Senate Majority Leader is backing it.

The reason for the change of heart? Economics.

It turns out Kentucky is one of the states that could have a major agricultural hemp industry if the law permitted it. This creates a potent constituency group that would stand to benefit if the ban was lifted.

As a result, Senator McConnell finds himself on the same side of the issue as the libertarian-ish Senator Paul and the Oregon Democrat who previously sponsored a bill to legalize actual marijuana, not just hemp.

The Stormy Daniels Story Finally Has a Newsworthy Allegation

The Stormy Daniels Story Finally Has a Newsworthy Allegation

After the interview with 60 Minutes, the story surrounding Stormy Daniels finally has something newsworthy about it.

In the discussion, Daniels said she was threatened to keep silent about her alleged affair with Donald Trump. The threat allegedly occurred in 2011, shortly after In Touch Magazine had offered her $15,000 for the story. In Touch ultimately declined to pursue the story.

Daniels said she was with her infant daughter at the time of the threat. This is how she described it in the interview:

“And a guy walked up on me and said to me, ‘Leave Trump alone. Forget the story,'” she said. “And then he leaned around and looked at my daughter and said, ‘That’s a beautiful little girl. It’d be a shame if something happened to her mom.’

“And then he was gone,” she said.

If true and ordered by Trump, this threat is the first aspect of the story that would rise to the level of criminality. It’s also the first element of the plot that could begin to justify the sustained media attention this case has received.

Until now, it was an extramarital affair and a non-disclosure agreement. If Trump had run as an exemplar of virtue and good moral character, the story would have been relevant. He did not.

Indeed, compared to the infamous “grab ’em by the p****” line in the Access Hollywood tape, an extramarital affair would be quite bland for President Trump.

Trump’s voters preferred him in spite of his moral standing, not because of it.

There’s little doubt that the Stormy Daniels story will continue to dominate headlines and displace coverage of more important problems in the Trump administration–like the appointment of John Bolton, the ongoing US-backed onslaught in Yemen, or the apparent plan to withdraw from the Iran Deal, just to name a few.

Obviously, the media’s priorities are not ideal. But with the new allegation, at least there’s finally something in this story worth covering.

Trump: ‘Trade Wars Are Good’

Yes, that’s apparently a real quote from President Trump.

This is likely the worst economic position he has staked out so far.

On the plus side, if he puts his trade war optimism to the test and thus sparks the next recession, perhaps we won’t have to hear contrived arguments about how tax cuts cause recessions…

New Bipartisan Effort in Senate to Stop the War in Yemen

Progressive Senator Bernie Sanders (I-VT), Senator Chris Murphy (D-CT) and libertarian-leaning Senator Mike Lee (R-UT) have introduced new legislation this week that would end US support in the Yemen War.

It’s unclear if the legislation will be allowed to come to a vote by the leadership, but it’s good to see the protracted conflict finally back in the news. Yemen is home to one of the most acute humanitarian crises in the world right now, with millions internally displaced and suffering from malnutrition as a result of the war.

In a time of increasing polarization, this is also a good reminder that progressives and libertarians can often find common ground when it comes to the most important issues.

WaPo Debunks the Non-Scandal of “Anglo-American” Law

In a recent speech, Attorney General Jeff Sessions made a passing reference to the “Anglo-American heritage of law enforcement”. Naturally, this sparked outrage and new accusations that Sessions is racist.

Those accusations may be well-founded, but this is about the weakest evidence that could be offered to support them.

A solid new piece at The Washington Post explains why.

In short, descriptions of the US legal system as “Anglo-American” are actually quite mainstream, appearing routinely in legal arguments and even Supreme Court opinions. The reason this description is common is because it’s literal. The roots of the American legal system can be found in the English (that is, Anglo) common law tradition. Since the US started out as thirteen English colonies, it should be a surprise to precisely no one that the American legal system was influenced by the English one.

Don’t get me wrong; I’m all for criticizing Jeff Sessions. Among other problems, he’s a hard-liner on immigration, a staunch supporter of the Drug War, and an advocate for civil asset forfeiture.

The point is that we should criticize politicians primarily for the things they do, and the policies they promote–not just the words they say.
