JP Morgan “London Whale” series by Lisa Pollack

I discuss the JP Morgan “London Whale” credit derivatives trading loss in my “Practical Risk Management Course” at the University of Chicago Booth School of Business. I found Lisa Pollack’s discussion of the background and details invaluable. The following is my guide for students (and myself) to her FT Alphaville blog posts.

THE BELLY OF THE WHALE SERIES
Lisa Pollack, Financial Times Alphaville Blog

This is a really fun series of pieces by Lisa Pollack from the FT in 2013, covering the London Whale fiasco. By the end of the course you will be able to understand virtually everything she is talking about.

Lisa dug through JP Morgan’s Task Force report and the US Senate’s Permanent Subcommittee on Investigations report to provide some of the most amusing and insightful analysis I have seen. Here is my outline and guide to the posts – they are all there on the FT Alphaville site but a guide can be valuable for navigating around. (And this is only for her “Belly of the Whale” series – there is also the CSI: CIO series that came before – a link to that is at the bottom).

You will need to register for the Financial Times to read the blog, but the Alphaville blog is free content.

Now, as she says, “let’s dig in …”

The Senate’s Permanent Subcommittee on Investigations spent several months looking into the credit derivatives trades placed by JPMorgan’s chief investment office. The trades ultimately lost the bank $6.2bn. The resultant report, and the exhibits associated with a hearing in the US Senate on March 15th [2013], have provided a great deal of background information previously unavailable anywhere else. We dig in…
  1. Its purpose limited only by one’s imagination…
    • “What was the SCP meant to be doing?”
      • General discussion of the role of the Structured Credit Portfolio (SCP)
      • From Senate subcommittee: “While some evidence supports that view of the SCP [intended generally to offset some of the credit risk that JPMorgan faces], there is a dearth of contemporaneous SCP documentation establishing what exact credit risks, potential losses, or tail risks were supposedly being hedged by the SCP.”
    • http://ftalphaville.ft.com/2013/03/19/1427912/its-purpose-limited-only-by-ones-imagination/
  2. Humongous credit derivatives cake proves inedible
    • Argues that SCP was more prop trading than hedging. Touches (once again) on contradictory goals for SCP. Discusses positions put on during 1st quarter 2012 in IG.9, Markit iTraxx Europe indices, and in tranches.
      • A very useful table from Senate report showing positions (notional) for quarter-end. Not detailed by series, but shows which index and indices vs. tranches. The increase in long IG and short HY index positions during Q1 is clear.
    • http://ftalphaville.ft.com/2013/03/19/1428102/humongous-credit-derivatives-cake-proves-inedible/
  3. 03/23/2012 06:20:09 BRUNO IKSIL, JPMORGAN CHASE BANK, says: i did not fail
    • This is the key post that shows the dynamics and psychology of the SCP strategy going bad. Narrative for January, February, March 2012 focusing on Bruno Iksil (“the London Whale”)
      • Lays out reporting lines (using Senate Staff Report exhibits)
      • Describes how the long IG positions were not producing the expected profits during January. (At one point Iksil suggests letting the book run off, but there is also mention that VaR and CSBPV / CS01 limits constrained adding more positions.)
      • Then in February longs were increased, seemingly for two reasons:
        • To offset (hedge) the HY short positions that were losing money but which the traders did not want to trade out of (cost too much).
        • To “defend p&l” – i.e. “keep trading in order to not get even deeper into the red.”
        • But this makes no sense at all. They should have held a liquidity reserve of mid-to-bid/offer (as we did at TMG) that would have been released when they traded out. That would have removed the disincentive to hold onto the position rather than trade out of it. (A sketch of the mechanics follows this item’s link.)
      • Passing mention (expanded in next post) about increasingly-aggressive marks during March.
      • March – “doubling-down” – increased long IG positions – now in IG.17 and IG.18. See “Correlation: the credit trader’s kryptonite” from the CSI:CIO series.
      • Really useful table (from Staff Report) on April 9th notionals vs. daily trading volumes.
    • http://ftalphaville.ft.com/2013/03/19/1428372/03232012-062009-bruno-iksil-jpmorgan-chase-bank-says-i-did-not-fail/
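An aside on the liquidity-reserve point in item 3: here is a minimal sketch of how a mid-to-bid/offer reserve removes the disincentive to trade out of a losing position. All of the numbers (the CS01, the half-spread) are illustrative assumptions, not figures from the reports.

    # Minimal sketch of a mid-to-bid/offer liquidity reserve (illustrative numbers only).
    # The book is marked at mid, but the cost of exiting at the bid is reserved up
    # front and released when the position is actually closed out.

    cs01 = 5_000_000        # $ P&L per 1bp spread move on the long position (assumed)
    half_spread_bp = 0.5    # half the bid/offer spread, in bp (assumed)
    exit_cost = half_spread_bp * cs01    # cost of selling the longs at the bid

    # Without a reserve: holding shows zero P&L, exiting books a fresh loss.
    pnl_hold_no_reserve = 0.0
    pnl_exit_no_reserve = -exit_cost

    # With a reserve: the exit cost is recognized up front while still holding;
    # closing out releases the reserve against the realized cost.
    reserve = exit_cost
    pnl_hold_with_reserve = 0.0 - reserve    # reserve held against the open position
    pnl_exit_with_reserve = -exit_cost       # realized cost; the released reserve nets out

    print(f"No reserve:   hold {pnl_hold_no_reserve:>12,.0f}   exit {pnl_exit_no_reserve:>12,.0f}")
    print(f"With reserve: hold {pnl_hold_with_reserve:>12,.0f}   exit {pnl_exit_with_reserve:>12,.0f}")
    # With the reserve in place, trading out changes reported P&L by nothing, so
    # there is no incentive to sit on the position just to avoid recognizing the loss.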
  4. This is the CIO! Take your silly market-making prices and [redacted] – Part 1
    • Talks about marks on the book, but more narrative than numbers. Points out that there were three valuable warning signs: breach of risk limits (breaches ignored), marks on the book (book mismarked), and collateral disputes with numerous counterparties. None of these warning signs triggered meaningful action.
      • Two valuable tables (end-February and end-March) that show bid / offer / CIO mark for various indices and tranches held by the CIO, and where the CIO took bid vs. offer. This shows pretty definitively that the marks were all slanted to minimize losses for the SCP book.
        • Question – what are the units for the various indices? Are all the spreads in bp, or are some quoted as prices? It looks like the HY entries are prices (they are labeled as such, and they look like 16ths).
      • Explains how SCP behaved more like a buy-side client in an illiquid market
        • Taking traders’ marks for end-of-day marks
        • Having mid-office or back-office verify marks within thresholds
        • My question – were these markets truly illiquid? Particularly for big indices like NA.IG.9?
        • Standard for a dealer book in a liquid market – traders have nothing to do with marks, mid-office or back-office gets external marks. (And this is the way we ran our hedge fund, even though we were “buy-side”.)
      • There is mention of a $17mn adjustment for end-March marks, but cf. the post “Can Haz Spredshetz” under the “CSI: CIO” series. That adjustment subsequently grew to $400-600mn. There were process and spreadsheet problems with the Valuation Control Group’s price-testing practices.
      • See post 6 below (“I thought, I thought …”) for much more detail, with numbers and tables, on problems with marks.
    • http://ftalphaville.ft.com/2013/03/21/1433822/this-is-the-cio-take-your-silly-market-making-prices-and-redacted-part-1/
  5. This is the CIO! Take your silly market-making prices and [redacted] – Part 2
  6. “I thought, I thought that was, that was not realistic, you know, what we were doing” – The London Whale
    • Detail on marks and mismarking.
      • “By now, it should be well understood that the credit derivatives book in JPMorgan’s chief investment office was woefully mismarked.”
      • “At March 31, 2012, the sensitivity to a 1bp move in credit spreads across the investment grade and high yield spectrum was approximately ($84) million, including ($134) million from long risk positions, offset by $50 million from short risk positions.” This implies roughly $184mn per bp of sensitivity to mismarking (i.e. if longs were mismarked down by 1bp and shorts were mismarked up by 1bp). (The arithmetic is sketched after this item’s link.)
      • An aside – Lisa objects to JPM changing a mark by 1.75bp, but that does not strike me as a huge change in the mark itself; the real issue is that the position sizes were so large that even a moderate change in a mark has a very big dollar impact.
      • Grout spreadsheet with totals for mismarking as of mid-March.
      • Rather disjointed (and sad) conversation between Bruno Iksil and his boss Martin-Artajo.
      • Question – in Grout spreadsheet showing mismarking what are the units for CDX.HY? Does 0.34 mean 0.34bp? Or is that in price terms (i.e. 34 cents)?
    • http://ftalphaville.ft.com/2013/03/22/1435372/i-thought-i-thought-that-was-that-was-not-realistic-you-know-what-we-were-doing-the-london-whale/
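To pin down the $84mn-versus-$184mn arithmetic in item 6: the net CS01 quoted in the report nets longs against shorts, while the exposure to mismarking adds them, because a flattering mark moves the two legs in opposite spread directions. A quick check (the 1bp favorable-mismark scenario is my own illustration; the CS01 figures are the ones quoted above):

    # CS01 figures quoted for March 31, 2012, in $mn per 1bp of spread widening.
    cs01_long = -134.0   # long-risk positions lose $134mn if spreads widen 1bp
    cs01_short = 50.0    # short-risk positions gain $50mn if spreads widen 1bp

    # Net sensitivity to a parallel 1bp move in spreads: the two legs offset.
    net_cs01 = cs01_long + cs01_short
    print(net_cs01)          # -84.0, i.e. roughly ($84)mn per bp, as quoted

    # Exposure to mismarking: marking the longs 1bp tight and the shorts 1bp wide
    # both flatter the book, so the two errors add rather than offset.
    mismark_per_bp = abs(cs01_long) + abs(cs01_short)
    print(mismark_per_bp)    # 184.0, i.e. roughly $184mn per bp of favorable mismarking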
  7. Risk limits are made to be broken
    • All about risk limits
      • Great graph of CSBPV showing change from roughly +/- $5mn through 12/11 to -$60mn by end-April.
      • Quote from Senate staff report (attributed to CEO Jamie Dimon and others): “risk limits at CIO were not intended to function as ‘hard stops,’ but rather as opportunities for discussion and analysis.” I believe this is actually a reasonable approach, but in fact breach of limits did not trigger any discussion or analysis.
      • A few tables detailing breaches of limits of all kinds, from January through April, of VaR, CSBPV, etc. Really pretty bad.
      • Also stop-loss limit breaches (but these were towards end-March, partly because book was mismarked so losses did not show up).
      • Risk limits were supposed to be reviewed annually or semi-annually. CIO did not perform such reviews.
    • http://ftalphaville.ft.com/2013/04/08/1450082/risk-limits-are-made-to-be-broken/
  8. Ten times on the board: I will not put “Optimizing regulatory capital” in the subject line of an email
  9. This is the VaR that slipped through the cracks
    • Primarily concerned with the introduction of new VaR model for the CIO during January 2012.
      • CIO was in breach of its VaR limit, and the breach was large enough to put the whole bank in breach of its VaR limit.
      • Documentation of some of the emails authorizing temporary waiver of the VaR limit, and push to get new VaR model authorized.
      • A few specific issues re VaR model:
        • Old VaR model was supposed to produce a 5% VaR – losses should exceed that level roughly 5 days out of 100. But losses did not exceed it even once in a year. Patrick Hagan (developer of the new VaR model) explained that this was a problem, essentially invalidating the old model. Lisa Pollack comments after Hagan’s explanation: “[Skeptical-about-sample-size face goes here]” but this is one case where she is wrong: if p=0.05 that VaR=X, then P[no observations during a year > X] = 0.95^250 = 0.00027% – pretty small. We can use Bayes’ rule to see how much this evidence should change my confidence in the original model. If my prior confidence is 99.9% that X is indeed the 5% quantile (the model is correct) vs 0.1% that X is, say, the 0.4% quantile, then P[X is 5% quantile | no observations during a year] = (0.0000027*0.999) / (0.0000027*0.999 + 0.36714*0.001) = 0.73%. In other words the evidence should move my confidence from a prior of 99.9% to a posterior of 0.73% – a really big impact. (The calculation is reproduced in the script after this item’s link.)
        • There was no parallel run between old and new VaR models
        • The new VaR model lowered the VaR by roughly 50% instead of the expected 20%. This may seem like too much, but for reference note that for a normal distribution the 5% quantile (-1.64) is only about 62% of the size of the 0.4% quantile (-2.65). (The 0.4% quantile is a relevant reference because P[no exceedances of the 0.4% quantile during 250 business days] = (1-0.004)^250 = 0.367. That is a reasonably likely outcome, so maybe the old model was effectively producing something closer to the 0.4% quantile.)
    • http://ftalphaville.ft.com/2013/04/10/1455152/this-is-the-var-that-slipped-through-the-cracks/
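The probabilities in item 9 are easy to reproduce. A short script, assuming 250 trading days and using the same 99.9% / 0.1% prior and the 0.4%-quantile alternative discussed above:

    # Back-of-the-envelope numbers behind item 9 (250 trading days assumed).
    from statistics import NormalDist

    days = 250

    # 1. If the old model really produced a 5% VaR, seeing no exceedances in a
    #    year is extremely unlikely.
    p_none_if_5pct = 0.95 ** days
    print(f"P[no exceedances | true 5% VaR]   = {p_none_if_5pct:.2e}")      # ~2.7e-06

    # 2. Bayes' rule with a 99.9% prior that the model is right (VaR = 5% quantile)
    #    versus 0.1% that it is really the 0.4% quantile.
    p_none_if_04pct = (1 - 0.004) ** days
    prior_correct = 0.999
    posterior_correct = (p_none_if_5pct * prior_correct) / (
        p_none_if_5pct * prior_correct + p_none_if_04pct * (1 - prior_correct)
    )
    print(f"P[no exceedances | 0.4% quantile] = {p_none_if_04pct:.3f}")     # ~0.367
    print(f"P[model correct | no exceedances] = {posterior_correct:.4%}")   # ~0.73%

    # 3. Normal-distribution reference: the 5% quantile is only ~62% of the size
    #    of the 0.4% quantile, so a large drop in reported VaR is plausible if the
    #    old model was effectively producing something closer to a 0.4% quantile.
    q05 = NormalDist().inv_cdf(0.05)     # about -1.64
    q004 = NormalDist().inv_cdf(0.004)   # about -2.65
    print(f"5% quantile {q05:.2f}, 0.4% quantile {q004:.2f}, ratio {q05 / q004:.2f}")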
Lisa wrote an earlier series of blog posts (the CSI: CIO Series) that you can find at http://ftalphaville.ft.com/2013/01/16/1339792/jpm-task-force-stunningly-arrives-at-same-conclusion-as-jpm-chairman-and-ceo/

About Thomas Coleman

Thomas S. Coleman is Senior Advisor at the Becker Friedman Institute for Research in Economics and Adjunct Professor of Finance at the Booth School of Business at the University of Chicago. Prior to returning to academia, Mr. Coleman worked in the finance industry for more than twenty years with considerable experience in trading, risk management, and quantitative modeling. Mr. Coleman earned a PhD in economics from the University of Chicago and a BA in physics from Harvard College.