The last few years have seen something of a gold rush into quantitative investment strategies. Their appeal is obvious: they promise to bring discipline to trading and take the emotion and stress out of it. Quantitative strategies might even improve performance. Here’s how BlackRock President Rob Kapito articulated the industry’s hopes:
“As people get the data and learn how to use the data, I think there is going to be alpha generated and, therefore, will give active managers more opportunity than they’ve had in the past to actually create returns.” Rob Kapito, BlackRock
In pursuit of these great expectations, BlackRock assembled more than 90 scientists, 28 of them with PhDs, and even went as far as poaching one of Google’s leading scientists, Bill MacCartney, to develop BlackRock’s machine learning applications. In practice, BlackRock’s and other firms’ results have proven a mixed bag at best, and most quantitative strategies have tended to underperform or even generate losses. The question is: why?
Quantitative trading strategies have been my focus since the 1990s. Since then I have worked in, and collaborated with, several business and academic organizations, and I can appreciate why experience so often falls short of expectations.
Here’s the disconnect: developing quantitative strategies is an engineering problem, but the industry is stubbornly trying to solve it by employing quantitative analysts (quants). Quants are not engineers – they are scientists, usually mathematicians or physicists. They can be very effective as researchers. As a rule, they are also capable of writing software code, which they learn as part of their training.
However, quants are not professional programmers. Nor are they software engineers, and a depressing majority of finance industry professionals don’t really understand the difference. I’ve asked many quants whether they had any training in software engineering. Virtually none had, and many were unsure what I meant by software engineering – isn’t it the same thing as programming? Well, no, it isn’t.
A software engineer is to a programmer as an architect is to a construction worker. You could get away with hiring a construction worker to build a simple structure, but to deliver a complex, mission-critical one like an airport or a hospital, you had better hire a capable architect and one or more experienced project managers. Only once you know exactly what you are planning to build should you bring in the builders.
When I frame the analogy in terms of physical structures, it is more intuitive and people get it. But in finance, even when they get it, many professionals resist – and even resent – the implications: if you want to get your quantitative strategies right, use your quants as researchers. Let them do research and come up with ideas. When they formulate good ideas, team them up with software engineers and programmers to build good-quality, robust solutions. Do not expect them to do the whole thing themselves; at any rate, don’t expect them to do it well.
In finance, the typical approach is to go straight from idea to code, with the coding done by the same individuals who originated the idea. As a rule, the job is rushed, as all involved are eager to see the proverbial rubber hit the road and to start making money. Applying best practices in systems engineering, sticking with a methodology, documenting the process and conducting the necessary testing are all regarded as overkill, an unwelcome waste of resources and a needless delay of the trading gratification. In finance, quants are expected to do the work of scientific researchers, software engineers and programmers. This is unrealistic and profoundly mistaken.
Software programming: a very error-prone business
Professional programmers inject on the order of 100 to 150 defects per 1,000 lines of code, according to a multi-year study of 13,000 programs by Watts S. Humphrey of Carnegie Mellon University’s Software Engineering Institute. Coding errors can be extremely difficult to detect – until they cause an adverse outcome. A few examples should help illustrate the idea.
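At those defect rates, a freshly written trading system of realistic size starts life with hundreds of latent bugs. A back-of-envelope sketch (the 10,000-line system size is a hypothetical example, not a figure from the study):

```python
# Expected defect counts at Humphrey's injection rates
# (100-150 defects per 1,000 lines of code).
lines_of_code = 10_000  # hypothetical system size
low_rate, high_rate = 100, 150  # defects per 1,000 lines

kloc = lines_of_code / 1000
expected = (int(kloc * low_rate), int(kloc * high_rate))
print(f"Expect roughly {expected[0]}-{expected[1]} defects to find and fix")
```

Even catching 99% of those before deployment leaves a double-digit number of live defects in a system that moves real money.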
Knight Capital and other blowups
On August 1, 2012, Knight Capital deployed a trading algorithm that in a very short time caused the firm a direct cash loss of $440 million and a market cap loss of about $1 billion. The faulty algorithm bought securities at the offering price and sold them at the bid, and continued to do this some 40 times per second. In about thirty minutes, the algorithm wiped out four years’ worth of Knight’s profits. But this is just one of many quant trading blow-ups. Here are a few more high-profile cases:
- In June 2010, an international bank’s algorithmic trading system acted on bad pricing inputs by placing 7,468 orders to sell Nikkei 225 futures contracts on the Osaka Stock Exchange. While the pricing error would have been rather obvious to any human participant, the trading algorithm proceeded to execute approximately $546 million of the orders before the error was caught.
- In the summer of 2018, the $150 billion asset manager GAM had to freeze fund withdrawals after steep losses at one of its quant funds triggered a surge in client redemptions.
- In 2006, Amaranth Advisors’ whiz-kid mathematician Brian Hunter single-handedly lost $6 billion with his quantitative trading model in natural gas derivatives.
- And who could forget the 1998 collapse of LTCM, whose all-star team of quants was led by Nobel laureates Robert Merton and Myron Scholes.
Such incidents are not unusual, and for every debacle that attracts media attention, many more go unreported and unknown.
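The Knight Capital failure mode is easy to quantify: buying at the offer and selling at the bid forfeits the bid-ask spread on every round trip, so even a tiny spread compounds fast at 40 trades per second. A minimal sketch, using hypothetical spread and order-size figures (not Knight’s actual parameters):

```python
def spread_crossing_loss(spread, shares_per_trade, trades_per_second, seconds):
    """Dollar loss from repeatedly buying at the offer and selling at
    the bid: each round trip forfeits the bid-ask spread per share."""
    round_trips = trades_per_second * seconds
    return spread * shares_per_trade * round_trips

# Hypothetical parameters: a $0.01 spread, 100 shares per trade,
# 40 trades per second, sustained for 30 minutes.
loss = spread_crossing_loss(0.01, 100, 40, 30 * 60)
print(f"${loss:,.0f} lost by crossing the spread alone")
```

A per-trade cost of one dollar looks negligible in isolation; it is the unattended repetition that turns it into a catastrophe.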
80% probability of losing money
Anecdotal evidence from media stories tells us little about the relative merit of quantitative trading. But one company’s experience provides an empirical case study: in December 2006, the world’s most popular trading-platform provider, MetaQuotes, organized the world’s first Automated Trading Championship. The $80,000 prize attracted 258 developers of quantitative strategies. More joined over the following six years, and through 2012 a total of 2,726 quants competed in MetaQuotes’ challenge. Of the 2,726, only 567 (21%) finished their competitions in the black, while 79% lost money.
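The headline percentages follow directly from the raw counts; a quick check:

```python
# Verify the MetaQuotes championship statistics cited above.
total_competitors = 2726
profitable = 567

win_rate = profitable / total_competitors
print(f"{win_rate:.1%} finished in the black; {1 - win_rate:.1%} lost money")
```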
More rigorously vetted quants supervised by experienced investment managers may not be quite such loose cannons, but the MetaQuotes experience does indicate that quantitative investing isn’t as easy as many professionals like BlackRock’s Rob Kapito think it is.
Help your quants – or sack them!
The problem is not about access to data, sufficient computing power or recruiting superior talent. The problem is in methodology. Using the analogy with tangible, physical systems again: if you asked an engineer to build you any kind of machine, you would not expect him to immediately start cutting pieces and assembling them. You’d expect him to spend some time drawing up the blueprint and working out exactly how the machine should operate, the sizes and shapes of the pieces, and how they all interact and work together. Only when the concept was clear and the dimensions of every last part were determined and documented would he start assembling the real thing. It is the engineer’s methodical approach to designing the machine that would ultimately result in a quality functioning system, not his imagination and creative genius alone.
That’s not how it’s done in finance. In finance, managers gamble on the creative genius of their quants and hope that they will conduct good research, build valid models, and write pristine code. It is very unlikely that this is a good gamble, but since everyone is doing it that way, few are questioning the approach. If investment managers are reluctant to allocate the time and resources necessary for their quants to turn their ideas into high quality, robust models, they might be better off simply sacking them altogether.
Getting it right is worth it
Some 20 years ago I faced this very issue for the first time. On the one hand, I wanted to implement my model, start trading, pitch it to investors and build out my business. On the other hand, I knew there could be surprises in the model’s code, that maintaining it could turn out to be more involved than building it in the first place, and that my ambitions could easily be dashed if anything went wrong. Building my model the right way meant spending a lot more money and time, delaying the ‘fun’ part of putting it to work. Ultimately, I decided to do it the right way. The result: the model has functioned daily with zero glitches or maintenance issues since 2003. As a matter of fact, it is my contention that the system we created is probably the best trend following model ever built. An audacious claim for sure, but one I can defend: see here.
Getting it right is worth it, and you only need to do it once. When you complete the work, your reward is not only a robust, high-quality solution – it is also the low-maintenance productivity and the peace of mind that quality solutions afford. As Robert Pirsig wrote in “Zen and the Art of Motorcycle Maintenance”: “Peace of mind isn’t at all superficial to technical work. It’s the whole thing. That which produces it is good work and that which destroys it is bad work.”
Taking shortcuts is tempting: it is easier and cheaper, and it gets you to the business of trading quicker. But in doing so you are taking a long-odds gamble on the quality of the systems you are using.
Just ask Knight Capital’s CEO…
In the aftermath of Knight Capital’s blow-up, the firm’s CEO Thomas Joyce rather flippantly declared on Bloomberg TV that “if you get involved in the day-to-day minutia, this will give you a headache occasionally.” I wonder whether, with the benefit of hindsight, Mr. Joyce thinks losing $440 million was a good trade-off for avoiding some headache.
Alex Krainer, 8 June 2020.
Kapito made these remarks at a Barclays conference in September 2016. Source: Durden, Tyler, “BlackRock’s Robo-Quants Are On Pace To Post Record Losses,” ZeroHedge, 11 January 2017.
Robson, Ben, “Currency Kings,” McGraw-Hill Education, 2017.