The Need for Speed Never Stops
- James C. McGrath

- Feb 1
- 4 min read
Grant Stenger and Richard Dewey recently contributed an excellent piece to FT Alphaville, “The quant shop — AI lab convergence” (January 25, 2026). Both authors have real quantitative finance chops, and their central observation is provocative: finance and frontier AI development have converged into “variants of the same business,” running an identical technical pipeline of data, models, and constraints to optimize large-scale learning systems attached to balance sheets. This technical overlap explains how High-Flyer, a quantitative hedge fund, successfully pivoted to become an AI leader with DeepSeek by simply repointing its existing infrastructure from “next-price prediction” to “next-token prediction.” Both industries are now adopting a shared three-layer stack of massive feature-learning models, lightweight execution layers, and human-governed constraints, while increasingly embracing strict secrecy around the “dark arts” of data curation and model deployment.
None of this should surprise anyone who’s watched finance over the decades—because we’ve seen it before. Long before LLMs became a thing, trading depended on technological edge. And latency has always been the obsession, because being faster than everyone else is often the whole game. Consider that carrier pigeons were once used to transmit time-sensitive financial information—stock prices, commodity rates—faster than ships or couriers could manage. In the 19th century, firms like Reuters built pigeon networks to receive market data hours or even days ahead of competitors. The medium changes, but the need for speed endures.
Finance has also long grappled with the limitations of whatever technology it embraces. Whether that meant sourcing high-test birdseed or, in our era, coping with the inscrutability of deep neural networks, the pattern is consistent: adopt powerful tools, then figure out how to govern them. The so-called “quantamental” approach anticipated exactly this moment. Before anyone worried about losing the thread inside a transformer, quants were already developing frameworks to harness sophisticated technology while preserving human judgment.
The article’s discussion of interpretability reveals the current compromise: a three-layer stack where big models learn representations, smaller distilled models make most real-time decisions, and human-governed constraints sit on top. This architecture acknowledges that while deep learning excels at discovering patterns in high-dimensional data, you still need interpretable guardrails “when billions of dollars are on the line.” Quantamental investing operates from a similar premise but approaches it differently. Rather than stacking model sizes, it blends systematic signals with discretionary fundamental analysis—the portfolio manager retains authority to override or contextualize what the models suggest. Both approaches answer the same question: how do we capture the power of complex models while preserving human judgment at critical decision points?
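To make that architecture concrete, here is a minimal sketch of the three-layer stack in Python. Everything in it is a stand-in: the toy models, the dollar limits, and the function names are hypothetical illustrations, not anything from the article or a real desk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer 1: a big feature-learning model. A toy stand-in here; in practice
# a deep network trained offline that maps raw market data to representations.
W_BIG = rng.normal(size=(20, 8))

def feature_model(raw: np.ndarray) -> np.ndarray:
    return np.tanh(raw @ W_BIG)

# Layer 2: a lightweight distilled model that makes the real-time call.
# Deliberately cheap (a linear read-out) so it fits the latency budget.
W_SMALL = rng.normal(size=8)

def distilled_signal(features: np.ndarray) -> float:
    return float(features @ W_SMALL)

# Layer 3: human-governed constraints. The model's output is an input to
# the decision, not the decision; these limits are set by people, not learned.
MAX_GROSS_POSITION = 1_000_000  # dollars; hypothetical mandate limit
MAX_SINGLE_ORDER = 50_000       # dollars; hypothetical per-order risk limit

def constrained_order(signal: float, current_position: float) -> float:
    desired = float(np.clip(signal, -1.0, 1.0)) * MAX_SINGLE_ORDER
    headroom = max(MAX_GROSS_POSITION - abs(current_position), 0.0)
    # Never let the model push the book past the mandate.
    return float(np.sign(desired)) * min(abs(desired), headroom)

# One tick through the stack:
raw_tick = rng.normal(size=20)
order = constrained_order(distilled_signal(feature_model(raw_tick)), current_position=980_000.0)
print(f"order: {order:+,.0f} USD")  # capped by the constraints layer, not the model
```

The shape is the point: the constraints layer is hand-written and auditable, while the layers beneath it are free to be as opaque as they need to be.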
The key similarity is that both frameworks reject the false binary between “pure quant” and “pure discretion.” The three-layer stack keeps humans in the loop through the constraints layer: risk limits, mandate restrictions, the “don't be the next Archegos” admonishments. Quantamental keeps humans in the loop through fundamental analysis: evaluating management quality, competitive dynamics, or macro context that models may miss. In both cases, raw model output is treated as an input to decision-making rather than the decision itself. Neither approach trusts the black box to run unsupervised.
The differences matter, though. The stack approach remains fundamentally systematic; it distributes complexity across layers optimized for different constraints (compute, latency, interpretability). Quantamental, by contrast, explicitly incorporates qualitative judgment—the belief that some information is irreducibly non-quantifiable, or that human pattern recognition captures something models cannot. The stack addresses interpretability through architectural choices; quantamental addresses it through epistemological humility about what numbers can and cannot tell you.
A quantamental mindset offers a useful framework for navigating AI guardrails in investing precisely because it normalizes structured human override. Current regulatory environments increasingly demand explainability: why did you make this trade, what risks were you managing, can you justify this to a compliance officer or a client? Pure deep learning struggles here. Try explaining to a potential investor why you have all those hidden layers and what they do! But if you treat the AI as a sophisticated signal generator whose outputs feed into a human-interpretable decision framework, complete with explicit rules about when to trust the model, when to discount it, and when to override it entirely, you get a governance structure that's tractable. The quantamental practitioner already thinks in these terms: the model is a tool, and you aren't ceding control to the black box.
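What might those explicit trust/discount/override rules look like in code? A minimal sketch, assuming a hypothetical confidence threshold, haircut factor, and drift monitor; none of these specifics come from the article.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    TRUST = "trust"        # use the model's signal as-is
    DISCOUNT = "discount"  # haircut the signal and flag it for review
    OVERRIDE = "override"  # a human decision replaces the model's

@dataclass
class Context:
    model_confidence: float  # model's own uncertainty estimate, 0..1
    regime_shift: bool       # e.g. flagged by a data-drift monitor
    pm_override: bool        # the portfolio manager pulled the cord

def governance_rule(ctx: Context) -> Action:
    # Explicit, auditable logic a compliance officer can actually read.
    if ctx.pm_override or ctx.regime_shift:
        return Action.OVERRIDE
    if ctx.model_confidence < 0.6:  # hypothetical trust threshold
        return Action.DISCOUNT
    return Action.TRUST

def final_signal(model_signal: float, ctx: Context, human_signal: float = 0.0) -> float:
    action = governance_rule(ctx)
    if action is Action.OVERRIDE:
        return human_signal        # the human's call stands
    if action is Action.DISCOUNT:
        return 0.5 * model_signal  # hypothetical 50% haircut
    return model_signal

# A regime shift forces the override path regardless of model confidence.
print(final_signal(0.8, Context(model_confidence=0.9, regime_shift=True, pm_override=False)))
```

The virtue of this framing is that `governance_rule` is a dozen readable lines you can show a regulator, even if the model feeding `final_signal` is not.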
This suggests a path forward where AI’s role in investing is bounded not by technical capability but by institutional and strategy design. The firms that thrive will likely be those that treat deep learning the way quantamental shops treat factor models: as powerful but fallible inputs that require human stewardship, clear constraints, and explicit accountability structures. As the article puts it, “a risk committee deciding how much tail risk to tolerate looks a lot like a safety board deciding how much jailbreak risk is acceptable.” Both are governance mechanisms that mediate between what models can do and what institutions should let them do. And you retain defensibility and, with luck, explainability, so your LPs no longer get the cold shoulder from your silicon PMs.
