
Randomness, Everywhere We Look!

Updated: Jan 20


"Fooled by Randomness" is a 2001 book by Nassim Nicholas Taleb that explores how luck and chance often carry the day in life and in financial markets, frequently being mistaken for talent.


Taleb’s book was written during a different technological epoch, but we are arguably even more at risk of being fooled by randomness today, thanks to the emergence of tools that are built on top of uncertainty. Understanding how to manage that uncertainty is a modern survival skill.


Up close and over small distances, life seems pretty predictable. You put one foot after another, and you can expect to move down the sidewalk methodically toward your destination.


Processes that are sufficiently deterministic, where the same inputs always produce the same outputs, lend themselves to functions and algorithms.


But we don’t operate in closed systems. Walking down 5th Avenue sidewalks during the holidays introduces a lot more ‘noise’ into your path. Still, with lots of different micro-adjustments, you can expect to make your way through the throng.


These are more realistic scenarios, where there’s a ‘drift’—the trajectory toward 5th and 57th, say—and ‘noise’—getting jostled by people along the way.


That intuition underlies many formal stochastic models such as geometric Brownian motion and ARIMA formulations—mathematical frameworks that model uncertainty with explicit assumptions about distributions over time.
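That drift-plus-noise picture can be sketched in a few lines. Here is a minimal geometric Brownian motion simulation; the parameter values are arbitrary illustrations, not calibrated to any market.

```python
import math
import random

def gbm_path(s0, mu, sigma, steps, dt, rng):
    """Simulate one geometric Brownian motion path:
    S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z),
    where mu is the 'drift' and the Gaussian Z is the 'noise'."""
    s = s0
    path = [s]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)  # the jostling crowd
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        path.append(s)
    return path

# One year of daily steps; change the seed and you get a different path
# with the same overall drift.
rng = random.Random(42)
path = gbm_path(s0=100.0, mu=0.05, sigma=0.2, steps=252, dt=1 / 252, rng=rng)
print(round(path[-1], 2))
```

Rerunning with a different seed produces a different trajectory, but averaged over many runs the paths drift upward at roughly the rate set by mu.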


This complexity and randomness emerge when you introduce just a few more actors; markets have thousands, nay millions, of them!


Still, finance practitioners, and everyone else, would prefer things to be more predictable. Everybody would love to model things functionally, where a given input yields exactly one output.


And that brings me to our interactions with some new technologies and what randomness means for those: AI and quantum computers.


Artificial Intelligence (It’s called Stochastic Gradient Descent for a Reason)

Users have grown frustrated with so-called ‘hallucinations’ from their chatbots, which are understandably bewildering.


Hallucinations in AI are often experienced as randomness, but they usually arise from how models are trained and how answers are generated, rather than from pure randomness at the core of the system.


Neural networks, the architecture beneath the transformers that power LLMs, start with randomly initialized weights. Training then optimizes those weights across many layers of artificial neurons. And simply put, small perturbations early on can be amplified later. During training, that amplification can show up as instability—like exploding gradients. After training, it can still influence results in subtler ways. These effects are managed, but they’re part of the architecture.
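A toy illustration of that sensitivity, using plain gradient descent on a made-up one-dimensional loss rather than a real network: the loss has two equally good minima, and two nearly identical starting weights end up in entirely different ones.

```python
def descend(w, lr=0.1, steps=200):
    """Gradient descent on the toy loss f(w) = (w^2 - 1)^2,
    which has two minima, at w = +1 and w = -1.
    The gradient is f'(w) = 4*w*(w^2 - 1)."""
    for _ in range(steps):
        grad = 4 * w * (w * w - 1)
        w -= lr * grad
    return w

# Two almost-identical initializations converge to different minima.
print(round(descend(+0.01), 4))  # → 1.0
print(round(descend(-0.01), 4))  # → -1.0
```

Real loss landscapes have vastly more dimensions and minima, but the mechanism is the same: the random starting point helps determine which solution the optimizer finds.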


The point is that in our daily lives we are dealing with AI agents that, for a variety of reasons, may give different answers to the same question.
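One concrete source of that run-to-run variation: chat models typically sample each next token from a probability distribution rather than always taking the single most likely one. A minimal sketch, with made-up logits standing in for a real model's output:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Softmax the logits at the given temperature, then sample one index.
    Higher temperature flattens the distribution; temperature near 0
    approaches a deterministic argmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for four candidate next tokens.
logits = [2.0, 1.5, 0.5, -1.0]
rng = random.Random(0)
samples = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(20)]
print(samples)
```

At temperature 1.0 the same prompt yields a mix of choices across runs; dial the temperature toward zero and the sampler almost always picks the top-scoring token.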


Quantum Circuits

At its core, quantum computing is deliberately built on randomness. A quantum bit (qubit) doesn’t hold a single, definite value the way ordinary computers do. Instead, it exists in a blend of possible states at once. When you run a quantum computation, the machine carefully manipulates these probabilities—but the final act of measurement forces the qubit to produce an outcome, with different results possible across repeated runs, according to known probabilities. Run the exact same quantum program twice, and you can get different answers each time—not because the computer is sloppy, but because quantum mechanics itself only promises reliable patterns across many runs, not certainty in any single one.
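The measurement statistics described here follow the Born rule, and they are easy to simulate classically. A sketch of repeatedly measuring a qubit prepared in an equal superposition (this simulates the statistics only, not a real device):

```python
import random

def measure(alpha, beta, shots, rng):
    """Measure a qubit in state alpha|0> + beta|1> 'shots' times.
    Born rule: P(0) = |alpha|^2 and P(1) = |beta|^2
    (amplitudes assumed normalized). Returns the count of |1> outcomes."""
    p0 = abs(alpha) ** 2
    return sum(1 for _ in range(shots) if rng.random() >= p0)

# Equal superposition, as a Hadamard gate would produce: alpha = beta = 1/sqrt(2).
rng = random.Random(7)
ones = measure(2 ** -0.5, 2 ** -0.5, shots=1000, rng=rng)
print(ones)
```

The count of |1⟩ outcomes hovers near 500 but differs from run to run: the individual measurements are irreducibly random, while the pattern over many shots is reliable.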

This is like the drift-and-noise picture from above, but turned on its head: here the randomness is intentional, and the reliable pattern only emerges across many runs.


On top of this built-in randomness, today’s qubits are also noisy. They are extremely sensitive to heat, radiation, and tiny imperfections in control signals, which can disturb their fragile quantum states before the computation finishes. This extra noise doesn’t represent meaningful probabilities the algorithm intended—it’s like those crowds on 5th Avenue that keep jostling you on your way to the Aman.


Living with Uncertainty from our Robot Advisors

This certainly won’t be put to bed here, but for now I’ll offer a framework.


The way to reconcile “random engines” with our desires for better certainty is to stop treating engagements with these tools as “answers” and start treating them as controlled sampling devices. Both deep learning and quantum circuits are, in different ways, machines for drawing structured samples from a complicated landscape: the variability you see across runs is not just error, it’s information about how stable the conclusion is under small perturbations of initialization, data order, hardware, or measurement.


In short, treat these tools as fallible advisors, not omniscient or deterministic oracles. If you asked a strategic advisor for an opinion, you wouldn't expect a robotic script. You would expect a general “drift” of opinion, perhaps phrased differently each time. We need to apply the same logic here:


In practice, that looks like using ensembles and repeated measurements to separate “signal that survives perturbation” from “insight that only appears in one lucky trajectory,” calibrating uncertainty so probabilities mean what they say, and reserving action for cases where the distribution is tight enough (or the downside is bounded enough) that you’d make the same choice across most plausible worlds.


The randomness doesn’t disappear; it becomes the mechanism by which you stress-test your own decision, quantify what you don’t know, and choose policies that are robust rather than merely optimistic. If you wrap these systems in a disciplined outer loop—repeat runs, vary seeds and conditions on purpose, aggregate into a posterior-like belief over outcomes, and then apply an explicit decision rule that penalizes fragility—you can distill uncertainty into confidence, even if your Chatbot or future quantum computer sometimes seems like it can't make up its mind!
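That outer loop can be made concrete in a few lines. In this sketch, a hypothetical `noisy_advisor` function stands in for a model or quantum-circuit call; the decision rule acts only when the repeated answers both agree with each other and clear a threshold.

```python
import random
import statistics

def noisy_advisor(true_value, rng):
    """Stand-in for a stochastic tool: the 'true' value plus run-to-run noise.
    (Hypothetical; in practice this would be a model call with a fresh seed.)"""
    return true_value + rng.gauss(0.0, 1.0)

def decide(n_runs, act_threshold, max_spread, rng):
    """The disciplined outer loop: sample repeatedly, aggregate, then act
    only if the answers are stable (small spread) AND the aggregate
    clears the decision threshold; otherwise, gather more data."""
    answers = [noisy_advisor(2.0, rng) for _ in range(n_runs)]
    center = statistics.mean(answers)
    spread = statistics.stdev(answers)
    if spread < max_spread and center > act_threshold:
        return "act"
    return "gather more data"

rng = random.Random(1)
print(decide(n_runs=30, act_threshold=1.0, max_spread=2.0, rng=rng))
```

The spread across runs is doing real work here: it is the "information about stability" from above, and the fragility penalty is simply refusing to act when that spread is wide.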
