
NISQy Business: The Road to Quantum Supremacy in Finance

  • Writer: James C. McGrath
  • May 15
  • 36 min read

Bloch sphere representation of a qubit. By Smite-Meister - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=5829358

[Hate to start out so droll, but this material is for information purposes only. The views, opinions, estimates, and strategies expressed herein are my opinions, relying on incomplete information. Any companies referenced are shown for illustrative purposes only and are not intended as a recommendation or endorsement. The author may or may not have negligible commercial stakes in any of the companies listed at any point in time. None of the following should be regarded as investment advice in any way. Additional disclosures here also apply. Finally, I’m nowhere close to a physicist, so this is as non-technical as possible.]


I. Introduction

The world is still coming to terms with the explosion of artificial intelligence. In many domains, it is just beginning to have an impact, with the most profound transformations still to come over the next few years. 


Arguably, the foundation for today’s AI can be traced back to the theoretical work of Warren McCulloch and Walter Pitts, who introduced a model of a simplified artificial neuron in their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity." What else happened in 1943? Los Alamos National Laboratory was founded (see Oppenheimer, the movie!), which marked the institutionalization and large-scale application of quantum mechanics to solve a real-world, high-stakes engineering problem: building the atomic bomb. The world was never the same. It’s also worth noting—something I will come back to—that Monte Carlo techniques were developed in that same effort to simulate nuclear reactions. 


Interestingly, while applied quantum theory made a bigger “bang” than neural networks at first, it had no direct impact on computing. Analog and then digital computers informed nuclear engineering, but never the other way around. 


The neural network architecture, too, was in amber, until much later. 


Important theoretical and architectural developments in AI along the way, like non-linear activation functions and the transformer, alongside the Internet-scale training data that made LLMs possible, set the stage for the explosion of AI applications we are seeing today. But there was one more propitious, arguably indispensable component: the GPU. Its massively parallel architecture—thousands of cores designed for high-speed matrix and vector operations—makes it ideal for accelerating the intensive computations that deep learning requires. 


[Unlike the CPUs that powered the microcomputer revolution, GPUs offer significantly higher memory bandwidth and allow efficient (parallel) processing of large datasets and models, reducing training times from weeks to hours. Their importance surged around 2012, when AlexNet used NVIDIA GPUs to win the ImageNet competition, dramatically outperforming traditional methods and marking a turning point in AI. That has brought us to today, where AI, LLMs, and related tools are re-inventing the way the finance industry works.]


That said, we are still in the early innings of the AI revolution in finance (and earlier still in investment management), but as wild and wonderful as those applications are, they are more quantum leaps than a new way of looking at the world. Quantum computing, the heir to 1943’s other seminal development, will be that—a true paradigm shift.


[Since we are discussing quantum, it's important to get the terminology right. As commonly misapplied, “quantum leap” is intended to denote a dramatic or revolutionary change, suggesting something large and transformational, while in reality a quantum leap (or quantum jump) refers to an extremely small, discrete change in the energy state of an atom—typically at the atomic or subatomic level. Those small transitions are fundamental to quantum computing. A Kuhnian paradigm shift, by contrast, is a fundamental change in the basic understanding of reality that occurs when the prevailing framework can no longer explain accumulating anomalies. Rather than evolving gradually, science progresses through disruptive revolutions that replace one paradigm with another, often in ways that are incompatible with the old. Think of the Copernican Revolution.] 


Finance has been transformed by computing before: first by the quadratic programming that made the variance-reduction objective of Markowitz MPT tractable, then by the asset pricing revolution launched by the marriage of the Black-Scholes formula with microcomputers in the late 1970s and 1980s. It had been coasting for a while, though, until the transformer architecture undergirding recent foundational LLMs upended the way managers use data and, soon, who does what in the industry. AI will be that disruptive. Quantum computing will be, too, but it’s further out. This article is an attempt to make sense of the promise and limitations of this technology for the finance practitioner. 


I’m not a fortune teller and I’m not a scientist. I know that quantum computing will transform our world, on the heels of a world transformed by AI, but it’s going to be like bankruptcy… slowly, then all at once. In the remainder of this note, I’m concentrating on the next 3-5 years, which is the period of time over which we can most confidently extrapolate. (There is the quantum uncertainty, of course!) You’ll see there are many contingencies baked into where we go from here, and lots of them are purely hardware related. Then there are the algorithms, which are radically different from classical computing. There aren’t that many Git repositories of quantum code yet, but there are some, and various SDKs are sprouting. 


Herein, I’m going to cover a few things… 1) what quantum computing (QC) is and how it differs from AI, 2) the most promising near-term applications in finance and finance-adjacent fields, 3) who’s doing what, and 4) why it won’t be here tomorrow. 


II. Quantum Computing vs. Artificial Intelligence

Like I said, the basis of quantum computing and today’s AI can be found in the same annus mirabilis. They’re both suddenly getting buzzy (after decades of inertia), but they are really very different. 


Quantum computing (QC) is a form of hardware-accelerated computing that leverages quantum physics (superposition, entanglement, interference) to perform certain computations extremely fast. AI, on the other hand, is a class of algorithms and software techniques (often run on classical hardware like GPUs) that enable machines to learn from data and make predictions or decisions. In essence, QC is about new physics for computation, while AI is about new algorithms (on classical computers) for cognition-like tasks.


Quantum computing differs fundamentally from classical computing in how it represents and processes information. While classical computers use bits that exist in a state of either 0 or 1, quantum computers use qubits, which can exist in a superposition of both 0 and 1 simultaneously. This allows quantum computers to explore many computational paths at once, vastly increasing parallelism for certain types of problems. Additionally, entanglement (meaning the state of one qubit can depend on the state of another, no matter the distance between them) enables qubits to perform certain computations far more efficiently than classical bits, allowing for new algorithms and more efficient solutions in fields like cryptography, optimization, and simulation. However, quantum computers are highly sensitive to noise (outside influences) and require error correction techniques not needed in classical systems.
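
To make that concrete, here is a minimal sketch in plain Python/NumPy (my own toy illustration, not any vendor’s SDK) showing a qubit as a two-amplitude state vector, a superposition created by a Hadamard gate, and two qubits entangled into a Bell state:

import numpy as np

# A classical bit is 0 or 1. A qubit is a length-2 complex vector a|0> + b|1>,
# where |a|^2 + |b|^2 = 1 gives the probabilities of measuring 0 or 1.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0
print("superposition amplitudes :", plus.round(3))
print("measurement probabilities:", (np.abs(plus) ** 2).round(3))   # [0.5, 0.5]

# Two qubits live in a 4-dimensional space (the tensor product); n qubits need 2**n
# amplitudes, which is why simulating them classically blows up exponentially.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)          # (|00> + |11>) / sqrt(2)

# Entanglement: the two outcomes are perfectly correlated; 01 and 10 never occur.
print("Bell-state probabilities (00, 01, 10, 11):", (np.abs(bell) ** 2).round(3))

The thing to notice is the exponential blow-up: n qubits require 2ⁿ complex amplitudes to describe classically, which is exactly the headroom quantum algorithms try to exploit.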


A. What’s A Quantum Good For?

A quantum computer excels at mathematically structured problems where quantum parallelism can be exploited, such as factoring integers, unstructured search, simulating quantum systems, or certain optimization problems. AI excels at pattern recognition and inductive inference (image classification, natural language understanding, playing complex games) by training on vast datasets using statistical methods; these are tasks that don’t have exact algorithms but instead rely on learning from examples. So if your problem has a clear mathematical structure, like solving a large system of linear equations (MPT! Monte Carlo!) or optimizing a combinatorial function, a quantum algorithm might eventually solve it faster than any classical algorithm. If your problem is more like recognizing objects in photos or translating languages, tasks requiring semantic understanding and generalization from data (or any unsupervised learning task, like clustering), AI techniques are currently the state of the art. 


Quantum computers promise provable speedups (for certain problems) underpinned by quantum complexity theory. For instance, Shor’s algorithm is a quantum algorithm developed by Peter Shor in 1994 for efficiently factoring large integers—a task that is extremely slow for classical computers when the numbers are large. The algorithm runs in polynomial time on a quantum computer, making it exponentially faster than the best-known classical factoring algorithms. This is significant because it can break RSA encryption, which relies on the difficulty of factoring large composite numbers.


[Polynomial time refers to how the number of steps an algorithm needs grows with the size of its input: as a polynomial such as n² or n³. This matters in computer science because polynomial-time algorithms are generally considered efficient or tractable—meaning they can solve large problems in a reasonable amount of time. In contrast, exponential-time algorithms (like 2ⁿ) quickly become infeasible as inputs grow. So, when a quantum algorithm like Shor’s is said to factor numbers in polynomial time, it means it can solve the problem far faster than the best-known classical algorithms, which require super-polynomial time.]
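
A quick back-of-the-envelope illustration (unitless operation counts only; constants and real-world overheads are ignored, so treat the numbers as shapes, not forecasts):

import math

# Toy comparison of growth shapes: a polynomial cost (n**3, roughly the shape of
# Shor's algorithm in the bit-length n) versus the sub-exponential shape of the
# best known classical factoring algorithm (the general number field sieve).
# Constants and real-world overheads are ignored; only the shapes are meaningful.

def shor_like(n_bits):
    return n_bits ** 3

def gnfs_like(n_bits):
    ln_n = n_bits * math.log(2)           # ln N for an n_bits-bit modulus
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

for bits in (512, 1024, 2048, 4096):
    print(f"{bits:4d}-bit modulus: polynomial ~ {shor_like(bits):.1e}, "
          f"classical GNFS shape ~ {gnfs_like(bits):.1e}")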


Grover’s algorithm is another canonical example. It’s a quantum search algorithm that offers a quadratic speedup for finding a specific item in an unsorted dataset. Developed by Lov Grover in 1996, it significantly reduces the number of steps needed compared to classical search methods by leveraging features of quantum systems. While its speedup is less dramatic than Shor’s, it has broad applications, especially in cryptography and optimization. 
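
Grover’s algorithm is small enough to simulate on a laptop for a handful of qubits. Here is a toy statevector simulation (plain NumPy, no quantum hardware or SDK) that searches 16 items and finds the marked one with roughly 96% probability after just three iterations, versus the 1-in-16 odds of a single random guess:

import numpy as np

def grover_probabilities(n_qubits: int, marked: int) -> np.ndarray:
    """Statevector simulation of Grover search over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N), dtype=complex)   # uniform superposition
    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
        state[marked] *= -1                 # oracle: flip the phase of the marked item
        state = 2 * state.mean() - state    # diffusion: reflect about the mean amplitude
    return np.abs(state) ** 2               # measurement probabilities

probs = grover_probabilities(n_qubits=4, marked=11)     # 16 items, 3 Grover iterations
print("P(marked item) ~", round(float(probs[11]), 3))   # ~0.96, vs 1/16 for a random guess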


In contrast to these stylized (but potentially disruptive) use cases, AI doesn’t guarantee a speedup on a given algorithmic problem; instead, AI (especially deep learning) finds approximate solutions to problems that are hard to even define algorithmically (like “Is there a cat in this picture?”). The success of AI is measured by accuracy or utility, not by asymptotic complexity improvement. In practice, classical AI models require huge computing resources (today’s largest neural networks consume staggering amounts of compute to train) but run on the aforementioned GPUs, which are parallel classical processors, albeit ones uniquely suited to the matrix math behind gradient descent. 


B. Is it Faster?

The preceding examples aside, today’s quantum processors are sadly much slower and more limited than classical processors in most other practical matters. A quantum circuit must be executed multiple times (shots) to get probabilistic results, and current qubit counts (tens to a few hundred) are tiny compared to the billions of bits in classical memory. Meanwhile, classical AI models can utilize specialized hardware (GPUs/TPUs with thousands of cores) to process millions of parameters efficiently. So, for current practical tasks, AI running on classical hardware vastly outstrips what today’s quantum hardware can do. The game-changing aspect of QC is that as qubit counts and fidelity grow, some computations will become feasible that no amount of classical parallelism could handle, because of exponential complexity. (That exponential complexity cuts both ways, however; QCs, at least in the near term, suffer from their own scaling complexities.)


A striking example was Google’s 2019 self-proclaimed quantum supremacy demonstration: their 53-qubit processor performed a contrived random circuit sampling task in 200 seconds, which was estimated to take 10,000 years on a classical supercomputer (a roughly 1.5 billion-fold speedup). Notably, that task had no AI or practical relevance—it was chosen just to showcase quantum capability. In contrast, AI has been achieving “supremacy” in practical domains (like beating human champions in Go or generating human-like text) by leveraging classical computation; but there is no single computation in AI that we can point to and say, “this fundamentally can’t be done by classical computing,” unless the model or data is too large. In summary, QC’s capability lies in solving certain previously intractable computational problems through new algorithms, whereas AI’s capability lies in solving previously un-automatable cognitive tasks through learned models.


C. NISQy Business

Quantum computing’s limitations are largely hardware-related, which is going to be a fact of life over the next few years. I suspect there could be an inflection point somewhere over the near term, akin to the adoption of the transformer architecture in AI, that will upend the entire approach to QC. Until then, we live in a pre-Cambrian, primordial NISQ world. NISQ stands for Noisy Intermediate-Scale Quantum, referring to the current generation of quantum computers with 50–1,000 qubits (far short of the number we need to do real work) that are powerful but still error-prone and not yet fault-tolerant. These devices are unable to run full-scale quantum algorithms like Shor’s or fault-tolerant quantum error correction. However, they may offer practical advantages for certain problems, such as quantum chemistry simulations or portfolio optimization tasks, using variational or hybrid quantum-classical algorithms. 


NISQy computers cannot run deep, complex algorithms without errors accumulating. Moreover, quantum algorithms often require error-corrected qubits to realize their speedups fully—which means hundreds or thousands of physical qubits to encode one high-fidelity logical qubit. For example, breaking current cryptography with Shor’s algorithm might need on the order of a few thousand logical qubits, which could translate to millions of physical qubits given the overhead of error correction. This is far beyond the hundreds of qubits available today. Even in the 3–5 year timeframe, quantum computers will likely remain limited to at most a few thousand physical qubits, and still prone to errors (though manufacturers plan for incremental improvements in error rates). It’s worth emphasizing that engineers know how to scale up normal bits and have been good at it for decades; it’s essentially a linear scaling relationship. Qubits, in contrast, become exponentially harder to control and verify as more of them are entangled together.


D. Intelligent, but Not Omniscient

Artificial intelligence’s limitations are more algorithmic and data-related: AI models require large, labeled datasets and are often “black boxes” that lack transparency. They can fail in unpredictable ways (e.g., adversarial examples) and do not provide guarantees of correctness: an AI might give a very convincing answer that is completely wrong, whereas a quantum algorithm like Shor’s, run on sufficiently capable hardware, gives a provably correct answer (e.g., the exact prime factors). AI also struggles with generalizing beyond its training data and with problems that require explicit logical reasoning or long-term planning without huge search (though new techniques are improving these). In terms of technical use cases: quantum computers are not “better” at general intelligence or reasoning – those remain AI’s domain for now – and you wouldn’t use a quantum computer to run a web server or a word processor. Conversely, classical AI algorithms are not good at problems like simulating quantum physics or breaking cryptographic math problems – areas where quantum algorithms shine.


III. Putting the QuBit on the Finance Horse (Use Cases)

As I mentioned, modern finance is built on yesterday’s high-performance computing, particularly for areas like risk analysis, option pricing, and portfolio optimization. Below, I identify some of the areas of finance most likely to be entangled in quantum first. This article from Naik et al. is an excellent guide to where quantum computing currently stands with respect to finance and is the source of much of the commentary below, which I have attempted to translate from the original.


A. Monte Carlo, asset pricing, and risk modeling

Besides helping to build the nuclear bomb, Monte Carlo simulation has been huge for finance and owes its success to early digital computers. Briefly, Monte Carlo is a computational technique that estimates uncertain processes by running a large number of random samples. A large number of random samples… feels like it might be amenable to quantum techniques? A paper from Woerner and Egger (2019) showed just that. Using a method called quantum amplitude estimation, they demonstrated that quantum algorithms can significantly reduce the number of samples needed to estimate expected values like Value-at-Risk, effectively, working smarter and better than classical techniques. Importantly, they applied this to a real-world credit risk model and provided practical circuit designs and resource estimates, making the approach tangible rather than purely theoretical. Follow-up research applied this to credit risk, estimating economic capital requirements more efficiently than classical simulations. 
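
To see why sample counts matter, here is a purely classical toy (made-up portfolio numbers) estimating 99% Value-at-Risk by plain Monte Carlo. The spread of the estimate shrinks like 1/√M, so each extra digit of precision costs 100× more samples; quantum amplitude estimation targets roughly 1/M scaling in the number of quantum samples, which is the quadratic speedup the Woerner and Egger line of work formalizes. The sketch below shows only the classical side of that comparison:

import numpy as np

rng = np.random.default_rng(0)

def classical_var(n_samples: int, alpha: float = 0.99) -> float:
    """Plain Monte Carlo 99% Value-at-Risk for a made-up portfolio P&L distribution."""
    # Toy P&L: normal market moves plus occasional jump losses (numbers invented).
    pnl = rng.normal(0.0, 1.0, n_samples) - 5.0 * rng.binomial(1, 0.02, n_samples)
    return float(-np.quantile(pnl, 1 - alpha))

# Classical Monte Carlo error shrinks like 1/sqrt(M); amplitude estimation targets ~1/M.
for m in (1_000, 100_000, 10_000_000):
    estimates = [classical_var(m) for _ in range(5)]
    print(f"M = {m:>10,d}   VaR ~ {np.mean(estimates):.3f}   spread across runs ~ {np.std(estimates):.4f}")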


While promising, these studies show we aren’t there yet: achieving practical advantage will require much larger, error-corrected quantum hardware, with many more high-fidelity qubits than today’s devices can offer, before quantum Monte Carlo methods can outclass classical ones. Thus, in the 3–5 year horizon, we may see prototypes and hybrid quantum-classical algorithms demonstrating faster simulations on small problem instances, but full-scale pricing of complex portfolios with quantum speedup will likely await more advanced hardware, or a defter architecture and approach. 


B. Portfolio Optimization and Trading

Another promising area is portfolio optimization – selecting the best asset mix under constraints, a problem that grows exponentially with the number of assets. Quantum computing approaches include quantum annealing and variational algorithms (QAOA, VQE) to tackle these combinatorial optimizations. (A quantum annealer is a specialized type of quantum computer designed to solve optimization problems by finding the lowest-energy configuration of a system—ideal for tasks like portfolio optimization, scheduling, or routing. Quantum annealing is a bit like simulated annealing or gradient descent, except that it uses quantum tunneling to "jump" through energy barriers rather than climb over them, potentially escaping local minima more effectively than other techniques. In contrast, a gate-model quantum computer is fully programmable and capable of running a wide range of quantum algorithms, including those for cryptography (like Shor’s), search (like Grover’s), and simulating quantum systems—making it more versatile. While annealers are useful for specific, well-defined optimization problems, gate-model systems are essential for more general-purpose quantum computing applications.)
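
Whether you hand the problem to an annealer or to QAOA on a gate-model machine, the first step is usually the same: cast the selection problem as a QUBO (quadratic unconstrained binary optimization). Here is a hedged sketch with entirely made-up numbers, a toy mean-variance selection over 8 assets with a soft budget of 4 names, solved by brute force because the instance is tiny; an annealer or QAOA would consume the same matrix Q as its input:

import itertools
import numpy as np

# Toy mean-variance asset selection cast as a QUBO, the input format annealers and
# QAOA consume. Every number below is invented; this is a shape illustration only.
rng = np.random.default_rng(1)
n = 8                                     # assets (brute force is fine at this size)
mu = rng.uniform(0.02, 0.12, n)           # expected returns
A = rng.normal(size=(n, n))
sigma = A @ A.T / n                       # a random positive semi-definite covariance
risk_aversion, budget, penalty = 10.0, 4, 50.0

# Minimize  risk_aversion*x.Sigma.x - mu.x + penalty*(sum(x) - budget)^2  over x in {0,1}^n.
# Because x_i^2 = x_i for binary variables, the linear terms live on the diagonal of Q.
Q = risk_aversion * sigma + penalty * np.ones((n, n))
Q -= np.diag(mu + 2 * penalty * budget)

def qubo_energy(x):
    return float(x @ Q @ x)

best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
           key=qubo_energy)
print("selected assets:", np.nonzero(best)[0], " QUBO energy:", round(qubo_energy(best), 2))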


Experiments on D-Wave’s (more on them, below) quantum annealer have shown some success on small portfolios. For instance, a D-Wave 2000Q system optimized a 40-asset portfolio and achieved a better risk-return tradeoff (per a custom metric) than a brute-force Monte Carlo search. (It selected portfolios with higher returns for a given risk than the classical Monte Carlo method did.) Nonetheless, classical heuristic approaches like tried and true genetic algorithms still outperformed the quantum annealer in that test. Even with a 60-asset universe, the annealer could find the optimal portfolio only when aided by classical pre-processing and was ultimately matched or beaten by classical simulated annealing. 


These results highlight that near-term quantum optimizers can contribute to finance (especially via hybrid methods), but they do not yet decisively outrun state-of-the-art classical algorithms. Looking ahead 3–5 years, we expect further quantum optimization trials in trading strategies, asset allocation, and arbitrage. Gate-model devices running QAOA and other variational algorithms are being refined for these problems. While early quantum optimizers may find novel solutions or speed up certain subproblems, they will likely serve as co-processors alongside classical HPC, given hardware limits. Financial institutions are actively experimenting here: e.g. JPMorgan has prototyped option pricing on IBM quantum hardware, and fintech startups are exploring quantum approaches to fraud detection and high-frequency trading optimizations.


C. Fraud Detection and Anomaly Detection

In banking and payments, detecting fraudulent transactions in real time is vital and computationally challenging. Quantum machine learning offers intriguing potential to sift through large, complex datasets for subtle patterns. Research in quantum support vector machines (QSVM) demonstrates how a quantum model might flag fraud patterns that classical models miss. For example, Grossi et al. implemented an end-to-end QSVM for credit-card fraud detection on IBM quantum hardware using real payment data. They compared its performance to classical machine-learning models (like random forests and XGBoost) as well as human-crafted rules. The quantum model had to be simplified (using feature selection and dimensionality reduction) to fit on current hardware. 


While the hybrid quantum-classical approach did not yet surpass the best classical methods, it showed that quantum feature encoding can work on real financial data. The study also introduced new ways to identify important features via the quantum kernel, a technique that could improve classical models as well. In the next few years, quantum-enhanced anomaly detection may be further applied in finance (for fraud, algorithmic trading irregularities, or credit scoring). Even if quantum models only match classical accuracy initially, their different way of encoding data might provide new insights or robustness. Banks are interested in this intersection of QC and AI as a way to bolster security: quantum detection of anomalies could eventually operate in tandem with quantum-safe encryption to detect fraud.
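
For the curious, the QSVM pipeline has a simple skeleton: encode each feature vector into a quantum state, estimate the kernel entry as the overlap between two such states, then train a perfectly ordinary SVM on that kernel. The sketch below mimics that structure with a classically simulated angle-encoding feature map and synthetic "fraud" labels; it is not Grossi et al.'s circuit, just the shape of the idea:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def feature_state(x):
    """Angle-encode a feature vector into a (classically simulated) multi-qubit state."""
    state = np.array([1.0 + 0j])
    for angle in x:                        # one qubit per feature: cos|0> + sin|1>
        qubit = np.array([np.cos(angle / 2), np.sin(angle / 2)], dtype=complex)
        state = np.kron(state, qubit)
    return state

def quantum_style_kernel(X1, X2):
    """Kernel entry = |<phi(x)|phi(y)>|^2, the overlap a QSVM estimates on hardware."""
    S1 = np.array([feature_state(x) for x in X1])
    S2 = np.array([feature_state(x) for x in X2])
    return np.abs(S1 @ S2.conj().T) ** 2

# Tiny synthetic "fraud" set: 2 features, label depends nonlinearly on them (made up).
X = rng.uniform(0, np.pi, size=(200, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

clf = SVC(kernel="precomputed").fit(quantum_style_kernel(X_train, X_train), y_train)
print("test accuracy:", round(clf.score(quantum_style_kernel(X_test, X_train), y_test), 3))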


D. “Black Hat” or “White Hat” (or Both at the Same Time?): Cryptography, Blockchain, and Digital Assets

One of the most critical impacts of QC on finance is its ability to break certain cryptographic schemes. Many blockchain and cryptocurrency systems rely on cryptography (e.g., RSA, ECC signatures) that could be compromised by a sufficiently powerful quantum computer running Shor’s algorithm. This is not a use-case per se, but a security threat that the financial and digital asset sector must address. Naik et al. note that recent quantum computational advances “pose serious security challenges to cryptography-based technologies, such as blockchain.”


For example, Bitcoin’s elliptic-curve-based signatures and the SHA-256 mining hash could be vulnerable: as mentioned, Shor’s algorithm can factor large numbers and compute discrete logarithms in polynomial time, undermining RSA/ECDSA, while Grover’s algorithm can speed up brute-force hashing (though Grover offers a quadratic, not exponential, advantage). 
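
Why does factoring matter so much? Because the RSA private key falls straight out of the factors of the public modulus. A toy example with absurdly small primes (real keys use 2,048-bit moduli; this is purely illustrative):

# Toy RSA with absurdly small primes, purely to illustrate why factoring breaks it.
p, q = 61, 53                      # secret primes (real RSA uses ~1,024-bit primes)
N = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, N)            # anyone can encrypt with the public key (N, e)
print("decrypted by key holder:", pow(ciphertext, d, N))

# An attacker who factors N (the step Shor's algorithm makes polynomial-time on a large
# enough quantum computer) recovers p and q, and with them the private key.
def factor_by_trial_division(n):
    f = 2
    while n % f:
        f += 1
    return f, n // f

p_hat, q_hat = factor_by_trial_division(N)
d_hat = pow(e, -1, (p_hat - 1) * (q_hat - 1))
print("decrypted by attacker:  ", pow(ciphertext, d_hat, N))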


Still, we don’t have to worry, yet. Within 3–5 years, it’s unlikely that fully cryptography-breaking quantum computers will exist—experts estimate that breaking RSA-2048 would require thousands of logical qubits, which in turn means millions of physical qubits when error-corrected. In any case, this has gotten people's attention: the looming threat is driving a lot of work on quantum-resistant cryptography and blockchains.


In the near term, the researchers anticipate financial networks and blockchain developers will integrate post-quantum cryptographic algorithms (from lattice-based or hash-based cryptography, for example) to future-proof digital assets. On the positive side, quantum technology might also enhance blockchain platforms: research is exploring “quantum-safe” blockchains and even quantum-powered consensus. Some proposals include using quantum random number generators for better cryptographic keys, or even quantum blockchain protocols that use quantum communication for security. While these ideas are mostly theoretical now, pilot projects could emerge in the next few years. Overall, the finance sector is both target and testbed for quantum computing — target, in that institutions must guard against quantum attacks on security, and testbed, in that they can deploy early quantum algorithms for competitive advantage in analytics.


Finally, while not strictly within the domain of the finance practitioner (though certainly a target for some investors), pharma and materials science may benefit from some of the earliest applications of this tech.


E. Pharmaceuticals and Materials Science

Quantum computers are inherently suited to simulate quantum-mechanical systems, like the behavior of molecules and materials, which is exponentially hard for classical computers. In drug discovery and materials design, quantum simulation could accelerate finding new compounds (for example, designing a better catalyst or battery material). Even in the NISQ era, quantum chemistry algorithms (e.g., Variational Quantum Eigensolver) have been used to compute small molecular energies that challenge classical methods. As hardware improves, we anticipate quantum-assisted discovery of pharmaceuticals and advanced materials in the coming years. Companies like IBM and Google have already demonstrated quantum simulations of simple chemical systems, and startups (e.g., QunaSys, Zapata) are partnering with pharma firms to explore quantum chemistry use cases. Early applications might include computing reaction rates for drug metabolites or optimizing molecular structures for solar cells, where a modest quantum advantage could significantly shorten R&D cycles.


IV. Picks and Shovels! (The Players)

This is hardly exhaustive, but it is worthwhile to start tracking who’s active in the space. The great thing about quantum research is that the hard-core science is happening simultaneously in university labs, national laboratories, the largest technology conglomerates, and even within the global banks (such as JPM). (This part gets a little technical, but it’s worth it.) Most of the companies I mention below are publicly listed (obviously) and a few are pure-plays with modest market caps. The order listed is arbitrary. As an aside, regarding investing in quantum computing: not only is it highly speculative, it’s also highly susceptible to the AI tail for now… Mr. Nvidia managed to tank the entire sector in just a few minutes. Word to the wise.  


A. IBM

As befits a company older than Warren Buffett, IBM has done a few things. Today, IBM is a leader in superconducting-qubit technology and has made its quantum processors accessible via the cloud since 2016. IBM’s roadmap is ambitious—they broke the 100-qubit barrier in 2021 with the 127-qubit Eagle chip and followed with a 433-qubit Osprey in 2022. (You can see an overview of their recent accomplishments and future directions here.) Briefly, at the IBM Quantum Summit in 2023, they unveiled Condor, a 1,121-qubit processor—the first to surpass one thousand qubits. This milestone tested the limits of fabrication and cryogenics for a single chip. IBM is now pioneering modular quantum architectures: they plan to introduce systems (Heron, Crossbill, Flamingo, Kookaburra) that link multiple chips via high-speed interconnects and even quantum communication, scaling to an envisioned 4,000+ qubit networked system by 2025. Alongside hardware, IBM’s open-source software framework Qiskit and its quantum cloud services have a large user community and have motivated dozens of research papers. IBM’s research has produced advances in quantum error correction, including demonstrations of stabilizer codes and the development of error mitigation techniques on real hardware.
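
As a taste of what the Qiskit ecosystem looks like in practice, here is a minimal Bell-state example run on a local simulator. Import paths and backend names have shifted across Qiskit versions, so treat this as an illustrative sketch rather than a canonical recipe:

# Minimal Qiskit sketch: build a Bell-state circuit and sample it on a local simulator.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)             # put qubit 0 into superposition
qc.cx(0, 1)         # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=2000).result().get_counts()
print(counts)       # expect roughly half '00' and half '11', essentially never '01' or '10'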


B. Google

As mentioned, Google has worked the sizzle since the 2019 demonstration of its 53-qubit Sycamore processor. Google used a superconducting qubit architecture to sample random quantum circuits faster than any known classical method, a historic scientific milestone (albeit with essentially zero practical import). Since then, Google’s focus has been on quantum error correction and scalability. In 2021–2023, the Google team demonstrated that increasing the size of a surface code (from a 5×5 grid of qubits to a 7×7 grid) reduced the logical error rate, showing the first empirical evidence of error correction improving with scale. This crossing of the error-correction threshold was a crucial step toward building a fault-tolerant quantum computer. Google is also exploring alternative qubit technologies (e.g., bosonic qubits in cavities) and has a long-term goal of building a useful error-corrected quantum computer by the end of the decade. They collaborate with NASA and academic researchers and have open-sourced tools like Cirq for quantum circuit design. Key individuals include Hartmut Neven, who leads the Quantum AI program, and notable researchers like John Martinis (who led the hardware effort through the supremacy experiment) and Sergio Boixo. Google’s researchers have contributed important papers in Nature and Science, so they continue to be influential in the basic science realm as well. 


C. Microsoft

As it’s doing in AI, Microsoft is a collaborator while also pursuing very proprietary projects. Today its quantum effort is focused on “topological quantum computing.” They are working on exotic quasiparticles called Majorana zero modes to serve as qubits that are intrinsically protected from noise. It’s a dicey approach with a potentially big payout—extremely challenging, but it promises qubits with much lower error rates if realized. In 2022–2023, Microsoft reported evidence for Majorana states in nanowire devices, a step toward demonstrating a topological qubit. In parallel, Microsoft provides Azure Quantum, a cloud platform offering access to various quantum hardware backends (ion traps, superconducting qubits from partners) and their own quantum-inspired optimization algorithms. They also develop the Q# programming language and tools for quantum software. Though Microsoft has yet to unveil a working high-qubit-count device, their theoretical research (led by Matthias Troyer, Michael Freedman, and others) in quantum error correction, fermionic simulation algorithms, and quantum-safe cryptography is influential. Microsoft’s topological qubit gambit reflects a long-term strategy: if it succeeds, it could leapfrog the field in scalability. In the next 3–5 years, Microsoft is likely to continue in this vein, refining the physics of their qubits and possibly demonstrating a small topological qubit, while, in their inimitable way, flooding the zone with their quantum services and software ecosystem through Azure for researchers and developers. So, yes, a quantum SDK!


D. IonQ

IonQ, which spun out of University of Maryland and Duke University research, went public via a Special Purpose Acquisition Company (SPAC) merger on October 1, 2021, trading on the NYSE as IONQ. The SPAC was dMY Technology Group, Inc. III (DTG). This merger made IonQ the first publicly traded, pure-play quantum computing company. IonQ specializes in trapped-ion quantum computers. Trapped ions (electrically charged atoms confined by electromagnetic fields) have some of the longest coherence times and highest gate fidelities of any qubit technology. This makes them highly precise, with low error rates, and therefore very reliable for running quantum algorithms. They also allow any qubit to connect with any other and can hold quantum states for a long time, which helps with more complex computations (think of the “attention” concept in a transformer).  


IonQ’s systems currently operate with tens of qubits (an earlier model had 11 qubits; newer ones have 23+ physical qubits). Thanks to high fidelity, IonQ reported that its effective quantum volume (a holistic performance metric) is among the highest publicly available. IonQ has an aggressive roadmap aiming for hundreds of qubits via modular ion trap architectures within a few years. They are also developing photonic networking between ion traps to scale further. Co-founders Chris Monroe and Jungsang Kim are prominent figures in quantum computing: Monroe’s lab performed many landmark ion-trap experiments (like demonstrating some of the first quantum logic gates). IonQ’s machines are accessible via cloud providers (Amazon Braket, Azure) and are used in research for quantum chemistry, optimization, and machine learning experiments. The company’s success in going public and securing industry partnerships hints at the commercial interest in quantum computing (albeit with a modest $8BN market cap). In the next 3–5 years, IonQ and other ion-trap companies (like Quantinuum, see below) aim to demonstrate mid-size quantum processors with error rates low enough to test quantum error correction on multiple logical qubits.


E. Quantinuum

Formed by the merger of Honeywell Quantum Solutions (which built trapped-ion systems) and Cambridge Quantum, Quantinuum is now one of the largest integrated quantum companies, with over 500 employees, operating in both hardware and algorithms. Honeywell’s ion-trap hardware, known for its all-to-all qubit connectivity, realized high-fidelity gates (>99%) and mid-circuit measurement, allowing primitive error correction demos. Cambridge Quantum brought software expertise (notably in quantum cryptography and chemistry algorithms). Quantinuum’s 2022 H1-1 machine featured 20 qubits and achieved record quantum volume. They have since released an H2 processor and are working on scaling the ion technology using shuttling and parallel trap zones. On the software side, Quantinuum’s TKET compiler toolkit (also styled t|ket〉) is widely used to optimize quantum circuits for different hardware.


A notable achievement from their team was the first implementation of quantum random number generation meeting governmental standards for cryptographic use. In the near term, Quantinuum is focusing on quantum error mitigation and small-scale error-corrected circuits; for instance, they announced progress in reducing logical error rates in real time using a “Twist” code (a form of error-correcting code tailored to their hardware), arguably a first. The firm has a number of existing commercial partnerships, including with Italy’s HPE Group (a Ferrari affiliate). Quantinuum just announced a Qatari-based joint venture with Al Rabban Capital, potentially worth up to $1BN, as part of a broader deal brokered by the US. Among other things, this investment is intended to accelerate their efforts in Generative Quantum AI, building on their Gen QAI framework.


F. D-Wave Systems

D-Wave, based in Canada, took a different path by building quantum annealers rather than gate-model quantum computers. Their latest machines now feature over 5,000 flux qubits. D-Wave’s systems have been used by companies like Volkswagen, Lockheed Martin, and Los Alamos National Lab to experiment on scheduling, traffic flow, molecular folding, and more. While D-Wave’s approach does not directly implement universal quantum logic, it has the advantage of much higher qubit counts and easier qubit control (operations amount to setting up an energy landscape and letting the system evolve). Recent D-Wave models (Advantage) provide improved connectivity and qubit coherence, and D-Wave is also working on a gate-model processor (announced in 2021) to join the mainstream approach. In the next few years, D-Wave’s quantum annealing might achieve quantum advantage on certain optimization problems if they can demonstrate significantly better results or speed than all classical algorithms for a particular use case. Even now, research has shown D-Wave can sometimes find good solutions faster than classical heuristics for specific crafted problems, but a broad advantage is still unproven. Nonetheless, D-Wave’s contributions include a robust software stack (Ocean) and a large user base testing real-world problems.


G. Classiq Technologies

Last but not least: Classiq Technologies is a venture-backed Israeli quantum computing software company that recently secured $110 million in Series C funding, valuing the company at approximately $500 million. Classiq is focusing on the software side of things, with a proprietary platform designed to simplify the development of quantum algorithms, aiming to function as an operating system for quantum computing. This platform enables developers to design, optimize, analyze, and execute quantum algorithms without requiring deep expertise in quantum mechanics. What sets Classiq apart is its high-level programming language and automated synthesis engine, which allow for the creation of hardware-aware quantum circuits from functional models. This abstraction reduces the complexity traditionally associated with quantum programming, making it accessible to a broader range of developers. The platform’s hardware-agnostic nature ensures compatibility across various quantum processors, facilitating flexibility and scalability in quantum application development. In some ways Classiq is doing more important work than some of the big guys, because 1) they abstract away the hardware layer, and 2) they enable development without a background in quantum mechanics. It’s almost like a quantum version of Java on steroids. Classiq has users at BMW, Citigroup, Deloitte, and more. 


H. Quantum communication

Quantum computing isn’t just wanting for low-error-rate, practical processors and new algorithm design; it also needs components that can talk to one another, which introduces fundamentally different problems than classical networking. So-called quantum networking, or even the quantum Internet, will need to grow apace. 


Here, we are unsurprised to see one of the usual suspects, operating far beyond the NICs in your dad’s 386.


i. Cisco

Cisco’s most recent innovations in quantum computing focus on building the infrastructure for quantum networking, a critical step toward scalable and practical quantum computing. In collaboration with UC Santa Barbara, Cisco developed a quantum entanglement chip capable of producing up to 200 million entangled photon pairs per second. Designed for room temperature operation and standard telecom wavelengths (1550 nm), the chip is highly energy-efficient and compatible with existing fiber-optic infrastructure. This enables distributed quantum computing, where smaller quantum processors are networked into more powerful systems. Cisco’s work positions it as a key enabler of the future “quantum internet.”


To support this vision, Cisco launched the Cisco Quantum Lab in Santa Monica, which is developing a full quantum networking stack. This includes quantum switches, network interface cards (NICs), a distributed quantum computing compiler, and a quantum network development kit (QNDK). In parallel, Cisco is advancing post-quantum cryptography (PQC) to secure classical networks against future quantum threats, adopting NIST-standardized PQC algorithms across its products. By focusing on quantum networking rather than building a quantum processor, Cisco aims to accelerate the practical deployment of quantum computing in the next 5–10 years, enabling cross-platform interoperability and secure communications.


ii. QuNett

Of course, there are smaller, pure plays here, too, such as QuNett, which, using advanced photonics, has developed a router to send quantum information over long distances. There’s already a quantum backbone out there, again involving the University of Maryland (IonQ’s birthplace), which is as close to the US quantum capital as anywhere.


I. Academic and Government Research Labs


Befitting its genesis, quantum computing is being propelled by numerous universities and national laboratories.


In the United States, universities like MIT, Caltech, Harvard, Stanford, University of Maryland, Duke, and Yale have premier quantum labs. Caltech’s IQIM (led by John Preskill) has been influential in quantum information theory (Preskill coined the term NISQ) and drives research on quantum algorithms and error correction. Yale’s groups (led by Michel Devoret and Robert Schoelkopf) pioneered superconducting circuit designs and quantum error correction in superconducting qubits, including the first demonstration of a logical qubit with longer coherence than its components. University of Maryland and Duke (the groups of Monroe and Kim, now IonQ leaders) advanced trapped-ion entanglement gates and modular architectures. National labs like Sandia, Oak Ridge, Lawrence Berkeley, and, of course, Los Alamos are deeply involved – for example, Lawrence Berkeley manages the Quantum Systems Accelerator, a multi-institution effort on algorithms and engineering, and Oak Ridge’s Summit supercomputer was used to simulate quantum circuits as classical verification. The U.S. government has funded Quantum Information Science Centers (like Fermilab’s SQMS focusing on superconducting cavities, and a Duke-led center on trapped ions) to push the frontiers of hardware.


In Europe, the EU’s Quantum Flagship program has united academia and industry in projects ranging from superconducting and ion hardware to software and applications. The Netherlands’ Delft University (QuTech) is renowned for spin qubit and topological qubit research (led by Lieven Vandersypen and Leo Kouwenhoven, respectively). The University of Innsbruck and the Institute for Quantum Optics and Quantum Information (Austria), led by Rainer Blatt and others, have focused on ion-trap quantum computing (e.g., high-fidelity multi-ion entanglement). Germany’s MPI for Quantum Optics and Forschungszentrum Jülich are other players, and France’s CEA and CNRS labs collaborate with startups like Pasqal (neutral atoms) and Quandela (photonic QCs). The UK has initiatives at Oxford (ion traps, photonics), University College London, and the University of Sussex (developing trapped-ion microchip arrays). Oxford Quantum Circuits (OQC) is a UK startup providing cloud access to an 8-qubit superconducting device, with larger chips planned soon.


In Canada, alongside D-Wave, the University of Waterloo’s Institute for Quantum Computing (IQC) is a powerhouse for quantum research (founded by Mike Lazaridis). IQC and Canadian universities have contributed significantly to quantum error correction theory (e.g., the surface code was co-developed by Canadian theorists) and quantum algorithms. Canada also has emerging startups like Xanadu (photonic quantum computing, known for its 216-mode Borealis photonic processor that achieved Gaussian boson sampling quantum advantage in 2022) and quantum software firms (e.g., Quantum Benchmark).


In Asia, China has rapidly advanced its quantum program. The University of Science and Technology of China (USTC) led by Pan Jianwei has demonstrated quantum supremacy experiments both in photonics (boson sampling with the Jiuzhang photonic computer) and superconducting circuits (the Zuchongzhi 56-qubit processor) – both claimed to perform tasks infeasible for classical supercomputers. China’s institutes are also exploring spin qubits and quantum communication (QKD satellites), showing a national commitment to leadership in quantum tech. Japan’s RIKEN and University of Tokyo collaborate with IBM on the first IBM Quantum System One installed outside the US, and Japanese researchers (like those at NTT, Toshiba) are noted for work on silicon-based qubits and quantum cryptography. Australia has a long-standing effort on silicon quantum dots (UNSW’s Quantum Engineering lab under Michelle Simmons) and photonic systems and has produced startups like Silicon Quantum Computing and Q-CTRL (quantum control software).


Whereas AI advancements have largely come out of computer science, abetted by co-opting the gamers’ hardware from Nvidia, quantum computing takes a village. It’s a synergy of physicists (to build and control hardware), computer scientists and mathematicians (to develop algorithms and theory), and engineers (to integrate qubits into scalable systems). Recent progress has seen lots of collaboration across large and small entities, e.g., IBM partnering with university groups on materials improvements, or Google funding academic research on algorithms. The field’s global nature means that breakthroughs can come from anywhere—a theory paper from a small university or a fabrication leap from a national lab can suddenly change the state of the art.


V. Hold Your Horses!

The promise is immense, but so are the technical challenges. ECDSA won’t be hacked tomorrow! Nothing ventured, nothing gained, they say. 


Again, this list is merely a casual survey, but before we get to commercially viable technology, the industry has to address these issues (and again, this section may be a bit eye-wateringly technical):


A. Scalability of Qubit Hardware

Current quantum prototypes have on the order of 100 to 1,000 qubits (IBM just surpassed 1,000 physical qubits), but many applications will require thousands or millions. Scaling up qubit count is non-trivial: qubits must be controlled and connected without introducing excessive noise—in other words, while maintaining coherence. This becomes exponentially harder as you introduce new qubits, and moreover, proving your computer maintains all its entanglements 100% of the time is exceptionally non-trivial. 


B. Qubit Quality and Error Rates

Raw qubit count alone is not enough; the fidelity of qubit operations (gates, measurements) and the coherence time of qubits (how long they maintain quantum state) are critical. Today’s best qubits have error rates on the order of 0.1%–0.01% per operation. For reference, to do useful algorithmic tasks, many experts say error rates may need to be in the 0.001% or lower range, especially when executing millions of operations for a complex algorithm. Improving qubit quality is thus an ongoing battle.


A potential solution to reliability is quantum error correction (QEC), which encodes logical qubits into multiple physical qubits so that errors can be detected and corrected on the fly. QEC is conceptually well-understood, but in practice it is extremely resource-intensive. Most estimates suggest that each logical qubit may require hundreds to thousands of physical qubits devoted to error correction (the exact number depends on physical error rates and the code used). For example, a surface code might need ~1,000 physical qubits per logical qubit, or more if error rates are higher. The grand challenge is to reduce this overhead by improving physical qubits and by designing more efficient codes and algorithms. 
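
A back-of-the-envelope calculation shows where the "millions of physical qubits" figures come from. Using the commonly quoted (and very rough) surface-code scaling, logical error ≈ 0.1·(p/p_th)^((d+1)/2) with roughly 2d² physical qubits per logical qubit at code distance d, the overhead looks like this (the constants here are conventions, not engineering numbers):

# Back-of-the-envelope surface-code overhead, using the oft-quoted rough scaling
# p_logical ~ 0.1 * (p/p_threshold)**((d+1)/2) and ~2*d**2 physical qubits per
# logical qubit at code distance d. Constants are conventions, not engineering numbers.
P_THRESHOLD = 1e-2                # ~1% error threshold commonly cited for the surface code

def overhead(p_physical, p_logical_target=1e-12):
    d = 3
    while 0.1 * (p_physical / P_THRESHOLD) ** ((d + 1) / 2) > p_logical_target:
        d += 2                    # code distance is odd
    return d, 2 * d * d - 1       # (distance, physical qubits per logical qubit)

for p in (1e-3, 1e-4):
    d, nq = overhead(p)
    total = nq * 4000             # e.g., a machine with 4,000 logical qubits
    print(f"physical error {p:.0e}: distance {d}, ~{nq} physical per logical, "
          f"~{total / 1e6:.1f}M physical qubits for 4,000 logical qubits")

Even with this crude model, the message matches the estimates above: a tenfold improvement in physical error rates cuts the overhead dramatically, but the totals still land in the hundreds of thousands to millions of physical qubits.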


Recently, there have been promising developments in that vein: Google’s demonstration that a 49-qubit surface code had a lower error rate than a 17-qubit one was a proof-of-principle that QEC can work as it scales. Researchers are now trying to reach the next milestone: a logical qubit that has a longer lifetime than any of the physical qubits that make it up. Some claims of this “break-even” point have been made in superconducting and ion systems for very small codes, but a robust logical qubit is still elusive. Achieving fault tolerance (where error-corrected qubits can be scaled arbitrarily with errors limited) is the holy grail; it may not be achieved within 5 years, but incremental progress is expected. Each step, such as demonstrating two logical qubits interacting, or a logical qubit surviving for seconds, will be big news. Error correction also imposes a speed challenge; it’s a sort of race—error detection must happen faster than errors accumulate. This means classical co-processors for error correction must be incredibly fast—so that’s another area of research.


C. More than 2 Bits

Qubits are hardly the endgame. Qutrits (three-level systems) and ququarts (four-level systems) are potentially the next evolutionary step toward qudits (higher-dimensional generalizations of the qubit). Simply put, by encoding more information per unit—qutrits can represent three states and ququarts four—they enable more compact and potentially more efficient quantum circuits. Electrons can occupy more than two states, after all. This increased dimensionality can lead to reduced circuit complexity and lower gate counts, which are beneficial for scaling quantum systems. Some say that these higher-dimensional systems may have better error resilience, as they can distribute quantum information across more states, potentially mitigating certain types of errors.


This is cutting edge right now, but some promising results have already come in. The Google and Yale efforts demonstrated that error-corrected qudit systems (including qutrits and ququarts) have superior coherence times compared to traditional qubit systems. Still, putting these to work poses additional challenges, starting with control mechanisms and error correction protocols.


D. Engineering challenges

Adding control lines, microwave generators, or laser beams for each qubit eventually runs into space, heat, and cross-talk issues. For example, superconducting qubits require complex cryogenic wiring (IBM’s 1121-qubit Condor chip contains over a mile of wiring in a dilution refrigerator) and new methods like multiplexing signals and 3D integration are needed to go higher. Companies are exploring modular designs (linking multiple smaller chips) to scale beyond the limits of a single chip. Ensuring that qubits on different modules can communicate quantum information (via photonic links or shared resonators) without degrading fidelity is a big challenge (recall Cisco and QuNett). Over the next few years, we expect demonstrations of modular quantum computing: e.g. IBM’s planned 2025 multi-chip 1,386-qubit Kookaburra system will test quantum interconnects at scale. Similarly, ion trap systems will work on shuttling ions between trap zones or connecting traps with photonic interfaces. Achieving scalable “quantum fabric”—analogous to multi-core scaling in classical CPUs – is essential for growth and will be an active area of engineering research.


i. Materials science issues

In superconducting qubits, sources of error include microscopic two-level system defects in materials, cross-talk between qubits, and stray microwave photons. Research is aimed at better fabrication techniques (e.g., eliminating lossy interfaces, using purer materials) and new designs (like fluxonium qubits or capacitively shunted qubits) that are more noise-resilient. In ion traps, motional mode heating and laser noise can cause gate infidelity, so improving vacuum, laser stability, and using sympathetic cooling are active areas. For spin qubits in semiconductors, uniformity of quantum dots and reducing charge noise are key challenges.


  • Stability and calibration: Qubit systems are sensitive and often require frequent calibration (tuning qubit frequencies, pulse shapes, etc.). Automation of calibration using AI techniques is being developed so that a quantum computer can operate for hours or days with minimal human intervention. In 3–5 years, one can expect better closed-loop calibration systems that keep qubits at optimal settings and perhaps self-correcting architectures that adjust on the fly when drift is detected.

  • Noise mitigation: While full error correction is still a dream, techniques to mitigate errors in software are crucial. This includes methods like zero-noise extrapolation (sketched below), probabilistic error cancellation, and symmetry verification. Many quantum software groups are working on error mitigation strategies that can stretch the capabilities of NISQ hardware by effectively lowering error rates at the cost of more runs or post-processing. These techniques will be integral to near-term quantum computing experiments and are a big part of making NISQ devices useful in the next few years.
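
Here is what zero-noise extrapolation looks like in miniature: measure the same observable at deliberately amplified noise levels, fit a curve, and extrapolate back to zero noise. The noise model below is invented purely for illustration:

import numpy as np

# Toy zero-noise extrapolation (ZNE): measure the same observable at deliberately
# amplified noise levels (e.g., by stretching pulses or inserting gate pairs), then
# fit and extrapolate back to zero noise. The noise model below is entirely invented.
TRUE_VALUE = 1.0                                  # the noiseless expectation value
rng = np.random.default_rng(7)

def noisy_expectation(scale, shots=4000):
    damped = TRUE_VALUE * np.exp(-0.15 * scale)          # made-up exponential damping
    return damped + rng.normal(0, 1 / np.sqrt(shots))    # plus shot noise

scales = np.array([1.0, 2.0, 3.0])
measured = np.array([noisy_expectation(s) for s in scales])
fit = np.polyfit(scales, measured, deg=2)         # Richardson-style polynomial fit
zne_estimate = np.polyval(fit, 0.0)               # extrapolate to zero noise

print("raw reading (scale 1x):", round(float(measured[0]), 3))
print("ZNE estimate          :", round(float(zne_estimate), 3), "(true value 1.0)")

The extrapolated estimate typically lands much closer to the true value than the raw reading, at the cost of extra circuit runs and somewhat amplified statistical noise.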


ii. Integration and Cryogenics

Many quantum technologies (superconducting qubits, spin qubits) operate at millikelvin temperatures, requiring dilution refrigerators. As systems scale, the size and complexity of these fridges increase, raising engineering issues of heat load, wiring, and mechanical stability. Companies like IBM and Intel have shown refrigerator designs that could house thousands of qubits in the future, but ensuring uniform cooling and minimizing vibration noise (which can decohere qubits) is tough. Moreover, new cryogenic electronics (classical control circuits that operate at cold temperature) are being developed to place more of the control logic inside the fridge, which reduces latency and cabling to room temperature.


On a 5-year horizon, we might anticipate prototypes of cryo-electronics controlling moderate-scale quantum chips, and perhaps the use of novel cooling solutions (like closed-cycle dilution fridges for continuous operation). For trapped ions and photonic systems, cryogenics might not be needed (many ion systems run at near-room temperature using vacuum chambers, and photonic systems can often be room-temp), but they have their own integration challenges, such as laser stability and optical coupling on a large scale. In all cases, hardware footprint and stability must be improved for commercial viability—a future quantum computer should ideally be an installation that can run continuously with manageable maintenance, not a fragile lab experiment that requires constant babysitting.


E. Software and Algorithms Maturity

On the software side, there is a need for more practical algorithms that can tolerate the limitations of near-term hardware. While we have famous algorithms like Shor’s and Grover’s for the long term, much of the near-term focus is on algorithms in the “quantum-classical hybrid” category (like the Variational Quantum Eigensolver, Quantum Approximate Optimization Algorithm, quantum machine learning models, etc.). These algorithms typically involve a quantum subroutine within a larger classical optimization loop and are somewhat resilient to noise. A challenge here is discovering which applications and problem instances actually give an advantage with these algorithms on NISQ devices. 
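
The hybrid loop itself is simple to sketch. Below, a one-qubit "ansatz" (simulated classically with NumPy) is scored against a toy Hamiltonian and SciPy's classical optimizer closes the loop, exactly the division of labor in VQE/QAOA, except that real deployments swap the simulator for quantum hardware and contend with noise:

import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the hybrid quantum-classical loop behind VQE/QAOA: a parameterized
# one-qubit "circuit" (simulated classically here) is scored against a Hamiltonian,
# and a classical optimizer updates the parameters. Real uses swap in hardware + noise.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.8 * X                       # toy Hamiltonian with made-up coefficients

def ansatz(theta):
    """Ry(theta)|0> -- the part a quantum device would prepare and measure."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.1], method="COBYLA")    # the classical outer loop
print("variational estimate:", round(result.fun, 4),
      " exact ground energy:", round(float(np.linalg.eigvalsh(H)[0]), 4))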


For instance, in theory quantum amplitude estimation offers quadratic speedup for Monte Carlo, but a recent review pointed out that most real-world Monte Carlo finance applications “require a large number of error-free qubits, which are not yet available”. Researchers are thus inventing algorithmic tricks to reduce resource requirements, e.g., algorithmic error mitigation, smarter encoding of problem data into qubits, using problem structure to reduce circuit depth, etc. Over the next few years, expect improvements in compilers and software that optimize circuit layouts to hardware (placing qubits and routing operations to minimize noise) and possibly new algorithms that are designed specifically for NISQ constraints. Another aspect is developing programming languages, libraries, and benchmarks so that more developers can engage with quantum computing. Just like classical computing needed decades to evolve from assembly to high-level languages and robust compilers, quantum computing’s software stack is in early development. Projects like Qiskit, Cirq, and t|ket〉 are laying the groundwork, but further abstraction and auto-optimization will be needed to handle larger programs. A commercial quantum computer will also require a full software ecosystem: scheduling jobs, error diagnostics, perhaps even quantum operating systems; all of these are being prototyped now (for example, IBM’s Qiskit Runtime and “Quantum Serverless” approach aims to integrate quantum jobs into cloud computing workflows seamlessly).


F. Cost and Commercial Viability

Another challenge is the economic and practical viability. Quantum hardware is expensive: the refrigeration, precision electronics, and specialized facilities cost millions of dollars per system. For quantum computing to be commercial at scale, costs must come down or be justified by superior performance on valuable problems. In the meantime, much like AI, but arguably worse, getting to quantum supremacy will require prodigious capex without commensurate RoI. In the near term, we will see more bootstrapping: a likely model is cloud access to quantum machines, as already provided by IBM, Amazon Braket, Microsoft Azure, etc., where many users time-share a few advanced quantum processors. This avoids each customer needing their own dilution refrigerator and quantum engineers. However, even for cloud, the throughput of current quantum chips is low (each job might take a few seconds and must be repeated for many shots, and only one job can typically use the full processor at a time). Increasing throughput via faster gate speeds, parallel operation (e.g., using different sections of a chip for different jobs), or having error-corrected qubits that can run longer algorithms without restart will be important for commercial use. Additionally, demonstrating a clear quantum advantage on a useful task will be key to commercial adoption. 


Many companies and investors are eager for a “useful quantum app”—something that outperforms classical methods in, say, finance optimization or molecular simulation. Achieving that in 3–5 years is possible in a narrow domain, but it will likely be a close race in which quantum hardware and algorithms must beat very optimized classical heuristics or approximate methods. One oft-cited potential milestone is achieving quantum advantage in quantum chemistry simulation, e.g., computing a molecular energy or reaction rate to an accuracy that classical chemistry software cannot match. If a quantum computer helps discover a new drug or high-efficiency battery material faster than conventional means, that would strongly drive commercial interest. Thus, solving the “last mile” of turning quantum experiments into industry solutions is as much a challenge as the science. It involves collaboration between quantum experts and domain specialists in chemistry, finance, logistics, etc., to tailor quantum algorithms to real-world problems and integrate them into existing workflows.


VI. The Chimera! (What Happens When AI and QC Get Together)

For the foreseeable future, QC and AI remain very much in separate lanes. So maybe smart people can yoke them together? Like Iolaus, perhaps?


Here’s a suggestion: AI can assist quantum computing development (for instance, using AI to optimize quantum error-correcting codes or calibrate qubits), and quantum computing might someday enhance AI (with quantum machine learning algorithms that detect patterns faster, or quantum sampling methods to speed up training). A new interdisciplinary field of “Quantum AI” is emerging to explore this fusion. For example, quantum computers can generate richer probability distributions than classical computers, which could be used as powerful random feature generators or to initialize neural networks in novel ways. Additionally, some combinatorial optimization problems in AI (like tuning hyperparameters or feature selection) could leverage quantum optimization algorithms. In the next few years, we expect to see hybrid workflows: a classical AI might call a quantum subroutine for a specific task (as in quantum kernel classification, where a quantum circuit computes kernel entries for an SVM). 
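

As a concrete taste of the quantum-kernel idea, here is a minimal sketch of the hybrid workflow described above: a quantum feature map computes the kernel (Gram) matrix, and an entirely classical SVM does the actual learning. It assumes the open-source qiskit-machine-learning and scikit-learn packages; class names and defaults there have shifted across versions, so treat this as illustrative rather than production code.

```python
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Toy 2-feature dataset; two features map onto two qubits in the feature map.
X, y = make_moons(n_samples=40, noise=0.15, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

# A quantum circuit encodes each data point; kernel entries are state overlaps.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
qkernel = FidelityQuantumKernel(feature_map=feature_map)

# Precompute the Gram matrices, then hand them to an ordinary classical SVM.
K_train = qkernel.evaluate(x_vec=X_train)
K_test = qkernel.evaluate(x_vec=X_test, y_vec=X_train)

svm = SVC(kernel="precomputed").fit(K_train, y_train)
print("test accuracy:", svm.score(K_test, y_test))
```

Whether a kernel like this beats a well-tuned classical kernel on real data is exactly the open question the text describes.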


Indeed, the first real quantum advantage in machine learning might come from such a hybrid approach solving a very specific subproblem faster than classical methods. That said, any synergy will have to overcome the data-loading bottleneck (getting classical data into a quantum form) and the current small scale of quantum devices. Thus, while AI is already widely deployed and generating value with today’s computers, quantum computing’s impact in the next 5 years will be narrower, likely confined to specialized tasks where a quantum algorithm plus modest qubit counts can outperform brute force. AI will continue to handle the broad realm of perceptual and cognitive tasks, potentially incorporating some quantum techniques if they prove advantageous. 


Machine Learning and Big Data

Quantum computing and AI may intersect in the field of quantum machine learning. While classical AI (deep learning) currently dominates, researchers are developing quantum algorithms for pattern recognition, clustering, and regression that could eventually handle certain data problems faster. In the next few years, we might see quantum accelerators used for specialized tasks like accelerating linear algebra subroutines in machine learning (e.g., using quantum sampling for faster matrix computations). Tech companies are already experimenting with quantum kernels for classification and quantum-enhanced feature spaces that could improve classical models. However, given the overhead of data loading and the limited size of quantum memory, most near-term gains in AI from QC will likely come from hybrid approaches: for example, using a quantum optimizer to fine-tune a classical neural network’s parameters, or using quantum-generated synthetic data to augment training sets.
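

Another hybrid pattern worth sketching: use a small quantum circuit purely as a feature extractor, feeding Pauli expectation values into an ordinary classical model. The circuit, observables, and dataset below are arbitrary illustrative choices of mine (using Qiskit's statevector simulation and scikit-learn), not a recipe from the literature.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def quantum_features(x):
    """Encode a 2-feature sample into a 2-qubit circuit and return Pauli
    expectation values as engineered features (illustrative encoding)."""
    qc = QuantumCircuit(2)
    qc.ry(float(x[0]), 0)
    qc.ry(float(x[1]), 1)
    qc.cx(0, 1)                      # entangling gate mixes the two inputs
    state = Statevector.from_instruction(qc)
    observables = ["ZI", "IZ", "ZZ", "XX"]
    return [float(state.expectation_value(SparsePauliOp(o)).real) for o in observables]

X, y = make_classification(n_samples=80, n_features=2, n_informative=2,
                           n_redundant=0, random_state=5)
Xq = np.array([quantum_features(x) for x in X])   # quantum-derived feature space
X_train, X_test, y_train, y_test = train_test_split(Xq, y, random_state=5)

clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy on quantum-derived features:", clf.score(X_test, y_test))
```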


In summary, AI is about smart algorithms on classical hardware, while QC is about powerful hardware for specific algorithms; each has its niche. In the near term, AI will remain unmatched for things like vision, language, and real-time decisions, and quantum computing will strive to prove itself in areas like cryptography, optimization, and scientific simulation. Both will evolve in parallel, occasionally intersecting to create Quantum AI solutions, but generally addressing different classes of problems.


VII. A More Certain Future

In conclusion, while the promise of quantum computing is enormous, the engineering challenges are formidable. The next 3–5 years will be a period of intense research and development focused on making quantum computers more robust, scalable, and integrated. We will likely witness steady improvements: qubit counts creeping upward, error rates inching downward, and prototype error-corrected qubits emerging. Each challenge addressed, be it demonstrating a logical qubit, achieving 99.99% gate fidelity, or networking multiple quantum modules, will mark a step closer to the ultimate goal of large-scale, fault-tolerant quantum computers.


Financial applications are at the forefront of near-term quantum computing use cases, with quantum algorithms promising faster simulations, better optimizations, and new cryptographic considerations. Other industries like biotech, chemistry, logistics, and energy are also preparing for potential quantum advantage in specialized tasks. Over the next 3–5 years, we expect to see proof-of-concept demonstrations in these areas: small-scale problems where quantum methods show equal or slightly better performance than classical ones, and where lessons about algorithms and error mitigation are learned. Truly transformative impact (e.g., cracking industry-standard cryptography or simulating complex proteins exactly) will likely take more than five years, but the groundwork for those breakthroughs is being laid now.


These incremental advances will also feed back into academic knowledge (improving our understanding of quantum device physics and error correction) and into commercial readiness (developing supply chains for quantum hardware, standardization of interfaces, etc.). The quantum computing journey is analogous to the early days of classical computing in the 1940s–1950s: we have prototype “mainframes” that fill laboratories, and the path to widespread utility will involve overcoming technical hurdles one by one. If current trends continue, by the end of this decade we could see the first quantum computers that developers without deep quantum expertise can use productively—that will be the moment quantum computing truly transitions from lab curiosity to a mainstream technology tool, accessible to the finance practitioner. 


Finance is all about constraining uncertainty over time in complex, nondeterministic systems. Quantum physics is all about harnessing—benefitting from—uncertainty; artificial intelligence is all about making sense of complexity without hand-crafted heuristics or explicit rules. As these two tools are further refined, the smart money will figure out how to use them both.


 

Sources:

  • Naik, A. S., et al. “From portfolio optimization to quantum blockchain and security: a systematic review of quantum computing in finance.” Financial Innovation 11, 88 (2025): Discusses quantum computing applications in finance (portfolio optimization, Monte Carlo simulations for pricing and risk, fraud detection) and blockchain, including benefits and current limitations. Here.

  • Woerner, S., & Egger, D. J. “Quantum risk analysis.” npj Quantum Information 5, 15 (2019): Introduces a quantum algorithm for Monte Carlo risk evaluation (Value-at-Risk, etc.) using amplitude estimation, achieving faster convergence than classical Monte Carlo. Here.

  • Egger, D. J., et al. “Credit Risk Analysis using Quantum Computers.” (2019): Demonstrates a quantum algorithm for estimating credit risk (economic capital) more efficiently than classical methods and discusses qubit/circuit requirements for realistic problems. Here.

  • Stamatopoulos, N., et al. “Option Pricing using Quantum Computers.” arXiv:1905.02666 (2019): Applies quantum amplitude estimation to price financial derivatives, showing quadratic speedup in sample complexity for option pricing. Here.

  • Gómez, J., et al. “Applications of Quantum Monte Carlo methods in Finance.” Quantitative Finance 22, 1 (2022): Surveys quantum Monte Carlo approaches for derivative pricing and risk, noting the large qubit counts needed for practical advantage. Here.

  • Grossi, M., et al. “Quantum Machine Learning for Fraud Detection.” IEEE Transactions on Computers 71, 2735 (2022): Implements a QSVM on real payment data and compares to classical ML for fraud detection, illustrating current quantum-classical performance tradeoffs.

  • Sun, S. et al. “IBM Quantum Computers: Evolution, Performance, and Future Directions.” arXiv:2410.00916 (2024): Reviews IBM’s hardware progress (including crossing 1000+ qubits) and discusses performance metrics, error rates, and scaling plans. Here.

  • Arute, F. et al. (Google Quantum AI). “Quantum supremacy using a programmable superconducting processor.” Nature 574, 505 (2019): Google’s landmark experiment where a 53-qubit processor performed a random circuit sampling task vastly faster than a classical supercomputer, establishing a quantum computational advantage. Here.

  • Acharya, R. et al. (Google Quantum AI). “Suppressing quantum errors by scaling a surface code logical qubit.” Nature 614, 676 (2023): Demonstrates that a larger distance-5 surface code had lower error rates than a distance-3 code, a key step toward fully error-corrected qubits.

  • Quanta Magazine. “Quantum Computers Cross Critical Error Threshold” (Dec 2024): Reports on recent advancements in quantum error correction, including the need for thousands of physical qubits per logical qubit and Google’s progress in demonstrating error reduction with larger codes. Here.

  • IBM Quantum Roadmap and Blog (2022–2024): IBM’s public plans for scaling quantum hardware, including the development of 433-qubit and 1121-qubit processors and modular “quantum-centric supercomputing” with linked processors by 2025. Here.




