May 2021

Deep Tech

Salman Ullah

Quantum computing - a realist's primer

Quantum computing is an exciting and potentially impactful technology, but it remains somewhat mysterious. As a scientist and venture capitalist, I’m intrigued by technologies that promise to revolutionize industries, so it makes sense to look behind the curtain and gain a better appreciation of quantum computing’s promise and challenges. To do so, we’ll need to dig into a bit of physics.

Everyday objects like bicycles, laptops and airplanes occupy a definite position in space at a given time. And since the age of Newton, that self-evident fact has enabled us to understand pretty much everything that we see around us.

However, it’s also true that when we look at things a bit more closely, say at how electrons flow in a wire or through the semiconductors which form the very fabric of every computing device, we realize that the classical or Newtonian picture (the one that gave us F=ma) is a complete failure. To drive home the point, the classical picture cannot account for the existence of magnets, the periodic table or the stability of DNA. To explain all these phenomena, we need a completely different theory of the world—quantum theory.

Quantum theory in its most useful form was cooked up in the middle of the 1920s by the big three: Heisenberg, Schrödinger and Dirac. For our purposes, we’ll concentrate on what Dirac taught us: all of the basic building blocks that make up the physical world (atoms, molecules, photons, etc.) live not only in ordinary space, but also in another realm altogether—Hilbert space (named after David Hilbert, one of the 20th century’s most insightful and influential mathematicians).

Hilbert space is an abstract notion, but one of its key characteristics is that it’s very roomy.  Things that live in this space can move around in lots of interesting ways that have no analog in the classical world. One reason why Hilbert space is so rich is that the ‘position’ of any object in this space isn’t just described by real numbers but by complex numbers, that is, numbers that involve the square root of minus 1. We’ll return to this feature when we describe the scaling advantage of quantum computers over classical computers.

Before we go any further, if there’s one takeaway from this primer it is this: classical computers compute in real space, quantum computers compute in Hilbert space.

And it’s because of the richness of Hilbert space that quantum computers can ‘outperform’ classical computers. To get a feel for the mechanism underlying this advantage, let’s compare a 2-bit classical memory register with its quantum counterpart: the 2-qbit register.

Classical 2-bit register

Each bit can hold a 0 or a 1. With 2 bits, the register can be in one of the following configurations at any given time: 00, 01, 10 or 11.

The first of these represents the number 0 and the last of these the number 3. So with a 2-bit register, we can encode any number between 0 and 3. And to repeat the obvious, the 2-bit register can only hold one set of values at a time (the equivalent of 0, 1, 2, or 3). You could always add more bits to hold larger numbers, e.g., with a 3-bit register you can represent all the numbers 0 to 7, but again, only one of these numbers can be held at any one time.
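If it helps to see this in code, here is a minimal Python sketch (purely illustrative) of the same point: each of the four bit patterns maps to one integer, and the register holds exactly one pattern at any instant.

    # A classical 2-bit register: two bits, each 0 or 1, holding one pattern at a time.
    for b1 in (0, 1):
        for b0 in (0, 1):
            value = 2 * b1 + b0   # binary place values: b1 counts twos, b0 counts ones
            print(f"bits {b1}{b0} -> integer {value}")
    # Prints 00 -> 0, 01 -> 1, 10 -> 2, 11 -> 3.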

Quantum 2-bit register

A quantum 2-bit or 2-qbit register is an object that lives in Hilbert space as well as in physical space. To make this idea more concrete, let’s imagine two atoms in a box, where each atom can be hot or cold depending on whether it was hit with a laser recently.

While we won’t be doing any math here, we’ll see that merely writing down an equation that describes this 2-qbit register reveals how it differs profoundly from its classical counterpart. Dirac taught us how to write down its state:

|2-qbit> = a1 |00> + a2 |01> + a3 |10> + a4 |11>

The numbers inside the | > simply describe whether each atom is hot (1) or cold (0). There are two important things to notice: (1) the 2-qbit register is a sum over all the different configurations of the two atoms at the same time, and (2) the numbers in front of each configuration, the ai, are complex.

(The ai are called amplitudes. If you take their squared magnitudes, you get the probability of finding the memory register in that configuration. This is the origin of the notion that quantum theory deals only in probabilities. In fact, the theory deals with amplitudes—it’s only when you perform a measurement that you get a probability. Anticipating some of the discussion to follow: in quantum computing, the computing takes place with amplitudes, but the results come out as probabilities.)
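To make the amplitude-to-probability step concrete, here is a small Python/numpy sketch (the particular amplitude values are arbitrary, chosen only for illustration) that builds a normalized 2-qbit state and computes the probability of each measurement outcome:

    import numpy as np

    # One complex amplitude for each configuration |00>, |01>, |10>, |11>.
    a = np.array([1 + 1j, 0.5, 0, 2j], dtype=complex)
    a = a / np.linalg.norm(a)        # normalize so the squared magnitudes sum to 1

    probabilities = np.abs(a) ** 2   # probability = |amplitude|^2
    for label, p in zip(["00", "01", "10", "11"], probabilities):
        print(f"P(measuring {label}) = {p:.3f}")
    print("total:", probabilities.sum())   # -> 1.0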

We are now in a position to see explicitly why quantum computers are, in principle, far more powerful than classical ones. In short, they are massively parallel devices. While the classical 2-bit register can only hold one integer at a time (0, 1, 2 or 3), the 2-qbit register holds 4 continuously varying complex numbers, the amplitudes ai, all at the same time (subject only to their squared magnitudes summing to 1).

Imagine we had 32 atoms instead of just two. In this case, the 32-qbit register could hold 2^32 or about 4 billion amplitudes at the same time, while the classical 32-bit register can hold only one integer, ranging from 0 to about 4 billion.
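It is worth pausing on just how fast this grows. The sketch below (assuming 16 bytes per complex amplitude, i.e., double-precision real and imaginary parts) estimates the memory a classical machine would need merely to store the amplitudes of an n-qbit state:

    # Memory needed to store the 2^n complex amplitudes of an n-qbit state,
    # assuming 16 bytes per amplitude (complex128).
    for n in (2, 10, 32, 50):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2 ** 30   # gibibytes
        print(f"{n:>2} qbits: {amplitudes:,} amplitudes, about {gib:.3g} GiB")
    # 32 qbits already require ~64 GiB; 50 qbits require ~17 million GiB,
    # far beyond any classical memory.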

So what exactly is a quantum computer?

We now have a handle on why quantum computers hold so much promise. But they’re not magical devices. To better appreciate their nature, it’s helpful to go back to their origin story.

Around 1981, Feynman was trying to better understand the physics of computation when he realized that any such theory had better rely on quantum theory “because nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical …and it’s a wonderful problem, because it doesn’t look easy.”

The problem he was trying to solve was the following: can you use a classical computer to solve a quantum problem? For our purposes, a quantum problem is one that tries to figure out how a bunch of atoms behave. With a classical computer, you can definitely simulate a single atom of hydrogen. However, you have no chance of repeating that for even a handful of atoms or molecules, let alone for computing how semiconductors behave. Why not? Because as we have seen, quantum objects live in Hilbert space and have lots of amplitudes associated with them. The only feasible way to figure out what those amplitudes are is to use a quantum computer to track them. Classical computers just aren’t cut out to solve quantum problems—they don’t scale.

But you don’t need to build a quantum computer per se to figure out how electrons flow in a wire. You can just do a bunch of experiments. By measuring the flow of the current in a wire you are directly probing a quantum phenomenon! Or, take another example more closely tied to actual quantum computers: how do two photons behave when they travel through a given optical circuit? Instead of trying to calculate the result using all the fancy machinery of quantum mechanics and a powerful classical supercomputer, why not just measure what happens? A device set up to measure what happens is, in effect, a quantum computer.

There is no difference between a quantum computer and a physics experiment. A quantum computer is a physics experiment that’s set up to solve a specific math problem.

Let’s return to the experiment of two photons traveling through an optical circuit. It turns out that this experimental setup is related to certain search and sampling problems that are essentially intractable on classical computers. At the present time, with about 50 photons you can achieve so-called quantum supremacy (more on this below). In other words, you can crack a problem that is essentially impossible to do on any conceivable classical computer.

Quantum computing in 3 steps

So, the recipe for quantum computing is straightforward:

  1. Encode the math problem (factoring, search, sorting, etc.) in a physical quantum system (e.g., a bunch of photons in an optical circuit)

  2. Let the quantum experiment proceed (the photons create an interference pattern)

  3. Perform a specific measurement on the system that is engineered to give you the answer you are looking for with a certain probability (measure the interference pattern)
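To see the three steps in one place, here is a toy sketch in Python/numpy. It is a classical simulation of the smallest possible example, not a real quantum device: a single qbit is encoded in the state |0>, evolves through two beam-splitter-like operations (Hadamard gates) with a phase shift in between, and the final measurement probabilities trace out an interference pattern.

    import numpy as np

    # Step 1: encode -- start the single qbit in |0> = (1, 0).
    state = np.array([1, 0], dtype=complex)

    # Step 2: let it evolve -- Hadamard, phase shift, Hadamard; the two paths interfere.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for phi in (0, np.pi / 2, np.pi):
        phase = np.array([[1, 0], [0, np.exp(1j * phi)]], dtype=complex)
        final = H @ phase @ H @ state

        # Step 3: measure -- outcome probabilities are squared magnitudes of amplitudes.
        p0, p1 = np.abs(final) ** 2
        print(f"phase {phi:.2f}: P(0) = {p0:.2f}, P(1) = {p1:.2f}")
    # Phase 0 gives P(0) = 1 (fully constructive); phase pi gives P(1) = 1.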

Unfortunately, each of these steps is very difficult to envision, design, and implement, and all are severely constrained by limitations both intrinsic to quantum systems and imposed from outside.

For example, step 1 requires fundamental insights in algorithms (there are no off-the-shelf libraries of subroutines to be had). There are only about two dozen algorithms that have been developed so far and many are named after their creators. They are closer to being works of art than engineering exercises.

To get a feel for why it’s so challenging to come up with quantum algorithms, consider the simple notion of copying the content of one memory register to another. In a classical computer this is trivial—consider copying from register A to register B: see what’s in register A, and if it’s 0 (or 1), make register B 0 (or 1).

This simple copying operation is impossible for quantum registers. It’s such a fundamental constraint that it has its own name: the no-cloning theorem.
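To get a flavor of why cloning fails, here is the standard short argument, written in the same notation as before. Suppose a copier existed that turned any 1-qbit state, together with a blank qbit |0>, into two copies. Acting on a general state it would have to produce

    COPY ( a|0> + b|1> ) |0>  =  ( a|0> + b|1> ) ( a|0> + b|1> )  =  a^2 |00> + ab |01> + ab |10> + b^2 |11>

But quantum evolution is linear, so the copier must also act term by term on the input a|00> + b|10>, which gives a|00> + b|11>. The two answers disagree whenever both a and b are nonzero, so no such copier can exist.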

If quantum computing has an Achilles’ heel, it’s step 2. The innocuous phrase “let the quantum experiment proceed” glosses over a fundamental challenge: quantum systems are extremely sensitive to outside disturbances. This sensitivity is likely the only intuitive feature of quantum systems: they consist of some of the smallest entities we can manipulate, and the trouble with being small is that you get knocked about very easily. As quantum systems get bigger, it becomes ever harder to isolate them from the outside (classical) world, and the errors at each step of the computation grow. Just as in classical coding theory, there are error-correcting solutions. However, unlike the classical case, there is no clear evidence that these solutions are useful beyond a certain system size.
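For contrast, here is what the simplest classical error-correcting trick looks like: an illustrative Python sketch of a 3-bit repetition code, where each bit is stored three times and occasional flips are repaired by majority vote. Quantum error correction has to achieve something analogous without ever copying the fragile amplitudes or reading them out directly, which is part of why it is so much harder.

    import random

    def encode(bit):
        return [bit, bit, bit]          # repetition code: store the bit three times

    def noisy_channel(codeword, flip_prob=0.1):
        return [b ^ (random.random() < flip_prob) for b in codeword]   # random bit flips

    def decode(codeword):
        return int(sum(codeword) >= 2)  # majority vote survives any single flip

    random.seed(0)
    failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(10_000))
    print(f"logical error rate: {failures / 10_000:.4f}")   # well below the raw 10% flip rate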

Finally, and again in the interest of demystifying quantum computing, let’s look at quantum ‘supremacy’. This is the notion that a quantum computer can carry out a particular calculation much faster than any classical computer. From our physics perspective, this claim is almost trivial. A quantum experiment (configured to generate some computational result) cannot be easily simulated by a classical computer—Feynman’s conclusion from 40 years ago.

Dozens, if not hundreds, of teams around the world are working on the myriad aspects of quantum computing, from developing algorithms to building the experiments and pioneering new techniques in photonic physics, ion trapping, quantum dots and many other approaches. These efforts have pushed the frontiers of experimental physics and introduced the world of computer science to the intricacies of quantum theory.

The technical challenges for quantum computing are fundamental and formidable. It’s most likely the deepest of deep-tech endeavors and very, very unlikely to be cracked open by two people tinkering in a garage. So, while the inherent power of the quantum computing paradigm is undeniable, its commercialization will continue to pose a challenge. Hilbert space will not give up its riches easily.