Controlled color cascade yields better parallelism for photonic qubits.

Way back when I started writing for Ars, experimental quantum computing had just started to take off. At the time, the big demonstrations of quantum computation were very simple calculations, performed using single photons as repositories of quantum information. Back then, demonstrating even a single logical gate was a challenge. Light ruled the roost, and charged particles were reduced to the status of not-quantum-enough.

That changed, of course. Now, all the big demonstrations make use of charged particles: little superconducting current loops, rows of ions, and the like. Light, it seems, has been reduced to a way of moving qubits between charged particles.

But a recent result shows that there is life left in photon-based quantum computers, and that the degree of parallelism they offer will be difficult to beat with other qubit technologies.

Why charge when you have a light brigade?

To understand the changing fortunes of different quantum technologies, we need to return to the past. Back in the '80s, physicists were interested in really testing some of the unusual consequences of quantum mechanics, like whether reality is really real. Theoreticians had proposed cats that were both alive and dead, but these descriptions were all formal. If physicists had sources that could produce special quantum states, then actual tests could be done in the real world.

It just so happened that those states were easiest to produce with light. And developing the ability to perform the experiments also provided the foundation on which a quantum computer could be developed: physicists ended up in the happy position of having labs filled with equipment that could demonstrate the basic logic operations of a quantum computer.

This all occurred because photons are special: they can pass through each other without influence. So a qubit can pass through another qubit without changing either value. Even better, high-quality optics meant that qubits encoded in photons were preserved as they passed through the hardware. Physicists had exactly what they needed: an environment with very little noise and a high degree of control over the qubit.

But this work eventually pushed up against some limits. It's hard to take a four-qubit optical quantum computer and scale it up to ever-larger numbers of qubits. Even using all the really cool technology developed for fiber optic communication, qubit scaling is a real problem for optical systems.

Compare that to something like superconducting quantum interference devices. All they need is a nearly complete loop of superconducting metal on a chip. Engineers have the technology to scale qubits based on these, and they know how to create circuits that can perform operations on multiple qubits to implement reasonably sophisticated algorithms.

The problem is that the particles that make up a supercurrent are charged. The charges feel the fields from all their neighbors: not just from other qubits, but from the power supply, your mobile phone, the laptop... everything.

These qubits are exactly like toddlers. They start out clean and shiny, encoded with the purest of messages. But the toddler quickly acquires grime through a game known as persistently picking up dirt, corrupting the information. When the researchers try to keep the toddler from picking up dirt, it throws a tantrum that persists until it accidentally throws up on the cat. Not only has its own information gotten extremely messy, but the process has also destroyed the information encoded in everything around it.

Even now, in terms of the success rate of individual logic operations, superconducting qubits are not good enough for useful calculations. Despite several demonstrations of very cool calculations, their quantum state decays away too quickly to perform more complex math. In this regard, qubits encoded in photons still provide the best results, despite their apparent inability to scale.

The return of the light

The lovers of light have not been sitting on their hands when it comes to scaling, though. If you have lots of qubits, you need a way to split and combine them in order for the qubits to perform operations. For light, if there are only two qubits, this is trivial: a partially reflective mirror or some other standard optic will do the trick. As soon as you start thinking about three or more qubits, though, splitting and combining photons becomes very difficult.

This is where the latest research comes into play. To achieve a multi-qubit combiner and splitter, the researchers encoded each qubit in a slightly different color. They did this using something called an electro-optic modulator. An electro-optic modulator takes in a microwave signal, which changes the refractive index of a bit of glass. As light passes through the glass, its frequency is changed by the periodic changes in refractive index. In fact, it is shifted by exactly the frequency of the microwave.
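
To make that concrete, here is a minimal numerical sketch of a phase-type electro-optic modulator. It's my own toy model, not anything from the paper, and the carrier frequency, microwave frequency, and modulation depth are all assumed numbers. A carrier wave picks up a sinusoidal phase from the microwave drive, and its spectrum sprouts sidebands offset from the carrier by exact multiples of the microwave frequency.

```python
import numpy as np

f_opt = 200.0   # "optical" carrier frequency (arbitrary units, assumed)
f_rf = 5.0      # microwave drive frequency (assumed)
depth = 1.2     # modulation depth (assumed)

# Carrier whose phase is modulated by the microwave signal.
t = np.linspace(0.0, 20.0, 200_000, endpoint=False)
field = np.exp(1j * (2 * np.pi * f_opt * t + depth * np.sin(2 * np.pi * f_rf * t)))

spectrum = np.abs(np.fft.fft(field)) ** 2
freqs = np.fft.fftfreq(t.size, d=t[1] - t[0])

# The strongest spectral lines sit at f_opt and at f_opt +/- n * f_rf.
strongest = np.sort(freqs[np.argsort(spectrum)[-7:]])
print(np.round(strongest, 2))   # expect lines spaced by f_rf around f_opt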

And this effect cascades. So I put in a single color and, out the other end, I get the color I put in and a whole bunch of others. Each color is separated from its neighbors by the frequency of the microwave. Now, this sounds excellent, because the modulator automatically generates a vast superposition state. I put in a single qubit with a certain color, and that single photon comes out in a superposition state of all the colors available from the modulator. (If I were to measure the photon, I would still get a single color, though; it's only in the superposition until the measurement happens.) I can also go the other way: if I have my photon spread across all of these different frequencies, I can recombine them so that the photon is only at a single frequency.
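
In the single-photon picture, those same sidebands become the amplitudes of a superposition. As a rough sketch (again my own model, with an assumed modulation depth): for an idealized phase modulator, the amplitude in the n-th frequency bin is the Bessel function J_n of the modulation depth, the probabilities add up to one, and a measurement still returns exactly one color.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

depth = 1.2                    # assumed modulation depth
bins = np.arange(-20, 21)      # sideband index n (0 = the color we put in)
amps = jv(bins, depth)         # amplitude in each frequency bin
probs = np.abs(amps) ** 2

print(f"total probability: {probs.sum():.6f}")   # ~1: the photon isn't lost
# A measurement collapses the superposition and returns a single color.
print("measured sideband:", np.random.choice(bins, p=probs / probs.sum()))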

This all sounds awesome, but it isn't. The problem is that the cascade doesn't stop. Let's say that I have the ability to compute with 30 qubits. This process will create a superposition state that extends far beyond the intended 30, ruining any potential computation. To put it in more concrete terms: if I could limit the cascaded process to only the 30 frequencies that correspond to my qubits, then the probability of success for any operation that I perform on my qubits would be very nearly unity. Every additional frequency that is unintentionally included reduces the probability of success.
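
A back-of-the-envelope version of that argument, using the same Bessel-function toy model and some assumed modulation depths: the deeper the uncontrolled cascade runs, the less of the photon's probability stays inside the 30 or so frequency bins reserved for the computation.

```python
import numpy as np
from scipy.special import jv

window = 15                          # keep roughly 30 bins: n = -15 ... +15
all_n = np.arange(-200, 201)         # generous truncation of the cascade

for depth in (2.0, 10.0, 20.0, 40.0):    # assumed (uncontrolled) modulation depths
    probs = np.abs(jv(all_n, depth)) ** 2
    inside = probs[np.abs(all_n) <= window].sum()
    print(f"depth {depth:5.1f}: probability inside the computational space = {inside:.3f}")
# The deeper the cascade, the more probability leaks into frequencies that
# hold no qubits, and the success of any operation drops accordingly.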

We can simplify this in a way that helps us understand how that works and how to solve the problem. Let's just have a qubit that goes through a modulator and emerges as two qubits at adjacent frequencies. Now, the light field of the two qubits has a phase (the way the peaks and troughs of the two light waves line up) that was fixed by the incoming light and the microwave field. Next, let's send these qubits through a second modulator. Because the phase is unchanged, the two qubits become four. But if we delay one qubit with respect to the other, so that the peaks of one qubit line up with the troughs of the second, then the modulator won't create two extra qubits; instead, it will recombine the two qubits into a single qubit.
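
Here's a small simulation of that idea. It's my own construction with assumed parameters, not the paper's hardware or code. On a truncated lattice of frequency bins, an idealized phase modulator acts as the unitary exp((m/2)(S - S†)), where S shifts light up one bin and m is the modulation depth. Two modulators in a row spread the light further, but flipping the sign of every other bin between them (which is what a half-microwave-cycle delay does) makes the second modulator undo the first and push everything back into a single bin.

```python
import numpy as np
from scipy.linalg import expm

n, depth = 41, 1.5                        # bins and modulation depth (assumed)
S = np.eye(n, k=-1)                       # shift operator: S|k> = |k+1>
U = expm(0.5 * depth * (S - S.T))         # idealized modulator (unitary)

psi0 = np.zeros(n)
psi0[n // 2] = 1.0                        # photon starts in the central bin

once = U @ psi0                           # spread across neighboring bins
twice = U @ U @ psi0                      # same phase: the cascade spreads further
flip = np.diag((-1.0) ** np.arange(n))    # half-cycle delay = sign flip on odd bins
back = U @ (flip @ (U @ psi0))            # second modulator now undoes the first

def spread(psi, frac=0.99):
    """Number of bins needed to hold `frac` of the probability."""
    p = np.sort(np.abs(psi) ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(p), frac) + 1)

print("bins after one modulator:        ", spread(once))
print("bins after two, phases untouched:", spread(twice))
print("bins after two, with the delay:  ", spread(back))   # back to a single bin
print("probability back in the original color:", round(abs(back[n // 2]) ** 2, 6))
```

The sign flip is exactly the peaks-on-troughs condition described above: delaying the light by half a microwave period advances every other frequency component by half a wave.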

This provides a solution. Our qubit is spread across (potentially) hundreds of qubits by the first modulator. To limit our computational space to the first 30, we place a device after the first modulator that shifts the phases of all the individual qubits. We arrange it so that, when they enter the second modulator, the light at frequencies outside of the computational space is destroyed and re-emitted in the computational space.

Determined control

The operation I just described basically takes one qubit and spreads it precisely across a desired computational space. The reverse can also be performed: all the qubits can be recombined into a single qubit. And, depending on the choice of phase, arbitrary combinations can also be obtained.
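
Continuing the toy model from above (still my own sketch, with assumed phase settings): sweeping the phase pattern applied between the two modulators steers the photon between "fully recombined" and various other distributions over the frequency bins, which is the sense in which the choice of phase gives you arbitrary combinations.

```python
import numpy as np
from scipy.linalg import expm

n, depth = 41, 1.5                        # same assumed toy-model parameters
S = np.eye(n, k=-1)
U = expm(0.5 * depth * (S - S.T))
psi0 = np.zeros(n)
psi0[n // 2] = 1.0

for step in (0.0, np.pi / 2, np.pi):      # assumed per-bin phase steps
    mask = np.diag(np.exp(1j * step * np.arange(n)))
    out = U @ (mask @ (U @ psi0))
    print(f"phase step {step:.2f} rad -> P(original color) = {abs(out[n // 2]) ** 2:.3f}")
# A step of pi recombines everything; other settings split the photon differently.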

Now, all of this can be done in more traditional electronic systems, too, so what is special here? Perhaps the biggest difference is that this can be done in a remarkably simple and noise-free manner. The researchers show success probabilities that are as good as those of the best single-ion and single-photon systems. And, unlike in ion-based systems, the operations are performed very quickly and in parallel across the entire qubit space. That gives this combination of modulators and phase control the potential to be quite a powerful technique.

The biggest problem is that this still uses relatively large and clumsy hardware from the telecommunications industry. However, being an optics guy, I am going to make a predictable prediction. Light will win. I think that in all other systems (trapped ions, superconducting qubits, or whatever your favorite system is), the noise will prove to be a limiting factor. Researchers will demonstrate quantum control, but they will find it very difficult to scale up further to systems that can solve real problems.

Light-based systems, on the other hand, mostly face problems of scale in the first place. The very properties that make light so easy to work with as a quantum computing system are the ones that make it difficult to build a big quantum computer. My belief is that those scaling problems will be easier to solve than the noise problems faced by other systems.