
What is quantum supremacy?

The race to make faster and faster computers – whether they are designed to play the newest games or predict the weather – has been a cut-throat business for many decades. But there is another computing race that has also been getting more competitive in the last few years: the race between quantum computers and the machines they are intended to replace.

Quantum computers work quite differently from the regular computers that power the modern world. Regular computers process and store data as a series of binary bits, which can be either zero or one. Quantum computers, on the other hand, process data using qubits (quantum bits), which can be zero, one, or any combination of the two. By exploiting the extra freedom this gives in how data is encoded, computer scientists have shown that several common computing tasks can be massively sped up. I wrote about some of the possibilities in an earlier post, which might be useful background.
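To make the difference a little more concrete, here is a minimal sketch in Python of how a single qubit's state is usually described mathematically: two complex amplitudes rather than a single 0 or 1. (This is just the textbook description, not how any real quantum chip stores its data.)

```python
import numpy as np

# A classical bit is simply 0 or 1.
classical_bit = 1

# A qubit is described by two complex amplitudes, one for |0> and one for |1>,
# with the squared magnitudes summing to 1. Measuring it gives 0 with
# probability |a|^2 and 1 with probability |b|^2.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)  # an equal superposition

probabilities = np.abs(qubit) ** 2
print(probabilities)  # [0.5 0.5] -- a 50/50 chance of reading out 0 or 1
```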

[Image] Operations to carry out modular arithmetic using four qubits. (Source: IBM)

At the moment, performing a particular task using a quantum computer is generally slower than using a regular (or “classical”) one. In fact, some tasks that quantum computers should be very good at are simply too complicated for existing quantum hardware to attempt. But as the technology progresses, eventually quantum machines might be able to out-perform classical ones.

If it happens, that is the moment at which “quantum supremacy” is established.

One factor in determining when quantum supremacy is reached is obviously the performance of quantum computers. More on that later. But their competitors – the classical computers – are also getting faster. Recently, a big step forward in the ability of classical supercomputers to perform tasks that should be well suited to quantum computers was reported by researchers at IBM.

As classical computers get better, the bar for quantum supremacy is being raised.

It is possible to simulate a quantum computer by running a program on a classical computer. The output of the simulated quantum machine should be exactly what an actual quantum device would produce. The problem is that the amount of processing power and memory required goes up very quickly as more qubits are simulated: for a straightforward simulation, the memory needed roughly doubles with every extra qubit. It had been thought that the maximum number of qubits that could be simulated on a classical supercomputer was roughly fifty. After that, it would simply require too much memory. So quantum supremacy would be established if a quantum computer with 50 working qubits could be made.
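To see why fifty qubits was thought to be the ceiling, here is a back-of-envelope calculation, assuming the simplest kind of simulation, which stores one double-precision complex amplitude (16 bytes) for every possible state of the qubits. (The IBM result described below presumably gets around this by being cleverer than this brute-force approach; the sketch just shows why the naive method hits a wall.)

```python
# Back-of-envelope memory cost of a brute-force "state vector" simulation:
# n qubits have 2**n possible states, each needing one complex amplitude,
# assumed here to take 16 bytes (double precision).
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50, 56):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")

# 30 qubits fit in a laptop (~16 GiB); 50 qubits need ~16 million GiB (16 PiB);
# 56 qubits need 64 times more than that again.
```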

What the researchers from IBM have done is to design a program which allows the simulation of 56 qubits. This makes it just that bit harder to get to quantum supremacy!

[Image] Intel’s 49-qubit chip. (Source: Intel)

But what about the other side in the race? The hardware for quantum computing is also getting better, and just this week, Intel announced that it now has a chip that contains 49 qubits. This sounds great, but so far it is quite difficult to assess how good it actually is because a lot of the important data is not available.

The number of qubits is an important indicator of the overall performance of a quantum computer, but there are other very important factors. First, qubits have to be linked to each other (or, in the quantum-mechanical language, “entangled”) so that they can share quantum information and carry out the multi-qubit operations that are required to exploit their power. It can be hard to entangle two qubits unless they are close to each other, and so, in current devices, often not all the qubits on a chip will be linked. The fewer qubits each one is linked to, the less efficiently calculations can be done, so this connectivity has a big impact on performance.
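As a rough illustration of why connectivity matters, consider a hypothetical chip where the qubits sit in a line and only nearest neighbours are linked. To operate on two distant qubits, the quantum information typically has to be shuffled along the chain first (for example with SWAP operations), and every extra step is another chance for an error. The layout and numbers below are made up for illustration, not taken from any real device.

```python
# Toy model: qubits arranged in a line, with only nearest neighbours linked.
# A two-qubit operation between distant qubits first needs SWAPs to shuffle
# one of them along the chain until the pair are adjacent.
def extra_swaps_on_a_line(qubit_a: int, qubit_b: int) -> int:
    """Number of SWAPs needed to make two qubits on a 1-D chain adjacent."""
    return max(abs(qubit_a - qubit_b) - 1, 0)

print(extra_swaps_on_a_line(0, 1))   # 0  -- already neighbours, no overhead
print(extra_swaps_on_a_line(0, 48))  # 47 -- a long, error-prone detour
```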

Secondly, controlling a qubit is much more difficult than controlling a classical bit. Usually, delicate pulses of microwave radiation are needed to manipulate them, and these operations are not always perfect, so errors creep in. Because of this, calculations often have to be repeated several times to make sure that the answer is correct, and not the result of a control error. The higher the error rate, the more times a calculation must be run to be sure that it gives the right answer.
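A very rough way to see how quickly this bites: if each operation fails independently with some small probability, then the chance that a whole circuit runs cleanly shrinks exponentially with its length, and the number of repetitions needed grows accordingly. The error rates and circuit sizes below are illustrative guesses, not the specs of any real machine.

```python
# Toy model: each gate fails independently with probability `gate_error`, so a
# circuit of `n_gates` gates runs cleanly with probability (1 - p) ** g.
# Roughly the reciprocal of that is how many repetitions are needed to expect
# at least one error-free run.
def expected_repetitions(gate_error: float, n_gates: int) -> float:
    return 1.0 / (1.0 - gate_error) ** n_gates

print(f"{expected_repetitions(0.001, 100):.1f}")   # ~1.1    -- barely noticeable
print(f"{expected_repetitions(0.01, 1000):.0f}")   # ~23000  -- crippling
```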

Finally, there is the decoherence time of the qubits. This one is a bit more technical: the data stored in a qubit can be lost when the outside world impinges on it, destroying the sensitive quantum information. Because of this, the decoherence time limits how long a quantum computer has to complete a calculation: if it can’t finish in time, it might lose the data it is working on. So, if the decoherence time of the qubits is too short, they are next to useless.
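Put crudely, the decoherence time divided by the time each operation takes sets a budget for how many operations can be strung together before the stored information degrades. Again, the numbers below are purely illustrative.

```python
# Crude budget: how many sequential operations fit inside the decoherence time.
# The numbers are illustrative only, not measurements from any real chip.
def max_sequential_gates(decoherence_time_us: float, gate_time_us: float) -> int:
    return int(decoherence_time_us / gate_time_us)

print(max_sequential_gates(decoherence_time_us=100.0, gate_time_us=0.1))  # 1000
print(max_sequential_gates(decoherence_time_us=1.0, gate_time_us=0.1))    # 10
```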

And of course, none of these things are problems for simulations using classical computers, because those programs work perfectly!

So far, these numbers are not available for Intel’s new chip. In contrast, IBM have this information freely accessible on GitHub for their machines! Getting these figures will be crucial to understanding just how close Intel’s device is to establishing quantum supremacy.

But for now, the race is well and truly on!


If you want to read a preprint of the paper reporting the 56 qubit simulation, you can find it here.

Also, if you want to learn more about quantum computing, and even run your own programs on a small quantum computer, check out IBM’s public website. They’ve got a bunch of neat tutorials and a four-qubit machine on their cloud that you can play with.


What next for integrated circuits?

There is currently a big problem in the semiconductor industry. While technological progress and commercial pressure demand that electronics must be made smaller and faster, we are getting increasingly close to the fundamental limits of what can be achieved with current materials.

In the last couple of weeks, two academic papers have come out which describe ways in which we might be able to get around these limitations.

Size matters

A quick reminder about how transistors work. (You can read more detail here.) Transistors are switches which can be either on or off. They have a short conducting channel through which electricity can flow: when they are on, current flows through the channel, and when they are off it does not. They have three connections: one which supplies current (called the source), one which collects it (the drain), and one which controls whether the channel is open or closed (the gate).

[Image] A rough sketch of a transistor, showing the contact length LC and the gate length LG.

There is something called the International Technology Roadmap for Semiconductors, which lays out targets for transistor technology that companies such as Intel are supposed to aim for. The stages in this plan are called “nodes”, and they are described by the size of the transistor. Having smaller transistors is better because you can fit more into a chip and do more computations in a given space.

At the moment, transistors at the 14 nanometre node are being produced. This means that the length of the gate/channel is 14nm (a nanometre is one millionth of a millimetre). According to the roadmap, within a decade or so, the channel length is supposed to be as short as 3nm. But, overall, transistors are rather bigger than this length, in part because of the size of the source and drain contacts. Transistors at the 3nm node will have an overall size of about 40nm.

Carbon nanotube transistors

The first paper I want to mention, which came out in the journal Science, reports the fabrication of a transistor made from different materials, which allows its overall size to be reduced. Instead of using doped silicon for the contacts and channel, these researchers made the channel out of a carbon nanotube, and the contacts from a cobalt-molybdenum alloy.

Carbon nanotubes are essentially sheets of graphene rolled up into cylinders a few nanometres wide. Depending on the details, they can have semiconducting electronic properties that make them excellent for building transistors, but they are also interesting for a whole range of other reasons.

With these materials, they could make a channel/gate region about 11nm long, with two contacts of about 10nm each. Even with some small spacers included, the total width of the transistor was only 40nm. This should satisfy the demands of the 3nm node of the roadmap, even though the channel is nearly four times as long as that.
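The footprint arithmetic works out roughly like this. The spacer width below is my own guess, chosen so that the published figures add up to the quoted ~40nm total; the paper's exact value may differ.

```python
# Rough footprint arithmetic for the nanotube transistor described above.
# The spacer width is an assumption, picked so the published ~40 nm total works out.
contact_nm = 10    # each of the two source/drain contacts
channel_nm = 11    # the carbon-nanotube channel / gate region
spacer_nm = 4.5    # assumed width of each of the two spacers

footprint_nm = 2 * contact_nm + channel_nm + 2 * spacer_nm
print(footprint_nm)  # 40.0 -- roughly the overall size expected at the 3 nm node
```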

3D chips

The second approach is completely different. At the moment, integrated circuits are mostly made in a single layer, although there are some exceptions to this in the most modern chips. This means that the various parts of the chip that do calculations and store memory can be located quite a long way from each other. This can lead to a bottleneck as data is moved around to where it is needed.

A group of researchers, publishing in the journal Nature, designed an entirely new architecture for a chip in which the memory, computation, input, and output are all stacked on top of each other. This means that, even though the transistors in their device are not particularly small, the data transfers between memory and computation can happen in parallel over dense vertical connections rather than queuing up to cross the chip. This leads to a huge increase in speed, because the bottleneck is now much wider.

[Image] A rough sketch of the vertical gas sensor’s construction.
The prototype they designed was actually a gas sensor, and a rough idea of its construction is shown in the sketch above. Gas molecules fall on the top layer, which is made up of a large number of individual detectors that can react to single molecules. These sensors can then write the information about their state into the memory directly below them, via vertical connections that are built into the chip itself.

The point of the sensor is to work out what type of gas has fallen on it. To do this, the information that the sensors have stored in the memory must be processed by a pattern-recognition algorithm, which involves a lot of calculations. This is done by a layer of transistors which sit below the memory and are directly connected to it. In the new architecture, the transistors doing the computation have much quicker access to the data they are processing than if it were stored somewhere else on the chip. Finally, below the transistors, and again connected vertically, sits an interface layer through which the chip is controlled and outputs the result of the calculation.
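To make the flow of data through the stack easier to picture, here is a purely conceptual sketch of the vertical pipeline described above: sensors writing straight down into memory, and the compute layer reading that memory directly. The class, the trivial stand-in classifier, and the labels are all invented for illustration; none of it corresponds to the paper's actual hardware or software.

```python
# Purely conceptual sketch of the vertical data path: sensor layer -> memory
# layer directly beneath -> compute layer -> output. All names are invented.
class StackedChip:
    def __init__(self, classify):
        self.memory = []          # memory layer, sitting directly under the sensors
        self.classify = classify  # pattern-recognition step run by the transistor layer

    def sense(self, detector_readings):
        # Top layer: each detector writes straight down into the memory below it.
        self.memory = list(detector_readings)

    def compute(self):
        # Transistor layer: reads the memory directly above it, with no long trip
        # across the chip, and runs the pattern-recognition algorithm.
        return self.classify(self.memory)

# Usage, with a trivial stand-in classifier just to show the flow of data.
chip = StackedChip(classify=lambda readings: "gas A" if sum(readings) > 5 else "gas B")
chip.sense([0.9, 2.1, 1.4, 1.8])
print(chip.compute())  # "gas A"
```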

The paper shows results for accurate sensing of gaseous nitrogen, lemon juice, rubbing alcohol, and even beer! But that’s not really the crucial point. The big new step is the vertical integration of several components which would otherwise be spaced out on a chip. This allows for much quicker data processing, because the bottleneck of transferring data in and out of memory is drastically reduced.

So, the bottom line here is that simply finding ways to make traditional silicon transistors smaller and smaller is only one way to approach the impending problems facing the electronics industry. It will be a while before innovations like this become the norm for consumer electronics, and perhaps these specific breakthroughs will not be the eventual solution. But, in general, finding new materials to make transistors from and designing clever new architectures are very promising routes forward.