Category Archives: Technology

What is quantum supremacy?

The race to make faster and faster computers – whether they are designed to play the newest games or predict the weather – has been a cut-throat business for many decades. But there is another computing race that has also been getting more competitive in the last few years: the race between quantum computers and the machines they are intended to replace.

Quantum computers work quite differently from the regular computers that power the modern world. Regular computers process and store data as a series of binary bits, which can be either zero or one. Quantum computers, on the other hand, process data using qubits (quantum bits), which can be zero, one, or any superposition of the two. By exploiting the immense scope of this additional freedom in how data is encoded, computer scientists have shown that several common computing tasks can be sped up massively. I wrote about some of the possibilities before, so that post might be useful background.
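As a rough illustration (my own sketch, not from the post), a qubit's state can be written as two complex amplitudes whose squared magnitudes give the probabilities of measuring zero or one:

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring gives 0 with probability |alpha|^2
# and 1 with probability |beta|^2.
zero = np.array([1, 0], dtype=complex)   # behaves like a classical "0"
one = np.array([0, 1], dtype=complex)    # behaves like a classical "1"
plus = (zero + one) / np.sqrt(2)         # an equal superposition of both

prob_zero = abs(plus[0]) ** 2
prob_one = abs(plus[1]) ** 2
print(prob_zero, prob_one)  # each 0.5: a 50/50 chance of either outcome
```

A classical bit only ever has one of the two basis states; the continuum of allowed (alpha, beta) pairs is the extra freedom the post refers to.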

Operations to carry out modular arithmetic using four qubits. (Source: IBM)

At the moment, performing a particular task using a quantum computer is generally slower than using a regular (or “classical”) one. In fact, some tasks that quantum computers should be very good at are simply too complicated for existing quantum hardware to attempt. But as the technology progresses, eventually quantum machines might be able to out-perform classical ones.

If it happens, that is the moment at which “quantum supremacy” is established.

One factor in determining when quantum supremacy is reached is obviously the performance of quantum computers. More on that later. But their competitors – the classical computers – are also getting faster. Recently, a big step forward in the ability of classical supercomputers to perform tasks that should be well suited to quantum computers was reported by researchers at IBM.

As classical computers get better, the bar for quantum supremacy is being raised.

It is possible to simulate a quantum computer by running a program on a classical computer. The output of the simulated quantum machine should be exactly what an actual quantum device would create. The problem is that the amount of processing power and memory required to do this goes up very quickly as more qubits are simulated. It had been thought that the maximum number of qubits that could be simulated in a classical supercomputer was roughly fifty. After that, it would simply require too much memory.  So quantum supremacy would be established if a quantum computer with 50 working qubits could be made.
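To see why the memory requirement explodes, note that a brute-force simulation stores one complex amplitude for every one of the 2^n possible bit patterns of n qubits (the IBM result is clever precisely because it avoids holding the full vector at once). A back-of-envelope calculation, assuming 16 bytes per amplitude:

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full n-qubit state vector: 2**n complex amplitudes,
    each stored as two 64-bit floats (16 bytes)."""
    return 2 ** n_qubits * bytes_per_amplitude

# Doubling with every added qubit: 50 qubits already needs about 16 PiB.
for n in (30, 40, 50, 56):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

Each extra qubit doubles the memory, which is why the jump from 50 to 56 simulated qubits is such a significant step.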

What the researchers from IBM have done is to design a program which allows the simulation of 56 qubits. This makes it just that bit harder to get to quantum supremacy!

Intel’s 49-qubit chip. (Source: Intel)

But what about the other side in the race? The hardware for quantum computing is also getting better, and just this week, Intel announced that it now has a chip that contains 49 qubits. This sounds great, but so far it is quite difficult to assess how good it actually is because a lot of the important data is not available.

The number of qubits is an important indicator of the overall performance of a quantum computer, but there are other very important factors. For instance, qubits have to be linked to each other (or, in the quantum-mechanical language, “entangled”) so that they can share quantum information and carry out the multi-qubit operations that are required to exploit their power. It can be hard to entangle two qubits unless they are close to each other, and so, in current devices, often not all the qubits in a chip will be linked. The fewer other qubits each one is linked to, the less efficient calculations become, and so this connectivity has a big impact on performance.

Secondly, controlling a qubit is much more difficult than controlling a classical bit. Usually, delicate pulses of microwave radiation are needed to manipulate them, and this control is imperfect, so errors creep in. Because of this, calculations often have to be repeated several times to make sure that the answer is correct, and not the result of a control error. The higher the error rate, the more times a calculation must be run to be sure that it gives the right answer.
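A toy model (my own back-of-envelope, not from the post) makes the trade-off concrete: if each gate fails independently with some small probability, the chance of a completely error-free run shrinks with circuit size, and on average you need roughly the reciprocal of that chance in repetitions.

```python
def expected_repetitions(gate_error, n_gates):
    """Average runs needed to see one error-free execution of a circuit,
    assuming each of n_gates gates fails independently with gate_error."""
    p_clean = (1 - gate_error) ** n_gates
    return 1 / p_clean

# Hypothetical numbers for a 100-gate circuit:
print(expected_repetitions(0.001, 100))  # roughly 1.1 runs
print(expected_repetitions(0.01, 100))   # roughly 2.7 runs
```

Real devices are messier than independent failures, but the qualitative point stands: error rate and circuit depth together set how many repetitions are needed.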

Finally, there is the decoherence time of the qubits. This one is a bit more technical, but the data stored in a qubit can be lost because the outside world impinges on the qubit, destroying the sensitive quantum information. Because of this, the decoherence time limits how long a quantum computer has to complete a calculation: If it can’t finish in time, it might lose the data it is working on. So, if the decoherence time for the qubits is too short, they are next to useless.
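A crude way to quantify this limit (with hypothetical numbers, not figures from any real chip) is to divide the decoherence time by the time one gate operation takes, giving a budget of sequential operations:

```python
def gate_budget(coherence_time_us, gate_time_us):
    """Rough count of sequential gates that fit inside the coherence time.
    Both arguments are in microseconds."""
    return int(coherence_time_us / gate_time_us)

# e.g. 100 microseconds of coherence and 50-nanosecond (0.05 us) gates:
print(gate_budget(100, 0.05))  # 2000 gates before the data is at risk
```

Any calculation needing more sequential operations than this budget is likely to lose its data partway through.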

And of course, none of these things are problems for simulations using classical computers, because those programs work perfectly!

So far, these numbers are not available for Intel’s new chip. In contrast, IBM have this information freely accessible on GitHub for their machines! Getting this information will be crucial to understanding just how close Intel’s hardware is to establishing quantum supremacy.

But for now, the race is well and truly on!

If you want to read a preprint of the paper reporting the 56 qubit simulation, you can find it here.

Also, if you want to learn more about quantum computing, and even run your own programs on a small quantum computer, check out IBM’s public web site. They’ve got a bunch of neat tutorials and a four-qubit machine on their cloud that you can play with.


How do solar cells and LEDs work?

It almost goes without saying that generating renewable energy is hugely important, and one way of doing that is to make electricity using solar cells. Solar cells turn the energy carried by light into an electrical current, which can directly power a device, be connected into the grid, or be stored in a battery for future use. Understanding how solar cells work depends on the principles of quantum mechanics that I’ve already written about on this blog, so the middle of this long, dark, northern winter is the perfect time to think about it and dream of all that sunlight!

It’s possible to understand how a solar cell works by looking at band structure. I’ve written about band structure before, so feel free to read that post for more details. To briefly recap, quantum particles can only exist in certain allowed states, and the band structure is essentially a map of these states in a crystalline material. Electrons fill up these states starting from the lowest energy, but there are usually more possible states than there are electrons to fill them, meaning that some of the higher-energy states are left empty. The energy of the last filled state (or first empty state) is called the Fermi level.
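As a toy illustration of that recap (with made-up energies, and ignoring spin degeneracy), filling states from the bottom up and reading off the Fermi level might look like:

```python
def fermi_level(state_energies, n_electrons):
    """Fill the lowest-energy states one electron each and return the
    energy of the last filled state (a simplified Fermi level)."""
    filled = sorted(state_energies)[:n_electrons]
    return filled[-1]

# Arbitrary single-particle state energies in eV, purely for illustration:
states = [0.1, 0.5, 0.5, 1.2, 2.0, 2.1, 3.5]
print(fermi_level(states, 4))  # 1.2: states above this stay empty
```

In a real crystal each state holds two electrons of opposite spin and the states form continuous bands, but the filling-from-the-bottom picture is the same.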

Absorption of light

At the fundamental level, light is also made up of quantum particles, called photons. The amount of energy that is carried by a photon is directly related to the wavelength (or colour) of the light. When a photon hits an object it can be absorbed, but the energy that it carries cannot be created or destroyed, so it must be transferred into the material.

It just so happens that the amount of energy that is carried by a photon of visible light is in the same range as the energy spacing between the quantum-mechanical states in lots of crystals. This is why most materials are opaque: they are good at absorbing photons.

Absorption of a photon (green) by an electron. The photon’s energy is transferred to the electron and must match the energy of the transition between the states.

The energy absorbed by the material is accounted for by an electron changing its state and gaining energy in the process. I’ve tried to show this in the sketch above. A photon (green wavy line) plus an electron below the Fermi level (black circles) becomes an electron above the Fermi level. There is also a space left below the Fermi level from which the absorbing electron came (the hollow circle). This space is called a hole.
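The wavelength-to-energy relation mentioned above is E = hc/λ, which is easy to check numerically. The energies for visible light land right around the 1–3 eV range that is typical of the spacing between bands in many semiconductors, which is why the match works so well:

```python
H = 6.626e-34   # Planck's constant (J s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(400))  # violet light: about 3.1 eV
print(photon_energy_ev(700))  # red light:    about 1.8 eV
```

A handy shortcut that falls out of these constants is E ≈ 1240 / λ, with E in eV and λ in nanometres.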

How to capture electrons

A solar cell converts light into electrical current by capturing the excited electron and hole. But this process can be quite difficult to engineer because naturally electrons (and everything else) will tend to rearrange so that they lower their energy. For the electron, this is most obviously done by “falling” back down into the state that it left, while emitting another photon to make sure that energy is conserved. So, the solar cell must have a way of driving the electron and hole apart from each other so that they can be captured before this recombination happens.

One way that can be done is to make a structure that has different band structure in different places, shown in the sketch below. There, a solar cell device is shown on the top, and the band structure of three different regions is below. The left-hand end of the solar cell is made so that it is “p-type” meaning that it has an excess of positively charged holes. Another way of saying this is that the Fermi level is in the valence band. The right-hand end is “n-type” meaning that it has extra negative electrons, or the Fermi level is in the conduction band.

A schematic of a solar cell and the band structure of the different regions.

The electron-hole pairs that are formed in either the p-type or n-type regions will recombine very quickly, but those that are made in the zone in between (called a “p-n junction”, highlighted by the dashed box) might not. Another way in which the excited electron can lose energy is by moving into an unoccupied state in the n-type region (shown by the blue arrow). Simultaneously, electrons in the valence band of the p-type region can lose energy by filling in the hole in the junction region. This process is equivalent to the hole moving to the left, towards the p-type region (red arrow).

This moving charge is exactly the current that the solar cell is designed to generate, and it can be collected by attaching wires (or “contacts”) at the ends (brown areas).


There are a few things that can be optimised to make solar cells more efficient. The obvious one is to use materials which absorb a lot of photons, so finding a material that has energy transitions at lots of different energies (corresponding to a lot of different photon wavelengths) is very important. Then, the recombination time can be increased so that the electrons and holes have more time to move apart from each other. Lastly, using a material that has a good electrical conductivity will allow the electrons to move faster and so get more separation from the holes within the recombination time. This is a massive industry, and even small gains in efficiency can be worth a lot of money!


As a little coda, there is another electronic device which can be understood from this kind of thinking. Instead of absorbing light and creating current, an LED does the opposite: It uses current flowing through it to emit light.

Electrons moving through the p-n junction have to lose energy to get from the n-type region to the p-type region and they can do this by emitting photons – the inverse of the absorption process. By changing the energy spacing between the levels that the electron has to move between, the colour of the emitted light can be changed. Of course, there is a bit of detail which I am leaving out here, but it’s kinda neat that an LED is like a solar cell running in reverse!
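Running the photon-energy relation backwards gives the emitted wavelength for a given energy spacing, using the handy approximation hc ≈ 1240 eV·nm (the example gaps below are illustrative round numbers, not data for specific LED materials):

```python
def emitted_wavelength_nm(band_gap_ev):
    """Wavelength of light emitted when an electron drops across a band
    gap of the given size, via lambda = hc / E (1240 eV nm / E)."""
    return 1240 / band_gap_ev

print(emitted_wavelength_nm(1.9))  # about 653 nm: red light
print(emitted_wavelength_nm(2.7))  # about 459 nm: blue light
```

So picking a material with a wider gap shifts the LED’s colour towards the blue end of the spectrum.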

What next for integrated circuits?

There is currently a big problem in the semiconductor industry. While technological progress and commercial pressure demand that electronics must be made smaller and faster, we are getting increasingly close to the fundamental limits of what can be achieved with current materials.

In the last couple of weeks, two academic papers have come out which describe ways in which we might be able to get around these limitations.

Size matters

A quick reminder about how transistors work. (You can read more detail here.) Transistors are switches which can be either on or off. They have a short conducting channel through which electricity can flow. When they are on, electrical current is flowing through them, and when they are off it is not. They have three connections, one which supplies current (called the source), one which collects it (the drain), and one which controls whether the channel is open or closed.

A rough sketch of a transistor, showing the contact length LC and the gate length LG.

There is something called the International Technology Roadmap for Semiconductors which lays out targets for improvements in transistor technology which companies such as Intel are supposed to aim for. The stages in this plan are called “nodes”, which are described by the size of the transistor. Having smaller transistors is better because you can fit more into a chip and do more computations in a given space.

At the moment, transistors at the 14 nanometre node are being produced. This means that the length of the gate/channel is 14nm (a nanometre is one millionth of a millimetre). According to the roadmap, within a decade or so, the channel length is supposed to be as short as 3nm. But, overall, transistors are rather bigger than this length, in part because of the size of the source and drain contacts. Transistors at the 3nm node will have an overall size of about 40nm.

Carbon nanotube transistors

The first paper I want to mention, which came out in the journal Science, reports the fabrication of a transistor made from different materials, which allows the overall size to be reduced. Instead of using doped silicon for the contacts and channel, these researchers made the channel out of a carbon nanotube, and the contacts from a cobalt-molybdenum alloy.

Carbon nanotubes are essentially sheets of graphene rolled up into cylinders a few nanometres wide. Depending on the details, they can have semiconducting electronic properties that are excellent for making transistors, but they are also interesting for a whole range of other reasons.

Using these materials, the researchers could make a channel/gate region about 11nm long, with two contacts of about 10nm each. Even with some small spacers, the total width of the transistor was only 40nm. This should satisfy the demands of the 3nm node of the roadmap, even though the channel is nearly four times as long as that.
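The arithmetic of that footprint works out as follows (the spacer size is my own inference from the quoted 40nm total, not a number taken from the paper):

```python
channel_nm = 11    # carbon nanotube channel/gate region
contact_nm = 10    # each cobalt-molybdenum contact
spacer_nm = 4.5    # assumed: (40 - 11 - 2 * 10) / 2 on each side

total = channel_nm + 2 * contact_nm + 2 * spacer_nm
print(total)  # 40.0 nm overall footprint
```

The contacts and spacers, not the channel itself, dominate the footprint, which is exactly why shrinking them matters.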

3D chips

The second approach is completely different. At the moment, integrated circuits are mostly made in a single layer, although there are some exceptions to this in the most modern chips. This means that the various parts of the chip that do calculations and store memory can be located quite a long way away from each other. This can lead to a bottleneck as data is moved around to where it is needed.

A group of researchers, publishing in the journal Nature, designed an entirely new architecture for a chip in which the memory, computation, input, and output were all stacked on top of each other. This means that even though the transistors in their device are not particularly small, the data transfer between memory and computation can all happen at the same time. This leads to a huge increase in speed because the bottleneck is now much wider.



The prototype they designed was actually a gas sensor, and a rough idea of its construction is shown in the sketch above. Gas molecules fall on the top layer, which is made up of a large number of individual detectors that can react to single molecules. These sensors can then write the information about their state into the memory directly below them, via vertical connections that are built into the chip itself.

The point of the sensor is to work out what type of gas has fallen on it. To do this, the information stored in the memory from the sensors must be processed by a pattern-recognition algorithm, which involves a lot of calculations. This is done by a layer of transistors placed below the memory and directly connected to it. In the new architecture, the transistors doing the computation have much quicker access to the data they are processing than if it were stored elsewhere on the chip. Finally, below the transistors sits an interface layer, again connected vertically, through which the chip is controlled and outputs the result of the calculation.

The paper shows results for accurate sensing of gaseous nitrogen, lemon juice, rubbing alcohol, and even beer! But that’s not really the crucial point. The big new step is the vertical integration of several components which would otherwise be spaced out on a chip. This allows for much quicker data processing, because the bottleneck of transferring data in and out of memory is drastically reduced.

So, the bottom line here is that simply finding ways to make traditional silicon transistors smaller and smaller is only one way to approach the impending problems facing the electronics industry. It will be a while before innovations like this become the norm for consumer electronics, and perhaps these specific breakthroughs will not be the eventual solution. But, in general, finding new materials to make transistors from and designing clever new architectures are very promising routes forward.