# Why does light exist?

Light is something that we probably take for granted, but have you ever thought about why it should exist at all? From the viewpoint of quantum mechanics, it turns out that it must be there to satisfy a fundamental symmetry of the universe.

Electric and magnetic fields are created by electric charges, and electric charges also move in response to those fields. The electromagnetism that you probably learned at school is one description of this. The electric potential and the electric field are related to each other because the field is given by the gradient (or slope) of the potential. It’s not possible to directly measure a potential, so in some sense it is only the field that has a physical reality. Charged objects moving in the field experience a force which changes their speed or direction of travel.

This description works very nicely for many everyday situations and, on the face of it, there is no obvious role for light here. But, at the scales of individual atoms and electrons, electromagnetism has to be described in the framework of quantum mechanics. It turns out that light is the thing that carries the forces that charged objects feel in electromagnetic fields.

The point of this post is to try and explain why that is.

As I’ve said before, looking at symmetries can help to simplify physics problems. A symmetry is a transformation which leaves the transformed object looking exactly as it did before. For example, a square can be rotated by 90 degrees about its centre point and the result will look the same as the unrotated version. This is an example of a discrete symmetry – there are only four rotations of the square that are symmetry transformations (90, 180, 270, and 360 degrees). In contrast, a circle has a continuous symmetry – you can rotate a circle about its centre point by any angle and end up with a circle that looks just the same as the unrotated one.
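To make this concrete, here is a tiny Python sketch (my own illustration, not part of any physics library): rotating the corners of a square by 90 degrees returns the same set of corners, but rotating by 45 degrees does not.

```python
import math

def rotate(points, angle_deg):
    """Rotate a set of (x, y) points about the origin by angle_deg degrees."""
    a = math.radians(angle_deg)
    return {(round(x * math.cos(a) - y * math.sin(a), 9),
             round(x * math.sin(a) + y * math.cos(a), 9))
            for (x, y) in points}

# Corners of a square centred on the origin.
square = {(1, 1), (-1, 1), (-1, -1), (1, -1)}

print(rotate(square, 90) == square)   # True: 90 degrees is a symmetry
print(rotate(square, 45) == square)   # False: 45 degrees is not
```

For a circle, any angle would pass this test – that is the difference between a discrete and a continuous symmetry.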

This is the moment that things start to get a bit less easy to visualise, because talking about a different kind of symmetry is unavoidable: We have to delve into gauge transformations and gauge symmetries.

To try and explain the concept of a gauge transformation, look at the left-hand picture below. The lines represent potentials at each position. Start with the lower, blue line. It has a field associated with it, which is the slope of the line at each point. To get the red line from the blue line, you have to add on a fixed amount of potential at every position. This is represented by the three dashed black arrows, which are all the same length.

But here is the crucial point: The field associated with the red line is exactly the same as the field associated with the blue line, because the slopes of the two lines are the same at every position. Adding on the extra potential hasn’t changed this.

So, adding a fixed amount to the potential at every position does not change the field at all. And remember, the field is the only thing that is physically observable – we can’t measure the potential. Therefore, this is a continuous symmetry.

The fact that many different potentials give the same field is called “gauge symmetry”. The type of gauge symmetry illustrated here is a “global” symmetry, because the amount of potential that you add or subtract at each point in space is the same (i.e. it’s a global amount).

However, the crucial part for the existence of light comes from a slightly different type of gauge symmetry, called a “local” one. For a local gauge transformation, instead of adding the same amount of potential at every point, you have the freedom to add different amounts of potential in different places. This is shown in the right-hand graph above. The red line is obtained from the blue line by adding different amounts of potential at each position. Notice that the three dashed black arrows are now different lengths.

Electromagnetism in quantum mechanics works under the assumption that this local gauge transformation is also a symmetry. For this to be true, the physical observables, including the field, must stay the same after the local transformation.

But, looking at the graph, it immediately becomes apparent that adding a different amount of potential at different positions to the blue line means that the red line has a different slope. This would change the field, and so this local transformation should not be a symmetry at all!
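A quick numerical sketch of both cases (purely illustrative, using an arbitrary sine-shaped potential): a global shift leaves the slope – the field – untouched, while a position-dependent shift changes it.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
potential = np.sin(x)                 # the "blue line"
field = np.gradient(potential, x)     # the field is the slope of the potential

# Global gauge transformation: add the SAME amount of potential everywhere.
global_shift = potential + 5.0
print(np.allclose(np.gradient(global_shift, x), field))   # True: field unchanged

# Local gauge transformation: add a DIFFERENT amount at each position.
local_shift = potential + 0.5 * x**2
print(np.allclose(np.gradient(local_shift, x), field))    # False: field changes
```

The second result is exactly the problem described above: on its own, a local shift of the potential would not be a symmetry.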

The only way that this can work is if our description of the quantum field is incomplete: In addition to the potential, there must be another part which also feels the effect of the local gauge transformation. When the transformations of the potential and of this additional object are combined, the fields remain unchanged and the local gauge symmetry stays intact.

For the electromagnetic field in quantum mechanics, it turns out that this secondary part is the photon. Looking deeper into the mathematics, we find that the existence of photons explains how charged objects feel a force from the field: they emit and absorb photons.

But photons are also the particles which carry light! So, one answer to the question “why do we have light?” is simply that photons must exist to preserve local gauge symmetry.

I appreciate that this has the whiff of a magic trick to it: Why should local gauge symmetry be something that we insist must exist? Perhaps there is some deep answer to this question that I don’t know, but the best response might simply be that this theory works.

To add more to the “it just works” line of reasoning, local gauge symmetry is also the reason that gluons (which carry the strong interaction) and the W and Z bosons (which carry the weak interaction) exist. In those cases the symmetry operations are more complicated than adding a potential, but the fundamental assumptions and logic are the same. So, this is a powerful concept which seems to be important in describing quantum physics, and gives one explanation for where light comes from.

# What is quantum supremacy?

The race to make faster and faster computers – whether they are designed to play the newest games or predict the weather – has been a cut-throat business for many decades. But there is another computing race that has also been getting more competitive in the last few years: The race between quantum computers and the machines they are intended to replace.

Quantum computers work quite differently from the regular computers that power the modern world. Regular computers process and store data as a series of binary bits, which can be either zero or one. On the other hand, quantum computers process data using qubits (quantum bits) that can be zero, one, or any combination of them both. By utilising the immense scope of this additional freedom in how data is encoded, computer scientists have shown that several common computing tasks can be massively speeded up. I wrote about some of the possibilities before, so that post might be a good background.
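For a flavour of what “zero, one, or any combination of them both” means, here is a toy statevector calculation (a simulation in Python, not real quantum hardware): the Hadamard gate, a standard single-qubit operation, turns a definite zero into an equal superposition.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalised 2-component complex vector.
zero = np.array([1, 0], dtype=complex)   # the state |0>
one = np.array([0, 1], dtype=complex)    # the state |1>

# The Hadamard gate mixes |0> and |1> in equal amounts.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposition = H @ zero

# Measurement probabilities are the squared amplitudes: 50/50 here.
probs = np.abs(superposition) ** 2
print(probs)   # [0.5 0.5]
```

A real quantum computer manipulates many such qubits at once, and that is where the enormous extra freedom comes from.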

At the moment, performing a particular task using a quantum computer is generally slower than using a regular (or “classical”) one. In fact, some tasks that quantum computers should be very good at are simply too complicated for existing quantum hardware to attempt. But as the technology progresses, eventually quantum machines might be able to out-perform classical ones.

If it happens, that is the moment at which “quantum supremacy” is established.

One factor in determining when quantum supremacy is reached is obviously the performance of quantum computers. More on that later. But their competitors – the classical computers – are also getting faster. Recently, a big step forward in the ability of classical supercomputers to perform tasks that should be well suited to quantum computers was reported by researchers at IBM.

As classical computers get better, the bar for quantum supremacy is being raised.

It is possible to simulate a quantum computer by running a program on a classical computer. The output of the simulated quantum machine should be exactly what an actual quantum device would create. The problem is that the amount of processing power and memory required to do this goes up very quickly as more qubits are simulated. It had been thought that the maximum number of qubits that could be simulated in a classical supercomputer was roughly fifty. After that, it would simply require too much memory.  So quantum supremacy would be established if a quantum computer with 50 working qubits could be made.
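The scaling is easy to estimate for yourself: a full simulation must store one complex amplitude for each of the 2^n possible bit patterns of n qubits. A short back-of-envelope sketch:

```python
# Estimate the memory needed to store the full state of an n-qubit machine.
# Each of the 2**n amplitudes is one complex number (16 bytes at double precision).
def state_memory_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 56):
    gib = state_memory_bytes(n) / 1024**3
    print(f"{n} qubits: {gib:,.0f} GiB")
```

At 30 qubits the state fits in a laptop (16 GiB), but 50 qubits already needs about 16 million GiB – roughly 16 petabytes – which is why fifty qubits was thought to be the limit for brute-force simulation. (The IBM result gets around this by not storing the full state at once.)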

What the researchers from IBM have done is to design a program which allows the simulation of 56 qubits. This makes it just that bit harder to get to quantum supremacy!

But what about the other side in the race? The hardware for quantum computing is also getting better, and just this week, Intel announced that it now has a chip that contains 49 qubits. This sounds great, but so far it is quite difficult to assess how good it actually is because a lot of the important data is not available.

The number of qubits is an important indicator of the overall performance of a quantum computer, but there are other very important factors. For instance, qubits have to be linked to each other (or, in the quantum-mechanical language, “entangled”) so that they can share quantum information and carry out the multi-qubit operations that are required to exploit their power. It can be hard to entangle two qubits unless they are close to each other and so, in current devices, often not all the qubits in a chip will be linked. The fewer qubits that each one is linked to, the more inefficient it is to do a calculation, and so this connectivity has a big impact on performance.

Secondly, controlling a qubit is much more difficult than the control of a classical bit. Usually, delicate pulses of microwave radiation are needed to manipulate them, and so they can make errors. Because of this, calculations often have to be repeated several times to make sure that the answer is correct, and not the result of a control error. The higher the error rate, the more times a calculation must be run to be sure that it gives the right answer.
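As a rough illustration of that last point (a simplified model which assumes runs fail independently – real quantum error analysis is more subtle than this), you can estimate how many repetitions are needed for a given per-run success rate:

```python
import math

def runs_needed(p_success, target_confidence=0.999):
    """Rough estimate: how many independent runs are needed so that at
    least one succeeds with the target confidence. The chance that ALL
    n runs fail is (1 - p_success)**n, so solve for n."""
    p_all_fail = 1.0 - target_confidence
    return math.ceil(math.log(p_all_fail) / math.log(1.0 - p_success))

for p in (0.8, 0.5, 0.2):
    print(f"per-run success rate {p:.0%}: {runs_needed(p)} runs")
```

Even with an 80% per-run success rate you need a handful of repetitions, and at 20% it takes dozens – so the error rate feeds directly into how fast the machine really is.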

Finally, there is the decoherence time of the qubits. This one is a bit more technical, but the data stored in a qubit can be lost because the outside world impinges on the qubit, destroying the sensitive quantum information. Because of this, the decoherence time limits how long a quantum computer has to complete a calculation: If it can’t finish in time, it might lose the data it is working on. So, if the decoherence time for the qubits is too short, they are next to useless.

And of course, none of these things are problems for simulations using classical computers, because those programs work perfectly!

So far, these numbers are not available for Intel’s new chip. In contrast, IBM have this information freely accessible on github for their machines! Getting these numbers will be crucial to understanding just how close Intel are to establishing quantum supremacy.

But for now, the race is well and truly on!

If you want to read a preprint of the paper reporting the 56 qubit simulation, you can find it here.

Also, if you want to learn more about quantum computing, and even run your own programs on a small quantum computer, check out IBM’s public web site. They’ve got a bunch of neat tutorials and a four-qubit machine on their cloud that you can play with.

# How do solar cells and LEDs work?

It’s obvious to point out that generating renewable energy is hugely important, and one way of doing that is to make electricity using solar cells. Solar cells turn the energy carried by light into an electrical current, which can directly power a device, be connected into the grid, or be stored in a battery for future use. Understanding how solar cells work depends on the principles of quantum mechanics that I’ve already written about on this blog, so the middle of this long, dark, northern winter is the perfect time to think about it and dream of all that sunlight!

It’s possible to understand how a solar cell works by looking at band structure. I’ve written about band structure before, so feel free to read that post for more details. To briefly recap, quantum particles can only exist in certain allowed states and the band structure is essentially a map of these states in a crystal material. Electrons fill up all these states starting from the lowest energy, but there are usually more possible states than there are electrons to fill them, meaning that some of the higher energy states are not filled. The energy of the last filled state or first empty state is called the Fermi level.

### Absorption of light

At the fundamental level, light is also made up of quantum particles, called photons. The amount of energy that is carried by a photon is directly related to the wavelength (or colour) of the light. When a photon hits an object it can be absorbed, but the energy that it carries cannot be created or destroyed, so it must be transferred into the material.

It just so happens that the amount of energy that is carried by a photon of visible light is in the same range as the energy spacing between the quantum-mechanical states in lots of crystals. This is why most materials are opaque: they are good at absorbing photons.
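You can check this claim with a quick back-of-envelope calculation using the standard formula E = hc/λ: visible photons carry roughly 1.8 to 3.1 electron-volts, which is indeed the scale of the energy spacings (band gaps) in many crystals.

```python
# Photon energy E = h*c / wavelength, converted to electron-volts.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
C_LIGHT = 2.99792458e8      # speed of light, m / s
EV = 1.602176634e-19        # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    return H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9) / EV

for colour, wl in [("violet", 400), ("green", 550), ("red", 700)]:
    print(f"{colour} ({wl} nm): {photon_energy_ev(wl):.2f} eV")
```

Silicon, for instance, has a band gap of about 1.1 eV, so almost the whole visible range carries enough energy to excite its electrons – which is exactly what a solar cell needs.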

The energy absorbed by the material is accounted for by an electron changing its state and gaining energy in the process. I’ve tried to show this in the sketch above. A photon (green wavy line) plus an electron below the Fermi level (black circles) becomes an electron above the Fermi level. There is also a space left below the Fermi level from which the absorbing electron came (the hollow circle). This space is called a hole.

### How to capture electrons

A solar cell converts light into electrical current by capturing the excited electron and hole. But this process can be quite difficult to engineer because naturally electrons (and everything else) will tend to rearrange so that they lower their energy. For the electron, this is most obviously done by “falling” back down into the state that it left, while emitting another photon to make sure that energy is conserved. So, the solar cell must have a way of driving the electron and hole apart from each other so that they can be captured before this recombination happens.

One way that can be done is to make a structure that has different band structure in different places, shown in the sketch below. There, a solar cell device is shown on the top, and the band structure of three different regions is below. The left-hand end of the solar cell is made so that it is “p-type” meaning that it has an excess of positively charged holes. Another way of saying this is that the Fermi level is in the valence band. The right-hand end is “n-type” meaning that it has extra negative electrons, or the Fermi level is in the conduction band.

The electron-hole pairs that are formed in either the p-type or n-type regions will recombine very quickly, but those that are made in the zone in between (called a “p-n junction”, highlighted by the dashed box) might not. Another way in which the excited electron can lose energy is by moving into an unoccupied state in the n-type region (shown by the blue arrow). Simultaneously, electrons in the valence band of the p-type region can lose energy by filling in the hole in the junction region. This process is equivalent to the hole moving to the left, towards the p-type region (red arrow).

This moving charge is exactly the current that the solar cell is designed to generate, and it can be collected by attaching wires (or “contacts”) at the ends (brown areas).

### Efficiency

There are a few things that can be optimised to make solar cells more efficient. The obvious thing is to use materials which absorb a lot of photons, so finding a material that has energy transitions at lots of different energies (corresponding to a lot of different photon wavelengths) is very important. Then, the recombination time can be increased so that the electrons and holes have more time to move apart from each other. Lastly, using a material that has a good electrical conductivity will allow the electrons to move faster and so can get more separation from the holes within the recombination time. This is a massive industry, and even small gains in efficiency can be worth a lot of money!

### LEDs

As a little coda, there is another electronic device which can be understood from this kind of thinking. Instead of absorbing light and creating current, an LED does the opposite: It uses current flowing through it to emit light.

Electrons moving through the p-n junction have to lose energy to get from the n-type region to the p-type region and they can do this by emitting photons – the inverse of the absorption process. By changing the energy spacing between the levels that the electron has to move between, the colour of the emitted light can be changed. Of course, there is a bit of detail which I am leaving out here, but it’s kinda neat that an LED is like a solar cell running in reverse!
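The same formula run in reverse gives the LED’s colour from the energy spacing: λ = hc/E. A small sketch (the band-gap values here are approximate textbook numbers, and real LEDs are engineered in rather more detail):

```python
H_PLANCK = 6.62607015e-34   # Planck constant, J s
C_LIGHT = 2.99792458e8      # speed of light, m / s
EV = 1.602176634e-19        # joules per electron-volt

def emitted_wavelength_nm(gap_ev):
    """Wavelength of the photon emitted when an electron drops across a
    gap of gap_ev electron-volts."""
    return H_PLANCK * C_LIGHT / (gap_ev * EV) * 1e9

# Approximate band gaps (eV) for some common LED materials.
for material, gap in [("GaAs", 1.42), ("GaP", 2.26), ("GaN", 3.4)]:
    print(f"{material}: gap {gap} eV -> {emitted_wavelength_nm(gap):.0f} nm")
```

GaAs at about 1.4 eV emits in the infrared (~870 nm), GaP in the green, and GaN in the blue/near-ultraviolet – changing the gap changes the colour.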

# What is the holographic correspondence?

One of the hardest things to describe in theoretical physics is what happens when lots of particles interact with each other. Essentially, it is impossible to solve this problem exactly, and so the approaches that are currently used rely on several types of approximation.

What I want to describe is how, maybe, approaches in String Theory might be used to solve some of these really important “hard” problems. There’s no way that I can explain all the details (honestly, I don’t understand them!) but hopefully this will be a picture of how weird, esoteric, and very mathematical concepts can say something useful about reality.

This approach is generically called “holography” for reasons that will become clear(er) later.

One of the approximate approaches to describing interacting particles that has been used to great effect is called “perturbation theory”. This applies when the interactions between the particles are relatively weak. How it works could be a whole post in itself, but perhaps for now it is enough to say that the existence of perturbation theory makes some problems with weak interactions “easy” in the sense that they can be approximately solved.

Crucially, it turns out that many of the complicated string theories that try to describe how quantum gravity works have interactions between particles which can be treated in perturbation theory.

The point of holography is that it might be possible to discover a dictionary or a way of translating between the “easy” string theory and a “hard” theory with strong interactions. Using this dictionary, it is possible to start from the “hard” theory, translate the calculation into the “easy” gravity analogue, do the calculation, and translate the results back to the “hard” context.

The diagram above is a sketch of how to visualise this process. The “easy” gravity theory exists in a bulk with a certain number of dimensions, whereas the “hard” theory lives in a space which is one dimension smaller, at the edge (or “boundary”). This is where the term holography comes from: The physical theory is a hologram which is projected from the bulk like R2D2’s message from Princess Leia.

Most intriguingly, when the “hard” theory has a temperature above absolute zero (which all physical materials must have) the gravity theory contains a black hole at its centre which has an event horizon.

So, the calculation for the complicated experimental quantity that you are interested in on the boundary can be translated through the bulk to the event horizon of the black hole. There, the properties of the theory on the boundary get converted into the properties of space-time near the black hole. This is what the dictionary does. Perturbation theory can then be used to get an approximate answer in that context. Finally, the answer is moved back through the bulk to the boundary where it can be interpreted in the original context.

Of course, the technical details of how to actually do this mathematically are very complicated, but there is one well-understood example of this process.

Quarks are fundamental particles and can be glued together to make protons and neutrons. The particles which do the glueing are called gluons. The gluons and the quarks are strongly interacting and so they fall into the category of “hard” theories. But, there is a well-defined correspondence between a supersymmetric particle theory which lives in three spatial dimensions and one time dimension (so, four in total) and an “easy” string theory which lives in ten dimensions (effectively a five-dimensional curved space-time, with the remaining dimensions curled up). This correspondence has been used to derive results which would otherwise not be possible.

One of the current questions for people who work on holography is whether this is just a fortuitous specific case, or whether these correspondences are more general.

In condensed matter, there are also strongly interacting materials which theorists find very difficult to describe. One really important example is the high temperature superconductor materials.

The question is whether a holographic correspondence can be found for a theory that makes predictions about these materials. To put that another way: is there a higher-dimensional, gravity-like theory which gives a theory for a superconductor as its hologram?

A lot of people are looking at this question at the moment.

There are some encouraging things which have been done already. For example, the materials which go superconducting at low temperature also have weird behaviour at higher temperatures where they don’t superconduct. These properties have been calculated within the gravity theory, and the results show some features similar to those seen in experiments.

But there is also a lot that is not known yet. For example, it is very difficult to include the effects of the underlying material crystal, or the existence of the quantum-mechanical spin of the particles. Both of these details will be important for designing new materials which sustain superconductivity at even higher temperatures.

This is really a field which is still in its infancy, but the underlying idea behind it is intriguing: if the theorists working on it can progress to the point where the approach makes predictions, it would be very exciting indeed.

# How do you measure the quantum states of a material?

I’ve talked a lot on this blog about how understanding the quantum states of a material can be helpful for working out its properties. But is it possible to directly measure these states in an experiment? And what sort of equipment is needed to do so? I’ll try to explain here.

First, a quick recap. The band structure is like a map of the allowed quantum states for the electrons in a material. The coordinates of the map are the momentum of the electron, and at each point there are a series of energy levels which the electron can be in. The energy states close to the “Fermi energy” largely determine things like whether the material can conduct electricity and heat, absorb light, or do interesting magnetic things.

There are various ways that the band structure can be investigated. Some of them are quite indirect, but last week, I visited an experimental facility in the UK where they can do (almost) direct measurements of the band structure using X-rays.

The technical name for this technique is “angle-resolved photoemission spectroscopy”, or ARPES for short. Let’s break that down a bit. Spectroscopy just means that it’s a way of measuring the spectrum of something. In this case, it’s the electrons in the material. I’ll come back to the “angle-resolved” part in a minute, but the crucial thing to explain here is what photoemission is.

The sketch above shows a hypothetical band structure. When light is shone on a material, the photons (green wavy arrows) that make up the beam can be absorbed by one of the electrons in the filled bands below the Fermi energy. When this happens, the energy and momentum of the photon is transferred into the electron.

This means that the electron must change its quantum state. But the band structure gives the map of the only allowed states in the material, so the electron must end up in one of the other bands. In the left-hand picture, the energy of the photon is just right for the electron at the bottom of the red arrow to jump to an unfilled state above the Fermi energy. This is called “excitation”.

But in the right-hand picture, the energy of the photon is larger (see the thicker line and bigger wiggles on the green arrow) so there is no allowed energy level for the excited electron to move to. Instead, the electron is kicked completely out of the material. To put that another way, the high-energy photons cause the material to emit electrons. This is photoemission!

The crucial part about ARPES is that the emitted electrons retain information about the quantum state that they were in before they absorbed the photons. In particular, the photons carry almost no momentum, so the momentum of the electron can’t really change during the emission process. But also, energy must be conserved, so the energy of the emitted electron must be the energy of the photon, plus the energy of the quantum state that the electron was in before emission.

So, if you can catch the emitted electrons, and measure their energy and momentum, then you can recover the band structure! The angle-resolved part in the ARPES acronym means that the momentum of the electrons is deduced from what angle they are emitted at.
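Schematically, the reconstruction uses energy conservation and the emission angle. A simplified sketch of the standard ARPES relations (the work function value here is an assumed typical number, and real analysis includes further corrections):

```python
import math

M_E = 9.1093837015e-31      # electron mass, kg
HBAR = 1.054571817e-34      # reduced Planck constant, J s
EV = 1.602176634e-19        # joules per electron-volt

def arpes_state(photon_ev, kinetic_ev, angle_deg, work_function_ev=4.5):
    """Recover (binding energy in eV, in-plane momentum in 1/Angstrom) of
    the electron's original state from the measured emitted electron.
    Energy conservation: E_kinetic = E_photon - work_function - E_binding.
    The in-plane momentum is set by the emission angle."""
    binding_ev = photon_ev - work_function_ev - kinetic_ev
    p = math.sqrt(2 * M_E * kinetic_ev * EV)                    # momentum, kg m/s
    k_parallel = p * math.sin(math.radians(angle_deg)) / HBAR   # 1/m
    return binding_ev, k_parallel * 1e-10                       # -> 1/Angstrom

e_b, k = arpes_state(photon_ev=100.0, kinetic_ev=94.0, angle_deg=15.0)
print(f"binding energy {e_b:.1f} eV, k_parallel {k:.2f} per Angstrom")
```

Scanning over emission angle and kinetic energy builds up the whole (energy, momentum) map – the band structure.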

But what does this look like in practice? Fortunately, a friendly guide from Diamond showed me around and let me take pictures.

The upper-left picture is an outside view of the Diamond facility. (The cover picture for this blog entry is an aerial view.) It’s a circular building, although this picture is taken from close enough that this might be hard to see. This gives a sense of scale for the place!

Inside is a machine called a synchrotron. They didn’t let us go near this, so I don’t have any pictures, but it is a circular particle accelerator which keeps bunches of electrons flowing around it very, very fast. As they go around, they release a lot of X-ray photons which can be captured and focused. (There is a really cool animation of this on their web site.) The X-rays come down a “beam line” and into one of many experimental “hutches” which stand around the outside of the accelerator.

The upper-right picture shows the ARPES machine inside the main hutch of beamline I05. Most of the stuff you can see at the front is designed for making samples under high vacuum, which can then be transferred straight into the sample chamber without exposure to air.

The lower-left picture is behind the machine, where the beam line comes in. It’s kinda hard to see the metal-coloured pipe, so I’ve drawn arrows. The lower-right picture shows where the real action happens. The sample chamber is near the bottom (there is a window above it which allows the experimentalists to visually check that the sample is okay), and you can just about see the beam line coming in from behind the rack in the foreground.

The X-rays come into the sample chamber from the beam line, strike the sample, and the emitted electrons are funnelled into the analyser, which is the big metallic hemisphere towards the right of the picture. The spherical shape is important, because the momentum of the electrons is detected by how much they are deflected by a strong electric field inside the analyser. This separates the high momentum electrons from the low momentum ones, in a similar way to how a centrifuge separates heavy items from light ones.

And what can you get after all of this? The energy and momentum of all the electrons is recorded, and pretty graphs can be made!

Above is a picture that I stole from the Diamond web site. On the left is a theoretical calculation for the band structure of a material called tungsten diselenide (WSe2). On the right is the ARPES data. The colour scheme shows the intensity of the photoemitted electrons. As you can see, the prediction and data match very well. After all the effort of building a massive machine, it works! Hooray science!

# What next for integrated circuits?

There is currently a big problem in the semiconductor industry. While technological progress and commercial pressure demand that electronics must be made smaller and faster, we are getting increasingly close to the fundamental limits of what can be achieved with current materials.

In the last couple of weeks, two academic papers have come out which describe ways in which we might be able to get around these limitations.

### Size matters

A quick reminder about how transistors work. (You can read more detail here.) Transistors are switches which can be either on or off. They have a short conducting channel through which electricity can flow. When they are on, electrical current is flowing through them, and when they are off it is not. They have three connections, one which supplies current (called the source), one which collects it (the drain), and one which controls whether the channel is open or closed.

There is something called the International Technology Roadmap for Semiconductors which lays out targets for improvements in transistor technology which companies such as Intel are supposed to aim for. The stages in this plan are called “nodes”, which are described by the size of the transistor. Having smaller transistors is better because you can fit more into a chip and do more computations in a given space.

At the moment, transistors at the 14 nanometre node are being produced. This means that the length of the gate/channel is 14nm (a nanometre is one millionth of a millimetre). According to the roadmap, within a decade or so, the channel length is supposed to be as short as 3nm. But, overall, transistors are rather bigger than this length, in part because of the size of the source and drain contacts. Transistors at the 3nm node will have an overall size of about 40nm.

### Carbon nanotube transistors

The first paper I want to mention, which came out in the journal Science, reports the fabrication of a transistor made out of different materials, which allows the overall size to be reduced. Instead of using doped silicon for the contacts and channel, these researchers made the channel out of a carbon nanotube, and the contacts from a cobalt-molybdenum alloy.

Carbon nanotubes are pretty much graphene which has been rolled up into a cylinder which is a few nanometres wide. Depending on the details, they can have semiconducting electronic properties which are excellent for making transistors from, but they also are interesting for a whole range of other reasons.

By doing this, they could make a channel/gate region about 11nm long, with two contacts that were about 10nm each. Even with some small spacers, the total width of the transistor was only 40nm. This should satisfy the demands of the 3nm node of the roadmap, even though the channel is nearly four times as long as that.

### 3D chips

The second approach is completely different. At the moment, integrated circuits are mostly made in a single layer, although there are some exceptions to this in the most modern chips. This means that the various parts of the chip that do calculations and store memory can be located quite a long way away from each other. This can lead to a bottleneck as data is moved around to where it is needed.

A group of researchers, publishing in the journal Nature, designed an entirely new architecture for a chip in which the memory, computation, input, and output were all stacked on top of each other. This means that even though the transistors in their device are not particularly small, the data transfer between memory and computation can all happen at the same time. This leads to a huge increase in speed because the bottleneck is now much wider.

The prototype they designed was actually a gas sensor, and a rough idea of its construction is shown in the sketch above. Gas molecules fall on the top layer, which is made up of a large number of individual detectors that can react to single molecules. These sensors can then write the information about their state into the memory directly below them, via vertical connections that are built into the chip itself.

The point of the sensor is to work out what type of gas has fallen on it. To do this, the information stored in the memory from the sensors must be processed by a pattern recognition algorithm, which involves a lot of calculations. This is done by a layer of transistors placed below the memory and directly connected to it. In the new architecture, the transistors doing the computation have much quicker access to the data they are processing than if it were stored somewhere else on the chip. Finally, below the transistors, again connected vertically, sits an interface layer through which the chip is controlled and through which it outputs the result of the calculation.

The paper shows results for accurate sensing of gaseous nitrogen, lemon juice, rubbing alcohol, and even beer! But that’s not really the crucial point. The big new step is the vertical integration of several components which would otherwise be spaced out on a chip. This allows for much quicker data processing, because the bottleneck of transferring data in and out of memory is drastically reduced.

So, the bottom line here is that simply finding ways to make traditional silicon transistors smaller and smaller is only one way to approach the impending problems facing the electronics industry. It will be a while before innovations like this become the norm for consumer electronics, and perhaps these specific breakthroughs will not be the eventual solution. But, in general, finding new materials to make transistors from and designing clever new architectures are very promising routes forward.

# What is high temperature superconductivity?

It was March, 1987. The meeting of the Condensed Matter section of the American Physical Society. It doesn’t sound like much, but this meeting has gone down in history as the “Woodstock of Physics”. Experimental results were shown which proved that superconductivity is possible at a much higher temperature than had ever been thought possible. This result came completely out of left field and captured the imagination of physicists all over the world. It has been a huge area of research ever since.

But why is this a big deal? Superconductors can conduct electricity without any resistance, so it costs no energy and generates no heat. This is different from regular metal wires which get hot and lose energy when electricity passes through them. Imagine having superconducting power lines, or very strong magnets that don’t need to be super-cooled. This would lead to huge energy savings which would be great for the environment and make a lot of technology cost less too.

I guess it makes sense to clarify what “high temperature” means in this context. Most superconductors behave like normal metals at regular temperatures, but if they are cooled far enough (below the “critical temperature”, which is usually called Tc) then their properties change and they start to superconduct. Traditional superconducting materials have a Tc in the range of a few Kelvin, so only a few degrees above absolute zero. These new “high temperature” materials have their Tc at up to 120 Kelvin, so substantially warmer, but still pretty cold by everyday standards. (For what it’s worth, 120K is -153°C.)
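The conversion quoted above is easy to check, since Celsius is just Kelvin shifted by 273.15 degrees:

```python
# Check of the temperature figures quoted above. The Tc values are the
# rough ranges mentioned in the text: a few Kelvin for traditional
# superconductors, up to ~120 K for the high-Tc cuprates.

def kelvin_to_celsius(t_k):
    return t_k - 273.15

for tc in (4, 77, 120):
    print(f"Tc = {tc} K = {kelvin_to_celsius(tc):.0f} degrees C")
```

The 77K entry is there because that is the boiling point of liquid nitrogen: a big practical bonus of the high-Tc materials is that they can be cooled with cheap liquid nitrogen rather than expensive liquid helium.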

But, if we could understand how this ‘new’ type of superconductivity works, then maybe we could design materials that superconduct at everyday temperatures and make use of the technological revolution that this would enable.

Unfortunately, the elephant in the room is that, even after thirty years of vigorous research, physicists still don’t really understand why and how this high Tc superconductivity happens.

I have written about superconductivity before, but that was the old “low temperature” version. What happens in a superconductor is that electrons pair up into new particles called “Cooper pairs”, and these particles can move through the material without experiencing collisions which slow them down. In the low temperature superconductors, the glue that holds the pairs together is made from vibrations of the crystal structure of the material itself.

But this mechanism of lattice vibrations (phonons) is not what happens in the high temperature version.

To explain the possible mechanisms, it’s important to see the atomic structure of these materials. To the right is a sketch of one high Tc superconductor, called bismuth strontium calcium copper oxide, or BSCCO (pronounced “bisco”) for short. The superconducting electrons are thought to live in the copper oxide (CuO2) layers.

One likely scenario is that instead of the lattice vibrations gluing the Cooper pairs together, it is fluctuations of the spins of the electrons that does it. Of course, electrons can interact with each other because they are electrically charged (and like charges repel each other), but spins can interact too. This interaction can either be attractive or repulsive, strong or weak, depending on the details.

In this case, it is thought that the spins of the electrons in the copper atoms are all pointing in nearly the same direction. But these spins can rotate a bit due to temperature or random motion. When they do this, it changes the interactions with other nearby spins and can create ripples in the spins on the lattice. In an analogy with the phonons that describe ripples in the positions of the atoms, these spin ripples can be described as particles called magnons. It is these that provide the glue: Under the right conditions, they can cause the electrons to be attracted to each other and form the Cooper pairs.

Another possibility comes from the layered structure. If electrons in the CuO2 layers can hop to the strontium or calcium layers, and then hop back again at a different point in space, this could induce correlations between the electrons that would result in superconductivity. (I appreciate that it’s probably far from obvious why this would work, but unfortunately, the explanation is too long and technical for this post.)

In principle, these two different mechanisms should give measurable effects that are slightly different from each other, because the symmetries associated with the effective interactions are different. This would allow experimentalists to tell them apart and make a conclusive statement about what is going on. Naturally, these experiments have been done, but so far there is no consensus within the results. Some experiments show symmetry properties that suggest the magnons are important, others suggest the interlayer hopping is. Personally, I tend to think that the magnons are the more likely reason, but it’s really difficult to know for sure and I could well be wrong.

So, we’re kinda stuck and the puzzle of high Tc superconductivity remains one of condensed matter’s most tantalising and most embarrassing enigmas. We know a lot more than we did thirty years ago, but we are still a very long way from having superconductors that work at everyday temperatures.

# How does a transistor work?

The world would be a very different place if the transistor had never been invented. They are everywhere. They underpin all digital technology, they are the workhorses of consumer electronics, and they can be bewilderingly small. For example, the latest Core i7 microchips from Intel have over two billion transistors packed onto them.

But what are they, and how do they work?

In some ways, they are beguilingly simple. Transistors are tiny switches: they can be on or off. When they are on, electric current can flow through them, but when they are off it can’t.

The most common way this is achieved is in a device called a “field effect transistor”, or FET. It gets this name because a small electric field is used to change the device from its conducting ‘on’ state to its non-conducting ‘off’ state.

At the bottom of the transistor is the semiconductor substrate, which is usually made out of silicon. (This is why silicon is such a big deal in computing.) Silicon is a fantastic material because, by adding a few atoms of another element to its crystal, it can be given extra positive or negative charge carriers. To explain why, we need to turn to chemistry! A silicon atom has 14 electrons, but ten of these are bound tightly to the atomic nucleus and are very difficult to move. The other four are much more loosely bound and are what determines how it bonds to other atoms.

When a silicon crystal forms, the four loose electrons from each atom form bonds with the electrons of nearby atoms, and the geometry of these bonds is what makes the regular crystal structure. However, it is possible to take out a small number of the silicon atoms and replace them with some other type of atom. If this is done with an atom like phosphorus or arsenic, which has five loose electrons, then four of them are used to make the chemical bonds and one is left over. This left-over electron is free to move around the crystal easily, giving the crystal a mobile negative charge carrier. In the physics language, the silicon has become “n-doped”.

But, if some silicon atoms are replaced by something like boron or aluminium, which has only three loose electrons, the atom has to ‘borrow’ an extra electron from the rest of the crystal. The missing electron behaves like a mobile positive charge carrier, called a “hole”. This is called “p-doped”.
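The electron bookkeeping in the last two paragraphs can be written out as a tiny sketch. The valence counts are real chemistry; the classification logic is just an illustration of the counting argument, not a model of real doping physics:

```python
# Toy version of the doping bookkeeping described above: each atom
# contributes its "loose" (valence) electrons, silicon's bonds need
# four per atom, and any surplus or deficit becomes a mobile carrier.

VALENCE = {"Si": 4, "P": 5, "As": 5, "B": 3, "Al": 3}

def doping_type(dopant):
    surplus = VALENCE[dopant] - VALENCE["Si"]
    if surplus > 0:
        return "n-doped"   # a left-over electron is free to move
    if surplus < 0:
        return "p-doped"   # a borrowed electron leaves a mobile hole
    return "undoped"

print(doping_type("P"))    # phosphorus donates an electron
print(doping_type("B"))    # boron creates a hole
```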

Okay, so much for the chemistry, now back to the transistor itself. Transistors have three connections to the outside world, which are usually called the source, drain, and gate. The source is the input for electric current, the drain is the output, and the gate is the control which determines if current can flow or not.

The source and drain both connect to a small area of n-doped silicon (i.e. with extra electrons), which can provide or collect the electric current that will flow through the switch. The central part of the device, called the “channel”, is p-doped, which means that there are not enough electrons in it.

Now, here’s where the quantum mechanics comes in!

A while back, I described the band structure of a material. Essentially, it is a map of the quantum mechanical states of the material. If there are no states in a particular region, then electrons cannot go there. The “Fermi energy” is the energy at which states stop being filled. I’ve drawn a rough version of the band structure of the three regions in the diagram below. In the n-doped regions, the states made by the extra electrons are below the Fermi energy and so they are filled. But in the p-doped channel, the unfilled extra states are above the Fermi energy. This makes a barrier between the source and drain and stops electrons from moving between the two.

Now for the big trick. When a voltage is applied to the gate, it makes an electric field in the channel region. The extra energy that electrons acquire in this field shifts the quantum states in the channel to different energies, as shown on the right-hand side of the band diagrams. The extra states are now moved below the Fermi energy, but the silicon can’t create more electrons to fill them, so these unfilled states make a path through which the extra electrons in the source can move to the drain. The barrier is removed: applying the electric field to the channel region opens up the device to carrying current.
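Here is the band picture reduced to a cartoon. The energies are in arbitrary units and the numbers are invented for illustration: the channel’s empty states start above the Fermi energy, and the gate voltage rigidly shifts them downwards.

```python
# Cartoon of the band argument above, in arbitrary energy units.
# The numbers are invented for illustration only.

FERMI_ENERGY = 0.0
CHANNEL_STATES = 0.3   # bottom of the empty states in the p-doped channel

def conducts(gate_shift):
    """The transistor is 'on' once the gate field has pulled the
    channel's empty states below the Fermi energy."""
    return CHANNEL_STATES - gate_shift < FERMI_ENERGY

print(conducts(0.0))   # False: barrier in place, device is off
print(conducts(0.5))   # True: states below the Fermi energy, device is on
```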

In the schematic of the device above, the left-hand sketch shows the transistor in the off state with no conducting channel in the p-doped region. The right-hand sketch shows the on-state, where the gate voltage has induced a conducting channel near the gate.

So, that’s how a transistor can turn on and off. But it’s a long leap from there to the integrated circuits that power your phone or laptop. Exactly how those microchips work is another subject, but briefly, the output from the drain of one transistor can be linked to the source or the gate of another one. This means that the state of a transistor can be used to control the state of another transistor. If they are put together in the right way, they can process information.
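The idea in the last paragraph, that wiring transistors together lets them process information, can be sketched in an idealised way: treat each FET as a perfect switch and combine two of them to make a NAND gate, from which any logic circuit can in principle be built. This is a logical abstraction, not a circuit simulation.

```python
# Idealised sketch of transistors as logic. A real NAND gate uses two
# n-type FETs in series pulling the output low; here we just model the
# switching behaviour.

def fet(gate):
    """An idealised n-type FET: conducts only when the gate is high."""
    return gate

def nand(a, b):
    # The output is pulled low only if BOTH series transistors conduct.
    return not (fet(a) and fet(b))

def not_gate(a):
    # Tying both inputs of a NAND together gives an inverter.
    return nand(a, a)

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", nand(a, b))
```

Because NAND is "functionally complete", chains of these switches really can compute anything a computer can, which is the leap from a single switch to a microchip.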

# What is peer review?

Chances are you’ve heard of peer review. The media often use it as an adjective to indicate a respectable piece of research. If a study has not been peer reviewed then this is taken as a shorthand that it might be unreliable. But is that a fair way of framing things? How does the process of peer review work? And does it do the job?

So, first things first – what is peer review? Essentially, it’s a stamp of approval by other experts in the field. Knowledgeable people will read a paper before it’s published and critique it. Usually, a paper won’t be published unless these experts are fairly satisfied that the paper is correct and measures up to the standards of importance or “impact” that the particular journal requires.

The specifics of the peer review process vary between different fields and different journals, but here is how things typically go in physics. Usually, a journal editor will send the paper to two or more people. These could be high-profile professors or PhD students or anyone in between, but they are almost always people who work on similar topics.

The reviewers then read the paper carefully, and write a report for the editor, including a recommendation of whether the paper should be published or not. Often, they will suggest modifications to make it better, or ask questions about parts that they don’t understand. These reports are sent to the authors, who then have a chance to respond to the inevitable criticisms of the referees, and resubmit a modified manuscript.

After resubmission, many things can happen. If the referees’ recommendations are similar, then the editor will normally send the new version back to them so they can assess whether their comments and suggestions have been adequately addressed. They will then write another report for the editor.

But if the opinions of the referees are split, then the editor might well ask for a third opinion. This is the infamous Reviewer 3, and their recommendation is often crucial. In fact, it’s so crucial that the very existence of Reviewer 3 has led to internet memes, a life on Twitter (see #reviewer3), and mentions in countless satires of academic life, including a particularly excellent one by Kelly Oakes of BuzzFeed (link).

Once the editor has gathered all the reports and recommendations, they will make a final decision about whether the paper will be published. For the authors, this is the moment of truth!

When it works, this can be a constructive process. I’ve certainly had papers that have been improved by the suggestions and feedback. But the process does not always work well. For example, not all journals carry out the review process with complete rigour. The existence of for-profit commercial journals which charge authors a publication fee is a subject for another day, but in those journals it is easy to believe that there is pressure on the editors to maximise the number of papers they accept. Then it’s only natural that review standards may not be well enforced.

And the anonymity that reviewers enjoy can lead to bad outcomes. By definition, reviewers have to be working in a similar field to the authors of the paper otherwise they would not be sufficiently expert to judge the merits of the work. So sometimes a paper is judged by competitors. There are many stories of papers being deliberately slowed down by referees, perhaps while they complete their own competing project. Or of times when a referee might stubbornly refuse to recommend publication in spite of good arguments. And there are even stories of outright theft of ideas and results during review.

Finally, there is also the possibility of straightforward human error. Two or three reviewers is not a huge number and so it can be hard to catch the mistakes. And not all reviewers are completely suitable for the papers they read. Review work is almost always done on a voluntary basis and so it can be hard for editors to find a sufficient number of people who are willing to give up their time.

I can think of a few times when I have not really understood the technical aspects of a paper, or I have not been sufficiently close to the field to judge whether the work is important. Perhaps I should have declined to review those manuscripts. Or maybe it’s okay because the paper should not be published if it cannot convince someone in an adjacent field of the merits of the work. There are arguments both ways.

The fact is that sometimes things slip through the net. Papers can be published with errors, or even worse, with fabricated data or plagiarism. There is no foolproof system for avoiding this, so in my opinion, robust post-publication review is important too. Exactly how to implement that is a tricky business though.

But, to sum up, my opinion is that peer review is an important – but not infallible – part of the academic process. Just because a paper has passed through this test does not automatically mean that it is correct or the last word on a subject, but it is a mark in its favour.

# Topology and the Nobel Prize

You may have seen that the Nobel Prize for Physics was awarded this week. The Prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter”, which is a bit of a mouthful. Since this is an area that I have done a small amount of work in, I thought I would try to explain what it means.

You might have seen a video where a slightly nutty Swede talks about muffins, donuts, and pretzels. (He’s my boss, by the way!) The number of holes in each type of pastry defined a different “topology” of the lunch item. But what does that have to do with electrons? This is the bit that I want to flesh out. Then I’ll give an example of how it might be a useful concept.

### What is topology?

In a previous post, I talked about band structure of crystal materials. This is the starting point of explaining these topological phases, so I recommend you read that post before trying this one. There, I talked about the band structure being a kind of map of the allowed quantum states for electrons in a particular crystal. The coordinates of the map are the momentum of the electron.

Each of those quantum states has a wave function associated with it, which describes, among other things, the probability of the electron in that state being found at a particular point in space. To make a link with topology, we have to look at how the wave function changes in different parts of the map. To use a real map of a landscape as an analogy: you can associate the height of the ground with each point on the map, and then, by looking at how the height changes, you can redraw the map to show how steep the slope of the ground is at each point.

We can do something like that in the mathematics of the wave functions. For example, in the sketches below, the arrows represent how the slope of the wave function looks for different momenta. You can get vortices (left picture) where the arrows form whirlpools, or you can get a source (right picture) where the arrows form a hedgehog shape. A sink is similar except that the arrows are pointing inwards, not outwards.

Now for the crucial part. There is a theorem in mathematics that says that if you multiply the slope of the wave function with the wave function itself at the same point, and add up all of these for every arrow on the map, then the result has to be a whole number. This isn’t obvious just by looking at the pictures but that’s why mathematics is great!

That whole number (which I’m going to call n from now on) is like the number of holes in the cinnamon bun or pretzel: It defines the topology of the electron states in the material. If n is zero then we say that the material is “topologically trivial”. If n is not zero then the material is “topologically non-trivial”. In many cases, n counts the difference between the number of sources and the number of sinks of the arrows.
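The real invariant involves the wave functions themselves, and the details are technical, but the “add up local arrows and get a whole number” idea can be seen in a simpler two-dimensional analogue: the winding number of an arrow field. Walk once around a loop, add up how much the arrows rotate, and the total is forced to be an integer multiple of a full turn. Here is a numerical sketch (the specific fields are just the whirlpool and hedgehog patterns from the sketches above):

```python
# Numerical winding number of a 2D arrow field: add up the small
# rotations of the arrows as we walk once around a circle. The total
# is forced to be an integer multiple of 2*pi.

import math

def winding_number(field, n_steps=1000, radius=1.0):
    total = 0.0
    prev_angle = None
    for i in range(n_steps + 1):
        t = 2 * math.pi * i / n_steps
        vx, vy = field(radius * math.cos(t), radius * math.sin(t))
        angle = math.atan2(vy, vx)
        if prev_angle is not None:
            d = angle - prev_angle
            # keep each step in (-pi, pi] so the rotation is continuous
            d = (d + math.pi) % (2 * math.pi) - math.pi
            total += d
        prev_angle = angle
    return round(total / (2 * math.pi))

vortex = lambda x, y: (-y, x)      # whirlpool pattern (left picture)
source = lambda x, y: (x, y)       # hedgehog pattern (right picture)
uniform = lambda x, y: (1.0, 0.0)  # trivial: all arrows the same way

print(winding_number(vortex))   # 1
print(winding_number(source))   # 1
print(winding_number(uniform))  # 0
```

The uniform field is the analogue of the “air outside the crystal” case mentioned below: with no swirling at all, the sum has no option but to be zero.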

### What topology does

Okay, so that explains how topology enters into the understanding of electron states. But what impact does it have on the properties of a material? There are a number of things, but one of the coolest concerns quantum states that can appear on the surface of topologically non-trivial materials. This is because of another theorem from mathematics, called the “bulk-boundary correspondence”, which says that when a topologically non-trivial material meets a topologically trivial one, there must be quantum states localised at the interface.

Now, the air outside of a crystal is topologically trivial. (In fact, it has no arrows at all, so when you take the sum there is no option but to get zero for the result.) So, there must be quantum states at the edges of any topologically non-trivial material. In some materials, like bismuth selenide for example, these quantum states have weird spin properties that might be used to encode information in the future.

And the best part is that because these quantum states at the edge are there because of the topology of the underlying material, they are really robust against things like impurities or roughness of the edge or other types of disorder which might destroy quantum states that don’t have this “topological protection”.

### An application

Now, finally, I want to give one more example of this type of consideration, because it’s something I’ve been working on this year. But let me start at the beginning and explain the practical problem that I’m trying to solve. Let’s say that graphene, the wonder material, is finally made into something useful that you can put on a computer chip. Then you want to find a way to make these useful devices talk to each other by exchanging electric current. To do that, you need a conducting wire that is only a few nanometres thick and allows current to flow along it.

The obvious choice is to use a wire of graphene because then they can be fabricated at the same time as the graphene device itself. But the snag is that to make this work, the edges of that graphene wire have to be absolutely perfect. Essentially, any single atom out of place will make it very hard for the graphene wire to conduct electricity. That’s not good, because it’s very difficult to keep every atom in the right place!

The picture above shows a sketch of a narrow strip of graphene surrounded by boron nitride. Graphene is topologically trivial, but boron nitride is (in a certain sense) non-trivial and can have n equal to either plus or minus one, depending on details. So, remembering the bulk-boundary correspondence, the graphene in this construction works like an interface between two different topologically non-trivial regions, and therefore there must be quantum states in the graphene. These states are robust, and protected by the topology. I’ve tried to show these states by the black curved lines which illustrate that the electrons are located in the middle of the graphene strip.

Now, it is possible to use these topologically protected states to conduct current from left to right in the picture (or vice versa), and so this construction will work as a nanometre-sized wire, which is just what is needed. And the kicker is that because of the topological protection, there is no longer any requirement for the atoms of the graphene to be perfectly arranged: The topology beats the disorder!

Maybe this, together with the bismuth selenide example I gave before, shows that analysing the topology of quantum materials is a really useful way to think about their properties and helps us understand what’s going on at a deeper level.

(If you’re really masochistic and want to see the paper I just wrote on this, you can find it here.)