# What is the holographic correspondence?

One of the hardest things to describe in theoretical physics is what happens when lots of particles interact with each other. Essentially, it is impossible to solve this problem exactly, and so the approaches that are currently used rely on several types of approximation.

What I want to describe is how, maybe, approaches in String Theory might be used to solve some of these really important “hard” problems. There’s no way that I can explain all the details (honestly, I don’t understand them!) but hopefully this will give a picture of how weird, esoteric, and very mathematical concepts can say something useful about reality.

This approach is generically called “holography” for reasons that will become clear(er) later.

One of the approximate approaches to describing interacting particles that has been used to great effect is called “perturbation theory”. This applies when the interactions between the particles are relatively weak. How it works could be a whole post in itself, but perhaps for now it is enough to say that the existence of perturbation theory makes some problems with weak interactions “easy” in the sense that they can be approximately solved.

Crucially, it turns out that many of the complicated string theories that try to describe how quantum gravity works have interactions between particles which can be treated in perturbation theory.

The point of holography is that it might be possible to discover a dictionary or a way of translating between the “easy” string theory and a “hard” theory with strong interactions. Using this dictionary, it is possible to start from the “hard” theory, translate the calculation into the “easy” gravity analogue, do the calculation, and translate the results back to the “hard” context.

The diagram above is a sketch of how to visualise this process. The “easy” gravity theory exists in a bulk with a certain number of dimensions, whereas the “hard” theory lives in a space which is one dimension smaller, at the edge (or “boundary”). This is where the term holography comes from: The physical theory is a hologram which is projected from the bulk like R2D2’s message from Princess Leia.

Most intriguingly, when the “hard” theory has a temperature above absolute zero (which all physical materials must have) the gravity theory contains a black hole at its centre which has an event horizon.

So, the calculation for the complicated experimental quantity that you are interested in on the boundary can be translated through the bulk to the event horizon of the black hole. There, the properties of the theory on the boundary get converted into the properties of space-time near the black hole. This is what the dictionary does. Perturbation theory can then be used to get an approximate answer in that context. Finally, the answer is moved back through the bulk to the boundary where it can be interpreted in the original context.

Of course, the technical details of how to actually do this in the mathematics are very complicated, but there is one well-understood example of this process.

Quarks are fundamental particles and can be glued together to make protons and neutrons. The particles which do the glueing are called gluons. The gluons and the quarks are strongly interacting and so they fall into the category of “hard” theories. But, there is a well-defined correspondence between a supersymmetric particle theory which lives in three spatial dimensions and one time dimension (so, four in total) and an “easy” string theory which lives in ten dimensions. (Five of those ten dimensions are curled up, and the particle theory lives at the boundary of the remaining five-dimensional space, which is one dimension smaller as promised.) This correspondence has been used to derive results which would otherwise not be possible.

One of the current questions for people who work on holography is whether this is just a fortuitous specific case, or whether these correspondences are more general.

In condensed matter, there are also strongly interacting materials which theorists find very difficult to describe. One really important example is the high temperature superconductor materials.

The question is whether a holographic correspondence can be found for a theory that can make predictions about these materials. To put that another way: is there a higher-dimensional, gravity-like theory which gives a theory for a superconductor as its hologram?

A lot of people are looking at this question at the moment.

There are some encouraging things which have been done already. For example, the materials which go superconducting at low temperature also have weird behaviour at higher temperatures where they don’t superconduct. These properties have been calculated within the gravity theory, and show some similar features to those seen in experiments.

But there is also a lot that is not known yet. For example, it is very difficult to include the effects of the underlying material crystal, or the existence of the quantum-mechanical spin of the particles. Both of these details will be important for designing new materials which sustain superconductivity at even higher temperatures.

This is really a field which is still in its infancy, but the underlying idea behind it is intriguing: if the theorists working on it can progress to the point where it can make predictions, it would be very exciting indeed.

# How do you measure the quantum states of a material?

I’ve talked a lot on this blog about how understanding the quantum states of a material can be helpful for working out its properties. But is it possible to directly measure these states in an experiment? And what sort of equipment is needed to do so? I’ll try to explain here.

First, a quick recap. The band structure is like a map of the allowed quantum states for the electrons in a material. The coordinates of the map are the momentum of the electron, and at each point there are a series of energy levels which the electron can be in. The energy states close to the “Fermi energy” largely determine things like whether the material can conduct electricity and heat, absorb light, or do interesting magnetic things.

There are various ways that the band structure can be investigated. Some of them are quite indirect, but last week, I visited an experimental facility in the UK where they can do (almost) direct measurements of the band structure using X-rays.

The technical name for this technique is “angle-resolved photoemission spectroscopy”, or ARPES for short. Let’s break that down a bit. Spectroscopy just means that it’s a way of measuring the spectrum of something. In this case, it’s the electrons in the material. I’ll come back to the “angle-resolved” part in a minute, but the crucial thing to explain here is what photoemission is.

The sketch above shows a hypothetical band structure. When light is shone on a material, the photons (green wavy arrows) that make up the beam can be absorbed by one of the electrons in the filled bands below the Fermi energy. When this happens, the energy and momentum of the photon are transferred to the electron.

This means that the electron must change its quantum state. But the band structure gives the map of the only allowed states in the material, so the electron must end up in one of the other bands. In the left-hand picture, the energy of the photon is just right for the electron at the bottom of the red arrow to jump to an unfilled state above the Fermi energy. This is called “excitation”.

But in the right-hand picture, the energy of the photon is larger (see the thicker line and bigger wiggles on the green arrow) so there is no allowed energy level for the excited electron to move to. Instead, the electron is kicked completely out of the material. To put that another way, the high-energy photons cause the material to emit electrons. This is photoemission!

The crucial part about ARPES is that the emitted electrons retain information about the quantum state that they were in before they absorbed the photons. In particular, the photons carry almost no momentum, so the momentum of the electron can’t really change during the emission process. But also, energy must be conserved, so the energy of the emitted electron must be the energy of the photon, plus the energy of the quantum state that the electron was in before emission.

So, if you can catch the emitted electrons, and measure their energy and momentum, then you can recover the band structure! The angle-resolved part in the ARPES acronym means that the momentum of the electrons is deduced from what angle they are emitted at.
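To make the bookkeeping concrete, here is a minimal sketch in Python of how the quantum state is recovered from a detected electron. The work function (the energy cost of escaping the surface, taken as 4.5 eV here) is a detail I glossed over above, and all the numbers are illustrative rather than from a real experiment.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def arpes_state(photon_ev, kinetic_ev, angle_deg, work_function_ev=4.5):
    """Recover the (binding energy, parallel momentum) of the state an
    emitted electron came from. Energy conservation gives the binding
    energy; the emission angle gives the momentum parallel to the surface,
    which is conserved during emission."""
    binding_ev = photon_ev - work_function_ev - kinetic_ev
    k_total = math.sqrt(2 * M_E * kinetic_ev * EV) / HBAR      # 1/m
    k_parallel = k_total * math.sin(math.radians(angle_deg))   # 1/m
    return binding_ev, k_parallel * 1e-10                      # 1/angstrom

# A 100 eV photon ejects an electron with 94 eV kinetic energy at 30 degrees
e_b, k_par = arpes_state(100.0, 94.0, 30.0)
print(f"binding energy: {e_b:.1f} eV, k_parallel: {k_par:.2f} per angstrom")
```

Repeating this for every detected electron, point by point, builds up the experimental map of the band structure.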

But what does this look like in practice? Fortunately, a friendly guide from Diamond showed me around and let me take pictures.

The upper-left picture is an outside view of the Diamond facility. (The cover picture for this blog entry is an aerial view.) It’s a circular building, although this picture is taken from close enough that this might be hard to see. This gives a sense of scale for the place!

Inside is a machine called a synchrotron. They didn’t let us go near this, so I don’t have any pictures, but it is a circular particle accelerator which keeps bunches of electrons flowing around it very, very fast. As they go around, they release a lot of X-ray photons which can be captured and focused. (There is a really cool animation of this on their web site.) The X-rays come down a “beam line” and into one of many experimental “hutches” which stand around the outside of the accelerator.

The upper-right picture shows the ARPES machine inside the main hutch of beamline I05. Most of the stuff you can see at the front is designed for making samples under high vacuum, which can then be transferred straight into the sample chamber without exposure to air.

The lower-left picture is behind the machine, where the beam line comes in. It’s kinda hard to see the metal-coloured pipe, so I’ve drawn arrows. The lower-right picture shows where the real action happens. The sample chamber is near the bottom (there is a window above it which allows the experimentalists to visually check that the sample is okay), and you can just about see the beam line coming in from behind the rack in the foreground.

The X-rays come into the sample chamber from the beam line, strike the sample, and the emitted electrons are funnelled into the analyser, which is the big metallic hemisphere towards the right of the picture. The spherical shape is important, because the energy of the electrons is measured by how much they are deflected by a strong electric field inside the analyser, while the angle they entered at maps onto a position at the detector and gives their momentum. The field separates the high-energy electrons from the low-energy ones in a similar way that a centrifuge separates heavy items from light ones.

And what can you get after all of this? The energy and momentum of all the electrons is recorded, and pretty graphs can be made!

Above is a picture that I stole from the Diamond web site. On the left is a theoretical calculation for the band structure of a material called tungsten diselenide (WSe2). On the right is the ARPES data. The colour scheme shows the intensity of the photoemitted electrons. As you can see, the prediction and data match very well. After all the effort of building a massive machine, it works! Hooray science!

# What is high temperature superconductivity?

It was March, 1987. The meeting of the Condensed Matter section of the American Physical Society. It doesn’t sound like much, but this meeting has gone down in history as the “Woodstock of Physics”. Experimental results were shown which proved that superconductivity is possible at a much higher temperature than had ever been thought possible. This result came completely out of left field and captured the imagination of physicists all over the world. It has been a huge area of research ever since.

But why is this a big deal? Superconductors can conduct electricity without any resistance, so it costs no energy and generates no heat. This is different from regular metal wires which get hot and lose energy when electricity passes through them. Imagine having superconducting power lines, or very strong magnets that don’t need to be super-cooled. This would lead to huge energy savings which would be great for the environment and make a lot of technology cost less too.

I guess it makes sense to clarify what “high temperature” means in this context. Most superconductors behave like normal metals at regular temperatures, but if they are cooled far enough (below the “critical temperature”, which is usually called Tc) then their properties change and they start to superconduct. Traditional superconducting materials have a Tc in the range of a few Kelvin, so only a few degrees above absolute zero. These new “high temperature” materials have their Tc at up to 120 Kelvin, so substantially warmer, but still pretty cold by everyday standards. (For what it’s worth, 120K is -153°C.)

But, if we could understand how this ‘new’ type of superconductivity works, then maybe we could design materials that superconduct at everyday temperatures and make use of the technological revolution that this would enable.

Unfortunately, the elephant in the room is that, even after thirty years of vigorous research, physicists currently still don’t really understand why and how this high Tc superconductivity happens.

I have written about superconductivity before, but that was the old “low temperature” version. What happens in a superconductor is that electrons pair up into new particles called “Cooper pairs”, and these particles can move through the material without experiencing collisions which slow them down. In the low temperature superconductors, the glue that holds the pairs together is made from vibrations of the crystal structure of the material itself.

But this mechanism of lattice vibrations (phonons) is not what happens in the high temperature version.

To explain the possible mechanisms, it’s important to see the atomic structure of these materials. To the right is a sketch of one high Tc superconductor, called bismuth strontium calcium copper oxide, or BSCCO (pronounced “bisco”) for short. The superconducting electrons are thought to live in the copper oxide (CuO2) layers.

One likely scenario is that instead of the lattice vibrations gluing the Cooper pairs together, it is fluctuations of the spins of the electrons that does it. Of course, electrons can interact with each other because they are electrically charged (and like charges repel each other), but spins can interact too. This interaction can either be attractive or repulsive, strong or weak, depending on the details.

In this case, it is thought that the spins of the electrons in the copper atoms are all pointing in nearly the same direction. But these spins can rotate a bit due to temperature or random motion. When they do this, it changes the interactions with other nearby spins and can create ripples in the spins on the lattice. In an analogy with the phonons that describe ripples in the positions of the atoms, these spin ripples can be described as particles called magnons. It is these that provide the glue: Under the right conditions, they can cause the electrons to be attracted to each other and form the Cooper pairs.

Another possibility comes from the layered structure. If electrons in the CuO2 layers can hop to the strontium or calcium layers, and then hop back again at a different point in space, this could induce correlations between the electrons that would result in superconductivity. (I appreciate that it’s probably far from obvious why this would work, but unfortunately, the explanation is too long and technical for this post.)

In principle, these two different mechanisms should give measurable effects that are slightly different from each other, because the symmetry associated with the effective interactions is different. This would allow experimentalists to tell them apart and make a conclusive statement about what is going on. Naturally, these experiments have been done but so far, there is no consensus within the results. Some experiments show symmetry properties that would suggest the magnons are important, others suggest the interlayer hopping is important. Personally, I tend to think that the magnons are more likely to be the reason, but it’s really difficult to know for sure and I could well be wrong.

So, we’re kinda stuck and the puzzle of high Tc superconductivity remains one of condensed matter’s most tantalising and most embarrassing enigmas. We know a lot more than we did thirty years ago, but we are still a very long way from having superconductors that work at everyday temperatures.

# How does a transistor work?

The world would be a very different place if the transistor had never been invented. They are everywhere. They underpin all digital technology, they are the workhorses of consumer electronics, and they can be bewilderingly small. For example, the latest Core i7 microchips from Intel have over two billion transistors packed onto them.

But what are they, and how do they work?

In some ways, they are beguilingly simple. Transistors are tiny switches: they can be on or off. When they are on, electric current can flow through them, but when they are off it can’t.

The most common way this is achieved is in a device called a “field effect transistor”, or FET. It gets this name because a small electric field is used to change the device from its conducting ‘on’ state to its non-conducting ‘off’ state.

At the bottom of the transistor is the semiconductor substrate, which is usually made out of silicon. (This is why silicon is such a big deal in computing.) Silicon is a fantastic crystal because, by adding a few atoms of another element to it, it can be given mobile negative or positive charge carriers. To explain why, we need to turn to chemistry! A silicon atom has 14 electrons in it, but ten of these are bound tightly to the atomic nucleus and are very difficult to move. The other four are much more loosely bound and are what determines how it bonds to other atoms.

When a silicon crystal forms, the four loose electrons from each atom form bonds with the electrons from nearby atoms, and the geometry of these bonds is what makes the regular crystal structure. However, it is possible to take out a small number of the silicon atoms and replace them with some other type of atom. If this is done with an atom like phosphorus or arsenic which has five loose electrons, then four of them are used to make the chemical bonds and one is left over. This left-over electron is free to move around the crystal easily, and it acts as a mobile negative charge carrier. In the physics language, the silicon has become “n-doped”.

But, if some silicon atoms are replaced by something like boron or aluminium which has only three loose electrons, the atom has to ‘borrow’ an extra electron from the rest of the crystal, leaving behind a mobile, positively charged ‘hole’ where that electron used to be. This is called “p-doped”.

Okay, so much for the chemistry, now back to the transistor itself. Transistors have three connections to the outside world, which are usually called the source, drain, and gate. The source is the input for electric current, the drain is the output, and the gate is the control which determines if current can flow or not.

The source and drain both connect to a small area of n-doped silicon (i.e. they have extra electrons) which can provide or collect the electric current which will flow through the switch. The central part of the device, called the “channel”, is p-doped, which means that it has too few electrons.

Now, here’s where the quantum mechanics comes in!

A while back, I described the band structure of a material. Essentially, it is a map of the quantum mechanical states of the material. If there are no states in a particular region, then electrons cannot go there. The “Fermi energy” is the energy at which states stop being filled. I’ve drawn a rough version of the band structure of the three regions in the diagram below. In the n-doped regions, the states made by the extra electrons are below the Fermi energy and so they are filled. But in the p-doped channel, the unfilled extra states are above the Fermi energy. This makes a barrier between the source and drain and stops electrons from moving between the two.

Now for the big trick. When a voltage is applied to the gate, it makes an electric field in the channel region. The extra energy that the electrons gain from this field shifts the quantum states in the channel region to different energies. This is shown on the right-hand side of the band diagrams. The extra states are now below the Fermi energy, but the silicon can’t create more electrons, so these unfilled states make a path through which the extra electrons in the source can move to the drain. The barrier is removed: applying the electric field to the channel region opens up the device to carrying current.

In the schematic of the device above, the left-hand sketch shows the transistor in the off state with no conducting channel in the p-doped region. The right-hand sketch shows the on-state, where the gate voltage has induced a conducting channel near the gate.

So, that’s how a transistor can turn on and off. But it’s a long leap from there to the integrated circuits that power your phone or laptop. Exactly how those microchips work is another subject, but briefly, the output from the drain of one transistor can be linked to the source or the gate of another one. This means that the state of a transistor can be used to control the state of another transistor. If they are put together in the right way, they can process information.
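As a toy illustration of that last idea, here is a sketch in Python that models each transistor as a voltage-controlled switch and wires two of them into a NAND gate, from which a NOT gate (and, with enough of them, any logic) can be built. The 0.7 V threshold and 1 V supply are illustrative values I have chosen, not something from the post.

```python
def fet_is_on(gate_volts, threshold=0.7):
    """Model a field-effect transistor as a switch: current can flow from
    source to drain only when the gate voltage exceeds a threshold."""
    return gate_volts > threshold

def nand(a_volts, b_volts, vdd=1.0):
    """Two transistors in series pull the output down to 0 V only when
    both gates are high; otherwise the output sits at the supply voltage."""
    if fet_is_on(a_volts) and fet_is_on(b_volts):
        return 0.0
    return vdd

def inverter(a_volts):
    """Chaining: one gate's output drives the next gate's input.
    A NAND with both inputs tied together acts as a NOT gate."""
    return nand(a_volts, a_volts)

print(nand(1.0, 1.0), nand(1.0, 0.0), inverter(0.0))
```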

# Topology and the Nobel Prize

You may have seen that the Nobel Prize for Physics was awarded this week. The Prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter”, which is a bit of a mouthful. Since this is an area that I have done a small amount of work in, I thought I would try to explain what it means.

You might have seen a video where a slightly nutty Swede talks about muffins, donuts, and pretzels. (He’s my boss, by the way!) The number of holes in each type of pastry defined a different “topology” of the lunch item. But what does that have to do with electrons? This is the bit that I want to flesh out. Then I’ll give an example of how it might be a useful concept.

### What is topology?

In a previous post, I talked about band structure of crystal materials. This is the starting point of explaining these topological phases, so I recommend you read that post before trying this one. There, I talked about the band structure being a kind of map of the allowed quantum states for electrons in a particular crystal. The coordinates of the map are the momentum of the electron.

Each of those quantum states has a wave function associated with it, which describes, among other things, the probability of the electron in that state being at a particular point in space. To make a link with topology, we have to look at how the wave function changes in different parts of the map. To use a real map of a landscape as the analogy, you can associate the height of the ground with each point on the map, then by looking at how the height changes you can redraw the map to show how steep the slope of the ground is at each point.

We can do something like that in the mathematics of the wave functions. For example, in the sketches below, the arrows represent how the slope of the wave function looks for different momenta. You can get vortices (left picture) where the arrows form whirlpools, or you can get a source (right picture) where the arrows form a hedgehog shape. A sink is similar except that the arrows are pointing inwards, not outwards.

Now for the crucial part. There is a theorem in mathematics that says that if you multiply the slope of the wave function with the wave function itself at the same point, and add up all of these for every arrow on the map, then the result has to be a whole number. This isn’t obvious just by looking at the pictures but that’s why mathematics is great!

That whole number (which I’m going to call n from now on) is like the number of holes in the cinnamon bun or pretzel: It defines the topology of the electron states in the material. If n is zero then we say that the material is “topologically trivial”. If n is not zero then the material is “topologically non-trivial”. In many cases, n counts the difference between the number of sources and the number of sinks of the arrows.
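For the curious, here is a small Python sketch of one simple version of such an integer invariant: the winding number, which counts how many full turns the arrows make as you walk once around a loop in the map. The three example fields (a whirlpool, a hedgehog, and all arrows parallel) are modelled on the pictures above; the specific formulas are just illustrative.

```python
import math

def winding_number(field, radius=1.0, steps=400):
    """Count how many full turns a 2D arrow field makes as we walk once
    around a circle: an integer that cannot change under small deformations."""
    total_turn = 0.0
    prev = None
    for i in range(steps + 1):
        t = 2 * math.pi * i / steps
        vx, vy = field(radius * math.cos(t), radius * math.sin(t))
        angle = math.atan2(vy, vx)
        if prev is not None:
            step = angle - prev
            # wrap each step into (-pi, pi] so branch-cut jumps don't count
            step = (step + math.pi) % (2 * math.pi) - math.pi
            total_turn += step
        prev = angle
    return round(total_turn / (2 * math.pi))

vortex = lambda x, y: (-y, x)      # whirlpool
hedgehog = lambda x, y: (x, y)     # source
uniform = lambda x, y: (1.0, 0.0)  # all arrows parallel: trivial
print(winding_number(vortex), winding_number(hedgehog), winding_number(uniform))
```

The key point is the last line of the function: however the field is deformed, the answer is forced to be a whole number.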

### What topology does

Okay, so that explains how topology enters into the understanding of electron states. But what impact does it have on the properties of a material? There are a number of things, but one of the coolest is about quantum states that can appear on the surface of topologically non-trivial materials. This is because of another theorem from mathematics, called the “bulk-boundary correspondence”, which says that when a topologically non-trivial material meets a topologically trivial one, there must be quantum states localised at the interface.

Now, the air outside of a crystal is topologically trivial. (In fact, it has no arrows at all, so when you take the sum there is no option but to get zero for the result.) So, at the edges of any topologically non-trivial material there must be quantum states. In some materials, like bismuth selenide for example, these quantum states have weird spin properties that might be used to encode information in the future.

And the best part is that because these quantum states at the edge are there because of the topology of the underlying material, they are really robust against things like impurities or roughness of the edge or other types of disorder which might destroy quantum states that don’t have this “topological protection”.

### An application

Now, finally, I want to give one more example of this type of consideration because it’s something I’ve been working on this year. But let me start at the beginning and explain the practical problem that I’m trying to solve. Let’s say that graphene, the wonder material, is finally made into something useful that you can put on a computer chip. Then, you want to find a way to make these useful devices talk to each other by exchanging electric current. To do that, you need a conducting wire that is only a few nanometers thick which allows current to flow along it.

The obvious choice is to use a wire of graphene because then they can be fabricated at the same time as the graphene device itself. But the snag is that to make this work, the edges of that graphene wire have to be absolutely perfect. Essentially, any single atom out of place will make it very hard for the graphene wire to conduct electricity. That’s not good, because it’s very difficult to keep every atom in the right place!

The picture above shows a sketch of a narrow strip of graphene surrounded by boron nitride. Graphene is topologically trivial, but boron nitride is (in a certain sense) non-trivial and can have n equal to either plus or minus one, depending on details. So, remembering the bulk-boundary correspondence, the graphene in this construction works like an interface between two different topologically non-trivial regions, and therefore there must be quantum states in the graphene. These states are robust, and protected by the topology. I’ve tried to show these states by the black curved lines which illustrate that the electrons are located in the middle of the graphene strip.

Now, it is possible to use these topologically protected states to conduct current from left to right in the picture (or vice versa) and so this construction will work as a nanometer size wire, which is just what is needed. And the kicker is that because of the topological protection, there is no longer any requirement for the atoms of the graphene to be perfectly arranged: The topology beats the disorder!

Maybe this, and the example of the bismuth selenide I gave before show that the analysis of topology of quantum materials is a really useful way to think about their properties and helps us understand what’s going on at a deeper level.

(If you’re really masochistic and want to see the paper I just wrote on this, you can find it here.)

# What is graphene and why all the hype?

There’s a decent chance you’ve heard of graphene. There are lots of big claims and grand promises made about it by scientists, technologists, and politicians. So what I thought I’d do is to go through some of these claims and almost ‘fact-check’ them so that the next time you hear about this “wonder material” you know what to make of it.

Let’s start at the beginning: what is graphene? It’s made out of carbon atoms arranged in a crystal. But what sets it apart from other crystals of carbon atoms is that it is only one atom thick (see the picture below). It’s not quite the thinnest thing that could ever exist because maybe you could make something similar using atoms that are smaller than carbon (for example, experimentalists can make certain types of helium in one layer), but given that carbon is the sixth lightest element, it’s really quite close!

Diamond and graphite are also crystals made only of carbon, but they have a different arrangement of the carbon atoms, and this means they have very different properties.

So, what has been claimed about graphene?

### Claim one: the “wonder material”

Graphene has some nice basic properties. It’s really strong and really flexible. It conducts electricity and heat really well. It is simultaneously almost transparent and yet absorbs light really strongly. It’s almost impermeable to gases. In fact, most of the proposals for applications of graphene in the Real World™ involve these physical and mechanical superlatives, not the electronic properties which in some ways are more interesting for a physicist.

For example, its conductivity and transparency mean that it could be the layer in a touch screen which senses where a finger or stylus is pressing. This could combine with its flexibility to make bendable (and wearable) electronics and displays. But for the moment, it’s “only” making current ideas work better, it doesn’t add any fundamentally new technology that we didn’t have before. If that’s your definition of a “wonder material” then okay, but personally I’m not quite convinced the label is merited.

### Claim two: Silicon replacement

In the first few years after graphene was made, there was a lot of excitement that it might be used to replace silicon in microchips and make smaller, faster, more powerful computers. It fairly quickly became obvious that this wouldn’t happen. The reason for this is to do with how transistors work. That’s a subject that I want to write more about in the future, but roughly speaking, a transistor is a switch that has an ‘on’ state where electrical current can flow through it, and an ‘off’ state where it can’t. The problem with graphene is turning it off: it has no band gap, so current would always flow through! So this one isn’t happening.

Graphene electronics might still be useful though. For example, when your phone transmits and receives data from the network, it has to convert the analogue signal in the radio waves from the mast into a digital signal that the phone can process. Graphene could be very good for this particular job.

### Claim three: relativistic physics in the lab

This one is a bit more physicsy so takes a bit of explaining. In quantum mechanics, one of the most important pieces of information you can have is how the energy of a particle is related to its momentum. This is the ‘band structure’ that I wrote about before. In most cases, when electrons move around in crystals, their energy is proportional to their momentum squared. In special relativity there is a different relation: The energy is proportional to just the momentum, not to the square. For example, this is true for light or for neutrinos. One thing that researchers realized very early on about graphene is that electrons moving around on the hexagonal lattice had an ‘energy equals momentum’ band structure, just like in relativity. Therefore, the electrons in graphene behave a bit like neutrinos or photons. Some of the effects of this have been measured in experiments, so this is true.
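A quick numerical sketch of the difference, in Python: for an ordinary crystal, doubling the momentum quadruples the energy, while for graphene it only doubles it. The Fermi velocity of roughly 10^6 m/s is the commonly quoted value for graphene, and the momentum is just an illustrative number.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
V_F = 1.0e6             # graphene's Fermi velocity, roughly 1e6 m/s

def energy_ordinary(k):
    """Ordinary crystal: energy proportional to momentum squared."""
    return (HBAR * k) ** 2 / (2 * M_E)

def energy_graphene(k):
    """Graphene: energy proportional to momentum, like a massless particle."""
    return HBAR * V_F * abs(k)

k = 1.0e9  # a crystal momentum, in 1/m
print(energy_ordinary(2 * k) / energy_ordinary(k))   # 4.0
print(energy_graphene(2 * k) / energy_graphene(k))   # 2.0
```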

### Claim four: Technological revolution

One other big problem that has to be overcome is that graphene is currently very expensive to make. And the graphene that is made at industrial scale tends to be quite poor quality. This is an issue that engineers and chemists are working really hard on. Since I’m neither an engineer nor a chemist I probably shouldn’t say too much about it. But what is definitely true is that the fabrication issues have to be solved before you’ll see technology with graphene inside it in high street stores. Still, these are clever people so there is every chance it will still happen.

### Footnote

Near the top, I said graphene simultaneously absorbs a lot of light and is almost transparent. This makes no sense on the face of it! So let me say what I mean. To be specific, a single layer of graphene absorbs about 2.3% of visible light that lands on it. Considering that graphene is only one layer of atoms, that seems like quite a lot. It’s certainly better than any other material that I know of. But at the same time, it means that it lets through about 97.7% of light, which also seems like a lot. I guess it’s just a question of perspective.
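If you like, here’s the little sum behind that, in a deliberately crude model that pretends each stacked layer absorbs its 2.3% independently (it ignores reflection and interference between layers):

```python
# How much light would N stacked graphene layers let through,
# if each layer independently absorbs 2.3%? (Rough model only.)
absorb = 0.023

def transmission(n_layers):
    return (1 - absorb) ** n_layers

print(transmission(1))     # ~0.977: one layer looks almost transparent
print(transmission(100))   # ~0.10: a hundred layers block most of the light
```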

# Justin Trudeau and quantum computing

You’ve probably seen already that clip of Justin Trudeau, the Prime Minister of Canada, explaining to a surprised group of journalists why quantum computing excites him so much. In case you haven’t seen it, here is a link. A number of things strike me about this. Firstly, of course, he’s right: If we can get quantum computing to work then that would be a really, really big deal and it’s worth being excited about! Second, it’s a bit depressing that a politician having a vague idea about something scientific is a surprising exception to the rule. Thirdly, while his point about storage of information is right, there’s a whole lot more that quantum computers can do that he didn’t mention. Of course, that’s fair enough because he wasn’t trying to be comprehensive, but it gives me an opportunity to talk about some of the stuff that he missed out.

Before that, let’s go over exactly what a quantum computer is. As the Prime Minister said, a normal (or “classical”) computer operates using only ones and zeroes which are represented by current flowing or not flowing through a small “wire”. (However, as you might have already read, this might have to change in the future!) A quantum computer is completely different because instead of these binary bits, it has bits which can be in a state that is a mixture of zero and one at the same time. This is like the electron simultaneously going through both slits in the two slit experiment, or Schrödinger’s famous cat being alive and dead “at the same time”: It’s an example of a quantum mechanical superposition of states. A quantum computer is designed to operate on these quantum states and to take advantage of this indeterminacy, changing them from one superposition to another to do computations. If you can get the quantum bits to become entangled with each other (meaning that the quantum state of one bit will be affected by the quantum state of all the others that it is entangled with) then you can do quantum computing! Exactly how this would work from a technological point of view is a big subject which I’ll probably write about another time, but options that physicists and engineers are working on include using superconducting circuits, very cold gases of atoms, the spins of electrons or atomic nuclei, or special particles called Majorana fermions.
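For the mathematically curious, the superposition part is just linear algebra. Here’s a minimal sketch of a single quantum bit and the Hadamard gate (a standard gate that creates an equal mixture of zero and one). Building hardware that actually behaves like this is the hard part, of course!

```python
import numpy as np

# The two basis states of a quantum bit, as complex vectors.
zero = np.array([1, 0], dtype=complex)   # the state |0>
one = np.array([0, 1], dtype=complex)    # the state |1>

# The Hadamard gate turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero

# Measurement probabilities are the squared sizes of the amplitudes.
p0 = abs(zero.conj() @ state) ** 2
p1 = abs(one.conj() @ state) ** 2
print(p0, p1)   # a 50/50 mixture of zero and one

# Applying H a second time undoes the superposition: back to |0> for sure.
back = H @ state
print(abs(zero.conj() @ back) ** 2)   # -> 1.0
```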

A big field of study has been to find algorithms that allow this quantum-ness to be used to do things that classical computers can’t. There are a few examples that would really change everyday life if they could be implemented. The first sounds a bit boring on the face of it, but quantum algorithms allow you to search a list to determine whether a particular item is in it or not (i.e. to find that item) in a much shorter time than classical algorithms. So, if you want to search the internet for your favourite web site, a quantum google will do this much faster than a classical google. Quantum algorithms can also tell quickly whether all the items in a list are different from each other or not.
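To give a feel for the speed-up: the quantum search algorithm here is Grover’s algorithm, which needs only about the square root of the number of items’ worth of “looks” at the list, compared with checking everything one by one. A quick back-of-the-envelope comparison:

```python
# Rough query counts for searching an unsorted list of n items.
import math

def classical_queries(n):
    # Worst case for checking items one at a time: look at all of them.
    return n

def grover_queries(n):
    # Grover's algorithm needs roughly (pi/4) * sqrt(n) steps.
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in [100, 1_000_000]:
    print(n, classical_queries(n), grover_queries(n))
# For a million items: about 786 quantum steps versus a million classical ones.
```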

Another application is to solve “black box” problems. This has nothing to do with the flight data recorders in aircraft, but is the name given to the following problem. Say you have a set of inputs to a system and their corresponding outputs, but you don’t know what the system does to turn the input into the output. The system is the black box, and the difficult problem is to determine what operations the system does to the input. This is important because these black box problems occur in many different areas of science including artificial intelligence, studies of the brain, and climate science. For a classical computer to solve this exactly would require an exponential number of “guesses”, but a quantum computer could do this in just one “guess”!
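The “one guess” claim can actually be simulated on an ordinary computer for tiny examples. Here’s a sketch of the Deutsch–Jozsa algorithm, one famous black-box problem of this kind: decide whether a function on n-bit inputs is constant or balanced (outputs half zeros, half ones). Classically you might need more than 2^(n-1) evaluations in the worst case; the quantum algorithm decides with a single call to the box. This is just an illustration of the idea, not how a real device works:

```python
# Toy simulation of the Deutsch-Jozsa algorithm for n input bits.
import numpy as np

n = 3
N = 2 ** n

def run_dj(f):
    # Hadamards on |0...0> give a uniform superposition over all inputs.
    state = np.full(N, 1 / np.sqrt(N))
    # ONE call to the black box, applied as a phase: |x> -> (-1)^f(x) |x>.
    state = state * np.array([(-1) ** f(x) for x in range(N)])
    # Hadamards again; the amplitude of |0...0> is the average of (-1)^f(x):
    # +/-1 if f is constant, exactly 0 if f is balanced.
    amp0 = state.sum() / np.sqrt(N)
    return "constant" if abs(amp0) > 0.5 else "balanced"

print(run_dj(lambda x: 0))        # -> constant
print(run_dj(lambda x: x & 1))    # -> balanced
```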

But perhaps the most devastating use of a quantum computer is to break the internet. Let me explain this a bit! There is a mathematical theorem which says that every number can be represented as a list of prime numbers multiplied together, and that for each number there is only one such list. For example, $30=2\times 3\times5$, or $247=13\times19$. This matters because most digital security currently depends on the fact that it is very difficult for a classical computer to start with a big number and work out what its prime factors are. The way that most encryption on the internet works is that data is encoded using a big number that is the product of only two prime numbers. In order to decrypt the information again, you need to know what the two prime numbers that give you the big number are. Because it’s hard to work out what the two prime numbers are, it is safe to distribute the big number (your public key) so that anyone can encode information to send to you securely. But only you can decode the information because only you know what the two primes are (this is your private key). But, if it suddenly becomes easy to factorise the big number into the two primes then this whole mode of encryption does not work! Every interaction that you have with your bank, your email provider, social media, and online stores could be broken by someone else. The internet essentially wouldn’t be private! Or at least, it wouldn’t be private until a new method for doing encryption is found. This is the main reason why security agencies are working so hard on quantum computing.
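To see why factorising is the bottleneck, here’s a toy “attack” in a few lines: trial division, the most naive classical method. It cracks the small numbers in this post instantly, but the time blows up as the number of digits grows, which is hopeless for the hundreds-of-digits keys used in real encryption. That gap is exactly what Shor’s quantum algorithm would close.

```python
# Naive classical factoring: try every candidate divisor in turn.
def crack(n):
    """Recover the two primes p, q with p * q = n.
    (Assumes n really is a product of exactly two primes.)"""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("no factor found: n itself is prime")

print(crack(247))     # -> (13, 19), the example from the text
print(crack(10403))   # -> (101, 103)
```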

Finally, I want to quickly mention one application that is a bit more specialised to physics: Quantum computers will allow us to simulate quantum systems in a much more accurate way. Currently, the equations that determine how groups of quantum mechanical objects behave and interact with each other pretty much can’t be solved exactly, in part because the quantum behaviour is difficult to model accurately using classical computing. If you have a quantum computer, then part of this difficulty goes away because you can build the quantum interactions into the simulation in a much more natural way, using the entanglement of the quantum bits.

So in summary, Prime Minister Trudeau was right: Quantum computers have the potential to be absolutely amazing and to change society and are really exciting (and possibly slightly scary!). But storing information in a more compact manner is really only the tip of the iceberg.

# My new idea

It’s been a while! Part of the reason I’ve not written anything recently is that I’ve been busy preparing a grant proposal which has to be submitted in a few days. This means I’m begging the Swedish funding agency to give me money to spend on researching a new idea that I have been working on for a while. As part of this proposal, I am required to write a description of what I want to do that is understandable by people outside of physics, so I thought I’d share an edited version of it here. Maybe it’s interesting to read about something that might happen in the future, rather than things that are already well known. And it’s an idea that I’m pretty excited about because there’s some chance it might make a difference!

Computing technology is continuously getting smaller and more powerful. There is a rule-of-thumb, called Moore’s law, which encodes this by predicting that the computing power of consumer electronics will double every two years. So far, this prediction has been followed since microchips were invented in the 1970s. However, fundamental limits are about to be reached which will halt this progress. In particular, the individual transistors which make up the chips are becoming so small that quantum mechanical effects will soon start to dominate their operation and fundamentally change how they work. Removing the heat generated by their operation is also becoming hugely challenging.

A transistor is essentially just a switch that can be either on or off. At the present time, the difference between the on and off state is given by whether an electric current is flowing through the switch or not. If quantum mechanical effects start to dominate transistor operation, then the distinction between the on and off state becomes blurred because current flow becomes a more random process.

In this project, I will investigate a new method of making transistors, using the quantum mechanical properties of the electrons. The theoretical idea is to make two one-dimensional layers (for example, two nanowires) placed close enough to each other that the electrons in the material can interact with each other through Coulomb repulsion. If one of these nanowires has just a few electrons in it, while the other is almost full of electrons, then the electrons in the nearly empty wire can be attracted to the ‘holes’ in the nearly full wire, and they can pair up into new bound particles called excitons. What is special about these excitons is that they can form a superfluid which can be controlled electronically.

This can be made into a transistor in the following way. When the superfluid is absent, the two layers are quite well (although not perfectly) insulated from each other, so it is difficult for a current to flow between them. However, when the superfluid forms, one of the quantum mechanical implications is that it becomes possible to drive a substantial inter-layer current. This difference defines the on and off states of the transistor.

There are some mathematical reasons why one might expect that this cannot work for one-dimensional layers, but I have already demonstrated that there is a way around this. If the electrons can hop from one layer to the other, then the theorem which says that the superfluid cannot form in one dimension is not valid. What I will do next is a systematic investigation of lots of different types of one-dimensional materials to determine the most promising place for experimentalists to look for this superfluid. I will use approximate theories for the behaviour of electrons in nanowires or nanoribbons, carbon nanotubes, and core-shell nanowires to determine the temperature at which the superfluid can form for these different materials. When the superfluid is established, it can be described by a hydrodynamic theory which treats the superfluid as a large-scale object that can be described by simple equations that govern the flow of liquids. Analysing this theory will reveal information about the properties of the superfluid and allow optimisation of the operation of the switch. Finally, in reality, no material can be fabricated with perfect precision, so I will examine how imperfections will be detrimental to the formation of the superfluid to establish how accurate the production techniques need to be.

Another benefit of this superfluid is that it can conduct heat very efficiently. This means that it may have applications in cooling and refrigeration. I will also investigate the quantitative advantages that this may have over traditional thermoelectric materials. In both of these applications, the fact that the superfluid can exist in a one-dimensional material is a very advantageous factor for designing devices. In particular, because they are so small in two directions, it gives a huge amount of freedom for placing transistors or heat-conducting channels in optimal arrangements that would be impossible with two- or three-dimensional materials.

One final thing for some context: The picture at the top of the page shows a core-shell nanowire that was grown by some physicists in Lund, Sweden. It’s made out of two different types of semiconductor: Gallium antimonide (GaSb) in the core, and indium arsenide antimonide (InAsSb) in the shell. The core region is the nearly full layer that contains the ‘holes’, while the shell is the nearly empty layer with the electrons. The vertical white line on the left of the image is a scale bar that is 100nm long (that’s one ten-thousandth of a millimeter!) which shows that these wires are pretty small! (Picture credit: Ganjipour et al, Applied Physics Letters 101, 103501 (2012)).

# How a hard disk works

This post is going to explain the fundamental part of how the hard drive in your old computer works. Modern solid state disks work completely differently, so this applies only to the older type that have been common for several decades. Specifically, when your computer writes something to the drive, it has to turn the sequence of zeroes and ones which make up the binary data into something physical on the disk. Then, when it needs to read this information later, it can go back and look at that part of the disk and recover the zeroes and ones from whatever material they were written to. But how do you tell the difference between a one and a zero? That’s the question I’ll try to answer.

### Spin

But before we can get to that point, I have to explain a really important concept in quantum mechanics called “spin”. This is a quantity which is carried by all quantum mechanical particles, and is linked in a loose way to the rotational symmetry of the particle. Look at the right-pointing arrow in the picture. Hopefully it’s easy to see that the only way you can rotate the arrow so that it looks exactly the same as it does when you start (this is called a symmetry operation) is to rotate it through 360°. A particle that has this rotational symmetry is said to have a spin of 1. Now look at this double-headed arrow. If you rotate around the axis indicated by the red dot, you only have to rotate it by 180° to get back to where you started. This has a spin of 2 because you have to rotate half a turn to get the first symmetry operation. The other pictures show a few different spins.

But what about electrons? Well, they have spin of ½. Just to be clear about what that means, using the same analogy it implies that you have to rotate by 720° before the electron “looks” like it did when you started. There isn’t a good way to draw that, so I can’t give you a picture of a spin-½ particle; this is one of those places where quantum mechanics is weird and counter-intuitive and we just have to get on with it. The other building blocks of atoms (protons and neutrons) also have spin-½ so in this post I’ll focus on that strange case. The crucial thing about spin-½ particles is that their spin can exist in one of two states, usually called ‘up’ and ‘down’, and these are typically represented by arrows pointing in those two directions.

But why does this matter? Well, individual spins generate a magnetic field. The reason that iron is a magnetic material is that the interaction between the spins in the iron atoms makes their spins all line up in the same direction. Therefore, the tiny magnetic fields associated with each of the spins all add up to make a large field. Non-magnetic materials don’t have this alignment (in fact, their spins are all randomly aligned) and so the tiny magnetic fields all cancel each other out because they are pointing in opposite directions. Materials like iron which have this alignment are called ‘ferromagnetic’.

### Reading and writing in a hard disk

But, what does this have to do with your laptop? Well, in a hard disk, the part where the zeroes and ones are stored is made from two small pieces of ferromagnetic material. Then, the difference between a one and a zero is made by manipulating the spins of the atoms in one of the ferromagnetic layers. When an electric current is passed through this region, the electrons behave differently depending on the spins. Specifically, if the electrons have the same spin as the atoms, then they don’t interact very strongly and the electrical resistance is quite low. But if they have opposite spins, the electrons interact strongly with the atoms so they bounce off the atoms (or “scatter” in the technical language), their progress is impeded, and the electrical resistance is high.

The way to encode a one or a zero is shown in the picture below. A one is encoded by aligning the ferromagnets (the pink layers) so that their spins point in the same direction. In the left-hand picture, I show this with both layers having up-spins. A current of electrons (shown by the red arrows) has a half-and-half mix of electrons with up-spin and down-spin. When it is passed through the stack, the up-spin electrons interact weakly with the ferromagnet up-spins in both layers (black arrows) and encounter low resistance. This means that some of the current put in at the top of the stack emerges from the bottom and this characterises the one state. Note that the down-spin electrons are blocked from getting to the bottom of the stack because they scatter strongly off the up-spin atoms in the first ferromagnet layer and so the resistance for them is high.

For the zero state, one of the ferromagnetic layers has its spins reversed. In the right-hand picture, this is shown by the lower layer now having a down-spin black arrow. For electric current, the down-spin electrons still scatter strongly from the up-spin atoms in the top layer. The up-spin electrons still pass through this layer, but then they encounter the down-spin atoms in the lower layer where the electrons and the atoms have opposite spin, so they scatter strongly. This means that no current emerges at the bottom of the device, and so this defines the zero state.
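The two pictures can be boiled down to a little “two-current” resistor model: up-spin and down-spin electrons are two parallel channels, and each ferromagnetic layer offers a low resistance to electrons whose spin matches its atoms and a high resistance to the opposite spin. The numbers here are invented purely to show the contrast, not real device values:

```python
# Two-current toy model of the spin-valve read head.
r, R = 1.0, 10.0   # matching-spin (low) and opposite-spin (high) resistance

def parallel(a, b):
    # Two resistances side by side combine as a*b / (a + b).
    return a * b / (a + b)

# "One" state: both layers magnetised the same way. Up-spin electrons
# see r then r; down-spin electrons see R then R.
R_one = parallel(r + r, R + R)

# "Zero" state: layers magnetised oppositely. Every electron sees r in
# one layer and R in the other, whichever spin it has.
R_zero = parallel(r + R, R + r)

print(R_one, R_zero)   # the aligned ("one") state conducts much better
```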

This means that, for the hard disk to work, it needs to be able to do two things. Firstly, the “write head”, which is the part that encodes the zeroes and ones when data is written to the disk, needs to be able to flip the spins of one of the ferromagnetic layers. Then, to recover the information at a later time, the “read head” tries to pass current through a specific piece of the disk material. If current flows (because the ferromagnet spins are the same) then this is a one. If current does not flow (because the ferromagnet spins are opposite) then it is a zero.

And this works entirely because of the quantum-mechanical property of particles called spin: aligned spins is a one, opposite spins is a zero. And as a bonus, it also explains why you have to be careful with hard drives and strong magnetic fields, because a magnet can change the alignment of all the ferromagnetic areas in the hard disk and destroy the encoded ones and zeros. Don’t say you weren’t warned!

# What is superconductivity?

Most fundamentally, a superconductor is a material which becomes a perfect conductor with no electrical resistance when it gets cold enough. It was first discovered in 1911 when some Dutch experimentalists were playing around with a new way of cooling things down, and one of the things they tried was to measure the electrical resistance of various metals as they got colder and colder. Some metals just kept doing the same things that were expected based on how they behave at higher temperatures. But for others (like mercury) the resistance suddenly dropped to zero when the temperature was lowered to within a few degrees of absolute zero: they became perfect conductors. By perfect, I mean that the amount of energy that was lost as electricity went along the superconducting wire was zero. Nowadays, superconductors are very useful materials and are used in a variety of technologies. For example, they make the coils of the powerful magnets inside an MRI machine or a maglev train, they can allow ultra-precise measurements of magnetic fields in a device called a SQUID (superconducting quantum interference device), and in the future, there is some chance that junctions between different superconductors might be crucial for implementing a quantum computer.

### So, how does this work?

Before I try to explain that, there is one crucial bit of terminology that I have to introduce. The types of particles that make up the universe can be classified into two types: One type is called fermions, the other type is called bosons. The big difference between these two types of particles is that for fermions, only one particle can ever be in a particular quantum state at any given time. For bosons, many particles can all be in the same state at the same time. The particles that carry electricity in metals are electrons, and they are a type of fermion. But when two fermions pair up and form a new particle, this new particle is a type of boson. Superconductivity happens when the electrons are able to form these boson pairs, and these pairs then all occupy the lowest possible energy state. In this state, they behave like a big soup of charge which can move without losing energy, and this gives the zero resistance for electrical current which we know as superconductivity.

This leaves a big unanswered question: How do the electrons pair up in the first place? If you remember back to high school, you probably learned that two objects with the same charge will repel each other, but that opposite charges attract. All electrons have negative charge and so should always repel, so how do they stay together close enough to make these pairs? The answer involves the fact that the metal in which the electrons are moving also contains lots of atoms. These atoms are arranged in a regular lattice pattern but they have positive charge because they have lost some of their electrons. (This is where the free electrons that can form the pairs come from.) So, as an electron moves past an atom, there is an attractive force between them, and the atom moves slightly towards the electron. Because electrons are small and light, they can move through the lattice quickly. The atoms are big and heavy so they move slowly and it takes them some time to go back to their original position in the lattice after the electron has gone by. So, as the electron moves through the lattice, it leaves a ripple behind it. A second electron some distance from the first one now feels the effect of this ripple, and because the atoms are positively charged, it is attracted to it. So, the second electron is indirectly attracted to the first, making them move together in a pair.

In the language of quantum mechanics, these ripples of the atoms are called phonons. (The name comes from the fact that these ripples are also what allows sound to travel through solids.) From this point of view, the first electron emits a phonon which is absorbed by the second electron, effectively gluing them together. But why does the metal have to get very cold before this phonon glue can be effective? The reason is that heat in a crystal lattice can also be thought of in terms of phonons. When the metal is warm, there are lots and lots of phonons flying around all over the place and it’s too chaotic for the electrons to feel the influence of just the phonons that were emitted by other electrons. As the metal cools down, the number of thermal phonons falls, leaving only the ones that came from the other electrons, which allows the glue to work.

### Two disclaimers

Two quick disclaimers before I finish.

Number one: I glossed over one inconvenient fact when I described the electrons and atoms interacting with each other. I made it sound like they were small particles moving around like billiard balls. For the atoms, this is a reasonable picture because they pretty much have to stay near their lattice positions. But the electrons are not like that at all. Perhaps you’ve heard of particle-wave duality? In quantum mechanics, small objects like electrons are simultaneously a bit like particles and a bit like waves. That’s true here for the electrons, so they are not little billiard balls but are more wave-like. This makes it more difficult to have a good mental picture of what they’re doing, but the basics of the mechanism are still true.

Secondly, this post has been about the type of superconductivity that occurs in metals. The temperature associated with this kind of superconductivity is quite low – a few degrees above absolute zero. But there are other kinds of superconductivity which can occur at much higher temperatures. (Imaginatively, this is usually called ‘high temperature superconductivity’!) This works in a very different way to what I’ve talked about here. It’s also not very well understood and is an active area of research. Perhaps I’ll write something about that another time.