
What is the holographic correspondence?

One of the hardest things to describe in theoretical physics is what happens when lots of particles interact with each other. Essentially, it is impossible to solve this problem exactly, and so the approaches that are currently used rely on several types of approximation.

What I want to describe is how, maybe, approaches in String Theory might be used to solve some of these really important “hard” problems. There’s no way that I can explain all the details (honestly, I don’t understand them!) but hopefully this will be a picture of how weird, esoteric, and very mathematical concepts can say something useful about reality.

This approach is generically called “holography” for reasons that will become clear(er) later.

One of the approximate approaches to describing interacting particles that has been used to great effect is called “perturbation theory”. This applies when the interactions between the particles are relatively weak. How it works could be a whole post in itself, but perhaps for now it is enough to say that the existence of perturbation theory makes some problems with weak interactions “easy” in the sense that they can be approximately solved.

Crucially, it turns out that many of the complicated string theories that try to describe how quantum gravity works have interactions between particles which can be treated in perturbation theory.

The point of holography is that it might be possible to discover a dictionary or a way of translating between the “easy” string theory and a “hard” theory with strong interactions. Using this dictionary, it is possible to start from the “hard” theory, translate the calculation into the “easy” gravity analogue, do the calculation, and translate the results back to the “hard” context.

The “hard” theory with strong interactions lives on the (red) boundary of a space with a (green) black hole at the centre.

The diagram above is a sketch of how to visualise this process. The “easy” gravity theory exists in a bulk with a certain number of dimensions, whereas the “hard” theory lives in a space which is one dimension smaller, at the edge (or “boundary”). This is where the term holography comes from: The physical theory is a hologram which is projected from the bulk like R2-D2’s message from Princess Leia.

Most intriguingly, when the “hard” theory has a temperature above absolute zero (which all physical materials must have) the gravity theory contains a black hole at its centre which has an event horizon.

So, the calculation for the complicated experimental quantity that you are interested in on the boundary can be translated through the bulk to the event horizon of the black hole. There, the properties of the theory on the boundary get converted into the properties of space-time near the black hole. This is what the dictionary does. Perturbation theory can then be used to get an approximate answer in that context. Finally, the answer is moved back through the bulk to the boundary where it can be interpreted in the original context.

Of course, the technical details of how to actually do this mathematically are very complicated, but there is one well-understood example of this process.

Quarks are fundamental particles and can be glued together to make protons and neutrons. The particles which do the glueing are called gluons. The gluons and the quarks are strongly interacting and so they fall into the category of “hard” theories. But, there is a well-defined correspondence between a supersymmetric cousin of this theory, which lives in the familiar three spatial dimensions and one time dimension (so, four in total), and an “easy” string theory which lives in a curved five-dimensional space (ten dimensions in all, once the extra compact dimensions are included). This correspondence has been used to derive results which would otherwise not be possible.

One of the current questions for people who work on holography is whether this is just a fortuitous specific case, or whether these correspondences are more general.

In condensed matter, there are also strongly interacting materials which theorists find very difficult to describe. One really important example is the high temperature superconductor materials.

The question is whether a holographic correspondence can be found for a theory that makes predictions about these materials. To put that another way: is there a higher-dimensional, gravity-like theory which gives a theory for a superconductor as its hologram?

A lot of people are looking at this question at the moment.

There are some encouraging things which have been done already. For example, the materials which go superconducting at low temperature also have weird behaviour at higher temperatures where they don’t superconduct. These properties have been calculated within the gravity theory, and the results show some features similar to those seen in experiments.

But there is also a lot that is not known yet. For example, it is very difficult to include the effects of the underlying material crystal, or the existence of the quantum-mechanical spin of the particles. Both of these details will be important for designing new materials which sustain superconductivity at even higher temperatures.

This is really a field which is still in its infancy, but the underlying idea behind it is intriguing: if the theorists working on it can progress to the point where the theory can make predictions, it would be very exciting indeed.


What is high temperature superconductivity?

It was March, 1987. The meeting of the Condensed Matter section of the American Physical Society. It doesn’t sound like much, but this meeting has gone down in history as the “Woodstock of Physics”. Experimental results were shown which proved that superconductivity is possible at a much higher temperature than had ever been thought possible. This result came completely out of left field and captured the imagination of physicists all over the world. It has been a huge area of research ever since.

But why is this a big deal? Superconductors can conduct electricity without any resistance, so it costs no energy and generates no heat. This is different from regular metal wires which get hot and lose energy when electricity passes through them. Imagine having superconducting power lines, or very strong magnets that don’t need to be super-cooled. This would lead to huge energy savings which would be great for the environment and make a lot of technology cost less too.

I guess it makes sense to clarify what “high temperature” means in this context. Most superconductors behave like normal metals at regular temperatures, but if they are cooled far enough (below the “critical temperature”, which is usually called Tc) then their properties change and they start to superconduct. Traditional superconducting materials have a Tc in the range of a few Kelvin, so only a few degrees above absolute zero. These new “high temperature” materials have their Tc at up to 120 Kelvin, so substantially warmer, but still pretty cold by everyday standards. (For what it’s worth, 120K is -153°C.)
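As a sanity check on those numbers, converting between kelvin and degrees Celsius is just a fixed offset. Here’s a minimal Python sketch (the 4.2 K value is the traditional example of mercury; the function name is my own):

```python
def kelvin_to_celsius(t_kelvin):
    """Convert a temperature from kelvin to degrees Celsius."""
    return t_kelvin - 273.15

tc_conventional = 4.2    # a traditional superconductor, e.g. mercury
tc_high = 120.0          # upper end of the "high temperature" materials

print(kelvin_to_celsius(tc_conventional))  # about -269 degrees C
print(kelvin_to_celsius(tc_high))          # about -153 degrees C
```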

But, if we could understand how this ‘new’ type of superconductivity works, then maybe we could design materials that superconduct at everyday temperatures and make use of the technological revolution that this would enable.

Unfortunately, the elephant in the room is that, even after thirty years of vigorous research, physicists currently still don’t really understand why and how this high Tc superconductivity happens.

A piece of superconducting BSCCO levitating due to the Meissner effect. (Image stolen from Wikimedia Commons.)

I have written about superconductivity before, but that was the old “low temperature” version. What happens in a superconductor is that electrons pair up into new particles called “Cooper pairs”, and these particles can move through the material without experiencing collisions which slow them down. In the low temperature superconductors, the glue that holds the pairs together is made from vibrations of the crystal structure of the material itself.

But this mechanism of lattice vibrations (phonons) is not what happens in the high temperature version.

Atomic structure of BSCCO. (Image stolen from chemtube3d.com).

To explain the possible mechanisms, it’s important to see the atomic structure of these materials. Above is a sketch of one high Tc superconductor, called bismuth strontium calcium copper oxide, or BSCCO (pronounced “bisco”) for short. The superconducting electrons are thought to live in the copper oxide (CuO2) layers.

One likely scenario is that instead of the lattice vibrations gluing the Cooper pairs together, it is fluctuations of the spins of the electrons that does it. Of course, electrons can interact with each other because they are electrically charged (and like charges repel each other), but spins can interact too. This interaction can either be attractive or repulsive, strong or weak, depending on the details.

In this case, it is thought that the spins of the electrons on neighbouring copper atoms tend to point in opposite, alternating directions. But these spins can rotate a bit due to temperature or random motion. When they do this, it changes the interactions with other nearby spins and can create ripples in the spins on the lattice. In an analogy with the phonons that describe ripples in the positions of the atoms, these spin ripples can be described as particles called magnons. It is these that provide the glue: Under the right conditions, they can cause the electrons to be attracted to each other and form the Cooper pairs.

Another possibility comes from the layered structure. If electrons in the CuO2 layers can hop to the strontium or calcium layers, and then hop back again at a different point in space, this could induce correlations between the electrons that would result in superconductivity. (I appreciate that it’s probably far from obvious why this would work, but unfortunately, the explanation is too long and technical for this post.)

In principle, these two different mechanisms should give measurable effects that are slightly different from each other, because the symmetry associated with the effective interactions is different. This would allow experimentalists to tell them apart and make a conclusive statement about what is going on. Naturally, these experiments have been done but so far, there is no consensus within the results. Some experiments show symmetry properties that would suggest the magnons are important, others suggest the interlayer hopping is important. Personally, I tend to think that the magnons are more likely to be the reason, but it’s really difficult to know for sure and I could well be wrong.

So, we’re kinda stuck and the puzzle of high Tc superconductivity remains one of condensed matter’s most tantalising and most embarrassing enigmas. We know a lot more than we did thirty years ago, but we are still a very long way from having superconductors that work at everyday temperatures.

How does a transistor work?

The world would be a very different place if the transistor had never been invented. They are everywhere. They underpin all digital technology, they are the workhorses of consumer electronics, and they can be bewilderingly small. For example, the latest Core i7 microchips from Intel have over two billion transistors packed onto them.

But what are they, and how do they work?

Left: An old-school transistor with three ‘legs’. Right: An electron microscope image of one transistor on a microchip.

In some ways, they are beguilingly simple. Transistors are tiny switches: they can be on or off. When they are on, electric current can flow through them, but when they are off it can’t.

The most common way this is achieved is in a device called a “field effect transistor”, or FET. It gets this name because a small electric field is used to change the device from its conducting ‘on’ state to its non-conducting ‘off’ state.

At the bottom of the transistor is the semiconductor substrate, which is usually made out of silicon. (This is why silicon is such a big deal in computing.) Silicon is a fantastic crystal, because by adding a few atoms of another type of element to its crystal, it can be given mobile negative or positive charge carriers. To explain why, we need to turn to chemistry! A silicon atom has 14 electrons in it, but ten of these are bound tightly to the atomic nucleus and are very difficult to move. The other four are much more loosely bound and are what determines how it bonds to other atoms.

When a silicon crystal forms, the four loose electrons from each atom form bonds with the electrons from nearby atoms, and the geometry of these bonds is what makes the regular crystal structure. However, it is possible to take out a small number of the silicon atoms and replace them with some other type of atom. If this is done with an atom like phosphorus or nitrogen which has five loose electrons, then four of them are used to make the chemical bonds and one is left over. This left-over electron is free to move around the crystal easily, and it gives the crystal mobile negative charge. In the physics language, the silicon has become “n-doped”.

But, if some silicon atoms are replaced by something like boron or aluminium which has only three loose electrons, the atom has to ‘borrow’ an extra electron from the rest of the crystal. The resulting missing electron, or ‘hole’, behaves like a mobile positive charge. This is called “p-doped”.
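The doping bookkeeping in the last two paragraphs boils down to counting loose electrons relative to silicon’s four. Here’s a toy sketch (the dictionary just lists the elements mentioned above; real dopant behaviour is of course more subtle than this):

```python
# Number of loose (valence) electrons for the elements mentioned in the text.
VALENCE = {"silicon": 4, "phosphorus": 5, "nitrogen": 5, "boron": 3, "aluminium": 3}

def doping_type(dopant):
    """Classify a dopant by comparing its loose electrons to silicon's four."""
    extra = VALENCE[dopant] - VALENCE["silicon"]
    if extra > 0:
        return "n"      # a left-over electron: mobile negative charge
    if extra < 0:
        return "p"      # a missing electron (a 'hole'): mobile positive charge
    return "none"       # same valence as silicon: no doping effect

print(doping_type("phosphorus"))  # n
print(doping_type("boron"))       # p
```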

Sketches of a FET in the ‘off’ state (left) and in the ‘on’ state (right) when the gate voltage is applied.

Okay, so much for the chemistry, now back to the transistor itself. Transistors have three connections to the outside world, which are usually called the source, drain, and gate. The source is the input for electric current, the drain is the output, and the gate is the control which determines if current can flow or not.

The source and drain both connect to a small area of n-doped silicon (i.e. they have extra electrons) which can provide or collect the electric current which will flow through the switch. The central part of the device, called the “channel”, is p-doped, which means that there are not enough electrons in it.

Now, here’s where the quantum mechanics comes in!

A while back, I described the band structure of a material. Essentially, it is a map of the quantum mechanical states of the material. If there are no states in a particular region, then electrons cannot go there. The “Fermi energy” is the energy at which states stop being filled. I’ve drawn a rough version of the band structure of the three regions in the diagram below. In the n-doped regions, the states made by the extra electrons are below the Fermi surface and so they are filled. But in the p-doped channel, the unfilled extra states are above the Fermi energy. This makes a barrier between the source and drain and stops electrons from moving between the two.

Band diagrams for a FET. In the ‘on’ state, the missing electron levels are pushed below the Fermi surface and form a conducting channel.

Now for the big trick. When a voltage is applied to the gate, it makes an electric field in the channel region. The extra energy that the electrons get from this field shifts the quantum states in the channel region to a different energy. This is shown on the right-hand side of the band diagrams. Now the extra states are moved below the Fermi energy, but the silicon can’t create more electrons, so these unfilled states make a path through which the extra electrons in the source can move to the drain. This removes the barrier, meaning that applying the electric field to the channel region opens up the device to carrying current.

In the schematic of the device above, the left-hand sketch shows the transistor in the off state with no conducting channel in the p-doped region. The right-hand sketch shows the on-state, where the gate voltage has induced a conducting channel near the gate.

So, that’s how a transistor can turn on and off. But it’s a long leap from there to the integrated circuits that power your phone or laptop. Exactly how those microchips work is another subject, but briefly, the output from the drain of one transistor can be linked to the source or the gate of another one. This means that the state of a transistor can be used to control the state of another transistor. If they are put together in the right way, they can process information.
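To make that last point a little more concrete, here is a toy Python sketch that treats each transistor as an ideal switch and wires switches up into logic gates. It ignores all the voltages and physics, and the function names are mine, but it shows why “transistors controlling transistors” is enough to process information:

```python
def transistor(gate_on):
    """An idealised FET: current flows only when the gate voltage is applied."""
    return gate_on  # True = conducting ('on'), False = blocking ('off')

def and_gate(a, b):
    # Two transistors in series conduct only if both gates are on.
    return transistor(a) and transistor(b)

def nand_gate(a, b):
    # Inverting the series pair (e.g. with a pull-up arrangement) gives NAND,
    # which is universal: any logic circuit can be built from NAND gates alone.
    return not and_gate(a, b)

def not_gate(a):
    # Tying both NAND inputs together makes an inverter.
    return nand_gate(a, a)

print(nand_gate(True, True))   # False
print(not_gate(False))         # True
```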

What is peer review?

Chances are you’ve heard of peer review. The media often use it as an adjective to indicate a respectable piece of research. If a study has not been peer reviewed then this is taken as a shorthand that it might be unreliable. But is that a fair way of framing things? How does the process of peer review work? And does it do the job?

So, first things first – what is peer review? Essentially, it’s a stamp of approval by other experts in the field. Knowledgeable people will read a paper before it’s published and critique it. Usually, a paper won’t be published unless these experts are fairly satisfied that the paper is correct and measures up to the standards of importance or “impact” that the particular journal requires.

The specifics of the peer review process vary between different fields and different journals, but here is how things typically go in physics. Usually, a journal editor will send the paper to two or more people. These could be high profile professors or Ph.D students or anyone in between, but they are almost always people who work on similar topics.

The reviewers then read the paper carefully, and write a report for the editor, including a recommendation of whether the paper should be published or not. Often, they will suggest modifications to make it better, or ask questions about parts that they don’t understand. These reports are sent to the authors, who then have a chance to respond to the inevitable criticisms of the referees, and resubmit a modified manuscript.

After resubmission, many things can happen. If the referee’s recommendations are similar, then the editor will normally send the new version back to them so they can assess whether their comments and suggestions have been adequately addressed in the new version. They will then write another report for the editor.

But if the opinions of the referees are split, then the editor might well ask for a third opinion. This is the infamous Reviewer 3, and their recommendation is often crucial. In fact, it’s so crucial that the very existence of Reviewer 3 has led to internet memes, a life on Twitter (see #reviewer3), and mention in countless satires of academic life including this particularly excellent one by Kelly Oakes of BuzzFeed (link).

Credit: Kelly Oakes / BuzzFeed

But, once the editor has gathered all the reports and recommendation, they will make a final decision about whether the paper will be published or not. For the authors, this is the moment of truth!

When it works, this can be a constructive process. I’ve certainly had papers that have been improved by the suggestions and feedback. But the process does not always work well. For example, not all journals always carry out the review process with complete rigour. The existence of for-profit, commercial journals who charge authors a publication fee is a subject for another day, but in those journals it is easy to believe that there is a pressure on the editors to maximise the number of papers that they accept. Then it’s only natural that review standards may not be well enforced.

And the anonymity that reviewers enjoy can lead to bad outcomes. By definition, reviewers have to be working in a similar field to the authors of the paper otherwise they would not be sufficiently expert to judge the merits of the work. So sometimes a paper is judged by competitors. There are many stories of papers being deliberately slowed down by referees, perhaps while they complete their own competing project. Or of times when a referee might stubbornly refuse to recommend publication in spite of good arguments. And there are even stories of outright theft of ideas and results during review.

Finally, there is also the possibility of straightforward human error. Two or three reviewers is not a huge number and so it can be hard to catch the mistakes. And not all reviewers are completely suitable for the papers they read. Review work is almost always done on a voluntary basis and so it can be hard for editors to find a sufficient number of people who are willing to give up their time.

I can think of a few times when I have not really understood the technical aspects of a paper, or I have not been sufficiently close to the field to judge whether the work is important. Perhaps I should have declined to review those manuscripts. Or maybe it’s okay because the paper should not be published if it cannot convince someone in an adjacent field of the merits of the work. There are arguments both ways.

The fact is that sometimes things slip through the net. Papers can be published with errors, or even worse, with fabricated data or plagiarism. There is no foolproof system for avoiding this, so in my opinion, robust post-publication review is important too. Exactly how to implement that is a tricky business though.

But, to sum up, my opinion is that peer review is an important – but not infallible – part of the academic process. Just because a paper has passed through this test does not automatically mean that it is correct or the last word on a subject, but it is a mark in its favour.

What is graphene and why all the hype?

There’s a decent chance you’ve heard of graphene. There are lots of big claims and grand promises made about it by scientists, technologists, and politicians. So what I thought I’d do is to go through some of these claims and almost ‘fact-check’ them so that the next time you hear about this “wonder material” you know what to make of it.

Let’s start at the beginning: what is graphene? It’s made out of carbon atoms arranged in a crystal. But what sets it apart from other crystals of carbon atoms is that it is only one atom thick (see the picture below). It’s not quite the thinnest thing that could ever exist because maybe you could make something similar using atoms that are smaller than carbon (for example, experimentalists can make single atomic layers of certain types of helium), but given that carbon is the sixth lightest element, it’s really quite close!

The crystalline phases of carbon.

Diamond and graphite are also crystals made only of carbon, but they have a different arrangement of the carbon atoms, and this means they have very different properties.

So, what has been claimed about graphene?

Claim one: the “wonder material”

Graphene has some nice basic properties. It’s really strong and really flexible. It conducts electricity and heat really well. It is simultaneously almost transparent but absorbs light really strongly. It’s almost impermeable to gases. In fact, most of the proposals for applications of graphene in the Real World™ involve these physical and mechanical superlatives, not the electronic properties which in some ways are more interesting for a physicist.

For example, its conductivity and transparency mean that it could be the layer in a touch screen which senses where a finger or stylus is pressing. This could combine with its flexibility to make bendable (and wearable) electronics and displays. But for the moment, it’s “only” making current ideas work better, it doesn’t add any fundamentally new technology that we didn’t have before. If that’s your definition of a “wonder material” then okay, but personally I’m not quite convinced the label is merited.

Claim two: Silicon replacement

In the first few years after graphene was made, there was a lot of excitement that it might be used to replace silicon in microchips and make smaller, faster, more powerful computers. It fairly quickly became obvious that this wouldn’t happen. The reason for this is to do with how transistors work. That’s a subject that I want to write more about in the future, but roughly speaking, a transistor is a switch that has an ‘on’ state where electrical current can flow through it, and an ‘off’ state where it can’t. The problem with graphene is turning it off: Current would always flow through! So this one isn’t happening.

Graphene electronics might still be useful though. For example, when your phone transmits and receives data from the network, it has to convert the analogue signal in the radio waves from the mast into a digital signal that the phone can process. Graphene could be very good for this particular job.

Claim three: relativistic physics in the lab

This one is a bit more physicsy so takes a bit of explaining. In quantum mechanics, one of the most important pieces of information you can have is how the energy of a particle is related to its momentum. This is the ‘band structure’ that I wrote about before. In most cases, when electrons move around in crystals, their energy is proportional to their momentum squared. In special relativity there is a different relation: The energy is proportional to just the momentum, not to the square. For example, this is true for light or for neutrinos. One thing that researchers realized very early on about graphene is that electrons moving around on the hexagonal lattice had an ‘energy equals momentum’ band structure, just like in relativity. Therefore, the electrons in graphene behave a bit like neutrinos or photons. Some of the effects of this have been measured in experiments, so this is true.
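In symbols, the two kinds of band structure mentioned above look like this (here m* is the electron’s effective mass in the crystal, and v_F is graphene’s Fermi velocity, which plays the role of the speed of light):

```latex
% Ordinary electrons in most crystals: energy quadratic in momentum
E(p) = \frac{p^2}{2m^*}

% Electrons in graphene near the corners of its hexagonal Brillouin zone:
% energy linear in momentum, like a massless relativistic particle
E(p) = v_F \, |p|
```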

Claim four: Technological revolution

One other big problem that has to be overcome is that graphene is currently very expensive to make. And the graphene that is made at industrial scale tends to be quite poor quality. This is an issue that engineers and chemists are working really hard on. Since I’m neither an engineer nor a chemist I probably shouldn’t say too much about it. But what is definitely true is that the fabrication issues have to be solved before you’ll see technology with graphene inside it in high street stores. Still, these are clever people so there is every chance it will still happen.

Footnote

Near the top, I said graphene simultaneously absorbs a lot of light and is almost transparent. This makes no sense on the face of it!! So let me say what I mean. To be specific, a single layer of graphene absorbs about 2.3% of visible light that lands on it. Considering that graphene is only one layer of atoms, that seems like quite a lot. It’s certainly better than any other material that I know of. But at the same time, it means that it lets through about 97.7% of light, which also seems like a lot. I guess it’s just a question of perspective.
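If you want to play with those percentages, here’s a tiny Python sketch that stacks layers, assuming each layer absorbs its 2.3% independently (which ignores interference effects, so treat it as a rough estimate):

```python
ABSORPTION_PER_LAYER = 0.023  # one graphene layer absorbs ~2.3% of visible light

def transmission(n_layers):
    """Fraction of light that makes it through n stacked graphene layers."""
    return (1.0 - ABSORPTION_PER_LAYER) ** n_layers

print(transmission(1))    # ~0.977: one layer lets roughly 97.7% through
print(transmission(100))  # ~0.10: a hundred-layer stack blocks most of the light
```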

What is condensed matter theory, and why is it so hard?

This post is about the general approach to physics that people who work on the theory of condensed matter take. As I’ll explain, it is basically impossible to calculate anything exactly, and so the whole field relies on choosing smart approximations that allow you to make some progress. Exactly what kind of approximations you make depends on what you want to achieve, and I’ll describe some of the common ones below.

But before that, what is ‘condensed matter physics’? Roughly speaking, it refers to anything that is a solid or a liquid (and also some gases) that you can see in the Real World around you. So it’s not stars and galaxies and space exploration, it’s not tiny sub-atomic particles like quarks and Higgs bosons like they talk about at CERN, and it’s not what happened in the first fractions of a second after the Big Bang, or what it’s like inside a black hole. But it is about the materials that make the chip inside your phone or power a laser, or about making batteries store energy more efficiently, or finding new catalysts that make industrial chemical production cheaper (okay, so that one crosses into chemistry as well, but the lines are fuzzy!), or it’s about making superconductivity work at higher temperatures.

Why is it so hard?

Really, what makes condensed matter physics different from many other types of physics is that in many situations, the behaviour of the materials is governed by how many, many particles interact with each other. Think about a small piece of metal for instance: You have millions and millions of atoms that form some bonds which give it a solid shape. Then some of the electrons in those atoms disassociate themselves and become a bit like a liquid that can move around inside the metal and conduct electricity or heat, or make the metal magnetic. In a small piece of metal there will be 10^22 atoms. (That notation means that the number is a one with twenty-two zeroes after it. So it’s a lot.) And all of these atoms have an electric field which is felt by all the other atoms so that they all interact with each other. It is, in principle, possible to write down some equations which describe this, but there is no way that anyone can solve these equations and work out exactly how all these atoms and electrons behave. I don’t just mean that it’s very difficult, I mean that it is mathematically proven to be impossible!
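One way to get a feeling for why this is hopeless: even if each particle could only do two things (say, a spin pointing up or down), the number of possible configurations grows as 2^N. A quick Python illustration:

```python
def n_states(n_particles):
    """Number of configurations of n two-state (spin-1/2) particles."""
    return 2 ** n_particles

print(n_states(10))              # 1024: a computer handles this easily
print(n_states(300) > 10 ** 80)  # True: already more configurations than
                                 # there are atoms in the observable universe
```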

Watcha gunna do?

So, that raises the question: what can we do? It is easy to connect a bit of metal to a battery and a current meter and see that it can conduct electricity, but how do we describe that theoretically? There are several different approaches to making the approximations needed, so I’ll try to explain them now.

  1. Use symmetry. By the magic of mathematics, the equations can often be simplified if you know something about the symmetry of the material you want to investigate. For example, the atoms in many metals sort themselves into a crystal lattice of repeated cubes. Group theory can then be used to reduce the complexity of the equations in a very helpful way. For instance, it might be possible to tell whether a material will conduct electricity or not even at this level of approximation. But this symmetry analysis contains an assumption because in reality materials won’t completely conform to the symmetry. They may have impurities in them, or the crystal structure might have irregularities, for example. So this isn’t a magic bullet. And also this might well not reduce the equations enough that they can be solved, so it is usually just a first step.

    Symmetries of a cube. Image stolen from this page, which looks pretty cool!
  2. From this point, it is often possible to make simplifying assumptions so that the mathematically impossible theory becomes something that can be solved. Of course, by doing this you lose quite a lot of detail. It’s like the “spherical cows” analogy. In principle, cows have four legs, a tail, a head, and maybe some udders. But say you wanted to work out how many cows you could safely fit into a field. You don’t need to know any of that detail, so you can think of the cows as being a sphere which consumes a certain amount of hay each day. You can do something similar about the metal: Instead of keeping track of every detail, you can forget that the atoms have an internal structure (spherical atoms!). Or you could assume that the atoms interact with the electrons in a particularly simple way so that you can focus just on the disassociated electrons. Or you could assume that the electrons don’t interact with each other, but only with the atoms. In the jargon of the field, this general approach is called finding an “effective theory”. These theories can often give quite good estimates of not only whether a material will conduct, but how well it will do it.

    Some spherical cows in a field.
  3. These days, computers are really fast, and they can be used to numerically solve equations that are almost exact. However, computers are not good enough that they can do this for 10^22 atoms, so if you want to keep quite close to the original equations, they might be able to do fifty or so. Maybe a hundred. In the jargon, these methods are called “ab-initio” (from the beginning) because they do not make any approximations unless they absolutely have to. The fact that you can’t treat too many atoms limits what these methods can be applied to. For instance, they can be quite good for molecules, and crystals where the periodic repetition is not too complicated. But for these situations, you can get a level of detail which is simply impossible in the effective theories. So there’s a trade-off. And computers are getting better all the time so this is one area that will see a lot of progress.
  4. The final way that I’ll describe is sort-of the inverse process. Instead of starting from the mathematics, which is impossible to solve directly, you can start from experimental data and try to work backwards towards the theoretical description that gives you the right answer. Sometimes this is used in conjunction with one of the other methods, as a way to give you some clues about what assumptions to make.
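If you like code, the spherical-cow estimate from point 2 can be sketched in a few lines of Python. Every number here (field area, grazing area, hay consumption) is invented purely for illustration:

```python
# Toy "effective theory" estimate: treat each cow as a sphere (in practice,
# a circle of grazing area) and ignore legs, tails, heads, and udders.
# All of the numbers below are made up for illustration.

FIELD_AREA_M2 = 10_000       # a one-hectare field (assumed)
AREA_PER_COW_M2 = 25         # grazing area one "spherical cow" needs (assumed)
HAY_PER_COW_KG_DAY = 12      # daily hay consumption per cow (assumed)

cows = FIELD_AREA_M2 // AREA_PER_COW_M2
hay_per_day = cows * HAY_PER_COW_KG_DAY

print(f"The field holds about {cows} cows, eating {hay_per_day} kg of hay per day")
```

The point is not the numbers, but that the estimate needs nothing about the internal structure of a cow.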

So, that’s how you do theory in condensed matter. Numbers 2 and 4 are basically my day job, on a good day at least!

What is superconductivity?

Most fundamentally, a superconductor is a material which becomes a perfect conductor with no electrical resistance when it gets cold enough. It was first discovered in 1911, when Dutch experimentalists (Heike Kamerlingh Onnes’s group in Leiden) were playing around with a new way of cooling things down, and one of the things they tried was to measure the electrical resistance of various metals as they got colder and colder. Some metals just kept doing what was expected based on how they behave at higher temperatures. But for others (like mercury) the resistance suddenly dropped to zero when the temperature was lowered to within a few degrees of absolute zero: they became perfect conductors. By perfect, I mean that the amount of energy lost as electricity went along the superconducting wire was zero. Nowadays, superconductors are very useful materials and are used in a variety of technologies. For example, they make up the coils of the powerful magnets inside an MRI machine or a maglev train, they allow ultra-precise measurements of magnetic fields in a device called a SQUID (superconducting quantum interference device), and in the future, there is some chance that junctions between different superconductors might be crucial for implementing a quantum computer.
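As a cartoon of what that 1911 measurement looks like, here is a toy model in Python. The critical temperature of roughly 4.2 K for mercury is real, but the normal-state resistance formula is a made-up placeholder:

```python
# Cartoon of the resistance measurement that revealed superconductivity:
# above the critical temperature Tc the metal has ordinary resistance;
# below Tc the resistance is exactly zero. Tc ~ 4.2 K is roughly right
# for mercury, but the linear normal-state formula is invented.

TC_MERCURY_K = 4.2

def resistance(temperature_k: float) -> float:
    if temperature_k < TC_MERCURY_K:
        return 0.0                    # superconducting: no resistance at all
    return 0.01 * temperature_k       # placeholder normal-state behaviour

for t in (300.0, 77.0, 10.0, 4.0, 1.0):
    print(f"T = {t:5.1f} K  ->  R = {resistance(t):.3f} ohm")
```

The striking feature is the discontinuity: the resistance does not smoothly fade away, it vanishes abruptly at Tc.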

So, how does this work?

Before I try to explain that, there is one crucial bit of terminology that I have to introduce. The types of particles that make up the universe can be classified into two types: One type is called fermions, the other type is called bosons. The big difference between these two types of particles is that for fermions, only one particle can ever be in a particular quantum state at any given time. For bosons, many particles can all be in the same state at the same time. The particles that carry electricity in metals are electrons, and they are a type of fermion. But when two fermions pair up and form a new particle, this new particle is a type of boson. Superconductivity happens when the electrons are able to form these boson pairs, and these pairs then all occupy the lowest possible energy state. In this state, they behave like a big soup of charge which can move without losing energy, and this gives the zero resistance for electrical current which we know as superconductivity.
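The fermion/boson difference can be made concrete with the textbook Fermi–Dirac and Bose–Einstein occupation formulas, which give the average number of particles sitting in a state of energy E. A small sketch, with the chemical potential set to zero and everything in arbitrary units, purely for illustration:

```python
import math

# Average occupation of a single quantum state at energy e and temperature t
# (chemical potential set to zero, k_B = 1, arbitrary units throughout).

def fermi_dirac(e: float, t: float) -> float:
    # Fermions: the occupation can never exceed 1 (Pauli exclusion).
    return 1.0 / (math.exp(e / t) + 1.0)

def bose_einstein(e: float, t: float) -> float:
    # Bosons: the occupation is unbounded and grows without limit as e -> 0,
    # which is the seed of the condensed "soup" of pairs described above.
    return 1.0 / (math.exp(e / t) - 1.0)

for e in (2.0, 1.0, 0.1, 0.01):
    print(f"E = {e:5.2f}: fermions {fermi_dirac(e, 1.0):8.3f}, "
          f"bosons {bose_einstein(e, 1.0):8.3f}")
```

As the energy drops towards zero, the fermion occupation saturates below 1 while the boson occupation grows enormously: many bosons happily pile into the same low-energy state.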

This leaves a big unanswered question: How do the electrons pair up in the first place? If you remember back to high school, you probably learned that two objects with the same charge will repel each other, but that opposite charges attract. All electrons have negative charge and so should always repel, so how do they get close enough to form these pairs? The answer involves the fact that the metal in which the electrons are moving also contains lots of atoms. These atoms are arranged in a regular lattice pattern, but they have positive charge because they have lost some of their electrons. (This is where the free electrons that can form the pairs come from.) So, as an electron moves past an atom, there is an attractive force between them, and the atom moves slightly towards the electron. Because electrons are small and light, they can move through the lattice quickly. The atoms are big and heavy, so they move slowly and it takes them some time to return to their original positions in the lattice after the electron has gone by. So, as the electron moves through the lattice, it leaves a ripple behind it. A second electron some distance from the first now feels the effect of this ripple, and because the atoms are positively charged, it is attracted to it. So, the second electron is indirectly attracted to the first, making them move together as a pair.

In the language of quantum mechanics, these ripples of the atoms are called phonons. (The name comes from the fact that these ripples are also what allows sound to travel through solids.) From this point of view, the first electron emits a phonon which is absorbed by the second electron, effectively gluing them together. But why does the metal have to get very cold before this phonon glue can be effective? The reason is that heat in a crystal lattice can also be thought of in terms of phonons. When the metal is warm, there are lots and lots of phonons flying around all over the place, and it’s too chaotic for the electrons to feel the influence of just the phonons that were emitted by other electrons. As the metal cools down, the number of thermal phonons decreases, leaving only the ones that came from the other electrons, which allows the glue to work.

Two disclaimers

Two quick disclaimers before I finish.

Number one: I glossed over one inconvenient fact when I described the electrons and atoms interacting with each other. I made it sound like they were small particles moving around like billiard balls. For the atoms, this is a reasonable picture because they pretty much have to stay near their lattice positions. But the electrons are not like that at all. Perhaps you’ve heard of wave-particle duality? In quantum mechanics, small objects like electrons are simultaneously a bit like particles and a bit like waves. That’s true here for the electrons, so they are not little billiard balls but are more wave-like. This makes it more difficult to have a good mental picture of what they’re doing, but the basics of the mechanism are still true.

Secondly, this post has been about the type of superconductivity that occurs in metals. The temperature associated with this kind of superconductivity is quite low – a few degrees above absolute zero. But there are other kinds of superconductivity which can occur at much higher temperatures. (Imaginatively, this is usually called ‘high temperature superconductivity’!) This works in a very different way to what I’ve talked about here. It’s also not very well understood and is an active area of research. Perhaps I’ll write something about that another time.