What is graphene and why all the hype?

There’s a decent chance you’ve heard of graphene. There are lots of big claims and grand promises made about it by scientists, technologists, and politicians. So what I thought I’d do is to go through some of these claims and almost ‘fact-check’ them so that the next time you hear about this “wonder material” you know what to make of it.

Let’s start at the beginning: what is graphene? It’s made out of carbon atoms arranged in a crystal. But what sets it apart from other crystals of carbon atoms is that it is only one atom thick (see the picture below). It’s not quite the thinnest thing that could ever exist, because you could in principle make something similar using atoms smaller than carbon (experimentalists can, for example, make single layers of certain helium isotopes), but given that carbon is the sixth-lightest element, it’s really quite close!

Diamond and graphite are also crystals made only of carbon, but they have a different arrangement of the carbon atoms, and this means they have very different properties.

So, what has been claimed about graphene?

Claim one: the “wonder material”

Graphene has some nice basic properties. It’s really strong and really flexible. It conducts electricity and heat really well. It is simultaneously almost transparent and yet a remarkably strong absorber of light. It’s almost impermeable to gases. In fact, most of the proposals for applications of graphene in the Real World™ involve these physical and mechanical superlatives, not the electronic properties, which in some ways are more interesting for a physicist.

For example, its conductivity and transparency mean that it could be the layer in a touch screen which senses where a finger or stylus is pressing. This could combine with its flexibility to make bendable (and wearable) electronics and displays. But for the moment, it’s “only” making current ideas work better; it doesn’t add any fundamentally new technology that we didn’t have before. If that’s your definition of a “wonder material” then okay, but personally I’m not quite convinced the label is merited.

Claim two: Silicon replacement

In the first few years after graphene was made, there was a lot of excitement that it might be used to replace silicon in microchips and make smaller, faster, more powerful computers. It fairly quickly became obvious that this wouldn’t happen. The reason for this is to do with how transistors work. That’s a subject that I want to write more about in the future, but roughly speaking, a transistor is a switch that has an ‘on’ state where electrical current can flow through it, and an ‘off’ state where it can’t. The problem with graphene is turning it off: graphene has no band gap, so current would always flow through! So this one isn’t happening.

Graphene electronics might still be useful though. For example, when your phone transmits and receives data from the network, it has to convert the analogue signal in the radio waves from the mast into a digital signal that the phone can process. Graphene could be very good for this particular job.

Claim three: relativistic physics in the lab

This one is a bit more physicsy so it takes a bit of explaining. In quantum mechanics, one of the most important pieces of information you can have is how the energy of a particle is related to its momentum. This is the ‘band structure’ that I wrote about before. In most cases, when electrons move around in crystals, their energy is proportional to their momentum squared. In special relativity there is a different relation: the energy is proportional to just the momentum, not to its square. For example, this is true for light or for neutrinos. One thing that researchers realized very early on about graphene is that electrons moving around on its hexagonal lattice have an ‘energy equals momentum’ band structure, just like in relativity. Therefore, the electrons in graphene behave a bit like neutrinos or photons. Some of the effects of this have been measured in experiments, so this one is true.
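To make the contrast concrete, here is a minimal numerical sketch of the two kinds of band structure. The masses, velocities, and momentum values are arbitrary illustrative numbers, not real graphene parameters:

```python
import numpy as np

# Illustrative units: mass and velocity set to 1; momenta are arbitrary.
m = 1.0   # effective mass of the electron (ordinary crystal)
v = 1.0   # a constant speed, playing the role of the speed of light
          # (in graphene this role is played by the Fermi velocity)

p = np.linspace(0.0, 2.0, 5)     # a few momentum values

E_parabolic = p ** 2 / (2 * m)   # most crystals: energy ~ momentum squared
E_linear = v * np.abs(p)         # graphene: energy ~ momentum, like relativity

# Near p = 0 the parabolic band flattens out, while the linear band keeps a
# constant slope -- this is graphene's famous "Dirac cone".
print(E_parabolic)
print(E_linear)
```

Near zero momentum the difference is stark: the parabolic band becomes flat, while the linear band always has the same slope.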

Claim four: Technological revolution

One other big problem that has to be overcome is that graphene is currently very expensive to make. And the graphene that is made at industrial scale tends to be quite poor quality. This is an issue that engineers and chemists are working really hard on. Since I’m neither an engineer nor a chemist I probably shouldn’t say too much about it. But what is definitely true is that the fabrication issues have to be solved before you’ll see technology with graphene inside it in high street stores. Still, these are clever people, so there is every chance it will still happen.

Footnote

Near the top, I said graphene simultaneously absorbs a lot of light and is almost transparent. This makes no sense on the face of it! So let me say what I mean. To be specific, a single layer of graphene absorbs about 2.3% of the visible light that lands on it. Considering that graphene is only one layer of atoms, that seems like quite a lot. It’s certainly more absorption per atomic layer than any other material that I know of. But at the same time, it means that it lets through about 97.7% of the light, which also seems like a lot. I guess it’s just a question of perspective.
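Since the absorption compounds layer by layer, you can estimate the transmission of a stack of graphene layers with a one-liner. This is a back-of-the-envelope sketch that ignores reflection and interference between layers:

```python
# Each layer absorbs ~2.3% of the light reaching it, so a stack of N layers
# transmits roughly 0.977^N (ignoring reflections and interference).
transmission = 1 - 0.023

for n in (1, 10, 30):
    print(n, transmission ** n)
```

Ten layers already absorb about a fifth of the light, and thirty layers absorb roughly half, so "almost transparent" really is a single-layer statement.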

Why do some materials conduct electricity while others don’t?

Can you tell at a glance how the electrons in a material behave? Amazingly, the answer is “yes”, and in this post I’ll explain how.

I want to introduce the concept of something called ‘band structure’ because it is an idea that underpins a lot of the quantum mechanics of electrons in real materials. In particular, the band structure of a material can make it really easy to tell whether the material is a good conductor of electricity or not. So, here goes.

To describe how electrons behave in a particular material, a good place to start is by working out what quantum states they are allowed to be in. In essence, the band structure is simply a map of these allowed quantum states. One place where things can be a bit confusing is the coordinates that are used to draw this map. Band structure uses the momentum of the quantum state as its coordinate, and gives the energy of that state at each point.

The reason for this is that the momentum and energy of the quantum states are linked to each other so it just makes sense to draw things this way. But why not use the position of the quantum state? This is because position and momentum cannot both be known at the same time due to Heisenberg’s Uncertainty Principle. If the momentum is known very accurately then the position must be completely unknown.

In fact, there’s even more to it than that. Most solids have a periodic lattice structure and this periodicity means that only certain momentum values are important. Roughly speaking, if the size of the repeating pattern in the lattice has length a, then there is a repeating pattern of allowed energy states in momentum with length 1/a. This means that we can draw the map of the allowed quantum states in only the first of these zones. This zone has a finite size, which is very helpful when trying to draw it!
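Here is a tiny sketch of that periodicity in action, folding any momentum back into the first zone. The lattice spacing is a made-up number, and note that in the usual convention the repeating momentum scale is 2π/a, i.e. the 1/a above up to a factor of 2π that physicists often absorb into the units:

```python
import math

a = 2.0               # hypothetical lattice spacing
G = 2 * math.pi / a   # size of the repeating zone in momentum space

def fold_to_first_zone(k):
    """Map any momentum k into the first zone, the interval (-G/2, G/2]."""
    k = k % G          # use the periodicity to shift k into [0, G)
    if k > G / 2:      # re-centre the zone on zero
        k -= G
    return k

# Momenta that differ by a whole number of zones are physically equivalent:
print(fold_to_first_zone(0.3 * G))          # already inside the zone
print(fold_to_first_zone(0.3 * G + 2 * G))  # folds back to the same value
```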

The band structure of silicon. (Picture credit: Dissertation by Wilfried Wessner, TU Wien.)

Let’s take silicon as an example because it’s a really important material since a lot of electronics are made from it. The picture above shows the band structure (left) and the shape of the first repeating zone of allowed momenta (right) of silicon. The zone of allowed momenta has quite a complicated shape which is related to the crystal structure of the silicon. Some of the important points in that zone are labeled, for example, the center of the zone is called the Γ point (pronounced “gamma point”), while the center of the square face at the edge of the zone is the X point. It’s impossible to draw all the allowed states at every momentum point in a 3D zone, so what is usually done is to draw the allowed quantum states along certain lines between these important points, and that is what is on the left of the picture. You can probably see that these allowed states form bands, which is where the name ‘band structure’ comes from.

There’s one more concept that is really important, called the “Fermi surface”. Electrons are fermions, and so they are allowed to occupy these quantum states so that there is at most one electron in each state. In nature, the overwhelming tendency is for the total energy of a system to be minimized, so the electrons fill up the quantum states, starting from the bottom, until every electron has its own state. There are never enough electrons to fill all the allowed quantum states, and the energy of the last filled (or first empty) states is called the Fermi energy. The set of states at that energy forms the Fermi surface: in a three-dimensional material, this cutoff between filled and empty states is a two-dimensional surface.
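The filling procedure is simple enough to write down directly. This toy sketch uses made-up state energies, one electron per state, and ignores spin degeneracy:

```python
# Hypothetical energies of the allowed quantum states (arbitrary units).
state_energies = [0.7, 0.1, 0.4, 0.9, 0.2, 0.5, 1.3, 0.3]
n_electrons = 5

# Fill from the bottom: sort the states and occupy the lowest ones,
# one electron per state.
filled = sorted(state_energies)[:n_electrons]
fermi_energy = filled[-1]   # energy of the last filled state

print(fermi_energy)  # -> 0.5
```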

So, how does knowing the band structure help us to understand the electronic properties of a material? As an example, let’s think about whether the material conducts electricity well or not. It turns out that for electrical conduction, most of the quantum states of the electrons play no role at all. The important ones are those near the Fermi surface.

To conduct electricity, an electron has to jump from its state below the Fermi surface to one above it, where it is free to move around the material. To do this, it has to absorb some energy from somewhere. This usually either comes from an electric field that is driving the electrical current (like a battery or a plug socket), or from the thermal energy of the material itself.

Take a look at the sketches below. They are cartoons of band structures near the Fermi surface (which is shown by the green dotted line). The filled bands are shown by thick blue lines while the empty bands are shown by thin blue lines. In the left-hand cartoon there is a big gap between the filled and empty bands so it’s very difficult for an electron to gain enough energy to make the jump from the filled band to the empty band. That means that a material with a large band gap at the Fermi surface is an insulator – it can’t conduct electricity easily. The middle cartoon shows a material with only a small band gap. That means it’s possible, but kinda difficult for an electron to make the jump and become conducting. Materials with narrow gaps are semiconductors.

The right-hand cartoon shows a material where the Fermi surface goes through one of the bands, so there are both empty states and filled states right at the Fermi surface. This means it’s really easy for an electron to jump above the Fermi surface and become conducting because it takes only a tiny amount of energy to do this. These materials are conductors.

Going back to silicon, we can look at the band structure above and see that there is a gap of about 1 electron volt at the Fermi energy. (The Fermi energy is zero on the y axis). One electron volt is too large an energy for an electron to become conducting by absorbing thermal energy, but small enough that it can be done by an electric field. This means that silicon is a semiconductor – it has a narrow gap.
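You can put rough numbers on "too large an energy for thermal excitation" with the Boltzmann factor exp(−E_gap/kT), a standard back-of-the-envelope estimate of how suppressed the thermal jump across a gap is:

```python
import math

k_B = 8.617e-5   # Boltzmann constant in electron volts per kelvin
T = 300          # room temperature in kelvin
kT = k_B * T     # thermal energy scale, about 0.026 eV

# Boltzmann suppression factor for jumping a gap of E_gap electron volts:
for E_gap in (0.1, 1.1, 5.0):   # narrow gap, silicon-like gap, wide-gap insulator
    print(E_gap, math.exp(-E_gap / kT))
```

For a silicon-like gap of about 1 eV the factor is of order 10⁻¹⁹, which is why thermal energy alone doesn’t make silicon conduct, while a 0.1 eV gap is only suppressed by a factor of about fifty.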

One final question: how do you find the band structure of your favorite material? There is an experimental technique called angle-resolved photoemission spectroscopy (ARPES), where you shine high-energy light at a material, and the photons hitting it cause electrons to be ejected from the surface. These electrons can be caught, and the energy and momentum that they have reflect the energy and momentum of the quantum states they were filling in the material. So by careful measurement you can reconstruct the map of these states.

Another way is to use mathematics to theoretically predict the band structure. There has been a huge amount of work done to come up with accurate ways to go from the spatial definition of a crystal to its band structure with no extra information (so-called first-principles calculations). In some cases, these work very well, but the calculations which do this are often very complicated and require supercomputers to run!

So, that is band structure. An easy way to make a link between complicated quantum mechanics and everyday properties like conduction of electricity.

Justin Trudeau and quantum computing

You’ve probably seen already that clip of Justin Trudeau, the Prime Minister of Canada, explaining to a surprised group of journalists why quantum computing excites him so much. In case you haven’t seen it, here is a link. A number of things strike me about this. Firstly, of course, he’s right: If we can get quantum computing to work then that would be a really, really big deal and it’s worth being excited about! Second, it’s a bit depressing that a politician having a vague idea about something scientific is a surprising exception to the rule. Thirdly, while his point about storage of information is right, there’s a whole lot more that quantum computers can do that he didn’t mention. Of course, that’s fair enough because he wasn’t trying to be comprehensive, but it gives me an opportunity to talk about some of the stuff that he missed out.

Before that, let’s go over exactly what a quantum computer is. As the Prime Minister said, a normal (or “classical”) computer operates using only ones and zeroes which are represented by current flowing or not flowing through a small “wire”. (However, as you might have already read, this might have to change in the future!) A quantum computer is completely different because instead of these binary bits, it has bits which can be in a state that is a mixture of zero and one at the same time. This is like the electron simultaneously going through both slits in the two-slit experiment, or Schrödinger’s famous cat being alive and dead “at the same time”: it’s an example of a quantum mechanical superposition of states. A quantum computer is designed to operate on these quantum states and to take advantage of this indeterminacy, changing them from one superposition to another to do computations. If you can get the quantum bits to become entangled with each other (meaning that the quantum state of one bit will be affected by the quantum state of all the others that it is entangled with) then you can do quantum computing! Exactly how this would work from a technological point of view is a big subject which I’ll probably write about another time, but options that physicists and engineers are working on include using superconducting circuits, very cold gases of atoms, the spins of electrons or atomic nuclei, or special particles called Majorana fermions.
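A qubit and its superposition can be written down in a few lines of linear algebra. This sketch uses the standard Hadamard gate to turn a definite ‘zero’ into an equal mixture of zero and one:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])            # a qubit definitely in state |0>

H = np.array([[1, 1],                  # the Hadamard gate: the standard way
              [1, -1]]) / np.sqrt(2)   # to create an equal superposition

superposition = H @ ket0               # now a mixture of |0> and |1>
probabilities = superposition ** 2     # measurement probabilities

print(probabilities)  # -> [0.5 0.5]: equally likely to read 0 or 1
```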

A big field of study has been to find algorithms that allow this quantum-ness to be used to do things that classical computers can’t. There are a few examples that would really change everyday life if they could be implemented. The first sounds a bit boring on the face of it, but quantum algorithms allow you to search an unsorted list to determine if an item is in it or not (i.e. to find that item) in roughly the square root of the number of steps that classical algorithms need. So, if you want to search the internet for your favourite web site, a quantum google could do this much faster than a classical google. Quantum algorithms can also tell quickly whether all the items in a list are different from each other or not.
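The speed-up in question (Grover’s search algorithm) is quadratic: roughly √N steps instead of N. A toy comparison, treating the steps as order-of-magnitude counts rather than exact gate counts:

```python
import math

# Order-of-magnitude step counts for finding one item in an unsorted list.
for N in (100, 10 ** 6, 10 ** 12):
    classical = N                  # check the entries one by one
    quantum = int(math.sqrt(N))    # Grover's algorithm: ~sqrt(N) steps
    print(N, classical, quantum)
```

For a trillion-entry list that is the difference between a trillion look-ups and about a million.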

Another application is to solve “black box” problems. This has nothing to do with the flight data recorders in aircraft, but is the name given to the following problem. Say you have a set of inputs to a system and their corresponding outputs, but you don’t know what the system does to turn the inputs into the outputs. The system is the black box, and the difficult problem is to determine what operations the system performs on the input. This is important because black box problems occur in many different areas of science, including artificial intelligence, studies of the brain, and climate science. For a classical computer to solve some of these problems exactly would require an exponential number of “guesses”, but a quantum computer could do it in just one “guess”!

But perhaps the most devastating use of a quantum computer is to break the internet. Let me explain this a bit! There is a mathematical theorem which says that every number can be represented as a list of prime numbers multiplied together, and that for each number there is only one such list. For example, $30=2\times 3\times5$, or $247=13\times19$. This matters because most digital security currently depends on the fact that it is very difficult for a classical computer to start with a big number and work out what its prime factors are. The way that most encryption on the internet works is that data is encoded using a big number that is the product of only two prime numbers. In order to decrypt the information again, you need to know what the two prime numbers that give you the big number are. Because it’s hard to work out what the two prime numbers are, it is safe to distribute the big number (your public key) so that anyone can encode information to send to you securely. But only you can decode the information, because only you know what the two primes are (this is your private key). But, if it suddenly becomes easy to factorise the big number into the two primes, then this whole mode of encryption does not work! Every interaction that you have with your bank, your email provider, social media, and online stores could be broken by someone else. The internet essentially wouldn’t be private! Or at least, it wouldn’t be private until a new method for doing encryption is found. This is the main reason why security agencies are working so hard on quantum computing.
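A toy version of the asymmetry makes the point: multiplying two primes is instant, but undoing it by brute force means testing divisors one by one. The primes here are tiny, made-up examples; real keys use primes hundreds of digits long, far beyond any brute-force search:

```python
def smallest_factor(n):
    """Find the smallest prime factor of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n   # n itself is prime

p, q = 101, 113   # two tiny primes (a toy "private key")
n = p * q         # multiplying is instant: n = 11413, the "public key"

print(smallest_factor(n))  # -> 101: undoing the multiplication takes a search
```

Shor’s quantum factoring algorithm is what removes this asymmetry, which is exactly why it worries the security world.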

Finally, I want to quickly mention one application that is a bit more specialised to physics: quantum computers will allow us to simulate quantum systems in a much more accurate way. Currently, the equations that determine how groups of quantum mechanical objects behave and interact with each other pretty much can’t be solved exactly, in part because the quantum behaviour is difficult to model accurately using classical computing. If you have a quantum computer, then part of this difficulty goes away because you can build the quantum interactions into the simulation in a much more natural way, using the entanglement of the quantum bits.

So in summary, Prime Minister Trudeau was right: Quantum computers have the potential to be absolutely amazing and to change society and are really exciting (and possibly slightly scary!) But storing information in a more compact manner is really only the tip of the iceberg.

My new idea

It’s been a while! Part of the reason I’ve not written anything recently is that I’ve been busy preparing a grant proposal which has to be submitted in a few days. This means I’m begging the Swedish funding agency to give me money to spend on researching a new idea that I have been working on for a while. As part of this proposal, I am required to write a description of what I want to do that is understandable by people outside of physics, so I thought I’d share an edited version of it here. Maybe it’s interesting to read about something that might happen in the future, rather than things that are already well known. And it’s an idea that I’m pretty excited about because there’s some chance it might make a difference!

Computing technology is continuously getting smaller and more powerful. There is a rule-of-thumb, called Moore’s law, which encodes this by predicting that the computing power of consumer electronics will double every two years. So far, this prediction has held since the first microprocessors appeared in the early 1970s. However, fundamental limits are about to be reached which will halt this progress. In particular, the individual transistors which make up the chips are becoming so small that quantum mechanical effects will soon start to dominate their operation and fundamentally change how they work. Removing the heat generated by their operation is also becoming hugely challenging.
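Doubling every two years compounds remarkably quickly, which is easy to check. The numbers are illustrations of the rule-of-thumb, not historical transistor counts:

```python
def moore_factor(years):
    """Growth factor if computing power doubles every two years."""
    return 2 ** (years / 2)

print(moore_factor(10))   # -> 32.0, i.e. five doublings in a decade
print(moore_factor(40))   # roughly a million-fold over forty years
```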

A transistor is essentially just a switch that can be either on or off. At the present time, the difference between the on and off state is given by whether an electric current is flowing through the switch or not. If quantum mechanical effects start to dominate transistor operation, then the distinction between the on and off state becomes blurred because current flow becomes a more random process.

In this project, I will investigate a new method of making transistors, using the quantum mechanical properties of the electrons. The theoretical idea is to make two one-dimensional layers (for example, two nanowires) placed close enough to each other that the electrons in the material can interact with each other through Coulomb repulsion. If one of these nanowires has just a few electrons in it, while the other is almost full of electrons, then the electrons in the nearly empty wire can be attracted to the ‘holes’ in the nearly full wire, and they can pair up into new bound particles called excitons. What is special about these excitons is that they can form a superfluid which can be controlled electronically.

This can be made into a transistor in the following way. When the superfluid is absent, the two layers are quite well (although not perfectly) insulated from each other, so it is difficult for a current to flow between them. However, when the superfluid forms, one of the quantum mechanical implications is that it becomes possible to drive a substantial inter-layer current. This difference defines the on and off states of the transistor.

There are some mathematical reasons why one might expect that this cannot work for one-dimensional layers, but I have already demonstrated that there is a way around this. If the electrons can hop from one layer to the other, then the theorem which says that the superfluid cannot form in one dimension is not valid. What I will do next is a systematic investigation of lots of different types of one-dimensional materials to determine the best place for experimentalists to look for this superfluid. I will use approximate theories for the behaviour of electrons in nanowires or nanoribbons, carbon nanotubes, and core-shell nanowires to determine the temperature at which the superfluid can form for these different materials. When the superfluid is established, it can be described by a hydrodynamic theory, which treats it as a large-scale object governed by the simple equations that describe the flow of liquids. Analysing this theory will reveal information about the properties of the superfluid and allow optimisation of the operation of the switch. Finally, in reality, no material can be fabricated with perfect precision, so I will examine how imperfections will be detrimental to the formation of the superfluid to establish how accurate the production techniques need to be.

Another benefit of this superfluid is that it can conduct heat very efficiently. This means that it may have applications in cooling and refrigeration. I will also investigate the quantitative advantages that this may have over traditional thermoelectric materials. In both of these applications, the fact that the superfluid can exist in a one-dimensional material is a very advantageous factor for designing devices. In particular, because they are so small in two directions, it gives a huge amount of freedom for placing transistors or heat-conducting channels in optimal arrangements that would be impossible with two- or three-dimensional materials.

One final thing for some context: The picture at the top of the page shows a core-shell nanowire that was grown by some physicists in Lund, Sweden. It’s made out of two different types of semiconductor: gallium antimonide (GaSb) in the core, and indium arsenide antimonide (InAsSb) in the shell. The core region is the nearly full layer that contains the ‘holes’, while the shell is the nearly empty layer with the electrons. The vertical white line on the left of the image is a scale bar that is 100nm long (that’s one ten-thousandth of a millimeter!) which shows that these wires are pretty small! (Picture credit: Ganjipour et al, Applied Physics Letters 101, 103501 (2012)).

How a hard disk works

This post is going to explain the fundamental part of how the hard drive in your old computer works. Modern solid state disks work completely differently, so this applies only to the older type that have been common for several decades. Specifically, when your computer writes something to the drive, it has to turn the sequence of zeroes and ones which make up the binary data into something physical on the disk. Then, when it needs to read this information later, it can go back and look at that part of the disk and recover the zeroes and ones from whatever material they were written to. But how do you tell the difference between a one and a zero? That’s the question I’ll try to answer.

Spin

But before we can get to that point, I have to explain a really important concept in quantum mechanics called “spin”. This is a quantity which is carried by all quantum mechanical particles, and is linked in a loose way to the rotational symmetry of the particle. Look at the right-pointing arrow in the picture. Hopefully it’s easy to see that the only way you can rotate the arrow so that it looks exactly the same as it does when you start (this is called a symmetry operation) is to rotate it through 360°. A particle that has this rotational symmetry is said to have a spin of 1. Now look at this double-headed arrow. If you rotate around the axis indicated by the red dot, you only have to rotate it by 180° to get back to where you started. This has a spin of 2 because you have to rotate half a turn to get the first symmetry operation. The other pictures show a few different spins.

But what about electrons? Well, they have a spin of ½. Just to be clear about what that means: using the same analogy, it implies that you have to rotate by 720° before the electron “looks” like it did when you started. There isn’t a good way to draw that, so I can’t give you a picture of a spin-½ particle; this is one of those places where quantum mechanics is weird and counter-intuitive and we just have to get on with it. The other building blocks of atoms (protons and neutrons) also have spin-½, so in this post I’ll focus on that strange case. The crucial thing about spin-½ particles is that their spin can exist in one of two states, usually called ‘up’ and ‘down’, and these are typically represented by arrows pointing in those two directions.
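The arrow analogy boils down to a single formula: a spin-s object looks the same after a rotation of 360/s degrees. Plugging in spin-½ gives the 720° quoted above:

```python
def symmetry_rotation_degrees(spin):
    """Rotation (in degrees) that brings a spin-s object back to itself,
    following the arrow analogy above."""
    return 360 / spin

print(symmetry_rotation_degrees(1))     # -> 360.0, the single-headed arrow
print(symmetry_rotation_degrees(2))     # -> 180.0, the double-headed arrow
print(symmetry_rotation_degrees(0.5))   # -> 720.0, the electron!
```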

But why does this matter? Well, individual spins generate a magnetic field. The reason that iron is a magnetic material is that the interaction between the spins in the iron atoms makes their spins all line up in the same direction. Therefore, the tiny magnetic fields associated with each of the spins all add up to make a large field. Non-magnetic materials don’t have this alignment (in fact, their spins all point in random directions), and so the tiny magnetic fields all cancel each other out on average. Materials like iron which have this alignment are called ‘ferromagnetic’.

Reading and writing in a hard disk

But, what does this have to do with your laptop? Well, in a hard disk, the part where the zeroes and ones are stored is made from two small pieces of ferromagnetic material. Then, the difference between a one and a zero is made by manipulating the spins of the atoms in one of the ferromagnetic layers. When an electric current is passed through this region, the electrons behave differently depending on the spins. Specifically, if the electrons have the same spin as the atoms, then they don’t interact very strongly and the electrical resistance is quite low. But if they have opposite spins, the electrons interact strongly with the atoms so they bounce off the atoms (or “scatter” in the technical language), their progress is impeded, and the electrical resistance is high.

The way to encode a one or a zero is shown in the picture below. A one is encoded by aligning the ferromagnets (the pink layers) so that their spins point in the same direction. In the left-hand picture, I show this with both layers having up-spins. A current of electrons (shown by the red arrows) has a half-and-half mix of electrons with up-spin and down-spin. When it is passed through the stack, the up-spin electrons interact weakly with the ferromagnet up-spins in both layers (black arrows) and encounter low resistance. This means that some of the current put in at the top of the stack emerges from the bottom and this characterises the one state. Note that the down-spin electrons are blocked from getting to the bottom of the stack because they scatter strongly off the up-spin atoms in the first ferromagnet layer and so the resistance for them is high.

For the zero state, one of the ferromagnetic layers has its spins reversed. In the right-hand picture, this is shown by the lower layer now having a down-spin black arrow. For electric current, the down-spin electrons still scatter strongly from the up-spin atoms in the top layer. The up-spin electrons still pass through this layer, but then they encounter the down-spin atoms in the lower layer where the electrons and the atoms have opposite spin, so they scatter strongly. This means that no current emerges at the bottom of the device, and so this defines the zero state.

This means that, for the hard disk to work, it needs to be able to do two things. Firstly, the “write head”, which is the part that encodes the zeroes and ones when data is written to the disk, needs to be able to flip the spins of one of the ferromagnetic layers. Then, to recover the information at a later time, the “read head” tries to pass current through a specific piece of the disk material. If current flows (because the ferromagnet spins are the same) then this is a one. If current does not flow (because the ferromagnet spins are opposite) then it is a zero.
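The read-head logic reduces to a comparison: aligned layers pass current (a one), opposite layers block it (a zero). A toy sketch, with the layer magnetisations as simple labels:

```python
def read_bit(top_spin, bottom_spin):
    """Aligned layers -> low resistance, current flows -> read a 1.
    Opposite layers -> high resistance, current blocked -> read a 0."""
    return 1 if top_spin == bottom_spin else 0

print(read_bit("up", "up"))     # -> 1
print(read_bit("up", "down"))   # -> 0
```

The write head’s job is then just to set `bottom_spin` (the free layer) to encode each bit.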

And this works entirely because of the quantum-mechanical property of particles called spin: aligned spins is a one, opposite spins is a zero. And as a bonus, it also explains why you have to be careful with hard drives and strong magnetic fields, because a magnet can change the alignment of all the ferromagnetic areas in the hard disk and destroy the encoded ones and zeros. Don’t say you weren’t warned!

What is condensed matter theory, and why is it so hard?

This post is about the general approach to physics that people who work on the theory of condensed matter take. As I’ll explain, it is basically impossible to calculate anything exactly, and so the whole field relies on choosing smart approximations that allow you to make some progress. Exactly what kind of approximations you make depends on what you want to achieve, and I’ll describe some of the common ones below.

But before that, what is ‘condensed matter physics’? Roughly speaking, it refers to anything that is a solid or a liquid (and also some gases) that you can see in the Real World around you. So it’s not stars and galaxies and space exploration, it’s not tiny sub-atomic particles like quarks and Higgs bosons like they talk about at CERN, and it’s not what happened in the first fractions of a second after the Big Bang, or what it’s like inside a black hole. But it is about the materials that make the chip inside your phone or power a laser, or about making batteries store energy more efficiently, or finding new catalysts that make industrial chemical production cheaper (okay, so that one crosses into chemistry as well, but the lines are fuzzy!), or it’s about making superconductivity work at higher temperatures.

Why is it so hard?

Really, what makes condensed matter physics different from many other types of physics is that in many situations, the behaviour of the materials is governed by how many, many particles interact with each other. Think about a small piece of metal, for instance: you have millions and millions of atoms that form some bonds which give it a solid shape. Then some of the electrons in those atoms disassociate themselves and become a bit like a liquid that can move around inside the metal and conduct electricity or heat, or make the metal magnetic. In a small piece of metal there will be $10^{22}$ atoms. (That notation means that the number is a one with twenty-two zeroes after it. So it’s a lot.) And all of these atoms have an electric field which is felt by all the other atoms, so that they all interact with each other. It is, in principle, possible to write down some equations which describe this, but there is no way that anyone can solve these equations and work out exactly how all these atoms and electrons behave. I don’t just mean that it’s very difficult, I mean that it is mathematically proven to be impossible!
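One way to feel the hopelessness of the exact approach: even a drastically simplified model with just one two-state degree of freedom per atom needs 2ᴺ numbers to write down its quantum state. This rough estimate assumes 16 bytes per complex amplitude:

```python
def state_vector_bytes(n_sites):
    """Memory needed to store a general quantum state of n two-level
    systems, at 16 bytes per complex amplitude."""
    return 16 * 2 ** n_sites

print(state_vector_bytes(30) / 1e9)   # about 17 gigabytes: workstation territory
print(state_vector_bytes(100))        # astronomically large -- and a real piece
                                      # of metal has ~10**22 sites, not 100
```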

Watcha gunna do?

So, that raises the question: what can we do? It is easy to connect a bit of metal to a battery and a current meter and see that it can conduct electricity, but how do we describe that theoretically? There are several different approaches to making the approximations needed, so I’ll try to explain them now.

1. Use symmetry. By the magic of mathematics, the equations can often be simplified if you know something about the symmetry of the material you want to investigate. For example, the atoms in many metals sort themselves into a crystal lattice of repeated cubes. Group theory can then be used to reduce the complexity of the equations in a very helpful way. For instance, it might be possible to tell whether a material will conduct electricity or not even at this level of approximation. But this symmetry analysis contains an assumption because in reality materials won’t completely conform to the symmetry. They may have impurities in them, or the crystal structure might have irregularities, for example. So this isn’t a magic bullet. And also this might well not reduce the equations enough that they can be solved, so it is usually just a first step.

2. From this point, it is often possible to make simplifying assumptions so that the mathematically impossible theory becomes something that can be solved. Of course, by doing this you lose quite a lot of detail. It’s like the “spherical cows” analogy. In reality, cows have four legs, a tail, a head, and maybe some udders. But say you wanted to work out how many cows you could safely fit into a field. You don’t need to know any of that detail, so you can think of the cows as being a sphere which consumes a certain amount of hay each day. You can do something similar for the metal: Instead of keeping track of every detail, you can forget that the atoms have an internal structure (spherical atoms!). Or you could assume that the atoms interact with the electrons in a particularly simple way so that you can focus just on the dissociated electrons. Or you could assume that the electrons don’t interact with each other, but only with the atoms. In the jargon of the field, this general approach is called finding an “effective theory”. These theories can often give quite good estimates of not only whether a material will conduct, but how well it will do it.
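As a concrete taste of an effective theory, here is a sketch of one of the simplest: non-interacting electrons hopping between featureless (“spherical”) atoms along a one-dimensional chain, often called a tight-binding model. The hopping energy `t` and lattice spacing `a` below are made-up illustrative values, not numbers for any real material. The model predicts that the electrons occupy a band of allowed energies $E(k) = -2t\cos(ka)$ of total width $4t$:

```python
import numpy as np

# Minimal effective theory: non-interacting electrons hopping along a
# 1D chain of featureless atoms. Both parameters are illustrative.
t = 1.0   # hopping energy (arbitrary units)
a = 1.0   # lattice spacing (arbitrary units)

k = np.linspace(-np.pi / a, np.pi / a, 1001)  # wavevectors in the Brillouin zone
energy = -2.0 * t * np.cos(k * a)             # band dispersion E(k)

print(f"band bottom: {energy.min():.2f}, band top: {energy.max():.2f}")
# Electrons fill this band from the bottom up; how full the band is
# determines whether this effective theory predicts a metal or an insulator.
```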

3. These days, computers are really fast, and they can be used to numerically solve equations that are almost exact. However, computers are not good enough that they can do this for $10^{22}$ atoms, so if you want to keep quite close to the original equations, they might be able to do fifty or so. Maybe a hundred. In the jargon, these methods are called “ab-initio” (from the beginning) because they do not make any approximations unless they absolutely have to. The fact that you can’t treat too many atoms limits what these methods can be applied to. For instance, they can be quite good for molecules, and crystals where the periodic repetition is not too complicated. But for these situations, you can get a level of detail which is simply impossible in the effective theories. So there’s a trade-off. And computers are getting better all the time so this is one area that will see a lot of progress.
4. The final way that I’ll describe is sort-of the inverse process. Instead of starting from the mathematics which are impossible, you can start from experimental data and try to work backwards towards the theoretical description that gives you the right answer. Sometimes this is used in conjunction with one of the other methods as a way to give you some clues about what assumptions to make.

So, that’s how you do theory in condensed matter. Numbers 2 and 4 are basically my day job, on a good day at least!

Particle-wave duality and the two slit experiment

Particle-wave duality is the concept in quantum mechanics that small objects simultaneously behave a bit like particles and a bit like waves. This comes very naturally from the mathematics, but instead of talking about those boring details, I’m going to describe a famous experiment that proves it.

Diffraction

It’s called the two slit experiment, and I’ve sketched how it works in the picture on the right. Before going into the full details, let’s look at the upper part of the picture. This shows a light wave shining on a barrier with a small slit in it. The thin black lines show the position of the peaks of the wave that describes the traveling light. Some of the light can get through that slit, but in doing so, it changes its form to become a circular wave with the slit as its source. This is called diffraction, and leads to a distinctive pattern when the light hits a screen placed some way behind the barrier. The red line behind the barrier shows the intensity of the light hitting the screen. This demonstrates that light can behave in a wave-like way because if the light were just particles you would not see the diffraction pattern, but there would be a small spot of light on the screen in line with the slit.

Now look at the lower part of the picture. Now the screen has been replaced with a second barrier that has two slits in it. Both of these slits act like the first one: they diffract the light that is coming through. So behind the second barrier, there are now two waves of light, one coming from each slit. These two waves interfere with each other, so that the pattern of light seen on the screen (the red line) looks very different from that made by just one slit. (I did actually calculate what the light should look like before I drew these pictures, so I hope both of the red lines are actually correct!) Interference is the process of these waves adding together to form one single pattern. The value of a light wave at a particular position can be either positive or negative. In the picture, the thin black lines show where the waves are at their maximum – so where they are their most positive. Exactly half-way between a pair of lines they are at their most negative. If the two waves are both positive at a particular position (like exactly at the center of the screen) then they add together to give intense light. But if one is positive and one is negative then they will cancel each other out and leave almost no light.
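Curves like those red lines can be computed from the standard far-field formulas: a single slit gives a sinc²-shaped envelope, and adding the second slit multiplies it by a cos² interference factor. Here's a sketch (the wavelength, slit width, and slit separation are made-up numbers for illustration, not the ones behind my pictures):

```python
import numpy as np

# Far-field (Fraunhofer) intensity patterns for one and two slits.
# All lengths in microns; values are illustrative only.
lam = 0.5         # wavelength of the light
slit_width = 2.0  # width of each slit
slit_sep = 10.0   # centre-to-centre separation of the two slits

theta = np.linspace(-0.2, 0.2, 2001)  # angle from the axis, in radians
s = np.sin(theta)

# np.sinc(x) = sin(pi x) / (pi x), which is exactly the factor needed here.
single = np.sinc(slit_width * s / lam) ** 2                 # one-slit pattern
double = single * np.cos(np.pi * slit_sep * s / lam) ** 2   # two-slit pattern

# Both patterns are brightest in line with the slits (theta = 0)...
assert single.max() == single[1000] and double.max() == double[1000]
# ...but the two-slit pattern has extra dark fringes wherever the two
# waves arrive half a wavelength out of step and cancel.
```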

Electrons

That’s not very controversial. But it starts to get a bit more weird when you repeat the same experiment but using a beam of electrons instead of a beam of light. Electrons are one of the three types of “particle” which make up an atom: The protons and neutrons bind together to form the nucleus, and then electrons “orbit” around it. Until this experiment was done for the first time, most physicists thought that electrons were particles. But the result of the experiment was the same kind of two-slit diffraction pattern that they got when they used light. The electrons that went through each of the slits were interfering with each other just like the light waves did. The only possible conclusion: these electrons were also wave-like.

Then, they pushed the experiment a bit further. They had the same barriers, but instead of using a beam of electrons, they fired them through one at a time. Astonishingly, even though there was only one electron, the result was still a two-slit diffraction pattern. Somehow, the electron was going through both slits and interfering with itself. Conclusion: Electrons are not just wave-like when there are lots of them, they are wave-like on their own!

Now it gets weird

To try and verify this, they modified their apparatus to include detectors at both of the slits so they could tell which slit the electron was going through. Expecting to find a signal from both detectors, they were surprised to find that only one of the detectors sensed an electron going through, and instead of the two-slit diffraction pattern, they now saw a one-slit pattern on the screen. If they did the experiment with the detectors turned off, the two-slit diffraction pattern reappeared. It seemed like asking the electron which slit it had gone through forced it to choose one or the other. But get this: The experimentalists got sneaky. They took the electron detectors away and instead made slits that could be opened and closed very quickly. Starting with both slits open, they fired one electron from the gun. After it had passed the barrier with the two slits, but before it reached the screen, they closed one of the slits. Any guesses as to what pattern was measured on the screen?

They saw a single-slit diffraction pattern! Somehow, the electron knew that one of the slits had been closed after it went through, and behaved like only the other one had been open the whole time. This hints at many deep issues about quantum measurement and (gulp!) the nature of reality itself. But I’ll save that discussion for another time.

This experiment has been repeated with many different objects used instead of the light or electrons. Protons, whole atoms, and buckyballs all show the same behavior, so this is without doubt a general feature in quantum mechanics and not something oddly specific to light and electrons. In fact, once you allow for the possibility of wave-like particles, you start to see the effects of them in many places, including in the behavior of electrons in the materials which make computer chips and all the rest of information technology. So it’s a pretty big deal.

And finally…

One final point of detail which I think is worth pointing out. In the first paragraph, I mentioned that “small objects” are needed to do this experiment. But what does “small” mean in this context? It turns out this can be written down in a really simple equation. The de Broglie wavelength, referred to by the symbol $\lambda$, is the wavelength associated with the quantum object. To see the wave-like properties, the size of the slits has to be similar to $\lambda$.

The formula is $\lambda = h / (mv)$. Here, $h$ is Planck’s constant – a fixed number that comes from quantum mechanics and can otherwise be forgotten about. The $m$ and $v$ are the mass and speed associated with the particle-like properties of the object. So, the heavier the “particle”, the smaller the associated wavelength is. This explains why you don’t see any wave-like effects for people or cars or golf balls. Just to illustrate the kind of size that we’re talking about, light has a $\lambda$ of half a micron or so. For electrons, it’s a few nanometers, and for buckyballs, it’s a few thousandths of a nanometer.
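Plugging some numbers into $\lambda = h/(mv)$ makes those scales concrete. A quick sketch – the speeds below are illustrative choices, roughly in the ballpark of real interference experiments rather than taken from any particular one:

```python
# De Broglie wavelength, lambda = h / (m * v), in SI units.
H = 6.626e-34  # Planck's constant, joule-seconds

def de_broglie(mass_kg: float, speed_m_s: float) -> float:
    """Wavelength in metres of a 'particle' with the given mass and speed."""
    return H / (mass_kg * speed_m_s)

# Illustrative masses and speeds (not from a specific experiment):
electron = de_broglie(9.11e-31, 2e5)    # a slowish electron
buckyball = de_broglie(1.2e-24, 200.0)  # C60, roughly 720 atomic mass units
golf_ball = de_broglie(0.046, 70.0)     # why golfers never see fringes

print(f"electron:  {electron * 1e9:.1f} nm")   # a few nanometres
print(f"buckyball: {buckyball * 1e12:.1f} pm") # a few thousandths of a nm
print(f"golf ball: {golf_ball:.1e} m")         # absurdly far below atomic scales
```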

What is superconductivity?

Most fundamentally, a superconductor is a material which becomes a perfect conductor with no electrical resistance when it gets cold enough. It was first discovered in 1911 when some Dutch experimentalists were playing around with a new way of cooling things down, and one of the things they tried was to measure the electrical resistance of various metals as they got colder and colder. Some metals just kept doing the same things that were expected based on how they behave at higher temperatures. But for others (like mercury) the resistance suddenly dropped to zero when the temperature was lowered to within a few degrees of absolute zero: they became perfect conductors. By perfect, I mean that the amount of energy that was lost as electricity went along the superconducting wire was zero. Nowadays, superconductors are very useful materials and are used in a variety of technologies. For example, they make the coils of the powerful magnets inside an MRI machine or a maglev train, they can allow ultra-precise measurements of magnetic fields in a device called a SQUID (superconducting quantum interference device), and in the future, there is some chance that junctions between different superconductors might be crucial for implementing a quantum computer.

So, how does this work?

Before I try to explain that, there is one crucial bit of terminology that I have to introduce. The types of particles that make up the universe can be classified into two types: One type is called fermions, the other type is called bosons. The big difference between these two types of particles is that for fermions, only one particle can ever be in a particular quantum state at any given time. For bosons, many particles can all be in the same state at the same time. The particles that carry electricity in metals are electrons, and they are a type of fermion. But when two fermions pair up and form a new particle, this new particle is a type of boson. Superconductivity happens when the electrons are able to form these boson pairs, and these pairs then all occupy the lowest possible energy state. In this state, they behave like a big soup of charge which can move without losing energy, and this gives the zero resistance for electrical current which we know as superconductivity.

This leaves a big unanswered question: How do the electrons pair up in the first place? If you remember back to high school, you probably learned that two objects with the same charge will repel each other, but that opposite charges attract. All electrons have negative charge and so should always repel, so how do they stay together close enough to make these pairs? The answer involves the fact that the metal in which the electrons are moving also contains lots of atoms. These atoms are arranged in a regular lattice pattern but they have positive charge because they have lost some of their electrons. (This is where the free electrons that can form the pairs come from.) So, as an electron moves past an atom, there is an attractive force between them, and the atom moves slightly towards the electron. Because electrons are small and light, they can move through the lattice quickly. The atoms are big and heavy so they move slowly and it takes them some time to go back to their original position in the lattice after the electron has gone by. So, as the electron moves through the lattice, it leaves a ripple behind it. A second electron some distance from the first one now feels the effect of this ripple, and because the atoms are positively charged, it is attracted to it. So, the second electron is indirectly attracted to the first, making them move together in a pair.

In the language of quantum mechanics, these ripples of the atoms are called phonons. (The name comes from the fact that these ripples are also what allows sound to travel through solids.) From this point of view, the first electron emits a phonon which is absorbed by the second electron, effectively gluing them together. But why does the metal have to get very cold before this phonon glue can be effective? The reason is that heat in a crystal lattice can also be thought of in terms of phonons. When the metal is warm, there are lots and lots of phonons flying around all over the place and it’s too chaotic for the electrons to feel the influence of just the phonons that were emitted by other electrons. As the metal cools down, the number of thermal phonons reduces, leaving only the ones that came from the other electrons, which allows the glue to work.

Two disclaimers

Two quick disclaimers before I finish.

Number one: I glossed over one inconvenient fact when I described the electrons and atoms interacting with each other. I made it sound like they were small particles moving around like billiard balls. For the atoms, this is a reasonable picture because they pretty much have to stay near their lattice positions. But the electrons are not like that at all. Perhaps you’ve heard of particle-wave duality? In quantum mechanics, small objects like electrons are simultaneously a bit like particles and a bit like waves. That’s true here for the electrons, so they are not little billiard balls but are more wave-like. This makes it more difficult to have a good mental picture of what they’re doing, but the basics of the mechanism are still true.

Secondly, this post has been about the type of superconductivity that occurs in metals. The temperature associated with this kind of superconductivity is quite low – a few degrees above absolute zero. But there are other kinds of superconductivity which can occur at much higher temperatures. (Imaginatively, this is usually called ‘high temperature superconductivity’!) This works in a very different way to what I’ve talked about here. It’s also not very well understood and is an active area of research. Perhaps I’ll write something about that another time.