# Why does light exist?

Light is something that we probably take for granted, but have you ever thought about why it should exist at all? From the viewpoint of quantum mechanics, it turns out that it must be there to satisfy a fundamental symmetry of the universe.

Electric and magnetic fields are created by electric charges, and electric charges also move in response to those fields. The electromagnetism that you probably learned at school is one description of this. The electric potential and the electric field are related to each other because the field is given by the gradient (or slope) of the potential. It’s not possible to directly measure a potential, so in some sense it is only the field that has a physical reality. Charged objects moving in the field experience a force which changes their speed or direction of travel.

This description works very nicely for many everyday situations and, on the face of it, there is no obvious role for light here. But, at the scales of individual atoms and electrons, electromagnetism has to be described in the framework of quantum mechanics. It turns out that light is the thing that carries the forces that charged objects feel in electromagnetic fields.

The point of this post is to try and explain why that is.

As I’ve said before, looking at symmetries can help to simplify physics problems. Symmetries are transformations that leave an object in a state identical to how it started. For example, a square can be rotated by 90 degrees about its centre point and the result will look the same as the unrotated version. This is an example of a discrete symmetry – only four rotations of the square are symmetry transformations (90, 180, 270, and 360 degrees). In contrast, a circle has a continuous symmetry – you can rotate a circle about its centre point by any angle and end up with a circle that looks just the same as the unrotated one.

This is the moment that things start to get a bit less easy to visualise, because talking about a different kind of symmetry is unavoidable: We have to delve into gauge transformations and gauge symmetries.

To try and explain the concept of a gauge transformation, look at the left-hand picture below. The lines represent potentials at each position. Start with the lower, blue line. It has a field associated with it, which is the slope of the line at each point. To get the red line from the blue line, you have to add on a fixed amount of potential at every position. This is represented by the three dashed black arrows, which are all the same length.

But here is the crucial point: The field associated with the red line is exactly the same as the field associated with the blue line, because the slopes of the two lines are the same at every position. Adding on the extra potential hasn’t changed this.

So, adding a fixed amount to the potential at every position does not change the field at all. And remember, the field is the only thing that is physically observable – we can’t measure the potential. Therefore, this is a continuous symmetry.
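This invariance is easy to check numerically. Here is a minimal sketch (the potential function and the size of the constant shift are arbitrary choices, not anything physical):

```python
import numpy as np

# Sample an arbitrary potential V(x) on a grid.
x = np.linspace(0.0, 10.0, 1001)
V = np.sin(x) + 0.3 * x**2

# The (one-dimensional) field is the slope of the potential.
field = np.gradient(V, x)

# Global gauge transformation: add the same constant everywhere.
V_shifted = V + 42.0
field_shifted = np.gradient(V_shifted, x)

# The field -- the only observable -- is unchanged.
print(np.allclose(field, field_shifted))  # True
```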

The fact that many different potentials give the same field is called “gauge symmetry”. The type of gauge symmetry illustrated here is a “global” symmetry, because the amount of potential that you add or subtract at each point in space is the same (i.e. it’s a global amount).

However, the crucial part for the existence of light comes from a slightly different type of gauge symmetry, called a “local” one. For a local gauge transformation, instead of adding the same amount of potential at every point, you have the freedom to add different amounts of potential in different places. This is shown in the right-hand graph above. The red line is obtained from the blue line by adding different amounts of potential at each position. Notice that the three dashed black arrows are now different lengths.

Electromagnetism in quantum mechanics works under the assumption that this local gauge transformation is also a symmetry. For this to be true, the physical observables, including the field, must stay the same after the local transformation.

But, looking at the graph, it immediately becomes apparent that adding a different amount of potential at different positions to the blue line means that the red line has a different slope. This would change the field, and so this local transformation should not be a symmetry at all!
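The same kind of numerical sketch shows the problem directly: when the amount added varies with position (the cosine term below is an arbitrary choice), the slope does change.

```python
import numpy as np

# Same arbitrary example potential as before.
x = np.linspace(0.0, 10.0, 1001)
V = np.sin(x) + 0.3 * x**2
field = np.gradient(V, x)

# Local gauge transformation: the added potential depends on position.
V_local = V + 0.5 * np.cos(2.0 * x)
field_local = np.gradient(V_local, x)

# Now the slope, and hence the field, has changed.
print(np.allclose(field, field_local))  # False
```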

The only way that this can work is if our description of the quantum field is incomplete: In addition to the potential, there must be another part which also feels the effect of the local gauge transformation. When the transformations of the potential and of this additional object are combined, the fields remain unchanged and the local gauge symmetry stays intact.

For the electromagnetic field in quantum mechanics, it turns out that this secondary part is the photon. Looking deeper into the mathematics, we find that its existence explains how charged objects feel a force from the field: they emit and absorb photons.

But photons are also the particles which carry light! So, one answer to the question “why do we have light?” is simply that photons must exist to preserve local gauge symmetry.

I appreciate that this has the whiff of a magic trick to it: Why should local gauge symmetry be something that we insist must exist? Perhaps there is some deep answer to this question that I don’t know, but the best response might simply be that this theory works.

To add more to the “it just works” line of reasoning, local gauge symmetry is also the reason that gluons (which carry the strong interaction) and the W and Z bosons (which carry the weak interaction) exist. In those cases the symmetry operations are more complicated than adding a potential, but the fundamental assumptions and logic are the same. So, this is a powerful concept which seems to be important in describing quantum physics, and gives one explanation for where light comes from.

# How do you measure the quantum states of a material?

I’ve talked a lot on this blog about how understanding the quantum states of a material can be helpful for working out its properties. But is it possible to directly measure these states in an experiment? And what sort of equipment is needed to do so? I’ll try to explain here.

First, a quick recap. The band structure is like a map of the allowed quantum states for the electrons in a material. The coordinates of the map are the momentum of the electron, and at each point there are a series of energy levels which the electron can be in. The energy states close to the “Fermi energy” largely determine things like whether the material can conduct electricity and heat, absorb light, or do interesting magnetic things.

There are various ways that the band structure can be investigated. Some of them are quite indirect, but last week, I visited an experimental facility in the UK where they can do (almost) direct measurements of the band structure using X-rays.

The technical name for this technique is “angle-resolved photoemission spectroscopy”, or ARPES for short. Let’s break that down a bit. Spectroscopy just means that it’s a way of measuring the spectrum of something. In this case, it’s the electrons in the material. I’ll come back to the “angle-resolved” part in a minute, but the crucial thing to explain here is what photoemission is.

The sketch above shows a hypothetical band structure. When light is shone on a material, the photons (green wavy arrows) that make up the beam can be absorbed by one of the electrons in the filled bands below the Fermi energy. When this happens, the energy and momentum of the photon is transferred into the electron.

This means that the electron must change its quantum state. But the band structure gives the map of the only allowed states in the material, so the electron must end up in one of the other bands. In the left-hand picture, the energy of the photon is just right for the electron at the bottom of the red arrow to jump to an unfilled state above the Fermi energy. This is called “excitation”.

But in the right-hand picture, the energy of the photon is larger (see the thicker line and bigger wiggles on the green arrow) so there is no allowed energy level for the excited electron to move to. Instead, the electron is kicked completely out of the material. To put that another way, the high-energy photons cause the material to emit electrons. This is photoemission!

The crucial part about ARPES is that the emitted electrons retain information about the quantum state that they were in before they absorbed the photons. In particular, the photons carry almost no momentum, so the momentum of the electron can’t really change during the emission process. But also, energy must be conserved, so the energy of the emitted electron must be the energy of the photon, plus the energy of the quantum state that the electron was in before emission.

So, if you can catch the emitted electrons, and measure their energy and momentum, then you can recover the band structure! The angle-resolved part in the ARPES acronym means that the momentum of the electrons is deduced from what angle they are emitted at.
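In code, that reconstruction step might look something like the sketch below (the function name and example numbers are invented; it uses the simplified energy balance above, ignoring the work function, together with the standard expression for the momentum component parallel to the surface):

```python
import numpy as np

HBAR = 1.054_571_8e-34   # reduced Planck constant, J s
M_E = 9.109_383_7e-31    # electron mass, kg
EV = 1.602_176_6e-19     # one electron volt in joules

def band_state(e_kinetic_ev, angle_deg, e_photon_ev):
    """Recover (state energy, parallel momentum) of the original quantum
    state from an ARPES measurement.

    Simplified energy balance from the text (work function ignored):
    E_emitted = E_photon + E_state, so E_state = E_emitted - E_photon.
    """
    e_state_ev = e_kinetic_ev - e_photon_ev
    # Momentum parallel to the surface, deduced from the emission angle.
    k_parallel = (np.sqrt(2.0 * M_E * e_kinetic_ev * EV) / HBAR
                  * np.sin(np.radians(angle_deg)))
    return e_state_ev, k_parallel  # in eV and 1/m

# Example: 100 eV photons, electron caught at 30 degrees with 98.5 eV.
e_state, k = band_state(98.5, 30.0, 100.0)
print(f"state energy: {e_state:.1f} eV")  # -1.5, i.e. below the Fermi level
```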

But what does this look like in practice? Fortunately, a friendly guide from Diamond showed me around and let me take pictures.

The upper-left picture is an outside view of the Diamond facility. (The cover picture for this blog entry is an aerial view.) It’s a circular building, although this picture is taken from close enough that this might be hard to see. This gives a sense of scale for the place!

Inside is a machine called a synchrotron. They didn’t let us go near this, so I don’t have any pictures, but it is a circular particle accelerator which keeps bunches of electrons flowing around it very, very fast. As they go around, they release a lot of X-ray photons which can be captured and focused. (There is a really cool animation of this on their web site.) The X-rays come down a “beam line” and into one of many experimental “hutches” which stand around the outside of the accelerator.

The upper-right picture shows the ARPES machine inside the main hutch of beamline I05. Most of the stuff you can see at the front is designed for making samples under high vacuum, which can then be transferred straight into the sample chamber without exposure to air.

The lower-left picture is behind the machine, where the beam line comes in. It’s kinda hard to see the metal-coloured pipe, so I’ve drawn arrows. The lower-right picture shows where the real action happens. The sample chamber is near the bottom (there is a window above it which allows the experimentalists to visually check that the sample is okay), and you can just about see the beam line coming in from behind the rack in the foreground.

The X-rays come into the sample chamber from the beam line and strike the sample, and the emitted electrons are funnelled into the analyser, which is the big metallic hemisphere towards the right of the picture. The spherical shape is important, because the momentum of the electrons is detected by how much they are deflected by a strong electric field inside the analyser. This separates the high-momentum electrons from the low-momentum ones, much as a centrifuge separates heavy items from light ones.

And what can you get after all of this? The energy and momentum of all the electrons are recorded, and pretty graphs can be made!

Above is a picture that I stole from the Diamond web site. On the left is a theoretical calculation for the band structure of a material called tungsten diselenide (WSe2). On the right is the ARPES data. The colour scheme shows the intensity of the photoemitted electrons. As you can see, the prediction and data match very well. After all the effort of building a massive machine, it works! Hooray science!

# Topology and the Nobel Prize

You may have seen that the Nobel Prize for Physics was awarded this week. The Prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter”, which is a bit of a mouthful. Since this is an area that I have done a small amount of work in, I thought I would try to explain what it means.

You might have seen a video where a slightly nutty Swede talks about muffins, donuts, and pretzels. (He’s my boss, by the way!) The number of holes in each type of pastry defined a different “topology” of the lunch item. But what does that have to do with electrons? This is the bit that I want to flesh out. Then I’ll give an example of how it might be a useful concept.

### What is topology?

In a previous post, I talked about band structure of crystal materials. This is the starting point of explaining these topological phases, so I recommend you read that post before trying this one. There, I talked about the band structure being a kind of map of the allowed quantum states for electrons in a particular crystal. The coordinates of the map are the momentum of the electron.

Each of those quantum states has a wave function associated with it, which describes, among other things, the probability of the electron in that state being at a particular point in space. To make a link with topology, we have to look at how the wave function changes in different parts of the map. To use a real map of a landscape as the analogy, you can associate the height of the ground with each point on the map, then by looking at how the height changes you can redraw the map to show how steep the slope of the ground is at each point.

We can do something like that in the mathematics of the wave functions. For example, in the sketches below, the arrows represent how the slope of the wave function looks for different momenta. You can get vortices (left picture) where the arrows form whirlpools, or you can get a source (right picture) where the arrows form a hedgehog shape. A sink is similar except that the arrows are pointing inwards, not outwards.

Now for the crucial part. There is a theorem in mathematics that says that if you multiply the slope of the wave function by the wave function itself at the same point, and add up all of these for every arrow on the map, then the result has to be a whole number. This isn’t obvious just by looking at the pictures, but that’s why mathematics is great!

That whole number (which I’m going to call n from now on) is like the number of holes in the cinnamon bun or pretzel: It defines the topology of the electron states in the material. If n is zero then we say that the material is “topologically trivial”. If n is not zero then the material is “topologically non-trivial”. In many cases, n counts the difference between the number of sources and the number of sinks of the arrows.
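You can check a toy version of this integer rule numerically. The sketch below is not the full wave-function calculation, just the total turning of a two-dimensional arrow pattern as you walk around a loop, which also has to come out as a whole number:

```python
import numpy as np

def winding_number(vector_field, n_samples=400, radius=1.0):
    """Total turning of the arrows around a closed loop, in whole turns.

    As long as the arrows never vanish on the loop, this comes out as
    an integer, whatever pattern you feed in.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs, ys = radius * np.cos(t), radius * np.sin(t)
    vx, vy = vector_field(xs, ys)
    angles = np.arctan2(vy, vx)
    # Wrap each step into (-pi, pi] so we count the net turning, then sum.
    steps = np.diff(np.append(angles, angles[0]))
    steps = (steps + np.pi) % (2.0 * np.pi) - np.pi
    return round(steps.sum() / (2.0 * np.pi))

vortex = lambda x, y: (-y, x)   # whirlpool pattern of arrows
source = lambda x, y: (x, y)    # hedgehog pattern of arrows
print(winding_number(vortex), winding_number(source))  # 1 1
```

Both patterns give a whole number, never anything in between, which is the essence of why the topology is so robust.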

### What topology does

Okay, so that explains how topology enters into the understanding of electron states. But what impact does it have on the properties of a material? There are a number of things, but one of the coolest is about quantum states that can appear on the surface of topologically non-trivial materials. This is because of another theorem from mathematics, called the “bulk-boundary correspondence”, which says that when a topologically non-trivial material meets a topologically trivial one, there must be quantum states localized at the interface.

Now, the air outside of a crystal is topologically trivial. (In fact, it has no arrows at all, so that when you take the sum there is no option but to get zero for the result.) So, at the edges of any topologically non-trivial material there must be quantum states. In some materials, like bismuth selenide for example, these quantum states have weird spin properties that might be used to encode information in the future.

And the best part is that, because these edge states exist due to the topology of the underlying material, they are really robust against things like impurities, roughness of the edge, and other types of disorder which might destroy quantum states that don’t have this “topological protection”.

### An application

Now, finally, I want to give one more example of this type of consideration, because it’s something I’ve been working on this year. But let me start at the beginning and explain the practical problem that I’m trying to solve. Let’s say that graphene, the wonder material, is finally made into something useful that you can put on a computer chip. Then, you want to find a way to make these useful devices talk to each other by exchanging electric current. To do that, you need a conducting wire that is only a few nanometers thick which allows current to flow along it.

The obvious choice is to use a wire of graphene because then they can be fabricated at the same time as the graphene device itself. But the snag is that to make this work, the edges of that graphene wire have to be absolutely perfect. Essentially, any single atom out of place will make it very hard for the graphene wire to conduct electricity. That’s not good, because it’s very difficult to keep every atom in the right place!

The picture above shows a sketch of a narrow strip of graphene surrounded by boron nitride. Graphene is topologically trivial, but boron nitride is (in a certain sense) non-trivial and can have n equal to either plus or minus one, depending on details. So, remembering the bulk-boundary correspondence, the graphene in this construction works like an interface between two different topologically non-trivial regions, and therefore there must be quantum states in the graphene. These states are robust, and protected by the topology. I’ve tried to show these states by the black curved lines which illustrate that the electrons are located in the middle of the graphene strip.

Now, it is possible to use these topologically protected states to conduct current from left to right in the picture (or vice versa) and so this construction will work as a nanometer size wire, which is just what is needed. And the kicker is that because of the topological protection, there is no longer any requirement for the atoms of the graphene to be perfectly arranged: The topology beats the disorder!

Maybe this, and the example of bismuth selenide I gave before, show that analysing the topology of quantum materials is a really useful way to think about their properties and helps us understand what’s going on at a deeper level.

(If you’re really masochistic and want to see the paper I just wrote on this, you can find it here.)

# Why do some materials conduct electricity while others don’t?

Can you tell at a glance how the electrons in a material behave? Amazingly, the answer is “yes”, and in this post I’ll explain how.

I want to introduce the concept of something called ‘band structure’ because it is an idea that underpins a lot of the quantum mechanics of electrons in real materials. In particular, the band structure of a material can make it really easy to know whether it is a good conductor of electricity or not. So, here goes.

To describe how electrons behave in a particular material, a good place to start is by working out what quantum states they are allowed to be in. In essence, the band structure is simply a map of these allowed quantum states. One place where things can be a bit confusing is the coordinates that are used to draw this map. Band structure uses the momentum of the quantum state as its coordinate, and gives the energy of that state at each point.

The reason for this is that the momentum and energy of the quantum states are linked to each other so it just makes sense to draw things this way. But why not use the position of the quantum state? This is because position and momentum cannot both be known at the same time due to Heisenberg’s Uncertainty Principle. If the momentum is known very accurately then the position must be completely unknown.

In fact, there’s even more to it than that. Most solids have a periodic lattice structure, and this periodicity means that only certain momentum values are important. Roughly speaking, if the size of the repeating pattern in the lattice has length a, then there is a repeating pattern of allowed energy states in momentum with length 2π/a. This means that we can draw the map of the allowed quantum states in only the first of these zones. This zone has a finite size, which is very helpful when trying to draw it!
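A small sketch of that folding (using the conventional factor of 2π, so the repeat period in momentum is 2π/a):

```python
import numpy as np

def to_first_zone(k, a):
    """Fold a momentum k back into the first zone [-pi/a, pi/a).

    Because the energy states repeat with period 2*pi/a, any momentum
    is equivalent to one inside this finite window.
    """
    period = 2.0 * np.pi / a
    return (k + period / 2.0) % period - period / 2.0

a = 1.0
# A momentum well outside the first zone maps back inside it.
print(np.isclose(to_first_zone(3.5 * np.pi, a), -0.5 * np.pi))  # True
```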

The band structure of silicon. (Picture credit: Dissertation by Wilfried Wessner, TU Wien.)

Let’s take silicon as an example because it’s a really important material since a lot of electronics are made from it. The picture above shows the band structure (left) and the shape of the first repeating zone of allowed momenta (right) of silicon. The zone of allowed momenta has quite a complicated shape which is related to the crystal structure of the silicon. Some of the important points in that zone are labeled, for example, the center of the zone is called the Γ point (pronounced “gamma point”), while the center of the square face at the edge of the zone is the X point. It’s impossible to draw all the allowed states at every momentum point in a 3D zone, so what is usually done is to draw the allowed quantum states along certain lines between these important points, and that is what is on the left of the picture. You can probably see that these allowed states form bands, which is where the name ‘band structure’ comes from.

There’s one more concept that is really important, called the “Fermi surface”. Electrons are fermions, and so they occupy these quantum states with at most one electron in each state. In nature, the overwhelming tendency is for the total energy of a system to be minimized, as this is the most efficient arrangement. This is done by filling up all the quantum states, starting from the bottom, until all the electrons are in their own state. There are never enough electrons to fill all the allowed quantum states, so the energy of the last filled (or first empty) states is called the Fermi energy, and the set of states at that energy forms the Fermi surface. In a three-dimensional material, this cutoff between filled and empty states is a two-dimensional surface.
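The filling procedure is simple enough to sketch directly (the state energies below are invented purely for illustration):

```python
def fermi_energy(state_energies, n_electrons):
    """Fill the lowest-energy states, one electron each (Pauli), until
    the electrons run out; the last filled energy marks the Fermi level."""
    filled = sorted(state_energies)[:n_electrons]
    return filled[-1]

# Made-up energy levels (in eV) and 5 electrons to place in them.
states = [0.1, 0.5, 0.5, 0.9, 1.3, 1.3, 1.7, 2.0]
print(fermi_energy(states, 5))  # 1.3
```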

So, how does knowing the band structure help us to understand the electronic properties of a material? As an example, let’s think about whether the material conducts electricity well or not. It turns out that for electrical conduction, most of the quantum states of the electrons play no role at all. The important ones are those near the Fermi surface.

To conduct electricity, an electron has to jump from its state below the Fermi surface to one above it, where it is free to move around the material. To do this, it has to absorb some energy from somewhere. This usually either comes from an electric field that is driving the electrical current (like a battery or a plug socket), or from the thermal energy of the material itself.

Take a look at the sketches below. They are cartoons of band structures near the Fermi surface (which is shown by the green dotted line). The filled bands are shown by thick blue lines while the empty bands are shown by thin blue lines. In the left-hand cartoon there is a big gap between the filled and empty bands so it’s very difficult for an electron to gain enough energy to make the jump from the filled band to the empty band. That means that a material with a large band gap at the Fermi surface is an insulator – it can’t conduct electricity easily. The middle cartoon shows a material with only a small band gap. That means it’s possible, but kinda difficult for an electron to make the jump and become conducting. Materials with narrow gaps are semiconductors.

The right-hand cartoon shows a material where the Fermi surface goes through one of the bands, so there are both empty states and filled states right at the Fermi surface. This means it’s really easy for an electron to jump above the Fermi surface and become conducting because it takes only a tiny amount of energy to do this. These materials are conductors.

Going back to silicon, we can look at the band structure above and see that there is a gap of about 1 electron volt at the Fermi energy (the Fermi energy is at zero on the y-axis). One electron volt is too large an energy for an electron to become conducting by absorbing thermal energy, but small enough that it can be done by an electric field. This means that silicon is a semiconductor – it has a narrow gap.
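To put rough numbers on why thermal energy can’t bridge the gap (room temperature assumed to be 300 K; the typical thermal energy per electron is about k_B T):

```python
K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_ev(temperature_k):
    """Typical thermal energy available to an electron, k_B * T."""
    return K_B_EV * temperature_k

# Silicon's gap (~1.1 eV) versus thermal energy at room temperature.
gap_ev = 1.1
kT = thermal_energy_ev(300.0)
# kT is about 0.026 eV -- some forty times smaller than the gap.
print(f"kT = {kT:.3f} eV")
```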

One final question: How do you find the band structure of your favorite material? There is an experimental technique called ARPES where you shine high energy light at a material, and the photons hitting it cause electrons to be ejected from the surface. These electrons can be caught and the energy and momentum that they have reflect the energy and momentum of the quantum states they were filling in the material. So by careful measurement you can reconstruct the map of these states.

Another way is to use mathematics to theoretically predict the band structure. There has been a huge amount of work done to come up with accurate ways to go from the spatial definition of a crystal to its band structure with no extra information. In some cases, these work very well, but the calculations which do this are often very complicated and require supercomputers to run!

So, that is band structure. An easy way to make a link between complicated quantum mechanics and everyday properties like conduction of electricity.

# How a hard disk works

This post is going to explain the fundamental part of how the hard drive in your old computer works. Modern solid state disks work completely differently, so this applies only to the older type that have been common for several decades. Specifically, when your computer writes something to the drive, it has to turn the sequence of zeroes and ones which make up the binary data into something physical on the disk. Then, when it needs to read this information later, it can go back and look at that part of the disk and recover the zeroes and ones from whatever material they were written to. But how do you tell the difference between a one and a zero? That’s the question I’ll try to answer.

### Spin

But before we can get to that point, I have to explain a really important concept in quantum mechanics called “spin”. This is a quantity which is carried by all quantum mechanical particles, and is linked in a loose way to the rotational symmetry of the particle. Look at the right-pointing arrow in the picture. Hopefully it’s easy to see that the only way you can rotate the arrow so that it looks exactly the same as it does when you start (this is called a symmetry operation) is to rotate it through 360°. A particle that has this rotational symmetry is said to have a spin of 1. Now look at this double-headed arrow. If you rotate around the axis indicated by the red dot, you only have to rotate it by 180° to get back to where you started. This has a spin of 2 because you have to rotate half a turn to get the first symmetry operation. The other pictures show a few different spins.

But what about electrons? Well, they have a spin of ½. Just to be clear about what that means: using the same analogy, it implies that you have to rotate by 720° before the electron “looks” like it did when you started. There isn’t a good way to draw that, so I can’t give you a picture of a spin-½ particle – this is one of those places where quantum mechanics is weird and counter-intuitive and we just have to get on with it. The other building blocks of atoms (protons and neutrons) also have spin ½, so in this post I’ll focus on that strange case. The crucial thing about spin-½ particles is that their spin can exist in one of two states, usually called ‘up’ and ‘down’, which are typically represented by arrows pointing in those two directions.

But why does this matter? Well, individual spins generate a magnetic field. The reason that iron is a magnetic material is that the interaction between the spins in the iron atoms makes their spins all line up in the same direction. Therefore, the tiny magnetic fields associated with each of the spins all add up to make a large field. Non-magnetic materials don’t have this alignment (in fact, their spins are all randomly aligned) and so the tiny magnetic fields all cancel each other out because they are pointing in opposite directions. Materials like iron which have this alignment are called ‘ferromagnetic’.

### Reading and writing in a hard disk

But, what does this have to do with your laptop? Well, in a hard disk, the part where the zeroes and ones are stored is made from two small pieces of ferromagnetic material. Then, the difference between a one and a zero is made by manipulating the spins of the atoms in one of the ferromagnetic layers. When an electric current is passed through this region, the electrons behave differently depending on the spins. Specifically, if the electrons have the same spin as the atoms, then they don’t interact very strongly and the electrical resistance is quite low. But if they have opposite spins, the electrons interact strongly with the atoms so they bounce off the atoms (or “scatter” in the technical language), their progress is impeded, and the electrical resistance is high.

The way to encode a one or a zero is shown in the picture below. A one is encoded by aligning the ferromagnets (the pink layers) so that their spins point in the same direction. In the left-hand picture, I show this with both layers having up-spins. A current of electrons (shown by the red arrows) has a half-and-half mix of electrons with up-spin and down-spin. When it is passed through the stack, the up-spin electrons interact weakly with the ferromagnet up-spins in both layers (black arrows) and encounter low resistance. This means that some of the current put in at the top of the stack emerges from the bottom and this characterises the one state. Note that the down-spin electrons are blocked from getting to the bottom of the stack because they scatter strongly off the up-spin atoms in the first ferromagnet layer and so the resistance for them is high.

For the zero state, one of the ferromagnetic layers has its spins reversed. In the right-hand picture, this is shown by the lower layer now having a down-spin black arrow. For electric current, the down-spin electrons still scatter strongly from the up-spin atoms in the top layer. The up-spin electrons still pass through this layer, but then they encounter the down-spin atoms in the lower layer where the electrons and the atoms have opposite spin, so they scatter strongly. This means that no current emerges at the bottom of the device, and so this defines the zero state.
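A toy “two-current” sketch of this stack captures the logic: each spin channel sees a low resistance in a layer whose atomic spins match its own and a high resistance otherwise, and the two channels conduct in parallel. (The resistance values below are arbitrary illustrative numbers, not measured ones.)

```python
R_LOW, R_HIGH = 1.0, 100.0  # arbitrary illustrative resistances

def stack_resistance(layer1_spin, layer2_spin):
    """Resistance of the two-layer stack for a half-and-half spin current."""
    def channel(electron_spin):
        # An electron passes through both layers in series.
        return sum(R_LOW if electron_spin == layer else R_HIGH
                   for layer in (layer1_spin, layer2_spin))
    r_up, r_down = channel("up"), channel("down")
    return 1.0 / (1.0 / r_up + 1.0 / r_down)  # channels in parallel

one = stack_resistance("up", "up")     # aligned layers  -> the "1" state
zero = stack_resistance("up", "down")  # opposed layers  -> the "0" state
print(one < zero)  # True: the aligned state conducts much better
```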

This means that, for the hard disk to work, it needs to be able to do two things. Firstly, the “write head”, which is the part that encodes the zeroes and ones when data is written to the disk, needs to be able to flip the spins of one of the ferromagnetic layers. Then, to recover the information at a later time, the “read head” tries to pass current through a specific piece of the disk material. If current flows (because the ferromagnet spins are the same) then this is a one. If current does not flow (because the ferromagnet spins are opposite) then it is a zero.

And this works entirely because of the quantum-mechanical property of particles called spin: aligned spins is a one, opposite spins is a zero. And as a bonus, it also explains why you have to be careful with hard drives and strong magnetic fields, because a magnet can change the alignment of all the ferromagnetic areas in the hard disk and destroy the encoded ones and zeros. Don’t say you weren’t warned!

# What is condensed matter theory, and why is it so hard?

This post is about the general approach to physics that people who work on the theory of condensed matter take. As I’ll explain, it is basically impossible to calculate anything exactly, and so the whole field relies on choosing smart approximations that allow you to make some progress. Exactly what kind of approximations you make depends on what you want to achieve, and I’ll describe some of the common ones below.

But before that, what is ‘condensed matter physics’? Roughly speaking, it refers to anything that is a solid or a liquid (and also some gases) that you can see in the Real World around you. So it’s not stars and galaxies and space exploration, it’s not tiny sub-atomic particles like quarks and Higgs bosons like they talk about at CERN, and it’s not what happened in the first fractions of a second after the Big Bang, or what it’s like inside a black hole. But it is about the materials that make the chip inside your phone or power a laser, or about making batteries store energy more efficiently, or finding new catalysts that make industrial chemical production cheaper (okay, so that one crosses into chemistry as well, but the lines are fuzzy!), or it’s about making superconductivity work at higher temperatures.

### Why is it so hard?

Really, what makes condensed matter physics different from many other types of physics is that in many situations, the behaviour of the materials is governed by how many, many particles interact with each other. Think about a small piece of metal for instance: You have millions and millions of atoms that form some bonds which give it a solid shape. Then some of the electrons in those atoms dissociate themselves and become a bit like a liquid that can move around inside the metal and conduct electricity or heat, or make the metal magnetic. In a small piece of metal there will be $10^{22}$ atoms. (That notation means that the number is a one with twenty-two zeroes after it. So it’s a lot.) And all of these atoms have an electric field which is felt by all the other atoms so that they all interact with each other. It is, in principle, possible to write down some equations which describe this, but there is no way that anyone can solve these equations and work out exactly how all these atoms and electrons behave. I don’t just mean that it’s very difficult, I mean that it is mathematically proven to be impossible!
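To give a feel for why this isn't an exaggeration, here's a back-of-the-envelope sketch in Python. Even if we ignore the atoms entirely and keep only the simplest quantum property of each electron (its spin, which has two states), the number of amplitudes needed to describe the system exactly doubles with every particle:

```python
# N spin-1/2 particles need 2**N complex numbers to describe exactly,
# because every combination of up/down spins is a separate amplitude.
def state_count(n_particles):
    return 2 ** n_particles

# Ten particles is easy...
print(state_count(10))  # 1024

# ...but just 300 particles already needs more amplitudes than there
# are atoms in the observable universe (roughly 10**80).
print(state_count(300) > 10**80)  # True
```

And that's before we even reach $10^{22}$ particles, or let them interact.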

### Watcha gunna do?

So, that raises the question: what can we do? It is easy to connect a bit of metal to a battery and a current meter and see that it can conduct electricity, but how do we describe that theoretically? There are several different approaches to making the approximations needed, so I’ll try to explain them now.

1. Use symmetry. By the magic of mathematics, the equations can often be simplified if you know something about the symmetry of the material you want to investigate. For example, the atoms in many metals sort themselves into a crystal lattice of repeated cubes. Group theory can then be used to reduce the complexity of the equations in a very helpful way. For instance, it might be possible to tell whether a material will conduct electricity or not even at this level of approximation. But this symmetry analysis contains an assumption because in reality materials won’t completely conform to the symmetry. They may have impurities in them, or the crystal structure might have irregularities, for example. So this isn’t a magic bullet. And also this might well not reduce the equations enough that they can be solved, so it is usually just a first step.

2. From this point, it is often possible to make simplifying assumptions so that the mathematically impossible theory becomes something that can be solved. Of course, by doing this you lose quite a lot of detail. It’s like the “spherical cows” analogy. In reality, cows have four legs, a tail, a head, and maybe some udders. But say you wanted to work out how many cows you could safely fit into a field. You don’t need to know any of that detail, so you can think of the cows as being a sphere which consumes a certain amount of hay each day. You can do something similar with the metal: Instead of keeping track of every detail, you can forget that the atoms have an internal structure (spherical atoms!). Or you could assume that the atoms interact with the electrons in a particularly simple way so that you can focus just on the dissociated electrons. Or you could assume that the electrons don’t interact with each other, but only with the atoms. In the jargon of the field, this general approach is called finding an “effective theory”. These theories can often give quite good estimates of not only whether a material will conduct, but how well it will do it.

3. These days, computers are really fast, and they can be used to numerically solve equations that are almost exact. However, computers are not good enough that they can do this for $10^{22}$ atoms, so if you want to keep quite close to the original equations, they might be able to do fifty or so. Maybe a hundred. In the jargon, these methods are called “ab-initio” (from the beginning) because they do not make any approximations unless they absolutely have to. The fact that you can’t treat too many atoms limits what these methods can be applied to. For instance, they can be quite good for molecules, and crystals where the periodic repetition is not too complicated. But for these situations, you can get a level of detail which is simply impossible in the effective theories. So there’s a trade-off. And computers are getting better all the time so this is one area that will see a lot of progress.
4. The final way that I’ll describe is sort-of the inverse process. Instead of starting from the mathematics which are impossible, you can start from experimental data and try to work backwards towards the theoretical description that gives you the right answer. Sometimes this is used in conjunction with one of the other methods as a way to give you some clues about what assumptions to make.
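To make point 1 a little more concrete, here's a small sketch (my own illustration, not a real group-theory library): every symmetry operation of a cube reorders the x, y, and z axes and possibly flips their directions, and simply counting these signed permutations recovers the 48 operations of the full cubic point group that the equations would have to respect.

```python
from itertools import permutations, product

# Each symmetry operation of a cube maps the axes (x, y, z) to a
# permutation of themselves, with each axis possibly reversed.
operations = set()
for perm in permutations(range(3)):           # 6 ways to reorder the axes
    for signs in product((1, -1), repeat=3):  # 8 ways to flip signs
        operations.add((perm, signs))

print(len(operations))  # 48 operations in the cubic point group
```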

So, that’s how you do theory in condensed matter. Numbers 2 and 4 are basically my day job, on a good day at least!
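As a flavour of what an effective theory from point 2 can deliver, here's the classic Drude model of conduction: forget the atoms' structure entirely, treat the electrons as a free gas that scatters off the lattice every $\tau$ seconds, and you get a one-line formula for the conductivity, $\sigma = n e^2 \tau / m$. The numbers below are rough, copper-like values chosen purely for illustration:

```python
# Drude model: conductivity of a metal from an 'effective theory' that
# ignores almost all the detail. sigma = n * e**2 * tau / m.
e = 1.602e-19    # electron charge (C)
m = 9.109e-31    # electron mass (kg)
n = 8.5e28       # conduction electrons per cubic metre (copper-like)
tau = 2.5e-14    # average time between scattering events (s), rough value

sigma = n * e**2 * tau / m
print(f"{sigma:.1e} S/m")  # ~6e7 S/m, close to copper's measured value
```

Not bad for a theory with one free parameter, and that's the trade-off in a nutshell: crude, but solvable.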

# Particle-wave duality and the two slit experiment

Particle-wave duality is the concept in quantum mechanics that small objects simultaneously behave a bit like particles and a bit like waves. This comes very naturally from the mathematics, but instead of talking about those boring details, I’m going to describe a famous experiment that proves it.

### Diffraction

It’s called the two slit experiment, and I’ve sketched how it works in the picture on the right. Before going into the full details, let’s look at the upper part of the picture. This shows a light wave shining on a barrier with a small slit in it. The thin black lines show the position of the peaks of the wave that describes the traveling light. Some of the light can get through that slit, but in doing so, it changes its form to become a circular wave with the slit as its source. This is called diffraction, and leads to a distinctive pattern when the light hits a screen placed some way behind the barrier. The red line behind the barrier shows the intensity of the light hitting the screen. This demonstrates that light can behave in a wave-like way, because if the light was just particles you would not see the diffraction pattern; instead there would be a small spot of light on the screen in line with the slit.

Now look at the lower part of the picture. Now the screen has been replaced with a second barrier that has two slits in it. Both of these slits act like the first one: they diffract the light that is coming through. So behind the second barrier, there are now two waves of light, one coming from each slit. These two waves interfere with each other, so that the pattern of light seen on the screen (the red line) looks very different from that made by just one slit. (I did actually calculate what the light should look like before I drew these pictures, so I hope both of the red lines are actually correct!) Interference is the process of these waves adding together to form one single pattern. The value of a light wave at a particular position can be either positive or negative. In the picture, the thin black lines show where the waves are at their maximum – so where they are their most positive. Exactly half-way between a pair of lines they are at their most negative. If the two waves are both positive at a particular position (like exactly at the center of the screen) then they add together to give intense light. But if one is positive and one is negative then they will cancel each other out and leave almost no light.
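In fact, the calculation behind that red line amounts to literally adding the two waves. Here is a bare-bones sketch with made-up geometry (all lengths in units of the wavelength): each slit contributes a wave whose phase depends on the path length to the screen, and the intensity is the squared magnitude of their sum.

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength   # wavenumber
slit_separation = 5.0        # distance between the two slits
screen_distance = 100.0      # barrier-to-screen distance

x = np.linspace(-20, 20, 401)  # positions along the screen

# Path lengths from each slit to each point on the screen.
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

# Add the two waves as complex amplitudes; intensity is |sum|^2.
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amplitude) ** 2
```

Plotting `intensity` against `x` gives the fringes: the centre of the screen, where the two path lengths are equal, is the brightest point, and the intensity drops to almost nothing at the dark fringes between.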

### Electrons

That’s not very controversial. But it starts to get a bit more weird when you repeat the same experiment but using a beam of electrons instead of a beam of light. Electrons are one of the three types of “particle” which make up an atom: The protons and neutrons bind together to form the nucleus, and then electrons “orbit” around it. Until this experiment was done for the first time, most physicists thought that electrons were particles. But the result of the experiment was the same kind of two-slit diffraction pattern that they got when they used light. The electrons that went through each of the slits were interfering with each other just like the light waves did. The only possible conclusion: these electrons were also wave-like.

Then, they pushed the experiment a bit further. They had the same barriers, but instead of using a beam of electrons, they fired them through one at a time. Astonishingly, even though there was only one electron, the result was still a two-slit diffraction pattern. Somehow, the electron was going through both slits and interfering with itself. Conclusion: Electrons are not just wave-like when there are lots of them, they are wave-like on their own!
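You can get a feel for this with a small simulation (entirely my own sketch, with a toy fringe pattern standing in for the real geometry): each "electron" is a single random hit drawn from the two-slit probability distribution. No individual hit looks wave-like at all, yet the histogram of many hits rebuilds the interference fringes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-slit probability for where a single electron lands: the
# familiar cos^2 fringe pattern (the slit-width envelope is omitted).
x = np.linspace(-20, 20, 401)
prob = np.cos(np.pi * x / 5.0) ** 2
prob /= prob.sum()   # normalise to a valid probability distribution

# Fire electrons one at a time: each event is one independent sample.
hits = rng.choice(x, size=10_000, p=prob)

# Individually random, collectively a fringe pattern:
counts, _ = np.histogram(hits, bins=40, range=(-20, 20))
```

The bins near the centre fill up, the bins at the dark fringes stay almost empty, and the pattern emerges even though every electron arrived alone.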

### Now it gets weird

To try and verify this, they modified their apparatus to include detectors at both of the slits so they could tell which slit the electron was going through. Expecting to find a signal from both detectors, they were surprised to find that only one of the detectors sensed an electron going through, and instead of the two-slit diffraction pattern, they now saw a one-slit pattern on the screen. If they did the experiment with the detectors turned off, the two-slit diffraction pattern reappeared. It seemed like asking the electron which slit it had gone through forced it to choose one or the other. But get this: The experimentalists got sneaky. They took the electron detectors away and instead made slits that could be opened and closed very quickly. Starting with both slits open, they fired one electron from the gun. After it had passed the barrier with the two slits, but before it reached the screen, they closed one of the slits. Any guesses as to what pattern was measured on the screen?

They saw a single-slit diffraction pattern! Somehow, the electron knew that one of the slits had been closed after it went through, and behaved like only the other one had been open the whole time. This hints at many deep issues about quantum measurement and (gulp!) the nature of reality itself. But I’ll save that discussion for another time.

This experiment has been repeated with many different objects used instead of the light or electrons. Protons, whole atoms, and buckyballs all show the same behavior, so this is without doubt a general feature in quantum mechanics and not something oddly specific to light and electrons. In fact, once you allow for the possibility of wave-like particles, you start to see the effects of them in many places, including in the behavior of electrons in the materials which make computer chips and all the rest of information technology. So it’s a pretty big deal.

### And finally…

One final point of detail which I think is worth pointing out. In the first paragraph, I mentioned that “small objects” are needed to do this experiment. But what does “small” mean in this context? It turns out that this can be written down in a really simple equation. The de Broglie wavelength, referred to by the symbol $\lambda$, is the wavelength associated with the quantum object. To see the wave-like properties, the size of the slits has to be similar to $\lambda$.

The formula is $\lambda = h / (mv)$. Here, $h$ is Planck’s constant – a fixed number that comes from quantum mechanics and can otherwise be forgotten about. The $m$ and $v$ are the mass and speed associated with the particle-like properties of the object. So, the heavier the “particle”, the smaller the associated wavelength is. This explains why you don’t see any wave-like effects for people or cars or golf balls. Just to illustrate the kind of size that we’re talking about, light has a $\lambda$ of half a micron or so. For electrons, it’s a few nanometers, and for buckyballs, it’s a few thousandths of a nanometer.
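Those numbers drop straight out of the formula. In the sketch below the speeds are ballpark values I've chosen just to illustrate the scales (a fairly slow electron, and a golf ball mid-flight):

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34   # Planck's constant (J s)

def de_broglie(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

electron = de_broglie(9.109e-31, 5.0e5)   # ~1.5e-9 m: around a nanometer
golf_ball = de_broglie(0.046, 50.0)       # ~2.9e-34 m: hopelessly unobservable

print(f"{electron:.1e} m, {golf_ball:.1e} m")
```

A nanometer-scale slit for an electron is something we can actually build; a $10^{-34}$ m slit for a golf ball is not, which is why golf stays resolutely classical.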