# Why does light exist?

Light is something that we probably take for granted, but have you ever thought about why it should exist at all? From the viewpoint of quantum mechanics, it turns out that it must be there to satisfy a fundamental symmetry of the universe.

Electric and magnetic fields are created by electric charges, and electric charges also move in response to those fields. The electromagnetism that you probably learned at school is one description of this. The electric potential and the electric field are related to each other because the field is given by the gradient (or slope) of the potential. It’s not possible to directly measure a potential, so in some sense it is only the field that has a physical reality. Charged objects moving in the field experience a force which changes their speed or direction of travel.

This description works very nicely for many everyday situations and, on the face of it, there is no obvious role for light here. But, at the scales of individual atoms and electrons, electromagnetism has to be described in the framework of quantum mechanics. It turns out that light is the thing that carries the forces that charged objects feel in electromagnetic fields.

The point of this post is to try and explain why that is.

As I’ve said before, looking at symmetries can help to simplify physics problems. Symmetries are transformations which leave the transformed object in a state identical to how it started. For example, a square can be rotated by 90 degrees about its centre point and the result will look the same as the unrotated version. This is an example of a discrete symmetry – there are only four rotations of the square that are symmetry transformations (90, 180, 270, and 360 degrees). In contrast, a circle has a continuous symmetry – you can rotate a circle about its centre point by any angle and end up with a circle that looks just the same as the unrotated one.

This is the moment that things start to get a bit less easy to visualise, because talking about a different kind of symmetry is unavoidable: We have to delve into gauge transformations and gauge symmetries.

To try and explain the concept of a gauge transformation, look at the left-hand picture below. The lines represent potentials at each position. Start with the lower, blue line. It has a field associated with it, which is the slope of the line at each point. To get the red line from the blue line, you have to add on a fixed amount of potential at every position. This is represented by the three dashed black arrows, which are all the same length.

But here is the crucial point: The field associated with the red line is exactly the same as the field associated with the blue line, because the slopes of the two lines are the same at every position. Adding on the extra potential hasn’t changed this.

So, adding a fixed amount to the potential at every position does not change the field at all. And remember, the field is the only thing that is physically observable – we can’t measure the potential. Therefore, this is a continuous symmetry.

The fact that many different potentials give the same field is called “gauge symmetry”. The type of gauge symmetry illustrated here is a “global” symmetry, because the amount of potential that you add or subtract at each point in space is the same (i.e. it’s a global amount).

However, the crucial part for the existence of light comes from a slightly different type of gauge symmetry, called a “local” one. For a local gauge transformation, instead of adding the same amount of potential at every point, you have the freedom to add different amounts of potential in different places. This is shown in the right-hand graph above. The red line is obtained from the blue line by adding different amounts of potential at each position. Notice that the three dashed black arrows are now different lengths.

Electromagnetism in quantum mechanics works under the assumption that this local gauge transformation is also a symmetry. For this to be true, the physical observables, including the field, must stay the same after the local transformation.

But, looking at the graph, it immediately becomes apparent that adding a different amount of potential at different positions to the blue line means that the red line has a different slope. This would change the field, and so this local transformation should not be a symmetry at all!
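The difference between the global and local cases is easy to check numerically. Here is a minimal sketch (the sinusoidal potential and the amounts added are arbitrary choices, purely for illustration):

```python
import numpy as np

# Positions along one axis and an arbitrary "blue line" potential.
x = np.linspace(0.0, 10.0, 201)
V = np.sin(x)  # hypothetical potential profile

# The field is the slope (gradient) of the potential at each position.
field = np.gradient(V, x)

# Global gauge transformation: add the SAME amount everywhere.
V_global = V + 3.0
print(np.allclose(field, np.gradient(V_global, x)))  # True: field unchanged

# Local gauge transformation: add a DIFFERENT amount at each position.
V_local = V + 0.5 * x**2
print(np.allclose(field, np.gradient(V_local, x)))   # False: field changed
```

The global shift drops out of the slope entirely, while the position-dependent shift does not, which is exactly the problem described above.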

The only way that this can work is if our description of the quantum field is incomplete: In addition to the potential, there must be another part which also feels the effect of the local gauge transformation. When the transformations of the potential and of this additional object are combined, the fields remain unchanged and the local gauge symmetry stays intact.

For the electromagnetic field in quantum mechanics, it turns out that this secondary part is the photon. Looking deeper into the mathematics, we find that photons also explain how charged objects feel a force from the field: they emit and absorb photons.

But photons are also the particles which make up light! So, one answer to the question “why do we have light?” is simply that photons must exist to preserve local gauge symmetry.

I appreciate that this has the whiff of a magic trick to it: Why should local gauge symmetry be something that we insist must exist? Perhaps there is some deep answer to this question that I don’t know, but the best response might simply be that this theory works.

To add more to the “it just works” line of reasoning, local gauge symmetry is also the reason that gluons (which carry the strong interaction) and the W and Z bosons (which carry the weak interaction) exist. In those cases the symmetry operations are more complicated than adding a potential, but the fundamental assumptions and logic are the same. So, this is a powerful concept which seems to be important in describing quantum physics, and gives one explanation for where light comes from.

# What is the holographic correspondence?

One of the hardest things to describe in theoretical physics is what happens when lots of particles interact with each other. Essentially, it is impossible to solve this problem exactly, and so the approaches that are currently used rely on several types of approximation.

What I want to describe is how, maybe, approaches in String Theory might be used to solve some of these really important “hard” problems. There’s no way that I can explain all the details (honestly, I don’t understand them!) but hopefully this will give a picture of how weird, esoteric, and very mathematical concepts can say something useful about reality.

This approach is generically called “holography” for reasons that will become clear(er) later.

One of the approximate approaches to describing interacting particles that has been used to great effect is called “perturbation theory”. This applies when the interactions between the particles are relatively weak. How it works could be a whole post in itself, but perhaps for now it is enough to say that the existence of perturbation theory makes some problems with weak interactions “easy” in the sense that they can be approximately solved.
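To give a flavour of what perturbation theory does, here is a toy sketch: a quantum system with just two energy levels and a weak coupling g between them. The perturbative estimate of the lowest energy is compared with exact diagonalisation (all numbers are made up purely for illustration):

```python
import numpy as np

# Toy system: two energy levels (0 and 1) with a weak coupling g.
g = 0.05
H0 = np.diag([0.0, 1.0])             # the solvable, non-interacting part
V = np.array([[0.0, g], [g, 0.0]])   # the weak interaction

# Exact answer: diagonalise the full problem (easy here, impossible
# for a realistic many-particle system).
exact = np.linalg.eigvalsh(H0 + V)[0]

# Second-order perturbation theory: E0 + |V01|^2 / (E0 - E1).
approx = 0.0 + g**2 / (0.0 - 1.0)

# The estimate is wrong only at order g^4, which is tiny when the
# coupling is weak.
print(exact, approx)
```

The weaker the coupling, the better the approximation, which is why perturbation theory makes weakly interacting problems “easy”.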

Crucially, it turns out that many of the complicated string theories that try to describe how quantum gravity works have interactions between particles which can be treated in perturbation theory.

The point of holography is that it might be possible to discover a dictionary or a way of translating between the “easy” string theory and a “hard” theory with strong interactions. Using this dictionary, it is possible to start from the “hard” theory, translate the calculation into the “easy” gravity analogue, do the calculation, and translate the results back to the “hard” context.

The diagram above is a sketch of how to visualise this process. The “easy” gravity theory exists in a bulk with a certain number of dimensions, whereas the “hard” theory lives in a space which is one dimension smaller, at the edge (or “boundary”). This is where the term holography comes from: The physical theory is a hologram which is projected from the bulk like R2D2’s message from Princess Leia.

Most intriguingly, when the “hard” theory has a temperature above absolute zero (which all physical materials must have) the gravity theory contains a black hole at its centre which has an event horizon.

So, the calculation for the complicated experimental quantity that you are interested in on the boundary can be translated through the bulk to the event horizon of the black hole. There, the properties of the theory on the boundary get converted into the properties of space-time near the black hole. This is what the dictionary does. Perturbation theory can then be used to get an approximate answer in that context. Finally, the answer is moved back through the bulk to the boundary where it can be interpreted in the original context.

Of course, the technical details of how to actually do this mathematically are very complicated, but there is one well-understood example of this process.

Quarks are fundamental particles and can be glued together to make protons and neutrons. The particles which do the glueing are called gluons. The gluons and the quarks are strongly interacting and so they fall into the category of “hard” theories. But, there is a well-defined correspondence between a supersymmetric particle theory which lives in eight spatial dimensions and one time dimension (so, nine in total) and an “easy” string theory which lives in ten dimensions. This correspondence has been used to derive results which would otherwise not be possible.

One of the current questions for people who work on holography is whether this is just a fortuitous specific case, or whether these correspondences are more general.

In condensed matter, there are also strongly interacting materials which theorists find very difficult to describe. One really important example is the family of high-temperature superconductors.

The question is whether a holographic correspondence can be found for a theory that makes predictions about these materials. To put that another way, is there a higher-dimensional, gravity-like theory which gives a theory of a superconductor as its hologram?

A lot of people are looking at this question at the moment.

There are some encouraging things which have been done already. For example, the materials which go superconducting at low temperature also have weird behaviour at higher temperatures where they don’t superconduct. These properties have been calculated within the gravity theory, and show some features similar to those seen in experiments.

But there is also a lot that is not known yet. For example, it is very difficult to include effects of the underlying material crystal, or include the existence of the quantum-mechanical spin of the particles. Both of these details will be important for designing new materials which sustain superconductivity at even higher temperatures.

This is really a field which is still in its infancy, but the underlying idea behind it is intriguing: if the theorists working on it can progress to the point where the theory makes predictions, it would be very exciting indeed.

# How do you measure the quantum states of a material?

I’ve talked a lot on this blog about how understanding the quantum states of a material can be helpful for working out its properties. But is it possible to directly measure these states in an experiment? And what sort of equipment is needed to do so? I’ll try to explain here.

First, a quick recap. The band structure is like a map of the allowed quantum states for the electrons in a material. The coordinates of the map are the momentum of the electron, and at each point there are a series of energy levels which the electron can be in. The energy states close to the “Fermi energy” largely determine things like whether the material can conduct electricity and heat, absorb light, or do interesting magnetic things.
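As a concrete (and heavily simplified) illustration of the “map”, here is a sketch of the simplest possible band structure: a single band in one dimension, in the standard tight-binding picture. The hopping energy t, on-site energy eps0, and lattice spacing a are arbitrary illustrative values:

```python
import numpy as np

# Minimal one-band, one-dimensional tight-binding model (illustrative).
t, eps0, a = 1.0, 0.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 101)  # momenta across the zone

# One allowed energy level at each momentum coordinate of the "map".
E = eps0 - 2.0 * t * np.cos(k * a)

# The band spans [eps0 - 2t, eps0 + 2t]; whether the Fermi energy falls
# inside this window is what decides metal versus insulator in this toy.
print(E.min(), E.max())  # -2.0, 2.0
```

A real material has many such bands in three dimensions, but the idea of “energy levels indexed by momentum” is exactly this.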

There are various ways that the band structure can be investigated. Some of them are quite indirect, but last week, I visited an experimental facility in the UK where they can do (almost) direct measurements of the band structure using X-rays.

The technical name for this technique is “angle-resolved photoemission spectroscopy”, or ARPES for short. Let’s break that down a bit. Spectroscopy just means that it’s a way of measuring the spectrum of something. In this case, it’s the electrons in the material. I’ll come back to the “angle-resolved” part in a minute, but the crucial thing to explain here is what photoemission is.

The sketch above shows a hypothetical band structure. When light is shone on a material, the photons (green wavy arrows) that make up the beam can be absorbed by one of the electrons in the filled bands below the Fermi energy. When this happens, the energy and momentum of the photon is transferred into the electron.

This means that the electron must change its quantum state. But the band structure gives the map of the only allowed states in the material, so the electron must end up in one of the other bands. In the left-hand picture, the energy of the photon is just right for the electron at the bottom of the red arrow to jump to an unfilled state above the Fermi energy. This is called “excitation”.

But in the right-hand picture, the energy of the photon is larger (see the thicker line and bigger wiggles on the green arrow) so there is no allowed energy level for the excited electron to move to. Instead, the electron is kicked completely out of the material. To put that another way, the high-energy photons cause the material to emit electrons. This is photoemission!

The crucial part about ARPES is that the emitted electrons retain information about the quantum state that they were in before they absorbed the photons. In particular, the photons carry almost no momentum, so the momentum of the electron can’t really change during the emission process. But also, energy must be conserved, so the energy of the emitted electron must be the energy of the photon, plus the energy of the quantum state that the electron was in before emission.

So, if you can catch the emitted electrons, and measure their energy and momentum, then you can recover the band structure! The angle-resolved part in the ARPES acronym means that the momentum of the electrons is deduced from what angle they are emitted at.
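The bookkeeping behind that last step can be sketched in a few lines. This is a simplified version of the real analysis: the numbers are invented, and a fixed offset called the work function (which real experiments must subtract) is ignored here for clarity:

```python
import numpy as np

# Physical constants (SI units).
hbar = 1.054571817e-34   # J.s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J per eV

def arpes_state(photon_eV, kinetic_eV, angle_deg):
    """Recover the (energy, parallel momentum) of the state an electron
    occupied before photoemission. Illustrative sketch only: the work
    function offset is neglected."""
    # Energy conservation: the state energy is what's left after the
    # photon energy is accounted for (negative, i.e. a bound state).
    e_state = kinetic_eV - photon_eV
    # Momentum parallel to the surface is conserved, and follows from
    # the kinetic energy and the emission angle.
    k_par = (np.sqrt(2.0 * m_e * kinetic_eV * eV) / hbar
             * np.sin(np.radians(angle_deg)))
    return e_state, k_par  # in eV and 1/m

e, k = arpes_state(photon_eV=100.0, kinetic_eV=98.5, angle_deg=10.0)
print(e)  # -1.5: the electron came from a state 1.5 eV down
```

Scanning over emission angles and kinetic energies fills in the whole energy-versus-momentum map, which is the band structure.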

But what does this look like in practice? Fortunately, a friendly guide from Diamond showed me around and let me take pictures.

The upper-left picture is an outside view of the Diamond facility. (The cover picture for this blog entry is an aerial view.) It’s a circular building, although this picture is taken from close enough that this might be hard to see. This gives a sense of scale for the place!

Inside is a machine called a synchrotron. They didn’t let us go near this, so I don’t have any pictures, but it is a circular particle accelerator which keeps bunches of electrons flowing around it very, very fast. As they go around, they release a lot of X-ray photons which can be captured and focused. (There is a really cool animation of this on their web site.) The X-rays come down a “beam line” and into one of many experimental “hutches” which stand around the outside of the accelerator.

The upper-right picture shows the ARPES machine inside the main hutch of beamline I05. Most of the stuff you can see at the front is designed for making samples under high vacuum, which can then be transferred straight into the sample chamber without exposure to air.

The lower-left picture is behind the machine, where the beam line comes in. It’s kinda hard to see the metal-coloured pipe, so I’ve drawn arrows. The lower-right picture shows where the real action happens. The sample chamber is near the bottom (there is a window above it which allows the experimentalists to visually check that the sample is okay), and you can just about see the beam line coming in from behind the rack in the foreground.

The X-rays come into the sample chamber from the beam line, strike the sample, and the emitted electrons are funnelled into the analyser, which is the big metallic hemisphere towards the right of the picture. The spherical shape is important, because the momentum of the electrons is detected by how much they are deflected by a strong electric field inside the analyser. This separates the high-momentum electrons from the low-momentum ones in a similar way to how a centrifuge separates heavy items from light ones.

And what can you get after all of this? The energy and momentum of all the electrons is recorded, and pretty graphs can be made!

Above is a picture that I stole from the Diamond web site. On the left is a theoretical calculation for the band structure of a material called tungsten diselenide (WSe2). On the right is the ARPES data. The colour scheme shows the intensity of the photoemitted electrons. As you can see, the prediction and data match very well. After all the effort of building a massive machine, it works! Hooray science!

# What is peer review?

Chances are you’ve heard of peer review. The media often use it as an adjective to indicate a respectable piece of research. If a study has not been peer reviewed then this is taken as a shorthand that it might be unreliable. But is that a fair way of framing things? How does the process of peer review work? And does it do the job?

So, first things first – what is peer review? Essentially, it’s a stamp of approval by other experts in the field. Knowledgeable people will read a paper before it’s published and critique it. Usually, a paper won’t be published unless these experts are fairly satisfied that the paper is correct and measures up to the standards of importance or “impact” that the particular journal requires.

The specifics of the peer review process vary between different fields and different journals, but here is how things typically go in physics. Usually, a journal editor will send the paper to two or more people. These could be high-profile professors or Ph.D. students or anyone in between, but they are almost always people who work on similar topics.

The reviewers then read the paper carefully, and write a report for the editor, including a recommendation of whether the paper should be published or not. Often, they will suggest modifications to make it better, or ask questions about parts that they don’t understand. These reports are sent to the authors, who then have a chance to respond to the inevitable criticisms of the referees, and resubmit a modified manuscript.

After resubmission, many things can happen. If the referees’ recommendations are similar, then the editor will normally send the new version back to them so they can assess whether their comments and suggestions have been adequately addressed. They will then write another report for the editor.

But if the opinions of the referees are split, then the editor might well ask for a third opinion. This is the infamous Reviewer 3, and their recommendation is often crucial. In fact, it’s so crucial that the very existence of Reviewer 3 has led to internet memes, a life on Twitter (see #reviewer3), and mention in countless satires of academic life including this particularly excellent one by Kelly Oakes of BuzzFeed (link).

But, once the editor has gathered all the reports and recommendations, they will make a final decision about whether the paper will be published or not. For the authors, this is the moment of truth!

When it works, this can be a constructive process. I’ve certainly had papers that have been improved by the suggestions and feedback. But the process does not always work well. For example, not all journals always carry out the review process with complete rigour. The existence of for-profit, commercial journals who charge authors a publication fee is a subject for another day, but in those journals it is easy to believe that there is a pressure on the editors to maximise the number of papers that they accept. Then it’s only natural that review standards may not be well enforced.

And the anonymity that reviewers enjoy can lead to bad outcomes. By definition, reviewers have to be working in a similar field to the authors of the paper otherwise they would not be sufficiently expert to judge the merits of the work. So sometimes a paper is judged by competitors. There are many stories of papers being deliberately slowed down by referees, perhaps while they complete their own competing project. Or of times when a referee might stubbornly refuse to recommend publication in spite of good arguments. And there are even stories of outright theft of ideas and results during review.

Finally, there is also the possibility of straightforward human error. Two or three reviewers is not a huge number and so it can be hard to catch the mistakes. And not all reviewers are completely suitable for the papers they read. Review work is almost always done on a voluntary basis and so it can be hard for editors to find a sufficient number of people who are willing to give up their time.

I can think of a few times when I have not really understood the technical aspects of a paper, or I have not been sufficiently close to the field to judge whether the work is important. Perhaps I should have declined to review those manuscripts. Or maybe it’s okay because the paper should not be published if it cannot convince someone in an adjacent field of the merits of the work. There are arguments both ways.

The fact is that sometimes things slip through the net. Papers can be published with errors, or even worse, with fabricated data or plagiarism. There is no foolproof system for avoiding this, so in my opinion, robust post-publication review is important too. Exactly how to implement that is a tricky business though.

But, to sum up, my opinion is that peer review is an important – but not infallible – part of the academic process. Just because a paper has passed through this test does not automatically mean that it is correct or the last word on a subject, but it is a mark in its favour.

# What is condensed matter theory, and why is it so hard?

This post is about the general approach to physics that people who work on the theory of condensed matter take. As I’ll explain, it is basically impossible to calculate anything exactly, and so the whole field relies on choosing smart approximations that allow you to make some progress. Exactly what kind of approximations you make depends on what you want to achieve, and I’ll describe some of the common ones below.

But before that, what is ‘condensed matter physics’? Roughly speaking, it refers to anything that is a solid or a liquid (and also some gases) that you can see in the Real World around you. So it’s not stars and galaxies and space exploration, it’s not tiny sub-atomic particles like quarks and Higgs bosons like they talk about at CERN, and it’s not what happened in the first fractions of a second after the Big Bang, or what it’s like inside a black hole. But it is about the materials that make the chip inside your phone or power a laser, or about making batteries store energy more efficiently, or finding new catalysts that make industrial chemical production cheaper (okay, so that one crosses into chemistry as well, but the lines are fuzzy!), or it’s about making superconductivity work at higher temperatures.

### Why is it so hard?

Really, what makes condensed matter physics different from many other types of physics is that in many situations, the behaviour of the materials is governed by how many, many particles interact with each other. Think about a small piece of metal for instance: You have millions and millions of atoms that form some bonds which give it a solid shape. Then some of the electrons in those atoms disassociate themselves and become a bit like a liquid that can move around inside the metal and conduct electricity or heat, or make the metal magnetic. In a small piece of metal there will be $10^{22}$ atoms. (That notation means that the number is a one with twenty-two zeroes after it. So it’s a lot.) And all of these atoms have an electric field which is felt by all the other atoms so that they all interact with each other. It is, in principle, possible to write down some equations which describe this, but there is no way that anyone can solve these equations and work out exactly how all these atoms and electrons behave. I don’t just mean that it’s very difficult, I mean that it is mathematically proven to be impossible!
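To get a feel for why this is so hopeless, just count the states that a fully quantum calculation would need to keep track of. For N particles that each have only two possible states (like a spin pointing up or down), the count is 2 to the power N:

```python
# Number of basis states for N two-state quantum particles is 2**N.
for n in (10, 50, 300):
    print(n, 2**n)

# Even 300 particles (a vanishingly small speck compared with the
# 10**22 atoms in a piece of metal) already need more basis states
# than there are atoms in the observable universe (roughly 10**80).
assert 2**300 > 10**80
```

No conceivable computer can store that many numbers, which is why the whole field runs on approximations.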

### Watcha gunna do?

So, that raises the question: what can we do? It is easy to connect a bit of metal to a battery and a current meter and see that it can conduct electricity, but how do we describe that theoretically? There are several different approaches to making the approximations needed, so I’ll try to explain them now.

1. Use symmetry. By the magic of mathematics, the equations can often be simplified if you know something about the symmetry of the material you want to investigate. For example, the atoms in many metals sort themselves into a crystal lattice of repeated cubes. Group theory can then be used to reduce the complexity of the equations in a very helpful way. For instance, it might be possible to tell whether a material will conduct electricity or not even at this level of approximation. But this symmetry analysis contains an assumption because in reality materials won’t completely conform to the symmetry. They may have impurities in them, or the crystal structure might have irregularities, for example. So this isn’t a magic bullet. And also this might well not reduce the equations enough that they can be solved, so it is usually just a first step.

2. From this point, it is often possible to make simplifying assumptions so that the mathematically impossible theory becomes something that can be solved. Of course, by doing this you lose quite a lot of detail. It’s like the “spherical cows” analogy. Real cows have four legs, a tail, a head, and maybe some udders. But say you wanted to work out how many cows you could safely fit into a field. You don’t need to know any of that detail, so you can think of the cows as being a sphere which consumes a certain amount of hay each day. You can do something similar with the metal: Instead of keeping track of every detail, you can forget that the atoms have an internal structure (spherical atoms!). Or you could assume that the atoms interact with the electrons in a particularly simple way so that you can focus just on the disassociated electrons. Or you could assume that the electrons don’t interact with each other, but only with the atoms. In the jargon of the field, this general approach is called finding an “effective theory”. These theories can often give quite good estimates of not only whether a material will conduct, but how well it will do it.

3. These days, computers are really fast, and they can be used to numerically solve equations that are almost exact. However, computers are not good enough that they can do this for $10^{22}$ atoms, so if you want to keep quite close to the original equations, they might be able to do fifty or so. Maybe a hundred. In the jargon, these methods are called “ab-initio” (from the beginning) because they do not make any approximations unless they absolutely have to. The fact that you can’t treat too many atoms limits what these methods can be applied to. For instance, they can be quite good for molecules, and crystals where the periodic repetition is not too complicated. But for these situations, you can get a level of detail which is simply impossible in the effective theories. So there’s a trade-off. And computers are getting better all the time so this is one area that will see a lot of progress.
4. The final way that I’ll describe is sort-of the inverse process. Instead of starting from the mathematics which are impossible, you can start from experimental data and try to work backwards towards the theoretical description that gives you the right answer. Sometimes this is used in conjunction with one of the other methods as a way to give you some clues about what assumptions to make.
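To make the “spherical cow” idea in point 2 concrete, here is perhaps the simplest effective theory of a metal: forget the atoms entirely and treat the conduction electrons as a non-interacting gas. Even this drastic approximation gets the characteristic energy scale of a real metal about right (the electron density below is roughly that of copper; the textbook free-electron formula is used):

```python
import numpy as np

# Physical constants (SI units).
hbar = 1.054571817e-34   # J.s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J per eV

# "Spherical cow" metal: ignore the atoms and keep only a gas of
# non-interacting electrons with density n.
n = 8.5e28  # electrons per m^3, roughly right for copper

# Fermi energy of a free electron gas:
# E_F = (hbar^2 / 2m) * (3 * pi^2 * n)^(2/3)
E_F = (hbar**2 / (2 * m_e)) * (3 * np.pi**2 * n) ** (2.0 / 3.0)
print(E_F / eV)  # ~7 eV, close to the accepted value for copper
```

That a model with no atoms in it at all lands within shouting distance of experiment is the whole point of effective theories.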

So, that’s how you do theory in condensed matter. Numbers 2 and 4 are basically my day job, on a good day at least!