All posts by David Abergel

About David Abergel

I am an assistant professor at Nordita, the Nordic Institute for Theoretical Physics.

How does a transistor work?

The world would be a very different place if the transistor had never been invented. Transistors are everywhere. They underpin all digital technology, they are the workhorses of consumer electronics, and they can be bewilderingly small. For example, the latest Core i7 microchips from Intel have over two billion transistors packed onto them.

But what are they, and how do they work?

Left: An old-school transistor with three ‘legs’. Right: An electron microscope image of one transistor on a microchip.

In some ways, they are deceptively simple. Transistors are tiny switches: they can be on or off. When they are on, electric current can flow through them, but when they are off it can’t.

The most common way this is achieved is in a device called a “field effect transistor”, or FET. It gets this name because a small electric field is used to change the device from its conducting ‘on’ state to its non-conducting ‘off’ state.

At the bottom of the transistor is the semiconductor substrate, which is usually made out of silicon. (This is why silicon is such a big deal in computing.) Silicon is a fantastic material because, by swapping a few atoms of its crystal for atoms of another element, it can be given extra positive or negative charge carriers. To explain why, we need to turn to chemistry! A silicon atom has 14 electrons, but ten of these are bound tightly to the atomic nucleus and are very difficult to move. The other four are much more loosely bound and determine how it bonds to other atoms.

When a silicon crystal forms, the four loose electrons from each atom form bonds with the electrons from nearby atoms, and the geometry of these bonds is what makes the regular crystal structure. However, it is possible to take out a small number of the silicon atoms and replace them with some other type of atom. If this is done with an atom like phosphorus or arsenic, which has five loose electrons, then four of them are used to make the chemical bonds and one is left over. This left-over electron is free to move around the crystal easily, giving the crystal a surplus of mobile negative charge. In the physics language, the silicon has become “n-doped”.

But if some silicon atoms are replaced by something like boron or aluminium, which has only three loose electrons, the atom has to ‘borrow’ an extra electron from the rest of the crystal, meaning that an electron is missing from somewhere else, which behaves like a mobile positive charge. This is called “p-doped”.

Sketches of a FET in the ‘off’ state (left) and in the ‘on’ state (right) when the gate voltage is applied.

Okay, so much for the chemistry, now back to the transistor itself. Transistors have three connections to the outside world, which are usually called the source, drain, and gate. The source is the input for electric current, the drain is the output, and the gate is the control which determines if current can flow or not.

The source and drain both connect to a small area of n-doped silicon (i.e. they have extra electrons) which can provide or collect the electric current which will flow through the switch. The central part of the device, called the “channel”, is p-doped, which means that there are not enough electrons in it.

Now, here’s where the quantum mechanics comes in!

A while back, I described the band structure of a material. Essentially, it is a map of the quantum mechanical states of the material. If there are no states in a particular region, then electrons cannot go there. The “Fermi energy” is the energy at which states stop being filled. I’ve drawn a rough version of the band structure of the three regions in the diagram below. In the n-doped regions, the states made by the extra electrons are below the Fermi energy and so they are filled. But in the p-doped channel, the unfilled extra states are above the Fermi energy. This makes a barrier between the source and drain and stops electrons from moving between the two.

Band diagrams for a FET. In the ‘on’ state, the missing electron levels are pushed below the Fermi surface and form a conducting channel.

Now for the big trick. When a voltage is applied to the gate, it creates an electric field in the channel region. The extra energy that the electrons get from this field shifts the quantum states in the channel to different energies, as shown on the right-hand side of the band diagrams. The extra states are now below the Fermi energy, but the silicon can’t create more electrons to fill them, so these empty states form a path through which the extra electrons in the source can move to the drain. The barrier is removed: applying the electric field to the channel opens up the device to carrying current.

In the schematic of the device above, the left-hand sketch shows the transistor in the off state with no conducting channel in the p-doped region. The right-hand sketch shows the on-state, where the gate voltage has induced a conducting channel near the gate.
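To make the switching logic concrete, here is a toy sketch in Python. The barrier height, coupling strength, and voltage values are made-up illustrative numbers, not real silicon parameters: the point is only that the gate field lowers the channel barrier, and current flows once the barrier drops to the Fermi energy.

```python
def fet_conducts(gate_voltage, barrier=1.0, coupling=0.5):
    """Toy FET in arbitrary units: the gate field lowers the channel barrier.

    barrier:  height of the channel states above the Fermi energy
    coupling: how strongly the gate voltage shifts those states
    Current can flow once the effective barrier drops to zero or below.
    """
    effective_barrier = barrier - coupling * gate_voltage
    return effective_barrier <= 0.0

# With no gate voltage the channel blocks current (the 'off' state)...
print(fet_conducts(0.0))   # False
# ...but a large enough gate voltage opens the channel (the 'on' state).
print(fet_conducts(3.0))   # True
```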

So, that’s how a transistor can turn on and off. But it’s a long leap from there to the integrated circuits that power your phone or laptop. Exactly how those microchips work is another subject, but briefly, the output from the drain of one transistor can be linked to the source or the gate of another one. This means that the state of a transistor can be used to control the state of another transistor. If they are put together in the right way, they can process information.
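To illustrate that last idea, here is a hypothetical sketch: each transistor is modelled as a bare on/off switch, a pair of them wired appropriately acts as a NAND gate, and NAND gates alone are enough to build the rest of digital logic.

```python
def nand(a, b):
    """Two transistor-like switches in series: the output goes low
    only when both inputs are on; otherwise it stays high."""
    return not (a and b)

# NOT and AND built purely out of NAND gates:
def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))

print(and_gate(True, True))   # True
print(and_gate(True, False))  # False
```

Any Boolean function can be assembled this way, which is how chains of transistors end up processing information.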

What is peer review?

Chances are you’ve heard of peer review. The media often use “peer-reviewed” as a label for a respectable piece of research, and if a study has not been peer reviewed, this is taken as shorthand that it might be unreliable. But is that a fair way of framing things? How does the process of peer review work? And does it do the job?

So, first things first – what is peer review? Essentially, it’s a stamp of approval by other experts in the field. Knowledgeable people will read a paper before it’s published and critique it. Usually, a paper won’t be published unless these experts are fairly satisfied that the paper is correct and measures up to the standards of importance or “impact” that the particular journal requires.

The specifics of the peer review process vary between different fields and different journals, but here is how things typically go in physics. Usually, a journal editor will send the paper to two or more people. These could be high-profile professors or Ph.D. students or anyone in between, but they are almost always people who work on similar topics.

The reviewers then read the paper carefully, and write a report for the editor, including a recommendation of whether the paper should be published or not. Often, they will suggest modifications to make it better, or ask questions about parts that they don’t understand. These reports are sent to the authors, who then have a chance to respond to the inevitable criticisms of the referees, and resubmit a modified manuscript.

After resubmission, many things can happen. If the referees’ recommendations are similar, then the editor will normally send the new version back to them so they can assess whether their comments and suggestions have been adequately addressed. They will then write another report for the editor.

But if the opinions of the referees are split, then the editor might well ask for a third opinion. This is the infamous Reviewer 3, and their recommendation is often crucial. In fact, it’s so crucial that the very existence of Reviewer 3 has led to internet memes, a life on Twitter (see #reviewer3), and mentions in countless satires of academic life, including this particularly excellent one by Kelly Oakes of BuzzFeed (link).

Credit: Kelly Oakes / BuzzFeed

But, once the editor has gathered all the reports and recommendations, they will make a final decision about whether the paper will be published or not. For the authors, this is the moment of truth!

When it works, this can be a constructive process. I’ve certainly had papers that have been improved by the suggestions and feedback. But the process does not always work well. For example, not all journals carry out the review process with complete rigour. The existence of for-profit, commercial journals that charge authors a publication fee is a subject for another day, but in those journals it is easy to believe that there is pressure on the editors to maximise the number of papers that they accept. Then it’s only natural that review standards may not be well enforced.

And the anonymity that reviewers enjoy can lead to bad outcomes. By definition, reviewers have to be working in a similar field to the authors of the paper otherwise they would not be sufficiently expert to judge the merits of the work. So sometimes a paper is judged by competitors. There are many stories of papers being deliberately slowed down by referees, perhaps while they complete their own competing project. Or of times when a referee might stubbornly refuse to recommend publication in spite of good arguments. And there are even stories of outright theft of ideas and results during review.

Finally, there is also the possibility of straightforward human error. Two or three reviewers is not a huge number and so it can be hard to catch the mistakes. And not all reviewers are completely suitable for the papers they read. Review work is almost always done on a voluntary basis and so it can be hard for editors to find a sufficient number of people who are willing to give up their time.

I can think of a few times when I have not really understood the technical aspects of a paper, or I have not been sufficiently close to the field to judge whether the work is important. Perhaps I should have declined to review those manuscripts. Or maybe it’s okay because the paper should not be published if it cannot convince someone in an adjacent field of the merits of the work. There are arguments both ways.

The fact is that sometimes things slip through the net. Papers can be published with errors, or even worse, with fabricated data or plagiarism. There is no foolproof system for avoiding this, so in my opinion, robust post-publication review is important too. Exactly how to implement that is a tricky business though.

But, to sum up, my opinion is that peer review is an important – but not infallible – part of the academic process. Just because a paper has passed through this test does not automatically mean that it is correct or the last word on a subject, but it is a mark in its favour.

Topology and the Nobel Prize

You may have seen that the Nobel Prize for Physics was awarded this week. The Prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter”, which is a bit of a mouthful. Since this is an area that I have done a small amount of work in, I thought I would try to explain what it means.

You might have seen a video where a slightly nutty Swede talks about muffins, donuts, and pretzels. (He’s my boss, by the way!) The number of holes in each type of pastry defined a different “topology” of the lunch item. But what does that have to do with electrons? This is the bit that I want to flesh out. Then I’ll give an example of how it might be a useful concept.

What is topology?

In a previous post, I talked about band structure of crystal materials. This is the starting point of explaining these topological phases, so I recommend you read that post before trying this one. There, I talked about the band structure being a kind of map of the allowed quantum states for electrons in a particular crystal. The coordinates of the map are the momentum of the electron.

Each of those quantum states has a wave function associated with it, which describes, among other things, the probability of the electron in that state being at a particular point in space. To make a link with topology, we have to look at how the wave function changes in different parts of the map. To use a real map of a landscape as an analogy: you can associate the height of the ground with each point on the map, and then by looking at how the height changes you can redraw the map to show how steep the slope of the ground is at each point.

We can do something like that in the mathematics of the wave functions. For example, in the sketches below, the arrows represent how the slope of the wave function looks for different momenta. You can get vortices (left picture) where the arrows form whirlpools, or you can get a source (right picture) where the arrows form a hedgehog shape. A sink is similar except that the arrows are pointing inwards, not outwards.


Now for the crucial part. There is a theorem in mathematics that says that if you multiply the slope of the wave function by the wave function itself at the same point, and add up all of these products for every arrow on the map, then the result has to be a whole number. This isn’t obvious just by looking at the pictures, but that’s why mathematics is great!

That whole number (which I’m going to call n from now on) is like the number of holes in the cinnamon bun or pretzel: It defines the topology of the electron states in the material. If n is zero then we say that the material is “topologically trivial”. If n is not zero then the material is “topologically non-trivial”. In many cases, n counts the difference between the number of sources and the number of sinks of the arrows.
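For the curious, the “result has to be a whole number” idea can be seen numerically in a simplified two-dimensional setting. This is a loose analogue of the real invariant, not the actual formula used for materials: walking once around a loop and counting how many full turns the arrows make is forced to give an integer.

```python
import math

def winding_number(field, n_samples=400, radius=1.0):
    """Count how many full turns the arrows of `field` make as we walk
    once around a circle. The total is forced to be a whole number of
    turns, which is the kind of integer invariant described in the text."""
    total_turn = 0.0
    prev_angle = None
    for i in range(n_samples + 1):
        t = 2 * math.pi * i / n_samples
        x, y = radius * math.cos(t), radius * math.sin(t)
        vx, vy = field(x, y)
        angle = math.atan2(vy, vx)
        if prev_angle is not None:
            d = angle - prev_angle
            # unwrap jumps of +/- 2*pi so the running total stays smooth
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total_turn += d
        prev_angle = angle
    return round(total_turn / (2 * math.pi))

source = lambda x, y: (x, y)    # hedgehog: arrows point outwards
saddle = lambda x, y: (x, -y)   # arrows flow in along y, out along x

print(winding_number(source))   # 1
print(winding_number(saddle))   # -1
```

However finely or coarsely you sample the loop, the answer snaps to an integer: that robustness is the essence of a topological invariant.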

What topology does

Okay, so that explains how topology enters into the understanding of electron states. But what impact does it have on the properties of a material? There are a number of things, but one of the coolest concerns the quantum states that can appear on the surface of topologically non-trivial materials. This is because of another theorem from mathematics, called the “bulk-boundary correspondence”, which says that when a topologically non-trivial material meets a topologically trivial one, there must be quantum states localized at the interface.

Now, the air outside of a crystal is topologically trivial. (In fact, it has no arrows at all, so that when you take the sum there is no option but to get zero for the result.) So, at the edges of any topologically non-trivial material there must be quantum states. In some materials, like bismuth selenide for example, these quantum states have weird spin properties that might be used to encode information in the future.

And the best part is that because these quantum states at the edge are there because of the topology of the underlying material, they are really robust against things like impurities or roughness of the edge or other types of disorder which might destroy quantum states that don’t have this “topological protection”.

An application

Now, finally, I want to give one more example of this type of consideration, because it’s something I’ve been working on this year. But let me start at the beginning and explain the practical problem that I’m trying to solve. Let’s say that graphene, the wonder material, is finally made into something useful that you can put on a computer chip. Then you want to find a way to make these useful devices talk to each other by exchanging electric current. To do that, you need a conducting wire that is only a few nanometers wide which allows current to flow along it.

The obvious choice is to use a wire of graphene, because then it can be fabricated at the same time as the graphene device itself. But the snag is that to make this work, the edges of that graphene wire have to be absolutely perfect. Essentially, any single atom out of place will make it very hard for the graphene wire to conduct electricity. That’s not good, because it’s very difficult to keep every atom in the right place!


The picture above shows a sketch of a narrow strip of graphene surrounded by boron nitride. Graphene is topologically trivial, but boron nitride is (in a certain sense) non-trivial and can have n equal to either plus or minus one, depending on details. So, remembering the bulk-boundary correspondence, the graphene in this construction works like an interface between two topologically non-trivial regions with different n, and therefore there must be quantum states in the graphene. These states are robust, and protected by the topology. I’ve tried to show these states by the black curved lines, which illustrate that the electrons are located in the middle of the graphene strip.

Now, it is possible to use these topologically protected states to conduct current from left to right in the picture (or vice versa) and so this construction will work as a nanometer size wire, which is just what is needed. And the kicker is that because of the topological protection, there is no longer any requirement for the atoms of the graphene to be perfectly arranged: The topology beats the disorder!

Maybe this, and the example of bismuth selenide I gave before, show that the analysis of the topology of quantum materials is a really useful way to think about their properties and helps us understand what’s going on at a deeper level.

(If you’re really masochistic and want to see the paper I just wrote on this, you can find it here.)

What is graphene and why all the hype?

There’s a decent chance you’ve heard of graphene. There are lots of big claims and grand promises made about it by scientists, technologists, and politicians. So what I thought I’d do is to go through some of these claims and almost ‘fact-check’ them so that the next time you hear about this “wonder material” you know what to make of it.

Let’s start at the beginning: what is graphene? It’s made out of carbon atoms arranged in a crystal. But what sets it apart from other crystals of carbon atoms is that it is only one atom thick (see the picture below). It’s not quite the thinnest thing that could ever exist, because maybe you could make something similar using atoms that are smaller than carbon (for example, experimentalists can make certain types of helium in one layer), but given that carbon is the sixth-lightest element, it’s really quite close!


Diamond and graphite are also crystals made only of carbon, but they have a different arrangement of the carbon atoms, and this means they have very different properties.

So, what has been claimed about graphene?

Claim one: the “wonder material”

Graphene has some nice basic properties. It’s really strong and really flexible. It conducts electricity and heat really well. It is simultaneously almost transparent and yet absorbs light really strongly. It’s almost impermeable to gases. In fact, most of the proposals for applications of graphene in the Real World™ involve these physical and mechanical superlatives, not the electronic properties, which in some ways are more interesting for a physicist.

For example, its conductivity and transparency mean that it could be the layer in a touch screen which senses where a finger or stylus is pressing. This could combine with its flexibility to make bendable (and wearable) electronics and displays. But for the moment, it’s “only” making current ideas work better; it doesn’t add any fundamentally new technology that we didn’t have before. If that’s your definition of a “wonder material” then okay, but personally I’m not quite convinced the label is merited.

Claim two: Silicon replacement

In the first few years after graphene was made, there was a lot of excitement that it might be used to replace silicon in microchips and make smaller, faster, more powerful computers. It fairly quickly became obvious that this wouldn’t happen. The reason for this is to do with how transistors work. That’s a subject that I want to write more about in the future, but roughly speaking, a transistor is a switch that has an ‘on’ state where electrical current can flow through it, and an ‘off’ state where it can’t. The problem with graphene is turning it off: Current would always flow through! So this one isn’t happening.

Graphene electronics might still be useful though. For example, when your phone transmits and receives data from the network, it has to convert the analogue signal in the radio waves from the mast into a digital signal that the phone can process. Graphene could be very good for this particular job.

Claim three: relativistic physics in the lab

This one is a bit more physicsy so it takes a bit of explaining. In quantum mechanics, one of the most important pieces of information you can have is how the energy of a particle is related to its momentum. This is the ‘band structure’ that I wrote about before. In most cases, when electrons move around in crystals, their energy is proportional to their momentum squared. In special relativity there is a different relation: the energy is proportional to just the momentum, not to the square. For example, this is true for light or for neutrinos. One thing that researchers realized very early on about graphene is that electrons moving around on its hexagonal lattice have an ‘energy equals momentum’ band structure, just like in relativity. Therefore, the electrons in graphene behave a bit like neutrinos or photons. Some of the effects of this have been measured in experiments, so this is true.
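The two relations are easy to compare side by side. This is just a sketch in arbitrary units; m and v are placeholder constants, not real material parameters.

```python
# Energy vs momentum in arbitrary units. The mass m and speed v are
# placeholder constants, not real material parameters.
m, v = 1.0, 1.0

def ordinary(p):
    """Electrons in most crystals: energy proportional to momentum squared."""
    return p ** 2 / (2 * m)

def graphene_like(p):
    """Electrons in graphene: energy proportional to momentum, like light."""
    return v * abs(p)

# At small momenta the quadratic curve lies well below the linear one,
# which is one way the two kinds of electron behave very differently.
for p in [0.1, 1.0, 2.0]:
    print(p, ordinary(p), graphene_like(p))
```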

Claim four: Technological revolution

One other big problem that has to be overcome is that graphene is currently very expensive to make. And the graphene that is made at industrial scale tends to be quite poor quality. This is an issue that engineers and chemists are working really hard on. Since I’m neither an engineer nor a chemist, I probably shouldn’t say too much about it. But what is definitely true is that the fabrication issues have to be solved before you’ll see technology with graphene inside it in high street stores. Still, these are clever people so there is every chance it will still happen.


Near the top, I said graphene simultaneously absorbs a lot of light and is almost transparent. This makes no sense on the face of it! So let me say what I mean. To be specific, a single layer of graphene absorbs about 2.3% of visible light that lands on it. Considering that graphene is only one layer of atoms, that seems like quite a lot. It’s certainly better than any other material that I know of. But at the same time, it means that it lets through about 97.7% of light, which also seems like a lot. I guess it’s just a question of perspective.
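If each layer absorbed its 2.3% independently, you could estimate how opaque a stack of layers would be. This is a back-of-envelope sketch that ignores reflection and interference between layers:

```python
# Transmission through stacked graphene layers, assuming each layer
# independently absorbs 2.3% of the light that reaches it (a rough
# approximation that ignores reflection and interference).
absorption_per_layer = 0.023

def transmission(n_layers):
    return (1 - absorption_per_layer) ** n_layers

for n in [1, 10, 100]:
    print(n, transmission(n))
```

One layer lets through 97.7% of the light, but a hundred layers (still only tens of nanometers of material) would block about 90% of it.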

Why do some materials conduct electricity while others don’t?

Can you tell at a glance how the electrons in a material behave? Amazingly, the answer is “yes”, and in this post I’ll explain how.

I want to introduce the concept of something called ‘band structure’ because it is an idea that underpins a lot of the quantum mechanics of electrons in real materials. In particular, the band structure of a material can make it really easy to tell whether a material is a good conductor of electricity or not. So, here goes.

To describe how electrons behave in a particular material, a good place to start is by working out what quantum states they are allowed to be in. In essence, the band structure is simply a map of these allowed quantum states. One place where things can be a bit confusing is the coordinates that are used to draw this map. Band structure uses the momentum of the quantum state as its coordinate, and gives the energy of that state at each point.

The reason for this is that the momentum and energy of the quantum states are linked to each other so it just makes sense to draw things this way. But why not use the position of the quantum state? This is because position and momentum cannot both be known at the same time due to Heisenberg’s Uncertainty Principle. If the momentum is known very accurately then the position must be completely unknown.

In fact, there’s even more to it than that. Most solids have a periodic lattice structure, and this periodicity means that only certain momentum values are important. Roughly speaking, if the repeating pattern in the lattice has length a, then there is a repeating pattern of allowed energy states in momentum with length proportional to 1/a. This means that we can draw the map of the allowed quantum states in only the first of these zones. This zone has a finite size, which is very helpful when trying to draw it!
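As a quick illustration, any momentum can be folded back into the first zone by subtracting whole multiples of the repeat length. (In the usual convention the repeat length in momentum space is 2π/a; the sketch below assumes a = 1.)

```python
import math

def fold_to_first_zone(k, a=1.0):
    """Map a momentum k back into the first zone [-G/2, G/2),
    where G = 2*pi/a is the repeat length in momentum space."""
    G = 2 * math.pi / a
    return (k + G / 2) % G - G / 2

# Momenta that differ by a whole number of repeat lengths describe
# the same quantum state, so both lines print (approximately) 0.3:
print(fold_to_first_zone(0.3))
print(fold_to_first_zone(0.3 + 2 * math.pi))
```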


The band structure of silicon. (Picture credit: Dissertation by Wilfried Wessner, TU Wien.)

Let’s take silicon as an example because it’s a really important material since a lot of electronics are made from it. The picture above shows the band structure (left) and the shape of the first repeating zone of allowed momenta (right) of silicon. The zone of allowed momenta has quite a complicated shape which is related to the crystal structure of the silicon. Some of the important points in that zone are labeled, for example, the center of the zone is called the Γ point (pronounced “gamma point”), while the center of the square face at the edge of the zone is the X point. It’s impossible to draw all the allowed states at every momentum point in a 3D zone, so what is usually done is to draw the allowed quantum states along certain lines between these important points, and that is what is on the left of the picture. You can probably see that these allowed states form bands, which is where the name ‘band structure’ comes from.

There’s one more concept that is really important, called the “Fermi surface”. Electrons are fermions, and so they are allowed to occupy these quantum states so that there is at most one electron in each state. In nature, the overwhelming tendency is for the total energy of a system to be minimized. This is done by filling up all the quantum states, starting from the bottom, until all the electrons are in their own state. There are never enough electrons to fill all the allowed quantum states, and the energy of the last filled (or first empty) states is called the Fermi energy; the set of states at this energy forms the Fermi surface. In a three-dimensional material, this cutoff between filled and empty states is a two-dimensional surface.
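The filling procedure is simple enough to sketch in a few lines of Python (ignoring spin and temperature for simplicity; the energy levels here are made up):

```python
def fill_states(energies, n_electrons):
    """Fill the lowest-energy states first, one electron per state
    (ignoring spin for simplicity), and return the energy of the
    last filled state, i.e. the Fermi energy."""
    ordered = sorted(energies)
    filled = ordered[:n_electrons]
    return filled[-1]

# Six available states but only four electrons: the two highest states
# stay empty, and the Fermi energy sits at the last filled level.
levels = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
print(fill_states(levels, 4))   # 1.5
```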

So, how does knowing the band structure help us to understand the electronic properties of a material? As an example, let’s think about whether the material conducts electricity well or not. It turns out that for electrical conduction, most of the quantum states of the electrons play no role at all. The important ones are those near the Fermi surface.

To conduct electricity, an electron has to jump from its state below the Fermi surface to one above it, where it is free to move around the material. To do this, it has to absorb some energy from somewhere. This usually either comes from an electric field that is driving the electrical current (like a battery or a plug socket), or from the thermal energy of the material itself.

Take a look at the sketches below. They are cartoons of band structures near the Fermi surface (which is shown by the green dotted line). The filled bands are shown by thick blue lines while the empty bands are shown by thin blue lines. In the left-hand cartoon there is a big gap between the filled and empty bands so it’s very difficult for an electron to gain enough energy to make the jump from the filled band to the empty band. That means that a material with a large band gap at the Fermi surface is an insulator – it can’t conduct electricity easily. The middle cartoon shows a material with only a small band gap. That means it’s possible, but kinda difficult for an electron to make the jump and become conducting. Materials with narrow gaps are semiconductors.


The right-hand cartoon shows a material where the Fermi surface goes through one of the bands, so there are both empty states and filled states right at the Fermi surface. This means it’s really easy for an electron to jump above the Fermi surface and become conducting because it takes only a tiny amount of energy to do this. These materials are conductors.
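Putting the three cartoons together, the classification depends only on the size of the gap at the Fermi surface. The 3 eV dividing line below is a common rule of thumb rather than a sharp boundary, and the example materials are just typical illustrations.

```python
def classify(band_gap_ev):
    """Rough classification by the band gap at the Fermi surface.
    The 3 eV cutoff between semiconductor and insulator is a common
    rule of thumb, not a sharp physical boundary."""
    if band_gap_ev == 0.0:
        return "conductor"
    elif band_gap_ev < 3.0:
        return "semiconductor"
    return "insulator"

print(classify(0.0))   # conductor (e.g. a metal like copper)
print(classify(1.1))   # semiconductor (roughly silicon's gap)
print(classify(9.0))   # insulator (e.g. quartz)
```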

Going back to silicon, we can look at the band structure above and see that there is a gap of about 1 electron volt at the Fermi energy (the Fermi energy is at zero on the y-axis). One electron volt is too large an energy for an electron to become conducting by absorbing thermal energy, but small enough that it can be done by an electric field. This means that silicon is a semiconductor – it has a narrow gap.
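To put some numbers on that claim: the thermal energy scale at room temperature is the Boltzmann constant times the temperature, which comes out far smaller than the 1 eV gap.

```python
# Comparing silicon's ~1 eV band gap with the thermal energy scale.
k_B = 8.617e-5          # Boltzmann constant in eV per kelvin
T_room = 300            # room temperature in kelvin

thermal_energy = k_B * T_room
print(thermal_energy)            # ~0.026 eV
print(1.0 / thermal_energy)      # the 1 eV gap is roughly 40 thermal energies
```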

One final question: how do you find the band structure of your favourite material? There is an experimental technique called ARPES where you shine high-energy light at a material, and the photons hitting it cause electrons to be ejected from the surface. These electrons can be caught, and the energy and momentum that they have reflect the energy and momentum of the quantum states they were filling in the material. So by careful measurement you can reconstruct the map of these states.

Another way is to use mathematics to theoretically predict the band structure. There has been a huge amount of work done to come up with accurate ways to go from the spatial definition of a crystal to its band structure with no extra information. In some cases, these work very well, but the calculations which do this are often very complicated and require supercomputers to run!

So, that is band structure. An easy way to make a link between complicated quantum mechanics and everyday properties like conduction of electricity.

Justin Trudeau and quantum computing

You’ve probably seen already that clip of Justin Trudeau, the Prime Minister of Canada, explaining to a surprised group of journalists why quantum computing excites him so much. In case you haven’t seen it, here is a link. A number of things strike me about this. Firstly, of course, he’s right: If we can get quantum computing to work then that would be a really, really big deal and it’s worth being excited about! Second, it’s a bit depressing that a politician having a vague idea about something scientific is a surprising exception to the rule. Thirdly, while his point about storage of information is right, there’s a whole lot more that quantum computers can do that he didn’t mention. Of course, that’s fair enough because he wasn’t trying to be comprehensive, but it gives me an opportunity to talk about some of the stuff that he missed out.

Before that, let’s go over exactly what a quantum computer is. As the Prime Minister said, a normal (or “classical”) computer operates using only ones and zeroes, which are represented by current flowing or not flowing through a small “wire”. (However, as you might have already read, this might have to change in the future!) A quantum computer is completely different because instead of these binary bits, it has bits which can be in a state that is a mixture of zero and one at the same time. This is like the electron simultaneously going through both slits in the two-slit experiment, or Schrödinger’s famous cat being alive and dead “at the same time”: it’s an example of a quantum mechanical superposition of states. A quantum computer is designed to operate on these quantum states and to take advantage of this indeterminacy, changing them from one superposition to another to do computations. If you can get the quantum bits to become entangled with each other (meaning that the quantum state of one bit will be affected by the quantum state of all the others that it is entangled with) then you can do quantum computing! Exactly how this would work from a technological point of view is a big subject which I’ll probably write about another time, but options that physicists and engineers are working on include using superconducting circuits, very cold gases of atoms, the spins of electrons or atomic nuclei, or special particles called Majorana fermions.
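The superposition idea can be made concrete in a few lines of Python: a qubit is just a pair of amplitudes, and the Hadamard gate (a standard one-qubit operation) turns a definite ‘zero’ into an equal mixture of zero and one.

```python
import math

# A qubit is a pair of amplitudes for the states |0> and |1>.
# (In general the amplitudes are complex; real numbers suffice here.)
zero = (1.0, 0.0)

def hadamard(state):
    """Apply the Hadamard gate, which turns a definite |0> into an
    equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

superposed = hadamard(zero)
probabilities = [amp ** 2 for amp in superposed]
# Both measurement outcomes are equally likely until the qubit is measured:
print(probabilities)   # approximately [0.5, 0.5]
```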

A big field of study has been to find algorithms that allow this quantum-ness to be used to do things that classical computers can’t. There are a few examples that would really change everyday life if they could be implemented. The first sounds a bit boring on the face of it, but quantum algorithms allow you to search a list to determine whether an item is in the list or not (i.e. to find that item) in a much shorter time than classical algorithms. So, if you want to search the internet for your favourite web site, a quantum google will do this much faster than a classical google. Quantum algorithms can also tell quickly whether all the items in a list are different from each other or not.
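The search speedup comes from Grover’s algorithm, which needs only on the order of √N queries to search N items, compared with roughly N/2 on average for a classical search. A quick comparison of the query counts (the exact constants are glossed over):

```python
import math

def classical_queries(n_items):
    """On average, a classical search checks about half the list."""
    return n_items // 2

def grover_queries(n_items):
    """Grover's quantum search needs on the order of sqrt(N) queries."""
    return math.ceil(math.sqrt(n_items))

# The gap between the two grows dramatically with the size of the list:
for n in [1_000, 1_000_000, 1_000_000_000]:
    print(n, classical_queries(n), grover_queries(n))
```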

Another application is to solve “black box” problems. This has nothing to do with the flight data recorders in aircraft, but is the name given to the following problem. Say you have a set of inputs to a system and their corresponding outputs, but you don’t know what the system does to turn the inputs into the outputs. The system is the black box, and the difficult problem is to determine what operations the system performs on the input. This is important because black box problems occur in many different areas of science, including artificial intelligence, studies of the brain, and climate science. For a classical computer to solve this exactly would require an exponential number of “guesses”, but a quantum computer could do it in just one “guess”!

But perhaps the most devastating use of a quantum computer is to break the internet. Let me explain this a bit! There is a mathematical theorem which says that every whole number can be represented as a list of prime numbers multiplied together, and that for each number there is only one such list. For example, 30=2\times 3\times5, or 247=13\times19. This matters because most digital security currently depends on the fact that it is very difficult for a classical computer to start with a big number and work out what its prime factors are. The way that most encryption on the internet works is that data is encoded using a big number that is the product of only two prime numbers. In order to decrypt the information again, you need to know what the two prime numbers that give you the big number are. Because it’s hard to work out what the two prime numbers are, it is safe to distribute the big number (your public key) so that anyone can encode information to send to you securely. But only you can decode the information, because only you know what the two primes are (this is your private key). But if it suddenly becomes easy to factorise the big number into the two primes, then this whole mode of encryption stops working! Every interaction that you have with your bank, your email provider, social media, and online stores could be broken by someone else. The internet essentially wouldn’t be private! Or at least, it wouldn’t be private until a new method for doing encryption was found. This is the main reason why security agencies are working so hard on quantum computing.
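To make the factoring idea concrete, here is a toy sketch that splits a number into factors by simple trial division. It handles the small examples from the text instantly, but this brute-force approach (and every known classical refinement of it) becomes hopeless for the hundreds-of-digits numbers used in real public-key encryption, which is exactly the point.

```python
def factorise(n):
    """Find the smallest factor of n by trial division and return both factors.
    Easy for small numbers; the cost blows up for the enormous numbers
    used as real public keys."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

print(factorise(247))  # the example from the text: 13 * 19
```

A quantum computer running Shor’s algorithm could do this job efficiently even for the huge numbers, and that is what makes it so dangerous for encryption.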

Finally, I want to quickly mention one application that is a bit more specialised to physics: quantum computers will allow us to simulate quantum systems in a much more accurate way. Currently, the equations that determine how groups of quantum mechanical objects behave and interact with each other pretty much can’t be solved exactly, in part because the quantum behaviour is difficult to model accurately using classical computing. If you have a quantum computer, then part of this difficulty goes away because you can build the quantum interactions into the simulation in a much more natural way, using the entanglement of the quantum bits.

So in summary, Prime Minister Trudeau was right: quantum computers have the potential to be absolutely amazing, to change society, and they are really exciting (and possibly slightly scary!). But storing information in a more compact manner is really only the tip of the iceberg.

My new idea

It’s been a while! Part of the reason I’ve not written anything recently is that I’ve been busy preparing a grant proposal which has to be submitted in a few days. This means I’m begging the Swedish funding agency to give me money to spend on researching a new idea that I have been working on for a while. As part of this proposal, I am required to write a description of what I want to do that is understandable by people outside of physics, so I thought I’d share an edited version of it here. Maybe it’s interesting to read about something that might happen in the future, rather than things that are already well known. And it’s an idea that I’m pretty excited about because there’s some chance it might make a difference!

Computing technology is continuously getting smaller and more powerful. There is a rule-of-thumb, called Moore’s law, which encodes this by predicting that the computing power of consumer electronics will double every two years. So far, this prediction has held ever since the first microprocessors appeared in the 1970s. However, fundamental limits are about to be reached which will halt this progress. In particular, the individual transistors which make up the chips are becoming so small that quantum mechanical effects will soon start to dominate their operation and fundamentally change how they work. Removing the heat generated by their operation is also becoming hugely challenging.
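A quick bit of arithmetic shows how dramatic that doubling is. This is just a back-of-the-envelope sketch of the rule-of-thumb, not a statement about any particular chip.

```python
def moores_law_factor(years, doubling_period=2):
    # Transistor count multiplies by 2 every `doubling_period` years.
    return 2 ** (years / doubling_period)

# Doubling every two years for two decades is a factor of 2**10 = 1024:
print(moores_law_factor(20))
```

So a chip design from twenty years ago, scaled forward by Moore’s law, ends up with roughly a thousand times as many transistors, which is why the quantum-mechanical size limit is now being approached so quickly.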

A transistor is essentially just a switch that can be either on or off. At the present time, the difference between the on and off state is given by whether an electric current is flowing through the switch or not. If quantum mechanical effects start to dominate transistor operation, then the distinction between the on and off state becomes blurred because current flow becomes a more random process.

One-dimensional materials with excitons. Left, two parallel nanowires. Electrons in the nearly empty wire are shown in blue, ‘holes’ in the nearly full wire are in green. The red ellipses represent the pairing. Right, a core-shell nanowire.

In this project, I will investigate a new method of making transistors, using the quantum mechanical properties of the electrons. The theoretical idea is to make two one-dimensional layers (for example, two nanowires) placed close enough to each other that the electrons in the material can interact with each other through Coulomb repulsion. If one of these nanowires has just a few electrons in it, while the other is almost full of electrons, then the electrons in the nearly empty wire can be attracted to the ‘holes’ in the nearly full wire, and they can pair up into new bound particles called excitons. What is special about these excitons is that they can form a superfluid which can be controlled electronically.


This can be made into a transistor in the following way. When the superfluid is absent, the two layers are quite well (although not perfectly) insulated from each other, so it is difficult for a current to flow between them. However, when the superfluid forms, one of the quantum mechanical implications is that it becomes possible to drive a substantial inter-layer current. This difference defines the on and off states of the transistor.

There are some mathematical reasons why one might expect that this cannot work for one-dimensional layers, but I have already demonstrated that there is a way around this. If the electrons can hop from one layer to the other, then the theorem which says that the superfluid cannot form in one dimension is not valid. What I will do next is a systematic investigation of lots of different types of one-dimensional materials to determine the most promising places for experimentalists to look for this superfluid. I will use approximate theories for the behaviour of electrons in nanowires or nanoribbons, carbon nanotubes, and core-shell nanowires to determine the temperature at which the superfluid can form in these different materials. When the superfluid is established, it can be described by a hydrodynamic theory, which treats the superfluid as a large-scale object governed by the simple equations that describe the flow of liquids. Analysing this theory will reveal information about the properties of the superfluid and allow the operation of the switch to be optimised. Finally, since in reality no material can be fabricated with perfect precision, I will examine how imperfections are detrimental to the formation of the superfluid, to establish how accurate the production techniques need to be.

Another benefit of this superfluid is that it can conduct heat very efficiently. This means that it may have applications in cooling and refrigeration. I will also investigate the quantitative advantages that this may have over traditional thermoelectric materials. In both of these applications, the fact that the superfluid can exist in a one-dimensional material is a very advantageous factor for designing devices. In particular, because they are so small in two directions, it gives a huge amount of freedom for placing transistors or heat-conducting channels in optimal arrangements that would be impossible with two- or three-dimensional materials.

One final thing for some context: The picture at the top of the page shows a core-shell nanowire that was grown by some physicists in Lund, Sweden. It’s made out of two different types of semiconductor: gallium antimonide (GaSb) in the core, and indium arsenide antimonide (InAsSb) in the shell. The core region is the nearly full layer that contains the ‘holes’, while the shell is the nearly empty layer with the electrons. The vertical white line on the left of the image is a scale bar that is 100nm long (that’s one ten-thousandth of a millimeter!) which shows that these wires are pretty small! (Picture credit: Ganjipour et al, Applied Physics Letters 101, 103501 (2012)).

How a hard disk works

This post is going to explain the fundamental part of how the hard drive in your old computer works. Modern solid state disks work completely differently, so this applies only to the older type of drive that has been common for several decades. Specifically, when your computer writes something to the drive, it has to turn the sequence of zeroes and ones which make up the binary data into something physical on the disk. Then, when it needs to read this information later, it can go back and look at that part of the disk and recover the zeroes and ones from whatever material they were written to. But how do you tell the difference between a one and a zero? That’s the question I’ll try to answer.


But before we can get to that point, I have to explain a really important concept in quantum mechanics called “spin”. This is a quantity which is carried by all quantum mechanical particles, and is linked in a loose way to the rotational symmetry of the particle. Look at the right-pointing arrow in the picture. Hopefully it’s easy to see that the only way you can rotate the arrow so that it looks exactly the same as it did when you started (this is called a symmetry operation) is to rotate it through 360°. A particle that has this rotational symmetry is said to have a spin of 1. Now look at the double-headed arrow. If you rotate it around the axis indicated by the red dot, you only have to turn it by 180° to get back to where you started. This has a spin of 2 because you only have to rotate half a turn to get the first symmetry operation. The other pictures show a few different spins.

Shapes with different ‘spin’. Where it matters, the axis of rotation is shown by the red dots.

But what about electrons? Well, they have a spin of ½. Just to be clear about what that means: using the same analogy, it implies that you have to rotate by 720° before the electron “looks” like it did when you started. There isn’t a good way to draw that, so I can’t give you a picture of a spin-½ particle; this is one of those places where quantum mechanics is weird and counter-intuitive and we just have to get on with it. The other building blocks of atoms (protons and neutrons) also have spin-½, so in this post I’ll focus on that strange case. The crucial thing about spin-½ particles is that their spin can exist in one of two states, usually called ‘up’ and ‘down’, and these are typically represented by arrows pointing in those two directions.
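The rule behind all the arrow pictures can be summed up in one line: the angle you have to rotate through before the object looks the same again is 360° divided by the spin. (This is the loose analogy from the text, not a real quantum calculation.)

```python
from fractions import Fraction

def symmetry_angle(spin):
    # Angle in degrees you must rotate before the object looks the same again.
    return 360 / Fraction(spin)

print(symmetry_angle(1))               # single arrow: 360 degrees
print(symmetry_angle(2))               # double-headed arrow: 180 degrees
print(symmetry_angle(Fraction(1, 2)))  # electron, spin-1/2: 720 degrees
```

Plugging in ½ gives the strange 720° answer for the electron, which is exactly why no ordinary drawing can show it.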

But why does this matter? Well, individual spins generate a magnetic field. The reason that iron is a magnetic material is that the interaction between the spins in the iron atoms makes their spins all line up in the same direction. Therefore, the tiny magnetic fields associated with each of the spins all add up to make a large field. Non-magnetic materials don’t have this alignment (in fact, their spins are all randomly aligned) and so the tiny magnetic fields all cancel each other out because they are pointing in opposite directions. Materials like iron which have this alignment are called ‘ferromagnetic’.

Reading and writing in a hard disk

But, what does this have to do with your laptop? Well, in a hard disk, the part where the zeroes and ones are stored is made from two small pieces of ferromagnetic material. Then, the difference between a one and a zero is made by manipulating the spins of the atoms in one of the ferromagnetic layers. When an electric current is passed through this region, the electrons behave differently depending on the spins. Specifically, if the electrons have the same spin as the atoms, then they don’t interact very strongly and the electrical resistance is quite low. But if they have opposite spins, the electrons interact strongly with the atoms so they bounce off the atoms (or “scatter” in the technical language), their progress is impeded, and the electrical resistance is high.

The way to encode a one or a zero is shown in the picture below. A one is encoded by aligning the ferromagnets (the pink layers) so that their spins point in the same direction. In the left-hand picture, I show this with both layers having up-spins. A current of electrons (shown by the red arrows) has a half-and-half mix of electrons with up-spin and down-spin. When it is passed through the stack, the up-spin electrons interact weakly with the ferromagnet up-spins in both layers (black arrows) and encounter low resistance. This means that some of the current put in at the top of the stack emerges from the bottom and this characterises the one state. Note that the down-spin electrons are blocked from getting to the bottom of the stack because they scatter strongly off the up-spin atoms in the first ferromagnet layer and so the resistance for them is high.

Hard disk segment in the one and zero states. Red arrows are the electrons forming the current passed by the read head.

For the zero state, one of the ferromagnetic layers has its spins reversed. In the right-hand picture, this is shown by the lower layer now having a down-spin black arrow. For electric current, the down-spin electrons still scatter strongly from the up-spin atoms in the top layer. The up-spin electrons still pass through this layer, but then they encounter the down-spin atoms in the lower layer where the electrons and the atoms have opposite spin, so they scatter strongly. This means that no current emerges at the bottom of the device, and so this defines the zero state.

This means that, for the hard disk to work, it needs to be able to do two things. Firstly, the “write head”, which is the part that encodes the zeroes and ones when data is written to the disk, needs to be able to flip the spins of one of the ferromagnetic layers. Then, to recover the information at a later time, the “read head” tries to pass current through a specific piece of the disk material. If current flows (because the ferromagnet spins are the same) then this is a one. If current does not flow (because the ferromagnet spins are opposite) then it is a zero.
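The read-head logic described above boils down to a tiny truth table, sketched here in Python. This is a toy model of the logic only, not the physics, and the argument names are my own labels for the two ferromagnetic layers.

```python
def read_bit(layer_one_spin, layer_two_spin):
    """One if the two ferromagnetic layers are aligned (current flows through
    the stack), zero if they are opposite (the current is blocked)."""
    return 1 if layer_one_spin == layer_two_spin else 0

print(read_bit('up', 'up'))    # aligned layers: current flows, reads as 1
print(read_bit('up', 'down'))  # opposite layers: current blocked, reads as 0
```

The write head’s job is then simply to flip one layer’s spins so that a later call to this “function” gives the answer it wants.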

And this works entirely because of the quantum-mechanical property of particles called spin: aligned spins is a one, opposite spins is a zero. And as a bonus, it also explains why you have to be careful with hard drives and strong magnetic fields, because a magnet can change the alignment of all the ferromagnetic areas in the hard disk and destroy the encoded ones and zeros. Don’t say you weren’t warned!

What is condensed matter theory, and why is it so hard?

This post is about the general approach to physics that people who work on the theory of condensed matter take. As I’ll explain, it is basically impossible to calculate anything exactly, and so the whole field relies on choosing smart approximations that allow you to make some progress. Exactly what kind of approximations you make depends on what you want to achieve, and I’ll describe some of the common ones below.

But before that, what is ‘condensed matter physics’? Roughly speaking, it refers to anything that is a solid or a liquid (and also some gases) that you can see in the Real World around you. So it’s not stars and galaxies and space exploration, it’s not tiny sub-atomic particles like quarks and Higgs bosons like they talk about at CERN, and it’s not what happened in the first fractions of a second after the Big Bang, or what it’s like inside a black hole. But it is about the materials that make the chip inside your phone or power a laser, or about making batteries store energy more efficiently, or finding new catalysts that make industrial chemical production cheaper (okay, so that one crosses into chemistry as well, but the lines are fuzzy!), or it’s about making superconductivity work at higher temperatures.

Why is it so hard?

Really, what makes condensed matter physics different from many other types of physics is that in many situations, the behaviour of the materials is governed by how very many particles interact with each other. Think about a small piece of metal, for instance: you have millions and millions of atoms that form bonds which give it a solid shape. Then some of the electrons in those atoms disassociate themselves and become a bit like a liquid that can move around inside the metal and conduct electricity or heat, or make the metal magnetic. In a small piece of metal there will be around 10^22 atoms. (That notation means that the number is a one with twenty-two zeroes after it. So it’s a lot.) And all of these atoms have an electric field which is felt by all the other atoms, so that they all interact with each other. It is, in principle, possible to write down some equations which describe this, but there is no way that anyone can solve these equations and work out exactly how all the atoms and electrons behave. I don’t just mean that it’s very difficult, I mean that it is mathematically proven to be impossible!

Watcha gunna do?

So, that raises the question: what can we do? It is easy to connect a bit of metal to a battery and a current meter and see that it can conduct electricity, but how do we describe that theoretically? There are several different approaches to making the necessary approximations, so I’ll try to explain them now.

  1. Use symmetry. By the magic of mathematics, the equations can often be simplified if you know something about the symmetry of the material you want to investigate. For example, the atoms in many metals sort themselves into a crystal lattice of repeated cubes. Group theory can then be used to reduce the complexity of the equations in a very helpful way. For instance, it might be possible to tell whether a material will conduct electricity or not even at this level of approximation. But this symmetry analysis contains an assumption because in reality materials won’t completely conform to the symmetry. They may have impurities in them, or the crystal structure might have irregularities, for example. So this isn’t a magic bullet. And also this might well not reduce the equations enough that they can be solved, so it is usually just a first step.

    Symmetries of a cube. Image stolen from this page, which looks pretty cool!
  2. From this point, it is often possible to make simplifying assumptions so that the mathematically impossible theory becomes something that can be solved. Of course, by doing this you lose quite a lot of detail. It’s like the “spherical cows” analogy. In reality, cows have four legs, a tail, a head, and maybe some udders. But say you wanted to work out how many cows you could safely fit into a field. You don’t need to know any of that detail, so you can think of the cows as being a sphere which consumes a certain amount of hay each day. You can do something similar with the metal: Instead of keeping track of every detail, you can forget that the atoms have an internal structure (spherical atoms!). Or you could assume that the atoms interact with the electrons in a particularly simple way so that you can focus just on the disassociated electrons. Or you could assume that the electrons don’t interact with each other, but only with the atoms. In the jargon of the field, this general approach is called finding an “effective theory”. These theories can often give quite good estimates of not only whether a material will conduct, but how well it will do it.

    Some spherical cows in a field.
  3. These days, computers are really fast, and they can be used to numerically solve equations that are almost exact. However, computers are not good enough that they can do this for 10^22 atoms, so if you want to keep quite close to the original equations, they might be able to do fifty or so. Maybe a hundred. In the jargon, these methods are called “ab-initio” (from the beginning) because they do not make any approximations unless they absolutely have to. The fact that you can’t treat too many atoms limits what these methods can be applied to. For instance, they can be quite good for molecules, and crystals where the periodic repetition is not too complicated. But for these situations, you can get a level of detail which is simply impossible in the effective theories. So there’s a trade-off. And computers are getting better all the time so this is one area that will see a lot of progress.
  4. The final way that I’ll describe is sort-of the inverse process. Instead of starting from the mathematics which are impossible, you can start from experimental data and try to work backwards towards the theoretical description that gives you the right answer. Sometimes this is used in conjunction with one of the other methods as a way to give you some clues about what assumptions to make.

So, that’s how you do theory in condensed matter. Numbers 2 and 4 are basically my day job, on a good day at least!

Particle-wave duality and the two slit experiment

Particle-wave duality is the concept in quantum mechanics that small objects simultaneously behave a bit like particles and a bit like waves. This comes very naturally from the mathematics, but instead of talking about those boring details, I’m going to describe a famous experiment that proves it.


It’s called the two slit experiment, and I’ve sketched how it works in the picture on the right. Before going into the full details, let’s look at the upper part of the picture. This shows a light wave shining on a barrier with a small slit in it. The thin black lines show the positions of the peaks of the wave that describes the traveling light. Some of the light can get through that slit, but in doing so, it changes its form to become a circular wave with the slit as its source. This is called diffraction, and it leads to a distinctive pattern when the light hits a screen placed some way behind the barrier. The red line behind the barrier shows the intensity of the light hitting the screen. This demonstrates that light can behave in a wave-like way, because if the light were just particles you would not see the diffraction pattern; there would simply be a small spot of light on the screen in line with the slit.

Now look at the lower part of the picture. The screen has been replaced with a second barrier that has two slits in it. Both of these slits act like the first one: they diffract the light that is coming through. So behind the second barrier there are now two waves of light, one coming from each slit. These two waves interfere with each other, so that the pattern of light seen on the screen (the red line) looks very different from that made by just one slit. (I did actually calculate what the light should look like before I drew these pictures, so I hope both of the red lines are actually correct!) Interference is the process of these waves adding together to form one single pattern. The value of a light wave at a particular position can be either positive or negative. In the picture, the thin black lines show where the waves are at their maximum, so where they are at their most positive. Exactly half-way between a pair of lines they are at their most negative. If the two waves are both positive at a particular position (like exactly at the center of the screen) then they add together to give intense light. But if one is positive and one is negative then they will cancel each other out and leave almost no light.
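The adding-up of the two waves can be captured in one standard formula: two equal waves arriving with a path difference Δ give an intensity proportional to 4\cos^2(\pi\Delta/\lambda), so full brightness when the waves are in step and zero when they are half a wavelength out. A minimal sketch of that formula:

```python
import math

def intensity(path_difference, wavelength):
    # Two equal waves with path difference delta add to give an intensity
    # proportional to 4 * cos^2(pi * delta / wavelength).
    return 4 * math.cos(math.pi * path_difference / wavelength) ** 2

lam = 500e-9  # green light, about half a micron
print(intensity(0, lam))        # waves in step: maximum brightness (4)
print(intensity(lam / 2, lam))  # half a wavelength out of step: dark (0)
```

Sweeping the path difference from slit to screen across the screen position is exactly what produces the stripes shown by the red line in the picture.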


That’s not very controversial. But it starts to get a bit more weird when you repeat the same experiment but using a beam of electrons instead of a beam of light. Electrons are one of the three types of “particle” which make up an atom: The protons and neutrons bind together to form the nucleus, and then electrons “orbit” around it. Until this experiment was done for the first time, most physicists thought that electrons were particles. But the result of the experiment was the same kind of two-slit diffraction pattern that they got when they used light. The electrons that went through each of the slits were interfering with each other just like the light waves did. The only possible conclusion: these electrons were also wave-like.

Then, they pushed the experiment a bit further. They had the same barriers, but instead of using a beam of electrons, they fired them through one at a time. Astonishingly, even though there was only one electron, the result was still a two-slit diffraction pattern. Somehow, the electron was going through both slits and interfering with itself. Conclusion: Electrons are not just wave-like when there are lots of them, they are wave-like on their own!

Now it gets weird

To try and verify this, they modified their apparatus to include detectors at both of the slits so they could tell which slit the electron was going through. Expecting to find a signal from both detectors, they were surprised to find that only one of the detectors sensed an electron going through, and instead of the two-slit diffraction pattern, they now saw a one-slit pattern on the screen. If they did the experiment with the detectors turned off, the two-slit diffraction pattern reappeared. It seemed like asking the electron which slit it had gone through forced it to choose one or the other. But get this: the experimentalists got sneaky. They took the electron detectors away and instead made slits that could be opened and closed very quickly. Starting with both slits open, they fired one electron from the gun. After it had passed the barrier with the two slits, but before it reached the screen, they closed one of the slits. Any guesses as to what pattern was measured on the screen?

They saw a single-slit diffraction pattern! Somehow, the electron knew that one of the slits had been closed after it went through, and behaved like only the other one had been open the whole time. This hints at many deep issues about quantum measurement and (gulp!) the nature of reality itself. But I’ll save that discussion for another time.

This experiment has been repeated with many different objects used instead of the light or electrons. Protons, whole atoms, and buckyballs all show the same behavior, so this is without doubt a general feature in quantum mechanics and not something oddly specific to light and electrons. In fact, once you allow for the possibility of wave-like particles, you start to see the effects of them in many places, including in the behavior of electrons in the materials which make computer chips and all the rest of information technology. So it’s a pretty big deal.

And finally…

One final point of detail which I think is worth pointing out. In the first paragraph, I mentioned that “small objects” are needed to do this experiment. But what does “small” mean in this context? It turns out that this can be written down in a really simple equation. The de Broglie wavelength, referred to by the symbol \lambda, is the wavelength associated with the quantum object, and to see the wave-like properties, the size of the slits has to be similar to \lambda.

The formula is \lambda = h / mv. Here, h is just a number that comes from quantum mechanics (Planck’s constant) and can otherwise be forgotten about. The m and v are the mass and speed associated with the particle-like properties of the object. So, the heavier the “particle”, the smaller the associated wavelength is. This explains why you don’t see any wave-like effects for people or cars or golf balls. Just to illustrate the kind of size that we’re talking about: light has a \lambda of half a micron or so. For electrons, it’s a few nanometers, and for buckyballs, it’s a few thousandths of a nanometer.
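Plugging numbers into \lambda = h / mv makes those scales concrete. The electron speed below is just an illustrative choice of mine (slower electrons have longer wavelengths):

```python
h = 6.626e-34           # Planck's constant, in joule-seconds
m_electron = 9.109e-31  # electron mass, in kilograms

def de_broglie_wavelength(mass, speed):
    # lambda = h / (m * v)
    return h / (mass * speed)

# An electron moving at 2e5 m/s (an illustrative speed):
lam = de_broglie_wavelength(m_electron, 2.0e5)
print(lam)  # a few nanometers, as mentioned in the text
```

Trying the same formula with the mass of a golf ball gives a wavelength absurdly smaller than any slit you could ever make, which is the quantitative reason everyday objects never show these wave-like effects.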