How do you measure the quantum states of a material?

I’ve talked a lot on this blog about how understanding the quantum states of a material can be helpful for working out its properties. But is it possible to directly measure these states in an experiment? And what sort of equipment is needed to do so? I’ll try to explain here.

First, a quick recap. The band structure is like a map of the allowed quantum states for the electrons in a material. The coordinates of the map are the momentum of the electron, and at each point there are a series of energy levels which the electron can be in. The energy states close to the “Fermi energy” largely determine things like whether the material can conduct electricity and heat, absorb light, or do interesting magnetic things.

There are various ways that the band structure can be investigated. Some of them are quite indirect, but last week, I visited an experimental facility in the UK where they can do (almost) direct measurements of the band structure using X-rays.

The technical name for this technique is “angle-resolved photoemission spectroscopy”, or ARPES for short. Let’s break that down a bit. Spectroscopy just means that it’s a way of measuring the spectrum of something. In this case, it’s the electrons in the material. I’ll come back to the “angle-resolved” part in a minute, but the crucial thing to explain here is what photoemission is.

[Figure: Excitation and emission of an electron by absorption of a photon.]

The sketch above shows a hypothetical band structure. When light is shone on a material, the photons (green wavy arrows) that make up the beam can be absorbed by one of the electrons in the filled bands below the Fermi energy. When this happens, the energy and momentum of the photon are transferred to the electron.

This means that the electron must change its quantum state. But the band structure gives the map of the only allowed states in the material, so the electron must end up in one of the other bands. In the left-hand picture, the energy of the photon is just right for the electron at the bottom of the red arrow to jump to an unfilled state above the Fermi energy. This is called “excitation”.

But in the right-hand picture, the energy of the photon is larger (see the thicker line and bigger wiggles on the green arrow) so there is no allowed energy level for the excited electron to move to. Instead, the electron is kicked completely out of the material. To put that another way, the high-energy photons cause the material to emit electrons. This is photoemission!

The crucial part about ARPES is that the emitted electrons retain information about the quantum state that they were in before they absorbed the photons. In particular, the photons carry almost no momentum, so the momentum of the electron can’t really change during the emission process. And energy must be conserved, so the kinetic energy of the emitted electron is the energy of the photon minus the energy it took to free the electron from the quantum state it occupied before emission.

So, if you can catch the emitted electrons, and measure their energy and momentum, then you can recover the band structure! The “angle-resolved” part of the ARPES acronym means that the momentum of the electrons is deduced from the angle at which they are emitted.
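
To make that bookkeeping concrete, here is a minimal sketch of the analysis step, assuming a free-electron final state. The function name, the 4.5 eV work function, and all the numbers are my own illustrative assumptions, not values from the beamline.

```python
import numpy as np

HBAR = 1.0545718e-34   # J s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.602176634e-19   # J per eV

def arpes_state(photon_energy_eV, kinetic_energy_eV, angle_deg, work_function_eV=4.5):
    """Recover the binding energy and in-plane momentum of the initial state.

    Energy conservation: E_kin = h*nu - work_function - E_binding.
    The in-plane momentum is conserved during emission:
    k_parallel = sqrt(2 m E_kin) / hbar * sin(angle).
    """
    e_binding = photon_energy_eV - work_function_eV - kinetic_energy_eV
    k_par = np.sqrt(2 * M_E * kinetic_energy_eV * EV) / HBAR * np.sin(np.radians(angle_deg))
    return e_binding, k_par * 1e-10  # momentum in inverse angstroms

# Illustrative numbers: a 100 eV photon, an electron caught at 95 eV and 10 degrees.
print(arpes_state(100.0, 95.0, 10.0))  # -> binding energy 0.5 eV, k ~ 0.87 / angstrom
```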

But what does this look like in practice? Fortunately, a friendly guide from Diamond showed me around and let me take pictures.

The upper-left picture is an outside view of the Diamond facility. (The cover picture for this blog entry is an aerial view.) It’s a circular building, although this picture is taken from so close that the curve might be hard to see. This gives a sense of scale for the place!

[Figure: Pictures taken at the Diamond Light Source, and inside the hutch of beamline I05.]

Inside is a machine called a synchrotron. They didn’t let us go near this, so I don’t have any pictures, but it is a circular particle accelerator which keeps bunches of electrons flowing around it very, very fast. As they go around, they release a lot of X-ray photons which can be captured and focused. (There is a really cool animation of this on their web site.) The X-rays come down a “beam line” and into one of many experimental “hutches” which stand around the outside of the accelerator.

The upper-right picture shows the ARPES machine inside the main hutch of beamline I05. Most of the stuff you can see at the front is designed for making samples under high vacuum, which can then be transferred straight into the sample chamber without exposure to air.

The lower-left picture is behind the machine, where the beam line comes in. It’s kinda hard to see the metal-coloured pipe, so I’ve drawn arrows. The lower-right picture shows where the real action happens. The sample chamber is near the bottom (there is a window above it which allows the experimentalists to visually check that the sample is okay), and you can just about see the beam line coming in from behind the rack in the foreground.

The X-rays come into the sample chamber from the beam line, strike the sample, and the emitted electrons are funnelled into the analyser, which is the big metallic hemisphere towards the right of the picture. The spherical shape is important, because the momentum of the electrons is detected by how much they are deflected by a strong electric field inside the analyser. This separates the high-momentum electrons from the low-momentum ones, in a similar way to how a centrifuge separates heavy items from light ones.

And what can you get after all of this? The energy and momentum of all the electrons is recorded, and pretty graphs can be made!

[Figure: ARPES data for the band structure of WSe2. Theoretical calculation on the left, real data on the right. Picture credit: Diamond web site.]

Above is a picture that I stole from the Diamond web site. On the left is a theoretical calculation for the band structure of a material called tungsten diselenide (WSe2). On the right is the ARPES data. The colour scheme shows the intensity of the photoemitted electrons. As you can see, the prediction and data match very well. After all the effort of building a massive machine, it works! Hooray science!

What next for integrated circuits?

There is currently a big problem in the semiconductor industry. While technological progress and commercial pressure demand that electronics must be made smaller and faster, we are getting increasingly close to the fundamental limits of what can be achieved with current materials.

In the last couple of weeks, two academic papers have come out which describe ways in which we might be able to get around these limitations.

Size matters

A quick reminder about how transistors work. (You can read more detail here.) Transistors are switches which can be either on or off. They have a short conducting channel through which electricity can flow. When they are on, electrical current is flowing through them, and when they are off it is not. They have three connections: one which supplies current (called the source), one which collects it (the drain), and one which controls whether the channel is open or closed (the gate).

[Figure: A rough sketch of a transistor, showing the contact length LC and the gate length LG.]

There is something called the International Technology Roadmap for Semiconductors which lays out targets for improvements in transistor technology which companies such as Intel are supposed to aim for. The stages in this plan are called “nodes”, which are described by the size of the transistor. Having smaller transistors is better because you can fit more into a chip and do more computations in a given space.

At the moment, transistors at the 14 nanometre node are being produced. This means that the length of the gate/channel is 14nm (a nanometre is one millionth of a millimetre). According to the roadmap, within a decade or so, the channel length is supposed to be as short as 3nm. But, overall, transistors are rather bigger than this length, in part because of the size of the source and drain contacts. Transistors at the 3nm node will have an overall size of about 40nm.

Carbon nanotube transistors

The first paper I want to mention, which came out in the journal Science, reports the fabrication of a transistor made out of different materials, which allows the overall size to be reduced. Instead of using doped silicon for the contacts and channel, these researchers made the channel out of a carbon nanotube, and the contacts from a cobalt-molybdenum alloy.

Carbon nanotubes are pretty much graphene which has been rolled up into a cylinder a few nanometres wide. Depending on the details, they can have semiconducting electronic properties which are excellent for making transistors, but they are also interesting for a whole range of other reasons.

By doing this, they could make a channel/gate region about 11nm long, with two contacts of about 10nm each. Even with some small spacers, the total width of the transistor was only 40nm. This should satisfy the demands of the 3nm node of the roadmap, even though the channel is nearly four times as long as that.
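
As a sanity check on those numbers, here is the rough bookkeeping. The spacer width is my own guess chosen to make the total come out at the quoted figure; the paper’s exact layout may differ.

```python
# Rough footprint of the nanotube transistor described above.
gate_length = 11        # nm, the channel/gate region
contact_length = 10     # nm, each of the two contacts
spacer = 4.5            # nm, assumed width of each of the two small spacers

footprint = gate_length + 2 * contact_length + 2 * spacer
print(footprint, "nm")  # -> 40.0 nm, matching the 3nm-node target
```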

3D chips

The second approach is completely different. At the moment, integrated circuits are mostly made in a single layer, although there are some exceptions to this in the most modern chips. This means that the various parts of the chip that do calculations and store memory can be located quite a long way away from each other. This can lead to a bottleneck as data is moved around to where it is needed.

A group of researchers, publishing in the journal Nature, designed an entirely new architecture for a chip in which the memory, computation, input, and output were all stacked on top of each other. This means that even though the transistors in their device are not particularly small, the data transfer between memory and computation can all happen at the same time. This leads to a huge increase in speed because the bottleneck is now much wider.

[Figure: Sketch of the vertical gas-sensor chip.]

The prototype they designed was actually a gas sensor, and a rough idea of its construction is shown in the sketch above. Gas molecules fall on the top layer, which is made up of a large number of individual detectors that can react to single molecules. These sensors can then write the information about their state into the memory directly below them, via vertical connections that are built into the chip itself.

The point of the sensor is to work out what type of gas has fallen on it. To do this, the information stored in the memory from the sensors must be processed by a pattern recognition algorithm which involves a lot of calculations. This is done by a layer of transistors which are placed below the memory, and are directly connected to it. In the new architecture, the transistors doing the computation have much quicker access to the data they are processing than if it were stored in another location on the chip. Finally, below the transistors is an interface layer, again connected vertically, which allows the chip to be controlled and through which it outputs the results of the calculation.

The paper shows results for accurate sensing of gaseous nitrogen, lemon juice, rubbing alcohol, and even beer! But that’s not really the crucial point. The big new step is the vertical integration of several components which would otherwise be spaced out on a chip. This allows for much quicker data processing, because the bottleneck of transferring data in and out of memory is drastically reduced.

So, the bottom line here is that simply finding ways to make traditional silicon transistors smaller and smaller is only one way to approach the impending problems facing the electronics industry. It will be a while before innovations like this become the norm for consumer electronics, and perhaps these specific breakthroughs will not be the eventual solution. But, in general, finding new materials to make transistors from and designing clever new architectures are very promising routes forward.

What is high temperature superconductivity?

It was March, 1987. The meeting of the Condensed Matter section of the American Physical Society. It doesn’t sound like much, but this meeting has gone down in history as the “Woodstock of Physics”. Experimental results were shown which proved that superconductivity can occur at much higher temperatures than had ever been thought possible. This result came completely out of left field and captured the imagination of physicists all over the world. It has been a huge area of research ever since.

But why is this a big deal? Superconductors can conduct electricity without any resistance, so it costs no energy and generates no heat. This is different from regular metal wires which get hot and lose energy when electricity passes through them. Imagine having superconducting power lines, or very strong magnets that don’t need to be super-cooled. This would lead to huge energy savings which would be great for the environment and make a lot of technology cost less too.

I guess it makes sense to clarify what “high temperature” means in this context. Most superconductors behave like normal metals at regular temperatures, but if they are cooled far enough (below the “critical temperature”, which is usually called Tc) then their properties change and they start to superconduct. Traditional superconducting materials have a Tc in the range of a few Kelvin, so only a few degrees above absolute zero. These new “high temperature” materials have their Tc at up to 120 Kelvin, so substantially warmer, but still pretty cold by everyday standards. (For what it’s worth, 120K is -153°C.)

But, if we could understand how this ‘new’ type of superconductivity works, then maybe we could design materials that superconduct at everyday temperatures and make use of the technological revolution that this would enable.

Unfortunately, the elephant in the room is that, even after thirty years of vigorous research, physicists still don’t really understand why and how this high Tc superconductivity happens.

[Figure: A piece of superconducting BSCCO levitating due to the Meissner effect. (Image stolen from Wikimedia commons.)]

I have written about superconductivity before, but that was the old “low temperature” version. What happens in a superconductor is that electrons pair up into new particles called “Cooper pairs”, and these particles can move through the material without experiencing collisions which slow them down. In the low temperature superconductors, the glue that holds the pairs together is made from vibrations of the crystal structure of the material itself.

But this mechanism of lattice vibrations (phonons) is not what happens in the high temperature version.

[Figure: Atomic structure of BSCCO. (Image stolen from chemtube3d.com.)]

To explain the possible mechanisms, it’s important to see the atomic structure of these materials. Above is a sketch of one high Tc superconductor, called bismuth strontium calcium copper oxide, or BSCCO (pronounced “bisco”) for short. The superconducting electrons are thought to live in the copper oxide (CuO2) layers.

One likely scenario is that instead of the lattice vibrations gluing the Cooper pairs together, it is fluctuations of the spins of the electrons that does it. Of course, electrons can interact with each other because they are electrically charged (and like charges repel each other), but spins can interact too. This interaction can either be attractive or repulsive, strong or weak, depending on the details.

In this case, it is thought that the spins of the electrons on neighbouring copper atoms point in nearly opposite directions, in an antiferromagnetic arrangement. But these spins can rotate a bit due to temperature or random motion. When they do this, it changes the interactions with other nearby spins and can create ripples in the spins on the lattice. In analogy with the phonons that describe ripples in the positions of the atoms, these spin ripples can be described as particles called magnons. It is these that provide the glue: under the right conditions, they can cause the electrons to be attracted to each other and form the Cooper pairs.

Another possibility comes from the layered structure. If electrons in the CuO2 layers can hop to the strontium or calcium layers, and then hop back again at a different point in space, this could induce correlations between the electrons that would result in superconductivity. (I appreciate that it’s probably far from obvious why this would work, but unfortunately, the explanation is too long and technical for this post.)

In principle, these two different mechanisms should give measurable effects that are slightly different from each other, because the symmetry associated with the effective interaction is different in each case. This would allow experimentalists to tell them apart and make a conclusive statement about what is going on. Naturally, these experiments have been done, but so far there is no consensus within the results. Some experiments show symmetry properties that suggest the magnons are important, others suggest the interlayer hopping is important. Personally, I tend to think that the magnons are more likely to be the reason, but it’s really difficult to know for sure and I could well be wrong.

So, we’re kinda stuck and the puzzle of high Tc superconductivity remains one of condensed matter’s most tantalising and most embarrassing enigmas. We know a lot more than we did thirty years ago, but we are still a very long way from having superconductors that work at everyday temperatures.

How does a transistor work?

The world would be a very different place if the transistor had never been invented. They are everywhere. They underpin all digital technology, they are the workhorses of consumer electronics, and they can be bewilderingly small. For example, the latest Core i7 microchips from Intel have over two billion transistors packed onto them.

But what are they, and how do they work?

[Figure: Left: An old-school transistor with three ‘legs’. Right: An electron microscope image of one transistor on a microchip.]

In some ways, they are beguilingly simple. Transistors are tiny switches: they can be on or off. When they are on, electric current can flow through them, but when they are off it can’t.

The most common way this is achieved is in a device called a “field effect transistor”, or FET. It gets this name because a small electric field is used to change the device from its conducting ‘on’ state to its non-conducting ‘off’ state.

At the bottom of the transistor is the semiconductor substrate, which is usually made out of silicon. (This is why silicon is such a big deal in computing.) Silicon is a fantastic material, because swapping a few of its atoms for atoms of another element gives it extra mobile charge carriers, either negative or positive. To explain why, we need to turn to chemistry! A silicon atom has 14 electrons in it, but ten of these are bound tightly to the atomic nucleus and are very difficult to move. The other four are much more loosely bound and are what determines how it bonds to other atoms.

When a silicon crystal forms, the four loose electrons from each atom form bonds with the electrons from nearby atoms, and the geometry of these bonds is what makes the regular crystal structure. However, it is possible to take out a small number of the silicon atoms and replace them with some other type of atom. If this is done with an atom like phosphorus or nitrogen which has five loose electrons, then four of them are used to make the chemical bonds and one is left over. This left-over electron is free to move around the crystal easily, giving the crystal mobile negative charge carriers. In the physics language, the silicon has become “n-doped”.

But, if some silicon atoms are replaced by something like boron or aluminium which has only three loose electrons, the atom has to ‘borrow’ an extra electron from the rest of the crystal, leaving behind a mobile positive ‘hole’. This is called “p-doped”.
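
The electron-counting rule from the last two paragraphs is simple enough to write down explicitly. This is just a toy snippet (the doping_type helper is hypothetical) comparing the number of loose electrons of the dopant with silicon’s four:

```python
# Valence (loose) electron counts for a few elements mentioned above.
VALENCE = {"B": 3, "Al": 3, "Si": 4, "P": 5, "As": 5}

def doping_type(dopant):
    """Classify a substituted atom by its spare or missing electrons."""
    extra = VALENCE[dopant] - VALENCE["Si"]
    if extra > 0:
        return "n-doped (spare mobile electron)"
    if extra < 0:
        return "p-doped (missing electron, i.e. a mobile hole)"
    return "no doping"

for atom in ("P", "B"):
    print(atom, "->", doping_type(atom))
```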

[Figure: Sketches of a FET in the ‘off’ state (left) and in the ‘on’ state (right) when the gate voltage is applied.]

Okay, so much for the chemistry, now back to the transistor itself. Transistors have three connections to the outside world, which are usually called the source, drain, and gate. The source is the input for electric current, the drain is the output, and the gate is the control which determines if current can flow or not.

The source and drain both connect to a small area of n-doped silicon (i.e. they have extra electrons) which can provide or collect the electric current which will flow through the switch. The central part of the device, called the “channel”, is p-doped, which means that there are not enough electrons in it.

Now, here’s where the quantum mechanics comes in!

A while back, I described the band structure of a material. Essentially, it is a map of the quantum mechanical states of the material. If there are no states in a particular region, then electrons cannot go there. The “Fermi energy” is the energy at which states stop being filled. I’ve drawn a rough version of the band structure of the three regions in the diagram below. In the n-doped regions, the states made by the extra electrons are below the Fermi energy and so they are filled. But in the p-doped channel, the unfilled extra states are above the Fermi energy. This makes a barrier between the source and drain and stops electrons from moving between the two.

[Figure: Band diagrams for a FET. In the ‘on’ state, the missing electron levels are pushed below the Fermi surface and form a conducting channel.]

Now for the big trick. When a voltage is applied to the gate, it makes an electric field in the channel region. The extra energy that the electrons gain from being in this field shifts the quantum states in the channel region to different energies. This is shown on the right-hand side of the band diagrams. Now the extra states have moved below the Fermi energy, but the silicon can’t create more electrons, so these unfilled states make a path through which the extra electrons in the source can move to the drain. The barrier is removed: applying the electric field to the channel opens up the device to carry current.
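
Here is a deliberately cartoonish way to put the gate action into code. Every number and name in it (the 0.5 eV barrier, the shift per volt) is made up purely for illustration; a real device is far more complicated.

```python
# Cartoon of the band picture above: the empty channel states sit a bit
# above the Fermi level, and the gate voltage pulls them down.
def channel_conducts(gate_voltage, barrier_eV=0.5, shift_per_volt=1.0, fermi_energy=0.0):
    """Return True if the gate has pulled the channel states below E_F."""
    channel_level = fermi_energy + barrier_eV - shift_per_volt * gate_voltage
    return channel_level < fermi_energy

print(channel_conducts(0.0))  # False: 'off', the barrier blocks the current
print(channel_conducts(1.0))  # True: 'on', a conducting channel has formed
```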

In the schematic of the device above, the left-hand sketch shows the transistor in the off state with no conducting channel in the p-doped region. The right-hand sketch shows the on-state, where the gate voltage has induced a conducting channel near the gate.

So, that’s how a transistor can turn on and off. But it’s a long leap from there to the integrated circuits that power your phone or laptop. Exactly how those microchips work is another subject, but briefly, the output from the drain of one transistor can be linked to the source or the gate of another one. This means that the state of a transistor can be used to control the state of another transistor. If they are put together in the right way, they can process information.

What is peer review?

Chances are you’ve heard of peer review. The media often use it as an adjective to indicate a respectable piece of research. If a study has not been peer reviewed then this is taken as a shorthand that it might be unreliable. But is that a fair way of framing things? How does the process of peer review work? And does it do the job?

So, first things first – what is peer review? Essentially, it’s a stamp of approval by other experts in the field. Knowledgeable people will read a paper before it’s published and critique it. Usually, a paper won’t be published unless these experts are fairly satisfied that the paper is correct and measures up to the standards of importance or “impact” that the particular journal requires.

The specifics of the peer review process vary between different fields and different journals, but here is how things typically go in physics. Usually, a journal editor will send the paper to two or more people. These could be high-profile professors or PhD students or anyone in between, but they are almost always people who work on similar topics.

The reviewers then read the paper carefully, and write a report for the editor, including a recommendation of whether the paper should be published or not. Often, they will suggest modifications to make it better, or ask questions about parts that they don’t understand. These reports are sent to the authors, who then have a chance to respond to the inevitable criticisms of the referees, and resubmit a modified manuscript.

After resubmission, many things can happen. If the referees’ recommendations are similar, then the editor will normally send the new version back to them so they can assess whether their comments and suggestions have been adequately addressed. They will then write another report for the editor.

But if the opinions of the referees are split, then the editor might well ask for a third opinion. This is the infamous Reviewer 3, and their recommendation is often crucial. In fact, it’s so crucial that the very existence of Reviewer 3 has led to internet memes, a life on Twitter (see #reviewer3), and mention in countless satires of academic life, including this particularly excellent one by Kelly Oakes of BuzzFeed (link).

[Figure: Reviewer 3 meme. Credit: Kelly Oakes / BuzzFeed.]

But, once the editor has gathered all the reports and recommendations, they will make a final decision about whether the paper will be published or not. For the authors, this is the moment of truth!

When it works, this can be a constructive process. I’ve certainly had papers that have been improved by the suggestions and feedback. But the process does not always work well. For example, not all journals always carry out the review process with complete rigour. The existence of for-profit, commercial journals that charge authors a publication fee is a subject for another day, but in those journals it is easy to believe that there is pressure on the editors to maximise the number of papers that they accept. Then it’s only natural that review standards may not be well enforced.

And the anonymity that reviewers enjoy can lead to bad outcomes. By definition, reviewers have to be working in a similar field to the authors of the paper otherwise they would not be sufficiently expert to judge the merits of the work. So sometimes a paper is judged by competitors. There are many stories of papers being deliberately slowed down by referees, perhaps while they complete their own competing project. Or of times when a referee might stubbornly refuse to recommend publication in spite of good arguments. And there are even stories of outright theft of ideas and results during review.

Finally, there is also the possibility of straightforward human error. Two or three reviewers is not a huge number and so it can be hard to catch the mistakes. And not all reviewers are completely suitable for the papers they read. Review work is almost always done on a voluntary basis and so it can be hard for editors to find a sufficient number of people who are willing to give up their time.

I can think of a few times when I have not really understood the technical aspects of a paper, or I have not been sufficiently close to the field to judge whether the work is important. Perhaps I should have declined to review those manuscripts. Or maybe it’s okay because the paper should not be published if it cannot convince someone in an adjacent field of the merits of the work. There are arguments both ways.

The fact is that sometimes things slip through the net. Papers can be published with errors, or even worse, with fabricated data or plagiarism. There is no foolproof system for avoiding this, so in my opinion, robust post-publication review is important too. Exactly how to implement that is a tricky business though.

But, to sum up, my opinion is that peer review is an important – but not infallible – part of the academic process. Just because a paper has passed through this test does not automatically mean that it is correct or the last word on a subject, but it is a mark in its favour.

Topology and the Nobel Prize

You may have seen that the Nobel Prize for Physics was awarded this week. The Prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter”, which is a bit of a mouthful. Since this is an area that I have done a small amount of work in, I thought I would try to explain what it means.

You might have seen a video where a slightly nutty Swede talks about muffins, donuts, and pretzels. (He’s my boss, by the way!) The number of holes in each type of pastry defined a different “topology” of the lunch item. But what does that have to do with electrons? This is the bit that I want to flesh out. Then I’ll give an example of how it might be a useful concept.

What is topology?

In a previous post, I talked about band structure of crystal materials. This is the starting point of explaining these topological phases, so I recommend you read that post before trying this one. There, I talked about the band structure being a kind of map of the allowed quantum states for electrons in a particular crystal. The coordinates of the map are the momentum of the electron.

Each of those quantum states has a wave function associated with it, which describes, among other things, the probability of the electron in that state being at a particular point in space. To make a link with topology, we have to look at how the wave function changes in different parts of the map. To use a real map of a landscape as the analogy, you can associate the height of the ground with each point on the map; then, by looking at how the height changes, you can redraw the map to show how steep the slope of the ground is at each point.

We can do something like that in the mathematics of the wave functions. For example, in the sketches below, the arrows represent how the slope of the wave function looks for different momenta. You can get vortices (left picture) where the arrows form whirlpools, or you can get a source (right picture) where the arrows form a hedgehog shape. A sink is similar except that the arrows are pointing inwards, not outwards.

[Figure: The slope of the wave function for different momenta: a vortex (left) and a source (right).]

Now for the crucial part. There is a theorem in mathematics that says that if you multiply the slope of the wave function with the wave function itself at the same point, and add up all of these for every arrow on the map, then the result has to be a whole number. This isn’t obvious just by looking at the pictures but that’s why mathematics is great!

That whole number (which I’m going to call n from now on) is like the number of holes in the cinnamon bun or pretzel: it defines the topology of the electron states in the material. If n is zero then we say that the material is “topologically trivial”. If n is not zero then the material is “topologically non-trivial”. In many cases, n counts the difference between the number of sources and the number of sinks of the arrows.
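
For readers who like to see such claims in code, here is a toy version of the counting. The real invariant involves the wave functions themselves, but the winding of the arrows around a point already shows the “whole number” behaviour; everything below is an illustrative sketch, not a band structure calculation.

```python
import numpy as np

def winding_number(vectors):
    """Count how many times a loop of arrows winds around the origin.

    'vectors' is the list of (x, y) arrows met while walking once around
    a closed loop on the map; the answer is forced to be a whole number.
    """
    angles = np.arctan2([v[1] for v in vectors], [v[0] for v in vectors])
    steps = np.diff(np.append(angles, angles[0]))
    steps = (steps + np.pi) % (2 * np.pi) - np.pi  # keep each step in (-pi, pi]
    return int(round(steps.sum() / (2 * np.pi)))

theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
source = [(np.cos(t), np.sin(t)) for t in theta]    # hedgehog pattern
vortex = [(-np.sin(t), np.cos(t)) for t in theta]   # whirlpool pattern
uniform = [(1.0, 0.0) for t in theta]               # trivial pattern
print(winding_number(source), winding_number(vortex), winding_number(uniform))
# -> 1 1 0: non-trivial patterns give whole numbers, the trivial one gives zero
```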

What topology does

Okay, so that explains how topology enters into the understanding of electron states. But what impact does it have on the properties of a material? There are a number of things, but one of the coolest is the quantum states that can appear on the surface of topologically non-trivial materials. This is because of another theorem from mathematics, called the “bulk-boundary correspondence”, which says that when a topologically non-trivial material meets a topologically trivial one, there must be quantum states localized at the interface.

Now, the air outside of a crystal is topologically trivial. (In fact, it has no arrows at all, so when you take the sum there is no option but to get zero for the result.) So there must be quantum states at the edges of any topologically non-trivial material. In some materials, like bismuth selenide for example, these quantum states have weird spin properties that might be used to encode information in the future.

And the best part is that because these quantum states at the edge are there because of the topology of the underlying material, they are really robust against things like impurities or roughness of the edge or other types of disorder which might destroy quantum states that don’t have this “topological protection”.

An application

Now, finally, I want to give one more example of this type of consideration because it’s something I’ve been working on this year. But let me start at the beginning and explain the practical problem that I’m trying to solve. Let’s say that graphene, the wonder material, is finally made into something useful that you can put on a computer chip. Then, you want to find a way to make these useful devices talk to each other by exchanging electric current. To do that, you need a conducting wire that is only a few nanometers thick which allows current to flow along it.

The obvious choice is to use a wire of graphene because then they can be fabricated at the same time as the graphene device itself. But the snag is that to make this work, the edges of that graphene wire have to be absolutely perfect. Essentially, any single atom out of place will make it very hard for the graphene wire to conduct electricity. That’s not good, because it’s very difficult to keep every atom in the right place!

[Figure: A narrow strip of graphene surrounded by boron nitride, with the protected states in the middle of the strip.]

The picture above shows a sketch of a narrow strip of graphene surrounded by boron nitride. Graphene is topologically trivial, but boron nitride is (in a certain sense) non-trivial and can have n equal to either plus or minus one, depending on details. So, remembering the bulk-boundary correspondence, the graphene in this construction works like an interface between two different topologically non-trivial regions, and therefore there must be quantum states in the graphene. These states are robust, and protected by the topology. I’ve tried to show these states by the black curved lines which illustrate that the electrons are located in the middle of the graphene strip.

Now, it is possible to use these topologically protected states to conduct current from left to right in the picture (or vice versa) and so this construction will work as a nanometer size wire, which is just what is needed. And the kicker is that because of the topological protection, there is no longer any requirement for the atoms of the graphene to be perfectly arranged: The topology beats the disorder!

Maybe this, and the example of the bismuth selenide I gave before, show that analysing the topology of quantum materials is a really useful way to think about their properties and helps us understand what’s going on at a deeper level.

(If you’re really masochistic and want to see the paper I just wrote on this, you can find it here.)

What is graphene and why all the hype?

There’s a decent chance you’ve heard of graphene. There are lots of big claims and grand promises made about it by scientists, technologists, and politicians. So what I thought I’d do is to go through some of these claims and almost ‘fact-check’ them so that the next time you hear about this “wonder material” you know what to make of it.

Let’s start at the beginning: what is graphene? It’s made out of carbon atoms arranged in a crystal. But what sets it apart from other crystals of carbon atoms is that it is only one atom thick (see the picture below). It’s not quite the thinnest thing that could ever exist because maybe you could make something similar using atoms that are smaller than carbon (for example, experimentalists can make certain types of helium in one layer), but given that carbon is the sixth smallest element, it’s really quite close!

[Figure: The phases of carbon.]

Diamond and graphite are also crystals made only of carbon, but they have a different arrangement of the carbon atoms, and this means they have very different properties.

So, what has been claimed about graphene?

Claim one: the “wonder material”

Graphene has some nice basic properties. It’s really strong and really flexible. It conducts electricity and heat really well. It is simultaneously almost transparent and yet absorbs light really strongly. It’s almost impermeable to gases. In fact, most of the proposals for applications of graphene in the Real World™ involve these physical and mechanical superlatives, not the electronic properties, which in some ways are more interesting for a physicist.

For example, its conductivity and transparency mean that it could be the layer in a touch screen which senses where a finger or stylus is pressing. This could combine with its flexibility to make bendable (and wearable) electronics and displays. But for the moment, it’s “only” making current ideas work better, it doesn’t add any fundamentally new technology that we didn’t have before. If that’s your definition of a “wonder material” then okay, but personally I’m not quite convinced the label is merited.

Claim two: Silicon replacement

In the first few years after graphene was made, there was a lot of excitement that it might be used to replace silicon in microchips and make smaller, faster, more powerful computers. It fairly quickly became obvious that this wouldn’t happen. The reason for this is to do with how transistors work. That’s a subject that I want to write more about in the future, but roughly speaking, a transistor is a switch that has an ‘on’ state where electrical current can flow through it, and an ‘off’ state where it can’t. The problem with graphene is turning it off: Current would always flow through! So this one isn’t happening.

Graphene electronics might still be useful though. For example, when your phone transmits and receives data from the network, it has to convert the analogue signal in the radio waves from the mast into a digital signal that the phone can process. Graphene could be very good for this particular job.

Claim three: relativistic physics in the lab

This one is a bit more physicsy so takes a bit of explaining. In quantum mechanics, one of the most important pieces of information you can have is how the energy of a particle is related to its momentum. This is the ‘band structure’ that I wrote about before. In most cases, when electrons move around in crystals, their energy is proportional to their momentum squared. In special relativity there is a different relation: the energy is proportional to just the momentum, not to the square. For example, this is true for light or for neutrinos. One thing that researchers realized very early on about graphene is that electrons moving around on its hexagonal lattice have an ‘energy equals momentum’ band structure, just like in relativity. Therefore, the electrons in graphene behave a bit like neutrinos or photons. Some of the effects of this have been measured in experiments, so this one is true.

Claim four: Technological revolution

One other big problem that has to be overcome is that graphene is currently very expensive to make. And the graphene that is made at industrial scale tends to be quite poor quality. This is an issue that engineers and chemists are working really hard at. Since I’m neither an engineer nor a chemist, I probably shouldn’t say too much about it. But what is definitely true is that the fabrication issues have to be solved before you’ll see technology with graphene inside it in high street stores. Still, these are clever people so there is every chance it will still happen.

Footnote

Near the top, I said graphene simultaneously absorbs a lot of light and is almost transparent. This makes no sense on the face of it!! So let me say what I mean. To be specific, a single layer of graphene absorbs about 2.3% of visible light that lands on it. Considering that graphene is only one layer of atoms, that seems like quite a lot. It’s certainly better than any other material that I know of. But at the same time, it means that it lets through about 97.7% of light, which also seems like a lot. I guess it’s just a question of perspective.
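
If you want to play with these numbers, the footnote’s arithmetic compounds in an obvious way. (Ignoring reflections between the layers is an approximation on my part.)

```python
# Each graphene layer lets through about 97.7% of the light, so a stack of
# N layers transmits roughly 0.977**N.
transmission = 1.0
layers = 0
while transmission > 0.5:
    transmission *= 0.977
    layers += 1
print(layers, transmission)  # ~30 layers to block half the light
```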

Why do some materials conduct electricity while others don’t?

Can you tell at a glance how the electrons in a material behave? Amazingly, the answer is “yes”, and in this post I’ll explain how.

I want to introduce the concept of something called ‘band structure’ because it is an idea that underpins a lot of the quantum mechanics of electrons in real materials. In particular, the band structure of a material can make it really easy to know if a material is a good conductor of electricity or not. So, here goes.

To describe how electrons behave in a particular material, a good place to start is by working out what quantum states they are allowed to be in. In essence, the band structure is simply a map of these allowed quantum states. One place where things can be a bit confusing is the coordinates that are used to draw this map. Band structure uses the momentum of the quantum state as its coordinate, and gives the energy of that state at each point.

The reason for this is that the momentum and energy of the quantum states are linked to each other so it just makes sense to draw things this way. But why not use the position of the quantum state? This is because position and momentum cannot both be known at the same time due to Heisenberg’s Uncertainty Principle. If the momentum is known very accurately then the position must be completely unknown.

In fact, there’s even more to it than that. Most solids have a periodic lattice structure and this periodicity means that only certain momentum values are important. Roughly speaking, if the size of the repeating pattern in the lattice has length a, then there is a repeating pattern of allowed energy states in momentum with length 1/a. This means that we can draw the map of the allowed quantum states in only the first of these zones. This zone has a finite size, which is very helpful when trying to draw it!

[Figure: The band structure of silicon. (Picture credit: Dissertation by Wilfried Wessner, TU Wien.)]

Let’s take silicon as an example because it’s a really important material since a lot of electronics are made from it. The picture above shows the band structure (left) and the shape of the first repeating zone of allowed momenta (right) of silicon. The zone of allowed momenta has quite a complicated shape which is related to the crystal structure of the silicon. Some of the important points in that zone are labeled, for example, the center of the zone is called the Γ point (pronounced “gamma point”), while the center of the square face at the edge of the zone is the X point. It’s impossible to draw all the allowed states at every momentum point in a 3D zone, so what is usually done is to draw the allowed quantum states along certain lines between these important points, and that is what is on the left of the picture. You can probably see that these allowed states form bands, which is where the name ‘band structure’ comes from.

There’s one more concept that is really important, called the “Fermi surface”. Electrons are fermions, and so they are allowed to occupy these quantum states so that there is at most one electron in each state. In nature, the overwhelming tendency is for the total energy of a system to be minimized as this is the most efficient arrangement. This is done by filling up all the quantum states, starting from the bottom, until all the electrons are in their own state. There are never enough electrons to fill all the allowed quantum states, and so the energy of the last filled (or first empty) states is called the Fermi surface. In a three dimensional material, the cutoff between filled and empty states is a two-dimensional surface.

So, how does knowing the band structure help us to understand the electronic properties of a material? As an example, let’s think about whether the material conducts electricity well or not. It turns out that for electrical conduction, most of the quantum states of the electrons play no role at all. The important ones are those near the Fermi surface.

To conduct electricity, an electron has to jump from its state below the Fermi surface to one above it, where it is free to move around the material. To do this, it has to absorb some energy from somewhere. This usually either comes from an electric field that is driving the electrical current (like a battery or a plug socket), or from the thermal energy of the material itself.

Take a look at the sketches below. They are cartoons of band structures near the Fermi surface (which is shown by the green dotted line). The filled bands are shown by thick blue lines while the empty bands are shown by thin blue lines. In the left-hand cartoon there is a big gap between the filled and empty bands so it’s very difficult for an electron to gain enough energy to make the jump from the filled band to the empty band. That means that a material with a large band gap at the Fermi surface is an insulator – it can’t conduct electricity easily. The middle cartoon shows a material with only a small band gap. That means it’s possible, but kinda difficult for an electron to make the jump and become conducting. Materials with narrow gaps are semiconductors.

[Figure: Cartoon band structures near the Fermi surface for an insulator (left), a semiconductor (middle), and a conductor (right).]

The right-hand cartoon shows a material where the Fermi surface goes through one of the bands, so there are both empty states and filled states right at the Fermi surface. This means it’s really easy for an electron to jump above the Fermi surface and become conducting because it takes only a tiny amount of energy to do this. These materials are conductors.

Going back to silicon, we can look at the band structure above and see that there is a gap of about 1 electron volt at the Fermi energy. (The Fermi energy is at zero on the y-axis.) One electron volt is too large an energy for an electron to become conducting by absorbing thermal energy, but small enough that it can be done by an electric field. This means that silicon is a semiconductor – it has a narrow gap.
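
To see why thermal energy doesn’t bridge a gap of this size, compare the two scales directly. Room temperature and the roughly 1.1 eV silicon gap are the only inputs here.

```python
# Thermal energy scale k_B * T versus the silicon band gap.
K_B_EV = 8.617e-5            # Boltzmann constant in eV per kelvin
room_temp = 300              # kelvin
thermal_energy = K_B_EV * room_temp
print(thermal_energy)        # ~0.026 eV
print(1.1 / thermal_energy)  # the gap is over 40 thermal energies wide
```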

One final question: How do you find the band structure of your favorite material? There is an experimental technique called ARPES where you shine high energy light at a material, and the photons hitting it cause electrons to be ejected from the surface. These electrons can be caught and the energy and momentum that they have reflect the energy and momentum of the quantum states they were filling in the material. So by careful measurement you can reconstruct the map of these states.

Another way is to use mathematics to theoretically predict the band structure. There has been a huge amount of work done to come up with accurate ways to go from the spatial definition of a crystal to its band structure with no extra information. In some cases, these work very well, but the calculations which do this are often very complicated and require supercomputers to run!

So, that is band structure. An easy way to make a link between complicated quantum mechanics and everyday properties like conduction of electricity.

Justin Trudeau and quantum computing

You’ve probably seen already that clip of Justin Trudeau, the Prime Minister of Canada, explaining to a surprised group of journalists why quantum computing excites him so much. In case you haven’t seen it, here is a link. A number of things strike me about this. Firstly, of course, he’s right: If we can get quantum computing to work then that would be a really, really big deal and it’s worth being excited about! Second, it’s a bit depressing that a politician having a vague idea about something scientific is a surprising exception to the rule. Thirdly, while his point about storage of information is right, there’s a whole lot more that quantum computers can do that he didn’t mention. Of course, that’s fair enough because he wasn’t trying to be comprehensive, but it gives me an opportunity to talk about some of the stuff that he missed out.

Before that, let’s go over exactly what a quantum computer is. As the Prime Minister said, a normal (or “classical”) computer operates using only ones and zeroes which are represented by current flowing or not flowing through a small “wire”. (However, as you might have already read, this might have to change in the future!) A quantum computer is completely different because instead of these binary bits, it has bits which can be in state that is a mixture of zero and one at the same time. This like the electron simultaneously going through both slits in the two slit experiment, or Schrödinger’s famous cat being alive and dead “at the same time”: It’s an example of a quantum mechanical superposition of states. A quantum computer is designed to operate on these quantum states and to take advantage of this indeterminacy, changing them from one superposition to another to do computations. If you can get the quantum bits to become entangled with each other (meaning that the quantum state of one bit will be affected by the quantum state of all the others that it is entangled with) then you can do quantum computing! Exactly how this would work from a technological point of view is a big subject which I’ll probably write about another time, but options that physicists and engineers are working on include using superconducting circuits, very cold gases of atoms, the spins of electrons or atomic nuclei, or special particles called majorana fermions.

A big field of study has been to find algorithms that allow this quantum-ness to be used to do things that classical computers can’t. There are a few examples that would really change everyday life if they could be implemented. The first sounds a bit boring on the face of it, but quantum algorithms allow you to determine whether an item is in a list or not (i.e. to find that item) in a much shorter time than classical algorithms. So, if you want to search the internet for your favourite web site, a quantum google will do this much faster than a classical google. Quantum algorithms can also tell quickly whether all the items in a list are different from each other or not.

Another application is to solve “black box” problems. This has nothing to do with the flight data recorders in aircraft, but is the name given to the following problem. Say you have a set of inputs to a system and their corresponding outputs, but you don’t know what the system does to turn the input into the output. The system is the black box, and the difficult problem is to determine what operations the system does to the input. This is important because these black box problems occur in many different areas of science, including artificial intelligence, studies of the brain, and climate science. For a classical computer to solve this exactly would require an exponential number of “guesses”, but a quantum computer could do this in just one “guess”!

But perhaps the most devastating use of a quantum computer is to break the internet. Let me explain this a bit! There is a mathematical theorem which says that every number can be represented as a list of prime numbers multiplied together, and that for each number there is only one such list. For example, 30 = 2×3×5, or 247 = 13×19. This matters because most digital security currently depends on the fact that it is very difficult for a classical computer to start with a big number and work out what its prime factors are.

The way that most encryption on the internet works is that data is encoded using a big number that is the product of only two prime numbers. In order to decrypt the information again, you need to know what the two prime numbers that give you the big number are. Because it’s hard to work out what the two prime numbers are, it is safe to distribute the big number (your public key) so that anyone can encode information to send to you securely. But only you can decode the information, because only you know what the two primes are (this is your private key).

But, if it suddenly becomes easy to factorise the big number into the two primes, then this whole mode of encryption does not work! Every interaction that you have with your bank, your email provider, social media, and online stores could be broken by someone else. The internet essentially wouldn’t be private! Or at least, it wouldn’t be private until a new method for doing encryption is found. This is the main reason why security agencies are working so hard on quantum computing.
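
To make the factoring point concrete, here is a toy demonstration with comically small numbers. Real keys are hundreds of digits long, where this brute-force search becomes hopeless; everything here is for flavour only.

```python
# Classical factoring by trial division: the cost grows quickly with n.
def trial_factor(n):
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None

public_modulus = 247            # = 13 x 19, as in the example above
print(trial_factor(public_modulus))  # -> (13, 19)

# Anyone holding the two primes can undo the encryption; anyone without
# them has to factor the modulus -- easy here, hopeless for 600-digit
# numbers on a classical computer, but feasible for a quantum computer
# running Shor's algorithm.
```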

Finally, I want to quickly mention one application that is a bit more specialised to physics: quantum computers would allow us to simulate quantum systems in a much more accurate way. Currently, the equations that determine how groups of quantum mechanical objects behave and interact with each other pretty much can’t be solved exactly, in part because the quantum behaviour is difficult to model accurately using classical computing. If you have a quantum computer, then part of this difficulty goes away because you can build the quantum interactions into the simulation in a much more natural way, using the entanglement of the quantum bits.

So in summary, Prime Minister Trudeau was right: Quantum computers have the potential to be absolutely amazing and to change society and are really exciting (and possibly slightly scary!) But storing information in a more compact manner is really only the tip of the iceberg.

My new idea

It’s been a while! Part of the reason I’ve not written anything recently is that I’ve been busy preparing a grant proposal which has to be submitted in a few days. This means I’m begging the Swedish funding agency to give me money to spend on researching a new idea that I have been working on for a while. As part of this proposal, I am required to write a description of what I want to do that is understandable by people outside of physics, so I thought I’d share an edited version of it here. Maybe it’s interesting to read about something that might happen in the future, rather than things that are already well known. And it’s an idea that I’m pretty excited about because there’s some chance it might make a difference!

Computing technology is continuously getting smaller and more powerful. There is a rule-of-thumb, called Moore’s law, which encodes this by predicting that the computing power of consumer electronics will double every two years. So far, this prediction has been followed since microchips were invented in the 1970s. However, fundamental limits are about to be reached which will halt this progress. In particular, the individual transistors which make up the chips are becoming so small that quantum mechanical effects will soon start to dominate their operation and fundamentally change how they work. Removing the heat generated by their operation is also becoming hugely challenging.

A transistor is essentially just a switch that can be either on or off. At the present time, the difference between the on and off state is given by whether an electric current is flowing through the switch or not. If quantum mechanical effects start to dominate transistor operation, then the distinction between the on and off state becomes blurred because current flow becomes a more random process.

[Figure: One-dimensional materials with excitons. Left, two parallel nanowires. Electrons in the nearly empty wire are shown in blue, ‘holes’ in the nearly full wire are in green. The red ellipses represent the pairing. Right, a core-shell nanowire.]

In this project, I will investigate a new method of making transistors, using the quantum mechanical properties of the electrons. The theoretical idea is to take two one-dimensional layers (for example, two nanowires) placed close enough to each other that the electrons in them can interact through the Coulomb force. If one of these nanowires has just a few electrons in it, while the other is almost full of electrons, then the electrons in the nearly empty wire can be attracted to the ‘holes’ in the nearly full wire, and they can pair up into new bound particles called excitons. What is special about these excitons is that they can form a superfluid which can be controlled electronically.

This can be made into a transistor in the following way. When the superfluid is absent, the two layers are quite well (although not perfectly) insulated from each other, so it is difficult for a current to flow between them. However, when the superfluid forms, one of the quantum mechanical implications is that it becomes possible to drive a substantial inter-layer current. This difference defines the on and off states of the transistor.

There are some mathematical reasons why one might expect that this cannot work for one-dimensional layers, but I have already demonstrated that there is a way around this. If the electrons can hop from one layer to the other, then the theorem which says that the superfluid cannot form in one dimension is not valid. What I will do next is a systematic investigation of lots of different types of one-dimensional materials to determine the best situations for experimentalists to look for this superfluid. I will use approximate theories for the behaviour of electrons in nanowires, nanoribbons, carbon nanotubes, and core-shell nanowires to determine the temperature at which the superfluid can form in these different materials. When the superfluid is established, it can be described by a hydrodynamic theory, which treats the superfluid as a large-scale object governed by the simple equations that describe the flow of liquids. Analysing this theory will reveal information about the properties of the superfluid and allow optimisation of the operation of the switch. Finally, in reality, no material can be fabricated with perfect precision, so I will examine how imperfections are detrimental to the formation of the superfluid, to establish how accurate the production techniques need to be.

Another benefit of this superfluid is that it can conduct heat very efficiently. This means that it may have applications in cooling and refrigeration. I will also investigate the quantitative advantages that this may have over traditional thermoelectric materials. In both of these applications, the fact that the superfluid can exist in a one-dimensional material is a very advantageous factor for designing devices. In particular, because they are so small in two directions, it gives a huge amount of freedom for placing transistors or heat-conducting channels in optimal arrangements that would be impossible with two- or three-dimensional materials.

One final thing for some context: The picture at the top of the page shows a core-shell nanowire that was grown by some physicists in Lund, Sweden. It’s made out of two different types of semiconductor: Gallium antimonide (GaSb) in the core, and indium arsenic antimonide (InAsSb) in the shell. The core region is the nearly full layer that contains the ‘holes’, while the shell is the nearly empty layer with the electrons. The vertical white line on the left of the image is a scale bar that is 100nm long (that’s one ten-thousandth of a millimeter!) which shows that these wires are pretty small! (Picture credit: Ganjipour et al, Applied Physics Letters 101, 103501 (2012)).