Tuesday, January 31, 2012

Perturbation theory

Now, let us start with a look at the different methods in more detail. The first is the mainstay of theoretical physics, and the first thing everyone tries when encountering or designing a new theory: Perturbation theory, often abbreviated simply as PT.

The basic idea of perturbation theory is rather direct. When we have a theory, it is very often the case that we can solve a simpler version of it exactly. For example, if we have QED, then we can solve exactly the case where the electromagnetic charge is zero, because the particles then do not interact with each other. And free, non-interacting particles are something we can handle very well. Of course, this is not what QED is really like. Otherwise, we could not see, as the electrons in our eyes would not react to the light made out of photons. To capture this, perturbation theory assumes that the interaction between electrons and photons is only a small alteration to the picture of free particles: A perturbation, and hence the name. Of course, finding a useful split depends on the theory in question, and many different types are actually in use.

Once such a setup is available, we have powerful mathematical tools to calculate essentially anything we want under this assumption. The most important principle is that we can reformulate what we mean by a perturbation mathematically by stating that some quantity is small. What this precisely means depends on the theory in question. In the above example of QED, it would be the electric charge.

We can then organize perturbation theory systematically by counting how often the small quantity appears in an expression. We then speak of the order of perturbation theory. If it appears the lowest possible number of times, which may be zero, we call this tree level. The reason for this name is that the mathematical expressions can be generated in the way a tree grows, i.e., by starting somewhere and branching out, without ever closing back on itself. In fact, this is often equivalent to a classical theory. This means that we treat the particles as quantum particles, but the interaction between them like a classical interaction, without additional quantum effects.

We can now increase the number of times the quantity appears. We also say that we calculate higher orders in the quantity, where the order counts the number of times the quantity appears. If we calculate the contribution with the second-least number of appearances, we call this the leading-order correction. Since such a correction only appears in the quantum theory, we also call it a quantum correction. The contribution with the third-least number of appearances is called next-to-leading order. If we further increase the order, we just add the corresponding number of times next-to in front, e.g. next-to-next-to-next-to-leading order. This quickly becomes awkward, but have no fear: such high orders are rarely calculated.
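For the mathematically inclined, here is a small sketch of what this counting looks like (with illustrative symbols of my own choosing: α for the small quantity, O for some measurable quantity, and c_0, c_1, and so on for coefficients that have to be calculated):

O(\alpha) = c_0 + c_1\,\alpha + c_2\,\alpha^2 + c_3\,\alpha^3 + \dots

The first term, c_0, is the tree-level result, the term with one power of α is the leading-order correction, the term with α squared is next-to-leading order, and so on. The art of perturbation theory is computing these coefficients order by order.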

The reason is that perturbation theory at higher orders becomes rather complicated just from an organizational point of view. Quickly, perturbative calculations fill hundreds and thousands of pages with expressions, which have to be evaluated. The final result will only fill a very few pages, if more than one at all. Over time, we have developed very powerful methods to deal with this complexity. If you have ever heard of Feynman diagrams, given that these have made their way even into some obscure corners of pop culture, then this is one of these tools. It is a very powerful graphical technique to organize perturbation theory in a very efficient way. And this is quite important. There are furthermore other ingenious methods to reduce the amount of calculation necessary. Nonetheless, in the end the expressions remain rather long, and it requires computers to evaluate them. It is in general straightforward to tell the computer what to do. But it is very challenging to do it in a way that the computer is not occupied for the next couple of years, but only a few days or less.

With these methods, we have gone to order ten in QED, and for some quantities to order four in the standard model. This seems little, but because of the complexity it required many people and decades of work. Still, sometimes experiments are so precise that the accuracy achieved by these calculations is not sufficient. But of course, there are also many cases where it is the other way around. In a way, it is a kind of arms race between theoreticians and experimentalists.

In the end, however, perturbation theory will not give you the full answer. You can mathematically prove that certain phenomena cannot be calculated using perturbation theory. You may be lucky, and using a different starting point, this can be circumvented for a certain quantity, but then other quantities will not be accessible. Furthermore, we know that perturbation theory cannot be pursued to arbitrary order, but will break down at a certain point, for mathematical reasons. Though also here progress has been made, we know that perturbation theory cannot provide the full answer to any question. Already as simple a quantity as the mass of the proton cannot be calculated in perturbation theory. Nonetheless, much of what we measure in experiments, say at the LHC, can be very well and very accurately calculated with perturbation theory. Thus, perturbation theory remains one of the main tools in particle physics, and for very good reasons.

Tuesday, January 24, 2012

The tools of the trade

By now, I have collected and presented quite a number of the basic ingredients of the standard model (and beyond). You should now be well equipped to get a good understanding of what I am doing. Therefore, I can come back to the original idea of this blog, and discuss some aspects of my own research. At times, and when need be, I will add further, more general entries.

Before I can enter the subjects of my research, I have to present another important part of the work of a theoretical physicist: The methods she or he is using. Each method has its distinct advantages and drawbacks. As a result, a given problem can often be addressed by multiple methods. If this is the case, it is also possible to combine the different methods.

The latter is of particular importance because of an insight of singular importance in physics: Any problem of fundamental interest in particle physics so far is so complicated that we were not (yet) able to find an exact solution. At first, this appears like a very depressing insight. It is usually a cultural shock for students when they enter research, as up to then one is usually only exposed to simple problems which can be solved exactly, for reasons of a pedagogical and manageable presentation. In time, one acquires the insight that this horrible complexity of real problems is just a natural consequence of the richness of physics, even of the very elementary particles which lie at the heart of our current understanding of the universe. Nonetheless, physicists strive for better and better and ultimately exact solutions, and perhaps this holy grail of a theoretician can be reached someday. For now, however, this is not the case, and we have to live with the fact that despite our methods often working exceptionally well, they can never give you the full answer. But for some questions they can provide answers which are precise to ten or more digits. And this is quite encouraging.

For the topics I am interested in, such enormously good results have not been achieved. The reason for this is that problems become simpler the weaker the interactions are. The method perfectly suited for this regime is perturbation theory, the first method I will be introducing shortly.

However, if the interaction is weak, not so much of interest is happening. Particles ignore each other most of the time, and if they meet, they, well, interact weakly, and just scatter a bit off each other. If the interactions become stronger, interesting things start to happen. Bound states form, particles condense, and much more. That is where my interest lies.

The downside of this is that if the interactions between particles become strong, it becomes very hard to find a mathematical handle to treat them. That is the challenge, and the reason why rather few exact results are available. One solution is then to use brute force and just simulate the physics using a sufficiently large computer. That has provided us with very deep insights, and has become an invaluable tool in modern theoretical physics. For the type of problems I am most interested in, such simulation methods are called lattice gauge theory, for reasons I will explain later.

There are two major alternatives to such brute-force simulations. One is the use of models, and the other is so-called functional methods. In both cases the idea is to simplify the problem while capturing everything of interest.

Models, a term which I use here in a very broad sense, are built on the idea of finding a simplified version of the theory at hand, sufficiently simplified to be easier to handle. Such theories then often have a very narrow range of applicability (for very similar reasons as the standard model itself). However, if they are constructed very carefully, such models very often help to understand not only broad features but even quantitatively what is going on.

Functional methods are a different approach. The basic feature of these methods is a set of equations which are in principle exact. Unfortunately, this set is often infinite, and in general approximations are needed to find solutions to them. If the approximations are good, it is possible to describe very much successfully with these equations and at the same time get deeper insight. Also, the approximations can be improved step by step, and thus eventually permit a full solution of the theory - at least in principle.

There are, of course, many other methods available, but these are the most important ones for my own research, and, except for models, I use them essentially on a day-to-day basis. The important methodological aspect in this is the combination of all the methods, which results in something that is much more than just the sum of its parts.

Thursday, January 19, 2012

Wave functions and fields, once more

In the discussion about fermions, the concept of a wave function appeared, to explain what makes fermions so very strange under a change of coordinate systems. The analogy of particles with waves and oceans has also been made already quite a bit back. It is about time to be just a bit more precise about what a wave function and a field are for a theoretical physicist.

Go back to the idea that particles emerge as waves at a particular point on an ocean. Two particles would then be just two such waves at two different points. Now the underlying concept appears to be the ocean, rather than the waves. And indeed, the waves can very well be identical.

That is the underlying idea also in theoretical physics - not only particle physics, as this permeates many areas of theoretical physics: The basic object is the ocean. In the context of particle physics, this ocean is then called a field. Such a field exists at every point in space and at every instant in time. In the very literal meaning of the word, it fills up all of the universe. If there is nothing of interest around, this is because the size of the field at this point in space and time is small or even vanishing. However, if there is a spike at some point in the field, then just as in the picture of the ocean there sits a particle. If there is a second spike somewhere else, then there is another particle, and so on. Since all the spikes belong to the same field, they describe the same type of particle, say an electron. The spikes may move with different speeds, so the electrons appear to have different speeds, but they are still electrons. That is the reason why all electrons are the same: They are just spikes in the same field. Such a spike is often called an excitation of the field, and this excitation is the electron.

Then what about the other types of particles? The quarks, the gluons, the Higgs? Well, these belong just to other fields. That is, our universe is filled up with many fields, all existing simultaneously at every point in space and time.

You may be wondering how this should work, and whether it is not a bit crowded. But you know already that fields are mathematical concepts. For example, you can associate with every point in space and time a temperature, and thus create a temperature field. At the same time, there is an atmospheric pressure field. Both can happily exist simultaneously. But they are not ignoring each other. As you know, both are related to each other: If either changes, this indicates a change of the other as well. Though this analogy is not exactly the same as the particle physics fields, and there are more things involved, the basic idea is the same.
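In symbols, such a field is nothing more than a function assigning a value to every point in space and time, say (with illustrative names)

T(\vec{x}, t) \ \text{for the temperature}, \qquad p(\vec{x}, t) \ \text{for the pressure}.

The particle physics fields work the same way, except that their values are in general more complicated objects than a single number.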

The particle physics fields also interact, and thus do not ignore each other. Their interaction can be more or less translated once more from the analogy with the waves, which has been discussed earlier. So, in this way, everything we see in particle physics is realized. There are fields for every type of particle, and they may interact. We are then 'just' a very complicated, combined, and correlated simultaneous excitation of all of these fields, as is your desk or your computer.

Now, what are the wave functions? Well, in the beginning, quantum physics was formulated without taking into account the effects of large speeds, i.e. of special relativity, something I will explain in more detail later. In this case, the concept of fields can be reduced to describing only the waves making up a single particle. In principle, you isolate each wave describing a particle and discuss it alone. The mathematical quantities describing these single particles are then called wave functions. So wave functions can be thought of as the slow-speed limit of the fields, when all particles are treated separately. Mathematically, this is not quite precise, but it should give a rough idea.

Now it is possible to come back to fermions. When you rotate the coordinate system once, it is this wave function (or the field) which changes not directly back to the original, but only after a second rotation. Of course, nothing you can actually measure (or experience) changes when rotating your coordinate system once fully around. That is because the wave function or the fields cannot be directly measured, just things we can derive from them. However, the underlying fact that you have this obscure change influences the properties of fermions, and leads, e.g., to the Pauli exclusion principle.

Thursday, January 12, 2012

Fermions

You thought bosons were strange? Well, wait, now comes the really strange quantum stuff - fermions.

At first sight, fermions look innocent, differing from bosons by the fact that they have half-integer spin. In the standard model, all quarks and leptons are fermions, and have spin one half. No elementary particles are known (though some are hypothesized) which are fermions and have a spin larger than one half. But again, some particles made up from several elementary particles may look from afar like they have a larger half-integer spin. E.g. the Delta, a heavier cousin of the proton and also made up from three quarks, has spin three halves.

In contrast to bosons, fermions dislike being at the same place. In fact, they can never take the same position, much like classical balls. But there is a difference to the classical balls. For fermions, this not only applies to position, but also to all quantum numbers and energies. As a consequence, there can never be two fermions in exactly the same state, with the same quantum numbers and the same energy. This is the famous Pauli exclusion principle.
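For those who like a formula, there is a compact way to state this (a sketch, with x standing loosely for everything that characterizes a particle - its position, quantum numbers, and so on): the combined wave function of two identical fermions must flip its sign when the two are exchanged,

\psi(x_1, x_2) = -\psi(x_2, x_1).

If you now try to put both fermions into the same state, x_1 = x_2 = x, you get \psi(x, x) = -\psi(x, x), which forces \psi(x, x) = 0: the probability for this to happen vanishes. That is the exclusion principle in one line.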

This principle has very fundamental consequences: It is responsible for the stability of all matter. If your desk were made out of bosons, only electromagnetic repulsion would prevent it from collapsing into a pile of bosons. But because it is made out of fermions - all the quarks and electrons - it can never collapse to a single point, because the fermions simply cannot get that close to each other. That is also the fundamental reason which prevents a white dwarf or a neutron star from collapsing.

Very similarly, it also prevents the electrons in an atom, which are attracted to the nucleus by electric forces, from all collapsing into the lowest energy level, or into the nucleus outright. All of chemistry works the way it works because the electrons, since they are fermions, cannot all go into the lowest energy level. Otherwise, our chemistry, and thus our biology, would be very different indeed.

But this is not the only strange thing about fermions. Fermions are also very strange in many other respects. As a consequence of the Pauli principle they again obey a different statistics, the so-called Fermi-Dirac statistics. The consequences of this are at the heart of why there are electric insulators.

But fermions are also strange in the sense that when you turn your coordinate system by 360 degrees, i.e. once fully around, everything else is unchanged. Only the fermions do not play along: You have to turn your coordinate system around twice so that they look the same again (or, more precisely, so that their wave function, explained next time, looks the same). That is so mind-boggling that it is hard to believe it is true, and one cannot really understand it intuitively. It is a very deep consequence of the combination of our space-time structure and quantum physics. There is no classical object which behaves like this.
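As a formula (again only a sketch, with ψ denoting the wave function explained next time): a rotation of the coordinate system by 360 degrees multiplies a fermion's wave function by minus one, and only a second full turn brings it back,

\psi \;\to\; -\psi \quad (360^\circ), \qquad \psi \;\to\; +\psi \quad (720^\circ).

Since only quantities derived from the wave function, like |\psi|^2, can be measured, this sign is invisible in any single measurement, but it does leave its traces, as described above.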

The mathematical consequences of these properties are hardly less strange. Fermions are the only objects which we cannot describe by ordinary numbers. Theoretical physicists had to invent a whole new type of numbers (well, actually borrow them from your friendly mathematician next door) to describe fermions - so-called Grassmann numbers. These are really strange. If you multiply an ordinary number with itself, you get a new number. If you multiply a Grassmann number with itself, you always get zero. That is the mathematical realization of the Pauli principle. This feature makes fermions very hard to handle in actual calculations, and they have been a bane especially of numerical simulations.
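In formulas (a small sketch with two illustrative Grassmann numbers θ_1 and θ_2): Grassmann numbers anticommute, i.e. swapping them in a product costs a minus sign,

\theta_1 \theta_2 = -\theta_2 \theta_1, \qquad \theta_1^2 = \theta_2^2 = 0.

Setting the two equal immediately gives zero for the square, and this is precisely the mathematical shadow of the Pauli principle: you cannot have the same fermion twice.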

Nonetheless, they are there, and we are bound to live with them, as we are bound to live with bosons. Though - you can always combine two fermions to make something which looks from afar like a boson. But you can never combine two bosons such that they look from afar like a fermion. Nature has been found to exploit this fact very often, as already described last time. And it lies at the heart of some ideas, so-called technicolor scenarios, to get rid of the Higgs with all its annoying properties: In such proposed extensions of the standard model, the Higgs is just a combination of two new particles, so-called techniquarks.

Wednesday, January 11, 2012

Bosons

The first type of particles are the bosons. Those are the ones having integer spin. In the standard model, there is the Higgs particle, which has spin zero, and the photons, the W and Z bosons, and the gluons, which all have spin one.

Particles with spin zero are also called scalar particles. Since their spin is zero, the properties of such particles are the simplest when changing to a different coordinate system: They just look the same.

Particles with spin one are also called vector particles. The photon is an example of such a vector particle. The name vector stems from the fact that under a coordinate transformation the field describing a vector particle changes in the same way as a line which connects the origin of a coordinate system and an event. The latter line is also called a vector, and hence the name for particles of spin one.

There is actually also a hypothetical particle with spin two, the graviton. Such a particle is also called a tensor particle. Tensors are generalizations of vectors when it comes to coordinate transformations, and the fields of spin-two particles transform in the same way as such tensors. In general, tensors are rectangular collections of numbers, whose columns transform like vectors under a coordinate transformation.
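To make this a little more concrete (a sketch with illustrative symbols: Λ stands for the matrix describing the coordinate transformation, A for a vector field, and h for a tensor field): under a change of coordinates,

A^\mu \to \Lambda^\mu{}_\nu A^\nu, \qquad h^{\mu\nu} \to \Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma h^{\rho\sigma},

i.e. a vector picks up one factor of the transformation, a tensor one factor per index, and a scalar field just stays as it is.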

Elementary particles with higher spin are not known. However, particles made up from elementary particles add their spin together (though not necessarily in the sense 1+1=2 - it can also be subtracted, 1-1=0, and everything in between, in whole-number steps), and can thus have higher spins.

Furthermore, for each such type of boson, there exists a so-called pseudo boson, i.e. a pseudo scalar, a pseudo vector (sometimes for historical reasons also called an axial vector), and a pseudo tensor. The difference between a boson and a pseudo boson is what happens if you reflect the world in a mirror (a parity transformation). The fields of ordinary bosons just become the same fields once more. In contrast, the fields of pseudo bosons are multiplied by minus one.
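In terms of fields, this reads (a sketch with illustrative names: φ for a scalar field, η for a pseudo scalar field, and the mirror reflection sending the position x to -x):

\phi(t, \vec{x}) \;\to\; \phi(t, -\vec{x}), \qquad \eta(t, \vec{x}) \;\to\; -\eta(t, -\vec{x}),

and correspondingly for the vector and tensor cases, where the reflection also reshuffles the individual components.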

Ok, after all this classification and naming stuff, what is special about bosons? The most striking feature is that you can pile them on top of each other. That is different from the small balls one often uses to imagine elementary particles: We can stack such balls next to each other, but never ever can two of these balls be at the same place. But bosons can. That is very hard to get in line with our ideas of how things work, and it shows just how quantum bosons are: they behave in a way which is just unexpected.

This is, of course, only true if the bosons do not repel each other by some force. For example, if you had two bosons carrying the same electric charge, you would have a hard time bringing them together. But if they have opposite charges, then they would just love to sit at exactly the same place.

In fact, if bosons do not repel each other because of a force acting between them, they have a tendency to lump together - two bosons rather prefer to be at the same place than to be apart. This phenomenon is again a pure quantum effect: If you had two balls which are not talking to each other, they would ignore each other completely. The reason for this different behavior is encoded in what physicists call statistics. In the case of bosons this statistics is called Bose-Einstein statistics, in contrast to the classical statistics of the balls. Statistics describes how particles distribute themselves. Classical statistics is essentially random, but bosons with Bose-Einstein statistics are not entirely randomly distributed and tend to get together.

This property also pertains to a different thing: The energies the particles have. While classical particles have just their energy, independent of every other particle, as long as they do not interact, bosons tend to have the same energy.
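For those curious about the formula behind this, the average number of bosons occupying a state of energy E at temperature T is given by the Bose-Einstein distribution (here k_B is Boltzmann's constant, and μ a constant fixed by the total number of particles):

n(E) = \frac{1}{e^{(E-\mu)/(k_B T)} - 1}

The classical, random case would instead give the Boltzmann factor e^{-E/(k_B T)}. The -1 in the denominator enhances the occupation of low-energy states, and that is the mathematical expression of the bosons' tendency to gather in the same state.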

The extreme case of getting together occurs when a sizable fraction of all available bosons are involved, and all of them have the lowest possible energy. That is what is called a Bose-Einstein condensate. Such a condensate is a state of matter, similar to being liquid or being solid. But it only occurs under rather extreme conditions, in particular at very low temperatures. On Earth, there is no naturally occurring case of such a condensate. But it has been possible to create such condensates in the laboratory using atoms.

In particle physics, such condensates play a central role. The Higgs effect was associated with a condensate of Higgs particles: It is just such a Bose-Einstein condensate. The same applies to the mass generation from the strong force, though in this case it is not the quarks themselves that form a condensate. Since they are fermions, as will be discussed next, this is not directly possible. But states made up from two quarks (or a quark and an anti-quark) can condense. Since spins add, such states have either spin zero or one, and thus behave like bosons if one is not looking too closely. And these effective bosons are, loosely speaking, condensing to a Bose-Einstein condensate in this case.

These are only some examples, but such condensates very often play a role, from superconductors to the interiors of neutron stars. Thus bosons, with their strange properties, are very important to physics, and especially to particle physics.

Monday, January 9, 2012

Spin

One of the most intriguing and most important properties of an elementary particle is its spin. At the same time, spin is one of the conceptually most problematic quantities, and has led to an enormous amount of misunderstanding.

The reason for this is that there is something in classical physics which is very closely related to the concept of spin. But this relation is in spirit rather than literal, and this has led to a lot of confusion. This analogue is angular momentum.

So, first, what is angular momentum? Angular momentum is connected with any kind of rotation of a particle around some center. Formally, it is a product involving the radius of the rotation and the speed along the path of the object. In classical physics, without friction, it is conserved, and it is what keeps the planets' orbits in their respective planes. It is likely also responsible for the fact that all the orbits are more or less in the same plane, and that the Milky Way has the overall form of a discus (neglecting the spiral arms). In essence, it is just a reformulation of the ordinary speed, mixed with the mass of the particle: essentially a kinematic quantity, despite its importance.
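As a formula (for the simple case of a point particle of mass m at distance r from the center, moving with speed v along a circular path), the size of the angular momentum is

L = m \, r \, v,

and more generally it is the vector product \vec{L} = m\,\vec{r} \times \vec{v}, which also keeps track of the orientation of the plane of rotation.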

If an object, e.g. a ball, just rotates, then each of the elements of the ball rotates. This can be described by assigning the ball as a whole an angular momentum. Since the geometry of the ball is known and fixed, it is possible to infer from this total angular momentum the angular momentum of every piece of the ball.

In the world of particles, this angular momentum reappears whenever there is something having some kind of relative motion. E.g. in an atom, it is possible to assign the electrons an angular momentum, which is then often called orbital angular momentum (a somewhat complicated name). However, the electrons are not actually small spheres orbiting around the nucleus, but rather smeared out over the whole of the atom. What this precisely means, I will discuss later. The important thing is that this smeared-out something has a kind of orbital movement (the whole object 'rotates' in a certain sense), and can therefore be assigned such an orbital angular momentum.

It is a remarkable observation in quantum physics that angular momentum cannot take any value it likes. It is quantized. The reason for this quantization is the inherent relation between angular momentum and speed, and in turn between speed and energy. Because energy is quantized, angular momentum is quantized as well.

As orbital angular momentum depends on the momentum, and thus on the speed, its numerical value changes when we as observers change our own movement. This does not change the path of the rotating object, just our perception of it, of course. Therefore, this change of values is closely tied to the change of our coordinate system when we move.

Now enter spin: It was recognized very early on in quantum physics that elementary particles have a property which changes in the same way as the angular momentum of the ball when we change our coordinate system. This is an intrinsic property of the particles, and unchangeable. However, the elementary particles are point-like, at least to the extent that we can resolve them. Thus, they cannot rotate in any way, as they do not have any extension. In fact, if this were an ordinary angular momentum, and the elementary particles had a small extension, then within our experimental knowledge about the upper limit of this extension, their surface would need to rotate much faster than the speed of light.

Thus, this property got its own name: Spin. The name is still inspired by the similarity to (orbital) angular momentum under a change of coordinate system, but by keeping strictly to the difference in name, it can always be distinguished from it. However, from time to time it is useful to refer to both of them together, and in this case they are called total angular momentum, which is in principle somewhat of a misnomer.

Now spin is also quantized, and there exist both half-integer and integer values for it (when choosing appropriate units). This is different from ordinary angular momentum for two reasons. First, there is no simple explanation for the quantization like the one for orbital angular momentum. There is indeed a complicated explanation, which shows that for the space-time structure we have, these are the only two possibilities consistent with this type of change under a change of the coordinate system. Second, angular momentum, when measured in the same units, can only have integer values.
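Written out (in units of the reduced Planck constant, which is the 'appropriate unit' mentioned above), the allowed values are

l = 0, 1, 2, 3, \dots \ \text{for orbital angular momentum}, \qquad s = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, \dots \ \text{for spin},

and it is the half-integer values of s that will turn out to make all the difference below.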

The latter is an intriguing difference. It has a very important consequence: Particles having integer spin behave very differently from those having half-integer spin. Therefore, these two types of particles received different names: The former are called bosons, and the latter are called fermions. This distinction is of fundamental importance to particle physics, and therefore the next two entries will discuss both types of particles in more detail. Also, neither of these types behaves in the same way as an ordinary small ball. But it turns out that if one takes the classical (long-distance) limit, both behave in the same way, and like small balls: Classically, fermions and bosons cannot be distinguished; their distinction is a pure quantum effect, which is intricately linked to the structure of space and time. That is one of the reasons why some people believe that the quantum effect of spin and gravity may be related at a deeper level, but we are very far from understanding whether this suspicion is correct.