Wednesday, August 13, 2014

Triviality is not trivial

OK, starting with a pun is probably not the wisest course of action, but there is truth in it as well.

If you have followed the various public discussions about the Higgs, you will probably have noticed the following: Though we have found it, most physicists are not really satisfied with it. Some are even repelled by it. In fact, most of us are convinced that the Higgs is only a first step towards something bigger. Why is this so? Well, there are a number of reasons, from purely aesthetic ones to deeply troubling ones. As the latter also affect my own research, I will write about a particularly annoying nuisance: The triviality referred to in the title.

To really understand this problem, I have to paint a somewhat bigger picture, before coming back to the Higgs. Let me start: As a theoretician, I can (artificially) distinguish between something I call classical physics, and something I call quantum physics.

Classical physics is any kind of physics which is fully predictive: If I know the start conditions with sufficient precision, I can predict the outcome as precisely as desired. Newton's law of gravity, and even the famous general theory of relativity belong to this class of classical physics.

Quantum physics is different. Quantum phenomena introduce a fundamental element of chance into physics. We do not know why this is so, but it is very well established experimentally. In fact, the computer you use to read this would not work without it. As a consequence, in quantum physics we cannot predict what will happen, even if we know the starting conditions as well as possible. The only thing we can do is make very reliable statements about how probable a certain outcome is.

All kinds of known particle physics are quantum physics, and have this element of chance. This is also experimentally very well established.

The connection between classical physics and quantum physics is the following: I can turn any kind of classical system into a quantum system by adding the element of chance, which we also call quantum fluctuations. This does not necessarily go the other way around. We know theories where quantum effects are so deeply ingrained that we cannot remove them without destroying the theory entirely.

Let me return to the Higgs. For the Higgs part in the standard model, we can write down a classical system. When we then want to analyze what happens at a particle physics experiment, we have to add the quantum fluctuations. And here enters the concept of triviality.

Adding quantum fluctuations is not necessarily a small effect. Indeed, quantum fluctuations can profoundly and completely alter the nature of a theory. One possible outcome of adding quantum fluctuations is that the theory becomes trivial. This technical term means the following: If I add quantum fluctuations to the theory, the resulting quantum theory describes particles which do not interact, no matter how intricately they interact in the classical version. Hence, a trivial quantum theory describes nothing interesting. What is really driving this phenomenon depends on the theory at hand. The important thing is that it can happen.
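
For the simplest Higgs-like theory, a single scalar field with a quartic self-coupling, the suspected mechanism can be sketched with the standard one-loop result: the quantum fluctuations make the coupling grow with the energy scale, until it blows up at a finite scale (the so-called Landau pole). Demanding that the theory stay sensible up to arbitrarily high scales then forces the low-energy coupling to zero, i.e. no interaction at all. A minimal sketch of this arithmetic (the function names are mine, the coefficient b depends on conventions, and the one-loop formula is only the leading approximation):

```python
import math

# One-loop coefficient for a single scalar with quartic coupling,
# in one common convention; the precise value is convention-dependent.
B = 3.0 / (16.0 * math.pi ** 2)

def lam_running(lam0, log_ratio, b=B):
    """One-loop running coupling lambda(mu), given its value lam0 at a
    reference scale and log_ratio = ln(mu/mu0). Diverges at the Landau pole."""
    denom = 1.0 - b * lam0 * log_ratio
    return lam0 / denom if denom > 0 else float('inf')

def max_coupling(log_cutoff_ratio, b=B):
    """Largest low-scale coupling still finite up to a cutoff at the
    given log_ratio. It shrinks to zero as the cutoff is pushed up."""
    return 1.0 / (b * log_cutoff_ratio)
```

The point is the second function: the further up we push the cutoff, the weaker the interaction at low energies is allowed to be, and in the limit of an infinite cutoff nothing but a free, trivial theory survives.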

For the Higgs part of the standard model, there is the strong suspicion that it is trivial, though we have no full proof for (or against) it. Since we cannot solve the theory entirely, we cannot (yet) be sure. The only thing we can say is that if we add only a part of the quantum fluctuations, only a part of the so-called radiative corrections, the theory still makes sense. Hence it is not trivial to decide whether the theory is trivial, to reiterate the pun.

Assuming that the theory is trivial, can we escape? Yes, this is possible: Adding something to a trivial theory can make it non-trivial. So, if we knew for sure that the Higgs theory is trivial, we would know for sure that there is something else. On the other hand, trivial theories are annoying for a theoretician, because you either have nothing or have to artificially remove part of the quantum fluctuations. This is what annoys me right now about the Higgs, especially as I have to deal with it in my own research.

Thus, this is one of the many reasons people would prefer to soon discover more than 'just' the Higgs.

Thursday, July 17, 2014

Why continue into the beyond?

I have just returned from an excellent 37th International Conference on High-Energy Physics. However, as splendid as the event itself was, it was in a sense bad news: No results which hint at anything substantial beyond the standard model, except for the usual suspects, statistical fluctuations. This does not mean that there is nothing - we know there is more, for many reasons. But in an increasingly frustrating sequence of years, all our observational and experimental results keep pushing it beyond our reach. Even for me as a theorist, there is just not enough substantial information to do more than speculate vaguely about what could be.

Nonetheless, I just wrote that I want to venture into this unknown beyond, and in force. Hence it is reasonable - in fact necessary - to pose the question: Why? If I do not know, and have too little information, is there any chance of hitting the right answer? The answer to this: Probably not. But...

Actually, there are two buts. One is simply curiosity. I am a theorist, and I can always pose the question of how something works, even without having a special application or situation in mind. Though this may end up as nothing, it would not be the first time that the answer to a question has been discovered long before the question. In fact, the single most important building block of the standard model, so-called Yang-Mills theory, was discovered by theorists almost a decade before it was recognized as the key to explaining the experimental results.

But this is not the main reason for me to venture in this direction. The main reason has to do with the experience I have had with Higgs physics - that despite appearances there is often a second layer to a theory. In this case, such a second layer has shifted the perception of how the things we describe in theory correlate with the things we see in experiment. Many proposed theories beyond the standard model, especially those that have caught my interest, are extensions of the Higgs sector of the standard model. It thus stands to reason that similar statements hold true in their cases. However, whether they hold true, and how they work, cannot be fathomed without looking at these theories. And that is what I want to do.

Why should one do this? Such subtle questions seem at first not really related to experiment. But understanding how a theory really works should also give us a better idea of what kind of observations such a theory can actually deliver. And then it becomes very interesting for an experiment: Since we do not at present know what to expect, we need to think about what we could expect. This is especially important because looking in every corner requires far more resources than are available to us in the foreseeable future. Hence, any insight into what kind of experimental results a theory can yield is very important for selecting where to focus.

Of course, my research alone will not be sufficient for this. Since it could easily be that I am looking at the 'wrong' theory, it would not be a good idea to put too much effort into any single one. But when many theoreticians work on many theories, and many of those theories all say that it is a good idea to look in a particular direction, then we have guidance for where to look. Then there seems to be something special about this direction. And if not, then we have excluded a lot of theories in one go.

As one person in a discussion session (I could not figure out who precisely) aptly put it at the conference: "The time of guaranteed discoveries is over." This means that now that we have all the pieces of the standard model, we cannot expect to find a new piece any time soon. All our indirect results even tell us that the next piece will be much harder to find. Hence, we are facing a situation last seen in physics in the second half of the 19th century and the beginning of the 20th: There are only some hints that something does not fit. And now we have to go looking, without knowing in advance how far we will have to walk. Or in which direction. This is probably more of an adventure than the last decades, where things were essentially happening on schedule. But it also requires more courage, since there will be many more dead ends (or worse) along the way.

Wednesday, June 11, 2014

Building a group

Those who follow my twitter feed have already seen that I will become a full professor at the Institute of Physics at the University of Graz in Austria from October on. This also implies that I will be building up a group to continue my research on the standard model of particle physics and beyond.

I would like to use this opportunity to write a little bit about what happens behind the scenes right now, rather than about the outcome of the research we are doing. Such a 'behind-the-scenes' look is also interesting, I think, since it shows how research is done, not only what it finds. Hence, I may write such entries more often in the future. If you have any comments or thoughts on this, I would be happy to read your opinion.

One of my major tasks right now is to decide what the research focus of this group will be, and how I will organize the resources I will have for this purpose. These are the most important steps, as I have to decide which kind of positions I will open (especially for PhD and master theses), as well as which kind of computers I have to arrange for. Since I will now have the resources to work on more projects than before, I need to structure my activities so that I do not get lost. It would not do to think that, now that I have more possibilities, I should just jump into many new fields, putting each and every member of the group on a separate topic. In the long run, I will have to take care that methods and techniques developed and used in my group are handed down from one generation of students to the next. This will only be possible if the topics they are applied to are sufficiently close to one another. That is particularly true for my numerical and technically more involved analytical tools.

As a consequence, I decided to establish two main directions. One will be concerned with neutron stars. The other will be looking at Higgs physics, continuing my research on combinations of Higgs particles.

But, of course, just staying with what I already do will not be sufficient. Especially as there are so many interesting problems offering themselves, like the one of electric charge, about which I wrote last time. Since the technology I have accumulated so far is more than sufficient to start working on it, without needing to first invent a new approach, it is a natural way to expand. The same is true for the so-called technicolor theories, on which I did some exploratory work in the past. Hence I decided to make the first new additions to my research fields in these areas. Also, both can be done with the present infrastructure, so I do not need to wait for anything new.

Now that I have decided what to do, I still have to make it happen. What are the steps I will have to take? The most important goal is to have some students with whom I can work on all these exciting research topics. This means opening positions, announcing them, and finding the right people. This includes not only PhD students, but also master and bachelor students. To reach them, I will have to set up a new web page and other structures to show what I am working on. Just as important will be giving good lectures in general, and also special lectures about interesting topics. I am very much looking forward to this part. I have already started to prepare the first lecture I will give in the winter term. It will focus on supersymmetry, another candidate for something beyond the standard model.

In the long run, I will have to acquire third-party funding to enlarge the group beyond what I will have when I start. That, and the accompanying work, is worth a blog entry of its own, so I will not write about it now. I will return to it at a later time.

Tuesday, May 27, 2014

Why does chemistry work?

This seems an odd question to ask in a blog about particle physics. But, as you will see, it actually connects to a very deep problem of particle physics. A problem to which I am currently turning my attention. Hence, in preparation for things to come, I write this blog entry.

So, where is the connection? Well, chemistry is actually all about the electromagnetic interaction. One of its most important features is that atoms are electrically neutral. This is only possible if the positive charge of the atomic nucleus exactly matches the negative charge of its surrounding electrons. The electrons are elementary particles, as far as we know. The atomic nucleus, however, is ultimately made up of quarks. So the last statement boils down to the fact that the total electric charge of the quarks in the nucleus has to compensate that of the electrons. Sounds simple enough. And this is in fact something which has been established very precisely in experiment. The compensation is much better than one part in a billion - within our best efforts, the cancellation appears perfect.

The problem is that it is not necessary, according to our current knowledge. Quarks and electrons are very different objects in particle physics. So, why should they carry electric charge such that this balancing is possible? The answer to this is, as often: We do not know. Yet.

When we look at electromagnetism alone, there is actually no theoretical reason why they should have balanced electric charges. Electromagnetism would work in exactly the same way if they did not, if they had arbitrarily different charges. Of course, atoms would then no longer be neutral. And chemistry would then work quite differently.

If there is no simple explanation in the details, one should look at the big picture. Perhaps it helps. In this case it does, but this time not in a very useful way.

Electromagnetism does not stand on its own. It is part of the standard model of particle physics. And here things start to become seriously bizarre.

I am a theorist. Hence, the internal consistency of a theory is something quite important to me. The standard model of particle physics turns out to be consistent as a theory only if very precise relations hold between the various particles in it - and the charges they carry. The exact cancellation of electric charges in the atoms we observe is one of the very few possibilities for how the standard model can work theoretically.
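
These consistency conditions (the so-called anomaly cancellations) are, at bottom, sums over the charges of all the particles in one generation, and they must come out exactly zero. A small sketch of the arithmetic, using the standard charge assignments (the bookkeeping here, listing everything as left-handed fields with their hypercharges and multiplicities, is just one common way to organize it):

```python
from fractions import Fraction as F

# One generation of the standard model, written as left-handed fields:
# (hypercharge Y, multiplicity from color and weak-doublet states)
generation = [
    (F(1, 6), 6),    # quark doublet: 3 colors x 2 weak states
    (F(-2, 3), 3),   # anti-up quark: 3 colors
    (F(1, 3), 3),    # anti-down quark: 3 colors
    (F(-1, 2), 2),   # lepton doublet: 2 weak states
    (F(1, 1), 1),    # anti-electron (positron)
]

# Two of the consistency conditions: the sums of Y and of Y^3 over a
# full generation must vanish exactly.
sum_Y = sum(n * y for y, n in generation)
sum_Y3 = sum(n * y ** 3 for y, n in generation)
```

Both sums come out exactly zero, and only because quarks and leptons appear together with precisely these charges. The neutrality of the hydrogen atom follows from the same assignments: the proton's two up quarks and one down quark carry 2/3 + 2/3 - 1/3 = 1, exactly opposite to the electron's -1.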

So, did we explain it now? Unfortunately, no. "The theory should work" is not an adequate requirement for a description of nature. The game goes the other way around. Nature dictates, and our theory must describe it. Experiment rules theory in physics.

So the fact that we need this cancellation is troublesome: We only know that we need it. It is just there; we cannot explain it with what we know.

So this is the point where speculation enters. We know theories in which the electromagnetic charge cancellation is not 'just there', but follows immediately from the structure of the theory. The best-known examples of such theories are the so-called grand unified theories. In these, there is a super-force, and the known forces of the standard model are just different facets of it. That electrons and quarks have canceling charges in such a theory stems simply from the fact that everything originates in this one super-force.

It is possible to write down a theory of such a super-force which is compatible with our current experiments. But so is the standard model. Hence, only if we find an experimental result in which such a super-force theory behaves distinctly from the standard model can we be sure that it exists. This is not (yet?) the case.

At the same time, we so far know relatively little about many aspects of such a theory. This is the reason I am starting to get interested in it. In particular, there are still conceptual questions we need to answer. I will write about them in future entries, because it will be quite interesting and challenging to understand these things.

Friday, April 11, 2014

News about the state of the art

Right now, I am at a workshop in Benasque, Spain, called 'After the Discovery: Hunting for a non-standard Higgs Sector'. The topic is essentially this: We now have a Higgs. How can we find what else is out there? Or at least make sure that it is currently out of our reach? That there is something more is beyond doubt. We know too many cases where our current knowledge is certainly limited.

I will not go on describing everything presented at this workshop. That would be too much, and there are certainly other places on the web where this is done. In this entry I will therefore just describe how what is discussed at the workshop relates to my own research.

One point is certainly what the experiments find. At such specialized workshops, you can get many more details of what they actually do. Since any theoretical investigation is to some extent approximate, it is always good to know what is known experimentally. Hence, if I get a result in disagreement with experiment, I know that something is wrong. Usually it is the theory, or the calculations performed: some assumption being too optimistic, some approximation too drastic.

Fortunately, so far nothing is at odds with what I have. That is encouraging, though no reason to become overly confident.

The second aspect is to see what other people do. To see which other ideas still hold up against experiment, and which have failed. Since different people do different things, combining the knowledge, successes and failures of the different approaches helps you. It helps not only in avoiding too-optimistic assumptions or other errors; other people's successes also provide new input.

One particular example at this workshop, for me, is the so-called 2-Higgs-doublet models. Such models assume that besides the known Higgs there exists another set of Higgs particles. Though this is not obvious, the 'doublet' in the name indicates that such models have four more Higgs particles, one of them being just a heavier copy of the one we know. I had recently considered looking into such models, though for quite different reasons. Here, I learned how they can be motivated for entirely different reasons, and especially why they are so interesting for ongoing experiments. I also learned much about their properties, and what is known (and not known) about them. This gives me quite a few new perspectives, and some new ideas.

Ideas I will certainly pursue, once I am back.

Finally, collecting all the talks together, they draw the big picture. They tell me where we are now: What we know about the Higgs, what we do not know, and where there is room (left) for much more than just the 'ordinary' Higgs. It is an update of my own knowledge about particle physics. And it finally delivers the list of what will be looked at in the next couple of months and years. I now know better where to look for the next result relevant for my research, and relevant for the big picture.

Wednesday, March 12, 2014

Precision may matter

The latest paper I have produced is an example of an often overlooked part of scientific research: It is not enough to get a qualitative picture. Sometimes the quantitative details modify or even alter the picture. Or, put more bluntly, sometimes precision matters.

When we encounter a new problem, we usually first try to get a rough idea of what is going on. It starts with a first rough calculation. Such an approach is often not very precise. Still, it creates a first qualitative picture of what is going on. This may be rough around the edges, and often does not perfectly fit the bill. But it usually gets the basic features right. Performing such a first estimate is often not too serious a challenge.

But once this rough picture is there, the real work begins. Almost fitting is not quite the same as fitting. This is the time when we need to get quantitative. This implies that we need to use more precise, probably different, but almost certainly more tedious methods. These calculations are usually not as simple, and a lot of work is involved. Furthermore, we usually cannot solve the problem perfectly in the first round of improvement. We get things a bit rounder at the edges, and the picture normally starts to fit better. Still not everywhere, but better. Often a second, and sometimes many more, rounds are necessary.

Fine, you may say. If things are improving, why bother doing even better? Is not almost fitting as good as fitting? But it is not quite the same. The best-known examples we find in history. At the beginning of the 20th century, the picture of physics seemed to fit the real world almost perfectly. There were just some small corners where it still seemed to require a bit of polishing. These small problems actually led to one of the greatest changes in our understanding of the world, giving birth to both quantum physics and the theory of relativity. Actually, today we are again in a similar situation. Most of what we know, especially the standard model, fits the bill very nicely. But we still have some rough patches. This time, we have learned our lesson, and keep digging into these rough patches. Our secret hope is, of course, that a similar disruption will occur, and that our view of the world will be fundamentally changed. Whether this will be the case, or whether we just have to slightly augment things, we do not yet know. But it will surely be a great experience to figure it out.

Returning to my own research, it is precisely this situation that I am looking at. However, rather than looking at the whole world, I have been looking at a very simplified theory, one that involves only the gluons. This is a much simpler theory than the standard model. Still, it is so complicated that we have not (yet) been able to solve it completely. We have made great progress, though, and it seems that we almost got it right. Still, also here, some rough edges remain. In this paper, I am looking precisely at these edges, just checking how rough they really are. I am not even trying to round them further. I am not the first to do this, and many other people have looked at them in one way or another. However, doing it more than once, and especially from slightly different angles, is important. It is part of a system of checks and balances, to avoid errors. It is also true in science: Nobody is perfect. And though many calculations are correct, even the greatest mind may fail sometimes. Therefore it is very important to cross-check any result.

In this particular case, everything is correct. But, by looking more precisely, I found some slight deviations. These had not been found previously, as precision is almost always also a question of the amount of resources invested. In this case, the resources are mostly computing time, and I have just poured a lot of it in. These slight deviations do not require a completely new view of the whole theory. But they change some details. This may not sound like much. But if they should be confirmed, they provide closure in the following sense: Previously, some conclusions remained dangling, and seemed to be at odds with each other. There were some ways out, but the previously known results rather suggested a more fundamental problem. My new contribution shifts these old results slightly, and makes them more precise. The new interpretation now fits much better with the suspected ways out than with a fundamental problem. Hence, looking closer has in this case improved our understanding.

Hence, theoretical physics often has much in common with detective work. We start with a suspicion. But then tedious work on the details is required to uncover more and more of the whole picture, until either the original suspicion is confirmed, or it shifts to a different suspect, one which may even have been completely overlooked in the beginning. However, at least normally, nobody tries to kill us if we come too close to the truth.

Monday, February 3, 2014

The trouble with new toys

You may remember that one of the projects I am working on is understanding so-called neutron stars. These are the remnants of heavy stars, which die in a gigantic explosion called a supernova. One of the main problems with understanding these neutron stars is that it is far too expensive to simulate them in detail using computers. In our research, we try to circumvent this problem by using not the original theory describing neutron stars, but a slightly modified version. For this modified theory, we actually can do simulations. So is everything shiny now? No, unfortunately not. We recently published a new paper about these problems. Today, I will outline what we did in this paper.

So what is actually the problem? The problem is that some of our theories are not linear. What does linear mean? Well, a theory is called linear if, when we apply an external input to it, the effect it has on the theory is of (roughly) the same size as whatever we applied. In contrast, for anything non-linear, the response can be much larger, or much smaller, than whatever we applied. Unfortunately, the strong interaction, which is responsible for neutron stars, is non-linear. Hence, even though we modified it just a little bit, we could potentially get very strong changes. Therefore, we had to make sure that whatever we did was not having unplanned and strong effects. This task led to the mentioned paper.
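
The distinction can be made concrete with a toy example (these functions are purely illustrative, not the actual equations of the strong interaction):

```python
def linear_response(x):
    """A linear system: the output is always proportional to the input."""
    return 2.0 * x

def nonlinear_response(x):
    """A non-linear system: the response grows much faster than the input."""
    return x ** 3
```

Doubling the input doubles the linear response, but multiplies the cubic response by eight. In a non-linear theory, a 'small modification' of the input is therefore no guarantee of a small modification of the outcome.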

The main question we have to answer is: If the theory is so sensitive to modifications, were the effects of our modifications still harmless enough? Can we still learn something?

The answer is, as always: it depends. To judge the similarities, we have looked at the hadrons, the particles built up from quarks and gluons. In the strong interaction, the masses of these hadrons follow a very special pattern. In particular, there are some unusually light ones, then a few intermediate ones, and then, already quite heavy, the first one which plays an important role in everyday life: The proton, the nucleus of the hydrogen atom. We found that in our modified theory this pattern repeats itself. This is already a good sign. However, we also found some indications that not all is well. Some of the lighter particles, especially the lightest ones, differ in a number of details from those in nature.

Since we are mostly interested in neutron stars, we also did the calculations at large densities. There, we saw that the slightly different properties of the lightest particles do indeed play a role. At quite small densities, we observe a behavior which we are reasonably sure will not occur in nature. So is all lost then? It does not seem so. While the behavior at these densities is different, this will probably not play an important role at the densities we are really interested in. And indeed, at higher densities the theory behaved according to expectations: It seems to behave in a way we would guess based on observations of real neutron stars, and on general arguments. This is quite encouraging. Still, we also encountered two more challenges. One is that to make a definite statement, we will need much more precision: Some of what we see is sensitive to details. We need to understand this better. And this will require many more calculations.

The other is that we are still not quite sure whether some special kind of different particle plays too important a role. This special kind of particle is similar to the proton, but not present in nature; it is only a feature of the modified theory. It is a so-called hybrid. In contrast to the proton, which consists of three quarks and no gluons, it is made out of one quark and three gluons. There are certain technical reasons why this particle could be a problem when trying to understand neutron stars. So far, it has escaped detection in our calculations. We have to find it, to make really sure what is going on. This will be a challenge.

Fortunately, even in the worst-case scenario of both problems, what we did will not be irrelevant. On the one hand, it was a genuinely new theory we looked at, and we have already learned very much about how theories in general work. On the other, what we created will also serve as a benchmark for other methods. If someone creates a new method to get to the neutron star's core, she or he can test it against our simulations, to build confidence in it.

Wednesday, January 15, 2014

For each yes and no there is a perhaps

Last time, I wrote about my research on the Higgs. In particular, I wrote about how we tested perturbation theory using numerical simulations. I was quite optimistic back then that I would have results by now, which could be of either of two types: Either perturbation theory is a good description, or it is not.

By now we have finished this project, and you can download the results from the arXiv. The arXiv is a server where you can find essentially every published result in particle physics of the last twenty years, legally and free of charge. But let's get back to our results. As I should have expected, things turned out differently. Instead of a clear yes or no answer, I got a perhaps.

The original question was under which circumstances perturbation theory can be applied. It appears to be a simple enough question. Originally, it looked like this would depend on the size of the Higgs mass relative to the W and Z masses. And yes, it does. But. We found more.

We found different regimes. One is where the Higgs is lighter than the W and Z. Of course, this is not the situation we encounter in nature, where it is about 50% heavier. But as theoreticians, we are allowed to play this kind of game. Anyway, in this case we confirmed what was already indirectly known from other investigations: Perturbation theory seems not to work. Ever. While the first statement is not too surprising, the second one is. Naively, one expected that if the interactions between the Higgs and the W and Z are of certain relative sizes, perturbation theory would still work. We did not find any hint of that. Is this already a no, then? Unfortunately not. As I have described earlier, it is not so easy to relate a simulation to reality. Even if it is only a fantasy version of reality, as in this case. Hence, we cannot be sure that we have exhausted all possibilities. The only thing we can say for sure is that there are cases where perturbation theory does not work. Perhaps there is something more, some other cases. And thus there is the first perhaps.

The situation gets even more interesting when the Higgs is heavier than the W and Z, but lighter than twice their mass. In this regime, perturbation theory is expected to be pretty good. At least here, we find a rather clear answer: Perturbation theory does indeed work well. Wherever we looked, we did not find anything to the contrary. Of course, again we cannot exclude that somewhere else there is a different case. But so far, everything seems to be fine.

When the Higgs finally hits the magic limit of twice the W and Z masses, something unexpected happens. This limit is particularly interesting because above it, the Higgs can decay into W and Z. The expectation was that perturbation theory would still be valid, at least until reaching several times the W and Z mass. But here, we found something odd. We found both possibilities, depending on the relative interaction strengths. In one case, perturbation theory keeps working for a long time. In the other, perturbation theory starts to fail already a little above this critical mass. We do not yet really understand what is going on there, and what really characterizes the two different cases. We are working on this right now. But whatever it is, it is different from what we expected. And this once more teaches us to always expect that our naive expectations will not be fulfilled. Things remain full of surprises, even if you think you have understood them.

Monday, December 2, 2013

Knowing the limits

Some time ago, I presented one of the methods I am using: the so-called perturbation theory. This fancy name signifies the following idea: If we know something, and we add just a little disturbance (a perturbation) to it, then this will not change things too much. If this is the case, then we can systematically work out the consequences of the perturbation. Mathematically, this is done by first calculating the direct impact of the perturbation (the leading order). Then we look at the first correction, which involves not only the direct effect but also the simplest indirect effect, and so on.
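As a toy illustration of this order-by-order idea (a made-up example, nothing to do with the actual field-theory calculations): suppose the exact answer as a function of the perturbation strength g were f(g) = 1/(1+g). Perturbation theory would deliver it piece by piece as the series 1 - g + g² - ..., and for a small perturbation each further order improves the answer:

```python
# Toy model of perturbation theory: the "exact" answer f(g) = 1/(1+g)
# is approximated order by order by its expansion 1 - g + g^2 - ...
def perturbative(g, order):
    """Partial sum of the perturbative series up to the given order."""
    return sum((-g) ** n for n in range(order + 1))

g = 0.1  # a small perturbation
exact = 1.0 / (1.0 + g)
for order in range(4):
    approx = perturbative(g, order)
    print(order, approx, abs(approx - exact))
```

Each additional order shrinks the error by roughly another factor of g - as long as g is small.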

Back then, I already wrote that, nice as the idea sounds, it is not possible to describe everything by it, although it works very beautifully in many cases. But this leaves us with the question of when it does not work. We cannot know this exactly. That would require knowing the theory perfectly, and then there would be no need to do perturbation theory in the first place. So how can we know what we are doing?

The second problem is that in many cases anything but perturbation theory is technically extremely demanding. Thus the first thing one checks is the simplest one: whether perturbation theory itself still makes sense. Indeed, it turns out that perturbation theory usually starts to produce nonsense if we increase the strength of the perturbation too far. This clearly indicates the breakdown of our assumptions, and thus the breakdown of perturbation theory. However, this is a best-case scenario. Hence, one wonders whether this approach could be fooling us. Indeed, it could be that the approximation breaks down long before it gets critical, so that it first produces bad (or even wrong) answers before it produces nonsensical ones.
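A sketch of what such a breakdown can look like, again with a made-up toy series rather than a real field-theory one: in many realistic cases the size of the order-n contribution grows like n! times gⁿ. The contributions first shrink, so the series seems to work, but beyond an order of roughly 1/g they explode:

```python
from math import factorial

# Toy model of a typical perturbative series whose n-th order
# contribution has size n! * g^n. The contributions first shrink
# (perturbation theory seems fine), but beyond order ~1/g they grow
# again, and the "corrections" become nonsense.
def contribution(n, g):
    return factorial(n) * g ** n

g = 0.1
for n in range(16):
    print(n, contribution(n, g))
```

The treacherous part is the intermediate region: the first few orders there still look perfectly reasonable, even when the later answer is already unreliable.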

This seems like serious trouble. What can be done to avoid it? There is no way inside perturbation theory to deal with it. One way is, of course, to compare to experiment. However, this is not always the best choice. For one, it is always possible that our underlying theory actually fails. Then we would misinterpret the failure of our ideas of nature as the failure of our methods. One would therefore like to have a more controllable way. In addition, we often reduce complex problems to simpler ones, to make them tractable. But the simpler problems often have no direct realization in nature, and thus we have no experimental access to them. Then this way is not possible either.

Currently, I find myself in such a situation. I want to understand, in the context of my Higgs research, to what extent perturbation theory can be used. In this context, the perturbation is usually the mass of the Higgs. The question then becomes: Up to which Higgs mass is perturbation theory still reliable? Perturbation theory itself predicts its failure at not more than eight times the mass of the observed Higgs particle. The question is whether this is adequate, or whether it is too optimistic.

How can I answer this question? Well, here enters my approach of not relying on a single method only. It is true that we cannot calculate as much with methods other than perturbation theory, just because anything else is too complicated. But if we concentrate on a few questions, enough resources are available to calculate things otherwise. The important task is then to make a wise choice. That is, a choice from which one can read off the desired answer, in the present case whether perturbation theory applies or not. And at the same time to do something one can afford to calculate.

My present choice is to look at the relation between the W boson mass and the Higgs mass. If perturbation theory works, there is a close relation between both, if everything else is adjusted in a suitable way. The perturbative result can already be found in textbooks for physics students. To check it, I am using numerical simulations of both particles and their interactions. Even this simple question is an expensive endeavor, and several tens of thousands of days of computing time have been invested (we always quote how much time it would take a single computer to do all the work by itself). The results I have found so far are intriguing, but not yet conclusive. However, it seems that in just a few more weeks the fog will finally lift, and at least something can be said. I am looking forward to this date with great anticipation, since one of two things will happen: something unexpected, or something reassuring.
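For scale, a back-of-the-envelope sketch of what this way of counting means in practice - the figures here are invented for illustration, not the actual numbers of this project:

```python
# Computing time is quoted in single-computer days. With hypothetical
# numbers: 30000 such days, spread over a cluster of 500 cores,
# still keep the machines busy for about two months of real time.
cpu_days = 30000       # hypothetical total workload
cores = 500            # hypothetical cluster size
wall_days = cpu_days / cores
print(wall_days)       # 60.0
```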

Monday, November 4, 2013

How to mix ingredients

I have written earlier that one particularly powerful way to do calculations is to combine different methods. In this entry, I will be a bit more specific. The reason is that we have just published a proceedings contribution in which we describe our progress in preparing for such a combination. A proceedings contribution is, by the way, just a fancy name for the write-up of a talk given at a conference.

How to combine different methods always depends on the methods. In this case, I would like to combine simulations and the so-called equations of motion. The latter describe how particles move and how they interact. Because there is usually an infinite number of them - a particle can interact with another one, or two, or three, or... - you can usually not solve them exactly. That is unfortunate. You may then ask how one can be sure that the results after any approximation are still useful. The answer is that this is not always clear. It is here where the combination comes in.
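A toy version of such an infinite tower, much simpler than the real equations: in a model with a single number x instead of a field (with weight exp(-x²/2 - g x⁴/4)), integration by parts gives exact relations n·m₍n₋₁₎ = m₍n₊₁₎ + g·m₍n₊₃₎ between the moments mₙ = ⟨xⁿ⟩. The equation for m₂ needs m₄, the one for m₄ needs m₆, and so on forever. Closing the tower with the uncontrolled guess m₆ ≈ 15·m₂³ gives an approximation one can test against the exact answer:

```python
import math

# Truncate the infinite tower of moment equations for the weight
# exp(-x^2/2 - g*x^4/4):  n*m_{n-1} = m_{n+1} + g*m_{n+3}.
def truncated_m2(g, steps=200):
    m2, m4 = 1.0, 3.0                        # start from g = 0 (Gaussian) values
    for _ in range(steps):
        m4 = 3.0 * m2 - g * 15.0 * m2 ** 3   # n = 3 relation, with m6 ~ 15*m2^3
        m2 = 1.0 - g * m4                    # n = 1 relation (m0 = 1)
    return m2

def exact_m2(g, h=1e-3, cut=10.0):
    # brute-force numerical integration as the "exact" benchmark
    num = den = 0.0
    for i in range(int(2 * cut / h) + 1):
        x = -cut + i * h
        w = math.exp(-x * x / 2 - g * x ** 4 / 4)
        num += x * x * w
        den += w
    return num / den

g = 0.1
print(truncated_m2(g), exact_m2(g))   # close, but not identical
```

How close the truncated answer comes to the exact one is precisely the kind of question the combination of methods is meant to settle.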

The solutions to the equations of motion are not numbers, but mathematical functions. These functions describe, for example, how a particle moves from one place to another. Since this travel depends on how far apart these places are, it must be described by a function. Similarly, there are results which describe how two, three, four... particles interact with each other, depending on how far they are apart from each other. Since all of these functions describe how two or more things are related to each other - a thing which is called a correlation in theoretical physics - these functions are called correlation functions.

If we were able to solve the equations of motion exactly, we would get the exact answer for all these correlation functions. We are not, and thus we will not get the exact ones, but different ones. The important question is how different they are from the correct ones. One possibility is, of course, to compare to experiment. But it is in all cases a very long way from the start of the calculation to results which can be compared to experiment. There is therefore a great risk that a lot of effort is invested in vain. In addition, if there is a large disagreement, it is often not easy to identify where the problem lies.

It is therefore much better if a possibility exists to check this much earlier. This is the point where the numerical simulations come in. Numerical simulations are a very powerful tool. Up to some subtle, but likely practically irrelevant, fundamental questions, they essentially simulate a system exactly. However, the closer one moves to reality, the more expensive the calculations become. Most expensive is to have very different scales in a simulation, e.g. large and small distances simultaneously, or large and small masses. There are also some properties of the standard model, like parity violation, which so far cannot be simulated in any reasonable way at all. But within these limitations, it is possible to calculate pretty accurate correlation functions.
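To make this less abstract, here is a deliberately tiny "simulation" - a free scalar field on a one-dimensional lattice, far simpler than the four-dimensional theories meant above, but it shows the principle of measuring a correlation function:

```python
import math
import random

# Metropolis simulation of a free massive scalar field on a periodic
# 1d lattice; we measure the two-point correlation function
# C(d) = <phi_i phi_{i+d}>, which falls off with the distance d.
random.seed(42)
N, m2 = 32, 1.0                       # lattice size and squared mass

def action_diff(phi, i, new):
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    def s(x):
        return (x - left) ** 2 / 2 + (right - x) ** 2 / 2 + m2 * x * x / 2
    return s(new) - s(phi[i])

phi = [0.0] * N
corr = [0.0] * 4
measured = 0
for sweep in range(10000):
    for i in range(N):                # one Metropolis update per site
        new = phi[i] + random.uniform(-1.0, 1.0)
        if action_diff(phi, i, new) < -math.log(random.random()):
            phi[i] = new
    if sweep >= 500:                  # skip thermalization
        measured += 1
        for d in range(4):
            corr[d] += sum(phi[i] * phi[(i + d) % N] for i in range(N)) / N
corr = [c / measured for c in corr]
print(corr)   # the further apart the two points, the smaller the correlation
```

In the real four-dimensional theories the same logic applies, only with vastly larger lattices and far more complicated actions.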

And this is how the methods are combined. The simulations provide the correlation functions. These can then be used with the equations of motion in two ways. Either they serve as a benchmark, to check whether the approximations make sense. Or they are fed into the equations as a starting point for solving them. Of course, the aim is then not to merely reproduce them, as otherwise nothing would have been gained. Both possibilities have been used very successfully in the past, especially for hadrons made from quarks, and to understand how the strong force has influenced the early universe or plays a role in neutron stars.

Our aim is to use this combination for Higgs physics. What we did, and have shown in the proceedings, was to calculate the correlation functions of a theory including only the Higgs and the W and Z. This will now form the starting point for getting the leptons and quarks into the game as well, using the equations of motion. We do it this way to avoid the problem with parity, and to include the very different masses of the particles. This will be the next step.

Monday, October 14, 2013

Looking for something out of the ordinary

A few days ago, I was at a quite interesting workshop on "Anomalous quartic gauge couplings". Since the topic itself is quite interesting and has significant relevance to my own research on Higgs physics, I would like to spend this entry on it. Let me start by describing what the workshop was about, and what is behind its rather obscure title.

At the LHC experiments, what we do is smash two protons into each other. Occasionally, one of two things may happen. The first possibility is that in the process each proton emits a W or Z boson, the carriers of the weak force. These two particles may collide with each other, but emerge without change. That is what we call an elastic scattering. Afterwards, these bosons are detected. Or rather, they are indirectly detected via their decay products. The second possibility is that one of the protons emits a very energetic W or Z, which then splits into three of these afterwards. These are then, again indirectly, observed. This is called an inelastic effect. Of course, much more is usually going on, but I will concentrate here on the most important part of the collision.

At first sight, the two types of events may not seem to have a lot to do with each other. But they have something important in common: In both cases four particles are involved. They may be any combination of Ws and Zs. To avoid further excessive use of 'W or Z', I will just call them all Ws. The details of which is which are not really important right now.

The fact that four are involved already explains the quartic in the name of the workshop. In the standard model of particle physics, what happens can occur mainly in one of two ways. One is that two Ws form another particle for some time. This may either be another W or Z, or also a Higgs. Or all four can interact with each other at the same time. The latter is called a quartic gauge coupling, which completes the second half of the name of the workshop.

Now, what is not right with this, given that we talk about something anomalous? Actually, everything is all right with it in the standard model. But we are not only interested in what there is in the standard model, but also in whether there is something else. If there is something else, what we observe should not fit with what we calculate in the standard model alone. Such a difference would thus be anomalous. Hence, if we were to measure that the quartic gauge coupling is different from the one we calculate in the standard model, we would call it an anomalous quartic gauge coupling. And then we would be happy, since we would have found something new. Hence, the workshop was about looking for such anomalous quartic gauge couplings. So far, we did not find anything, but we have not yet seen many such events. Far too few to make any statements. But we will record many more in the future. Then we can really make a statement. At this workshop we were thus essentially preparing for the new data, which we expected to come in once the LHC is switched on again in early 2014.

What has this to do with my research? Well, I try to figure out whether two Higgs particles, or two Ws, can form a bound state. If such bound states exist, they would form exactly in such processes, for a very short time. Afterwards, they decay again into the final Ws. If they form, they contribute to what we know; they are part of the standard model. So to see something new, we have to subtract their contributions, if they are there. Otherwise, they could be mistaken for a new, anomalous effect. Most important for me at the workshop was to understand what is measured, and how the bound states could contribute. It was also important to know what can be measured at all, since any experimental constraint I can get would help me improve my calculations. This kind of connection between experiment and theory is very important for us, as we are still far from the point where our results are perfect. The developments in this area therefore remain very important to my own research project.

Wednesday, September 11, 2013

Blessing and bane: Redundancy

We have just recently published a new paper. It is part of my research on the foundations of theoretical particle physics. To fully appreciate its topic, it is necessary to say a few words on an important technical tool: Redundancy.

Most people have heard the term already when it comes to technology. If you have a redundant system, you have the same system two or more times. If the first one fails, the second takes over, and you have time to do repairs. Redundancy in theoretical physics is a little bit different. But it serves the same end: to make life easier.

When one thinks about a theory in particle physics, one thinks about the particles it describes. But if we wrote down a theory using only the particles which we can observe in experiment, these theories would very quickly become very complicated. Too complicated, in fact, in most cases. Thus people found a trick very early on: If you artificially add something more to the theory, it becomes simpler. Of course, we cannot really just add it, because otherwise we would have a different theory. What we actually do is this: We start with the original theory. Then we add something additional. We make our calculations. And from the final result we then remove what we added. In this sense, we added a redundancy to our theory. It is a mathematical trick, nothing more. We imagine a theory with more particles, and by removing everything superfluous at the end, we end up with the result for our original problem.

Modern particle physics would not be imaginable without such tricks. The power of redundancy is one of the first things we learn when we start particle physics. A particularly powerful case is to add additional particles. Another one is to add something external to the system. Like opening a door. It is the latter kind with which we had to deal.

Now, what has this to do with our work? Well, redundancies are a powerful tool. But one has to be careful with them nonetheless. As I have written, at the end we remove everything we added too much. The question is: Can this always be done? Or does everything become so entwined that this is no longer possible? It was exactly such a question that we looked at.

To do this, we considered a theory of only gluons, the carriers of the strong force. There has been a rather long debate in the scientific community about how such gluons move from one place to another. A consensus has only recently started to emerge. One of the puzzling things was that one could prove certain properties of their movement mathematically. Surprisingly, numerical simulations did not agree with this proof. So what was wrong?

It was an example of the importance of reading the fine print carefully. The proof made some assumptions. Making assumptions is not bad. It is often the only way of making progress: make an assumption, and see whether everything fits together. Here it did not. When studying the assumptions, it turned out that one of them had to do with such redundancies.

What was done was essentially to add an artificial sea of such gluons to the theory. At the end, this sea was made to vanish, to recover the original result. The assumption was that the sea could be removed without affecting how the gluons move. What we found in our research was that this is not correct. When removing the sea, the gluons cling to it in such a way that for any sea, no matter how small, they still move differently. Thus, removing the sea little by little is not the same as starting without the sea in the first place. Hence, the introduction of the sea was not permissible, and this explains the discrepancy. There have been a number of further results along the way, where we learned a lot more about the theory, and about gluons, but this was the essential result.

This may seem a bit strange. Why should an extremely tiny sea have such a strong influence? I am talking here about a difference of principle, not just a number.

The reason for this can be found in a very strange property of the strong force, which is called confinement: A gluon cannot be observed individually. When the sea is introduced, it offers the gluons the possibility to escape into the sea - a loophole in confinement. It is then a question of principle: Any sea, no matter how small, provides such a loophole. Thus, there is always an escape for the gluons, and they can therefore move differently. At the same time, if there is no sea to begin with, the gluons remain confined. Unfortunately, this loophole was buried deep in the mathematical formalism, and we first had to find it.
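The mathematics behind "removing the sea little by little is not the same as never having it" is a pair of limits that do not commute. This can be mimicked by a very simple toy equation (only an analogy, not the actual gluon equations): solve m = tanh(2(m + h)) by iteration, where the small external seed h plays the role of the sea:

```python
import math

# Self-consistency equation m = tanh(2*(m + h)), solved by iteration.
# Any nonzero seed h drives the solution to a value near 0.96, while
# starting strictly at h = 0 leaves it at exactly zero forever:
# the limit h -> 0 is not the same as setting h = 0 from the start.
def solve(h, steps=10000):
    m = 0.0
    for _ in range(steps):
        m = math.tanh(2.0 * (m + h))
    return m

for h in (1e-2, 1e-4, 1e-6):
    print(h, solve(h))     # all close to 0.96
print(0.0, solve(0.0))     # exactly 0.0
```

The gluon case is of course vastly more involved, but the moral is the same: an arbitrarily small addition can change the answer by a matter of principle, not just by a small number.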

This taught us an important lesson: While redundancies are a great tool, one has to be careful with them. If you do not introduce your redundancies carefully enough, you may alter the system in a way too substantial to be undone. We now know what to avoid, and can go on making further progress.