Tuesday, August 6, 2013

Picking the right theory

One of the more annoying facts of modern particle physics is that our theories cannot stand purely alone. All of them have some parameters, which we are not able to predict (yet). The standard model of particle physics has some thirty-odd parameters, which we cannot determine by theoretical investigations alone. Examples are the masses of the particles. We need to measure them in an experiment.

Well, you may say that this is not too bad. If we just have to make a couple of measurements, and can then predict all the rest, this is acceptable. We just need to find as many independent quantities as there are parameters, and we are done. In principle, this is correct. But, once more, it is the gap between experiment and theory which makes life complicated. Usually, what we can easily measure is something which is very hard to compute theoretically. And, of course, something we can calculate easily is hard to measure, if at all possible.
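To make the idea of fixing parameters from measurements concrete, here is a minimal sketch in Python. The toy theory, its observables, and the "measured" values are all made up for illustration; in a real theory, computing the map from parameters to observables is exactly the hard part.

```python
import numpy as np

# Hypothetical toy theory with two parameters (a, b). The observables
# here are simple functions of them; in a real theory this map is
# the part that is very hard to compute.
def observables(a, b):
    return np.array([a + b, a * b, a - 2.0 * b])

# Pretend these values were measured, each with an experimental error.
measured = np.array([3.05, 1.9, 0.1])
errors = np.array([0.1, 0.2, 0.1])

# Scan a grid in parameter space and keep the point with the best chi^2.
best, best_chi2 = None, np.inf
for a in np.linspace(0.0, 3.0, 151):
    for b in np.linspace(0.0, 3.0, 151):
        chi2 = np.sum(((observables(a, b) - measured) / errors) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = (a, b), chi2

print("best-fit parameters:", best, "with chi^2 =", best_chi2)
```

With more independent measurements than parameters, as here, the parameters are pinned down only up to the experimental errors, which is precisely the situation described above.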

The only thing left is therefore to do our best, and bring theory and experiment as close to each other as possible. As a consequence, we usually do not know the parameters exactly, but only within a certain error. This may still not seem too bad. But here enters something which we call theory space. This is an imaginary space in which every point corresponds to one particular set of parameters for a given theory.

In principle, when we change the parameters of our theory, we do not just change some quantitative values. We are really changing our theory. Even a very small change may produce a large effect. This is not only a hypothetical situation - we know explicit examples where this happens. Why is this so?

There are two main reasons for this.

One is that the theory depends very sensitively on its parameters. Thus, a slight change of the parameters will induce a significant change in everything we can measure. Finding precisely those parameters which reproduce a certain set of measurements then requires a very precise determination of the parameters, possibly to many digits of precision. This is a so-called fine-tuning problem. An example of this is the standard model itself. The mass of the Higgs is fine-tuned in the standard model: it depends strongly on the parameters. This is the reason why we were not able to fully predict its mass before we measured it, although we had many, many more measurements than the standard model has parameters. We could just not pinpoint it precisely enough, but only within a rough factor of ten.
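As a schematic numerical illustration (the quadratic relation and the scale used here are assumptions for the toy, not the actual standard-model computation), imagine the physical mass arises as a tiny difference between two huge terms:

```python
import math

# Schematic toy of fine-tuning (not the real standard-model formula):
# the physical mass squared is a tiny difference of two huge numbers.
Lambda_sq = 1.0e12            # hypothetical large scale squared (GeV^2)
m0_sq = 125.0**2 - Lambda_sq  # bare parameter, tuned against it

def physical_mass(bare_mass_sq):
    return math.sqrt(bare_mass_sq + Lambda_sq)

print(physical_mass(m0_sq))               # 125.0 GeV, as tuned
print(physical_mass(m0_sq * (1 + 1e-8)))  # de-tune the 8th digit: ~75 GeV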

A related problem is then the so-called hierarchy problem of modern particle physics. It is the question why the parameters are fine-tuned to just the value we measure in the experiment, and not slightly off, giving a completely different result. Today this especially takes the form of the question "Why is the Higgs particle rather light?"

But the standard model is not as bad as it can get. In so-called chaotic models the situation is far worse, and any de-tuning of the parameters leads to an exponential effect. That is the reason why these models are called chaotic - because everything is so sensitive, there is no structure at first sight. Of course, one can, at least in principle, calculate this dependency exactly. But any ever so slight imprecision is punished by an exponentially large effect. This makes chaotic models a persistent challenge to theoreticians. Fortunately, particle physics is not of this kind, but such models are known to be realized in nature nonetheless. For example, some fluids behave like this under certain conditions.
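The logistic map is a textbook example of this exponential sensitivity (it is a standard illustration of chaos in general, not one of the particle-physics models discussed here):

```python
# The logistic map, a textbook chaotic system, used purely as an
# illustration of exponential parameter sensitivity.
def final_value(r, x0=0.5, steps=100):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

r = 3.9  # a parameter value in the chaotic regime
print(final_value(r))          # reference run
print(final_value(r + 1e-12))  # parameter changed in the 12th digit:
                               # after 100 steps the result is
                               # completely different
```

A change of the parameter in the twelfth digit is amplified exponentially with every step, so after a hundred steps the two results have nothing to do with each other anymore.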

The other reason is that theory space can have separate regions. Inside these regions, so-called phases, the theory shows quite distinct behaviors. A simple example is water. If you take the temperature and pressure as parameters (they are not really fundamental parameters of the theory of water, but this is just for the sake of the argument), then two such phases could be solid and liquid. In a similar manner theories in particle physics can show different phases.

If the parameters of a theory are very close to the boundary between two phases, a slight change will push the theory from one phase to the other. Thus, here, too, it is necessary to determine the parameters very precisely.
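A standard textbook example (a mean-field Ising-like model, assumed here for illustration; it is not one of the particle-physics theories above) shows how abrupt this can be. A parameter change of a tenth of a percent moves the system from the ordered phase to the disordered one:

```python
import math

# Mean-field magnetization of an Ising-like model. The self-consistency
# equation m = tanh(Tc/T * m) has a solution m > 0 only below Tc.
def magnetization(T, Tc=1.0):
    m = 1.0
    for _ in range(20000):  # iterate the self-consistency equation
        m = math.tanh(Tc / T * m)
    return m

print(magnetization(0.999))  # just below Tc: ordered phase, m clearly > 0
print(magnetization(1.001))  # just above Tc: disordered phase, m ~ 0
```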

In the case of my research on Higgs physics, I am currently facing such a challenge. The problem is that the set of basic parameters is not unique - you can always exchange one set for another, as long as the number of parameters is kept and no two of the new parameters are trivially related. In particular, depending on the method used, one particular set may be more convenient than another. However, this implies that for every method it is necessary to redo the comparison to experiment. Since I work on Higgs physics, the dependence is strong, as I said above. It is thus highly challenging to make the right pick. And currently some of my results depend significantly on this determination.
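At tree level, one can write down a toy version of such an exchange of parameter sets explicitly. The potential and the simple relations below are textbook assumptions for illustration, not the actual calculation at stake here:

```python
import math

# Toy tree-level dictionary between two equivalent parameter sets of a
# Higgs-like potential V = -mu^2 |phi|^2 + lambda |phi|^4 (conventions
# assumed for this sketch).
def to_observables(mu_sq, lam):
    v = math.sqrt(mu_sq / lam)    # vacuum expectation value
    m_h = math.sqrt(2.0 * mu_sq)  # Higgs mass at tree level
    return v, m_h

def to_lagrangian(v, m_h):
    mu_sq = 0.5 * m_h**2
    lam = mu_sq / v**2
    return mu_sq, lam

v, m_h = 246.0, 125.0               # roughly the measured values (GeV)
mu_sq, lam = to_lagrangian(v, m_h)  # one choice of basic parameters
print(mu_sq, lam)
print(to_observables(mu_sq, lam))   # round trip back to (v, m_h)
```

Both sets describe the same theory; which one is more convenient depends on the method, and every switch requires redoing the comparison to experiment.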

So what can I do? Instead of pressing on, I have to take a step back, and improve my connection to experiment. Such a cycle is quite normal. First you make your calculation, getting some new and exciting result, but with large errors. To reduce the errors, you have to go back and do some groundwork, to improve the basis. Then you return to the original calculation, and get an improved result. Then you do the next step, and repeat.

Right now, it is not only in Higgs physics that I am going through such a process. Our project on a neutron star's interior is currently undergoing such an improvement as well. This work is rarely very exciting, and rarely yields unexpected results. But it is very necessary, and one cannot get around it if one wants reliable results. This, too, is part of a researcher's (everyday) life.