Tuesday, June 12, 2018

How to test an idea

As you may have guessed from reading through the blog, our work is centered around a change of paradigm: that the Higgs and the W/Z bosons have a very intriguing structure. And that what we observe in the experiments is actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics, and the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying, more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this, both pictures give almost the same predictions for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle which we observe and call the Higgs is really a composite object made from two Higgs particles. However, one of them is so much eclipsed by the other that the whole thing looks like just a single one, plus a very tiny correction to it.

So far, this does not seem to be something to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain these reasons much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, we have unfortunately not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So the ideas also make a statement that even within the standard model there should be a difference. The only question is: what is really the value of this 'little bit'? So far, experiments did not show any deviations from the usual picture. So the 'little bit' indeed needs to be rather small. But we have a calculational prescription for this 'little bit' in the standard model. So, at the very least, we can calculate this 'little bit' in the standard model. We should then see whether the value of the 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, it would raise a lot of questions about the basic theory, but, well, experiment rules. And thus we would need to go back to the drawing board and get a better understanding of the theory.

Or we get something which is in agreement with current experiments, because it is smaller than the current experimental precision. But then we can make a statement about how much better the experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that a test is impossible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.
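To spell out the decision logic at the end of such a calculation, here is a minimal sketch with purely hypothetical numbers (they are placeholders, not results of any actual calculation):

```python
# Purely hypothetical numbers, only to illustrate the decision logic.
predicted_deviation = 0.001    # the calculated 'little bit' (relative effect)
experimental_precision = 0.01  # current relative precision of the measurement

if predicted_deviation > experimental_precision:
    # The deviation should already have been seen: the idea is in trouble.
    print("In conflict with experiment - back to the drawing board.")
else:
    # Consistent so far; estimate how much experiments must improve.
    improvement = experimental_precision / predicted_deviation
    print(f"Consistent; precision must improve by roughly a factor {improvement:.0f}.")
```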

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. It will also not yield perfect results, but hopefully good enough ones. In addition, how simple the calculations are depends strongly on the type of experiment. We took a first few steps, though for a type of experiment not (yet) available, but hopefully in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. For that, we still need a much better understanding of the underlying mathematics, which we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end, we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: this is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: great! You have just rewritten our understanding of nature. If not: well, go back to fix it, or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But of those we are, at this stage, a little short. That may change again. If your theory has no predictions which can be tested experimentally in any foreseeable future - well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

Thursday, March 29, 2018

Asking questions leads to a change of mind

In this entry, I would like to digress a bit from my usual discussion of our physics research subject. Rather, I would like to talk a bit about how I do this kind of research. There is a twofold motivation for me to do this.

One is that I am currently teaching, together with somebody from the philosophy department, a course on the philosophy of science in physics. It came as a surprise to me that one thing the students of philosophy are interested in is how I think. What are the objects, or subjects, and how do I connect them when doing research? Or even when I just think about a physics theory. The other is the review I have recently written. Both topics may seem unrelated at first. But there is a deep connection. It is less about what I have written in the review, but rather about what led me up to this point. This requires some historical digression into my own research.

In the very beginning, I started out doing research on the strong interactions. One of the features of the strong interactions is that the supposedly elementary particles, quarks and gluons, are never seen separately, but only in combinations, as hadrons. This is a phenomenon called confinement. It is always somehow presented as a mystery. And as such, it is interesting. Thus, one question in my early research was how to understand this phenomenon.

Doing that, I came across an interesting result from the 1970s. It appears that an effect which at first sight is completely unrelated is in fact very intimately related to confinement. At least in some theories. This is the Brout-Englert-Higgs effect. However, we seem to observe the particles responsible for and affected by the Higgs effect. And indeed, at that time, I was still thinking that the particles affected by the Brout-Englert-Higgs effect, especially the Higgs and the W and Z bosons, are just ordinary, observable particles. When one reads my first paper of this time on the Higgs, this is quite obvious. But then there was this result from the 1970s. It stated that, on a very formal level, there should be no difference between confinement and the Brout-Englert-Higgs effect, in a very definite way.

Now, the implications of that seriously sparked my interest. But I thought it would mainly help me to understand confinement, as it was still very ingrained in me that confinement is a particular feature of the strong interactions. The mathematical connection I took just as a curiosity. And so I started to do extensive numerical simulations of the situation.

But while trying to do so, things which did not add up started to accumulate. This is probably most evident in a conference proceeding where I tried to make sense of something which, with hindsight, could never be interpreted in the way I did there. I still tried to press the results into the scheme of thinking that the Higgs and the W/Z are physical particles which we observe in experiment, as this is the standard lore. But the data would not fit this picture, and the more and better data I gathered, the more conflicted the results became. At some point, it was clear that something was amiss.

At that point, I had two options. Either stick with the concepts of confinement and the Brout-Englert-Higgs effect as they had been since the 1960s. Or take the data seriously, assuming that these conceptions were wrong. It probably signifies my difficulties that it took me more than a year to come to terms with the results. In the end, the decisive point was that, as a theoretician, I needed to take my theory seriously, no matter the results. There is no way around it. And if it gave a prediction which did not fit my view of the experiments, then necessarily either my view or the theory was incorrect. The latter seemed more improbable than the former, as the theory fits experiment very well. So, finally, I found an explanation which was consistent. And this explanation accepted the curious mathematical statement from the 1970s that confinement and the Brout-Englert-Higgs effect are qualitatively the same, though not quantitatively. And thus the conclusion was that what we observe are not really the Higgs and the W/Z bosons, but rather interesting composite objects, just like hadrons, which due to a quirk of the theory behave almost as if they were the elementary particles.

This was still a very challenging thought to me. After all, it was quite contradictory to the usual notions. Thus, it came as a very great relief to me when, during a trip a couple of months later, someone pointed me to a few papers from the early 1980s, almost forgotten by most, which gave, for a completely different reason, the same answer. Together with my own observations, this made things click, and everything started to fit together - the 1970s curiosity, the standard notions, my data. I published that in mid-2012, even though it still lacked some of the more systematic parts. But it still required shifting my thinking from agreement to real understanding. That came in the years to follow.

The important click was to recognize that confinement and the Brout-Englert-Higgs effect are, just as pointed out mathematically in the 1970s, really just two faces of the same underlying phenomenon. On a very abstract level, essentially all the particles which make up the standard model are really just a means to an end. What we observe are objects which are described by them, but which they are not themselves. They emerge, just like hadrons emerge in the strong interaction, but with very different technical details. This is actually very deeply connected with the concept of gauge symmetry, but this becomes technical quickly. Of course, since this is fundamentally different from the usual way of thinking, it required confirmation. So we went ahead, made predictions which could distinguish between the standard way of thinking and this way of thinking, and tested them. And it came out as we predicted. So it seems we are on the right track. And all the details, all the ifs, hows, and whys, and all the technicalities and math, you can find in the review.

To now come full circle to the starting point: what happened in my mind during this decade was that the way I think about the physical theory I try to describe, the standard model, changed. In the beginning I was thinking in terms of particles and their interactions. Now, very much motivated by gauge symmetry and, not incidentally, by its deeper conceptual challenges, I think differently. I no longer think of the elementary particles as entities in themselves, but rather as auxiliary building blocks of the actually experimentally accessible quantities. The standard 'small-ball' analogy went fully away, and in its place formed, well, hard to say, a new class of entities, which does not necessarily have any analogy. Perhaps the best analogy is that of, no, I really do not know how to phrase it. Perhaps at a later time I will come across something. Right now, it is more math than words.

This also transformed the way I think about the original problem, confinement. I am curious where this, and all the rest, will lead. For now, the next step will be to move on from simulations and see whether we can find some way to actually test this in experiment. We have some ideas, but in the end it may be that present experiments will not be sensitive enough. Stay tuned.

Wednesday, February 7, 2018

How large is an elementary particle?

Recently, in the context of a master thesis, our group has begun to determine the size of the W boson. The natural questions about this project are: Why do you do that? Do we not know it already? And do elementary particles have a size at all?

It is best to answer these questions in reverse order.

So, do elementary particles have a size at all? Well, elementary particles are called elementary as they are the most basic constituents. In our theories today, they start out as pointlike. Only particles made from other particles, so-called bound states like a nucleus or a hadron, have a size. And now comes the but.

First of all, we do not yet know whether our elementary particles are really elementary. They may also be bound states of even more elementary particles. But in experiments we can only determine upper bounds on the size. Making better experiments will reduce this upper bound. Eventually, we may see that a particle previously thought of as point-like has a size. This has happened quite frequently over time, and it has always opened up a new level of elementary particle theories. Therefore measuring the size is important. But for us, as theoreticians, this type of question is only important if we have an idea about what the more elementary particles could be. And while some of our research is going in this direction, this project is not.

The other issue is that quantum effects give all elementary particles an 'apparent' size. This comes about because of how we measure the size of a particle: we shoot some other particle at it and measure how strongly it is deflected. A truly pointlike particle has a very characteristic deflection profile. But quantum effects allow additional particles to be created and destroyed in the vicinity of any particle. In particular, they allow for the existence of another particle of the same type, at least briefly. We cannot distinguish whether we hit the original particle or one of these. Since they are not at the same place as the original particle, their average distance looks like a size. This gives even a pointlike particle an apparent size, which we can measure. In this sense even an elementary particle has a size.
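To make the scattering picture a bit more concrete, here is a minimal numerical sketch of how a finite size changes the deflection profile compared to a truly pointlike particle. It uses the textbook Gaussian form factor and made-up numbers; it illustrates the principle only, and is not our actual analysis.

```python
import numpy as np

def gaussian_form_factor(q, r2_mean):
    """Textbook form factor of a Gaussian charge distribution
    with mean square radius r2_mean: F(q) = exp(-q^2 <r^2> / 6)."""
    return np.exp(-q**2 * r2_mean / 6.0)

q = np.linspace(0.0, 2.0, 5)   # momentum transfer (arbitrary units)
r2_mean = 0.5                  # hypothetical mean square radius

# Measured intensity relative to the pointlike case: |F(q)|^2.
point_like = np.ones_like(q)                   # a true point: no q-dependence
extended = gaussian_form_factor(q, r2_mean)**2

for qi, p, e in zip(q, point_like, extended):
    print(f"q = {qi:.2f}: pointlike = {p:.3f}, finite size = {e:.3f}")
```

The larger the momentum transfer, i.e. the shorter the distance probed, the more the two cases differ. That is why ever more precise experiments can push the upper bound on a size further down.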

So, how can we then distinguish this apparent size from the actual size of a bound state? We can do this by calculation. We determine the apparent size due to the quantum fluctuations and compare it to the measurement. Deviations indicate an actual size. This is because for a real bound state we can scatter off somewhere within its structure, and not only off its core. This difference looks pictorially like this:


So, do we know the size already? Well, as said, we can only determine upper limits. Searching for them is difficult, and often goes via detours. One such detour is via so-called anomalous couplings. Measuring how they depend on energy provides indirect information on the size. There is an active program at CERN underway to do this experimentally. The results so far say that the size of the W is below 0.0000000000000001 meters. This seems tiny, but in the world of particle physics it is not that strong a limit.

And now the interesting question: why do we do this? As written, we do not want to make the W a bound state of something new. But one of our main research topics is driven by an interesting theoretical structure. If the standard model is taken seriously, the particle which we observe in an experiment and call the W is actually not the W of the underlying theory. Rather, it is a bound state which is very, very similar to the elementary particle, but actually built from the elementary particles. The difference has been so small that identifying one with the other has been a very good approximation up to today. But with better and better experiments this may change. Thus, we need to test this.

Because the thing we measure is then a bound state, it should have a, probably tiny, size. This would be a hallmark of this theoretical structure, and a sign that we have understood it. If the size is such that it could actually be measured at CERN, then this would be an important test of our theoretical understanding of the standard model.

However, this is not a simple quantity to calculate. Bound states are intrinsically complicated. Thus, we use simulations for this purpose. In fact, we go over the same detour as the experiments, and will determine an anomalous coupling. From this we then infer the size indirectly. In addition, the need to perform efficient simulations forces us to simplify the problem substantially. Hence, we will not get the perfect number. But we may get the order of magnitude, or perhaps be within a factor of two or so. And this is all we currently need to say whether a measurement is possible, or whether this will have to wait for the next generation of experiments. And thus whether we will know whether we understood the theory within a few years or within a few decades.
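As a rough illustration of how a size can be inferred indirectly from such scattering-type quantities: in textbook language, the mean square radius of an object is given by the slope of its form factor F at vanishing momentum transfer,

```latex
\langle r^2 \rangle \;=\; -\,6\,\left.\frac{\mathrm{d}F(q^2)}{\mathrm{d}q^2}\right|_{q^2=0}.
```

This standard relation is only meant to show the type of logic involved; the actual extraction via an anomalous coupling is more involved, and the details are beyond this entry.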

Monday, January 22, 2018

Finding - and curing - disagreements

The topic of grand-unified theories has come up in this blog several times, most recently in January of last year. To briefly recap: such theories, called GUTs for short, predict that all three forces between elementary particles emerge from a single master force. That would explain a lot of unconnected observations we have in particle physics, for example why atoms are electrically neutral. The latter we can describe, but not yet explain.

However, if such a GUT exists, then it must not only explain the forces, but also somehow why we see the numbers and kinds of elementary particles we observe in nature. And now things become complicated. As discussed in the last entry on GUTs, there may be a serious issue in how we determine which particles are actually described by such a theory.

To understand how this issue comes about, I need to put together many different things my research partners and I have worked on during the last couple of years. All of these issues are put into expert language in the review I talked about in the previous entry. It is now finished, and if you're interested, you can get it for free from here. But it is very technical.

So, let me explain it less technically.

Particle physics is actually super involved. If we wanted to write down a theory which describes what we see, and only what we see, it would be terribly complicated. It is much simpler to introduce redundancies in the description, so-called gauge symmetries. This makes life much easier, though still not easy. However, the most prominent feature is that we add auxiliary particles to the game. Of course, they cannot really be seen, as they are just auxiliary. Some of them are very obviously unphysical, and are therefore called ghosts. They can be taken care of comparatively simply. For others, this is less simple.

Now, it turns out that the weak interaction is a very special beast. In this case, there is a unique one-to-one identification between a really observable particle and an auxiliary particle. Thus, it is almost correct to identify both. But this is due to the very special structure of this part of particle physics.

Thus, a natural question is whether, even if it is special, it is justified to do the same for other theories. Well, in some cases, this seems to be the case. But we suspected that this may not be the case in general. And especially not in GUTs.

Recently, we went about this much more systematically. You can again access the (very, very technical) result for free here. There, we looked at a very generic class of such GUTs. Well, we actually looked at the most relevant part of them, and still by far not all of them. We also ignored a lot of stuff, e.g. what would become quarks and leptons, and concentrated only on the generalization of the weak interaction and the Higgs.

We then checked, based on our earlier experience and methods, whether a one-to-one identification of experimentally accessible and auxiliary particles works. And it essentially never does. Visually, this result looks like this:


On the left, it is seen that everything works nicely with a one-to-one identification in the standard model. On the right, if a one-to-one identification worked in a GUT, everything would still be nice. But our more precise calculation shows that the actual situation, which would be seen in an experiment, is different. No one-to-one identification is possible. And thus the prediction of the GUT differs from what we already see in experiments. Thus, a previously good GUT candidate is no longer good.

Though more checks are needed, as always, this is a baffling, and at the same time very discomforting, result.

Baffling, because we originally expected to have problems only under very special circumstances. It now appears that actually the standard model of particle physics is the very special case, and having problems is the standard.

It is discomforting because in the powerful method of perturbation theory the one-to-one identification is essentially always made. As this tool is widely used, this seems to question the validity of many predictions about GUTs. That could have far-reaching consequences. Is this the case? Do we need to forget everything we have learned about GUTs so far?

Well, not really, for two reasons. One is that we also showed that methods almost as easy to handle as perturbation theory can be used to fix the problems. This is good, because more powerful methods, like the simulations we used before, are much more cumbersome. However, this leaves us with the problem of having made wrong predictions so far. Well, this we cannot change. But this is just normal scientific progress. You try, you check, you fail, you improve, and then you try again.

And, in fact, this does not mean that GUTs are wrong. Just that we need to consider somewhat different GUTs, and make the predictions more carefully next time. Which GUTs we need to look at we still have to figure out, and that will not be simple. But, fortunately, the improved methods mentioned before can use much of what has been done so far, so most technical results are still unbelievably useful. This will help enormously in finding GUTs which are applicable and yield a consistent picture without the one-to-one identification. GUTs are not dead. They likely just need a bit of changing.

This is indeed a dramatic development. But one which fits logically and technically to the improved understanding of the theoretical structures underlying particle physics, which were developed over the last decades. Thus, we are confident that this is just the next logical step in our understanding of how particle physics works.

Thursday, November 30, 2017

Reaching closure – completing a review

I did not publish anything here within the last few months, as the review I am writing took up much more time than expected. A lot of interesting project developments also happened during this time. I will write about them later as well, so that nobody misses out on the insights we gained and the fun we had with them.

But now I want to write about how the review is coming along. It has by now grown into a veritable document of almost 120 pages. And actually most of it is text and formulas, with only very few figures. This makes for a lot of content. Right now it has reached the status of release candidate 2. This means I have distributed it to many of my colleagues to comment on. I also used the draft as lecture notes for a lecture on its contents at a winter school in Odense, Denmark (where I actually wrote this blog entry). Why? Because I wanted to have feedback. What can be understood, and what may I have misunderstood? After all, this review does not only look at my own research. Rather, it compiles knowledge from more than a hundred scientists over 45 years. In fact, some of the results I write about were obtained before I was born. In particular, I could have overlooked results. With by now dozens of new papers per day, this can easily happen. I have collected more than 330 relevant articles which I refer to in the review.

And, of course, I could have misunderstood other people’s results or made mistakes. This needs to be avoided in a review as well as possible.

Indeed, I have had many discussions by now on various aspects of the research I review. I got comments and was challenged. In the end, there was always either a conclusion or the insight that some points, believed to be clear, are not as clear as they seemed. There are always more loopholes, more subtleties, than one anticipates. By this, the review became better, and could collect more insights from many brilliant scientists. And likewise I myself learned a lot.

In the end, I learned two very important lessons about the physics I review.

The first is that many more things are connected than I expected. Some issues which looked to me like a parenthetical remark in the beginning first became remarks at more than one place and ultimately became an issue of their own.

The second is that the standard model of particle physics is even more special and more balanced than I thought. I had never really thought of the standard model as so terribly special. Just one theory among many which happens to fit experiments. But really it is an extremely finely adjusted machinery. Every cog in it is important, and even slight changes make everything fall apart. All the elements are in constant connection with each other, and influence each other.

Does this mean anything? Good question. Perhaps it is a sign of an underlying ordering principle. But if it is, I cannot see it (yet?). Perhaps this is just an expression of how a law of nature must be – perfectly balanced. At any rate, it gave me a new perspective on what the standard model is.

So, as I anticipated, writing this review gave me a whole new perspective and a lot of insights. Partly by formulating questions and answers more precisely. But, probably more importantly, I had to explain it to others, and then either successfully defend it, adapt it, or even correct it.

In addition, two of the most important lessons about understanding physics I learned were the following:

One: Take your theory seriously. Do not take a shortcut or rely on mere experience. Literally understand what it means, and only then start to interpret.

Two: Pose your questions (and answers) clearly. Every statement should have a well-defined meaning. Never be vague when you want to make a scientific statement. Always be able to back up the question “what do you mean by this?” with a precise definition. This seems obvious, but it is something you tend to be cavalier about. Don’t.

So, writing a review not only helps in summarizing knowledge. It also helps to understand this knowledge and realize its implications. And, probably fortunately, it poses new questions. What they are, and what we do about them, is something I will write about in the future.

So, how does it proceed now? In two weeks I have to deliver the review to the journal which commissioned it. At the same time (watch my Twitter account) it will become available on the preprint server arxiv.org, the standard repository of all elementary particle physics knowledge. Then you can see for yourself what I wrote, and what I wrote about.

Thursday, July 20, 2017

Getting better

One of the main tools in our research is numerical simulations. The research of the previous entry, for example, would have been impossible without them.

Numerical simulations require computers to run on. And even though computers are becoming continuously more powerful, in the end they are limited. Not to mention that they cost money to buy and to use. Yes, using them is also expensive. Think of the electricity bill, or even of just having space available for them.

So, to reduce the costs, we need to use them efficiently. That is good for us, because we can do more research in the same time. And that means that we as a society can make scientific progress faster. But it also reduces financial costs, which in fundamental research almost always means the taxpayer's money. And it reduces the environmental stress which we cause by owning and running the computers. That is also something which should not be forgotten.

So what does efficiently mean?

Well, we need to write our own computer programs. What we do, nobody has done before us. Most of what we do is right at the edge of what we understand. So nobody was here before us who could have provided us with computer programs. We write them ourselves.

For that to be efficient, we need three important ingredients.

The first seems quite obvious: the programs should be correct before we use them for a large-scale computation. It would be very wasteful to run on a hundred computers for several months, just to figure out it was all for naught because there was an error. Of course, we need to test them somewhere, but this can be done with much less effort. Still, this actually takes quite some time. And it is very annoying. But it needs to be done.

The next two issues seem to be the same, but are actually subtly different. We need to have fast and optimized algorithms. The important difference is: the quality of the algorithm decides how fast it can be in principle. The actual optimization decides to which extent it uses this potential.

The latter point requires a substantial amount of experience with programming. It is not something which can be learned theoretically; it is more of a craftsmanship than anything else. Being good at optimization can make a program a thousand times faster. So this is one reason why we try to teach students programming early, so that they can acquire the necessary experience before they enter research in their thesis work. Though there is still research work today which can be done without computers, it has become markedly less common over the decades. It will never completely vanish, though. But it may well become a comparatively small fraction.
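To illustrate the difference between a better algorithm and mere optimization with a toy example (this is not one of our simulation codes, just a generic sketch): both functions below compute the autocorrelation of a time series, a typical quantity when estimating the statistical errors of a Monte Carlo simulation. The first is the straightforward O(N²) algorithm; the second computes the same numbers in O(N log N) via a fast Fourier transform. Tuning the loops of the first one is optimization; switching to the second one is a better algorithm.

```python
import numpy as np

def autocorr_naive(x):
    """O(N^2): direct sum over all time separations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - t], x[t:]) / (n - t) for t in range(n)])

def autocorr_fft(x):
    """O(N log N): the same quantity via a fast Fourier transform."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                # zero-padding avoids wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]   # linear autocorrelation sums
    return acf / (n - np.arange(n))          # average over overlapping pairs

# Both give the same result, but for long series the second is vastly faster.
data = np.random.default_rng(0).normal(size=4096)
assert np.allclose(autocorr_naive(data), autocorr_fft(data))
```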

But whatever optimization can do, it can do only so much without good algorithms. And now we enter the main topic of this entry.

It is not only the code which we develop ourselves. It is also the algorithms. Because, again, they are new. Nobody did this before. So it is also up to us to make them efficient. But really writing a good algorithm requires knowledge about its background, so-called domain-specific knowledge: knowing the scientific background. One more reason why you cannot get them off the shelf. Thus, if you want to calculate something new in research using computer simulations, that usually means sitting down and writing a new algorithm.

But even once an algorithm is written down, this does not mean that it is necessarily already the fastest possible one. Making it faster requires experience on the one hand, but even more so it is something new, and it is thus research as well. So algorithms can, and need to, be made better.

Right now I am supervising two bachelor theses where exactly this is done. The algorithms are indeed directly those which are involved with the research mentioned in the beginning. While both are working on the same algorithm, they do it with quite different emphasis.

The aim of one project is to make the algorithm faster, without changing its results. It is a classical case of improving an algorithm. If successful, it will make it possible to push the boundaries of which projects can be done. Thus, it makes computer simulations more efficient, and so allows us to do more research. One goal reached. Unfortunately, the 'if' already tells you that, as always with research, there is never a guarantee that it is possible. But if this kind of research is to continue, it is necessary. The only alternative is waiting a decade for computers to become faster, and doing something different in the time in between. Not a very interesting option.

The other one is a little bit different. Here, the algorithm is to be modified to serve a slightly different goal. It is not a fundamentally different goal, but a subtly different one. Thus, while it does not create a fundamentally new algorithm, it still creates something new. Something which will make a different kind of research possible. Without the modification, that other kind of research may not be possible for some time to come. But just as it is not possible to guarantee that an algorithm can be made more efficient, it is also not guaranteed that an algorithm with any reasonable amount of potential can be created at all. So this, too, is true research.

Thus, it remains exciting to see what both theses will ultimately lead to.

So, as you see, behind the scenes research is full of the small things which make the big things possible. Both of these projects are probably closer to our everyday work than most of the things I have posted about before. The everyday work in research is quite often a grind. But, as always, this is what makes the big things ultimately possible. Without projects such as these two theses, our progress would slow down to a snail's pace.

Wednesday, July 19, 2017

Tackling ambiguities

I have recently published a paper with a rather lengthy and abstract title. In this entry, I want to shed a little light on what is going on in it.

The paper is actually about a problem which has occupied me for more than a decade by now: the problem of how to really define what we mean when we talk about gluons. The reason for this problem is a certain ambiguity. This ambiguity arises because it is often much more convenient to have additional auxiliary stuff around to make calculations simpler. But then you have to deal with this additional stuff. In a paper last year I noted that the amount of stuff is much larger than originally anticipated. So you have to deal with even more stuff.

The aim of the research leading to the paper was to make progress with that.

So what did I do? To understand this, it is first necessary to say a few words about how we describe gluons. We describe them by mathematical functions. The simplest such mathematical function makes, loosely speaking, a statement about how probable it is for a gluon to move from one point to another. Since a fancy word for moving is propagating, this function is called a propagator.
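To make 'propagator' slightly more concrete, here are the textbook tree-level forms in Euclidean momentum space; the functions studied in the paper contain all quantum corrections on top of these, but the general idea is the same:

```latex
D(p^2) = \frac{1}{p^2 + m^2} \quad \text{(free particle of mass } m\text{)}, \qquad
D_{\mu\nu}(p) = \left(\delta_{\mu\nu} - \frac{p_\mu p_\nu}{p^2}\right)\frac{1}{p^2}
\quad \text{(tree-level gluon, Landau gauge)}.
```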

So the first question I posed was whether the ambiguity in dealing with the stuff affects this function. You may ask whether this should happen at all. Is a gluon not a particle? Should it not be free of ambiguities? Well, yes and no. A particle which we actually detect should be free of ambiguities. But gluons are not detected. Gluons are, in fact, never seen directly. They are confined. This is a very peculiar feature of the strong force, and one which is not yet satisfactorily understood. But it is experimentally well established.

Since something therefore happens to gluons before we can observe them, there is now a way out. If the gluon is ambiguous, then this ambiguity has to be canceled by whatever happens to it. Then whatever we detect is not ambiguous. But cancellations are fickle things. If you are not careful in your calculations, something is left uncanceled. And then your results become ambiguous. This has to be avoided. Of course, this is purely a problem for us theoreticians; the experimentalists never have this problem. A long time ago I actually wrote a paper on this together with a few other people, showing how this may proceed.

So, the natural first step is to figure out what you have to cancel, and therefore to map the ambiguity to its full extent. The possibilities discussed for decades look roughly like this:

As you see, at short distances there is (essentially) no ambiguity. This is actually quite well understood. It is a feature very deeply embedded in the strong interaction. It has to do with the fact that, despite its name, the strong interaction makes itself less and less known the shorter the distance. And for weak effects we have very precise tools, so we understand this regime.

At long distances, on the other hand - well, there for a long time we did not even know qualitatively for sure what is going on. But, finally, over the decades, we were able to at least partly constrain the behavior. Now, I have tested a large part of the remaining range of ambiguities. In the end, it indeed mattered little. There is almost no effect of the ambiguity left on the behavior of the gluon. So it seems we have this under control.

Or do we? One of the important things in research is that it is never sufficient to confirm your result by looking at just a single thing. Either your explanation fits everything we see and measure, or it cannot be the full story. Or it may even be wrong, and the agreement with part of the observations is just a lucky coincidence. Well, actually not lucky. Rather terrible, since this misguides you.

Of course, doing it all in one go is a horrendous amount of work, and so you work on a few things at a time. Preferably, you first work on those where the most problems are expected. It is just that ultimately you need to have covered everything. And you cannot stop and claim victory before you have.

So I did, and looked in the paper at a handful of other quantities. And indeed, in some of them effects remain. Especially if you look at how strong the strong interaction is, depending on the distance at which you measure it, something remains:

The effects of the ambiguity are thus not qualitative. So it does not change our qualitative understanding of how the strong force works. But there remains some quantitative effect, which we need to take into account.
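For reference, one common way such a 'strength of the strong interaction' is defined in this kind of study is from the gluon and ghost dressing functions, Z and G, in Landau gauge (the so-called Taylor coupling). I quote it only to show what type of quantity is meant; the precise definition used in the paper may differ:

```latex
\alpha(p^2) \;=\; \alpha(\mu^2)\, Z(p^2)\, G(p^2)^2 .
```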

There is one more important side effect. When I calculated the effects of the ambiguity, I also learned to control how the ambiguity manifests itself. This does not alter the fact that there is an ambiguity, nor that it has consequences. But it allows others to reproduce how I controlled the ambiguity. This is important because now two results from different sources can be put together, and when using the same control they will fit together such that for experimental observables the ambiguity cancels. And thus we have achieved the goal.

To be fair, however, this is currently at the level of operative control. It is not yet a mathematically well-defined and proven procedure. As in so many cases, this still needs to be developed. But having operative control makes it easier to develop rigorous control than starting without it. So, progress has been made.