Wednesday, January 18, 2017

Can we tell when unification works? - Some answers.

This time, the following is a guest entry by one of my PhD students, Pascal Törek, writing about the most recent results of his research, especially our paper.

Some time ago the editor of this blog offered me the opportunity to write about my PhD research here. Since I have now gained some insight and collected first results, I think this is the best time to do so.

In a previous blog entry, Axel explained what I am working on and which questions we try to answer. The most important one was: “Does the miracle repeat itself for a unified theory?”. Before I answer this question and explain what is meant by “miracle”, I want to recap some things.

The first thing I want to clarify is what a unified, or grand unified, theory is. The standard model of particle physics describes all the interactions (neglecting gravity) between elementary particles. Those interactions, or forces, are called the strong, weak, and electromagnetic forces. All these forces, or sectors of the standard model, describe different kinds of physics. But at very high energies it could be that these three forces are just different parts of one unified force. Of course, a theory of a unified force should also be consistent with what has already been measured. What usually comes along in such unified scenarios is that, next to the known particles of the standard model, additional particles are predicted. These new particles are typically very heavy, which makes them very hard to detect in experiments in the near future (if one of those unified theories really describes nature).
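To give a concrete (and well-known) illustration — this is the textbook example, not the toy model studied in our paper — the classic grand unified scenario embeds the standard-model gauge group into one larger simple group:

```latex
% Classic example of grand unification (the Georgi-Glashow model):
% the three standard-model gauge groups embed into a single group,
SU(3)_C \times SU(2)_L \times U(1)_Y \;\subset\; SU(5)
% The extra generators of the larger group correspond to additional,
% very heavy gauge bosons -- the new particles mentioned above.
```

Any concrete unified theory works along these lines: one force at high energies, which splits into the three observed forces at the energies we can probe.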

What physicists often use to make predictions in a unified theory is perturbation theory. But here comes the catch: what one does in this framework is something really arbitrary, namely fixing a so-called “gauge”. This rather technical term just means that we have to use a mathematical trick to make calculations easier. Or, to be more precise, we have to use that trick to be able to perform a calculation in perturbation theory in these kinds of theories at all; it would be impossible otherwise.
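As a standard illustration of what “fixing a gauge” means technically (this is the generic textbook construction, not specific to our model): one adds a gauge-fixing term to the Lagrangian, labeled by an arbitrary parameter, e.g. for an abelian gauge field:

```latex
% Covariant (R_xi) gauge fixing for an abelian gauge field A_mu:
\mathcal{L}_{\mathrm{gf}} = -\frac{1}{2\xi}\left(\partial_\mu A^\mu\right)^2
% The parameter xi labels the (arbitrary) gauge choice. Measurable
% quantities must not depend on xi, but intermediate objects do,
% e.g. the massless gauge-field propagator:
D_{\mu\nu}(p) = \frac{-i}{p^2}\left(g_{\mu\nu} - (1-\xi)\,\frac{p_\mu p_\nu}{p^2}\right)
```

The explicit appearance of the parameter in the propagator is exactly the gauge dependence discussed in the next paragraph.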

Since nature does not care about this man-made choice, every quantity which could be measured in experiments must be independent of the gauge. But the elementary particles in conventional perturbation theory are treated in exactly the opposite way: they depend on the gauge. An even more peculiar thing is that the particle spectrum (that is, the number of particles) predicted by these kinds of theories also depends on the gauge.
This problem appears already in the standard model: what we call the Higgs, W, Z, electron, etc. depends on the gauge. This is pretty confusing, because those particles have been measured experimentally, but should not have been observed like that if you take the theory seriously.

This contradiction in the standard model is resolved by a certain mechanism (the so-called “FMS mechanism”) which maps quantities that are independent of the gauge to the gauge-dependent objects. Those gauge-independent quantities are so-called bound states. What you essentially do is to “glue” the gauge-dependent objects together in such a way that the result does not depend on the gauge. This is exactly the miracle I wrote about in the beginning: one interprets something gauge-dependent (e.g. the Higgs) as if it were observable, and one indeed finds this something in experiments. The correct theoretical description is then in terms of bound states, and there exists a one-to-one mapping to the gauge-dependent objects. This is the case in the standard model, and it seems like a miracle that everything fits so perfectly that it all works out in the end. The claim is that what you see in experiments are those bound states, and not the gauge-dependent objects.
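A minimal sketch of how this “gluing” works in the familiar standard-model Higgs case (schematically, with normalization factors suppressed): one builds a gauge-invariant operator out of the Higgs field and expands it around the vacuum expectation value that appears after gauge fixing:

```latex
% Gauge-invariant ("bound-state") operator built from the Higgs field:
O(x) = \phi^\dagger(x)\,\phi(x)
% In a suitable gauge, split the field around its expectation value v:
\phi(x) = v + \eta(x)
% Expanding the gauge-invariant correlator gives, to leading order
% (schematically, constants and normalizations suppressed),
\langle O(x)\,O(y) \rangle \;=\; \mathrm{const} \;+\; v^2\,\langle \eta(x)\,\eta(y) \rangle \;+\; \mathcal{O}(\eta^3)
% i.e. the gauge-invariant bound state has, to this order, the same
% pole -- the same mass -- as the gauge-dependent elementary Higgs.
```

That the gauge-invariant spectrum and the perturbative, gauge-dependent spectrum coincide in this way is the “miracle”; whether this coincidence persists in a unified theory is exactly the question at stake.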

However, it was not clear whether the FMS mechanism also works in a grand unified theory (“Does the miracle repeat itself?”). This is exactly what my research is about. Instead of taking a realistic grand unified theory, we decided to take a so-called “toy theory”. What is meant by that is a theory which cannot describe nature, but which covers the most important features of this kind of theory. The reason is simply that I use simulations to answer the question raised above, and due to time constraints and restricted resources a toy model is more feasible than a realistic model. By applying the FMS mechanism to the toy model I found that there is a discrepancy with perturbation theory, which was not the case in the standard model. In principle there were three possible outcomes: the mechanism works in this model and perturbation theory is wrong, the mechanism fails and perturbation theory gives the correct result, or both are wrong. So I performed simulations to see which statement is correct, and what I found is that only the FMS mechanism predicts the correct result, while perturbation theory fails. As a theoretician this result is very pleasing, since we like to have nature independent of an arbitrarily chosen gauge.

The question you might ask is: “What is it good for?” Since we know that the standard model is not the theory which can describe everything, we look for theories beyond the standard model, as for instance grand unified theories. There are many of these kinds of theories on the market, and there is as yet no way to check each of them experimentally. What one can do now is to use the FMS mechanism to rule some of them out. This is done by, roughly speaking, applying the mechanism to the theory you want to look at, counting the number of particles predicted by the mechanism, and comparing it to the number of particles of the standard model. If there are more, the theory is probably a good candidate to study, and if not, you can throw it away.

Right now Axel, a colleague from Jena University, and I are looking at more realistic grand unified theories and trying to find general features concerning the FMS mechanism. I am sure Axel, or maybe I, will keep you updated on this topic.

Monday, January 16, 2017

Writing a review

As I have mentioned recently on Twitter, I have been given the opportunity, and the mandate, to write a review on Higgs physics. In particular, I should describe how the connection is established from the formal basics to what we see in experiment. While I will be writing a lot in the coming time about the insights I gain and the connections I make while writing, this time I want to talk about something different: about what this means, and what the purpose of reviews is.

So what is a review good for? Physics is not static. Physics is about our understanding of the world around us. It is about making the things we experience calculable. This is done by phrasing so-called laws of nature as mathematical statements. Making predictions (or explaining something that happens) is then, essentially, just evaluating equations. At least in principle, because this may be technically extremely complicated and involved. There are cases in which our current abilities are not yet sufficient to do so. But this is a question of technology and, often, of resources in the form of computing time. Not some conceptual problem.

But there is also a conceptual problem. Our mathematical statements encode what we know. One of their most powerful features is that they tell us themselves that they are incomplete. That our mathematical formulation of nature only reaches this far. That there are things which we cannot describe, and of which we do not even yet know what they are. Physics is at the edge of knowledge. But we are not lazy. Every day, thousands of physicists all around the world work together to push this edge a little bit farther out. Thus, day by day, we know more. And, in a global world, this knowledge is shared almost instantaneously.

A consequence of this progress is that the textbooks at the edge become outdated. Because we get a better understanding. Or we figure out that something is different than we thought. Or because we find a way to solve a problem which withstood solution for decades. However, what we find today or tomorrow is not yet confirmed. Every insight we gain needs to be checked. Has to be investigated from all sides. And has to be fitted into our existing knowledge. More often than not, some of these insights turn out to be false hopes. We thought we understood something, but there is still that one little hook, this one tiny loophole, which in the end lets our insight crumble. This can take a day or a month or a year, or even decades. Thus, insights should not directly become part of the textbooks which we use to teach the next generation of students.

To deal with this, a hierarchy of establishing knowledge has formed.

In the beginning, there are ideas and first results. These we tell our colleagues at conferences. We document the ideas and first results in write-ups of our talks. We visit other scientists, and discuss our ideas. By this we find many loopholes and inadequacies already, and can drop things, which do not work.

Results which survive this stage then become research papers. If we write such a paper, it is usually about something which we personally believe to be well founded. Which we have analyzed from various angles, and bounced off the wisdom and experience of our colleagues. We are pretty sure that it is solid. By making these papers accessible to the rest of the world, we put this conviction to the test of a whole community, rather than just the scientists who see our talks or whom we talk to in person.

Not all such results remain. In fact, many of them are later found to be only partly right, to still have an overlooked loophole, or to be invalidated by other results. But already at this stage a considerable number of insights survive.

Over years, and sometimes decades, insights in papers on a topic accumulate. With every paper which survives the scrutiny of the world, another piece of the puzzle fits. Thus, slowly, a knowledge base emerges on a topic, carried by many papers. And then, at some point, the amount of knowledge provides a reasonably good understanding of the topic. This understanding is still frayed at the edges towards the unknown. There are still holes to be filled here and there. But overall, the topic is in fairly good condition. That is the point where a review is written on the topic. A review summarizes the findings of the various papers, often hundreds of them. It draws the big picture, and fits all the pieces into it. Its duty is also to point out all remaining problems, and where the ends are still frayed. But at this point things are usually well established. They often will not change substantially in the future. Of course, no rule without exception.

Over time, multiple reviews will evolve the big picture, close all holes, and connect the frayed edges to neighboring topics. By this, another patch in the tapestry of a field is formed. It becomes a stable part of the fabric of our understanding of physics. When this process is finished, it is time to write textbooks. To make even non-specialist students of physics aware of the topic, its big picture, and how it fits into our view of the world.

Those things which are of particular relevance, since they form the fabric of our most basic understanding of the world, will eventually filter further down. At some point, they may become part of the textbooks at school, rather than at university. And ultimately, they will become part of common knowledge.

This has happened many times in physics. Mechanics, classical electrodynamics, thermodynamics, quantum and nuclear physics, solid state physics, particle physics, and many other fields have gone through these levels of the hierarchy. Of course, often only with hindsight can the transitions be seen which lead from the first inspiration to the final revelation of our understanding. But in this way our physics view of the world evolves.