Friday, December 5, 2014

This time it will again be a behind-the-scenes entry. The reason is that our graduate school has just been prolonged. This is a great success. 'We' are in this case the professors doing particle physics here at the University of Graz, five in total. With this, we are now able to support nine new PhD students, i.e. give them a job during the time they are doing their PhD work, and give them the opportunity to travel to conferences or to invite people for them to talk to.
You may wonder what I mean by 'giving them a job'. PhD students in physics are not only students. They are beginning researchers. Each and every PhD thesis contributes to our knowledge and opens up new frontiers. In the course of doing this, the PhD students are guided and supported by us, their supervisors. The goal is, of course, that by the end of their thesis they have matured into equal partners in research. A goal which is satisfyingly often achieved. Hence, they are not only studying but indeed contributing. They are doing a job, and should get paid for the work they are doing. Having PhD positions is therefore not only nice - it is required simply out of fairness. And so this success means that we can now accompany nine more young people on their way to becoming researchers.
But this is not everything a graduate school provides. A graduate school also provides the infrastructure to offer advanced lectures by world-leading experts to the students. But here one has to walk a thin line. What we do not want is that they just soak up knowledge and then reproduce it. This can never be how a PhD education should work. The aim of PhD studies must always be that the students learn how to create, how to be creative, and how to think in directions nobody else has thought in before. Especially not their supervisors. Providing an overly formalized education would quell much or all of this.
On the other hand, it cannot work without some formal education. While creativity is important, (particle) physics has become a vast field. As a consequence, almost every simple idea has already been found decades ago by someone else. Knowing what is known is therefore important just to avoid repeating the same things (and often the same mistakes) others did. At the same time, knowledge of general principles and structures is important so that one's own ideas can be embedded into the big picture and, along the way, checked for technical consistency. Without knowing the technical details, this would be hard to achieve. One could then easily lose oneself in pursuing a chain of technical points, leading one far astray. It is especially here where it shows that theoretical particle physics is nowadays an enormous collaborative and worldwide effort. None of the problems we are dealing with can be solved by one person alone. It requires the combined knowledge of many people to make progress.
Knowing what other people did - and do - is therefore of paramount importance. Here, the graduate school helps in yet another way. It provides the PhD students with the possibility to travel themselves, meet people, and go to conferences. We can also make it possible for them to stay abroad for up to half a year at a different institution, to work with different people on a different project. They can thereby substantially broaden their horizon and learn how to cooperate with different people.
So, are there any downsides? Well, not for the students. Except that they may at times have to go to a lecture or talk which they otherwise would not attend. Most of the downsides hit us supervisors, because there is a lot of additional administrative work involved. However, this is easily outweighed by the possibility to have more PhD students to work with, and, with their ambition, to achieve something new.
Tuesday, November 4, 2014
More on big blobs and little blobs
Two months ago, I introduced you to what I called big blobs. In the end, these are just big heaps of particles which act in unison. As announced there, I have meanwhile produced new results on this topic. So, what did I find?
In this investigation I tried to disentangle what relevance big blobs of gluons have for a single gluon. To do this, I somehow had to get my hands on blobs. For that purpose, I performed computer simulations of the strong force. Without any further modifications, these would deliver a mixture of small and large blobs, many, many individual gluons, and everything rather unorganized. This would not help much.
Fortunately, clever people have found a way to isolate the blobs from this mixture. It is a method nowadays called smearing or cooling. The names are not quite accurate. What is actually done is to remove anything which even remotely resembles single gluons at high energies. This is really hand-waving, and nobody should take it too literally. But it gives a good idea, and avoids a lot of technicalities. In the end, the important thing is that this gave me a lot of blobs.
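To give a rough feeling for what 'cooling' does, here is a deliberately over-simplified sketch in one dimension: a noisy field is repeatedly averaged with its neighbours, which wipes out the short-distance (high-energy) wiggles and leaves only the broad structures. The actual lattice algorithms act on gauge links and are considerably more involved; everything below is just a toy analogue.

```python
import numpy as np

# Toy 'cooling': a 1D field with broad bumps ('blobs') plus short-distance noise.
# Repeated local averaging removes the noise and keeps the broad structures.
# This is only a cartoon of what smearing/cooling does on a real lattice.

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 400)

blobs = np.exp(-((x - 3.0) ** 2)) + np.exp(-((x - 7.0) ** 2))  # broad structures
noise = 0.5 * rng.normal(size=x.size)                          # short-distance fluctuations
field = blobs + noise

def cool(phi, steps):
    """Repeatedly replace each site by the average of itself and its neighbours."""
    for _ in range(steps):
        phi = 0.5 * phi + 0.25 * (np.roll(phi, 1) + np.roll(phi, -1))
    return phi

cooled = cool(field, steps=200)

# After cooling, the field is close to the smooth 'blob' part again.
print("rms deviation from blobs before cooling:", np.sqrt(np.mean((field - blobs) ** 2)))
print("rms deviation from blobs after  cooling:", np.sqrt(np.mean((cooled - blobs) ** 2)))
```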
But the blobs alone were not what interested me. Besides, many people have studied them over the last forty years or so. I wanted something different. So I took the blobs and injected a single gluon into this heap of blobs. Then I checked how the gluon behaved.
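'Checking how the gluon behaves when travelling a distance r' essentially means measuring a two-point correlation function at separation r. Here is a minimal sketch of that idea, on a made-up one-dimensional field instead of an actual gluon field; in the real calculation this role is played by the gluon propagator measured on the cooled lattice configurations.

```python
import numpy as np

# Toy two-point correlator: how strongly is the field at x related to the field
# at x + r, averaged over x? In the real calculation this role is played by the
# gluon propagator measured on the (cooled) lattice configurations.

rng = np.random.default_rng(2)
n = 4096
field = rng.normal(size=n)
# Give the toy field some structure on a scale of ~20 lattice sites ('blob size').
kernel = np.exp(-0.5 * (np.arange(-60, 61) / 20.0) ** 2)
field = np.convolve(field, kernel / kernel.sum(), mode="same")

def correlator(phi, r):
    """<phi(x) phi(x+r)> averaged over x (periodic boundary)."""
    return np.mean(phi * np.roll(phi, -r))

for r in (1, 20, 200):  # much shorter than, comparable to, much longer than the structure size
    print(f"separation {r:4d}:  C(r) = {correlator(field, r):+.4f}")
```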
The first result was that at short distances, much shorter than the size of the blobs or the distance between them, the gluon did not feel anything. It just behaved as if it were travelling through empty space. This was not yet too surprising. After all, the big blobs are separated, and as long as the gluon does not crash into one, how should it know about them?
The second result was also not too surprising. If I let the gluon travel very far, it behaved essentially as if I had not filtered out everything but the blobs. It was just plain normal. This also makes sense. If the distances become much longer than the blobs' size and their separation, the gluon just gets an averaged picture. And this picture should be, and seems to be, not too different from the real thing.
But then came something which surprised me at first. Though I later learned that somebody else had anticipated it long ago. If the gluon travels distances roughly of the size of the blobs, it behaves substantially differently than normal. This behavior is actually what one would expect in the first place for something that interacts as strongly as gluons do. That would be quite reassuring, as it is exactly this behavior which has been looked for in gluons for a long time. It would mean that all the stuff I filtered out to get to the blobs normally obscures it.
Since this sounds too good to be true, it probably is. Hence, a necessary next step must be to check this result in some way. In the manuscript, I have developed some ideas, but none of them will be easy. They are thus part of future research. But it must be checked. After all, it's science, and one should always check and try to falsify the results. And this one certainly deserves to be checked.
Wednesday, October 15, 2014
Challenging subtleties
I have just published a conference proceedings contribution in which I return to an idea of how the standard model of particle physics could be extended. It is an idea I have already briefly written about: it is concerned with the question of what would happen if there were twice as many Higgs particles as there are in nature. The model describing this idea is therefore called the 2-Higgs(-doublet) model, or 2HDM for short. The word doublet in the official name is rather technical. It has something to do with how the second Higgs connects to the weak interaction.
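The 'twice as many' refers to the Higgs fields one starts from. In terms of physical particles, the standard bookkeeping (a generic counting, not specific to the scenarios discussed in the proceedings) gives one Higgs in the standard model and five in the 2HDM:

```latex
% Each Higgs doublet contains four real field components; three of them are
% 'eaten' as the longitudinal modes of the W+, W- and Z bosons.
\begin{align*}
  \text{standard model:}\quad & 1\times 4 - 3 = 1 && (h) \\
  \text{2HDM:}\quad           & 2\times 4 - 3 = 5 && (h,\; H,\; A,\; H^{+},\; H^{-})
\end{align*}
% So the 2HDM has four additional physical Higgs particles on top of the one
% we know: a second neutral scalar, a pseudoscalar, and a charged pair.
```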
As fascinating as the model itself may be, I do not want to write about its general properties. Given its popularity, you will find many things about it already on the web. No, here I want to write about what I want to learn about this theory in particular. And this is a peculiar subtlety. It connects to the research I am doing on the situation with just the single Higgs.
To understand what is going on, I have to dig deep into the theory, but I will try to keep it not too technical.
The basic question is: What can we observe, and what can we not observe? One of the things a theoretician learns early on is that it may be quite helpful to have some dummies. This means that she or he adds something to a calculation just for the sake of making the calculation simpler. Of course, one has to make very sure that this does not affect the result. But if done properly, this can be of great help. The technical term for such a trick is an auxiliary quantity.
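One textbook example of such a dummy - not necessarily the specific one meant here, but it illustrates the point - is the gauge-fixing parameter: it is put in by hand to simplify calculations, intermediate quantities like the photon propagator depend on it, and yet it must drop out of anything observable.

```latex
% Photon propagator in a general R_xi gauge: the parameter xi is an auxiliary
% quantity introduced purely to make calculations simpler.
\[
  D_{\mu\nu}(k) \;=\; \frac{-i}{k^{2}+i\epsilon}
  \left[ g_{\mu\nu} - (1-\xi)\,\frac{k_{\mu}k_{\nu}}{k^{2}+i\epsilon} \right]
\]
% Different choices of xi give different-looking intermediate expressions, but
% any measurable cross section comes out independent of xi - the hallmark of a
% properly used auxiliary quantity.
```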
Now, when we talk about the weak interactions, something amazing happens. If we assume that everything is indeed very weak, we can calculate results using so-called perturbation theory. And then it appears as if the auxiliary quantities were real, and we could observe them. This is, and can only be, some kind of illusion. And indeed it is, something I have been working on for a long time, as have others before me. It just turns out that the true things and the auxiliary quantities have the same properties, and therefore it does not matter which we take for our calculation. This is far from obvious, and pretty hard to explain without a lot of technical detail. But since this is not the point I would like to make in this entry, let me skip these details.
That this is the case is actually a consequence of a number of 'lucky' coincidences in the standard model. Some particles have just the right mass. Some particles appear in just the right ratio of numbers. Some particles are just inert enough. Of course, as a theoretician, my experience is that there is no such thing as 'lucky'. But that is a different story (I know, I say this quite often this time).
Now, I finally return to the starting point: the 2HDM. In this theory, one can play the same kind of tricks with auxiliary quantities and perturbation theory and so on. If you assume that everything is just like in the standard model, this is fine. But is this really so? In the proceedings, I look at this question. In particular, I check whether perturbation theory should work. And what I find is: it may, but it is unlikely to do so in all the circumstances where one would like it to. Especially, in several scenarios in which one would like to have this property, it could indeed fail. For example, in some scenarios this theory could have twice as many weak gauge bosons, the so-called W and Z bosons, as we see in experiment. That would be bad, as it would contradict experiment, and therefore invalidate these scenarios.
This is not the final word, of course not - proceedings are just status reports, not final answers. But there may be, just may be, a difference. This is enough to require us (and, in this case, me) to make sure what is going on. That will be challenging. But this time such a subtlety may make a huge difference.
Friday, September 5, 2014
Big blobs
One of the things I have discussed in my blog is how particles arise in quantum theories. Put into one (hand-waving) sentence: particles are just isolated peaks in the quantum fields which fill up the universe. But is this all that is possible?
The answer is no, and I have dealt with the alternatives several times in my own research. But what are these alternatives?
Particles, I said, are isolated peaks. They are what we call localized - existing at a single place. A single, and very slender, peak on a background of (nearly) nothing else. Of course, there are also bound states, like the hydrogen atom, and other such objects. These are two or more particles which stay close to each other and move in the same direction. However, in this case the individual particles are still, more or less, distinct.
Here, I want to introduce another concept. It arises when one takes many particles and puts them very close together. Then the peaks start to overlap, until it is impossible to say where one starts and where another ends. In many cases such a bunch of particles is just unstable, and the particles fly apart pretty quickly. But several theories, most notably the strong interactions, provide another option. When the particles are put together in a carefully balanced way, they form a super-particle, and the whole bunch behaves almost like one big particle. This is different from the bound states, because the particles are no longer individually detectable inside; it is just one big blob. Of course, it is possible to disassemble this blob, and the original particles come out. Hence, such blobs are not called particles, but pseudo-particles. A fancier name for them is 'topological excitations'. This name has been given to them because of certain properties linked to the mathematical field of 'topology'. One of the particularly important features of these blobs is that they are, without external disturbance, extremely stable. The reason is, pictorially speaking, that the way the particles are interwoven creates knots which do not open.
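The simplest place to see such a 'knot' explicitly is not the strong interactions but a one-dimensional toy model, the kink of a scalar field with a double-well potential. The sketch below is purely illustrative (the topological excitations of the strong force are far more involved): the field interpolates between the two vacua, its 'winding' between them - the topological charge - is a whole number which small disturbances cannot change, and its energy is concentrated in a localized blob.

```python
import numpy as np

# One-dimensional kink: a field with potential V = (lam/4) * (phi^2 - v^2)^2.
# The static solution phi(x) = v * tanh(sqrt(lam/2) * v * x) interpolates
# between the two vacua -v and +v. Its topological charge is fixed by the
# boundary values alone, which is why it cannot decay by small deformations.

lam, v = 1.0, 1.0
x = np.linspace(-20.0, 20.0, 4001)
phi = v * np.tanh(np.sqrt(lam / 2.0) * v * x)

# Topological charge: difference of the field between the two ends, in units of 2v.
charge = (phi[-1] - phi[0]) / (2.0 * v)
print("topological charge:", round(charge, 3))   # ~1 for the kink, 0 for a plain vacuum

# The energy density is localized in a small region around x = 0 - the 'blob'.
dx = x[1] - x[0]
dphi = np.gradient(phi, dx)
energy_density = 0.5 * dphi**2 + 0.25 * lam * (phi**2 - v**2) ** 2
core = np.abs(x) < 2.0
print("fraction of energy inside |x| < 2:", energy_density[core].sum() / energy_density.sum())
```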
So, aside from the fascinating fact that these things exist, what is their use for physics? They play a role especially in theories where everything interacts strongly with everything else, like the strong force. It is hypothesized that in such theories blobs emerge easily, and may even play the most important role. This would mean that effectively not the original particles, but the blobs are the usually encountered objects. And how they interact makes up the phenomena we see in experiments. Single particles are then just some minor disturbance in the game of the big blobs. The blobs become what physicists call the 'effective degrees of freedom', meaning the important players.
Is this true, especially in the strong interactions? It depends. We do not have an equivalent formulation of the theory in terms of blobs instead of particles, so we do not know for sure. We do know that several features, like mass generation, can be explained very simply just by using the blobs. There, they have helped us a lot in understanding what is going on. Other features, like the famous confinement, turn out to be a much tougher cookie. We are still not sure whether it is really possible.
Finally, what are my stakes in the blobs? One of the questions to be posed is whether the properties of the remaining individual particles are determined by what the blobs do. Is their movement constrained by them? Do their interactions mainly involve a blob, rather than occurring directly between the particles? I am trying to answer these questions with simulations. Some preliminary findings are already available, but there will be more to come.
Wednesday, August 13, 2014
Triviality is not trivial
OK, starting with a pun is probably not the wisest course of action, but there is truth in it as well.
If you have followed the various public discussions on the Higgs, then you will probably have noticed the following: though we have found it, most physicists are not really satisfied with it. Some are even repelled by it. In fact, most of us are convinced that the Higgs is only a first step towards something bigger. Why is this so? Well, there are a number of reasons, from purely aesthetic ones to deeply troubling ones. As the latter also affect my own research, I will write about a particularly annoying nuisance: the triviality referred to in the title.
To really understand this problem, I have to paint a somewhat bigger picture, before coming back to the Higgs. Let me start: As a theoretician, I can (artificially) distinguish between something I call classical physics, and something I call quantum physics.
Classical physics is any kind of physics which is fully predictive: If I know the start conditions with sufficient precision, I can predict the outcome as precisely as desired. Newton's law of gravity, and even the famous general theory of relativity belong to this class of classical physics.
Quantum physics is different. Quantum phenomena introduce a fundamental element of chance into physics. We do not know why this is so, but it is very well established experimentally. In fact, the computer you use to read this would not work without it. As a consequence, in quantum physics we cannot predict what will happen, even if we know the starting conditions as well as possible. The only thing we can do is make very reliable statements about how probable a certain outcome is.
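A toy illustration of the difference (nothing more than that): a classical rule applied to the same starting point always returns the same answer, while a quantum-like measurement returns different answers from run to run, and only the frequencies of the outcomes are predictable.

```python
import numpy as np

# Classical: the same initial condition always gives the same outcome.
def classical_fall(height, t):
    g = 9.81
    return height - 0.5 * g * t**2   # deterministic rule

print([round(classical_fall(10.0, 1.0), 3) for _ in range(3)])  # identical every time

# Quantum-like: identical preparations give random outcomes; only the
# probabilities (here 75% 'up', 25% 'down') are predictable.
rng = np.random.default_rng(0)
outcomes = rng.choice(["up", "down"], size=10000, p=[0.75, 0.25])
print("fraction 'up':", np.mean(outcomes == "up"))  # close to 0.75, but each single run is random
```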
All kinds of known particle physics are quantum physics, and have this element of chance. This is also experimentally very well established.
The connection between classical physics and quantum physics is the following: I can turn any kind of classical system into a quantum system by adding the element of chance, which we also call quantum fluctuations. This does not necessarily go the other way around. We know theories where quantum effects are so deeply ingrained that we cannot remove them without destroying the theory entirely.
Let me return to the Higgs. For the Higgs part in the standard model, we can write down a classical system. When we then want to analyze what happens at a particle physics experiment, we have to add the quantum fluctuations. And here enters the concept of triviality.
Adding quantum fluctuations is not necessarily a small effect. Indeed, quantum fluctuations can profoundly and completely alter the nature of a theory. One possible outcome of adding quantum fluctuations is that the theory becomes trivial. This technical term means the following: if I add quantum fluctuations to the theory, the resulting theory describes particles which do not interact at all, no matter how intricately they interact in the classical version. Hence, a trivial quantum theory describes nothing interesting. What really drives this phenomenon depends on the theory at hand. The important thing is that it can happen.
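The standard hand-waving argument for why this can happen (it is one-loop perturbation theory, so suggestive rather than a proof, and the exact coefficient depends on conventions) goes like this: the self-interaction strength grows with the energy scale, and insisting that the theory makes sense up to arbitrarily high scales forces the interaction to vanish.

```latex
% One-loop running of a quartic self-coupling (schematic; c > 0, its exact
% value depends on conventions):
\[
  \mu \frac{d\lambda}{d\mu} \;\approx\; c\,\lambda^{2}
  \qquad\Longrightarrow\qquad
  \lambda(\mu) \;=\; \frac{\lambda(\mu_{0})}{1 - c\,\lambda(\mu_{0})\,\ln(\mu/\mu_{0})} .
\]
% The coupling blows up at a finite scale. Turned around: demanding that the
% theory remains sensible up to arbitrarily high scales forces
% lambda(mu_0) -> 0, i.e. a non-interacting, trivial, theory.
```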
For the Higgs part of the standard model, there is the strong suspicion that it is trivial, though we have no full proof for (or against) it. Since we cannot solve the theory exactly, we cannot (yet) be sure. The only thing we can say is that if we add only a part of the quantum fluctuations - only a part of the so-called radiative corrections - the theory still makes sense. Hence it is not trivial to decide whether the theory is trivial, to reiterate the pun.
Assuming that the theory is trivial, can we escape this? Yes, it is possible: adding something to a trivial theory can always make it non-trivial. So, if we knew for sure that the Higgs theory is trivial, we would know for sure that there is something else. On the other hand, trivial theories are annoying for a theoretician, because you either have nothing or have to artificially remove part of the quantum fluctuations. This is what annoys me right now with the Higgs, especially as I have to deal with it in my own research.
Thus, this is one of the many reasons people would prefer to soon discover more than 'just' the Higgs.
Thursday, July 17, 2014
Why continue into the beyond?
I have just returned from the very excellent 37th International Conference on High-Energy Physics. However, as splendid as the event itself was, it was in a sense bad news: no results which hint at anything substantial beyond the standard model, except for the usual suspects, statistical fluctuations. This does not mean that there is nothing - we know there is more for many reasons. But in an increasingly frustrating sequence of years, all our observational and experimental results keep pushing it beyond our reach. Even for me as a theorist, there is just not enough substantial information to do more than vaguely speculate about what could be.
Nonetheless, I just wrote that I want to venture into this unknown beyond, and in force. Hence it is reasonable - in fact necessary - to pose the question: why? If I do not know, and have too little information, is there any chance to hit the right answer? The answer to this: probably not. But...
Actually, there are two buts. One is simply curiosity. I am a theorist, and I can always pose the question of how something works, even without having a special application or situation in mind. Though this may just end up as nothing, it would not be the first time that the answer to a question has been discovered long before the question. In fact, the single most important building block of the standard model, so-called Yang-Mills theory, was discovered by theorists almost a decade before it was recognized to be the key to explaining the experimental results.
But this is not the main reason for me to venture in this direction. The main reason has to do with the experience I have had with Higgs physics - that despite appearances there is often a second layer to the theory. Such a second layer has in this case shifted the perception of how the things we describe in theory correlate with the things we see in experiment. Many proposed theories beyond the standard model, especially those which have caught my interest, are extensions of the Higgs part of the standard model. It thus stands to reason that similar statements hold true in their cases. However, whether they hold true, and how they work, cannot be fathomed without looking at these theories. And that is what I want to do.
Why should one do this? Such subtle questions seem at first not really related to experiment. But understanding how a theory really works should also give us a better idea of what kind of observations such a theory can actually deliver. And this is where it becomes very interesting for experiment. Since we do not currently know what to expect, we need to think about what we could expect. This is especially important as looking in every corner requires far more resources than are available to us in the foreseeable future. Hence, any insight into what kind of experimental results a theory can yield is very important for selecting where to focus.
Of course, my research alone will not be sufficient for this. Since it could easily be that I am looking at the 'wrong' theory, it would not be a good idea to put too much effort into any single one. But when there are many theoreticians working on many theories, and many theories all say that it is a good idea to look in a particular direction: then we have guidance for where to look. Then there seems to be something special about this direction. And if not, then we have excluded a lot of theories in one go.
As one person in a discussion session at the conference (I could not figure out who precisely) put it aptly: "The time of guaranteed discoveries is over." This means that now that we have all the pieces of the standard model, we cannot expect to find a new piece any time soon. All our indirect results even tell us that the next piece will be much harder to find. Hence, we are facing a situation last seen in physics in the second half of the 19th century and the beginning of the 20th century: there are only some hints that something does not fit. And now we have to go looking, without knowing in advance how far we will have to walk. Or in which direction. This is probably more of an adventure than the last decades, where things were essentially happening on schedule. But it also requires more courage, since there will be many more dead ends (or worse) along the way.
Wednesday, June 11, 2014
Building a group
Those who follow my twitter feed have already seen that I will become a full professor at the Institute of Physics at the University of Graz in Austria from October on. This also implies that I will be building up a group to continue my research on the standard model of particle physics and beyond.
I would like to use this opportunity to write a little bit about what happens behind the scenes right now, rather than about the outcome of the research we are doing. Such a 'behind-the-scenes' look is also interesting, I think, since it shows how research is done, not only what it finds. Hence, I may write such entries more often in the future. If you have any comments or thoughts on this, I would be happy to read your opinion.
One of the major tasks for me right now is to decide what the research focus of this group will be, and how I will organize the resources I will have for this purpose. These are the most important steps, as I have to decide which kind of positions I will open (especially for PhD and master theses), as well as which kind of computers I have to arrange for. Since I will now have the resources to work on more projects than before, this means structuring my activities so that I do not get lost. It would not do to think that, now that I have more possibilities, I should just jump into many new fields, putting each and every member of the group on a separate topic. In the long run, I will have to take care that methods and techniques developed and used in my group are handed down from one generation of students to the next. This will only work if the topics they are applied to are sufficiently close to each other. That is particularly true for my numerical and technically more involved analytical tools.
As a consequence, I decided to establish two main directions. One will be concerned with neutron stars. The other will be looking at Higgs physics, continuing my research on combinations of Higgs particles.
But, of course, just staying with what I already do will not be sufficient. Especially as there are so many interesting problems offering themselves, like the one of electric charge I wrote about last time. Since the technology I have accumulated so far is more than sufficient to start working on it, without needing to first invent a new approach, it is a natural way to expand. The same is true for the so-called technicolor theories, on which I did some exploratory work in the past. Hence I decided to make the first new additions to my research fields in these areas. Also, both can already be done with the present infrastructure, so I do not need to wait for a new one.
Now that I have decided what to do, I still have to make it happen. What are the steps I will have to take? The most important goal is to have some students with whom I can work on all these exciting research topics. This means opening positions, announcing them, and finding the right people for them. This includes not only PhD students, but also master and bachelor students. To reach them, I will have to set up a new web page and other structures to show what I am working on. Just as important will be giving good lectures in general, as well as special lectures on interesting topics. I am looking forward to this part very much. I have already started to prepare the first lecture I will give in the winter term. It will focus on supersymmetry, another candidate for something beyond the standard model.
In the long run, I will have to acquire third-party funding, to enlarge the group beyond what I will have when I start. That, and the accompanying work, is worth a blog entry on its own, so I will not write about it now. I will return to it at a later time.
Tuesday, May 27, 2014
Why does chemistry work?
This seems to be an odd question to ask in a blog about particle physics. But, as you will see, it actually connects to a very deep problem of particle physics. A problem to which I am currently turning my attention. Hence, in preparation of things to come, I write this blog entry.
So, where is the connection? Well, chemistry is actually all about the electromagnetic interaction. One of its most important features is that atoms are electrically neutral. This is only possible if the positive charge of the atomic nucleus is exactly as large as the negative charge of its surrounding electrons. The electrons are elementary particles, as far as we know. The atomic nucleus, however, is ultimately made up of quarks. So the last statement boils down to the fact that the total electric charge of the quarks in the nucleus has to compensate that of the electrons. Sounds simple enough. And this is in fact something which has been established very precisely in experiment. The compensation is much better than one part in a billion - within our best efforts, the cancellation appears perfect.
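For the simplest atom, hydrogen, the bookkeeping looks like this (the quark charges are the measured values; the point is only how precisely they conspire):

```latex
% A proton consists of two up quarks and one down quark; hydrogen is one
% proton plus one electron.
\[
  Q_{p} = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1 ,
  \qquad
  Q_{e} = -1 ,
  \qquad
  Q_{\mathrm{H}} = Q_{p} + Q_{e} = 0 .
\]
% The quark charges, which a priori have nothing to do with the electron,
% add up to exactly the opposite of the electron's charge.
```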
The problem is that it is not necessary, according to our current knowledge. Quarks and electrons are very different objects in particle physics. So, why should they carry electric charge such that this balancing is possible? The answer to this is, as often: We do not know. Yet.
When we just look at electromagnetism, there is actually no theoretical reason why they should have balanced electric charges. Electromagnetism would work in exactly the same way if they did not, if they had arbitrarily different charges. Of course, atoms would then no longer be neutral. And chemistry would then work quite differently.
If there is no simple explanation in the details, one should look at the big picture. Perhaps it helps. In this case it does, but this time not in a very useful way.
Electromagnetism does not stand on its own. It is part of the standard model of particle physics. And here things start to become seriously bizarre.
I am a theorist. Hence, the internal consistency of a theory is something quite important to me. The standard model of particle physics turns out to be consistent as a theory only if very precise relations exist between the various particles in it and the charges they carry. The exact cancellation of electric charges in the atoms we observe is one of the very few possibilities for how the standard model can work theoretically.
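One way these consistency conditions (technically, the cancellation of so-called anomalies) show up is that the electric charges of all particles in one generation, counting each quark three times for its three colors, have to add up to zero - and with the observed charges they do:

```latex
% One generation: up and down quark (each in three colors), electron, neutrino.
\[
  3\cdot\left(+\tfrac{2}{3}\right) + 3\cdot\left(-\tfrac{1}{3}\right) + (-1) + 0
  \;=\; 2 - 1 - 1 + 0 \;=\; 0 .
\]
```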
So, did we explain it now? Unfortunately, no. "The theory should work" is not an adequate requirement for a description of nature. The game goes the other way around. Nature dictates, and our theory must describe it. Experiment rules theory in physics.
So the fact that we need this cancellation is troublesome: we only know that we need it. It is just there; we cannot explain it with what we know.
So this is the point to enter speculation. We know theories in which the electromagnetic charge cancellation is not 'just there', but follows immediately from the structure of the theory. The best-known examples of such theories are the so-called grand-unified theories. In these, there is a super-force, and the known forces of the standard model are just different facets of this super-force. The fact that electrons and quarks have cancelling charges in such a theory then simply stems from the fact that everything originates in this one super-force.
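To give a flavour of how this works, take the simplest and oldest such theory, based on the group SU(5), purely as an example: the down quark and the electron end up in one and the same multiplet of the super-force, and the charges within a multiplet must add up to zero, which ties the quark charge directly to the electron charge.

```latex
% In SU(5) the anti-down quark (three colors) sits in one multiplet with the
% electron and its neutrino, and the charges within a multiplet must sum to zero:
\[
  3\,Q_{\bar d} + Q_{e} + Q_{\nu} = 0
  \;\;\Longrightarrow\;\;
  Q_{\bar d} = +\tfrac{1}{3},
  \quad\text{hence}\quad
  Q_{d} = -\tfrac{1}{3} .
\]
% The quark charge is no longer an independent input: it is fixed by the
% electron charge through the structure of the super-force.
```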
It is possible to write down a theory of such a super-force which is compatible with our current experiments. But so is the standard model. Hence, only if we find an experimental result for which a theory of such a super-force predicts a distinctly different behavior than the standard model can we be sure that it exists. This is not (yet?) the case.
At the same time, we so far know relatively little about many aspects of such a theory. This is the reason I am starting to get interested in it. In particular, there are still conceptual questions we need to answer. I will write about them in future entries, because it will be quite interesting and challenging to understand these things.
Friday, April 11, 2014
News about the state of the art
Right now, I am at a workshop in Benasque, Spain. The workshop is called 'After the Discovery: Hunting for a non-standard Higgs Sector'. The topic is essentially this: we now have a Higgs. How can we find what else is out there? Or at least assure ourselves that it is currently out of our reach? That there is something more is beyond doubt. We know too many cases where our current knowledge is certainly limited.
I will not go on describing everything that is presented at this workshop. It is too much. And there are certainly other places on the web where this is done. In this entry I will therefore just describe how what is discussed at the workshop relates to my own research.
One point is certainly what the experiments find. At such specialized workshops, you can get many more details of what they actually do. Since any theoretical investigation is to some extent approximate, it is always good to know what is known experimentally. Hence, if I get a result in disagreement with experiment, I know that there is something wrong. Usually, it is the theory, or the calculations performed. Some assumption being too optimistic, some approximation being too drastic.
Fortunately, so far nothing is at odds with what I have. That is encouraging. Though no reason for becoming overly confident.
The second aspect is to see what other peoples do. To see, which other ideas still hold up against experiment, and which failed. Since different people do different things, combining the knowledge, successes and failures of the different approaches helps you. It helps not only in avoiding too optimistic assumptions or other errors. But other people's successes provide new input.
One particular example at this workshop is for me the so-called 2-Higgs-doublet models. Such models assume that besides the known Higgs there exists another set of Higgs particles. Though this is not obvious, the 'doublet' in the name indicates that they have four more Higgs particles, one of them being just a heavier copy of the one we know. I have recently considered looking into such models as well, though for quite different reasons. Here, I learned how they can be motivated for entirely different reasons, and especially why they are so interesting for ongoing experiments. I also learned much about their properties, and what is known (and not known) about them. This gives me quite some new perspectives, and some new ideas.
Ideas I will certainly realize once I am back. For the curious, a small sketch of the counting behind the 'four more Higgs particles' follows below.
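This is my own back-of-the-envelope illustration, not something from the workshop: two complex doublets carry eight real field components, three of which are 'eaten' to give the W and Z bosons their mass, leaving five physical Higgs particles - the known one plus four more.

```python
# Counting the physical Higgs particles in a 2-Higgs-doublet model.
doublets = 2
real_fields = doublets * 4        # each complex doublet has 4 real field components
goldstones = 3                    # 3 components are 'eaten' by the W+, W- and Z
physical_higgses = real_fields - goldstones
extra = physical_higgses - 1      # beyond the one Higgs already discovered

print(physical_higgses, extra)    # 5 4
```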
Finally, collecting all the talks together, they draw the big picture. They tell me where we are now. What we know about the Higgs, what we do not know, and where there is room (left) for much more than just the 'ordinary' Higgs. It is an update of my own knowledge about particle physics. And it finally delivers the list of what will be looked at in the next couple of months and years. I now know better where to look for the next result relevant for my research, and relevant for the big picture.
Wednesday, March 12, 2014
Precision may matter
The latest paper I have produced is an example of an often overlooked part of scientific research: It is not enough to get a qualitative picture. Sometimes the quantitative details modify or even alter the picture. Or, put more bluntly, sometimes precision matters.
When we encounter a new problem, we usually first try to get a rough idea of what is going on. It starts with a first rough calculation. Such an approach is often not very precise. Still, it creates a first qualitative picture of what is going on. This picture may be rough around the edges, and often does not perfectly fit the bill. But it usually gets the basic features right. Performing such a first estimate is often not too serious a challenge.
But once this rough picture is there, the real work begins. Almost fitting is not quite the same as fitting. This is the time when we need to get quantitative. This implies that we need to use more precise, probably different, but almost certainly more tedious methods. These calculations are usually not as simple, and a lot of work is involved. Furthermore, we usually cannot solve the problem perfectly in the first round of improvement. We get things a bit rounder at the edges, and the picture normally starts to fit better. Still not everywhere, but better. Often, a second, and sometimes many more, rounds are necessary.
Fine, you may say. If things are improving, why bother doing even better? Is not almost fitting as good as fitting? But it is not quite the same. The best-known examples are found in history. At the beginning of the 20th century, the picture of physics seemed to fit the real world almost perfectly. There were just some small corners where it seemed to still require a bit of polishing. These small problems actually led to one of the greatest changes in our understanding of the world, giving birth to both quantum physics and the theory of relativity. Actually, today we are again in a similar situation. Most of what we know, especially the standard model, fits the bill very nicely. But we still have some rough patches. This time, we have learned our lesson, and keep digging into these rough patches. Our secret hope is, of course, that a similar disruption will occur, and that our view of the world will be fundamentally changed. Whether this will be the case, or whether we just have to slightly augment things, we do not yet know. But it will surely be a great experience to figure it out.
Returning to my own research, it is precisely this situation which I am looking at. However, rather than looking at the whole world, I have just been looking at a very simplified theory. One that involves only the gluons. This is a much simpler theory than the standard model. Still, it is so complicated that we were not (yet) able to solve it completely. We have made great progress, though, and it seems that we almost got it right. Still, also here, some rough edges remain. In this paper, I am looking precisely at these edges, and just checking how rough they really are. I am not even trying to round them further. I am not the first to do it, and many other people have looked at them in one way or the other. However, doing it more than once, and especially from slightly different angles, is important. It is part of a system of checks and balances to avoid errors. It is also true in science: nobody is perfect. And though there are many calculations which are correct, even the greatest mind may fail sometimes. And therefore it is very important to cross-check any result.
In this particular case, everything is correct. But, by looking more precisely, I found some slight deviations. These were previously not found, as precision is almost always also a question of the amount of resources invested. In this case, the resources are mostly computing time, and I have just poured a lot of it into this problem. These slight deviations do not require a completely new view of the whole theory. But they change some aspects slightly. This may sound like not much. But if they should be confirmed, they provide closure in the following sense: Previously, some conclusions remained dangling, and seemed not to be at ease with each other. There were some ways out, but the previously known results rather suggested a more fundamental problem. My new contribution shifts these old results slightly, and makes them more precise. The new interpretation now fits much better with the suspected ways out than with a fundamental problem. Hence, looking closer has in this case improved our understanding.
Hence, theoretical physics often has much in common with a detective's work. We start with a suspicion. But then tedious work on the details is required to uncover more and more of the whole picture, until either the original suspicion is confirmed, or it shifts to a different suspect, who may even have been completely overlooked in the beginning. However, at least normally nobody tries to kill us if we come too close to the truth.
Monday, February 3, 2014
The trouble with new toys
You may remember that one of the projects I am working on is understanding so-called neutron stars. These are the remnants of heavy stars, which die in a gigantic explosion called a supernova. One of the main problems with understanding these neutron stars is that it is far too expensive to simulate them in detail using computers. In our research we try to circumvent this problem by using not the original theory describing neutron stars, but a slightly modified version. For this modified theory, we actually can do simulations. So is everything shiny now? No, unfortunately not. About these problems we have recently published a new paper. Today, I will outline what we did in this paper.
So what is actually the problem? The problem is that some of our theories are not linear. What does linear mean? Well, a theory is called linear if, when we apply an external input to it, the effect it has on the theory is of (roughly) the same size as whatever we applied. In contrast, for anything which is non-linear, the response can be much larger, or much smaller, than whatever we applied. Unfortunately, the strong interaction, which is responsible for neutron stars, is non-linear. Hence, even though we modified it just a little bit, we can potentially have very strong changes. Therefore, we had to make sure that our modification did not have unplanned and strong effects. This task led to the mentioned paper.
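To illustrate what I mean by linear and non-linear, here is a toy example of my own (it has nothing to do with the actual equations of the strong interaction): nudge the input by ten percent and see how the output reacts.

```python
def linear(x):
    return 2.0 * x     # the output changes by the same fraction as the input

def nonlinear(x):
    return x ** 3      # the output can change by a very different fraction

for f in (linear, nonlinear):
    change = (f(1.1) - f(1.0)) / f(1.0)
    print(f.__name__, round(100 * change, 1), "%")
# linear 10.0 % versus nonlinear 33.1 %: a small modification can be amplified
```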
The main question we have to answer is: If the theory is so sensitive to modifications, were the effects of our modifications still harmless enough? Can we still learn something?
The answer is, as always: it depends. To judge the similarities, we have looked at the hadrons, the particles built up from quarks and gluons. In the strong force, the masses of these hadrons follow a very special pattern. In particular, there are some unusually light ones, then a few intermediate ones, and then, already quite heavy, the first one which plays an important role in everyday life: the proton, the nucleus of the hydrogen atom. We found that in our modified theory this pattern repeats itself. This is already a good sign. However, we also found some indications that not all is well. Some of the lighter particles, especially the lightest ones, differ in a number of details from their counterparts in nature.
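For orientation, this is roughly the pattern in nature (approximate measured masses, rounded and quoted purely for illustration):

```python
# Approximate hadron masses in MeV, showing the pattern described above.
hadrons = {
    "pion":   140,   # unusually light
    "kaon":   495,   # still rather light
    "rho":    775,   # intermediate
    "omega":  783,   # intermediate
    "proton": 938,   # already quite heavy - the nucleus of hydrogen
}
for name, mass in sorted(hadrons.items(), key=lambda item: item[1]):
    print(f"{name:7s} {mass:4d} MeV")
```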
Since we are mostly interested in neutron stars, we also did the calculations at large densities. There, we saw that the slightly different properties of the lightest particles indeed play a role. At quite small densities, we observe a behavior which we are reasonably sure will not occur in nature. So is then all lost? It does not seem so. While at these densities the behavior is different, this will probably not play an important role for the densities we are really interested in. And indeed, at higher densities the theory behaved in line with expectations: it seems to behave in the way we would guess based on the observations of real neutron stars, and on general arguments. This is quite encouraging. Still, we also encountered two more challenges. One is that to make a definite statement, we will need much more precision: some of what we see is sensitive to details. We need to understand this better, and this will require many more calculations.
The other one is that we are still not quite sure whether some special kind of different particle plays too important a role. This special kind of particle is similar to the proton, but not present in nature; it is only a feature of the modified theory. This is a so-called hybrid. In contrast to the proton, which consists of three quarks and no gluons, it is made of one quark and three gluons. There are certain technical reasons why this particle could be a problem when trying to understand neutron stars. So far, it has escaped detection in our calculations. We have to find it to make really sure what is going on. This will be a challenge.
Fortunately, even in the worst-case scenario of both problems, what we did will not be irrelevant. On the one hand, it was a genuinely new theory we looked at, and we have already learned very much about how theories in general work. On the other hand, what we created will also serve as a benchmark for other methods. If someone creates a new method to get to the neutron star's core, she or he can test it against our simulations, to build confidence in it.
Wednesday, January 15, 2014
For each yes and no there is a perhaps
The last time, I was writing about my research on the Higgs. In particular, I was writing about how we tested perturbation theory using numerical simulations. I was quite optimistic back then that by now I would have results which could be of either of two types: either perturbation theory is a good description, or it is not.
By now we have finished this project, and you can download the results from the arXiv. The arXiv is a server where you can find essentially every published result in particle physics of the last twenty years, legally and free of charge. But let's get back to our results. As I should have expected, things turned out differently. Instead of a clear yes or no answer I got a perhaps.
The original question was under which circumstances perturbation theory can be applied. It appears to be a simple enough question. Originally, it looked like this would depend on the relative size of the Higgs mass compared to the W and Z masses. And yes, it does. But. We found more.
We found different regimes. One is where the Higgs is lighter than the W and Z. Of course, this is not the situation we encounter in nature, where it is about 50% heavier. But as theoreticians, we are allowed to play this kind of game. Anyway, in this case we confirmed what was already indirectly known from other investigations: Perturbation theory seems not to work. Ever. While the first statement is not too surprising, the second one is. Naively, one expected that if the interactions between the Higgs and the W and Z are of certain relative sizes, perturbation theory would still work. We did not find any hint of that. Is this then already a no? Unfortunately not. As I have described earlier, it is not so easy to relate a simulation to reality. Even if it is only a fantasy version of reality, as in this case. Hence, we cannot be sure that we have exhausted all possibilities. The only thing we can say for sure is that there are cases where perturbation theory does not work. Perhaps there is something more, some other cases. And thus there is the first perhaps.
The situation gets even more interesting when the Higgs is heavier than the W and Z, but lighter than twice their mass. In this regime, perturbation theory is expected to be pretty good. At least here, we find a rather clear answer: Perturbation theory does indeed work well. Wherever we looked, we did not find anything to the contrary. Of course, again we cannot exclude that somewhere else there is a different case. But so far, everything seems to be fine.
When the Higgs finally hits the magic limit of twice the W and Z masses, something unexpected happens. This limit is particularly interesting because above it, the Higgs can decay into W and Z bosons. The expectation was that perturbation theory is still valid, at least until reaching several times the W and Z mass. But here, we found something odd. We found both possibilities, depending on the relative interaction strengths. In the one case, perturbation theory still works for a long while. In the other, perturbation theory starts to fail already a little bit above this critical mass. We do not yet really understand what is going on there, and what really characterizes the two different cases. We are working on this right now. But whatever it is, it is different from what we expected. And this once more teaches us to always expect that our naive expectations will not be fulfilled. Things remain full of surprises, even if you think you understood them.
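To gather the three regimes in one place, here is a compact restatement of the findings above (rounded masses in GeV; the classification is just shorthand for the text, not a calculation):

```python
M_W, M_Z = 80, 91                  # rounded W and Z masses in GeV
M_GAUGE = (M_W + M_Z) / 2          # a rough common scale for both

def regime(m_higgs):
    if m_higgs < M_GAUGE:
        return "lighter than W/Z: perturbation theory seems to fail throughout"
    if m_higgs < 2 * M_GAUGE:
        return "up to twice the W/Z mass: perturbation theory works well"
    return "above twice the W/Z mass: depends on the relative interaction strengths"

print(regime(125))                 # the observed Higgs falls into the middle regime
```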