Some time ago, I presented one of the methods I use: the so-called perturbation theory. This fancy name stands for the following idea: if we know something, and we add just a little disturbance (a perturbation) to it, then things should not change too much. If that is the case, we can work out the consequences of the perturbation systematically. Mathematically, this is done by first calculating the direct impact of the perturbation (the leading order). Then we look at the first correction, which involves not only the direct effect but also the simplest indirect effect, and so on.
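Schematically, and only to show the generic structure (the symbol g simply stands for the strength of the perturbation), a quantity O is expanded as

\[
O(g) \;=\; O_0 \;+\; g\,O_1 \;+\; g^2 O_2 \;+\; \dots ,
\]

where O_0 is the known, undisturbed answer, g O_1 is the leading order, g^2 O_2 the next correction, and so on. The smaller g is, the less the later terms should matter.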
Back then I already wrote that, nice as the idea sounds, it is not possible to describe everything with it, even though it works very beautifully in many cases. This leaves us with the question of when it does not work. We cannot know this exactly: that would require knowing the theory perfectly, and then there would be no need for perturbation theory in the first place. So how can we know what we are doing?
The second problem is that in many cases anything but perturbation theory is technically extremely demanding. Thus the first thing one checks is the simplest one: whether perturbation theory makes sense by itself. Indeed, it turns out that perturbation theory usually starts to produce nonsense if we increase the strength of the perturbation too far. This clearly indicates the breakdown of our assumptions, and thus the breakdown of perturbation theory. However, this is a best-case scenario. Hence, one wonders whether this approach could be fooling us. It could be that the approximation breaks down long before things get critical, so that it first produces bad (or even wrong) answers before it produces nonsensical ones.
This seems like serious trouble. What can be done about it? There is no way to deal with it from inside perturbation theory. One way is, of course, to compare with experiment. However, this is not always the best choice. On the one hand, it is always possible that our underlying theory actually fails; then we would misinterpret the failure of our ideas about nature as a failure of our methods. One would therefore like to have a more controllable way. In addition, we often reduce complex problems to simpler ones to make them tractable. But the simpler problems often have no direct realization in nature, and thus we have no experimental access to them. Then this route is not available either.
Currently, I find myself in such a situation. I want to understand, in the context of my Higgs research, to what extent perturbation theory can be used. Here, the perturbation is usually the mass of the Higgs. The question then becomes: up to which Higgs mass is perturbation theory still reliable? Perturbation theory itself predicts its own failure at no more than about eight times the mass of the observed Higgs particle. The question is whether this estimate is adequate, or whether it is too optimistic.
How can I answer this question? Well, here enters my approach of not relying on a single method alone. It is true that with methods other than perturbation theory we cannot calculate as much, simply because anything else is too complicated. But if we concentrate on a few questions, enough resources are available to calculate things differently. The important task is then to make a wise choice: a choice from which one can read off the desired answer - in the present case, whether perturbation theory applies or not - and at the same time something one can afford to calculate.
My present choice is to look at the relation between the W boson mass and the Higgs mass. If perturbation theory works, there is a close relation between the two, provided everything else is adjusted in a suitable way. The perturbative result can already be found in textbooks for physics students. To check it, I am using numerical simulations of both particles and their interactions. Even this simple question is an expensive endeavor, and several ten thousand days of computing time (we always count how much time it would take a single computer to do all the work by itself) have been invested. The results I have found so far are intriguing, but not yet conclusive. However, in just a few more weeks, it seems, the fog will finally lift and at least something can be said. I am looking forward to this date with great anticipation, since one of two things will happen: something unexpected, or something reassuring.
Monday, November 4, 2013
How to mix ingredients
I have written earlier that one particularly powerful way to do calculations is to combine different methods. In this entry, I will be a bit more specific. The reason is that we have just published a proceedings contribution in which we describe our progress in preparing such a combination. A proceedings contribution is, by the way, just a fancy name for a write-up of a talk given at a conference.
How to combine different methods always depends on the methods. In this case, I would like to combine simulations and the so-called equations of motion. The latter describe how particles move and how they interact. Because there is usually an infinite number of them - a particle can interact with another one, or two, or three, or... - one can usually not solve them exactly. That is unfortunate. You may then ask how one can be sure that the results obtained after some approximation are still useful. The answer is that this is not always clear. This is where the combination comes in.
The solutions to the equations of motion are not numbers, but mathematical functions. These functions describe, for example, how a particle moves from one place to another. Since this travel depends on how far apart the two places are, it must be described by a function. Similarly, there are results which describe how two, three, four... particles interact with each other, depending on how far apart they are. Since all of these functions describe how two or more things are related to each other - something called a correlation in theoretical physics - they are known as correlation functions.
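In formulas, the simplest such object is, schematically, a two-point function,

\[
G(x,y) \;=\; \langle\, \phi(x)\,\phi(y)\,\rangle ,
\]

which describes a particle (denoted generically by the field \(\phi\)) traveling between the places x and y and depends only on how far the two are apart. Correlation functions for three, four, or more particles are built in the same way with more fields.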
If we were able to solve the equations of motion exactly, we would get the exact answer for all these correlation functions. We are not, and thus we will not get the exact ones, but different ones. The important question is how different they are from the correct ones. One possibility is, of course, to compare with experiment. But it is in all cases a very long way from the start of a calculation to results which can be compared with experiment. There is therefore a great risk that a lot of effort is invested in vain. In addition, if there is a large disagreement, it is often not easy to identify where the problem lies.
It is therefore much better if there is a possibility to check this much earlier. This is where the numerical simulations come in. Numerical simulations are a very powerful tool. Up to some subtle, but likely practically irrelevant, fundamental questions, they essentially simulate a system exactly. However, the closer one moves to reality, the more expensive the calculations become. Most expensive is having very different scales in a simulation, e.g. large and small distances simultaneously, or large and small masses. There are also some properties of the standard model, like parity violation, which so far cannot be simulated in any reasonable way at all. But within these limitations, it is possible to calculate pretty accurate correlation functions.
And this is how the methods are combined. The simulations provide the correlation functions. These can then be used with the equations of motion in two ways. Either they serve as a benchmark, to check whether the approximations make sense. Or they can be fed into the equations as a starting point for solving them. Of course, the aim is then not merely to reproduce them, as otherwise nothing would be gained. Both possibilities have been used very successfully in the past, especially for hadrons made from quarks, and to understand how the strong force influenced the early universe or plays a role in neutron stars.
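To give a flavor of this workflow, here is a deliberately oversimplified toy sketch (these are not our actual equations; the "simulated" input and the equation itself are made up purely for illustration): an equation is solved by iteration, using an externally provided correlation function both as the starting point and as the benchmark to compare against.

```python
import numpy as np

# Deliberately oversimplified sketch of the workflow (not our actual equations):
# an equation of motion is solved self-consistently by iteration, D = F[D],
# using an externally provided ("simulated") correlation function both as the
# starting point and as the benchmark to compare against at the end.
p = np.linspace(0.1, 10.0, 100)              # momenta where the function is tabulated

def toy_equation(D):
    # a made-up self-consistency equation standing in for the real one
    return 1.0 / (p**2 + 0.5 / (1.0 + D))

simulated = 1.0 / (p**2 + 0.4)               # pretend this came from a simulation

D = simulated.copy()                         # feed the simulated data in as a start
for _ in range(200):
    D = 0.5 * D + 0.5 * toy_equation(D)      # damped fixed-point iteration

deviation = np.max(np.abs(D - simulated))    # compare back to the simulated benchmark
print(f"largest deviation from the simulated input: {deviation:.3f}")
```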
Our aim is to use this combination for Higgs physics. What we did, and have shown in the proceedings, is to calculate the correlation functions of a theory including only the Higgs and the W and Z. This will now form the starting point for bringing the leptons and quarks into the game using the equations of motion. We do it this way to avoid the problem with parity, and to include the very different masses of the particles. This will be the next step.
Monday, October 14, 2013
Looking for something out of the ordinary
A few days ago, I was at a quite interesting workshop on "Anomalous quartic gauge couplings". Since the topic is interesting in itself and has significant relevance to my own research on Higgs physics, I would like to spend this entry on it. Let me start by describing what the workshop was about, and what is behind its rather obscure title.
At the experiments at the LHC, what we do is smash two protons into each other. Occasionally, one of two things may happen. The first possibility is that in the process each proton emits a W or Z boson, the carriers of the weak force. These two particles may collide with each other, but emerge unchanged. That is what we call elastic scattering. Afterwards, these bosons are detected - or rather, they are detected indirectly through their decay products. The second possibility is that one of the protons emits a very energetic W or Z, which then splits into three of these. These are then, again indirectly, observed. This is called an inelastic process. Of course, much more usually goes on, but I will concentrate here on the most important part of the collision.
At first sight, the two types of events may not seem to have a lot to do with each other. But they have something important in common: in both cases four particles are involved. They may be any combination of Ws and Zs. To avoid further excessive use of 'W or Z', I will just call them all Ws. The details of which is which are not really important right now.
The fact that four are involved already explains the 'quartic' in the name of the workshop. In the standard model of particle physics, what happens can occur mainly in one of two ways. One is that two Ws form another particle for some time. This may be another W or Z, or also a Higgs. Or all four can interact with each other at the same time. The latter is called a quartic gauge coupling, which completes the second half of the workshop's name.
Now what is wrong with this, given that we talk about something anomalous? Actually, everything is all right with it in the standard model. But we are not only interested in what there is in the standard model, but also in whether there is something else. If there is something else, what we observe should not fit with what we calculate in the standard model alone. Such a difference would thus be anomalous. Hence, if we were to measure a quartic gauge coupling different from the one we calculate in the standard model, we would call it an anomalous quartic gauge coupling. And then we would be happy, since we had found something new. The workshop was therefore about looking for such anomalous quartic gauge couplings. So far, we have not found anything, but we have not yet seen many such events - far too few to make any statement. But we will record many more in the future, and then we can really say something. At this workshop we were thus essentially preparing for the new data, which we expect to come in once the LHC is switched on again in early 2014.
What has this to do with my research? Well, I try to figure out whether two Higgs particles, or two Ws, can form a bound state. If such bound states exist, they would form in exactly such processes, for a very short time, and afterwards decay again into the final Ws. If they form, they contribute to what we know: they are part of the standard model. So to see something new, we have to subtract their contribution, if it is there. Otherwise, it could be mistaken for a new, anomalous effect. Most important for me at the workshop was to understand what is measured, and how such bound states could contribute. It was also important to learn what can be measured at all, since any experimental constraint I can get helps me improve my calculations. This kind of connection between experiment and theory is very important for us, as we are still far from the point where our results are perfect. The developments in this area therefore remain very important for my own research project.
Wednesday, September 11, 2013
Blessing and bane: Redundancy
We have just recently published a new paper. It is part of my research on the foundations of theoretical particle physics. To fully appreciate its topic, it is necessary to say a few words about an important technical tool: redundancy.
Most people have heard the term in connection with technology. If you have a redundant system, you have the same system two or more times. If the first one fails, the second takes over, and you have time to do repairs. Redundancy in theoretical physics is a little bit different, but it serves the same end: to make life easier.
When one thinks about a theory in particle physics, one thinks about the particles it describes. But if we wrote down a theory using only the particles we can observe in experiment, these theories would very quickly become very complicated - too complicated, in fact, in most cases. Thus people found a trick very early on: if you artificially add something more to the theory, it becomes simpler. Of course, we cannot really just add it, because otherwise we would have a different theory. What we actually do is start with the original theory, add something additional, perform our calculations, and then remove from the final result what we added. In this sense, we have added a redundancy to our theory. It is a mathematical trick, nothing more. We imagine a theory with more in it, and by removing the surplus at the end, we arrive at the result for our original problem.
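Very schematically (this is just the general pattern, not the specific construction of our paper), the quantity of interest is computed in the enlarged theory, as a function of whatever was added, and only at the very end is the addition removed again:

\[
O_{\text{original}} \;=\; \lim_{j \to 0} O(j) ,
\]

where j stands for the strength of the added piece. The whole trick rests on the assumption that this limit really leads back to the original theory.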
Modern particle physics would not be imaginable without such tricks. The power of redundancy is one of the first things we learn when we start out in particle physics. One particularly powerful version is to add additional particles. Another is to add something external to the system - like opening a door. It is the latter kind we had to deal with.
Now, what has this to do with our work? Well, redundancies are a powerful tool, but one has to be careful with them nonetheless. As I wrote, at the end we remove everything we added too much. The question is: can this always be done? Or does everything become so entwined that it is no longer possible? It was exactly such a question that we looked at.
To do this, we considered a theory of only gluons, the carriers of the strong force. There has been a rather long debate in the scientific community about how such gluons move from one place to another. A consensus has only recently started to emerge. One of the puzzling things was that certain properties of their movement could be proven mathematically. Surprisingly, numerical simulations did not agree with this proof. So what was wrong?
It was an example of not reading the fine print carefully enough. The proof made some assumptions. Making assumptions is not bad; it is often the only way of making progress: make an assumption, and see whether everything fits together. Here it did not. When studying the assumptions, it turned out that one of them had to do with such redundancies.
What was done was essentially to add an artificial sea of such gluons to the theory. At the end, this sea was made to vanish, to recover the original result. The assumption was that the sea could be removed without affecting how the gluons move. What we found in our research is that this is not correct. When the sea is removed, the gluons cling to it in such a way that for any sea, no matter how small, they still move differently. Thus, removing the sea little by little is not the same as starting without the sea in the first place. The introduction of the sea was therefore not permissible, and hence the discrepancy. There have been a number of further results along the way, where we learned a lot more about the theory and about gluons, but this was the essential result.
This may seem a bit strange. Why should an extremely tiny sea have such a strong influence? After all, I am talking here about a difference of principle, not just of numbers.
The reason can be found in a very strange property of the strong force, which is called confinement: a gluon cannot be observed individually. When the sea is introduced, it offers the gluons the possibility to escape into the sea - a loophole of confinement. It is then a question of principle: any sea, no matter how small, provides such a loophole. Thus there is always an escape for the gluons, and they can therefore move differently. At the same time, if there is no sea to begin with, the gluons remain confined. Unfortunately, this loophole was buried deep in the mathematical formalism, and we first had to find it.
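A textbook toy example of how an arbitrarily small addition can change the long-distance behavior in a qualitative way (this is only an analogy, not the actual gluon calculation) is the function \(e^{-\epsilon x}\):

\[
\lim_{x\to\infty} e^{-\epsilon x} \;=\; 0 \quad\text{for every fixed } \epsilon>0 ,
\qquad\text{whereas}\qquad
e^{-\epsilon x}\big|_{\epsilon=0} \;=\; 1 \quad\text{for every } x .
\]

Sending \(\epsilon\) to zero at the end is thus not the same as never having introduced it. In our case the sea plays the role of \(\epsilon\), and the large distances over which the gluons move play the role of x.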
This taught us an important lesson: while redundancies are a great tool, one has to be careful with them. If you do not introduce your redundancies carefully enough, you may alter the system in a way too substantial to be undone. We now know what to avoid, and can move on to making further progress.
Tuesday, August 6, 2013
Picking the right theory
One of the more annoying facts of modern particle physics is that our theories cannot stand entirely on their own. All of them have some parameters which we are not (yet) able to predict. The standard model of particle physics has some thirty-odd parameters which we cannot determine by theoretical investigation alone. Examples are the masses of the particles. We need to measure them in experiments.
Well, you may say that this is not too bad. If we just have to make a couple of measurements and can then predict all the rest, this is acceptable. We just need to find as many independent quantities as there are parameters, and we are done. In principle, this is correct. But, once more, it is the gap between experiment and theory that makes life complicated. Usually, what we can easily measure is something that is very hard to compute theoretically. And, of course, something we can calculate easily is hard to measure, if it can be measured at all.
The only thing left is therefore to do our best and get as close to each other as possible. As a consequence, we usually do not know the parameters exactly, but only within a certain error. This may still not seem too bad. But here enters something we call theory space. This is an imaginary space in which every point corresponds to one particular set of parameters for a given theory.
When we change the parameters of our theory, we do not just change some quantitative values: we are really changing the theory itself. Even a very small change may produce a large effect. This is not only a hypothetical situation - we know explicit examples where this happens. Why is this so?
There are two main reasons for this.
One is that the theory depends very sensitively on its parameters. Thus, a slight change of the parameters induces a significant change in everything we can measure. Finding precisely those parameters which reproduce a certain set of measurements then requires a very precise determination, possibly to many digits of precision. This is a so-called fine-tuning problem. An example is the standard model itself. The mass of the Higgs is fine-tuned in the standard model: it depends strongly on the parameters. This is the reason why we were not able to fully predict its mass before we measured it, even though we had many, many more measurements than the standard model has parameters. We could just not pinpoint it precisely enough - only to within a rough factor of ten.
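A crude toy illustration of what "many digits of precision" means (the numbers are made up purely for illustration and have nothing to do with the actual standard-model relations): if an observable arises as the small difference of two large, parameter-dependent contributions, a tiny uncertainty in the parameter turns into a huge uncertainty in the prediction.

```python
# Toy illustration of fine-tuning (made-up numbers, not the standard model):
# an observable that is the small difference of two large contributions.
def observable(parameter):
    big_contribution = parameter**2        # grows quickly with the parameter
    almost_cancelling = 1.0e8 - 25.0       # a fixed piece that nearly cancels it
    return big_contribution - almost_cancelling

exact = observable(1.0e4)                       # the tuned value gives 25.0
slightly_off = observable(1.0e4 * (1 + 1e-6))   # parameter off by one part in a million
print(exact, slightly_off)                      # 25.0 versus about 225: an 800% error
```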
A related problem is the so-called hierarchy problem of modern particle physics. It is the question why the parameters are fine-tuned to just the value we measure in experiment, and not slightly off, which would give a completely different result. Today this especially takes the form of the question "Why is the Higgs particle rather light?"
But the standard model is not as bad as it gets. In so-called chaotic models the situation is far worse, and any de-tuning of the parameters leads to an exponential effect. That is the reason why these models are called chaotic: because everything is so sensitive, there is no structure at first sight. Of course, one can, at least in principle, calculate this dependence exactly. But any ever so slight imprecision is punished by an exponentially large effect. This makes chaotic models a persistent challenge for theoreticians. Fortunately, particle physics is not of this kind, but such models are known to be realized in nature nonetheless; some fluids, for example, behave like this under certain conditions.
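A standard toy example of such chaotic sensitivity (taken from mathematics, not from particle physics) is the logistic map: two starting values that differ only in the tenth decimal place bear no resemblance to each other after a few dozen steps.

```python
# The logistic map: a standard toy model for chaotic sensitivity.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)    # one step of the logistic map
    return x

a = iterate(0.3, 60)             # reference starting value
b = iterate(0.3 + 1e-10, 60)     # starting value "de-tuned" in the tenth decimal place
print(a, b)                      # after 60 steps the two results bear no resemblance
```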
The other reason is that theory space can have separate regions. Inside these regions, so-called phases, the theory shows quite distinct behavior. A simple example is water. If you take temperature and pressure as parameters (they are not really fundamental parameters of the theory of water, but this is just for the sake of the argument), then two such phases are solid and liquid. In a similar manner, theories in particle physics can show different phases.
If the parameters of a theory are very close to the boundary between two phases, a slight change will push the theory from one phase to the other. Thus, here too it is necessary to determine the parameters very precisely.
In my research on Higgs physics, I am currently facing such a challenge. The problem is that the set of basic parameters is not unique - you can always exchange one set for another, as long as their number is kept and there is no trivial connection between two of the new parameters. In particular, depending on the method used, certain sets are more convenient. However, this implies that for every method it is necessary to redo the comparison with experiment. Since I work on Higgs physics, the dependence is strong, as I said above. It is thus highly challenging to make the right pick. And currently some of my results depend significantly on this determination.
So what can I do? Instead of pressing on, I have to take a step back and improve my connection to experiment. Such a cycle is quite normal. First you make your calculation, getting some new and exciting result, but with large errors. To reduce the errors, you have to go back and do some groundwork to improve the basis. Then you return to the original result and obtain an improved one. Then you do the next step, and repeat.
Right now, I am in such a process not only for Higgs physics. Our project on the interior of neutron stars is also currently undergoing such an improvement. This work is rarely very exciting and rarely yields unexpected results. But it is very necessary, and one cannot get around it if one wants reliable results. This, too, is part of a researcher's (everyday) life.
Friday, May 17, 2013
What could the Higgs be made of?
One of the topics I am working on is how the standard model of particle physics can be extended. The reason is that it is intrinsically, though not practically, flawed. Therefore, we know that there must be more. However, right now we have only very vague hints from experiments and astronomical observations as to how we have to improve our theories. Therefore, many possibilities are currently being explored. The one I am working on is called technicolor.
A few weeks ago, my master's student and I published a preprint. A preprint is, by the way, a paper which is still being reviewed by the scientific community to check whether it is sound. Preprints play an important role in science, as they contain the most recent results. Anyway, in this preprint we worked on technicolor. I will not repeat too much about technicolor here; that can be found in an earlier blog entry. The only important ingredient is that in a technicolor scenario one assumes that the Higgs particle is not an elementary particle. Instead, just like an atom, it is made from other particles. In analogy to the quarks, which build up the proton and other hadrons, these constituents of the Higgs are called techniquarks. Of course, something has to hold them together. This must be a new, unknown force, called the techniforce. It is imagined to be again similar, in a very rough way, to the strong force. Consequently, the carriers of this force are called technigluons, in analogy to the gluons of the strong force.
In our research we wanted to understand the properties of these techniquarks. Since we do not yet know whether technicolor really exists, we can also not be sure what it would eventually look like. In fact, there are many possibilities for how technicolor could look - so many that it is not even simple to enumerate them all, much less to calculate all of them simultaneously. But since we are anyhow not sure which is the right one, we are not yet in a position where it makes sense to be overly precise. What we really wanted to understand is how techniquarks work in principle. Therefore, we selected just one of the many possibilities.
Now, as I said, techniquarks are imagined to be similar to quarks. But they cannot be the same, because we know that the Higgs behaves very differently from, say, a proton or a pion. It is not possible to get this behavior without making the techniquarks profoundly different from the quarks. One possibility is to make them something in between a gluon and a quark, which is called an adjoint quark. The term 'adjoint' refers to a certain mathematical property, but the details are not so important here. So that is what we did: we assumed our techniquarks to be adjoint quarks.
The major difference shows up when we make these techniquarks lighter and lighter. For the strong force, we know what happens: we cannot make quarks arbitrarily light, because they gain mass from the strong force. This appears to be different for the theory we studied: there, you can make them arbitrarily light. This had long been suspected from indirect observations. What we did was, for the first time, to investigate the techniquarks directly. What we saw is that when they are rather heavy, there is a similar effect as for the strong force: the techniquarks gain mass from the force. But once they get light enough, this effect ceases. Thus, it should be possible to make them massless. This possibility is necessary to make a Higgs out of them.
Unfortunately, because we used computer simulations, we could not really go to massless techniquarks. This is far too expensive in terms of the computing time required (and in fact part of the simulations were provided by other people, to whom we are very grateful). Thus, we could not make completely sure that this is the case. But our results point strongly in this direction.
So is this a viable new theory? Well, we have shown that a necessary condition is fulfilled. But there is a big difference between necessary and sufficient. For a technicolor theory to be useful, it should not only have a Higgs made from techniquarks and no mass generation from the techniforce. It must also have further properties to be consistent with what we know from experiment. The major requirement concerns how strong the techniforce is over how long a distance. There was some indirect earlier evidence that for this theory the techniforce is not quite strong enough over sufficiently long distances. Our calculations provide a more direct way of determining this strength, and unfortunately it appears that we have to agree with these earlier calculations.
Is this the end of technicolor? Certainly not. As I said above, technicolor is foremost an idea. There are many possibilities for how to implement this idea, and we have checked just one. Is it then the end of this version? We have to agree with the earlier investigations that it appears so in this pure form. But in this purest form we have neglected a lot, like the rest of the standard model. There is still a significant chance that a more complete version could work. After all, the qualitative features are there; it is just that the numbers are not perfectly right. Or perhaps a minor alteration may already do the job. And this is something people continue to work on.
Friday, March 22, 2013
(Un-)Dead stars and particles
I have already written about some aspects of my research on neutron stars. But what is the problem with them? And what do I want to understand?
First: What are they? If you have a sufficiently massive star, it will not die in a fizzle, like our sun, but will end violently: it will explode in a supernova. In this process, a lot of its mass gets compressed at its core. If this core is not too heavy, a remainder - a corpse of the star - is left behind: a neutron star. If it is too heavy, the remainder will instead collapse further into a black hole. But that is not the interesting case for me.
Such a neutron star is actually far from dead. It continues its life after its death. But it no longer emits light and warmth; instead it usually emits x-rays, neutrinos, and occasionally so-called gravitational waves.
Second: What do I want to understand about them? Neutron stars are enigmatic objects. Their size is about ten kilometers, not more than a larger city. At the same time they have about one to two times the mass of our sun. Thus, they are incredibly dense. In fact, they are so dense that there is no room for atoms; they consist of atomic nuclei. That is the case in the outer layers of the neutron star, perhaps the first kilometer or so. Going further inward, the density increases. Then everything gets so tight that it is no longer possible to separate the nuclei, and they start to overlap. In addition, whatever electrons there still were have already, after the first few meters, been soaked up to change almost all of the protons into neutrons. In a certain sense, it is just one big atomic nucleus.
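To get a feeling for "incredibly dense", here is the simple back-of-the-envelope average density for round numbers in the range quoted above (1.4 solar masses within a radius of ten kilometers; this of course ignores the actual density profile):

```python
import math

# Back-of-the-envelope average density of a neutron star, using round numbers
# (1.4 solar masses, 10 km radius); the real star has a density profile,
# so this is only an average.
solar_mass = 1.989e30                      # kg
mass = 1.4 * solar_mass                    # a typical neutron star mass
radius = 10.0e3                            # meters, about ten kilometers

volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume
print(f"{density:.1e} kg per cubic meter") # about 7e17 kg/m^3, a few times
                                           # the density of an atomic nucleus
```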
And even further in? Well, nobody knows. However, there are many speculations. Do we have a different kind of matter there, so-called strange matter? Such matter is obtained when one starts to replace the up and down quarks in the neutrons by strange quarks. Or do the neutrons also dissolve, leaving just a bunch of quarks? And if so, how would such quark matter behave? Would it be a fluid, a superconducting metal, or possibly even a crystal?
And that is where my research starts. What I want to understand is which form this matter takes. And I am not alone in this. I have just organized a workshop which partly focused on this subject, and we have worked hard on getting a better understanding of what goes on in there. This becomes even more interesting as more and more results come in from astronomical observations of neutron stars. They provide us with a lot of indirect evidence on how the matter inside a neutron star's core must behave. But if we understand the strong force correctly, we should be able to calculate this.
The central problem in these calculations is the density. The standard approach in particle physics (and in physics in general) is to simplify the problem and study its parts in isolation. That works quite well in many cases, like the Higgs. However, the properties of a neutron star are determined not by the individual neutrons, but by how they interact with each other when there are many of them. Thus, by breaking the system apart you destroy what you want to study. Instead, you have to study the neutrons - or more appropriately the quarks - all together. This increases the complexity severely, and it is what stops us in our tracks, particularly because it is hard to find efficient ways to calculate anything for a real neutron star.
One way around this is to try to understand it indirectly, by studying a simpler system. The alternative is to simplify the system itself. This can be done by making things a bit more fuzzy. The fuzziness is achieved by not tracking each and every quark and what it precisely does. Instead, groups of quarks are tracked, and their activities are averaged. This can be a very simple step. For example, one can treat a neutron not as made from three quarks, but as made from one quark and a remainder, and then approximate that remainder by a single particle with simple properties. Such an approximation already gives a rough estimate of how things work. Of course, if one wants to get the last bit of precision out of the theory, one has to return to the original three quarks.
But the problem is complicated, and thus one follows this strategy: create fewer and simpler objects first, and then refine them again. These simpler objects are often called 'effective degrees of freedom', because they effectively mimic many complicated objects. Then we solve the simpler theory describing them, the so-called effective theory. Afterwards, we go back: we refine the effective theory and the simple particles again, reintroducing the complications bit by bit, and solving them along the way. And that is where we are currently - still far away from understanding a neutron star as a set of elementary particles, as quarks and gluons, but closing in, step by step.
First: What are they? If you have a sufficiently massive star, it will not die in a fizzle, like our sun, but it will end violently. It will explode in a supernova. In this process, a lot of its mass gets compressed at its core. If this core is not too heavy, a remainder, a corpse of the star, will remain: A neutron star. If it is too heavy, the remainder will, however, collapse further into a black hole. But this is not the interesting case for me.
Such a neutron star is actually far from dead. It continues its life after its death. But it no longer emits light and warmth, but usually x-rays, neutrinos, and occasionally so-called gravitational waves.
Second: What do I want to understand about them? Neutron stars are enigmatic objects. Their size is about ten kilometers, not more than a larger city. At the same time they have about one to two times the mass of our sun. Thus, they are incredible dense. In fact, they are so dense that there is no place for atoms, but they consist out of the atomic nuclei. That is the case in the outer layers of the neutron star, perhaps the first kilometer or so. Going further inward, the density increases. Then everything gets so tight, that it is no longer possible to separate the nuclei, and they start to overlap. In addition, whatever electrons there still were have already after the first few meters been soaked up to change almost all of the protons into neutrons. In a certain sense, it is just one big atomic nucleus.
And even further in? Well, nobody knows. There are, however, many speculations. Do we find there a different kind of matter, so-called strange matter? Such matter is obtained when one starts to replace the up and down quarks in the neutrons with strange quarks. Or do the neutrons themselves dissolve, leaving just a bunch of quarks? And if so, how would such quark matter behave? Would it be a fluid, a superconducting metal, or possibly even a crystal?
And that is where my research starts. What I want to understand is which form this matter takes. And I am not alone in this. I have just organized a workshop which partly focused on this subject, and we have worked hard on getting a better understanding of what goes on in there. This becomes even more interesting as more and more results come in from astronomical observations of neutron stars. They provide us with a lot of indirect evidence on how the matter inside the neutron star's core must behave. But if we understand the strong force correctly, we should be able to calculate this.
The central problem involved in these calculations is the density. The standard approach in particle physics (and in physics in general) is to simplify the problem and study its parts in isolation. That works quite well in many cases, like the Higgs. However, the properties of a neutron star are determined not by the individual neutrons, but by how they interact with each other when there are many of them. Thus, by breaking the system apart you destroy the very thing you want to study. Instead, you have to study the neutrons - or, more appropriately, the quarks - all together. This increases the complexity severely, and it is what stops us in our tracks, particularly because it is hard to find efficient ways to calculate anything for a real neutron star.
One way around this is to try to understand it indirectly, by studying a simpler system. The alternative is to simplify the system itself. This can be done by making things a bit fuzzier. The fuzziness is achieved by not tracking each and every quark and what it precisely does. Instead, groups of quarks are tracked, and their activities are averaged. This can be a very simple step. For example, instead of treating a neutron as made from three quarks, one can treat it as made from one quark plus a rest, and then approximate the rest by a single particle with simple properties. Such an approximation already gives a rough estimate of how things work. Of course, if one wants to get the last bit of precision out of the theory, one has to return to the original three quarks.
But the problem is complicated, and thus one follows this strategy: create fewer and simpler objects first, and then refine them again. These simpler objects are often called 'effective degrees of freedom', because they effectively mimic many complicated objects. We then solve the simpler theory describing them, the so-called effective theory. Afterwards, we go back: we refine the effective theory and the simple particles again, reintroducing the complications bit by bit, and solving them along the way. And that is where we are currently: still far away from understanding a neutron star as a set of elementary particles, as quarks and gluons, but closing in, step by step.
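For readers who like to see how such a refinement can be organized on paper, one common piece of bookkeeping is the following generic schematic - I should stress that this is an assumption on my part about how to illustrate the idea, not the specific construction used in our work. The effective theory is written as a simple leading part plus correction terms, each suppressed by the scale \Lambda at which the averaged-over details start to matter:

    \mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{simple}} + \sum_i \frac{c_i}{\Lambda^{d_i - 4}}\, \mathcal{O}_i

The coefficients c_i encode the details that were averaged away; keeping more terms of the sum corresponds to reintroducing the complications bit by bit.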
Wednesday, February 6, 2013
(Almost) Nothing is forever: Decays
A truly remarkable fact is that most elementary particles do not live forever. Most of them exist only for a very brief moment in time. The only particles for which we are reasonably sure that they live forever are photons. Then there are a number of particles which live almost forever, meaning that their life expectancy is much, much longer than the current age of the universe. The electron, the proton, and at least one neutrino belong to this group, and so do many nuclei. That is good, because almost all matter around us is made of these particles. It is rather reassuring that it will not disintegrate spontaneously anytime soon.
However, the vast majority of elementary particles do not have a very long life expectancy. For some, their lifetime is still long enough that we can observe them directly. But for particles like the infamous Higgs, this is not the case.
If we want to learn something about such particles, we have to cope with this problem. A very important help is that a particle cannot just vanish without a trace. When a particle's life ends, it decays into other, lighter particles. For example, the Higgs can decay into two photons. Or into two light quarks. Or into some other set of particles, as long as the sum of their masses is smaller than the Higgs' mass. And these decays follow very specific rules. Take again the Higgs. If it is sitting somewhere at rest and then decays into an electron and a positron, these two particles are not completely free in their actions. Very basic rules of physics - the conservation of energy and momentum - require them to move away from the position of the Higgs in opposite directions, with the same speed. And this speed is also fixed.
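To make the 'fixed speed' concrete: for a particle of mass M decaying at rest into two daughters of masses m_1 and m_2, the textbook kinematics (a standard result, nothing specific to my own calculations) fixes the momentum of each daughter to

    p = \frac{1}{2M}\sqrt{\left[M^2-(m_1+m_2)^2\right]\left[M^2-(m_1-m_2)^2\right]}

For a Higgs decaying into an electron and a positron, the daughter masses are tiny compared to M, so each of the two carries away essentially half of the Higgs' rest energy.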
These laws are what helps us. If we detect the electron and the positron, and measure their speeds and directions, we can reconstruct that they came from a Higgs. This is a bit indirect, and one has to measure things rather precisely. But it is possible. And it was exactly this approach with which we (very likely) detected the Higgs last year.
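As a minimal sketch of what such a reconstruction looks like, assuming one already has the measured energies and momenta of the two decay products (toy code for illustration only, not the actual analysis chain of the experiments):

    import math

    def invariant_mass(e1, p1, e2, p2):
        # Reconstruct the parent mass from two decay products.
        # e1, e2: measured energies; p1, p2: momentum three-vectors
        # (all in the same units, natural units with c = 1).
        e = e1 + e2
        px = p1[0] + p2[0]
        py = p1[1] + p2[1]
        pz = p1[2] + p2[2]
        return math.sqrt(max(e * e - (px * px + py * py + pz * pz), 0.0))

    # Toy example: two back-to-back photons of 62.5 GeV each
    # reconstruct to a parent of 125 GeV - a Higgs candidate.
    print(invariant_mass(62.5, (62.5, 0.0, 0.0), 62.5, (-62.5, 0.0, 0.0)))

If many measured pairs pile up at the same reconstructed mass, that is the sign that they all come from the decay of the same kind of particle.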
Of course, there are a lot of subtleties involved. Not every decay that seems permitted at a first, superficial look can actually happen. And so on. But all of this follows very precise rules, and the experimental physicists have become very good at using these rules to their advantage.
And here my own research enters. One of my projects is about particles which are built from Higgs particles and Ws and Zs. If I want my experimental colleagues to check whether my theory is right, I have to tell them what to look for. Since those complicated particles are not expected to live very long, they will decay. Hence, I should be able to tell how they decay, and into which particles with what speed. This gives what we call a signature: the traces an unstable particle leaves at the end of its life. Working this out is somewhat complicated, as one not only has to understand the structure of the particles, but also their dynamics. Fortunately, people have developed very sophisticated methods, and I am now using them in simulations. With them I am starting to obtain results on how my complicated particles decay into two Ws. In addition, I learn how often and how effectively this happens, and how long the original particle lives.
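The quantities behind 'how often' and 'how long' are the decay widths; the standard textbook bookkeeping (nothing special to my project) reads

    \Gamma_{\text{tot}} = \sum_i \Gamma_i, \qquad \text{BR}_i = \frac{\Gamma_i}{\Gamma_{\text{tot}}}, \qquad \tau = \frac{\hbar}{\Gamma_{\text{tot}}}

so the partial width \Gamma_i into, say, two Ws tells how often that particular decay happens, and the larger the total width, the shorter the particle lives.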
Of course, as always, my first results are not the final answer yet. But things look encouraging. And, best of all, I am starting to find hints that the complicated states may even live long enough that, just maybe, they could be seen by my experimental colleagues. This is important, because a particle which lives too short a time requires a higher precision to observe than we currently have. But it is still a long road before anything is certain. As always.
Wednesday, January 9, 2013
Taking a detour helps
Almost all relevant physical systems are pretty complicated. One I am working on is what the interior of so-called neutron stars looks like. Neutron stars are what is left of stars somewhat heavier than our sun, but not too heavy, after they have gone supernova. In a neutron star the atoms collapse due to the strong gravitation. Only the atomic nuclei remain, and they are packed very densely. Neutron stars have roughly one to two times the mass of our sun, but a radius of only about ten kilometers, barely larger than a small city. These stellar remnants are very interesting for astronomy and astrophysics. But I am more interested in what happens in their innermost core.
Deep inside the neutron star, everything is packed even more tightly. In fact, even the atomic nuclei are no longer separated, but are mashed into one big mess. Because they are so densely packed, even the nucleons overlap. Thus their substructure, the quarks, may become the most important players.
But nobody knows this yet for sure. It has been a challenge to understand such matter for more than thirty years. It is a joint effort of theoreticians like me, of people smashing atomic nuclei into each other in accelerators, so-called heavy-ion experiments, and of people observing actual neutron stars with telescopes of many kinds.
In general, when quarks come into play, simulations have very often been helpful. But it turns out that we are not (yet) clever enough to simulate a neutron star's interior. The algorithms which we have developed to deal with single nuclei are just too inefficient to deal with so many of them. The particular technical obstruction involved is called the sign problem. It has been known for decades, without us being able to remove it so far.
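In slightly more technical terms, and as a schematic of the standard lattice setup rather than of any particular code: the simulations sample configurations with a weight obtained from the partition function

    Z(\mu) = \int \mathcal{D}U \, \det M(U,\mu) \, e^{-S_g[U]},

where S_g is the gluon action, \det M the quark determinant, and \mu the chemical potential encoding the density. Monte Carlo methods need this weight to be a real, non-negative probability. At zero density it is, but at non-zero \mu the determinant obeys \det M(\mu)^* = \det M(-\mu^*) and is in general complex, so it can no longer serve as a probability. That, in a nutshell, is the sign problem.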
An alternative has been other methods and models, but we would like to have a combination of approaches, to be more sure of our results.
One possibility has been to circumvent the problem. We have looked at theories which are similar to the strong nuclear force, but slightly modified. The modifications were such that numerical simulations became possible. We made this detour for two reasons: we hoped that we could learn something in general, and we wanted to use the results to provide tests for our models and other methods. In a way we cheated: we evaded the problem by solving a simpler one, and hoped that we would learn enough from this to solve the original problem, or at least gain new insight.
However, so far our detours have had serious drawbacks. The replacement theories were each able to solve some problems, but never all at the same time. Some had the problem that the mass creation by the strong force did not work in the right way, which would yield wrong answers for the size and mass of a neutron star. In others the nucleons were not repulsive enough, so that all neutron stars would collapse further into so-called quark stars, much smaller than neutron stars and made from quarks.
And here my own research comes into play. Just recently we found another such theory, which we call G2-QCD for very technical reasons. Irrespective of the name, it has neither of these problems. However, it is still not QCD. For example, besides the nucleons it has further exotic objects flying around. But it is the theory closest to the original one investigated so far. And we can actually simulate it, something we have done just very recently. The results are very encouraging, though we are still far from a final answer for neutron stars. Nonetheless, we now have an even stronger test for all the models and results from other methods. This should provide even more constraints on our understanding of neutron stars, though an enormous amount of work still has to be done. But this is research: mostly progress by small steps. And we thus continue with this theory.
And this is just one example in my research where it is worthwhile to take a detour; this is true for physics in general: often the study of a simpler problem helps to reveal the solution of the original one. Even if we have not (yet again) succeeded, we have made progress.