I wrote some time ago about the immense importance of diversity and multiculturality for research: how important exchange is, both by going abroad and by having people from many different places around oneself, at home probably even more so. And how indispensable this is to make research possible, especially at the utmost frontiers of human knowledge.
This is, and remains, true. There is no progress without diversity. In this entry, I would like to write a bit about what we did recently to foster and structure such exchange.
The insight that diversity is important is fortunately also embraced by the European Union. As a consequence, it offers various kinds of support towards this goal. One of these are the so-called COST networks. These involve countries, rather than individuals, with the intention of fostering exchange across borders.
Since mid-October, Austria has been a member of one such network within one of my core research areas: the physics governing quarks and gluons at high temperatures and densities, which is relevant for how the early universe evolved and for the properties of supernovas and neutron stars in today's universe. In this network I am one of the two representatives of Austria, i.e. I speak on behalf of the scientists in Austria who are members of the network. Representatives of the (so far) 26 member countries met in Brussels in mid-October to discuss how this exchange should be organized in the future. One important part of this agenda, also very much encouraged by the European Union, is the promotion of minorities and gender equality, and the support of scientists from countries with less economic support for science.
At this first meeting, which was actually only about these and other organizational issues and not about scientific content, we established an agenda for how the funds available to the network will be prioritized to achieve this goal. This includes the possibility for members of the aforementioned groups to receive travel support for visits to meetings and collaboration partners, and/or preferential participation in events. We want them to be part of this effort as fully as possible. We need them, and their perspectives, to make progress, and also to reevaluate our own views and endeavors.
Of course, there were also many other issues to be discussed, many of them rather administrative in nature. There were also debates when opinions differed on the ideal way forward. But, in a democratic process, these were resolved in a way to which everyone could commit.
It was certainly quite an uplifting experience to sit together with scientists from so many different countries, not with the aim of finding an answer to a physics problem, as at a conference, but rather with the goal of getting people together, of connecting. In the roughly four years this structure will run, we will have several more meetings. The ultimate goal will be a joint series of so-called white papers. White papers are statements describing the most urgent and challenging problems in a given branch of research. Their aim is to structure future research and to make it more efficient by separating the relevant from the irrelevant questions.
These white papers will then be a truly international effort. People from almost thirty countries will provide a shared view on some of the most challenging problems at the frontier of human knowledge, questions important for our origin and for the world we live in. Without such a network, this would surely not happen. Rather, the many groups in different countries would be more isolated, and there would be many smaller groups trying to achieve the same purpose. Without such a broad, international basis and connection, the outcome would certainly not contain such a wide collection of perspectives. And only when enough views come together may we eventually identify the point on which all eyes rest, giving us a clue where the key to the next big leap forward could be hidden.
Thursday, November 17, 2016
Monday, October 31, 2016
Redundant ghosts
A recurring topic in our research is the joys and sorrows of the redundancies in our description. As I have discussed several times, introducing these redundancies makes life much easier. But this can turn against you if you need to make approximations, which, unfortunately, is usually the case. Still, their benefits outweigh the troubles.
One of the remarkable consequences of these redundancies is that they even affect our description of the most fundamental particles in our theories. Here, I will concentrate on the gluons of the strong interaction (or QCD): on the one hand, because they play a very central role in many phenomena; but, more importantly, because they are the simplest particles exhibiting the problem. This follows essentially the old strategy of divide and conquer: solve the simplest problem first, and continue from there.
Still, even the simplest case is not easy. The reason is that the redundancies introduce auxiliary quantities. These act like imaginary particles. These phantom particles are also called ghosts because, just like ghosts, they do not really exist; they are only there in our description. More precisely, they are called Faddeev-Popov ghosts, honoring the two physicists who first introduced them.
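For readers who would like to see how such ghosts enter on paper, here is a minimal sketch for the simplest case of a covariant gauge (conventions differ between textbooks, and I am only showing the schematic structure): fixing the gauge adds to the Lagrangian a gauge-fixing term and a ghost term,

    \[
      \mathcal{L}_{\text{gauge fixing}} + \mathcal{L}_{\text{ghost}}
        = -\frac{1}{2\xi}\,\big(\partial_\mu A^{a\,\mu}\big)^2
          + \bar{c}^{\,a}\,\big(-\partial^\mu D_\mu^{ab}\big)\,c^{\,b}\,,
    \]

where A is the gluon field, \xi is a gauge parameter, D is the covariant derivative, and c and \bar{c} are the ghost fields. The ghosts appear only through this added term: they are bookkeeping for the redundancy, not new physical particles.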
Thus, whenever we calculate quantities we can actually observe, we see no trace of these ghosts. But directly computing an observable quantity is often hard, especially when you want to use pencil-and-paper calculations. So we work stepwise. And in such intermediate steps the ghosts do show up. Because they encode information differently, rather than adding information, their presence also affects the description of 'real' particles in these intermediate stages. Only at the very end would they drop out, if we could do the calculations exactly.
Understanding how this turns out quantitatively is something I have been working on for almost a decade, with the last previous results available almost a year ago. Now, I have made a little bit of progress. But progress on this problem is rather tough to come by, so there are usually no big breakthroughs. It is much like grinding in an MMO: you need to accumulate little bits of information to perhaps, eventually, understand what is going on. And this is once more the case.
I recently presented the results of the latest steps at a conference. A summary is freely available in a write-up for the conference proceedings.
I found a few new bits of information. One was that we certainly underestimated the seriousness of the problem. That is mainly because most such investigations have so far been done using numerical simulations. Even though in the end we would rather do pencil-and-paper calculations, checking that these work is easier with numerical simulations.
However, numerical simulations are expensive, and therefore one is limited in what one can do. I have extended this effort and was able to get a glimpse of the size of the problem. I did this by simulating not only the gluons, but also the extent to which we can probe the problem. By seeing how the problem depends on how far we can probe it, I could estimate how big it will at least eventually become.
Actually, the result was somewhat unsettling, even though the situation is not hopeless. One of the reasons why it is not hopeless is the way the problem distributes itself. It turned out that the aforementioned ghosts actually carry the brunt of it. This is good, as they cancel out in the end. Thus, even if we cannot solve the problem completely, it will not have as horrible an impact as was imaginable. Hence, we can have a little more confidence that what we do actually makes sense, especially when we calculate something observable.
You may say that we could use experiments to check our approximations. That appears easier. After all, this is what we want to describe - or is it? Well, this is certainly true when we are thinking about the standard model. But fundamental physics is nowadays geared more towards the unknown, and as a theoretician I also try to predict the unknown. But if my predictions are invalidated by my approximations, what good can they be? Knowing that they are not quite as affected as they could be is therefore more than valuable; it is necessary. I can then tell the experimentalists with more confidence where they should look, with at least some justified hope that I am not leading them on a wild-goose chase.
Wednesday, September 28, 2016
Searching for structure
This time I want to report on a new bachelor thesis which I supervise. In this project we try to understand a little better the foundations of so-called gauge symmetries. In particular, we address some of the groundwork we have to lay for understanding our theories.
Let me briefly outline the problem: most of the theories in particle physics include some kind of redundancy, i.e. there are more things in them than we actually see in experiments. The surplus stuff is not real. It is just a kind of mathematical device to make calculations simpler. It is like a ladder which we bring along to climb a wall: we come, use the ladder, and are on top. The ladder we take with us again, and the wall remains as it was. The ladder made life simpler. Of course, we could have climbed the wall without it, but it would have been more painful.
Unfortunately, theories are more complicated than wall climbing.
One of the issues is that we usually cannot solve our problems exactly. And as noted before, this can mess up the removal of the surplus stuff.
The project the bachelor student and I are working on has the following basic idea: if we can account for all of the surplus stuff, we should be able to tell whether our approximations did something wrong. It is like repairing an engine: if parts are left over afterwards, that is usually not a good sign. Unfortunately, things are again more complicated. For the engine, we just have to look through our workspace to see whether anything is left over. But how do we do so for our theories? This is precisely the project.
So, the project is essentially about listing stuff. We start out with something we know is real and important. For this, we take the simplest thing imaginable: nothing. Nothing means in this case just an empty universe; no particles, no reactions, no nothing. That is certainly a real thing, and one we want to include in our calculations.
Of this nothing, there are also versions where some of the surplus stuff appears, like a ghost image of particles. We actually know how to add small amounts of ghost stuff, like a single particle in a whole universe. But these situations are not very interesting, as we know how to deal with them. No, the really interesting stuff happens if we fill the whole universe with ghost images; with surplus stuff which we add just to make life simpler, at least originally. The question now is: how can we add this stuff systematically? As the ghost stuff is not real, we know it must fulfill special mathematical equations.
Now we do something which is very often done in theoretical physics: we use an analogy. The equations in question are not unique to the problem at hand, but also appear in quite different circumstances, although with a completely different meaning. In fact, the same equations describe how in quantum physics two particles are bound to each other. In quantum physics, depending on the system at hand, there may be one or more different ways in which this binding occurs. You can count them, and you get a set which one can label by whole numbers. Incidentally, this feature is where the name quantum originates from.
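To make the quantum-mechanics side of this analogy a bit more concrete (this is just the textbook statement, nothing specific to our problem): finding the ways two particles can bind in a potential V means solving the time-independent Schrödinger equation for their relative motion,

    \[
      -\frac{\hbar^2}{2m}\,\nabla^2\psi_n(x) + V(x)\,\psi_n(x) = E_n\,\psi_n(x)\,,
    \]

and the bound-state solutions come as a discrete set, which is why they can be labeled by whole numbers n = 0, 1, 2, and so on.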
Returning to our original problem, we draw the following analogy: enumerating the ghost stuff can be cast into the same form as enumerating the ways of binding two particles together in quantum mechanics. The actual problem is only to find the quantum system which is the precise analogue of our original problem. Finding this is still a complicated mathematical problem. Finding just one solution for one example is the aim of this bachelor thesis. But already one would be a huge step forward, as so far we do not have any at all. Having it will probably be like having a first stepping stone for crossing a river: from understanding it, we should be able to understand how to generate more. Hopefully, we will eventually understand how to create arbitrarily many such examples, and thus solve our enumeration problem. But this is still in the future. For the moment, we take the first step.
Tuesday, June 21, 2016
How to search for dark, unknown things: A bachelor thesis
Today, I would like to write about a recently finished bachelor thesis on the topic of dark matter and the Higgs. Though I will also present the results, the main aim of this entry is to describe an example of such a bachelor thesis in my group. I will try to follow up with similar entries in the future, to give those interested in working in particle physics an idea of what one can do already at a very early stage of one's studies.
The framework of the thesis is the idea that dark matter could interact with the Higgs particle. This is a serious possibility, as both objects are somehow related to mass. There is also not yet any substantial reason why this should not be the case. The only unfortunate problem is: how strong is this effect? Can we measure it, e.g. in the experiments at CERN?
In a master thesis we are looking into the dynamical features of this idea. That work is ongoing, and something I will certainly write about later. Knowing the dynamics, however, is only the first step towards connecting the theory to experiment. To do so, we need the basic properties of the theory. This input is then put through a simulation of what happens in the experiment. Only this result is really interesting for experimental physicists. They then look at what the various imperfections of the experiments change, and from that they can conclude whether they will be able to detect something. Or not.
In the thesis, we did not yet have the results from the master student's work, so we parametrized the possible outcomes. This mainly meant having the mass of the dark matter particle and the strength of its interaction with the Higgs as free parameters to play with. This gave us what we call an effective theory. Such a theory does not describe every detail, but it is sufficiently close to study a particular aspect, in this case how dark matter should interact with the Higgs at the CERN experiments.
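For those who like formulas, the generic textbook way to write such an interaction is a so-called Higgs portal (I am sketching the standard form here; the parametrization actually used in the thesis may differ in its details):

    \[
      \mathcal{L}_{\text{int}} = -\,\lambda_{p}\,\big(\phi^\dagger\phi\big)\,\big(\chi^\dagger\chi\big)\,,
    \]

where \phi is the Higgs field, \chi the dark matter field, and \lambda_p the strength of the interaction. Together with the dark matter mass, this gives exactly the two parameters we played around with.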
With this effective theory, it was then possible to run simulations of what happens in the experiment. Since dark matter cannot, as the name says, be directly seen, we needed some marker to tell that it has been there. For that purpose we chose the so-called associated production mode.
We knew that the dark matter would escape the experiment undetected. In jargon, this is called missing energy, since we miss the energy of the dark matter particles when we account for everything we see. Since we know what went in, and know that what goes in must come out, anything not accounted for must have been carried away by something we could not directly see. To make sure that this came from an interaction with the Higgs, we needed a tracer that a Higgs had been involved. The simplest solution was to require that there is still a Higgs in the end. There are also deeper reasons which require that dark matter in this theory should not only arrive together with a Higgs particle, but should also originate from a Higgs particle before the emission of the dark matter particles. The simplest way to check for this is, for technical reasons, to require that besides the Higgs there is also a so-called Z-boson in the end. Thus, we had what we call a signature: look for a Higgs, a Z-boson, and missing energy.
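Just to illustrate what 'look for a Higgs, a Z-boson, and missing energy' means in practice, here is a toy sketch in Python of such an event selection. The event format, the names and the threshold are invented for illustration; real analyses use dedicated frameworks and far more refined criteria.

    # Toy selection for the signature: a Higgs, a Z-boson, and missing energy.
    # Each event is a dictionary of reconstructed objects (purely illustrative).

    def passes_signature(event, met_threshold_gev=100.0):
        """True if the event has a Higgs candidate, a Z candidate,
        and enough missing transverse energy to hint at invisible particles."""
        has_higgs = any(obj["type"] == "higgs_candidate" for obj in event["objects"])
        has_z = any(obj["type"] == "z_candidate" for obj in event["objects"])
        enough_met = event["missing_et_gev"] > met_threshold_gev
        return has_higgs and has_z and enough_met

    # Example: one toy event that would pass the selection.
    event = {
        "objects": [{"type": "higgs_candidate"}, {"type": "z_candidate"}],
        "missing_et_gev": 142.0,
    }
    print(passes_signature(event))  # prints: True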
There is, however, one unfortunate thing in known particle physics which makes this more complicated: neutrinos. These particles are also essentially undetectable for an experiment at the LHC. Thus, when produced, they will also escape undetected as missing energy. Since we detect neither the dark matter nor the neutrinos, we cannot decide what actually escaped. Unfortunately, the tagging with the Higgs and the Z does not help, as neutrinos can also be produced together with them. This is what we call a background to our signal. Thus, it was necessary to account for this background.
Fortunately, there are experiments which can detect neutrinos, with a lot of patience. They are very different from the ones at the LHC, but they have given us a lot of information on neutrinos. Hence, we know how often neutrinos would be produced in the experiment. So we only need to remove this known background from what the simulation gives; whatever is left would then be the signal of dark matter. If the remainder were large enough, we would be able to see the dark matter in the experiment. Of course, there are many subtleties involved in this process, which I will skip.
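To give a rough idea of what 'large enough' means (this is a common rule of thumb, not necessarily the exact criterion used in the thesis): if S is the expected number of signal events on top of B background events, a simple measure of visibility is

    \[
      \text{significance} \approx \frac{S}{\sqrt{B}}\,,
    \]

and one usually wants this number to be well above a few before one would claim that the signal could be seen.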
So the student simulated both cases and determined the signal strength. From that she could deduce that the signal grows quickly with the strength of the interaction. She also found that the signal becomes stronger if the dark matter particles become lighter. That is because there is only a finite amount of energy available to produce them, and the more energy is left over to make the dark matter particles move, the easier it is to produce them, an effect known in physics as phase space. In addition, she found that if the dark matter particles have half the mass of the Higgs, their production also becomes very efficient. The reason is a resonance: just as two sounds amplify each other if they are at the same frequency, such amplifications can happen in particle physics.
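The resonance can be made plausible with a small formula (schematically; the full expression contains more kinematic factors). The dark matter pair is radiated off an intermediate Higgs, whose contribution behaves like a propagator,

    \[
      \frac{1}{\,p^2 - m_h^2 + i\,m_h\,\Gamma_h\,}\,,
    \]

where p is the total momentum of the dark matter pair, m_h the Higgs mass and \Gamma_h its width. When twice the dark matter mass is close to the Higgs mass, p^2 can sit right at m_h^2, the denominator becomes very small, and the production is strongly enhanced.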
The final outcome of the bachelor thesis thus tells us, for given values of the two parameters of the effective theory, how strong our signal would be. Once we know these values from our microscopic theory in the master project, we will know whether we have a chance to see these particles in this type of experiment.
Labels:
behind-the-scenes,
Dark Matter,
Experiment,
Higgs,
Research,
Students
Tuesday, May 3, 2016
Digging into a particle
This time I would like to write about a new paper which I have just put out. In this paper, I investigate a particular class of particles.
This class of particles is actually quite similar to the Higgs boson: the particles are bosons and they have the same spin as the Higgs boson, which is zero. Such particles are called scalars. These particular scalars also carry the same type of charge: they interact via the weak interaction.
But there are fundamental differences as well. One is that I have switched off the back reaction between these particles and the weak interaction: the scalars are affected by the weak interaction, but they do not influence the W and Z bosons. I have also switched off the interactions among the scalars themselves. Therefore, no Brout-Englert-Higgs effect occurs. On the other hand, I have looked at them for several different masses. This set of conditions is known as quenched, because all these interactions are switched off (quenched), and the only feature which remains to be varied is the mass.
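In formulas, the quenched setup corresponds roughly to studying a scalar field with the Lagrangian (a schematic continuum version; the actual lattice formulation differs in technical details)

    \[
      \mathcal{L}_{\phi} = \big(D_\mu\phi\big)^\dagger\big(D^\mu\phi\big) - m^2\,\phi^\dagger\phi\,,
    \]

where the covariant derivative D contains the fixed weak gauge fields, the quartic self-interaction of the scalars is set to zero, and the gauge fields themselves are generated without any influence from the scalars. The only dial left to turn is the mass m.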
Why did I do this? There are two reasons.
One is a quite technical reason. Even in this quenched situation, the scalars are affected by quantum corrections, the radiative corrections. Due to them, the mass changes, and the way the particles move changes. These effects are quantitative, and this is precisely the reason to study them in this setting: in the quenched case it is much easier to actually determine the quantitative behavior of these effects than in the full theory with back reactions, which is a quite important part of our research. I have learned a lot about these quantitative effects, and am now much more confident in how they behave. This will be very valuable in studies beyond the quenched case. As expected, not many surprises were found. Hence, it was essentially a necessary but unspectacular numerical exercise.
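To say what 'the way the particles move changes' means a bit more precisely: how a particle moves is encoded in its propagator, which schematically changes from its simplest form to

    \[
      D(p) = \frac{1}{\,p^2 + m^2\,} \;\longrightarrow\; \frac{1}{\,p^2 + m^2 + \Pi(p^2)\,}\,,
    \]

where the function \Pi collects the quantum corrections (I am using a Euclidean convention and suppressing all indices). Determining how such corrections behave, for the various masses, is what the quenched setting makes comparatively easy.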
Much more interesting was the second aspect. When quenched, this theory becomes very different from the normal standard model. Without the Brout-Englert-Higgs effect, the theory actually looks very much like the strong interaction. In particular, in this case the scalars should be confined in bound states, just like quarks are in hadrons. How this occurs is not really understood. I wanted to study it using these scalars.
Justifiably, you may ask why I would do this. Why not just have a look at the quarks themselves? There is a conceptual and a technical reason. The conceptual reason is that quarks are fermions. Fermions have non-zero spin, in contrast to scalars, which makes them mathematically more complicated. These complications mix in with the original question about confinement; for scalars, this is disentangled. Hence, by choosing scalars, these complications are avoided. This is also one of the reasons to look at the quenched case: the back reaction, whether from quarks or scalars, obscures the interesting features. Thus, quenching and scalars together isolate the interesting feature.
The other is that the investigations were performed using simulations. Fermions are much, much more expensive than scalars in such simulations in terms of computer time. Hence, with scalars it is possible to do much more at the same expense in computing time. Thus, simplicity and cost made scalars attractive for this purpose.
Did it work? Well, no. At least not in any simple form. The original anticipation was that confinement should be imprinted into how the scalars move. This was not seen. Though the scalars are very peculiar in their properties, they show confinement in no obvious way. It may still be that there is an indirect way, but so far nobody has any idea how. Though disappointing, this is not bad. It only tells us that our simple ideas were wrong, and it requires us to think harder about the problem.
An interesting observation could be made nonetheless. As said above, the scalars were investigated for different masses. These masses are, in a sense, not the observed masses; they are the masses of the particles before quantum effects are taken into account. These quantum effects change the mass, and these changes were also measured. Surprisingly, the measured mass was larger than the input mass: the interactions created mass, even if the input mass was zero. The strong interaction is known to do this. However, it was believed that this feature is strongly tied to fermions; for scalars it was not expected to happen, at least not in the observed way. Actually, the generated mass is even of a similar size as for the quarks. This is surprising, and it implies that this kind of interaction generically introduces a mass scale.
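Put schematically (my own shorthand, not a formula from the paper), what was found is something like

    \[
      m_{\text{measured}}^2 \;\approx\; m_{\text{input}}^2 + \Delta m^2_{\text{interaction}}\,,
    \]

where the interaction-generated piece turned out not to be small, but of a similar size as the mass scale the strong interaction generates for quarks, and it is there even if the input mass is set to zero.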
This observation triggered for me the question whether the mass scale also survives when the back-coupling is included once more. If it remains even when there is a Brout-Englert-Higgs effect, this could have interesting implications for the mass of the Higgs. But this remains to be seen. It may just as well be that it will not endure once the theory is no longer quenched.
Wednesday, April 27, 2016
Some small changes in the schedule
As you may have noticed, I have not written a new entry for some time.
The reasons have been twofold.
One is that being a professor is a little more strenuous than being a postdoc. Though not unexpected, at some point it takes a toll.
The other is that in the past I tried to simply keep a regular schedule. However, that often required me to think hard about a topic, as there was no natural candidate. At other times, I had a number of possible topics, which were then stretched out rather than written up when they were topical.
As a consequence, I think it is more appropriate to write entries when something happens that is interesting to write about. This will be the case at least any time we put out a new paper, so I will still update you on our research. I will also write something whenever somebody new starts in the group or we otherwise start a new project. In addition, some of my students want to contribute as well, and I will be very happy to give them the opportunity to do so. Once in a while, I will also write some background entries, so that I can offer some context for the research we are doing.
So stay tuned. It may be in a different rhythm, but I will keep on writing about our (and my) research.
Friday, February 5, 2016
More than one Higgs means more structure
We have once more published a new paper, and I would again like to outline what we did (and why).
The motivation for this investigation started with another paper of mine. As described earlier, back then I took a rather formal stand on proposals for new physics. It was based on the idea that there is some kind of self-similar substructure to what we usually call the Higgs and the W and Z bosons. In that paper, I speculated that this self-similarity may be rather exclusive to the standard model. As a consequence, it may alter the predictions of new physics models.
Of course, speculating is easy; making something out of it requires real calculations. Thus, I started two projects to test the idea. One is on the unification of forces and is still ongoing; some first results are there, but nothing conclusive yet. It is the second project which has yielded new results.
In this second project we had a look at a theory where more Higgs particles are added to the standard model, a so-called 2-Higgs-doublet model, or 2HDM for short. I had speculated that, besides the additional Higgs particles, further additional particles may arise as bound states, i.e. as states made from two or more other particles. These are not accounted for by the ordinary methods.
In the end, it now appears that this idea is not correct, at least not in its simplest form. There are still some very special cases left where it may be true, but by and large it is not. However, we have understood why the original idea is wrong, and why it may still be correct in other cases. The answer is symmetry.
When adding additional Higgs particles, one is not entirely free. It is necessary not to alter the standard model where we have already tested it. In particular, we cannot easily modify the symmetries of the standard model. However, these symmetries then induce a remarkable effect. The additional Higgs particles in 2HDMs are not entirely different from the one we know. Rather, they mix with it as a quantum effect: in quantum theories, particles can change into each other under certain conditions, and the symmetries of the standard model entail that this is possible for the new and the old Higgses.
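A sketch of what this mixing means (the generic textbook picture of a 2HDM; the details depend on the specific version of the model): the two doublets carry exactly the same quantum numbers, so nothing forbids the physical neutral scalars from being superpositions of both, for instance

    \[
      h = \cos\alpha\,h_1 + \sin\alpha\,h_2\,, \qquad H = -\sin\alpha\,h_1 + \cos\alpha\,h_2\,,
    \]

with a mixing angle \alpha determined by the dynamics. The observed Higgs is then neither purely the 'old' nor purely the 'new' field.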
If the particles mix, the possibilities of distinguishing them diminish. As a consequence, the simplest additional states can no longer be distinguished from the states already accounted for by ordinary methods. Thus, they are not additional states, and hence the simplest possible deviation I speculated about is not realized. There may still be more complicated ones, but figuring this out is much more involved, and has yet to be done. Thus, this work showed that the simple idea was not right.
So what about the other project still in progress? Should I now also expect it to just reproduce what is known? Actually, no. The thing we learned in this project is why everything fell into its ordinary place: the reason is the mixing between the normal and the additional Higgs particles. This possibility is precluded in the other project, as there the additional particles are very different from the original ones. It may still be that my original idea is wrong, but it would have to be wrong in a different way than in the case we investigated now. And thus we have also learned something about a wider class of theories.
This shows that even disproving your ideas is important. From the reasons why they fail you learn more than just from a confirmation of them - you learn something new.
Labels:
2HDM,
BSM,
Electroweak,
Higgs,
Research,
Standard model
Wednesday, January 20, 2016
More similar than expected
A while ago I wrote about a project a master student and I embarked upon: using a so-called supersymmetric theory - or SUSY theory for short - to better understand ordinary theories.
Well, this work has come to fruition, both in the form of the completed master project and in new insights written up in a paper. This time I would like to present these results a little.
To start, let me briefly recap what we did, and why. One of the aims of our research is to better understand how the theories we use to describe nature work. A particular feature of these theories is redundancy. This redundancy makes many calculations possible, but at the same time introduces new problems, mainly about how to get unique results.
Now, in theories like the standard model, we have all problems at the same time: all the physics, all the formalism, and especially all the redundancy. This is a tedious mess. It is therefore best to reduce the complexity and solve one problem at a time. This is done by taking a simpler theory which has only one of the problems. That is what we did.
We took a (maximal) SUSY theory. In such a theory, the supersymmetry is very constraining, and a lot of features are known exactly. But the implications of the redundancy are not. So we hoped that, by applying to this theory the same procedures we use to deal with the redundancy in ordinary theories, we could check whether our approach is valid.
Of course, the first, and expected, finding was that even a very constraining theory is not simple. When it comes to technical details, anything interesting becomes a hard problem. So it required a lot of grinding work before we got results. I will not bore you with the details. If you want them, you can find them in the paper. No, here I want to discuss the final result.
The first finding was a rather comforting one. Doing the same things to this theory that we do to ordinary theories did not do too much damage. Using these approximations, the final results were still in agreement with what we know exactly about this theory. This was a relief, because it lends a little more support to what we are usually doing.
The real surprise was, however, a very different one. We knew that this theory shows a very different kind of behavior than all the theories we usually deal with. So we expected that, even if our methods work, the results would still be drastically different from the other cases we deal with. But this was not so.
To understand better what we found, it is necessary to know that this theory is similar in structure to a conventional one: a theory of gluons, but without the quarks that would make the strong interaction complete. In the SUSY theory we also have gluons. In addition, we have further new particles, which are needed to make the theory supersymmetric.
The first surprise was that the gluons behaved unexpectedly similarly to their counterparts in the original theory. Of course, there are differences, but these differences were expected; they come from the differences between the two theories. But where the gluons could be similar, they were, and not just roughly so, but surprisingly precisely so. We have an idea why this could be the case, because there is one very restricting structural property which appears in both theories. But we know that this is not enough, as we know other theories where the behavior is still different, despite also having this one constraining structure. Since the way in which the gluons are so similar is strongly influenced by the redundancy features of both theories, we can hope that this means we are treating the redundancy in a reasonable way.
The second surprise was that the new particles mirror the behavior of the gluons. Even though these particles are connected to the gluons by supersymmetry, the connection would have allowed many possible shapes of relation. But no, the relation is an almost exact mirror. And this time, there is no constraining structure which gives us a clue why, out of all possible relations, this one is picked. However, this is again related to the redundancy, and perhaps, just speculating here, it could tell us more about how this redundancy works.
In total, we have learned quite a lot. We have more support for what we are doing in ordinary theories. We have seen that some structures might be more universal than expected. And we may even have a clue in which direction we could learn more about how to deal with the redundancy in more immediately relevant theories.