Some time ago, I wrote about my growing interest in dark matter. Now both a master's student and a bachelor's student have actually started working on this topic. Thus, this time I want to describe what exactly they are working on.
As noted, we actually know very little about what dark matter is. In particular, we know very little about how it interacts with the rest of the universe, apart from gravity. If you do not want to assume that gravity is the only way dark matter shows it is there, you therefore have to guess. Luckily, the number of guesses is somewhat limited by experiment and observation.
One interesting possibility is that dark matter additionally interacts only with the Higgs. Theories of this type are called Higgs-portal models, because the Higgs is the portal through which we see dark matter. Such models have some nice features. Probably the nicest is that there is a good chance that the LHC will be able to access dark matter through this portal. This idea has received much more attention since we know that there is a Higgs. Thus, a lot of investigations have already been performed. So what do we want to add?
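Schematically, and assuming for illustration that the dark matter is a scalar field χ (the post itself does not fix the spin of the dark matter particle), the portal coupling is a single term connecting the dark sector to the Higgs doublet H:

```latex
% Schematic Higgs-portal interaction, assuming scalar dark matter \chi:
% one portal coupling \lambda_p connects the dark sector to the Higgs doublet H.
\mathcal{L}_{\text{portal}} = -\lambda_p \, (\chi^\dagger \chi)\,(H^\dagger H)
```

Everything the dark matter does to the visible world (beyond gravity) then has to pass through this one coupling, which is why the Higgs is called the portal.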
Here enters a second observation about dark matter. Today, we have become pretty good at observing dark matter indirectly through its gravitational action on galaxies. From this, people have indirectly deduced that dark matter can interact with itself. In particular, it seems quite possible that it interacts very strongly with itself. Thus, while dark matter is very reclusive, it still forms a very active world in its reclusion.
Now comes the new part. Essentially all previous investigations of Higgs-portal models assumed that dark matter is not strongly self-interacting. They therefore used perturbation theory, which is the adequate language in that case. Capturing the effects of strong interactions requires a different method: we will employ numerical simulations to deal with them. However, for the sake of computing time, we will reduce the problem somewhat. We keep only the Higgs, the W and Z, and the dark matter particle. This is still a formidable problem.
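To give a flavor of what such a simulation does: the theory is put on a space-time lattice, and field configurations are generated with a probability weighted by the action. The following is only a toy sketch of this idea, a Metropolis update for a single scalar field on a tiny 2D lattice; the actual project uses a 4D lattice with the Higgs, W/Z, and dark matter fields, and far more sophisticated algorithms.

```python
# Toy Metropolis simulation of a phi^4 scalar field on a small 2D lattice.
# Illustrative only: the real project simulates a 4D Higgs/W/Z/dark-matter system.
import math
import random

def neighbors(x, y, L):
    """All four nearest neighbors on a periodic L x L lattice."""
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def action_diff(phi, x, y, new, kappa, lam, L):
    """Change in the standard lattice phi^4 action when phi[x][y] -> new."""
    nsum = sum(phi[i][j] for i, j in neighbors(x, y, L))
    def site_action(v):
        return -2.0 * kappa * v * nsum + v * v + lam * (v * v - 1.0) ** 2
    return site_action(new) - site_action(phi[x][y])

def metropolis_sweep(phi, kappa, lam, L, rng):
    """One sweep over the lattice; returns the acceptance rate."""
    accepted = 0
    for x in range(L):
        for y in range(L):
            new = phi[x][y] + rng.uniform(-0.5, 0.5)
            dS = action_diff(phi, x, y, new, kappa, lam, L)
            if dS < 0 or rng.random() < math.exp(-dS):
                phi[x][y] = new
                accepted += 1
    return accepted / (L * L)

rng = random.Random(1)
L, kappa, lam = 8, 0.25, 0.5
phi = [[rng.uniform(-1.0, 1.0) for _ in range(L)] for _ in range(L)]
rates = [metropolis_sweep(phi, kappa, lam, L, rng) for _ in range(100)]
avg = sum(rates) / len(rates)
print(avg)  # average acceptance rate over 100 sweeps
```

Observables such as particle masses are then extracted from correlation functions measured on many such configurations.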
The topic of the master's thesis is to perform these simulations. Their goals are the following: How much do the strong interactions of the dark matter particle imprint on the Higgs and the W and Z? Are their properties changed? If yes, how strong can the dark matter self-interaction be before they are changed too strongly, i.e. before they no longer agree with our experimental knowledge? What are the properties of the dark matter particles? How strongly can the Higgs and the dark matter particle communicate through the portal before the Higgs becomes changed? In this context, how is the structure of the Higgs affected? These are the most important questions which need answers.
However, with this we will mostly understand the theoretical aspects. But this is not enough. If there is some interesting effect in principle, this by no means guarantees that we can see it in an experiment. On the one hand, there is still the rest of the standard model. Does it interfere? And then, if it does not, are the effects of dark matter reduced so strongly in a real experiment that we can no longer see them? In particular, can we still see anything of the strong self-interaction?
Herein lies the goal of the bachelor's student. Unfortunately, we cannot simulate the whole standard model and the experiment. But we can encode it into an effective theory, which we can then treat sufficiently well. This is again the kind of combination of methods which I use so often. Using this effective model, and a toolbox created by other people, a so-called Monte-Carlo generator, she can make predictions for an actual experiment. This can be either the LHC, or one of the planned next experiments. That should give us at least a rough idea of whether we can see something of the dark matter. Or, if we are lucky, a very good idea.
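At its core, a Monte-Carlo generator draws random collision events with a probability given by the (differential) cross section of the underlying theory. The following toy sketch, which is not any real generator such as those used in the project, shows the basic idea with rejection sampling of a made-up angular distribution:

```python
# Illustrative sketch of the core of a Monte-Carlo event generator:
# draw kinematic configurations distributed according to a differential
# cross section. Here: a toy angular distribution ~ (1 + cos^2 theta).
import random

def sample_cos_theta(rng):
    """Rejection-sample cos(theta) from p(c) proportional to 1 + c^2 on [-1, 1]."""
    while True:
        c = rng.uniform(-1.0, 1.0)
        # Envelope: the maximum of 1 + c^2 on [-1, 1] is 2.
        if rng.uniform(0.0, 2.0) < 1.0 + c * c:
            return c

rng = random.Random(42)
events = [sample_cos_theta(rng) for _ in range(10000)]
# The toy distribution is symmetric, so the sample mean should be near zero.
print(sum(events) / len(events))
```

Real generators do the same thing for full multi-particle final states, and then add parton showers, hadronization, and detector effects on top.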
This also demonstrates how different projects, and the work of several people, feed into each other. I am quite curious what will come out, and what we will learn about strongly-interacting dark matter.
Friday, December 18, 2015
Tuesday, December 1, 2015
An international perspective
This time, I would like to write a little bit about a very important part of our work: Being international.
Right now, in our complete particle physics group of about 25 people, we have about 12 nationalities. So, being international is a very basic part of our daily life. This has a long list of effects. It starts with using a common language so that everyone can speak to everyone (which today is English, but it has been a different one in the past, and may again be a different one in the future; English has been the language of science for less than a century). Thus, we also need to somehow educate all our students in this language. And this pertains not only to normal speaking, but also to the specialized vocabulary of our topic.
Being international also requires us to pay attention to many more administrative aspects, which arise due to the existence of different nations. The question of who can represent our work in which country, depending on who can get a visa, is not an entirely simple problem. Being from the European Union myself puts me in a privileged position, as I can get into most countries with little or no effort. But this is not true for many other people, which often gives us headaches and requires long-range planning if we want a particular person to go somewhere. Furthermore, when students come from abroad, they may have learned different things, and therefore have a different background, which needs to be leveled so that everyone can talk to everyone. And, finally, this also manifests in how to incorporate different cultures and habits. This does not only touch upon the personal, but can very much also affect the way we work together. As an example, in some areas of the world it is still usual that less experienced people accept what more experienced people do without questioning, probably trained since childhood. This does not help in doing science: everyone has to speak openly, and also criticize, to find errors. None of us is error-free, and therefore everyone must contribute to nailing down errors.
This list can be continued almost indefinitely.
Why do we put up with this? It appears to be a lot of extra work, just to do science.
But here comes into play how science today operates: On a global scale. And this is very good for two reasons.
One is that the problems we have to deal with become more and more specialized, and thus a smaller and smaller percentage of scientists can work on them. To still have a sizable workforce therefore requires including as many people as possible. Otherwise, overly specialized subgroups may lose contact and become adrift, with no possibility to regain the overarching picture. This could also be put the other way around: today's problems are far too complex for any single country, even the largest, to have enough scientific workforce to deal with them. Everyone is needed. And this does not even touch upon having enough resources to do certain kinds of research.
The other is that we need diversity. The different educational, cultural, and habitual backgrounds also play an important role in science. Everyone has learned something in school and during their studies in a particular style. Everyone has adopted certain viewpoints, and certain strategies. But science lives in the unknown. There is no gold-plated way to deal with the unknown. Therefore, there is no special preparation which is the best way to be prepared for doing science. We need many different minds, vastly different minds, so that we can get many perspectives. We need people with different backgrounds, with a different outlook on everything, to find new angles on problems. We need all ways of seeing things, even those which at first may not look intuitive to ourselves. But we have to listen to and learn from all the viewpoints. Thus, everyone who is willing to support the scientific process, the ever-turning wheel of creating a theory and putting it through myriads of experimental tests, can provide a new point of view. Thus, diversity is essential for us. New problems need different points of view.
This is one of the points which also explains the many travels scientists undertake, often for years. Every new surrounding, every new group of people, provides a new perspective. Changing one's perspective by traveling, or by bringing many different people to our homes, helps us broaden our view, giving us the opportunity to learn to adopt new perspectives. This is demanding for the individual, as it implies being around the world rather than at home, but our understanding profits from being used to seeing things from many perspectives.
The ability to see from different perspectives is not only fostered by talking to other scientists. Experiencing different cultures, and different surroundings in general, and trying to understand them, also gives us this ability. So, diversity is essential to our ability to understand.
This is why being international is so extremely important for modern research, and why diversity counts so much for basic research.
And this also implies that living in a diverse culture in a single spot will already help us become better at understanding. If we are used to experiencing the new, and trying to understand it, in everyday life, it also prepares us to face the new at the boundary of our knowledge.
Thursday, October 29, 2015
Being formal
One of the topics I am working on concerns basic properties of so-called gauge symmetries. I have just published a new paper on it, and here I want to describe what it is about.
A gauge symmetry is, very roughly stated, a useful tool for which we pay the price of a very redundant description. Pictorially speaking, we can say the same thing with very many different words. This may sound awful. However, in practice, it seems to work like a charm. So what is there still to investigate?
Well, knowing how to use something in one way and understanding it fully are two very different things. And actually, on a very strict and formal level, we are not absolutely sure that we know how to use gauge symmetry, though this is likely the case. But the situation is nonetheless unsatisfactory, for two reasons.
The first is more a question of approach: when we use something, we would really like to know what we are actually doing. The second is that if we understood it better, there may very well be ways to use it much better than we currently do. So there are reasons for understanding it better.
But what is it that I actually want to do?
It all starts by going back to the meaning of symmetries. Symmetries introduce redundant directions, meaning that when you have a symmetry you have more directions to point in than there actually are. That is, in general, very helpful on a technical level.
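As a textbook illustration of this redundancy (a standard example, not specific to the paper discussed here): in electromagnetism, the photon field can be shifted by a gauge transformation without changing any observable, so infinitely many field configurations say the same physical thing:

```latex
% Gauge redundancy in electromagnetism: for any function \theta(x),
% the field A_\mu and its gauge transform describe the same physics.
A_\mu(x) \;\longrightarrow\; A_\mu(x) + \partial_\mu \theta(x)
```

The "redundant directions" below are precisely the freedom to choose such a function at every point.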
But here enters the problem. If we have directions, we should be able to say 'go in this direction'. To do so, we introduce coordinate systems. Now comes the catch: For a local symmetry this is easier said than done, especially when it comes to the gauge symmetries of the standard model.
The problem is somewhat abstract. When you think about directions, you usually think about left, right, up, down, and so on. This is true if you think about the usual space around us. But not everything has the same geometry. In particular, symmetries can also have a direction of bending. This is still not a major issue. But there are some symmetries where the bending of directions becomes so strong that some directions bend back on themselves, or meet others, when going too far. And this is a problem. If they bend back, or even worse, bend onto a different direction, what is a direction anyway? I can start walking in one direction, and then I am in another direction. Sounds like a catastrophe, right?
Well, the reason it sounds like that is that we insisted on defining directions once and for all. This is what we are used to. A direction is a direction is a direction. Unfortunately, not everything is so straight, and some things, especially gauge symmetries, have additional directions which are warped, and can intersect each other. The problem then arises of how to orient oneself if directions change. The answer is that it is necessary to give up directions which are always the same. Rather, you need to define directions only in some area around you, and when you move, you may need to change them.
The aforementioned paper now investigates this bending of directions. In a sense, it tries to map how far it is possible to go in a fixed direction before this direction changes. Finally, it attempts to draw a map of where these directional changes occur. That may sound pretty graphical, but the reality is once more mathematically involved. But in the end, this map will hopefully help to set up useful collections of coordinate systems, and a dictionary telling you where to use which coordinate system to get your directions.
The details are pretty involved. But the rough outline of what I did was to put myself at many points, create coordinate systems there, follow the directions they give, and check where they stopped making sense - where they hit other directions or themselves. Then I got a list of collisions, and where they occurred. And from this I could get a map of collisions. What I have not yet done is make something useful out of the map. That comes next.
Friday, September 18, 2015
Something dark on the move
If you browse either through popular science physics or through the most recent publications on the particle-physics preprint server arxiv.org, then there is one topic which you cannot miss: dark matter.
What is dark matter? Well, we do not know. So why do we care? Because we know something is out there, something dark, and it is moving. Or, more precisely, it moves stuff around. When we look to the skies and measure how stars and galaxies move, we find something interesting. We think we know how these objects interact, and how they therefore influence each other's movement. But what we observe does not agree with our expectations. We think we have excluded any possibility that we are overlooking something known, like many more black holes, intergalactic gas and dust, or any of the other known particles filling up the cosmos. No, it seems there is more out there than we can currently detect directly and have a theory for.
Of course, it could be that we are missing something about how stars and galaxies influence each other, and this possibility is also pursued. But actually the simplest explanation is that out there is a new type of matter. A type of matter which interacts neither by electromagnetism nor by the strong force, because otherwise we would have seen it in experiments. Since there is no interaction with electromagnetism, it does not reflect or emit light, and therefore we cannot see it using optics. Hence the name dark matter. Because it is dark.
It certainly acts gravitationally, since this is how stars and galaxies are influenced. It may still be that it either interacts by the weak interaction or with the Higgs. That is something which is currently investigated in many experiments around the world. Of course, it could also interact with the standard model particles by some unknown force we have yet to discover. This would make it even more mysterious.
Because it is so popular, there are many resources on the web which discuss what we already know (or do not know) about dark matter. Rather than repeating that, I will write here about why I am starting to be interested in it. Or at least in some possible types of it. Dark matter which only interacts by gravitation is not particularly interesting right now, as we will likely not learn much about it in the foreseeable future. So I am more interested in those types of dark matter which couple by some other means to the standard model. Until they are excluded by experiments.
If it should interact with the standard model by some new force, then this new force will likely look, at first, just like a modification of the weak interactions and/or of the Higgs. This would be an effective description of it. Given time, we would also figure out the details, but we have not yet.
Thus, as a first shot, I will concentrate on the cases where it could interact with the weak force or just with the Higgs. Interacting with the weak force is actually quite complicated if it should fulfill all the experimental constraints we have. Modifications there, though possible, are thus unlikely. That leaves the Higgs.
Therefore, I would like to see how dark matter could interact with the Higgs. Such models are called Higgs-portal models, because the Higgs acts as the portal through which we see dark matter. So far, this is also pretty standard.
Now comes the new thing. I have written several times that I work on the question of what the Higgs really is. That it could have an interesting self-similar structure. And here is the big deal for me: the presence of dark matter interacting with the Higgs could actually influence this structure. This is similar to what happens with other bound states: the constituents can change their identity, as we investigate in another project.
My aim is now to bring all three things together: dark matter, the Higgs, and the structure of the Higgs. I want to know whether such a type of dark matter influences the structure of the Higgs, and if so, how. And whether this could have a measurable influence. The other way around, I would also like to know whether the Higgs influences the dark matter in some way. Combining these three things is a rather new idea, and it will be very fascinating to explore it. The best course of action will be to do this by simulating the Higgs together with dark matter. This will be neither simple nor cheap, so it may take a lot of time. I will keep you posted.
Thursday, August 20, 2015
Looking for one thing in another
One of the most powerful methods to learn about particle physics is to smash two particles against each other at high speed. This is what is currently done at the LHC, where two protons are used. Protons have the advantage that they are simple to handle on an engineering level, but since they are made up of quarks and gluons, these collisions are rather messy.
An alternative are colliders using electrons and positrons. Many of these have been used successfully in the past. The reason is that electrons and positrons appear, at first sight, to be elementary, although they are technically much harder to use. Nonetheless, there are right now two large projects planned to use them, one in Japan and one in China. The decision is still out whether either, or both, will be built, but they would both open up a new view on particle physics. These would start, hopefully, in the (late) 2020s or early 2030s.
However, these collisions may be a little messier than currently expected. I have written previously several times about our research on the structure of the Higgs particle, especially that the Higgs may be much less simple than just a single particle. We are currently working on possible consequences of this insight.
What has this to do with the collisions? Well, already 35 years ago people figured out that if the statements about the Higgs are true, then this has further implications. Especially, the electrons as we know them cannot really be elementary particles. Rather, they have to be bound states, a combination of what we usually call an electron and a Higgs.
This is confusing at first sight: an electron should consist of an electron and something else? The reason is that a clear distinction is often not made. But one should be more precise. One should first think of an elementary 'proper' electron. Together with the Higgs, this proper electron forms a bound state. This bound state is what we perceive and usually call an electron. But it is different from the proper electron, and we should therefore rather call it an 'effective' electron. This chaos is not so different from what you would get if you also called a hydrogen atom a proton, since the electron is such a small addition to the proton in the atom. That you do not do so has a historical origin, as it has, in the reverse way, for the (effective) electron. Yes, it is confusing.
Even after mastering this confusion, it seems to be a rather outrageous statement. The Higgs is so much heavier than the electron, how can that be? Well, field theory indeed allows a bound state of two particles to be much, much lighter than the two individual particles. This effect is called a mass defect. It has been measured for atoms and nuclei, but there it is a very small effect. So, it is in principle possible. Still, the enormous size of the effect makes this a very surprising statement.
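To put the 'very small effect' in perspective, here is a quick sketch computing the mass defect of the deuteron, the simplest nucleus, from the masses of its constituents. The numbers are rounded measured values and serve only for orientation:

```python
# Mass defect of the deuteron: a proton and a neutron bound together
# are slightly lighter than the two particles separately.
# Masses in MeV/c^2 (rounded measured values).
m_proton = 938.272
m_neutron = 939.565
m_deuteron = 1875.613

# Binding energy released when the bound state forms (~2.2 MeV).
defect = m_proton + m_neutron - m_deuteron
fraction = defect / (m_proton + m_neutron)

print(f"mass defect: {defect:.3f} MeV, i.e. {fraction:.2%} of the constituents")
```

The deuteron is only about a tenth of a percent lighter than a free proton plus a free neutron. The effective electron described above would instead need a mass defect of essentially one hundred percent of the Higgs mass, which is why the statement is so surprising.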
What we want to do now is to find some way to confirm this picture, and this extreme mass defect, using experiments.
Unfortunately, we cannot disassemble the bound state. But the presence of the Higgs can be seen in a different way. When we collide such bound states hard enough, we usually collide from each bound state just one of its constituents, rather than the whole thing. In a simple picture, in most cases one of the parts will be at the front of the collision, and will take the larger part of the hit.
This means that sometimes we will collide the conventional part, the proper electron. Then everything looks as usually expected. But sometimes we will do something involving the Higgs from either or both bound states. We can already estimate that anything involving the Higgs will be very rare. In the simple picture above, the Higgs, being much heavier than the proper electron, mostly drags behind the proper electron. But 'rare' is not quantitative enough in physics. Therefore, we now have to do calculations. This is a project I intend to do with a new master student.
We want to do this because we want to predict whether the aforementioned next experiments will be sensitive enough to see that sometimes we actually collide the Higgses. That would confirm the idea of bound states. Then, we would indeed find something else than what people were originally looking for. Or expecting.
Labels:
Electroweak,
Experiment,
Higgs,
Research,
Standard model,
Students
Friday, July 10, 2015
Playing indirectly
One of my research areas is neutron stars. To understand them requires understanding how the strong interactions behave when matter is enormously densely packed. A new PhD student of mine has now started to work on this topic, and I would like to describe a little bit what we will be looking at.
I have already written in the past that this type of situation is very hard to deal with, because we cannot just do simulations. This is unfortunate, since simulations have been very successful in uncovering what happened in the early universe. In that case, the system is hot rather than dense. Though the reason for the problem is 'just' technical, the chances of resolving it in the near future are not too bright.
Hence, I had already decided quite some time ago that one possibility is to play indirectly. The basic idea is that there are other methods which would work. The price we have to pay is that we need to make approximations in these methods. We would like to check these approximations, ideally against simulations. But we cannot, because there are none. So how do we break the circle?
To escape this problem, we can again use a detour. We did this once before, because we hoped to learn more about the qualitative features, to get some insight into this type of physics. Now, we have a much more quantitative approach. We use theories which are very similar to the strong interactions (also called QCD), but are not QCD, and which can be simulated. And these we will use to break the circle.
Why is it possible to perform simulations for this type of theories? Well, the main reason is the difference between particles and anti-particles. In QCD, a quark and an anti-quark are fundamentally very different objects. Hence, a large density can mean two things. A large density could mean having many more quarks than anti-quarks, but still plenty of both. Or it could mean just having many of one type. For a neutron star both situations are relevant. And thus, there may actually be many more particles present than we would think, just with many of them anti-particles. This is at the heart of the problem: there is so much more than just the superficial number of particles.
This problem is evaded by instead using a theory in which there are no anti-quarks. To be more precise, a theory in which anti-quarks are the same as quarks. There exist a number of such theories. However, such a change is very drastic. It may thus happen that the changed theory is so radically different from QCD that any comparison becomes meaningless. Thus, it is necessary to ensure that the theory is close enough to the original.
Two candidates for such theories have been identified so far. One is the so-called G2QCD, which I talked about previously. Another one is very close to QCD, but instead of three color charges it has just two different ones. Both cases have their own merits. The first is closer to QCD; in this theory there are protons and neutrons. The latter does not have these, but it is very cheap to simulate. The two theories are hence quite different, but both actually share many other traits with QCD.
It therefore stands to reason that whatever approximation describes both well will also work for QCD. Thus, we will now use simulations of both theories to test the approximations made in the other methods. Especially, we will look at the properties of the quarks and gluons. We will then use the insights gained to improve the approximations, until we describe both theories well enough. Then we will translate the approximations back to QCD. And if everything works out, we will then have an acceptable description of a piece of neutron star matter.
Wednesday, June 3, 2015
The nature of particles
I have written some time ago that most of the particles we know decay, i.e. after some time they fall apart into other particles. Probably that is not too surprising. After all, essentially everything we know tends to fall apart after a while. Hence, we can think of these particles as being made out of the particles into which they decay. Such particles made up out of other particles are called bound states or composite particles. The particles into which it decays are called decay products, but here I will just call them particles. Actually, even the particles into which the composite particle decays may in turn decay further. But for the things I want to write about in this entry, this will not matter. Thus, I will just talk about a composite particle and the particles it decays into.
But there is an important difference between usual things falling apart and particles falling apart.
Think about a tower made from wood logs, like a child's toy. You build it from the logs, and after some time it will break down again into the logs. Especially when a child is around to kick it. But the logs themselves remain intact. So far, this is the same with composite particles. You start with a composite particle, it then decays into other particles. You can rebuild your tower from the logs. This is also possible with the particles: the decay products can be fused again into the original composite particle.
But now there is a difference. When you build the tower, the logs keep their identity. If you look closely enough at the tower, you can still see the individual logs you used to build it. That is not so simple with particles. This is best seen by a specific example. Start with two particles, and fuse them into a new composite particle. So far, nothing new. But then it may happen that this composite particle decays into entirely different particles than the original ones, or it may decay into the original ones. The expression we use is that the composite particle has different decay channels. It is not that all the possible particles are stored in the original particle; it really changes its identity. It would be as if the wood logs turned into plastic ones while being in the tower.
Describing such a spontaneous change is not simple. We have become quite expert at modeling the starting composite particle and then, at some point, performing an explicit change into the different particles. But that is a little bit like taking the tower and, very niftily, exchanging each log inside the tower from wood to plastic. What we would like is to have this as a dynamical process: without our interference, the structure of the composite particle changes, and it thus decays differently from how it was formed.
We actually know how to simulate this. But there we can just observe that this happens. We also would like to know how this proceeds inside the structure of the composite particle itself, what governs this process in detail.
Learning this is another project which I now supervise as a PhD project. We will use the so-called equations of motion to dissect the process. For this, we will be looking at a very simple particle, the so-called (charged) pion. It is a composite of two quarks, but can also decay into an electron and a neutrino. Choosing this particular composite particle has a number of reasons. One is that it is very well studied both experimentally and theoretically. We can therefore concentrate on the new aspects, the change of identity of the constituents. The decay is also rather slow, and therefore technically easier to control. Furthermore, quarks, electrons, and neutrinos are very different particles. As theoreticians, we can use this fact by modifying their properties, and thereby switch various features of the process on and off. And finally, though the pion is (sometimes) made up of quarks, it cannot actually really decay into them, due to confinement. Therefore, we need only consider the change inside the pion, but not outside. This also reduces the technical challenges.
Once this question is solved, we will continue on to more interesting composite particles, like bound states of the Higgs. But this project is an enormously important first step on this road.
Friday, May 8, 2015
A model for a model
One of the more disturbing facts of modern theoretical particle physics is complexity. We can formulate the standard model on, more or less, two pages of paper. But to calculate most interesting quantities is so seriously challenging that even an approximate result takes many person-years, and often even much, much more.
Fortunately, the standard model is in one respect kind: For many interesting questions only a small part of it is relevant. This does not mean that the parts are really independent. But the influence of the other parts on the subject in question is so minor that the consequences are often much smaller than any reasonable theoretical or experimental accuracy could resolve. For example, many features of the strong interactions can be determined without ever considering the Higgs explicitly. In fact, it is even possible to learn much about the strong interactions just from looking at the gluons alone, neglecting the quarks. This reduced theory is called Yang-Mills theory. It is a very reduced model for the core features of the strong interactions.
Unfortunately, even this theory, which contains only a single one of the particles of the standard model, is very complex. One of our lines of research deals with the resulting problems. One of these problems has to do with the properties of the local symmetry of this model, the so-called gauge symmetry. This feature leads to certain, technically necessary, redundancies. But when doing calculations, we need to make approximations. This may mess up the classification of what is redundant and what is not. Getting this straight is important, and this is the research topic I write about today.
And it is here where the title comes into play. Even if the theory of only gluons is much simpler than the original theory, it is still so complicated that the redundancies are pretty messed up. Therefore, we decided by now that it would be better to understand first a different case. A case, in which the same redundancies appear, but all the rest is simpler. A (simpler) model for a (more complicated) model.
This strategy creates the bridge to my previous entry on supersymmetry.
Theories which have this supersymmetry are, in almost all cases, much simpler than theories without it. As I wrote, there are different levels of supersymmetry. In its simplest form, supersymmetry relates the kinds of possible particles and constrains a few interactions. In the maximal version, essentially the whole structure of the theory, and almost all details, are constrained. These constraints are so rigid and powerful that we can solve the theory almost exactly. Nonetheless, this theory has the same kind of redundancies as Yang-Mills theory, and even the full standard model. Thus, we can study what approximations do to these redundancies. Especially, using the exact knowledge, we can reverse engineer essentially everything we want.
In fact, we make a kind of theoretical experiment: We take the theory. We treat it with a method - in our case we use the so-called equations-of-motion. We know the results. Now, we perform the same type of approximations we do in the more complicated models, or even the full theory. We see how this modifies the results. Well, actually we will see, since we are still working on this bit. From the change of the results, we will learn a lot of things. One is which kind of approximations make a qualitative change. Since any qualitative difference compared to the exact result will be a wrong result, we should not do such approximations. Not in this theory, and especially not in more complicated theories. Just small quantitative changes are probably fine, though there is no guarantee. And we can explicitly see if the approximations start to mix redundant parts such that they are treated wrongly. From this we will (hopefully) learn more about how to correctly treat the redundancies in the more complicated models.
Monday, April 13, 2015
A partner for every particle?
A master student has started a thesis with me on a new topic, one I have not worked on before. Therefore, before going into details about the thesis' topic itself, I would like to introduce the basic physics underlying it.
The topic is the rather famous concept of supersymmetry. What this means I will explain in a minute. Supersymmetry is related to two general topics we are working on. One is the quest for what comes after the standard model. It is in this respect that it has become famous. There are many quite excellent introductions to why it is relevant, and why it could be within the LHC's reach to discover it. I will not just point to any of these, but nonetheless write a new text on it here. Why? Because of the relation to the second research area involved in the master thesis, the groundwork about theory. This gives our investigation a quite different perspective on the topic, and requires a different kind of introduction.
So what is supersymmetry all about? I have written about the fact that there are two very different types of particles we know of: bosons and fermions. Both types have very distinct features. Any particle we know belongs to either of these two types. E.g. the famous Higgs is a boson, while the electron is a fermion.
One question to pose is whether these two categories are really distinct, or whether they are just two sides of a single coin. Supersymmetry is what you get if you try to realize the latter option. Supersymmetry - or SUSY for short - introduces a relation between bosons and fermions. A consequence of SUSY is that for every boson there is a fermion partner, and for every fermion there is a boson partner.
A quick counting in the standard model shows that it cannot be supersymmetric. Moreover, SUSY also dictates that all other properties of a boson and its fermion partner must be the same. This includes the mass and the electric charge. Hence, if SUSY were real, there should be a boson which otherwise acts like an electron. Experiments tell us that this is not the case. So is SUSY doomed? Well, not necessarily. There is a weaker version of SUSY where it is only approximately true - a so-called broken symmetry. This allows the partners to have different masses, and then they can escape detection. For now.
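This 'quick counting' can be made concrete with a toy check: none of the standard model's bosons matches the electron in both mass and electric charge, so an unbroken-SUSY partner of the electron simply is not there. The little table below is only an illustrative sketch with rounded measured masses, not an exhaustive treatment:

```python
# Toy check: does the standard model contain a boson that could be the
# electron's unbroken SUSY partner (same mass, same electric charge)?
# Masses in GeV (rounded measured values), charges in units of e.
bosons = {
    "photon": (0.0, 0),
    "W+": (80.4, +1),
    "W-": (80.4, -1),
    "Z": (91.2, 0),
    "gluon": (0.0, 0),
    "Higgs": (125.0, 0),
}
electron_mass, electron_charge = 0.000511, -1

# A candidate partner must match both charge and (approximately) mass.
partners = [name for name, (m, q) in bosons.items()
            if q == electron_charge and abs(m - electron_mass) < 1e-3]
print(partners)  # -> [] : no such boson exists
```

The W- has the right charge but is heavier by five orders of magnitude, so the list comes out empty; only broken SUSY, with differently massive partners, survives this check.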
SUSY, even in its approximate form, has many neat features. It is therefore a possibility desired by many to be true. But only experiment (and nature) will tell eventually.
But the reason why we are interested in SUSY is quite different.
As you see, SUSY puts tight constraints on what kinds of particles are in a theory. But it does even more. It also restricts the way these particles can interact. The constraints on the interactions are a little more flexible than those on the kinds of particles. You can realize different amounts of SUSY by relaxing or enforcing relations between the interactions. What does 'more or less' SUSY mean? The details are somewhat subtle, but a hand-waving statement is that more SUSY not only relates bosons and fermions, but in addition also relates the partner particles of different particles more and more. There is an ultimate limit to the amount of SUSY you can have, essentially when everything and everyone is related and every interaction is essentially of the same strength. That is what is called a maximally SUSY theory. A fancy name for it is N=4 SUSY, for technical reasons, if you should come across it somewhere on the web.
And it is this theory which is interesting to us. Having such very tight constraints enforces a very predetermined behavior. Many things are fixed. Thus, calculations are simpler. At the same time, many of the more subtle questions we are working on are nonetheless still there. Using the additional constraints, we hope to understand this stuff better. With these insights, we may have a better chance of understanding the same stuff in a less rigid theory, like the standard model.
The topic is the rather famous concept of supersymmetry. What this means I will explain in a minute. Supersymmetry is related to two general topics we are working on. One is the quest for what comes after the standard model. It is with this respect that it has become famous. There are many quite excellent introductions to why it is relevant, and why it could be within the LHC's reach to discover it. I will not just point to any of these, but write nonetheless here a new text on it. Why? Because of the relation to the second research area involved in the master thesis, the ground work about theory. This gives our investigation a quite different perspective on the topic, and requires a different kind of introduction.
So what is supersymmetry all about? I have written about the fact that there are two very different types of particles we know of: Bosons and fermions. Both types have very distinct features. Any particle we know belong to either of these two types. E.g. the famous Higgs is a boson, while the electron is a fermion.
One question to pose is, whether these two categories are really distinct, or if there are just two sides of a single coin. Supersymmetry is what you get if you try to realize the latter option. Supersymmetry - or SUSY for short - introduces a relation between bosons and fermions. A consequence of SUSY is that for every boson there is a fermion partner, and for every fermion there is a boson partner.
A quick counting in the standard model shows that it cannot be supersymmetric. Moreover, SUSY also dictates that all other properties of a boson and a fermion partner must be the same. This includes the mass and the electric charge. Hence, if SUSY would be real, there should be a boson which acts otherwise like an electron. Experiments tell us that this is not the case. So is SUSY doomed? Well, not necessarily. There is a weaker version of SUSY where it only approximately true - a so-called broken symmetry. This allows to make the partners differently massive, and then they can escape detection. For now.
SUSY, even in its approximate form, has many neat features. Many would therefore like it to be true. But only experiment (and nature) will tell eventually.
But the reason why we are interested in SUSY is quite different.
As you see, SUSY puts tight constraints on what kinds of particles appear in a theory. But it does even more. It also restricts how these particles can interact. The constraints on the interactions are a little more flexible than those on the particle content. You can realize different amounts of SUSY by relaxing or enforcing relations between the interactions. What does 'more or less' SUSY mean? The details are somewhat subtle, but a hand-waving statement is that more SUSY not only relates bosons and fermions, but in addition relates the partners of different particles to each other more and more. There is an ultimate limit to the amount of SUSY you can have, essentially when everything is related to everything else and every interaction is essentially of the same strength. That is what is called a maximally supersymmetric theory. A fancier name, used for technical reasons, is N=4 SUSY, should you come across it somewhere on the web.
And it is this theory which is interesting to us. Such very tight constraints enforce a very predetermined behavior. Many things are fixed. Thus, calculations become simpler. At the same time, many of the more subtle questions we are working on are nonetheless still there. Using the additional constraints, we hope to understand this stuff better. With these insights, we may have a better chance to understand the same stuff in a less rigid theory, like the standard model.
Thursday, March 5, 2015
Can we tell when unification works?
Some time ago, I wrote about the idea that the three forces of the standard model, the electromagnetic force, the weak force, and the strong force, could all be just different parts of one unified force. In the group I am building, a PhD student is now working on such a theory, using simulations.
Together, we would like to answer a number of questions. The most important one is whether such a theory is consistent with what we see around us. That is necessary for such a theory to be relevant.
Now, there are almost infinitely many versions of such unified theories. We could never hope to check each and every one of them. We could pick one, but hoping it would be the right one is somewhat too optimistic. We therefore take a different approach: we aim for a general criterion that lets us check many of the candidate theories at the same time.
For this reason, we ignore for the moment that we would like to reproduce experiments. Rather, we ask ourselves what the common traits of these theories are. We have done that. What we are currently doing is constructing the simplest possible theory which has as many of these traits as possible. We have almost completed that. This reduced theory will indeed be very simple. Of known physics, it contains the weak force and the Higgs. As with every unified theory, it also contains a number of additional particles. But they are not dangerous if they are too heavy to be visible to us - at least as long as we do not have more powerful experiments. The last ingredient is the interactions between the different particles. That is what we are working on now. Having the simplest possible theory also has another benefit: it demands small enough computer resources to be manageable.
After fixing the theory, what do the questions look like? One of the traits of such theories is that there are many new particles. What is their fate? How is it arranged that we cannot see them? If we think of the theory as describing only rather small changes to the standard model, we can use perturbation theory. With this, we would just follow pretty old footsteps, and the answer can essentially be guessed from the experience of other people. The answer will be that all the surplus stuff is indeed very, very heavy. In fact, so heavy that our experiments will not be able to see it in any foreseeable future, except as very indirect effects. We get out what we put in.
But here comes the new stuff. As I have described earlier, there are many subtleties when it comes to the Higgs of the standard model. But in the end, everything collapses to a rather simple picture. Almost a miracle. Almost, but not quite. One reason is the structure of the standard model, which is very special in the number and properties of its particles. The other is that the parameters, things like masses, just fit.
The natural question is hence: Does the miracle repeat itself for this type of unified theory? Is the new stuff really heavy? Is the known stuff light enough? If the almost-miracle repeats itself, the answer is yes. Should it repeat itself? Well, we will test under which conditions it repeats itself, by playing around with the number of particles, their structures, and the parameters. We assume right now that we can get it to work, but also that we can break it. And we would like to understand very precisely when it breaks and why. And finally: do we want it to repeat itself? Probably the most obvious question, but arguably the hardest to answer. If it does not repeat itself, a whole class of ideas becomes more problematic - ideas which are conceptually pretty attractive. So, in principle, we would like to see it repeating itself. But then, would it not be more interesting if we needed to start afresh? Probably also true. But in the end, our preference should not play a role. After all, nature decides, and we are just the spectators, trying to figure out what goes on. Our preferences have nothing to do with it, and therefore we should keep them out of the game.
Tuesday, February 10, 2015
Take your theory seriously
I have published a new paper. This paper has a rather simple message, even if it is technical. The message is just: take your theory seriously. This may seem obvious, but it is not necessarily so. The reason is that if a theory works spectacularly well even when you do not take it seriously, why should you care? And when I say spectacular, I mean an agreement with experiment where any deviations are smaller than any background noise we could not yet eliminate. The theory which works so well is, of course, the standard model.
The paper is a follow-up to the proceeding I discussed last year. The upshot of this proceeding is that we use perturbation theory to describe the physics we measure, e.g., at the LHC at CERN. However, perturbation theory does not take the theory entirely seriously, and yet it works very well. The reason for this can be understood: it is an almost miraculous coincidence that taking the theory seriously gives almost the same results. We have checked this in very great detail.
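To illustrate what "not taking the theory entirely seriously" buys and risks, here is perturbation theory in miniature. The function and couplings below are made up for illustration, not any actual standard-model quantity: a truncated power series in a small coupling g tracks the exact answer closely, but fails badly once g is no longer small:

```python
# Toy perturbation theory: expand f(g) = 1/(1 + g) as the truncated
# series 1 - g + g^2 - ... The full series converges only for |g| < 1.

def exact(g):
    return 1.0 / (1.0 + g)

def perturbative(g, order):
    """Series for 1/(1+g), truncated at g**order."""
    return sum((-g) ** n for n in range(order + 1))

for g in (0.1, 0.5, 2.0):
    print(f"g={g}: exact={exact(g):.4f}, 4th-order={perturbative(g, 4):.4f}")
```

At g=0.1 the truncation agrees to several digits; at g=2 it is not even the right order of magnitude. The worry discussed here is analogous: without experiment, we do not know whether a candidate theory is safely in the "small g" regime.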
But understanding what is going on in the standard model is one thing. One of the big aims in modern particle physics is to understand what else there could be.
Now, comfortable with the success within the standard model, we have for a very long time assumed that not taking the candidate theories for new physics seriously should also work out. Of course, there have been some exceptions, where we knew it should not work. But by and large, the success with the standard model made us comfortable with using the same techniques.
In the proceeding, I already raised some doubt whether this would be justified if there were a second Higgs. Under certain conditions, this may not be correct. In the full paper I now extend these doubts to other theories as well, and even conclude that it may be necessary to rethink the cases where we thought we were careful.
What is the reason behind this departure? Why should it not work? Well, I do not claim that it will not work, just that it might not. In the standard model, we were in the comfortable situation that experiments told us that it does work. Now, in the absence of experimental results, we are left with theory to tell us where to look. So we do not know whether not taking the theory seriously works out, and we may be misguided if it does not.
But why should it fail this time, when it works so well for the standard model? A legitimate question. It requires understanding why it does work for the standard model. Looking at it in detail shows that two conditions are required. One is that the relative masses of the particles lie within a certain range. That is satisfied by the standard model. The second is that the relative number of particles is just right. Both conditions may or may not be met by the theories we have for new physics. In the paper, I give particular examples for several theories, and formulate requirements which have to be met for things to work out.
So is the paper now killing off models? No. At least not yet. I only formulate conditions and requirements. Whether a particular theory meets them is a question for the theory. I give some examples where the situation is very much on the borderline, as a starting point for where to check. But what actually happens requires a calculation - a calculation in which we do take the theory very seriously. This will be very complicated, and in the end we may just figure out that it was unnecessary. But then we can be sure to be right, and that the theory gives us the leeway to not take it too seriously. And being sure is a basic requirement when one wants to explain the unknown.
Friday, January 9, 2015
What is so important (to me) about the Higgs?
In a few weeks, I will give my inaugural lecture at the University of Graz. I have entitled it "The Higgs as a touchstone of theoretical particle physics". Of course, with this title I could easily give a lecture on many of the problems the Higgs presents us with which are begging for new physics. I could write about how we have no idea why the Higgs has a mass of the size it has. How we do not understand why it does what it does. And so many other things.
But this is not what I want to write about. Nor is it what I want to talk about. It is rather something more mundane but also much more subtle. It is something I do not need any new physics to worry about. I can very well worry just about the known physics.
It has something to do with redundancy. In theoretical physics it is often very convenient - indeed, often indispensable - to add something redundant to our description. A redundancy is like writing 2+0 instead of 2. Both statements have the same meaning, but adding zero to two is redundant. In this example, the zero is redundant but not useful. In particle physics, what we add is both redundant and useful.
It is this redundancy which helps us disentangle complicated problems. Of course, we are not allowed to change anything by introducing the redundancy. But as long as we respect this requirement, we are pretty free in what kind of redundancy we add. In the previous example, we could just write 2+0+0, and have another version of redundancy.
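A loose analogy in code (this is not the actual gauge-fixing machinery, just the 2+0 idea with a knob to turn): a complex number carries an overall phase we may choose freely, and a "measurement" that only sees its magnitude is blind to that choice:

```python
import cmath

psi = 3 + 4j  # some made-up 'field' value

def observable(z):
    """A measurable quantity: insensitive to an overall phase of z."""
    return abs(z) ** 2

# Any choice of the redundant phase leaves the observable untouched.
for theta in (0.0, 0.7, 3.1):
    rotated = psi * cmath.exp(1j * theta)
    assert abs(observable(rotated) - observable(psi)) < 1e-9

print(observable(psi))  # 25.0
```

Just as 2, 2+0, and 2+0+0 all mean the same number, every choice of theta describes the same physics; the freedom to choose is the redundancy.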
OK, so what has this to do with the Higgs? Well, if we measure something, it is of course independent of these redundancies - after all, they are man-made, and nothing made by us should influence what we measure. But if we look at the ordinary version of how we describe the Higgs, then there is a slight mismatch. In our theoretical description of the Higgs, some remainder of the redundancy still lingers. It is as if 2+0.001 pops up. Nonetheless, our theoretical description of the Higgs is spot on the experimental results. But how can this be, if there is still redundancy polluting our result? It is here that the Higgs becomes a touchstone of understanding theoretical particle physics: in explaining why this is wrong and right at the same time.
As always, the answer appears to be in the fine print. In the standard way we approach the Higgs, we are not doing it exactly. Well, to be honest, we could not do it exactly. We make some approximations. The consequence of these approximations is the appearance of the residual redundancy. Since our calculations are so spot on, these approximations appear to be good. However, after performing these approximations, we have no way short of experiment to confirm them. That is highly unsatisfactory. We must be able to predict whether the approximations work, and understand why they work. It is in this sense that the Higgs is a touchstone. If we are not even able to answer these questions, how can we expect to solve the many outstanding questions?
This question already bugged people 35 years ago, and some understanding of why it works was achieved. But not of when it works, at least not in the form of numbers. We have made some progress with this, especially recently. The amazing result was that it appears to work only in a very limited range of masses of the Higgs - with the observed Higgs mass essentially right in the middle of the possible range. This is even more surprising, as already slight modifications of how many Higgs particles there are seem to change this. So, why is this so? Why is the Higgs mass just there? And what would happen otherwise? Understanding these questions will be very important to go beyond what is known. Without understanding, we may easily be fooled by our approximations, if we are not that lucky next time. This is the reason why I think the Higgs is a touchstone for theoretical particle physics.