Wednesday, August 7, 2019

Making connections

Over time, it has happened that a solution in one area of physics could also be used in a quite different area, or at least inspired the solution there. Unfortunately, this does not always work. Quite often, once the finer points are reached, it turns out that something promising does not work in the end. Thus, it pays off to always be careful with such a transfer, and never to believe a hype. Still, in some cases it worked, and even led to brilliant triumphs. And so it is always worthwhile to try.

Such an attempt is precisely the content of my latest paper. In it, I try to transfer ideas from my research on electroweak physics and the Brout-Englert-Higgs effect to quantum gravity. Quantum gravity is first and foremost still an unsolved issue. We know that mathematical consistency demands some unification of quantum physics and gravity. We expect this to take the form of a quantum theory of gravity, though we are still lacking any experimental evidence for this assumption. Still, I make the assumption for now that quantum gravity exists.

Based on this assumption, I take a candidate for such a quantum gravity theory and pose the question of what its observable consequences are. This is a question which has driven me for a long time in particle physics. I think that by now I have an understanding of how it works. But last year, I was challenged whether these ideas can still be right if there is gravity in the game. And this new paper (https://arxiv.org/abs/1908.02140) is essentially my first step towards an answer. Much of this answer is still rough, and especially the mathematics will require much more work. But at least it provides a first consistent picture. And, as advertised above, it draws from a different field.

The starting point is that the simplest version of quantum gravity currently considered is actually not that different from other theories in particle physics. It is a so-called gauge theory. As such, many of its fundamental objects, like the structure of space and time, are not really observable. Just like most of the elementary particles of the standard model, which is also a gauge theory, are not. Thus, we cannot see them directly in an experiment. In the standard model case, it was possible to construct observable particles by combining the elementary ones. In a sense, the particles we observe are bound states of the elementary particles. However, in electroweak physics one of the elementary particles in the bound state totally dominates the rest, and so the whole object looks very similar to the elementary one, but not quite.

This works because the Brout-Englert-Higgs effect makes it possible. The reason is that there is a dominating kind of unobservable structure, the so-called Higgs condensate, which creates this effect. This is something coincidental. If the parameters of the standard model were different, it would not work. But, luckily, our standard model has just the right parameter values.

Now, when looking at gravity around us, there is a very similar feature. While we have the powerful theory of general relativity, which describes how matter warps space, we rarely see this warping. Most of our universe behaves much more simply, because there is so little matter in it, and because the parameters of gravity are such that this warping is very, very small. Thus, we have again a dominating structure: a vacuum which is almost not warped.
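
To make the analogy a bit more explicit (this is my schematic way of putting it, suppressing all technical details): just as the Higgs field is split into a dominating condensate plus a small fluctuation, the metric describing space and time can be split into the flat, unwarped vacuum plus a small warping,

\[ \phi(x) = v + \varphi(x), \qquad g_{\mu\nu}(x) = \eta_{\mu\nu} + h_{\mu\nu}(x), \quad |h_{\mu\nu}| \ll 1, \]

where v is the Higgs condensate and \varphi the Higgs fluctuation, while \eta_{\mu\nu} is the flat metric and h_{\mu\nu} the small warping, whose quantized version is the graviton.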

Using this analogy and the properties of gauge theories, I figured out the following: we can use something like the Brout-Englert-Higgs effect in quantum gravity. And all observable particles must still be some kind of bound states. But they may now also include gravitons, the elementary particles of quantum gravity. Just like in the standard model, however, these bound states are dominated by just one of their components. And if there is a standard model component, it is this one which dominates. Hence, the particles we see at the LHC will essentially look like there is no gravity. And this is very consistent with experiment. Detecting the deviations due to gravity will be so hard compared to detecting those coming from the standard model itself that we can pretty much forget about it for earthbound experiments. At least for the next couple of decades.

However, there are now also some combinations of gravitons without standard model particles involved. Such objects have long been speculated about, and are called geons, or gravity balls. In contrast to the standard model case, they are not stable classically. But they may be stabilized by quantum effects. The bound state structure strongly suggests that there is at least one stable one. Still, this is pure speculation at the moment. But if they are stable, these objects could have dramatic consequences. E.g., they could be part of the dark matter we are searching for. Or, they could make up black holes very much like neutrons make up a neutron star. I have no idea whether any of these speculations could be true. But if there is only a tiny amount of truth in them, this could be spectacular.

Thus, some master students and I will set out to have a look at these ideas. To this end, we will need to do some hard calculations. And, eventually, the results should be tested against observation. These observations will be coming from the universe, and from astronomy. Especially from the astronomy of black holes, where recently there have been many interesting and exciting developments, like observing two black holes merge, or the first direct image of a black hole (obviously just black inside a kind of halo). These are exciting times, and I am looking forward to seeing whether any of these ideas work out. Stay tuned!

Thursday, July 25, 2019

Talking about the same thing

In this blog entry I will try to explain my most recent paper. The theme of the paper is rather simply put: you should not compare apples with oranges. The subtlety comes from knowing whether you have an apple or an orange in your hand. This is far less simple than it sounds.

The origin of the problem is once more gauge theories. In gauge theories, we have introduced additional degrees of freedom. And, in fact, we have a choice of how we do this. Of course, our final results will not depend on the choice. However, getting to the final result is not always easy. Thus, ensuring that the intermediate steps are right would be good. But they depend on the choice. And then they are only comparable between two different calculations if in both calculations the same choice is made.

Now it seems simple at first to make the same choice. Ultimately, it is our choice, right? But this is actually not that easy in such theories, due to their mathematical complexity. Thus, rather than making the choice explicit, the choice is made implicitly. The way this is done is, again for technical reasons, different for different methods. And because of all of these technicalities, and the fact that we need to make approximations, figuring out whether the implicit conditions yield the same explicit choice is difficult. This is especially important as the choice modifies the equations describing our auxiliary quantities.

In the paper I test this. If everything is consistent between two particular methods, then the solutions obtained in one method should also solve the equations obtained in the other method. Seems a simple enough idea. There had been various arguments in the past which suggested that this should be the case. But over the last couple of years there have been more and more pieces of evidence that led me to think that something was amiss. So I made this test, rather than relying on the arguments.
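
Schematically, the test looks like this (in my own shorthand, not the notation of the paper): call the two methods A and B, and the auxiliary quantity in question D, say a propagator. Method A delivers a solution D_A of its own equations, \( \mathcal{E}_A[D_A] = 0 \). If the implicit choices agree, then D_A should also satisfy method B's equations, \( \mathcal{E}_B[D_A] \approx 0 \), within the accuracy of the approximations. The test is thus simply to insert the solution of one method into the equations of the other, and to see how far the result is from zero.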

And indeed, what I find in the article is that the solution of one method does not solve the equation from the other method. The way this happens strongly suggests that the implicit choices made are not equivalent. Hence, the intermediate results are different. This does not mean that they are wrong. They are just not comparable. Either method can still yield results that are consistent in themselves. But since neither of the methods is exact, a comparison between both would help reassure us that the approximations made make sense. And this is now hindered.

So, what to do now? We would very much like to have the possibility to compare different methods at the level of the auxiliary quantities. So this needs to be fixed. This can only be achieved if the same choice is made in all the methods. The tough question is in which method we should work on the choice. Should we try to make the same choice as in one fixed method? Should we try to find a new choice for all methods? This is tough, because everything is so implicit, and affected by approximations.

At the moment, I think the best way is to get one of the existing choices to work in all methods. Creating an entirely new one for all methods appears to me far too much additional work. And I, admittedly, have no idea what a better starting point would be than the existing ones. But in which method should we start trying to alter the choice? In neither method does this seem simple. In both cases, there are fundamental obstructions which need to be resolved. I therefore would currently like to start poking around in both methods, hoping that there may be a point in between where the choices of the methods could meet, which would be easier than pushing one choice all the way. I have a few ideas, but they will take time. Probably also a lot more people than just me.

This investigation also amazes me, as the theory where this happens is nothing new. Far from it: it is more than half a century old, older than I am. And it is not something obscure, but rather part of the standard model of particle physics. So a very essential element in our description of nature. It never ceases to baffle me how little we still know about it. And how unbelievably complex it is at a technical level.

Wednesday, June 19, 2019

Creativity in physics

One of the most widespread misconceptions about physics, and other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic. While art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to fit together perfectly, and there is no room for slip-ups. Logical deduction is central in what we do. But this is not all. In fact, these parts are more like the handiwork. Just like a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, so we need to be able to calculate, build, check, and infer. But just like the act of drawing a line or writing a sentence is not yet what we recognize as art, so the solving of an equation is not yet physics.

We are able to solve an equation because we learned this during our studies. We learned what was known before. Thus, this is our tool set. Like people read books before starting to write one. But when we actually do research, we face the fact that nobody knows what is going on. In fact, quite often we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, need inspiration, on how to start. And then afterwards, just like the writer writes page after page, we add various pieces to this idea, until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear. Then we have to calculate, and so on. Just like the writer has to go and fix the draft so it becomes a book.

You may now wonder whether this kind of creativity is limited to the great minds, and to the inception of a whole new step in physics? No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, somebody occasionally has the right idea. But this is usually just the one idea which turns out in the end to be correct, while all the other good ideas, which other people had, turned out to be incorrect, and you never hear of them because of this. On the other hand, every new idea, as said above, eventually requires all that was done before. And more than that. Creativity is rarely born out of being a hermit. It often comes from inspiration by others. Talking to each other, throwing fragments of ideas at each other, and mulling over consequences together is what creates the soil where creativity sprouts. All those with whom you have interacted have contributed to the birth of the idea you have.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky research or curiosity-driven research. It is not a coincidence that the freedom of doing whatever kind of research you think is important is an, almost sacred, privilege of hired scientists. Or should be. Fortunately, I am lucky enough, especially in the European Union, to have this privilege. In other places, you are often shackled by all kinds of external influences, down to political pressure to only do politically acceptable research. And this can never spark the creativity you need to make something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything which is not already established to be acceptable becomes unthinkable. This may not always be as obvious as real political pressure. But if being hired, or keeping your job, starts to depend on it, you start going for acceptable research. Because failure with something new would cost you dearly. And with the competitive funding currently prevalent, particularly for non-permanently hired people, this starts to become a serious obstruction.

As a consequence, real breakthrough research can neither be planned nor done on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you actually never really fail, because you always learn how something does not work. That is one of the reasons why I strongly want failures to become publicly available as well. They are as important to progress as successes, by reducing the possibilities. Not to mention the amount of researchers' lifetime wasted because they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way. Not intentionally, but because it was necessary to follow one's creativity. And that is actually where most technological leaps came from. So, real progress in physics, in the end, is made from about a third craftsmanship, a third communication, and a third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out just following my ideas: either it would keep me in business, or I would have to find a non-science job. But this only worked out because of my personal background, because I could have afforded a couple of months with no income to find a job, and had an education which almost guarantees me a decent job eventually. And that education I could only get in this quality because of my personal background. Not to mention that, as a white male, I had no systemic barriers against me. So, yes, privilege plays a major role.

The other part was that I learned more and more that it is not effort that counts, but effect. Took me years. But eventually, I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. But not working overtime, and enjoying free days and holidays, is for me as important for the creative process as any other condition. Not to mention that I also do all non-creative chores much more efficiently if well rested, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I have now had the opportunity, in a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to acquire a new field and already have new ideas for it. It is the possibility to sit down with people for some hours, especially in a nicer and more relaxing surrounding than an office, and just discuss ideas. That is also what I like most about conferences. And one of the reasons I think conferences will always be necessary, even though we need to make getting there and back ecologically much more viable, and restrict ourselves to sufficiently close ones until that is possible.

Sitting down over a good cup of coffee or a nice meal, and just discussing, really jump-starts my creativity. Even sitting with a cup of good coffee in a nice cafe somewhere and just thinking does wonders for me in solving problems. And with that, it seems I am not so different from artists, after all.

Tuesday, May 14, 2019

Acquiring a new field

I have recently started to look into a new field: quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field. So that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, things are slow. You have to learn a lot of basics, need to get an overview of what has been done and what is still open, not to mention getting used to a different jargon. Thus, one does so only with good reason.

I have already written one entry in the past about how I came to do Higgs physics. That entry was written after the fact. I was looking back, and discussed my motivation as I saw it at that time. It will be interesting to look back at the present entry in a few years, and judge what is left of my original motivation, and how I feel about it knowing what has happened since then. But for now, I only know the present. So, let's get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity comes not from the promise of such a possibility. It has rather a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs, and by now, in fact, at the whole standard model. In what we get, we see a clear distinction between two concepts: so-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, which gauge symmetry can be used to facilitate things differs from theory to theory, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity space itself is different. Especially, you can deform space. Now, the original distinction between global symmetries and gauge symmetries is their relation to space. A global symmetry is something which is the same from point to point. A gauge symmetry allows changes from point to point. Loosely speaking, of course.
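
In formulas, the textbook version of this distinction (nothing specific to my own work) reads: a field \( \psi(x) \) transforms under a global symmetry as \( \psi(x) \to e^{i\alpha}\,\psi(x) \), with one and the same \( \alpha \) at every point, while under a gauge symmetry it transforms as \( \psi(x) \to e^{i\alpha(x)}\,\psi(x) \), with an \( \alpha(x) \) that can differ from point to point.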

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Or do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives in space automatically subject to a gauge symmetry? If we want to understand the results of what we did in the standard model, where there is no gravity, in the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot do it yet.

These questions had already been in the back of my mind for some time. A few years, actually; I do not know how many exactly. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that eventually I would need to address it, if nobody else did. So far, nobody has.

But why now? What prompted me to start with it now? As so often in science, it was other scientists.

Last year, at the end of November/beginning of December, I took part in a conference in Vienna. I had been invited to talk about our research. The meeting had a quite wide scope, and also present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory, though with strong gravitational effects. Which usually means a black hole. In such a setup, the deformations of space are fixed. And also non-quantum black holes can swallow stuff. This combination appears to lead to the following: global symmetries appear to become meaningless, because everything associated with them can vanish in the black hole. However, keeping space deformations fixed means that local symmetries are also fixed. So they appear to become real, instead of auxiliary. This seems to be quite the opposite of our result. And this, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect seems to be there.

However, in a full quantum gravity theory, the game changes. Then space deformations become dynamical as well. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical, too. They evolve. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start working actively on quantum gravity. Because I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity. And to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basic stuff. I now have a rough idea of what is going on, and what needs to be done. And I think I see a way how everything can be reconciled and make sense. It will still need a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still take time before I have a first real result. But I am quite happy with how things progress.

And that is the story of how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

Thursday, February 7, 2019

Why there won't be warp travel in times of global crises

One of the questions I get most often at outreach events is: "What about warp travel?", or some other wording for faster-than-light travel. Something which makes interstellar travel possible, or at least viable.

Well, the first thing I can say is that there is nothing which excludes it. Of course, within our well-established theories of the world it is not possible. Neither the standard model of particle physics, nor general relativity, when constrained to the matter we know of, allows it. Thus, whatever describes warp travel, it needs to be a theory which encompasses and enlarges what we know. Can a quantized combination of general relativity and particle physics do this? Perhaps, perhaps not. Many people think about it really hard. Mostly, we run afoul of causality when trying.

But these are theoretical ideas. And even if some clever team comes up with a theory which allows warp travel, this does not mean that this theory is actually realized in nature. Just because we can make it mathematically consistent does not guarantee that it is realized. In fact, we have many, many more mathematically consistent theories than are realized in nature. Thus, it is not enough to just construct a theory of warp travel. Which, as noted, we have failed to do so far.

No, what we need is to figure out that it really happens in nature. So far, this did not happen. Neither did we observe it in any human-made experiment, nor did we have any observation in nature which unambiguously points to it. And this is what makes it really hard.

You see, the universe is a tremendous place, which is unbelievably large, and essentially three times as old as the whole planet Earth. Not to mention humanity. Extremely powerful events happen out there. This ranges from quasars, effectively a whole galactic core on fire, to black hole collisions and supernovas. These events put out an enormous amount of energy. Much, much more than even our sun generates. Hence, anything short of a big bang is happening all the time in the universe. And we see the results. The Earth is constantly hit by particles with much, much higher energies than we can produce in any experiment. And this has been so since the Earth came into being. Incidentally, this also tells us that nothing we can do at a particle accelerator can really be dangerous. Whatever we do there has happened so often in our Earth's atmosphere that it would have killed this planet long before humanity entered the scene. The only bad thing about it is that we never know when and where such an event happens. And the rate is also not that high; it is only that the Earth has already existed for so very long. And is big. Hence, we cannot use this to make controlled observations.

Thus, whatever could happen, happens out there. In the universe. We see some things out there which we cannot explain yet, e.g. dark matter. But by and large a lot works as expected. Especially, we do not see anything which requires warp travel to explain it. Or anything else remotely suggesting something happening faster than the speed of light. Hence, if something like faster-than-light travel is possible, it is neither common nor easily happening.

As noted, this does not mean it is impossible. Only that if it is possible, it is very, very hard. Especially, this means it will be very, very hard to make an experiment to demonstrate the phenomenon. Much less to actually make it a technology, rather than a curiosity. This means, a lot of effort will be necessary to get to see it, if it is really possible.

What is a lot? Well, CERN is a bit. But human, or even robotic, space exploration is an entirely different category, some one to two orders of magnitude more. Probably, we would need to combine such space exploration with particle physics to really get to it. Possibly the best example for such an endeavor is the future LISA project to measure gravitational waves in space. It is perhaps even our current best bet to observe any hints of faster-than-light phenomena, aside from bigger particle physics experiments on Earth.

Do we have the technology for such a project? Yes, we do. We have had it for roughly a decade. But it will likely take at least one more decade to have LISA flying. Why not now? Resources. Or, as it is often put equivalently, costs.

And here comes the catch. I said it is our best chance. But this does not mean it is a good chance. In fact, even if faster-than-light travel is possible, I would be very surprised if we were to see it with this mission. There are probably a few more generations of technology, and another order of magnitude of resources, needed before we could see something, given what I know about how well everything currently fits. Of course, there can always be surprises with every little step further. I am sure we will discover something interesting, possibly spectacular, with LISA. But I would not bet anything valuable that it will have to do with warp travel.

So, you see, we have to scale up if we want to go to the stars. This means investing resources. A lot of them. But resources are needed to fix things on Earth as well. And the more we damage, the more we need to fix, and the less we have left to get to the stars. Right now, humanity is moving into a state of perpetual crisis. The damage wrought by the climate crisis will require enormous efforts to mitigate, much more to stop the downhill trajectory. As a consequence of the climate crisis, as well as social inequality, more and more conflicts will create further damage. Isolationism, both national as well as social, driven by fear of the oncoming crises, will also soak up tremendous amounts of resources. And, finally, a hostile environment towards diversity, and putting individual gains above common gains, creates a climate which is hostile to anything new and different in general, and to science in particular. Hence, we will not be able to use our resources, or the ingenuity of the human species as a whole, to get to the stars.

Thus, I am not hopeful to see faster-than-light travel in my lifetime, or that of the next generation. Such a challenge, if it is possible at all, will require a common effort of our species. That would truly be one worthy endeavour to put our minds to. But right now, as a scientist, I am much more occupied with protecting a world in which science is possible, both metaphorically and literally.

But there is always hope. If we rise up, and decide to change fundamentally. When we put the well-being of us as a whole first. Then I would be optimistic that we can get out there. Well, at least as fast as nature permits. However fast that may turn out to be.

Tuesday, January 8, 2019

Taking your theory seriously

This blog entry is somewhat different than usual. Rather than writing about some particular research project, I will write about a general vibe, directing my research.

As usual, research starts with a 'why?'. Why does something happen, and why does it happen in this way? Being the theoretician that I am, this question often equates with wanting to have a mathematical description of both the question and the answer.

Already very early in my studies I ran into peculiar problems with this desire. It usually left me staring at the words '...and then nature made a choice', asking myself, how could it? A simple example of the problem is a magnet. You all know that a magnet has a north pole and a south pole, and that these two are different. So, how does it happen which end of the magnet becomes the north pole and which the south pole? At the beginning you always get to hear that this is a random choice, and it just happens that one particular choice is made. But this is not really the answer. If you dig deeper, you find that originally the metal of any magnet has been very hot, likely liquid. In this situation, a magnet is not really magnetic. It becomes magnetic when it cools down and becomes solid. At some temperature (the so-called Curie temperature), it becomes magnetic, and the poles emerge. And here this apparent miracle of a 'choice by nature' happens. Only that it does not. The magnet does not cool down all by itself; it has a surrounding. And the surrounding can have magnetic fields as well, e.g. the Earth's magnetic field. And the decision of what is south and what is north is made by how the magnet forms relative to this field. And thus, there is a reason. We do not see it directly, because magnets have usually been moved since then, and thus this correlation is no longer obvious. But if we were to heat the magnet again, and let it cool down again, we could observe this.
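
For those who like formulas, the standard textbook caricature of this, Landau's picture of a ferromagnet, captures the point, even though it is of course much cruder than the real material. Near the Curie temperature T_c, the energy of a magnetization M in a small external field h behaves roughly like

\[ F(M) \approx a\,(T - T_c)\,M^2 + b\,M^4 - h\,M, \qquad a, b > 0. \]

For h = 0 and T < T_c there are two exactly degenerate minima, +M_0 and -M_0, and nothing inside the magnet itself can prefer one over the other. Any tiny external field h, like the Earth's, tilts the balance and selects one minimum: the 'choice' is made by the surroundings.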

But this immediately leaves you with the question of where the Earth's magnetic field comes from, and how it got its direction. Well, it comes from the liquid metallic core of the Earth, and it aligns, more or less, along or opposite to the rotation axis of the Earth. Thus, the questions are how the rotation axis of the Earth came about, and why it has a liquid core. Both questions are well understood, and the answers arise from how the Earth formed billions of years ago. This is due to the mechanics of the rotating disk of dust and gas which formed around our fledgling sun. Which in turn comes from the dynamics on even larger scales. And so on.

As you see, whenever one had the feeling of a random choice, it was actually the outside of what we had looked at so far which made the decision. So, such questions always lead us to include more into what we try to understand.

'Hey', I can now literally hear people say who are a bit more acquainted with physics, 'does not quantum mechanics make really random choices?'. The answer to this is yes and no in equal measure. This is probably one of the more fundamental problems of modern physics. Yes, our description of quantum mechanics, as we also teach it in courses, has intrinsic randomness. But when does it occur? Yes, exactly: whenever we jump outside of the box we describe in our theory. Real, random choice is encountered in quantum physics only whenever we transcend the system we are considering. E.g. by an external measurement. This is one of the reasons why this is known as the 'measurement problem'. If we stay inside the system, this does not happen. But at the expense that we lose contact with things, like an ordinary magnet, which we are used to. The objects we are describing become obscure, and we talk about wave functions and stuff like that. Whenever we try to extend our description to also include the measurement apparatus, on the other hand, we again get something which is strange, but not as random as it originally looked. Although talking about it becomes almost impossible beyond the mathematical description. And it is not really clear what random means anymore in this context. This problem is one of the big ones in the conceptual foundations of physics. While there is a relation to what I am talking about here, this question can still be kept separate.

And in fact, it is not this divide I want to talk about, at least not today. I just wanted to get this type of 'quantum choice' out of the way. Rather, I want to get to something else.

If we stay inside the system we describe, then everything becomes calculable. Our mathematical description is closed in the sense that, after fixing a theory, we can calculate everything. Well, at least in principle; in practice our technical capabilities may limit this. But this is of no importance for the conceptual point. Once we have fixed the theory, there is no choice anymore. There is no outside. And thus, everything needs to come from inside the theory. Thus, a magnet in isolation will never magnetize, because there is nothing which can make a decision about how. The different possibilities are caught in an eternal balanced struggle, and none can win.

Which makes a lot of sense, if you take physical theories really seriously. After all, one of the basic tenets is that there is no privileged frame of reference: 'everything is relative'. If there is nothing else, nothing can happen which creates an absolute frame of reference without violating the very same principles on which we founded physics. If we take our own theories seriously, and push them to the bitter end, this is what has to come about.

And here I come back to my own research. One of its driving principles has been to really push this seriousness, and to ask what it implies if one really, really takes it seriously. Of course, this is based on the assumption that the theory is (sufficiently) adequate, but that is everyday uncertainty for a physicist anyhow. This requires me to very, very carefully separate what is really inside and what is outside. And this leads to quite surprising results. Essentially most of my research on Brout-Englert-Higgs physics, as described in previous entries, comes about because of this approach. And it leads partly to results quite at odds with common lore, which often means a lot of work to convince people. Even if the mathematics is valid and correct, interpretation issues are much more open to debate when it comes to implications.

Is this point of view adequate? After all, we know for sure that we are not yet finished, that our theories do not contain all there is, and that there is an 'outside', however it may look. And I agree. But I think it is very important that we very clearly distinguish what is an outside influence and what is not. And as a first step to establish what is outside, and thus, in a sense, is 'new physics', we need to understand what our theories say when they are taken in isolation.

Thursday, December 13, 2018

The size of the W

As discussed in an earlier entry, we set out to measure the size of a particle: the W boson. We have now finished this, and published a paper about our results. I would like to discuss these results a bit in detail.

This project was motivated because we think that the W (and its sibling, the Z boson) is actually more complicated than usually assumed. We think that they may have a self-similar structure. The bits and pieces of this are quite technical. But the outline is the following: what we see and measure as a W at, say, the LHC or earlier, is actually not a point-like particle, although this is currently the most common view. But science has always been about changing the common ideas and replacing them with something new and better. So, our idea is that the W has a substructure. This substructure is a bit weird, because it is not made from additional elementary particles. It rather looks like a bubbling mess of quantum effects. Thus, we do not expect that we can isolate anything which resembles a physical particle within the W. And if we try to isolate something, we should not expect it to behave as a particle.

Thus, this scenario gives two predictions. One: Substructure needs to have space somewhere. Thus, the W should have a size. Two: Anything isolated from it should not behave like a particle. To test both ideas in the same way, we decided to look at the same quantity: The radius. Hence, we simulated a part of the standard model. Then we measured the size of the W in this simulation. Also, we tried to isolate the most particle-like object from the substructure, and also measured its size. Both of these measurements are very expensive in terms of computing time. Thus, our results are rather exploratory. Hence, we cannot yet regard what we found as final. But at least it gives us some idea of what is going on.

The first thing is the size of the W. Indeed, we find that it has a size, and one which is not too small either. The number itself, however, is far less certain. The reason for this is twofold. On the one hand, we have only a part of the standard model in our simulations. On the other hand, we see artifacts. They come from the fact that our simulations can only describe some finite part of the world. The larger this part is, the more expensive the calculation. With what we had available, this part seems to be still so small that the W is big enough to 'bounce off the walls' fairly often. Thus, our results still show a dependence on the size of this part of the world. Though we try to account for this, it still leaves a sizable uncertainty in the final result. Nonetheless, the qualitative feature that the W has a significant size remains.
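
To give an idea of how such finite-volume artifacts are usually handled (this is the generic lattice recipe; the actual procedure in the paper may differ in detail): one repeats the measurement in boxes of several sizes L and extrapolates to infinite volume, for instance with an ansatz like

\[ R(L) \approx R_\infty + \frac{c}{L}, \]

or an exponentially suppressed correction, where R_\infty is the quantity one is actually after. The spread between such ansätze then becomes part of the quoted uncertainty.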

The other thing is the would-be constituents. We can indeed identify some kind of lumps of quantum fluctuations inside. But indeed, they do not behave like a particle, not even remotely. Especially, when trying to measure their size, we find that the square of their radius is negative! Even though the final value is still uncertain, this is nothing a real particle should have. Because taking the square root of such a negative quantity to get the actual radius yields an imaginary number. That is an abstract quantity which, while not identifiable with anything in everyday life, has a well-defined mathematical meaning. In the present case, it means this lump is nonphysical, as if you were trying to upend a hole. Thus, this mess is really not a particle at all, in any conventional sense of the word. Still, what we could get from this is that such lumps - even though they are not really lumps - 'live' only in areas of our W much smaller than the W size. So, at least they are contained. And they let the W be the well-behaved particle it is.
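
Just to spell out the arithmetic: if the measured mean-square radius comes out negative, say \( \langle r^2 \rangle = -x \) with x > 0, then the radius itself would be

\[ r = \sqrt{\langle r^2 \rangle} = \sqrt{-x} = i\sqrt{x}, \]

an imaginary number. Perfectly fine as mathematics, but not something a physical, particle-like lump can have as a size.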

So, the bottom line is, our simulations agreed with our ideas. That is good. But it is not enough. After all, who can tell if what we simulate is actually the thing happening in nature? So, we will need an experimental test of this result. This is surprisingly complicated. After all, you cannot really get a measuring stick to determine the size of a particle. Rather, what you do is throw other particles at it, and then see how much they are deflected. At least in principle.

Can this be done for the W? Yes, it can be done, but it is very indirect. Essentially, it could work as follows: take the LHC, at which two protons are smashed into each other. In this smashing, it is possible that a Z boson is produced, which scatters off a W. So, you 'just' need to look at the W before and after. In practice, this is more complicated. Since we cannot send the W in there to hit the Z, we use the fact that mathematically this process is related to another one: if we get one, we get the other for free. This other process is that the produced Z, together with a lot of kinetic energy, decays into two W particles. These are then detected, and their directions measured.

As nice as this sounds, it is still horrendously complicated. The problem is that the Ws themselves decay into leptons and neutrinos before they reach the actual detector. And because neutrinos essentially always escape undetected, one can only indirectly infer what has been going on. Especially, the directions of the Ws cannot easily be reconstructed. Still, in principle it should be possible, and we discuss this in our paper. So we can actually measure this size, in principle. It is now up to the experimental experts whether it can - and will - be done in practice.

Wednesday, October 24, 2018

Looking for something when no one knows how much is there

This time, I want to continue the discussion from some months ago. Back then, I was rather general on how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, the common idea is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiments, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only at first sight look as if they themselves were the elementary ones. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We already had some ideas a while back. One of them will be ready soon, and I will talk about it again in due time. But this will be rather indirect, and somewhat qualitative. The other, however, requires a new experiment, which may need two more decades to be built. Thus, neither alone can be the answer, and we need something more.

And this 'more' is what we are currently closing in on. Because this kind of weird bound-state structure is needed to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary ones are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So, we already have a machine which may be able to test it. But this is involved, as protons are very messy. They are, already in the conventional picture, bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle old and new components. So, we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (and other experiments) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected. E.g., whether they fly away from the interaction in the expected directions and with the expected speeds.
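
In the standard model, the statement that the Higgs couples more strongly to heavier particles is just the textbook relation between the Yukawa coupling y_f of a fermion f and its mass m_f,

\[ y_f = \frac{\sqrt{2}\, m_f}{v}, \qquad v \approx 246\ \mathrm{GeV}, \]

so the top quark, as the heaviest known particle, has by far the largest coupling, and the couplings of the W and Z to the Higgs likewise grow with their masses.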

So what a master student and I do is the following. We use a program, called HERWIG, which simulates such events. One of the people who created this program helped us to modify it, so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution in the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is distributed inside the proton.

And this is what we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution in the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able to either constrain the contribution further, or could even detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.
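
To illustrate the logic of setting such a bound, here is a little toy sketch in Python. All numbers and the parametrization are invented for illustration only; the real analysis runs through HERWIG and actual experimental distributions.

    import numpy as np

    # Toy stand-in for a measured distribution (say, some kinematic variable of
    # the top quarks and the Z boson), with statistical uncertainties.
    data = np.array([120., 95., 60., 30., 12.])
    data_err = np.sqrt(data)

    def predicted(higgs_fraction):
        """Toy prediction: the conventional proton gives a baseline shape, and a
        hypothetical Higgs component inside the proton distorts it linearly.
        In the real study this comes from the full event-generator simulation."""
        baseline = np.array([118., 97., 61., 29., 11.])
        distortion = np.array([-8., -2., 3., 6., 9.])   # invented shape change
        return baseline + higgs_fraction * distortion

    def chi2(higgs_fraction):
        """Agreement between toy data and toy prediction."""
        return np.sum(((data - predicted(higgs_fraction)) / data_err) ** 2)

    # Scan the assumed Higgs fraction and keep every value still compatible with
    # the data (here: within a chi^2 increase of 4, roughly a 2-sigma criterion).
    scan = np.linspace(0.0, 1.0, 201)
    chi2_values = np.array([chi2(f) for f in scan])
    allowed = scan[chi2_values < chi2_values.min() + 4.0]

    print("upper bound on the toy Higgs fraction:", allowed.max())

The essential point is only the last few lines: scan the assumed contribution, keep whatever is still compatible with the data, and quote the largest surviving value as the upper bound. With more and better data, the allowed range shrinks, or a nonzero contribution becomes visible.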

Thursday, September 27, 2018

Unexpected connections

The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often it also failed its original purpose miserably, but became paramount for the new one. More recent examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified about what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz is starting to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation, namely that there are additional particles making up dark matter. Then there are two questions: what are they? And do they interact with the rest of the world, and if yes, how? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed that there is this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed that, with an improved treatment and a shift in paradigm, this is not always true. At least some of these theories do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles. Because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. That was the reason why such theories had been excluded. But with our new results, they become feasible. Even more so, as we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we can see consists of many different particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works in detail. The scenario I will describe (you can already download my talk now, if you want to look for yourself - though it is somewhat technical) features two different types of stable dark matter particles. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this to make sure that everything is compatible with what astrophysics tells us. Moreover, this setup gives us two more particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test this model not only by astronomical observations, but also at CERN. This gives the basic idea. Now, we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense. Or whether the model will have to wait for another opportunity.
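
For reference, the generic form of such a Higgs portal (the textbook version; the details in our model may differ) is an interaction term in the Lagrangian of the type

\[ \mathcal{L}_{\mathrm{portal}} = -\lambda_p\, (\phi^\dagger \phi)\, (X^\dagger X), \]

where \phi is the standard-model Higgs doublet, X the new scalar field, and \lambda_p the portal coupling which controls how strongly the new sector talks to the Higgs, and through it to the rest of the standard model.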

Monday, August 13, 2018

Fostering an idea with experience

In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to give it an explanation. On top of this comes the additional modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we so far need just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe. Not to mention all possible observations of all theories. And it is here where the problem starts. The older ideas still exist because they are not bad, but rather explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now the third problem enters: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration. Such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I did not specify yet. And just guessing would indeed lead to a lot of frustration.

The thing which helps us to hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us something about how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this should prompt us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates.

This defines our process to test our ideas. And this process can actually be traced out well in our research. E.g., in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed, as well as was possible with the amount of computing time we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

Tuesday, June 12, 2018

How to test an idea

As you may have guessed from reading through the blog, our work is centered around a change of paradigm: that there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That these particles are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics, and the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying, more formal theoretical foundation seriously. The reason that there is no huge clash is that the standard model is very special. Because of this, both pictures give almost the same predictions for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the particle which we observe, and call the Higgs, is actually a complicated object made from two Higgs particles. However, one of them is so much eclipsed by the other that the whole thing looks like just a single Higgs, plus a very tiny correction.
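To make this a little more concrete, here is a minimal sketch of the kind of expansion behind this picture. The notation is purely my own for illustration, and it glosses over the doublet structure and many other technicalities: take the simplest gauge-invariant object O one can build from the Higgs field \phi, and split \phi, in a suitable gauge, into its condensate value v plus a fluctuation \eta,

\[
O(x) = \phi^\dagger(x)\,\phi(x), \qquad \phi(x) = v + \eta(x),
\]
\[
O(x) = v^2 + 2v\,\mathrm{Re}\,\eta(x) + \eta^\dagger(x)\,\eta(x),
\]
\[
\langle O(x)\,O(y)\rangle = \text{const} + 4v^2\,\langle \mathrm{Re}\,\eta(x)\,\mathrm{Re}\,\eta(y)\rangle + \mathcal{O}(\eta^3).
\]

Because the condensate v is so dominant, the correlator of the composite object is, to leading order, just the propagator of the elementary Higgs. Everything beyond that is the tiny correction.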

So far, this does not seem to be something one needs to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain these reasons much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This is what came out of our numerical simulations. Of course, these are not perfect. And, after all, we have unfortunately not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So the ideas also make a statement that even within the standard model there should be a difference. The only question is: what is really the value of that 'little bit'? So far, experiments have not shown any deviations from the usual picture. So the 'little bit' indeed needs to be rather small. But we have a calculational prescription for this 'little bit' in the standard model. So, at the very least, what we can do is calculate this 'little bit' in the standard model. We can then see whether its value is already so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, it would raise a lot of questions about the basic theory, but, well, experiment rules. We would then need to go back to the drawing board and get a better understanding of the theory.

Or we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it becomes impossible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. It will also not yield perfect results, but hopefully good enough ones. Also, how simple the calculations are depends strongly on the type of experiment. We have taken a first few steps, though for a type of experiment not (yet) available, but hopefully so in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. For that we still need a much better understanding of the underlying mathematics, which we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students, for various measurements and at various levels. Hopefully, in the end, we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: this is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test which can decide whether your idea is the better one. If yes: great! You have just rewritten our understanding of nature. If not: well, go back to fix it, or have a new idea. Of course, it is best if there is already an experiment which does not fit with our current theories. But of those we are, at this stage, a little short. That may change again. And if your theory has no predictions which can be tested experimentally in any foreseeable future? Well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

Thursday, March 29, 2018

Asking questions leads to a change of mind

In this entry, I would like to digress a bit from my usual discussion of our physics research subject. Rather, I would like to talk a bit about how I do this kind of research. There is a twofold motivation for me to do this.

One is that I am currently teaching, together with somebody from the philosophy department, a course on the philosophy of science in physics. It came as a surprise to me that one thing the philosophy students are interested in is how I think. What are the objects, or subjects, and how do I connect them when doing research, or even when I just think about a physics theory? The other is the review I have recently written. Both topics may seem unrelated at first, but there is a deep connection. It is less about what I have written in the review, and rather about what led me up to this point. This requires some historical digression into my own research.

In the very beginning, I started out doing research on the strong interactions. One of the features of the strong interactions is that the supposedly elementary particles, quarks and gluons, are never seen separately, but only in combinations, as hadrons. This is a phenomenon called confinement. It is always somehow presented as a mystery. And as such, it is interesting. Thus, one question in my early research was how to understand this phenomenon.

Doing that, I came across an interesting result from the 1970s. It appears that an effect which at first sight seems completely unrelated is in fact very intimately related to confinement, at least in some theories. This is the Brout-Englert-Higgs effect. However, we seem to observe the particles responsible for and affected by the Higgs effect. And indeed, at that time, I was still thinking that the particles affected by the Brout-Englert-Higgs effect, especially the Higgs and the W and Z bosons, are just ordinary, observable particles. When one reads my first paper on the Higgs from this time, this is quite obvious. But then there was the result from the 1970s. It stated that, on a very formal level and in a very definite way, there should be no difference between confinement and the Brout-Englert-Higgs effect.

Now, the implications of that seriously sparked my interest. But I thought this would help me to understand confinement, as it was still very ingrained in me that confinement is a particular feature of the strong interactions. The mathematical connection I just took as a curiosity. And so I started to do extensive numerical simulations of the situation.

But while trying to do so, things which did not add up started to accumulate. This is probably most evident in a conference proceedings contribution where I tried to make sense of something which, with hindsight, could never be interpreted in the way I did there. I still tried to press the result into the scheme of thinking that the Higgs and the W/Z are physical particles which we observe in experiment, as this is the standard lore. But the data would not fit this picture, and the more and better data I gathered, the more conflicted the results became. At some point, it was clear that something was amiss.

At that point, I had two options. Either keep the concepts of confinement and the Brout-Englert-Higgs effect as they have been since the 1960s. Or take the data seriously, and assume that these conceptions were wrong. It probably signifies my difficulties that it took me more than a year to come to terms with the results. In the end, the decisive point was that, as a theoretician, I needed to take my theory seriously, no matter the results. There is no way around it. And if it gave a prediction which did not fit my view of the experiments, then necessarily either my view was incorrect or the theory was. The latter seemed more improbable than the former, as the theory fits experiment very well. So, finally, I found an explanation which was consistent. And this explanation accepted the curious mathematical statement from the 1970s that confinement and the Brout-Englert-Higgs effect are qualitatively the same, though not quantitatively. And thus the conclusion was that what we observe are not really the Higgs and the W/Z bosons, but rather some interesting composite objects, just like hadrons, which due to a quirk of the theory behave almost as if they were the elementary particles.

This was still a very challenging thought for me. After all, it was quite contradictory to the usual notions. Thus, it came as a very great relief when, during a trip a couple of months later, someone pointed me to a few papers from the early 1980s, almost forgotten by most, which gave the same answer for a completely different reason. Together with my own observations, this made everything click and start to fit together: the 1970s curiosity, the standard notions, my data. I published that in mid-2012, even though it still lacked some of the more systematic underpinnings. But it still required shifting my thinking from agreement to real understanding. That came in the years that followed.

The important click was to recognize that confinement and the Brout-Englert-Higgs effect are, just as pointed out mathematically in the 1970s, really just two faces of the same underlying phenomenon. On a very abstract level, essentially all the particles which make up the standard model are really just a means to an end. What we observe are objects which are described by them, but which are not the particles themselves. They emerge, just like hadrons emerge in the strong interactions, though with very different technical details. This is actually very deeply connected with the concept of gauge symmetry, but that becomes technical quickly. Of course, since this is fundamentally different from the usual way of thinking, it required confirmation. So we went ahead, made predictions which could distinguish between the standard way of thinking and this way of thinking, and tested them. And it came out as we predicted. So it seems we are on the right track. And all the details, all the ifs, hows, and whys, and all the technicalities and math, you can find in the review.
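To give at least a small taste of what 'technical' means here, and purely as an illustration in notation of my own choosing rather than a piece of the review: the same kind of expansion as in the Higgs case above can be written down in the vector channel. One collects the Higgs doublet into a 2x2 matrix X, builds a gauge-invariant composite vector operator from it, and expands X around its condensate value v (here \tau^a are the Pauli matrices, D_\mu the covariant derivative, g the gauge coupling, and W^a_\mu the elementary W field):

\[
O^a_\mu(x) = \mathrm{tr}\!\left[\tau^a\,X^\dagger(x)\,D_\mu X(x)\right], \qquad X(x) = v\,\mathbf{1} + \chi(x),
\]
\[
O^a_\mu(x) \propto g\,v^2\,W^a_\mu(x) + \text{terms containing the fluctuation } \chi.
\]

So the objects whose correlators one actually measures are genuine composites, but their leading behaviour is dictated by the elementary W/Z fields. That is the quirk of the theory which makes the usual picture work so well in the standard model.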

To now come full circle to the starting point: what happened in my mind during this decade is that the way I think about the physical theory I try to describe, the standard model, changed. In the beginning I was thinking in terms of particles and their interactions. Now, very much motivated by gauge symmetry, and, not incidentally, by its deeper conceptual challenges, I think differently. I no longer think of the elementary particles as entities in themselves, but rather as auxiliary building blocks of the actually experimentally accessible quantities. The standard 'small-ball' analogy has fully gone away, and in its place there formed, well, it is hard to say, a new class of entities, which does not necessarily have any analogy. Perhaps the best analogy is that of... no, I really do not know how to phrase it. Perhaps at a later time I will come across something. Right now, it is more math than words.

This also transformed the way I think about the original problem, confinement. I am curious where this, and all the rest, will lead. For now, the next step will be to move on from simulations and see whether we can find some way to actually test this in experiment. We have some ideas, but in the end it may be that present experiments will not be sensitive enough. Stay tuned.