Tuesday, August 11, 2020

Making big plans

Occasionally, you have an idea, and you can do the required research within a couple of weeks. But this is the rare exception. Most research requires months, and often years, to complete. In particle physics, with its huge experiments running for decades, people are probably even more aware of this than in many other fields. This requires plans. A very recent example of such a plan is the European Strategy for Particle Physics (Update), in which all of Europe came together to make a plan. I have contributed to this by coordinating the theory input for the national Austrian roadmap. It is a huge effort to get everyone to agree on what to do next - and what to do in the next half-century. Because this is how long you have to plan in advance for the big experiments.

Aside from these big plans, there are also smaller ones. Even for me as a theoretician. Occasionally, I have to sit down and formulate a research plan for a couple of years into the future. The reason is often that I am writing a so-called grant proposal to get a considerable amount of money to hire postdocs and PhD students. Such a large proposal requires you to formulate what you want to do with all these people, usually for about five years. Last year, we already got one, for dark matter.

This year, I am writing another one. Why again, if we just got one? Well, on the one hand, each would roughly take up half my time. So, I can manage both, and thereby do more. But putting this up front is cheating. The main reason is that it is unlikely I will get it on the first attempt. As there are currently many more people wanting to do particle physics than there are resources allocated for this purpose at the national and international level, these resources need to be distributed. Thus, you write a proposal to get some of them. Then some panel judges the submitted proposals and decides who will get resources. And thus where efforts in particle physics will be concentrated. Usually, there are many more proposals the panel would like to fund than there are resources available, so small points tip the scales toward one proposal or another, and the rest are rejected. One can then try again. On average, less than one in five proposals is successful. Thus, you often need to try again, with an improved proposal. And so I am already submitting another one.
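
The "one in five" odds can be turned into a rough back-of-the-envelope calculation. This is purely illustrative arithmetic, assuming each submission succeeds independently with probability 0.2:

```python
# Purely illustrative: assume each proposal succeeds independently
# with probability p = 0.2 (the "one in five" quoted above).
p = 0.2

# Expected number of attempts until the first success (geometric distribution).
expected_attempts = 1 / p  # 5.0

# Chance of at least one success within n submissions: 1 - (1 - p)**n.
for n in (1, 2, 3, 5):
    print(n, round(1 - (1 - p) ** n, 3))
```

Even after five submissions, the chance of at least one success is only about two in three, which is why resubmitting an improved proposal is simply part of the game.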

Coming back to the original topic: For such a proposal I need to make a five-year plan. Of course, it's research. Nobody can guarantee me that I (or, more likely, someone else) will not discover something which requires a fundamental change of plans. This is always allowed. But you are still required to make a plan for what you want to do if nothing unexpected happens. Usually, in my personal experience, about half of what is planned gets done, and the rest of the resources is spent on unexpected stuff. Which is fine as well.

Still, you need to make a plan for the case that everything happens as you would currently expect. And that is what I did.

The first thing you need to decide is which part of your research you would like to base it on. If you have been reading my blog for a while, you may have seen that I actually work on quite a lot of different topics, ranging from neutron stars to quantum gravity. But not all of this research is something I would like to extend at this level. The neutron star physics is something I currently do not work too much on. It is very interesting. But I would need to focus much more effort on it, and would need to concentrate mainly on technical details. That is not what I currently want. The quantum gravity part is very exciting, and we are quickly developing new ideas. There is much more to come. But currently it is at too exploratory a stage for me to be able to formulate a large-scale five-year program. It will have to simmer a little longer before it warrants this kind of attention.

So, I am down to my Higgs physics and beyond-the-standard-model research. For the latter, we currently have enough people working on it. Also, it is a bit more speculative, as we have not yet seen anything new in experiments. It is thus somewhat harder to identify where to concentrate one's efforts. The combination of our current research and what the next few years of experiments, especially LHC Run 3, will bring will make this clearer.

So I am concentrating this time on our attempts to find some new, subtle effects from theory in experiments: that there is an additional Higgs contribution inside the proton.

So far, what we did was make a good guess and look at whether experiment told us we were right. Iterating this would be a time-honored approach to identifying a new effect. But for this plan, I wanted to be more ambitious. I wanted to have an actual prediction rather than just a guess to iterate on. This is very demanding. As a suitable tool, I chose simulations. While I will not be able to really simulate an actual proton and its Higgs content, the effort made possible by such a big grant should be enough to get a decent proxy for it. Something which is close enough to the real thing that I can move from a guess to something which only requires a few more numbers, which I can get from experiments. That would be a huge success. We then use slightly different methods to fix the numbers.

But this is not easy. Based on what we have learned so far, this is a big endeavour. At least for a theoretician. I estimated that I will need about four people with a PhD, plus myself, and five more doing a PhD to get there. Not to mention that many master's and bachelor's students will be able to work on this as well. This also means that several of the PhD students will work on this project but complete their PhD on only a part of it, and be done before the whole project is. This required me to break the project down into smaller workpackages, 17 in total. Each of them is a milestone in itself, and provides intermediate (and eventually final) results. Each requires several of the people, and each takes at least half a year, some even a year. I needed to make a plan for how they intersect and depend on each other. If you are interested in how such a thing looks in the end (it has a lot of tech babble in it), contact me. But it is actually not that different from any other large-scale project, even in industry, like building a house. Thus, you also need some project-management skills to do research. Even as a theoretician.
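
Ordering workpackages by their dependencies, as described above, is ordinary project management and can be sketched in a few lines. The workpackage names and dependencies below are invented stand-ins, not the actual 17 of the proposal:

```python
# Toy sketch of workpackage scheduling; the names and dependencies are
# invented stand-ins, not the actual 17 workpackages of the proposal.
from graphlib import TopologicalSorter

# Each workpackage maps to the set of workpackages it depends on.
dependencies = {
    "simulation-setup": set(),
    "proxy-model": {"simulation-setup"},
    "first-spectra": {"proxy-model"},
    "compare-to-experiment": {"first-spectra", "proxy-model"},
}

# static_order() yields the workpackages so that every dependency
# comes before the packages that need it.
plan = list(TopologicalSorter(dependencies).static_order())
print(plan)
```

For a real proposal one would also attach durations and people to each node, but the ordering constraint is the same.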

I am quite pleased with how it turned out in the end. It really has a good flow, and a succession of reasonable and manageable steps. In the end, it holds the promise of a guaranteed discovery - i.e. we will see a new physics effect as long as we just keep on with the experiments. Likely by the end of the runtime of the LHC in about 15 years. Or at the latest with the next generation of machines, which are part of the Strategy mentioned in the beginning. With this, I come full circle: My small research project ties into the big ones. And together, we push the boundaries of human knowledge just a bit further.

Monday, July 27, 2020

What happens if gluons meet?

We have published a new paper on how gluons interact; their interactions are described by the strong force. In fact, on how exactly one gluon interacts by being absorbed or emitted by another one. There can also be interactions involving more of them. These are much more complicated to determine, and so we concentrate on this simplest one.

You may ask yourself how we can not yet know this and still do stuff like calculate the mass of a hadron. And not even bother to go beyond this simplest process. Because for the proton we need to know what gluons do, right? Well, not exactly. When we want to calculate the properties of a proton, we only need to know how they interact in a particular way of averaging. We do not need to resolve the full details. But if we really want to understand how they interact in detail, this is not enough. And this is crucial if we want to be able to build up not only the proton, but any particle or thing we want to measure. Being able to do one particular averaging well enough is not sufficient to do all of them as well.

In fact, this way of gluons interacting is the simplest way they can interact. Because of this, we already know quite a bit about it, if the gluons are very energetic. But we know less about how they interact if they have little energy or travel over very long distances. And there a surprise arose some years back. It was raised in a much older work by myself and other people. It indicated that the gluons undergo a drastic change when they start to traverse distances of the order of the size of a proton or even further (inside bigger hadrons, because of confinement). It appears that at distances of the order of a proton diameter they stop interacting. But the interaction becomes much stronger again at even longer distances. This is, of course, a very interesting insight into what happens, in a sense, at the boundary of a proton.

We used simulations for this back then. But we were very limited at that time, because of the available computing power. This was aggravated because at that time I was working as a postdoc in Brazil. Which, as a disadvantaged country, does have bright minds, but far fewer resources than I have nowadays in Austria. At any rate, the result nonetheless got people excited, and there have been a lot of follow-up works since then. Still, while most results supported the indications, it has not yet been possible to give a fully satisfactory answer.

In our latest work, we picked up the idea of looking at the behavior in a world with one direction less. This saves a lot of computing time. But we did not yet have a final answer there either. This we have now provided. There is a clear answer, confirming the behavior described above: first getting weaker, until the interaction vanishes at roughly a (flat) proton diameter, and then quickly becoming much stronger.
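
Why dropping one direction saves so much can be seen from a rough count of lattice points. The lattice size below is an illustrative choice, not the one used in the paper:

```python
# Rough cost illustration, not the actual simulation: the number of
# lattice points grows as L**d, so one dimension less divides the
# count (and thus roughly the cost) by a factor of L.
L = 64  # points per direction; an illustrative choice, not the paper's value

sites_4d = L ** 4  # our world: three space directions plus time
sites_3d = L ** 3  # the reduced world with one direction less

print(sites_4d, sites_3d, sites_4d // sites_3d)
```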

Still, doing the same in our world was too expensive. But we used a trick. Having the results from fewer dimensions, we knew what to anticipate. So we used this information to test our world for consistency. And this checked out surprisingly well. In fact, we could even predict how much more computing time would be needed for a final confirmation in our world as well. It could be done in the next few years. So hang around just a little longer for the final answer.

And, perhaps, we can then also do more complicated interactions. But this is a really tedious business. So you need patience and a long-term perspective.

Tuesday, April 21, 2020

A more complicated photon

The photon - the particle which makes up light - is probably one of the best known elementary particles. Nonetheless, everything can be made more involved. Thus, we studied a more complicated version of it in our most recent paper.

"Why in the world should we do this?" is a valid question at this point. That proceeded in multiple stages. I have written quite some time ago that for a particle physicist it is baffling that chemistry works. Chemistry works, among other things, because the electric charge of nuclei and electrons are perfectly balanced. Well, as perfect as we can measure, anyhow. In the standard model of particle physics there is no reason why this should be the case. However, the standard model is mathematical consistent only if this is the case. In fact, only if the balance is really perfect. Mathematical consistency is not a sufficient argument why a theory needs to be correct. Experiment is. So people have investigated this baffling fact since decades. In this process, the idea came up that there is a reason for this. And that reason would be that the standard model is only a facet of an underlying theory. This underlying theory enforces the equality of the electric charges by combining the weak, strong, and electromagnetic forces into one force. Such theories are called grand-unified theories, or short GUTs.

Such GUTs use a single gauge theory to combine all these forces. This is only possible with a certain kind of gauge theory, which is fundamentally different from the one we use for electromagnetism alone. It is more similar to the ones of the strong and weak force. We have investigated this type of theory for a long time. And one central insight is that in such theories none of the elementary particles can be observed individually. Only so-called bound states, which are made from two or more elementary particles, can be. That is very different from the ordinary photon of electromagnetism, which is, essentially, elementary.

The central question was therefore whether any such bound state could have the properties of the photon which we know from experiment. Otherwise a GUT would (very likely) not be a possible candidate to explain the balance of electric charges in chemistry. The photon has three important features. It does not itself carry electric charge. It is massless. And it has one unit of so-called spin.

Thus we needed to build a bound state in a GUT with these properties. The spin and the absence of charge are actually quite simple, and you get them almost for free in any GUT. It is really the fact that it should not have mass which makes it so complicated. It is even more complicated to verify that it has no mass.

We had some ideas from our previous work, using pen-and-paper calculations, of how this could work. There had also been some numerical simulations looking into similar questions in the early 1980s, though they were, given the resources back then, very exploratory. So we set up our own, modern-day numerical simulations. However, it is not yet possible to simulate a full, realistic GUT. For this, all the computing power on earth would not suffice if we wanted to be done within a lifetime. So we used the simplest possible theory which had all the features relevant to a true GUT. This is an often-employed trick in physics. One reduces a problem to the absolutely essential features, throwing away the rest, which has no or little impact on the particular question at hand. And thereby one gets a manageable problem.

So we did. And due to some ingenious ideas of my collaborators, especially my PhD student Vincenzo Afferrante, we were able to perform the simulations. There was a lot of frustrating work in the first few months, actually. But we persevered. And we were rewarded. In the end, we got our massless photon in exactly the way we had hoped! We thus demonstrated that such a mechanism is possible. We got a massless photon made up out of elementary particles! A huge success for the whole setup. In addition, the things which make up the photon are (partly) very massive. That a bound state can be lighter than its constituents is an amazing consequence of special relativity. For us, this is an added bonus. Because you cannot see that a particle is made up of other particles if you do not have enough energy available to create the constituents. Again, this comes from the theory of relativity. In this scenario, one of the constituents is indeed so heavy that we would not yet be able to produce it in experiments. Hence, with our current experiments, we would not yet detect that the photon is made up of other particles. And this is indeed what we observe. So everything is consistent. Very reassuring. Unfortunately, it is so heavy that none of the currently planned experiments would be able to do so either. Hence, we will not be able to test this idea directly in experiments. We will need to resort to indirect evidence.

Of course, I gloss over a lot of details and technicalities here, which took most of our time. Describing them would fill multiple entries.

Now, the only thing we need to do is figure out whether anything we neglected could interfere. None of it will at a qualitative level. But, of course, we have very good experiments. And thus, to make the whole idea a suitable GUT to describe nature, we also need to get it quantitatively correct. But this will be a huge step. Therefore we broke it down into small steps. We will do them one by one. Our next step is to get the electron right. Let's see if this also works out.

Tuesday, February 18, 2020

What is a proton made of?

We have published a new paper, which has quite a bold topic: That a proton has a bit more structure than what you usually hear about.

Usually, you hear that a proton is made up of three quarks, two up quarks and one down quark, which are termed valence quarks. Valence particles provide the proton with its characteristic properties, like its electric charge and spin. In addition, every other particle can also appear inside the proton, as a so-called sea particle. But these are quantum fluctuations, which are only very short-lived. Their existence has been tested in experiments for gluons, strange quarks, charm quarks, and bottom quarks, as well as photons. We understand this relatively well. Their contribution gets smaller the larger their mass is. So what do we want to add?

Those who have been reading this blog for longer have already seen that one of the central topics of our research is the weak interactions and the Higgs. Especially, we figured out that this part of the standard model of particle physics is more involved than is usually assumed. Most importantly, mathematical consistency requires that most particles which we usually call elementary are really more involved bound states, i.e. made up of multiple particles. Such bound states are very different from elementary particles. E.g., they should have a size. And, in principle, this should show up in experiments.

Of course, mathematical consistency is not sufficient for nature to behave in a certain way. Though it is nice if it does. Therefore 'should' is not sufficient. If we are right, it *must* show up in experiments. Unfortunately, as all of this is associated with the Higgs, which is very heavy, this requires a lot of energy. Since there is currently only one powerful enough experiment available, the LHC at CERN, we need to figure out how to test our ideas with this one. Which, unfortunately, is not ideally suited. But you have to make do with what you have.

Already two years back we figured out that all of the mathematical consistency arguments had a surprising impact on our proton. You see, the proton is one of two very similar particles, the proton and the neutron, the so-called nucleons. They make up all atomic nuclei. The differences between proton and neutron are threefold. Two are their mass and electric charge. These are explained by the valence quarks. The third is the protonness and neutronness - a feature which is called flavor (or sometimes isospin). The aforementioned valence quarks can actually not really be responsible for this quantum number. The argument is very technical, and has a lot to do with gauge symmetry, and especially its more involved aspects. Those who are interested in all the technical details can find it in my review article. Ultimately, it boils down to the fact that this flavor cannot come from the valence quarks. Something else needs to provide it.

This something else should not upset those things which are explained by the valence quarks, the mass and spin. Thus, it needs to be spinless and chargeless. The Higgs is the only particle in the standard model which fits the bill. And it indeed carries something which can provide the difference between protons and neutrons. In technical terms, it is called the custodial quantum number. All that matters here is that this quantity can have two different values, and one can be associated with being a proton and the other with being a neutron, in a mathematically completely consistent way, if the Higgs is another valence particle.

As the Higgs is much heavier than the proton, the immediate question is: how can that be? But here the combination of quantum mechanics and relativity comes to the rescue. It allows a bound state to be lighter (or heavier) than the sum of the masses of its constituents. A hydrogen atom, for example, is lighter than the combined mass of its constituent proton and electron. But only by an extremely small amount. In the proton, this now works in the same way, but hugely amplified. But we have examples that this is actually possible. So this is fine.
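
To put a number on the hydrogen example, here is the arithmetic with textbook values (13.6 eV of binding energy against roughly 939 MeV of constituent mass):

```python
# Illustrative arithmetic with textbook values: hydrogen is lighter than
# its constituents by the 13.6 eV binding energy.
m_proton = 938.272e6    # eV/c^2
m_electron = 0.511e6    # eV/c^2
binding_energy = 13.6   # eV

m_hydrogen = m_proton + m_electron - binding_energy
deficit_fraction = binding_energy / (m_proton + m_electron)
print(deficit_fraction)  # roughly 1.4e-8, about one part in a hundred million
```

For a proton with a valence Higgs, the same mechanism would have to remove not a few eV but on the order of a hundred GeV, which is what "hugely amplified" means here.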

When we smash two protons together, as at the LHC, we actually get their constituents to interact with each other. And with the additional Higgs content, these Higgs can interact as well. However, this will be suppressed by the large mass of the Higgs, as in this case the interaction behaves as if the Higgs were alone. And then it is heavy. Thus, even at the LHC this will be rare.

What we did in the paper was to estimate how rare, and which processes could be most sensitive to this. We find that the LHC so far is not sensitive to the valence Higgs beyond uncertainties, even if the effect is really there. But we found that with the production of top quarks at the LHC we should have a sensitive handle for looking for the valence Higgs.

This is really just the first step in hunting the valence Higgs. And it may well be that we need a more powerful experiment in the future to really see the effect. Not to mention that our estimates are very crude, and a lot of the calculations still need to be done much better. But it is the first time that the effect of the valence Higgs, as required by the mathematical consistency of the standard model, has been tested experimentally. And this is a big step into a completely unknown domain. Who knows what we will find along the way.

Thursday, January 9, 2020

A personal perspective on how capitalism hurts science

In a number of my recent blog entries, and also occasionally on twitter, I have made statements about how bad our current late-stage capitalism is for science. It is time that I follow up with a more detailed blog entry on this.

Before delving into it, I should discuss the reasons why I hesitate to write on this subject. Those who have read my scientific blog entries may have noticed that I work on many ideas which are unconventional. While I do my best to back them up with many different types of calculations, I have not (yet) been able to get these issues across as important. Thus, despite the fact that quite a number of people in the past worked on these subjects, and my own results are in line with theirs, there are very few contemporary people doing so. It is quite easy to be frustrated about this, especially since I think there are important things which need to be taken into consideration. Because they may change a lot of particle physics on a very fundamental level.

If you are in such a situation, it is very tempting to seek the blame for your continued failure to make your stuff popular in some external cause. Hence, I am very much second-guessing myself about whether part of what I write here is affected by this. Probably part of it is. If I were the only one having these thoughts, it surely would be the case. However, over recent years I have seen more and more studies being published or popping up on the arXiv which agree with my own perception. Hence, I am more and more convinced that a larger issue is at work here. And whether I am affected by it or not is not easy to say. Hence, I will try to avoid making any personal connections here, and just tell how from my perspective I see the results of these studies realized. Most of the studies I have linked on twitter over time.

The gist of many of these studies is twofold. The way research results are published and perceived is not necessarily correlated with their relevance. In fact, there appears to be an anti-correlation between long-term relevance (measured by number of citations) and the impact factor of the journal in which the research has been published. Meaning more prestigious journals tend not to accept research where the short-term relevance is not obvious. On the other hand, in funding there is also a strong tendency that those who have get more, and bold claims count for more than well-founded statements or even checks.

While these issues are troublesome on their own, it is the way they resemble other elements of public life which is alarming. To say the least. And which is typical for late-stage capitalism. This is the fact that those who have get more. That those who have, or have the favor of someone who has, can do essentially anything and get rewarded. While those who do not have a hard time getting anything. This is amplified by gate-keeping and a lack of diversity in academia, which is far from resolved. Of course, this is also a problem appearing in society in general.

In my personal experience, this manifests itself in a very strong tendency to create hype. If the result is only promising enough, any assumption, even if it is just wishful thinking, becomes acceptable. Theoreticians seem to be much more prone to this than experimentalists. The reason is simple. As long as no one disproves your statement, you will get attention. And if somebody who has picks it up and promotes it (or is actually the origin), it will gain traction. If it fails eventually, you just cook up another thing, and so on. In particle physics, this is supported by our current lack of hard experimental evidence beyond the standard model. Thus it is easy to escape experimental falsification. Theoretical falsification is much more complicated. Because in sufficiently complicated theories, doing an exact falsification is technically hard. Even if there is a lot of evidence, it is always possible to find a loophole to not accept a falsification. And given the promises made, it is for most much better to just ignore any claim of invalidity. Especially since most of these assumptions often simplify, or even trivialize, calculations. Hence, it is possible to get results with little effort. And since they promise so much, it is easy to publish them or get funding for them.

This even happens in a less dramatic fashion quite often. Even without anything going wrong, any new field at first has a lot of simple problems. They can be done with little or moderate effort. Thus, the return on investment is large. Therefore many people flock to these new fields, to have a large output compared to the work invested. Thereby, they gain resources. As soon as the inevitable complications set in, most of them leave the field and move on to the next field of the same type. However, they take the resources with them, leaving those trying to solve the hard problems with little. While resources are limited in any case and it is necessary to focus effort, this should be decided based on the relevance of the question, rather than on how easy it is to get results.

All of this mirrors trends in society. As long as one can get much without solving an actual problem, everyone goes for it. And if you can gain an advantage by making overly strong claims, all the better. We see how this damages our society, from the climate crisis to the rise of authoritarianism. All of it follows this pattern. You claim that there is an easy solution by which you can get profit and avoid investing in solving the cause of the climate crisis. See greenwashing. Or you claim social problems have an easy solution, because others are at fault, so you just need to get rid of them. Yielding the rise of right-wing extremism and authoritarian systems. All of this is fueled by capitalism, which puts profits before solutions.

And these effects find their mirror in science, as science is not set apart from society. Thus, capitalistic thinking - gathering resources, in science renown and funding - becomes more important than the actual solution of problems.

How can this be avoided? Well, probably the same way as in society at large. That which damages the scientific process needs to be gotten rid of. A scientific system which focuses on what people did instead of who did it, and a distribution of resources based on the relevance of the problem rather than on renown or promises, would probably go a long way. This has been recognized by quite a few people. And there are tentative steps ongoing. Like banishing renown as a measure of success. Putting the actual work at the center, rather than how and where it is published. But it is a slow process, and one which can again be misused. Probably, only if we as a society change fundamentally will science get closer to its ideals.

Tuesday, November 19, 2019

Going abroad: Yes or no?

One topic which reemerges in many discussions online and offline is that many scientists, especially in (particle) physics, have to move around several times as postdocs. For me, this meant, after the PhD in Germany, first going to Brazil, then back to Germany, then to Slovakia, Austria, Germany, and finally back to Austria.

The discussion usually revolves around whether this is good or bad, and whether the price tag in terms of private life associated with so many moves is worth what one gains from it. There are three aspects I would like to address, especially from personal experience. One is the cost to one's social net. The other is the personal and professional gain. And the last is suffering because of a lack of predictability. Because you usually do not know where you will go in one or two years.

Let me start with the most obvious price tag: social contacts. And especially partnership. This is the most individual point. Here, it is really up to you and your family members how all of you think about it. But this needs to be addressed well before you start with such moves. How many are acceptable? How long may it take? Which countries are acceptable? And so on. That has to be agreed upon by everyone involved, and that is really different for everyone.

More general is the question of the wider social net. Despite modern communication methods, a social net will tear if someone moves away. Without direct contact, it is hard for most people to keep in touch. Even with video communication, it is not easy to transfer everything. And not everyone is able to keep a connection in written form. Especially if it is not clear when, or even if, one will meet again in person. In addition, even when moving somewhere and building a new social net, this will tear again with the next move. And so one can easily leave behind several fragmented nets. It depends, of course, on how much you rely on your own social net, and what kind of people are in it. But to me, this was always the highest cost. Because building a new net takes time, and the old one is missed.

If the cost is so high, how could I even consider moving to be a good thing? Before I did it, I actually would have had nothing good to say about it, other than that our current scientific society requires it. And I will come back to this later. Already at the first place, my opinion changed. I had expected that just by working with others on a day-by-day basis, not so much would change in my own work. But the constant exposure to very different approaches to science, emphasizing very different aspects and questions, has fundamentally changed the way I think about my own research, and about how I should perform research. At the same time, the need to live in a very different society than the one I came from also taught me a lot about people, and about how to deal with life. In hindsight, I am very sure that I would have been both a lesser person and a lesser scientist if not for these other places I lived and worked at. Again, this is my very own experience, though I have heard similar stories from most people. Especially those people who went to a place which was welcoming to them, if not always simple to deal with.

So, I now have both a strong argument against moving and one in favor of it. And really, I could not decide for myself which is the stronger point. I am pretty sure that everyone has an opinion about this, but it is probably very individual. Still, in my personal experience most people who have moved to different places are better scientists, and also often show better abilities in dealing with the not hardcore-technical part of science.

While there may be no optimal choice for everyone on the previous issue, there is certainly one part in which we can make the whole story better for everyone: predictability. Right now, you usually move to a place, and while there, you somehow need to get a new position somewhere else for the time afterwards, usually on a two-year or three-year basis. Until you hit the jackpot and get a permanent position. Which, depending on the country, can take a decade or so. Not knowing where things go next, and for how long, is in my experience what makes everything, especially the social nets, much, much harder. On top of this, older scientists especially insist that certain places are important in themselves, and that you have to go there to be a good scientist. This latter point is very annoying, because it usually boils down to where the money is, and where the best people at marketing are, and this creates a self-sustaining cycle. But this is an aspect of late-stage capitalistic science I will write about some other time.

Thus, in my opinion, the best compromise between the drawbacks and the advantages of moving could be achieved by making it predictable. Say, you have six years to move around, including, say, three moves, and there is an assessment every two years; if all of them are sufficiently positive, you get a permanent position at the place you came from. This would make it possible to plan your life. Also, knowing that the stress on the social net is only temporary may more often than not prevent it from tearing.

Sure, this will still not be a workable solution for everyone. There are too many individual issues which cannot be taken into account with a one-size-fits-all solution. Thus, it remains necessary to help individual researchers work around their individual situations.

Still, in the end, this means that I arguably think moving around, at least for a while, is important. It is just not supported in a good way right now. However, it will likely be impossible to generalize my personal experience. There are far too many soft factors involved. And, of course, I have also encountered the occasional exception.

The take-home message for me from these considerations is that I will put effort into making going abroad more sustainable, but will not argue against it. Also, I will counsel everyone about all the aspects one has to think about, the deliberate obstructions one currently faces, and the impact it has beyond work. Then everyone can at least make an informed decision, though unfortunately not yet a free one. I hope that I can contribute to changing this.

Wednesday, October 30, 2019

About toxic working culture in science

Recently, I read this excellent article on mental health in academia. It emphasizes the consequences of a toxic working culture in academia for mental health, with a focus on PhD students. I would like to offer a few reflections on my own experience with this topic, both as a scientist and as a professor.

The first experience is that I very often encounter the phrase 'we should be grateful that we have the opportunity to do what we like/love' used to justify bad working conditions. While I am certainly happy that I do something I like, this should never, under any circumstances, be used as such a justification. It frames us as people who have merely been granted something, which can equally well be taken away. Its meaning can easily shift to 'we should be thankful that it is not worse', and thus justify the status quo out of fear of the consequences of trying to improve the situation. Ultimately, it paints a picture of willful suffering just to be able to do something which is important to oneself.

This is then used to justify almost anything. Even more, it is seen as an act of individual heroism to still do science in the face of such conditions. I have often witnessed scenes where people tried to outdo each other by the sheer number of hours per week they worked, or by how many days of holiday they did not use. Of course, this is fueled by a perpetual overcommitment, often necessitated by all the various things we have to do. As scientists, we are expected not only to do science, but also teaching, outreach, presentation in the form of talks and papers, fact-checking as reviewers, marketing in the form of research grants, and administrative duties both for research grants and within the university and for national and international infrastructure in many commissions, i.e. management. While at the same time we are expected to explore with creativity and solve the deepest problems of research. On top of this often comes our own delusion of grandeur: that we know everything better and cannot delegate anything, because we are the only ones who can get it done right. Which is outright wrong.

Of course, this is driven by precarious working conditions until one reaches a permanent position, often over decades, at payment levels which are very low compared to research and development positions in industry. By making the resource of permanent employment scarce and competitive, essentially by turning science into another branch of capitalism, the same happens as everywhere else in capitalism: to ensure one's survival, one puts up with being slowly destroyed by the working conditions. This takes a toxic turn when people are accused of not having enough dedication if they do not overwork themselves. It continues at a reduced level once permanency is reached, by making the resources to do our work scarce and, again, competitive.

Given that research on work has established that peak effectiveness is reached at around thirty hours of work a week, this actually damages science. When we work much longer, we usually do not get much more done. And at some point, we get even less done, because we start to err too often. Of course, this is a distribution, and there are tails. But an average scientist is also an average human being. Scientists may still overpopulate the tail of this distribution, but then this is selected for by the working conditions, and by who can suffer them, not by the brilliance and creativity of the researcher.

Of course, it is easy to buy into the picture of the never-tiring scientist, working all the time to discover the greatest secrets. This is how we are often depicted in literature and film. Can you name any fictional scientist who saves the day while regularly working only forty hours a week? I cannot. And especially as a young person, it is all too easy to see oneself in pursuit of such a heroic idealization. By the time we get a permanent position, most of us just carry on like this, because it has become thoroughly internalized.

My own experience was, in the beginning, quite like this. I wanted to solve the scientific problem. I cannot remember ever reflecting on my working time, or even tracking it. It was certainly much more than I was paid for. And when, many years later, I started to change this, I had a very bad conscience when moving my actual working time towards the amount of time I was paid for. Even though I realized quite quickly that I still got essentially the same amount of work done, thus proving to myself that what I wrote above about peak effectiveness is true for me. However, I have been quite privileged in this development, because failing to get a permanent position would have been quite acceptable for me. And even now, I do not feel the urge to 'discover something really big' or 'get acknowledged by grants or prizes', the latter being, as I have come to recognize, just another tool to exploit scientists by letting them actively fight each other for scraps of resources.

Now, as a professor, I feel the obligation to get these points across to students. Which turns out to be very complicated. I hear my younger self echoed too often: 'I want to finish fast', or 'this is too important'. It is very hard to argue against this, because the counter-argument is self-care. And we have so often seen the trope of the scientist sacrificing themselves for the greater good. How can I be a good scientist (or even a good human being) if I do not put the greater good of science above petty personal necessities?

Well, the true answer is that a sane, well-cared-for scientist will get just as much done as an overworked one. And will do so for a much longer time. Not only bodily, by better avoiding problems like stress-induced cardiac arrest, but also mentally. Just as the article points out.

What do I do concretely? Besides trying to implement the points mentioned in the article, I do my best to reduce the capitalistic structures in science: by using my influence wherever possible to create easier career paths, and by generally attempting to espouse a cooperative rather than a competitive culture. I certainly fail far too often in this endeavour, because it means unlearning something I have been immersed in for far too long. But I listen to those doing research on work, on mental and bodily well-being, and to those I work with. Perhaps I can improve things at least a little bit.

Thursday, September 5, 2019

Reflection, self-criticism, and audacity as a scientist

Today, I want to write a bit about myself as a scientist, rather than about my research. It is about how I deal with our attitude towards being right.

I still do particle physics, and we are not done with it. Meaning, we have no full understanding. As we try to understand things better, we make progress, and we make both wrong assumptions and actual errors. The latter because we are human, after all. The former because we do not yet know better. Thus, we necessarily know that whatever we do will not be perfect. In fact, especially when we enter unexplored territory, what we do is more likely than not to not be the final answer. This has led to a quite defensive way of presenting results. In fact, many conclusions of papers read more like an enumeration of everything that could be wrong with what was written than of what has been learned. And because we are not in perfect control of what we are doing, anyone who is trying to twist things the way they like will find a way, thanks to all this cautious presentation. On the other hand, if we were not so defensive, and acted like we were right when we are not - well, this would also be held against us, right?

Thus, as a scientist, one is caught in an eternal limbo between actually believing one's own results and thinking that they can only be wrong. If you browse through scientists on, e.g., Twitter, you will see that this is not an easy state to endure. It is aggravated by a science system geared by neoliberalism towards competition, and by populist movements who need to discredit science to further their own ends, no matter the cost. To deal with both, we need to be audacious and make our claims boldly. At the same time, we know very well that any claim to be right is potentially wrong, thus feeding the perpetual cycle of self-doubt on an individual level. On a collective level, this means that science gravitates towards things which are simple and incremental, as there the chance of being wrong is smaller than when trying something more radical or new. Thus, this kind of pressure reduces science from revolutionary to evolutionary, with all the consequences. It also damns us to avoid drawing all the consequences of our results, because they could be wrong, couldn't they?

In the case of particle physics, this slows us down. One of the reasons, at least in my opinion, why there is no really big vision of how to push forward is exactly this fear of being wrong. We are at a time where we have too little evidence to guide evolutionary steps. But rather than making the bold step of just going exploring, we try to cover every possible evolutionary direction. Of course, one reason is that, being in a competitive system, we have no chance of being bold more than once. If we are wrong, this will probably create a dead stop for decades. In other fields of science, the consequences can be much more severe. E.g., in climate science, this may very well be the difference between the extinction of the human species and its survival.

How do I deal with this? Well, I have been far too privileged, and in addition was lucky a couple of times. As a consequence, I could weather the consequences of being a bit more revolutionary and a bit more audacious than most. However, I also see that, had I not been, I would probably have had an even easier career. But this does not remove my own doubts about my results. After all, what I do has far-reaching consequences. In fact, I am questioning much of the conventional wisdom in textbooks, and want to reinterpret the way the standard model (and what lies beyond it) describes the particles of the world we live in. Once in a while, when I realize what I am claiming, I get scared. Other times, I feel empowered by how things seem to fall into place, without any edges that do not fit. Thus, I live in my own cycle of doubt.

Is there anything we can do about the nagging self-doubt, the timidity, and the feeling of being an impostor? Probably not much as individuals, except for taking good care of oneself and working with people who have a positive attitude about our common work. Much of the problem is systemic. Some of it could be dealt with by taking the heat of competition out of science and adopting a cooperative model. This will only work out if there is more access to science positions, and more resources to do science. After all, there are right now far more people wanting a position as a scientist than there are positions available. No matter what we do, this always creates additional pressure. But even that could be reduced by having predictable career paths, more mentoring, easier transitions out of science, and much more feedback. This requires not only long-term commitments on behalf of research institutes, but also that scientists themselves acknowledge these problems. I am very happy to see that this consciousness grows, especially among younger people getting into science. Too many scientists I encounter blatantly deny that these problems exist.

However, in the end, these problems too are connected to societal issues at large. The current culture is extremely competitive, and more often than not rewards selfish behavior. Also, there is, both in science and in society, a strong tendency to give to those who already have. And such a society also shapes science. It will be necessary for society to reshape itself towards a more cooperative model to get a science which is much more powerful and forward-moving than what we have today. On the other hand, the existential crises of the world, like the climate crisis or the rise of fascism, are also facilitated by a competitive society, and could therefore likely be overcome by a more cooperative and equal society. Thus, dealing with the big problems will also help solve the problems of scientists today. I think this is worthwhile, and invite every fellow scientist, and indeed anyone, to join in.

Wednesday, August 7, 2019

Making connections

Over time, it has happened repeatedly that a solution from one area of physics could also be used in a quite different area, or at least inspired the solution there. Unfortunately, this does not always work. Quite often, once the finer points are reached, it turns out that something promising did not work in the end. Thus, it pays to always be careful with such a transfer, and never believe a hype. Still, in some cases it worked, and even led to brilliant triumphs. And so it is always worthwhile to try.

Such an attempt is precisely the content of my latest paper. In it, I try to transfer ideas from my research on electroweak physics and the Brout-Englert-Higgs effect to quantum gravity. Quantum gravity is first and foremost still an unsolved issue. We know that mathematical consistency demands some unification of quantum physics and gravity, and we expect that this will be a quantum theory of gravity, though we still lack any experimental evidence for this assumption. Still, I too make the assumption, for now, that quantum gravity exists.

Based on this assumption, I take a candidate for such a quantum gravity theory and ask what its observable consequences are. This is a question which has driven me in particle physics for a long time, and I think that by now I have an understanding of how it works there. But last year, I was challenged on whether these ideas can still be right if gravity enters the game. This new paper (https://arxiv.org/abs/1908.02140) is essentially my first step towards an answer. Much of this answer is still rough, and especially the mathematics will require much work. But at least it provides a first consistent picture. And, as advertised above, it draws from a different field.

The starting point is that the simplest version of quantum gravity currently considered is actually not that different from other theories in particle physics. It is a so-called gauge theory. As such, many of its fundamental objects, like the structure of space and time, are not really observable. Just like most of the elementary particles of the standard model, which is also a gauge theory. Thus, we cannot see them directly in an experiment. In the standard model case, it was possible to construct observable particles by combining the elementary ones. In a sense, the particles we observe are bound states of the elementary particles. However, in electroweak physics, one of the elementary particles in such a bound state totally dominates the rest, and so the whole object looks very similar to the elementary one, but not quite.

This works because the Brout-Englert-Higgs effect makes it possible. The reason is that there is a dominating kind of unobservable structure, the so-called Higgs condensate, which creates this effect. This is something coincidental: if the parameters of the standard model were different, it would not work. But, luckily, our standard model has just the right parameter values.
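The dominance of one component can be sketched in a few lines of standard field theory. This is a simplified illustration in my own notation, not taken from the paper, using a single complex scalar field rather than the full electroweak doublet:

```latex
% Split the Higgs field around the condensate value $v$,
%   \phi(x) = v + \eta(x),
% and expand a gauge-invariant bound-state operator:
\begin{align}
  O(x) &= \phi^\dagger(x)\,\phi(x) \nonumber\\
       &= v^2 + 2v\,\mathrm{Re}\,\eta(x) + \eta^\dagger(x)\,\eta(x).
\end{align}
% For a large condensate $v$, the correlator of the bound-state
% operator is dominated by the elementary-field correlator:
\begin{equation}
  \langle O(x)\,O(y)\rangle
  \approx \text{const.}
  + 4v^2\,\langle \mathrm{Re}\,\eta(x)\,\mathrm{Re}\,\eta(y)\rangle
  + \ldots
\end{equation}
```

The term linear in the elementary field carries the large prefactor $v$, which is why the bound state inherits, to leading order, the properties of the elementary particle it contains.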

Now, when looking at gravity around us, there is a very similar feature. While we have the powerful theory of general relativity, which describes how matter warps space, we rarely see this warping. Most of our universe behaves much more simply, because there is so little matter in it, and because the parameters of gravity are such that the warping is very, very small. Thus, we again have a dominating structure: a vacuum which is almost not warped.

Using this analogy and the properties of gauge theories, I figured out the following: we can use something like the Brout-Englert-Higgs effect in quantum gravity. All observable particles must still be some kind of bound states, but they may now also include gravitons, the elementary particles of quantum gravity. Just like in the standard model, these bound states are dominated by just one of their components. And if there is a standard model component, it is this one which dominates. Hence, the particles we see at the LHC will essentially look as if there were no gravity, which is very consistent with experiment. Detecting the deviations will be so hard, compared to detecting those coming from the standard model itself, that we can pretty much forget about it for earthbound experiments, at least for the next couple of decades.

However, there are now also some combinations of gravitons without any standard model particles involved. Such objects have long been speculated about, and are called geons, or gravity balls. In contrast to the standard model case, they are not classically stable, but they may be stabilized by quantum effects. The bound state structure strongly suggests that there is at least one stable one. Still, this is pure speculation at the moment. But if they exist, these objects could have dramatic consequences. E.g., they could be part of the dark matter we are searching for. Or they could make up black holes, very much like neutrons make up a neutron star. I have no idea whether any of these speculations could be true. But if there is only a tiny amount of truth in them, this could be spectacular.

Thus, some master students and I will set out to have a look at these ideas. To this end, we will need to do some hard calculations. And, eventually, the results should be tested against observation. These will come from the universe, and from astronomy - especially from the astronomy of black holes, where there have recently been many interesting and exciting developments, like observing two black holes merge, or the first direct image of a black hole (obviously just black inside a kind of halo). These are exciting times, and I am looking forward to seeing whether any of these ideas work out. Stay tuned!

Thursday, July 25, 2019

Talking about the same thing

In this blog entry, I will try to explain my most recent paper. The theme of the paper is rather simply put: you should not compare apples with oranges. The subtlety comes from knowing whether you have an apple or an orange in your hand. This is far less simple than it sounds.

The origin of the problem is once more gauge theories. In gauge theories, we have introduced additional degrees of freedom, and, in fact, we have a choice in how we do this. Of course, our final results will not depend on the choice. However, getting to the final result is not always easy. Thus, it would be good to ensure that the intermediate steps are right. But they depend on the choice. And then they are only comparable between two different calculations if the same choice is made in both.
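A concrete, standard illustration of such a choice (textbook electromagnetism, not specific to the paper): the gauge freedom must be fixed before intermediate quantities like the photon propagator are even defined, and the result depends on how it is fixed.

```latex
% The gauge transformation
\begin{equation}
  A_\mu(x) \;\to\; A_\mu(x) + \partial_\mu \lambda(x)
\end{equation}
% leaves all observables invariant. Fixing the gauge with a parameter
% $\xi$, e.g. by adding $-(\partial_\mu A^\mu)^2/(2\xi)$ to the
% Lagrangian, yields the gauge-dependent photon propagator
\begin{equation}
  D_{\mu\nu}(p) \;=\; \frac{-i}{p^2 + i\epsilon}
  \left( g_{\mu\nu} - (1-\xi)\,\frac{p_\mu p_\nu}{p^2} \right).
\end{equation}
```

Intermediate results computed at $\xi=0$ (Landau gauge) and $\xi=1$ (Feynman gauge) differ, even though all final observables agree. Comparing them directly would indeed be comparing apples with oranges.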

Now, it seems simple at first to make the same choice. Ultimately, it is our choice, right? But in such theories this is actually not that easy, due to their mathematical complexity. Thus, rather than making the choice explicit, the choice is made implicitly. The way this is done is, again for technical reasons, different for different methods. And because of all these technicalities, and the fact that we need to make approximations, figuring out whether the implicit conditions yield the same explicit choice is difficult. This is especially important, as the choice modifies the equations describing our auxiliary quantities.

In the paper, I test this. If everything is consistent between two particular methods, then the solutions obtained with one method should solve the equations obtained with the other. Seems a simple enough idea. There had been various arguments in the past suggesting that this should be the case. But over the last couple of years, more and more pieces of evidence led me to think that something was amiss. So I performed this test, rather than relying on the arguments.

And indeed, what I find in the article is that the solution of one method does not solve the equation of the other method. The way this happens strongly suggests that the implicit choices made are not equivalent. Hence, the intermediate results are different. This does not mean that they are wrong; they are just not comparable. Either method can still yield internally consistent results. But since neither method is exact, a comparison between both would help reassure us that the approximations made make sense. And this is now hindered.

So, what to do now? We would very much like to be able to compare different methods at the level of the auxiliary quantities, so this needs to be fixed. That can only be achieved if the same choice is made in all methods. The tough question is in which method we should work on the choice. Should we try to make the same choice as in one fixed method? Should we try to find a new choice for all methods? This is tough, because everything is so implicit, and affected by approximations.

At the moment, I think the best way is to get one of the existing choices to work in all methods. Creating an entirely new one for all methods appears to me to be far too much additional work, and I admittedly have no idea what a better starting point would be than the existing ones. But in which method should we start trying to alter the choice? In neither does this seem simple; in both cases, there are fundamental obstructions which need to be resolved. I would therefore currently like to start poking around in both methods, hoping that there may be a point in between where the choices of the methods could meet, which would be easier than pushing either one all the way. I have a few ideas, but they will take time. Probably also a lot more people than just me.

It also amazes me that the theory in which this happens is nothing new. Far from it: it is more than half a century old, older than I am. And it is not something obscure, but rather part of the standard model of particle physics, a very essential element in our description of nature. It never ceases to baffle me how little we still know about it, and how unbelievably complex it is at a technical level.

Wednesday, June 19, 2019

Creativity in physics

One of the most widespread misconceptions about physics, and the other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic. While art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to fit together perfectly, and there is no room for slips. Logical deduction is central to what we do. But this is not all. In fact, these parts are more like the handiwork. Just as a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, we need to be able to calculate, build, check, and infer. But just as the act of drawing a line or writing a sentence is not yet what we recognize as art, so solving an equation is not yet physics.

We are able to solve an equation because we learned how during our studies. We learned what was known before. This is our tool set, like people reading books before starting to write one. But when we actually do research, we face the fact that nobody knows what is going on. In fact, quite often we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, an inspiration, on how to start. And then, just like the writer writes page after page, we add various pieces to this idea, until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear. Then we have to calculate, and so on, just like the writer has to revise the draft until it becomes a book.

You may now wonder whether this creativity is limited to the great minds, and to the inception of whole new steps in physics. No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, occasionally somebody has the right idea. But this is usually just the one idea which turned out in the end to be correct; all the other good ideas, which other people had, turned out to be incorrect, and you never hear of them because of this. On the other hand, every new idea, as said above, eventually requires everything that was done before. And more than that: creativity is rarely born of being a hermit. It is often sparked by others. Talking to each other, throwing fragments of ideas at each other, and mulling over consequences together is what creates the soil in which creativity sprouts. All those with whom you have interacted have contributed to the birth of your idea.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky or curiosity-driven research. It is not a coincidence that the freedom to do whatever kind of research you think is important is an almost sacred privilege of employed scientists. Or should be. Fortunately, I am privileged enough, especially in the European Union, to have this privilege. In other places, you are often shackled by all kinds of external influences, down to political pressure to only do politically acceptable research. And this can never spark the creativity you need to create something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything which is not already established as acceptable becomes unthinkable. This may not always be as obvious as outright political pressure. But if being hired, or keeping your job, starts to depend on it, you start going for acceptable research, because failure with something new would cost you dearly. And with the competitive funding currently prevalent, particularly for non-permanently employed people, this is becoming a serious obstruction.

As a consequence, real breakthrough research can neither be planned, nor can you do it on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you actually never really fail, because you always learn how something does not work. That is one of the reasons why I strongly want failures to become publicly available as well. They are as important to progress as successes, by reducing the possibilities. Not to mention the amount of researchers' lifetime wasted because they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way. Not intentionally, but because it was necessary for following one's creativity. That is actually where most technological leaps came from. So, real progress in physics, in the end, is made of about a third craftsmanship, a third communication, and a third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out just following my ideas: either this would keep me in business, or I would have to find a non-science job. But this only worked because of my personal background. I could have afforded a couple of months with no income while finding a job, and had an education which almost guarantees me a decent job eventually. And an education of this quality I could only afford because of my personal background. Not to mention that, as a white male, I faced no systemic barriers. So, yes, privilege plays a major role.

The other part was that I learned, more and more, that it is not effort that counts, but effect. That took me years. But eventually I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. Not working overtime, and enjoying free days and holidays, is for me as important to the creative process as any other condition. Not to mention that, well rested, I do all the non-creative chores much more efficiently, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I recently had the opportunity, during a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to acquire a new field and already have new ideas in it. It is the possibility to sit down with people for some hours, especially in nicer and more relaxing surroundings than an office, and just discuss ideas. That is also what I like most about conferences. And it is one of the reasons I think conferences will always be necessary, even though we need to make travelling there and back ecologically much more viable, and restrict ourselves to sufficiently close ones until that is possible.

Sitting down over a good cup of coffee or a nice meal, and just discussing, really jump-starts my creativity. Even sitting with a cup of good coffee in a nice cafe somewhere and just thinking does wonders for me in solving problems. And with that, it seems it is not so different for me than for artists, after all.

Tuesday, May 14, 2019

Acquiring a new field

I have recently started to look into a new field: quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, progress is slow. You have to learn a lot of basics, get an overview of what has been done and what is still open, and get used to a different jargon. Thus, one rarely takes such a step.

I have already written one entry in the past about how I came to do Higgs physics. That entry was written after the fact: I was looking back, and discussed my motivation as I saw it at that time. It will be interesting to look back at this entry in a few years, and judge what is left of my original motivation, and how I feel about it knowing what happened since then. But for now, I only know the present. So, let's get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity does not come from the promise of such a possibility. It has a quite different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs, and, by now, in fact at the whole standard model. In what we get, we see a clear distinction between two concepts: so-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exist, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, which gauge symmetry can be used to facilitate things differs from theory to theory, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity, space itself is different. In particular, you can deform space. Now, the original distinction between global symmetries and gauge symmetries is their relation to space: a global symmetry is something which is the same from point to point, while a gauge symmetry allows changes from point to point. Loosely speaking, of course.
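To make the "loosely speaking" a bit more concrete, here is the standard textbook illustration of the distinction (a generic sketch, not something specific to our own work): take a matter field ψ with a U(1) phase symmetry. A global symmetry rotates the field by the same phase everywhere, while a gauge symmetry lets the phase vary from point to point, at the price of a compensating gauge field A:

```latex
% Global U(1) symmetry: the same phase \alpha at every point x
\psi(x) \;\to\; e^{i\alpha}\,\psi(x), \qquad \alpha = \text{constant}

% Gauge (local) U(1) symmetry: the phase may differ from point to point,
% compensated by a shift of the gauge field A_\mu
\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \;\to\; A_\mu(x) + \tfrac{1}{e}\,\partial_\mu\alpha(x)
```

The second transformation only leaves the theory unchanged because the gauge field absorbs the point-to-point variation, which is exactly why its role depends on how space itself behaves.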

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Or do all symmetries become just 'like' local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives on space automatically subject to a gauge symmetry? If we want to carry over the results of what we did in the standard model, where there is no gravity, to the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot do it yet.

These questions had been in the back of my mind for some time already. A few years, actually; I do not know how many exactly. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that, eventually, I would need to address it, if nobody else did. So far, nobody has.

But why now? What prompted me to start with it now? As so often in science, it was other scientists.

Last year, at the end of November/beginning of December, I took part in a conference in Vienna. I had been invited to talk about our research. The meeting had a quite wide scope, and among those present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory, albeit with strong gravitational effects. Which usually means a black hole. In such a setup, the deformations of space are fixed. And also non-quantum black holes can swallow stuff. This combination appears to have the following consequence: global symmetries appear to become meaningless, because everything associated with them can vanish in the black hole. However, keeping space deformations fixed means that local symmetries are also fixed. So they appear to become real, instead of auxiliary. Thus, this seems to be quite opposite to our result. And this, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect seems to be there.

However, in a full quantum gravity theory, the game changes. Then space deformations become dynamical as well. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical, too. They evolve. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start working actively on quantum gravity. Because I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity. And to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basics. I now have a rough idea of what is going on, and what needs to be done. And I think I see a way how everything can be reconciled and make sense. It will still take a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still take time before I have a first real result. But I am quite happy with how things progress.

And that is the story how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁