Monday, February 6, 2012

Simulating a universe

One of the things that has changed science drastically in the last few decades is the computer. With today's computational resources it is feasible to simulate even complex physical systems inside a computer with useful accuracy. This is done in all areas of physics, and of course also in particle physics. In the latter context, the numerical simulation of particles is mostly known as lattice gauge theory, after its predominant version.

So how does such a simulation proceed? And what can we actually learn from it? Well, the rules of quantum physics forbid us from just taking a number of marbles and simulating their behavior in the computer. With such an approach we could do classical physics (as is indeed done, with the marbles representing entire galaxies), but we would miss quantum effects. Quantum effects require us to simulate the whole universe, such that all the quanta can interact with each other. Thus, what we do in quantum physics is to simulate many possible histories of a universe, and from them infer the quantum behavior of a system of elementary particles.

Stop, you may say. The universe is really, really big. Even with the most powerful computer today, how can we ever dream of simulating it completely? And you are right to say so: it is not possible, not with all the computers in the world. But it is also not necessary. For most questions we are interested in, most of the universe contributes only negligibly, and what is really interesting happens inside a small part of it. What we want to know in particle physics is how the particles interact and how they form bound states not much larger than, say, a nucleus. And in an atom, the electrons are so far away from the nucleus, with so much empty space in between, that they play little role in how the nucleus is built from quarks. We can thus just look at a universe that is only a little larger, maybe a factor of ten or so, than the nucleus we are interested in, and we should already capture almost everything; at least as much as we can expect to verify experimentally in the foreseeable future. Thus, we are permitted to simulate a very small universe to answer all the questions of relevance in particle physics.

But this is not yet sufficient for a meaningful simulation. As you may remember, the standard model is just a low-energy approximation of whatever the theory is at some higher energy scale. If we now just try to simulate this universe, the simulation will not work, because we lack the knowledge of what happens at very short distances. As a consequence, our simulation would just produce either zeros or infinities, which is not too helpful. To deal with that, we have to build the limits of our knowledge into the simulation. Since what we do not know is the physics at very short distances, this is most easily done by not simulating very short distances at all. The simplest way to do this is not to take every space-time point of the universe you want to simulate, but only a finite number of them, with finite distances in between. Technically, this is usually realized by arranging this finite number of points on a lattice, and thus the name lattice theory. The still-missing word gauge just stems from the fact that the standard model is a gauge theory. Thus: lattice gauge theory.
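To see in a very simple setting why a lattice acts as a short-distance cutoff, consider how a derivative looks on a grid: it becomes a finite difference, and nothing shorter than the spacing can be resolved. The following is a toy illustration only, not lattice QCD; the function and numbers are made up for the demonstration.

```python
import math

# On a lattice with spacing a, a derivative becomes a finite difference.
# Physics on scales shorter than a is simply not represented.
def lattice_derivative(f, x, a):
    return (f(x + a) - f(x)) / a   # forward difference on the lattice

exact = math.cos(1.0)              # true derivative of sin(x) at x = 1
for a in (0.5, 0.1, 0.01):
    approx = lattice_derivative(math.sin, 1.0, a)
    print(f"a = {a:>4}: error = {abs(approx - exact):.5f}")
```

The error shrinks as the spacing shrinks: the lattice result approaches the continuum one, but at any finite spacing the very-short-distance physics we do not know is safely excluded.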

OK, now we are ready to go. We just take a small box with a finite number of points in it, arranged on a (usually square) lattice, and let the computer run. And this works rather well. With this we can calculate the mass of the proton, or how strongly the Higgs and the W boson interact, and so on.
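The whole recipe, a small box of points and many sampled histories weighted by their action, can be sketched in miniature. The following toy program is not lattice gauge theory; it is the simplest relative of it, a single quantum particle in a harmonic potential whose "universe" is a handful of points in imaginary time. All parameters are illustrative.

```python
import math
import random

random.seed(42)  # reproducible toy run

# Toy "universe": one particle in a harmonic potential, discretized on
# N points in imaginary time with spacing a (m = omega = 1, made-up units).
N = 20           # number of lattice points
a = 0.5          # lattice spacing
x = [0.0] * N    # one "history": the position at each time slice

def action(path):
    """Discretized Euclidean action: kinetic plus potential term."""
    s = 0.0
    for i in range(N):
        dx = path[(i + 1) % N] - path[i]      # periodic in time
        s += 0.5 * dx * dx / a + 0.5 * a * path[i] ** 2
    return s

def sweep(path, step=1.0):
    """One Metropolis sweep: proposed changes to the history are kept
    with weight exp(-S), which implements the sum over histories."""
    for i in range(N):
        old_x, old_s = path[i], action(path)
        path[i] = old_x + random.uniform(-step, step)
        if random.random() > math.exp(min(0.0, old_s - action(path))):
            path[i] = old_x                   # reject the proposal

for _ in range(200):                          # thermalization
    sweep(x)

measurements = []
for _ in range(2000):                         # sample many histories
    sweep(x)
    measurements.append(sum(xi * xi for xi in x) / N)

x2 = sum(measurements) / len(measurements)
print(f"<x^2> estimate: {x2:.3f}")            # exact continuum value: 0.5
```

Real lattice simulations replace the single position by gauge and matter fields on a four-dimensional grid and need supercomputers, but the logic, sample histories, weight them by the action, average, is the same.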

Great, you say, so that is it. We just do simulations, and that will be all we will ever need. Unfortunately, it is again not that simple. There are two drawbacks to this approach.

One is that we still have limited resources at our disposal. As a result, the simulated volume is still rather small, so we cannot cover all the processes we would like. Also, if we increase the volume without increasing the number of space-time points, their spacing grows. At some point, this spacing may become so large that we simply cannot resolve some part of the physics anymore; it falls through the cracks (or the lattice), figuratively speaking. And adding more points increases the run time. Thus, we are not yet able to simulate, say, a proton and a Higgs at the same time. It would take much, much more computing time to do so, and we do not expect that much computing power to be available in any reachable future. So there are things we cannot answer with simulations. There are also technical reasons: we have not yet been able to develop algorithms with which we could simulate very light fermions or parity violation. We are currently also stuck when it comes to having many fermions, as is the case in the interior of, say, a large nucleus or a neutron star. Thus, for now we are limited in our simulations by the power of our computers and of our algorithms.
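The trade-off between volume, spacing, and cost is simple arithmetic, sketched below with made-up numbers that do not correspond to any real simulation.

```python
# Purely illustrative: how volume, spacing, and cost compete on a lattice.
N = 32                      # lattice points per direction (made up)
L = 3.0                     # box length in femtometres (made up)
a = L / N                   # lattice spacing: a = L / N
print(f"spacing: {a:.4f} fm")

# Doubling the box at fixed N doubles the spacing, so fine details
# start to fall through the lattice:
a_doubled = (2 * L) / N

# Keeping the spacing fixed instead requires doubling N, which in four
# space-time dimensions multiplies the number of sites by 2**4 = 16:
sites = N ** 4
sites_doubled = (2 * N) ** 4
print(f"cost factor: {sites_doubled // sites}")
```

Since the run time grows at least with the number of sites, every doubling of the resolved volume at fixed spacing costs a factor of sixteen or more, which is why simulating a proton and a Higgs together is out of reach.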

The second thing is that a simulation will always just produce a number. Good, you may say, an experiment does so as well, so you can compare with experiment and find out whether your theory is correct. True. But this is not yet satisfactory. Just because you can measure the speed a car drives, and your simulation of the car produces the same speed, does not tell you how the car does it. You know that your description of the car is right, but you do not know how the different parts really interact with each other, or what they mean. For a car you can then just go on and look at the details. But in quantum physics that is not possible, because the laws of quantum physics dictate that everything interacts with everything to a certain degree. Thus, disassembling is not entirely simple, or even possible. An alternative is to have expressions in which you can see, for each knob, what it turns. That cannot be delivered by any purely numerical calculation.

Make no mistake here: this does not make numerical simulations useless. In fact, some of the greatest discoveries of the past decades were only possible using computers. But they are not everything, and therefore we need more. If the theory is only weakly interacting, perturbation theory does the job nicely. But what if it is not? Well, then we have further possibilities, which I will discuss next.

As a final remark, I should add that I use computer simulations heavily in my own work. Even while I am typing this entry, many, many CPUs are working for me. But as you have seen, there are limitations. This is why I also use some of the further methods I will discuss next.

1 comment:

  1. Hardly anyone does it, but it is enlightening to use quaternionic probability amplitude distributions (QPADs) as wave functions. It turns linear equations of motion into balance equations. It throws completely new light on fundamental physics. It is applied in the Hilbert Book Model.