Phase space and Boltzmann's definition of entropy
We are still not finished with the definition of entropy, however, for what has been said up to this point only half addresses the issue. We can see an inadequacy in our description so far by considering a slightly different example. Rather than having a can of red and blue paint, we might consider a bottle which is half filled with water and half with olive oil. We can stir it as much as we like, and also shake the bottle vigorously. But in a few moments, the olive oil and the water will separate out, and we soon have just olive oil in the top half of the bottle and water in the bottom half. Nevertheless, the entropy has been increasing all the time throughout the separation process. The new point that arises here is that there is a strong mutual attraction between the molecules of olive oil which causes them to aggregate, thereby expelling the water. The notion of mere configuration space is not adequate to account for the entropy increase in this kind of situation, as we really need to take into account the motions of the individual particles/molecules, not just their locations. Their motions will be necessary for us, in any case, so that the future evolution of the state is determined, according to the Newtonian laws that we are assuming to be operative here. In the case of the molecules in the olive oil, their strong mutual attraction causes their velocities to increase (in vigorous orbital motions about one another) as they get closer together, and it is the 'motion' part of the relevant space which provides the needed extra volume (and therefore extra entropy) for the situations where the olive oil is collected together.
The space that we need, in place of the configuration space C described above, is what is called phase space. The phase space T has twice as many dimensions (!) as C, and each position coordinate for each constituent particle (or molecule) must have a corresponding 'motion' coordinate in addition to that position coordinate (see Fig. 1.6). We might imagine that the appropriate such coordinate would be a measure of velocity (or angular velocity, in the case of angular coordinates describing orientation in space). However, it turns out (because of deep connections with the formalism of Hamiltonian theory[11]) that it is the momentum (or angular momentum, in the case of angular coordinates) that we shall require in order to describe the motion. In most familiar situations, all we need to know about this 'momentum' notion is that it is the mass times the velocity (as already mentioned in §1.1). Now the (instantaneous) motions, as well as the positions, of all the particles composing our system are encoded in the location of a single point p in T. We say that the state of our system is described by the location of p within T.
For the dynamical laws that we are considering, governing the behaviour of our system, we may as well take them to be Newton's laws of motion, but we can also treat more general situations (such as with the continuous fields of Maxwell's electrodynamics; see §2.6, §3.1, §3.2, and Appendix A1), which also come under the broad Hamiltonian framework (referred to above). These laws are deterministic in the sense that the state of our system at any one time completely determines the state at any other time, whether earlier or later. To put things another way, we can describe the dynamical evolution of our system, according to these laws, as a point p which moves along a curve—called an evolution curve—in the phase space T. This evolution curve represents the unique evolution of the entire system according to the dynamical laws, starting from the initial state, which we can represent by some particular point p0 in the phase space T. (See Fig. 1.7.) In fact, the whole phase space T will be filled up (technically foliated) by such evolution curves (rather like a bale of straw), where every point of T will lie on some particular evolution curve. We must think of this curve as being oriented—which means that we must assign a direction to the curve, and we can do this by putting an arrow on it. The evolution of our system, according to the dynamical laws, is described by a moving point p, which travels along the evolution curve—in this case starting from the particular point p0—and moves in the direction in which the arrow points. This provides us with the future evolution of the particular state of the system represented by p. Following the evolution curve in the direction away from p0 in the opposite direction to the arrow gives the time-reverse of the evolution, this telling us how the state represented by p0 would have arisen from states in the past. Again, this evolution would be unique, according to the dynamical laws.
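This determinism, and its time-reverse, is easy to see numerically. The following is a minimal sketch of my own, not taken from the text: a single particle in one dimension under a hypothetical harmonic force, so that its phase space is just the 2-dimensional (x, p) plane. The state is stepped along its evolution curve with the leapfrog scheme (a standard integrator for Hamiltonian systems), then the momentum is reversed and the same laws retrace the curve back to the starting state.

```python
import math

def step(x, p, dt, force=lambda x: -x, m=1.0):
    """Advance the phase-space point (x, p) by one leapfrog step."""
    p_half = p + 0.5 * dt * force(x)
    x_new = x + dt * p_half / m
    p_new = p_half + 0.5 * dt * force(x_new)
    return x_new, p_new

def evolve(x, p, dt, n):
    """Follow the evolution curve through n time steps."""
    for _ in range(n):
        x, p = step(x, p, dt)
    return x, p

# Forward evolution from an initial state p0 = (x0, p0) ...
x0, p0 = 1.0, 0.0
x1, p1 = evolve(x0, p0, dt=0.01, n=500)

# ... then reverse the momentum and evolve again under the SAME laws:
# we retrace the evolution curve backwards and recover the initial state.
xb, pb = evolve(x1, -p1, dt=0.01, n=500)
print(abs(xb - x0) < 1e-9, abs(pb + p0) < 1e-9)  # True True
```

The point of the momentum reversal is that running "time backwards" is itself a forward evolution of a related state, which is why the past of a state is just as uniquely determined as its future.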
One important feature of phase space is that, since the advent of quantum mechanics, we find that it has a natural measure, so that we can take volumes in phase space to be, essentially, just dimensionless numbers. This is important, because Boltzmann's entropy definition, that we shall come to shortly, is given in terms of phase-space volumes, so we need to be able to compare high-dimensional volume measures with each other, where the dimensions may differ very greatly from one to another. This may seem strange from the point of view of ordinary classical (i.e. non-quantum) physics, since in ordinary terms we would think of the length of a curve (a 1-dimensional 'volume') as always having a smaller measure than the area of a surface (a 2-dimensional 'volume'), and a surface area as being of smaller measure than a 3-volume, etc. But the measures of phase-space volumes that quantum theory tells us to use are indeed just numbers, as measured in units of mass and distance that give us ℏ = 1, the quantity ℏ being Dirac's version of Planck's constant (sometimes called the 'reduced' Planck's constant), ℏ = h/2π, where h is the original Planck's constant. In standard units, ℏ has the extremely tiny value ℏ = 1.05457… × 10⁻³⁴ Joule seconds, so the phase-space measures that we encounter in ordinary circumstances tend to have exceedingly large numerical values.
Thinking of these numbers as being just integers (whole numbers) gives a certain 'granularity' to phase space, and this provides the discreteness of the 'quanta' of quantum mechanics. But in most ordinary circumstances these numbers would be huge, so the granularity and discreteness are not noticed. An exception arises with the Planck black-body spectrum that we shall be coming to in §2.2 (see Fig. 2.6 and note 1.2), this being the observed phenomenon that Max Planck's theoretical analysis explained, in 1900, thereby launching the quantum revolution. Here one must consider an equilibrium situation simultaneously involving different numbers of photons, and therefore phase spaces of different dimensions. The proper discussion of such matters is outside the scope of this book,[13] but we shall return to the basics of quantum theory in §3.4.
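To get a feeling for how huge these numbers are, here is a rough order-of-magnitude sketch, with figures of my own choosing rather than anything from the text. The classical orbit of a harmonic oscillator of energy E and angular frequency ω encloses a phase-space area of 2πE/ω; measured in units of Planck's constant h = 2πℏ, this area becomes the dimensionless count of quantum 'cells' that the orbit spans.

```python
import math

hbar = 1.05457e-34  # reduced Planck constant, in Joule seconds

def cells(E, w):
    """Phase-space area of the orbit, in units of h = 2*pi*hbar."""
    area = 2 * math.pi * E / w          # classical enclosed area, J s
    return area / (2 * math.pi * hbar)  # dimensionless number of cells

# A made-up but everyday example: a 1 g mass oscillating at 1 Hz
# with a 1 mm amplitude.
m, f, amp = 1e-3, 1.0, 1e-3
w = 2 * math.pi * f
E = 0.5 * m * w**2 * amp**2             # total oscillator energy, Joules
print(f"{cells(E, w):.3g}")             # roughly 3e25: granularity invisible
```

Even for so modest a motion the count comes out around 10²⁵, which is why the integer 'granularity' of phase space goes entirely unnoticed in ordinary circumstances.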
Fig. 1.8 Impression of coarse-graining in higher dimensions.
Now that we have the notion of the phase space of a system, we shall need to understand how the Second Law operates in relation to it. As with our discussion of configuration space, above, this will require us to provide a coarse-graining of T, where two points belonging to the same coarse-graining region would be deemed to be 'indistinguishable' with regard to macroscopic parameters (such as the temperature, pressure, density, direction and magnitude of flow of a fluid, colour, chemical composition, etc.). The definition of the entropy S of a state represented by a point p in T is now provided by the remarkable Boltzmann formula

S = K log₁₀ V,

where V is the volume of the coarse-graining region containing p (see Fig. 1.8). The quantity K is a small constant (which would have been Boltzmann's constant k, had I chosen to use a natural logarithm 'log'); it is given by K = k log 10 (log 10 = 2.302585…), where k is indeed Boltzmann's constant, k = 1.3805… × 10⁻²³ Joules/degree Kelvin, so K takes the tiny value K = 3.179… × 10⁻²³ J K⁻¹. In fact, to be consistent with the definitions normally used by physicists, I shall henceforth revert to natural logarithms, and write Boltzmann's entropy formula as

S = k log V.
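The change of logarithm base amounts only to a change in the constant multiplying it, so the two forms of the formula assign exactly the same entropy. A quick check of this, using an arbitrary made-up phase-space volume of my own, might look like:

```python
import math

k = 1.3805e-23       # Boltzmann's constant, J/K
K = k * math.log(10) # the constant appropriate to base-10 logarithms
print(f"{K:.4g}")    # 3.179e-23, the tiny value quoted in the text

V = 10.0**250        # a made-up, hugely large dimensionless phase-space volume
S_log10 = K * math.log10(V)  # S = K log10(V)
S_ln = k * math.log(V)       # S = k log(V), natural logarithm
print(abs(S_log10 - S_ln) / S_ln < 1e-9)  # True: same entropy either way
```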
Before we move on, in §1.4, to explore the reasonableness and implications of this elegant definition, and how it relates to the Second Law, we should appreciate one particular issue that is very nicely addressed by it. Sometimes people (quite correctly) point out that the lowness of the entropy of a state is not really a good measure of the state's 'specialness'. If we again consider the situation of the falling egg that was introduced in §1.1, we note that the relatively high-entropy state that is arrived at when the egg has become a mess on the floor is still an extraordinarily special one. It is special because there are some very particular correlations between the motions of the particles constituting that apparent 'mess'—of such a nature that if we reverse them all, then that mess will have the remarkable property that it will rapidly resolve itself into a perfectly completed egg that projects itself upwards so as to perch itself delicately on the table above. This, indeed, is a very special state, no less special than was the relatively low-entropy configuration of the egg up on the table in the first place. But, 'special' as that state consisting of a mess on the floor undoubtedly was, it was not special in the particular way that we refer to as 'low entropy'. Lowness of entropy refers to manifest speciality, which is seen in special values of the macroscopic parameters. Subtle correlations between particle motions are neither here nor there when it comes to the entropy that is to be assigned to the state of a system.
We see that although some states of relatively high entropy (such as the time-reversed smashed egg just considered) can evolve to states of lower entropy, in contradiction with the Second Law, these would represent a very tiny minority of the possibilities. It may be said that this is the 'whole point' of the notion of entropy and of the Second Law. Boltzmann's definition of entropy in terms of the notion of coarse-graining deals with this matter of the kind of 'specialness' that is demanded by low entropy in a very natural and appropriate way.
One further point is worth making here. There is a key mathematical theorem known as Liouville's theorem, which asserts that, for the normal type of classical dynamical system considered by physicists (the standard Hamiltonian systems referred to above), the time-evolution preserves volumes in phase space. This is illustrated in the right-hand part of Fig. 1.7, where we see that if a region V₀, of volume V, in phase space T, is carried by the evolution curves to a region Vₜ after time t, then we find that Vₜ has the same volume V as does V₀. This does not contradict the Second Law, however, because the coarse-graining regions are not preserved by the evolution. If the initial region V₀ happened to be a coarse-graining region, then Vₜ would be likely to be a sprawling messy volume spreading out through a much larger coarse-graining region, or perhaps several such regions, at the later time t.
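For the simplest Hamiltonian system of all, this volume preservation can be checked directly. The sketch below is a toy construction of my own, not from the text: a harmonic oscillator with unit mass and frequency, whose exact phase-space evolution after time t is a rigid rotation of the (x, p) plane, carrying a small triangular region V₀ to a region Vₜ of identical area.

```python
import math

def flow(state, t):
    """Exact time-t Hamiltonian evolution of the unit harmonic oscillator:
    x' = p, p' = -x, whose solution is a rotation of the (x, p) plane."""
    x, p = state
    c, s = math.cos(t), math.sin(t)
    return (c * x + s * p, -s * x + c * p)

def area(a, b, c):
    """Signed area of the phase-space triangle a, b, c (shoelace formula)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                  - (c[0] - a[0]) * (b[1] - a[1]))

# A small region V0 (a triangle of phase-space points), carried by the
# evolution curves for a while to become the region Vt:
V0 = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.2)]
Vt = [flow(s, t=2.7) for s in V0]
print(abs(area(*V0)), abs(area(*Vt)))  # both come out to about 0.01
```

Here the flow happens to be a rotation, so area preservation is visually obvious; Liouville's theorem is the statement that the same holds for the (generally far messier, region-distorting) flow of any standard Hamiltonian system.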
To end this section, it will be appropriate to return to the important matter of the use of a logarithm in Boltzmann's formula, following up an issue that was briefly addressed in §1.2. The matter will have particular importance for us later, most especially in §3.4. Suppose that we are contemplating the physics taking place in our local laboratory, and we wish to consider the definition of the entropy of some structures involved in an experiment that we are performing. What would we consider to be Boltzmann's definition of entropy relevant to our experiment? We would take into account all degrees of freedom of concern, in the laboratory, and use these to define some phase space T. Within T would be the relevant coarse-graining region V of volume V, giving us our Boltzmann entropy k log V.
However, we might choose to consider our laboratory as part of a far larger system, let us say the rest of the entire Milky Way galaxy within which we reside, where there are enormously many more degrees of freedom. By including all these degrees of freedom, we find that our phase space will now be enormously larger than before. Moreover, the coarse-graining region pertinent to our calculation of entropies within our laboratory will now also be enormously larger than before, because it can involve all the degrees of freedom present in the entire galaxy, not just those relevant to the contents of the laboratory. This is natural, however, because the entropy value is now that which applies to the galaxy as a whole, the entropy involved in our experiment being only a small part of this.
The parameters defining the external degrees of freedom (those determining the state of the galaxy except for those defining the state within the laboratory) provide us with a huge 'external' phase space X, and there will be a coarse-graining region W within X that characterizes the state of the galaxy external to the laboratory. See Fig. 1.9. The phase space Q for the entire galaxy will be defined by the complete set of parameters, both external (providing the space X) and internal (providing the space T). The space Q is called, by mathematicians, the product space[14] of T with X, written

Q = T × X,

and its dimension will be the sum of the dimensions of T and of X (because its coordinates are those of T followed by those of X). Figure 1.10 illustrates the idea of a product space, where T is a plane and X is a line.
Fig. 1.10 Product space where T is a plane and X is a line.
If we take the external degrees of freedom to be completely independent of the internal ones, then the relevant coarse-graining region in Q will be the product of the coarse-graining regions V in T and W in X, respectively (see Fig. 1.11). Moreover, the volume element in a product space is taken to be the product of the volume elements in each of the constituent spaces; consequently the volume of the coarse-graining region V × W in Q will be the product VW of the volume V of the coarse-graining region V in T with the volume W of the coarse-graining region W in X. Hence, by the product-to-sum property of the logarithm, the Boltzmann entropy we obtain is

k log(VW) = k log V + k log W,

which is the sum of the entropy within the laboratory and the entropy external to the laboratory. This tells us that entropies of independent systems simply add together, showing that an entropy value is something that can be assigned to any part of a physical system that is independent of the rest of the system.
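Numerically, with made-up coarse-graining volumes of my own for the laboratory (V) and for the external galaxy (W), dimensionless as discussed earlier, the additivity looks like this:

```python
import math

k = 1.3805e-23  # Boltzmann's constant, J/K

# Purely illustrative (made-up) coarse-graining volumes, as dimensionless
# numbers: V for the laboratory region, W for the external-galaxy region.
V, W = 10.0**100, 10.0**150

S_joint = k * math.log(V * W)                # entropy of the region V x W in Q
S_sum = k * math.log(V) + k * math.log(W)    # laboratory plus external entropy
print(math.isclose(S_joint, S_sum, rel_tol=1e-9))  # True: entropies add
```

Had we multiplied probabilities or volumes directly, independence would have shown up as a product; the logarithm is exactly what converts that product into the sum that an additive entropy requires.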
Fig. 1.11 Coarse-graining region in the product space as a product of coarse-graining regions in the factors.
In the situation considered here, for which T refers to the degrees of freedom relevant to the laboratory and X to those relevant to the external galaxy (assumed independent of each other), we find that the entropy value k log V that the experimenter would assign to the experiment being performed, if the external degrees of freedom are being ignored, would differ from the entropy value k log(VW) that would result if these external degrees of freedom are also taken into consideration, simply by the entropy value k log W that would be assigned to all the external galactic degrees of freedom. This external part plays no role for the experimenter and can therefore be safely ignored for studying the role of the Second Law within the laboratory itself. However, when in §3.4 we come to consider the entropy balance of the universe as a whole and, most particularly, the contribution due to black holes, we shall find that these matters cannot be ignored, and will acquire a fundamental significance for us!