Entropy as state counting

But how does the physicist's notion of 'entropy', as it appears in the Second Law, actually quantify this 'randomness', so that the self-assembling egg can indeed be seen to be overwhelmingly improbable, and thereby rejected as a serious possibility? In order to be a bit more explicit about what the entropy concept actually is, so that we can make a better description of what the Second Law actually asserts, let us consider a physically rather simpler example than the breaking egg. The Second Law tells us, for example, that if we pour some red paint into a pot and then some blue paint into the same pot and give the mixture a good stir, then after a short period of such stirring the different regions of red and of blue will lose their individuality, and ultimately the entire contents of the pot will appear to have the colour of a uniform purple. It seems that no amount of further stirring will convert the purple colour back to the original separated regions of red and blue, despite the time-reversibility of the submicroscopic physical processes underlying the mixing. Indeed, the purple colour should eventually come about spontaneously, even without the stirring, especially if we were to warm the paint up a little. But with stirring, the purple state is reached much more quickly. In terms of entropy, we find that the original state, in which there are distinctly separated regions of red and blue paint, will have a relatively low entropy, but that the pot of entirely purple paint that we end up with will have a considerably larger entropy. Indeed, the whole stirring procedure provides us with a situation that is not only consistent with the Second Law, but which begins to give us a feeling of what the Second Law is all about.

Let us try to be more precise about the entropy concept, so that we can be more explicit about what is happening here. What actually is the entropy of a system? Basically, the notion is a fairly elementary one, although involving some distinctly subtle insights, due mainly to the great Austrian physicist Ludwig Boltzmann, and it has to do just with counting the different possibilities. To make things simple, let us idealize our pot of paint example so that there is just a (very large) finite number of different possibilities for the locations of each molecule of red paint or of blue paint. Let us think of these molecules as red balls or blue balls, these being allowed to occupy only discrete positions, centred within N³ cubical compartments, where we are thinking of our paint pot as an enormously subdivided N × N × N cubical crate composed of these compartments (see Fig. 1.2), where I am assuming that every compartment is occupied by exactly one ball, either red or blue (represented as white and black, respectively, in the figure).

Fig. 1.2 N × N × N cubical crate, each compartment containing a red or blue ball.

To judge the colour of the paint at some place in the pot, we make some sort of average of the relative density of red balls to blue balls in the neighbourhood of the location under consideration. Let us do this by containing that location within a cubical box that is much smaller than the entire crate, yet very large as compared with the individual cubical compartments just considered. I shall suppose that this box contains a large number of the compartments just considered, and belongs to a cubical array of such boxes, filling the whole crate in a way that is less refined than that of the original compartments (Fig. 1.3). Let us suppose that each box has a side length that is n times as great as that of the original compartments, so that there are n × n × n = n³ compartments in each box. Here n, though still very large, is to be taken to be far smaller than N: N ≫ n.

To keep things neat, I suppose that N is an exact multiple of n, so that

N = kn, where k is a whole number, giving the number of boxes that span the crate along each side. There will now be k × k × k = k³ of these intermediate-sized boxes in the entire crate.

Fig. 1.3 The compartments are grouped together into k³ boxes, each of size n × n × n.

The idea will be to use these intermediate boxes to provide us with a measure of the 'colour' that we see at the location of that box, where the balls themselves are considered to be too small to be seen individually. There will be an average colour, or hue, that can be assigned to each box, given by 'averaging' the colours of the red and blue balls within that box. Thus, if r is the number of red balls in the box under consideration, and b the number of blue balls in it (so r + b = n³), then the hue at that location is taken to be defined by the ratio of r to b. Accordingly, we consider that we get a redder hue if r/b is larger than 1 and a bluer hue if r/b is smaller than 1.
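
As a rough illustration of this box-averaging (a sketch of my own, with toy values of N and n far smaller than anything realistic, and a random half-and-half filling of the crate), one might compute the ratio r/b for every box as follows:

```python
# Illustrative sketch only: an N x N x N crate of balls (1 = red, 0 = blue),
# grouped into k^3 boxes of n x n x n compartments; the 'hue' of a box is r/b.
# The values of N and n here are toy choices, vastly smaller than in the text.
import numpy as np

N, n = 60, 15
k = N // n                                       # N is assumed an exact multiple of n

rng = np.random.default_rng(0)
crate = rng.integers(0, 2, size=(N, N, N))       # roughly half red (1), half blue (0)

# Count red balls in each n x n x n box by blocking the array and summing.
red_per_box = crate.reshape(k, n, k, n, k, n).sum(axis=(1, 3, 5))
blue_per_box = n**3 - red_per_box
hue = red_per_box / blue_per_box                 # r/b for each of the k^3 boxes

# Even for these tiny boxes every ratio is within a few per cent of 1; for the
# text's enormous n it is overwhelmingly likely to lie within 0.1% of 1.
print(hue.min(), hue.max())
```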

Let us suppose that the mixture looks to be a uniform purple to us if every one of these n × n × n boxes has a value of r/b that is between 0.999 and 1.001 (so that r and b are the same, to an accuracy of one tenth of a per cent). This may seem, at first consideration, to be a rather stringent requirement (having to apply to every individual box). But when n gets very large, we find that the vast majority of the ball arrangements do satisfy this condition! We should also bear in mind that when considering molecules in a can of paint, the number of them will be staggeringly large, by ordinary standards. For example, there could well be something like 10²⁴ molecules in an ordinary can of paint, so taking N = 10⁸ would not be at all unreasonable. Also, since colours look perfectly good in digitally displayed photographs with a pixel size of only 10⁻² cm, taking a value of k = 10³ is also very reasonable in this model. From this, we find that, with these numbers (N = 10⁸ and k = 10³, so n = 10⁵), there are around 10^23570000000000000000000000 different arrangements of the entire collection of ½N³ red balls and ½N³ blue balls that give the appearance of a uniform purple. There are only a mere 10^46500000000000 different arrangements which give the original configuration in which the blue is entirely at the top and the red entirely at the bottom. Thus, for balls distributed entirely at random, finding uniform purple is a virtual certainty, whereas the probability of finding all the blue ones at the top is something like 10^-23570000000000000000000000 (and this figure is not substantially changed if we do not require 'all' the blue balls to be initially at the top but, say, only 99.9% of them to be at the top).
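
To get a feel for how an exponent of this general size can arise, here is a rough back-of-envelope check (my own, not the book's precise coarse-grained counting), under the assumption that what is being counted is essentially the number of ways of rearranging all N³ = 10²⁴ individually distinguishable molecules; Stirling's approximation then puts log10 of that count at about 2.36 × 10²⁵, which is indeed the general size of the exponent quoted above.

```python
# Back-of-envelope estimate only (an assumption of mine about what is counted):
# log10(M!) for M = N^3 = 10^24 molecules, via Stirling's approximation.
import math

M = 10**24
# Stirling: ln(M!) ~ M*ln(M) - M + 0.5*ln(2*pi*M)
ln_factorial = M * math.log(M) - M + 0.5 * math.log(2 * math.pi * M)
log10_factorial = ln_factorial / math.log(10)

# Prints roughly 2.3566e+25, i.e. an exponent of about
# 23 570 000 000 000 000 000 000 000, the size quoted in the text.
print(f"log10(M!) ~ {log10_factorial:.4e}")
```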

We are to think of the 'entropy' as something like a measure of these probabilities or, rather, of these different numbers of arrangements that give the same 'overall appearance'. Actually, to use these numbers directly would give an exceedingly unruly measure, owing to their vast differences in size. It is fortunate, therefore, that there are good theoretical reasons for taking the (natural) logarithm of these numbers as a more appropriate 'entropy' measure. For those readers who are not very familiar with the notion of a logarithm (especially a 'natural' logarithm), let us phrase things in terms of the logarithm taken to the base 10, referred to here as 'log10' (rather than the natural logarithm, used later, which I refer to simply as 'log'). To understand log10, the basic thing to remember is that log10 1 = 0, log10 10 = 1, log10 100 = 2, log10 1000 = 3, log10 10000 = 4, etc.

That is, to obtain the log10 of a power of 10, we simply count the number of 0s. For a (positive) whole number that is not a power of 10, we can generalize this to say that the integral part (i.e. the number before the decimal point) of its log10 is obtained by counting the total number of digits and subtracting 1, e.g.

log10 2 = 0.30102999566 . . .
log10 53 = 1.72427586960 . . .
log10 9140 = 3.96094619573 . . .

etc., so in each case the integral part is just one less than the number of digits in the number whose log10 is being taken. The most important property of log10 (or of log) is that it converts multiplication to addition; that is:

log10 (ab) = log10 a + log10 b.

(In the case when a and b are both powers of 10, this is obvious from the above, since multiplying a = 10^A by b = 10^B gives us ab = 10^(A+B).)
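
For readers who like to check such things numerically, here is a small snippet (purely illustrative, not from the text) verifying both the digit-counting rule and the multiplication-to-addition property on the examples above:

```python
# Quick numerical check of the two log10 properties described in the text.
import math

# Integral part of log10(x) equals the number of digits of x minus one.
for x in (2, 53, 9140):
    print(x, math.log10(x), math.floor(math.log10(x)) == len(str(x)) - 1)

# log10 converts multiplication into addition.
a, b = 53, 9140
print(math.isclose(math.log10(a * b), math.log10(a) + math.log10(b)))   # True
```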

The significance of the above displayed relation to the use of the logarithm in the notion of entropy is that we want the entropy of a system which consists of two separate and completely independent components to be what we get by simply adding the entropies of the individual parts. We say that, in this sense, the entropy concept is additive. Indeed, if the first component can come about in P different ways and the second component in Q different ways, then there will be the product PQ of different ways in which the entire system—consisting of both components together—can come about (since to each of the P arrangements giving the first component there will be exactly Q arrangements giving the second). Thus, by defining the entropy of the state of any system to be proportional to the logarithm of the number of different ways that that state can come about, we ensure that this additivity property, for independent systems, will indeed be satisfied.
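
In symbols, the reasoning just given amounts to the following (a brief sketch, with C standing for the as-yet-unspecified constant of proportionality):

```latex
% Entropies of two independent components and of the combined system:
S_1 = C \log P, \qquad S_2 = C \log Q,
\qquad
S_{12} = C \log (PQ) = C \log P + C \log Q = S_1 + S_2 .
```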

I have, however, been a bit vague, as yet, about what I mean by this 'number of ways in which the state of a system can come about'. In the first place, when we model the locations of molecules (in a can of paint, say), we would normally not consider it realistic to have discrete compartments, since in Newtonian theory there would, in full detail, be an infinite number of different possible locations for each molecule rather than just a finite number. In addition, each individual molecule might be of some asymmetrical shape, so that it could be oriented in space in different ways. Or it might have other kinds of internal degrees of freedom, such as distortions of its shape, which would have to be correspondingly taken into account. Each such orientation or distortion would have to count as a different configuration of the system. We can deal with all these points by considering what is known as the configuration space of a system, which I next describe.

For a system of d degrees of freedom, the configuration space would be a d-dimensional space. For example, if the system consisted of q point particles p1, p2, . . . , pq (each without any internal degrees of freedom), then the configuration space would have 3q dimensions. This is because each individual particle requires just three coordinates to determine its position, so there are 3q coordinates overall, whereby a single point P of configuration space defines the locations of all of p1, p2, . . . , pq together (see Fig. 1.4). In more complicated situations, where there are internal degrees of freedom as above, we would have more degrees of freedom for each such particle, but the general idea is the same. Of course, I am not expecting the reader to be able to 'visualize' what is going on in a space of such a high number of dimensions. This will not be necessary. We shall get a good enough idea if we just imagine things going on in a space of just 2 dimensions (such as a region drawn on a piece of paper) or of some region in ordinary 3-dimensional space, provided that we always bear in mind that such visualizations will inevitably be limited in certain ways, some of which we shall be coming to shortly. And of course we should always keep in mind that such spaces are purely abstract mathematical ones which should not be confused with the 3-dimensional physical space or 4-dimensional physical space-time of our ordinary experiences.

Fig. 1.4 Configuration space C of q point particles p1, p2, . . . , pq is a 3q-dimensional space.
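
As a concrete (and purely illustrative) way of holding this in mind, a single point P of the configuration space C is nothing more than a list of 3q numbers; the little sketch below uses arbitrary values of my own choosing:

```python
# Illustration only: a point P of the configuration space C of q point
# particles is just the list of their 3q position coordinates.
import numpy as np

q = 5
positions = np.random.default_rng(1).uniform(size=(q, 3))   # x, y, z for each particle

P = positions.reshape(3 * q)     # one single point of the 3q-dimensional space C
print(P.shape)                   # (15,)
```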

There is a further point that needs clarification, in our attempts at a definition of entropy, and this is the issue of what exactly we are trying to count. In the case of our finite model, we had finite numbers of different arrangements for the red and blue balls. But now we have an infinite number of arrangements (since the particle locations require continuous parameters), and this leads us to consider high-dimensional volumes in configuration space, to provide us with an appropriate measure of size, instead of just counting discrete things.

To get an idea of what is meant by a 'volume' in a high-dimensional space, it is a good idea first to think of lower dimensions. The 'volume-measure' for a region of 2-dimensional curved surface, for example, would be simply the measure of surface area of that region. In the case of a 1-dimensional space, we are thinking simply of the length along some portion of a curve. In an n-dimensional configuration space, we would be thinking in terms of some n-dimensional analogue of the volume of an ordinary 3-volume region.
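
Since such high-dimensional volumes are hard to picture, the following small sketch (my own illustration, with arbitrarily chosen dimensions and sample size) estimates the d-dimensional volume of the unit ball by random sampling, just to make the point that 'volume' is a perfectly workable notion in any number of dimensions:

```python
# Illustration only: Monte Carlo estimate of the volume of the unit ball in
# d dimensions, by sampling points uniformly in the surrounding cube [-1, 1]^d.
import numpy as np

rng = np.random.default_rng(2)
for d in (1, 2, 3, 10):
    pts = rng.uniform(-1.0, 1.0, size=(200_000, d))
    inside = (pts**2).sum(axis=1) <= 1.0         # samples landing inside the unit d-ball
    volume = inside.mean() * 2.0**d              # fraction of the cube times the cube's volume 2^d
    print(d, round(volume, 2))                   # d=1: ~2.0, d=2: ~3.14, d=3: ~4.19, ...
```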

But which regions of configuration space are we to be measuring the volumes of, when we are concerned with the entropy definition? Basically, what we would be concerned with would be the volume of that entire region in configuration space that corresponds to the collection of states which 'look the same' as the particular state under consideration. Of course, 'look the same' is a rather vague phrase. What is really meant here is that we have some reasonably exhaustive collection of macroscopic parameters which measure such things as density distribution, colour, chemical composition, but we would not be concerned with such detailed matters as the precise locations of every atom that constitutes the system under consideration. This dividing up of the configuration space C into regions that 'look the same' in this sense is referred to as a 'coarse graining' of C. Thus, each 'coarse-graining region' consists of points that represent states that would be considered to be indistinguishable from each other, by means of macroscopic measurements. See Fig. 1.5.

Fig. 1.5 A coarse-graining of C: differing detailed configurations that are macroscopically identical are represented in the same coarse-graining region.

Of course, what is meant by a 'macroscopic' measurement is still rather vague, but we are looking for some kind of analogue of the 'hue' notion that we were concerned with above in our simplified finite model for the can of paint. There is admittedly some vagueness in such a 'coarse-graining' notion, but it is the volume of such a region in configuration space—or, rather, the logarithm of the volume of such a coarse-graining region—that we are concerned with in the definition of entropy. Yes, this is still a bit vague, but it is remarkable how robust the entropy notion turns out to be, nevertheless, mainly due to the absolutely stupendous ratios of volumes that the coarse-graining volumes turn out to have.
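
To draw the threads together, here is a deliberately tiny, discrete caricature of coarse-graining (entirely my own construction, with made-up parameters, and not the configuration-space volume of the text): microstates sharing the same macroscopic 'appearance' are lumped into one region, and each state is assigned the logarithm of its region's size.

```python
# Tiny discrete caricature of coarse-graining (illustration only): a microstate
# assigns each of m balls to 'top' (1) or 'bottom' (0); the only macroscopic
# measurement available is how many balls sit in the top half.
import math
from collections import Counter
from itertools import product

m = 12                                           # deliberately tiny system
microstates = product((0, 1), repeat=m)          # all 2^m detailed configurations

# Microstates with the same macroscopic appearance fall in the same region.
region_sizes = Counter(sum(state) for state in microstates)

for balls_on_top, size in sorted(region_sizes.items()):
    entropy = math.log(size)                     # log of the number of microstates in the region
    print(f"{balls_on_top:2d} on top: {size:4d} microstates, entropy ~ {entropy:.2f}")
```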
