Lectures on Thermodynamics and Statistical Mechanics
Chapter 7: Classical Statistical Mechanics
7. Classical Statistical Mechanics

The dynamics of particles is given by Newton’s laws or, if we include quantum effects, by the quantum mechanics of point particles. Thus, if we have a large number of particles, such as the molecules or atoms which constitute a macroscopic system, then, in principle, the dynamics is determined. Classically, we just have to work out solutions to Newton’s laws. But for systems with a large number of particles, of the order of the Avogadro number in many cases of interest, this is a wholly impractical task. We do not have general solutions for the three-body problem in mechanics, let alone for an Avogadro number of particles. What we can attempt is a statistical approach, where one focuses on certain averages of interest, which can be calculated with some simplifying assumptions. This is the province of statistical mechanics.

If we have N particles, in principle, we can calculate the future of the system if we are given the initial data, namely, the initial positions and velocities or momenta. Thus we need 6N input numbers. Already, as a practical matter, this is impossible, since we do not, and in fact cannot, measure the initial positions and momenta of all the molecules in a gas at any time. So generally we make the assumption that a probabilistic treatment is possible. The number of molecules is so large that we can take the initial data to be a set of random numbers, distributed according to some probability distribution. This is the basic working hypothesis of statistical mechanics. To get some feeling for how large numbers lead to simplification, we start with the binomial distribution.

7.1 The binomial distribution

This is exemplified by the tossing of a coin. For a fair coin, we expect that if we toss it a very large number of times, then roughly half the time we will get heads and half the time we will get tails. We can say that the probability of getting heads is 1/2 and the probability of getting tails is 1/2. Thus the two possibilities have equal a priori probabilities.

Now consider the simultaneous tossing of N coins. What are the probabilities? For example, if N = 2, the possibilities are (HH), (HT), (TH) and (TT). There are two ways we can get one head and one tail, so the probabilities are 1/4, 1/2 and 1/4 for two heads, one head and no heads, respectively. The probability for one head (and one tail) is higher because there are many (two in this case) ways to get that result. So we can ask: How many ways can we get n_1 heads (and n_2 = N - n_1 tails)? This is given by the number of ways we can choose n_1 out of N, to which we can assign the heads. In other words, it is given by

W(n_1) = \frac{N!}{n_1! \, (N - n_1)!}   (7.1)

The probability for any such arrangement will be given by

p(n_1) = \frac{1}{2^N} \, \frac{N!}{n_1! \, (N - n_1)!}   (7.2)

where we have used the binomial theorem to write the denominator as \sum_{n_1} N!/[n_1!(N-n_1)!] = 2^N. This probability as a function of n_1, for large values of N, is shown in Fig. 7.1. Notice that already for the value of N used in the figure, the distribution is sharply peaked around the middle value n_1 = N/2. This becomes more and more pronounced as N becomes large. We can check the place where the maximum occurs by noting that the values p(n_1) and p(n_1 + 1) are very close to each other, infinitesimally different for N \to \infty, so that n_1 may be taken to be a continuous variable as N \to \infty. Further, for large numbers, we can use the Stirling formula

\log N! \approx N \log N - N + \frac{1}{2} \log (2\pi N)   (7.3)

Figure 7.1: The binomial distribution, showing p(n_1) as a function of n_1, the number of heads, for large N.

Then we get

\log p(n_1) \approx N \log N - n_1 \log n_1 - (N - n_1) \log (N - n_1) - N \log 2   (7.4)

This has a maximum at n_1 = N/2. Expanding around this value, we get

p(n_1) \approx p(N/2) \, \exp\left[ -\frac{2 (n_1 - N/2)^2}{N} \right]   (7.5)

We see that the distribution is peaked around n_1 = N/2 with a width of order \sqrt{N}, so that the relative width goes like 1/\sqrt{N}. The probability of significant deviation from the mean value is thus very, very small as N \to \infty. This means that many quantities can be approximated by their mean values or by their values at the maximum of the distribution.
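A quick numerical check of this sharpening (a sketch in Python, not part of the original text; it assumes numpy and scipy are available) is to evaluate the binomial probabilities directly and compare the width of the peak with N:

    import numpy as np
    from scipy.stats import binom

    # Width of the fair-coin distribution p(n1) versus N
    for N in (100, 10_000, 1_000_000):
        n1 = np.arange(N + 1)
        p = binom.pmf(n1, N, 0.5)
        width = np.sqrt(np.sum((n1 - N / 2) ** 2 * p))   # standard deviation of n1
        print(f"N = {N:>9}: width = {width:9.1f}, relative width = {width / N:.2e}")
    # The absolute width grows like sqrt(N), but the relative width falls like
    # 1/sqrt(N), so deviations from n1 = N/2 are negligible for macroscopic N.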

We have considered equal a priori probabilities. If we did not have equal probabilities, the result would be different. For example, suppose we had a coin with a probability q for heads and probability (1 - q) for tails. Then the probability for N coins would go like

p(n_1) = \frac{N!}{n_1! \, (N - n_1)!} \, q^{n_1} (1 - q)^{N - n_1}   (7.6)

(Note that q = 1/2 reproduces (7.2).) The maximum is now at n_1 = q N. The standard deviation from the maximum value remains of order \sqrt{N}.

Here we considered coins, for each of which there are only two outcomes, head or tail. If we have a die with 6 possible outcomes, we must consider splitting N into n_1, n_2, \ldots, n_6. Thus we can first choose n_1 out of N in N!/[n_1!(N - n_1)!] ways, then choose n_2 out of the remaining (N - n_1) in (N - n_1)!/[n_2!(N - n_1 - n_2)!] ways, and so on, so that the number of ways we can get a particular assignment \{n_1, n_2, \ldots, n_6\} is

W = \frac{N!}{n_1! \, n_2! \cdots n_6!}   (7.7)

More generally, the number of ways we can distribute N particles into K boxes is

W\{n_i\} = \frac{N!}{\prod_{i=1}^{K} n_i!}   (7.8)

Basically, this gives the multinomial distribution.
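As a small illustration of the counting formula (7.8) (a sketch; the occupation numbers chosen are arbitrary), the multiplicity can be evaluated exactly with integer arithmetic:

    from math import factorial

    def multiplicity(occupations):
        # Number of ways of distributing N = sum(n_i) particles into boxes
        # with prescribed occupation numbers n_i, as in (7.8).
        W = factorial(sum(occupations))
        for n in occupations:
            W //= factorial(n)
        return W

    # 12 particles over the 6 faces of a die: an even split is overwhelmingly
    # more likely than a lopsided one, echoing the peaking seen above.
    print(multiplicity([2, 2, 2, 2, 2, 2]))    # 7484400 ways
    print(multiplicity([12, 0, 0, 0, 0, 0]))   # 1 way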

7.2 Maxwell-Boltzmann statistics

Now we can see how all this applies to the particles in a gas. The analog of heads or tails would be the momenta and other numbers which characterize the particle properties. Thus we can consider N particles distributed into different cells, each of the cells standing for a collection of observables or quantum numbers which can characterize a particle. The number of ways in which N particles can be distributed into these cells, say K of them, would be given by (7.8). The question is then about a priori probabilities. The basic assumption which is made is that there is nothing to prefer one set of values of observables over another, so we assume equal a priori probabilities. This is a key assumption of statistical mechanics. So we want to find the distribution of particles into different possible values of momenta or other observables by maximizing the probability

p\{n_i\} = C \, \frac{N!}{\prod_i n_i!}   (7.9)

Here C is a normalization constant, given by 1/K^N, the analog of 1/2^N in (7.2). Now the maximization has to be done obviously keeping in mind that \sum_i n_i = N, since we have a total of N particles. But this is not the only condition. For example, energy is a conserved quantity and if we have a system with a certain energy U, no matter how we distribute the particles into different choices of momenta and so on, the energy should be the same. Thus the maximization of probability should be done subject to this condition. Any other conserved quantity should also be preserved. Thus our condition for the equilibrium distribution should read

\delta p\{n_i\} = 0, \qquad \text{subject to} \quad F^\alpha\{n_i\} = \text{constant}   (7.10)

where the F^\alpha (for various values of α) are the conserved quantities, the total particle number and the energy being two such observables.

The maximization of probability seems very much like what is given by the second law of thermodynamics, wherein equilibrium is characterized by maximization of entropy. In fact this suggests that we can define entropy in terms of the probability, or the number of ways W, so that the condition of maximization of probability is the same as the condition of maximization of entropy. This identification was made by Boltzmann, who defined the entropy corresponding to a distribution of particles among various values of momenta and other observables by

S = k \log W   (7.11)

where k is the Boltzmann constant. For two completely independent systems 1 and 2, we need S = S_1 + S_2, while W = W_1 W_2. Thus the relation should be in terms of \log W. This equation is one of the most important formulae in physics. It is true even for quantum statistics, where the counting of the number of ways of distributing particles is different from what is given by (7.8). We will calculate entropy using this and show that it agrees with the thermodynamic properties expected of entropy. We can restate Boltzmann’s hypothesis as

S\{n_i\} = k \log\left( \frac{N!}{\prod_i n_i!} \right)   (7.12)

With this identification, we can write

S = k \left[ N \log N - \sum_i n_i \log n_i \right]   (7.13)

We will consider a simple case where the single-particle energy values are \epsilon_i (where i may be interpreted as a momentum label) and we have only two conserved quantities to keep fixed, the particle number N and the energy U. To carry out the variation subject to the conditions we want to impose, we can use Lagrange multipliers. We add terms proportional to (\sum_i n_i - N) and (\sum_i n_i \epsilon_i - U), and varying the parameters (or multipliers) gives back the two conditions

\sum_i n_i = N, \qquad \sum_i n_i \epsilon_i = U   (7.14)

Since these are anyway obtained as variational conditions, we can vary the n_i freely, without worrying about the constraints, when we try to maximize the entropy. Usually one writes the multiplier of the number constraint as \beta\mu, where \beta is the multiplier of the energy constraint, so we will use this way of writing the Lagrange multipliers. The equilibrium condition now becomes

\delta \left[ \frac{S}{k} - \beta \left( \sum_i n_i \epsilon_i - U \right) + \beta\mu \left( \sum_i n_i - N \right) \right] = 0   (7.15)

This simplifies to

\sum_i \delta n_i \left[ \log n_i + \beta (\epsilon_i - \mu) \right] = 0   (7.16)

Since the \delta n_i are not constrained, this means that the quantity in brackets should vanish, giving the solution

n_i = e^{-\beta (\epsilon_i - \mu)}   (7.17)

This is the value at which the probability and entropy are a maximum. It is known as the Maxwell-Boltzmann distribution. As in the case of the binomial distribution, the variation around this value is very, very small for large values of N, so that observable values can be obtained by using just the solution (7.17). We still have the conditions (7.14) obtained as maximization conditions (for variation of \mu and \beta), which means that

\sum_i e^{-\beta (\epsilon_i - \mu)} = N, \qquad \sum_i \epsilon_i \, e^{-\beta (\epsilon_i - \mu)} = U   (7.18)

The first of these conditions will determine \mu in terms of N and the second will determine \beta in terms of the total energy U.

In order to complete the calculation, we need to identify the summation over the index i. This should cover all possible states of each particle. For a free particle, this would include all momenta and all possible positions. This means that we can replace the summation by an integration over the positions and momenta, \int d^3x \, d^3p. Further, the single-particle energy is given by

\epsilon = \frac{p^2}{2m}   (7.19)

Since

\int d^3p \; e^{-\beta p^2/2m} = \left( \frac{2\pi m}{\beta} \right)^{3/2}   (7.20)

we find from (7.18)

N = e^{\beta\mu} \, V \left( \frac{2\pi m}{\beta} \right)^{3/2}, \qquad U = \frac{3}{2} \frac{N}{\beta}   (7.21)

The value of the entropy at the maximum can now be expressed as

S = k \beta U - k N \log\left[ \frac{N}{V} \left( \frac{\beta}{2\pi m} \right)^{3/2} \right] + k N   (7.22)

From this, we find the relations

\left( \frac{\partial S}{\partial U} \right)_{N,V} = k \beta   (7.23)

\left( \frac{\partial S}{\partial N} \right)_{U,V} = -k \beta \mu   (7.24)

\left( \frac{\partial S}{\partial V} \right)_{U,N} = \frac{k N}{V}   (7.25)

Comparing these with

dS = \frac{1}{T} \, dU + \frac{p}{T} \, dV - \frac{\mu}{T} \, dN   (7.26)

which is the same as (5.8), we identify

k \beta = \frac{1}{T}, \qquad \text{i.e.,} \qquad \beta = \frac{1}{k T}   (7.27)

Further, \mu is the chemical potential and U is the internal energy. The last relation, (7.25), tells us that

p V = N k T   (7.28)

which is the ideal gas equation of state.

Once we have the identification (7.27), we can also express the chemical potential and internal energy as functions of the temperature:

\mu = k T \, \log\left[ \frac{N}{V} \left( \frac{1}{2\pi m k T} \right)^{3/2} \right], \qquad U = \frac{3}{2} N k T   (7.29)

The last relation also gives the specific heats for a monatomic ideal gas as

C_v = \frac{3}{2} N k, \qquad C_p = \frac{5}{2} N k   (7.30)

These specific heats do not vanish as T \to 0, so clearly we are not consistent with the third law of thermodynamics. This is because of the classical statistics we have used. The third law is a consequence of quantum dynamics. So, apart from the third law, we see that with Boltzmann’s identification of the entropy as k \log W, we get all the expected thermodynamic relations and explicit formulae for the thermodynamic quantities.

We have not addressed the normalization of the entropy carefully so far. There are two factors of importance. In arriving at (7.22), we omitted the factor needed in the measure for summing over single-particle states,

\sum_i \;\longrightarrow\; \int \frac{d^3x \, d^3p}{(2\pi\hbar)^3}   (7.31)

where h = 2\pi\hbar is Planck’s constant. Including this factor, the entropy can be expressed as

S = N k \left[ \frac{5}{2} + \log\left\{ \frac{V}{N} \left( \frac{m k T}{2\pi\hbar^2} \right)^{3/2} \right\} \right]   (7.32)

This is known as the Sackur-Tetrode formula for the entropy of a classical ideal gas.
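As a numerical illustration of (7.32) (a sketch; the choice of helium at room temperature and atmospheric pressure is mine, not the text’s), the Sackur-Tetrode formula can be evaluated directly and compared with tabulated entropies:

    import numpy as np

    k = 1.380649e-23        # Boltzmann constant, J/K
    hbar = 1.054571817e-34  # reduced Planck constant, J s
    amu = 1.66053907e-27    # atomic mass unit, kg

    def sackur_tetrode(T, p, m):
        # S/(N k) = 5/2 + log[(V/N) (m k T / 2 pi hbar^2)^{3/2}], with V/N = kT/p
        v = k * T / p
        lam3 = (2 * np.pi * hbar**2 / (m * k * T)) ** 1.5
        return 2.5 + np.log(v / lam3)

    # Helium at T = 298 K, p = 1 atm: about 15.2 k per atom, i.e. ~126 J/(mol K),
    # close to the measured standard entropy of helium gas.
    print(sackur_tetrode(298.0, 101325.0, 4.0 * amu))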

Gibbs paradox

Our expression for the entropy so far has omitted the factor 1/N!. The original formula for the entropy in terms of W includes the factor of N! in W. This corresponds to an additional term k \log N! in the formula (7.32). The question of whether we should keep it or not was considered immaterial, since the entropy contained an additive undetermined term anyway. However, Gibbs pointed out a paradox that arises with such a result. Consider two ideal gases at the same temperature, originally with volumes V_1 and V_2 and numbers of particles N_1 and N_2. Assume they are mixed together; this creates some additional entropy which can be calculated as \Delta S = S_{\rm final} - S_{\rm initial}. Since the temperature is unchanged by the mixing, if we use the formula (7.32) without the factors due to 1/N! (which means with an additional k \log N!), we find

\Delta S = k \left[ N_1 \log\frac{V_1 + V_2}{V_1} + N_2 \log\frac{V_1 + V_2}{V_2} \right]   (7.33)

(We have also ignored the constants depending on m for now.) This entropy of mixing can be tested experimentally and is indeed correct for monatomic nearly ideal gases. The paradox arises when we think of making the gases more and more similar, taking a limit when they are identical. In this case, we should not get any entropy of mixing, but the above formula gives

\Delta S = k \left[ N_1 \log\frac{V_1 + V_2}{V_1} + N_2 \log\frac{V_1 + V_2}{V_2} \right] \neq 0   (7.34)

(In this limit, the constants depending on m are the same, which is why we did not have to worry about them in posing this question.) This is the paradox. Gibbs suggested dividing out the factor of N!, which leads to the formula (7.32). If we use that formula, there is no change for identical gases, because the specific volume is the same before and after mixing. For dissimilar gases, the formula (7.33) for the entropy of mixing is still obtained. The Gibbs factor of 1/N! arises naturally in quantum statistics.
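The contrast between the two countings can be made concrete (a sketch using (7.32) and (7.33); the particle numbers and volumes are arbitrary illustrative values):

    import numpy as np

    def mixing_entropy_without_gibbs(N1, V1, N2, V2):
        # (7.33): entropy of mixing (in units of k) when the 1/N! factor is dropped;
        # this is nonzero even when the two gases are identical.
        V = V1 + V2
        return N1 * np.log(V / V1) + N2 * np.log(V / V2)

    def mixing_entropy_with_gibbs(N1, V1, N2, V2):
        # Identical gases with the 1/N! factor kept: the entropy depends only on
        # the specific volume V/N, so mixing at equal densities changes nothing.
        before = N1 * np.log(V1 / N1) + N2 * np.log(V2 / N2)
        after = (N1 + N2) * np.log((V1 + V2) / (N1 + N2))
        return after - before

    print(mixing_entropy_without_gibbs(1.0, 1.0, 1.0, 1.0))  # 2 log 2 (spurious)
    print(mixing_entropy_with_gibbs(1.0, 1.0, 1.0, 1.0))     # 0.0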

Equipartition

The formula for the energy of a single particle is given by

\epsilon = \frac{p_1^2 + p_2^2 + p_3^2}{2m}   (7.35)

If we consider the integral in (7.20) for each direction of momentum, we have

\frac{\int dp \; \frac{p^2}{2m} \, e^{-\beta p^2/2m}}{\int dp \; e^{-\beta p^2/2m}} = \frac{1}{2\beta} = \frac{kT}{2}   (7.36)

The corresponding contribution to the internal energy is kT/2 per particle for each direction, so that for the three translational degrees of freedom we get 3kT/2 per particle. We have considered the translational degrees of freedom corresponding to the movement of the particle in 3-dimensional space. For more complicated molecules, one has to include rotational and vibrational degrees of freedom. In general, in classical statistical mechanics, we will find a contribution of kT/2 to the internal energy for each (quadratic) degree of freedom. This is known as the equipartition theorem. The specific heat is thus given by

C_v = \frac{f}{2} \, N k   (7.37)

where f is the number of degrees of freedom per particle.

Quantum mechanically, equipartition does not hold, at least in this simple form, which is as it should be, since we know the specific heats must go to zero as T \to 0.
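A simple Monte Carlo check of equipartition (a sketch in arbitrary units; it is not part of the original argument): sampling momenta with the weight e^{-\beta p^2/2m} and averaging the kinetic energy reproduces 3kT/2 per particle:

    import numpy as np

    rng = np.random.default_rng(0)
    m, kT = 1.0, 1.0
    # The weight e^{-p^2/2mkT} is a Gaussian in each momentum component
    # with variance m kT.
    p = rng.normal(scale=np.sqrt(m * kT), size=(1_000_000, 3))
    kinetic = (p**2).sum(axis=1) / (2 * m)
    print(kinetic.mean())   # ~1.5 = 3 kT/2, i.e. kT/2 per quadratic degree of freedom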

7.3 The Maxwell distribution for velocities

The most probable distribution of velocities of particles in a gas is given by (7.17) with \epsilon = \frac{1}{2} m v^2. Thus we expect the distribution function for velocities to be

F(\vec{v}\,) \propto \exp\left( -\frac{m v^2}{2 k T} \right)   (7.38)

This is known as the Maxwell distribution. Maxwell arrived at this by an ingenious argument many years before the derivation we gave in the last section was worked out. He considered the probability of a particle having velocity components (v_1, v_2, v_3). If the probability of a particle having the α-component of velocity between v_\alpha and v_\alpha + dv_\alpha is f(v_\alpha) \, dv_\alpha, then the probability for all three components would be

F(\vec{v}\,) \, d^3v = f(v_1) \, f(v_2) \, f(v_3) \; dv_1 \, dv_2 \, dv_3   (7.39)

Since the dynamics along the three dimensions are independent, the probability should be the product of the individual ones. Further, there is nothing to single out any particular Cartesian component; they are all equivalent, so the function f should be the same for each direction. This leads to (7.39). Finally, we have rotational invariance in a free gas with no external potentials (in a large enough volume), so the probability should be a function only of the speed |\vec{v}\,| = \sqrt{v_1^2 + v_2^2 + v_3^2}. Thus we need a function f such that f(v_1) f(v_2) f(v_3) depends only on v_1^2 + v_2^2 + v_3^2. The only solution is for f to be of the form

f(v_i) = C_1 \, e^{-\alpha v_i^2}   (7.40)

for some constant α. The distribution of velocities is thus

F(\vec{v}\,) = C \, e^{-\alpha (v_1^2 + v_2^2 + v_3^2)}   (7.41)

Since the total probability must be one, we can identify the constant as C = (\alpha/\pi)^{3/2}.

We now consider particles colliding with the wall of the container, say the face normal to the x_1-axis, as shown in Fig. 7.2. The momentum imparted to the wall in an elastic collision is 2 m v_1. At any given instant, roughly half of the molecules will have a velocity component v_1 towards the wall.

All of those in a volume v_1 A (where A is the area of the face) will reach the wall in one second, so that the force on the wall due to collisions is

F = \frac{1}{2} \, \frac{N}{V} \, v_1 A \, (2 m v_1) = \frac{N}{V} \, A \, m \, v_1^2   (7.42)

Averaging over v_1 using (7.41), we get the pressure as

p = \frac{F}{A} = \frac{N}{V} \, m \, \langle v_1^2 \rangle = \frac{N}{V} \, \frac{m}{2\alpha}   (7.43)

Figure 7.2: A typical collision with the wall of the container. The velocity component before and after the collision is shown by the dotted line.

Comparing with the ideal gas law, we can identify \alpha as m/(2kT). Thus the distribution of velocities is

F(\vec{v}\,) = \left( \frac{m}{2\pi k T} \right)^{3/2} \exp\left( -\frac{m v^2}{2 k T} \right)   (7.44)

This is in agreement with (7.17).
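Maxwell’s wall-collision argument can also be verified by sampling (a sketch in arbitrary units): drawing velocities from (7.44) and forming (N/V) m ⟨v_1^2⟩ reproduces the ideal gas pressure:

    import numpy as np

    rng = np.random.default_rng(1)
    m, kT, n = 1.0, 1.0, 1.0          # mass, temperature, number density
    v = rng.normal(scale=np.sqrt(kT / m), size=(1_000_000, 3))   # samples of (7.44)

    pressure = n * m * np.mean(v[:, 0]**2)    # p = (N/V) m <v_1^2>
    print(pressure, n * kT)                   # both ~1.0, i.e. p = n k T
    print(np.mean(np.linalg.norm(v, axis=1)), np.sqrt(8 * kT / (np.pi * m)))  # mean speed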

Adapting Maxwell’s argument to a relativistic gas

Maxwell’s argument leading to (7.44) is so simple and elegant that it is tempting to see if there are other situations to which such a symmetry-based reasoning might be applied. The most obvious case would be a gas of free particles for which relativistic effects are taken into account. In this case, \epsilon = \sqrt{m^2 c^4 + c^2 p^2}, and it is clear that e^{-\beta \epsilon } cannot be obtained from a product of the form f(p_1)f(p_2)f(p_3). So, at first glance, Maxwell’s reasoning seems to fail. But this is not quite so, as the following line of reasoning will show.

As a first step, notice that the distribution (7.44) is for a gas which has no overall drift motion. This is seen by noting that

\langle v_i \rangle = \int d^3v \; v_i \, F(\vec{v}\,) = 0   (7.45)

We can include an overall velocity \vec{u} by changing the distribution to

F(\vec{v}\,) = \left( \frac{m}{2\pi k T} \right)^{3/2} \exp\left[ -\frac{m (\vec{v} - \vec{u})^2}{2 k T} \right]   (7.46)

It is easily verified that \langle \vec{v} \rangle = \vec{u}. It is important to include the overall motion in the reasoning, since the symmetry is the full set of Lorentz transformations in the relativistic case and they include velocity-transformations.

Secondly, we note that in the relativistic case we have the 4-momentum p_\mu = (p_0, \vec{p}\,), and what is needed to sum over all states is not the integration over all four components of p_\mu; rather we must integrate with the invariant measure

d\mu(p) = d^4p \;\; \theta(p_0) \, \delta(p_\mu p^\mu - m^2 c^2)   (7.47)

where \theta(p_0) is the step function,

\theta(x) = 1 \;\; \text{for } x > 0, \qquad \theta(x) = 0 \;\; \text{for } x < 0   (7.48)

Further, the \delta-function can be expanded as

\delta(p_\mu p^\mu - m^2 c^2) = \frac{1}{2 p_0} \left[ \delta\!\left( p_0 - \sqrt{\vec{p}^{\,2} + m^2 c^2} \right) + \delta\!\left( p_0 + \sqrt{\vec{p}^{\,2} + m^2 c^2} \right) \right]   (7.49)

The integration over p_0 is trivial because of these equations and we find that

\int d^4p \;\; \theta(p_0) \, \delta(p_\mu p^\mu - m^2 c^2) = \int \frac{d^3p}{2 p_0}, \qquad p_0 = \sqrt{\vec{p}^{\,2} + m^2 c^2}   (7.50)

Now we seek a function which can be written in a form involving the four components of p_\mu, which is also Lorentz invariant, and we consider integrating it with the measure (7.47). The function must also involve the drift velocity in general. In the relativistic case, this is the 4-velocity U^\mu, whose components are

U^0 = \frac{1}{\sqrt{1 - u^2/c^2}}, \qquad \vec{U} = \frac{\vec{u}/c}{\sqrt{1 - u^2/c^2}}   (7.51)

The solution is again an exponential

f(p) = C \, \exp\left( -\beta c \, p_\mu U^\mu \right)   (7.52)

With the measure (7.47), we find

\langle \mathcal{O} \rangle = \frac{\int \frac{d^3p}{2 p_0} \; e^{-\beta c \, p\cdot U} \; \mathcal{O}}{\int \frac{d^3p}{2 p_0} \; e^{-\beta c \, p\cdot U}}   (7.53)

for any observable \mathcal{O}, and where p\cdot U = p_\mu U^\mu. At this stage, if we wish to, we can consider a gas with no overall motion, setting \vec{u} = 0, to get

\langle \mathcal{O} \rangle = \frac{\int \frac{d^3p}{2 p_0} \; e^{-\beta \epsilon_p} \; \mathcal{O}}{\int \frac{d^3p}{2 p_0} \; e^{-\beta \epsilon_p}}, \qquad \epsilon_p = c \sqrt{\vec{p}^{\,2} + m^2 c^2}   (7.54)

This brings us back to a form similar to (7.17), even for the relativistic case.

7.4 The Gibbsian ensembles

The distribution which was obtained in (7.17) gives the most probable number of particles with momentum \vec{p} as n(\vec{p}\,) = e^{-\beta(\epsilon_p - \mu)}. This was obtained by considering the number of ways in which N free particles can be distributed among possible momentum values subject to the constraints of fixed total number of particles and total energy. We want to consider some generalizations of this now. First of all, one can ask whether a similar formula holds if we have an external potential. The barometric formula (2.13) has a similar form, since mgh is the potential energy of a molecule or atom in that context. So, for external potentials one can make a similar argument.

Interatomic or intermolecular forces are not so straightforward. In principle, if we have intermolecular forces, single particle energy values are not easily identified. Further, in some cases, one may even have new molecules formed by combinations or bound states of old ones. Should they be counted as one particle or two or more? So one needs to understand the distribution from a more general perspective. The idea is to consider the physical system of interest as part of a larger system, with exchange of energy with the larger system. This certainly is closer to what is really obtained in most situations. When we study or do experiments with a gas at some given temperature, it is maintained at this temperature by being part of a larger system with which it can exchange energy. Likewise, one could also consider a case where exchange of particles is possible. The important point is that, if equilibrium is being maintained, the exchange of energy or particles with a larger system will not change the distribution in the system under study significantly. Imagine that high energy particles get scattered into the volume of gas under study from the environment. This can raise the temperature slightly. But there will be a roughly equal number of particles of similar energy being scattered out of the volume under study as well. Thus, while we will have fluctuations in energy and particle number, these will be very small compared to the average values in the limit of large numbers of particles. So this approach should be a good way to analyze systems statistically.

Arguing along these lines one can define three standard ensembles for statistical mechanics: the micro-canonical, the canonical and the grand canonical ensembles. The canonical ensemble is the case where we consider the system under study (of fixed volume V ) as one of a large number of similar systems which are all in equilibrium with larger systems with free exchange of energy possible. For the grand canonical ensemble, we also allow free exchange of particles, so that only the average value of the number of particles in the system under study is fixed. The micro-canonical ensemble is the case where we consider a system with fixed energy and fixed number of particles. (One could also consider fixing the values of other conserved quantities, either at the average level (for grand canonical case) or as rigidly fixed values (for the micro-canonical case).)

We still need a formula for the probability for a given distribution of particles in various states. In accordance with the assumption of equal a priori probabilities, we expect the probability to be proportional to the number of states N available to the system subject to the constraints on the conserved quantities. In classical mechanics, the set of possible trajectories for a system of particles is given by the phase space since the latter constitutes the set of possible initial data. Thus the number of states for a system of N particles would be proportional to the volume of the subspace of the phase space defined by the conserved quantities. In quantum mechanics, the number of states would be given in terms of the dimension of the Hilbert space. The semiclassical formula for the counting of states is then

\text{Number of states} = \int \frac{d^{3N}x \; d^{3N}p}{(2\pi\hbar)^{3N}}   (7.55)

In other words, a cell of volume (2\pi\hbar)^{3N} in phase space corresponds to a state in the quantum theory. (This holds for large numbers of states; in other words, it is semiclassical.) This gives a more precise meaning to the counting of states via the phase volume. In the microcanonical ensemble, the total number of states with total energy between E and E + \Delta E would be

W(E) = \int_{E \,\le\, H_N \,\le\, E + \Delta E} \frac{d^{3N}x \; d^{3N}p}{(2\pi\hbar)^{3N}}   (7.56)

where H_N is the Hamiltonian of the N-particle system. The entropy is then defined by Boltzmann’s formula as S = k \log W. For a Hamiltonian of free particles, H_N = \sum_i p_i^2/2m, this can be explicitly calculated and leads to the formulae we have already obtained. However, as explained earlier, this is not easy to do explicitly when the particles are interacting. Nevertheless, the key idea is that the required phase volume is proportional to the exponential of the entropy,

W \propto e^{S/k}   (7.57)

This idea can be carried over to the canonical and grand canonical ensembles.

In the canonical ensemble, we consider the system of interest as part of a much larger system, with, say, N_{\rm tot} particles in total. The total number of available states is then

W_{\rm tot} = \int_{E \,\le\, H_{\rm tot} \,\le\, E + \Delta E} \frac{d^{3N_{\rm tot}}x \; d^{3N_{\rm tot}}p}{(2\pi\hbar)^{3N_{\rm tot}}}   (7.58)

The idea is then to consider integrating over N_{\rm tot} - N of the particles to obtain the phase volume for the remaining N, viewed as a subsystem. We refer to this subsystem of interest as system 1, while the particles which are integrated out will be called system 2. If the total energy is E, we take system 1 to have energy E_1, with system 2 having energy E - E_1. Of course, E_1 is not fixed, but can vary as there can be some amount of exchange of energy between the two systems. Integrating out system 2 leads to

W_{\rm tot} = \int \frac{d^{3N}x \; d^{3N}p}{(2\pi\hbar)^{3N}} \; \exp\left[ \frac{S_2(E - H_N)}{k} \right]   (7.59)

We then expand S_2(E - H_N) as

S_2(E - H_N) \approx S_2(E) - H_N \, \frac{\partial S_2}{\partial E} + \cdots = S_2(E) - \frac{H_N}{T} + \cdots   (7.60)

where we have used the thermodynamic formula \partial S/\partial E = 1/T for the temperature. The temperature is the same for system 1 and the larger system (system 1 + system 2) of which it is a part. H_N is the Hamiltonian of the N particles in system 1. This shows that, as far as the system under study is concerned, we can take the probability as

P = C \, e^{-H_N/kT}   (7.61)

Here C is a proportionality factor which can be set by the normalization requirement that the total probability (after integration over all remaining variables) is 1. (The factor e^{S_2(E)/k} from (7.60) can be absorbed into the normalization as it is a constant independent of the phase space variables for the particles in system 1. Also, the subscript 1 referring to the system under study is now redundant and has been removed.)

There are higher powers in the Taylor expansion in (7.60) which have been neglected. The idea is that these are very small as H_N is small compared to the energy of the total system. In doing the integration over the remaining phase space variables, in principle, one could have regions with H_N comparable to E, and the neglect of the higher order terms may not seem justified. However, the formula (7.61) in terms of the energy is sharply peaked around a certain average value with fluctuations being very small, so that the regions with H_N comparable to E will have exponentially vanishing probability. This is the ultimate justification for neglecting the higher terms in the expansion (7.60). We can a posteriori verify this by calculating the mean square fluctuation in the energy value which is given by the probability distribution (7.61). This will be taken up shortly.

Turning to the grand canonical case, when we allow exchange of particles as well, we get

S_2(E - H_N, N_{\rm tot} - N) \approx S_2(E, N_{\rm tot}) - \frac{H_N}{T} + \frac{\mu N}{T} + \cdots   (7.62)

By a similar reasoning as in the case of the canonical ensemble, we find, for the grand canonical ensemble,

P = C \, e^{-(H_N - \mu N)/kT}   (7.63)

More generally, let us denote by N^\alpha an additively conserved quantum number or observable other than the energy, with a corresponding multiplier (or chemical potential) \mu_\alpha. The general formula for the probability distribution is then

P = C \, \exp\left[ -\beta \left( H_N - \sum_\alpha \mu_\alpha N^\alpha \right) \right]   (7.64)

Even though we write the expression for N particles, it should be kept in mind that averages involve a summation over N as well. Thus the average of some observable \mathcal{O} is given by

\langle \mathcal{O} \rangle = \frac{\sum_N \int \frac{d^{3N}x \, d^{3N}p}{(2\pi\hbar)^{3N}} \; \mathcal{O} \; e^{-\beta (H_N - \mu N)}}{\sum_N \int \frac{d^{3N}x \, d^{3N}p}{(2\pi\hbar)^{3N}} \; e^{-\beta (H_N - \mu N)}}   (7.65)

Since the normalization factor is fixed by the requirement that the total probability is 1, it is convenient to define the "partition function". In the canonical case, it is given by

Q_N = \frac{1}{N!} \int \frac{d^{3N}x \; d^{3N}p}{(2\pi\hbar)^{3N}} \; e^{-\beta H_N}   (7.66)

We have introduced an extra factor of 1/N!. This is the Gibbs factor needed for resolving the Gibbs paradox; it is natural in the quantum counting of states. Effectively, because the particles are identical, permutation of particles should not be counted as a new configuration, so the phase volume must be divided by N! to get the "correct" counting of states. We will see that even this is not entirely adequate when full quantum effects are taken into account. In the grand canonical case, the partition function is defined by

Z = \sum_N e^{\beta \mu N} \, Q_N = \sum_N \frac{1}{N!} \int \frac{d^{3N}x \; d^{3N}p}{(2\pi\hbar)^{3N}} \; e^{-\beta (H_N - \mu N)}   (7.67)

Using the partition functions in place of the normalization constant C, and including the Gibbs factor, we find the probability of a given configuration as

P = \frac{1}{Q_N} \, \frac{1}{N!} \, \frac{e^{-\beta H_N}}{(2\pi\hbar)^{3N}}   (7.68)

while for the grand canonical case we have

P = \frac{1}{Z} \, \frac{1}{N!} \, \frac{e^{-\beta (H_N - \mu N)}}{(2\pi\hbar)^{3N}}   (7.69)

The partition function contains information about the thermodynamic quantities. Notice that, in particular,

\bar{N} \equiv \langle N \rangle = \frac{1}{\beta} \, \frac{\partial \log Z}{\partial \mu}, \qquad U \equiv \langle H_N \rangle = \mu \bar{N} - \frac{\partial \log Z}{\partial \beta}\bigg|_{\mu}   (7.70)

We can also define the average value of the entropy (not the entropy of the configuration corresponding to a particular way of distributing particles among states, but the average over the distribution) as

S = -k \, \langle \log P \rangle = k \left[ \log Z + \beta U - \beta \mu \bar{N} \right]   (7.71)

While the averages U and \bar{N} do not depend on the factors of 1/N! and (2\pi\hbar)^{3N}, the entropy does. This is why we chose the normalization factors in (7.66) to be what they are.

Consider the case when we have only one conserved quantity, the particle number, in addition to the energy. In this case, (7.71) can be written as

U - T S - \mu \bar{N} = -k T \log Z   (7.72)

Comparing this with the definition of the Gibbs free energy in (5.6) and its expression in terms of \mu in (5.16), we find that we can identify

p V = k T \log Z   (7.73)

This gives the equation of state in terms of the partition function.

These equations, (7.67) and (7.70)–(7.73), are very powerful. Almost all of the thermodynamics we have discussed before is contained in them. Further, they can be used to calculate various quantities, including corrections due to interactions among particles, etc. As an example, we can consider the calculation of corrections to the equation of state in terms of the intermolecular potential.
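Before turning to interaction corrections, the content of (7.70) and (7.73) can be exercised on the free ideal gas, for which log Z = e^{\beta\mu} Q_1 with Q_1 the single-particle partition function. The sketch below (the temperature, volume and chemical potential are assumed values, chosen only for illustration) differentiates log Z numerically and recovers \bar{N}, U = (3/2)\bar{N}kT and pV = \bar{N}kT:

    import numpy as np

    k = 1.380649e-23; hbar = 1.054571817e-34; m = 6.64e-27   # helium-like mass

    def logZ(beta, mu, V):
        # Ideal gas: log Z = e^{beta mu} Q_1, with Q_1 = V (m / 2 pi beta hbar^2)^{3/2}
        return np.exp(beta * mu) * V * (m / (2 * np.pi * beta * hbar**2)) ** 1.5

    T, V, mu = 300.0, 1e-3, -0.3 * 1.602e-19
    beta, d = 1.0 / (k * T), 1e-6

    # Numerical derivatives realizing (7.70) and (7.73)
    Nbar = (logZ(beta, mu * (1 + d), V) - logZ(beta, mu * (1 - d), V)) / (2 * d * mu) / beta
    U = mu * Nbar - (logZ(beta * (1 + d), mu, V) - logZ(beta * (1 - d), mu, V)) / (2 * d * beta)
    pV = k * T * logZ(beta, mu, V)

    print(Nbar)
    print(U / (1.5 * Nbar * k * T), pV / (Nbar * k * T))   # both ratios ~ 1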

7.5 Equation of state

The equation of state, as given by (7.73), requires the computation of the grand canonical partition function. We will consider the case where the only conserved quantities are the Hamiltonian and the number of particles. The grand canonical partition function can then be written as

Z = \sum_{N} z^N \, Q_N, \qquad z = e^{\beta\mu}   (7.74)

where Q_N is the canonical partition function for a fixed number of particles, given in (7.66). The variable z is called the fugacity. The easiest way to proceed to the equation of state is to consider an expansion of \log Z in powers of z. This is known as the cumulant expansion and, explicitly, it takes the form

\log Z = \sum_{n=1}^{\infty} b_n \, z^n   (7.75)

where the first few coefficients are easily worked out as

b_1 = Q_1, \qquad b_2 = Q_2 - \frac{1}{2} Q_1^2, \qquad b_3 = Q_3 - Q_1 Q_2 + \frac{1}{3} Q_1^3   (7.76)

The Hamiltonian, for N particles, has the form

H_N = \sum_{i=1}^{N} \frac{p_i^2}{2m} + \sum_{i < j} V(\vec{x}_i, \vec{x}_j) + \cdots   (7.77)

where we have included a general two-particle interaction and the ellipsis stands for possible 3-particle and higher interactions. For N = 1 we just have the first term. The momentum integrations factorize and, if there were no interactions, we would get Q_N = Q_1^N / N!. In that case all cumulants vanish except for b_1. We can explicitly obtain

Q_1 = \int \frac{d^3x \, d^3p}{(2\pi\hbar)^3} \; e^{-\beta p^2/2m} = V \left( \frac{m}{2\pi\beta\hbar^2} \right)^{3/2}   (7.78)

For Q_2, we find

Q_2 = \frac{1}{2} \int \frac{d^3x_1 \, d^3x_2 \, d^3p_1 \, d^3p_2}{(2\pi\hbar)^6} \; e^{-\beta \left[ \frac{p_1^2 + p_2^2}{2m} + V(\vec{x}_1, \vec{x}_2) \right]}
= \frac{1}{2} \left( \frac{m}{2\pi\beta\hbar^2} \right)^{3} V \int d^3x \; e^{-\beta V(\vec{x}\,)}   (7.79)

where, in the second line, we have taken the potential to depend only on the difference \vec{x} = \vec{x}_1 - \vec{x}_2 and carried out the integration over the center of mass coordinate. Thus

b_2 = Q_2 - \frac{1}{2} Q_1^2 = \frac{1}{2} \left( \frac{m}{2\pi\beta\hbar^2} \right)^{3} V \int d^3x \left( e^{-\beta V(\vec{x}\,)} - 1 \right)   (7.80)

Using the cumulant expansion to this order, we find \log Z \approx Q_1 z + b_2 z^2, and the average number of particles in the volume is given by

\bar{N} = z \, \frac{\partial}{\partial z} \log Z = Q_1 z + 2 b_2 z^2   (7.81)

which can be solved for z as

z = \frac{\bar{N}}{Q_1} - 2 b_2 \, \frac{\bar{N}^2}{Q_1^3} + \cdots   (7.82)

If we ignore the b_2-term, this relation is the same as what we found in (7.21), with the addition of the correction due to the factors of (2\pi\hbar)^3,

e^{\beta\mu} = \frac{\bar{N}}{V} \left( \frac{2\pi\beta\hbar^2}{m} \right)^{3/2}   (7.83)

Using (7.82) (with the b_2-term) back in \log Z = \beta p V and the expression (7.80) for b_2, we get

\frac{p}{kT} = \frac{\bar{N}}{V} \left[ 1 + B_2(T) \, \frac{\bar{N}}{V} + \cdots \right], \qquad B_2(T) = -\frac{1}{2} \int d^3x \left( e^{-\beta V(\vec{x}\,)} - 1 \right)   (7.84)

These formulae show explicitly the first correction to the ideal gas equation of state. The quantity B_2(T) is a function of temperature; it is called the second virial coefficient and can be calculated, once a potential is known, by carrying out the integration. Even for complicated potentials it can be done, at least numerically. As a simple example, consider a hard sphere approximation to the interatomic potential,

V(r) = \infty \;\; \text{for } r < r_0, \qquad V(r) = 0 \;\; \text{for } r > r_0   (7.85)

In this case the integral is easily done to obtain B_2 = \frac{2\pi}{3} r_0^3. This is independent of the temperature. One can consider more realistic potentials for better approximations to the equation of state.
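The integral defining B_2(T) is easy to evaluate numerically for any spherically symmetric potential (a sketch; the hard-sphere radius below is an assumed illustrative value, and the infinite wall is represented by a very large energy):

    import numpy as np

    def B2(potential, T, rmax=3e-9, npts=200_000):
        # B2(T) = -(1/2) Int d^3x (e^{-V(r)/kT} - 1) for a central potential, cf. (7.84)
        k = 1.380649e-23
        r = np.linspace(1e-13, rmax, npts)
        integrand = (np.exp(-potential(r) / (k * T)) - 1.0) * 4 * np.pi * r**2
        return -0.5 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

    r0 = 3e-10   # hard-sphere radius (assumed value), in meters
    hard_sphere = lambda r: np.where(r < r0, 1e-18, 0.0)   # "infinite" wall as a huge energy

    print(B2(hard_sphere, T=300.0))        # ~5.7e-29 m^3, independent of T
    print(2 * np.pi / 3 * r0**3)           # analytic hard-sphere value (2 pi/3) r0^3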

The van der Waals equation, which we considered earlier, is, at best, a model for the equation of state incorporating some features of the interatomic forces. Here we have a more systematic way to calculate with realistic interatomic potentials. Nevertheless, there is a point of comparison which is interesting. If we expand the van der Waals equation in the form (7.84), it has B_2 = b - a/(kT) (with the constants a and b counted per particle); the b-term is independent of the temperature. Comparing with the hard sphere repulsion at short distances, we see how something like the excluded volume effect can arise.

We have considered the first corrections due to the interatomic forces. More generally, the equation of state takes the form

\frac{p}{kT} = \frac{\bar{N}}{V} \left[ 1 + B_2 \left( \frac{\bar{N}}{V} \right) + B_3 \left( \frac{\bar{N}}{V} \right)^2 + \cdots \right]   (7.86)

This is known as the virial expansion, with B_n referred to as the n-th virial coefficient. These are in general functions of the temperature; they can be calculated by continuing the cumulant expansion to higher order and doing the integrations needed for Q_N. In practice such a calculation becomes more and more difficult as n increases. The virial expansion (7.86) is in powers of the density and involves integrations of powers of (e^{-\beta V} - 1), where V is the potential energy of the interaction. Thus for low densities and interaction strengths small compared to kT, truncation of the series at some finite order is a good approximation. So only a few of the virial coefficients are usually calculated.

It is useful to calculate corrections to some of the other quantities as well. From the identification of \bar{N} in (7.81) and from (7.70), we can find the internal energy as

U = \frac{3}{2} \bar{N} k T + \frac{\bar{N}^2}{2V} \int d^3x \; V(\vec{x}\,) \, e^{-\beta V(\vec{x}\,)} + \cdots   (7.87)

In a similar way, the entropy is found to be

S = \bar{N} k \left[ \frac{5}{2} + \log\left\{ \frac{V}{\bar{N}} \left( \frac{m k T}{2\pi\hbar^2} \right)^{3/2} \right\} \right] + k \, \frac{\bar{N}^2}{2V} \int d^3x \left[ e^{-\beta V} - 1 + \beta V e^{-\beta V} \right] + \cdots   (7.88)

Since the number of pairs of particles is \bar{N}(\bar{N}-1)/2 \approx \bar{N}^2/2, the correction to the internal energy is easily understood as an average of the potential energy. Also, the first set of terms in the entropy reproduces the Sackur-Tetrode formula (7.32) for an ideal gas.

7.6 Fluctuations

We will now calculate the fluctuations in the values of the energy and the number of particles as given by the canonical and grand canonical ensembles. First consider the particle number. From the definition, we have

\bar{N} = \frac{1}{\beta} \frac{\partial \log Z}{\partial \mu} = \frac{1}{Z} \sum_N \frac{N}{N!} \int \frac{d^{3N}x \, d^{3N}p}{(2\pi\hbar)^{3N}} \; e^{-\beta (H_N - \mu N)}   (7.89)

If we calculate \bar{N} from the partition function as a function of \mu and T, we can differentiate it with respect to \mu to get

\frac{1}{\beta} \frac{\partial \bar{N}}{\partial \mu} = \frac{1}{\beta^2} \frac{\partial^2 \log Z}{\partial \mu^2} = \langle N^2 \rangle - \bar{N}^2   (7.90)

The Gibbs free energy is given by G = \mu \bar{N} and it also obeys

dG = -S \, dT + V \, dp + \mu \, d\bar{N}   (7.91)

These follow from (5.16) and (5.10). Since T is fixed in the differentiations we are considering, this gives

\bar{N} \, d\mu = V \, dp \qquad (\text{at fixed } T)   (7.92)

The equation of state gives p as a function of the number density \rho = \bar{N}/V, at fixed temperature. Thus

\left( \frac{\partial p}{\partial \bar{N}} \right)_{T,V} = \frac{1}{V} \left( \frac{\partial p}{\partial \rho} \right)_T   (7.93)

Using this in (7.92), we get

\left( \frac{\partial \mu}{\partial \bar{N}} \right)_{T,V} = \frac{1}{\bar{N}} \left( \frac{\partial p}{\partial \rho} \right)_T   (7.94)

From (7.90), we now see that the mean square fluctuation in the number is given by

\frac{\langle N^2 \rangle - \bar{N}^2}{\bar{N}^2} = \frac{k T}{\bar{N}} \, \frac{1}{(\partial p/\partial \rho)_T}   (7.95)

This goes to zero as \bar{N} becomes large, in the thermodynamic limit. An exception could occur if (\partial p/\partial \rho) becomes very small. This can happen at a second order phase transition point. The result is that fluctuations in numbers become very large at the transition. The theoretical treatment of such a situation needs more specialized techniques.

We now turn to energy fluctuations in the canonical ensemble. For this we consider N to be fixed and write

\langle H^2 \rangle - \langle H \rangle^2 = \frac{\partial^2 \log Q_N}{\partial \beta^2} = -\frac{\partial U}{\partial \beta}   (7.96)

The derivative of U with respect to the temperature gives the specific heat, so we find

\langle H^2 \rangle - \langle H \rangle^2 = k T^2 \, C_v   (7.97)

Once again, the fluctuations are small compared to the average value as N becomes large.
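For a monatomic ideal gas, (7.97) translates into a concrete estimate of how small the relative energy fluctuations are (a sketch; the particle numbers are illustrative):

    import numpy as np

    # Monatomic ideal gas: U = (3/2) N k T, C_v = (3/2) N k, so by (7.97)
    # the relative fluctuation is sqrt(k T^2 C_v) / U = sqrt(2 / 3N).
    for N in (100, 1e10, 6.022e23):
        print(f"N = {N:.3g}: dE/U ~ {np.sqrt(2.0 / (3.0 * N)):.2e}")
    # For N of the order of the Avogadro number this is ~1e-12, which is why
    # the canonical and microcanonical descriptions agree in the thermodynamic limit.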

7.7 Internal degrees of freedom

Many particles, such as atoms and molecules, have internal degrees of freedom. These can be due to atomic energy levels, or to vibrational and rotational states for molecules, etc. Very often one has to consider mixtures of particles where they can be in different internal states as well. For example, in a sodium lamp where the atoms are at a high temperature, some of the atoms are in an electronic excited state while some are in the ground state. Of course, the translational degrees of freedom are important as well. In principle, at the classical level, the internal dynamics has its own phase space and, by including it in the integration measure for the partition function, we can have a purely classical statistical mechanics of such systems. However, this can be grossly inadequate. Even though in many situations the translational degrees of freedom can be treated classically, it is necessary to take account of the discreteness of states and energy levels for the internal degrees of freedom. The question is: How do we do this?

The simplest strategy is to consider the particles in different internal states as different species of particles. For example, consider a gas of, say, argon atoms which can be in the ground state (call them A) and in an excited state (call them A*). The partition function would thus be

Z = \sum_{N_A, N_{A^*}} e^{\beta (\mu_A N_A + \mu_{A^*} N_{A^*})} \; Q_{N_A N_{A^*}}   (7.98)

If we ignore interatomic forces, considering a gas of free particles, Q_{N_A N_{A^*}} = Q_{N_A} \, Q_{N_{A^*}}, so that

Z = Z_A \, Z_{A^*}, \qquad Z_X = \sum_{N_X} e^{\beta \mu_X N_X} \, Q_{N_X} = \exp\left( e^{\beta\mu_X} \, Q_1^{(X)} \right)   (7.99)

The single particle partition function which we have used so far is of the form

Q_1 = \int \frac{d^3x \, d^3p}{(2\pi\hbar)^3} \; e^{-\beta p^2/2m} = V \left( \frac{m}{2\pi\beta\hbar^2} \right)^{3/2}   (7.100)

This is no longer good enough, since A and A* differ in the energies of their internal states and this is not reflected in using just the kinetic energy p^2/2m. Because of the equivalence of mass and energy, this means that m_A and m_{A^*} are different. From the relativistic formula

\epsilon = \sqrt{m^2 c^4 + c^2 p^2} \approx m c^2 + \frac{p^2}{2m} + \cdots   (7.101)

we see that this difference is taken account of if we include the rest energy m c^2 in the single-particle energy. (Here c is the speed of light in vacuum.) Thus we should use the modified formula

Q_1 = V \left( \frac{m}{2\pi\beta\hbar^2} \right)^{3/2} e^{-\beta m c^2}   (7.102)

even for nonrelativistic calculations. If there are degenerate internal states, the mass would be the same, so in Q_1 we could get a multiplicity factor. To see how this arises, consider a gas of particles each of which can have g internal states of the same energy (or mass). Such a situation is realized, for example, by a particle of spin s, with g = 2s + 1. If we treat each internal state as a separate species of particle, the partition function would be

Z = \prod_{r=1}^{g} \exp\left( e^{\beta\mu_r} \, Q_1 \right)   (7.103)

Each of the \mu_r will specify the average number in the distribution for each internal state. However, in many cases, we do not specify the numbers for each internal state; only the total average number is macroscopically fixed for the gas. Thus all the \mu_r are the same, giving a single factor e^{\beta\mu} in (7.103). Correspondingly, if we have no interparticle interactions, we get

Z = \exp\left( g \, e^{\beta\mu} \, Q_1 \right)   (7.104)

where we have used the fact that all the masses are the same. We see the degeneracy factor g explicitly.

7.8 Examples

7.8.1 Osmotic pressure

An example of the use of the idea of the partition function in a very simple way is provided by the osmotic pressure. Here one considers a vessel partitioned into two regions, say, I and II, with a solvent (labeled A) on one side and a solution of the solvent plus a solute (labeled B) on the other side. The separation is via a semipermeable membrane which allows the solvent molecules to pass through either way, but does not allow the solute molecules to pass through. Thus the solute molecules stay in region II, as shown in Fig. 7.3. When such a situation is set up, the solvent molecules pass back and forth and eventually achieve equilibrium, with the average number of solvent molecules on each side not changing any further. What is observed is that the pressure p_II in the solution is higher than the pressure p_I in the solvent in region I. Once equilibrium is achieved, there is no further change of volume or temperature either, so we can write the equilibrium condition for the solvent as

\mu_A^{\rm I} = \mu_A^{\rm II}   (7.105)

Correspondingly, we have z_A^{\rm I} = z_A^{\rm II} for the fugacities. The partition function for region II has the form

Z^{\rm II} = \sum_{N_A, N_B} z_A^{N_A} \, z_B^{N_B} \; Q_{N_A N_B}   (7.106)

Here Z_A = \sum_{N_A} z_A^{N_A} Q_{N_A} is the partition function for just the solvent. For simplicity, let us take the volumes of the two regions to be the same. Then we may write

Z_A^{\rm II} = Z_A^{\rm I} \equiv Z_A   (7.107)

even though this occurs in the formula for the full partition function in region II, since z_A is the same for regions I and II. Going back to Z^{\rm II}, we expand \log Z^{\rm II} in powers of z_B, keeping only the lowest order term, which is adequate for dilute solutions. Thus

\log Z^{\rm II} \approx \log Z_A + z_B \, \frac{\partial \log Z^{\rm II}}{\partial z_B}\bigg|_{z_B = 0}   (7.108)

The derivative of the partition function with respect to z_B is related to \bar{N}_B, as in (7.70), so that

z_B \, \frac{\partial \log Z^{\rm II}}{\partial z_B} = \bar{N}_B   (7.109)

Further, \log Z_A is given by \beta p_{\rm I} V. Using these results, (7.108) gives

\beta p_{\rm II} V = \beta p_{\rm I} V + \bar{N}_B, \qquad \text{i.e.,} \qquad p_{\rm II} - p_{\rm I} = n_B \, k T   (7.110)

where n_B = \bar{N}_B / V is the number density of the solute. The pressure difference p_{\rm II} - p_{\rm I} is called the osmotic pressure.
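Equation (7.110) is the van ’t Hoff relation for osmotic pressure; a quick numerical illustration (a sketch, with an assumed solute concentration):

    k = 1.380649e-23           # J/K
    N_A = 6.02214076e23        # Avogadro number

    T = 298.0                  # K
    n_B = 0.1 * N_A / 1e-3     # 0.1 mol of solute per litre, as a number density in m^-3

    print((n_B * k * T) / 1e5, "bar")   # (7.110): osmotic pressure ~ 2.5 bar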

7.8.2 Equilibrium of a chemical reaction

Here we consider a general chemical reaction of the form

\nu_1 A_1 + \nu_2 A_2 + \cdots \;\rightleftharpoons\; \nu_1' B_1 + \nu_2' B_2 + \cdots   (7.111)

If the substances can be approximated as ideal gases, the partition function is given by

Z = \prod_i \exp\left( e^{\beta\mu_i} \, Q_1^{(i)} \right), \qquad Q_1^{(i)} = g_i \, V \left( \frac{m_i}{2\pi\beta\hbar^2} \right)^{3/2} e^{-\beta m_i c^2}   (7.112)

For the individual chemical potentials, we can use the general formula (7.83), but with the correction due to the rest-energy factor as in (7.102), since we have different species of particles here. Thus

\mu_i = m_i c^2 + k T \log\left[ \frac{n_i}{g_i} \left( \frac{2\pi\hbar^2}{m_i k T} \right)^{3/2} \right]   (7.113)

where n_i = \bar{N}_i / V is the number density of species i. The condition of equilibrium of the reaction is given as \sum_i \nu_i \, \mu_{A_i} = \sum_j \nu_j' \, \mu_{B_j}. Using (7.113), this becomes

\sum_i \nu_i \left\{ m_{A_i} c^2 + k T \log\left[ \frac{n_{A_i}}{g_{A_i}} \left( \frac{2\pi\hbar^2}{m_{A_i} k T} \right)^{3/2} \right] \right\} = \sum_j \nu_j' \left\{ m_{B_j} c^2 + k T \log\left[ \frac{n_{B_j}}{g_{B_j}} \left( \frac{2\pi\hbar^2}{m_{B_j} k T} \right)^{3/2} \right] \right\}   (7.114)

The total pressure of the mixture of the substances is given from the ideal gas law as

p = \left( \sum_i n_{A_i} + \sum_j n_{B_j} \right) k T   (7.115)

So if we define the concentrations,

c_{A_i} = \frac{n_{A_i}}{n_{\rm tot}}, \qquad c_{B_j} = \frac{n_{B_j}}{n_{\rm tot}}, \qquad n_{\rm tot} = \sum_i n_{A_i} + \sum_j n_{B_j}   (7.116)

then we can rewrite (7.114) as

\frac{\prod_j c_{B_j}^{\nu_j'}}{\prod_i c_{A_i}^{\nu_i}} = K \;\propto\; e^{\epsilon/kT}, \qquad \epsilon = \Big( \sum_i \nu_i m_{A_i} - \sum_j \nu_j' m_{B_j} \Big) c^2   (7.117)

With our interpretation of the masses as rest energies, we see that ε is the heat of reaction, i.e., the total energy released by the reaction. ε is positive for an exothermic reaction and negative for an endothermic reaction. K, in (7.117), is known as the reaction constant and is a function only of the temperature (and the masses of the molecules involved, but these are fixed once a reaction is chosen). The condition (7.117) on the concentrations of the reactants is called the law of mass action.

7.8.3 Ionization equilibrium

Another interesting example is provided by ionization equilibrium, which is of interest in plasmas and in astrophysical contexts. Consider the ionization reaction of an atom

X \;\rightleftharpoons\; X^+ + e^-   (7.118)

There is a certain amount of energy needed to ionize the atom X. Treating the particles involved as different species with possible internal states, we can use (7.112) to write

Z = \exp\left( e^{\beta\mu_X} Q_1^{(X)} + e^{\beta\mu_{X^+}} Q_1^{(X^+)} + e^{\beta\mu_e} Q_1^{(e)} \right), \qquad Q_1^{(i)} = g_i \, V \left( \frac{m_i}{2\pi\beta\hbar^2} \right)^{3/2} e^{-\beta m_i c^2}   (7.119)

By differentiation of \log Z with respect to the chemical potentials, we find, for each species,

n_i \equiv \frac{\bar{N}_i}{V} = g_i \left( \frac{m_i k T}{2\pi\hbar^2} \right)^{3/2} e^{\beta (\mu_i - m_i c^2)}   (7.120)

The condition for equilibrium is the same as \mu_X = \mu_{X^+} + \mu_e. Using (7.120), this becomes

\frac{n_{X^+} \, n_e}{n_X} = \frac{g_{X^+} \, g_e}{g_X} \left( \frac{m_{X^+} m_e}{m_X} \, \frac{k T}{2\pi\hbar^2} \right)^{3/2} e^{-\beta (m_{X^+} + m_e - m_X) c^2}   (7.121)

The mass of the atom X is almost equal to the sum of the masses of X^+ and the electron; the difference is the binding energy of the electron in X. This is the ionization energy E_I, with (m_{X^+} + m_e - m_X) c^2 = E_I. Using this, (7.121) can be rewritten as

\frac{n_{X^+} \, n_e}{n_X} = \frac{g_{X^+} \, g_e}{g_X} \left( \frac{m_e k T}{2\pi\hbar^2} \right)^{3/2} e^{-E_I / k T}   (7.122)

This is known as Saha’s equation for ionization equilibrium. It relates the number density of the ionized atom to that of the neutral atom. (While the mass difference is important for the exponent, it is generally a good approximation to take m_{X^+} \approx m_X in the prefactor, which is why the ratio m_{X^+}/m_X has been set to one in (7.122).) The degeneracy for the electron states, namely g_e, is due to the spin degrees of freedom, so g_e = 2. The degeneracies g_X and g_{X^+} will depend on the atom and the energy levels involved.

The number densities can be related to the pressure by the equation of state; they are also important in determining the intensities of spectral lines. Thus by observation of spectral lines from the photospheres of stars, one can estimate the pressures involved.
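As a concrete application of (7.122) (a sketch; pure hydrogen is assumed, with g_e = 2, g_{X^+} = 1, g_X = 2, an assumed total density, and charge neutrality giving n_e = n_{X^+}):

    import numpy as np

    k = 1.380649e-23; hbar = 1.054571817e-34
    m_e = 9.1093837e-31; eV = 1.602176634e-19
    E_I = 13.6 * eV            # ionization energy of hydrogen

    def saha_rhs(T, g_plus=1.0, g_e=2.0, g_0=2.0):
        # Right-hand side of (7.122): n_+ n_e / n_0, in m^-3
        return (g_plus * g_e / g_0) * (m_e * k * T / (2 * np.pi * hbar**2)) ** 1.5 \
            * np.exp(-E_I / (k * T))

    def ionized_fraction(T, n_total):
        # With n_e = n_+ and x = n_+ / n_total, (7.122) gives x^2/(1-x) = s,
        # where s = saha_rhs / n_total; solve the quadratic for x.
        s = saha_rhs(T) / n_total
        return 0.5 * (-s + np.sqrt(s**2 + 4 * s))

    n_total = 1e20             # total hydrogen number density, m^-3 (assumed)
    for T in (5000.0, 10000.0, 20000.0):
        print(T, ionized_fraction(T, n_total))
    # The gas goes from almost neutral to almost fully ionized over this range,
    # even though kT is far below 13.6 eV, because of the large phase space prefactor.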

This text is licensed under a CC BY-NC-ND 4.0 license.