Statistical mechanics and thermodynamics

(The original version of this course was based on a summary by Lior Yosub and Liana Diesendruck. A rewrite is in progress that brings in more detail.)

Introduction

The properties of a macroscopic physical system ultimately derive from the properties of its fundamental constituents. Since an exact solution for many-particle systems with non-trivial interactions is out of the question, one is forced to resort to statistical methods to make progress. Statistical methods can, of course, only be used when we know the probabilities for finding the system in a particular state. The fundamental postulate of statistical physics is that all accessible states of an isolated system are a priori equally likely. Here "state" means the exact quantum state of the system, which thus provides a full microscopic description of the system. This postulate can be motivated by Liouville's theorem and Boltzmann's H-theorem, which we'll discuss later in this course.

A macroscopic property of a system is, in general, obtained from the exact state after a coarse graining. Properties such as the internal energy, temperature, and pressure are examples of such macroscopic properties, and they all have to be understood as (derived from) coarse grained variables. In principle, however, one can make any arbitrary choice for the macroscopic properties. For any choice, one defines the so-called "macrostate" of the system to be the set of these variables. Thermodynamics is the study of the relations between the macroscopic properties of the system. In thermodynamics, the word "state" is often used for "macrostate". To avoid confusion, it is customary to call the exact physical state of the system the "microstate".

A system is said to be in thermal equilibrium if its macrostate is time independent. A major part of statistical mechanics is the study of systems in or very close to thermal equilibrium. It follows from the fundamental postulate that, lacking any information about a system, the most likely macrostate is the one to which the largest number of microstates corresponds. Any initial macrostate should thus evolve towards such a macrostate, which is therefore the thermal equilibrium state. This conclusion is known as the Second Law of Thermodynamics.

In the next section we'll apply the methods of statistical mechanics to isolated systems characterised by their total energy.

The Micro-Canonical Ensemble

Since the total energy of an isolated system is conserved, all accessible microstates have the same energy. The macroscopic variable $E$ corresponding to the total energy has to be understood as a coarse grained variable. We define this as follows: when $E$ is specified, the exact energy of the system can lie in the range between $E$ and $E + \delta E$, where $\delta E$ is assumed to be small on a macroscopic scale. We'll see later that the values of macroscopic quantities per unit mass or unit volume in the limit of infinite system size do not depend on the choice of $\delta E$.

We define the function $\Omega(E)$ to be the number of microstates with an energy in the range between $E$ and $E + \delta E$. All these states are equally likely in thermal equilibrium. One can then compute macroscopic properties of systems by averaging over all these states. Put differently, one can imagine the so-called "micro-canonical ensemble": an ensemble of $\Omega(E)$ copies of the system, one in each of the accessible microstates. Averaging over this ensemble yields the macroscopic thermal variables.
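To make the counting concrete, here is a minimal sketch in Python (an illustration added to these notes, assuming a hypothetical toy system of $N$ independent two-level units, each contributing energy $0$ or $\epsilon$): a macrostate with $n$ excited units has total energy $E = n\epsilon$ and $\Omega(E) = \binom{N}{n}$ microstates.

```python
# Toy microstate counting (illustrative sketch, not part of the original notes):
# N two-level units, each with energy 0 or eps. A macrostate with n excited
# units has energy E = n*eps and Omega(E) = C(N, n) microstates.
from math import comb, log

k_B = 1.380649e-23  # Boltzmann's constant in J/K

def omega(N, n):
    """Number of microstates with exactly n of the N units excited."""
    return comb(N, n)

def entropy(N, n):
    """Microcanonical entropy S = k ln Omega of this macrostate."""
    return k_B * log(omega(N, n))

N = 100
for n in (10, 50, 90):
    print(f"n = {n:3d}: Omega = {omega(N, n):.3e}, S = {entropy(N, n):.3e} J/K")
```

Note how sharply $\Omega$ peaks around $n = N/2$; this sharpness is the mechanism behind the very sharp maxima discussed below.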

Definition of temperature

Now consider two such isolated systems in internal equilibrium, but isolated from each other. They have internal energies $E_1$ and $E_2$. The numbers of microstates the systems can be in are then $\Omega_1(E_1)$ and $\Omega_2(E_2)$, respectively. Suppose we bring the two systems into contact, such that energy can flow between the two systems. The total internal energy $E = E_1 + E_2$ of the combined system will then be conserved. The total number of accessible states the combined system can be in is clearly given by:

$$\Omega(E) = \sum_{E_1} \Omega_1(E_1)\,\Omega_2(E - E_1)$$
Here the summation variable $E_1$ increases in steps of $\delta E$. This then means that the coarse grained internal energy variable for the combined system determines its exact energy to lie in the range between $E$ and $E + 2\delta E$. Obviously, the most likely energy distribution over the two systems is such that the product $\Omega_1(E_1)\,\Omega_2(E - E_1)$ is maximal. When we start with any other energy distribution and bring the two systems into thermal contact, energy will flow until that particular equilibrium state is reached. While there is then still a small probability for the energy distribution to deviate from this equilibrium state, it turns out that this is exceedingly unlikely, because the product of the two omega functions has a very sharp maximum.

We can formally compute the condition for thermal equilibrium between the two subsystems by maximizing the function $\Omega_1(E_1)\,\Omega_2(E - E_1)$. It is convenient to take the logarithm of this function and equate its derivative w.r.t. $E_1$ to zero. This gives:

$$\frac{\partial \ln\Omega_1(E_1)}{\partial E_1} + \frac{\partial \ln\Omega_2(E - E_1)}{\partial E_1} = 0$$
In the derivative of $\ln\Omega_2(E - E_1)$ we can change variables by putting $E_2 = E - E_1$. By the chain rule, we then have:

$$\frac{\partial \ln\Omega_1(E_1)}{\partial E_1} = \frac{\partial \ln\Omega_2(E_2)}{\partial E_2}$$
Note that the l.h.s. of the equation only refers to the properties of system 1 and the r.h.s. only to the properties of system 2. For an arbitrary isolated system with internal energy $E$ we define the so-called temperature parameter $\beta$ as:

$$\beta(E) = \frac{\partial \ln\Omega(E)}{\partial E}$$
The condition that two isolated systems in internal thermal equilibrium are also in thermal equilibrium with each other (i.e. that when they are brought into thermal contact no energy will flow between them) is thus that the temperature parameters of both systems are equal. The thermodynamic temperature $T$ is defined by:

$$\frac{1}{T} = k\,\beta$$
where $k$ is an arbitrary constant that fixes the temperature scale. If we choose to measure the thermodynamic temperature in Kelvin, then this constant is fixed to be Boltzmann's constant $k_B$. We'll see later that this definition of thermodynamic temperature implies that the energy flow between two systems as a result of thermal contact is from the high temperature system to the low temperature system.
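The equilibrium condition $\beta_1 = \beta_2$ can be checked numerically. The following sketch (an added illustration, reusing the hypothetical two-level toy model from above) finds the most likely split of a fixed total energy over two subsystems and compares their temperature parameters, estimated as finite differences of $\partial \ln\Omega/\partial E$:

```python
# Two toy two-level systems sharing a fixed total energy (illustration only).
# The most likely split maximizes Omega1 * Omega2; at that split the
# temperature parameters beta = d(ln Omega)/dE of the two systems agree.
from math import comb, log

eps = 1.0          # energy quantum of one excited unit (arbitrary units)
N1, N2 = 300, 700  # sizes of the two subsystems
n_tot = 400        # total excitations; the total energy is n_tot * eps

def ln_omega(N, n):
    return log(comb(N, n))

def beta(N, n):
    # centered finite difference of ln Omega w.r.t. E = n * eps
    return (ln_omega(N, n + 1) - ln_omega(N, n - 1)) / (2 * eps)

# the split n1 that maximizes Omega1(n1) * Omega2(n_tot - n1)
best = max(range(1, min(N1, n_tot)),
           key=lambda n1: ln_omega(N1, n1) + ln_omega(N2, n_tot - n1))
print("most likely split:", best, "and", n_tot - best)
print("beta1 =", beta(N1, best), " beta2 =", beta(N2, n_tot - best))
```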

While the Omega function is not accessible at the macroscopic level, the entropy, defined as:

$$S = k \ln\Omega(E)$$
is a thermodynamic variable that can (indirectly) be determined from the macroscopic thermal properties of the system. From the above definitions, it follows that the temperature can be expressed as:

$$\frac{1}{T} = \frac{\partial S}{\partial E}$$
Now, this equation is only valid for isolated systems that don't perform any work, i.e. when energy is added it goes into the internal thermal energy of the system. In general, we deal with systems that can perform work. This can be described by so-called external parameters, such as the volume of the system. The energy change due to a change in the external parameters is called work. Energy that flows into a system and is not due to work is called heat.

The fundamental thermodynamic relation

We will now show that in general, when heat is added to a system so slowly that it remains in internal thermal equilibrium (such slow processes are called quasistatic processes), we have:

$$dS = \frac{\delta Q}{T}$$
i.e. the change in entropy depends only on the heat added to the system and not on changes in internal energy due to work. Suppose that the system has some external parameter $x$ that can be changed. In general, the energy eigenstates of the system will depend on $x$. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.

The generalized force $X$ corresponding to the external variable $x$ is defined such that $X\,dx$ is the work performed by the system if $x$ is increased by an amount $dx$. E.g., if $x$ is the volume, then $X$ is the pressure. The generalized force for a system known to be in energy eigenstate $r$ with energy $E_r$ is given by:

$$X = -\frac{dE_r}{dx}$$
Since the system can be in any energy eigenstate within an interval of $\delta E$, we define the generalized force for the system as the expectation value of the above expression:

$$X = -\left\langle \frac{dE_r}{dx} \right\rangle$$
To evaluate this average, we partition the energy eigenstates by counting how many of them have a value for $\frac{dE_r}{dx}$ within a range between $Y$ and $Y + \delta Y$. Calling this number $\Omega_Y(E)$, we have:

$$\Omega(E) = \sum_Y \Omega_Y(E)$$
The average defining the generalized force can now be written:

$$X = -\frac{1}{\Omega(E)} \sum_Y Y\,\Omega_Y(E)$$
We can relate this average to the derivative of the entropy w.r.t. $x$ at constant energy as follows. Suppose we change $x$ to $x + dx$. Then $\Omega(E)$ will change because the energy eigenstates depend on $x$, causing energy eigenstates to move into or out of the range between $E$ and $E + \delta E$. Let's focus again on the energy eigenstates for which $\frac{dE_r}{dx}$ lies within the range between $Y$ and $Y + \delta Y$. Since these energy eigenstates increase in energy by $Y\,dx$, all such energy eigenstates that are in the interval ranging from $E - Y\,dx$ to $E$ move from below $E$ to above $E$. There are

$$N_Y(E) = \frac{\Omega_Y(E)}{\delta E}\,Y\,dx$$
such energy eigenstates. If $Y\,dx \leq \delta E$, all these energy eigenstates will move into the range between $E$ and $E + \delta E$ and contribute to an increase in $\Omega(E)$. The number of energy eigenstates that move from below $E + \delta E$ to above $E + \delta E$ is, of course, given by $N_Y(E + \delta E)$. The difference

$$N_Y(E) - N_Y(E + \delta E)$$
is thus the net contribution to the increase in $\Omega(E)$. Note that if $Y\,dx$ is larger than $\delta E$ there will be energy eigenstates that move from below $E$ to above $E + \delta E$. They are counted in both $N_Y(E)$ and $N_Y(E + \delta E)$, therefore the above expression is also valid in that case.

Expressing the above expression as a derivative w.r.t. $E$ and summing over $Y$ yields the expression:

$$\left(\frac{\partial \Omega}{\partial x}\right)_E = -\sum_Y Y\,\frac{\partial \Omega_Y}{\partial E} = \frac{\partial (\Omega X)}{\partial E}$$
The logarithmic derivative of $\Omega$ w.r.t. $x$ is thus given by:

$$\left(\frac{\partial \ln\Omega}{\partial x}\right)_E = \beta X + \left(\frac{\partial X}{\partial E}\right)_x$$
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:

$$\left(\frac{\partial \ln\Omega}{\partial x}\right)_E = \beta X$$
Combining this with

$$\left(\frac{\partial \ln\Omega}{\partial E}\right)_x = \beta$$

gives:

$$dS = k\,d\ln\Omega = \frac{dE + X\,dx}{T} = \frac{\delta Q}{T}$$

where in the last step we used that the heat $\delta Q$ added to the system is the change in internal energy plus the work $X\,dx$ performed by the system.
The first part of this equation, which can be written as:

$$dE = T\,dS - X\,dx \qquad (1)$$

is called the fundamental thermodynamic relation. It is more general than the equation $dS = \delta Q / T$, as the former is also valid when the changes in the system are not quasistatic. This is because the internal energy in thermal equilibrium is completely determined when $S$ and $x$ are specified. So, we can consider $E$ to be a function of $S$ and $x$ (we say that $E$ is a thermodynamic state function) and we always have:

$$dE = \left(\frac{\partial E}{\partial S}\right)_x dS + \left(\frac{\partial E}{\partial x}\right)_S dx$$
The fact that (1) is valid for quasistatic changes then allows us to conclude that the partial derivatives in the above equation are given by:

$$\left(\frac{\partial E}{\partial S}\right)_x = T, \qquad \left(\frac{\partial E}{\partial x}\right)_S = -X$$
Equation (1) is thus always valid, albeit that for non-quasistatic changes we can no longer identify the two terms as heat added to and work done by the system. Also, we don't always have to interpret the infinitesimal changes as changes that actually happen in a system. E.g., one can consider two copies of the same system with slightly different thermodynamic variables. Then $dE$, $dS$ and $dx$ can refer to the differences in internal energy, entropy, and external parameter between the two systems. These differences will satisfy (1).
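As a concrete special case (spelled out here for illustration, using the earlier remark that for $x = V$ the generalized force is the pressure $p$), the fundamental relation (1) reads

$$dE = T\,dS - p\,dV$$

and reading off the partial derivatives of the state function $E(S, V)$ gives

$$\left(\frac{\partial E}{\partial S}\right)_V = T, \qquad \left(\frac{\partial E}{\partial V}\right)_S = -p.$$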

Canonical Ensemble

Often, the system we want to study is not isolated, but instead is in thermal contact with an environment at some fixed temperature. Now, for a large system, being kept at some temperature fixes its internal energy, with energy fluctuations that are negligible relative to the total internal energy. So, we could just as well treat such systems using the micro-canonical ensemble. However, it turns out that doing computations in the micro-canonical ensemble is rather inconvenient.

Since a system kept at constant temperature is in thermal contact with an environment, usually called a heat bath, it is no longer true that all microstates of the system are equally likely. We can find the probability of finding the system in some microstate as follows. We imagine that the system is kept at constant temperature by a heat bath such that the system plus heat bath is an isolated system. The accessible microstates of this combined system are then all equally likely. Suppose that the combined system has a total energy of $E$. If the system is in some microstate $r$ with energy $E_r$, then we can express the total number of states the combined system can be in, in terms of the omega function of the heat bath, as $\Omega_{\text{bath}}(E - E_r)$. Since all states are equally likely, this is proportional to the probability $P_r$ of finding the system in the microstate $r$ with energy $E_r$:

$$P_r = C\,\Omega_{\text{bath}}(E - E_r)$$
for some constant $C$. Here one has to define the omega function of the heat bath by choosing its energy resolution $\delta E$ smaller than the spacing between the energy levels of the system. For the omega function to be well defined, we also have to choose $\delta E$ to be much larger than the spacing between the energy levels in the heat bath. These conditions can be satisfied simultaneously because the heat bath has to consist of far more degrees of freedom than the system in order to be able to absorb heat from the system and yet stay at (almost) constant temperature. The more degrees of freedom a system has, the smaller the spacing between its energy levels will be.

The above expression for $P_r$ can be simplified as follows. We can write:

$$\Omega_{\text{bath}}(E - E_r) = \exp\left[\ln\Omega_{\text{bath}}(E - E_r)\right] = \exp\left[\ln\Omega_{\text{bath}}(E) - \beta E_r + \cdots\right] = C'\,e^{-\beta E_r}$$
for some constant $C'$. Here we have used the definition of the temperature parameter $\beta$ given in the previous section, and that in the limit of an ideal heat bath the higher order terms in the Taylor expansion tend to zero, as they involve the change in the bath's temperature due to the change in the system's energy. It thus follows that:

$$P_r = \frac{e^{-\beta E_r}}{Z}$$
where the normalization constant $Z$ is the so-called partition function, which is given by:

$$Z = \sum_r e^{-\beta E_r}$$
The canonical ensemble is a hypothetical ensemble of copies of a system whose microstates are distributed according to $P_r$.
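As a concrete illustration (an addition to these notes, using an assumed toy system with three energy levels), the following sketch computes the partition function and the canonical probabilities:

```python
# Canonical probabilities P_r = exp(-beta * E_r) / Z for a toy three-level
# system (illustrative sketch, not part of the original notes).
from math import exp

def boltzmann_probs(energies, beta):
    """Return the partition function Z and the probabilities P_r."""
    weights = [exp(-beta * E) for E in energies]
    Z = sum(weights)
    return Z, [w / Z for w in weights]

energies = [0.0, 1.0, 2.0]  # toy energy levels (arbitrary units)
Z, probs = boltzmann_probs(energies, beta=1.0)
print("Z =", Z)
print("P_r =", probs, " sum =", sum(probs))  # probabilities sum to 1
```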

Computation of thermodynamic variables

The system's thermodynamic variables can be computed by averaging over the probability distribution $P_r$. It turns out that knowledge of the partition function suffices to extract all of the system's thermodynamic variables. In this section, we'll derive the equations for the internal energy, generalized forces, and entropy in terms of $Z$.

The internal energy is the expectation value of the system's energy. This can be expressed in terms of the partition function in the following way:

$$E = \langle E_r \rangle = \frac{1}{Z}\sum_r E_r\,e^{-\beta E_r} = -\frac{\partial \ln Z}{\partial \beta}$$
If the system is in the state $r$, then the generalized force corresponding to an external variable $x$ is given by

$$X_r = -\frac{dE_r}{dx}$$
The expectation value of this is the thermodynamic generalized force of the system, and it can be written as:

$$X = -\frac{1}{Z}\sum_r \frac{dE_r}{dx}\,e^{-\beta E_r} = \frac{1}{\beta}\frac{\partial \ln Z}{\partial x}$$
Suppose the system has one external variable $x$. Then changing the system's temperature parameter by $d\beta$ and the external variable by $dx$ will lead to a change in $\ln Z$:

$$d\ln Z = \frac{\partial \ln Z}{\partial \beta}\,d\beta + \frac{\partial \ln Z}{\partial x}\,dx = -E\,d\beta + \beta X\,dx$$
If we write $E\,d\beta$ as:

$$E\,d\beta = d(\beta E) - \beta\,dE$$

we get:

$$d\ln Z = -d(\beta E) + \beta\,dE + \beta X\,dx$$
This means that the change in the internal energy is given by:

$$dE = \frac{1}{\beta}\,d\left(\ln Z + \beta E\right) - X\,dx$$
In the thermodynamic limit, the fundamental thermodynamic relation should hold:

$$dE = T\,dS - X\,dx$$
This then implies that the entropy of the system is given by:

$$S = k\left(\ln Z + \beta E\right) + c$$
where $c$ is some constant. The value of $c$ can be determined by considering the limit $T \to 0$. In this limit the entropy becomes $S = k\ln g_0$, where $g_0$ is the ground state degeneracy. The partition function in this limit is $Z = g_0\,e^{-\beta E_0}$, where $E_0$ is the ground state energy. We thus see that $k\left(\ln Z + \beta E\right) \to k\ln g_0$ in this limit, therefore $c = 0$ and:

$$S = k\left(\ln Z + \beta E\right)$$
The Helmholtz free energy $F \equiv E - TS$ can thus be expressed as:

$$F = -kT\ln Z$$
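These relations are easy to verify numerically. The sketch below (an added illustration, in units where $k = 1$, reusing the toy three-level system from above) computes $E = -\partial \ln Z/\partial\beta$ by a finite difference and checks that $F = -kT\ln Z$ indeed equals $E - TS$:

```python
# Numerical check (illustration only) of E = -d(ln Z)/d(beta),
# S = k (ln Z + beta E) and F = -k T ln Z = E - T S, in units where k = 1.
from math import exp, log

energies = [0.0, 1.0, 2.0]  # same toy three-level system as above

def ln_Z(beta):
    return log(sum(exp(-beta * E) for E in energies))

def internal_energy(beta, h=1e-6):
    # E = -d(ln Z)/d(beta), via a centered finite difference
    return -(ln_Z(beta + h) - ln_Z(beta - h)) / (2 * h)

beta = 2.0
T = 1.0 / beta                 # k = 1
E = internal_energy(beta)
S = ln_Z(beta) + beta * E      # entropy in units of k
F = -T * ln_Z(beta)            # Helmholtz free energy
print("E =", E, " S =", S, " F =", F)
print("E - T*S =", E - T * S, "(should equal F)")
```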
Grand Canonical Ensemble

In this ensemble we attach our system to a reservoir of heat and particles in order to fix the temperature and the chemical potential of the system. The chemical potential is defined as $\mu = \left(\frac{\partial E}{\partial N}\right)_{S,V}$. In the previous ensembles we fixed the number of particles in the system. Now we let it change, so we have to modify the expressions for the energy and all its derivatives accordingly.

We start by defining the probability of a certain state $r$, with energy $E_r$ and particle number $N_r$, as:

$$P_r = \frac{e^{-\beta(E_r - \mu N_r)}}{\mathcal{Z}}$$

where

$$\mathcal{Z} = \sum_r e^{-\beta(E_r - \mu N_r)}$$

Directly from $\mathcal{Z}$, the grand partition function, we shall derive the definition of the grand thermodynamic potential $\Omega_G$ (not to be confused with the microstate count $\Omega$):

$$\Omega_G = -kT\ln\mathcal{Z}$$

Another form of $\Omega_G$ is $\Omega_G = E - TS - \mu N$. The differential form of $\Omega_G$ is:

$$d\Omega_G = -S\,dT - p\,dV - N\,d\mu$$
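As a minimal illustration (an addition to these notes, assuming a hypothetical system consisting of a single orbital of energy $\epsilon$ that can hold zero or one particle), the grand partition function has just two terms, and the mean particle number follows from $N = -\partial\Omega_G/\partial\mu = kT\,\partial\ln\mathcal{Z}/\partial\mu$:

```python
# Grand-canonical toy example (illustration only): one orbital of energy eps
# holding 0 or 1 particle, so Zg = 1 + exp(-beta * (eps - mu)). The mean
# occupation is <N> = kT * d(ln Zg)/d(mu), in units where k = 1.
from math import exp, log

def ln_Zg(eps, beta, mu):
    return log(1.0 + exp(-beta * (eps - mu)))

def mean_occupation(eps, beta, mu, h=1e-6):
    # <N> = (1/beta) * d(ln Zg)/d(mu), via a centered finite difference
    return (ln_Zg(eps, beta, mu + h) - ln_Zg(eps, beta, mu - h)) / (2 * h * beta)

eps, beta, mu = 1.0, 2.0, 0.5
print("<N> =", mean_occupation(eps, beta, mu))
print("closed form:", 1.0 / (exp(beta * (eps - mu)) + 1.0))
```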

There are other possible ensembles, some of which we shall see in this course.

Gibbs Potential

We should now elaborate on a specific characteristic of the physical parameters of a thermodynamic system. An extensive parameter is a parameter whose value is proportional to the size of the system (the number of particles); an intensive parameter is one that is independent of the system size.

Let's summarize the thermodynamic potentials we have seen:

  • Energy: $dE = T\,dS - p\,dV + \mu\,dN$
  • Free Energy: $F = E - TS$, $dF = -S\,dT - p\,dV + \mu\,dN$
  • Enthalpy: $H = E + pV$, $dH = T\,dS + V\,dp + \mu\,dN$
  • Grand Potential: $\Omega_G = E - TS - \mu N$, $d\Omega_G = -S\,dT - p\,dV - N\,d\mu$

We have expressed all of these potentials in differential form, where the number of terms depends on the number of ways to change the system's energy. Each term involves one extensive parameter and one intensive parameter. The pairs we have encountered so far are $(T, S)$, $(p, V)$ and $(\mu, N)$, where the first element of each pair is the intensive parameter and the second is the extensive one. The parameters under the differential sign have a special meaning: when they are fixed, the corresponding thermodynamic potential tends to a minimum/maximum. For example:

$$F \to \min \quad \text{when} \quad T,\,V,\,N = \text{const}$$

or

$$\Omega_G \to \min \quad \text{when} \quad T,\,V,\,\mu = \text{const}$$
In order to get from one potential to another, we simply need to add the proper product of parameters to the first potential. For example, let's construct an ensemble where we fix the temperature, the number of particles and the pressure of the system. As in the previous ensembles, we'll look for a matching potential for our new ensemble, and we shall see that this matching potential tends to a minimum.

We attach the system to a heat-particle reservoir and will show that the Gibbs potential is minimal when $T$, $p$ and $N$ are held constant.

The total entropy increases, since the system plus reservoir is a closed system:

$$\Delta S_{\text{tot}} = \Delta S + \Delta S_{\text{res}} \geq 0$$

We use a Taylor expansion (to first order, which suffices for a large reservoir):

$$\Delta S_{\text{res}} = \frac{\partial S_{\text{res}}}{\partial E_{\text{res}}}\,\Delta E_{\text{res}} + \frac{\partial S_{\text{res}}}{\partial V_{\text{res}}}\,\Delta V_{\text{res}}$$

Using the relations $\frac{\partial S}{\partial E} = \frac{1}{T}$ and $\frac{\partial S}{\partial V} = \frac{p}{T}$ (which follow from the fundamental thermodynamic relation), together with $\Delta E_{\text{res}} = -\Delta E$ and $\Delta V_{\text{res}} = -\Delta V$, we get:

$$\Delta S_{\text{tot}} = \Delta S - \frac{\Delta E + p\,\Delta V}{T} \geq 0$$

Now we multiply by $-T$ (which flips the inequality), so the resulting expression should tend to a minimum at constant $T$ and $p$:

$$\Delta E - T\,\Delta S + p\,\Delta V = \Delta\left(E - TS + pV\right) = \Delta G \leq 0$$

We call this potential, $G = E - TS + pV$, the Gibbs Free Energy.

Intensive Potentials

All of our potentials so far are extensive quantities. Sometimes we want to characterize the system by an intensive quantity that has the meaning of a thermodynamic potential. The easiest way to define one is to divide a potential by an extensive parameter. If we divide a potential by, for example, $N$, we'll get the potential per particle, which is an intensive quantity. We shall now see that in the special case of the potential $G$ we'll get:

$$\frac{G}{N} = \mu$$
i.e. $\mu$ is the Gibbs free energy per particle.

We start with the Energy (U):

$$U(S, V, N) = N\,u\!\left(\frac{S}{N}, \frac{V}{N}\right)$$

The above equation simply states that the energy of a system is equal to the number of particles times the energy of one particle, where the latter depends on the entropy and volume per particle.

The same can be done with the Free Energy (F):

$$F(T, V, N) = N\,f\!\left(T, \frac{V}{N}\right)$$

(the temperature is an intensive quantity, so it is the same for every particle). Now we do the same for the Gibbs Free Energy:

$$G(T, p, N) = N\,g(T, p)$$
From the differential form of $G$ we calculate the chemical potential:

$$\mu = \left(\frac{\partial G}{\partial N}\right)_{T,p} = g(T, p)$$

From here we obtain:

$$G = \mu N$$
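The same conclusion can be reached more formally (a short added check, using only extensivity): since $T$ and $p$ are intensive, scaling the particle number scales $G$ linearly,

$$G(T, p, \lambda N) = \lambda\,G(T, p, N).$$

Differentiating with respect to $\lambda$ and setting $\lambda = 1$ gives

$$N\left(\frac{\partial G}{\partial N}\right)_{T,p} = G, \qquad \text{i.e.} \qquad G = \mu N.$$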


Phase transitions

(example of liquid-vapor transition)

Consider a vessel which contains water vapor at volume $V$ and at low pressure, so that the gas can be treated as an ideal gas. We start decreasing the volume, so the pressure increases:

$$p = \frac{NkT}{V}$$

At a certain point a thin layer of water will appear, and from this point on, further decreasing the volume will not change the pressure: it remains constant. So what happens in the system? If we continue to reduce the volume, the water level will rise, but the pressure will remain constant. When all the vapor has turned into water, reducing the volume will increase the pressure drastically, and the relation between pressure and volume will change. In the range of constant pressure we observe the coexistence of two phases: liquid and gas.

The process we described above is done at a constant temperature. If we raise the temperature, the coexistence range will shrink. If we raise the temperature high enough, the range reduces to a single point of constant pressure. This temperature is called the critical temperature; at and above it there is no phase separation, gas and liquid are no longer distinguishable, and this single phase is called a fluid.

Theory of phase transition

Despite complicated mathematical descriptions such as, for example, BCS theory, all known phase transitions, such as superconductivity, superfluidity and ferromagnetism, are the result of a competition between an ordering interaction and thermal fluctuations. They resemble the repeated attempts of a ping-pong ball to fall out of a hemispherical bowl or box that is excited, for example, by a stream of air from a cooling fan or a vacuum cleaner. After a certain excitation intensity is exceeded (in the physical case, the critical temperature; here, a sufficiently strong stream of air), the ball will stop randomly bouncing between the walls of the bowl and will fall out: its order parameter, namely its localization inside or outside the bowl, undergoes a phase transition. The transition is therefore the result of the strong non-linearity of the motion, which shows up when the system is randomly excited.

A similar mechanism is responsible for the "melting" of a ferromagnet, i.e. for its total demagnetization by thermal excitation above the Curie temperature: the interaction with the thermal reservoir causes each spin to go from small oscillations around the common direction of the other spins to random oscillations that average to zero and ignore the surrounding spins. The same applies when thermal oscillations cause the ionization of Cooper pairs in a superconductor. For the Heisenberg model, for example, it is the pendulum-like cosine (scalar product) interaction between the spins, and the separatrix between the two types of pendular spin motion in the external field, that is responsible for the transition between the oscillatory (ordered) phase and the demagnetized (rotational) phase, with the critical temperature estimated from the energy difference between the pendulum pointing up and down. This estimate is close to the exact value from the two-dimensional Ising model, given implicitly by $\sinh(2J/kT_c) = 1$, i.e. $kT_c = 2J/\ln(1 + \sqrt{2}) \approx 2.27\,J$. Considering the spin as a pendulum with the potential of the interaction with two other spins (one of them above, in the third dimension), one gets close to the numerical value obtained from Monte Carlo calculations, and generally, with one more interacting spin per extra dimension, for four dimensions one gets close to the numerical value and close to the Bethe-Peierls-Weiss theory in $n$ dimensions, which for very large $n$ tends to the mean-field result $kT_c = 2nJ$.