Random physics

Alberto Verga, research notebook

\(\newcommand{\I}{\mathrm{i}} \newcommand{\E}{\mathrm{e}} \newcommand{\D}{\mathop{}\!\mathrm{d}} \newcommand{\Di}[1]{\mathop{}\!\mathrm{d}#1\,} \newcommand{\Dd}[1]{\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d}#1}} \newcommand{\bra}[1]{\langle{#1}|} \newcommand{\ket}[1]{|{#1}\rangle} \newcommand{\braket}[1]{\langle{#1}\rangle} \newcommand{\bbraket}[1]{\langle\!\langle{#1}\rangle\!\rangle}\)

Lectures on statistical mechanics.

Solids: effective models

A solid is an ordered state of matter characterized by a crystal structure and electronic bands. Atoms are located on the sites of the crystal lattice, tightly bound to their neighbors; their vibrations give rise to phonons, the quanta of the acoustic modes. Electrons, depending on the chemical properties of the atoms, can jump from one site to another, carry a magnetic moment (spin), or interact with the nucleus and crystal potentials through the spin-orbit coupling, giving rise to a wealth of different physical properties and effects, such as thermal or electric conductivity, ferromagnetism, piezoelectric effects, quantum Hall conductance, giant magnetoresistance or topological effects. In spite of the complexity of the solid’s band structure, specific electronic properties can be described by relatively simple effective hamiltonians comprising a few parameters, in which the complex crystal structure is replaced by a regular lattice, and the collective electron coulomb interactions by short range interactions between neighbors.

The simplest example is a free electron in a metal. The presence of the periodic crystal potential creates gaps in the continuum spectrum. A tight binding hamiltonian on a cubic lattice, in which the electron can jump from one site to one of its six neighbors, is enough to account for the band structure; the hopping energy \(t\) is the only parameter of the model:

$$H = - t \sum_{(\boldsymbol x, \boldsymbol y)} \big[c^\dagger({\boldsymbol x}) c({\boldsymbol y}) + c^\dagger({\boldsymbol y}) c({\boldsymbol x})\big]$$

where \(\boldsymbol y = \{ (x \pm 1,y,z), (x,y\pm1,z), (x,y,z\pm1) \}\) is a neighbor of the site \(\boldsymbol x \in \mathbb{Z}^3\) of the cubic lattice. The operator \(c({\boldsymbol x})\) annihilates the particle at site \(\boldsymbol x\):

$$c^\dagger(x+1,y,z) c(x,y,z) \ket{x,y,z} = \ket{x+1,y,z}\,,$$

its hermitian conjugate \(c^\dagger({\boldsymbol x})\), creates a particle at this site.
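The resulting band can be checked numerically on a small system; below, a minimal sketch for the one dimensional version of the model (an assumed ring of \(L=8\) sites with hopping \(t=1\)), comparing the spectrum of the tight binding matrix with the plane wave eigenvalues \(-2t\cos k\):

```python
import numpy as np

# Tight-binding hamiltonian on a 1D ring of L sites (hopping t = 1):
# H[x, y] = -t when x and y are nearest neighbors (periodic boundaries).
L, t = 8, 1.0
H = np.zeros((L, L))
for x in range(L):
    H[x, (x + 1) % L] = -t
    H[(x + 1) % L, x] = -t

# Plane waves with momenta k = 2*pi*m/L diagonalize H, with eigenvalues
# -2*t*cos(k): the 1D version of the band epsilon(k).
exact = sorted(-2*t*np.cos(2*np.pi*np.arange(L)/L))
numeric = np.linalg.eigvalsh(H)   # ascending eigenvalues
print(np.allclose(numeric, exact))  # True
```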

Many universal properties of magnetic systems are well described by the ising hamiltonian,

$$H = -J \sum_{(\boldsymbol x, \boldsymbol y)} s_{\boldsymbol x} s_{\boldsymbol y}\,,$$

where \(s_{\boldsymbol x} = \pm 1\) is a classical spin variable, \(J\) is the exchange energy constant, and \((\boldsymbol x, \boldsymbol y)\) are neighboring sites in a lattice. The ising model is a simplified version of the quantum heisenberg hamiltonian:

$$H = -J\sum_{(\boldsymbol x, \boldsymbol y)} \boldsymbol\sigma_{\boldsymbol x} \cdot \boldsymbol\sigma_{\boldsymbol y}\,,$$

where \(\sigma\) is the vector of pauli matrices. The ising and heisenberg hamiltonians, describe a magnetic material as a set of spins (classical or quantum), attached to the sites of a lattice (linear, square or cubic, depending on the dimension, for example), interacting with their nearest neighbors via the \(J\) exchange coupling (mimicking the overlap of electronic functions).

Ising model of a ferromagnetic material. Two dimensional lattice of interacting neighboring spins. The gray region defines a cluster of up spins.

More generally, lattice models are useful in a variety of domains, ranging from condensed matter to biology or social sciences. They can describe statistical equilibrium properties, like phase transitions, or nonequilibrium processes, like crystal growth,1 population dynamics in a living system,3 or opinion propagation in a social network.2

Free particles in a lattice

We consider a gas of noninteracting, spinless bosons in a cubic lattice of step size \(a=1\), which we take as the length unit (fermions can be treated similarly). The system’s hamiltonian is,

$$H_N = \sum_{n=1}^N H_n\,, \quad H_n = - t \sum_{(\boldsymbol x, \boldsymbol y)} [a^\dagger_n({\boldsymbol x}) a_n({\boldsymbol y}) + a^\dagger_n({\boldsymbol y}) a_n({\boldsymbol x})]\,,$$

where \(N\) is the number of particles and the operators create or annihilate particle \(n\) at a lattice site \(\boldsymbol x \in \mathbb{Z}^3\). In order to write the grand potential we diagonalize the hamiltonian by changing from the position basis \(\boldsymbol x\) to the momentum basis \(\boldsymbol k\), through the fourier transformation,

$$a_n({\boldsymbol x}) = \frac{V}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \, \E^{-\I \boldsymbol k \cdot \boldsymbol x} a_n({\boldsymbol k})\,,$$

where \(V=L^3\) is the number of sites in the cubic lattice of edge \(L\), and the integration is over the first brillouin zone (here, the cube \(\mathrm{BZ} = (-\pi,\pi)^3\) in momentum space). Note that the inverse transformation gives,

$$a_n({\boldsymbol k}) = \frac{1}{V} \sum_{\boldsymbol x} \E^{\I \boldsymbol k \cdot \boldsymbol x} a_n({\boldsymbol x})\,,$$

and that,

$$\frac{V}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \, \E^{-\I \boldsymbol k \cdot (\boldsymbol x - \boldsymbol y)} = V \delta_{\boldsymbol x, \boldsymbol y}$$

where \(\delta_{\boldsymbol x, \boldsymbol y}\) is the kronecker delta. In the momentum basis the hamiltonian becomes (exercise),

$$H = \frac{V}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \, \epsilon(\boldsymbol k) a^\dagger(\boldsymbol k) a(\boldsymbol k)\,,$$

where

$$\epsilon(\boldsymbol k) = -2t(\cos k_x + \cos k_y + \cos k_z)\,.$$

The grand potential \(\Phi(T,\mu,V)\) of the noninteracting hamiltonian is:

$$\Phi(T,\mu,V) = \frac{TV}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \, \ln\left( 1 - \E^{\mu/T} \E^{-\epsilon(\boldsymbol k)/T} \right)\,,$$

The number of particles and the energy are given by the integrals,

$$N = \frac{V}{(2\pi)^3} \int_{-\pi}^\pi \frac{\D \boldsymbol k} {\E^{-\mu/T} \E^{\epsilon(\boldsymbol k)/T} - 1} \,,$$

and

$$E = \frac{V}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \, \frac{\epsilon(\boldsymbol k)}{\E^{-\mu/T} \E^{\epsilon(\boldsymbol k)/T} - 1} \,.$$

To compare the behavior of the gas in a lattice with the ideal gas, we keep the first term of the expansion in the fugacity \(\E^{\mu/T}\) (Boltzmann limit, which is the same for fermions and bosons). In this limit we have,

\begin{align*} \Phi & = - \frac{TV \E^{\mu/T}}{(2\pi)^3} \int_{-\pi}^\pi \D \boldsymbol k \,\E^{-\epsilon(\boldsymbol k)/T} \\ & = - \frac{TV \E^{\mu/T}}{(2\pi)^3} \left[\int_{-\pi}^\pi \D k \, \E^{2t \cos k/T} \right]^3\\ & = - TV \E^{\mu/T} I_0(2t/T)^3\,, \end{align*}

where \(I_0\) is the modified bessel function of the first kind,

$$I_0(z) = \frac{1}{\pi} \int_0^\pi \D t \, \E^{z \cos t}\,,$$

and the particle number,

$$N = \frac{V \E^{\mu/T}}{(2\pi)^3} \left[\int_{-\pi}^\pi \D k \, \E^{2t \cos k/T} \right]^3 = V \E^{\mu/T} I_0(2t/T)^3\,,$$

which, combined with the expression \(\Phi= -PV\), recovers

$$PV = NT$$

independent of the hopping energy \(t\). However, the energy of the lattice gas differs from that of the ideal gas.

Indeed, the energy is given by,

$$E = -6t\frac{V \E^{\mu/T}}{(2\pi)^3} \left[\int_{-\pi}^\pi \D k \, \cos k \, \E^{2t \cos k/T} \right] \left[\int_{-\pi}^\pi \D k \, \E^{2t \cos k/T} \right]^2 = -6tV \E^{\mu/T} I_1(2t/T)\, I_0(2t/T)^2\,,$$

where the three directions contribute equally, or, eliminating the chemical potential,

$$E = -6tN\, \frac{I_1(2t/T)}{I_0(2t/T)}\,.$$

The ideal gas energy is found in the continuous limit (\(a \rightarrow 0\)), which corresponds to \(t \sim \hbar^2/2ma^2 \rightarrow \infty\) (exercise),

$$E \approx -6tN\, \frac{1-3T/16t}{1+T/16t} \approx -6tN + \frac{3}{2}NT\,,$$

which, up to the irrelevant constant \(-6tN\) (the bottom of the band), gives the ideal gas result. We used the asymptotic expression,

$$I_n(z) \approx \frac{\E^z}{\sqrt{2\pi z}} \left( 1 - \frac{4n^2 - 1}{8 z} \right)\,.$$

In the opposite limit \(T \gg t\), the energy vanishes slowly with the temperature:

$$E \approx -\frac{6Nt^2}{T}\,.$$

(The small \(z\) limit of the \(I\) bessel functions is \(I_n(z) \approx (z/2)^n/n!\).)
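Both asymptotic limits are easily verified numerically from the integral representation of \(I_n\) (a minimal sketch; the quadrature size is arbitrary):

```python
import numpy as np

def In(n, z, m=20000):
    # I_n(z) = (1/pi) int_0^pi cos(n t) exp(z cos t) dt, midpoint quadrature
    t = (np.arange(m) + 0.5)*np.pi/m
    return (np.cos(n*t)*np.exp(z*np.cos(t))).sum()/m

# large z asymptotics: I_n(z) ~ e^z/sqrt(2 pi z) (1 - (4 n^2 - 1)/(8 z))
z = 50.0
for n in (0, 1):
    asym = np.exp(z)/np.sqrt(2*np.pi*z)*(1 - (4*n**2 - 1)/(8*z))
    assert abs(In(n, z)/asym - 1) < 1e-3

# small z limit: I_n(z) ~ (z/2)^n / n!
z = 1e-3
assert abs(In(0, z) - 1) < 1e-6 and abs(In(1, z) - z/2) < 1e-6
print("ok")
```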

One dimensional spin model

In the presence of an external field \(B\), the one dimensional ising hamiltonian is,

$$H = -J\sum_{x=1}^N s_x s_{x+1} - \sum_{x=1}^N s_x B$$

where we assumed periodic boundary conditions \(s_1 = s_{N+1}\). We choose units based on the lattice constant and the spin magnetic moment, \(a = g\mu_\text{B}=1\) (we also put the permeability \(\mu_0=1\)); in such units the magnetic field \(B\) has the dimensions of the energy.

The probability of a spin configuration \(s\),

$$s = \{s_1=\pm1, s_2=\pm1, \ldots, s_N=\pm 1\}$$

is,

$$P[s] = \frac{1}{Z} \prod_x \exp\left[\frac{J}{T} s_x s_{x+1} + \frac{B}{2T}(s_x + s_{x+1}) \right]$$

and the partition function is given by,

$$Z(T,B) = \sum_s \prod_x \exp\left[K s_x s_{x+1} + \frac{B}{2T} (s_x + s_{x+1}) \right] \,,$$

where \(K = J/T\). We observe that the thermodynamics of the present model depends on two nondimensional parameters, the interaction energy-temperature ratio \(K = J/T\) and the external field-temperature ratio \(h = B/T\). The idea for computing the partition function is to try (obviously this is not always possible) to factor it into a series of identical terms, as in the noninteracting case; we symmetrized the \(B\) term with this goal in mind. We note first that the product appearing in \(Z\) can be written in the form,

$$\prod_x \exp\left[K s_x s_{x+1} + \frac{B}{2T} (s_x + s_{x+1})\right] = T_{s_1,s_2} T_{s_2,s_3}\ldots T_{s_N,s_1}\,,$$

where each term \(T_{s_x,s_{y}} = T_{s_y,s_x}\) can take different values according to the values of \(s_x = \pm 1\) and \(s_{y} = \pm 1\):

$$T_{s_x,s_y} = \{\E^{K \pm h}, \E^{-K}\}\,.$$

Therefore, the partition function becomes,

$$Z = \sum_{s_1} \sum_{s_2} \ldots \sum_{s_N} T_{s_1,s_2} T_{s_2,s_3}\ldots T_{s_N,s_1}\,.$$

A generic factor is,

$$\sum_{s_x} T_{s_{x-1},s_x} T_{s_x,s_{x+1}} = (T^2)_{s_{x-1},s_{x+1}}\,,$$

corresponding to the matrix product of two copies of the transfer matrix,

$$T = \begin{pmatrix} \E^{K+h} & \E^{-K} \\ \E^{-K} & \E^{K-h} \\ \end{pmatrix} \,.$$

Using the transfer matrix \(T\) to write the partition function,

$$Z = \mathrm{Tr}\, T^N\,.$$

Therefore, using the invariance of the trace under basis transformations (exercise),

$$Z = \lambda_+^N + \lambda_-^N$$

where \(\lambda_\pm\) are the eigenvalues of the transfer matrix \(T\):

$$\lambda_\pm = \E^K \cosh h \pm \sqrt{\E^{2K}\sinh^2h + \E^{-2K}}\,.$$
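The transfer matrix result can be checked against a direct enumeration of the \(2^N\) configurations of a small ring (a sketch in Python, with illustrative values \(N=8\), \(K=0.7\), \(h=0.3\)):

```python
import numpy as np
from itertools import product

# illustrative values: K = J/T, h = B/T, N spins on a ring
K, h, N = 0.7, 0.3, 8

# transfer matrix: Z = Tr T^N = lambda_+^N + lambda_-^N
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])
lam = np.linalg.eigvalsh(T)
Z_tm = lam[0]**N + lam[1]**N

# direct sum over the 2^N spin configurations (periodic boundaries)
Z_bf = 0.0
for s in product([1, -1], repeat=N):
    Z_bf += np.exp(sum(K*s[x]*s[(x + 1) % N] + 0.5*h*(s[x] + s[(x + 1) % N])
                       for x in range(N)))

print(abs(Z_tm/Z_bf - 1) < 1e-10)  # True
```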

In the thermodynamic limit \(N\rightarrow\infty\), only the larger eigenvalue of the transfer matrix survives in \(Z\):

$$Z = \lambda_+^N+\lambda_-^N= \lambda_+^N \big[1 + (\lambda_-/\lambda_+)^N \big] \approx \lambda_+^N\,,$$

from which we compute the free energy,

$$F(T,B) = -N T \ln\left( \E^{J/T} \cosh(B/T) + \sqrt{\E^{2J/T}\sinh^2(B/T) + \E^{-2J/T}} \right)\,.$$

The mean magnetization per particle, \(m=M/N = \braket{s_x}\), is given by (exercise),

$$m = -\frac{1}{N}\frac{\partial F}{\partial B} = \frac{\E^{2J/T}\sinh(B/T)}{\sqrt{\E^{4J/T}\sinh^2(B/T) + 1}}\,.$$

In the limit of small \(B\) (and finite temperature), the magnetization is proportional to the field:

$$m \approx \chi B\,, \quad \chi = \frac{\E^{2J/T}}{T}\,,$$

a behavior of the paramagnetic type, with a susceptibility inversely proportional to the temperature, enhanced by the factor \(\E^{2J/T}\) depending on the interaction energy. However, in the limit \(T \to 0\), the magnetization becomes,

$$m = \mathrm{sgn}(B)\,,$$

a behavior reminiscent of a phase transition (at critical temperature \(T_c=0\)): indeed, at zero temperature the two values of the magnetization \(m=\pm1\) correspond to an ordered state with all spins pointing up or down.

The correlation function (at \(B=0\)) between two spins separated by a distance \(r\), is defined by,

$$C(r) = \langle s_x s_{x + r} \rangle = \sum_s s_x s_{x + r} P[s] = \frac{1}{Z} \sum_s s_x s_{x + r} \prod_y \E^{J s_y s_{y+1}/T}\,.$$

We introduce, as before, the transfer matrix \(T\),

$$C(r) = \frac{1}{Z} \sum_s \prod_{y=1}^{x-1} T_{s_ys_{y+1}}\, s_x \prod_{y=x}^{x+r-1} T_{s_ys_{y+1}}\, s_{x+r} \prod_{y=x+r}^{N} T_{s_ys_{y+1}}\,,$$

which gives, using the matrix product formula,

$$C(r) = \frac{1}{Z}\, \mathrm{Tr\ } T^{x-1} \sigma_z T^{r} \sigma_z T^{N-r-x+1}\,.$$

Using now the unitary transformation,

$$U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\,,$$

to diagonalize \(T\) (valid at \(B=0\)), and the invariance of the trace under the similarity transformation, \(\mathrm{Tr}\, ABC = \mathrm{Tr}\, (U^\dagger A U)( U^\dagger B U)( U^\dagger C U)\), we obtain (exercise),

$$C(r) = \frac{1}{Z} \mathrm{Tr\ } \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \lambda_+^{r} & 0 \\ 0 & \lambda_-^{r} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \lambda_+^{N-r} & 0 \\ 0 & \lambda_-^{N-r} \end{pmatrix}\,.$$

Therefore the two spin correlation function of an ising system with \(N\) spins is given by the explicit formula,

$$C(r) = \langle s_{x}s_{x+r}\rangle = \frac{\lambda_+^{N-r}\lambda_-^{r} + \lambda_+^{r}\lambda_-^{N-r}}{ \lambda_+^N+\lambda_-^N}$$

In the thermodynamic limit \(N \rightarrow \infty\), we finally get,

$$C(r) = \left(\frac{\lambda_-}{\lambda_+}\right)^r = \E^{-r/\xi},\; \xi = -\frac{1}{\ln \tanh J/T} \,,$$

where \(\xi\) is the correlation length. The exponential decay of correlations indicates the absence of long range order: the 1D ising model has no phase transition at \(T>0\).
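For a finite ring the explicit formula is easily verified by enumeration, using \(\lambda_-/\lambda_+ = \tanh(J/T)\) at \(B=0\) (a sketch with illustrative values \(K = J/T = 1\), \(N = 12\), \(r = 3\)):

```python
import numpy as np
from itertools import product

# illustrative values: K = J/T, ring of N spins, separation r (B = 0)
K, N, r = 1.0, 12, 3

# direct computation of C(r) = <s_0 s_r> over the 2^N configurations
num = den = 0.0
for s in product([1, -1], repeat=N):
    w = np.exp(K*sum(s[x]*s[(x + 1) % N] for x in range(N)))
    num += s[0]*s[r]*w
    den += w
C_bf = num/den

# finite-N transfer matrix formula, with lambda_-/lambda_+ = tanh K
t = np.tanh(K)
C_tm = (t**r + t**(N - r))/(1 + t**N)

print(abs(C_bf - C_tm) < 1e-10)  # True
```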

Mean field spin model

Instead of considering the nearest neighbor interaction, we can investigate the case of long range interaction, where each spin on a lattice indexed by \(x=1,2,\ldots,N\) (\(N\) is the number of spins) can interact with all other spins:

$$H=-\frac{J}{2N} \sum_{x,y} s_x s_y - B \sum_x s_x\,,$$

where the factor \(1/2\) avoids counting each pair \((x,y)\) twice, and the factor \(1/N\) ensures that the energy is extensive (there are \(\sim N^2\) terms of order one in the sum). This model is easily solved by noting that the hamiltonian corresponds to one spin in an effective field \(B_s\) created by the other spins, in addition to the real applied field:

$$B_s = \frac{J}{N}\sum_x s_x + B = Jm + B\,,$$

where

$$m = \frac{1}{N}\sum_x s_x,$$

is the magnetization per site, or, inserting this expression into \(H\),

$$H = \frac{NJ}{2} m^2 - B_s\sum_x s_x\,.$$

The partition function is then,

$$Z = [\E^{-Jm^2/2T}\, 2\cosh(B_s/T)]^N\,,$$

the free energy is,

$$F(T,B) = \frac{NJ}{2} m^2 -NT \ln\left( 2\cosh\frac{B_s}{T} \right)\,,$$

and the magnetization is given by the self-consistent equation,

$$m = \tanh\left( \frac{Jm + B}{T} \right)\,.$$

At variance with the one dimensional case, the mean field model accounts for the existence of a nonzero magnetization, even in the absence of an external magnetic field. Such a state is called ferromagnetic: it is a thermodynamic phase with ordered spins and spontaneous magnetization; it differs from the disordered paramagnetic phase, whose spontaneous magnetization vanishes. Therefore, the mean field model is able to describe a magnetic system undergoing, as a function of the temperature and magnetic field, a phase transition from a disordered state at high temperatures to an ordered state at low temperatures. The critical temperature is given by \(T_c=J\). For \(T > T_c\), only the value \(m = m_P(T) = 0\) is a solution, while for \(T < T_c\) two equilibrium values \(m = \pm m_F(T)\) are found, corresponding to the paramagnetic and ferromagnetic phases respectively.

Let us compute the heat capacity and the susceptibility in the present framework. Calculating the derivative of the free energy with respect to the temperature we find the energy (exercise),

$$\frac{E}{N} = -mB - \frac{Jm^2}{2}\,, \quad m = m_P,\pm m_F\,,$$

in the form of an implicit function of the temperature, through the equilibrium value of the magnetization (\(m_P\) or \(\pm m_F\)). The derivative of \(E\) with respect to \(T\) gives the specific heat,

$$c_V = \frac{1}{T^2} \frac{\left(B + J m\right)^2}{ \cosh^{2}\left(\frac{B + J m}{T}\right) - \frac{J}{T} }\,, \quad m = m_P,\pm m_F\,,$$

where we used the expression of the derivative,

$$\frac{\D m}{\D T} = -\frac{ \frac{\D g}{\D T} }{ \frac{\D g}{\D m} } \,,\quad g = m - \tanh\left( \frac{Jm + B}{T} \right)\,.$$

The susceptibility is computed similarly, differentiating the implicit equation for \(m\) with respect to \(B\),

$$\chi = \frac{1}{T} \left[\cosh^{2}\left(\frac{J m}{T}\right) - \frac{J}{T}\right]^{-1}\,, \quad m = m_P,\pm m_F\,,$$

at \(B=0\).
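The self-consistency equation has no closed form solution, but a simple fixed point iteration converges quickly. A minimal sketch (the coupling \(J_\text{eff}\) denotes the coefficient of \(m\) inside the \(\tanh\); the spontaneous magnetization appears below \(T_c = J_\text{eff}\)):

```python
import numpy as np

def magnetization(T, B=0.0, Jeff=1.0, m0=0.9, niter=2000):
    # fixed point iteration of m = tanh((Jeff*m + B)/T)
    m = m0
    for _ in range(niter):
        m = np.tanh((Jeff*m + B)/T)
    return m

# below T_c = Jeff a spontaneous magnetization survives at B = 0 ...
m_low = magnetization(T=0.5)
# ... above T_c the only solution is m = 0
m_high = magnetization(T=2.0)
print(m_low > 0.9, abs(m_high) < 1e-6)  # True True
```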

Magnetization and susceptibility as a function of the temperature (\(B=0\)). A phase transition occurs at \(T=T_c\) between a high temperature disordered state \(m=0\) (paramagnetic) and a low temperature ordered state \(m \ne 0\) (ferromagnetic); at the transition temperature the susceptibility diverges.

The specific heat (at zero magnetic field),

$$C = \frac{\partial E}{\partial T} = -\frac{NJ}{2} \frac{\D m^2}{\D T}$$

presents a discontinuity at the critical temperature. This behavior is specific to the mean field approximation; more realistic models give a divergence of the form,

$$C \sim c_\pm |T-T_c|^{-\alpha}$$

characterized by the exponent \(\alpha\).

Specific heat capacity of the ferromagnet as a function of the temperature; at the critical temperature it has a jump of \(3/2\). In the paramagnetic state the specific heat of the magnet vanishes.

The two dimensional ising model

Second order phase transitions arise in magnetic materials, superfluids, superconductors, liquid crystals, and solid alloys; although microscopically different, they are described by the same critical exponents: they are examples of order-disorder transitions.

The study of critical properties of physical systems near a phase transition relies, for a large part, on numerical simulation:

  • Monte Carlo methods: allow the investigation of (classical and quantum) equilibrium systems.
  • Kinetic Monte Carlo: suitable for nonequilibrium systems
  • Molecular dynamics: used in soft matter (liquids, polymers)

Monte Carlo methods are based on a Markovian process: they reject or retain a given modification of the state of a system at each step, provided the change optimizes some suitable functional (the energy, for a mechanical system; the Gibbs distribution, for a thermodynamic system).

The dynamics of weakly nonequilibrium systems, well approximated by a Markovian process, or of systems relaxing towards equilibrium, is described by a master equation. It gives the probability \(P\) that a system be in state \(s\) at time \(t+1\), given that at time \(t\) it was in the same state \(s\) or in some other configuration \(s'\),

$$P(s,t + 1) - P(s,t) = \sum_{s'}\big[ T(s' \rightarrow s) P(s',t) - T(s \rightarrow s') P(s,t) \big]$$

where \(T\) is the transition probability between states \(s\) and \(s'\), not necessarily equal in the two directions. The sum is over all possible configurations of the system, \(s' \ne s\). At the stationary state the right hand side must vanish. A sufficient condition is,

$$T(s' \rightarrow s) P(s') = T(s \rightarrow s') P(s)\,,$$

which is called detailed balance.

The Metropolis method uses the thermodynamic equilibrium Gibbs distribution as the stationary probability distribution,

$$P(s) = \frac{ \E^{-\beta E(s)} }{Z}\,, \quad \beta = 1/T$$

(\(\beta\) is the usual notation of the inverse temperature). It is convenient to write the transition probability in the form \(T(s \rightarrow s') = w_s A_{ss'}\), where:

  • \(w\) is the trial step probability, for the ising system this is the probability to choose a random site in the lattice and change its spin
  • and \(A\) is the acceptance probability
    $$\frac{A_{ss'}}{A_{s's}} = \frac{P(s')}{P(s)} = \E^{\beta E(s) - \beta E(s')}$$
    one has,
    $$\frac{T(s \rightarrow s')}{T(s' \rightarrow s)} = \frac{w_{s}}{w_{s'}} \frac{A_{ss'}}{A_{s's}}= \frac{P(s')}{P(s)}$$
    where we assumed symmetric trials \(w_{s}= w_{s'}\).

The metropolis algorithm proceeds in two steps: given a configuration \(s\)

  • we choose a new configuration \(s'\) with probability \(w_{s}\)
  • if the probability of the new configuration satisfies \(P(s') \ge P(s)\) then we accept it
    $$A_{ss'} = 1,\ \text{if } P(s') \ge P(s)$$
    otherwise, we put
    $$A_{ss'} = \frac{P(s')}{P(s)} ,\ \text{if } P(s') < P(s)$$
    and accept \(s'\) with probability \(A_{ss'}\); in summary:
    $$A_{ss'} = \min[1, P(s')/P(s)]$$

This algorithm ensures that the detailed balance condition is satisfied and that the asymptotic, stationary, probability distribution is the Gibbs distribution at the given temperature.
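As a minimal illustration, one can iterate the master equation for a single spin in a field \(B\), with Metropolis transition rates, and check that it converges to the Gibbs distribution (a sketch with illustrative values \(\beta=1\), \(B=0.5\)):

```python
import numpy as np

# single spin in a field: states s = +1, -1, energies E(s) = -B*s,
# Metropolis transition probabilities T(s -> s') = min(1, exp(-beta*dE))
beta, B = 1.0, 0.5
E = {+1: -B, -1: +B}
def rate(s, sp):
    return min(1.0, np.exp(-beta*(E[sp] - E[s])))

# iterate the master equation starting from the uniform distribution
P = {+1: 0.5, -1: 0.5}
for _ in range(200):
    flow = rate(-1, +1)*P[-1] - rate(+1, -1)*P[+1]
    P[+1] += flow
    P[-1] -= flow

# the stationary distribution is the Gibbs one
Z = np.exp(beta*B) + np.exp(-beta*B)
print(abs(P[+1] - np.exp(beta*B)/Z) < 1e-12)  # True
```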

In the metropolis algorithm only one spin is reversed at each step; near the transition, where the spins form large patches of the same orientation (clusters), the rejection probability is large. To remedy this, the simultaneous reversal of many spins is much more efficient. This is the idea of the Wolff cluster algorithm.

At variance with the metropolis algorithm, which assumed a trial probability independent of the configuration, let us now allow \(w_s \ne w_{s'}\). The condition of detailed balance becomes,

$$w_s A_{ss'}P(s) = w_{s'} A_{s's}P(s')$$

For the construction of the cluster we must specify, in addition to the Gibbs weights, the rules for accepting or rejecting the new configuration.

Let us consider a cluster \(C\) of \(s=1\) spins; at its border \(\partial C\) there are \(n_1\) pairs \((1,1)\) and \(n_2\) pairs \((1,-1)\). The contribution of the border to the energy is

$$E_{\partial C} = n_2 - n_1$$

(in units of the exchange constant \(J=1\)). The gibbs probability of the \(s\) spin configuration is then

$$P(s) \sim \E^{-\beta(n_2-n_1)}\,.$$

If \(p\) is the probability to add an aligned site to the cluster, we can assume the trial probability associated to \(s\) to be of the form

$$w_{s} \sim (1-p)^{n_1}\,,$$

the a priori probability to stop growing the cluster at its \(n_1\) aligned border pairs; the cluster spins are then reversed, leading to a new configuration \(s'\). Therefore, using the same rules to go back to the configuration \(s\) from \(s'\) (where the roles of \(n_1\) and \(n_2\) are exchanged), the acceptance probability is given by the rule,

$$A_{ss'} = \min{ \left[1, \frac{\E^{-\beta(n_1-n_2)}}{\E^{-\beta(n_2-n_1)}} \frac{(1-p)^{n_2}}{(1-p)^{n_1}} \right] } = 1,\ \text{if } p=1-\E^{-2\beta}\,;$$

this means that with this special value of \(p\), once the cluster is formed, we reverse it with probability \(1\). It is the existence of this special value of \(p\), leading to full acceptance, that makes the algorithm particularly performant.

Numerical computation

The ising model in a square lattice of size \(L\),

$$H=-\sum_{\langle x,y \rangle} s_x s_y\,,$$

where the sum runs over nearest neighbor pairs of sites \(\langle x,y \rangle\) of the \(L \times L\) lattice (with \(J=1\) the energy unit), was exactly solved by Onsager (1944). It undergoes a ferromagnetic transition at the temperature \(T_c = 2/\log(1+\sqrt{2})\). The magnetization per site \(m=\sum_x s_x/L^2\) is near \(1\) in the low temperature ferromagnetic state, and near \(0\) in the high temperature paramagnetic state. At the transition point the heat capacity

$$c_V \sim -\log|1-T/T_c|$$

has a logarithmic cusp and the susceptibility

$$\chi \sim |1-T/T_c|^{-\gamma}$$

diverges as a power law with a characteristic exponent \(\gamma = 7/4\). The magnetization is given by the formula,

$$m(T) = \left[1 - \sinh^{-4}\frac{2}{T} \right]^{1/8}\,, \quad T < T_c\,,$$

which gives, near the transition, also a power law with characteristic exponent \(\beta=1/8\) (the mean field gives \(\beta=1/2\)).
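These exact results are easy to evaluate numerically (a short sketch; it checks the value of \(T_c\) and the behavior of the spontaneous magnetization):

```python
import numpy as np

# critical temperature of the square lattice ising model (J = 1)
Tc = 2.0/np.log(1.0 + np.sqrt(2.0))
print(round(Tc, 3))  # 2.269

# Onsager-Yang spontaneous magnetization, valid for T < Tc
def m_onsager(T):
    return (1.0 - np.sinh(2.0/T)**(-4))**0.125

print(round(m_onsager(1.0), 3))    # 0.999, deep in the ordered phase
print(m_onsager(0.9999*Tc) < 0.5)  # True: m vanishes (slowly, beta = 1/8) at Tc
```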

We use the monte carlo method to investigate the statistical properties of the ising system; here we give, as an example, the two dimensional routines of the metropolis and cluster algorithms.

#
# Metropolis algorithm
#
# Input:
#   s:     spin lattice configuration (L x L array of +-1)
#   beta:  inverse temperature 1/T
#   M, E:  current total magnetization and energy
# Output:
#   s, M, E  after one sweep (L^2 spin flip attempts)
import numpy as np

def metropolis(s, beta, M, E):
    L = s.shape[0]
    for _ in range(L**2):
        # choose a random site
        i = np.random.randint(L)
        j = np.random.randint(L)
        # compute energy required to flip spin (periodic boundaries)
        dE = 2.0*s[i, j]*(s[(i+1) % L, j] + s[(i-1) % L, j] +
                          s[i, (j+1) % L] + s[i, (j-1) % L])
        # Metropolis rule to see if we should accept the trial
        if dE <= 0.0 or np.random.random() <= np.exp(-beta*dE):
            # accept trial: reverse spin; update M and E
            s[i, j] *= -1
            M += 2.0*s[i, j]
            E += dE
    return s, M, E
#
# Cluster algorithm
#
import numpy as np

def wolff(s, kT):
    """
    W. Krauth, Statistical Mechanics: Algorithms and Computations
    http://www.smac.lps.ens.fr/index.php/Python_program
    Input:
        lattice s
        temperature kT (in units of J)
    Output:
        s, size of the flipped cluster
    """
    p = 1.0 - np.exp(-2.0/kT)  # probability to add an aligned neighbor
    L = s.shape[0]
    x, y = np.random.randint(L), np.random.randint(L)
    Pocket = [(x, y)]    # sites whose neighbors remain to be examined
    Cluster = {(x, y)}   # sites belonging to the cluster
    while Pocket:
        x, y = Pocket.pop()
        for site in [(x, (y+1) % L), ((x+1) % L, y),
                     (x, (y-1) % L), ((x-1) % L, y)]:
            if s[site] == s[x, y] and site not in Cluster \
                    and np.random.random() < p:
                Pocket.append(site)
                Cluster.add(site)
    for site in Cluster:
        s[site] = -s[site]
    return s, len(Cluster)

The figures below show magnetization snapshots of the two phases and at the transition.

Magnetization distribution of a 2D ising lattice below, at, and above the critical temperature, \(T_c \approx 2.269\) (or \(\beta_c=0.4407\)). The critical state is characterized by the existence of large clusters of ferromagnetic domains, randomly distributed on the lattice.

We can also compute the thermodynamic quantities, and observe their singular behavior at the transition. A quantitative comparison with the Onsager formulas can be made. In addition, the method can of course be generalized to take into account an external field, or to go to higher dimensions. In the three dimensional (cubic) lattice, the exponent characterizing the vanishing of the magnetization at the transition is \(\beta \approx 0.326\).

Magnetization, energy and heat capacity as a function of the temperature for the 2D ising model, from cluster monte carlo simulations. The heat capacity is shown in the region of the phase transition on a logarithmic scale. The three sets of data correspond to \(L=20\) (red), \(L=64\) (gray), and \(L=128\) (black), to emphasize the finite size effects on the singularities present in the thermodynamic limit. The vertical line shows the critical temperature \(T_c=2.269\).

Notes


  1. Barabási, A. L. and Stanley, H. E., Fractal Concepts in Surface Growth (Cambridge, 1995) 

  2. Barrat, A., Barthélemy, M., and Vespignani, A., Dynamical Processes in Complex Networks (Cambridge, 2008) 

  3. Krapivsky, P., Redner, S., and Ben-Naim, E., A Kinetic View of Statistical Physics (Cambridge, 2010)