# Limiting probabilities

The probability that a continuous-time Markov chain will be in state n at time t often converges to a limiting value that is independent of the initial state. For a birth and death process with birth rates λ_n and death rates μ_n, we call this value P_n, where P_n is equal to:

${\displaystyle P_{n}={\frac {\lambda _{0}\lambda _{1}\cdots \lambda _{n-1}}{\mu _{1}\mu _{2}\cdots \mu _{n}\left(1+\sum _{k=1}^{\infty }{\frac {\lambda _{0}\lambda _{1}\cdots \lambda _{k-1}}{\mu _{1}\mu _{2}\cdots \mu _{k}}}\right)}}.}$
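As a concrete sketch (an assumed example, not part of the original derivation), the formula can be evaluated numerically for constant rates λ_n = λ = 1 and μ_n = μ = 2, i.e. an M/M/1-style queue, truncating the infinite sum. For these rates the known closed form P_n = (1 − ρ)ρ^n with ρ = λ/μ provides a check:

```python
from math import prod

# Assumed rates for illustration: lambda_n = lam for all n >= 0 and
# mu_n = mu for all n >= 1 (an M/M/1 queue with lam < mu).
lam, mu = 1.0, 2.0

def rate_product(n):
    """(lambda_0 * ... * lambda_{n-1}) / (mu_1 * ... * mu_n); equals 1 for n = 0."""
    return prod(lam / mu for _ in range(n))

# Truncate the infinite sum at a large N; the terms decay geometrically
# here because lam < mu, so the truncation error is negligible.
N = 200
normalizer = 1 + sum(rate_product(n) for n in range(1, N + 1))

def P(n):
    """Limiting probability of state n, per the formula above."""
    return rate_product(n) / normalizer

# Check against the known M/M/1 closed form P_n = (1 - rho) * rho**n.
rho = lam / mu
assert abs(P(0) - (1 - rho)) < 1e-12
assert abs(P(3) - (1 - rho) * rho**3) < 1e-12
```

The truncation level N is an arbitrary choice that works here because the series converges geometrically; a divergent series (see the condition below) would have no such limiting distribution.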

For a limiting probability to exist, it is necessary that

${\displaystyle \sum _{n=1}^{\infty }{\frac {\lambda _{0}\lambda _{1}\cdots \lambda _{n-1}}{\mu _{1}\mu _{2}\cdots \mu _{n}}}<\infty .}$

This condition may be shown to be sufficient.
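As a minimal numerical sketch of this condition (again assuming constant rates λ_n = λ and μ_n = μ for illustration), the terms of the series become (λ/μ)^n, so the sum is finite exactly when λ < μ:

```python
def partial_sum(lam, mu, N):
    """Partial sum of the series (lambda_0...lambda_{n-1})/(mu_1...mu_n)
    for constant rates lambda_n = lam and mu_n = mu, truncated at N terms."""
    return sum((lam / mu) ** n for n in range(1, N + 1))

# lam < mu: the geometric series converges (here to lam/(mu - lam) = 1),
# so limiting probabilities exist.
print(partial_sum(1.0, 2.0, 1000))  # stays near 1.0

# lam > mu: the terms grow, the series diverges, and no limiting
# distribution exists.
print(partial_sum(2.0, 1.0, 200))   # grows without bound
```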

We can determine the limiting probabilities for a birth and death process from these equations by equating, for each state, the rate at which the process leaves the state with the rate at which it enters it.
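Written out for a birth and death process (the standard balance equations, stated here for completeness), the "rate out = rate in" conditions are:

${\displaystyle \lambda _{0}P_{0}=\mu _{1}P_{1},}$

${\displaystyle (\lambda _{n}+\mu _{n})P_{n}=\lambda _{n-1}P_{n-1}+\mu _{n+1}P_{n+1},\qquad n\geq 1.}$

Solving these recursively expresses each P_n in terms of P_0, and normalizing so that the probabilities sum to 1 recovers the expression for P_n given above.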