# Introduction to Likelihood Theory/The Basic Definitions


## Formal Probability Review

Let ${\displaystyle D}$ be a set contained in ${\displaystyle R^{k}}$, and let ${\displaystyle dm\left(x\right)}$ denote the counting measure if ${\displaystyle D}$ is discrete, the Lebesgue measure if ${\displaystyle D}$ is continuous, and the Stieltjes measure otherwise. (If you don't know what a measure on a ${\displaystyle \sigma -\mathrm {algebra} }$ is, look it up in w:measure (mathematics), or simply read the integrals below as the usual integrals from calculus when ${\displaystyle D}$ is continuous, and as sums over ${\displaystyle D}$ when ${\displaystyle D}$ is discrete.)

Definition: A function ${\displaystyle f(x):D\rightarrow R}$ is a probability density function (abbreviated pdf) if and only if
${\displaystyle \int _{D}fdm(x)=1}$
and
${\displaystyle f(x)\geq 0~\forall x\in D}$.
We say that a random variable ${\displaystyle X}$ has pdf ${\displaystyle f}$ if the probability of ${\displaystyle X}$ being in any set ${\displaystyle S}$ is given by the expression
${\displaystyle \int _{S}f(x)dm(x)}$
(if you don't know measure theory, consider that ${\displaystyle S}$ is an interval on the real line).
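As a numerical sanity check of the two defining conditions, here is a short Python sketch (the function name `gaussian_pdf` and the grid parameters are illustrative choices, not part of the text) that verifies nonnegativity and approximates the integral of the Gaussian density from Exercise 1.1 below by a Riemann sum:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma2=1.0):
    # The density of Exercise 1.1, with illustrative default parameters.
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

# Check nonnegativity on a grid, and approximate the integral over a wide
# interval by a Riemann sum; the tails outside [-10, 10] are negligible.
h = 0.001
grid = [-10.0 + i * h for i in range(int(20.0 / h) + 1)]
assert all(gaussian_pdf(x) >= 0.0 for x in grid)
total = sum(gaussian_pdf(x) * h for x in grid)
print(round(total, 6))  # close to 1
```

This only approximates the integral, of course; Exercise 1.1 asks for the exact computation.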

Exercise 1.1 - Show that ${\displaystyle f(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\exp \left\{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right\}}$ is a pdf with ${\displaystyle D=R}$, ${\displaystyle \mu \in R}$ and ${\displaystyle \sigma ^{2}>0}$.
Exercise 1.2 - Show that we can build a distribution function from the function ${\displaystyle g(x)=0}$ if ${\displaystyle x<k}$, ${\displaystyle g(x)=f(x)}$ otherwise (${\displaystyle k}$ is any real number and ${\displaystyle f}$ is defined in the previous exercise) by multiplying it by an appropriate constant. Find the constant. Generalize to any pdf defined on the real line.
Exercise 1.3 - If ${\displaystyle X}$ has distribution ${\displaystyle f}$ with ${\displaystyle \mu =1}$ and ${\displaystyle \sigma ^{2}=1}$, what is the distribution of ${\displaystyle Y=X^{2}}$? (Calculate it; don't look it up in probability books.) In statistics, the term probability density function is often abbreviated to density.

Definition: Let ${\displaystyle X}$ be a random variable with density ${\displaystyle f}$. The Cumulative Distribution Function (cdf) of ${\displaystyle X}$ is the function defined as

${\displaystyle F(x)=\int _{-\infty }^{x}f(t)dm(t)}$

This function is often called the distribution function or simply the distribution. Since the distribution uniquely determines the density, statisticians use the terms distribution and density as synonyms (provided no ambiguity arises from the context).
Exercise 1.4 - Prove that every cdf is nondecreasing.
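The definition can be mirrored numerically: accumulating the density from the left produces a function that never decreases, which is the content of Exercise 1.4. A minimal sketch, assuming the Gaussian density of Exercise 1.1 and made-up grid parameters (`lo`, `h`):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma2=1.0):
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

def gaussian_cdf(x, mu=0.0, sigma2=1.0, lo=-10.0, h=0.001):
    # F(x) approximated as a Riemann sum of the density from lo up to x.
    n = max(0, int((x - lo) / h))
    return sum(gaussian_pdf(lo + i * h, mu, sigma2) * h for i in range(n))

# The cdf is nondecreasing, as Exercise 1.4 asks you to prove in general.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
vals = [gaussian_cdf(x) for x in xs]
assert all(a <= b for a, b in zip(vals, vals[1:]))
print(round(gaussian_cdf(0.0), 3))  # about 0.5, by symmetry of the density
```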

Definition: Let ${\displaystyle X}$ be a random variable. The expectation of the function ${\displaystyle g(X)}$ is the value

${\displaystyle E[g(X)]=\int _{-\infty }^{\infty }g(x)f(x)dm(x)}$

where ${\displaystyle f}$ is the density of ${\displaystyle X}$. The expectation of the identity function is called the expectation of ${\displaystyle X}$.
Exercise 1.5 - Compute the expectation of the random variable defined in Exercise 1.1.
Exercise 1.6 - Show that ${\displaystyle E[c]=c}$ for any constant ${\displaystyle c}$.
Exercise 1.7 - Show that ${\displaystyle E[g(X)+c]=E[g(X)]+c}$ for any constant ${\displaystyle c}$.
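The definition of expectation also suggests a simple numerical experiment: averaging ${\displaystyle g}$ over many draws of ${\displaystyle X}$ approximates ${\displaystyle E[g(X)]}$. A sketch for a standard Gaussian ${\displaystyle X}$ (the sample size and seed are arbitrary choices):

```python
import random

random.seed(0)

# Monte Carlo estimate of E[g(X)]: average g over draws of X.
# random.gauss draws from a Gaussian with the given mean and standard deviation.
n = 200_000
draws = [random.gauss(0.0, 1.0) for _ in range(n)]

def expectation(g):
    return sum(g(x) for x in draws) / n

print(expectation(lambda x: x))        # close to 0 (compare Exercise 1.5)
print(expectation(lambda x: x + 3.0))  # close to 3 (compare Exercise 1.7)
```

This is only an estimate; the exercises ask for the exact values from the integral definition.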

## In The Beginning There Were Chaos, Empirical Densities and Samples

A population is a collection of objects (a collection, not a proper set or class from a logicist point of view) where each object has an array of measurable variables. Examples include the set of all people on Earth together with their heights and weights, and the set of all fish in a lake together with artificial marks on them; the latter case arises in capture-recapture studies (I suggest you look up capture-recapture studies on Wikipedia). Let ${\displaystyle s}$ be an element of a population and ${\displaystyle V(s)}$ be the array of variables measured on the object ${\displaystyle s}$ (for example, ${\displaystyle s}$ is a man and ${\displaystyle V(s)}$ is his height and weight measured at some arbitrary instant, or ${\displaystyle s}$ is a fish and ${\displaystyle V(s)}$ is ${\displaystyle 1}$ if it has a man-made mark on it and ${\displaystyle 0}$ otherwise). A sample of a population ${\displaystyle P}$ is a collection ${\displaystyle S}$ (again, not a set) such that ${\displaystyle s\in S\Rightarrow \exists r\in P}$ with ${\displaystyle V(s)=V(r)}$.

There are two main methods for generating samples: sampling with replacement and sampling without replacement. In sampling without replacement, you randomly select an element ${\displaystyle a_{1}}$ of ${\displaystyle P}$ and call the set ${\displaystyle S_{1}=\left\{a_{1}\right\}}$ your first subsample. Define your (n+1)-th subsample as the set ${\displaystyle S_{n+1}=S_{n}\bigcup \left\{R(P-S_{n})\right\}}$, where ${\displaystyle R(X)}$ is a function returning a randomly chosen element of ${\displaystyle X}$. Any subsample generated using the definitions above is called a sample without replacement; this is the more intuitive kind of sample, but also one of the most complicated to obtain in a real-world situation. In sampling with replacement, ${\displaystyle S_{1}}$ and ${\displaystyle R(X)}$ are defined in the same way, but now ${\displaystyle S_{n+1}=S_{n}\bigcup \left\{s_{n}:V(R(P))=V(s_{n})\right\}}$. Samples with replacement have the peculiar property that they may contain different objects with the same characteristics.
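The two schemes can be sketched directly in code. In this toy example the population and its labels are made up; the point is only that one scheme draws from the population minus the objects already taken, while the other always draws from the whole population:

```python
import random

random.seed(1)

population = ["fish%d" % i for i in range(10)]  # a toy population

def sample_without_replacement(pop, n):
    # Each step draws from the population minus the objects already taken.
    remaining, sample = list(pop), []
    for _ in range(n):
        s = random.choice(remaining)
        remaining.remove(s)
        sample.append(s)
    return sample

def sample_with_replacement(pop, n):
    # Each step draws from the whole population, so repeats can occur.
    return [random.choice(pop) for _ in range(n)]

without = sample_without_replacement(population, 5)
with_rep = sample_with_replacement(population, 5)
assert len(set(without)) == 5  # no repeats without replacement
print(without, with_rep)
```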

TO DO: Some stuff on empirical densities and example of real-world sampling techniques.

## Likelihoods, Finally

Given a random vector ${\displaystyle Y=\left[Y_{1}~Y_{2}~\cdots ~Y_{n}\right]^{T}}$ with density ${\displaystyle f_{Y}(y,\theta )}$, where ${\displaystyle \theta }$ is a vector of parameters, and an observation ${\displaystyle y'=\left[y'_{1}~y'_{2}~\cdots ~y'_{n}\right]^{T}}$ of ${\displaystyle Y}$, we define the likelihood function associated with ${\displaystyle y'}$ as

${\displaystyle L\left(\theta \right)=f_{Y}\left(y',\theta \right)}$

This is a function of ${\displaystyle \theta }$ alone, not of ${\displaystyle Y}$, of an observation ${\displaystyle y}$, or of any other related quantity, for ${\displaystyle L(\theta )}$ is the restriction of the function ${\displaystyle f_{Y}}$, a function of ${\displaystyle n+dim(\theta )}$ variables, to the subspace where the ${\displaystyle y}$ are fixed.

In many applications we have that, for all ${\displaystyle j,i\in \{1,2,\ldots ,n\}}$ with ${\displaystyle i\neq j}$, ${\displaystyle Y_{j}}$ and ${\displaystyle Y_{i}}$ are independent. Suppose that we draw a student from a closed classroom at random, record his height ${\displaystyle y'_{1}}$, and put him back. If we repeat the process ${\displaystyle n}$ times, the set of heights measured forms an observed vector ${\displaystyle y'=\left[y'_{1}~y'_{2}~\cdots ~y'_{n}\right]^{T}}$, and our ${\displaystyle Y}$ variable is the height of a student drawn at random from that classroom. Then our independence supposition is fulfilled, as it will be for any sampling scheme with replacement. When the supposition holds, the above definition of the likelihood function is equivalent to

${\displaystyle L(\theta )=\prod _{j=1}^{n}f_{Y_{j}}(y'_{j},\theta )}$

where ${\displaystyle f_{Y_{j}}(y'_{j},\theta )}$ is the probability density function of the variable ${\displaystyle Y_{j}}$.
Exercise 3.1: Let ${\displaystyle X_{j}}$ have a Gaussian density with zero mean and unit variance for all ${\displaystyle j}$. Compute the likelihood function of ${\displaystyle Y_{1}=X_{1}}$ and ${\displaystyle Y_{2}=X_{1}+X_{2}}$ for an arbitrary sample.
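The product form above translates directly into code. A minimal sketch for an independent Gaussian sample, where the observed vector `y_obs`, the parameterization `theta = (mu, sigma2)`, and the function names are all illustrative choices:

```python
import math

def gaussian_pdf(x, mu, sigma2):
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

def likelihood(theta, observations):
    # L(theta) = product over j of f(y'_j, theta), valid under independence.
    mu, sigma2 = theta
    L = 1.0
    for y in observations:
        L *= gaussian_pdf(y, mu, sigma2)
    return L

y_obs = [0.8, 1.1, 0.9, 1.3]  # a made-up observed vector y'
# Once y' is fixed, L is a function of theta alone; values of theta near the
# data give a larger likelihood than values far from it:
assert likelihood((1.0, 1.0), y_obs) > likelihood((5.0, 1.0), y_obs)
```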

## Intuitive Meaning?

Despite its name, the function we call likelihood is not directly related to the probability of events involving ${\displaystyle Y}$ or any proper subset of it, but it has a non-obvious relation to the probability of the sample as a whole being selected from the space of all possible samples. This can be seen if we use discrete densities (or probability generating functions). Suppose that each ${\displaystyle y_{j}}$ has a binomial distribution with ${\displaystyle m}$ trials and success probability ${\displaystyle p_{j}}$, and that they are independent. Then the likelihood function associated with a sample ${\displaystyle y'=\left[y'_{1}~y'_{2}~\cdots ~y'_{n}\right]^{T}}$ is

${\displaystyle L(\theta )=\prod _{j=1}^{n}C_{m,y'_{j}}p_{j}^{y'_{j}}(1-p_{j})^{m-y'_{j}}}$

where each ${\displaystyle y'_{j}}$ is in ${\displaystyle \{0,1,\ldots ,m\}}$, and ${\displaystyle C_{m,k}}$ means ${\displaystyle choose\left(m,k\right)}$. This function is the probability of this particular sample appearing among all possible samples of the same size, but this train of thought only works in discrete cases with a finite sample space.
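A short sketch of the binomial likelihood, simplified to a single common success probability ${\displaystyle p}$ (an illustrative choice; the text allows a different ${\displaystyle p_{j}}$ per coordinate):

```python
from math import comb

def binomial_likelihood(p, m, observations):
    # L(p) = product over j of C(m, y'_j) * p**y'_j * (1 - p)**(m - y'_j).
    L = 1.0
    for y in observations:
        L *= comb(m, y) * p ** y * (1.0 - p) ** (m - y)
    return L

# With m = 1 each y'_j is 0 or 1, and L(p) is exactly the probability of
# observing this particular sequence of coin flips.
flips = [1, 0, 1, 1, 0]
print(binomial_likelihood(0.5, 1, flips))  # 0.5**5 = 0.03125
```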
Exercise 4.1: In the binomial case, does ${\displaystyle L(\theta _{1})>L(\theta _{0})}$ have any probabilistic meaning? If the observed values are throws of regular fair coins, what can you expect of the function ${\displaystyle L(\theta )}$?

But the likelihood does have a comparative meaning. Suppose that we are given two observations of ${\displaystyle Y}$, namely ${\displaystyle Y_{1}}$ and ${\displaystyle Y_{2}}$. Then each observation defines a likelihood function, and for each fixed ${\displaystyle \theta _{0}}$ we may compare their likelihoods ${\displaystyle L_{1}\left(\theta _{0}\right)}$ and ${\displaystyle L_{2}\left(\theta _{0}\right)}$ to argue that the one with the bigger value is more likely to occur. This argument is equivalent to Fisher's rant against Inverse Probabilities.
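The comparative use can be illustrated in the binomial setting with ${\displaystyle m=1}$; the fixed parameter value and the two samples below are made up for the example:

```python
from math import comb

def binomial_likelihood(p, m, observations):
    # L(p) for an independent binomial sample with common success probability p.
    L = 1.0
    for y in observations:
        L *= comb(m, y) * p ** y * (1.0 - p) ** (m - y)
    return L

theta0 = 0.9               # a fixed parameter value, chosen for illustration
sample1 = [1, 1, 1, 0, 1]  # mostly successes
sample2 = [0, 0, 1, 0, 0]  # mostly failures
L1 = binomial_likelihood(theta0, 1, sample1)
L2 = binomial_likelihood(theta0, 1, sample2)
assert L1 > L2  # under p = 0.9, sample1 is the more plausible observation
```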

## Bayesian Generalization

Even if most classical statisticians (also called "frequentists") complain, we must discuss this generalization of the likelihood function concept. Given that the vector ${\displaystyle Y}$ has a density conditional on ${\displaystyle X}$, written ${\displaystyle f_{Y|X}(y|x)}$, and that we have an observation ${\displaystyle x'}$ of ${\displaystyle X}$ (I said ${\displaystyle X}$; forget about observations of ${\displaystyle Y}$ in this section!), we will play a little with the function

${\displaystyle L\left(Y\right)=f_{Y|X}\left(y|x'\right)}$

Before anything else, Exercise 5.1: Find two tractable discrete densities with a known conditional density and compute their likelihood function. Relate ${\displaystyle L\left(Y\right)}$ to ${\displaystyle L\left(X\right)}$.
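One concrete discrete setting, in the spirit of Exercise 5.1 but not the exercise itself (the model below, Y given X = x being binomial with x trials, is a hypothetical choice):

```python
from math import comb

def conditional_density(y, x, q=0.5):
    # f_{Y|X}(y|x): given X = x, Y is binomial with x trials and success
    # probability q. This particular model is made up for illustration.
    if not 0 <= y <= x:
        return 0.0
    return comb(x, y) * q ** y * (1.0 - q) ** (x - y)

x_obs = 4  # the observed value x' of X
# L(Y) = f_{Y|X}(y|x') is a function of y once x' is fixed:
L = [conditional_density(y, x_obs) for y in range(x_obs + 1)]
assert abs(sum(L) - 1.0) < 1e-12  # as a function of y it is still a density
print(L)
```

Note the contrast with the classical likelihood: here the function varies over ${\displaystyle y}$ with the observed ${\displaystyle x'}$ held fixed.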

## Thank you for reading

Some comments are needed. The "?" in the previous section title is intentional, to show how confusing this material can be. This chapter needs more exercises and examples from outside formal probability. As it stands, it requires a good background in formal probability and much more experience with sampling.