# MyOpenMath/Solutions/Gauss law (TF)/Proof

Gauss's law is based on a coincidence that might at first not strike you as very remarkable: the surface area of a sphere grows as the square of the radius, while the Coulomb force falls as the inverse square of the radius:

${\displaystyle F=qE={\frac {1}{4\pi \varepsilon _{0}}}{\frac {qQ}{r^{2}}}}$

Serious consequences for the theory of electromagnetism (and the nature of light) would result if it were ever found necessary to replace ${\displaystyle r^{2}}$ by something like ${\displaystyle r^{2.0001}}$ in this formula. For this reason, our discussion of Gauss's law begins with the area of a sphere. First we review the radian and the fact that the circumference of a circle is ${\displaystyle 2\pi r}$ (it is also important to know the formula for the surface area of a sphere: ${\displaystyle A_{\text{sphere}}=4\pi r^{2}}$.)

Here θ ≈ 0.572 rad ≈ 32.77° because, for the angle shown in the figure, the cone formed by the intersection with the plane that divides the sphere between regions 1 and 2 subtends a solid angle of Ω = 1 steradian.

The radian is defined as arclength divided by radius: ${\displaystyle \theta =s/r}$. If ${\displaystyle s=r}$, then we have an angle of 1 radian, as shown to the left. A full circle measures 2π radians.

For solid angle, we replace the circle by a sphere of radius r, and we replace the arclength by an area on that sphere. Instead of the radian, defined as the ratio of two lengths, ${\displaystyle \theta =s/r}$, we use the steradian, defined as the ratio of two quantities that are squares of lengths: ${\displaystyle \Omega =A/r^{2}}$, where Ω (the capital Greek omega) is the solid angle and A is an area situated on a sphere of radius r. Since the area of a sphere can be shown to be ${\displaystyle 4\pi r^{2}}$, the solid angle of an entire sphere is 4π.
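As a quick numeric sanity check (a sketch, not part of the derivation), the ratio ${\displaystyle \Omega =A/r^{2}}$ gives 4π for the whole sphere, and the standard spherical-cap formula Ω = 2π(1 − cos θ) (assumed here without derivation) recovers the half-angle of a cone subtending exactly 1 steradian, matching the value quoted in the caption above:

```python
import math

# Solid angle of the entire sphere: Omega = A / r^2 = 4*pi*r^2 / r^2 = 4*pi
r = 2.5                                   # any radius; it cancels in the ratio
A_sphere = 4 * math.pi * r**2
Omega_sphere = A_sphere / r**2
print(Omega_sphere)                       # 4*pi, about 12.566

# Half-angle of a cone subtending exactly 1 steradian, from the spherical-cap
# formula Omega = 2*pi*(1 - cos(theta)) (assumed, not derived here):
theta = math.acos(1 - 1 / (2 * math.pi))
print(theta, math.degrees(theta))         # ~0.572 rad, ~32.77 degrees
```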

For a sufficiently small solid angle, the portion of the sphere where the area A is calculated is so small that we may calculate the area as if the sphere were a flat surface. In contrast with angles, where two arcs described by the same θ and r have the same shape, there is no restriction on the shape of the area associated with a solid angle.

## The electric field near a two-dimensional surface

*Open surface:* A solid cone crossing a Gaussian surface at three places. The base of the cone is a circle shown in yellow.
*Closed surface:* A closeup at the first exit. All the points on the base shaded in yellow are equidistant from the origin, while the larger grey oval follows the local contour of the Gaussian surface.

Gauss's law is about an integral over a closed surface. When thinking about surface integrals, one needs to imagine dividing up the surface into small sections, typically small quadrilaterals. A closed surface has an "inside" and "outside", such as the bent peanut shown to the left.[1]

To construct these differential surface elements it helps to think about differential (small) solid angles. Consider a small shape of area ${\displaystyle dA}$ on the surface of a sphere with a sufficiently large radius ${\displaystyle r}$:

${\displaystyle \;\;d\Omega ={\frac {dA}{r^{2}}}{\text{ (valid only for a sphere).}}}$

Shown to the right is a cone centered at point O, with its solid angle defined by the circle shown in yellow (dotted outline) at the far right of the figure. Since this is a 3-D image, the circle is depicted as an ellipse from this perspective. The Gaussian surface in this figure surrounds point O, and the surface is shaped like a bent peanut so that the cone exits, re-enters, and then again exits the Gaussian surface.

If the surface's outward unit normal ${\displaystyle {\hat {n}}}$ is not oriented along the ${\displaystyle {\vec {r}}}$ vector (from origin to surface), we cannot use the differential area ${\displaystyle dA}$ directly to calculate the differential solid angle ${\displaystyle d\Omega }$, because the differential area of the Gaussian surface is too large. This is illustrated below, where the solid angle differential is now a small rectangle. The surface with polka dots represents a portion of the Gaussian surface, and the points on this surface are not all equidistant from the origin. To calculate the solid angle we require the yellow surface, which strictly speaking is part of the surface of a sphere of radius ${\displaystyle r}$.

The yellow surface is the projection of the polka-dotted surface along the ${\displaystyle {\hat {r}}}$ direction. If the polka-dotted surface is ${\displaystyle d{\vec {S}}={\hat {n}}\,dA}$, the yellow surface is ${\displaystyle {\hat {r}}{\hat {r}}\cdot {\hat {n}}\,dA={\hat {r}}\cos \theta \,dA}$. The entity ${\displaystyle {\hat {r}}{\hat {r}}}$ is known as a dyadic product.

This figure also allows us to visualize the components of a differential surface area. The polka-dotted surface area is the sum of two surface areas that are perpendicular to each other:

${\displaystyle d{\vec {S}}={\hat {n}}dA=d{\vec {A}}_{\parallel }+d{\vec {A}}_{\perp }}$,

where ${\displaystyle dA\equiv |d{\vec {S}}|}$, and,

${\displaystyle d{\vec {A}}_{\parallel }={\hat {r}}({\hat {r}}\cdot {\hat {n}})\,dA}$

is the component of ${\displaystyle d{\vec {S}}}$ parallel to ${\displaystyle {\vec {r}}}$. The perpendicular component ${\displaystyle d{\vec {A}}_{\perp }}$ is shown in the figure as the unmarked bottom rectangle in the right triangular prism whose other two sides are the polka-dotted and yellow shaded rectangles in the figure. The reader can verify the Pythagorean identity, ${\displaystyle dA^{2}=dA_{\parallel }^{2}+dA_{\perp }^{2}}$.
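A minimal numeric sketch of this decomposition, using arbitrarily chosen (hypothetical) directions for ${\displaystyle {\hat {r}}}$ and ${\displaystyle {\hat {n}}}$, confirms the Pythagorean identity:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Arbitrarily chosen unit vectors (example values only):
r_hat = normalize((1.0, 2.0, 2.0))
n_hat = normalize((0.5, -1.0, 3.0))
dA = 1e-4                                  # magnitude of the surface element

# Parallel component: dA_par = r_hat (r_hat . n_hat) dA
dA_par_vec = tuple(c * dot(r_hat, n_hat) * dA for c in r_hat)
# Perpendicular component is whatever remains of dS = n_hat dA
dS = tuple(c * dA for c in n_hat)
dA_perp_vec = tuple(s - p for s, p in zip(dS, dA_par_vec))

dA_par2 = dot(dA_par_vec, dA_par_vec)
dA_perp2 = dot(dA_perp_vec, dA_perp_vec)
print(dA**2, dA_par2 + dA_perp2)           # the two agree (Pythagorean identity)
```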

We can now express the solid angle differential in terms of a small area that is not necessarily perpendicular to the radius:

${\displaystyle d\Omega ={\frac {\cos \theta \,dA}{r^{2}}}={\frac {{\hat {r}}\cdot {\hat {n}}\,dA}{r^{2}}}}$,

where ${\displaystyle \theta }$ is the angle between ${\displaystyle {\hat {r}}}$ and ${\displaystyle {\hat {n}}}$. This identity will be used to construct our "proof" of Gauss's law.[2]
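This identity can be sketched numerically. The Riemann sum below evaluates ${\displaystyle \cos \theta \,dA/r^{2}}$ over the top face of a cube of half-width 1 centered on the origin (the cube and grid size are assumptions for illustration); by symmetry each of the six faces should subtend 4π/6 = 2π/3 steradians:

```python
import math

# Riemann sum of dOmega = (r_hat . n_hat) dA / r^2 over the top face
# (z = 1, n_hat = z_hat) of a cube of half-width 1 centered on the origin.
N = 400                          # grid points per side (midpoint rule)
h = 2.0 / N                      # width of each small square, dA = h*h
total = 0.0
for i in range(N):
    x = -1.0 + (i + 0.5) * h
    for j in range(N):
        y = -1.0 + (j + 0.5) * h
        r2 = x * x + y * y + 1.0         # |r|^2 for the point (x, y, 1)
        cos_theta = 1.0 / math.sqrt(r2)  # r_hat . z_hat = z / |r| with z = 1
        total += cos_theta * (h * h) / r2
print(total, 2 * math.pi / 3)            # both close to 2.0944
```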

## Vector fields

A vector field is a vector function of the three spatial dimensions ${\displaystyle (x,y,z)}$ (it can also be a function of time ${\displaystyle t}$.) If you include non-Cartesian coordinate systems, vector fields can be described in a number of ways. For example,

${\displaystyle {\vec {F}}({\vec {r}})=F_{x}(x,y,z){\hat {i}}+F_{y}(x,y,z){\hat {j}}+F_{z}(x,y,z){\hat {k}}}$

${\displaystyle {\vec {F}}({\vec {r}})=F_{r}(r,\theta ,\varphi ){\hat {r}}+F_{\theta }(r,\theta ,\varphi ){\hat {\theta }}+F_{\varphi }(r,\theta ,\varphi ){\hat {\varphi }}}$

define the same field, first in Cartesian coordinates, and then in spherical coordinates.
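A short numeric sketch of this equivalence, for a hypothetical radial field ${\displaystyle {\vec {F}}=(1/r^{2}){\hat {r}}}$ whose Cartesian form is ${\displaystyle (x,y,z)/|{\vec {r}}|^{3}}$:

```python
import math

# The same radial field written two ways (example field, chosen for illustration):
#   spherical:  F = (1/r^2) r_hat,  with F_theta = F_phi = 0
#   Cartesian:  F = (x, y, z) / |r|^3
x, y, z = 1.0, 2.0, -2.0
r = math.sqrt(x * x + y * y + z * z)          # r = 3 at this point

F_cartesian = (x / r**3, y / r**3, z / r**3)

r_hat = (x / r, y / r, z / r)                 # radial unit vector
F_r = 1.0 / r**2                              # the only spherical component
F_spherical = tuple(F_r * c for c in r_hat)

print(F_cartesian)
print(F_spherical)                            # same three components
```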

## A theorem for radially directed fields

If the only non-vanishing component of a vector field is radial, we have,

${\displaystyle F_{\theta }=F_{\varphi }=0}$,

i.e., the ${\displaystyle {\hat {\theta }}}$ and ${\displaystyle {\hat {\varphi }}}$ components both vanish, leaving us with only one component of the vector field:

${\displaystyle {\vec {F}}=F_{r}(r,\theta ,\varphi ){\hat {r}}}$

It is not always easy to find all the components of the surface elements ${\displaystyle d{\vec {S}}={\hat {n}}dA}$ (where we have defined ${\displaystyle dA\equiv |d{\vec {S}}|}$.) But fortunately, we have already derived a simple formula for ${\displaystyle {\hat {r}}\cdot d{\vec {S}}}$:

${\displaystyle {\hat {r}}\cdot d{\vec {S}}={\hat {r}}\cdot {\hat {n}}dA=r^{2}d\Omega }$,

where ${\displaystyle d\Omega }$ is the differential solid angle as measured from the origin (which is the tail of ${\displaystyle {\vec {r}}}$.) If a vector field of the three spatial variables is always directed towards or away from the origin, then the surface integral for any shape that encloses the origin is given by:

${\displaystyle \oint {\vec {F}}\cdot d{\vec {S}}=\oint F_{r}{\hat {r}}\cdot {\hat {n}}dA=\oint F_{r}(r,\theta ,\varphi )\,r^{2}d\Omega }$,

where in a calculus class you might use ${\displaystyle d\Omega =\sin \theta d\theta d\varphi }$. If the origin is situated inside a simple shape like an ellipsoid or even a cube, we just need to define the distance to the origin as a function of the two angular variables:

${\displaystyle r=R(\theta ,\varphi )}$,

where ${\displaystyle R(\theta ,\varphi )}$ is some function. Two simple examples, each involving a constant ${\displaystyle R_{0}>0}$:

• ${\displaystyle r=R_{0}}$ is a sphere of radius ${\displaystyle R_{0}}$
• ${\displaystyle r={\frac {R_{0}}{\sin \theta }}}$ is a cylinder of radius ${\displaystyle R_{0}}$ aligned along the z axis (and θ is measured relative to that axis.)
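The cylinder claim can be sketched numerically: converting ${\displaystyle (r,\theta ,\varphi )}$ with ${\displaystyle r=R_{0}/\sin \theta }$ to Cartesian coordinates should give ${\displaystyle x^{2}+y^{2}=R_{0}^{2}}$ for every θ and φ (the value of ${\displaystyle R_{0}}$ and the sample angles below are arbitrary):

```python
import math

# r = R0/sin(theta) should trace a cylinder of radius R0 about the z axis.
R0 = 1.7
radii = []
for theta in (0.3, 0.9, math.pi / 2, 2.4):
    for phi in (0.0, 1.0, 3.0, 5.5):
        r = R0 / math.sin(theta)
        x = r * math.sin(theta) * math.cos(phi)   # standard spherical-to-
        y = r * math.sin(theta) * math.sin(phi)   # Cartesian conversion
        radii.append(math.hypot(x, y))            # distance from the z axis
print(radii)   # every entry equals R0 = 1.7 (up to rounding)
```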

Later we discuss the complexity associated with more complicated shapes such as the "bent peanut" described above, where it is necessary to introduce r as a multi-valued function because a ray directed from the origin intersects the surface more than once.

A radially directed vector field (${\displaystyle F_{\theta }=F_{\varphi }=0}$) can be integrated over a simple Gaussian surface defined by ${\displaystyle r=R(\theta ,\varphi )}$, using this expression:

${\displaystyle {\vec {F}}\cdot d{\vec {S}}=F_{r}{\hat {r}}\cdot {\hat {n}}dA=F_{r}(r,\theta ,\varphi )\,r^{2}d\Omega }$

In the last step we set ${\displaystyle F_{r}=F_{r}(r,\theta ,\varphi )}$ to highlight the fact that no restriction is placed on the vector field, other than the fact that it always points in the radial (${\displaystyle {\hat {r}}}$) direction. Defining the Gaussian surface for the "bent-peanut" shape shown above is a bit tricky because for one orientation (i.e., one value of ${\displaystyle \theta }$ and ${\displaystyle \varphi }$) one ray will pierce the surface at more than one location.

#### Special case: Fr does not depend on θ or φ

The simplest application of this theorem is the case where ${\displaystyle F_{r}}$ depends only on ${\displaystyle r}$, and something interesting happens when the dependence is inverse square:

${\displaystyle \oint {\frac {{\hat {n}}\cdot d{\vec {S}}}{r^{2}}}\equiv \oint {\frac {{\hat {r}}\cdot {\hat {n}}}{r^{2}}}dA=4\pi {\text{ if the origin is inside the closed surface,}}}$
${\displaystyle \oint {\frac {{\hat {n}}\cdot d{\vec {S}}}{r^{2}}}\equiv \oint {\frac {{\hat {r}}\cdot {\hat {n}}}{r^{2}}}dA=0{\text{ if the origin is outside the closed surface,}}}$

where the origin is defined as the point where ${\displaystyle {\vec {r}}=0}$ and ${\displaystyle {\hat {n}}dA\equiv d{\vec {S}}}$ denotes integration over a closed surface of any shape. To understand why the integral vanishes when the origin is outside the Gaussian surface, note that any ray (along an ${\displaystyle {\vec {r}}}$ vector) that pierces the Gaussian surface on entry will also exit at a place where the contribution has the opposite sign. For any such ray (i.e., one that originates from outside the Gaussian surface) the terms in the Riemann sum ${\displaystyle \sum \Delta \Omega }$ of the differentials occur in pairs and do not sum to ${\displaystyle 4\pi }$. Instead they cancel as equal and opposite pairs:

${\displaystyle \Delta \Omega _{\text{out}}=-\Delta \Omega _{\text{in}}}$
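Both results can be sketched numerically with a Riemann sum over a unit sphere, once centered on the origin (enclosing it) and once centered at (3, 0, 0) (origin outside); the grid sizes are ad hoc choices for illustration:

```python
import math

# Riemann sum of (r_hat . n_hat) dA / r^2 over a unit sphere centered at
# (cx, 0, 0), using the midpoint rule on a (theta, phi) grid.
def flux_integral(cx, n_t=200, n_p=200):
    dt = math.pi / n_t
    dp = 2 * math.pi / n_p
    total = 0.0
    for i in range(n_t):
        t = (i + 0.5) * dt
        st = math.sin(t)
        for j in range(n_p):
            p = (j + 0.5) * dp
            # Outward normal of the unit sphere (also the offset from its center):
            nx, ny, nz = st * math.cos(p), st * math.sin(p), math.cos(t)
            x, y, z = cx + nx, ny, nz            # point relative to the origin
            r2 = x * x + y * y + z * z
            # (r_hat . n_hat) dA / r^2, with dA = sin(t) dt dp on a unit sphere:
            total += (x * nx + y * ny + z * nz) / (r2 * math.sqrt(r2)) * st * dt * dp
    return total

print(flux_integral(0.0))   # ~4*pi  (origin enclosed)
print(flux_integral(3.0))   # ~0     (origin outside)
```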

## Generalization of Gauss's Law beyond the case of a single point charge

For arbitrary charge distributions, it can be shown that:

${\displaystyle \varepsilon _{0}\oint {\vec {E}}\cdot d{\vec {S}}=Q_{\text{enc}}}$,

where,

${\displaystyle Q_{\text{enc}}=\sum _{j}q_{j}{\text{ ... or ... }}\int _{\text{inside}}\rho ({\vec {r}}')d^{3}r'}$,

is the net enclosed charge, which can be a sum over charges or a volume integral (e.g. dx'dy'dz') over charge density. Since mathematically rigorous arguments for this generalization are beyond the scope of most first-year physics courses, this section will only outline the arguments that extend Gauss's law in this fashion.

### Multiple point charges

The discussion so far has been restricted to a single point charge, with the added stipulation that the origin of the coordinate system is situated at the location of that point charge. First, we must recognize the implicit assumption that Gauss's Law remains valid even if the coordinate system is moved to a different location. This could be accomplished by a change of variables, ${\displaystyle {\vec {r}}\rightarrow {\vec {r}}-{\vec {r}}_{0}}$, where the constants ${\displaystyle {\vec {r}}_{0}=[x_{0},y_{0},z_{0}]}$ represent the location of the point charge in the original coordinate system. This permits us to use a property called superposition to show that the electric field due to a sum of charges is the sum of the electric fields due to the individual charges:

${\displaystyle {\vec {E}}={\frac {1}{4\pi \varepsilon _{0}}}\sum _{j}{{\frac {q_{j}}{r_{j}^{2}}}{\hat {r_{j}}}}=\sum _{j}{\vec {E_{j}}},}$

where ${\displaystyle {\vec {E_{j}}}}$ is the field due to ${\displaystyle q_{j}}$. We can also appeal to linearity to argue:

${\displaystyle \sum _{j}\oint {\vec {E}}_{j}\cdot d{\vec {S}}=\oint \left(\sum _{j}{\vec {E}}_{j}\right)\cdot d{\vec {S}}=\oint {\vec {E}}\cdot d{\vec {S}}}$
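A numeric sketch of this superposition argument (units with ${\displaystyle \varepsilon _{0}=1}$ and arbitrary charge values, chosen purely for illustration): with one charge inside a unit sphere and one outside, the total flux should equal ${\displaystyle q_{1}/\varepsilon _{0}}$:

```python
import math

EPS0 = 1.0   # units with epsilon_0 = 1, purely for illustration

def E_field(x, y, z, charges):
    """Total Coulomb field at (x, y, z) from charges [(q, (cx, cy, cz)), ...]."""
    Ex = Ey = Ez = 0.0
    for q, (cx, cy, cz) in charges:
        dx, dy, dz = x - cx, y - cy, z - cz
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        k = q / (4 * math.pi * EPS0 * d3)
        Ex += k * dx
        Ey += k * dy
        Ez += k * dz
    return Ex, Ey, Ez

# q1 = 2 sits inside the unit sphere, q2 = -5 sits outside; Gauss's law
# predicts a total flux of q1/eps0 through the sphere.
charges = [(2.0, (0.2, 0.0, 0.1)), (-5.0, (3.0, 0.0, 0.0))]
n_t = n_p = 200
dt, dp = math.pi / n_t, 2 * math.pi / n_p
flux = 0.0
for i in range(n_t):
    t = (i + 0.5) * dt
    st = math.sin(t)
    for j in range(n_p):
        p = (j + 0.5) * dp
        nx, ny, nz = st * math.cos(p), st * math.sin(p), math.cos(t)
        Ex, Ey, Ez = E_field(nx, ny, nz, charges)       # point = normal here
        flux += (Ex * nx + Ey * ny + Ez * nz) * st * dt * dp
print(flux)   # close to 2.0 = q1/eps0; the outside charge contributes nothing
```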

### Continuous charge density

Some readers might find it interesting that the sum over point charges can also be expressed as an integral over a charge density if we use the three-dimensional Dirac delta function:

${\displaystyle \rho \left({\vec {r}}\right)=\sum _{j}q_{j}\delta ({\vec {r}}-{\vec {r}}_{j})}$
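Integrating this density over the enclosed volume and using the sifting property of the delta function (each integral equals 1 when ${\displaystyle {\vec {r}}_{j}}$ lies inside, and 0 otherwise) recovers the discrete sum:

${\displaystyle \int _{\text{inside}}\rho ({\vec {r}}')\,d^{3}r'=\sum _{j}q_{j}\int _{\text{inside}}\delta ({\vec {r}}'-{\vec {r}}_{j})\,d^{3}r'=\sum _{{\vec {r}}_{j}\,{\text{inside}}}q_{j}=Q_{\text{enc}}}$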

## Notes

1. Even a tiny hole in the peanut would convert it into an open surface. Open surfaces have "boundaries", and the rim of the hole would be the boundary of a peanut with a hole in it.
2. "Proof" was placed in quotation marks because mathematicians prefer to use analysis instead of the plausibility arguments physicists are often fond of.