Mathematical Methods in Physics/Introduction to 2nd order differential equations

Resource type: this resource contains a lecture or lecture notes.
Completion status: this resource is ~75% complete.

Introduction


What are differential equations? Why are they so important in physics? The answers to these questions will become more apparent as the course goes on, but as motivation we will say for now that a differential equation is an equation in which derivatives of an unknown function appear (a more formal definition is given in the following section), and from which we would like to determine that function. Finding a function that satisfies the differential equation is known as finding a solution of the differential equation.

Why should physical scientists study differential equations? The answer is straightforward for anyone who has taken a moderately advanced physics course: the basic laws of nature are expressed in the language of differential equations, both ordinary and partial.

As canonical examples, we consider the equation of the harmonic oscillator (ordinary),

$$\frac{d^2 x}{dt^2} + \omega^2 x = 0,$$

the wave equation (partial),

$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2},$$

the equation of an RLC circuit (ordinary),

$$L \frac{d^2 q}{dt^2} + R \frac{dq}{dt} + \frac{q}{C} = V(t),$$

and finally, Laguerre's equation (ordinary),

$$x \frac{d^2 y}{dx^2} + (1 - x) \frac{dy}{dx} + n y = 0,$$

an equation that shows up in quantum mechanics.
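
To make the notion of a solution concrete with the first of these examples: for any constant $A$, the function $x(t) = A \cos \omega t$ solves the harmonic oscillator equation, since

$$\frac{d^2}{dt^2}\left(A \cos \omega t\right) + \omega^2 \left(A \cos \omega t\right) = -A \omega^2 \cos \omega t + A \omega^2 \cos \omega t = 0.$$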

There are many alternative notations for the derivative; we may use primes (Lagrange's notation) ($y'$, $y''$, etc.), numbers enclosed within parentheses ($y^{(3)}$, $y^{(4)}$, etc.), Leibniz's notation ($\frac{dy}{dx}$, $\frac{d^2 y}{dx^2}$, etc.), or Newton's dot notation when we discuss derivatives with respect to time ($\dot{x}$, $\ddot{x}$). In what follows, we will try to use consistent notation, but the reader should be aware that notation is mostly a matter of preference and one notation is as good as any other.

Basic definitions


A differential equation is an equation that relates a function with its derivatives. Given a function $F$, independent variable $x$ and dependent variable $y$, the most general expression of an (ordinary) differential equation is

$$F\left(x, y, y', y'', \ldots, y^{(n)}\right) = 0.$$

A solution to this differential equation is a function $y = \phi(x)$ such that

$$F\left(x, \phi(x), \phi'(x), \ldots, \phi^{(n)}(x)\right) = 0.$$

We say that a differential equation is of order $n$ if the highest derivative that appears in the differential equation is the $n$-th derivative.

An autonomous differential equation is one where there is no explicit dependence on the independent variable $x$:

$$F\left(y, y', y'', \ldots, y^{(n)}\right) = 0.$$
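
For example, the harmonic oscillator equation $\ddot{x} + \omega^2 x = 0$ is a second-order autonomous equation (the time $t$ does not appear explicitly), whereas the RLC circuit equation with a driving voltage $V(t)$ is second order but not autonomous.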

A linear ordinary differential equation only involves the dependent variable and its derivatives in a linear fashion (each multiplied by a function of $x$ alone, which may or may not be constant). For example,

$$y'' + y\, y' = 0 \qquad \text{and} \qquad y' + \sin y = 0$$

are examples of nonlinear differential equations, whereas

$$y'' + x\, y' + y = 0 \qquad \text{and} \qquad y' + y = e^x$$

are linear differential equations.

We say that a linear differential equation is homogeneous if every term involving solely the independent variable vanishes identically. Thus,

$$y'' + x\, y' + y = 0$$

is homogeneous, whereas

$$y'' + x\, y' + y = \cos x$$

is inhomogeneous or nonhomogeneous, due to the term $\cos x$, which depends solely on $x$.

It is customary, but by no means necessary, to move all the nonhomogeneous terms to the right-hand side of the differential equation; this practice clearly distinguishes the inhomogeneous terms and makes some solution methods easier to apply.
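
For instance, the inhomogeneous equation above may equally well be written as $y'' + x\, y' + y - \cos x = 0$; moving the term $\cos x$ to the right-hand side simply makes the homogeneous part and the forcing term visually distinct.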

Linear ordinary differential equations


We now focus on linear ordinary differential equations, in particular those of second order, as these appear pervasively in the physical sciences.

A linear ordinary differential equation is an equation of the form

$$a_n(x)\, y^{(n)} + a_{n-1}(x)\, y^{(n-1)} + \cdots + a_1(x)\, y' + a_0(x)\, y = f(x).$$

As we have seen before, if $f(x)$ does not vanish identically the equation is nonhomogeneous or inhomogeneous, and if all the coefficients, that is, all the factors $a_i(x)$, are constant and not functions of $x$, we say that the equation has constant coefficients.
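
For example, $y'' + 3y' + 2y = 0$ is a linear homogeneous equation with constant coefficients, while $x^2 y'' + x y' + y = \cos x$ is linear and inhomogeneous, with variable coefficients.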

Linear dependence of functions


Vectors


From linear algebra, we intuitively know what it means for two vectors to be linearly independent. For instance, the vectors $\vec{u} = (1, 2)$ and $\vec{v} = (2, 4)$ are linearly dependent because $\vec{v}$ can be expressed as a linear combination of $\vec{u}$, or vice versa: $\vec{v} = 2\vec{u}$ or, equivalently, $\vec{u} = \tfrac{1}{2}\vec{v}$.

More formally, given the set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$, we say that these vectors are linearly dependent if the equation

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = \vec{0}$$

has a nontrivial (nonzero) solution in the scalar coefficients ($c_1$, $c_2$, etc.), that is to say, at least one of the coefficients $c_k$ does not vanish, where $1 \le k \le n$. If, for example, $c_1 \neq 0$, then

$$\vec{v}_1 = -\frac{1}{c_1}\left(c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n\right),$$

and we can see that $\vec{v}_1$ is a linear combination of the rest of the vectors.

This means that the vectors of the set are linearly independent if the equation

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = \vec{0}$$

can only be satisfied if the scalar coefficients $c_1, c_2, \ldots, c_n$ are all $0$.
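
As a quick numerical sketch of this criterion (the particular vectors below are illustrative, not taken from the text), one can stack the vectors as rows of a matrix and compare its rank, computed here with NumPy, to the number of vectors:

import numpy as np

# The set of vectors is linearly independent exactly when the rank of the
# matrix having them as rows equals the number of vectors.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])   # v2 = 2*v1, so {v1, v2} is linearly dependent
v3 = np.array([0.0, 1.0, 0.0])

print(np.linalg.matrix_rank(np.vstack([v1, v2])))      # 1 -> dependent
print(np.linalg.matrix_rank(np.vstack([v1, v3])))      # 2 -> independent
print(np.linalg.matrix_rank(np.vstack([v1, v2, v3])))  # 2 -> dependent as a set of three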

Functions


We can now extend our definition of linear independence to functions.

We say that the functions $f_1(x), f_2(x), \ldots, f_n(x)$ are linearly independent in an interval $I$ if the equation

$$c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x) = 0$$

can only be satisfied, for all $x$ in the interval $I$, if all the coefficients $c_1, c_2, \ldots, c_n$ are vanishing. If the equation can be satisfied without all the coefficients being $0$, as before, we say that the functions are linearly dependent.

We now define the Wronskian of the $(n-1)$ times differentiable functions $f_1, f_2, \ldots, f_n$:

$$W(f_1, \ldots, f_n)(x) = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix}.$$

This functional determinant is important for studying the linear independence of a given set of functions. We will make this more explicit in the next section.
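
As an illustrative sketch (the functions $\cos \omega x$ and $\sin \omega x$ are chosen here only as an example), the Wronskian can be computed symbolically with SymPy as the determinant of the matrix of the functions and their derivatives:

import sympy as sp

x = sp.symbols('x')
omega = sp.symbols('omega', positive=True)
f1, f2 = sp.cos(omega * x), sp.sin(omega * x)

# Wronskian of two functions: determinant of the 2x2 matrix whose rows are
# the functions themselves and their first derivatives.
W = sp.Matrix([[f1, f2],
               [sp.diff(f1, x), sp.diff(f2, x)]]).det()
print(sp.simplify(W))   # prints omega, which never vanishes, so f1 and f2 are independent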

Theorems for linear differential equations


Principle of superposition

If $y_1$ and $y_2$ are two solutions of a linear homogeneous ordinary differential equation, then so is $c_1 y_1 + c_2 y_2$, where $c_1$ and $c_2$ are any two real numbers.
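
To see why this holds, consider for concreteness a second-order equation $y'' + p(x)\, y' + q(x)\, y = 0$ and substitute the combination directly:

$$(c_1 y_1 + c_2 y_2)'' + p(x)(c_1 y_1 + c_2 y_2)' + q(x)(c_1 y_1 + c_2 y_2) = c_1\left(y_1'' + p y_1' + q y_1\right) + c_2\left(y_2'' + p y_2' + q y_2\right) = c_1 \cdot 0 + c_2 \cdot 0 = 0.$$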

A theorem for complex solutions

If $y(x) = u(x) + i\, v(x)$ is a complex solution to a linear homogeneous differential equation with continuous, real coefficients, then the real part $u(x)$ and the imaginary part $v(x)$ are also solutions to the differential equation.
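
For instance, $x(t) = e^{i \omega t}$ is a complex solution of the harmonic oscillator equation, since $\ddot{x} + \omega^2 x = (i\omega)^2 e^{i\omega t} + \omega^2 e^{i\omega t} = 0$; therefore its real and imaginary parts, $\cos \omega t$ and $\sin \omega t$, are real solutions.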

Number of linearly independent solutions of a linear homogeneous differential equation

The maximum number of linearly independent solutions to a linear homogeneous differential equation is equal to its order.
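
For example, the second-order harmonic oscillator equation $\ddot{x} + \omega^2 x = 0$ admits exactly two linearly independent solutions, which may be taken to be $\cos \omega t$ and $\sin \omega t$; its general solution is $x(t) = c_1 \cos \omega t + c_2 \sin \omega t$.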

General solutions for a linear differential equation


Linear independence and the Wronskian


We now make use of the Wronskian determinant (defined earlier) to give a sufficient, but not necessary, condition for the linear independence of the $(n-1)$ times differentiable functions $f_1, f_2, \ldots, f_n$.

If the Wronskian of the $(n-1)$ times differentiable functions $f_1, f_2, \ldots, f_n$ does not vanish identically over an open interval $I$, then the functions are linearly independent on $I$. That is,

$$W(f_1, \ldots, f_n)(x_0) \neq 0 \ \text{ for some } x_0 \in I \quad \Longrightarrow \quad f_1, \ldots, f_n \ \text{ are linearly independent on } I.$$

It is important to note that this is a sufficient but not a necessary condition. It is not true that if the Wronskian vanishes identically, then the functions must be linearly dependent.

For example, the functions $x$ and $x^2$ are linearly independent in any closed interval of the reals, as their Wronskian $W(x, x^2) = x \cdot 2x - x^2 \cdot 1 = x^2$ doesn't vanish identically (for all $x$) in any such closed interval.

However, if we consider the functions $x^2$ and $x\,|x|$ on the interval $[-1, 1]$, we can see that $W\left(x^2, x|x|\right)(x) = 0$ for all $x$ in the interval. But these functions are not linearly dependent on the whole interval $[-1, 1]$: on $[0, 1]$ we have $x|x| = x^2$, while on $[-1, 0]$ we have $x|x| = -x^2$, so no single linear relation with nonzero coefficients holds on all of $[-1, 1]$.

The Ostrogradski-Liouville formula


If we solve for the $n$-th derivative in a linear homogeneous differential equation, we have

$$y^{(n)} = -p_1(x)\, y^{(n-1)} - p_2(x)\, y^{(n-2)} - \cdots - p_n(x)\, y.$$

The following equality then holds for the Wronskian $W(x)$ of any $n$ solutions of this equation:

$$W(x) = W(x_0)\, \exp\left(-\int_{x_0}^{x} p_1(t)\, dt\right),$$

where $x_0$ is any point belonging to a closed interval where the coefficients of the differential equation are continuous.
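
As a quick check with the harmonic oscillator $y'' + \omega^2 y = 0$, for which $p_1(x) = 0$: the formula predicts a constant Wronskian, and indeed

$$W(\cos \omega x, \sin \omega x) = \cos \omega x \cdot \omega \cos \omega x - \sin \omega x \cdot (-\omega \sin \omega x) = \omega,$$

which agrees with $W(x) = W(x_0)\, e^{-\int_{x_0}^{x} 0\, dt} = W(x_0)$.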

Second-order ordinary linear differential equations


We now turn to arguably the most important topic of this part of the course.

A second-order ordinary linear differential equation is an equation of the form

$$a_2(x)\, y'' + a_1(x)\, y' + a_0(x)\, y = f(x).$$

Why are these equations so important in the physical sciences? There are at least three reasons.

First of all, on many occasions Newton's second law, when applied to a specific system, yields such an equation. Canonical examples of this include the damped and driven oscillator,

$$m \ddot{x} + b \dot{x} + k x = F(t),$$

and a particle under uniform gravitational acceleration,

$$m \ddot{y} = -m g.$$
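
The second example can be solved by direct integration, which also illustrates why a second-order equation carries two arbitrary constants:

$$\ddot{y} = -g \quad \Rightarrow \quad \dot{y}(t) = v_0 - g t \quad \Rightarrow \quad y(t) = y_0 + v_0 t - \tfrac{1}{2} g t^2,$$

where $y_0$ and $v_0$ are fixed by the initial conditions.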

Secondly, when applying certain methods of solution to linear partial differential equations, we obtain as intermediate steps these sorts of second-order linear ordinary differential equations. An example is the aforementioned Laguerre equation. Another example is the Cauchy-Euler equation,

$$a\, x^2 y'' + b\, x y' + c\, y = 0,$$

where all the coefficients $a$, $b$ and $c$ are constants.

Lastly, the importance of linear equations lies in the fact that, most of the time, a nonlinear equation can be approximated by a linear one in the vicinity of a specific point (called the equilibrium point). For example, the equation that governs the dynamics of a pendulum can be written as

$$\ddot{\theta} + \frac{g}{\ell} \sin \theta = 0.$$

If $\theta = 0$ is taken as the equilibrium point, we expand $\sin \theta$ using its Taylor series

$$\sin \theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots,$$

and if all terms except the first one are considered negligible ($\theta \ll 1$), then the equation of the pendulum becomes

$$\ddot{\theta} + \frac{g}{\ell} \theta = 0,$$

and the equation is now linear. It should be noted that the solution obtained from this linear equation will therefore only be valid under the hypothesis with which the linearization was done in the first place, namely $\theta \ll 1$.
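
As a numerical sketch of this approximation (the value of $g/\ell$ and the initial angle below are illustrative), one can integrate both the nonlinear and the linearized pendulum equations with SciPy and compare the two angles over time:

import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 9.81   # illustrative value of g / l, in 1/s^2
theta0 = 0.1      # small initial angle in radians, where the linearization should hold

# State u = (theta, dtheta/dt); full nonlinear pendulum: theta'' = -(g/l) sin(theta)
def pendulum(t, u):
    return [u[1], -g_over_l * np.sin(u[0])]

# Linearized pendulum: theta'' = -(g/l) theta
def linearized(t, u):
    return [u[1], -g_over_l * u[0]]

t_eval = np.linspace(0.0, 10.0, 500)
sol_full = solve_ivp(pendulum, (0.0, 10.0), [theta0, 0.0], t_eval=t_eval, rtol=1e-8)
sol_lin = solve_ivp(linearized, (0.0, 10.0), [theta0, 0.0], t_eval=t_eval, rtol=1e-8)

# For theta0 << 1 the two trajectories stay close; the discrepancy grows for
# larger initial angles, where the approximation sin(theta) ~ theta breaks down.
print(np.max(np.abs(sol_full.y[0] - sol_lin.y[0])))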

See Also
