Laplace transform


In mathematics, the Laplace transform is a technique for analyzing linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.

The Laplace transform is an important concept from the branch of mathematics called functional analysis.

In actual physical systems the Laplace transform is often interpreted as a transformation from the time-domain point of view, in which inputs and outputs are understood as functions of time, to the frequency-domain point of view, where the same inputs and outputs are seen as functions of complex angular frequency, or radians per unit time. This transformation not only provides a fundamentally different way to understand the behavior of the system, but it also drastically reduces the complexity of the mathematical calculations required to analyze the system.

The Laplace transform has many important applications in physics, optics, electrical engineering, control engineering, signal processing, and probability theory.

The Laplace transform is named in honor of mathematician and astronomer Pierre-Simon Laplace, who used the transform in his work on probability theory.

Formal definition

The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = \mathcal{L} \left\{f(t)\right\}=\int_{0^-}^\infty e^{-st} f(t) \,dt.

The lower limit 0^- is shorthand for \lim_{\epsilon \rightarrow 0^+} \int_{-\epsilon}^{\infty}, and ensures the inclusion of the entire Dirac delta function δ(t) at 0 if there is such an impulse in f(t) at 0.

The parameter s is in general complex:

s = \sigma + i \omega \,

This integral transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division, respectively, with s. (This is similar to the way that logarithms change an operation of multiplication of numbers to addition of their logarithms.) This changes integral equations and differential equations to polynomial equations, which are much easier to solve.
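For instance, the rule \mathcal{L}\{f'(t)\} = s F(s) - f(0) can be verified directly with a computer algebra system. The following SymPy sketch uses an arbitrary test function (the choice e^{-2t}\sin(3t) is illustrative only):

# Check that differentiation in time becomes multiplication by s (minus f(0)).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2*t) * sp.sin(3*t)                      # arbitrary test function

F = sp.laplace_transform(f, t, s, noconds=True)     # F(s)
F_of_fprime = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# L{f'} - (s*F(s) - f(0)) should simplify to zero
print(sp.simplify(F_of_fprime - (s*F - f.subs(t, 0))))   # -> 0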

Bilateral Laplace transform

When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can alternatively be defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform in which the function being transformed is multiplied by the Heaviside step function (also known as the unit step function).

The bilateral Laplace transform is defined as follows:

F(s)  = \mathcal{L}\left\{f(t)\right\}  =\int_{-\infty}^{+\infty} e^{-st} f(t)\,dt.

Inverse Laplace transform

The inverse Laplace transform is the Bromwich integral, which is a complex integral given by:

f(t) = \mathcal{L}^{-1} \{F(s)\} = \frac{1}{2 \pi i} \int_{ \gamma - i \cdot \infty}^{ \gamma + i \cdot \infty} e^{st} F(s)\,ds,

where γ is a real number chosen so that the contour path of integration lies in the region of convergence of F(s); this normally requires γ > Re(s_p) for every singularity s_p of F(s), and i^2 = -1. If all singularities are in the left half-plane, that is, Re(s_p) < 0 for every s_p, then γ can be set to zero and the inverse integral formula above becomes identical to the inverse Fourier transform.

An alternative formula for the inverse Laplace transform is given by Post's inversion formula.
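In practice the Bromwich integral is seldom evaluated directly; one usually relies on transform tables or a computer algebra system. A minimal SymPy sketch, with an arbitrary choice of F(s):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = 1 / (s + 1)                                   # arbitrary F(s) with a single pole at s = -1
print(sp.inverse_laplace_transform(F, s, t))      # exp(-t)*Heaviside(t)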

Region of convergence

The Laplace transform F(s) typically exists for all complex numbers such that Re{s} > a, where a is a real constant which depends on the growth behavior of f(t), whereas the two-sided transform is defined in a range a < Re{s} < b. The subset of values of s for which the Laplace transform exists is called the region of convergence (ROC) or the domain of convergence. In the two-sided case, it is sometimes called the strip of convergence.

There is no simple condition one can check to decide, in all cases, whether a function has a Laplace transform, other than requiring that the defining integral converge. It is, however, easy to give sufficient conditions; for example, the transform exists whenever f(t) is piecewise continuous and of exponential order.
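For example, SymPy reports the abscissa of convergence together with the transform; for the decaying exponential below (an arbitrary choice) the region of convergence is the half-plane Re(s) > -a:

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
F, abscissa, cond = sp.laplace_transform(sp.exp(-a*t), t, s)
print(F, abscissa)    # 1/(a + s), -a  -> the integral converges for Re(s) > -a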

Properties and theorems

Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s):

 f(t) = \mathcal{L}^{-1} \{  F(s) \}
 g(t) = \mathcal{L}^{-1} \{  G(s) \}

the following table lists the properties of the unilateral Laplace transform:

Properties of the unilateral Laplace transform
Time domain Frequency domain Comment
Linearity a f(t) + b g(t) \ a F(s)  + b G(s) \
Frequency differentiation  t f(t) \  -F'(s) \
Frequency differentiation  t^{n} f(t) \   (-1)^{n} F^{(n)}(s) \ more general
Differentiation  f' \   s F(s) - f(0^-) \
Second Differentiation  f'' \   s^2 F(s) - s f(0^-) - f'(0^-) \
General Differentiation  f^{(n)}  \   s^n F(s) - s^{n - 1} f(0^-) - \cdots - f^{(n - 1)}(0^-) \
Frequency integration  \frac{f(t)}{t}  \   \int_s^\infty F(\sigma)\, d\sigma \
Integration  \int_0^t f(\tau)\, d\tau  =  u(t) * f(t)   {1 \over s} F(s) u(t) is the Heaviside step function
Scaling  f(at) \   {1 \over |a|} F \left ( {s \over a} \right )
Frequency shifting  e^{at} f(t)  \  F(s - a) \
Time shifting  f(t - a) u(t - a) \   e^{-as} F(s) \ u(t) is the Heaviside step function
Convolution  f(t) * g(t) \  F(s) \cdot G(s) \
Periodic Function  f(t) \ {1 \over 1 - e^{-Ts}} \int_0^T e^{-st} f(t)\,dt f(t) is a periodic function of period T so that f(t) = f(t + T), \; \forall t


  • Initial value theorem:
f(0^+)=\lim_{s\to \infty}{sF(s)}
  • Final value theorem:
f(\infty)=\lim_{s\to 0}{sF(s)}, valid when all poles of sF(s) lie in the left half-plane.
The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions or other difficult algebra. If a function has poles in the right half-plane or on the imaginary axis (e.g. e^t or \sin(t)), the theorem does not apply and the formula gives meaningless results.
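As a quick check, the following SymPy sketch applies both theorems to an arbitrary signal f(t) = 1 - e^{-t}, which starts at 0 and settles to 1:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 1 - sp.exp(-t)                                    # f(0+) = 0, f(oo) = 1
F = sp.laplace_transform(f, t, s, noconds=True)       # 1/s - 1/(s + 1)

print(sp.limit(s*F, s, sp.oo))   # initial value theorem -> 0
print(sp.limit(s*F, s, 0))       # final value theorem   -> 1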

Proof of the Laplace transform of a function's derivative

It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:

\mathcal{L} \left\{f(t)\right\}  =\int_{0^-}^{+\infty} e^{-st} f(t)\,dt
 ~~ = \left[ \frac{f(t)e^{-st}}{-s} \right]_{0^-}^{+\infty} - \int_{0^-}^{+\infty} \frac{e^{-st}}{-s} f'(t)\,dt \qquad \text{(integration by parts)}
 ~~ = \left[-\frac{f(0)}{-s}\right] + \frac{1}{s}\mathcal{L}\left\{f'(t)\right\},

yielding

\mathcal{L}\left\{\frac{df}{dt}\right\} = s\cdot\mathcal{L} \left\{ f(t) \right\}-f(0),

and in the bilateral case, where the boundary term e^{-st} f(t) vanishes at both \pm\infty inside the strip of convergence, we have

 \mathcal{L}\left\{ { df \over dt }  \right\}
  = s \int_{-\infty}^{+\infty} e^{-st} f(t)\,dt  = s \cdot \mathcal{L} \{ f(t) \}.

Relationship to other transforms

Fourier transform

The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with complex argument s = iω:


\begin{array}{rcl}
F(\omega) & = & \mathcal{F}\left\{f(t)\right\} \\[1em]
& = & \mathcal{L}\left\{f(t)\right\}|_{s = i \omega}  =  F(s)|_{s = i \omega}\\[1em]
& = & \int_{-\infty}^{+\infty} e^{-\imath \omega t} f(t)\,\mathrm{d}t.\\
\end{array}

Note that this expression excludes the scaling factor \frac{1}{\sqrt{2 \pi}}, which is often included in definitions of the Fourier transform.

This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.
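As a small illustration (the causal signal e^{-t}u(t) is an arbitrary choice): since the signal vanishes for t < 0, its bilateral and unilateral transforms coincide, and its spectrum follows by substituting s = iω:

import sympy as sp

t, w = sp.symbols('t omega', positive=True)
s = sp.symbols('s', positive=True)

F = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)   # 1/(s + 1)
spectrum = F.subs(s, sp.I*w)                               # F(omega) = 1/(1 + i*omega)
print(sp.simplify(spectrum))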

Mellin transform

The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform

G(s) = \mathcal{M}\left\{g(\theta)\right\} = \int_0^\infty \theta^s g(\theta) \frac{d\theta}{\theta}

we set θ = e^{-t}, we get a two-sided Laplace transform.
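Explicitly, θ = e^{-t} gives \theta^s = e^{-st} and d\theta / \theta = -dt, while θ running from 0 to ∞ corresponds to t running from +∞ to -∞, so

G(s) = \int_0^\infty \theta^s g(\theta) \frac{d\theta}{\theta} = \int_{-\infty}^{+\infty} e^{-st} g(e^{-t})\,dt,

which is the two-sided Laplace transform of g(e^{-t}).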

Z-transform

The Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of

 z \ \stackrel{\mathrm{def}}{=}\  e^{s T} \
where T = 1/f_s \ is the sampling period (in units of time, e.g. seconds) and  f_s \ is the sampling rate (in samples per second, or hertz).

Let

 \Delta_T(t) \ \stackrel{\mathrm{def}}{=}\  \sum_{n=0}^{\infty}  \delta(t - n T)

be a sampling impulse train (also called a Dirac comb) and

 x_q(t) \ \stackrel{\mathrm{def}}{=}\  x(t) \Delta_T(t) = x(t) \sum_{n=0}^{\infty}  \delta(t - n T)
 = \sum_{n=0}^{\infty} x(n T) \delta(t - n T) = \sum_{n=0}^{\infty} x[n] \delta(t - n T)

be the continuous-time representation of the sampled  x(t) \ .

 x[n] \ \stackrel{\mathrm{def}}{=}\  x(nT) \ are the discrete samples of  x(t) \ .

The Laplace transform of the sampled signal  x_q(t) \ is

X_q(s) = \int_{0^-}^{\infty} x_q(t) e^{-s t} \,dt
 \ = \int_{0^-}^{\infty} \sum_{n=0}^{\infty} x[n] \delta(t - n T) e^{-s t} \, dt
 \ = \sum_{n=0}^{\infty} x[n] \int_{0^-}^{\infty} \delta(t - n T) e^{-s t} \, dt
 \ = \sum_{n=0}^{\infty} x[n] e^{-n s T}.

This is precisely the definition of the Z-transform of the discrete function  x[n] \

 X(z) = \sum_{n=0}^{\infty} x[n] z^{-n}

with the substitution of  z \leftarrow e^{s T} \ .

Comparing the last two equations, we find the relationship between the Z-transform and the Laplace transform of the sampled signal:

X_q(s) =  X(z) \Big|_{z=e^{sT}}.
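This relationship can be checked numerically. In the sketch below, the decaying-exponential signal, the sampling period T, and the evaluation point s are all arbitrary choices; the truncated sum \sum_n x[n] e^{-nsT} is compared against the closed-form Z-transform evaluated at z = e^{sT}:

import numpy as np

a, T = 0.5, 0.1                   # decay constant and sampling period (arbitrary)
s = 0.3 + 2.0j                    # arbitrary evaluation point with Re(s) > -a
n = np.arange(2000)               # enough terms for the geometric series to converge
x = np.exp(-a * n * T)            # samples x[n] = x(nT) of x(t) = e^{-a t}

Xq = np.sum(x * np.exp(-n * s * T))            # Laplace transform of the sampled signal
z = np.exp(s * T)
Xz = 1.0 / (1.0 - np.exp(-a * T) / z)          # closed-form Z-transform of x[n] at z = e^{sT}

print(Xq, Xz)                                  # the two values agree to numerical precision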

Borel transform

The integral form of the Borel transform is identical to the Laplace transform; indeed, these are sometimes mistakenly assumed to be synonyms. The generalized Borel transform generalizes the Laplace transform for functions not of exponential type.

Fundamental relationships

Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theory of the Laplace-, Fourier-, Mellin-, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.

Table of selected Laplace transforms

The following table provides Laplace transforms for many common functions of a single variable. For definitions and explanations, see the Explanatory Notes at the end of the table.

Because the Laplace transform is a linear operator:

  • The Laplace transform of a sum is the sum of Laplace transforms of each term.
\mathcal{L}\left\{f(t) + g(t) \right\}  = \mathcal{L}\left\{f(t)\right\} + \mathcal{L}\left\{ g(t) \right\}
  • The Laplace transform of a multiple of a function is that multiple times the Laplace transform of the function.
\mathcal{L}\left\{a f(t)\right\}  = a \mathcal{L}\left\{ f(t)\right\}

The unilateral Laplace transform is only valid when t is non-negative, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t).

Each row of the table lists: an ID, the function name, the time-domain expression x(t) = \mathcal{L}^{-1} \left\{ X(s) \right\}, the frequency-domain expression X(s) = \mathcal{L}\left\{ x(t) \right\}, and the region of convergence for causal systems.
1 ideal delay  \delta(t-\tau) \  e^{-\tau s} \  \mathrm{all} \  s \,
1a unit impulse  \delta(t) \  1 \  \mathrm{all} \  s \,
2 delayed nth power with frequency shift  \frac{(t-\tau)^n}{n!} e^{-\alpha (t-\tau)} \cdot u(t-\tau)  \frac{e^{-\tau s}}{(s+\alpha)^{n+1}}  s > -\alpha \,
2a nth power ( for integer n )  {  t^n \over n! } \cdot u(t)  { 1 \over s^{n+1} }  s > 0 \,
2a.1 qth power ( for real q > -1 )  {  t^q \over \Gamma(q+1) } \cdot u(t)  { 1 \over s^{q+1} }  s > 0 \,
2a.2 unit step  u(t) \  { 1 \over s }  s > 0 \,
2b delayed unit step  u(t-\tau) \  { e^{-\tau s} \over s }  s > 0 \,
2c ramp  t \cdot u(t)\ \frac{1}{s^2}  s > 0 \,
2d nth power with frequency shift \frac{t^{n}}{n!}e^{-\alpha t} \cdot u(t) \frac{1}{(s+\alpha)^{n+1}}  s > - \alpha \,
2d.1 exponential decay  e^{-\alpha t} \cdot u(t)  \  { 1 \over s+\alpha }   s > - \alpha \
3 exponential approach ( 1-e^{-\alpha t})  \cdot u(t)  \ \frac{\alpha}{s(s+\alpha)}   s > 0\
4 sine  \sin(\omega t) \cdot u(t) \  { \omega \over s^2 + \omega^2  }  s > 0  \
5 cosine  \cos(\omega t) \cdot u(t) \  { s \over s^2 + \omega^2  }  s > 0 \
6 hyperbolic sine  \sinh(\alpha t) \cdot u(t) \  { \alpha \over s^2 - \alpha^2 }  s > | \alpha | \
7 hyperbolic cosine  \cosh(\alpha t) \cdot u(t) \  { s \over s^2 - \alpha^2  }  s > | \alpha | \
8 exponentially decaying sine wave  e^{-\alpha t}  \sin(\omega t) \cdot u(t) \  { \omega \over (s+\alpha )^2 + \omega^2  }  s > -\alpha \
9 exponentially decaying cosine wave  e^{-\alpha t}  \cos(\omega t) \cdot u(t) \  { s+\alpha \over (s+\alpha )^2 + \omega^2  }  s > -\alpha \
10 nth root  \sqrt[n]{t} \cdot u(t)  s^{-(n+1)/n} \cdot \Gamma\left(1+\frac{1}{n}\right)  s > 0 \,
11 natural logarithm  \ln \left (  { t \over t_0 } \right ) \cdot u(t)  - { 1 \over s} \  [ \  \ln(t_0 s)+\gamma \ ]  s > 0 \,
12 Bessel function of the first kind, of order n   J_n( \omega t) \cdot u(t)  \frac{ \omega^n \left(s+\sqrt{s^2+ \omega^2}\right)^{-n}}{\sqrt{s^2 + \omega^2}}  s > 0 \  (n > -1) \,
13 Modified Bessel function of the first kind, of order n   I_n(\omega t) \cdot u(t)  \frac{ \omega^n \left(s+\sqrt{s^2-\omega^2}\right)^{-n}}{\sqrt{s^2-\omega^2}}  s > | \omega | \,
14 Bessel function of the second kind, of order 0   Y_0(\alpha t) \cdot u(t)  -{2 \sinh^{-1}(s/\alpha) \over \pi \sqrt{s^2+\alpha^2}}  s > 0 \,
15 Modified Bessel function of the second kind, of order 0   K_0(\alpha t) \cdot u(t)  \frac{\cos ^{-1}\left(\frac{s}{\sqrt{\alpha ^2}}\right)}{\sqrt{\alpha ^2-s^2}}  s > 0
16 Error function  \mathrm{erf}(t) \cdot u(t)     {e^{s^2/4} \left(1 - \operatorname{erf} \left(s/2\right)\right) \over s}  s > 0 \,
Explanatory notes: u(t) denotes the Heaviside step function and \delta(t) the Dirac delta function; \Gamma(z) is the Gamma function and \gamma is the Euler–Mascheroni constant; t is real (usually time), s is the complex angular frequency, n is an integer, and \alpha, \beta, \tau, q, and \omega are real parameters.

s-Domain equivalent circuits and impedances

The Laplace transform is often used in circuit analysis, and simple conversions to the s-Domain of circuit elements can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances.

Here is a summary of equivalents:

[Figure: s-domain circuit equivalents for the resistor, inductor, and capacitor.]

Note that the resistor is exactly the same in the time domain and the s-Domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-Domain account for that.

The equivalents for current and voltage sources are simply derived from the transformations in the table above.
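As an illustration of how these equivalents are used (the series RC voltage divider below is a hypothetical example, not taken from the figure), the analysis proceeds exactly as with phasor impedances, here assuming zero initial conditions:

import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)

Z_C = 1 / (s * C)                 # capacitor impedance in the s-domain (zero initial voltage)
H = Z_C / (R + Z_C)               # voltage-divider transfer function V_out / V_in
print(sp.simplify(H))             # 1/(C*R*s + 1), a first-order low-pass response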

Examples: How to apply the properties and theorems

The Laplace transform is used frequently in engineering and physics; the output of a linear dynamic system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For more information, see control theory.
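A short SymPy check of this convolution property, using two arbitrary one-sided exponentials as input and impulse response:

import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
f = sp.exp(-t)                    # arbitrary input (for t >= 0)
g = sp.exp(-2*t)                  # arbitrary impulse response (for t >= 0)

conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))   # (f*g)(t)
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = sp.laplace_transform(f, t, s, noconds=True) * sp.laplace_transform(g, t, s, noconds=True)
print(sp.simplify(lhs - rhs))     # -> 0: L{f*g} = F(s) G(s)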

The Laplace transform can also be used to solve differential equations and is used extensively in electrical engineering. The method of using the Laplace Transform to solve differential equations was developed by the English electrical engineer Oliver Heaviside.

The following examples, derived from applications in physics and engineering, will use SI units of measure. SI is based on meters for distance, kilograms for mass, seconds for time, and amperes for electric current.

Example #1: Solving a differential equation

The following example is based on concepts from nuclear physics.

Consider the following first-order, linear differential equation:

\frac{dN}{dt} = -\lambda N.

This equation is the fundamental relationship describing radioactive decay, where

 N \ = \ N(t)

represents the number of undecayed atoms remaining in a sample of a radioactive isotope at time t (in seconds), and \ \lambda is the decay constant.

We can use the Laplace transform to solve this equation.

Rearranging the equation to one side, we have

\frac{dN}{dt} +  \lambda N  =  0

Next, we take the Laplace transform of both sides of the equation:

 \left( s \tilde{N}(s) - N_o  \right) + \lambda \tilde{N}(s) \ = \ 0

where

\tilde{N}(s) = \mathcal{L}\{N(t)\}

and

N_o \ = \ N(0).

Solving, we find

\tilde{N}(s) = { N_o \over s + \lambda  }.

Finally, we take the inverse Laplace transform to find the general solution

N(t) \ = \mathcal{L}^{-1} \{\tilde{N}(s)\} = \mathcal{L}^{-1}  \left\{ \frac{N_o}{s + \lambda} \right\}
 = \ N_o e^{-\lambda t},

which is indeed the correct form for radioactive decay.
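The same steps can be reproduced with SymPy; the names Ns, N0 and lam below are stand-ins for \tilde{N}(s), N_o and \lambda:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
N0, lam = sp.symbols('N_0 lambda', positive=True)
Ns = sp.symbols('Ns')

# (s*Ns - N0) + lam*Ns = 0, solved for the transform Ns
Ns_solution = sp.solve(sp.Eq((s*Ns - N0) + lam*Ns, 0), Ns)[0]
print(Ns_solution)                                        # N_0/(lambda + s)

# Inverting recovers the decay law N(t) = N_0 exp(-lambda t) for t >= 0
print(sp.inverse_laplace_transform(Ns_solution, s, t))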

Example #2: Deriving the complex impedance for a capacitor

This example is based on the principles of electrical circuit theory.

The constitutive relation governing the dynamic behavior of a capacitor is the following differential equation:

 i = C { dv \over dt}

where C is the capacitance (in farads) of the capacitor, i = i(t) is the electrical current (in amperes) flowing through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time.

Taking the Laplace transform of this equation, we obtain

  I(s) = C \left( s V(s) - V_o  \right)

where

 I(s) = \mathcal{L} \{ i(t) \} ,
 V(s) = \mathcal{L} \{ v(t) \} , and
 V_o \ = \ v(t)|_{t=0}.

Solving for V(s) we have

  V(s) = { I(s) \over Cs }  +  { V_o  \over s }.

The definition of the complex impedance Z (in ohms) is the ratio of the complex voltage V to the complex current I while holding the initial state Vo at zero:

Z(s) = { V(s) \over I(s) } \bigg|_{V_o = 0}.

Using this definition and the previous equation, we find:

Z(s) = \frac{1}{sC}

which is the correct expression for the complex impedance of a capacitor.
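The algebra can also be delegated to SymPy; the symbols Is and Vs below stand for I(s) and V(s):

import sympy as sp

s, C, V0 = sp.symbols('s C V_0', positive=True)
Is, Vs = sp.symbols('I V')

# i = C dv/dt  transforms to  I(s) = C (s V(s) - V_o); solve for V(s)
V_solution = sp.solve(sp.Eq(Is, C*(s*Vs - V0)), Vs)[0]    # I/(C*s) + V_0/s

# Impedance: Z(s) = V(s)/I(s) with the initial state V_o held at zero
Z = sp.simplify((V_solution / Is).subs(V0, 0))
print(Z)                                                  # 1/(C*s)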

Example #3: Finding the transfer function from the impulse response

[Figure: relationship between the time domain and the frequency domain; the * in the time domain denotes convolution.]
This example is based on concepts from signal processing, and describes the dynamic behavior of a damped harmonic oscillator. See also RLC circuit.

Consider a linear time-invariant system with impulse response

 h(t) = A e^{- \alpha t} \cos(\omega_d t - \phi_d) \,

such that

\omega_d t - \phi_d \ge 0 \,

where t is the time (in seconds), and

  0 \le \phi_d \le 2 \pi

is the phase delay (in radians).

Suppose that we want to find the transfer function of the system. We begin by noting that

 h(t) = A e^{- \alpha t} \cos \left[ \omega_d (t - t_d) \right] \cdot u(t - t_d) \,

where

t_d = { \phi_d \over \omega_d }

is the time delay of the system (in seconds), and  \ u(t) \ is the Heaviside step function.

The transfer function is the Laplace transform of this impulse response. Writing h(t) = A e^{-\alpha t_d} \, e^{-\alpha (t - t_d)} \cos\left[ \omega_d (t - t_d) \right] \cdot u(t - t_d) and applying the time-shifting property together with the transform of an exponentially decaying cosine gives:

H(s) \ = \ \mathcal{L} \{ h(t) \}  = A e^{-(s + \alpha) t_d} {(s + \alpha) \over (s + \alpha)^2 + \omega_d^2 }
  = \ A e^{-(s + \alpha) t_d} {(s + \alpha) \over (s^2 + 2 \alpha s + \alpha^2) + \omega_d^2 }
  = \ A e^{-(s + \alpha) t_d} {(s + \alpha) \over (s^2 + 2 \alpha s + \omega_0^2 ) }

where

 \omega_0 = \sqrt{\alpha^2 + \omega_d^2}

is the (undamped) natural frequency or resonance of the system (in radians per second).
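A numerical spot-check of this transfer function (all parameter values below are arbitrary choices): integrate h(t) e^{-st} directly and compare with the closed-form H(s):

import numpy as np
from scipy.integrate import quad

A, alpha, wd, phid = 2.0, 0.5, 3.0, 0.8     # arbitrary amplitude, damping, frequency, phase
td = phid / wd                              # time delay
s = 1.2                                     # arbitrary real test point with s > -alpha

def h(t):
    # impulse response: zero before the delay, damped cosine afterwards
    return A * np.exp(-alpha * t) * np.cos(wd * (t - td)) if t >= td else 0.0

numeric, _ = quad(lambda t: h(t) * np.exp(-s * t), 0.0, 50.0, points=[td], limit=200)
closed = A * np.exp(-(s + alpha) * td) * (s + alpha) / ((s + alpha)**2 + wd**2)
print(numeric, closed)                      # the two values agree closely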

Example #4: Method of partial fraction expansion

Consider a linear time-invariant system with transfer function

H(s) = \frac{1}{(s+\alpha)(s+\beta)}.

The impulse response is simply the inverse Laplace transform of this transfer function:

h(t) = \mathcal{L}^{-1}\{H(s)\}.

To evaluate this inverse transform, we begin by expanding H(s) using the method of partial fraction expansion:

H(s) = \frac{1}{(s+\alpha)(s+\beta)} =   { P \over (s+\alpha) } + { R  \over (s+\beta) }

for unknown constants P and R. To find these constants, we evaluate

P = \left.{1 \over (s+\beta)}\right|_{s=-\alpha} = {1 \over (\beta - \alpha)}

and

R = \left.{1 \over (s+\alpha)}\right|_{s=-\beta} = {1 \over (\alpha - \beta)} = {-1 \over (\beta - \alpha)} = - P .

Substituting these values into the expression for H(s), we find

H(s)  = \left( \frac{1}{\beta-\alpha} \right) \cdot \left(  { 1 \over (s+\alpha) } - { 1  \over (s+\beta) }  \right).

Finally, using the linearity property and the known transform for exponential decay (see row 2d.1 in the table of Laplace transforms, above), we can take the inverse Laplace transform of H(s) to obtain:

h(t) = \mathcal{L}^{-1}\{H(s)\} = \frac{1}{\beta-\alpha}\left(e^{-\alpha t}-e^{-\beta t}\right),

which is the impulse response of the system.
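SymPy performs the same partial-fraction expansion and inversion; the values \alpha = 1 and \beta = 3 below are arbitrary:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
H = 1 / ((s + 1) * (s + 3))                  # alpha = 1, beta = 3 (arbitrary)

print(sp.apart(H, s))                        # 1/(2*(s + 1)) - 1/(2*(s + 3))
h = sp.inverse_laplace_transform(H, s, t)
print(sp.simplify(h))                        # equivalent to (exp(-t) - exp(-3*t))/2 for t >= 0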

Example #5: Mixing sines, cosines, and exponentials

Time function Laplace transform
e^{-\alpha t}\left[ \cos{(\omega t)}+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}\right]u(t) \frac{s+\beta}{(s+\alpha)^2+\omega^2}

Starting with the Laplace transform

X(s) = \frac{s+\beta}{(s+\alpha)^2+\omega^2},

we find the inverse transform by first adding and subtracting the same constant α to the numerator:

X(s) = \frac{s+\alpha } { (s+\alpha)^2+\omega^2}  +   \frac{\beta - \alpha }{(s+\alpha)^2+\omega^2}.

By the shift-in-frequency property, we have

 x(t) = e^{-\alpha t} \mathcal{L}^{-1} \left\{   {s \over s^2 + \omega^2}  +  {  \beta - \alpha \over s^2 + \omega^2  } \right\}
 = e^{-\alpha t} \mathcal{L}^{-1} \left\{   {s \over s^2 + \omega^2}  + \left( {  \beta - \alpha \over \omega } \right) \left( { \omega \over s^2 + \omega^2  } \right) \right\}
 = e^{-\alpha t} \left[  \mathcal{L}^{-1} \left\{   {s \over s^2 + \omega^2}  \right\}  + \left( {  \beta - \alpha \over \omega } \right) \mathcal{L}^{-1} \left\{  { \omega \over s^2 + \omega^2  }  \right\}  \right].

Finally, using the Laplace transforms for sine and cosine (see the table, above), we have

x(t)   =  e^{-\alpha t}  \left[ \cos{(\omega t)}u(t)+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}u(t)\right]
 =  e^{-\alpha t}  \left[ \cos{(\omega t)}+\left(\frac{\beta-\alpha}{\omega}\right)\sin{(\omega t)}\right]u(t).
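The result can be verified in the forward direction with SymPy, by transforming the claimed x(t) and comparing against X(s):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b, w = sp.symbols('alpha beta omega', positive=True)

x = sp.exp(-a*t) * (sp.cos(w*t) + ((b - a)/w) * sp.sin(w*t))
X = sp.laplace_transform(x, t, s, noconds=True)
target = (s + b) / ((s + a)**2 + w**2)
print(sp.simplify(X - target))               # -> 0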

Example #6: Phase delay

Time function Laplace transform
\sin{(\omega t+\phi)} \ \frac{s\sin\phi+\omega \cos\phi}{s^2+\omega^2} \
\cos{(\omega t+\phi)} \ \frac{s\cos\phi - \omega \sin\phi}{s^2+\omega^2} \

Starting with the Laplace transform,

X(s) = \frac{s\sin\phi+\omega \cos\phi}{s^2+\omega^2}

we find the inverse by first rearranging terms in the fraction:

X(s) \,\!  {} = \frac{s \sin \phi}{s^2 + \omega^2} + \frac{\omega \cos \phi}{s^2 + \omega^2}
 {} = (\sin \phi) \left(\frac{s}{s^2 + \omega^2} \right) + (\cos \phi) \left(\frac{\omega}{s^2 + \omega^2} \right).

We are now able to take the inverse Laplace transform of our terms:

x(t) \,\!  {} = (\sin \phi) \mathcal{L}^{-1}\left\{\frac{s}{s^2 + \omega^2} \right\} + (\cos \phi) \mathcal{L}^{-1}\left\{\frac{\omega}{s^2 + \omega^2} \right\}
 {}=(\sin \phi)(\cos \omega t) + (\sin \omega t)(\cos \phi). \,\!

To simplify this answer, we must recall the trigonometric identity that

a \sin \omega t + b \cos \omega t = \sqrt{a^2+b^2} \cdot \sin \left(\omega t + \arctan (b/a) \right)

and apply it to our value for x(t):

x(t) \,\!  {} = \sqrt{\cos^2 \phi + \sin^2 \phi} \cdot \sin \left( \omega t + \arctan \left(\frac{\sin \phi}{\cos \phi} \right) \right)
{}= \sin (\omega t + \phi) . \,\!

We can apply similar logic to find that

\mathcal{L}^{-1} \left\{ \frac{s\cos\phi - \omega \sin\phi}{s^2+\omega^2} \right\} = \cos{(\omega t+\phi)}. \
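The two table entries used in this example can themselves be checked with SymPy (expanding the phase-shifted sinusoids first):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
w, phi = sp.symbols('omega phi', positive=True)

x_sin = sp.expand_trig(sp.sin(w*t + phi))    # sin(wt)cos(phi) + cos(wt)sin(phi)
x_cos = sp.expand_trig(sp.cos(w*t + phi))    # cos(wt)cos(phi) - sin(wt)sin(phi)

X_sin = sp.laplace_transform(x_sin, t, s, noconds=True)
X_cos = sp.laplace_transform(x_cos, t, s, noconds=True)

print(sp.simplify(X_sin - (s*sp.sin(phi) + w*sp.cos(phi)) / (s**2 + w**2)))   # -> 0
print(sp.simplify(X_cos - (s*sp.cos(phi) - w*sp.sin(phi)) / (s**2 + w**2)))   # -> 0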


See also

Pierre-Simon Laplace
