Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 9/latex


\setcounter{section}{9}






\subtitle {Base change}

We know, due to Theorem 8.4, that in a finite-dimensional vector space, any two bases have the same length, that is, the same number of vectors. Every vector has, with respect to every basis, unique coordinates \extrabracket {the coefficient tuple} {} {.} How do these coordinates behave when we change the bases? This is answered by the following statement.




\inputfactproof
{Vector space/Finite dimensional/Change of basis/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let \mathcor {} {\mathfrak{ v } = v_1 , \ldots , v_n} {and} {\mathfrak{ w } = w_1 , \ldots , w_n} {} denote bases of $V$.}
\factcondition {Suppose that
\mathrelationchaindisplay
{\relationchain
{v_j }
{ =} { \sum_{ i = 1 }^{ n } c_{ij} w_i }
{ } { }
{ } { }
{ } { }
} {}{}{} with coefficients
\mathrelationchain
{\relationchain
{ c_{ij} }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} which we collect into the $n \times n$-matrix
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }
{ =} { { \left( c_{ij} \right) }_{ij} }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factconclusion {Then a vector $u$, which has the coordinates $\begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix}$ with respect to the basis $\mathfrak{ v }$, has the coordinates
\mathrelationchaindisplay
{\relationchain
{\begin{pmatrix} t _{1 } \\ \vdots\\ t _{ n } \end{pmatrix} }
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix} }
{ =} { \begin{pmatrix} c_{11 } & c_{1 2} & \ldots & c_{1 n } \\ c_{21 } & c_{2 2} & \ldots & c_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ c_{ n 1 } & c_{ n 2 } & \ldots & c_{ n n } \end{pmatrix} \begin{pmatrix} s_{1 } \\ \vdots\\ s_{ n } \end{pmatrix} }
{ } { }
{ } { }
} {}{}{} with respect to the basis $\mathfrak{ w }$.}
\factextra {}
}
{

This follows directly from
\mathrelationchaindisplay
{\relationchain
{ u }
{ =} { \sum_{ j = 1 }^{ n } s_j v_j }
{ =} { \sum_{ j = 1 }^{ n } s_j { \left( \sum_{ i = 1 }^{ n } c_{ij} w_i \right) } }
{ =} { \sum_{ i = 1 }^{ n } { \left( \sum_{ j = 1 }^{ n } s_j c_{ij} \right) } w_i }
{ } { }
} {}{}{,} and the definition of matrix multiplication.

}


If for a basis $\mathfrak{ v }$, we consider the corresponding bijective mapping \extrabracket {see Remark 7.12 } {} {}
\mathdisp {\Psi_ \mathfrak{ v } \colon K^n \longrightarrow V} { , }
then we can express the preceding statement as saying that the triangle
\mathdisp {\begin{matrix}K^n & \stackrel{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }{\longrightarrow} & K^n & \\ & \!\!\! \!\! \Psi_ \mathfrak{ v } \searrow & \downarrow \Psi_ \mathfrak{ w } \!\!\! \!\! & \\ & & V & \!\!\!\!\! \!\!\! \\ \end{matrix}} { }
commutes\extrafootnote {The commutativity of such a diagram of arrows and mappings means that all composed mappings coincide as long as their domain and codomain coincide. In this case, it simply means that
\mathrelationchain
{\relationchain
{ \Psi_ \mathfrak{ v } }
{ = }{ \Psi_ \mathfrak{ w } \circ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds} {.} {.}
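
Lemma 9.1 can also be illustrated computationally. The following is a minimal sketch \extrabracket {an illustration, not part of the lecture} {} {,} assuming Python with the numpy library; the two bases of $\R^3$ are chosen arbitrarily, and the matrix C computed below is the transformation matrix \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } }}{.}

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Two bases v and w of R^3, given by the columns of the
# (invertible) matrices V and W with respect to the standard basis.
V = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])
W = np.array([[2., 0., 0.],
              [0., 1., 1.],
              [0., 1., -1.]])

# v_j = sum_i c_ij w_i means V = W C, so C = M^v_w solves W C = V.
C = np.linalg.solve(W, V)

s = rng.standard_normal(3)  # coordinates of a vector u with respect to v
u = V @ s                   # the vector u itself (standard coordinates)
t = C @ s                   # its coordinates with respect to w
assert np.allclose(W @ t, u)
\end{verbatim}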




\inputdefinition
{ }
{

Let $K$ denote a field, and let $V$ denote a $K$-vector space of dimension $n$. Let \mathcor {} {\mathfrak{ v } = v_1 , \ldots , v_n} {and} {\mathfrak{ w } = w_1 , \ldots , w_n} {} denote two bases of $V$. Let
\mathrelationchaindisplay
{\relationchain
{ v_j }
{ =} { \sum_{ i = 1 }^{ n } c_{ij} w_i }
{ } { }
{ } { }
{ } { }
} {}{}{} with coefficients
\mathrelationchain
{\relationchain
{ c_{ij} }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Then the $n \times n$-matrix
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }
{ =} {(c_{ij})_{ij} }
{ } { }
{ } { }
{ } { }
} {}{}{}

is called the \definitionword {transformation matrix}{} of the base change from $\mathfrak{ v }$ to $\mathfrak{ w }$.

}




\inputremark {}
{

The $j$-th column of a transformation matrix \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ w } }}{} consists of the coordinates of $v_j$ with respect to the basis $\mathfrak{ w }$. This is consistent with Lemma 9.1: the vector $v_j$ has the coordinate tuple $e_j$ with respect to the basis $\mathfrak{ v }$; applying the matrix to $e_j$ yields the $j$-th column of the matrix, and this is just the coordinate tuple of $v_j$ with respect to the basis $\mathfrak{ w }$.

For a one-dimensional space and
\mathrelationchaindisplay
{\relationchain
{v }
{ =} {cw }
{ } { }
{ } { }
{ } { }
} {}{}{,} we have
\mathrelationchain
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ w } } }
{ = }{ c }
{ = }{ { \frac{ v }{ w } } }
{ }{ }
{ }{ }
} {}{}{,} where the fraction is well-defined. This might help in memorizing the order of the bases in this notation.

Another important relation is
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ v } }
{ =} { { { \left( M^{ \mathfrak{ v } }_{ \mathfrak{ w } } \right) } ^{ \text{tr} } } \mathfrak{ w } }
{ } { }
{ } { }
{ } { }
} {}{}{.} Note that here, the matrix is not applied to an $n$-tuple of $K$ but to an $n$-tuple of $V$, yielding a new $n$-tuple of $V$. This equation might be an argument to define the transformation matrix the other way around; however, we consider the behavior in Lemma 9.1 as decisive.

In case
\mathrelationchaindisplay
{\relationchain
{V }
{ =} {K^n }
{ } { }
{ } { }
{ } { }
} {}{}{,} if $\mathfrak{ e }$ is the standard basis, and $\mathfrak{ v }$ some further basis, we obtain the transformation matrix \mathl{M^{ \mathfrak{ e } }_{ \mathfrak{ v } }}{} of the base change from $\mathfrak{ e }$ to $\mathfrak{ v }$ by expressing each $e_j$ as a linear combination of the basis vectors \mathl{v_1 , \ldots , v_n}{,} and writing down the corresponding tuples as columns. The inverse transformation matrix, \mathl{M^{ \mathfrak{ v } }_{ \mathfrak{ e } }}{,} consists simply of \mathl{v_1 , \ldots , v_n}{,} written as columns.

}
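
The last observation can be made concrete with a small computation; a minimal sketch, assuming Python with numpy, for an arbitrarily chosen basis of $\R^3$.

\begin{verbatim}
import numpy as np

# A basis v of R^3; the matrix M^v_e simply has the v_j as columns.
M_v_e = np.array([[1., 0., 1.],
                  [1., 1., 0.],
                  [0., 1., 1.]])

# M^e_v expresses each e_j in the basis v; it is the inverse matrix.
M_e_v = np.linalg.inv(M_v_e)
assert np.allclose(M_e_v @ M_v_e, np.eye(3))
\end{verbatim}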




\inputexample{}
{

We consider in $\R^2$ the standard basis,
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ u } }
{ =} { \begin{pmatrix} 1 \\0 \end{pmatrix} , \, \begin{pmatrix} 0 \\1 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{,} and the basis
\mathrelationchaindisplay
{\relationchain
{ \mathfrak{ v } }
{ =} { \begin{pmatrix} 1 \\2 \end{pmatrix} , \, \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} The basis vectors of $\mathfrak{ v }$ can be expressed directly with the standard basis, namely
\mathdisp {v_1= \begin{pmatrix} 1 \\2 \end{pmatrix} = 1 \begin{pmatrix} 1 \\0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\1 \end{pmatrix} \text{ and } v_2= \begin{pmatrix} -2 \\3 \end{pmatrix} = -2 \begin{pmatrix} 1 \\0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\1 \end{pmatrix}} { . }
Therefore, we get immediately
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } }
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} For example, the vector that has the coordinates \mathl{(4,-3)}{} with respect to $\mathfrak{ v }$ has the coordinates
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } \begin{pmatrix} 4 \\-3 \end{pmatrix} }
{ =} { \begin{pmatrix} 1 & -2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 4 \\-3 \end{pmatrix} }
{ =} { \begin{pmatrix} 10 \\-1 \end{pmatrix} }
{ } { }
{ } { }
} {}{}{} with respect to the standard basis $\mathfrak{ u }$. The transformation matrix \mathl{M^{ \mathfrak{ u } }_{ \mathfrak{ v } }}{} is more difficult to compute. We have to write the standard vectors as linear combinations of \mathcor {} {v_1} {and} {v_2} {.} A direct computation \extrabracket {solving two linear systems} {} {} yields
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 1 \\0 \end{pmatrix} }
{ =} { { \frac{ 3 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} - { \frac{ 2 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 0 \\1 \end{pmatrix} }
{ =} { { \frac{ 2 }{ 7 } } \begin{pmatrix} 1 \\2 \end{pmatrix} + { \frac{ 1 }{ 7 } } \begin{pmatrix} -2 \\3 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.} Hence,
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ u } }_{ \mathfrak{ v } } }
{ =} { \begin{pmatrix} { \frac{ 3 }{ 7 } } & { \frac{ 2 }{ 7 } } \\ - { \frac{ 2 }{ 7 } } & { \frac{ 1 }{ 7 } } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}
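
The computations of this example can be double-checked as follows; again a minimal sketch, assuming Python with numpy.

\begin{verbatim}
import numpy as np

M_v_u = np.array([[1., -2.],
                  [2.,  3.]])
M_u_v = np.array([[ 3/7, 2/7],
                  [-2/7, 1/7]])

# The two transformation matrices are inverse to each other ...
assert np.allclose(M_u_v @ M_v_u, np.eye(2))
# ... and they convert the coordinates of the example back and forth.
assert np.allclose(M_v_u @ np.array([4., -3.]), np.array([10., -1.]))
assert np.allclose(M_u_v @ np.array([10., -1.]), np.array([4., -3.]))
\end{verbatim}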




\inputfactproof
{Base change/Three bases/Composition/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let \mathcor {} {\mathfrak{ u } = u_1 , \ldots , u_n ,\, \mathfrak{ v } = v_1 , \ldots , v_n ,\,} {and} {\mathfrak{ w } = w_1 , \ldots , w_n} {} denote bases of $V$.}
\factconclusion {Then the three transformation matrices fulfill the relation
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ u } }_{ \mathfrak{ w } } }
{ =} { M^{ \mathfrak{ v } }_{ \mathfrak{ w } } \circ M^{ \mathfrak{ u } }_{ \mathfrak{ v } } }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factextra {In particular, we have
\mathrelationchaindisplay
{\relationchain
{ M^{ \mathfrak{ u } }_{ \mathfrak{ v } } \circ M^{ \mathfrak{ v } }_{ \mathfrak{ u } } }
{ =} { E_n }
{ } { }
{ } { }
{ } { }
} {}{}{.}}

}
{See Exercise 9.9.}
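
For \mathl{V = K^n}{,} where a basis is encoded by the invertible matrix having its vectors as columns, the relation of the lemma can be tested numerically. A minimal sketch, assuming Python with numpy; random square matrices are invertible almost surely.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# Three bases of R^4, encoded as the columns of U, V, W.
U, V, W = (rng.standard_normal((4, 4)) for _ in range(3))

def M(X, Y):
    # Transformation matrix M^x_y of the base change from x to y.
    return np.linalg.solve(Y, X)

assert np.allclose(M(U, W), M(V, W) @ M(U, V))    # M^u_w = M^v_w M^u_v
assert np.allclose(M(U, V) @ M(V, U), np.eye(4))  # M^u_v M^v_u = E_4
\end{verbatim}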






\subtitle {Sum of linear subspaces}




\inputdefinition
{ }
{

For a $K$-vector space $V$ and a family of linear subspaces
\mathrelationchain
{\relationchain
{ U_1 , \ldots , U_n }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we define the \definitionword {sum of these linear subspaces}{} by
\mathrelationchaindisplay
{\relationchain
{ U_1 + \cdots + U_n }
{ =} { { \left\{ u_1 + \cdots + u_n \mid u _i \in U_i \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}

For this, we also write \mathl{\sum_{i = 1}^n U_i}{.} The sum is again a linear subspace. In case
\mathrelationchaindisplay
{\relationchain
{V }
{ =} { U_1 + \cdots + U_n }
{ } { }
{ } { }
{ } { }
} {}{}{,} we say that $V$ is the sum of the linear subspaces \mathl{U_1 , \ldots , U_n}{.} The following theorem describes an important relation between the dimension of the sum of two linear subspaces and the dimension of their intersection.




\inputfactproof
{Linear subspace/Sum and intersection/Dimension/Fact}
{Theorem}
{}
{

\factsituation {Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let
\mathrelationchain
{\relationchain
{ U_1,U_2 }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} denote linear subspaces.}
\factconclusion {Then
\mathrelationchaindisplayhandleft
{\relationchaindisplayhandleft
{ \dim_{ K } { \left( U_1 \right) } + \dim_{ K } { \left( U_2 \right) } }
{ =} { \dim_{ K } { \left( U_1 \cap U_2 \right) } + \dim_{ K } { \left( U_1 + U_2 \right) } }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factextra {}
}
{

Let \mathl{w_1 , \ldots , w_k}{} be a basis of \mathl{U_1 \cap U_2}{.} On one hand, we can extend this basis, according to Theorem 8.10, to a basis \mathl{w_1 , \ldots , w_k, u_1 , \ldots , u_n}{} of $U_1$; on the other hand, we can extend it to a basis \mathl{w_1 , \ldots , w_k, v_1 , \ldots , v_m}{} of $U_2$. Then
\mathdisp {w_1 , \ldots , w_k, u_1 , \ldots , u_n , v_1 , \ldots , v_m} { }
is a generating system of \mathl{U_1+U_2}{.} We claim that it is even a basis. To see this, let
\mathrelationchaindisplay
{\relationchain
{ a_1w_1 + \cdots + a_k w_k + b_1 u_1 + \cdots + b_n u_n + c_1 v_1 + \cdots + c_mv_m }
{ =} { 0 }
{ } { }
{ } { }
{ } { }
} {}{}{.} This implies that the element
\mathrelationchaindisplay
{\relationchain
{ a_1w_1 + \cdots + a_k w_k + b_1 u_1 + \cdots + b_n u_n }
{ =} {- c_1 v_1 - \cdots - c_mv_m }
{ } { }
{ } { }
{ } { }
} {}{}{} belongs to \mathl{U_1 \cap U_2}{,} since the left-hand side lies in $U_1$, and the right-hand side lies in $U_2$. Hence, this element is a linear combination of \mathl{w_1 , \ldots , w_k}{} alone; comparing coefficients in the bases of \mathcor {} {U_1} {and} {U_2} {} given above, we get directly
\mathrelationchain
{\relationchain
{b_i }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for
\mathrelationchain
{\relationchain
{ i }
{ = }{ 1 , \ldots , n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and
\mathrelationchain
{\relationchain
{c_j }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for
\mathrelationchain
{\relationchain
{ j }
{ = }{ 1 , \ldots , m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} From the equation before, we can then infer that also
\mathrelationchain
{\relationchain
{ a_\ell }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds for all $\ell$. Hence, we have linear independence. This gives altogether
\mathrelationchainalignhandleft
{\relationchainalignhandleft
{ \dim_{ K } { \left( U_1 \cap U_2 \right) } + \dim_{ K } { \left( U_1 + U_2 \right) } }
{ =} { k + k +n +m }
{ =} { k+n +k+m }
{ =} { \dim_{ K } { \left( U_1 \right) } + \dim_{ K } { \left( U_2 \right) } }
{ } { }
} {} {}{.}

}
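
The theorem also provides a practical way to determine the dimension of an intersection: the dimension of \mathl{U_1+U_2}{} is the rank of the combined generating vectors. A minimal sketch, assuming Python with numpy; the subspaces are chosen randomly, and the printed values are the generic ones.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Subspaces U1, U2 of R^6, spanned by the rows of A1 and A2.
A1 = rng.standard_normal((4, 6))
A2 = rng.standard_normal((3, 6))

d1 = np.linalg.matrix_rank(A1)                      # dim U1 (here 4)
d2 = np.linalg.matrix_rank(A2)                      # dim U2 (here 3)
d_sum = np.linalg.matrix_rank(np.vstack([A1, A2]))  # dim(U1 + U2)

# The theorem yields the dimension of the intersection.
d_cap = d1 + d2 - d_sum
print(d1, d2, d_sum, d_cap)   # generically: 4 3 6 1
\end{verbatim}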


The intersection of two planes \extrabracket {through the origin} {} {} in $\R^3$ is \quotationshort{usually}{} a line; it is the plane itself if the same plane is taken twice, but it is never just a point. This observation is generalized in the following statement.




\inputfactproof
{Linear subspace/Intersection/Dimension estimate/Fact}
{Corollary}
{}
{

\factsituation {Let $K$ be a field, and let $V$ be a $K$-vector space of dimension $n$. Let
\mathrelationchain
{\relationchain
{ U_1,U_2 }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} denote linear subspaces of dimensions \mathcor {} {\dim_{ K } { \left( U_1 \right) } = n-k_1} {and} {\dim_{ K } { \left( U_2 \right) } = n-k_2} {.}}
\factconclusion {Then
\mathrelationchaindisplay
{\relationchain
{ \dim_{ K } { \left( U_1 \cap U_2 \right) } }
{ \geq} { n-k_1 -k_2 }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factextra {}
}
{

Due to Theorem 9.7, we have
\mathrelationchainalignhandleft
{\relationchainalignhandleft
{ \dim_{ K } { \left( U_1 \cap U_2 \right) } }
{ =} { \dim_{ K } { \left( U_1 \right) } + \dim_{ K } { \left( U_2 \right) } - \dim_{ K } { \left( U_1 +U_2 \right) } }
{ =} { n-k_1 + n-k_2 - \dim_{ K } { \left( U_1 +U_2 \right) } }
{ \geq} { n-k_1 + n-k_2 -n }
{ =} { n-k_1-k_2 }
} {} {}{.}

}
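
For the two planes mentioned above, this reads as follows; a minimal sketch, assuming Python with numpy, where a generic choice of planes yields a line.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# Two planes through the origin in R^3, spanned by the rows of A1, A2.
A1 = rng.standard_normal((2, 3))
A2 = rng.standard_normal((2, 3))

d_sum = np.linalg.matrix_rank(np.vstack([A1, A2]))  # at most 3
d_cap = 2 + 2 - d_sum                               # Theorem 9.7
print(d_cap)   # generically 1: the planes meet in a line
\end{verbatim}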


Recall that, for a linear subspace
\mathrelationchain
{\relationchain
{U }
{ \subseteq }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the difference \mathl{\dim_{ K } { \left( V \right) } - \dim_{ K } { \left( U \right) }}{} is called the \keyword {codimension} {} of $U$ in $V$. With this concept, we can paraphrase the statement above by saying that the codimension of an intersection of linear subspaces is at most the sum of their codimensions.




\inputfactproof
{Homogeneous linear system/Dimension estimate/Fact}
{Corollary}
{}
{

\factsituation {Let a homogeneous system of linear equations with $k$ equations in $n$ variables be given.}
\factconclusion {Then the dimension of the solution space of the system is at least \mathl{n-k}{.}}
\factextra {}
}
{

The solution space of one linear equation in $n$ variables has dimension $n-1$ or $n$ \extrabracket {the latter in case all coefficients are zero} {} {.} The solution space of the system is the intersection of the solution spaces of the individual equations. Therefore, the statement follows by applying Corollary 9.8 successively to the individual solution spaces.

}
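
A quick numerical check of this bound; a minimal sketch, assuming Python with the scipy library, whose function null_space returns a basis of the solution space of a homogeneous system.

\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

# A homogeneous system with k = 2 equations in n = 5 variables.
A = np.array([[1., 2., 0., -1., 3.],
              [0., 1., 1.,  2., 0.]])

N = null_space(A)   # the columns form a basis of the solution space
print(N.shape[1])   # 3, in accordance with the bound n - k = 3
\end{verbatim}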






\subtitle {Direct sum}




\inputdefinition
{ }
{

Let $K$ denote a field, and let $V$ denote a $K$-vector space. Let \mathl{U_1 , \ldots , U_m}{} be a family of linear subspaces of $V$. We say that $V$ is the \definitionword {direct sum}{} of the \mathl{U_i}{} if the following conditions are fulfilled. \enumerationtwo {Every vector
\mathrelationchain
{\relationchain
{ v }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} has a representation
\mathrelationchaindisplay
{\relationchain
{v }
{ =} {u_1+u_2 + \cdots + u_m }
{ } { }
{ } { }
{ } { }
} {}{}{,} where
\mathrelationchain
{\relationchain
{ u_i }
{ \in }{ U_i }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {
\mathrelationchain
{\relationchain
{U_i \cap { \left( \sum_{j \neq i} U_j \right) } }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} for all $i$.

}

}

If the sum of the $U_i$ is direct, then we also write \mathl{U_1 \oplus \cdots \oplus U_m}{} instead of \mathl{U_1 + \cdots + U_m}{.} For two linear subspaces
\mathrelationchaindisplay
{\relationchain
{ U_1,U_2 }
{ \subseteq} { V }
{ } { }
{ } { }
{ } { }
} {}{}{,} the second condition just means
\mathrelationchain
{\relationchain
{U_1 \cap U_2 }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}
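
For two linear subspaces given by generating vectors, this condition can be tested via ranks, using Theorem 9.7: the intersection is $0$ if and only if the ranks add up. A minimal sketch, assuming Python with numpy.

\begin{verbatim}
import numpy as np

# Subspaces U1, U2 of R^3, spanned by the rows of A1 and A2.
A1 = np.array([[1., 0., 0.],
               [0., 1., 0.]])
A2 = np.array([[0., 0., 1.]])

r1 = np.linalg.matrix_rank(A1)
r2 = np.linalg.matrix_rank(A2)
r = np.linalg.matrix_rank(np.vstack([A1, A2]))

# The sum is direct iff dim(U1 cap U2) = 0, i.e. iff r = r1 + r2;
# here, moreover, r = 3, so R^3 is the direct sum of U1 and U2.
print(r == r1 + r2 == 3)   # True
\end{verbatim}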




\inputexample{}
{

Let $V$ denote a finite-dimensional $K$-vector space together with a basis \mathl{v_1 , \ldots , v_n}{.} Let
\mathrelationchaindisplay
{\relationchain
{ \{ 1 , \ldots , n\} }
{ =} { I_1 \uplus \ldots \uplus I_k }
{ } { }
{ } { }
{ } { }
} {}{}{} be a partition of the index set. Let
\mathrelationchaindisplay
{\relationchain
{ U_j }
{ =} { \langle v_i ,\, i \in I_j \rangle }
{ } { }
{ } { }
{ } { }
} {}{}{} be the linear subspaces generated by the subfamilies. Then
\mathrelationchaindisplay
{\relationchain
{ V }
{ =} { U_1 \oplus \cdots \oplus U_k }
{ } { }
{ } { }
{ } { }
} {}{}{.} The extreme case
\mathrelationchain
{\relationchain
{ I_j }
{ = }{ \{j\} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} yields the direct sum
\mathrelationchaindisplay
{\relationchain
{V }
{ =} { K v_1 \oplus \cdots \oplus K v_n }
{ } { }
{ } { }
{ } { }
} {}{}{} with one-dimensional linear subspaces.

}




\inputfactproof
{Vector space/Finite dimensional/Linear subspace/Direct complement/Fact}
{Lemma}
{}
{

\factsituation {Let $V$ be a finite-dimensional $K$-vector space, and let
\mathrelationchain
{\relationchain
{ U }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} be a linear subspace.}
\factconclusion {Then there exists a linear subspace
\mathrelationchain
{\relationchain
{ W }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} such that we have the direct sum decomposition
\mathrelationchaindisplay
{\relationchain
{ V }
{ =} { U \oplus W }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factextra {}
}
{

Let \mathl{v_1 , \ldots , v_k}{} denote a basis of $U$. We can extend this basis, according to Theorem 8.10, to a basis \mathl{v_1 , \ldots , v_k, v_{k+1} , \ldots , v_n}{} of $V$. Then
\mathrelationchaindisplay
{\relationchain
{ W }
{ =} { \langle v_{k+1} , \ldots , v_n \rangle }
{ } { }
{ } { }
{ } { }
} {}{}{} fulfills both conditions of a direct sum decomposition: every vector of $V$ is a linear combination of the basis vectors, and the linear independence of the basis yields that the intersection of \mathcor {} {U} {and} {W} {} is $0$.

}


In the preceding statement, the linear subspace $W$ is called a \keyword {direct complement} {} for $U$ \extrabracket {in $V$} {} {.} In general, there are many different direct complements.
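
The proof is constructive and easy to imitate numerically: extend a basis of $U$ by suitable standard vectors. The following is a minimal sketch, assuming Python with numpy; the helper direct_complement is a hypothetical name, not a library function, and it expects the rows of its argument to be linearly independent.

\begin{verbatim}
import numpy as np

def direct_complement(U):
    # Extend the rows of U (a basis of a linear subspace of R^n) by
    # standard vectors to a basis of R^n; the vectors added along
    # the way span a direct complement W.
    n = U.shape[1]
    basis = list(U)
    complement = []
    for e in np.eye(n):
        if np.linalg.matrix_rank(np.vstack(basis + [e])) > len(basis):
            basis.append(e)
            complement.append(e)
    return np.array(complement)

U = np.array([[1., 1., 0.],
              [0., 1., 1.]])
print(direct_complement(U))   # [[1. 0. 0.]], so W = <e_1> works here
\end{verbatim}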






\subtitle {Direct sum and product}

Recall that, for a family
\mathcond {M_i} {}
{i \in I} {}
{} {} {} {,} of sets $M_i$, the product set \mathl{\prod_{i \in I} M_i}{} is defined. If all
\mathrelationchain
{\relationchain
{ M_i }
{ = }{ V_i }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} are $K$-vector spaces over a field $K$, then this is, using componentwise addition and scalar multiplication, again a $K$-vector space. This is called the \keyword {direct product of vector spaces} {.} If it is always the same space, say
\mathrelationchain
{\relationchain
{M_i }
{ = }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then we also write \mathl{V^{I}}{.} This is just the mapping space \mathl{\operatorname{Map} \, { \left( I , V \right) }}{.}

Each vector space $V_j$ is a linear subspace inside the direct product, namely as the set of all tuples
\mathdisp {(x_i)_{i \in I} \text{ with } x_i = 0 \text{ for all } i \neq j} { . }
The set of all tuples that are different from $0$ at \extrabracket {at most} {} {} one place generates a linear subspace of the direct product. For $I$ infinite, this subspace is not the whole direct product.




\inputdefinition
{ }
{

Let $I$ denote a set, and let $K$ denote a field. Suppose that, for every
\mathrelationchain
{\relationchain
{ i }
{ \in }{ I }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} a $K$-vector space $V_i$ is given. Then the set
\mathrelationchaindisplay
{\relationchain
{ \bigoplus_{i \in I} V_i }
{ =} { { \left\{ (v_i)_{ i \in I} \mid v_i \in V_i , \, v_i \neq 0 \text{ for only finitely many } i \right\} } }
{ } { }
{ } { }
{ } { }
} {}{}{}

is called the \definitionword {direct sum}{} of the $V_i$.

}

We have the linear subspace relation
\mathrelationchaindisplay
{\relationchain
{ \bigoplus_{i \in I} V_i }
{ \subseteq} { \prod_{i \in I} V_i }
{ } { }
{ } { }
{ } { }
} {}{}{.} If we always have the same vector space, then we write \mathl{V^{(I)}}{} for this direct sum. In particular,
\mathrelationchaindisplay
{\relationchain
{ V^{(I)} }
{ \subseteq} {V^{I} }
{ } { }
{ } { }
{ } { }
} {}{}{} is a linear subspace. For $I$ finite, there is no difference, but for an infinite index set, this inclusion is strict. For example, \mathl{\R^\N}{} is the space of all real sequences, but \mathl{\R^{(\N)}}{} consists only of those sequences in which only finitely many members are different from $0$. The polynomial ring \mathl{K[X]}{} is the direct sum of the vector spaces \mathl{KX^n,\, n \in \N}{.} Every $K$-vector space with a basis
\mathcond {v_i} {}
{i \in I} {}
{} {} {} {,} is \quotationshort{isomorphic}{} to the direct sum \mathl{\bigoplus_{i \in I} Kv_i}{.}
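
An element of such a direct sum, for instance of \mathl{\R^{(\N)}}{,} can be stored by listing just its finitely many nonzero members; a minimal sketch in Python, where the dictionary representation is an illustrative choice, not a canonical one.

\begin{verbatim}
# An element of R^(N): a sequence with only finitely many nonzero
# members, stored as a dictionary {index: value}.
def add(x, y):
    z = {i: x.get(i, 0.0) + y.get(i, 0.0) for i in x.keys() | y.keys()}
    return {i: c for i, c in z.items() if c != 0.0}

x = {0: 1.0, 5: 2.0}   # the sequence (1, 0, 0, 0, 0, 2, 0, 0, ...)
y = {5: -2.0, 7: 3.0}
print(add(x, y))       # {0: 1.0, 7: 3.0} (up to ordering)
\end{verbatim}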