Theorem. Let $W$ be a subspace of a vector space $V$, and assume $\dim(V)=n$. Then $\dim(W)\le n$. Moreover, if $\dim(W)=n$, then $V=W$.

Proof. We may construct a basis for $W$ by the same extension process used to construct a basis for $V$: start with a linearly independent subset of $W$ and enlarge it one vector at a time. Since no linearly independent subset of $V$ has more than $n$ elements, this process must halt by the time the set has $n$ elements. Therefore, $\dim(W)\le \dim(V)$. For the second claim, the key lemma shows that any linearly independent set with $\dim(V)$ elements generates $V$; so if $\dim(W)=n$, a basis for $W$ is also a basis for $V$, and hence $V=W$.
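As a quick illustration (not part of the proof): in $V=\mathbb{R}^3$, the plane $W=\set{(x,y,0) : x,y\in\mathbb{R}}$ has basis $\set{(1,0,0),(0,1,0)}$, so

\[\dim(W)=2\le 3=\dim(\mathbb{R}^3),\]

and a subspace of $\mathbb{R}^3$ of dimension $3$ must, by the theorem, be all of $\mathbb{R}^3$.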


Theorem. Let $W_1$ and $W_2$ be subspaces of a vector space $V$. Assume $V$ is finite-dimensional. Then we have

\[\dim(W_1+W_2)=\dim(W_1)+\dim(W_2)-\dim(W_1\cap W_2).\]

Proof. Let $\beta$ be a basis for $W_1\cap W_2$. By the Lemma, we know that we can find a set $S_1$ such that $S_1\cup \beta$, denoted by $\beta_1$, is a basis for $W_1$; similarly, we can find a set $S_2$ such that $\beta_2=S_2\cup\beta$ is a basis for $W_2$. Let

\[\begin{align*} S_1&=\set{u_1,\ldots,u_m}\\\\ S_2&=\set{v_1,\ldots,v_n}\\\\ \beta&=\set{w_1,\ldots,w_o}. \end{align*}\]

We claim $S_1\cup S_2\cup \beta$ is a basis for $W_1+W_2$. First, it is certainly a generating set: every element of $W_1+W_2$ is a sum of a vector in $W_1$ and a vector in $W_2$, each of which is a linear combination of $\beta_1$ or $\beta_2$; we leave the details to the reader. It remains to show that the set is linearly independent. Consider a linear combination $x$ of $S_1\cup S_2\cup \beta$,

\[x=a_1u_1+\cdots +a_mu_m+b_1v_1+\cdots +b_nv_n + c_1w_1+\cdots +c_ow_o\]

We may write $x$ as $u+v+w$, where

\[\begin{align*} u&=a_1u_1+\cdots+a_mu_m\\\\ v&=b_1v_1+\cdots+b_nv_n\\\\ w&=c_1w_1+\cdots+c_ow_o. \end{align*}\]

To show that $S_1\cup S_2\cup\beta$ is linearly independent, we assume $x=0$. Equivalently, we have $v=-u-w$. The vector on the left-hand side of this equation lies in $W_2$, while the right-hand side shows that the same vector lies in $W_1$. Therefore, $v$ is in $W_1\cap W_2$, so $v$ can be written as a linear combination of the basis $\beta$, say $v=d_1w_1+\cdots+d_ow_o$. Subtracting the two expressions for $v$ gives

\[b_1v_1+\cdots+b_nv_n-d_1w_1-\cdots-d_ow_o=0,\]

and since $\beta_2=S_2\cup\beta$ is linearly independent, all the $b_j$ (and all the $d_k$) are zero; in particular, $v=0$. It follows that $u+w=0$, and since $\beta_1=S_1\cup\beta$ is linearly independent, all the $a_i$ and $c_k$ are zero as well. Every coefficient vanishes, so $S_1\cup S_2\cup \beta$ is linearly independent. Thus, it forms a basis for $W_1+W_2$.

The rest of the proof is simply counting the number of vectors in the basis. Since $S_1$, $S_2$, and $\beta$ are pairwise disjoint (a vector common to $S_1$ and $S_2$ would lie in $W_1\cap W_2=\text{span}(\beta)$, contradicting the linear independence of $\beta_1$), we get

\[\dim(W_1+W_2)=m+n+o=(m+o)+(n+o)-o=\dim(W_1)+\dim(W_2)-\dim(W_1\cap W_2).\]
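As a concrete check of the formula: take $V=\mathbb{R}^3$ with $W_1$ the $xy$-plane and $W_2$ the $yz$-plane, so that $W_1\cap W_2$ is the $y$-axis. Then

\[\dim(W_1+W_2)=2+2-1=3,\]

which agrees with the fact that $W_1+W_2=\mathbb{R}^3$.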


Corollary. Let $V$ be a finite-dimensional vector space. If $V=W_1\oplus W_2$, then $\dim(V)=\dim(W_1)+\dim(W_2)$.

Proof. By the definition of direct sum, $W_1\cap W_2=\set{0}$, so $\dim(W_1\cap W_2)=0$, and the conclusion follows immediately from the theorem above.
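For example, $\mathbb{R}^3$ is the direct sum of the $xy$-plane and the $z$-axis, and indeed $3=2+1$.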


Linear Transformation

Definition. Let $V$ and $W$ be vector spaces over a field $F$. Let $T:V\to W$ be a map. We say $T$ is linear, or call $T$ a linear transformation, if $T$ satisfies the following two conditions: for all $x$, $x_1$, and $x_2$ in $V$ and all $c\in F$,

  1. $T(x_1+x_2)=T(x_1)+T(x_2)$;
  2. $T(cx)=cT(x).$

To verify that a map between vector spaces is a linear transformation, we need to show that it satisfies the two conditions given in the definition. We use the following examples to demonstrate.

Example. We define the projection map $\text{proj}:\mathbb{R}^3\to \mathbb{R}$ by $\text{proj}((x,y,z))=x$. The projection map is linear since we have

\[\begin{align*} \text{proj}((x,y,z)+(x',y',z'))&=\text{proj}((x+x',y+y',z+z'))\\\\ &=x+x'=\text{proj}((x,y,z))+\text{proj}((x',y',z')) \end{align*}\]

and, for $c\in\mathbb{R}$,

\[\text{proj}(c(x,y,z))=\text{proj}((cx,cy,cz))=cx=c\text{proj}((x,y,z)).\]
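Looking ahead to the matrix viewpoint at the end of this section, note that $\text{proj}$ is exactly multiplication by a $1\times 3$ matrix:

\[\text{proj}((x,y,z))=\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x\\\\ y\\\\ z \end{bmatrix}=x.\]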

Example. In Calculus, or in any course in Analysis, one proves that

\[\frac{\partial}{\partial x}:\mathcal{F}^1(\mathbb{R})\to\mathcal{F}^0(\mathbb{R})\quad\text{and}\quad\int_0^t\cdot dx:\mathcal{F}^0(\mathbb{R})\to\mathcal{F}^1(\mathbb{R})\]

are linear. Furthermore, these two operations are inverse to each other, up to a constant of integration. This is the content of the Fundamental Theorem of Calculus.
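Concretely, the two directions of this statement read

\[\frac{d}{dt}\int_0^t f(x)\,dx=f(t)\quad\text{and}\quad\int_0^t f'(x)\,dx=f(t)-f(0),\]

so integration after differentiation recovers $f$ only up to its value at $0$.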

Example. $T((a_1,a_2))=(2a_1+a_2,a_1)$ is a linear transformation on $\mathbb{R}^2$. We may check:

\[\begin{align*} T((a_1,a_2)+(b_1,b_2))&=T((a_1+b_1,a_2+b_2))\\\\&=(2(a_1+b_1)+(a_2+b_2),a_1+b_1)\\\\&=(2a_1+a_2,a_1)+(2b_1+b_2,b_1)\\\\ &=T((a_1,a_2))+T((b_1,b_2)) \end{align*}\]

and, for $c\in \mathbb{R}$, $T(c(a_1,a_2))=T((ca_1,ca_2))=(2ca_1+ca_2,ca_1)=c(2a_1+a_2,a_1)=cT((a_1,a_2))$. We should note that this map is equivalent to multiplying the matrix

\[\begin{bmatrix} 2 & 1\\\\ 1 & 0 \end{bmatrix}\]

by the column vector $\begin{bmatrix}a_1\\a_2\end{bmatrix}$; indeed,

\[\begin{bmatrix} 2 & 1\\\\ 1 & 0 \end{bmatrix}\begin{bmatrix} a_1\\\\ a_2 \end{bmatrix}=\begin{bmatrix} 2a_1+a_2\\\\ a_1 \end{bmatrix}=T((a_1,a_2)).\]

To conclude our introduction to linear transformations, let us consider a fundamental question:

Can we represent any linear transformation between finite-dimensional vector spaces as a matrix?

Two key subspaces associated with a linear transformation

Let $T:V\to W$ be linear. One of the two subspaces is obvious, and its motivation is easy to justify: it is the range (or image) of $T$, denoted by $T(V)$.

Lemma. $T(V)$ is a subspace of $W$.

Proof. (Exercise)
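As a hint (a sketch of the exercise, not a full solution): $0=T(0)\in T(V)$, and for any $T(x),T(y)\in T(V)$ and scalar $c$, linearity gives $T(x)+cT(y)=T(x+cy)\in T(V)$, so $T(V)$ is closed under addition and scalar multiplication.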

The other subspace, which lives inside $V$, is more hidden but plays a significant role in Linear Algebra. We have already seen it several times, for instance when discussing the solution space of a differential equation.

Recall. The solution set of a homogeneous linear system $Ax=0$ forms a subspace.
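For instance, the solutions of the single equation $x+y+z=0$ in $\mathbb{R}^3$ form a plane through the origin, which is a $2$-dimensional subspace.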

Here we show that this holds in general.

Lemma. The solution set of $T(v)=0$ forms a subspace of $V$.

Proof. (Exercise)

Definition. Let $V$ and $W$ be vector spaces, and let $T:V\to W$ be linear. We define the null space (or kernel) $N(T)$ of $T$ to be the set of all vectors $x$ in $V$ such that $T(x)=0$; i.e., $N(T)=\set{x\in V : T(x)=0}$.
We define the range (or image) $R(T)$ to be the subset of $W$ consisting of all images (under $T$) of elements of $V$; i.e., $R(T)=\set{T(x) : x\in V}$.

If $N(T)$ and $R(T)$ are finite-dimensional, then we define the nullity of $T$, denoted as $\text{nullity}(T)$, and the rank of $T$, denoted as $\text{rank}(T)$, to be the dimensions of $N(T)$ and $R(T)$, respectively.
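As a worked example, consider the projection map from earlier: $\text{proj}:\mathbb{R}^3\to\mathbb{R}$, $\text{proj}((x,y,z))=x$. Then

\[N(\text{proj})=\set{(0,y,z) : y,z\in\mathbb{R}}\quad\text{and}\quad R(\text{proj})=\mathbb{R},\]

so $\text{nullity}(\text{proj})=2$ and $\text{rank}(\text{proj})=1$. Their sum is $3=\dim(\mathbb{R}^3)$, a pattern worth keeping in mind.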
