transformation matrix

suppose that $V$ and $W$ are finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_m\}$, respectively. let $T\colon V \to W$ be linear. then for each $j$, $1 \le j \le n$, there exist unique scalars $a_{ij}$, $1 \le i \le m$, such that

$$T(v_j) = \sum_{i=1}^{m} a_{ij} w_i \quad \text{for } 1 \le j \le n.$$

we call the $m \times n$ matrix $A$ defined by $A_{ij} = a_{ij}$ the matrix representation of $T$ in the ordered bases $\beta$ and $\gamma$ and write $A = [T]_\beta^\gamma$. if $V = W$ and $\beta = \gamma$, then we write $A = [T]_\beta$.
[cite:;taken from @algebra_insel_2019 chapter 2 linear transformations and matrices]
notice that the $j$th column of $A$ is simply $[T(v_j)]_\gamma$. also observe that if $U\colon V \to W$ is a linear transformation such that $[U]_\beta^\gamma = [T]_\beta^\gamma$, then $U = T$.
[cite:;taken from @algebra_insel_2019 chapter 2 linear transformations and matrices]
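to make the definition concrete, here is a minimal numerical sketch; the map $T(x, y) = (x + 2y,\; 3x - y,\; y)$ and the bases below are hypothetical choices, not taken from the book. the $j$th column of $[T]_\beta^\gamma$ is obtained by solving for the coordinates of $T(v_j)$ relative to $\gamma$.

#+begin_src python
import numpy as np

def T(v):
    # hypothetical linear map T: R^2 -> R^3, T(x, y) = (x + 2y, 3x - y, y)
    x, y = v
    return np.array([x + 2 * y, 3 * x - y, y], dtype=float)

beta = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]          # ordered basis of R^2
gamma = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([1.0, 1.0, 1.0])]                          # ordered basis of R^3

G = np.column_stack(gamma)  # columns are the vectors of gamma

# the j-th column of [T]_beta^gamma is [T(v_j)]_gamma, i.e. the solution c of G c = T(v_j)
A = np.column_stack([np.linalg.solve(G, T(v)) for v in beta])
print(A)  # [T]_beta^gamma
#+end_src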
let $V$, $W$, and $Z$ be finite-dimensional vector spaces with ordered bases $\alpha$, $\beta$, and $\gamma$, respectively. let $T\colon V \to W$ and $U\colon W \to Z$ be linear transformations. then

$$[UT]_\alpha^\gamma = [U]_\beta^\gamma [T]_\alpha^\beta.$$
[cite:;taken from @algebra_insel_2019 theorem 2.11]
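a quick numerical check of this composition rule; the bases $\alpha$, $\beta$, $\gamma$, the helper rep, and the standard-coordinate matrices R and S of $T$ and $U$ below are made-up values for illustration only.

#+begin_src python
import numpy as np

def rep(M, B_in, B_out):
    # matrix of the map x -> M @ x relative to bases B_in (domain) and B_out (codomain)
    out = np.column_stack(B_out)
    return np.column_stack([np.linalg.solve(out, M @ v) for v in B_in])

alpha = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]    # basis of V
beta  = [np.array([2.0, 1.0]), np.array([0.0, 1.0])]    # basis of W
gamma = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # basis of Z

R = np.array([[1.0, 2.0], [0.0, 1.0]])   # T: V -> W in standard coordinates
S = np.array([[0.0, 1.0], [3.0, 1.0]])   # U: W -> Z in standard coordinates

lhs = rep(S @ R, alpha, gamma)                    # [UT]_alpha^gamma
rhs = rep(S, beta, gamma) @ rep(R, alpha, beta)   # [U]_beta^gamma [T]_alpha^beta
print(np.allclose(lhs, rhs))                      # True
#+end_src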
the following theorem states that if we want to find the coordinates of the transformed vector $T(u)$ in the basis $\gamma$, we can do so by first taking the coordinates of the original vector $u$ in the basis $\beta$ and then multiplying them by the matrix that represents the transformation with respect to those bases.
let $V$ and $W$ be finite-dimensional vector spaces having ordered bases $\beta$ and $\gamma$, respectively, and let $T\colon V \to W$ be linear. then, for each $u \in V$, we have

$$[T(u)]_\gamma = [T]_\beta^\gamma [u]_\beta.$$
[cite:;taken from @algebra_insel_2019 theorem 2.14]
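the theorem can be checked numerically; the map (given by the matrix M in standard coordinates), the bases, and the vector $u$ below are hypothetical.

#+begin_src python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, -1.0], [0.0, 1.0]])   # T: R^2 -> R^3 in standard coordinates
beta  = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
gamma = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([1.0, 1.0, 1.0])]
B, G = np.column_stack(beta), np.column_stack(gamma)

A = np.column_stack([np.linalg.solve(G, M @ v) for v in beta])   # [T]_beta^gamma

u = np.array([4.0, -2.0])
u_beta   = np.linalg.solve(B, u)        # [u]_beta
Tu_gamma = np.linalg.solve(G, M @ u)    # [T(u)]_gamma, computed directly
print(np.allclose(Tu_gamma, A @ u_beta))   # True: [T(u)]_gamma = [T]_beta^gamma [u]_beta
#+end_src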
let $T\colon \mathbb{R}^n \to \mathbb{R}^m$ be linear such that

$$T(x) = A x \quad \text{for all } x \in \mathbb{R}^n;$$

relative to the standard bases, the $j$th column of $A$ is given by

$$A_{\cdot j} = T(e_j).$$

more generally, we want to have $A$ such that

$$[T(u)]_\gamma = A\,[u]_\beta \quad \text{for every } u \in V.$$

we have

$$u = \sum_{j=1}^{n} c_j v_j, \qquad [u]_\beta = (c_1, \ldots, c_n)^{\mathsf T},$$

and

$$T(u) = \sum_{j=1}^{n} c_j T(v_j) = \sum_{i=1}^{m} \Big(\sum_{j=1}^{n} a_{ij} c_j\Big) w_i,$$

which means

$$[T(u)]_\gamma = A\,[u]_\beta \quad \text{with } A_{ij} = a_{ij}, \text{ i.e. } A = [T]_\beta^\gamma.$$
when the transformation matrix is from the basis $\beta$ onto itself, we may denote it by $[T]_\beta$. we have

$$[T(u)]_\beta = [T]_\beta\,[u]_\beta.$$

the relation between $[T]_\beta$ and $[T]_\gamma$: let $Q$ be the change-of-coordinate matrix that changes $\gamma$-coordinates into $\beta$-coordinates, so that

$$[u]_\beta = Q\,[u]_\gamma,$$

which means

$$[T(u)]_\gamma = Q^{-1}\,[T(u)]_\beta = Q^{-1}\,[T]_\beta\,[u]_\beta = Q^{-1}\,[T]_\beta\,Q\,[u]_\gamma,$$

then

$$[T]_\gamma = Q^{-1}\,[T]_\beta\,Q.$$
let $T\colon V \to V$ be a linear operator on a finite-dimensional vector space $V$, defined by a given formula, and let $\beta$ and $\gamma$ be 2 bases of $V$.
  • find $[T]_\beta$ and $[T]_\gamma$,
  • find $[T]_\beta^\gamma$ and $[T]_\gamma^\beta$.
solution steps:
  1. change-of-basis matrices ($Q$ and $Q^{-1}$): these are the tools for translating coordinates between the two bases.
  2. single-basis transformation matrices ($[T]_\beta$ and $[T]_\gamma$): these represent the transformation if you work exclusively in one basis.
  3. mixed-basis transformation matrices ($[T]_\beta^\gamma$ and $[T]_\gamma^\beta$): these represent $T$ when moving from one basis to another.
the solution:
  1. change-of-basis matrices. first, we find the matrix $Q$ which converts coordinates from basis $\gamma$ to basis $\beta$. its columns are the coordinate vectors of the vectors of $\gamma$ expressed in basis $\beta$. this gives the change-of-basis matrix from $\gamma$ to $\beta$: $Q = \big[\,[u_1]_\beta \;\; [u_2]_\beta \;\cdots\big]$ for $\gamma = \{u_1, u_2, \ldots\}$. the change-of-basis matrix from $\beta$ to $\gamma$ is its inverse: $Q^{-1}$.
  2. single-basis transformation matrices. next, we find $[T]_\beta$. we apply $T$ to each vector in $\beta$ and find the coordinates of the result relative to $\beta$. assembling the columns gives $[T]_\beta = \big[\,[T(v_1)]_\beta \;\; [T(v_2)]_\beta \;\cdots\big]$ for $\beta = \{v_1, v_2, \ldots\}$. now we find $[T]_\gamma$ using the change-of-basis formula $[T]_\gamma = Q^{-1}[T]_\beta Q$.
  3. mixed-basis transformation matrices. the matrix $[T]_\beta^\gamma$ takes a vector in $\beta$-coordinates, applies $T$, and gives the result in $\gamma$-coordinates. we find it by first transforming in $\beta$ ($[T]_\beta$), then translating the result to $\gamma$ ($Q^{-1}$), so $[T]_\beta^\gamma = Q^{-1}[T]_\beta$. similarly, $[T]_\gamma^\beta$ takes a vector in $\gamma$-coordinates, applies $T$, and gives the result in $\beta$-coordinates. we transform in $\gamma$ ($[T]_\gamma$), then translate to $\beta$ ($Q$), so $[T]_\gamma^\beta = Q[T]_\gamma$. a numerical sketch of all three steps follows below.
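since the exercise's concrete operator and bases are not reproduced above, the sketch below uses hypothetical stand-ins for $T$, $\beta$, and $\gamma$ on $\mathbb{R}^2$ (the matrix M and the helper coords are illustration only); it follows the three steps exactly and checks them against the direct column-by-column construction.

#+begin_src python
import numpy as np

M = np.array([[2.0, 1.0], [0.0, 3.0]])   # T in standard coordinates of R^2 (hypothetical)
beta  = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
gamma = [np.array([2.0, 1.0]), np.array([1.0, 1.0])]
B, G = np.column_stack(beta), np.column_stack(gamma)

def coords(basis_matrix, v):
    # coordinates of v relative to the basis whose vectors are the columns of basis_matrix
    return np.linalg.solve(basis_matrix, v)

# step 1: change-of-basis matrices.
# Q converts gamma-coordinates into beta-coordinates; its columns are the gamma vectors in beta-coordinates.
Q = np.column_stack([coords(B, u) for u in gamma])
Q_inv = np.linalg.inv(Q)   # converts beta-coordinates into gamma-coordinates

# step 2: single-basis transformation matrices.
T_beta  = np.column_stack([coords(B, M @ v) for v in beta])   # [T]_beta, column j is [T(v_j)]_beta
T_gamma = Q_inv @ T_beta @ Q                                  # [T]_gamma via the change-of-basis formula

# step 3: mixed-basis transformation matrices.
T_beta_gamma = Q_inv @ T_beta    # [T]_beta^gamma: transform in beta, then translate to gamma
T_gamma_beta = Q @ T_gamma       # [T]_gamma^beta: transform in gamma, then translate to beta

# sanity checks: the formulas agree with the direct column-by-column constructions
assert np.allclose(T_gamma, np.column_stack([coords(G, M @ u) for u in gamma]))
assert np.allclose(T_beta_gamma, np.column_stack([coords(G, M @ v) for v in beta]))
assert np.allclose(T_gamma_beta, np.column_stack([coords(B, M @ u) for u in gamma]))
print(T_beta, T_gamma, T_beta_gamma, T_gamma_beta, sep="\n\n")
#+end_src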
let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\beta$ and $\beta'$ be ordered bases for $V$. suppose that $Q$ is the change of coordinate matrix that changes $\beta'$-coordinates
into $\beta$-coordinates. then

$$[T]_{\beta'} = Q^{-1}\,[T]_\beta\,Q.$$
[cite:;taken from @algebra_insel_2019 theorem 2.23]
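a compact check of the cited theorem with hypothetical data (the operator M and the basis beta_prime are made up): $[T]_{\beta'}$ computed directly, column by column, agrees with $Q^{-1}\,[T]_\beta\,Q$.

#+begin_src python
import numpy as np

M = np.array([[0.0, -1.0], [1.0, 0.0]])   # T in standard coordinates of R^2 (hypothetical)
beta       = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
beta_prime = [np.array([1.0, 2.0]), np.array([1.0, 3.0])]
B, Bp = np.column_stack(beta), np.column_stack(beta_prime)

Q = np.linalg.solve(B, Bp)   # changes beta'-coordinates into beta-coordinates; columns are [x'_j]_beta
T_beta       = np.column_stack([np.linalg.solve(B,  M @ v) for v in beta])
T_beta_prime = np.column_stack([np.linalg.solve(Bp, M @ v) for v in beta_prime])
print(np.allclose(T_beta_prime, np.linalg.inv(Q) @ T_beta @ Q))   # True
#+end_src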