In the previous section we reduced the original matrix and had an additional `0`
column to address. We multiplied the original matrix by that third column and the result was zero. This simply
means the vector is in the null-space, but there's more than one point in the null-space, and here we're going to figure
out how to get at the rest of it. (We're using 3 dimensions, so it's convenient to think in terms of 3-space, but the
method works in n-space; it's just more difficult to conceptualize.)

```
[  2   2  -3 ]   [ -4 ]   [ 0 ]
[ -1   0   2 ] • [  1 ] = [ 0 ]
                 [ -2 ]
```
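As a quick numerical check (a NumPy sketch, not part of the original text), we can confirm that multiplying `A` by this column really gives zero:

```python
import numpy as np

# Reduced 2x3 matrix A and the candidate null-space vector from above.
A = np.array([[2, 2, -3],
              [-1, 0, 2]])
v = np.array([-4, 1, -2])

print(A @ v)  # → [0 0], so v is in the null-space of A
```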

Our inverse (`A^{-1}`) was:

```
[ 2  3 ]
[ 0  0 ]
[ 1  2 ]
```
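Since `A` here is 2×3, this `A^{-1}` is only a one-sided (right) inverse: `A • A^{-1}` gives the 2×2 identity, while `A^{-1} • A` does not. A small NumPy sketch to verify the right-inverse property:

```python
import numpy as np

A = np.array([[2, 2, -3],
              [-1, 0, 2]])
A_inv = np.array([[2, 3],   # the 3x2 "inverse" from the table above
                  [0, 0],
                  [1, 2]])

print(A @ A_inv)  # → the 2x2 identity matrix
```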

If we think about our original equation `A • x = b`, and the usual solution `(A^{-1} • A) • x = A^{-1} • b`, or just `x = A^{-1} • b`, we must remember that `(A^{-1} • A)` was not always the identity matrix, and therefore contained more information that we'd lose if we looked at `x = A^{-1} • b` only. To get access to this additional information it's necessary to bring the `(A^{-1} • A)` component across, and combine and reduce this to:

```
x = A^{-1} • b + (I - A^{-1} • A) • z
```

Here the `z` vector is completely arbitrary.
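To see that this formula really does solve the equation, here's a NumPy sketch (the particular numbers for `b` and `z` are arbitrary illustrative choices, not from the text) that builds `x = A^{-1} • b + (I - A^{-1} • A) • z` and checks that `A • x = b`:

```python
import numpy as np

A = np.array([[2, 2, -3],
              [-1, 0, 2]])
A_inv = np.array([[2, 3],
                  [0, 0],
                  [1, 2]])
b = np.array([5, 7])        # any right-hand side (arbitrary example values)
z = np.array([10, 20, 30])  # the completely arbitrary z vector

x = A_inv @ b + (np.eye(3) - A_inv @ A) @ z
print(A @ x)  # → [5. 7.], i.e. b, no matter what z is
```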

Multiply the "inverse" times the original to get `(A^{-1} • A)`.

```
[ 2  3 ]   [  2   2  -3 ]   [ 1  4  0 ]
[ 0  0 ] • [ -1   0   2 ] = [ 0  0  0 ]
[ 1  2 ]                    [ 0  2  1 ]
```

Subtract this from the identity matrix `I` to get `(I - A^{-1} • A)`.

```
[ 0  -4  0 ]
[ 0   1  0 ]
[ 0  -2  0 ]
```
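This subtraction is easy to reproduce in NumPy (a sketch, using the matrices above):

```python
import numpy as np

A = np.array([[2, 2, -3],
              [-1, 0, 2]])
A_inv = np.array([[2, 3],
                  [0, 0],
                  [1, 2]])

N = np.eye(3) - A_inv @ A
print(N)  # rows: [0 -4 0], [0 1 0], [0 -2 0]
```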

The next part requires an arbitrary vector `z`.

```
[ r ]
[ s ]
[ t ]
```

It's interesting to note that the `A` matrix times this `(I - A^{-1} • A)` matrix is zero.

```
[  2   2  -3 ]   [ 0  -4  0 ]   [ 0  0  0 ]
[ -1   0   2 ] • [ 0   1  0 ] = [ 0  0  0 ]
                 [ 0  -2  0 ]
```
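This is no accident: since `A • A^{-1} = I`, we get `A • (I - A^{-1} • A) = A - (A • A^{-1}) • A = A - A = 0`. A NumPy sketch of the same check:

```python
import numpy as np

A = np.array([[2, 2, -3],
              [-1, 0, 2]])
A_inv = np.array([[2, 3],
                  [0, 0],
                  [1, 2]])

Z = A @ (np.eye(3) - A_inv @ A)
print(Z)  # → the 2x3 zero matrix
```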

And therefore the product `(I - A^{-1} • A) • z` will be in the null-space of the original `A` matrix.

```
[ 0  -4  0 ]   [ r ]   [ -4s ]
[ 0   1  0 ] • [ s ] = [   s ]
[ 0  -2  0 ]   [ t ]   [ -2s ]
```
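A NumPy sketch (with arbitrary example values for `r`, `s`, and `t`) showing that only `s` survives the multiplication:

```python
import numpy as np

A_inv_A = np.array([[1, 4, 0],   # A^{-1} • A from earlier
                    [0, 0, 0],
                    [0, 2, 1]])
N = np.eye(3) - A_inv_A

r, s, t = 3.0, 5.0, 7.0          # arbitrary choices for z
z = np.array([r, s, t])
print(N @ z)  # → (-4s, s, -2s) = (-20, 5, -10); r and t have dropped out
```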

Let's multiply the original `A` matrix times this vector:

```
[  2   2  -3 ]   [ -4s ]   [ -8s +2s +6s ]   [ 0 ]
[ -1   0   2 ] • [   s ] = [      4s -4s ] = [ 0 ]
                 [ -2s ]
```

The general solution (`x_{gs}`) is a combination of the solution using `A^{-1} • b` and the arbitrariness contained in `(I - A^{-1} • A) • z`.

Using this value for the `b` vector:

```
[ a ]
[ b ]
```

We can multiply the terms to get the full solution for `x_{gs}`:

```
[ 2  3 ]   [ a ]   [ -4s ]   [ 2a +3b -4s ]
[ 0  0 ] • [ b ] + [   s ] = [          s ]
[ 1  2 ]           [ -2s ]   [  a +2b -2s ]
```

If we multiply the original `A` matrix times this `x_{gs}`, we get the `b` vector returned.

```
[  2   2  -3 ]   [ 2a +3b -4s ]   [ 4a +6b -8s +2s -3a -6b +6s ]   [ a ]
[ -1   0   2 ] • [          s ] = [    -2a -3b +4s +2a +4b -4s ] = [ b ]
                 [  a +2b -2s ]
```
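Putting it all together, here's a NumPy sketch that checks `A • x_{gs} = b` for several random choices of `a`, `b`, and `s` (the random sampling is an illustration, not part of the original text):

```python
import numpy as np

A = np.array([[2, 2, -3],
              [-1, 0, 2]])

rng = np.random.default_rng(0)
for _ in range(5):
    a, b, s = rng.standard_normal(3)
    # The general solution worked out above, for this a, b, and s.
    x_gs = np.array([2*a + 3*b - 4*s, s, a + 2*b - 2*s])
    assert np.allclose(A @ x_gs, [a, b])  # b comes back for every a, b, s
print("ok")
```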

We put no restrictions on the value of the `z` vector, and it could be anything. As it turns out (in this example) only the second value (the `s`) is used. Any value for `s` is legal because it drops out when we multiply by `A`.

In this 3-dimensional problem, the "answer" is a line. If we had kept all 3 equations, the "answer" would've been a point. If we had removed two equations from the original set, instead of just the one, the "answer" would've been a plane.