While using the matrix inverse (which we'll cover next) provides a neat theoretical way to express the solution to $Ax = b$ as $x = A^{-1}b$, it's often not the most computationally efficient or numerically stable way to actually find the solution $x$, especially for large systems. A more fundamental and often practical algorithm for solving systems of linear equations is Gaussian elimination. You might have encountered this method before, perhaps in a different context. Here, we'll review it as a systematic procedure for manipulating matrix equations.

The core idea behind Gaussian elimination is to transform the original system of equations into an equivalent system that is much easier to solve. "Equivalent" means the new system has the same solution set as the original one. We achieve this transformation by applying a sequence of elementary row operations to an augmented matrix.

### The Augmented Matrix

First, let's represent the system $Ax = b$ using an augmented matrix. This is simply the matrix $A$ with the vector $b$ appended as an extra column. We often draw a vertical line to separate them visually, though this is just a convention.

For a system like:

$$
\begin{alignedat}{4}
2x_1 & {}+{} & x_2 & {}-{} & x_3 & {}={} & 8 \\
-3x_1 & {}-{} & x_2 & {}+{} & 2x_3 & {}={} & -11 \\
-2x_1 & {}+{} & x_2 & {}+{} & 2x_3 & {}={} & -3
\end{alignedat}
$$

the corresponding matrix equation $Ax=b$ is:

$$
\begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}
$$

The augmented matrix for this system is:

$$
\left[ \begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array} \right]
$$

### Elementary Row Operations

Gaussian elimination proceeds by applying three types of elementary row operations to this augmented matrix:

1. **Swapping two rows:** interchanging row $i$ and row $j$ (corresponds to swapping two equations).
2. **Scaling a row:** multiplying all elements in row $i$ by a non-zero scalar $c$ (corresponds to multiplying an equation by $c$).
3. **Adding a multiple of one row to another:** adding $c$ times row $i$ to row $j$ (corresponds to adding a multiple of one equation to another).

Crucially, none of these operations changes the underlying solution set of the linear system.

### The Goal: Row Echelon Form

The objective of applying these operations is to transform the coefficient part (the left side) of the augmented matrix into an upper triangular form, also known as row echelon form. A matrix is in row echelon form if:

- All rows consisting entirely of zeros are at the bottom.
- The first non-zero entry (the "leading entry" or "pivot") of each non-zero row is strictly to the right of the leading entry of the row above it.

For our example augmented matrix, a possible row echelon form might look like this (the exact values depend on the operations chosen):

$$
\left[ \begin{array}{ccc|c} \blacksquare & * & * & * \\ 0 & \blacksquare & * & * \\ 0 & 0 & \blacksquare & * \end{array} \right]
$$

where $\blacksquare$ represents a non-zero pivot element and $*$ represents any value (including zero).

### The Process: Forward Elimination and Back Substitution

Gaussian elimination typically involves two phases:

1. **Forward elimination:** systematically use the row operations to introduce zeros below the pivot in each column, moving from left to right. This achieves the row echelon form.
2. **Back substitution:** once the matrix is in row echelon form, the corresponding system of equations is easy to solve. The last equation gives the value of the last variable directly.
This value can then be substituted back into the second-to-last equation to solve for the second-to-last variable, and so on, moving upwards.

Let's illustrate with our example:

$$
\left[ \begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array} \right]
$$

**Step 1:** Target zeros below the first pivot (the $2$ in row 1, column 1).

- Add $\frac{3}{2}$ times Row 1 to Row 2: $R_2 \leftarrow R_2 + \frac{3}{2}R_1$
- Add $1$ times Row 1 to Row 3: $R_3 \leftarrow R_3 + R_1$

$$
\left[ \begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & 1/2 & 1/2 & 1 \\ 0 & 2 & 1 & 5 \end{array} \right]
$$

**Step 2:** Target a zero below the second pivot (the $1/2$ in row 2, column 2).

- Add $-4$ times Row 2 to Row 3: $R_3 \leftarrow R_3 - 4R_2$

$$
\left[ \begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & 1/2 & 1/2 & 1 \\ 0 & 0 & -1 & 1 \end{array} \right]
$$

This matrix is now in row echelon form.

**Step 3:** Back substitution. Convert back to equations:

$$
\begin{alignedat}{4}
2x_1 & {}+{} & x_2 & {}-{} & x_3 & {}={} & 8 \\
& & \tfrac{1}{2}x_2 & {}+{} & \tfrac{1}{2}x_3 & {}={} & 1 \\
& & & & -x_3 & {}={} & 1
\end{alignedat}
$$

From the last equation: $-x_3 = 1 \implies x_3 = -1$.

Substitute $x_3=-1$ into the second equation: $\frac{1}{2}x_2 + \frac{1}{2}(-1) = 1 \implies \frac{1}{2}x_2 = \frac{3}{2} \implies x_2 = 3$.

Substitute $x_2=3$ and $x_3=-1$ into the first equation: $2x_1 + 3 - (-1) = 8 \implies 2x_1 + 4 = 8 \implies x_1 = 2$.

So, the solution is $x = \begin{bmatrix} 2 \\ 3 \\ -1 \end{bmatrix}$.

```dot
digraph GaussianElimination {
    rankdir=LR;
    node [shape=box, style=rounded, fontname="helvetica", fontsize=10, margin=0.2];
    edge [fontname="helvetica", fontsize=9];

    AugMatrix [label="Augmented Matrix\n[ A | b ]", fillcolor="#a5d8ff", style=filled];
    RowOps1 [label="Apply Row Ops\n(Zero out below Pivot 1)", fillcolor="#ffec99", style=filled];
    Intermediate1 [label="Intermediate Matrix\n(Zeros in Col 1)", fillcolor="#a5d8ff", style=filled];
    RowOps2 [label="Apply Row Ops\n(Zero out below Pivot 2)\n...", fillcolor="#ffec99", style=filled];
    EchelonForm [label="Row Echelon Form\n[ U | c ]", fillcolor="#96f2d7", style=filled];
    BackSub [label="Back Substitution", fillcolor="#fcc2d7", style=filled];
    Solution [label="Solution Vector\nx", fillcolor="#b2f2bb", style=filled];

    AugMatrix -> RowOps1;
    RowOps1 -> Intermediate1;
    Intermediate1 -> RowOps2;
    RowOps2 -> EchelonForm [label="Forward Elimination"];
    EchelonForm -> BackSub;
    BackSub -> Solution [label="Solve Ux=c"];
}
```

A simplified flowchart illustrating the main stages of Gaussian elimination: transforming the augmented matrix to row echelon form via row operations (forward elimination), followed by back substitution to find the solution vector.

### Relevance and Limitations

Gaussian elimination provides a concrete algorithm for solving $Ax=b$. Understanding this process is helpful because:

- **Foundation:** it's the basis for many practical algorithms used in numerical linear algebra libraries, although implementations often use variations (like LU decomposition, which we'll touch upon later) for better efficiency and stability.
- **Concept:** it reinforces the idea of transforming a problem into an equivalent, simpler form.
- **Analysis:** it helps in understanding concepts like matrix rank and the existence/uniqueness of solutions (e.g., a row of zeros in the coefficient part but a non-zero value on the right side indicates no solution).

While performing Gaussian elimination by hand is feasible for small systems, it becomes tedious and prone to numerical errors (especially those involving division by small numbers) for larger systems. In practice, we rely on optimized implementations available in libraries like NumPy and SciPy, which often use more advanced techniques derived from these principles.

In the next sections, we will explore the matrix inverse and how it provides an alternative, algebraic way to think about solving $Ax=b$, along with how to compute it using computational tools.
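To make the two phases concrete, here is a minimal NumPy sketch of the procedure applied to our example system. The function name `gaussian_elimination` is our own, and this is an illustration rather than production code: it adds only a simple row-swap (partial pivoting) step to avoid dividing by a zero or tiny pivot, whereas library routines include further safeguards.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b via forward elimination and back substitution.

    Illustrative sketch only: real library solvers use more robust
    pivoting strategies and factorization-based implementations.
    """
    # Build the augmented matrix [A | b] in floating point.
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)

    # Forward elimination: introduce zeros below each pivot, left to right.
    for k in range(n - 1):
        # Partial pivoting: swap in the row with the largest candidate pivot.
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            factor = M[i, k] / M[k, k]
            M[i, k:] -= factor * M[k, k:]  # R_i <- R_i - factor * R_k

    # Back substitution: solve the upper-triangular system bottom-up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:n]) / M[i, i]
    return x

A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8, -11, -3])
x = gaussian_elimination(A, b)
print(x)
```

Running this recovers the solution $x = (2, 3, -1)$ from the worked example, up to floating-point rounding. In practice you would simply call `np.linalg.solve(A, b)`, which delegates to an optimized LU-factorization routine built on the same ideas.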