DOE Math

Understanding the matrix formulation of least squares regression can help you grasp the essence of DOE.

The equation of a straight line has the following well-known form:

[latex size=”1″]y = \beta_0 + \beta_1x[/latex]

Using this notation, the equation can be generalised to a P-th order polynomial:

[latex size=”1″]y = \beta_0x^0 + \beta_1x^1 + \beta_2x^2 +\dots+ \beta_Px^P[/latex]

If there are N independent observations then

[latex size=”1″]y_1 = \beta_0x_1^0 + \beta_1x_1^1 + \beta_2x_1^2 +\dots+ \beta_Px_1^P[/latex]

[latex size=”1″]y_2= \beta_0x_2^0 + \beta_1x_2^1 + \beta_2x_2^2 +\dots+ \beta_Px_2^P[/latex]

[latex]\vdots[/latex]

[latex size=”1″]y_N= \beta_0x_N^0 + \beta_1x_N^1 + \beta_2x_N^2 +\dots+ \beta_Px_N^P[/latex]

We have N simultaneous equations with (P+1) unknown parameters.  The equations are most conveniently expressed using a matrix notation.

The N observations of y can be represented as a column vector of length N, and the β-parameters by a column vector of length P+1.

[latex size=”1″]
\textbf{Y}=\begin{pmatrix}
y_1 \\
y_2 \\
\vdots \\
y_N
\end{pmatrix}
\quad
\textbf{B}=\begin{pmatrix}
\beta_0 \\
\beta_1 \\
\vdots \\
\beta_P
\end{pmatrix}[/latex]

Furthermore, an X matrix of dimensions N x (P+1) can be constructed:

[latex size=”1″]
\textbf{X}=\begin{pmatrix}
x_1^0 & x_1^1 & x_1^2 & \dots & x_1^P \\
x_2^0 & x_2^1 & x_2^2 & \dots & x_2^P \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_N^0 & x_N^1 & x_N^2 & \dots & x_N^P \\
\end{pmatrix}[/latex]
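As a concrete illustration of this construction (a sketch only; the x-values and polynomial order below are made up for the example), the X matrix can be built in a few lines of NumPy:

```python
import numpy as np

# Hypothetical x-values for N = 5 observations and a quadratic model (P = 2).
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
P = 2

# Column j of X holds x raised to the power j, for j = 0, 1, ..., P,
# giving a matrix of dimensions N x (P + 1).
X = np.vander(x, N=P + 1, increasing=True)
print(X)
# [[ 1.   -1.    1.  ]
#  [ 1.   -0.5   0.25]
#  [ 1.    0.    0.  ]
#  [ 1.    0.5   0.25]
#  [ 1.    1.    1.  ]]
```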

In matrix notation the equation now becomes:

[latex size=”1″]\textbf{Y} = \textbf{X}\textbf{B}[/latex]

The unknowns in this equation are the β-parameters that form the elements of the B vector.

The naïve way to solve the equation is the following:

[latex]\textbf{B} = \textbf{X}^\textbf{-1}\textbf{Y}[/latex]

where X⁻¹ denotes the inverse of the X matrix.

This is “naïve” because we can only take the inverse of a square matrix, and X is in general not square.  However, if X has dimensions n x m then its transpose (Xᵗ) has dimensions m x n, and the product XᵗX is a square matrix of dimensions m x m, which can be inverted provided the columns of X are linearly independent.  Hence, starting from the original equation and pre-multiplying both sides, we take the following steps:

[latex]\textbf{Y} = \textbf{X}\textbf{B}[/latex]

[latex]\textbf{X}^{t}\textbf{Y} = \textbf{X}^{t}\textbf{X}\textbf{B}[/latex]

[latex](\textbf{X}^{t}\textbf{X})^{-1}\textbf{X}^{t}\textbf{Y} = (\textbf{X}^{t}\textbf{X})^{-1}(\textbf{X}^{t}\textbf{X})\textbf{B}[/latex]

[latex]\textbf{B} = (\textbf{X}^{t}\textbf{X})^{-1}\textbf{X}^{t}\textbf{Y}[/latex]
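For completeness, here is a minimal numerical sketch of this solution (the observations y are invented for the example; in practice numpy.linalg.lstsq is preferred to forming the inverse explicitly, but the estimates agree):

```python
import numpy as np

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
X = np.vander(x, N=3, increasing=True)       # N x (P + 1) design matrix, P = 2
y = np.array([2.1, 1.2, 1.0, 1.3, 2.0])      # hypothetical observations

# B = (X^t X)^{-1} X^t Y -- the normal equations written out directly.
B = np.linalg.inv(X.T @ X) @ X.T @ y

# The numerically preferred route, which gives the same parameter estimates.
B_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(B)         # [beta_0, beta_1, beta_2]
print(B_lstsq)
```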

The above analysis was based on a single x-variable; however, the matrix formulation generalises to the case where multiple x-variables are included in the model.

In least squares regression, the x-variables are presumed to be controlled and all errors are assumed to be in the observations y.  Since B is a linear transformation of Y, its variance is:

[latex] \textbf{Var}(\textbf{B}) = (\textbf{X}^{t}\textbf{X})^{-1}\textbf{X}^{t}\,\textbf{Var}(\textbf{Y})\,\textbf{X}(\textbf{X}^{t}\textbf{X})^{-1}[/latex]

If the errors in the observations are independent with a common variance σ², this simplifies to:

[latex] \textbf{Var}(\textbf{B}) = \sigma^2(\textbf{X}^{t}\textbf{X})^{-1}[/latex]

If our goal is to produce estimates of the β-parameters with minimum variance, then we need to make the elements of the following matrix as small as possible:

[latex] (\textbf{X}^{t}\textbf{X})^{-1}[/latex]

Notice that this quantity is a function of two things:

  • The type of model that we wish to fit (i.e. the number of polynomial terms)
  • Our choice of x-values

Both of these are under our control and known before the experiment is performed!

This is the basis of design of experiments.
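To make this concrete, the sketch below (illustrative values only) compares the parameter variances implied by two hypothetical designs for the same quadratic model, before a single observation has been taken:

```python
import numpy as np

def parameter_variances(x, P=2):
    """Diagonal of (X^t X)^{-1} for a polynomial design: the variances of the
    beta estimates, up to the common factor sigma^2 of the observation errors."""
    X = np.vander(np.asarray(x, dtype=float), N=P + 1, increasing=True)
    return np.diag(np.linalg.inv(X.T @ X))

# Two hypothetical six-run designs:
clustered = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]     # runs bunched together
spread    = [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]   # runs pushed to the extremes

print(parameter_variances(clustered))
print(parameter_variances(spread))
# The spread-out design yields far smaller variances for the same model,
# even though no y-values have been measured yet.
```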
