DOE Math

Understanding the matrix formulation of least squares regression is the key to grasping the essence of DOE (design of experiments).

The equation of a straight line has the following well-known form:

y = \beta_0 + \beta_1x

Using this notation the equation can be generalised to a Pth order polynomial:

y = \beta_0x^0 + \beta_1x^1 + \beta_2x^2 +\dots+ \beta_Px^P
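
As a toy illustration, the following Python snippet evaluates such a polynomial at a single x-value; the coefficient values are invented purely for the example:

# Toy illustration: evaluate y = beta_0*x^0 + beta_1*x^1 + beta_2*x^2
# for made-up coefficients (the numbers are invented for the example).
beta = [1.0, -2.0, 0.5]                       # beta_0, beta_1, beta_2 (P = 2)
x = 1.5

y = sum(b * x**j for j, b in enumerate(beta))
print(y)                                      # 1.0 - 3.0 + 1.125 = -0.875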

If there are N independent observations then

y_1 = \beta_0x_1^0 + \beta_1x_1^1 + \beta_2x_1^2 +\dots+ \beta_Px_1^P

y_2= \beta_0x_2^0 + \beta_1x_2^1 + \beta_2x_2^2 +\dots+ \beta_Px_2^P

\vdots

y_N= \beta_0x_N^0 + \beta_1x_N^1 + \beta_2x_N^2 +\dots+ \beta_Px_N^P

We have N simultaneous equations with (P+1) unknown parameters.  The equations are most conveniently expressed in matrix notation.

The N observations of y can be represented as a column vector of order N, and the β-parameters by a column vector of order P+1.

  \textbf{Y}=\begin{pmatrix}  y_1 \\  y_2 \\  \vdots \\  y_N  \end{pmatrix}  \quad  \textbf{B}=\begin{pmatrix}  \beta_0 \\  \beta_1 \\  \vdots \\  \beta_P  \end{pmatrix}

Furthermore an X matrix of dimensions N x (P+1) can be constructed:

  \textbf{X}=\begin{pmatrix}  x_1^0 & x_1^1 & x_1^2 & \dots & x_1^P \\  x_2^0 & x_2^1 & x_2^2 & \dots & x_2^P \\  \vdots & \vdots & \vdots & \ddots & \vdots \\  x_N^0 & x_N^1 & x_N^2 & \dots & x_N^P \\  \end{pmatrix}
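
As a concrete sketch, the X matrix for a polynomial model can be built in a few lines of Python with NumPy; the x-values below are made up purely for illustration:

import numpy as np

# Made-up x-values for N = 5 observations and a quadratic model (P = 2).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
P = 2

# Column j holds x^j, so the first column is all ones (x^0).
X = np.column_stack([x**j for j in range(P + 1)])

print(X.shape)   # (5, 3), i.e. N x (P+1)
print(X)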

In matrix notation the equation now becomes:

\textbf{Y} = \textbf{X}\textbf{B}

The unknowns in this equation are the β-parameters that form the elements of the vector B.

The naïve way to solve the equation is the following:

\textbf{B} = \textbf{X}^\textbf{-1}\textbf{Y}

where X^-1 denotes the inverse of the X matrix.

This is “naïve” because we can only take the inverse of a square matrix, and X has dimensions N x (P+1), which in general is not square.  However, if a matrix has dimensions n x m then its transpose (X^t) has dimensions m x n, and pre-multiplying the matrix by its transpose produces a square matrix X^t X of dimensions m x m, which can be inverted.  Hence, starting from the original equation Y = XB, we pre-multiply both sides first by X^t and then by (X^t X)^-1:

\textbf{Y} = \textbf{X}\textbf{B}

\textbf{X}^\textbf{t}\textbf{Y} = \textbf{X}^\textbf{t}\textbf{X}\textbf{B}

(\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}\textbf{X}^\textbf{t}\textbf{Y} = (\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}(\textbf{X}^\textbf{t}\textbf{X})\textbf{B}

 \textbf{B} = (\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}\textbf{X}^\textbf{t}\textbf{Y}
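
Continuing the sketch above (with an invented response vector Y), the final formula can be evaluated directly in NumPy and checked against the library's own least squares routine:

import numpy as np

# Same illustrative design matrix as before; the Y values are invented.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
X = np.column_stack([x**j for j in range(3)])        # quadratic model
Y = np.array([1.1, 1.8, 3.2, 5.1, 7.9])

# B = (X^t X)^-1 X^t Y, exactly as derived above.
B = np.linalg.inv(X.T @ X) @ X.T @ Y

# np.linalg.lstsq solves the same problem with a more stable algorithm.
B_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(B)         # estimated beta_0, beta_1, beta_2
print(B_lstsq)   # should agree with B to numerical precision

In practice one would let the solver do the work rather than form the inverse explicitly; the formula is shown because it is the quantity analysed below.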

The above analysis was based on a single x-variable; however, the matrix formulation generalises to the case where multiple x-variables are included in the model.

In least squares regression, the x-variables are presumed to be controlled and all errors are assumed to be in the observations y.  If the observations are independent with common variance \sigma^2, so that \textbf{Var}(\textbf{Y}) = \sigma^2\textbf{I}, then applying the rule for the variance of a linear transformation to \textbf{B} = (\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}\textbf{X}^\textbf{t}\textbf{Y} gives:

 \textbf{Var}(\textbf{B}) = (\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}\textbf{X}^\textbf{t}\textbf{Var}(\textbf{Y})\textbf{X}(\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1} = \sigma^2(\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}

If our goal is to produce estimates of the β-parameters with minimum variance then we need to make the elements of the following matrix (in particular its diagonal elements, which are proportional to the variances of the individual estimates) as small as possible:

 (\textbf{X}^\textbf{t}\textbf{X})^\textbf{-1}

Notice that this quantity is a function of two things:

  • The type of model that we wish to fit (i.e. the number of polynomial terms)
  • Our choice of x-values

Both of these are under our control and known before the experiment is performed!

This is the basis of design of experiments.
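
As a rough sketch of the idea, assuming a straight-line model and two invented five-run candidate designs on the interval [-1, 1], the matrix (X^t X)^-1 can be compared before any data are collected:

import numpy as np

# (X^t X)^-1 depends only on the chosen model and the chosen x-values,
# so candidate designs can be compared before running the experiment.
# Both designs below are invented for illustration (straight line, P = 1).
def design_matrix(x, P):
    return np.column_stack([x**j for j in range(P + 1)])

evenly_spaced  = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
pushed_to_ends = np.array([-1.0, -1.0, 0.0, 1.0, 1.0])

for name, x in [("evenly spaced", evenly_spaced), ("pushed to ends", pushed_to_ends)]:
    X = design_matrix(x, P=1)
    cov = np.linalg.inv(X.T @ X)     # proportional to Var(B)
    print(name, np.diag(cov))        # relative variances of beta_0, beta_1

In this toy comparison the design that pushes runs towards the ends of the interval gives a smaller variance for the slope estimate (0.25 versus 0.4, in units of σ²), which is exactly the kind of trade-off that formal experimental design methods quantify.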
