When the regression errors are not i.i.d., OLS is no longer efficient, and we use GLS to estimate the parameters.

The Model

$$ y = X\beta + \epsilon, \; E[\epsilon|X]=0; \; Var(\epsilon|X) = \Sigma \equiv \sigma^2 \Omega $$

with $\dim(\Omega) = T\times T$, $\dim(X) = T \times K$, and $\dim(\beta) = K \times 1$.

Explanation: In the OLS case, one of the assumptions on the errors was that they were white noise innovations, i.e. $\epsilon \sim i.i.d.(0, \sigma_\epsilon^2)$ with no serial correlation between them (so the covariance matrix is diagonal, with every diagonal entry equal to $\sigma_\epsilon^2$). In the GLS case, the covariance matrix need not have this form: nonzero off-diagonal entries mean serial correlation (autocorrelation) exists, and unequal diagonal entries mean the errors are heteroscedastic.

Note: Applying OLS to a GLS problem is inefficient, but the estimator remains unbiased (i.e. correct on average); the usual OLS standard errors, however, are no longer valid.
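A small Monte Carlo sketch of this point (the AR(1)-style covariance, sample sizes, and parameter values below are illustrative assumptions, not from the text): both OLS and GLS average out to the true $\beta$, but the GLS estimates scatter less around it.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 50, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])  # intercept + one regressor
beta = np.array([1.0, 2.0])

# Illustrative non-diagonal covariance: Sigma_ij = rho^|i-j| (AR(1) correlation)
rho = 0.8
idx = np.arange(T)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
L = np.linalg.cholesky(Sigma)          # so L @ z has covariance Sigma
Sigma_inv = np.linalg.inv(Sigma)

ols_draws, gls_draws = [], []
for _ in range(2000):
    eps = L @ rng.normal(size=T)       # correlated errors, Var(eps) = Sigma
    y = X @ beta + eps
    ols_draws.append(np.linalg.solve(X.T @ X, X.T @ y))
    gls_draws.append(np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y))

ols_draws, gls_draws = np.array(ols_draws), np.array(gls_draws)
print("OLS mean:", ols_draws.mean(axis=0))        # close to beta -> unbiased
print("GLS mean:", gls_draws.mean(axis=0))        # also close to beta
print("OLS slope variance:", ols_draws[:, 1].var())
print("GLS slope variance:", gls_draws[:, 1].var())  # smaller -> more efficient
```

The GLS estimator here is $\hat\beta_{GLS} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$, which uses the known $\Sigma$; OLS ignores it and pays only in variance, not in bias.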

GLS distinguishes two cases:

Case 1: $\Sigma$ is known

If $\Sigma$ is known, both $\sigma^2$ and $\Omega$ are known. Since $\Omega$ is a symmetric positive definite covariance matrix, we can find its eigenvalue decomposition:

$$ \Omega = C\Lambda C', $$

where $C$ is the $T \times T$ orthogonal matrix of eigenvectors and $\Lambda$ is the $T \times T$ diagonal matrix of eigenvalues.
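The decomposition is easy to verify numerically. A minimal sketch (the AR(1)-style $\Omega$ is an illustrative assumption, and $P = \Lambda^{-1/2}C'$ is one standard whitening choice, shown here only as an example):

```python
import numpy as np

T = 5

# Illustrative symmetric positive definite Omega (AR(1) correlation pattern)
rho = 0.5
idx = np.arange(T)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])

# Eigenvalue decomposition Omega = C Lambda C'
eigvals, C = np.linalg.eigh(Omega)     # eigh: for symmetric matrices
Lambda = np.diag(eigvals)

assert np.allclose(C @ Lambda @ C.T, Omega)   # reconstruction holds
assert np.allclose(C @ C.T, np.eye(T))        # C is orthogonal

# One standard choice of transformation built from the decomposition
# (an assumption here, not necessarily the P defined in the text):
P = np.diag(eigvals ** -0.5) @ C.T
print(np.allclose(P @ Omega @ P.T, np.eye(T)))  # P whitens Omega
```

Because $\Omega$ is symmetric, `np.linalg.eigh` returns real eigenvalues and orthonormal eigenvectors, which is exactly the $C\Lambda C'$ structure used above.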

Define a rotation matrix $P$ such that