Linear and Generalized Linear Models
Standard Errors
Linear Regression
GLM
Sampling Distributions
Find the variance of the estimate
Find the information matrix
Use for Inference
For simple linear regression, with \(\boldsymbol X_i = (1, x_i)^\mathrm{T}\), the residual variance estimate is
\[ \hat \sigma^2 = \frac{1}{n-2} \sum^n_{i=1} (Y_i-\boldsymbol X_i^\mathrm T\hat{\boldsymbol \beta})^2 \]
\[ SE(\hat\beta_0)=\sqrt{\frac{\sum^n_{i=1}x_i^2\hat\sigma^2}{n\sum^n_{i=1}(x_i-\bar x)^2}} \]
\[ SE(\hat\beta_1)=\sqrt\frac{\hat\sigma^2}{\sum^n_{i=1}(x_i-\bar x)^2} \]
\[ Var(\hat {\boldsymbol \beta}) = (\boldsymbol X ^\mathrm T\boldsymbol X)^{-1} \hat \sigma^2 \]
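As a check, the closed-form standard errors above should agree with the diagonal of the matrix form \((\boldsymbol X^\mathrm T \boldsymbol X)^{-1}\hat\sigma^2\). A minimal sketch with simulated (hypothetical) data:

```python
import numpy as np

# Verify that the closed-form SEs for simple linear regression match
# the matrix form (X^T X)^{-1} * sigma^2_hat. Data are simulated.
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # OLS / MLE estimate
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - 2)          # unbiased variance estimate

cov = np.linalg.inv(X.T @ X) * sigma2_hat     # Var(beta_hat)
se_matrix = np.sqrt(np.diag(cov))

# Closed-form SEs for intercept and slope
sxx = np.sum((x - x.mean()) ** 2)
se0 = np.sqrt(np.sum(x ** 2) * sigma2_hat / (n * sxx))
se1 = np.sqrt(sigma2_hat / sxx)

print(np.allclose(se_matrix, [se0, se1]))     # the two routes agree
```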
Let \(\hat{\boldsymbol \beta} = (\hat \beta_0, \hat \beta_1, \cdots, \hat \beta_p)^\mathrm{T}\) be the maximum likelihood estimator of the parameter vector \(\boldsymbol \beta = (\beta_0, \beta_1, \cdots, \beta_p)^\mathrm{T}\). The Fisher information matrix, evaluated at \(\hat{\boldsymbol \beta}\), is
\[ I(\hat{\boldsymbol \beta})=E\left[-\frac{\partial^2}{\partial \boldsymbol \beta\,\partial \boldsymbol \beta^\mathrm{T}}\log f(X;\boldsymbol \beta)\right]\bigg|_{\boldsymbol \beta = \hat{\boldsymbol \beta}} \]
\(I(\hat{\boldsymbol \beta})\) is a \((p+1)\times(p+1)\) matrix.
\[ \mathrm{se}(\hat \beta_j) = \sqrt{\left[I(\hat{\boldsymbol \beta})^{-1}\right]_{[j,j]}} \]
For GLMs, the Wald statistic is asymptotically standard normal:
\[ \frac{\hat\beta_j - \beta_j}{\mathrm{se}(\hat\beta_j)} \sim N(0,1) \]
For linear regression with normal errors, the statistic follows an exact \(t\) distribution with \(n-(p+1)\) degrees of freedom:
\[ \frac{\hat\beta_j-\beta_j}{\mathrm{se}(\hat\beta_j)} \sim t_{n-p-1} \]
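To make the information-matrix route concrete, here is a sketch of Wald inference for logistic regression on simulated (hypothetical) data: fit by Newton-Raphson, then read standard errors off the diagonal of the inverse information matrix.

```python
import numpy as np

# Wald inference for logistic regression (simulated data).
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
y = rng.binomial(1, p_true)

beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson iterations
    mu = 1 / (1 + np.exp(-X @ beta))         # fitted probabilities
    W = mu * (1 - mu)                        # Bernoulli variance weights
    info = X.T @ (W[:, None] * X)            # Fisher information I(beta)
    beta = beta + np.linalg.solve(info, X.T @ (y - mu))

se = np.sqrt(np.diag(np.linalg.inv(info)))   # se(beta_j) = sqrt([I^{-1}]_jj)
z = beta / se                                # Wald statistics, approx N(0,1)
print(beta, se, z)
```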
| Source | DF | SS | MS | F |
|---|---|---|---|---|
| Model | \(DFR=k-1\) | \(SSR\) | \(MSR=\frac{SSR}{DFR}\) | \(\hat F=\frac{MSR}{MSE}\) |
| Error | \(DFE=n-k\) | \(SSE\) | \(MSE=\frac{SSE}{DFE}\) | |
| Total | \(TDF=n-1\) | \(TSS=SSR+SSE\) | | |
\[ \hat F \sim F(DFR, DFE) \]
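The ANOVA table above can be filled in directly from a fitted model. A sketch on simulated (hypothetical) data with \(k\) parameters including the intercept:

```python
import numpy as np
from scipy import stats

# Overall F test for a linear model (simulated data).
rng = np.random.default_rng(2)
n, k = 60, 3                                  # k parameters incl. intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.8]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta_hat
sse = np.sum((y - fitted) ** 2)               # error sum of squares
ssr = np.sum((fitted - y.mean()) ** 2)        # model sum of squares
dfr, dfe = k - 1, n - k
f_stat = (ssr / dfr) / (sse / dfe)            # MSR / MSE
p_value = stats.f.sf(f_stat, dfr, dfe)        # upper tail of F(DFR, DFE)
print(f_stat, p_value)
```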
For nested models, the likelihood-ratio statistic is asymptotically chi-squared:
\[ \Lambda = 2\log\frac{L(\boldsymbol \beta_1)}{L(\boldsymbol \beta_0)} \sim \chi^2_\varphi \]
where \(\varphi\) is the difference in the number of parameters between the full and reduced models.
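A likelihood-ratio test can be sketched for nested Gaussian linear models on simulated (hypothetical) data, comparing a full model against an intercept-only null:

```python
import numpy as np
from scipy import stats

# Likelihood-ratio test: full model vs intercept-only null (simulated data).
rng = np.random.default_rng(3)
n = 80
x = rng.normal(size=n)
y = 0.3 + 0.9 * x + rng.normal(size=n)

def gaussian_loglik(y, fitted):
    # Profile log-likelihood with sigma^2 replaced by its MLE
    s2 = np.mean((y - fitted) ** 2)
    return -n / 2 * (np.log(2 * np.pi * s2) + 1)

X1 = np.column_stack([np.ones(n), x])
b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
ll1 = gaussian_loglik(y, X1 @ b1)               # full model
ll0 = gaussian_loglik(y, np.full(n, y.mean()))  # null: intercept only

lam = 2 * (ll1 - ll0)                           # 2 log likelihood ratio
phi = 1                                         # parameter-count difference
p_value = stats.chi2.sf(lam, df=phi)
print(lam, p_value)
```

Since the null model is nested in the full model, \(\Lambda \ge 0\) always holds.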