Linear and Generalized Linear Models
Standard Errors
Linear Regression
GLM
Sampling Distributions
Find the variance of the estimate
Find the information matrix
Use for Inference
For simple linear regression with \(\boldsymbol X_i = (1, x_i)^\mathrm T\), the error variance is estimated with \(n-2\) degrees of freedom:
\[ \hat \sigma^2 = \frac{1}{n-2} \sum^n_{i=1} (Y_i-\boldsymbol X_i^\mathrm T\hat{\boldsymbol \beta})^2 \]
\[ SE(\hat\beta_0)=\sqrt{\frac{\sum^n_{i=1}x_i^2\hat\sigma^2}{n\sum^n_{i=1}(x_i-\bar x)^2}} \]
\[ SE(\hat\beta_1)=\sqrt\frac{\hat\sigma^2}{\sum^n_{i=1}(x_i-\bar x)^2} \]
In matrix form, the estimated covariance matrix of \(\hat{\boldsymbol \beta}\) is
\[ \widehat{\mathrm{Var}}(\hat {\boldsymbol \beta}) = (\boldsymbol X ^\mathrm T\boldsymbol X)^{-1} \hat \sigma^2 \]
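A minimal sketch of these formulas, assuming simulated data; the variable names (`x`, `y`, `sigma2_hat`, etc.) and the simulated coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data for a simple linear regression y = b0 + b1*x + e
n = 50
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.5, size=n)

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), x])

# OLS estimate: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Residual variance estimate with n - 2 degrees of freedom (two parameters)
resid = y - X @ beta_hat
sigma2_hat = np.sum(resid**2) / (n - 2)

# Estimated covariance matrix of beta_hat: (X^T X)^{-1} * sigma2_hat
cov_beta = np.linalg.inv(X.T @ X) * sigma2_hat
se_beta0, se_beta1 = np.sqrt(np.diag(cov_beta))

# The closed-form simple-regression expressions give the same numbers
Sxx = np.sum((x - x.mean())**2)
se_beta0_closed = np.sqrt(np.sum(x**2) * sigma2_hat / (n * Sxx))
se_beta1_closed = np.sqrt(sigma2_hat / Sxx)

print(se_beta0, se_beta0_closed)  # agree
print(se_beta1, se_beta1_closed)  # agree
```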
Let \(X_1,\ldots,X_n\) be a random sample from a distribution with parameter \(\theta\), and let \(\hat \theta\) be the maximum likelihood estimator (MLE) of \(\theta\). As \(n\rightarrow\infty\), \(\hat \theta\) is approximately normally distributed with mean \(\theta\) and variance \(1/\{nI(\theta)\}\), where
\[ I(\theta)=E\left[-\frac{\partial^2}{\partial\theta^2}\log\{f(X;\theta)\}\right] \]
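A small sketch of this result for one concrete case; the choice of an Exponential(rate \(\theta\)) model and all names below are assumptions for illustration. For that model \(\log f(x;\theta) = \log\theta - \theta x\), so \(I(\theta) = 1/\theta^2\) and the asymptotic variance of the MLE is \(\theta^2/n\), which a Monte Carlo check reproduces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential(rate = theta): log f(x; theta) = log(theta) - theta * x
# d^2/dtheta^2 log f = -1/theta^2, so I(theta) = 1/theta^2
theta = 2.0
n = 200
I_theta = 1.0 / theta**2

# Monte Carlo check: the MLE theta_hat = 1/xbar should be approximately
# N(theta, 1/(n * I(theta))) for large n
reps = 5000
theta_hats = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0 / theta, size=n)
    theta_hats[r] = 1.0 / x.mean()

print(theta_hats.var())      # empirical variance of the MLE
print(1.0 / (n * I_theta))   # asymptotic variance theta^2 / n
```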
For a GLM, the large-sample (Wald) result refers the standardized coefficient to the standard normal distribution:
\[ \frac{\hat\beta_j - \beta_j}{\mathrm{se}(\hat\beta_j)} \sim N(0,1) \]
For the linear model, where \(\hat\sigma^2\) replaces \(\sigma^2\), the exact finite-sample distribution is
\[ \frac{\hat\beta_j-\beta_j}{\mathrm{se}(\hat\beta_j)} \sim t_{n-p^\prime} \]
where \(p^\prime\) is the number of estimated regression coefficients.
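A sketch of how these sampling distributions are used for a confidence interval and a test; the numeric values are placeholders standing in for `beta_hat` and the standard errors from the earlier sketch:

```python
from scipy import stats

# Illustrative values (in practice, take beta_hat and se from the fitted model)
n, p_prime = 50, 2              # sample size and number of estimated coefficients
beta1_hat, se_beta1 = 0.48, 0.07

# t-based 95% confidence interval for beta_1 in a linear model
t_crit = stats.t.ppf(0.975, df=n - p_prime)
ci = (beta1_hat - t_crit * se_beta1, beta1_hat + t_crit * se_beta1)

# Two-sided test of H0: beta_1 = 0
t_stat = beta1_hat / se_beta1
p_value = 2 * stats.t.sf(abs(t_stat), df=n - p_prime)

# For a GLM, the same statistic is referred to N(0,1) (Wald test)
z_p_value = 2 * stats.norm.sf(abs(t_stat))

print(ci, p_value, z_p_value)
```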