Mathematical Statistics
Author: Keith Knight, University of Toronto
This book has many worked examples; some help students understand the concepts, while others serve as counterexamples.
The end-of-chapter exercises are also well organized, and students preparing for graduate entrance exams may find them a useful reference.
[list]
[*]Provides the tools that allow an understanding of the underpinnings of statistical methods
[*]Encourages the use of statistical software, which widens the range of problems readers can consider
[*]Brings relevance to the subject - shows readers it has much to offer beyond optimality theory
[*]Focuses on inferential procedures within the framework of parametric models, but also views estimation from the nonparametric perspective
[/list]
Traditional texts in mathematical statistics can seem - to some readers - heavily weighted with the optimality theory, in its various flavors, developed in the 1940s and '50s, and not particularly relevant to statistical practice. Mathematical Statistics stands apart from these treatments. While mathematically rigorous, its focus is on providing a set of useful tools that allow students to understand the theoretical underpinnings of statistical methodology.
The author concentrates on inferential procedures within the framework of parametric models, but - acknowledging that models are often incorrectly specified - he also views estimation from a non-parametric perspective. Overall, Mathematical Statistics places greater emphasis on frequentist methodology than on Bayesian, but claims no particular superiority for that approach. It does emphasize, however, the utility of statistical and mathematical software packages, and includes several sections addressing computational issues.
The result reaches beyond "nice" mathematics to provide a balanced, practical text that brings life and relevance to a subject so often perceived as irrelevant and dry.
Contents
1 Introduction to Probability
1.1 Random experiments
1.2 Probability measures
1.3 Conditional probability and independence
1.4 Random variables
1.5 Transformations of random variables
1.6 Expected values
1.7 Problems and complements
2 Random vectors and joint distributions
2.1 Introduction
2.2 Discrete and continuous random vectors
2.3 Conditional distributions and expected values
2.4 Distribution theory for Normal samples
2.5 Poisson processes
2.6 Generating random variables
2.7 Problems and complements
3 Convergence of Random Variables
3.1 Introduction
3.2 Convergence in probability and distribution
3.3 Weak Law of Large Numbers
3.4 Proving convergence in distribution
3.5 Central Limit Theorems
3.6 Some applications
3.7 Convergence with probability 1
3.8 Problems and complements
4 Principles of Point Estimation
4.1 Introduction
4.2 Statistical models
4.3 Sufficiency
4.4 Point estimation
4.5 The substitution principle
4.6 Influence curves
4.7 Standard errors and their estimation
4.8 Asymptotic relative efficiency
4.9 The jackknife
4.10 Problems and complements
5 Likelihood-Based Estimation
5.1 Introduction
5.2 The likelihood function
5.3 The likelihood principle
5.4 Asymptotic theory for MLEs
5.5 Misspecified models
5.6 Non-parametric maximum likelihood estimation
5.7 Numerical computation of MLEs
5.8 Bayesian estimation
5.9 Problems and complements
6 Optimality in Estimation
6.1 Introduction
6.2 Decision theory
6.3 Minimum variance unbiased estimation
6.4 The Cramér-Rao lower bound
6.5 Asymptotic efficiency
6.6 Problems and complements
7 Interval Estimation and Hypothesis Testing
7.1 Confidence intervals and regions
7.2 Highest posterior density regions
7.3 Hypothesis testing
7.4 Likelihood ratio tests
7.5 Other issues
7.6 Problems and complements
8 Linear and Generalized Linear Models
8.1 Linear models
8.2 Estimation in linear models
8.3 Hypothesis testing in linear models
8.4 Non-normal errors
8.5 Generalized linear models
8.6 Quasi-Likelihood models
8.7 Problems and complements
9 Goodness-of-Fit
9.1 Introduction
9.2 Tests based on the Multinomial distribution
9.3 Smooth goodness-of-fit tests
9.4 Problems and complements
