2.2 Conditionality and independence
2.3 ... and Bayes' theorem
2.4 Probability distributions
2.4.2 Some common distributions
2.5 Bayesian inferences with probability
2.6 Monte Carlo generators
3 Statistics and expectations
3.2 What should we expect of our statistics?
3.3 Simple error analysis
3.3.1 Random or systematic?
3.3.3 Combining distributions
3.4 Some useful statistics, and their distributions
4 Correlation and association
4.2 Testing for correlation
4.2.1 Bayesian correlation-testing
4.2.2 The classical approach to correlation-testing
4.2.3 Correlation-testing: classical, non-parametric
4.2.4 Correlation-testing: Bayesian versus non-Bayesian tests
4.5 Principal component analysis
5.1 Methodology of classical hypothesis testing
5.2 Parametric tests: means and variances, t and F tests
5.2.1 The Behrens–Fisher test
5.2.2 Non-Gaussian parametric testing
5.2.3 Which model is better? The Bayes factor
5.3 Non-parametric tests: single samples
5.3.2 Kolmogorov–Smirnov one-sample test
5.3.3 One-sample runs test of randomness
5.4 Non-parametric tests: two independent samples
5.4.2 Chi-square two-sample (or k-sample) test
5.4.3 Wilcoxon–Mann–Whitney U test
5.4.4 Kolmogorov–Smirnov two-sample test
5.5 Summary, one- and two-sample non-parametric tests
6 Data modelling and parameter estimation: basics
6.1 The maximum-likelihood method
6.2 The method of least squares: regression analysis
6.3 The minimum chi-square method
6.4 Weighting combinations of data
6.5 Bayesian likelihood analysis
6.6 Bootstrap and jackknife
7 Data modelling and parameter estimation: advanced topics
7.1 Model choice and Bayesian evidence
7.2 Model simplicity and the Ockham factor
7.3 The integration problem
7.4 Pitfalls in model choice
7.5 The Akaike and Bayesian information criteria
7.6 Monte Carlo integration: doing the Bayesian integrals
7.7 The Metropolis–Hastings algorithm
7.8 Computation of the evidence by MCMC
7.9 Models of models, and the combination of data sets
7.10 Broadening the range of models, and weights
7.11 Press and Kochanek's method
8.2 Catalogues and selection effects
8.3.1 Luminosity functions via the Vmax method
8.3.2 Luminosity functions via maximum likelihood; the SOS method
8.3.3 Luminosity functions via source counts and redshift distributions
8.4 Tests on luminosity functions
8.4.2 Luminosity-function comparison
8.4.3 Correlation: multivariate luminosity functions
8.5.1 The normalized luminosity function
8.5.2 Modelling and parameter estimation
8.5.4 Testing for correlation or statistical independence
9 Sequential data – 1D statistics
9.1 Data transformations, Karhunen–Loève transform, and others
9.2.1 The fast Fourier transform
9.2.2 Statistical properties of Fourier transforms
9.3.3 An integrated approach
9.4.1 Redshifts by correlation
9.4.2 The coherence function
9.5 Unevenly sampled data
9.7 Detection difficulties: 1/f noise
10 Statistics of large-scale structure
10.1 Statistics on a spherical surface
10.2 Sky representation: projection and contouring
10.3 The sky distribution
10.4 Two-point angular correlation function
10.4.1 Estimators and errors
10.4.2 Integral constraint
10.4.3 Instrumental effects
10.5.1 Counts-in-cells moments
10.5.2 Measuring counts-in-cells
10.5.3 Instrumental effects
10.6 The angular power spectrum
10.6.2 Instrumental effects
10.7 Galaxy distribution statistics: interpretation
11 Epilogue: statistics and our Universe
11.2 The weak lensing universe
11.3 The cosmic microwave background universe
Appendix A: The literature
Appendix B: Statistical tables