
Most of the common statistical models (t-test, correlation, ANOVA, chi-square, etc.) are special cases of linear models, or very close approximations. This beautiful simplicity means that there is less to learn. In particular, it all comes down to \(y = a \cdot x + b\), which most students know from high school.

Unfortunately, stats intro courses are usually taught as if each test were an independent tool, needlessly making life more complicated for students and teachers alike. This needless complexity multiplies when students try to rote-learn the parametric assumptions underlying each test separately rather than deducing them from the linear model.

For this reason, I think that teaching linear models first and foremost, and then name-dropping the special cases along the way, makes for an excellent teaching strategy, emphasizing understanding over rote learning. Since linear models are the same across frequentist, Bayesian, and permutation-based inferences, I’d argue that it’s better to start with modeling than with p-values, type-1 errors, Bayes factors, or other inferences.

Concerning the teaching of “non-parametric” tests in intro courses, I think that we can justify lying-to-children and teach “non-parametric” tests as if they were merely ranked versions of the corresponding parametric tests. For the frequentist “non-parametric” tests considered here, this approach is highly accurate for N > 15. Indeed, the Bayesian equivalents of “non-parametric” tests implemented in JASP literally just do (latent) ranking, and that’s it. It is much better for students to think “ranks!” than to believe that you can magically throw away assumptions.

I hope that you will join in suggesting improvements, or submitting improvements yourself, in the GitHub repo for this page. There are links to lots of similar (though more scattered) material under sources and teaching materials. Use the menu to jump to your favourite section.
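The claim that a common test is literally \(y = a \cdot x + b\) can be checked numerically. The sketch below is my own illustration (in Python with NumPy/SciPy, not code from this page): an independent-samples t-test gives exactly the same p-value as a linear regression of \(y\) on a 0/1 group indicator \(x\), and the fitted slope \(a\) is exactly the difference in group means.

```python
# Sketch (illustrative, not from this page): the equal-variance two-sample
# t-test is the linear model y = a*x + b with x a 0/1 group dummy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y1 = rng.normal(0.0, 1.0, 30)   # group coded x = 0
y2 = rng.normal(0.5, 1.0, 30)   # group coded x = 1

# Classic independent t-test (equal variances assumed, the default)
t, p_ttest = stats.ttest_ind(y1, y2)

# The same model as a regression: y on the group indicator x
x = np.concatenate([np.zeros(30), np.ones(30)])
y = np.concatenate([y1, y2])
res = stats.linregress(x, y)

print(np.isclose(res.pvalue, p_ttest))                  # → True (identical p-values)
print(np.isclose(res.slope, y2.mean() - y1.mean()))     # → True (slope = mean difference)
```

The identity holds because both procedures fit the same two-parameter model and test the same null hypothesis \(a = 0\) with the same degrees of freedom.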

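The “think ranks!” point can also be demonstrated. In this sketch (again my own, with illustrative simulated data), a Mann–Whitney U test on skewed data gives nearly the same p-value as simply rank-transforming everything and running the ordinary parametric t-test — close, though not identical, as expected for N > 15:

```python
# Sketch (illustrative, not from this page): a "non-parametric" test is
# approximately its parametric counterpart run on the ranks of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40                                   # N > 15, where the approximation is good
y1 = rng.exponential(1.0, n)             # skewed data, group 0
y2 = rng.exponential(1.0, n) + 0.7       # shifted copy, group 1

# The usual "non-parametric" test
p_mw = stats.mannwhitneyu(y1, y2, alternative="two-sided").pvalue

# "Ranks!": rank all observations jointly, then run the parametric t-test
ranks = stats.rankdata(np.concatenate([y1, y2]))
p_ranked = stats.ttest_ind(ranks[:n], ranks[n:]).pvalue

print(round(p_mw, 4), round(p_ranked, 4))  # very similar p-values
```

This is the sense in which “non-parametric” tests are merely ranked versions of the parametric ones, rather than assumption-free magic.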