Improving your statistical inferences


Below are the top discussions from Reddit that mention this online Coursera course from Eindhoven University of Technology.

This course aims to help you to draw better statistical inferences from empirical research.

Likelihood Function · Bayesian Statistics · P-Value · Statistical Inference


Taught by
Daniel Lakens
Associate Professor
and 9 more instructors

Offered by
Eindhoven University of Technology

Reddit Posts and Comments

1 post • 43 mentions • top 26 shown below

r/statistics • post
99 points • smoochie100
I am finishing the incredibly helpful, free Coursera course 'Improving your statistical inferences' - perfect for learning applied statistics
r/AcademicPsychology • post
65 points • kvragu
Heads up, Daniel Lakens launched his new course 'Improving Your Statistical Questions' on Coursera yesterday.

Don't be sleeping on Daniel Lakens. The new course is freely available. Also check out his older course, 'Improving Your Statistical Inferences'; can personally recommend that one. Bore your friends with your flawless interpretation of p-values and join the cool kids on (open) science twitter.

His blog, The 20% Statistician, covers more specific topics such as why you shouldn't calculate post hoc power, why you should use Welch's instead of Student's t-test, how to think straight about p-values, and other things you were always afraid to ask about stats.

r/statistics • comment
6 points • ourannual

Can't recommend this coursera class enough for exactly the situation you're in:

r/AskAcademia • comment
15 points • needlzor

I sort of caught up with a mixture of some very good MOOCs:

And working through my own statistical problems by asking people on stats.stackexchange and other forums.

r/MachineLearning • comment
3 points • at_least_

It sounds like this could help you:

r/AcademicPsychology • comment
6 points • PandoraPanorama

I highly recommend Daniel Lakens' "Improving your statistical inferences" course. It's free. It doesn't deal so much with specific tests as with the principles that underlie all of them. It goes from traditional frequentist approaches, through likelihood methods, to the now very modern Bayesian approaches. You watch video lectures and do practice exercises (using R) alongside. From time to time, lessons are separated by tests that show you how well you're doing.

If you want to have an intuitive understanding of stats, then I would go for it. I have learnt a lot from it, and I am by no means a beginner. I now recommend it to all my PhD students.

r/statistics • comment
2 points • Whoopska

It would have helped me a lot when I started out. The homeworks are really straightforward, so I don't think it will push your understanding as much as a full-time course might, but I think the material is really well structured for achieving an understanding of how statistics works.

I don't think it goes into specific tests like Student's t-test or ANOVA, but I'm not done yet.

r/AskAcademia • comment
2 points • BlueSky1877

Is it this course?

r/psychologystudents • comment
5 points • overwatchacct

100% this class IMO --

Probably the best online stats class/stats class period I've taken, though it doesn't use MPlus.

r/AskStatistics • comment
2 points • Viriaro

There's Daniel Lakens' Coursera class, which does a good job going over the philosophical difference between the two schools of thought, but not much on the mathematical side.

r/AcademicPsychology • comment
9 points • andero

Fuck yeah:

These should be required for everyone (they're also easy):
Improving Your Statistical Inferences (Coursera)
Improving Your Statistical Questions (Coursera)

Then, you can get this book or download the PDF for free. There are also free videos that go with each chapter. This is like the bible of basic stats and will PROPERLY teach you general linear models (GLMs) and how to do them in R. You may have learned correlations, t-tests, and ANOVAs as if they were all different things: they're not. They're all GLMs and they're all fundamentally correlations.
That book has more advanced stuff after, but you don't necessarily need to learn it.

All that and you'll be set for undergrad. If you do grad school, you'll learn that (just like any other time you learn math-related stuff), nobody uses that stuff you learned. Fret not, though: learning GLMs is still the foundation! The next step is to learn multilevel-modelling, which is a more advanced form of GLM that takes account of "nested" data. The classic example is students nested in classes nested in schools nested in school-districts: the fact that a bunch of data-points (students) all share a common classroom is ignored in basic GLMs, but multilevel-modelling accounts for that. It's easy once you know how to do GLMs, and it's just a different line of R code.
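The claim that a t-test is just a GLM (a regression on a 0/1 group dummy) can be checked numerically. A minimal sketch in Python rather than the R the comment recommends, with made-up data; the group means and sizes are illustrative:

```python
import numpy as np
from scipy import stats

# Two made-up groups (illustrative data, not from any real study)
rng = np.random.default_rng(42)
control = rng.normal(0.0, 1.0, size=40)
treatment = rng.normal(0.5, 1.0, size=40)

# Classic two-sample t-test (Student's, equal variances assumed)
t_stat, p_ttest = stats.ttest_ind(control, treatment)

# The same comparison as a linear model: outcome ~ 0/1 group dummy
y = np.concatenate([control, treatment])
group = np.concatenate([np.zeros(40), np.ones(40)])
fit = stats.linregress(group, y)

# The slope is the difference in group means, and the p-values coincide
print(abs(p_ttest - fit.pvalue) < 1e-9)
```

The equivalence is exact: both tests compare the same two means with the same pooled variance on the same degrees of freedom.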

Also... learn R. You might start out uncomfortable with coding, but that's life: you gotta push through ignorance to learn. There are plenty of free introductory courses to learn R and this is one skill that will translate even if you pursue other things.

EDIT: If you bounce off the first book, check out Andy Field's "Discovering Statistics". It's the other main "bible" of stats praised by many. I've not used it since I went with the one I linked above (ISL).

r/statistics • comment
3 points • Zorander22

For an individual study, you wouldn't really care - all else being equal, the size of your effect must be larger for you to reject the null hypothesis with a smaller study.

However, publication bias makes underpowered studies a problem: smaller sample sizes mean more studies can be run, and by chance a higher number of those studies will have made a type I error. If you had all of the data available, this wouldn't be a problem, but because of the bias in what gets shared with others, a field with a lot of small-sample studies can end up publishing false positives at a high rate.

There is another problem you can encounter somewhat related to the situation you're describing. If you didn't quite get the N you wanted according to your power calculation, and you didn't quite get statistical significance, you might then decide to run a few more participants. This means you now have a flexible stopping rule, and that you are now capitalizing more on chance (as you would have stopped the study if you reached traditional significance). If you have access to peer-reviewed papers, you may want to check out False-Positive Psychology.
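The cost of that flexible stopping rule can be simulated directly. A minimal Python sketch under a true null effect; the starting n of 20, the 10-participant top-up, and the single extra look are all illustrative choices:

```python
import numpy as np
from scipy import stats

# Flexible stopping under a true null: test at n=20, and if the result
# is "not quite significant", add 10 more participants and test again.
rng = np.random.default_rng(1)
n_sims = 5000
false_positives = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, size=20)           # null hypothesis is true
    _, p = stats.ttest_1samp(data, 0.0)
    if p >= 0.05:                                   # didn't reach significance...
        extra = rng.normal(0.0, 1.0, size=10)       # ...so collect 10 more
        _, p = stats.ttest_1samp(np.concatenate([data, extra]), 0.0)
    false_positives += p < 0.05

rate = false_positives / n_sims
print(rate)  # noticeably above the nominal 0.05
```

Even one unplanned extra look inflates the false-positive rate above the nominal alpha, which is exactly the "capitalizing on chance" the comment describes.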

This is getting further afield from your question, but I'll also mention that traditional hypothesis testing only considers the strength of the evidence assuming the null hypothesis is true - it doesn't compare this to the strength of the evidence you'd expect if the alternative is true. This means that there are some cases where a traditionally significant p-value (.049) is actually more consistent with the null hypothesis than with a particular alternative hypothesis. For more information, Daniel Lakens has a great Coursera course - so even though the power might not matter if you achieved statistical significance, the strength of the evidence both against the null and for a particular alternative may matter.
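The point about a just-significant p-value can be made concrete by comparing densities of the test statistic. A minimal Python sketch; the expected z-values chosen for the two alternatives are illustrative, not from any particular study:

```python
from scipy.stats import norm

# Observed z just past the two-sided .05 cutoff (two-sided p ~ .049)
z_obs = 1.97

# Density of this observation under the null (z ~ N(0, 1))
like_null = norm.pdf(z_obs, loc=0.0)

# Under a very high-powered alternative (expected z = 4.5, illustrative),
# a barely significant result is LESS likely than under the null
like_alt = norm.pdf(z_obs, loc=4.5)

# Under a modestly powered alternative (expected z = 2.8, roughly 80% power),
# the same result favors the alternative
like_alt_low = norm.pdf(z_obs, loc=2.8)

print(like_null > like_alt)      # "significant", yet more consistent with H0
print(like_alt_low > like_null)  # here the same p favors the alternative
```

This is the comparison traditional hypothesis testing skips: the same p = .049 can point toward or away from the null depending on which alternative you test it against.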

r/statistics • comment
1 point • doughfacedhomunculus

I’m not aware of a single course/book that meets all your needs, but let me try to tackle them point by point.

For 1, 3, 5, and 7, I would strongly recommend “Improving your Statistical Inferences” by Daniel Lakens. It’s extremely accessible, presented by video on Coursera, totally free, and covers lots of other really useful material such as likelihood-based inference, Bayes, effect sizes, and the importance of replication for testing a hypothesis. Very little math (some R); more conceptual, but presented in an accessible way.

For 2, 3, and 6 - these seem to be more analysis-specific questions, for which you might need a general textbook explaining different types of tests. Any intro stats textbook (the non-math-heavy ones used in my field are typically “Discovering Statistics with SPSS/R” by Andy Field) will generally start to answer 2 and 6 in particular. Typically these types of books will give you some discrete decision-making chart (e.g. if this type of data and this type of experimental manipulation, then this test).

Let me give a little hint here though - I think it might be good to start learning generalised linear modelling strategies early, as these give you a lot of flexibility in analysing almost anything. This is what Statistical Rethinking is really good for (among many other things), but there’s plenty of other resources available.

r/worldnews • comment
1 point • FireZeLazer

>11th out of 195 countries for highest death rate

There is your per capita statistic.

Lol I spent my summer doing an online Daniël Lakens course, learning how to do Bayesian analysis and likelihood functions in R. But sure, I don't understand statistics :) here it is if you're interested

r/statistics • comment
1 point • PiotrekAG

I highly recommend this course: Improving your statistical inferences. I haven't done the last assignment but found the course very helpful in terms of thinking about statistics.

r/slatestarcodex • comment
3 points • Verithix

Indeed. Lakens is generally pretty great. He's not just some random psychologist making criticisms like these; he specifically focuses on methods and inference, and even has a course online (which I recommend):

r/AcademicPsychology • comment
2 points • MrMeloMan

You might want to check out this course:

They explain sample size estimation and stuff like that.

But at the end of the day, only you decide how many participants you will collect. Underpowered studies with small samples are not useless; they can be included in a meta-analysis, for example.

r/AskStatistics • comment
2 points • StephenSRMMartin

Although he and I have disagreements about statistical philosophy and practice, Daniel Lakens has a great MOOC for basic stats methods.

r/statistics • comment
3 points • _Patchouli

Here you go! You might find his blog post interesting! Also Daniel made a MOOC. Both include R code that you can play around with. Let me know what you think.


+ his blog post on observed power

r/AcademicPsychology • comment
1 point • The-Credible-Hulk79


I found the above course pretty useful.

r/AcademicPsychology • comment
1 point • Flemon45

I've heard good things about Daniel Lakens' online course "Improving your statistical inferences". It's free to sign up:


It's more about understanding how we draw conclusions from our statistics than (e.g.) how to run a particular test, but I think that level of understanding is often missing from undergraduate courses.




r/datascience • comment
1 point • reddismycolor

Same boat. I took that exact discrete mathematics class you are talking about. But yeah my stats is complete shit too.


I just started this class and it has great reviews, so hopefully it's good:

r/AskScienceDiscussion • comment
1 point • Archy99

There is no way of knowing without reading the paper in question, and related papers in the field, to understand all of the sources of methodological bias and how the authors have accounted for them in their methodology. Impact factors, institutional reputation, etc. are all poor predictors of quality.

Some fields, especially heavily mathematical fields require a lot of effort to understand (I'm not the strongest in math so I'd have to play around with the mathematical models myself for weeks to understand), but medical science, psychology, biology are not that hard to read if you are intelligent and willing to read (literally) thousands of studies and use Wikipedia as a glossary. You will soon (qualitatively) understand which studies have the highest quality methodology.

I also recommend Daniel Lakens' (free) course "Improving your statistical inferences". This will cover most of what you need to know to understand the typical statistical analysis used in most papers, and you'll learn to spot some common errors that are found even in peer-reviewed papers.

r/AcademicPsychology • comment
1 point • oredna

You don't need to wait, necessarily. You can start with these two free online courses:

My other thoughts are here. You'll get taught stats, but you'd do well to learn R in the meantime. There are free coursera courses and the Johns Hopkins Data Science one is good for the basics. That's what most students struggle with anyway.

Ultimately, the stats you learn are going to be extremely basic unless you take it upon yourself to learn more than the basic multilevel modelling everyone uses for regression. TBH you'll be way ahead if you understand even basic stats. Most psych professors cannot even correctly define what a p-value is so the competition isn't very stiff on that front.

r/datascience • comment
1 point • AspiringGuru

I did some courses with this instructor, found her courses quite good.

Just saw this course; haven't done it, keen to hear from others who have.

I find I need to brush up regularly, bad habits creep in too easily.

r/slatestarcodex • comment
1 point • 4QHURikzXS

The gold standard online course in this area is probably MIT's Statistics and Data Science MicroMasters. I went to a top university, and from my perspective, most online courses are somewhat dumbed-down compared to the brick-and-mortar versions, even when top universities are offering them. This MIT MicroMasters is the only online course I've seen which isn't like that. I've only completed the first course so far, in probability, but it was hardcore. It's the only online course I ever saw where the forums are full of people complaining about how hard it is. It is also a steal relative to a real master's, at about $1200 for four online courses. If you know calculus and are comfortable with proofs, and you want to invest maybe a year or two part time to gain an extremely good foundation in stats and machine learning, and have something reasonably impressive you can put on your resume, this is what I would recommend.

This course was OK, but although it's not obvious from the description, it actually does assume a stats background I didn't have. I still got a decent amount out of it, but honestly a lot of the stuff in the course seems to me like patching up a fundamentally broken paradigm of null hypothesis testing. (On the other hand, if what you care about is evaluating someone else's data analysis, as opposed to doing your own, maybe that's what you want. In fact it might be the best course available for that specific purpose. This vid on p-curve analysis was pure gold. Even for people who know stats, I'd recommend watching it to learn about p-curve analysis; it's an extremely clever technique for figuring out during a meta-analysis whether a collection of results is spurious.)

This course was poorly explained in my opinion and turned me off of any more JHU data science courses. There are some other Coursera courses on stats which look like they could be good, but I can't comment from personal experience.