Improving your statistical inferences


Below are the top discussions from Reddit that mention this online Coursera course from Eindhoven University of Technology.

This course aims to help you to draw better statistical inferences from empirical research.

Likelihood Function Bayesian Statistics P-Value Statistical Inference

Next cohort starts July 20. Accessible for free. Completion certificates are offered.


Taught by
Daniel Lakens
Associate Professor

Offered by
Eindhoven University of Technology

Reddit Posts and Comments

2 posts • 34 mentions • top 28 shown below

r/statistics • post
99 points • smoochie100
I am finishing the incredibly helpful, free coursera course 'Improving your statistical inferences' - perfect for learning applied statistics
r/AcademicPsychology • post
65 points • kvragu
Heads up, Daniel Lakens launched his new course 'Improving Your Statistical Questions' on Coursera yesterday.

Don't be sleeping on Daniel Lakens. The new course is freely available. Also check out his older course, 'Improving Your Statistical Inferences'; I can personally recommend that one. Bore your friends with your flawless interpretation of p-values and join the cool kids on (open) science twitter.

His blog, The 20% Statistician, has more specific topics, such as why you shouldn't calculate post hoc power, why you should use Welch's instead of Student's t-test, how to think straight about p-values, and other things you were always afraid to ask about stats.
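
The Welch's-vs-Student's recommendation from that blog is easy to see in action. A minimal sketch (Python with SciPy standing in for the course's R; the simulated data are illustrative): with unequal variances and unequal group sizes, the two tests give different answers, and Welch's is the one that does not assume equal variances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Two groups with equal means but unequal variances and sample sizes
a = rng.normal(loc=0.0, scale=1.0, size=30)
b = rng.normal(loc=0.0, scale=3.0, size=90)

# Student's t-test assumes equal variances (equal_var=True, the SciPy default)
t_student, p_student = stats.ttest_ind(a, b, equal_var=True)
# Welch's t-test drops that assumption (equal_var=False)
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

print(f"Student: t={t_student:.2f}, p={p_student:.3f}")
print(f"Welch:   t={t_welch:.2f}, p={p_welch:.3f}")
```

When the larger group also has the larger variance (as here), Student's test tends to be too conservative; flip the pattern and it becomes too liberal, which is the usual argument for defaulting to Welch's.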

r/statistics • comment
6 points • ourannual

Can't recommend this coursera class enough for exactly the situation you're in:

r/AskAcademia • comment
15 points • needlzor

I sort of caught up with a mixture of some very good MOOCs:

And working through my own statistical problems by asking people on stats.stackexchange and other forums.

r/MachineLearning • comment
3 points • at_least_

It sounds like this could help you:

r/AcademicPsychology • comment
6 points • PandoraPanorama

I highly recommend Daniel Lakens' "Improving your statistical inferences" course. It's free. It does not deal so much with specific tests as with the principles that underlie all of them. It goes from traditional frequentist approaches, through likelihood methods, to the now very popular Bayesian approaches. You listen to video lectures and do practical exercises (using R) alongside. From time to time, lessons are separated by tests that show you how well you're doing.

If you want to have an intuitive understanding of stats, then I would go for it. I have learnt a lot from it, and I am by no means a beginner. I now recommend it to all my PhD students.

r/psychologystudents • comment
5 points • overwatchacct

100% this class IMO --

Probably the best online stats class (or stats class, period) I've taken, though it doesn't use MPlus.

r/AcademicPsychology • comment
5 points • ourannual

If you could provide a little bit of information about what your area is (e.g. vision, memory) and what methods you expect to be using (strictly computer-based behavioral studies, neuroimaging, computational modeling?) that can help people provide a better answer.

My research is skewed toward neuroimaging and web-based behavioral experiments but, as you anticipated, I've found programming to be the most valuable skill and it can be hard to find free time to pick up these skills on the side during the craziness of the semester. Python is slowly eclipsing every other tool I've used in the past for data analysis and visualization (matlab, R, excel, SPSS, etc.). This is an amazing resource for learning how to manipulate and analyze data in Python:

A lot of psychology programs have really shitty statistics courses that everyone has to take - even the ones at really good schools. I mainly mean that these classes are more geared toward teaching you how to get results from a certain software program. This is a good online class for padding your theoretical understanding of statistics and hypothesis testing, which could be lacking in your required course sequence:

Also, read papers from the lab you'll be working in, and read some of the papers they commonly cite. Start getting a feel for what's going on in your area of research right now.

My final piece of advice is to tell your advisor as soon as possible that you explicitly plan to do industry research. It will help them understand how best to advise you. Also, in case you land on an advisor who only cares about advising students who want to stay in academia (which is still somewhat common), you'll figure it out sooner than later. I'm speaking based on the experiences of multiple peers, but this is different at every institution and department.

r/AskAcademia • comment
2 points • BlueSky1877

Is it this course?

r/statistics • comment
2 points • Whoopska

This course would have helped me a lot when I started out. The homeworks are really straightforward, so I don't think it will push your understanding as much as a full-time course might, but I think the material is really well structured for building an understanding of how statistics works.

I don't think it goes into specific tests like Student's t-test or ANOVA, but I'm not done yet.

r/metaresearch • post
1 point • serghiou
Improving your statistical inferences (Coursera)
r/statistics • comment
1 point • PiotrekAG

I highly recommend this course: Improving your statistical inferences. I haven't done the last assignment but found the course very helpful in terms of thinking about statistics.

r/statistics • comment
1 points • doughfacedhomunculus

I’m not aware of a single course/book that meets all your needs, but let me try to tackle them point by point.

For 1, 3, 5, and 7, I would strongly recommend “Improving your Statistical Inferences” by Daniel Lakens. It’s extremely accessible, presented by video on Coursera, totally free, and covers lots of other really useful material such as likelihood-based inference, Bayes, effect sizes, and the importance of replication in testing a hypothesis. Very little math (some R); it’s more conceptual, but presented in an accessible way.

For 2, 3, and 6 - these seem to be more analysis-specific questions for which you might need a general textbook explaining different types of tests. Any intro stats textbook (the non-math-heavy ones used in my field are typically “Discovering Statistics with SPSS/R” by Andy Field) will generally start to answer 2 and 6 in particular. Typically these types of books will give you some discrete decision-making chart (e.g. if this type of data and this type of experimental manipulation, then this test).

Let me give a little hint here though - I think it might be good to start learning generalised linear modelling strategies early, as these give you a lot of flexibility for analysing almost anything. This is what Statistical Rethinking is really good for (among many other things), but there’s plenty of other resources available.

r/statistics • comment
3 points • Zorander22

For an individual study, you wouldn't really care - all else being equal, the size of your effect must be larger for you to reject the null hypothesis with a smaller study.

However, underpowered studies generally have smaller sample sizes, which means more studies can be run, and so, by chance, a higher number of those studies will have made a type I error. If you had all of the data available, this wouldn't be a problem, but because of publication bias in what gets shared with others, a field with a lot of small-sample studies can end up publishing false positives at a high rate.

There is another problem you can encounter somewhat related to the situation you're describing. If you didn't quite get the N you wanted according to your power calculation, and you didn't quite get statistical significance, you might then decide to run a few more participants. This means you now have a flexible stopping rule, and that you are now capitalizing more on chance (as you would have stopped the study if you reached traditional significance). If you have access to peer-reviewed papers, you may want to check out False-Positive Psychology.
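
The flexible-stopping problem described above can be checked by simulation. A minimal sketch (Python with SciPy; the group sizes, the 10 extra participants, and the number of simulations are arbitrary illustrative choices): under a true null, a single fixed-n test holds the false-positive rate near .05, while peeking once and topping up the sample pushes it above nominal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ALPHA = 0.05

def one_study(optional_stopping):
    # Both groups drawn from the same distribution: the null is true.
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    if stats.ttest_ind(a, b).pvalue < ALPHA:
        return True
    if optional_stopping:
        # Not significant? "Run a few more participants" and test again.
        a = np.concatenate([a, rng.normal(size=10)])
        b = np.concatenate([b, rng.normal(size=10)])
        return stats.ttest_ind(a, b).pvalue < ALPHA
    return False

n_sim = 4000
fixed = sum(one_study(False) for _ in range(n_sim)) / n_sim
flexible = sum(one_study(True) for _ in range(n_sim)) / n_sim
print(f"Fixed n:           false-positive rate ~ {fixed:.3f}")
print(f"Flexible stopping: false-positive rate ~ {flexible:.3f}")
```

The flexible rule's rate exceeds .05 because the second look capitalizes on chance; proper sequential designs correct for this with adjusted alpha levels.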

This is getting further afield from your question, but I'll also mention that traditional hypothesis testing only considers the strength of the evidence assuming the null hypothesis is true - it doesn't compare this to the strength of the evidence you'd expect if the alternative is true. This means that there are some cases where a traditionally significant p-value (.049) is actually more consistent with the null hypothesis than with a particular alternative hypothesis. For more information, Daniel Lakens has a great Coursera course - so even though the power might not matter if you achieved statistical significance, the strength of the evidence both against the null and for a particular alternative may matter.
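
The point about p = .049 sometimes being more consistent with the null can be made concrete with a likelihood ratio. A minimal sketch (Python with SciPy; the z-test framing and the specific alternative delta = 4 are illustrative assumptions, not from the comment): against a very well-powered alternative, a z-statistic sitting right at the .05 boundary is slightly more probable under H0.

```python
from scipy.stats import norm

z_obs = 1.96    # z right at the two-sided p = .05 boundary
delta = 4.0     # noncentrality of one specific, very well-powered alternative

# Power of the two-sided z-test against this particular alternative
power = norm.sf(norm.isf(0.025) - delta)

# Likelihood of the observed z under H0 vs. under this H1
lr = norm.pdf(z_obs) / norm.pdf(z_obs - delta)

print(f"Power against delta={delta}: {power:.3f}")
print(f"Likelihood ratio H0/H1 at z={z_obs}: {lr:.2f}")  # > 1 favors H0
```

With a more modest alternative (say delta = 2.8, roughly 80% power), the same z favors the alternative instead, which is why this happens only "in some cases."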

r/slatestarcodex • comment
3 points • Verithix

Indeed. Lakens is generally pretty great. He's not just some random psychologist making criticisms like these; he specifically focuses on methods and inference, and even has a course online (which I recommend):

r/worldnews • comment
1 point • FireZeLazer

>11th out of 195 countries for highest death rate

There is your per capita statistic.

Lol, I spent my summer doing an online Daniël Lakens course, learning how to do Bayesian analysis and likelihood functions in R. But sure, I don't understand statistics :) here it is if you're interested

r/AskStatistics • comment
2 points • StephenSRMMartin

Although he and I have disagreements about statistical philosophy and practice, Daniel Lakens has a great MOOC for basic stats methods.

r/statistics • comment
3 points • _Patchouli

Here you go! You might find his blog post interesting! Also Daniel made a MOOC. Both include R code that you can play around with. Let me know what you think.


+ his blog post on observed power

r/AcademicPsychology • comment
1 point • Flemon45

I've heard good things about Daniel Lakens' online course "Improving your statistical inferences". It's free to sign up:


It's more about understanding how we draw conclusions from our statistics than (e.g.) how to run a particular test, but I think that level of understanding is often missing from undergraduate courses.




r/AcademicPsychology • comment
1 point • The-Credible-Hulk79


I found the above course pretty useful.

r/datascience • comment
1 point • reddismycolor

Same boat. I took that exact discrete mathematics class you are talking about. But yeah my stats is complete shit too.


I just started this class and it has great reviews, so hopefully it's good:

r/statistics • post
1 point • Geologist2010
[Education] Improving your statistical inferences online course (Coursera) - Math background

To those familiar with this course ( ), what level of math should I have before taking this course? I understand elementary statistics concepts and math through pre-calculus.

I am a geologist and I utilize some statistics in my job (mainly hypothesis tests such as the t-test or WRS, and geostatistics using GIS or EPA software).



r/AcademicPsychology • comment
1 point • andero

>Can we also add "avoid power analysis" to this?


>Post-hoc power analysis...

Yeah, post-hoc power is bad and should not be used. One could do a sensitivity analysis instead, though.

>A priori power analysis assumes you already know the true population criteria.

Not true. You are making an estimate, then testing to see if there is such an effect. The point of running the study is to see if such an effect exists.
If you run a study without enough power, your study is too likely to be null in the uninformative way, i.e. a waste of time. Plus, if you run many underpowered studies, you're going to find significant effects only in cases where noise causes the study to overestimate the effect size. This will lead to a replication crisis, which is part of how we got where we already are.
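
That "significant only when noise overestimates the effect" pattern can be simulated directly. A minimal sketch (Python with SciPy; the true effect d = 0.3 and n = 20 per group are hypothetical choices): conditioning on significance biases the surviving effect-size estimates upward.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, n = 0.3, 20   # small true effect, underpowered sample per group

sig_d = []
for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05 and t > 0:
        # Observed Cohen's d for this "significant" study
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        sig_d.append((b.mean() - a.mean()) / pooled_sd)

print(f"True d: {true_d}")
print(f"Mean observed d among significant studies: {np.mean(sig_d):.2f}")
```

Here a two-sided test at n = 20 per group is only significant when the observed d exceeds roughly 0.64, so every "publishable" estimate overshoots the true 0.3.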

Better yet, one can try to power for the smallest effect size of interest, as described in Daniel Lakens' courses on Coursera, Improving Your Statistical Inferences and Improving Your Statistical Questions. For example, I might argue that if my intervention only changes scores on the HAMD by 1 point on average, that isn't big enough to care about. Then, I power to find an effect size based on the minimum change score I would be interested in. One could also power based on width of confidence intervals.

If you have enough power, there are ways of estimating whether your effect size is reliably so small that you wouldn't care about it. Then, you would be able to conclude that it's not worth pursuing any further. Contrast this to the case of the uninformative null, wherein you have wide confidence intervals and, while there is no significant effect, the 95% CI includes large enough effect sizes that you would be interested in them, so running the study tells you nothing (i.e. an underpowered study with a non-significant result does not tell you that there is no meaningful effect; absence of evidence is not evidence of absence).
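
The smallest-effect-size-of-interest power calculation mentioned above is essentially a one-liner. A minimal sketch (Python with statsmodels rather than the course's R; d = 0.3 is a hypothetical stand-in for the minimum change score you care about):

```python
from statsmodels.stats.power import TTestIndPower

sesoi = 0.3   # hypothetical smallest effect size of interest, in Cohen's d

# Sample size per group for an independent-samples t-test
n_per_group = TTestIndPower().solve_power(
    effect_size=sesoi, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"n per group to detect d >= {sesoi} with 80% power: {n_per_group:.1f}")
```

An effect that is reliably smaller than the SESOI can then be ruled out with equivalence tests such as TOST, which Lakens has also written about.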

>For example, is spending a million dollars to save the life of one child a big or small investment? You tell me.

That's a different question altogether.
Also, the answer is "big investment". The more relevant question is "is it a worthwhile investment", but it is a big one.

r/AcademicPsychology • comment
1 point • andero

Not a book, a course: "Argumentation - The Study of Effective Reasoning" from The Great Courses (also on TGCPlus).

For more of a practical research focus, I recommend two of Daniel Lakens' free courses on Coursera: Improving Your Statistical Inferences and Improving Your Statistical Questions.

r/slatestarcodex • comment
1 point • 4QHURikzXS

The gold standard online course in this area is probably MIT's Statistics and Data Science MicroMasters. I went to a top university, and from my perspective, most online courses are somewhat dumbed-down compared to the brick-and-mortar versions, even when top universities are offering them. This MIT MicroMasters is the only online course I've seen which isn't like that. I've only completed the first course so far, in probability, but it was hardcore. It's the only online course I ever saw where the forums are full of people complaining about how hard it is. It is also a steal relative to a real master's, at about $1200 for four online courses. If you know calculus and are comfortable with proofs, and you want to invest maybe a year or two part time, to gain an extremely good foundation in stats and machine learning and have something reasonably impressive you can put on your resume, this is what I would recommend.

This course was OK, but although it's not obvious from the description, it actually does assume a stats background I didn't have. Still got a decent amount out of it, but honestly a lot of the stuff in the course seems to me like patching up a fundamentally broken paradigm of null hypothesis testing. (On the other hand, if what you care about is evaluating someone else's data analysis, as opposed to doing your own, maybe that's what you want. In fact it might be the best course available for that specific purpose. This vid on p-curve analysis was pure gold. Even for people who know stats, I'd recommend watching it to learn about p-curve analysis; it's an extremely clever technique for figuring out if a collection of results are spurious during a meta-analysis.) This course was poorly explained in my opinion and turned me off of any more JHU data science courses. There are some other Coursera courses on stats which look like they could be good but I can't comment from personal experience.

r/datascience • comment
1 point • AspiringGuru

I did some courses with this instructor, found her courses quite good.

just saw this course, haven't done it, keen to hear from others who have.

I find I need to brush up regularly, bad habits creep in too easily.

r/AcademicPsychology • comment
2 points • andero

You might want to clarify what you're looking for. Most of us probably have not read that book and that Amazon review is pretty damning (and comical).

Are you looking for management lessons? Communication stuff? Something else in particular?

Trying my best guesses, I'll recommend Adversaries Into Allies. The writing is not to my taste: it's boisterous and I find that off-putting. Several of the lessons, though, are solid. I circumvented conflict with a lab-mate because I read this book.

I am hesitant to offer this one, but maybe The 4-Hour Workweek by Tim Ferriss. It's not for academics per se and there would be a lot of irrelevant stuff, but there are some relevant bits, like how he batches email. I'll say that it's not directly relevant, but it might help you think outside the box. Maybe skip this one, maybe not.

I typically use non-book ways to learn so I don't have a lot of relevant books. Here are other things, though:

This video about how to speak is pretty good. Starts a bit slow, but then gets better. Ends strong. Easy to start with.

I very much enjoyed Effective Communication Skills. Good for general communication skills.

Also enjoyed Argumentation: The Study of Effective Reasoning. This is a bit technical as it gets into argumentation, reasoning, and debates. More broadly, argumentation is about persuasion, which includes writing scientific articles. You'd have to be interested in arguments to get through this. Learning here would be learning by transferring valuable concepts, not directly made for academics.

More on the methods side, I'd say these two Coursera courses by Daniel Lakens are maximally relevant, even if you just watch the videos (as I did): Improving Your Statistical Inferences and Improving Your Statistical Questions. Every academic psychologist should do these; it would make the field better. I only list them lower because I'm not sure what you're looking for.
On the philosophy of science side, would very much recommend Popper by Bryan Magee.

If you clarify, maybe I can offer more. Maybe not :)