(Okay, I have got to fix the kerning on this theme’s title format.)
Want to know a pet peeve of mine? People suffering from the Dunning-Kruger effect, especially as it relates to health/nutrition/medicine/wellness.
I’ve mentioned before that I follow Science Babe on Facebook. (I follow some other folks who purport to value logic above emotions or agendas, too, so this isn’t a criticism solely of her, by any means.) What I’ve noticed among many non-academics who want to be a voice of reason in an anti-science climate (no pun intended) is that they have a tendency to embrace arguments that fit their worldview regardless of how robust the research is.
The criteria for evaluating new (or any) studies should be the validity of the authors’ assumptions, the soundness of the statistics, and adherence to the rest of the scientific method. Being distracted by “the shiny new thing” is understandable for laypersons (of which I am one), but the point of being a science communicator is to evaluate the evidence on behalf of people who aren’t qualified to do so. Why, then, does Science Babe wait for a consensus, regardless of her feelings about a study’s topic or results, rather than pointing out its strengths and weaknesses? She seems to confuse “reputable” with “oft repeated,” even though that goes against everything she claims to stand for. If she feels confident branding herself as a science communicator, she needs to be capable of, and comfortable with, airing the limitations of even the studies she finds favorable.
It is important that we baby-vaccinating, climate-change-believing, healthy-diet-promoting types not lose sight of proving things just because we’re surrounded by people who already believe them or who cannot mount logical arguments against them. One kind of groupthink is no better than another, especially when both assume they are correct based on facts not in evidence. If facts are truly what you think people should base their stances on, facts are what you should be discussing. A study that holds up under critical examination is what should give science communicators hope to pass along to the rest of us. Otherwise, it is their job to temper our hope (with facts!) until better (not just more) research emerges. Examine.com does an excellent job of this, and the concepts behind their ratings system could easily be applied elsewhere. I challenge Science Babe and friends to consider similar metrics if they want to be of use to laypersons.
I realize that breaking these things down for a general audience can be time-consuming and tricky, but isn’t that exactly why the need for science communicators exists? Popular figures like Science Babe have a great platform from which to reinforce logical thinking, but nuance isn’t exactly sexy these days. Let’s hold these educators accountable for their methods, not just their conclusions; maybe they’ll do the same to researchers, in turn.