A junior researcher's practical take on the why and how of open science.

Of course this fear should not stop researchers from making relevant procedures, data, and other materials available after publishing their findings. The more challenging question is whether, or perhaps for how long, researchers should be able to sit on materials and data. Currently, my personal practice is to withhold any potentially publishable data until I publish it, regardless of how long that takes or how likely I am to actually do so, and to store any seemingly unpublishable data or "failed projects" in a private folder until I forget about them. There is an ethical question here: withholding large amounts of information from the public likely slows the progress of science. But even from a purely selfish perspective, I think we should ask ourselves: am I really going to publish this any time soon? If the answer is "no," then chances are you can only gain from making materials and data publicly available, whether through collaborations, citations, or simply becoming known for doing lots of work in your area.

The downside of sharing – opening yourself up to scrutiny.

Increased openness has many benefits, but it also has at least one drawback: opening one's research to enhanced scrutiny. Current research culture often incentivizes presenting 'clean' designs and data, so transparency about design and data 'ugliness' can put one at a disadvantage in peer review, academic hiring, and tenure reviews. This prospect is particularly scary for a junior researcher who does not yet have a solid reputation and whose career trajectory rests on a small number of projects. But it is precisely because others might find mistakes or come to different conclusions that it is so important to share. Science requires that evidence-based conclusions be subject to scrutiny.

Because open practices will likely lead to more criticism, social psychology as a field should be careful about how and why we criticize. We must acknowledge that there are often multiple defensible ways to analyze data, and that mistakes are to be expected. Thus, instead of condemning individuals or practicing "gotcha" science, we should focus on developing processes that improve data collection and analysis and reduce errors. One way to constructively address perceived errors is simply to give the original authors notice of any mistakes or alternative interpretations, allowing them to respond before their work is publicly criticized. Another option is to engage in collaborative re-analyses or re-interpretations, akin to collaborative replications. As Betsy Levy Paluck noted, if economists can be respectful, so can we.

Some final thoughts for all, including the less junior reader.

Finally, as a field we need to acknowledge that open science improves our research, and we need to reward those who practice it. Top journals like PSPB are starting to require that data from published articles be made available, and Psych Science has begun to incentivize open practices. I hope that hiring, tenure, and grant committees will similarly expect transparency and credit those who practice it.

This of course does not mean that all researchers need to spend the weekend searching their attics so they can publicly archive all of their research materials since 1978. I empathize with those who are overwhelmed by the idea of practicing openly, or even skeptical about open science more generally. To these readers I suggest you start small. Perhaps begin by organizing your new research materials and data files in an electronic form that could easily be shared if (and when) you decide to: clearly label the variables in your data files, try out an online repository like the Open Science Framework, or encourage your next undergraduate honors student to record their hypotheses and post their materials online. I think you will find it is not so hard to begin integrating practices that will help make your science more open and our science better.
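
To make "clearly label the variables" concrete, here is a minimal sketch in Python (assuming the pandas library; the file names, variables, and descriptions are hypothetical examples, not from any real study) of saving a tidy data file alongside a simple codebook that documents each variable:

    # A minimal sketch of archiving a small dataset so it could be shared later.
    import pandas as pd

    # Example data with descriptive (not cryptic) column names.
    data = pd.DataFrame({
        "participant_id": [1, 2, 3],
        "condition": ["control", "treatment", "control"],
        "self_esteem_score": [3.2, 4.1, 3.8],
    })

    # A simple codebook documenting what each variable means.
    codebook = pd.DataFrame({
        "variable": data.columns,
        "description": [
            "Anonymous participant identifier",
            "Experimental condition (control/treatment)",
            "Mean of 10-item self-esteem scale (1-5)",
        ],
    })

    # Save both files; together they are ready to upload to a repository
    # such as the Open Science Framework whenever you decide to share.
    data.to_csv("study1_data.csv", index=False)
    codebook.to_csv("study1_codebook.csv", index=False)

Even if you never share the files, future-you will thank present-you for the codebook.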
