"We are ready to move!" An interview with Daniel Lakens and Klaus Fiedler on the current challenges in the field of psychological research
Can psychological research still be trusted? In-Mind interviewed Daniel Lakens and Klaus Fiedler, two of the most prominent voices in the debate on how psychological science can be improved. In this interview, they offer a personal view on how psychology has changed and how it should change in the future. They describe their personal motivation and how the debate has affected their own work.
Distrust in science is on the rise. Whether it concerns climate change or creationism, politicians such as Donald Trump or Recep Erdoğan seem to favor beliefs over scientific data. Growing distrust in science may stem from efforts to depict unwanted research as driven by ideology. However, increasing doubts also originate from science itself: The flurry of new findings might often be too good to be true. The strong pressure to publish may lead to biased results. A few years ago, these doubts were fueled by shocking revelations of data fraud and by difficulties in replicating prior findings. So, is psychological research in a crisis of confidence? Or are failures to replicate previous findings a healthy element of a self-correcting science? How can psychological science do better?
During last year’s Cologne Social Cognition Meeting (CSCM), Jan Crusius and Oliver Genschow from the In-Mind magazine had the unique opportunity to interview two of the most prominent voices in the debate on how to improve psychological science. Daniel Lakens from Eindhoven University of Technology (Netherlands) stresses the need for better methods, research transparency, and rigorous science. Klaus Fiedler from the University of Heidelberg (Germany) contends that better theories will move the field forward.
In-Mind: Can we still trust psychological research?
Fiedler: Of course. There are so many questions at the heart of people's everyday problems that only behavioral scientists like psychologists can answer. I could give many examples. Here is only one: How can students most effectively learn at school? Psychological research investigates the optimal timing for learning. Such research has taught us that students learn best when content is distributed over time and when consolidation periods, such as breaks, are included. So, if a class is taught just as one block, learning is much less effective than if it is distributed over fifteen smaller sessions across three months. Of course, there are many other questions that only behavioral scientists can answer—regardless of what Donald Trump says.
Lakens: It is also very important how psychological knowledge and scientific facts are transmitted to the general public. People will very often hear about our results and findings through the media. There is, however, a big filter on what is transferred to the general public. Very often, psychological findings are presented as a fun fact. However, psychologists study much more basic and fundamental issues that have a strong impact on society. A very relevant topic, for example, is what people can do to save the environment. As humans, we often look at short-term interests and neglect long-term goals such as protecting the environment. Psychologists work on how this perspective can be shifted from short-term to long-term interests, which can help us understand how humans could better protect the environment in the long run.
In-Mind: From your answers one could get the impression that psychological research is doing just fine and that it is merely the media that conveys a wrong picture. Is there no need to improve?
Lakens: Of course there is a need for improvement. It would be foolish to think we are now doing the best possible science and that in the year 2100 people will look back and say "Huh, we couldn't improve. In 2017, everything was just perfect." That's not going to happen. So, in that sense: Can there be improvement? Yes! Will there be improvement? Of course.
Fiedler: It is very important that there is never a last word. A very good example of psychological progress is the debate in legal psychology on how to do a line-up. We know line-ups from movies, where an eyewitness has to identify the murderer among six people. In psychological terms, this is a multiple-choice recognition test. Over the last forty years, we have learned a lot about how to improve this kind of test. First, researchers found that eyewitnesses often make wrong identification decisions. Because of this, many innocent people ended up in prison. As a remedy, stricter and less error-prone methods for identification have been developed. For instance, the number of people in a line-up has been increased. Eyewitnesses now have to identify possible perpetrators sequentially instead of simultaneously. And some years later, researchers found out that asking eyewitnesses about their confidence in identifying the correct person increases the quality of the procedure as well. Over the years, recommendations for how to do a line-up have changed again and again. A single research finding was never the last word. And I would not be surprised if in twenty, thirty, or forty years there are again new insights that further improve this procedure. That's just the way it works. It's a continuous updating of the state of the art. That's what science is about.
In-Mind: So, psychology is not in a crisis at all…
Fiedler: I don’t like the word “crisis” when it refers to my favorite football club and I don’t like the word “crisis” in science. Of course, scientists make big mistakes such as in the way they analyze their data and in the way they interpret the results. However, I dare say that other disciplines like behavioral economics, medical science, or cell biology–to name just a few examples–can be just as fast in drawing premature interpretations and informing the public too soon. I wouldn’t say that psychologists are overly confident relative to other disciplines. This, of course, should not prevent psychologists from working out new methods and procedures to improve their science.
Lakens: Some historians have said that research is in a continuous crisis. I think that is very true. We now recognize that some things were wrong. Besides the scientific methods we use, we now also question the social aspects of doing science, for example, whether the reward structures for scientists, such as the strong incentives to publish new and surprising results, can have damaging effects. We as psychologists are perfectly suited to study these social influences. That's something we know about, and that's why we are really moving ahead and working on improving our field.
In-Mind: Both of you have been very vocal in the debate on how the field can move forward and be improved. What is your personal motivation for engaging in this debate?
Lakens: For me, there were two important incidents. First, somebody contacted my research team, doubting our results. Initially, we didn't understand the criticism very well. Eventually, somebody wrote: "Your findings are too good to be true and there are possible reasons for it. It could be fraud, it could be that you've messed around with your data, misanalyzed your data, made a mistake, or it could be that you did not report all of your conducted studies." We were nervous because of the complicated statistical aspects the person was mentioning and started to question our findings. We realized that we had indeed selectively chosen our studies and only reported the ones in favor of our idea, disregarding those that had failed for whatever reason. Soon, we published the additional data on an online platform to correct the scientific record. Later, I got involved in the Reproducibility Project, planning a replication study. I realized that for many important issues, such as calculating the correct number of participants needed for a study, I had no clue how to do it. How, as someone with a PhD, could I not know how to design a proper study? I was not trained well enough to do good science. That's the second reason why I'm so interested in the debate and motivated to educate myself and others.
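For readers curious what such a sample-size calculation looks like in practice, here is a minimal sketch of an a-priori power analysis, not taken from the interview. It assumes a simple two-group comparison and illustrative choices of a medium effect size (Cohen's d = 0.5), 80% power, and an alpha of .05, using the statsmodels library in Python.

```python
# Minimal a-priori power analysis sketch (illustrative numbers, not from the interview):
# how many participants per group are needed to detect a medium effect (d = 0.5)
# in a two-group comparison with 80% power and alpha = .05?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                    power=0.80, alternative='two-sided')
print(f"Required participants per group: {n_per_group:.0f}")  # roughly 64
```

Running this yields about 64 participants per group, which shows why intuition alone ("20 per condition should be enough") can badly underestimate the samples needed to detect typical effects.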
Fiedler: I am interested in this debate because I believe that the way it unfolded over the last decade was counter-productive. It damaged psychology’s public image and undermined the self-confidence of our young scientists and students. Moreover, the style of the debate provoked my contrarian side. If somebody tells me that there is a crisis, I want to show it's not a crisis. I want to counter-argue. I want to consider the opposite. Before the start of the debate, I was not so much motivated to look at positive aspects. But since then I've changed my view, taking the perspective of a defense attorney. I am convinced that it is more instructive to look at good science than to complain about bad science.
In-Mind: What do you think are currently the biggest challenges for our field?
Fiedler: I believe that if we really want to deal with our problems, to cope with them, and to improve the situation and the statistical problems that Daniel just mentioned, we need different pathways. My hunch is that we have to look at the best research examples in our field, the ones we can be proud of. Then, scientists can strive to emulate such research, creating a positive snowball effect. Perhaps we should install a Hall of Fame for really excellent projects that live up to all the criteria Daniel has in mind. This would motivate other researchers and leave the negative tail of less excellent research behind us.
Lakens: I might disagree on this part. I think that publication bias and untrustworthy results in the literature are very problematic. Publication bias means that only results that "worked" are reported, and the community does not learn about the failed studies. Often you hear in the news that if you eat chocolate or drink wine, you're more likely to get cancer. Two weeks later you read in the same newspaper that when you drink wine you're less likely to get cancer. This happens exactly because of publication bias. There is a distribution of results, and at the extremes you will find significant effects. But if you look at all the data of all studies and researchers together, there might actually be no effect. As scientists, we have to be aware of this problem and of how we communicate our results to the public.
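To make this mechanism concrete, here is a small toy simulation, not from the interview, assuming a hypothetical scenario in which the true effect is exactly zero but only studies reaching p < .05 enter the published record. The specific numbers (30 participants per group, 10,000 studies) are arbitrary illustrative choices.

```python
# Toy simulation of publication bias: the true effect is zero, yet the
# "published" studies (those with p < .05) suggest a sizable effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
published_effects = []
for _ in range(10_000):                      # 10,000 hypothetical studies
    a = rng.normal(0, 1, 30)                 # group A, true effect = 0
    b = rng.normal(0, 1, 30)                 # group B, true effect = 0
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:                             # only "significant" studies get reported
        published_effects.append(abs(b.mean() - a.mean()))

print(f"Share of studies 'published': {len(published_effects) / 10_000:.2%}")
print(f"Mean effect size in the published record: {np.mean(published_effects):.2f}")
```

About 5% of the simulated studies cross the significance threshold by chance, and those selected studies show a substantial average group difference, even though nothing is really there. That is exactly the chocolate-and-wine pattern described above.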
In-Mind: What could be done against these problems?
Lakens: In my view, at least two things are important. If new students enter the field, they don't really know what they can build on. If only extreme results are reported, they don't know how likely it is that a replication of the same study will work. If they themselves fail to replicate others' results, it's also difficult for the students to share that information with other people, because of the same publication bias. To me that's one important issue; the other one is education. We need good training on how to do the best possible science. For some reason, we did not do the best job on this in the past. We don't all have to become real statisticians. But learning the basics is very important.
Fiedler: There is not much reason to counter Daniel on this point. I like the goal of solving the publication bias problem. And yes, junior researchers need to be able to rely on previous evidence. But they also need advisors who give them orientation. Therefore, I also like the notion that we need to improve our education. However, when it comes to improving the quality of psychological research, I'm less inclined to believe that we need better statistics. Statistics is never better than the research design it is applied to. If an experiment is not well designed, if the selected research materials, tests, and measurements are flawed, even the best statistics can't compensate.
In-Mind: How important are good theories in this respect?
Fiedler: Theorizing beats research design. Good theorizing is more important than anything else. If the theory you're testing is weak or logically unwarranted, even the best designs, methods, and statistics cannot change the fact that your predictions can't answer the questions you are asking. What really matters is strong theoretical reasoning. Researchers should always be aware of what theoretical constraints drive the hypothesis they are testing, and they should consider the opposite when testing a hypothesis. You have to embrace this self-doubt. This self-critical attitude at the level of theoretical reasoning is more important than technical knowledge.
Lakens: I completely agree. But I just think that statistics is the easiest thing to teach (for Daniel Lakens' online courses, see here). Statistics is not so difficult. And you can also teach it in large groups. So, I agree that theorizing is important, but you should not neglect statistics. If you do it wrong, you can fool yourself. And that can affect theorizing in a negative way. That is, if we build our theories on findings that are based on bad statistics, we cannot arrive at precise theoretical predictions.
Fiedler: George Kelly [1] wrote about the creative cycle, which can be applied to our debate, but also to evolution, therapy, and many other topics. Kelly puts forward the idea of an alternation between loosening phases and tightening phases. During loosening phases, you have to be creative and come up with new, maybe weird, ideas. You have to be courageous to do so. Then comes a tightening phase, in which you strictly test different hypotheses against each other with hard statistics. We have to embrace both the loosening and the tightening phase. Switching between the two phases is the ultimate art of being a good scientist.
Lakens: I teach the idea of loosening and tightening to all my students. I think it captures how science works and progresses. I also teach Klaus’ article [2] in which he refers to the creative cycle of science.
In-Mind: How did the debate change your own scientific work?
Lakens: What I learned is that science itself is not a fixed system. It can change. For me that's very nice, because reflecting on this idea can improve one's own research. The idea of a changeable system forces you to always ask yourself: What am I doing? Am I doing it in the best possible way? As an example, new technologies have made it possible to be much more transparent about what we do. When I was finishing high school, I had just created my first e-mail address; now we have technology that allows me to easily share my research methods and data with other people. I like that very much and have implemented it in my daily routine as a scientist.
Fiedler: During the debate about the reproducibility of research, sharing data and knowledge has become self-evident. It has become so easy and so cheap. I like the fact that we are no longer in the old times when data sharing was something unusual. I find it so obvious, so self-evident. Moreover, the debate sharpened my view on statistical and methodological issues. I must say that I already applied most of these things before. But I never saw it as clearly as I see it now.
In-Mind: What makes you optimistic that psychology as a science is moving in the right direction?
Lakens: Klaus Fiedler mentioned students who are very worried about the field and are disillusioned. Recently, I was teaching in Zürich and made a similar observation. One of the students was very negative about the field and said: "What do we know about psychology? There is basically nothing valid written in the textbooks!" I then asked each of the 15 participants to mention one theory or finding that is very good and strong. It was astonishing to see how many strong facts were mentioned. So, I think now is actually a good time to put the negative view aside. I've had it, to be honest. I really think it is over. I think we are ready to move. And in fact, I think we have already moved many steps forward. In this respect, I especially like how the debate has brought researchers closer together, leading to more collaborations across different countries and disciplines.
Fiedler: Collaborations are certainly one thing that will improve our field, just like the new movement toward data sharing. Sharing predictions and pre-registering what you expect to find in a study will certainly help move the field forward. You won't believe it, but I'm actually involved in this, although in a slightly different way (laughs). A colleague of mine likes tournaments. So, we came up with the idea to announce tournaments for scientists. If there are different explanations for a certain phenomenon, we would like to let different scientific models and researchers compete with each other. The researchers with the model that best explains and predicts the phenomenon would win the tournament. This approach would help us overcome a certain type of irrational behavior. In this way, collaborations would not only be motivating but also elucidating, as they help you find things you would not have seen alone. Moreover, it would be fun and inspiring.
In-Mind: Given these challenges, how can psychology communicate with a broader audience?
Lakens: Laypersons often don't have a very good idea of how science works and what the process of science is. Therefore, I think students should already learn in high school how science works. Teachers should present science in a way that people are not confused when they read about it in the news. Moreover, we as scientists have to learn to present our results and their limitations in a way that people can easily understand them without the specific scientific details.
Fiedler: If one wants to publish something for an audience of laypeople, one has to develop an authentic attitude towards them. It's really important to convey confidence, authenticity, and trust to your audience. You can't fake something that's not really authentic. It is also important to show this attitude when you talk to journalists. This helps convey that we as a field can be trusted and that we provide the general public with credible information. What you are doing with the In-Mind magazine is exactly what I mean. The goal of In-Mind is to be an interface between what scientists are doing and what the public can understand. That's really important. We as a field neglected this aspect too much in the past, although it is the ultimate goal. So, In-Mind really is an improvement in this respect.
In-Mind: Thank you very much for your time and this interview.
References
[1] Kelly, G. A. (1955). The psychology of personal constructs. Vol. 1. A theory of personality. Oxford, England: W. W. Norton.
[2] Fiedler, K. (2004). Tools, toys, truisms, and theories: Some thoughts on the creative cycle of theory formation. Personality and Social Psychology Review, 8, 123–131.