3… 2… 1… Liftoff — A Dream Come True — Registered Replication Reports Take Off

A million… I mean three and a half years ago, when I wrote my incoming editor’s editorial at Perspectives on Psychological Science (DOI: 10.1177/1745691609356780), I said that I wanted to encourage new types of articles that I thought would help our field grow stronger and faster.  One of them was dubbed ‘‘The File Drawer” and I wrote: “What I envision is … the Editorial Board identifies topics: phenomena that researchers have not been able to replicate. Next, we appoint lead researchers: people who will collect the mostly unpublished failures and write an analysis of what was done, what was (or was not) found, etc.  Finally, the authors who published the original research would be given a chance to respond.”

We (Hal Pashler, Tony Greenwald, and I) identified a study to replicate and contacted the original author early on, but he seemed so unnerved by the process that we paused to regroup.  In the meantime, Hal and I developed psychfiledrawer.org, where researchers can individually post their attempted replications (both successes and failures).

Then flash forward three years to when Dan Simons and Alex Holcombe proposed what has become the Registered Replication Reports initiative — a way to get teams of researchers to try to replicate important studies with the cooperation of the original authors.  OF COURSE Perspectives should host and publish such articles.

For more on the backstory of the creation of RRR see:  http://blog.dansimons.com/2013/05/registered-replication-reports-stay.html

For more about the pushback I’ve gotten to the replication project see: http://wp.me/p2Wics-1o

We have teamed up with the Open Science Framework, where projects will be developed and shared.

To get started on your very own replication research report, or to join one already in progress, go to:  http://www.psychologicalscience.org/index.php/replication/ongoing-projects

And if you want ideas for experiments to replicate, take a look at psychfiledrawer’s top-20 list of studies users would like to see replicated.

And now…. for our very first public launch… whose study will it be?  3… 2… 1…    You can find out here.

Posted in Research Revolution | 2 Comments

“But I don’t want people to try to replicate my research.”

If you are reading this blog, you have probably seen some of the news about the new Registered Replication Reports to appear in Perspectives on Psychological Science (http://www.psychologicalscience.org/index.php/replication).  But something you probably haven’t seen, or heard…  A few months ago, I was at a meeting describing this PoPS initiative and a senior researcher said, in front of two dozen other folks,

“But I don’t want people to try to replicate my research.”

There was a hush.  And I had to stop myself from saying, “Wait.  You mean that you want your papers growing musty on shelves and unaccessed in cyberspace?”

Then I realized that THAT was not the fear.  Rather, this researcher feared that the “replication movement” could be out to get her.  She thought that if people were trying to replicate her work, it would mean that they were targeting her and “trying” to show that it was non-reproducible.  In fact, during that meeting the project was called “McCarthyism” not once but twice.

I know that some of you may be against the “replication movement” in psychology.  I assure you that this PoPS project is not meant to be any kind of “debunking” of particular research.  Rather, we intend to involve the original authors, and to involve labs that believe, disbelieve, and are neutral about the existence, generalizability, and size of the effects.  We also intend to involve all areas of psychology.

And we certainly do not intend to advocate that this is the only way science should be done.  There are upcoming articles in Perspectives that will “put replication in its place.”  But I believe that replication should be more valued as a tool than it currently is.  And a wave of agreement seems to be sweeping across the biological sciences (especially the medical sciences).

There was better humor about replication at a discussion at the Columbia University Department of Psychology a few weeks ago.  Speaker Niall Bolger showed the page in psychfiledrawer.org where researchers can nominate and vote for studies that they would like to see replicated.  Here’s the top-20 list:  http://www.psychfiledrawer.org/top-20/

“Look,” I said happily, “I’m number 9.”  But Kevin Ochsner seemed proud to beat me at number 8.  Others strained to find their own names on the list.

Oscar Wilde once said: “The only thing worse than being talked about is not being talked about.”  Shhh…. Don’t tell anyone but — I was the first to nominate my experiment for the replication top-20 list.

Posted in Meta-analysis, Research Revolution | 1 Comment

My First Post-Revolution Action Letter

I’m a little embarrassed to admit it, but the first post-revolution action letter I’ve seen is one I got as an author, not one that I sent as an action editor.

What do I mean by a post-revolution action letter?  I mean one that incorporates some of the scientific values that we have been vigorously discussing in psychology over the past year.  (E.g., in the Perspectives November special issue.)  In particular, I mean the values of not creating post hoc hypotheses, replicating surprising results, and publishing interesting studies (and replications) regardless of how they turn out.

I was the fourth author on an empirical paper with one experiment that included a priming manipulation and some messy results that did NOT confirm our initial hypothesis but did suggest something else interesting.  What did the action letter (and reviewers) say?

1) After noting that the research was interesting, the reviewers called for some type of replication of the surprising results – and were not put off by the fact that we did not confirm our original hypothesis.

2) The Action Editor wrote two lovely things. First, he said he preferred a direct replication.  He said:  “I like the fact that you propose a hypothesis and fail to find support for it (rather than invent post hoc a hypothesis that is supported). However, I also think this surprising finding (in view of your own expectation) calls for a direct replication. The combination of the two experiments would be very compelling.”

3) And second: “The direct replication would be confirmatory in nature and its outcome would not determine eventual acceptance of the paper. Performing the replication attempt and reporting it in the manuscript is sufficient.”

So, hats off to Rolf Zwaan and the anonymous reviewers.

Posted in Perspectives on Writing, Research Revolution | 5 Comments

Meta-Analyses Want YOU !

Meta-analyses are an important way of compiling our knowledge.  They help us find true effect sizes, moderators, and mediators.  But meta-analyses that consist only of published results may be very biased.  We need a way to make sure that people conducting meta-analyses can find the studies in other people’s file drawers — however those studies turned out.  If you are conducting a meta-analysis, please contact this blog and I will post a notice of it.  (Or you can post a comment here.)
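The file-drawer worry above can be illustrated with a quick simulation: if only “significant” studies get published, a meta-analysis of the published literature overestimates the true effect.  Here is a minimal sketch (the true effect of 0.2, the sample sizes, and the simple p < .05 publication filter are all illustrative assumptions, not figures from any real literature):

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true standardized effect (illustrative)
N_PER_GROUP = 30    # per-group sample size in each simulated study
N_STUDIES = 2000    # number of simulated studies

def run_study():
    """Simulate one two-group study; return (observed effect, significant?)."""
    treat = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    d = statistics.mean(treat) - statistics.mean(ctrl)
    se = math.sqrt(2.0 / N_PER_GROUP)   # SD known (= 1), so a simple z-test
    return d, abs(d / se) > 1.96        # two-tailed p < .05

results = [run_study() for _ in range(N_STUDIES)]
all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig]  # file drawer: only significant studies survive

print(f"mean effect, all studies:      {statistics.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.2f}")
```

The mean across all simulated studies recovers the true effect, while the mean of the “published” subset is substantially inflated — which is exactly why meta-analysts need access to the unpublished studies.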

Note: Perspectives DOES publish meta-analyses, but there is no guarantee that those mentioned in this blog will be published in the journal.

Starting the list now (and then moving comments up here, too):

1. Meta-analysis on the effect of induced disgust on moral judgment.
Especially interested in studies in which disgust is induced by a manipulation that is separate from the moral judgment itself (e.g., through disgusting images, hypnosis, filthy environment).  Get in touch if you have any relevant data.

By: Justin Landy and Geoff Goodwin, University of Pennsylvania
Email for more information: landyj at psych.upenn.edu

2. Meta-analysis on the effect of weight on judgment based on the effect found in the following article: Jostmann, N. B., Lakens, D., & Schubert, T. W. (2009). Weight as an embodiment of importance. Psychological Science, 20, 1169-1174. Studies have to involve randomisation, a weight manipulation and a judgment or choice.

By: Steven Raaijmakers, Tilburg University
Email for more information: stevenraaijmakers [at] gmail.com

3. Meta-analysis on the accuracy of emotion recognition in depression compared to controls using face stimuli. I would be interested if anyone has any unpublished data they could share or any suggested works for inclusion.

By: Michael Dalili, University of Bristol
Email: michael.dalili@bristol.ac.uk

Posted in Meta-analysis | 4 Comments

What Psychology Journal Editors Really Do (Really?)

A while back I posted the following on a private site.  It got some interesting responses.

I find this [previous] discussion of editorial motives and behavior very interesting.  I am about to break the code of silence here; please don’t spread this around and get me into trouble.

Among the things you guys seem not to know: After you are appointed the editor of a major psychology journal, you are invited to editor boot camp.  This is a three-day retreat in which you learn how to do the job.

Day 1 was mostly an overview of the “system” – the roles people play (from publishers to reviewers to authors), the stages of publication, and the various electronic systems that are used.  Then there was a short section about reviewers – how to select, treat, cajole, and reward them.

Day 2 was about dealing with authors.

The first part was about initial submissions: How to politely decline articles that have no business being submitted to your journal.  How to politely decline those that violate every policy that your journal has stated (about length, figures, etc.).  How to politely decline articles by native English speakers that are barely readable.

But the biggest chunk of the day was about writing action letters.  Did you know that there is a rule that you can never say something in the paper was good without adding a “but” after your comment?  Did you know that you are always supposed to tell the author to make the paper shorter (unless your journal has a strict word count and the author has adhered to it)?  And did you know that you are supposed to perseverate on something – either writing or completeness of references – just so the author acknowledges that you, the editor, are in control?  Hey – that’s social psychology at work.

Oh, and my favorite exercise on Day 2 was the 30 minutes we spent yelling obscenities at each other; the idea is to help develop a thick skin against stupid attacks by authors.  In particular, the 15 minutes we had to impersonate a famous author writing back after a rejection provided some indelible images.

And then there was Day 3.

That’s the one I really shouldn’t be talking about.  That is the Day We Learn To Screw Up Science.

Yes, indeed.  It’s our job to make sure that method sections are so inscrutably written that no one could ever accurately replicate a study even if they tried.  That way, authors can always say their original results were fine because the replication was not “exact”.

Yes, indeed.  It’s our job to make sure that in results sections authors don’t describe every analysis they did and don’t create 20 graphs, each of which compares two means.  The production people don’t like to check the typesetting on long paragraphs filled with equations and statistics, and creating good-looking tables and graphs is difficult.

Yes, indeed.  It’s our job to make sure that the paper reads so that the hypothesis proposed at the beginning turns out to be supported at the end – even though the authors had no idea what their hypotheses were in the beginning.  Of course the paper would be so much better if the authors spent 10 pages justifying what they first thought would happen regarding, for example, stereotyping, then 10 pages explaining why that didn’t happen, and then 10 more about why they got in-group/out-group effects instead.  But we are told that no one wants to read all that stuff, and that at the end of a paper, people would like to believe they have learned something.  Oh yeah, and we learned to think about how all those pages with not much in the way of conclusion would really screw up our impact factor, so that no author worth his or her salt would send a paper to our journal any more.

Day 3 was truly exhausting.  On our way out the instructors gave us chocolate and lollipops, waved to us, and with big smiles said: “So long, suckers!”

Posted in Perspectives on Editing, Perspectives on Writing | 1 Comment

Research Revolution 1.0 – Let’s Move On

The tumbrels have rolled, the guillotines have dropped, and I’m hoping that the publication of the Stapel report represents the end of Revolution 1.0.  I also hope that all the heads that have fallen have been guilty of the accusations against them.  And now I hope that we can move on to the next phase of the revolution.

Why do I call what’s happening in psychological (and other) sciences now a “revolution”?  Indeed, there are many similarities between our situation and various historical political revolutions.  For one thing, most of the cries for reform are not new.  We have been hearing about problems of replication and analysis for a long time, with calls for various types of changes.  For another, the current situation is not due simply to one factor; rather, there is currently a confluence of events that has led to the louder cries for reform – issues of fraud, replication, researcher degrees of freedom, post hoc analyses and theorizing, etc.

And I think that, as early in many revolutions, the calls to revolution are overly extreme: calls to have people post all methods and data online, register all hypotheses and experiments, provide an analysis plan, do direct replications of all studies.  These ideas all have some merits and some demerits.  But we shouldn’t use them in the service of chopping off any more heads.  As much as anyone, I believe in the value of science and the goodwill of scientists.  So, let’s stop the witch-hunt and move on.

Three New R’s of the Research Revolution

There are three new R’s I think we need to keep in mind as we move toward a more balanced Revolution 2.0.

One is R-E-S-P-E-C-T.  None of us thought, as we became psychological scientists, “Hey, I’m going into this to make the big bucks.”  And it was not for the nice offices or the adoring fans.  We do it because we find it interesting, we want to uncover truth, or we believe it’s the only thing we are competent to do.   Therefore, I think it’s time to pull back and stop being accusatory.  And there should be no ex post facto laws.  (See Article I of the United States Constitution.)  That means that if someone has been adhering to past standards for running, analyzing, and reporting research, it is not fair to critique them for it now.  We can create new norms for the future.  It is time for us all to move on.

Another R is refutation.  Science proceeds not only by coming up with new theories, finding support for them, and connecting them to other theories, but also by discarding old theories that have lost support.  A longer analysis of this topic is for a later time, but it is very related to the third R.

The R that many people are focusing on now is Replication. “Let’s call for replications,” “let’s publish replications and failures to replicate”, etc.  Whether and how this should be done has become a divisive issue in the field of psychological science.

This piece in The Guardian notes that editors of the journal Psychological Science thought about, and then rejected, the idea of publishing replication attempts.  The idea is not gone; rather, it has moved from Psychological Science to its more flexible sister journal Perspectives on Psychological Science.  The goal is to combine information from many researchers, as we do in meta-analyses.  We are not witch-hunting for fraud or for bad research practices; instead we are looking for the generalizability and limitations of the research.  Many replications (and failures to replicate) tell us something important.  In fact, I believe that we need to listen to all of our data, including pilot studies and replication successes and failures. Your data might not be perfect for your theory, but it’s relevant to other theories.  As Nelson Goodman said (more or less):

Every piece of data is consistent with an infinite number of hypotheses and inconsistent with an infinite number of hypotheses.

Or as I put it:  One scientist’s p < .25 is another scientist’s p < .05.

As psychological scientists we are uniquely situated to not only study psychology content, but also to study the reasoning processes of scientists themselves.  Thus, what we find should be useful both for our science and for all people involved in the process of science.  So, yes, there should be replication attempts and there should be “science as usual”.  And if someone wants to replicate your findings you should be flattered that they (and others) are interested and motivated enough to do so.  But there should be no more witch-hunts or fears of witch-hunts.  Let’s move on.

(N.B. Many of these issues are addressed in the various articles in the Perspectives Special Issue on Replicability and Research Practices.)

Posted in Research Revolution | 4 Comments

Welcome to My PoPS

I’ve been getting a LOT of submissions to Perspectives on Psychological Science (PoPS) lately from people wanting me to publish their comments on the comments on the comments….  Seriously?

We should be having discussions in real time, or at least e-time, not waiting for 15th-century print technology to make our points.  And if I published every comment on the comments…, there would be no place for the actual science.  So, let’s all get over only counting ideas that have been set in type — even though most of us don’t read them that way any more.

Instead, let’s see your perspective… here.

Posted in Perspectives on Writing | 3 Comments