My First Post-Revolution Action Letter

I’m a little embarrassed to admit it – but the first post-revolution action letter I’ve seen is one I got as an author, not one that I sent as an action editor.

What do I mean by a post-revolution action letter?  I mean one that incorporates some of the scientific values that we have been vigorously discussing in psychology over the past year.  (E.g., in the Perspectives November special issue.)  In particular, I mean the values of not creating post-hoc hypotheses, replicating surprising results, and publishing interesting studies (and replications) regardless of how they turn out.

I was the fourth author on an empirical paper with one experiment that included a priming manipulation and some messy results that did NOT confirm our initial hypothesis but did suggest something else interesting.  What did the action letter (and reviewers) say?

1) After noting that the research was interesting, the reviewers called for some type of replication of the surprising results – and were not put off by the fact that we did not confirm our original hypothesis.

2) The Action Editor wrote two lovely things. First, he said he preferred a direct replication.  He said:  “I like the fact that you propose a hypothesis and fail to find support for it (rather than invent post hoc a hypothesis that is supported). However, I also think this surprising finding (in view of your own expectation) calls for a direct replication. The combination of the two experiments would be very compelling.”

3) And second: “The direct replication would be confirmatory in nature and its outcome would not determine eventual acceptance of the paper. Performing the replication attempt and reporting it in the manuscript is sufficient.”

So, hats off to Rolf Zwaan and the anonymous reviewers.


Meta-Analyses Want YOU!

Meta-analyses are an important way of compiling our knowledge.  They help us find true effect sizes and moderators and mediators.  But meta-analyses that consist only of published results may be very biased.  We need a way to make sure that people conducting meta-analyses can find the studies in other people’s file drawers — however those studies turned out.  If you are conducting a meta-analysis, please contact this blog and I will post a notice of it.  (Or you can post a comment here.)
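For readers who haven't run one, here is a minimal sketch of the inverse-variance (fixed-effect) pooling that underlies most meta-analyses; the study numbers are hypothetical, and a real meta-analysis would also model between-study heterogeneity (random effects) and, as this post stresses, hunt down unpublished studies.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted (fixed-effect) pooled estimate.

    Each study's effect is weighted by the inverse of its sampling
    variance, so precise studies count more; the pooled variance is
    the reciprocal of the summed weights.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Two hypothetical studies: effect sizes with their sampling variances.
# The second study is more precise, so it pulls the pooled estimate toward it.
pooled, se = fixed_effect_meta([0.5, 0.3], [0.04, 0.01])  # pooled = 0.34
```

This is exactly why file-drawer studies matter: every study enters the weighted average, so systematically omitting null results biases the pooled estimate upward.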

Note: Perspectives DOES publish meta-analyses but there is no guarantee that ones mentioned in this blog will be published in the journal.

Starting the list now (and then moving comments up here, too):

1. Meta-analysis on the effect of induced disgust on moral judgment.
Especially interested in studies in which disgust is induced by a manipulation that is separate from the moral judgment itself (e.g., through disgusting images, hypnosis, filthy environment).  Get in touch if you have any relevant data.

By: Justin Landy and Geoff Goodwin, University of Pennsylvania
Email for more information: landyj at

2. Meta-analysis on the effect of weight on judgment based on the effect found in the following article: Jostmann, N. B., Lakens, D., & Schubert, T. W. (2009). Weight as an embodiment of importance. Psychological Science, 20, 1169-1174. Studies have to involve randomisation, a weight manipulation and a judgment or choice.

By: Steven Raaijmakers, Tilburg University
Email for more information: stevenraaijmakers [at]

3. Meta-analysis on the accuracy of emotion recognition in depression compared to controls using face stimuli. I would be interested if anyone has any unpublished data they could share or any suggested works for inclusion.

By: Michael Dalili, University of Bristol


What Psychology Journal Editors Really Do (Really?)

A while back I posted the following on a private site.  It got some interesting responses.

I find this [previous] discussion of editorial motives and behavior very interesting.  I am about to break the code of silence here; please don’t spread this around and get me into trouble.

Among the things you guys seem not to know: After you are appointed the editor of a major psychology journal, you are invited to editor boot camp.  This is a three-day retreat in which you learn how to do the job.

Day 1 was mostly an overview of the “system” – the roles people play (from publishers to reviewers to authors), the stages of publication, and the various electronic systems that are used.  Then there was a short section about reviewers – how to select, treat, cajole, and reward them.

Day 2 was about dealing with authors.

The first part was about initial submissions: How to politely decline articles that have no business being submitted to your journal.  How to politely decline those that violate every policy that your journal has stated (about length, figures, etc.).  How to politely decline articles by native English speakers that are barely readable.

But the biggest chunk of the day was about writing action letters.  Did you know that there is a rule that you can never say something in the paper was good without adding a “but” after your comment?  Did you know that you are always supposed to tell the author to make the paper shorter (unless your journal has a strict word count and the author has adhered to it)?  And did you know that you are supposed to perseverate on something – either writing or completeness of references – just so the author acknowledges that you, the editor, are in control?  Hey – that’s social psychology at work.

Oh, and my favorite exercise on Day 2 was the 30 minutes we spent yelling obscenities at each other; the idea is to help develop a thick skin against stupid attacks by authors.  In particular, the 15 minutes we had to impersonate a famous author writing back after a rejection provided some indelible images.

And then there was Day 3.

That’s the one I really shouldn’t be talking about.  That is the Day We Learn To Screw Up Science.  Yes, indeed.  It’s our job to make sure that method sections are written so inscrutably that no one could ever accurately replicate a study even if they tried.  That way, authors can always say their original results were fine because the replication was not “exact”.  Yes, indeed.  It’s our job to make sure that in results sections authors don’t describe every analysis they did and don’t create 20 graphs, each of which compares two means.  The production people don’t like to check the typesetting on long paragraphs filled with equations and statistics, and creating good-looking tables and graphs is difficult.  Yes, indeed.  It’s our job to make sure that the paper reads as if the hypothesis proposed at the beginning turns out to be supported at the end – even though the authors had no idea what their hypotheses were in the beginning.  Of course the paper would be so much better if the authors spent 10 pages justifying what they first thought would happen regarding, for example, stereotyping, then 10 pages explaining why that didn’t happen, and then 10 more about why they got in-group/out-group effects instead.  But we are told that no one wants to read all that stuff, and that at the end of a paper, people would like to believe they have learned something.  Oh yeah, and we learned to think about how all those pages with not much in the way of conclusion would really screw up our impact factor, so that no author worth his or her salt would send a paper to our journal any more.

Day 3 was truly exhausting.  On our way out the instructors gave us chocolate and lollipops, waved to us, and with big smiles said: “So long, suckers!”


Research Revolution 1.0 – Let’s Move On

The tumbrels have rolled, the guillotines have dropped, and I’m hoping that the publication of the Stapel report represents the end of Revolution 1.0.  I also hope that all the heads that have fallen belonged to people who were actually guilty of the accusations against them.  And now I hope that we can move on to the next phase of the revolution.

Why do I call what’s happening in psychological (and other) sciences now a “revolution”?  Indeed, there are many similarities between our situation and various historical political revolutions.  For one thing, most of the cries for reform are not new.  We have been hearing about problems of replication and analysis for a long time, along with calls for various types of changes.  For another, the current situation is not due to any single factor; rather, a confluence of events – issues of fraud, replication, researcher degrees of freedom, post hoc analyses and theorizing, etc. – has led to the louder cries for reform.

And I think that, as in the early stages of many revolutions, the calls for change are overly extreme: calls to have people post all methods and data online, register all hypotheses and experiments, provide an analysis plan, and do direct replications of all studies.  These ideas all have some merits and some demerits.  But we shouldn’t use them in the service of chopping off any more heads.  As much as anyone, I believe in the value of science and the goodwill of scientists.  So, let’s stop the witch-hunt and move on.

Three New R’s of the Research Revolution

There are three new R’s I think we need to keep in mind as we move toward a more balanced Revolution 2.0.

One is R-E-S-P-E-C-T.  None of us thought, as we became psychological scientists, “Hey, I’m going into this to make the big bucks.”  And it was not for the nice offices or the adoring fans.  We do it because we find it interesting, we want to uncover truth, or we believe it’s the only thing we are competent to do.  Therefore, I think it’s time to pull back and stop being accusatory.  And there should be no ex post facto laws.  (See Article I of the United States Constitution.)  That means that if someone has been adhering to past standards for running, analyzing, and reporting research, it is not fair to critique them for it now.  We can create new norms for the future.  It is time for us all to move on.

Another R is refutation.  Science proceeds not only by coming up with new theories, finding support for them, and connecting them to other theories, but also by discarding old theories that have lost support.  A longer analysis of this topic is for a later time, but it is very related to the third R.

The R that many people are focusing on now is Replication. “Let’s call for replications,” “let’s publish replications and failures to replicate”, etc.  Whether and how this should be done has become a divisive issue in the field of psychological science.

This piece in The Guardian notes that editors of the journal Psychological Science thought about, and then rejected, the idea of publishing replication attempts.  The idea is not gone; rather, it has moved from Psychological Science to its more flexible sister journal Perspectives on Psychological Science.  The goal is to combine information from many researchers, as we do in meta-analyses.  We are not witch-hunting for fraud or for bad research practices; instead we are looking for the generalizability and limitations of the research.  Many replications (and failures to replicate) tell us something important.  In fact, I believe that we need to listen to all of our data, including pilot studies and replication successes and failures.  Your data might not be perfect for your theory, but they are relevant to other theories.  As Nelson Goodman said (more or less):

Every piece of data is consistent with an infinite number of hypotheses and inconsistent with an infinite number of hypotheses.

Or as I put it:  One scientist’s p < .25 is another scientist’s p < .05.
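That slogan has a statistical face: suggestive results that don’t clear p < .05 on their own still carry information when pooled.  Here is a minimal sketch (not anything from the post itself, and with hypothetical p-values) of Fisher’s method for combining independent p-values:

```python
import math

def fisher_combined_p(pvalues):
    """Combine independent p-values with Fisher's method.

    Under the global null, X = -2 * sum(ln p_i) is chi-square
    distributed with 2k degrees of freedom (k = number of studies).
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # The chi-square survival function has a closed form for even df = 2k:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# A single study just returns its own p-value:
solo = fisher_combined_p([0.05])            # 0.05
# Three merely "suggestive" studies pool to conventional significance:
pooled = fisher_combined_p([0.10, 0.10, 0.10])  # ≈ .032
```

The design choice here is deliberate: because the degrees of freedom are always even, the chi-square tail probability has an exact finite-sum form, so no statistics library is needed.  The same point motivates publishing failed and messy replications: each one contributes a term to the sum.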

As psychological scientists we are uniquely situated to not only study psychology content, but also to study the reasoning processes of scientists themselves.  Thus, what we find should be useful both for our science and for all people involved in the process of science.  So, yes, there should be replication attempts and there should be “science as usual”.  And if someone wants to replicate your findings you should be flattered that they (and others) are interested and motivated enough to do so.  But there should be no more witch-hunts or fears of witch-hunts.  Let’s move on.

(N.B. Many of these issues are addressed in the various articles in the Perspectives Special Issue on Replicability and Research Practices.)


Welcome to My PoPS

I’ve been getting a LOT of submissions to Perspectives on Psychological Science (PoPS) lately from people wanting me to publish their comments on the comments on the comments… .  Seriously?

We should be having discussions in real time, or at least e-time, not waiting for 15th century print technology to make our points.  And if I published every comment on the comments…, there would be no place for the actual science.  So, let’s all get over only counting ideas that have been set in type — even though most of us don’t read them that way any more.

Instead, let’s see your perspective… here.
