Dear APS: It’s not me, it’s YOU !

This year I decided not to rejoin APS.  I’ve been a member for many years, am a Fellow, was on the board, and have been a CE, AE, and Editor of various APS journals.  Last week, I got an e-mail begging me to come back.  I had three reasons not to rejoin but didn’t talk about them publicly because I thought it was just me.  Now it’s clear: it’s not me, APS, it’s YOU.

If you want us to get back together, there are three things you need to do.

The SECOND of the three reasons was recently mentioned by several people on social media: APS elections.  APS sent out links for voting (or so I’m told, not being a current member).  It is silly that, along with the ballot, APS provides so little information about the candidates.  They give candidates space to indicate various bits of affiliation and service.  But there are no vision statements, nothing about priorities or initiatives, and certainly nothing about the contentious issue of science reform.  The election becomes a popularity contest of sorts.  (So, it turns out, it wasn’t only me who has been displeased with that process.  It’s you.)

Watching, Sad and Embarrassed

The FIRST, and major, of the three reasons is the decay of the journal Perspectives on Psychological Science and APS’s continued failure (despite much complaining) to do anything about it.  Again, I thought this was only my worry – having been the previous EIC, of course I wouldn’t like all of the changes.  But a few weeks ago, some Facebook groups concerned with methods in psychology were filled with disdain as they saw Editor Sternberg publish yet another invited special section (this time nearly a full issue): (1) extolling the virtue of citation counts, (2) with nearly all US male authors, and (3) with a foreword and afterword by himself that barely talked about the invited articles and instead was filled with gratuitous self-citations.  These actions resulted in an open letter to APS assembled by Chris Crandall and signed by about 160 people.

The letter notes the focus of the articles and the selection of authors – and refers back to how similar concerns were raised after the infamous “Am I Famous Yet” special section, in which a section ostensibly about “Merit” in psychology mostly devolved into one about becoming famous, with 6 articles by men and 1 first-authored article by a woman titled, “Scientific Eminence: Where Are the Women?”  After some outcries, Sternberg offered to run a follow-up section on the topic.  The selection of papers for that issue (or, rather, lack of selection) is described here by 6 female authors, who had independently come up with similar themes in their (de facto) rejected submissions.

Discussion circling the open letter to APS also referred to Sternberg’s style of introduction and afterward/postscript to the sections.  Nearly all of his special sections have both.  And nearly none of them are traditional introductions concerned with informing readers about the what/why/how of the topic to come.  Rather, nearly all look like articles from the sections themselves – indeed, articles in which the author focuses on his own work (whether central or tangential to the topic), with about half of the 40-60 citations being self-citations.

I believe that this is an editorial abuse of power: Using the label of Introductions/ Afterwards to write articles of several pages extensively extolling and citing oneself rather than focusing on the topic of the section.  (Yes, I wrote introductions to many special sections when I was editor. None had these characteristics. You can look them up.)

But – here’s the thing:  Sternberg has asserted that all papers in Perspectives go out for peer review, including his own introductions and discussions.  I found it difficult to believe that any peer reviewer, or any action editor, would have signed off on what he has published.  And now I’m ready to say:  I don’t believe it.  Aside from the Introduction to the first “famous” special section (which at best went out for a “light” review), I do not believe that Sternberg used anything approaching peer review on his own articles.  (Unless you believe that “peer review” means asking some folks to read it and then deciding whether or not to take their advice before you approve publication of it.)  And I am confident of this “beyond a reasonable doubt” (as we say in the law game).

To Get Back Together

So, APS, before we get back together, I want you to fire Sternberg as Editor of Perspectives.  I would like you to do it because, using the techniques above, he has made the journal, and APS, a laughingstock.  And you should do it before he does so again in his next special section, in which his rambling introduction and postscript take us on tours of his youth and, un-peer reviewed, garner him another 39 self-citations.

For now, I’m going to skip my THIRD reason for quitting APS.  But in some ways they are all of a piece.  APS needs to stop acting like it is the new radical psychological society, reacting against the anti-science of APA, and consisting of 400 friends meeting in someone’s living room.  With 30,000 (now minus one) members, it’s time to take its responsibilities to the membership, and to the science, more seriously.

This was indeed my last issue.  But it doesn’t have to be.  It’s on you.



Posted in Perspectives on Editing, Professional Society Responsibility | 3 Comments

I was a reviewer…

I was a reviewer on two of the manuscripts discussed recently in Simine Vazire’s blog by guest author Katie Corker (on behalf of the six women authors).

(What I say below won’t make sense  unless you have read that first.)

My review of Ase Innes-Ker’s piece is below.  It was written in early March 2017.  Note: I had previously e-mailed with PoPS Editor Sternberg about making sure more voices were heard in this second round of the symposium on scholarly merit.  Also, I was very conscious that the word limit for these papers was much shorter than for the original contributions.  Thus, I believed that expectations for these papers should be different than for the original submissions. (You’ll see more about how I was influenced by that problem in my second review at the bottom.)

(Hmmm….  I can’t believe that I was Reviewer 2 both times.)


Reviewer: 2

Comments to the Author
I think that this manuscript offers some useful / different perspectives on the AIFY symposium.  As I’ve said elsewhere, I wish that the first symposium had been about merit (the alleged point) rather than about fame (what many authors addressed), but with that background I believe it is appropriate (and even necessary) to directly address problems of fame as a metric.

This manuscript does that, although it is somewhat choppy in places and not all arguments are as well supported as they could be.  It relies on some philosophy of science arguments, which adds a nice angle to the discussion.  I noticed that the author is a European non-native English speaker, so some turns of phrase (and punctuation) seem odd but could be fixed easily.

I like the intro and ending with the connections to Stapel and his craving for fame.

And I like that the manuscript connects back to papers from the first AIFY symposium and says that “Am I Famous Yet” is the wrong question to ask.  The manuscript’s answer to what is the right question to ask appears in the following sentences.  Unfortunately, the focus / language there is not consistent with the rest of the paper; they need to be made more consistent to hold the argument together across the entire paper.

I like the connection to Merton – though I suspect he did not have the sort of data that is now available on the distribution of “fame”.  (So, did he “report” or “surmise”?  Would some modern metascience statistics be useful there?)

The Salganik & Watts paper brings up an interesting issue — although I think that the analogy to scientific publishing needs to be made clearer.  We have peer review, we have journals of different quality, and the initial “rankings” of papers probably do carry some merit.  But I do agree with the overarching point that sometimes a paper becomes the paper to cite for proposition X – whether or not it actually supports it.  (E.g., Nisbett & Wilson for anything having to do with not knowing one’s own preferences.)  This section of the manuscript is jumpy – from the S&W study, to the critique of the h-index, to the Srull & Wyer citation investigation.  (And it seems like there is something missing between the first and second sentences in “The problem of metrics”.)  But, again, I do like the big point and totally agree with (and have written about) the problem of how we don’t keep track of what we cite papers FOR; we just keep track of the citation count.  And I like the penultimate sentence of that section – about how rewarding frequency of publication over quality of individual papers can lead to poor scientific practices (some of which should be explained in a bit more detail for readers).

I like the connection to Hull and the Smaldino/McElreath paper.  I think that those two references belong more tightly woven together as being about the evolution of science and its practices.  Then that leads into the importance of community.  Community is, of course, hugely important to the vetting of science, not just in the peer review process but also in the replication, falsification, and theory-advancement processes.  And I agree that for a long while there had been a real problem with the ability of scientists to “expose these hypotheses to severe testing” – and to get such results published – until recently.

On the other hand, although I like the argument about the importance of community, I’m not quite sure how collaborations per se are important.  Cooperation (including researchers sharing methods and data) and competition (alternative theories) = of course, but I’m not sure about “collaboration” (unless that word is being used differently from how I’m interpreting it).  Hull’s “social churning” – that’s a good phrase.

So, I have trouble following the thread at the bottom of p. 5 – middle of p. 6.  I do like the sentences about how the ideas need to be stress-tested not the scientists.  But some of the surrounding bits don’t hang together. E.g., “other ways scientists contribute” would be a great theme for another paper but is not / cannot be fleshed out here.

In short, good ideas, but the argument needs to be more pointed and better stitched together.

Minor comment:
Isn’t Bowie the “first author” of Fame?

I always sign my reviews,
Bobbie Spellman


Here is my review of Fernanda Ferreira’s manuscript.  This was a couple of weeks later, after I had seen a few more submissions.  I saw that several important themes were emerging across papers and, in my view, were not being appropriately appreciated by the other reviewers or the editor.


Reviewer: 2

Comments to the Author
Review of PPS-17-136

At this point, I have read about half a dozen of the submissions for this second round of essays on merit.  (Some as a reviewer, some as a friend.)  There are many recurring themes; thus, a large part of the Action Editor’s job is going to be to select manuscripts that represent the diversity of common viewpoints that were not represented in the initial symposium.

This manuscript covers some ground covered by the others, and has some of the flaws of the others, but it also stands out for several reasons.


1. Balance –
This manuscript strikes a nice balance between adoring fame and demeaning fame.  It describes how fame might (and should) rightfully emerge from doing both good work and good (useful to the field) deeds.
It also strikes a nice balance between engaging with (and citing) some of the papers in the initial symposium and moving beyond them.

Relatedly: I like the distinction on p 4 between the two questions of asking how someone became famous versus what one must do to become famous.  And I like the points about fame as a potentially problematic heuristic (like availability).

2. My favorite part of this manuscript is the description on p. 6 of the “two particularly impressive scholars” in the distinguished speaker series.   I think it nicely captures features of exemplary scientists – including how they challenged entrenched views.

I like the term “infrastructure” to describe all the “other tasks” we do to keep the field running.

Note: However, I don’t think that the author makes the best use of these examples.  In the lead in to the description, she mentions “merit and quality”.  After the description she talks about fame and “reputations.”  And the next paragraph starts by discussing the bases of fame.  I think this set of reflections could be made more coherent to really showcase the point of the examples.

3. Voice
I like the personal voice of this manuscript.
I worry, though, that the manuscript might read a bit “cognitive” – e.g., prizes if they “have contributed more than most to uncovering the nature of psychological processes”; also, among the qualities of good science on pp. 6-7, it doesn’t mention doing work that could be applied, although work being “useful” is mentioned in the first sentence of the conclusion.


1. The manuscript suffers from a common weakness of all the second-round manuscripts:  it makes a lot of claims without data supporting them.  I believe that is a common problem due to (a) the authors wanting to make a lot of points (b) within the constraints of a tight word limit.  The manuscript often appeals to how we all know people who… (e.g., bottom of p. 3 – do good work but not known and vice versa).    And it makes claims about the correlations between merit and fame that might/might not be accurate.

2. The author should re-think the abstract.  I don’t think it focuses on the core messages of this manuscript (and, instead, it mentions a lot of things that the manuscript deals with only tangentially).  It also seems long for such a short paper.

3.  Although I generally enjoyed the writing style, I think the manuscript reads a bit “flabby”.  There are sentences that could use fewer words and a couple of redundancies that could be cut.  I think that would help maintain the focus of the paper.
E.g., top of p. 6 – Let me now return to a point that I made at the beginning of this article = … return to an earlier point…

Also, if there are going to be papers that focus on how the recent-past incentives have gotten us into the current replication mess, the paragraph on “The Dangers of Fame” – or at least the part re: incentives — easily could be removed from this manuscript.

I always sign my reviews,
Bobbie Spellman

Posted in Uncategorized | 1 Comment

A “Council of Psychological Advisors to the President” ?

[A couple of months ago I interviewed for a Fellow position with the SBST — the U.S. Social and Behavioral Sciences Team.  (You know, like the UK’s Behavioral Insights Team, a/k/a “The Nudge Unit”.)  It was a dream job but a nightmare interview.  So, I ditched the dream and decided to go back to an idea we had at Perspectives on Psychological Science a couple of years ago — let’s publish memos to President Obama about using psychological science to inform public policy.  Instructions for submission below.]


The Council of Economic Advisers to the President of the United States is “charged with offering the President objective economic advice on the formulation of both domestic and international economic policy” and “bases its recommendations and analysis on economic research and empirical evidence, using the best data available to support the President in setting our nation’s economic policy.”

Imagine serving on a new “Council of Psychological Advisers” on which you had the chance to send memos to the President offering insights from the best research in psychological science to help solve specific, pressing problems facing society.

Perspectives on Psychological Science is planning a special series of memos by the Council of Psychological Advisers to the President. This is an open call inviting authors to pair a societal problem with a psychological “solution” to make a succinct point about how psychological science can inform policy.

Examples might include (but are not limited to):
— Climate change and affective forecasting
— Inequality and status/hierarchy
— Obesity and self-control
— Water conservation and intertemporal choice

To submit a proposal for consideration for this special series, submit an abstract (250 words maximum) that outlines the central thesis and arguments by September 26, 2014.

Submissions can be made through the journal’s standard web portal.  Please indicate that the submission is for the “Council of Psychological Advisers” series.

We will select approximately 10 abstracts and invite these authors to submit a full piece. The final pieces will be brief (1000-1500 words maximum) and can even use bullet points. (Think of these as actual brief memos – the goal is to make them short, punchy, and accessible.)

Abstracts are due by September 26, 2014, and you will be notified approximately two weeks later if you are invited for a full submission. The completed piece will be due by December 1, 2014. (Note that this is a hard deadline because all memos will be published in the same issue.)

A few tips to keep in mind: the memos should be based on reliable, established findings, be written from a nonpartisan view, and be pitched for a broad audience of both academics and policymakers (not colleagues in your subfield).

Please direct questions to Bethany Teachman and Michael Norton, co-editors of the series.

We look forward to seeing your submissions!

Bethany Teachman, Associate Editor
Michael Norton, Guest Editor
Barbara Spellman, Editor
Perspectives on Psychological Science

Posted in Psychology and Policy | 1 Comment

Barcelona — and Should Revolution 2.0 Go Grassroots?

I discovered something interesting at SPUDM24 last week.  (That is the European Judgement & Decision Making Conference which was held in Barcelona this year.)  I was speaking about things that are happening in psychology having to do with replication and publication when I mentioned that there was something that each person in the (surprisingly large) audience could do on his or her own to help force new norms on the journals:


“When a journal asks you to review an empirical manuscript, write back to the editor and say you will do it only if you can get the data.”

Audible gasp.

Wow.  I hadn’t realized the power of that idea until I said it aloud and saw/heard the reaction.   We are authors AND we are reviewers.  So, if we start asking for the data, we have to be willing to be asked for the data.  And we have to recognize that asking (or asking for all of it) is not appropriate in all cases.  But when you do so appropriately, what is an action editor / journal then to do?

I don’t know whether or not I love this idea.  But I think it’s worth more thought.

Posted in Perspectives on Editing, Research Revolution | Tagged , | 3 Comments

Making It Easier to Submit Your Manuscripts

The other day Retraction Watch described a retraction triggered by the authors’ simultaneous submission to two journals.  A comment asked about how one can go about ethically submitting to multiple journals.  The answer: you can’t.  At least not in science.  (But you can in law; more on that below.)


Part of Research Revolution 2.0 consists of changes in how we publish empirical research: there are now more outlets (print, electronic, open access, etc.) but also more variability in requirements (word length, citation style, providing raw data, disclosure statements, placement of tables and figures, etc.).  These variations might be appropriate for journals, which wish to maintain their own style and standards, but they can be a nightmare (or at least a waste of time) for authors.  You may have followed all guidelines when submitting to Journal A only to get a desk rejection based on novelty or content.  You then re-format to submit to Journal B only to get rejected 3 months later.  Now what?  Certainly, revising is called for before your next try (if you try at all), but why also shorten or lengthen, move materials from online supplement to text, place figures in the text or at the end, and worry about whether you really do need to capitalize (or not) the first letter of every word in article titles?  I believe that there are reasons to slow down the writing / publishing process – but these certainly are not it.

An interesting solution to this problem was suggested to me last week by the wonderful Orit Tykocinski: one stop bidding.  This solution is amusingly similar to how legal academics find homes for their articles.

Here’s the new plan.  You have a manuscript.  You submit it to the psychology website – which is the portal for ALL empirical (or ALL) psychology journals and has one standard format for submission — and you check off which journals are allowed to look at it.  Then you wait.  Soon Journal D says they want to review it and they will get back to you in X days.  You have Y days to either accept or reject that bid.  You must agree that, if accepted, you will make it longer or shorter or whatever necessary for publication in that journal.  When under review at journal D, no other journals can review it. (Though it sure would be interesting to have a version in which other journals could, with knowledge, choose to review a manuscript already under review at another journal.)  D gets your action letter back in X days.  If they accept, you’re happy.  If they say revise & resubmit, then, as usual you decide what to do next.

This way the manuscript goes to a journal that is interested from the start.  As an editor, I would have my consulting editors on the lookout for appropriate manuscripts.  It would make it much easier to create special issues.  And authors wouldn’t have to do so much style revision.

Of course the reason to be under review at only one journal at a time is because we scientists invest so much thought and energy evaluating and reviewing each other’s work.  But check out how it works in legal academia.  You have a manuscript that you submit through a portal.  With the click of a button you can have it sent to 200 law reviews (for a price, but usually your university will have a subscription to the service).  At the law reviews, student editors take a look.  Maybe a student editor from a less-good school e-mails you, “We want it.”  You say, “Give me a few days,” they say, “Three,” and then you immediately e-mail a bunch of somewhat better schools saying, “I have an offer from less-good school and need an answer from you in three days.”  A student editor from a somewhat-better school e-mails you, “We want it.”  You say, “Give me a few days,” they say “Two,” and then you immediately e-mail the good schools…   You bargain up as high as you can and then: Sold.

No, we can’t do that in science.  Those are students and that is not adequate PEER review.  So, no, we can’t go that far.  But we can do better than what we have now.

As I have said before, I believe that the current “crisis” in science owes much to current technology, but I also believe that technology can provide us with some nice help to get out of it.  Although not a critical flaw in the system, fixing this submission irritation can help researchers spend more time where it counts: doing better science.

Posted in Perspectives on Editing, Perspectives on Writing, Research Revolution | Tagged , , , | Leave a comment

Research Revolution 2.0 — The Current “Crisis”: How Technology Got Us Into This Mess and How Technology Will Help Us Out

[I just got back from the APS (Association for Psychological Science) Convention where I spent 5 hours in various symposia on Research Practices and 4 hours in various meetings on what Editors and Journals can (or should) do about what’s going on.  Below I provide an edited version of my 8 minutes speaking in the “Program on Building a Better Psychological Science: Good Data Practices and Replicability” in the section: “View from Editors, Program Officers and Publishers”.]

What we are seeing now is a revolution in Psychological Science.  It’s not a Kuhnian type of revolution: we are not changing our core research theories.  Rather, I think a better analogy is to a political revolution, where there is a structural change in how things are done.  When I decided that was the better analog, I also became much more optimistic that this revolution would be a success.  And I mean “success” in two ways.

One meaning of success is that I believe that this time there really is the momentum to change things.  We know that now is neither the first time there have been “disruptions” in our science (e.g., fraud, failures to replicate, questionable practices), nor the first time that there have been calls to change the way we do psychology (e.g., previous calls to change our statistics, to publish null findings, etc.).  But it hasn’t happened.  Well, I can argue that in every revolution there are precursors – failed rebellions, suppressed uprisings, and storming the barricades.  So, why do I think this time there will be change?

Let’s take a break for a brief quiz.
If you were involved in psychology 25 years ago, how would you answer these questions:
1) Did you ever think that you would be able to run 100 subjects in 1 day?  How about 1000?
2) Did you ever think that you would be able to do all of your data analysis and create all of your graphs for your results section in 1 hour?
3) Did you ever think you would be able to find and have on your computer all of the articles related to your study in 1 minute?
4) Did you ever think that you could send your manuscript around the world, to dozens of readers, in 1 second?

So, what’s the difference now?  Obviously, technology.  We have subject pools that we are getting through MTurk and websites and smartphones.  We have computers that can present stimuli, collect measures, and load it all neatly into a spreadsheet.  We have statistical programs that can handle huge data sets and do dozens of analyses in seconds.  And these programs can generate random data, with specified means and standard deviations that look so much better than “real data” that some people decide to call it exactly that.  Also, we know so much more about what’s going on in other labs, be it what they publish, or what the gossip says.

5) Oh – and since we are celebrating APS’s 25th anniversary, one other thing – Did you ever think that there would be 25,000 members of an organization of scientific psychologists, all trying to do the same thing at the same time?

So now we have more researchers running more experiments, running them more quickly, running more statistics, spreading the word more quickly, and all competing for jobs and grants and publications and fame.  And what all that means is – more trouble.  Yes, the time is right for the revolution.

But here is the second reason that I’m optimistic.  I believe that we are going to come out of this mess a better and more integrated science.  And I think that our journals, yes, with the help of technology, have a huge role to play.

You have already heard editors talk about empirical journals (Barch, Eich, Liben).
Empirical journals can enforce new norms.  For example:
— what needs to be reported with every empirical article (now we have room in online supplements);
— whether researchers should make their data accessible (now there are repositories);
— whether the journal will publish simple replications and/or failures to replicate (now there is more room) – and eliminate the file drawer problem;
— whether the journal will ask people to register their hypotheses, methods, and/or data analysis plans beforehand — thus eliminating HARKing (Hypothesizing After Results are Known) and p-hacking.

But as great as all that would be for assuring the integrity of our data – the foundation of our knowledge – I think we also need to be doing more to amalgamate and synthesize our knowledge.  I don’t know about everyone else, but I often think there is just too much information for me to wrap my head around.  (In my office, I have a print of a New Yorker cartoon in which a mother says to her tearful daughter: “It’s all right, sweetie.  In the information age, everyone feels stupid.”)

And here I believe that the theory and review journals, with the help of technology, can help.  I think we can do a lot to encourage combining, challenging, and linking our science.

Combining:  (1) Perspectives has begun our Registered Replication Reports initiative (with Dan Simons and Alex Holcombe as Associate Editors).  Researchers propose an experiment to be replicated and justify why the study deserves the time and effort to do so.  Then, with the original author, they develop a tight protocol for what it would mean to do as exact a replication as possible.  When that’s set, we post it and open it up for other labs to join and run the studies.  We publish it in Perspectives, regardless of outcome.  By having lots of labs we get away from some of those “what does replication mean?” questions.  We can get a good sense of effect size and even check out some moderators (such as whether it matters if the lab’s researchers are believers or non-believers in the effect).  Recently we went public with the first proposal, regarding Verbal Overshadowing.  Two weeks later we had 15 labs, in four different countries, wanting to be involved.

(2) Perspectives has always published meta-analyses and will continue to do so.  But now, because there are more ways to publish, or at least post, simple replications and failures to replicate (e.g., on PsychFileDrawer), these analyses should be less likely to suffer from file drawer problems.
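For the curious, the basic arithmetic behind pooling many labs’ results is simple inverse-variance weighting.  A minimal fixed-effect sketch (the function name and interface are my own illustration, not any journal’s tooling):

```python
import math


def fixed_effect_meta(effects: list[float],
                      variances: list[float]) -> tuple[float, float]:
    """Pool per-lab effect sizes by inverse-variance (fixed-effect) weighting.

    effects:   each lab's effect-size estimate
    variances: each estimate's sampling variance (smaller = more precise)
    Returns (pooled effect, standard error of the pooled effect).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se


# Two equally precise labs: the pooled estimate is just their average,
# with a smaller standard error than either lab achieved alone.
pooled, se = fixed_effect_meta([0.5, 0.3], [0.04, 0.04])  # → (0.4, ~0.141)
```

Because every participating lab’s result is published regardless of outcome, a registered multi-lab pooling like this sidesteps the file drawer bias that plagues ordinary meta-analyses.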

Challenging:  I think we should have more formats for true discussion and debates about theory, in which researchers can more directly engage back-and-forth.  For example, there should be more theoretical adversarial collaborations like that of Kahneman and Klein (2009).  Perspectives has tried some things like that: the mirror-neuron forum of a few years ago, and an upcoming pair of articles in the July 2013 issue in which one person questioned not another’s research but, rather, its interpretation, wrote a long enumerated critique, and then the other had a chance to write a long reply.

Oh, and by the way, I think one thing researchers (especially older researchers) have to get over is the love of print journal space.  Every time Perspectives publishes a controversial piece, people demand that I publish their comments and letters.  No, we need to be doing more of this discussing online – faster and in public.  And maybe we need to count references to that type of publication as “impact”.

Linking:  With all this information – more research, more conferences, more journals, more alternative outlets — I think we must do better to make sure it doesn’t fragment.  We need to make better connections both back to the past and across the present.  You’ve heard the mention of reinventing the wheel – researchers failing to reference relevant past studies.  There was a move to shorten reference sections, but now, again, we have the space to do things online and, even better, we have digital links.  We should be ensuring that our science accumulates.  We also should be looking for connections across fields.  I once published an article called “A Tale of Two Literatures” showing how parallel research in cognitive psychology and social cognition never referenced each other — perhaps because they (intentionally?) used different terms for similar research.  More such parallels should be discovered.  And I am a big fan of adding to the way we do our citations.  We should not just be sticking in names without making it clear why we are citing the study.  Just background, used the methods, found the same thing, or totally disagree?  Not all citations are equal, and we could do a better job keeping track of how papers are related.
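The typed-citation idea in the last few sentences can be made concrete.  A minimal sketch (the role names and record format are my own illustration — no such standard exists in our journals today):

```python
from dataclasses import dataclass

# Hypothetical citation roles, echoing the question above: just background,
# used the methods, found the same thing, or totally disagree?
ROLES = {"background", "uses_method", "replicates", "disputes"}


@dataclass(frozen=True)
class TypedCitation:
    citing: str  # identifier of the citing paper (e.g., a DOI)
    cited: str   # identifier of the cited paper
    role: str    # WHY the paper is being cited

    def __post_init__(self) -> None:
        if self.role not in ROLES:
            raise ValueError(f"unknown citation role: {self.role}")


def role_counts(citations: list[TypedCitation], cited: str) -> dict[str, int]:
    """How a paper is cited, not just how often — the point of typed citations."""
    counts = {role: 0 for role in ROLES}
    for c in citations:
        if c.cited == cited:
            counts[c.role] += 1
    return counts
```

With records like these, a citation count of 100 that is mostly “disputes” would finally look different from one that is mostly “replicates”.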

These are some of the roles I see for journals and editors – building a sturdier and more integrated science.  That, I think, would be a good, and successful, revolution.

(Question for next time: Is this not so much a research revolution but, rather, a civil war?)

Posted in Meta-analysis, Perspectives on Editing, Research Revolution | Tagged , , | 12 Comments

3… 2… 1… Liftoff — A Dream Come True — Registered Replication Reports Take Off

A million… I mean three and a half years ago, when I wrote my incoming editor’s editorial at Perspectives on Psychological Science (DOI: 10.1177/1745691609356780), I said that I wanted to encourage new types of articles that I thought would help our field grow stronger and faster.  One of them was dubbed “The File Drawer” and I wrote: “What I envision is … the Editorial Board identifies topics: phenomena that researchers have not been able to replicate. Next, we appoint lead researchers: people who will collect the mostly unpublished failures and write an analysis of what was done, what was (or was not) found, etc.  Finally, the authors who published the original research would be given a chance to respond.”

We (Hal Pashler, Tony Greenwald, and I) identified a study to replicate and contacted the original author early on, but he seemed so unnerved by the process that we paused to re-group.  In the meantime, Hal and I developed PsychFileDrawer, where researchers can individually post their attempted replications (both successes and failures).

Then flash forward three years to when Dan Simons and Alex Holcombe proposed what has become the Registered Replication Reports initiative — a way to get teams of researchers to try to replicate important studies with the cooperation of the original authors.  OF COURSE Perspectives should host and publish such articles.

For more on the backstory of the creation of RRR see:

For more about the pushback I’ve gotten to the replication project see:

We are teamed with the Open Science Framework where projects will be developed and shared.

To get started on your very own replication research report, or to join one already in progress, go to:

And if you want some ideas for experiments to replicate, take a look at PsychFileDrawer’s top-20 list of studies users would like to see replicated.

And now…. for our very first public launch… whose study will it be?  3… 2… 1…    You can find out here.

Posted in Research Revolution | 2 Comments