Barcelona — and Should Revolution 2.0 Go Grassroots?

I discovered something interesting at SPUDM24 last week.  (That is the European Judgement & Decision Making Conference which was held in Barcelona this year.)  I was speaking about things that are happening in psychology having to do with replication and publication when I mentioned that there was something that each person in the (surprisingly large) audience could do on his or her own to help force new norms on the journals:

“When a journal asks you to review an empirical manuscript, write back to the editor and say you will do it only if you can get the data.”

Audible gasp.

Wow.  I hadn’t realized the power of that idea until I said it aloud and saw/heard the reaction.   We are authors AND we are reviewers.  So, if we start asking for the data, we have to be willing to be asked for the data.  And we have to recognize that asking (or asking for all of it) is not appropriate in all cases.  But when you do so appropriately, what is an action editor / journal then to do?

I don’t know whether or not I love this idea.  But I think it’s worth more thought.

Posted in Perspectives on Editing, Research Revolution

Making It Easier to Submit Your Manuscripts

The other day Retraction Watch described a retraction triggered by the authors’ simultaneous submission to two journals.  A commenter asked how one can ethically submit to multiple journals.  The answer: you can’t.  At least not in science.  (But you can in law; more on that below.)

Part of Research Revolution 2.0 consists of changes in how we publish empirical research: there are now more outlets (print, electronic, open access, etc.) but also more variability in requirements (word length, citation style, providing raw data, disclosure statements, placement of tables and figures, etc.).  These variations might be appropriate for journals, which wish to maintain their own style and standards, but they can be a nightmare (or at least a waste of time) for authors.  You may have followed all guidelines when submitting to Journal A only to get a desk rejection based on novelty or content.  You then re-format to submit to Journal B only to get rejected 3 months later.  Now what?  Certainly, revising is called for before your next try (if you try at all), but why also shorten or lengthen the paper, move materials from the online supplement to the text, shift figures into the text or to the end, and worry about whether you really do need to capitalize (or not) the first letter of every word in article titles?  I believe that there are reasons to slow down the writing / publishing process – but these certainly are not among them.

An interesting solution to this problem was suggested to me last week by the wonderful Orit Tykocinski: one-stop bidding.  This solution is amusingly similar to how legal academics find homes for their articles.

Here’s the new plan.  You have a manuscript.  You submit it to the psychology website – which is the portal for ALL empirical (or ALL) psychology journals and has one standard format for submission — and you check off which journals are allowed to look at it.  Then you wait.  Soon Journal D says they want to review it and they will get back to you in X days.  You have Y days to either accept or reject that bid.  You must agree that, if accepted, you will make it longer or shorter or whatever is necessary for publication in that journal.  While it is under review at Journal D, no other journals can review it.  (Though it sure would be interesting to have a version in which other journals could, with knowledge, choose to review a manuscript already under review at another journal.)  Journal D gets your action letter back to you in X days.  If they accept, you’re happy.  If they say revise & resubmit, then, as usual, you decide what to do next.

This way the manuscript goes to a journal that is interested from the start.  As an editor, I would have my consulting editors on the lookout for appropriate manuscripts.  It would make it much easier to create special issues.  And authors wouldn’t have to do so much style revision.

Of course, the reason to be under review at only one journal at a time is that we scientists invest so much thought and energy evaluating and reviewing each other’s work.  But check out how it works in legal academia.  You have a manuscript that you submit through a portal.  With the click of a button you can have it sent to 200 law reviews (for a price, but usually your university will have a subscription to the service).  At the law reviews, student editors take a look.  Maybe a student editor from a less-good school e-mails you, “We want it.”  You say, “Give me a few days,” they say, “Three,” and then you immediately e-mail a bunch of somewhat better schools saying, “I have an offer from less-good school and need an answer from you in three days.”  A student editor from a somewhat-better school e-mails you, “We want it.”  You say, “Give me a few days,” they say “Two,” and then you immediately e-mail the good schools…   You bargain up as high as you can and then: Sold.

No, we can’t do that in science.  Those are students and that is not adequate PEER review.  So, no, we can’t go that far.  But we can do better than what we have now.

As I have said before, I believe that the current “crisis” in science owes much to current technology, but I also believe that technology can provide us with some nice help to get out of it.  Although this submission irritation is not a critical flaw in the system, fixing it can help researchers spend more time where it counts: doing better science.

Posted in Perspectives on Editing, Perspectives on Writing, Research Revolution

Research Revolution 2.0 — The Current “Crisis”: How Technology Got Us Into This Mess and How Technology Will Help Us Out

[I just got back from the APS (Association for Psychological Science) Convention where I spent 5 hours in various symposia on Research Practices and 4 hours in various meetings on what Editors and Journals can (or should) do about what’s going on.  Below I provide an edited version of my 8 minutes speaking in the “Program on Building a Better Psychological Science: Good Data Practices and Replicability” in the section: “View from Editors, Program Officers and Publishers”.]

What we are seeing now is a revolution in Psychological Science.  It’s not a Kuhnian type of revolution: we are not changing our core research theories.  Rather, I think a better analogy is to a political revolution, where there is a structural change in how things are done.  When I decided that was the better analog, I also became much more optimistic that this revolution would be a success.  And I mean “success” in two ways.

One meaning of success is that I believe that this time there really is the momentum to change things.  We know that now is neither the first time there have been “disruptions” in our science (e.g., fraud, failures to replicate, questionable practices), nor the first time that there have been calls to change the way we do psychology (e.g., previous calls to change our statistics, to publish null findings, etc.).  But it hasn’t happened.  Well, I can argue that in every revolution there are precursors – failed rebellions, suppressed uprisings, and storming the barricades.  So, why do I think this time there will be change?

Let’s take a break for a brief quiz.
If you were involved in psychology 25 years ago, how would you answer these questions:
1) Did you ever think that you would be able to run 100 subjects in 1 day?  How about 1000?
2) Did you ever think that you would be able to do all of your data analysis and create all of your graphs for your results section in 1 hour?
3) Did you ever think you would be able to find and have on your computer all of the articles related to your study in 1 minute?
4) Did you ever think that you could send your manuscript around the world, to dozens of readers, in 1 second?

So, what’s the difference now?  Obviously, technology.  We have subject pools that we reach through MTurk and websites and smartphones.  We have computers that can present stimuli, collect measures, and load it all neatly into a spreadsheet.  We have statistical programs that can handle huge data sets and do dozens of analyses in seconds.  And these programs can generate random data, with specified means and standard deviations, that look so much better than “real data” that some people decide to call them exactly that.  Also, we know so much more about what’s going on in other labs, be it what they publish or what the gossip says.

5) Oh – and since we are celebrating APS’s 25th anniversary, one other thing – Did you ever think that there would be 25,000 members of an organization of scientific psychologists, all trying to do the same thing at the same time?

So now we have more researchers running more experiments, running them more quickly, running more statistics, spreading the word more quickly, and all competing for jobs and grants and publications and fame.  And what all that means is – more trouble.  Yes, the time is right for the revolution.

But here is the second reason that I’m optimistic.  I believe that we are going to come out of this mess a better and more integrated science.  And I think that our journals, yes, with the help of technology, have a huge role to play.

You have already heard editors talk about empirical journals (Barch, Eich, Liben).
Empirical journals can enforce new norms.  For example:
– what needs to be reported with every empirical article (now we have room in online supplements);
– whether researchers should make their data accessible (now there are repositories);
– whether the journal will publish simple replications and/or failures to replicate (now there is more room) – and eliminate the file drawer problem;
– whether the journal will ask people to register their hypotheses, methods, and/or data analysis plans beforehand — thus eliminating HARKing (Hypothesizing After Results are Known) and p-hacking.

But as great as all that would be for assuring the integrity of our data – the foundation of our knowledge – I think we also need to be doing more to amalgamate and synthesize our knowledge.  I don’t know about everyone else, but I often think there is just too much information for me to wrap my head around.  (In my office, I have a print of a New Yorker cartoon in which a mother says to her tearful daughter: “It’s all right, sweetie.  In the information age, everyone feels stupid.”)

And here I believe that the theory and review journals, with the help of technology, can help.  I think we can do a lot to encourage combining, challenging, and linking our science.

Combining:  (1) Perspectives has begun our Registered Replication Reports initiative (with Dan Simons and Alex Holcombe as Associate Editors).  Researchers propose an experiment to be replicated and justify why the study deserves the time and effort to do so.  Then, with the original author, they develop a tight protocol for what it would mean to do as exact a replication as possible.  When that’s set, we post it and open it up for other labs to join and run the studies.  We publish it in Perspectives, regardless of outcome.  By having lots of labs we get away from some of those “what does replication mean?” questions.  We can get a good sense of effect size and even check out some moderators (such as whether it matters if the lab’s members are believers or non-believers in the effect).  Recently we went public with the first proposal, regarding Verbal Overshadowing.  Two weeks later we had 15 labs, in four different countries, wanting to be involved.

(2) Perspectives has always published meta-analyses and will continue to do so.  But now because there are more ways to publish, or at least post (e.g., psychfiledrawer.org), simple replications and failures to replicate, these analyses should be less likely to suffer from file drawer problems.

Challenging:  I think we should have more formats for true discussion and debate about theory, in which researchers can engage more directly back-and-forth.  For example, there should be more theoretical adversarial collaborations like that of Kahneman and Klein (2009).  Perspectives has tried some things like that: the mirror-neuron forum of a few years ago, and an upcoming pair of articles in the July 2013 issue in which one person questioned not another’s research but, rather, its interpretation, wrote a long enumerated critique, and then the other had a chance to write a long reply.

Oh, and by the way, I think one thing researchers (especially older researchers) have to get over is the love of print journal space.  Every time Perspectives publishes a controversial piece, people demand that I publish their comments and letters.  No, we need to be doing more of this discussing online – faster and in public.  And maybe we need to count references to that type of publication as “impact”.

Linking:  With all this information – more research, more conferences, more journals, more alternative outlets – I think we must do better to make sure it doesn’t fragment.  We need to make better connections, both back to the past and across the present.  You’ve heard the mention of reinventing the wheel – researchers failing to reference relevant past studies.  There was a move to shorten reference sections, but now, again, we have the space to do things online and, even better, we have digital links.  We should be ensuring that our science accumulates.  We also should be looking for connections across fields.  I once published an article called “A Tale of Two Literatures” showing how parallel research in cognitive psychology and social cognition never referenced each other – perhaps because they (intentionally?) used different terms for similar research.  More such parallels should be discovered.  And I am a big fan of adding to the way we do our citations.  We should not just be sticking in names; we should have to make it clear why we are citing each study.  Just background, used the methods, found the same thing, or totally disagree?  Not all citations are equal, and we could do a better job keeping track of how papers are related.
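To make the typed-citation idea concrete, here is a hypothetical sketch (the class names and the DOI are my own invention; only the four relation categories come from the paragraph above) of what a citation record with an explicit relation type might look like:

```python
from enum import Enum

# Hypothetical relation types, taken from the post's examples:
# background, used the methods, found the same thing, totally disagree.
class Relation(Enum):
    BACKGROUND = "background"
    USED_METHODS = "used the methods"
    SAME_FINDING = "found the same thing"
    DISAGREES = "totally disagrees"

class Citation:
    """A citation that records WHY a paper is being cited, not just that it is."""
    def __init__(self, cited_doi, relation, note=""):
        self.cited_doi = cited_doi
        self.relation = relation
        self.note = note

# Example: citing a (made-up) paper because we reused its paradigm.
c = Citation("10.1000/example", Relation.USED_METHODS,
             note="adopted the priming procedure from Exp. 1")
print(c.cited_doi, "->", c.relation.value)
```

With records like this, a literature database could answer questions such as "which papers replicated this finding?" rather than merely counting mentions.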

These are some of the roles I see for journals and editors – building a sturdier and more integrated science.  That, I think, would be a good, and successful, revolution.

(Question for next time: Is this not so much a research revolution but, rather, a civil war?)

Posted in Meta-analysis, Perspectives on Editing, Research Revolution

3… 2… 1… Liftoff — A Dream Come True — Registered Replication Reports Take Off

A million… I mean three and a half years ago, when I wrote my incoming editor’s editorial at Perspectives on Psychological Science (DOI: 10.1177/1745691609356780), I said that I wanted to encourage new types of articles that I thought would help our field grow stronger and faster.  One of them was dubbed “The File Drawer” and I wrote: “What I envision is … the Editorial Board identifies topics: phenomena that researchers have not been able to replicate. Next, we appoint lead researchers: people who will collect the mostly unpublished failures and write an analysis of what was done, what was (or was not) found, etc.  Finally, the authors who published the original research would be given a chance to respond.”

We (Hal Pashler, Tony Greenwald, and I) identified a study to replicate and contacted the original author early on but he seemed so unnerved by the process that we paused to re-group.  In the meantime, Hal and I developed psychfiledrawer.org where researchers can individually post their attempted replications (both successes and failures).

Then flash forward three years to when Dan Simons and Alex Holcombe proposed what has become the Registered Replication Reports initiative — a way to get teams of researchers to try to replicate important studies with the cooperation of the original authors.  OF COURSE Perspectives should host and publish such articles.

For more on the backstory of the creation of RRR see:  http://blog.dansimons.com/2013/05/registered-replication-reports-stay.html

For more about the pushback I’ve gotten to the replication project see: http://wp.me/p2Wics-1o

We are teamed with the Open Science Framework where projects will be developed and shared.

To get started on your very own replication research report, or to join one already in progress, go to:  http://www.psychologicalscience.org/index.php/replication/ongoing-projects

And if you want some ideas for experiments that people would like to see replicated, take a look at psychfiledrawer’s top-20 list of studies users would like to see replicated.

And now…. for our very first public launch… whose study will it be?  3… 2… 1…    You can find out here.

Posted in Research Revolution

“But I don’t want people to try to replicate my research.”

If you are reading this blog, you have probably seen some of the news about the new Registered Replication Reports to appear in Perspectives on Psychological Science: http://www.psychologicalscience.org/index.php/replication  But something you probably haven’t seen, or heard…  A few months ago, I was at a meeting describing this PoPS initiative and a senior researcher said, in front of two dozen other folks,

“But I don’t want people to try to replicate my research.”

There was a hush.  And I had to stop myself from saying, “Wait.  You mean that you want your papers growing musty on shelves and unaccessed in cyberspace?”

Then I realized that THAT was not the fear.  Rather, this researcher feared that the “replication movement” could be out to get her.  She thought that if people were trying to replicate her work, it would mean that they were targeting her and “trying” to show that it was non-reproducible.  In fact, during that meeting the project was called “McCarthyism” not once but twice.

I know that some of you may be against the “replication movement” in psychology.  I assure you that this PoPS project is not meant to be any kind of “debunking” of particular research.  Rather, we intend to involve the original authors, and involve labs that believe, disbelieve, and are neutral about the existence and generalizability and size of the effects.  We also intend to involve all areas of psychology.

And, we certainly do not intend to advocate that this is the only way science should be done.  There are upcoming articles in Perspectives that will “put replication in its place.”  But I believe that replication should be more valued as a tool than it currently is.  And there seems to be a wave across all the biological sciences (especially the medical sciences) agreeing.

Better humor about replication was evinced a few weeks ago at a discussion at the Columbia University Department of Psychology.  Speaker Niall Bolger showed the page on psychfiledrawer.org where researchers can nominate and vote for studies that they would like to see replicated.  Here’s the top-20 list:  http://www.psychfiledrawer.org/top-20/

“Look,” I said happily, “I’m number 9.”  But Kevin Ochsner seemed proud to beat me at number 8.  Others strained to find their own names on the list.

Oscar Wilde once said: “The only thing worse than being talked about is not being talked about.”  Shhh…. Don’t tell anyone but — I was the first to nominate my experiment for the replication top-20 list.

Posted in Meta-analysis, Research Revolution

My First Post-Revolution Action Letter

I’m a little embarrassed to admit it – but the first post-revolution action letter I’ve seen is one I got as an author, not one that I sent as an action editor.

What do I mean by a post-revolution action letter?  I mean one that incorporates some of the scientific values that we have been vigorously discussing in psychology over the past year.  (E.g., in the Perspectives November special issue.)  In particular, I mean the values of not creating post-hoc hypotheses, replicating surprising results, and publishing interesting studies (and replications) regardless of how they turn out.

I was the fourth author on an empirical paper with one experiment that included a priming manipulation and some messy results that did NOT confirm our initial hypothesis but did suggest something else interesting.  What did the action letter (and reviewers) say?

1) After noting that the research was interesting, the reviewers called for some type of replication of the surprising results – and were not put off by the fact that we did not confirm our original hypothesis.

2) The Action Editor wrote two lovely things. First, he said he preferred a direct replication.  He said:  “I like the fact that you propose a hypothesis and fail to find support for it (rather than invent post hoc a hypothesis that is supported). However, I also think this surprising finding (in view of your own expectation) calls for a direct replication. The combination of the two experiments would be very compelling.”

3) And second: “The direct replication would be confirmatory in nature and its outcome would not determine eventual acceptance of the paper. Performing the replication attempt and reporting it in the manuscript is sufficient.”

So, hats off to Rolf Zwaan and the anonymous reviewers.

Posted in Perspectives on Writing, Research Revolution

Meta-Analyses Want YOU !

Meta-analyses are an important way of compiling our knowledge.  They help us find true effect sizes and moderators and mediators.  But meta-analyses that consist only of published results may be very biased.  We need a way to make sure that people conducting meta-analyses can find the studies in other people’s file drawers — however those studies turned out.  If you are conducting a meta-analysis, please contact this blog and I will post a notice of it.  (Or you can post a comment here.)
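To see why the file drawer matters so much, here is a minimal sketch of fixed-effect, inverse-variance pooling – the core arithmetic behind most meta-analyses.  The studies and numbers are invented purely for illustration:

```python
def pool_effects(effects, variances):
    """Inverse-variance weighted mean effect size and its variance.

    Each study is weighted by 1/variance, so precise studies count more.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Published studies only (the file drawer stays closed):
published_d = [0.45, 0.52, 0.38]
published_var = [0.04, 0.05, 0.03]

# Now add two unpublished null results from colleagues' file drawers:
all_d = published_d + [0.02, -0.05]
all_var = published_var + [0.06, 0.04]

print(pool_effects(published_d, published_var))  # pooled d ~ 0.44
print(pool_effects(all_d, all_var))              # pooled d ~ 0.28 - noticeably smaller
```

With only the published studies the pooled effect looks robust; once the two null results are included it shrinks by more than a third – which is exactly why meta-analysts need access to unpublished studies.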

Note: Perspectives DOES publish meta-analyses but there is no guarantee that ones mentioned in this blog will be published in the journal.

Starting the list now (and then moving comments up here, too):

1. Meta-analysis on the effect of induced disgust on moral judgment.
Especially interested in studies in which disgust is induced by a manipulation that is separate from the moral judgment itself (e.g., through disgusting images, hypnosis, filthy environment).  Get in touch if you have any relevant data.

By: Justin Landy and Geoff Goodwin, University of Pennsylvania
Email for more information: landyj at psych.upenn.edu

2. Meta-analysis on the effect of weight on judgment based on the effect found in the following article: Jostmann, N. B., Lakens, D., & Schubert, T. W. (2009). Weight as an embodiment of importance. Psychological Science, 20, 1169-1174. Studies have to involve randomisation, a weight manipulation and a judgment or choice.

By: Steven Raaijmakers, Tilburg University
Email for more information: stevenraaijmakers [at] gmail.com

3. Meta-analysis on the accuracy of emotion recognition in depression compared to controls using face stimuli. I would be interested if anyone has any unpublished data they could share or any suggested works for inclusion.

By: Michael Dalili, University of Bristol
Email: michael.dalili@bristol.ac.uk

Posted in Meta-analysis