Research Revolution 2.0 — The Current “Crisis”: How Technology Got Us Into This Mess and How Technology Will Help Us Out

[I just got back from the APS (Association for Psychological Science) Convention, where I spent 5 hours in various symposia on Research Practices and 4 hours in various meetings on what Editors and Journals can (or should) do about what’s going on.  Below I provide an edited version of my 8 minutes of speaking in the “Program on Building a Better Psychological Science: Good Data Practices and Replicability” in the section: “View from Editors, Program Officers and Publishers”.]

What we are seeing now is a revolution in Psychological Science.  It’s not a Kuhnian type of revolution: we are not changing our core research theories.  Rather, I think a better analogy is to a political revolution, where there is a structural change in how things are done.  When I decided that was the better analogy, I also became much more optimistic that this revolution would be a success.  And I mean “success” in two ways.

One meaning of success is that I believe that this time there really is the momentum to change things.  We know that now is neither the first time there have been “disruptions” in our science (e.g., fraud, failures to replicate, questionable practices), nor the first time that there have been calls to change the way we do psychology (e.g., previous calls to change our statistics, to publish null findings, etc.).  But it hasn’t happened.  Well, I can argue that in every revolution there are precursors – failed rebellions, suppressed uprisings, and storming the barricades.  So, why do I think this time there will be change?

Let’s take a break for a brief quiz.
If you were involved in psychology 25 years ago, how would you answer these questions:
1) Did you ever think that you would be able to run 100 subjects in 1 day?  How about 1000?
2) Did you ever think that you would be able to do all of your data analysis and create all of your graphs for your results section in 1 hour?
3) Did you ever think you would be able to find and have on your computer all of the articles related to your study in 1 minute?
4) Did you ever think that you could send your manuscript around the world, to dozens of readers, in 1 second?

So, what’s the difference now?  Obviously, technology.  We have subject pools that we are getting through MTurk and websites and smartphones.  We have computers that can present stimuli, collect measures, and load it all neatly into a spreadsheet.  We have statistical programs that can handle huge data sets and do dozens of analyses in seconds.  And these programs can generate random data, with specified means and standard deviations that look so much better than “real data” that some people decide to call it exactly that.  Also, we know so much more about what’s going on in other labs, be it what they publish, or what the gossip says.

5) Oh – and since we are celebrating APS’s 25th anniversary, one other thing – Did you ever think that there would be 25,000 members of an organization of scientific psychologists, all trying to do the same thing at the same time?

So now we have more researchers running more experiments, running them more quickly, running more statistics, spreading the word more quickly, and all competing for jobs and grants and publications and fame.  And what all that means is – more trouble.  Yes, the time is right for the revolution.

But here is the second reason that I’m optimistic.  I believe that we are going to come out of this mess a better and more integrated science.  And I think that our journals, yes, with the help of technology, have a huge role to play.

You have already heard editors talk about empirical journals (Barch, Eich, Liben).
Empirical journals can enforce new norms.  For example:
— what needs to be reported with every empirical article (now we have room in online supplements);
— whether researchers should make their data accessible (now there are repositories);
— whether the journal will publish simple replications and/or failures to replicate (now there is more room) – and eliminate the file drawer problem;
— whether the journal will ask people to register their hypotheses, methods, and/or data analysis plans beforehand — thus eliminating HARKing (Hypothesizing After Results are Known) and p-hacking.

But as great as all that would be for assuring the integrity of our data – the foundation of our knowledge – I think we also need to be doing more to amalgamate and synthesize our knowledge.  I don’t know about everyone else, but I often think there is just too much information for me to wrap my head around.  (In my office, I have a print of a New Yorker cartoon in which a mother says to her tearful daughter: “It’s all right, sweetie.  In the information age, everyone feels stupid.”)

And here I believe that the theory and review journals, with the help of technology, can help.  I think we can do a lot to encourage combining, challenging, and linking our science.

Combining:  (1) Perspectives has begun our Registered Replication Reports initiative (with Dan Simons and Alex Holcombe as Associate Editors).  Researchers propose an experiment to be replicated and justify why the study deserves the time and effort to do so.  Then, with the original author, they develop a tight protocol for what it would mean to do as exact a replication as possible.  When that’s set, we post it and open it up for other labs to join and run the studies.  We publish it in Perspectives, regardless of outcome.  By having lots of labs we get away from some of those “what does replication mean?” questions.  We can get a good sense of effect size and even check out some moderators (like whether it matters if the lab members are believers or non-believers in the effect).  Recently we went public with the first proposal, regarding Verbal Overshadowing.  Two weeks later we had 15 labs, in four different countries, wanting to be involved.

(2) Perspectives has always published meta-analyses and will continue to do so.  But now because there are more ways to publish, or at least post (e.g., psychfiledrawer.org), simple replications and failures to replicate, these analyses should be less likely to suffer from file drawer problems.

Challenging:  I think we should have more formats for true discussion and debate about theory, in which researchers can engage more directly back-and-forth.  For example, there should be more theoretical adversarial collaborations like that of Kahneman and Klein (2009).  Perspectives has tried some things like that: the mirror-neuron forum of a few years ago and an upcoming pair of articles in the July 2013 issue in which one person questioned not another’s research but, rather, its interpretation, wrote a long enumerated critique, and then the other had a chance to write a long reply.

Oh, and by the way, I think one thing researchers (especially older researchers) have to get over is the love of printed journal space.  Every time Perspectives publishes a controversial piece, people demand that I publish their comments and letters.  No, we need to be doing more of this discussing online – faster and more public.  And maybe we need to count references to that type of publication as “impact”.

Linking:  With all this information – more research, more conferences, more journals, more alternative outlets – I think we must do better to make sure it doesn’t fragment.  We need to make better connections, both back to the past and across in the present.  You’ve heard the mention of reinventing the wheel – researchers failing to reference relevant past studies.  There was a move to shorten reference sections, but now, again, we have the space to do things online and, even better, we have digital links.  We should be ensuring that our science accumulates.  We also should be looking for connections across fields.  I once published an article called “A Tale of Two Literatures” showing how parallel research in cognitive psychology and social cognition never referenced each other – perhaps because they (intentionally?) used different terms for similar research.  More such parallels should be discovered.  And I am a big fan of adding to the way we do our citations.  We should not just stick in names; we should make clear why we are citing the study.  Just background, used the methods, found the same thing, or totally disagree?  Not all citations are equal, and we could do a better job keeping track of how papers are related.

These are some of the roles I see for journals and editors – building a sturdier and more integrated science.  That, I think, would be a good, and successful, revolution.

(Question for next time: Is this not so much a research revolution but, rather, a civil war?)


12 Responses to Research Revolution 2.0 — The Current “Crisis”: How Technology Got Us Into This Mess and How Technology Will Help Us Out

  1. Pingback: …when the Revolution comes! | Åse Fixes Science

  2. asehelene says:

    Great post! And, I really hope it will be a velvet revolution and not a civil war. I’m trying to do my part at my university, by nudging in new research practices etc, though I’m not particularly powerful. But, they seem to pay at least some attention to me.

  3. Very interesting. Another way technology helps us out: with blogs like this one. They make communication about research a lot faster, more informal, and more personal than comments in journals. This quantitative increase seems to create a new quality, a new level of interdependence. But we need to be careful to maintain a civil discourse in this fast medium.

  4. J. M. says:

    About combining things:

    1) “Perspectives has begun our Registered Replications Reports initiative ”

    2) Please see recent F1000prime initiative of waiving article processing charge for “negative/null results” (http://blog.f1000research.com/2013/05/24/scientific-quality-in-negative-results-comments-please/)

    Would/could that be an interesting initiative/extension for Perspectives as well? (i.c. use both of these ideas and extend the “Registered Replication Reports Initiative” with a “Registered Negative/Null Reports Initiative”).

    • J. M. says:

      About combining things – part 2:

      1) “Perspectives has begun our Registered Replications Reports initiative ”

      2) Registered Negative/Null Reports Initiative.

      Boom !!: “Registered Negative/Null Replications Report initiative”.

      (Would that be useful/interesting? i.c. comparing a Registered “positive/there was a significant effect found in the original study” Replication Report initiative with a Registered “negative/there was no significant effect found in the original study” Replication Report initiative?)

  5. J. M. says:

    About online discussion, combining, and linking (part 2):

    1) Does (or will) PoPS allow online post-publication comments?
    2) If yes, would it be possible to in turn link/also transfer these comments to a site like PubPeer (http://pubpeer.com/), so as to provide a single, most complete spot to view post-publication discussion of articles?

    (side note: PubPeer has been set up to provide (anonymous) post-publication peer discussion and contains a database with every single article published with a DOI. So, irrespective of whether the journal allows for post-publication comments: discussion of publications seems possible via that site).

    • E. Star says:

      I think PubPeer might be able to offer an API (I am not computer-savvy, but I conjecture it might relate to this post in some way or form?).

      Their twitter feed (https://twitter.com/PubPeer) has some recent comments on it, which might be interesting to check, should it be seen as possibly interesting/ useful to think about further (if not done so already).

  6. Pingback: Core Economics | Now you see it, now you don’t: On the deepening crisis in evidence production, and evaluation, in the social sciences (Part I: Problem description)

  7. E.Star says:

    I forgot to say one more thing: thank you for keeping a blog like this one!! It’s very nice of you to do so, thereby giving people interested in science the possibility to read about what happens at these kinds of symposia, and about (psychological) science in general. All the best!

  8. Pingback: Making It Easier to Submit Your Manuscripts | My PoPS

  9. Anonymous says:

    “Linking”: Hey, that’s what I like to do! Thank you for all your efforts in trying to help improve psychological science: 1) the replication special issue of PoPS was “legendary” to me, and 2) reading your post here a few years ago has been influential to me.

    I hope you don’t mind me sharing my version of combined ideas and sources about a possible “Research Revolution 2.0”-research format:

    1) Small groups of let’s say 5 researchers all working on the same theory/topic/construct perform a pilot study/exploratory study and at one point make it clear for themselves and the other members of the group to have their work rigorously tested.

    2) These 5 studies will then all be pre-registered and prospectively replicated in a round-robin fashion (possibly think about how technology has made this much easier, e.g. using the OSF).

    3) You would hereby end up with 5 (what perhaps often can be seen as “conceptual” replications depending on how far you want to go to consider something a “conceptual” replication) studies, that will all have been “directly” replicated 4 times (+ 1 version via the original researcher, which makes a total of 5).

    4) All results will be published, no matter the outcome, in a single paper: for instance “Ego-depletion: Round 1”. This paper then includes 5 different “conceptual” studies (probably varying in degree of how “conceptual” they are, e.g. see LeBel et al.’s “falsifiability is not optional” paper), which will all have been “directly” replicated.

    Also possibly think about how much easier it would be for researchers to keep up with the information (c.f. your paper “Scientific Utopia…or too much information”): single papers, consisting of multiple “conceptual” and “direct” replications named “Ego Depletion: Round 1”, “Ego Depletion: Round 2, etc.

    5) All members of the team of 5 researchers would then come up with their own follow-up study, possibly (partly) related to the results of the “first round”. The process repeats itself as long as deemed fruitful.

    Additional thoughts related to this format which might be interesting regarding recent discussions and events in psychological science:

    1) Possibly think how this format could influence the discussions about “creativity”, “science being messy” and the acceptance of “null-results”.

    Researchers using this format could each come up with their own ideas for each “round” (creativity), there would be a clear demarcation between pilot-studies/exploratory studies and testing it in a confirmatory fashion (“science is messy”), and this could also contribute to publishing and “doing something” with possible null-results concerning inferences and conclusions (acceptance of “null-results”).

    2) Possibly think about how this format could influence the discussion about how there may be too much information (i.c. Simonsohn’s “let’s publish fewer papers” and your paper “Scientific Utopia…or too much information”).

    Let’s say it’s reasonable that researchers can try and run 5 studies a year (2 years?) given time and resources (50-100 pp per study per individual researcher). That would mean that a group of researchers using this format could publish a single paper every 1 or 2 years (“let’s publish fewer papers”), but this paper would be highly informational given that it would be relatively highly-powered (5 x 50-100 pp = 250-500 pp per study), and would contain both “conceptual” and “direct” replications.

    3) Possibly think about how this format could influence the discussion about “expertise” and “reverse p-hacking”/deliberately wanting to find a “null result” concerning replications.
    Perhaps every member of these small groups would be inclined to a) “put forward” the “best” experiment they want to rigorously test using this format, and b) execute the replication part of the format (i.c. the replications of the other members’ studies) with great attention and effort, because they would be incentivized to do so. This is because “optimally” gathered information coming from this format (e.g. both significant and non-significant findings) would be directly helpful to them for coming up with study proposals for the next round (e.g. see LeBel et al.’s “falsifiability is not optional” paper).

    4) Possibly think about how this format could influence the discussion about “a single study almost never provides definitive evidence for or against an effect”, and problems if interpreting “single p-values”. Also see Fisher, 1926, p. 83: “A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance.”

    5) Possibly think about how this format could influence the discussion about the problematic grant culture in academia. Small groups of collaborating researchers could write grant proposals together, and funding agencies would give their money to multiple researchers who each contribute their own ideas. Both things would contribute to psychological science becoming less competitive and more collaborative.

    6) The overall process of this format would entail a clear distinction between post-hoc theorizing and theory testing (c.f. Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), “rounds” of theory building, testing, and reformulation (cf. Wallander, 1992), and could be viewed as a systematic manner of data collection (cf. Chow, 2002).

    7) Finally, it might also be interesting to note that this format could lead to interesting meta-scientific information as well. For instance, perhaps the findings of a later “round” will turn out to be more replicable due to enhanced accurate knowledge about a specific theory or phenomenon. Or perhaps it will show that the typically devastating process of research into psychological phenomena and theories described by Meehl (1978) will be cut off sooner, or will follow a different path.

Leave a comment