At 30, APS Needs to Realize That It Has Grown Up

APS is celebrating its 30th annual convention in a few days.  I promised to write before then about the things I want to see happen at APS that would get me to rejoin the organization that I was involved in, and loved, for many years.

I finally recognized that my gripes have a common theme:  At 30, APS needs to realize that it has grown up.  It started as a small group that believed APA was not representing the interests of psychological science (as opposed to psychological practice).  Early on, everyone knew each other.  And perhaps everyone had common goals.

But you can’t run an organization of 35,000 people, who are now much more diverse in geography, age, field, and other demographics and interests, the same way you run an organization of 400.  APS is no longer the “upstart”; it has gained visibility and power, and needs to act accordingly.

So, below are my three categories of things that need(ed) changing:

  1. I wanted Sternberg gone as Editor of Perspectives. You may have heard that one before; and it was accomplished; so, I’m putting that at the end of this post (to be skipped if sated).
  2. Openness. I’ve mentioned that I want elections to provide more information about the candidates. They are supposed to be our representatives, so they should have the space to make claims about what they are representing.  But, that is not the only type of openness I’m suggesting.
  3. Not following bylaws. No one reads the bylaws of organizations any more. Except maybe people who were on the board and happened to be lawyers.  Like me.  APS disregards many of its own bylaws in the way that a closely held corporation might; the kind of small organization run by a family, where everyone knows each other. And maybe that was fine way back when. But it is not a good practice any more.

2. Openness

APS states that among its guiding principles are:

  • “Transparency, openness, and reproducibility are vital elements in advancing scientific knowledge.”

I don’t want to discuss here how well APS has implemented these values in its journals and conventions. (Clearly, it has done some very good things and failed at some other things.)  But – what about in the governance of the organization?  Not good.  Some examples:

  • As mentioned above, candidates for board positions do not present their views. Hence, election by popularity (of person, or area, or group affinity). Candidates should have the room to write more and to address questions posed by APS members.
  • Membership of committees. A few months ago, people wanted to know who was on the Publication Committee. You can’t find that on the website.

The other committees: “The standing Committees shall consist of the following: Awards, Convention, Elections, Finance, Membership, and Publications.”
You can find the fellows and awards committees for each award (when you dig down a few levels to make a nomination).
You cannot find the convention committee through the convention links, but you can if you do a search.  https://www.psychologicalscience.org/2018-program-committee
I have no idea how to find the members of the Elections Committee though maybe they are mentioned when soliciting nominations or when voting occurs.  (Either I never knew or don’t remember.)
Maybe it should look more like the Psychonomic Society site?

  • Name the Action Editors for journal articles – at LEAST if the action editor is the Editor in Chief. (Psych Science and AMPPS do that by EIC choice.)
  • And, not exactly openness, but someone should connect the pieces:

I believe that there should be a new position at APS: Psychological Science Rotator.  This could be a mid-level person with a psych Ph.D. who advises APS and helps it consider the BIG PICTURE from the view of a psychological scientist.  This might include things like reading some of the material (e.g., journal article summaries; promotional materials) before it is sent out; considering interconnections between areas within psychological science; and helping to craft reactions to public policy questions.

Right now, Andy DeSoto (Ph.D., 2015) is at APS (maybe part time, shared with TheLab@DC; I’m not sure).  He was a “Methodology Fellow” (2015-16) and is now “Assistant Director for Policy”.  Andy seems great, and those positions seem good for a junior person, but a more senior person, with a regular position, might make sense as well.  Board members and committees have very limited (by time and/or content) interaction with the workings of APS.  I’ve suggested such a position in the past; some folks think it’s a great idea, some think it’s not necessary to have anyone at APS with advanced psychology degrees.

3. Follow Your Own Rules – or change them.

The APS by-laws say the following:
“Periodically, but at intervals no greater than every five years, the Board of Directors shall appoint a special committee to review the then-current Bylaws and the operation and structure of the Association and to make recommendations about them to the Board.”

Has that ever happened? I’m wondering because the rules prescribe term limits on various board positions, committee members, and committee chair terms, and I’m certain that some of those limits (maybe most, maybe all) have been violated.  For example, the previous chair of the publication committee served for a very long time.  (And did mostly a very good job.  But – the closed corporation/family at work.)

Another example: Here you can see the Board of Directors.  https://www.psychologicalscience.org/about/board-of-directors

Members vote for the president and for the members-at-large.  What about the treasurer and secretary?  As in many organizations, those are appointed.
“The Board of Directors shall annually appoint a Secretary and a Treasurer who need not be Members-at-Large and who shall serve as ex-officio members of the Board without voting privileges unless they are Members-at-Large… The Secretary is eligible for reappointment up to three years and the Treasurer is eligible for reappointment up to six years.”

The current Treasurer, Roberta Klatzky, has served since at least 2000.  I like her a lot; as far as I can tell she does a good job.  And it is certainly useful for a Treasurer to hold the position for a while.  But… 18 years?

The current Secretary has served since 2011.  The Secretary has no real responsibilities (because APS staff does what a secretary might do) and no vote.  A long time ago, the President appointed junior faculty as Secretary to have a younger perspective in the room.  Mahzarin Banaji was a secretary; I was a secretary (and for more than 3 years). In more recent years, the picks were to add older wise voices (e.g., past president Linda Bartoshuk, then Anne Treisman) and then to add geographic diversity because Europeans were not winning Board seats (Gun Semin).  But now there is geographic diversity on the Board and, even if not, there is certainly no reason to keep the same person in that position for 7 years.

So those are my more technical gripes about how APS should be conducting itself as it hits 30.  Then there was my less technical one.  (Feel free to skip.)

– – – – – – – – – – – – – – – – – – – – –
1. Back to Sternberg…

The first thing I wanted to happen at APS before I rejoin was removing Sternberg as Editor of Perspectives.  That was accomplished not by APS but, rather, by Sternberg resigning because of… Well, pick your reason.  But he resigned.  I am sad he stayed so long and that APS took no action (or no visible action) after complaints about him were raised over a year ago.  I am also sad that, at least in part because of him, a new journal was created – Advances in Methods and Practices in Psychological Science – rather than keeping that material part of Perspectives.  I thought that Perspectives could go to more issues per year and have “sections”, like JPSP, with different editors.  Shared methods – and, at least, appreciating the strengths and weaknesses of the methods we do and do not use – are something that ties our field together and helps us move forward.  I would have liked to see methods integrated into a content journal to showcase their relevance to every psychologist.  (This has nothing to do with the choice of Dan Simons as Editor of AMPPS – I’m quite sure he knows that I am a big fan of his.)

– – – – – – – – – – – – – – – – – – – – – – –

I believe that APS has done some great things in its 30 years. Even some in the last 5.  But it could certainly be doing better and I hope its members will be able to have more of a voice in how to make it so.

 

Posted in Uncategorized | Leave a comment

Dear APS: It’s not me, it’s YOU !

This year I decided not to rejoin APS.  I’ve been a member for many years, am a Fellow, was on the board, and have been a CE, AE, and Editor of various APS journals.  Last week, I got an e-mail begging me to come back.  I had three reasons not to rejoin but didn’t talk about them publicly because I thought it was just me.  Now it’s clear: it’s not me, APS, it’s YOU.

If you want us to get back together, there are three things you need to do.

The SECOND of the three reasons was recently mentioned by several people on social media.  APS elections.  APS sent out links for voting (or so I’m told, not being a current member). It is silly that along with the ballot, APS provides little information about the candidates.  They give candidates space to indicate various bits of affiliation and service.  But there are no vision statements, nothing about priorities or initiatives, and certainly nothing about the contentious issue of science reform.  The election becomes a popularity-of-sorts contest.  (So, turns out, it wasn’t only me who has been displeased with that process.  It’s you.)

Watching, Sad and Embarrassed

The FIRST, and major, of the three reasons is the decay of the journal Perspectives on Psychological Science and APS’s continued failure (despite much complaining) to do anything about it.  Again, I thought this was only my worry – having been the previous EIC, of course I wouldn’t like all of the changes.  But a few weeks ago, some Facebook groups concerned with methods in psychology were filled with disdain as they saw Editor Sternberg publish yet another invited special section (this time nearly a full issue): (1) extolling the virtue of citation counts, (2) with nearly all US male authors, and (3) with a foreword and afterword by himself that barely talked about the invited articles and instead was filled with gratuitous self-citations.  These actions resulted in an open letter to APS assembled by Chris Crandall and signed by about 160 people.

The letter notes the focus of the articles and selection of authors – and refers back to how similar concerns were raised after the infamous “Am I Famous Yet” special section, in which a special section on “Merit” in psychology mostly devolved into a section about becoming famous, with 6 articles by men and 1 first-authored article by a woman titled, “Scientific Eminence: Where Are the Women?”  After some outcries, Sternberg offered to have a follow-up section on the topic.  The selection of papers for that issue (or, rather, lack of selection) is described here by 6 female authors, who had independently come up with similar themes in their (de facto) rejected submissions.

Discussion surrounding the open letter to APS also referred to Sternberg’s style of introduction and afterword/postscript to the sections.  Nearly all special sections have both.  And nearly all of them are not traditional introductions concerned with informing readers about the what/why/how of the topic to come.  Rather, nearly all look like articles from the sections themselves – indeed, articles where the author focuses on his own work (whether central or tangential to the topic), with about half of the 40-60 citations being self-citations.

I believe that this is an editorial abuse of power: using the label of Introductions/Afterwords to write articles of several pages extensively extolling and citing oneself rather than focusing on the topic of the section.  (Yes, I wrote introductions to many special sections when I was editor. None had these characteristics. You can look them up.)

But – here’s the thing:  Sternberg has asserted that all papers in Perspectives go out for peer review, including his own introductions and discussions.  I found it difficult to believe that any peer reviewer, or any action editor, would have signed off on what he has published.  And now I’m ready to say:  I don’t believe it.  Aside from the Introduction to the first special “famous” special section (which at best went out for a “light” review), I do not believe that Sternberg used anything approaching peer review on his own articles.  (Unless you believe that “peer review” means asking some folks to read it and then deciding whether or not to take their advice before you approve publication of it.)  And, I am confident of this “beyond a reasonable doubt” (as we say in the law game).

To Get Back Together

So, APS, before we get back together, I want you to fire Sternberg as Editor of Perspectives.  I would like you to do it because, using the techniques above, he has made the journal, and APS, a laughingstock.  And you should do it before he does so again in his next special section, in which his rambling introduction and postscript take us on tours of his youth and, un-peer reviewed, garner him another 39 self-citations.

For now, I’m going to skip my THIRD reason for quitting APS.  But in some ways they are all of a piece.  APS needs to stop acting like it is the new radical psychological society, reacting against the anti-science of APA, and consisting of 400 friends meeting in someone’s living room.  With 30,000 (now minus one) members, it’s time to take its responsibilities to the membership, and to the science, more seriously.

This was indeed my last issue.  But it doesn’t have to be.  It’s on you.


 

Posted in Perspectives on Editing, Professional Society Responsibility | 18 Comments

I was a reviewer…

I was a reviewer on two of the manuscripts discussed recently in Simine Vazire’s blog by guest author Katie Corker (on behalf of the six women authors).

(What I say below won’t make sense  unless you have read that first.)

My review of Åse Innes-Ker’s piece is below.  It was early March 2017.  Note: I had previously e-mailed with PoPS Editor Sternberg about making sure more voices were heard in this second round of the symposium on scholarly merit.  Also, I was very conscious that the word limit for these papers was much shorter than for the original contributions.  Thus, I believed that expectations for these papers should be different than for the original submissions. (You’ll see more about how I was influenced by that problem in my second review at the bottom.)

(Hmmm….  I can’t believe that I was Reviewer 2 both times.)

———-

Reviewer: 2

Comments to the Author
I think that this manuscript offers some useful / different perspectives on the AIFY symposium.  As I’ve said elsewhere, I wish that the first symposium had been about merit (the alleged point) rather than about fame (what many authors addressed), but with that background I believe it is appropriate (and even necessary) to directly address problems of fame as a metric.

This manuscript does that, although it is somewhat choppy in places and not all arguments are as well supported as they could be.  It relies on some philosophy of science arguments, which adds a nice angle to the discussion.  I noticed that the author is a European non-native English speaker, so some turns of phrase (and punctuation) seem odd but could be fixed easily.

I like the intro and ending with the connections to Stapel and his craving for fame.

And I like that the manuscript connects back to papers from the first AIFY symposium and says that “Am I Famous Yet” is the wrong question to ask.  The manuscript’s answer to what is the right question to ask appears in the following sentences.  Unfortunately, the focus / language there is not consistent with the rest of the paper; they need to be made more consistent to hold the argument together across the entire paper.

I like the connection to Merton – though I suspect he did not have the sort of data that is now available on the distribution of “fame”.  (So, did he “report” or “surmise”?  Would some modern metascience statistics be useful there?)

The Salganik & Watts paper brings up an interesting issue — although I think that the analogy to scientific publishing needs to be made clearer.  We have peer review, we have journals of different quality, and the initial “rankings” of papers probably do carry some merit.  But I do agree with the overarching point that sometimes a paper becomes the paper to cite for proposition X – whether or not it actually supports it.  (E.g., Nisbett & Wilson for anything having to do with not knowing one’s own preferences.)  This section of the manuscript is jumpy – from the S&W study, to the critique of the h-index, to the Srull & Wyer citation investigation.  (And it seems like there is something missing between the first and second sentences in “The problem of metrics”.)  But, again, I do like the big point and totally agree with (and have written about) the problem of how we don’t keep track of what we cite papers FOR; we just keep track of the citation count.  And I like the penultimate sentence of that section – about how rewarding frequency of publication over quality of individual papers can lead to poor scientific practices (some of which should be explained in a bit more detail to the readers).

I like the connection to Hull and the Smaldino/McElreath paper.  I think that those two references belong more tightly woven together as being about the evolution of science and its practices.  Then that leads into the importance of community.  Community is, of course, hugely important to the vetting of science, not just in the peer review process but also in the replication, falsification, and theory-advancement processes.  And I agree that for a long while there had been a real problem with the ability of scientists to “expose these hypotheses to severe testing” – and to get such results published – until recently.

On the other hand, although I like the argument about the importance of community, I’m not quite sure how collaborations per se are important.  Cooperation (including researchers sharing methods and data) and competition (alternative theories) = of course, but I’m not sure about “collaboration” (unless that word is being used differently from how I’m interpreting it).  Hull’s “social churning” – that’s a good phrase.

So, I have trouble following the thread at the bottom of p. 5 – middle of p. 6.  I do like the sentences about how the ideas need to be stress-tested not the scientists.  But some of the surrounding bits don’t hang together. E.g., “other ways scientists contribute” would be a great theme for another paper but is not / cannot be fleshed out here.

In short, good ideas, but the argument needs to be more pointed and better stitched together.

Minor comment:
Isn’t Bowie the “first author” of Fame?

I always sign my reviews,
Bobbie Spellman

———-

Here is my review of Fernanda’s manuscript.  This was a couple of weeks later, after I had seen a few more submissions.  I saw that several important themes were emerging across papers and, in my view, were not being appropriately appreciated by other reviewers or the editor.

———-

Reviewer: 2

Comments to the Author
Review of PPS-17-136

At this point, I have read about half a dozen of the submissions for this second round of essays on merit.  (Some as a reviewer, some as a friend.)  There are many recurring themes; thus, a large part of the Action Editor’s job is going to be to select manuscripts that represent the diversity of common viewpoints that were not represented in the initial symposium.

This manuscript covers some ground covered by the others, and has some of the flaws of the others, but it also stands out for several reasons.

Strengths:

1. Balance –
This manuscript strikes a nice balance between adoring fame and demeaning fame.  It describes how fame might (and should) rightfully emerge from doing both good work and good (useful to the field) deeds.
It also strikes a nice balance between engaging with (and citing) some of the papers in the initial symposium and moving beyond them.

Relatedly: I like the distinction on p 4 between the two questions of asking how someone became famous versus what one must do to become famous.  And I like the points about fame as a potentially problematic heuristic (like availability).

2. My favorite part of this manuscript is the description on p. 6 of the “two particularly impressive scholars” in the distinguished speaker series.   I think it nicely captures features of exemplary scientists – including how they challenged entrenched views.

I like the term “infrastructure” to describe all the “other tasks” we do to keep the field running.

Note: However, I don’t think that the author makes the best use of these examples.  In the lead in to the description, she mentions “merit and quality”.  After the description she talks about fame and “reputations.”  And the next paragraph starts by discussing the bases of fame.  I think this set of reflections could be made more coherent to really showcase the point of the examples.

3. Voice
I like the personal voice of this manuscript.
I worry, though, that the manuscript might read a bit “cognitive” – e.g., prizes if they “have contributed more than most to uncovering the nature of psychological processes”; also, among the qualities of good science on pp. 6-7 it doesn’t mention doing work that could be applied, although work being “useful” is mentioned in the first sentence of the conclusion.

Weaknesses:

1. The manuscript suffers from a common weakness of all the second-round manuscripts:  it makes a lot of claims without data supporting them.  I believe that is a common problem due to (a) the authors wanting to make a lot of points (b) within the constraints of a tight word limit.  The manuscript often appeals to how we all know people who… (e.g., bottom of p. 3 – do good work but not known and vice versa).    And it makes claims about the correlations between merit and fame that might/might not be accurate.

2. The author should re-think the abstract.  I don’t think it focuses on the core messages of this manuscript (and, instead, mentions a lot of things that the manuscript deals with tangentially).  It also seems long for such a short paper.

3.  Although I generally enjoyed the writing style, I think the manuscript reads a bit “flabby”.  There are sentences that could use fewer words and a couple of redundancies that could be cut.  I think that would help maintain the focus of the paper.
E.g., top of p. 6 – Let me now return to a point that I made at the beginning of this article = … return to an earlier point…

Also, if there are going to be papers that focus on how the recent-past incentives have gotten us into the current replication mess, the paragraph on “The Dangers of Fame” – or at least the part re: incentives — easily could be removed from this manuscript.

I always sign my reviews,
Bobbie Spellman

Posted in Uncategorized | 1 Comment

A “Council of Psychological Advisors to the President” ?

[A couple of months ago I interviewed for a Fellow position with the SBST — the U.S. Social and Behavioral Sciences Team.  (You know, like the UK’s Behavioral Insights Team, a/k/a “The Nudge Unit”.)  It was a dream job but a nightmare interview.  So, I ditched the dream and decided to go back to an idea we had at Perspectives on Psychological Science a couple of years ago — let’s publish memos to President Obama about using psychological science to inform public policy.  Instructions for submission below.]

CALL FOR PROPOSALS:  SPECIAL SERIES ON PSYCHOLOGICAL SCIENCE AND POLICY

The Council of Economic Advisers to the President of the United States is “charged with offering the President objective economic advice on the formulation of both domestic and international economic policy” and “bases its recommendations and analysis on economic research and empirical evidence, using the best data available to support the President in setting our nation’s economic policy.”

Imagine serving on a new “Council of Psychological Advisers” on which you had the chance to send memos to the President offering insights from the best research in psychological science to help solve specific, pressing problems facing society.

Perspectives on Psychological Science is planning a special series of memos by the Council of Psychological Advisers to the President. This is an open call inviting authors to pair a societal problem with a psychological “solution” to make a succinct point about how psychological science can inform policy.

Examples might include (but are not limited to):
— Climate change and affective forecasting
— Inequality and status/hierarchy
— Obesity and self-control
— Water conservation and intertemporal choice

To submit a proposal for consideration for this special series, submit an abstract (250 words maximum) that outlines the central thesis and arguments by September 26, 2014.

Submissions can be made through the journal’s standard web portal entrance (http://mc.manuscriptcentral.com/pps). Please indicate that the submission is for the “Council of Psychological Advisers” series.

We will select approximately 10 abstracts and invite these authors to submit a full piece. The final pieces will be brief (1000-1500 words maximum) and can even use bullet points. (Think of these as actual brief memos – the goal is to make them short, punchy, and accessible.)

Abstracts are due by September 26, 2014, and you will be notified approximately two weeks later if you are invited for a full submission. The completed piece will be due by December 1, 2014. (Note that this is a hard deadline because all memos will be published in the same issue.)

A few tips to keep in mind: the memos should be based on reliable, established findings, be written from a nonpartisan view, and be pitched for a broad audience of both academics and policymakers (not colleagues in your subfield).

Please direct questions to Bethany Teachman (bat5x@virginia.edu) and Michael Norton (mnorton@hbs.edu), co-editors of the series.

We look forward to seeing your submissions!

Bethany Teachman, Associate Editor
Michael Norton, Guest Editor
Barbara Spellman, Editor
Perspectives on Psychological Science

Posted in Psychology and Policy | 1 Comment

Barcelona — and Should Revolution 2.0 Go Grassroots?

I discovered something interesting at SPUDM24 last week.  (That is the European Judgement & Decision Making Conference which was held in Barcelona this year.)  I was speaking about things that are happening in psychology having to do with replication and publication when I mentioned that there was something that each person in the (surprisingly large) audience could do on his or her own to help force new norms on the journals:


“When a journal asks you to review an empirical manuscript, write back to the editor and say you will do it only if you can get the data.”

Audible gasp.

Wow.  I hadn’t realized the power of that idea until I said it aloud and saw/heard the reaction.   We are authors AND we are reviewers.  So, if we start asking for the data, we have to be willing to be asked for the data.  And we have to recognize that asking (or asking for all of it) is not appropriate in all cases.  But when you do so appropriately, what is an action editor / journal then to do?

I don’t know whether or not I love this idea.  But I think it’s worth more thought.

Posted in Perspectives on Editing, Research Revolution | Tagged , | 3 Comments

Making It Easier to Submit Your Manuscripts

The other day Retraction Watch described a retraction triggered by the authors’ simultaneous submission to two journals.  A comment asked about how one can go about ethically submitting to multiple journals.  The answer: you can’t.  At least not in science.  (But you can in law; more on that below.)


Part of Research Revolution 2.0 consists of changes in how we publish empirical research: there are now more outlets (print, electronic, open access, etc.) but also more variability in requirements (word length, citation style, providing raw data, disclosure statements, placement of tables and figures, etc.).  These variations might be appropriate for journals, which wish to maintain their own style and standards, but they can be a nightmare (or at least a waste of time) for authors.  You may have followed all guidelines when submitting to Journal A only to get a desk rejection based on novelty or content.  You then re-format to submit to Journal B only to get rejected 3 months later.  Now what?  Certainly, revising is called for before your next try (if you try at all), but why also shortening or lengthening, moving materials from online supplement to text, placing figures in the text or at the end, and worrying whether you really do need to capitalize (or not) the first letter in every word of article titles.  I believe that there are reasons to slow down the writing / publishing process – but these certainly are not it.

An interesting solution to this problem was suggested to me last week by the wonderful Orit Tykocinski: one stop bidding.  This solution is amusingly similar to how legal academics find homes for their articles.

Here’s the new plan.  You have a manuscript.  You submit it to the psychology website – which is the portal for ALL empirical (or ALL) psychology journals and has one standard format for submission — and you check off which journals are allowed to look at it.  Then you wait.  Soon Journal D says they want to review it and they will get back to you in X days.  You have Y days to either accept or reject that bid.  You must agree that, if accepted, you will make it longer or shorter or whatever necessary for publication in that journal.  When under review at journal D, no other journals can review it. (Though it sure would be interesting to have a version in which other journals could, with knowledge, choose to review a manuscript already under review at another journal.)  D gets your action letter back in X days.  If they accept, you’re happy.  If they say revise & resubmit, then, as usual you decide what to do next.
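
For the programmers in the audience: the plan above is really just a state machine with an exclusive-review lock.  Here is a tiny sketch of it (the `Portal` class and its method names are entirely hypothetical – no such system exists; this just models the rules: author checks off journals, a journal bids, and while a bid is accepted no other journal may review):

```python
class Portal:
    """Hypothetical one-stop submission portal modeling the bidding rules."""

    def __init__(self):
        self.allowed = {}    # manuscript -> journals the author checked off
        self.reviewing = {}  # manuscript -> journal holding the exclusive review

    def submit(self, ms, journals):
        """Author submits once and checks off which journals may look at it."""
        self.allowed[ms] = set(journals)

    def can_bid(self, journal, ms):
        """A journal may bid only if checked off and no review is underway."""
        return journal in self.allowed.get(ms, set()) and ms not in self.reviewing

    def accept_bid(self, ms, journal):
        """Author accepts a bid; the manuscript is locked to that journal."""
        if not self.can_bid(journal, ms):
            raise ValueError("bid not available")
        self.reviewing[ms] = journal

    def decision(self, ms, accepted):
        """The action letter comes back; the lock is released either way."""
        journal = self.reviewing.pop(ms)
        if accepted:
            del self.allowed[ms]  # published -- off the market
        return journal, accepted
```

Usage mirrors the story: Journal D bids, the author accepts, and Journal C cannot touch the manuscript until D’s decision comes back; after a rejection, the manuscript is back on the market.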

This way the manuscript goes to a journal that is interested from the start.  As an editor, I would have my consulting editors on the lookout for appropriate manuscripts.  It would make it much easier to create special issues.  And authors wouldn’t have to do so much style revision.

Of course the reason to be under review at only one journal at a time is because we scientists invest so much thought and energy evaluating and reviewing each other’s work.  But check out how it works in legal academia.  You have a manuscript that you submit through a portal.  With the click of a button you can have it sent to 200 law reviews (for a price, but usually your university will have a subscription to the service).  At the law reviews, student editors take a look.  Maybe a student editor from a less-good school e-mails you, “We want it.”  You say, “Give me a few days,” they say, “Three,” and then you immediately e-mail a bunch of somewhat better schools saying, “I have an offer from less-good school and need an answer from you in three days.”  A student editor from a somewhat-better school e-mails you, “We want it.”  You say, “Give me a few days,” they say “Two,” and then you immediately e-mail the good schools…   You bargain up as high as you can and then: Sold.

No, we can’t do that in science.  Those are students and that is not adequate PEER review.  So, no, we can’t go that far.  But we can do better than what we have now.

As I have said before, I believe that the current “crisis” in science owes much to current technology, but I also believe that technology can provide us with some nice help to get out of it.  Although not a critical flaw in the system, fixing this submission irritation can help researchers spend more time where it counts: doing better science.

Posted in Perspectives on Editing, Perspectives on Writing, Research Revolution

Research Revolution 2.0 — The Current “Crisis”: How Technology Got Us Into This Mess and How Technology Will Help Us Out

[I just got back from the APS (Association for Psychological Science) Convention, where I spent 5 hours in various symposia on research practices and 4 hours in various meetings on what editors and journals can (or should) do about what’s going on.  Below I provide an edited version of my 8 minutes of remarks in the program “Building a Better Psychological Science: Good Data Practices and Replicability”, in the section “View from Editors, Program Officers and Publishers”.]

What we are seeing now is a revolution in Psychological Science.  It’s not a Kuhnian type of revolution: we are not changing our core research theories.  Rather, I think a better analogy is to a political revolution, where there is a structural change in how things are done.  When I decided that was the better analog, I also became much more optimistic that this revolution would be a success.  And I mean “success” in two ways.

One meaning of success is that I believe that this time there really is the momentum to change things.  We know that now is neither the first time there have been “disruptions” in our science (e.g., fraud, failures to replicate, questionable practices), nor the first time that there have been calls to change the way we do psychology (e.g., previous calls to change our statistics, to publish null findings, etc.).  But it hasn’t happened.  Well, I can argue that in every revolution there are precursors – failed rebellions, suppressed uprisings, and storming the barricades.  So, why do I think this time there will be change?

Let’s take a break for a brief quiz.
If you were involved in psychology 25 years ago, how would you answer these questions:
1) Did you ever think that you would be able to run 100 subjects in 1 day?  How about 1000?
2) Did you ever think that you would be able to do all of your data analysis and create all of your graphs for your results section in 1 hour?
3) Did you ever think you would be able to find and have on your computer all of the articles related to your study in 1 minute?
4) Did you ever think that you could send your manuscript around the world, to dozens of readers, in 1 second?

So, what’s the difference now?  Obviously, technology.  We have subject pools that we are getting through MTurk and websites and smartphones.  We have computers that can present stimuli, collect measures, and load it all neatly into a spreadsheet.  We have statistical programs that can handle huge data sets and do dozens of analyses in seconds.  And these programs can generate random data, with specified means and standard deviations that look so much better than “real data” that some people decide to call it exactly that.  Also, we know so much more about what’s going on in other labs, be it what they publish, or what the gossip says.

5) Oh – and since we are celebrating APS’s 25th anniversary, one other thing – Did you ever think that there would be 25,000 members of an organization of scientific psychologists, all trying to do the same thing at the same time?

So now we have more researchers running more experiments, running them more quickly, running more statistics, spreading the word more quickly, and all competing for jobs and grants and publications and fame.  And what all that means is – more trouble.  Yes, the time is right for the revolution.

But here is the second reason that I’m optimistic.  I believe that we are going to come out of this mess a better and more integrated science.  And I think that our journals, yes, with the help of technology, have a huge role to play.

You have already heard editors talk about empirical journals (Barch, Eich, Liben).
Empirical journals can enforce new norms.  For example:
— what needs to be reported with every empirical article (now we have room in online supplements);
— whether researchers should make their data accessible (now there are repositories);
— whether the journal will publish simple replications and/or failures to replicate (now there is more room) – thus shrinking the file drawer problem;
— whether the journal will ask people to register their hypotheses, methods, and/or data analysis plans beforehand — thus eliminating HARKing (Hypothesizing After Results are Known) and p-hacking.

But as great as all that would be for assuring the integrity of our data – the foundation of our knowledge – I think we also need to be doing more to amalgamate and synthesize our knowledge.  I don’t know about everyone else, but I often think there is just too much information for me to wrap my head around.  (In my office, I have a print of a New Yorker cartoon in which a mother says to her tearful daughter: “It’s all right, sweetie.  In the information age, everyone feels stupid.”)

And here I believe that the theory and review journals, with the help of technology, can help.  I think we can do a lot to encourage combining, challenging, and linking our science.

Combining:  (1) Perspectives has begun our Registered Replication Reports initiative (with Dan Simons and Alex Holcombe as Associate Editors).  Researchers propose an experiment to be replicated and justify why the study deserves the time and effort.  Then, with the original author, they develop a tight protocol for what it would mean to do as exact a replication as possible.  When that’s set, we post it and open it up for other labs to join and run the studies.  We publish the report in Perspectives, regardless of outcome.  By having lots of labs we get away from some of those “what does replication mean?” questions.  We can get a good sense of effect size and even check out some moderators (like whether it matters if a lab’s researchers are believers or non-believers in the effect).  Recently we went public with the first proposal, regarding Verbal Overshadowing.  Two weeks later we had 15 labs, in four different countries, wanting to be involved.

(2) Perspectives has always published meta-analyses and will continue to do so.  But now because there are more ways to publish, or at least post (e.g., psychfiledrawer.org), simple replications and failures to replicate, these analyses should be less likely to suffer from file drawer problems.
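Pooling results across many labs is, at its simplest, an inverse-variance-weighted average: each lab’s estimate is weighted by its precision.  A minimal sketch, with made-up per-lab numbers – this is only the textbook fixed-effect calculation, not the analysis plan of any actual RRR:

```python
import math

# Hypothetical per-lab results: (effect estimate, standard error).
labs = [
    (0.42, 0.15),
    (0.10, 0.20),
    (0.25, 0.12),
    (0.33, 0.18),
]

# Fixed-effect (inverse-variance) pooling: each lab is weighted by
# the precision of its estimate, 1 / se^2.
weights = [1.0 / se**2 for _, se in labs]
pooled = sum(w * est for (est, _), w in zip(labs, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(round(pooled, 3), round(pooled_se, 3))  # 0.287 0.077
```

Note how the pooled standard error (0.077) is smaller than any single lab’s – the statistical payoff of many labs running the same protocol.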

Challenging:  I think we should have more formats for true discussion and debates about theory, in which researchers can more directly engage back-and-forth.  For example, there should be more theoretical adversarial collaborations like that of Kahneman and Klein (2009).  Perspectives has tried some things like that: the mirror-neuron forum of a few years ago, and an upcoming pair of articles in the July 2013 issue in which one person questioned not the research but, rather, the interpretation of another, wrote a long enumerated critique, and the other then had a chance to write a long reply.

Oh, and by the way, I think one thing researchers (especially older researchers) have to get over is the love of print journal space.  Every time Perspectives publishes a controversial piece, people demand that I publish their comments and letters.  No: we need to be doing more of this discussion online – faster and in public.  And maybe we need to count references to that type of publication as “impact”.

Linking:  With all this information – more research, more conferences, more journals, more alternative outlets — I think we must do better to make sure our science doesn’t fragment.  We need to make better connections, both back to the past and across the present.  You’ve heard the mention of reinventing the wheel – researchers failing to reference relevant past studies.  There was a move to shorten reference sections, but now, again, we have the space to do things online and, even better, we have digital links.  We should be ensuring that our science accumulates.  We also should be looking for connections across fields.  I once published an article called “A Tale of Two Literatures” showing how parallel research in cognitive psychology and social cognition never referenced each other — perhaps because they (intentionally?) used different terms for similar research.  More such parallels should be discovered.  And I am a big fan of adding to the way we do our citations.  We should not just be sticking in names; we should have to make it clear why we are citing the study.  Just background?  Used the methods?  Found the same thing?  Totally disagree?  Not all citations are equal, and we could do a better job keeping track of how papers are related.
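Typed citations could be as simple as tagging each reference with its role.  A hypothetical sketch – the citation names are invented, and the four categories are just the examples from the paragraph above:

```python
from enum import Enum
from collections import Counter

class CitationType(Enum):
    BACKGROUND = "background"        # general context for the study
    USED_METHODS = "used_methods"    # borrowed the cited paper's methods
    SAME_FINDING = "same_finding"    # replicates or agrees with the result
    DISAGREES = "disagrees"          # challenges the cited result

# A reference list where each entry carries *why* it is cited,
# not just a name and a year.  (All entries are made up.)
citations = [
    ("Smith & Jones (2010)", CitationType.BACKGROUND),
    ("Lee (2012)", CitationType.USED_METHODS),
    ("Garcia (2011)", CitationType.SAME_FINDING),
    ("Chen (2009)", CitationType.DISAGREES),
    ("Patel (2008)", CitationType.BACKGROUND),
]

# With typed links, the relationships between papers become countable:
# one could ask how often a paper's methods are reused, or how often
# it is cited in disagreement, rather than lumping all citations together.
by_type = Counter(ctype for _, ctype in citations)
print(by_type[CitationType.BACKGROUND])  # 2
```

The point of the design is in the `Counter` at the end: once citations carry a type, “impact” can be broken down by relationship instead of being a single undifferentiated count.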

These are some of the roles I see for journals and editors – building a sturdier and more integrated science.  That, I think, would be a good, and successful, revolution.

(Question for next time: Is this not so much a research revolution but, rather, a civil war?)

Posted in Meta-analysis, Perspectives on Editing, Research Revolution | 12 Comments

3… 2… 1… Liftoff — A Dream Come True — Registered Replication Reports Take Off

A million… I mean three and a half years ago, when I wrote my incoming editor’s editorial at Perspectives on Psychological Science (DOI: 10.1177/1745691609356780), I said that I wanted to encourage new types of articles that I thought would help our field grow stronger and faster.  One of them was dubbed ‘‘The File Drawer”, and I wrote: “What I envision is … the Editorial Board identifies topics: phenomena that researchers have not been able to replicate. Next, we appoint lead researchers: people who will collect the mostly unpublished failures and write an analysis of what was done, what was (or was not) found, etc.  Finally, the authors who published the original research would be given a chance to respond.”

We (Hal Pashler, Tony Greenwald, and I) identified a study to replicate and contacted the original author early on but he seemed so unnerved by the process that we paused to re-group.  In the meantime, Hal and I developed psychfiledrawer.org where researchers can individually post their attempted replications (both successes and failures).

Then flash forward three years to when Dan Simons and Alex Holcombe proposed what has become the Registered Replication Reports initiative — a way to get teams of researchers to try to replicate important studies with the cooperation of the original authors.  OF COURSE Perspectives should host and publish such articles.

For more on the backstory of the creation of RRR see:  http://blog.dansimons.com/2013/05/registered-replication-reports-stay.html

For more about the pushback I’ve gotten to the replication project see: http://wp.me/p2Wics-1o

We are teamed with the Open Science Framework where projects will be developed and shared.

To get started on your very own replication research report, or to join one already in progress, go to:  http://www.psychologicalscience.org/index.php/replication/ongoing-projects

And if you want some ideas for experiments that people would like to see replicated, take a look at psychfiledrawer’s top-20 list of studies users would like to see replicated.

And now…. for our very first public launch… whose study will it be?  3… 2… 1…    You can find out here.

Posted in Research Revolution | 2 Comments

“But I don’t want people to try to replicate my research.”

If you are reading this blog, you have probably seen some of the news about the new Replication Research Reports to appear in Perspectives on Psychological Science. http://www.psychologicalscience.org/index.php/replication  But something you probably haven’t seen, or heard…  A few months ago, I was at a meeting describing this PoPS initiative and a senior researcher said, in front of two dozen other folks,

“But I don’t want people to try to replicate my research.”

There was a hush.  And I had to stop myself from saying, “Wait.  You mean that you want your papers growing musty on shelves and unaccessed in cyberspace?”

Then I realized that THAT was not the fear.  Rather, this researcher feared that the “replication movement” could be out to get her.  She thought that if people were trying to replicate her work, it would mean that they were targeting her and “trying” to show that it was non-reproducible.  In fact, during that meeting the project was called “McCarthyism” not once but twice.

I know that some of you may be against the “replication movement” in psychology.  I assure you that this PoPS project is not meant to be any kind of “debunking” of particular research.  Rather, we intend to involve the original authors, and involve labs that believe, disbelieve, and are neutral about the existence and generalizability and size of the effects.  We also intend to involve all areas of psychology.

And, we certainly do not intend to advocate that this is the only way science should be done.  There are upcoming articles in Perspectives that will “put replication in its place.”  But I believe that replication should be more valued as a tool than it currently is.  And there seems to be a wave across all the biological sciences (especially the medical sciences) agreeing.

Better humor about replication was on display at a discussion at the Columbia University Department of Psychology a few weeks ago.  Speaker Niall Bolger showed the page on psychfiledrawer.org where researchers can nominate and vote for studies that they would like to see replicated.  Here’s the top-20 list:  http://www.psychfiledrawer.org/top-20/

“Look,” I said happily, “I’m number 9.”  But Kevin Ochsner seemed proud to beat me at number 8.  Others strained to find their own names on the list.

Oscar Wilde once said: “The only thing worse than being talked about is not being talked about.”  Shhh…. Don’t tell anyone but — I was the first to nominate my experiment for the replication top-20 list.

Posted in Meta-analysis, Research Revolution | 1 Comment

My First Post-Revolution Action Letter

I’m a little embarrassed to admit it – but the first post-revolution action letter I’ve seen is one I got as an author, not one that I sent as an action editor.

What do I mean by a post-revolution action letter?  I mean one that incorporates some of the scientific values we have been vigorously discussing in psychology over the past year (e.g., in the Perspectives November special issue) – in particular, the values of not creating post-hoc hypotheses, replicating surprising results, and publishing interesting studies (and replications) regardless of how they turn out.

I was the fourth author on an empirical paper with one experiment that included a priming manipulation and some messy results that did NOT confirm our initial hypothesis but did suggest something else interesting.  What did the action letter (and reviewers) say?

1) After noting that the research was interesting, the reviewers called for some type of replication of the surprising results – and were not put off by the fact that we did not confirm our original hypothesis.

2) The Action Editor wrote two lovely things. First, he said he preferred a direct replication.  He said:  “I like the fact that you propose a hypothesis and fail to find support for it (rather than invent post hoc a hypothesis that is supported). However, I also think this surprising finding (in view of your own expectation) calls for a direct replication. The combination of the two experiments would be very compelling.”

3) And second: “The direct replication would be confirmatory in nature and its outcome would not determine eventual acceptance of the paper. Performing the replication attempt and reporting it in the manuscript is sufficient.”

So, hats off to Rolf Zwaan and the anonymous reviewers.

Posted in Perspectives on Writing, Research Revolution | 5 Comments