The Psychological Science Accelerator: A Distributed Laboratory Network

 

I recently suggested that the time was right to begin building a “CERN for psychological science.” My hope was that like-minded researchers would join me in a collaborative initiative to increase multi-site data collection with the ultimate goal of increasing the pace and quality of evidence accumulation in the field. The response has been immediate, positive, enthusiastic, and a bit overwhelming (in a good way!).  We have quickly assembled a global (and constantly expanding) team of psychological science laboratories. By vote of the team, we have renamed our “CERN for Psych” project to avoid direct comparisons to physics.

We are The Psychological Science Accelerator:  A Distributed Laboratory Network.

27 days

In just 27 days, 106 labs from over 30 countries on 5 continents have expressed interest in the network and have signed up to “stay in the loop.” Even more promising is the fact that 58 of those labs (more than 2 per day!) have already committed to contributing to our initial data collection projects in 2018. We have divided these labs into 2 data collection teams, one collecting data in North America and the other collecting data globally.

[Map of labs in the network as of September 20, 2017]

We have built it; will they come?

We now turn this data collection capacity over to researchers around the world and welcome submissions for studies to include in these initial data collection projects. These calls for studies (the North American document here and the global document here) have been posted as HAVES on StudySwap for others to review and download.

We will accept proposals until October 11th. The team will then discuss the strengths, weaknesses, and feasibility concerns of the proposed studies. Ultimately, we will vote as a team on which studies to include and make our final selections by November 1st.

We welcome submissions from all areas of psychological science. A few key points to consider:

  • The proposed studies can test novel hypotheses or focus on the replication of previous findings.
  • We may collect data for multiple “bundled” studies if all parties deem them to be compatible in a single laboratory session, and a mutually agreeable study order can be found.
  • We will include at least one positive control effect at the end of the data collection session.
  • Feasibility of data collection will be a primary component of our evaluation for these first projects.
  • Studies will be pre-registered, even if the research is exploratory in nature.

How to Get Involved

The two projects listed above for 2018 are just the beginning for the Psychological Science Accelerator (PSA). We hope to build a general purpose network where researchers can propose exciting new ideas and important replication studies, where the network laboratories can then democratically decide which studies are most worthy of data collection resources, and where we then collect large amounts of data in labs all around the world.

To join the network or just stay informed about our activities, fill out this 3-item Google form.

To add your lab to those collecting data on the initial projects in 2018, please email me at cchartie@ashland.edu

To propose a study for these initial projects, please review the call for studies, complete the submission form, and email your submission to me at cchartie@ashland.edu

If the first 27 days of the Psychological Science Accelerator are any indication of things to come, we have initiated a project that can have a meaningful and lasting impact on psychological science. Please join us!

Dr. Christopher R. Chartier

cchartie@ashland.edu

Associate Professor of Psychology

Ashland University International Collaboration Research Center

 

Update: Building a CERN for Psychological Science

A Big Week

Things have developed rapidly since we initially proposed that now is the time to begin building a CERN for psychological science. Seventy-two labs from twenty-nine countries have signed up for the network (see the Google map below). Furthermore, 31 labs have already committed to our first data collection projects in 2018, taking the generous step of agreeing in principle to collect data for yet-to-be-determined studies. Clearly there is strong grassroots support for such an initiative. What an exciting time to be working on the improvement of psychological science through large-scale collaboration!

[Google map of labs in the network]

What’s next?

Here are our next steps, including ways for you to get involved if you aren’t already:

-We will continue recruitment for the CERN network indefinitely. We need many more labs in many more locations with many more resources to make this a truly transformative project. You can still sign up here.

-Specifically, for the two Collections that we will coordinate in 2018, we would love to recruit additional labs, even though we have already surpassed our minimum goal of 10 labs devoted to each. We could particularly use more North American labs with diverse student subject populations. Again, fill out the form linked above or contact me at cchartie@ashland.edu to get involved. This specific recruitment effort will continue until September 15th.

-We will release an open “call for studies” on October 1st to select the studies to be included in these initial Collections.

-Collecting labs will then decide as a group which studies we will collect data for in 2018. Our decisions will be made by October 15th.

-We will then work with the researchers whose proposals were selected for these initial Collections to finalize detailed data collection protocols. This work will wrap up by November 15th. During this month, we will also recruit additional data collection labs in case other researchers become interested in the Collections once the specific studies are announced.

-On November 16th we will distribute finalized protocols to all data collection labs so they can begin making logistical arrangements and can initiate their IRB review process.

-Data collection will take place between January 1st 2018 and December 31st 2018.

-Manuscripts will be prepared and submitted in 2019. Proposing researchers and all data collection laboratories for each Collection will help draft, review, approve of, and be listed as authors on the resulting manuscripts.

What Should We Call Ourselves?

Another open issue is what we should call this distributed laboratory network. My initial title drawing comparisons to CERN in physics was for metaphorical purposes, and we may wish to proceed under a different title. What do you think we should call ourselves? Stick with CERN for psychological science? I’m open to ideas and feedback on this matter. Shoot me an email (cchartie@ashland.edu) or tweet at me (https://twitter.com/crchartier) with your thoughts.

Onward!

Thank you so much for your continued support of this project. I have been overwhelmed with the response and am filled with enthusiasm to continue building a CERN for psychological science.

Building a CERN for Psychological Science

In response to the reproducibility crisis, some in the field have called for a “CERN for Psychology.” I believe the time is right to build just such a tool in psychological science by building on current efforts to increase the use of multi-site collaborations.

What would a CERN for Psych look like? It certainly would not be a massive, centralized facility housing multi-billion dollar equipment. It would instead consist of a distributed network of hundreds, perhaps thousands, of individual data collection laboratories around the world working collaboratively on shared projects. These projects would not just be replication efforts, but also tests of the most exciting and promising hypotheses in the field. With StudySwap, an online platform for research resource exchange, we have taken small steps to begin building this network.

Ideally, a CERN for Psych would also have a democratic and decentralized process for selecting the projects that receive collective resources. Researchers could publicly post exciting study proposals, and the community of laboratories comprising the distributed network would decide autonomously and freely which projects to devote their resources to. This could be seen as a new form of peer review: only those projects that a collection of one’s peers deem worthy of time and energy would be supported with large-scale data collection. The most exciting and methodologically sound ideas, as determined by the community, would receive the greatest amount of resources. Again, StudySwap already provides a basic starting point for this feature and can eventually fulfill this requirement more fully with changes to the site.

Finally, a CERN for Psych should involve projects that are open and transparent for their full research life cycle. Using the Open Science Framework, projects would be open from idea proposal to methods development to data collection to eventual dissemination. Any interested party could fully review, criticize, praise, build upon, or reanalyze any component of the projects, their data, and their disseminated summaries.

This is not a pipe dream. The basic constituent parts are already in place, but there is much work to be done. What do we need to build a CERN for Psych?

-We need a large distributed laboratory network. If even 10% of psychological scientists devoted a small portion of their lab resources to the CERN for Psych, we would be able to harness a massive amount of data collection capacity. This work has already begun, and dozens of labs have signed on for these efforts. Please join the network by filling out this 3-item form.

-We also need researchers who want to use the network to test important hypotheses and who are brave enough to take an innovative approach to their data collection practices. I believe that if we build it, they will come. We are currently recruiting 10 labs to each collect data from 100 participants in 2018 (total N = 1,000) for just such a study, proposed by someone not on the collection team. We will soon release a call soliciting study proposals from researchers who lack access to large samples at their home institutions and who can demonstrate that they would particularly benefit from a geographically dispersed and relatively diverse sample. Email me (cchartie@ashland.edu) if you already have ideas you’d like to propose. This will be a small demonstration of the feasibility of such projects.

-We also know we need a better online platform for StudySwap. The current page, which uses the OSF for Meetings structure, was a short-term hack that we are already outgrowing after just 6 months. The new site will need much more sophisticated searching, tagging, and categorizing capabilities. We are working with the OSF on these improvements.

-We need funding. For now, we can build the beginnings of a CERN for Psych without big money, but eventually, this endeavor will be much more successful with financial resources at our disposal. We are actively seeking funding to support early adopters of this system.

Please join us in building a CERN for Psych. Eventually, this project could involve data collection from millions of participants, conducted by thousands of research assistants, supervised in hundreds of labs, coordinated by a democratically selected and constantly changing set of dozens of leaders in the field.

Dr. Christopher R. Chartier

Associate Professor of Psychology

Ashland University International Collaboration Research Center

Reacting to Replication Attempts

This is the first post in a three-part mini-series on replication research, to include posts on:

  • Why we should welcome replication attempts of our work
  • My own experience selecting and conducting replication studies
  • The case for offering up our own studies for replication, and how to do it via StudySwap

We should enthusiastically welcome replication attempts

How should we feel and how should we react when we learn that an independent research team either plans to conduct or has conducted a replication attempt of a finding we originally reported? I’ve prepared this flowchart to guide our reactions and elaborated a bit below.

[Flowchart: how to react to a replication attempt of your work]

Replication attempts are often perceived and labeled as “tear-down” missions. This response is counterproductive, and we need to reframe the discussion surrounding replication attempts. To hear an excellent example of how we can do this, do yourself a favor and listen to this episode of the Black Goat. Sanjay Srivastava, Alexa Tullett, Simine Vazire, and Rich Lucas had a very interesting conversation about replication research, and Rich shared some of his actual motivations for conducting replications (spoiler alert: it isn’t to crush souls and destroy careers).

As a starting point for my take on more productive responses to replication attempts of your work, let us assume that you are confident in the finding in question. If you are not, well, that’s another discussion for another time.

If you are confident in the finding, a replication attempt should be taken as a form of flattery and a chance to enhance the visibility of your work. It suggests that someone in the field thinks the finding is important enough that we should have an accurate understanding of it, or at least an accurate estimate of the effect’s size. If the replication attempt is ultimately published, then other members of the field agree on its importance.

The attempt “succeeds”

For example, the replication study finds an effect size very similar to your originally published effect size. Yay! An independent research team has supported the original finding and your confidence in the effect has grown with very little work on your part. You have been cited and received a big pat on the back from the data.

The attempt “fails”

For example, the replication study finds no effect or a much smaller effect size than you did originally. Of course, this will be initially frustrating. BUT, remember, you are confident in the finding. You have essentially been invited to a low-effort publication. Why? The journal will now almost certainly welcome a submission from you showing that you can, in fact, still get the finding. Heck, perhaps you and the replicating team can even work together to figure out what’s going on! This was exactly the positive and productive cycle that developed after we failed to replicate part of the Elaboration Likelihood Model’s predictions in Many Labs 3.

Original -> ML3 -> Response w/ data -> Response to the response w/ data

Charlie Ebersole has even provided some empirical evidence on how responses to “failed” replications are perceived. tl;dr: if you operate as a scientist should, earnestly pursuing the truth and collaborating with your replicators, you will win friends and enhance your scientific reputation.

So, buy your replicators a beer. You owe them one!

My next two posts will focus on my own experience selecting effects for replication attempts and how to offer up one’s own effects for independent replication.

SURE THING Hypothesis Testing

Studies Until Results Expected, Thinks Hypothesis Is Now Golden

My sons watch a cartoon called Daniel Tiger’s Neighborhood. In one episode, which they (and by extension I) have watched at least one hundred times, Daniel and co. sing a little song that I imagine will repeat in my head for decades. The chorus goes:

“Keep trying, you’ll get better.”

The episode and song have a really nice message. Daniel is struggling to hit a baseball, but his friends encourage him to work at it until he improves.

What does this song have to do with experimental psychology? One interpretation of the lyric could be that of a researcher refining her craft to improve the research she conducts and strengthen the quality of evidence her studies produce. I can’t help but hear it another way.

“Keep trying, you’ll get better…results.”

As in, if at first your hypothesis is not supported, dust yourself off and try again. I think many of us have done too much SURE THING hypothesis testing.

A Twist on an Excellent Cartoon


“Bullseyes” by Charlie Hankin

This cartoon elegantly captures the concept of HARKing: Hypothesizing After Results are Known. SURE THING hypothesis testing definitely isn’t HARKing. The hypothesis in question is often established well before any results, and certainly before the supporting results, are known. The researcher simply tries and tries and tries, all the while making “improvements” or “tweaks” with the best of intentions, until the target is struck.

It also isn’t really p-hacking, a practice in which we exercise myriad researcher degrees of freedom, typically within a single study, until our results reach statistical significance. I think that both p-hacking and SURE THING hypothesis testing deserve their own cartoons. I am not a cartoonist, nor do I know Charlie Hankin, so allow me to simply describe the needed cartoons. The artistically inclined reader is invited to produce these cartoons in exchange for fame and glory.

  • The “p-hacking bullseyes” cartoon: Targets are drawn beforehand, but they cover approximately 67% (drawn from Simmons & Simonsohn’s simulations of how bad it can get if we really go off the p-hacking rails) of the possible-arrow-landing surface.
    • The King’s shot has landed on one of the targets, and the assistant exclaims, “Excellent shot, my lord.”
  • The “SURE THING bullseyes” cartoon: This one will need multiple panels, as SURE THING hypothesis testing is more episodic than HARKing or p-hacking. The target is drawn beforehand.
    • The King shoots and misses. “No worries, my lord. The arrow must be faulty. Allow me to retrieve and refine it.”
    • The King shoots again and misses again. “Ah, I know the problem. Let us quickly tighten your bowstring.”
    • The King shoots again and misses again. “Perhaps we shall try again in better lighting and wind conditions tonight.”
    • At night. The King shoots again and hits! “Excellent shot, my lord!”

If you shoot until you hit, then success is a

SURE THING.

Of course, others have described this process in scientific experimentation. Perhaps my favorite description comes from the Planet Money podcast episode on the replication crisis. They describe flipping coins over and over until one of them hits an unlikely sequence of results. What I think hasn’t yet been adequately discussed is that many of the proposals of the open science movement (pre-registration, open data, open materials) provide only a weak defense against SURE THING hypothesis testing.
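
Here is a minimal simulation sketch of the idea, assuming a true null effect, two-sample t-tests with 50 participants per group, α = .05, and a researcher willing to run up to five “tweaked” attempts per hypothesis; the numbers are illustrative, not drawn from any particular study. Each individual attempt could be honestly pre-registered, yet the research program as a whole ends in a “significant” result far more often than 5% of the time.

```python
# A sketch of SURE THING hypothesis testing under a true null effect.
# Assumed (illustrative) parameters: n = 50 per group, alpha = .05,
# and up to 5 attempts per hypothesis before the researcher gives up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
n_programs = 10_000   # hypothetical research programs
max_attempts = 5      # studies run before giving up
n_per_group = 50
alpha = 0.05

hits = 0
for _ in range(n_programs):
    for _attempt in range(max_attempts):
        control = rng.normal(size=n_per_group)    # no true effect in either group
        treatment = rng.normal(size=n_per_group)
        res = stats.ttest_ind(control, treatment)
        if res.pvalue < alpha:                    # "keep trying, you'll get better... results"
            hits += 1
            break

print(f"Programs ending in p < .05 despite a null effect: {hits / n_programs:.0%}")
# Expect roughly 1 - 0.95**5, i.e. about 23%, versus 5% for a single honest study.
```

Pre-registering each attempt changes nothing in this simulation; only disclosing the full sequence of attempts lets a reader see that 23% for what it is.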

An Illustrative Hypothetical Scenario

In my last post, I discussed Comprehensive Public Dissemination of empirical research. This hypothetical scenario and the one that follows will help outline why I think it can be so powerful.

One researcher pre-registers and runs attempt after attempt at essentially the same study, “tweaking and refining” with the best of intentions as he goes. Eventually,

bang!

p < .05.

Publish.

How do we feel about this?

An Alternative Hypothetical Scenario

A different researcher has a hypothesis about a potentially cool new effect. She engages in CPD. She clearly identifies on her CPD log a series of studies intended to pilot methods and to establish the necessary conditions for the effect to occur. Once she thinks she has established solid methods, she runs a pre-registered confirmatory study and

bang!

p < .05.

Publish.

How do we feel about this?

If We Are SURE THING Hypothesis Testing, We Aren’t Hypothesis Testing at All

Ditch the File Drawer: Comprehensive Public Dissemination

How can you help fight the file drawer problem? Eliminate your file drawer!

Comprehensive public dissemination (CPD) is a commitment by an individual researcher to publicly post the basic methods and results of all empirical research that they conduct. Some researchers are already leading the way by doing a great job of tracking their research workflows in open and transparent ways (Lorne Campbell, Katie Corker). CPD can serve as an important extension to these practices. As Will Gervais noted in his excellent post on emptying one’s file drawer, pre-prints offer a low-effort mechanism for the dissemination of null results or other unpublished work. Taking the final step of briefly summarizing and sharing the methods and results of your data collection projects is not overly burdensome, and it could be greatly beneficial to the consumers of your research.

Draft CPD Initiative Statement and Guidelines

The results of scientific research must be comprehensively disseminated for researchers and the public to fully evaluate the evidence for any scientific finding and to generate cumulative knowledge. If a research project is worth conducting, its outcomes are worth disseminating. By publicly disseminating all research results, scientists can help combat problems that distort the body of scientific evidence, such as the file drawer problem and publication bias. I therefore agree that, from this date forward,

I will publicly disseminate the methodology and outcomes of all of my scientific work.

Suggested standards to become a comprehensive public disseminator:

  1. Create a comprehensive public dissemination (CPD) log with version control (a spreadsheet on the OSF, for example; a minimal sketch follows this list)
  2. Share a link to your CPD log (on your homepage, OSF account, twitter profile, etc.)
  3. At the beginning of data collection for any project, add the project to your CPD log and post an initiated date
  4. At the conclusion of data collection, post a completion date
  5. Disseminate your work in any manner you desire: publish a paper, present at a conference, write a brief summary and post it to a public repository, etc.
  6. Provide a link to the dissemination product on your CPD log
  7. Repeat steps 3 through 6 for all projects
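
To make step 1 concrete, here is one minimal way such a log could be kept. The file name, column names, and helper function are illustrative assumptions rather than a prescribed format; a hand-edited spreadsheet uploaded to the OSF (which keeps file version history) works just as well.

```python
# A minimal CPD log kept as a CSV file (illustrative sketch; the file name
# and column names are assumptions, not a required format).
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("cpd_log.csv")                       # hypothetical file name
FIELDS = ["project", "initiated", "completed", "dissemination_link"]

def log_project(project, completed=None, link=None):
    """Add a newly initiated project, or update it at completion and
    dissemination (steps 3-6 of the guidelines above)."""
    rows = []
    if LOG_PATH.exists():
        with LOG_PATH.open(newline="") as f:
            rows = list(csv.DictReader(f))
    for row in rows:
        if row["project"] == project:                # update an existing entry
            row["completed"] = completed or row["completed"]
            row["dissemination_link"] = link or row["dissemination_link"]
            break
    else:                                            # or append a new entry
        rows.append({"project": project,
                     "initiated": date.today().isoformat(),
                     "completed": completed or "",
                     "dissemination_link": link or ""})
    with LOG_PATH.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

# Step 3: data collection begins (project name is a made-up example)
log_project("Anchoring and charitable giving, Study 1")
# Steps 4-6: data collection ends and a summary is posted
log_project("Anchoring and charitable giving, Study 1",
            completed="2018-03-01",
            link="https://osf.io/xxxxx")             # placeholder link
```

However the log is kept, the important properties are the ones in the guidelines: every project appears when data collection starts, and every project eventually points to some public dissemination product.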

Why Would I?

You may be wondering, “What’s in it for the researcher?” First, I assume you mean, “What’s in it for the researcher besides doing their part to save the entire enterprise of science?” By signing on to CPD you can send a strong signal to others that you take open and transparent science seriously and are willing to “play ball.” CPD will also increase the confidence that others have in your published work. Joe Hilgard made a similar point in this post on publishing null results.

An Illustrative Hypothetical Scenario

CPD can complement and amplify the efficacy of other open science practices by providing the full data collection context for new findings. Imagine that researchers X and Y have each found a novel and exciting effect. Both publish papers on these effects, with what appears to be equally strong evidence from a single pre-registered study with a large N. If both have signed on for CPD, the full context of their findings is available for you to assess. You look at the CPD logs of each.

  • Researcher X has conducted two pilot studies on the new effect to refine methods and materials, followed by the pre-registered study.
  • Researcher Y has conducted 17 similar pre-registered studies that appear to be close variants of the published study.

Are you equally confident in the replicability of the effects published by Researcher X and Researcher Y? I am not. Without CPD we would not have this important context for the published evidence. The promise of pre-registration is that it demarcates exploratory and confirmatory research. This promise may not be fully realized without CPD.
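
To put a rough number on that intuition: assume, as an illustrative worst case, that the effect is actually null and that each study is tested at α = .05. Then the chance that at least one of Researcher Y’s 17 studies comes up “significant” by luck alone is well over half.

```python
# P(at least one false positive) across k independent studies of a true
# null effect, each tested at alpha = .05 (illustrative worst case).
alpha = 0.05
for k in (1, 17):
    p_any = 1 - (1 - alpha) ** k
    print(f"P(at least one p < .05 across {k} studies) = {p_any:.2f}")
# Prints approximately 0.05 for 1 study and 0.58 for 17 studies.
```

The comparison is rough, since Researcher X’s two pilots were explicitly exploratory rather than confirmatory tests, but that is exactly the distinction a CPD log makes visible.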

Just imagine if Daryl Bem had kept a CPD log.

I am actively seeking folks who want to contribute to the development and dissemination of this idea. Drop me a line at cchartie@ashland.edu.

My new CPD log.