Town Hall on Replication and Cumulative Science

Date
Wed April 12th 2017, 3:45 - 7:00pm
Event Sponsor
Department of Psychology, the Center for Reproducible Neuroscience, and the Meta-Research Innovation Center at Stanford (METRICS)
Location
Paul Brest Hall

Town Hall on Replication and Cumulative Science 

Five speakers represent diverse disciplinary perspectives, including social psychology, neuroscience, medicine and public policy, journalism and the public communication of social science, and philosophy of science:

Christie Aschwanden, lead science writer at FiveThirtyEight and health columnist for the Washington Post; Robert Kaplan, Professor of Medicine, Stanford University; Alison Ledgerwood, Associate Professor of Psychology, UC Davis; Helen Longino, Professor of Philosophy, Stanford University; Tal Yarkoni, Research Assistant Professor, UT Austin

We hope these speakers and our discussion will advance our collective understanding of how to conduct high-quality cumulative science, how to think clearly about it, how to communicate it, and how it intersects with policy. To accommodate this event, we will begin at our usual colloquium time but divide the speakers into two sessions separated by a buffet dinner:

 

3:45 - 5:15: Session 1 (three 20-minute talks + discussion)

5:15 - 6:00: Buffet Dinner

6:00 - 7:00: Session 2 (two 20-minute talks + discussion)

 

Science Isn’t Broken, It’s Just Harder Than We Give It Credit For

Christie Aschwanden

Lead science writer at FiveThirtyEight

Science is our most effective means for understanding the world, but it’s a messy, slow process. Even the best research cannot eliminate every uncertainty. Yet single studies are often portrayed as the final word, engendering mistrust when overturned by new evidence. This talk will examine some of the pressures underlying reproducibility problems and how scientists can help the public and policymakers better understand and trust the scientific process.

Do Transparent Reporting Requirements Suppress the Probability of Reporting Positive Clinical Trial Results?

Robert M. Kaplan

Director of Research, Clinical Excellence Research Center (CERC), Stanford School of Medicine

Background:  Stricter requirements for transparent reporting of randomized clinical trials (RCTs) may be associated with an increase in the number of trials reporting null results.

Methods: Using NIH databases, we identified 84 National Heart, Lung, and Blood Institute (NHLBI)-supported RCTs conducted between 1970 and 2012 that evaluated drugs or dietary supplements for the treatment or prevention of cardiovascular disease. RCTs were included if the trial met the threshold for a large trial (defined as having direct costs >$500,000/year), participants were adult humans located in the U.S., and the primary outcome was cardiovascular risk, disease, or death. A total of 55 trials met these criteria. Trials were coded for whether they were registered in clinicaltrials.gov prior to the initiation of data collection: 30 trials registered after data collection began, and 25 registered prior to data collection. We tabulated whether each study reported a positive, negative, or null result on the primary outcome variable and on total mortality, and pooled the results in a meta-analysis.

Results: 17 of the 30 studies that had not registered prior to data collection showed a significant benefit of the intervention on the primary outcome. Meta-analysis showed a significant benefit of treatment for the primary outcome (RR = 0.81, 95% CI 0.73, 0.89). Among the 25 trials that had registered prior to data collection, only 2 showed a significant benefit of treatment on the primary outcome. The post-2000 pooled meta-analysis for primary outcomes was non-significant (RR = 0.97, 95% CI 0.93, 1.01). Pooled total mortality was null for RCTs whether or not they had registered.

Conclusions: Prospective declaration of outcomes in RCTs and the adoption of transparent reporting standards may be associated with an increase in the number of null findings.
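The pooled risk ratios reported in the Results come from a standard meta-analysis. As a rough, hedged illustration of how such pooling is typically done (a sketch, not the authors' analysis code), the Python snippet below applies fixed-effect inverse-variance weighting to log risk ratios; the per-trial numbers in the example are placeholders, not data from the NHLBI trials.

```python
import math

def pooled_risk_ratio(trials):
    """Fixed-effect (inverse-variance) pooling of per-trial risk ratios.

    Each trial is (rr, ci_low, ci_high); the standard error of log(RR)
    is recovered from the 95% confidence interval.
    """
    weights, weighted_logs = [], []
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(RR) from the CI
        w = 1.0 / se ** 2                                 # inverse-variance weight
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return (math.exp(pooled_log),
            (math.exp(pooled_log - 1.96 * pooled_se),
             math.exp(pooled_log + 1.96 * pooled_se)))

# Placeholder trials (RR, 95% CI low, 95% CI high) -- illustrative only.
example = [(0.78, 0.65, 0.94), (0.85, 0.72, 1.00), (0.80, 0.70, 0.91)]
print(pooled_risk_ratio(example))  # pooled RR with its 95% CI
```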

Promoting Careful Thinking Across the Research Cycle

Alison Ledgerwood

Associate Professor of Psychology, UC Davis

As scientists move from debating whether we should change our research methods and practices to investigating how best to do so, we are increasingly coming to grips with the fact that there are no magic bullet solutions. Indeed, my home discipline of social psychology offers considerable evidence attesting to the temptation—and perils—of taking cognitive shortcuts. Humans tend to love a good heuristic, a simple decision rule, an easy answer. Yet we also know that oversimplified decision rules contributed to the problems with scientific methods and practices that we now face. For example, enshrining p < .05 as the ultimate arbiter of truth created the motivation to p-hack. Heuristics about sample sizes allowed us to ignore power considerations. Bean-counting publications created an immense pressure to build long CVs. The single most important lesson we can draw from our past in this respect is that we need to think more carefully and more deeply about our methods and our data. How do we build systems of conducting and communicating about science that help counteract basic human biases in thinking by pushing researchers to think systematically and objectively across the research cycle, from designing and analyzing studies to communicating with the press and the public? This talk discusses how researchers might approach the challenge of designing nuanced solutions that promote careful thinking throughout the research process, and highlights several cutting-edge and ready-to-implement tools that help accomplish this aim. 
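One concrete version of the point about sample-size heuristics: rules of thumb can leave studies badly underpowered for realistic effect sizes. The sketch below is an illustration only (it assumes a two-group design and a small-to-medium effect of d = 0.4; it is not material from the talk) and estimates power by simulation.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size, alpha=0.05, n_sims=10_000, seed=0):
    """Estimate the power of a two-sample t-test by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)          # control group
        b = rng.normal(effect_size, 1.0, n_per_group)  # treatment group, true effect = d
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_sims

# A "20 per group" rule of thumb vs. a larger sample, for d = 0.4.
print(simulated_power(20, 0.4))   # roughly 0.23 -- badly underpowered
print(simulated_power(100, 0.4))  # roughly 0.80 -- conventional target
```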

Concepts, Measurement, and Disciplinary Boundaries

Helen Longino

C.I. Lewis Professor of Philosophy, Stanford University

Concepts are operationalized by specifying what counts as their exemplification. A standard constraint on operationalization is that it involve measurable or countable units. This talk will review how operationalizations can either support the illusion of cumulative understanding of a phenomenon or contribute to genuinely cumulative understanding.

Reproducible Science Is Open-Source Science

Tal Yarkoni

Director, Psychoinformatics Lab
Research Assistant Professor
University of Texas at Austin

Recent concerns about the reproducibility and replicability (or lack thereof) of scientific findings have left many researchers scrambling to implement new measures that could potentially prevent reproducibility failures in future studies. Although many of the solutions proposed by commentators have focused on substantive scientific questions or statistical concerns, I argue in this talk that perhaps the most effective way to improve the reproducibility of science on a large scale is to adapt practices and norms that are already commonplace in the open-source software development community. These include programmatic analysis, version control, automated testing, and a cultural acceptance of the near-inevitability of (often costly) errors. I suggest that extensive training in scientific computing should be a mandatory feature of graduate training programs throughout the sciences, and discuss common objections to such a proposal.
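One of the open-source practices mentioned above, automated testing, translates directly to analysis code. As a hedged sketch (the function, data, and file name are hypothetical, not drawn from the talk), a pytest-style test file might look like this:

```python
# test_analysis.py -- run with `pytest`; the analysis function and the
# expected values are hypothetical, purely to illustrate the practice.
import numpy as np

def standardized_mean_diff(a, b):
    """Cohen's d using a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def test_zero_effect_when_groups_identical():
    x = [1.0, 2.0, 3.0, 4.0]
    assert standardized_mean_diff(x, x) == 0.0

def test_sign_follows_group_order():
    assert standardized_mean_diff([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]) > 0.0
```

Tests like these run automatically (for example, in continuous integration) whenever the analysis code changes, catching the kind of silent errors the talk argues are nearly inevitable.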
