Workshop on Robust Social Science | Composition of panels

Panel 1: Robust design

  • What are some features of experimental design that make empirical claims more likely to replicate?
  • When do we decide that a finding is sufficiently mature that real-world applications are feasible?
  • Are findings in certain disciplines systematically more fickle, and what sets these apart?
  • What can psychology learn from other empirical sciences that use different experimental designs? One example is triple-blind studies, in which the researchers performing statistical or modeling analyses on the data do not know the experimental manipulations or other details of the data.
  • What are the strengths and weaknesses of new experimental methods made possible by technological advances? An example is the “MCMC” approach to studying the effects of nuisance variables in controlled experiments across many labs.
  • What hampers adoption of robust methods by researchers in social science?
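The triple-blind design mentioned above can be made concrete with a small sketch: before the data reach the analyst, the true condition labels are replaced with opaque codes, and the key for unblinding is held back until the analysis is finalized. This is an illustrative sketch only; the `blind_conditions` helper and the record format are hypothetical, not part of any panel's materials.

```python
import random

def blind_conditions(records, key="condition", seed=None):
    """Replace true condition labels with opaque codes so the analyst
    cannot tell which group is which (triple-blind analysis).
    Returns the blinded records plus a private mapping for unblinding."""
    rng = random.Random(seed)
    labels = sorted({r[key] for r in records})
    codes = [f"group_{i}" for i in range(len(labels))]
    rng.shuffle(codes)                     # randomize label-to-code assignment
    mapping = dict(zip(labels, codes))
    blinded = [{**r, key: mapping[r[key]]} for r in records]
    return blinded, mapping

# The analyst receives only `blinded`; `key_map` stays sealed until
# the analysis plan is locked in.
data = [{"id": 1, "condition": "treatment", "score": 3.1},
        {"id": 2, "condition": "control",   "score": 2.7}]
blinded, key_map = blind_conditions(data, seed=42)
```

The analyst then runs all statistical or modeling work on `blinded` alone, and the sealed mapping is opened only after the results are committed.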

Panel 2: Model-based inference

  • To what extent can model-based analysis be pre-registered in the same ways as more standard data analysis?
  • To what extent should model-based analysis be pre-registered? Does the balance of usefulness, or the likely prevalence, of exploratory versus pre-registered analyses differ from that for standard analyses?
  • How should model-based analyses be pre-registered, to the extent they can and should be?
  • What needs to be provided for a model-based analysis to be replicated? Are there standard formats that will help support this? What level of code and data is desirable and feasible?
  • What hampers adoption of model-based inference by researchers in social science?

Panel 3: Emerging technologies

  • What does robustness mean for the analysis of archival data sets (e.g., data from a machine-learning repository, or a Kaggle competition, or supplied by an industry collaborator)?
  • What robustness concerns follow from the “found data” / convenience sample aspect of naturally occurring data sets? What does replicability mean in such a context?
  • What implications does online data collection have for robustness, via technologies like Amazon Mechanical Turk or one of its many variants (Elance, CloudCrowd, Microtask, Expertplanet, Freelancer…)?
  • What are some new technologies with untapped potential for social science?
  • What hampers adoption of emerging technologies by researchers in social science?