A comparison of three approaches for constructing robust experimental designs

Vincent Agboto, William Li, Christopher Nachtsheim

Research output: Contribution to journal › Article › peer-review



While optimal designs are widely used in the design of experiments, a common concern is that they depend on the form of an a priori assumed regression model. If the assumed regression model differs from the true, unknown regression function, the optimal design may be a poor choice. Several useful criteria have been proposed to reduce the dependence of optimal designs on a single assumed model, and efficient designs have then been constructed based on these criteria, often algorithmically. In the model robust design paradigm, a space of possible models is specified and designs are sought that are efficient for all models in the space. The Bayesian criterion given by DuMouchel and Jones (1994) instead posits a single model that contains both primary and potential terms. In this article we propose a new Bayesian model robustness criterion that combines aspects of both of these approaches. We then evaluate the efficacy of the three alternatives empirically.

Original language: English (US)
Pages (from-to): 1-11
Number of pages: 11
Journal: Journal of Statistical Theory and Practice
Issue number: 1
State: Published - Mar 1 2011


  • Bayesian designs
  • D-optimality
  • Model-robust design
  • Supersaturated design

