Technical note—Knowledge gradient for selection with covariates: Consistency and computation

Liang Ding, L. Jeff Hong, Haihui Shen, Xiaowei Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

The knowledge gradient is a design principle for developing Bayesian sequential sampling policies to solve optimization problems. In this paper, we consider the ranking and selection problem in the presence of covariates, where the best alternative is not universal but depends on the covariates. In this context, we prove that, under minimal assumptions, the sampling policy based on the knowledge gradient is consistent, in the sense that under this policy the best alternative as a function of the covariates is identified almost surely as the number of samples grows. We also propose a stochastic gradient ascent algorithm for computing the sampling policy and demonstrate its performance via numerical experiments.
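For orientation, the sketch below illustrates the classical knowledge-gradient acquisition rule for independent normal beliefs with known sampling noise (Frazier et al.), which the covariate-dependent policy studied in the paper generalizes. It is a simplified illustration only, not the authors' algorithm; the function name knowledge_gradient and the parameter noise_sd are placeholders chosen here for exposition.

import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma, noise_sd):
    """Classical knowledge-gradient value under independent normal beliefs.

    mu, sigma: posterior means and standard deviations of the alternatives.
    noise_sd:  known standard deviation of the sampling noise.
    Returns the expected one-step improvement in the posterior maximum
    from taking one additional sample of each alternative.
    """
    # Predictive change in the posterior mean from one extra sample.
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_sd**2)
    # Best posterior mean among the other alternatives.
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    z = -np.abs(mu - best_other) / sigma_tilde
    # Expected increment of the posterior maximum: sigma_tilde * (z*Phi(z) + phi(z)).
    return sigma_tilde * (z * norm.cdf(z) + norm.pdf(z))

# Sample next from the alternative with the largest knowledge-gradient value.
mu = np.array([0.2, 0.0, 0.5])
sigma = np.array([1.0, 0.8, 0.3])
print(np.argmax(knowledge_gradient(mu, sigma, noise_sd=1.0)))

In the covariate setting treated in the paper, the improvement is evaluated for the best alternative as a function of the covariates rather than for a single best alternative, and the resulting sampling decision is computed via stochastic gradient ascent.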

Original language: English (US)
Pages (from-to): 496-507
Number of pages: 12
Journal: Naval Research Logistics
Volume: 69
Issue number: 3
DOIs
State: Published - Apr 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Wiley Periodicals LLC.

Keywords

  • consistency
  • covariates
  • knowledge gradient
  • selection of the best

