Adaptive jackknife estimators for stochastic programming

dc.contributor.advisor: Morton, David P.
dc.creator: Partani, Amit, 1978-
dc.date.accessioned: 2008-08-29T00:10:55Z
dc.date.accessioned: 2017-05-11T22:19:10Z
dc.date.available: 2008-08-29T00:10:55Z
dc.date.available: 2017-05-11T22:19:10Z
dc.date.issued: 2007-12
dc.description.abstract: Stochastic programming facilitates decision making under uncertainty. It is usually impractical or impossible to find the optimal solution to a stochastic problem, and approximations are required. Sampling-based approximations are simple and attractive, but the standard point estimate of the optimal value is biased by the interaction of optimization and the Monte Carlo approximation. We provide a method to reduce this bias, and hence provide a better, i.e., tighter, confidence interval on the optimal value and on a candidate solution's optimality gap. Our method requires less restrictive assumptions on the structure of the bias than previously available estimators. Our estimators adapt to problem-specific properties, and we provide a family of estimators, which allows flexibility in choosing the level of aggressiveness for bias reduction. We establish desirable statistical properties of our estimators and empirically compare them with known techniques on test problems from the literature.
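The bias the abstract refers to can be illustrated with a minimal sketch (not the thesis's adaptive estimators): for the toy stochastic program min_x E[(x - ξ)²], the sample-average approximation's optimal value is the divisor-n sample variance, which systematically underestimates the true optimal value Var(ξ); the classical delete-one jackknife removes this O(1/n) bias. The function names `saa_value` and `jackknife` are illustrative, not from the source.

```python
import numpy as np

def saa_value(sample):
    # SAA optimal value of min_x E[(x - xi)^2]: the minimizer is the
    # sample mean, so the value is the divisor-n sample variance,
    # which is biased low relative to the true optimum Var(xi).
    return np.var(sample)

def jackknife(sample, estimator):
    # Classical delete-one jackknife bias correction:
    # theta_J = n * theta_n - (n - 1) * mean(leave-one-out estimates)
    n = len(sample)
    theta_n = estimator(sample)
    loo = np.array([estimator(np.delete(sample, i)) for i in range(n)])
    return n * theta_n - (n - 1) * loo.mean()

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=2.0, size=50)  # true optimal value: Var = 4
biased = saa_value(sample)
corrected = jackknife(sample, saa_value)
# For this particular problem the jackknife exactly recovers the
# unbiased (divisor n-1) variance estimate.
```

This corrects the bias exactly here because the variance estimator's bias is exactly proportional to 1/n; the thesis's contribution, per the abstract, is adapting the correction when the bias structure is not known in advance.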
dc.description.department: Operations Research and Industrial Engineering
dc.format.medium: electronic
dc.identifier.oclc: 221323428
dc.identifier.uri: http://hdl.handle.net/2152/3794
dc.language.iso: eng
dc.rights: Copyright © is held by the author. Presentation of this material on the Libraries' web site by University Libraries, The University of Texas at Austin was made possible under a limited license grant from the author who has retained all copyrights in the works.
dc.subject.lcsh: Stochastic programming
dc.subject.lcsh: Estimation theory
dc.title: Adaptive jackknife estimators for stochastic programming
dc.type.genre: Thesis
