Crossref Citations
This article has been cited by the following publications. This list is generated based on data provided by Crossref.
Burnetas, Apostolos, Kanavetas, Odysseas and Katehakis, Michael N. 2017. Asymptotically optimal multi-armed bandit policies under a cost constraint. Probability in the Engineering and Informational Sciences, Vol. 31, Issue 3, p. 284.
Zhang, Zijian, Cao, Wenqiang, Qin, Zhan, Zhu, Liehuang, Yu, Zhengtao and Ren, Kui. 2017. When privacy meets economics: Enabling differentially-private battery-supported meter reporting in smart grid. p. 1.
Borkar, Vivek S., Kasbekar, Gaurav S., Pattathil, Sarath and Shetty, Priyesh Y. 2018. Opportunistic Scheduling as Restless Bandits. IEEE Transactions on Control of Network Systems, Vol. 5, Issue 4, p. 1952.
Gürsoy, Kemal. 2020. An optimal selection for ensembles of influential projects. Annals of Operations Research.
Cowan, Wesley and Katehakis, Michael N. 2020. Exploration–exploitation policies with almost sure, arbitrarily slow growing asymptotic regret. Probability in the Engineering and Informational Sciences, Vol. 34, Issue 3, p. 406.
Niño-Mora, José. 2020. A Verification Theorem for Threshold-Indexability of Real-State Discounted Restless Bandits. Mathematics of Operations Research, Vol. 45, Issue 2, p. 465.
Bao, Wenqing, Cai, Xiaoqiang and Wu, Xianyi. 2021. A General Theory of Multi-Armed Bandit Processes with Constrained Arm Switches. SIAM Journal on Control and Optimization, Vol. 59, Issue 6, p. 4666.
Borkar, Vivek S., Choudhary, Shantanu, Gupta, Vaibhav Kumar and Kasbekar, Gaurav S. 2021. Scheduling in wireless networks with spatial reuse of spectrum as restless bandits. Performance Evaluation, Vol. 149-150, p. 102208.
Caro, Felipe and Das Gupta, Aparupa. 2022. Robust control of the multi-armed bandit problem. Annals of Operations Research, Vol. 317, Issue 2, p. 461.
Cowan, Wesley, Katehakis, Michael N. and Ross, Sheldon M. 2023. Optimal activation of halting multi-armed bandit models. Naval Research Logistics (NRL), Vol. 70, Issue 7, p. 639.
Gürsoy, Kemal. 2024. Multi-armed bandit games. Annals of Operations Research.
Burnetas, Apostolos N., Kanavetas, Odysseas and Katehakis, Michael N. 2025. Optimal data driven resource allocation under multi-armed bandit observations. Annals of Operations Research.