diff --git "a/WtFQT4oBgHgl3EQfcDb0/content/tmp_files/2301.13326v1.pdf.txt" "b/WtFQT4oBgHgl3EQfcDb0/content/tmp_files/2301.13326v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/WtFQT4oBgHgl3EQfcDb0/content/tmp_files/2301.13326v1.pdf.txt" @@ -0,0 +1,2908 @@ +A Framework for Adapting Offline Algorithms to Solve +Combinatorial Multi-Armed Bandit Problems with Bandit +Feedback +Guanyu Nie +nieg@iastate.edu +Yididiya Y Nadew +yididiya@iastate.edu +Yanhui Zhu +yanhui@iastate.edu +Vaneet Aggarwal +vaneet@purdue.edu +Christopher John Quinn +cjquinn@iastate.edu +Editor: +Abstract +We investigate the problem of stochastic, combinatorial multi-armed bandits where the +learner only has access to bandit feedback and the reward function can be non-linear. We +provide a general framework for adapting discrete offline approximation algorithms into +sublinear α-regret methods that only require bandit feedback, achieving O +� +T +2 +3 log(T) +1 +3 +� +expected cumulative α-regret dependence on the horizon T. The framework only requires +the offline algorithms to be robust to small errors in function evaluation. The adaptation +procedure does not even require explicit knowledge of the offline approximation algorithm +— the offline algorithm can be used as black box subroutine. +To demonstrate the utility of the proposed framework, the proposed framework is +applied to multiple problems in submodular maximization, adapting approximation algo- +rithms for cardinality and for knapsack constraints. The new CMAB algorithms for knap- +sack constraints outperform a full-bandit method developed for the adversarial setting in +experiments with real-world data. +1. Introduction +Many real world sequential decision problems can be modeled using the framework of +stochastic multi-armed bandits (MAB), such as scheduling, assignment problems, ad-campaigns, +and product recommendations, among others. The decision maker sequentially selects ac- +tions and receives stochastic rewards from an unknown distribution. The goal of the decision +maker is to maximize the expected cumulative reward over a (possibly unknown) time hori- +zon. Actions result both in the immediate reward and, more importantly, information about +that action’s reward distribution. Such problems result in a trade-off between trying actions +the agent is uncertain of (exploring) or only taking the action that is empirically the best +seen so far (exploiting). +In the classic MAB setting, the number of possible actions is small relative to the time +horizon, meaning each action can be taken at least once, and there is no assumed relationship +between the reward distributions of different arms. The combinatorial multi-armed bandit +(CMAB) setting involves a large but structured action space. +For example, in product +recommendation problems, the decision maker may select a subset of products (base arms) +1 +arXiv:2301.13326v1 [cs.LG] 30 Jan 2023 + +from among a large set. There are several aspects that can affect the difficulty of these +problems. First, MAB methods are typically compared against a learner with access to +a value oracle of the reward function (an offline problem). For some problems, it is NP- +hard for the baseline learner with value oracle access to optimize. An example is if the +expected/averaged reward function is submodular and actions are subsets constrained by +cardinality. At best, for these problems, approximation algorithms may exist. 
Thus, unless +the time horizon is large (exponentially long in the number of base arms, for instance), +it would be more reasonable to compare the CMAB agent against the performance of the +approximation algorithm for the related offline problem. Likewise, one could apply state of +the art methods for (unstructured) MAB problems treating each subset as a separate arm, +and obtain ˜O(T +1 +2 ) dependence on the horizon T for the subsequent regret bound. However, +that dependence would only apply for exponentially large T. +Feedback plays an important role in how challenging the problem is. When the decision +maker only observes a (numerical) reward for the action taken, that is known as bandit +or full-bandit feedback. When the decision maker observes additional information, such as +contributions of each base arm in the action, that is semi-bandit feedback. Semi-bandit +feedback greatly facilitates learning. Suppose for instance that the reward function (on +average) was monotone increasing over the inclusion lattice and there was a cardinality +constraint of size k. The agent would know from the start that no set of size smaller than +k could be optimal (or could even be the near-optimal solution the baseline learning using +a value oracle would find). However, there would be +�n +k +� +sets of size k. For n = 100 and +k = 10, the agent would need a horizon T > 1012 to try each cardinality k set even just +once. If the reward function belongs to a certain class, such as the class of submodular +functions, then one approach would be to use a greedy procedure based on base arm values. +With semi-bandit feedback, the agent could on the one hand only take actions of cardinality +k (putatively optimal actions), gain the subsequent rewards, and yet also observe samples +of the base arms’ values to improve future actions. +Bandit feedback is much more challenging, as only the joint reward is observed. In +general, for non-linear reward functions, the individual values or marginal gains of base +arms can only be loosely bounded if actions only consist of maximal subsets. Thus, to +estimate values or marginal gains of base arms, the agent would need to deliberately spend +time sampling actions (such as smaller sets) that are known to be sub-optimal in order to +estimate their values to later better select actions of cardinality k. Standard MAB methods +like UCB or TS based methods by design do not take actions known to be sub-optimal. +Thus, while such strategies could be used when semi-bandit feedback is available, it is less +clear whether they can be effectively used when only bandit feedback is available. +There are important applications where semi-bandit feedback may not be available, such +as in influence maximization and recommender systems. Influence maximization models +the problem of identifying a low-cost subset (seed set) of nodes in a (known) social network +that can influence the maximum number of nodes in a network (Nguyen and Zheng, 2013; +Leskovec et al., 2007; Bian et al., 2020). Recent research has generalized the problem to +online settings where the knowledge of the network and diffusion model is not required +(Wang et al., 2020; Perrault et al., 2020a) but extra feedback is assumed. However, for +many networks the user interactions and user accounts are private; only aggregate feedback +2 + +(such as the count of individuals using a coupon code or going to a website) might be visible +to the decision maker. 
+In this work, we seek to address these challenges by proposing a general framework for +adapting offline approximation algorithms into algorithms for stochastic CMAB problems +when only bandit feedback is available. We identify that a single condition related to the +robustness of the approximation algorithm to erroneous function evaluations is sufficient to +guarantee that a simple explore-then-commit (ETC) procedure accessing the approximation +algorithm as a black box results in a sublinear α-regret CMAB algorithm despite having +only bandit feedback available. The approximation algorithm does not need to have any +special structure (such as an iterative greedy design). Importantly, no effort is needed on +behalf of the user in mapping steps in the offline method into steps of the CMAB method. +We demonstrate the utility of this framework by assessing the robustness of several +approximation algorithms in the submodular optimization literature (three approximation +algorithms designed for knapsack constraints and one designed for cardinality constraints) +which immediately result in sublinear α-regret CMAB algorithms that only rely on bandit- +feedback, the first such algorithms for CMAB problems with submodular rewards and knap- +sack constraints. We also show that despite the simplicity and universal design of the adap- +tation, the resulting CMAB algorithms work well on budgeted influence maximization and +song recommendation problems using real world data. +The main contributions of this paper can be summarized as: 1. We provide a general +framework for adapting discrete offline approximation algorithms into sublinear α-regret +methods for stochastic CMAB problems where only bandit feedback is available. +The +framework only requires the offline algorithms to be robust to small errors in function +evaluation, a property important in its own right for offline problems. The algorithms are +not required to have a special structure — instead they are used as black boxes. +Our +procedure has minimal storage and time-complexity overhead, and achieves a regret bound +with ˜O(T +2 +3 ) dependence on the horizon T. +2. We illustrate the utility of the proposed framework by assessing the robustness of several +approximation algorithms for (offline) constrained submodular optimization, a class of re- +ward functions lacking simplifying properties of linear or Lipschitz reward functions. Specifi- +cally, we prove the robustness of approximation algorithms given in Nemhauser et al. (1978); +Badanidiyuru and Vondr´ak (2014); Sviridenko (2004); Khuller et al. (1999); Yaroslavtsev +et al. (2020) with cardinality or knapsack constraints, and use the general framework to +give regret bounds for the stochastic CMAB. In particular, we note that this paper gives +the first regret bounds for stochastic submodular CMAB with knapsack constraints under +bandit feedback. +3. We evaluate the performance of proposed framework through the stochastic submod- +ular CMAB with knapsack constraints problem for two applications: Budgeted Influence +Maximization, and Song Recommendation. The evaluation results demonstrate that the +proposed approach significantly outperforms a full-bandit method for a related problem in +the adversarial setting. +3 + +2. Related Work +We now briefly discuss only the most closely related works. See the supplementary material +for more discussion. +Adversarial CMAB +The closest related works are on adversarial CMAB. 
In (Niazadeh +et al., 2021), the authors propose a framework for transforming greedy α-approximation +algorithms for offline problems to online methods in an adversarial bandit setting, for +both semi-bandit (achieving �O(T 1/2) α−regret) and full-bandit feedback (achieving �O(T 2/3) +α−regret). Their framework requires the offline approximation algorithm to have an iter- +ative greedy structure (unlike ours), satisfy a robustness property (like ours), and satisfy +a property referred to as Blackwell reducibility (unlike ours). In addition to these condi- +tions, the adaptation depends on the number of subproblems (greedy iterations) which for +some algorithms can be known ahead of time (such as with cardinality constraints) but for +other algorithms can only be upper-bounded. (Our adaptation uses the offline algorithm +as a black box.) The authors check those conditions and explicitly adapt several offline +approximation algorithms. In this paper, we consider an approach for converting offline +approximation algorithm to online for stochastic CMAB, while requiring less assumptions. +We also note that (Niazadeh et al., 2021) do not consider submodular CMAB with +knapsack constraints, and thus do not verify whether any approximation algorithms for +the offline problem satisfy the required properties (of sub-problem structure or robustness +or Blackwell reducibility) to be transformed, and this is an example we consider for our +general framework. Consequently, in our experiments for submodular CMAB with knapsack +constraints in Section 7, we use the algorithm in (Streeter and Golovin, 2008) designed for +a knapsack constraint (in expectation) as representative of methods for the adversarial +setting. Other related works for adversarial stochastic CMAB are described in Appendix +H. +Stochastic Submodular CMAB with Full Bandit Feedback +Recently, Nie et al. +(2022) propose an algorithm for stochastic MAB with submodular rewards, when there is a +cardinality constraint. Their algorithm is a specific adaptation of an offline greedy method. +In our work, we propose a general framework that employs the offline algorithm as a black +box (and this result becomes a special case of our approach). While there are multiple +results for semi-bandit feedback (see Appendix I), this paper considers full bandit feedback. +3. Problem Statement +We consider sequential, combinatorial decision-making problems over a finite time horizon +T. Let Ω denote the ground set of base elements (arms). Let n = |Ω| denote the number +of arms. Let D ⊆ 2Ω denote the subset of feasible actions (subsets), for which we presume +membership can be efficiently evaluated. We will later consider applications with cardinality +and knapsack constraints, though our methods are not limited to those. We will use the +terminologies subset and action interchangeably throughout the paper. +At each time step t, the learner selects a feasible action At ∈ D. After the subset At is +selected, the learner receives reward ft(At). We assume the reward ft is stochastic, bounded +in [0, 1], and i.i.d. conditioned on a given subset. Define the expected reward function as +f(A) = E[ft(A)]. +4 + +The goal of the learner is to maximize the cumulative reward �T +t=1 ft(At). To measure +the performance of the algorithm, one common metric is to compare the learner to an agent +with access to a value oracle for f. 
However, if optimizing f over D is NP-hard, such a comparison would not be meaningful unless the horizon is exponentially large in the problem parameters.

If there is a known approximation algorithm A with approximation ratio α ∈ (0, 1] for optimizing f over D, a more natural alternative is to evaluate the performance of a CMAB algorithm against what A could achieve. Thus, we consider the expected cumulative α-regret $R_{\alpha,T}$, which is the difference between α times the cumulative expected value of the optimal subset and the expected cumulative received reward (we write $R_T$ when α is understood from context),
$$\mathbb{E}[R_T] = \alpha T f(OPT) - \mathbb{E}\Big[\sum_{t=1}^{T} f_t(A_t)\Big], \qquad (1)$$
where OPT is the optimal solution, i.e., $OPT \in \arg\max_{A \in D} f(A)$, and the expectations are over both the random rewards and the sequence of actions.

4. Robustness of Offline Algorithms

In this section, we introduce a criterion for an offline approximation algorithm's sensitivity to (bounded) additive perturbations of function evaluations. Investigating the robustness of approximation algorithms in offline settings is valuable in its own right. Importantly, we will show that this property alone is sufficient to guarantee that the offline algorithm can be adapted to solve analogous combinatorial multi-armed bandit (CMAB) problems with just bandit feedback and yet achieve sublinear regret. Furthermore, the CMAB adaptation will not rely on any special structure of the algorithm design, instead employing it as a black box.

Definition 1 ((α, δ)-Robust Approximation) An algorithm A is an (α, δ)-robust approximation algorithm for the combinatorial optimization problem of maximizing a function $f : D \to \mathbb{R}$ over a finite domain $D \subseteq 2^\Omega$ if, for any $\epsilon > 0$ and any $\hat{f}$ with $|f(S) - \hat{f}(S)| < \epsilon$ for all $S \in D$, its output $S^*$ when run with a value oracle for $\hat{f}$ satisfies the following relation with the optimal solution OPT under f:
$$f(S^*) \geq \alpha f(OPT) - \delta \epsilon.$$

Note that the perturbed $\hat{f}$ is not required to be in the same class as f (linear, quadratic, submodular, etc.). Thus, this definition is a stronger notion of robustness than one limited to $\hat{f}$ in the same class with bounded $L_\infty$ distance from f.

For (unstructured) k-armed bandit problems, one can view the analogous offline algorithm with access to a value oracle for the elements as first evaluating each arm ($D = \{\{1\}, \{2\}, \ldots, \{k\}\}$), so N = k queries total, and then taking the arg max over the k values. That algorithm is trivially a (1, 2)-robust approximation algorithm.

Remark 2 In Niazadeh et al. (2021), there is a related definition of robustness for offline approximation algorithms. That definition and the subsequent offline-to-online adaptation procedure are restricted to approximation algorithms with an iterative greedy structure. The criterion we consider (Definition 1) does not require the approximation algorithm to have an iterative greedy structure.

Algorithm 1 Combinatorial Explore-then-Commit
Input: horizon T, set of base elements Ω, an offline (α, δ)-robust algorithm A, and an upper bound N on the number of A's queries to the value oracle
Initialize $m \leftarrow \Big\lceil \frac{\delta^{2/3} T^{2/3} \log(T)^{1/3}}{2 N^{2/3}} \Big\rceil$
// Exploration Phase //
while A queries the value of some A ⊆ Ω do
    Play action A for m times
    Calculate the empirical mean $\bar{f}$
    Return $\bar{f}$ to A
end while
// Exploitation Phase //
for remaining time do
    Play action S output by algorithm A
end for
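To make the black-box adaptation concrete, the following is a minimal Python sketch of the procedure in Algorithm 1 (presented in detail in Section 5 below). The function and variable names (offline_alg, play_action, etc.) are illustrative placeholders, not taken from the paper; the offline algorithm is assumed to be written against a value-oracle callback.

import math

def c_etc(offline_alg, N, T, delta, play_action):
    """Combinatorial Explore-Then-Commit (sketch of Algorithm 1).

    offline_alg  -- (alpha, delta)-robust offline algorithm, called as offline_alg(oracle);
                    it may only query the oracle and must return a feasible set.
    N            -- upper bound on the number of oracle queries made by offline_alg.
    T            -- time horizon.
    delta        -- robustness coefficient of offline_alg.
    play_action  -- environment callback: plays a feasible set, returns a reward in [0, 1].
    """
    # Number of plays per queried action, as in Algorithm 1.
    m = math.ceil(delta ** (2 / 3) * T ** (2 / 3) * math.log(T) ** (1 / 3)
                  / (2 * N ** (2 / 3)))
    t = 0  # rounds used so far

    def empirical_oracle(action):
        # Exploration phase: play the queried action m times, return the empirical mean.
        nonlocal t
        total = 0.0
        for _ in range(m):
            total += play_action(action)
            t += 1
        return total / m

    # The offline algorithm is used as a black box; it only ever sees empirical means.
    S = offline_alg(empirical_oracle)

    # Exploitation phase: commit to S for the remaining rounds.
    while t < T:
        play_action(S)
        t += 1
    return S

For instance, offline_alg could be the greedy routine of Nemhauser et al. (1978) with every call to the value oracle replaced by empirical_oracle.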
To illustrate the utility of our proposed framework, in Section 6 we will show that several approximation algorithms from the constrained submodular maximization literature are (α, δ)-robust, leading to new sublinear α-regret algorithms for related stochastic CMAB problems with submodular rewards.

5. C-ETC Algorithm: Offline to Stochastic

In this section, we present our proposed algorithm for adapting offline approximation algorithms to the stochastic CMAB setting, Combinatorial Explore-Then-Commit (C-ETC). The pseudo-code is shown in Algorithm 1. The algorithm takes an offline (α, δ)-robust algorithm A with an upper bound N on the number of oracle queries made by A. In the exploration phase, when the offline algorithm queries the value oracle for action A, C-ETC plays action A for m times, where m is a constant chosen to minimize regret. C-ETC then computes the empirical mean $\bar{f}$ of the rewards for A and feeds $\bar{f}$ back to the offline algorithm A. In the exploitation phase, C-ETC keeps playing the solution S output by algorithm A. Thus, the CMAB procedure does not need A to have any special structure. No careful construction is needed for the CMAB procedure beyond running A. All that is needed is checking robustness (Definition 1). Also, there is no overhead in terms of storage and per-round time complexity: C-ETC is as efficient as the offline algorithm A itself.

Now we analyze the α-regret of C-ETC (Algorithm 1).

Theorem 3 For the sequential decision making problem defined in Section 3 and $T \geq \frac{2\sqrt{2}N}{\delta}$, the expected cumulative α-regret of C-ETC using an (α, δ)-robust approximation algorithm as subroutine is at most $O\big(\delta^{2/3} N^{1/3} T^{2/3} \log(T)^{1/3}\big)$, where N upper-bounds the number of value oracle queries made by the offline algorithm A.

The detailed proof is in the supplementary material. We highlight some key steps.

We show that with high probability, the empirical means of all actions taken during the exploration phase will be within $\mathrm{rad} = \sqrt{\frac{\log T}{2m}}$ of their corresponding statistical means. As is common in proofs for ETC methods, we refer to this occurrence as the clean event $\mathcal{E}$. Then, using an (α, δ)-robust approximation algorithm as subroutine guarantees the quality of the set S used in the exploitation phase of Algorithm 1:
$$f(S) \geq \alpha f(OPT) - \delta \cdot \mathrm{rad}. \qquad (2)$$
We then break up the expected cumulative α-regret conditioned on the clean event $\mathcal{E}$,
$$\mathbb{E}[R(T)\,|\,\mathcal{E}] = \underbrace{\sum_{i=1}^{N} m\,\big(\alpha f(S^*) - \mathbb{E}[f(S_t)]\big)}_{\text{exploration phase}} + \underbrace{\sum_{t=T_N+1}^{T} \big(\alpha f(S^*) - \mathbb{E}[f(S)]\big)}_{\text{exploitation phase}}. \qquad (3)$$
Using the fact that the reward is bounded in [0, 1], we have
$$\mathbb{E}[R(T)\,|\,\mathcal{E}] \leq Nm + T\delta\,\mathrm{rad}.$$
Optimizing over m then results in
$$\mathbb{E}[R(T)\,|\,\mathcal{E}] = O\big(\delta^{2/3} N^{1/3} T^{2/3} \log(T)^{1/3}\big).$$
We then show that because the clean event $\mathcal{E}$ happens with high probability, the expected cumulative regret $\mathbb{E}[R(T)]$ is dominated by $\mathbb{E}[R(T)\,|\,\mathcal{E}]$, which concludes the proof.

Lower bounds: For the general setting we explore in this paper, with stochastic (or even adversarial) combinatorial MAB and only bandit feedback, it is unknown whether $\tilde{O}(T^{1/2})$ expected cumulative α-regret is possible (ignoring problem parameters like n). For special cases, such as linear reward functions, $\tilde{O}(T^{1/2})$ is known to be achievable even with bandit feedback. Even for the special case of submodular reward functions and a cardinality constraint, it remains an open question. Niazadeh et al.
(2021) obtain ˜Ω(T 2/3) lower bounds +for the harder setting where feedback is only available during “exploration” rounds chosen +by the agent, who incurs an associated penalty. +Remark 4 C-ETC uses knowledge of the horizon T to optimize the number m of samples +per action. When the time horizon T is not known, we can use geometric doubling trick +to extend our result to an anytime algorithm. We refer to the general detailed procedure +in (Besson and Kaufmann, 2018). From Theorem 4 in (Besson and Kaufmann, 2018), we +can show that the regret bound conserves the original T 2/3 log(T)1/3 dependence with only +changes in constant factors. +7 + +6. Applications on Submodular Maximization +In this section, we apply our general framework to stochastic CMAB problems with mono- +tone submodular rewards where only bandit feedback is available. This application results +in the first sublinear α-regret CMAB algorithms for knapsack constraints under bandit feed- +back. We begin with a brief background, and analyze the robustness of offline approximation +algorithms, and then obtain problem independent regret bounds. +6.1 Background and Definitions +Denote the marginal gain f(e|A) = f(A ∪ e) − f(A) and the marginal density ρ(e|A) = +f(A∪e)−f(A) +c(e) +for any subset A ⊆ Ω and element e ∈ Ω \ A. A set function f : 2Ω → R +defined on a finite ground set Ω is said to be submodular if it satisfies the diminishing +return property: for all A ⊆ B ⊆ Ω, and e ∈ Ω \ B, it holds that f(e|A) ≥ f(e|B). A set +function is said to be monotonically non-decreasing if f(A) ≤ f(B) for all A ⊆ B ⊆ Ω. Our +aim is to find a set S such that f(S) is maximized subject to some constraints. +For knapsack constraints, we assume that the cost function c : Ω → R>0 is known +and linear, so the cost of a subset is be the sum of the costs of individual items: c(A) = +� +v∈A c(v). +To simplify the presentation, we avoid the cases of trivially large budgets +B > � +v∈Ω c(v) and assume all items have non-trivial costs 0 < c(v) ≤ B. A cardinality +constraint is a special case with unit costs. +In the following, we consider both types of those constraints: cardinality and knapsack. +Maximizing a monotone submodular set function under a k-cardinality constraint is NP- +hard even with a value oracle Nemhauser et al. (1978). The best achievable approximation +ratio with a polynomial time algorithm is 1−1/e Nemhauser et al. (1978) using O(nk) oracle +calls. In Badanidiyuru and Vondr´ak (2014), 1−1/e−ϵ′ is achieved within O( n +ϵ′ log n +ϵ′ ) time, +where ϵ′ is a user selected parameter to balance accuracy and time complexity. +Maximizing a monotone submodular set function under a knapsack constraint is conse- +quently also NP-hard Khuller et al. (1999). The best achievable approximation ratio with +a polynomial time algorithm is 1 − 1/e (Sviridenko, 2004; Khuller et al., 1999), but that +requires O(n5) function evaluations, making it prohibitive for many applications. There +are other offline algorithms that achieve worse approximation ratios but are much more ef- +ficient. We adapt a 1 +2 approximation algorithm (Yaroslavtsev et al., 2020) and a 1 +2(1 − 1/e) +approximation algorithm (Khuller et al., 1999), both of which use O(n2) function evalua- +tions. There is another algorithm proposed recently in Li et al. (2022), but since it queries +infeasible sets, we do not consider it. +6.2 Offline Approximation Algorithms – Robustness +For an overview of offline approximation algorithms for submodular optimization, please +refer Appendix A. 
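As a concrete reference for the set-function interface assumed in what follows, here is a minimal sketch of the marginal gain f(e|A) and marginal density ρ(e|A) from Section 6.1, using an illustrative coverage function; the example data are hypothetical and not part of the paper.

def coverage_function(covered_by):
    """Return f(A) = |union of elements covered by the items in A|,
    a monotone submodular set function (illustrative example only)."""
    def f(A):
        covered = set()
        for item in A:
            covered |= covered_by[item]
        return len(covered)
    return f

def marginal_gain(f, e, A):
    # f(e | A) = f(A + e) - f(A)
    return f(set(A) | {e}) - f(set(A))

def marginal_density(f, c, e, A):
    # rho(e | A) = (f(A + e) - f(A)) / c(e)
    return marginal_gain(f, e, A) / c[e]

# Diminishing returns: the gain of "b" can only shrink as the base set grows.
f = coverage_function({"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}})
assert marginal_gain(f, "b", {"a"}) >= marginal_gain(f, "b", {"a", "c"})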
We next state our results on (α, δ)-robustness of the offline algorithms +considered. The assumption of complete/noiseless access to a value oracle is often a strong +assumption for real world applications. Thus, even for offline applications, it is worthwhile +knowing how robust an algorithm is. So the following results are relevant even in the offline +setting. +For the CMAB setting we consider, robustness is also a sufficient property to +8 + +guarantee a no-regret adaptation of the offline algorithm. Detailed proofs are included in +Appendix B in the supplementary material. +Theorem 5 (Corollary 4.3 of Nie et al. (2022)) Greedy in Nemhauser et al. (1978) +is a (1 − 1 +e, 2k)-robust approximation algorithm for submodular maximization under a k- +cardinality constraint. +Theorem 6 ThresholdGreedy Badanidiyuru and Vondr´ak (2014) is a (1− 1 +e −ϵ′, 2(2− +ϵ′)k)-robust approximation algorithm for submodular maximization under a k-cardinality +constraint. +Theorem 7 PartialEnumeration Sviridenko (2004); Khuller et al. (1999) is a (1− 1 +e, 4+ +2 ˜K + 2β)-robust approximation algorithm for submodular maximization under a knapsack +constraint. +Theorem 8 Greedy+Max Yaroslavtsev et al. (2020) is a ( 1 +2, 1 +2 + ˜K + 2β)-robust approx- +imation algorithm for submodular maximization problem under a knapsack constraint. +Theorem 9 Greedy+ Khuller et al. (1999) is a ( 1 +2(1− 1 +e), 2+ ˜K+β)-robust approximation +algorithm for submodular maximization problem under a knapsack constraint. +Remark 10 For the offline setting, Greedy+Max is superior to Greedy+, as it achieves +a better α approximation ratio with the same calls to the value oracle. However, their (α, δ) +pairs are incomparable, as for β > 1.5 (with β = 1 corresponding to a cardinality con- +straint), Greedy+ has a smaller δ (thus more robust) which affects exploration time in +their adaptations and in turn affects their regret. +To illustrate the robustness analysis, we highlight some key steps for the proof of Theo- +rem 8 for Greedy+Max. Let o1 ∈ arg maxe:e∈OPT c(e) denote the most expensive element +in OPT. Inspired by the proof techniques in (Yaroslavtsev et al., 2020), we consider the +last item added by the greedy solution (based on noisy evaluation) before the cost of this +solution exceeds B −c(o1). Let Gi denote the set selected by Greedy that has cardinality i +and denote the constituent elements as Gi = {g1, · · · , gi}. Denote Gℓ as the largest greedy +sequence that consumes less than B−c(o1) of the budget B, so c(Gℓ) ≤ B−c(o1) < c(Gℓ+1). +Let Si denote the augmented set at i-th iteration and S denote the final output of the algo- +rithm. Denote ˆf(e|S) := ˆf(S ∪ e) − ˆf(S) and ˆρ(e|S) := +ˆf(S∪e)− ˆf(S) +c(e) +. We prove the following +lemma. +Lemma 11 (Greedy+Max inequality) For i ∈ {0, 1, · · · , ℓ}, the following inequality +holds: +ˆf(Gi ∪ o1)+ max{0, ˆρ(gi+1|Gi)}(B − c(o1)) +≥ f(OPT) − (2 ˜K − 1)ϵ. +For i = ℓ, Theorem 11 tells us that there can be two cases: +ˆf(Gℓ ∪ o1) ≥ 1 +2f(OPT) − +� +˜K − 1 +2 + γ +� +ϵ, or +9 + +ˆρ(gℓ+1|Gℓ)(B − c(o1)) ≥ 1 +2f(OPT) − +� +˜K − 1 +2 − γ +� +ϵ, +where γ will be selected later to minimize the additive error δ coefficient. +If ˆf(Gℓ ∪ o1) ≥ 1 +2f(OPT) − +� +˜K − 1 +2 + γ +� +ϵ, then denote aℓ = arg maxe∈Ω\Gℓ ˆf(e|Gℓ), +which is the element selected to augment Gℓ. We have +ˆf(Gℓ ∪ aℓ) ≥ ˆf(Gℓ ∪ o1) +≥ 1 +2f(OPT) − +� +˜K − 1 +2 + γ +� +ϵ. +(4) +Then the final output of the algorithm S will satisfy +f(S) ≥ ˆf(S) − ϵ +≥ ˆf(Gℓ ∪ aℓ) − ϵ +≥ 1 +2f(OPT) − +� +˜K + 1 +2 + γ +� +ϵ. 
+(using (4)) +If ˆρ(gℓ+1|Gℓ)(B − c(o1)) ≥ 1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ, rearranging we have +ˆρ(gℓ+1|Gℓ) ≥ +f(OPT) +2(B − c(o1)) − ( ˜K − 1 +2 − γ)ϵ +B − c(o1) +. +(5) +Moreover, +ˆf(Gℓ) = +l−1 +� +j=0 +ˆρ(gj+1|Gj)c(gj+1) +≥ +l−1 +� +j=0 +ˆρ(gℓ+1|Gj)c(gj+1) +(6) +≥ +l−1 +� +j=0 +� +ρ(gℓ+1|Gj) − +2ϵ +c(gℓ+1) +� +c(gj+1) +≥ +l−1 +� +j=0 +� +ρ(gℓ+1|Gℓ) − +2ϵ +c(gℓ+1) +� +c(gj+1) +(7) += +� +ρ(gℓ+1|Gℓ) − +2ϵ +c(gℓ+1) +� +c(Gℓ) +≥ +� +ˆρ(gℓ+1|Gℓ) − +4ϵ +c(gℓ+1) +� +c(Gℓ) +≥ ˆρ(gℓ+1|Gℓ)c(Gℓ) − 4βϵ, +(8) +10 + +where (6) follows from the greedy selection rule, the (7) follows from submodularity of f, +and (8) follows from the definition of β. We then have +ˆf(Gℓ+1) += ˆf(Gℓ) + c(gℓ+1)ˆρ(gℓ+1|Gℓ) +≥ +� +ˆρ(gℓ+1|Gℓ)c(Gℓ) − 4βϵ +� ++ c(gℓ+1)ˆρ(gℓ+1|Gℓ) +(9) += ˆρ(gℓ+1|Gℓ)c(Gℓ+1) − 4βϵ +≥ +1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ +B − c(o1) +c(Gℓ+1) − 4βϵ +(10) +≥ 1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ − 4βϵ +(11) += 1 +2f(OPT) − +� +˜K − 1 +2 − γ + 4β +� +ϵ, +(12) +where (9) follows from (8), (10) follows from (5), and (11) follows from the chosen ℓ satisfies +c(Gℓ+1) > B − c(o1). Thus, the final output of the algorithm S will satisfy +f(S) ≥ ˆf(S) − ϵ +≥ ˆf(Gℓ+1) − ϵ +≥ 1 +2f(OPT) − +� +˜K + 1 +2 − γ + 4β +� +ϵ. +Finally, combining both cases and selecting γ = 2β completes the proof. +6.3 CMAB algorithms for Submodular Rewards with Knapsack Constraints +Now that we have analyzed the robustness of several offline algorithms, we can invoke +Theorem 3 to bound the expected cumulative α regret for stochastic CMAB adaptations +that rely only on bandit feedback. We name the adapted algorithms as C-ETC-N, C-ETC- +B for cardinality constraint, C-ETC-S C-ETC-K and C-ETC-Y for knapsack constraint, +respectively, based on which offline algorithm it is adapted from (using the first author’s +last name); which are in order Nemhauser et al. (1978); Badanidiyuru and Vondr´ak (2014); +Sviridenko (2004); Khuller et al. (1999); Yaroslavtsev et al. (2020). PartialEnumeration +was first proposed and analyzed by Khuller et al. (1999) for maximum coverage problems +and then analyzed by Sviridenko (2004) for monotone submodular functions. To distinguish +CMAB adaptations of Greedy+ and C-ETC-K, both proposed in Khuller et al. (1999), we +use C-ETC-S for the adaption of PartialEnumeration. The following corollaries hold +immediately: +Corollary 12 For an online submodular maximization under a cardinality constraint, the +expected cumulative (1 − 1/e)-regret of C-ETC-N is at most O +� +kn +1 +3 T +2 +3 log(T) +1 +3 +� +for T ≥ +√ +2n. +Remark 13 This result improves upon the result from Nie et al. (2022) by a factor of k +1 +3 +despite our use of a generic framework. +11 + +Corollary 14 For an online submodular maximization under a cardinality constraint, the +expected cumulative (1−1/e−ϵ′)-regret of C-ETC-B is at most O +� +k +2 +3 n +1 +3 (ϵ′) +1 +3 (log n +ϵ′ ) +1 +3 T +2 +3 log(T) +1 +3 +� +for T ≥ +√ +2n +(2−ϵ′)ϵ′k log n +ϵ′ . +Corollary 15 For an online submodular maximization under a knapsack constraint, the +expected cumulative (1 − 1/e)-regret of C-ETC-S is at most O +� +β +2 +3 ˜K +1 +3 n +4 +3 T +2 +3 log(T) +1 +3 +� +for +T ≥ +√ +2 ˜ +Kn4 +2+ ˜ +K+β. +Corollary 16 For an online submodular maximization under a knapsack constraint, the +expected cumulative +1 +2-regret of C-ETC-Y is at most O +� +β +2 +3 ˜K +1 +3 n +1 +3 T +2 +3 log(T) +1 +3 +� +for T ≥ +2 +√ +2 ˜ +Kn +1 +2 + ˜ +K+2β. 
Corollary 17 For an online submodular maximization under a knapsack constraint, the expected cumulative $\frac{1}{2}(1-\frac{1}{e})$-regret of C-ETC-K is at most $O\big(\beta^{2/3} \tilde{K}^{1/3} n^{1/3} T^{2/3} \log(T)^{1/3}\big)$ for $T \geq \frac{2\sqrt{2}\tilde{K}n}{2+\tilde{K}+\beta}$.

Storage and Per-Round Time Complexities: C-ETC-Y and C-ETC-K have low storage complexity and per-round time complexity. During exploitation, only the indices of at most $\tilde{K}$ base arms need to be kept in memory, and no computation is needed. During exploration, they just need to update the empirical mean of the current action at time t, which can be done in O(1) time. Each additionally stores the highest empirical density so far in the current iteration of the greedy routine and its associated base arm (C-ETC-K needs to store one more arm, and C-ETC-Y needs an additional $O(\tilde{K})$ storage for the augmented set). Thus, C-ETC-Y and C-ETC-K have $O(\tilde{K})$ storage complexity and O(1) per-round time complexity. For comparison, the algorithm proposed by Streeter and Golovin (2008) for an averaged knapsack constraint in the adversarial setting uses $O(n\tilde{K})$ storage complexity and O(n) per-round time complexity. Some comments on lower bounds are given in Appendix E.

7. Experiments

In this section, we conduct experiments on real-world data for Budgeted Influence Maximization (BIM). We also conduct experiments on Song Recommendation (SR) in Appendix J. Both of these are applications of stochastic CMAB with submodular rewards under a knapsack constraint. There are three adaptations considered in Section 6 for the knapsack constraint. Since the time complexity of PartialEnumeration is much larger than that of the other two offline algorithms we consider, C-ETC-S would need at least $T \approx 10^8$ to finish exploration. For this reason, we do not consider C-ETC-S in the experiments. To our knowledge, our work is the first to consider these applications with only bandit feedback available.

Baseline: The only other algorithm designed for combinatorial MAB with general submodular rewards, under a knapsack constraint, and using full-bandit feedback is Online Greedy with opaque feedback model (OGo), proposed by Streeter and Golovin (2008) for the adversarial setting. However, OGo only satisfies the knapsack constraint in expectation, while our algorithms C-ETC-K and C-ETC-Y satisfy a strict constraint (i.e., every action $A_t$ must be under budget). See Appendix D for more details about OGo and its implementation.

Figure 1: Plots for the budgeted influence maximization (BIM) example. (a) and (b) show cumulative regret as a function of the time horizon T. (c) and (d) show moving averages (window size 100) of instantaneous reward as a function of t. The gray dashed lines in (a) and (b) represent $y = aT^{2/3}$ for various values of a for visual reference. The gray dashed lines in (c) and (d) represent expected rewards for the action chosen by an offline greedy algorithm.

In Section 6, we used $N = \tilde{K}n$ as an upper bound on the number of function evaluations for both C-ETC-K and C-ETC-Y, where n is the number of base arms and $\tilde{K}$ is an upper bound on the cardinality of any feasible set. When the time horizon T is small, it is possible that the exploration phase will not finish, because the formula optimizing m (the number of plays for each action queried by A) uses a loose bound on the exploitation time. When this is the case, we select the largest m (closest to the formula) for which we can guarantee that exploration will finish. For details, see Appendix F.
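As a sketch of how this fallback can be implemented (the exact rule is given in Appendix F, so the capping below is only an assumption for illustration):

import math

def plays_per_action(T, N, delta):
    """Choose m, the number of plays per queried action (sketch).

    Uses the nominal formula from Algorithm 1; if N * m would exceed the
    horizon T, fall back to the largest m for which all N queried actions
    can still be explored. This capping rule is an illustrative assumption;
    see Appendix F of the paper for the rule actually used.
    """
    m = math.ceil(delta ** (2 / 3) * T ** (2 / 3) * math.log(T) ** (1 / 3)
                  / (2 * N ** (2 / 3)))
    if N * m > T:
        m = max(1, T // N)
    return m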
We first conduct experiments for the application of budgeted influence maximization (BIM) on a portion of the Facebook network graph. BIM models the problem of identifying a low-cost subset (seed set) of nodes in a (known) social network that can influence the maximum number of nodes in the network. While there are prior works proposing algorithms for budgeted online influence maximization problems, the state of the art (e.g., Perrault et al. (2020b)) presumes knowledge of the diffusion model (such as independent cascade) and, more importantly, extensive semi-bandit feedback on individual diffusions, such as which specific nodes became active or along which edges successful infections occurred, in order to estimate diffusion parameters. For social networks with user privacy, this information is not available.

Data Set Description and Experiment Details: The Facebook network dataset was introduced in Leskovec and Mcauley (2012). To facilitate running multiple experiments for different horizons, we used the community detection method proposed by Blondel et al. (2008) to detect a community with 354 nodes and 2853 edges. We further changed the network to be directed by replacing every undirected edge with two directed edges of opposite directions, yielding a directed network with 5706 edges. The diffusion process is simulated using the independent cascade model (Kempe et al., 2003), where in each discrete step, an active node (that was inactive at the previous time step) independently attempts to infect each of its inactive neighbors. Following existing work of Tang et al. (2015, 2018); Bian et al. (2020), we set the probability of each edge (u, v) to $1/d_{in}(v)$, where $d_{in}(v)$ is the in-degree of node v. Moreover, we consider a user u to be more influential if the user has a higher out-degree $d_{out}(u)$. In our experiment, we only consider influential users, to spend our budget more efficiently. We pick the users whose out-degrees are above the 95th percentile (18 users). Denoting this set as I, for a user $u \in I$ the cost is defined as $c(u) = 0.01\, d_{out}(u) + 1$, similar to (Wu et al., 2022). For each time horizon that was used, we ran each method ten times.

For this set of experiments, instead of cumulative $\frac{1}{2}$-regret, which requires knowing OPT, we compare the cumulative rewards achieved by C-ETC and OGo against $T f(S_{grd})$, where $S_{grd}$ is the solution returned by the offline $\frac{1}{2}$-approximation algorithm proposed by Yaroslavtsev et al. (2020). Since $T f(S_{grd}) \geq \frac{1}{2} T f(OPT)$, $T f(S_{grd})$ is a more challenging reference value.

Results and Discussion: Figures 1a and 1b show average cumulative regret curves for C-ETC-K (in blue), C-ETC-Y (in orange), and OGo (in green) for different horizon T values when the budget constraint B is 6 and 8, respectively. For B = 8, the turning point is T = 21544. Standard errors of means are presented as error bars, but might be too small to be noticed.
Figures 1c and 1d are the instantaneous reward plots. The peaks at the +very beginning of exploration phase correspond to the time step that the single person with +highest influence is sampled. +C-ETC significantly outperforms OGo for all time horizons and budget considered. To +evaluate the gap between the empirical performance and the theoretical guarantee, we +estimated the slope for both methods on log-log scale plots. +Over the horizons tested, +OGo’s cumulative regret (averaged over ten runs) has a growth rate of 0.98. The growth +rates of C-ETC-K for budgets 6 and 8 are 0.76 and 0.68, respectively. The growth rates of +C-ETC-Y for budgets 6 and 8 are 0.75 and 0.69, respectively. The slopes are close to the +2/3 ≈ 0.67 theoretical guarantee, and notably, the performance for larger B is better. +14 + +References +Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: +a meta-algorithm and applications. Theory Comput., 8:121–164, 2012. +Ashwinkumar Badanidiyuru and Jan Vondr´ak. Fast algorithms for maximizing submodular +functions. In ACM-SIAM Symposium on Discrete Algorithms, 2014. +Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. The million +song dataset. In Proceedings of the 12th International Conference on Music Information +Retrieval (ISMIR 2011), 2011. +Lilian Besson and Emilie Kaufmann. What doubling tricks can and can’t do for multi-armed +bandits. ArXiv, abs/1803.06971, 2018. +Song Bian, Qintian Guo, Sibo Wang, and Jeffrey Xu Yu. Efficient algorithms for bud- +geted influence maximization on massive social networks. +Proc. VLDB Endow., 13 +(9):1498–1510, may 2020. +ISSN 2150-8097. +doi: +10.14778/3397230.3397244. +URL +https://doi.org/10.14778/3397230.3397244. +Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast +unfolding of communities in large networks. Journal of Statistical Mechanics: Theory +and Experiment, 2008:10008, 2008. +Daniel Golovin, Andreas Krause, and Matthew Streeter. Online submodular maximization +under a matroid constraint with application to learning assignments. +arXiv preprint +arXiv:1407.1082, 2014. +Gaurush Hiranandani, Harvineet Singh, Prakhar Gupta, Iftikhar Ahamath Burhanuddin, +Zheng Wen, and Branislav Kveton. Cascading linear submodular bandits: Accounting +for position bias and diversity in online learning to rank. In Ryan P. Adams and Vibhav +Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, +volume 115 of Proceedings of Machine Learning Research, pages 722–732. PMLR, 22–25 +Jul 2020. URL https://proceedings.mlr.press/v115/hiranandani20a.html. +David Kempe, Jon Kleinberg, and ´Eva Tardos. Maximizing the spread of influence through +a social network. In Proceedings of the ninth ACM SIGKDD international conference on +Knowledge discovery and data mining, pages 137–146, 2003. +Samir Khuller, Anna Moss, and Joseph Seffi Naor. The budgeted maximum coverage prob- +lem. Information processing letters, 70(1):39–45, 1999. +Andreas Krause and Carlos Guestrin. A note on the budgeted maximization of submodular +functions. 01 2005. +Jure Leskovec and Julian Mcauley. Learning to discover social circles in ego networks. In +Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., +2012. +15 + +Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne M. Vanbriesen, +and Natalie S. Glance. Cost-effective outbreak detection in networks. In KDD ’07, 2007. 
+Wenxin Li, Moran Feldman, Ehsan Kazemi, and Amin Karbasi. Submodular maximization +in clean linear time. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun +Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https: +//openreview.net/forum?id=JXY11Tc9mwY. +Tian Lin, Jian Li, and Wei Chen. +Stochastic online greedy learning with semi-bandit +feedbacks. In NIPS, pages 352–360, 2015. +George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approxi- +mations for maximizing submodular set functions—i. Mathematical programming, 14(1): +265–294, 1978. +Huy Nguyen and Rong Zheng. On budgeted influence maximization in social networks. +IEEE Journal on Selected Areas in Communications, 31:1084–1094, 2013. +Rad Niazadeh, Negin Golrezaei, Joshua R Wang, Fransisca Susan, and Ashwinkumar +Badanidiyuru. +Online learning via offline greedy algorithms: Applications in market +design and optimization. In Proceedings of the 22nd ACM Conference on Economics and +Computation, pages 737–738, 2021. +Guanyu Nie, Mridul Agarwal, Abhishek Kumar Umrawal, Vaneet Aggarwal, and Christo- +pher John Quinn. An explore-then-commit algorithm for submodular maximization under +full-bandit feedback. In The 38th Conference on Uncertainty in Artificial Intelligence, +2022. +Pierre Perrault, Jennifer Healey, Zheng Wen, and Michal Valko. Budgeted online influence +maximization. In ICML, 2020a. +Pierre Perrault, Jennifer Healey, Zheng Wen, and Michal Valko. Budgeted online influ- +ence maximization. +In Hal Daum´e III and Aarti Singh, editors, Proceedings of the +37th International Conference on Machine Learning, volume 119 of Proceedings of Ma- +chine Learning Research, pages 7620–7631. PMLR, 13–18 Jul 2020b. +URL https: +//proceedings.mlr.press/v119/perrault20a.html. +Aleksandrs Slivkins. Introduction to multi-armed bandits. Foundations and Trends® in +Machine Learning, 12(1-2):1–286, 2019. ISSN 1935-8237. +Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular +functions. In Proceedings of the 21st International Conference on Neural Information +Processing Systems, NIPS’08, page 1577–1584, Red Hook, NY, USA, 2008. Curran Asso- +ciates Inc. +Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack +constraint. Oper. Res. Lett., 32:41–43, 2004. +16 + +Sho Takemori, Masahiro Sato, Takashi Sonoda, Janmajay Singh, and Tomoko Ohkuma. +Submodular bandit problem under multiple constraints. In Jonas Peters and David Son- +tag, editors, Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence +(UAI), volume 124 of Proceedings of Machine Learning Research, pages 191–200. PMLR, +03–06 Aug 2020a. URL https://proceedings.mlr.press/v124/takemori20a.html. +Sho Takemori, Masahiro Sato, Takashi Sonoda, Janmajay Singh, and Tomoko Ohkuma. +Submodular bandit problem under multiple constraints. In Conference on Uncertainty +in Artificial Intelligence, pages 191–200. PMLR, 2020b. +Jing Tang, Xueyan Tang, Xiaokui Xiao, and Junsong Yuan. Online processing algorithms +for influence maximization. Proceedings of the 2018 International Conference on Man- +agement of Data, 2018. +Youze Tang, Yanchen Shi, and Xiaokui Xiao. Influence maximization in near-linear time: A +martingale approach. Proceedings of the 2015 ACM SIGMOD International Conference +on Management of Data, 2015. +Shatian Wang, Shuoguang Yang, Zhen Xu, and Van-Anh Truong. 
Fast thompson sampling +algorithm with cumulative oversampling: Application to budgeted influence maximiza- +tion. CoRR, abs/2004.11963, 2020. URL https://arxiv.org/abs/2004.11963. +Jianshe Wu, Junjun Gao, Hongde Zhu, and Zulei Zhang. Budgeted influence maximization +via boost simulated annealing in social networks. arXiv preprint arXiv:2203.11594, 2022. +Grigory +Yaroslavtsev, +Samson Zhou, +and +Dmitrii Avdiukhin. +“bring your +own +greedy”+max: +Near-optimal 1/2-approximations for submodular knapsack. +In Sil- +via Chiappa and Roberto Calandra, editors, Proceedings of the Twenty Third Inter- +national Conference on Artificial Intelligence and Statistics, volume 108 of Proceed- +ings of Machine Learning Research, pages 3263–3274. PMLR, 26–28 Aug 2020. URL +https://proceedings.mlr.press/v108/yaroslavtsev20a.html. +Baosheng Yu, Meng Fang, and Dacheng Tao. Linear submodular bandits with a knapsack +constraint. In Thirtieth AAAI Conference on Artificial Intelligence, 2016. +Yisong Yue and Carlos Guestrin. +Linear submodular bandits and their application to +diversified retrieval. +In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K.Q. +Weinberger, editors, Advances in Neural Information Processing Systems, volume 24. +Curran Associates, Inc., 2011a. URL https://proceedings.neurips.cc/paper/2011/ +file/33ebd5b07dc7e407752fe773eed20635-Paper.pdf. +Yisong Yue and Carlos Guestrin. +Linear submodular bandits and their application to +diversified retrieval. Advances in Neural Information Processing Systems, 24, 2011b. +17 + +A. Offline Approximation Algorithms – Overview +We give a brief overview of the offline approximation algorithms which we will analyze (α, δ) +robustness for. +For a k-cardinality constraint, the greedy algorithm Greedy proposed in Nemhauser +et al. (1978) starts from an empty set G ← ∅. Then it repeatedly add the element with +highest marginal gain f(e|G) until the cardinality |G| reaches k. ThresholdGreedy, pro- +posed in Badanidiyuru and Vondr´ak (2014), considers a sequence of decreasing thresholds: +{τ = d; τ ≥ ϵ′ +n d; τ ← (1−ϵ′)τ} where d = maxe∈Ω f(e). Then starting from empty set G = ∅, +the algorithm includes any element e /∈ G such that f(e|G) ≥ τ whenever the cardinality +is smaller than k. The algorithm then repeats using a lower threshold. Badanidiyuru and +Vondr´ak (2014) showed that ThresholdGreedy can achieve 1 − 1/e − ϵ′ approximation. +For a knapsack constraint, several algorithms run the following greedy subroutine, which +we refer to as Greedy (cardinality is a special case of this routine with budget k and +unit cost, so we keep the same name without confusion). Start with empty set G ← ∅. +Repeatedly add the element e with the highest marginal density ρ(e|G) that fits into the +budget. Let Gi denote the set selected by Greedy that has cardinality i and denote the +constituent elements as Gi = {g1, · · · , gi}. Let L denote the cardinality of the final greedy +set (i.e. when no more elements remain that are under budget), so GL is output by Greedy. +Note that L can only be bounded ahead of time—there could be maximal subsets (to which +no other elements could be added without violating the budget) of different cardinalities. +Greedy can have an unbounded approximation ratio Khuller et al. (1999) for knapsack +constraint. Khuller et al. (1999) proposed Greedy+, which outputs the better of the best +individual element a∗ ∈ arg maxe∈Ω f(e) and the output of Greedy, arg maxS∈{GL,a∗} f(S). +Khuller et al. 
(1999) proved that Greedy+ achieves a $\frac{1}{2}(1-\frac{1}{e})$ approximation ratio. Then, Sviridenko (2004); Khuller et al. (1999) proposed PartialEnumeration. It first enumerates all sets with cardinality up to three. For each enumerated triplet, it builds the rest of the solution set greedily. It then outputs the set with the largest value among all evaluated sets. They showed that PartialEnumeration achieves a $1-1/e$ approximation ratio.

Greedy+Max generalizes Greedy+ by augmenting each set $\{G_i\}_{i=1}^{L}$ in the nested sequence produced by Greedy with another element. For $0 \leq i \leq L-1$, define $G'_i \leftarrow G_i \cup \arg\max_{e \in \Omega:\, c(G_i)+c(e) \leq B} f(G_i \cup e)$. By construction, $G'_0 = \{a^*\}$, the best individual element. For $i = L$, $G'_L \leftarrow G_L$. Greedy+Max then outputs the best set in the augmented sequence, $\arg\max_{S \in \{G'_0, \ldots, G'_L\}} f(S)$. Yaroslavtsev et al. (2020) proposed Greedy+Max and proved it achieves an approximation ratio of $\frac{1}{2}$.

A bound on the number of value oracle calls will be important in adapting offline methods. Denote $\beta := B/c_{min}$ and $\tilde{K} := \min\{n, \beta\}$, an upper bound on the number of items in any feasible set. We note here that while PartialEnumeration uses $O(\tilde{K}n^4)$ function evaluations, both Greedy+Max and Greedy+ use $O(\tilde{K}n)$ oracle calls, the same as Greedy. We use $N = \tilde{K}n$ in the analysis for Greedy+Max and Greedy+.

B. Proof for Robustness of Offline Algorithms

In this section, we prove the (α, δ)-robustness of the algorithms considered in Section 6 of the main paper.

B.1 Notation

We first review the notation used in the analysis. Recall that we are only able to evaluate the surrogate function $\hat{f}$ satisfying $|\hat{f}(S) - f(S)| \leq \epsilon$ for any feasible set S and some $\epsilon > 0$. We further denote $\hat{f}(e|S) = \hat{f}(S \cup e) - \hat{f}(S)$ and $\hat{\rho}(e|S) = \frac{\hat{f}(S \cup e) - \hat{f}(S)}{c(e)}$. Let $G_i$ denote the set selected by the basic Greedy routine (based on the surrogate function $\hat{f}$) as described in Section 3 up to and including the ith item, with $G_i = \{g_1, \cdots, g_i\}$ listed in the order in which the items were selected. Without loss of generality, define $G_0 = \emptyset$ and $f(G_0) = \hat{f}(G_0) = 0$. Let $c_{min} = \min_{e \in \Omega} c(e)$ denote the lowest individual item cost, and let $\beta = B/c_{min}$ and $\tilde{K} = \min\{n, \beta\}$, an upper bound on the number of items in any feasible set. Since all selected actions should be feasible, for ease of notation we omit that condition throughout the proof. For example, we write $\arg\max_{e \in \Omega \setminus A} f(e|A)$ to simplify the notation $\arg\max_{e:\, e \in \Omega \setminus A \text{ and } A \cup e \in D} f(e|A)$. Let S be the set returned by the modified algorithm in the corresponding context.

B.2 Robustness of Offline Methods for Submodular Maximization under Cardinality Constraint

B.2.1 Greedy

We consider the original greedy algorithm Greedy proposed in Nemhauser et al. (1978), which gives a $(1-\frac{1}{e})$-approximation guarantee for submodular maximization under a k-cardinality constraint. To restate Theorem 5 in the main paper, Greedy is a $(1-\frac{1}{e}, 2k)$-robust approximation algorithm for submodular maximization under a k-cardinality constraint. The result follows from Corollary 4.3 of Nie et al. (2022), part of the regret analysis for a CMAB adaptation of Greedy.

B.2.2 ThresholdGreedy

We then consider the threshold greedy algorithm ThresholdGreedy proposed in Badanidiyuru and Vondrák (2014), which gives a $(1-\frac{1}{e}-\epsilon')$-approximation guarantee for submodular maximization under a k-cardinality constraint, where $\epsilon'$ is a user-specified parameter to balance accuracy and run time.
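For reference, a minimal sketch of the ThresholdGreedy routine described in Appendix A, written against a noiseless value oracle f; in the robustness argument below, every call to f is replaced by the surrogate $\hat{f}$.

def threshold_greedy(f, ground_set, k, eps):
    """ThresholdGreedy (sketch): decreasing-threshold greedy under a k-cardinality constraint.

    f          -- value oracle for a monotone submodular function with f(set()) = 0.
    ground_set -- iterable of base elements.
    k          -- cardinality constraint.
    eps        -- accuracy parameter (eps' in the text).
    """
    elements = list(ground_set)
    n = len(elements)
    d = max(f({e}) for e in elements)        # largest singleton value
    S = set()
    tau = d
    while tau >= eps * d / n and len(S) < k:
        for e in elements:
            if len(S) >= k:
                break
            if e not in S and f(S | {e}) - f(S) >= tau:
                S.add(e)
        tau *= (1 - eps)                     # lower the threshold and repeat
    return S

As stated in Section 6.1, Badanidiyuru and Vondrák (2014) show this routine uses $O(\frac{n}{\epsilon'}\log\frac{n}{\epsilon'})$ oracle calls.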
+Restating Theorem 6 in the main paper, Thresh- +oldGreedy is a (1 − 1 +e − ϵ′, 2(2 − ϵ′)k)-robust approximation algorithm for submodular +maximization under a k-cardinality constraint. +Proof From the assumption of the surrogate function ˆf we know +f(e|S) − 2ϵ ≤ ˆf(e|S) ≤ f(e|S) + 2ϵ +for any e ∈ Ω \ S and S ⊆ Ω. Now assume the the next chosen element is a and the current +partial solution is S. On one hand, we have +ˆf(a|S) ≥ w =⇒ f(a|S) ≥ w − 2ϵ, +(13) +on the other hand, for every e ∈ OPT \ S, +ˆf(e|S) ≤ +w +1 − ϵ′ =⇒ f(e|S) ≤ +w +1 − ϵ′ + 2ϵ. +(14) +Combining and manipulating (13) and (14) we have for any e ∈ OPT \ S: +f(a|S) + 2ϵ ≥ (f(e|S) − 2ϵ)(1 − ϵ′) =⇒ f(a|S) ≥ (1 − ϵ′)f(e|S) − 2(2 − ϵ′)ϵ. +(15) +19 + +Taking an average over all e ∈ OPT \ S, +f(a|S) ≥ +1 − ϵ′ +|OPT \ S| +� +e∈OPT\S +f(e|S) − 2(2 − ϵ′)ϵ +≥ 1 − ϵ′ +k +� +e∈OPT\S +f(e|S) − 2(2 − ϵ′)ϵ. +(16) +Now consider after i ∈ [k − 1] steps, we get a partial solution Si = {a1, · · · , ai}. By (16), +we have +f(ai+1|Si) ≥ 1 − ϵ′ +k +� +e∈OPT\S +f(e|Si) − 2(2 − ϵ′)ϵ +≥ 1 − ϵ′ +k +f(OPT|Si) − 2(2 − ϵ′)ϵ +(submodularity) +≥ 1 − ϵ′ +k +(f(OPT) − f(Si)) − 2(2 − ϵ′)ϵ, +(monotonicity) +and hence for i ∈ [k − 1], +f(Si+1) − f(Si) = f(ai+1|Si) ≥ 1 − ϵ′ +k +(f(OPT) − f(Si)) − 2(2 − ϵ′)ϵ. +(17) +Using (17) as induction hypothesis, we then prove by induction (omitted) that for i ∈ [k−1], +f(Si+1) ≥ +� +1 − +� +1 − 1 − ϵ′ +k +�i+1� +f(OPT) − 2(i + 1)(2 − ϵ′)ϵ, +and plugging in i = k − 1 we get +f(Sk) ≥ +� +1 − +� +1 − 1 − ϵ′ +k +�k� +f(OPT) − 2k(2 − ϵ′)ϵ +≥ (1 − e−(1−ϵ′))f(OPT) − 2k(2 − ϵ′)ϵ +≥ (1 − 1/e − ϵ′)f(OPT) − 2k(2 − ϵ′)ϵ. +We finish the proof by observing that Sk is the output. +B.3 Proof for Robustness of Greedy+Max +In this section, we give a detailed proof for Theorem 8 in Section 6 of the main paper. Recall +the statement is that Greedy+Max is a ( 1 +2, 1 +2 + ˜K + 2β)-robust approximation algorithm +for submodular maximization problem under a knapsack constraint. +Let o1 ∈ arg maxe:e∈OPT c(e) denote the most expensive element in OPT. During the ith +iteration of the greedy process, having previously selected the set Gi−1 with i − 1 elements, +20 + +it will select the element gi with highest marginal density (based on surrogate function ˆf) +among feasible elements, +gi = +arg max +e: e∈Ω\Gi−1 +ˆρ(e|Gi−1). +(18) +Inspired by the proof techniques in Yaroslavtsev et al. (2020), we consider the last item +added by the greedy solution (based the surrogate function ˆf) before the cost of this solution +exceeds B − c(o1). +Denote Gℓ as the largest greedy sequence that consumes less than +B − c(o1) budgets, c(Gℓ) ≤ B − c(o1) < c(Gℓ+1). +Let ai denote the element selected +to augment with the greedy solution Gi, i.e., ai = arg maxe∈Ω\Gi ˆf(e|Gi), and Si denote +the augmented set at i-th iteration. Before proving the theorem, we show Theorem 11 in +Section 6 of the main paper, that for i ∈ {0, 1, · · · , ℓ}, the following inequality holds: +ˆf(Gi ∪ o1) + max{0, ˆρ(gi+1|Gi)}(B − c(o1)) ≥ f(OPT) − (2 ˜K − 1)ϵ. +Proof +Recall that from the definition of ˆf, we have | ˆf(S) − f(S)| ≤ ϵ for any evaluated +set S and some ϵ > 0. Consequently, we have for any i ∈ {0, 1, · · · , ℓ}, +| ˆf(Gi) − f(Gi)| ≤ ϵ. +(19) +Now we evaluate the set Gi ∪ o1. +• Case 1: If o1 has already been added, o1 ∈ Gi, then +| ˆf(Gi ∪ o1) − f(Gi ∪ o1)| = | ˆf(Gi) − f(Gi)| ≤ ϵ. +• Case 2: If o1 /∈ Gi, then ˆf(Gi ∪ o1) is evaluated in iteration i + 1. This iteration i + 1 +does exist1 because for any i ∈ {0, 1, · · · , ℓ}, we only used less than B − c(o1) budget. 
+For the remaining budget, at least o1 can still fit into the budget so Gi ∪ o1 will be +evaluated in iteration i + 1. In this case, we still have +| ˆf(Gi ∪ o1) − f(Gi ∪ o1)| ≤ ϵ. +Combining these two cases, we have +| ˆf(Gi ∪ o1) − f(Gi ∪ o1)| ≤ ϵ. +(20) +Also, for any evaluated action in iteration i + 1, namely the actions {Gi ∪ e|e ∈ Ω \ +Gi and c(e) + c(Gi) ≤ B}, we have +ρ(e|Gi) = f(Gi ∪ e) − f(Gi) +c(e) +≤ +ˆf(Gi ∪ e) − ˆf(Gi) +c(e) ++ 2ϵ +c(e) += ˆρ(e|Gi) + 2ϵ +c(e). +(21) +1. For (α, δ) robustness alone, this point is not necessary due to the assumption of |f(S)− ˆf(S)| ≤ ϵ for all +S ⊆ Ω. For the regret bound proof of our proposed C-ETC method in Appendix C.4, the “clean event” +(corresponding to concentration of empirical mean of set values around their statistical means) will only +imply concentration for those actions taken and thus for which empirical estimates exist. +21 + +Then we have +f(OPT) ≤ f(Gi ∪ OPT) +(Monotonicity of f) +≤ f(Gi ∪ o1) + f(OPT \ (Gi ∪ o1)|Gi ∪ o1) +≤ f(Gi ∪ o1) + +� +e∈OPT\(Gi∪o1) +f(e|Gi ∪ o1) +(Submodularity of f) +≤ ˆf(Gi ∪ o1) + ϵ + +� +e∈OPT\(Gi∪o1) +c(e)ρ(e|Gi ∪ o1). +(22) +where (22) uses (20). +Since we picked iteration i such that c(Gi) ≤ B −c(o1), then all items in OPT\(Gi ∪o1) +still fit, as o1 is the largest item in OPT. Since the greedy algorithm always selects the +item with the largest marginal density with respect to the surrogate function ˆf, gi = +arg maxe∈Ω\Gi ˆρ(e|Gi), thus we have +ˆρ(gi+1|Gi) = max +e∈Ω\Gi +ˆρ(e|Gi) ≥ +max +e∈Ω\(Gi∪o1) ˆρ(e|Gi). +(23) +Hence, continuing with (22), +f(OPT) ≤ ˆf(Gi ∪ o1) + ϵ + +� +� +� +e∈OPT\(Gi∪o1) +c(e)ρ(e|Gi ∪ o1) +� +� +≤ ˆf(Gi ∪ o1) + ϵ + +� +e∈OPT\(Gi∪o1) +c(e)ρ(e|Gi) +(Submodularity) +≤ ˆf(Gi ∪ o1) + ϵ + +� +e∈OPT\(Gi∪o1) +c(e) +� +ˆρ(e|Gi) + 2ϵ +c(e) +� +(using (21)) +≤ ˆf(Gi ∪ o1) + ϵ + +� +e∈OPT\(Gi∪o1) +� +c(e)ˆρ(e|Gi) +� ++ 2ϵ|OPT \ (Gi ∪ o1)| +≤ ˆf(Gi ∪ o1) + ϵ + ˆρ(gi+1|Gi) +� +e∈OPT\(Gi∪o1) +� +c(e) +� ++ 2ϵ|OPT \ (Gi ∪ o1)| +(Using (23)) +≤ ˆf(Gi ∪ o1) + ϵ + ˆρ(gi+1|Gi)c(OPT \ (Gi ∪ o1)) + 2ϵ|OPT \ (Gi ∪ o1)| +≤ ˆf(Gi ∪ o1) + ϵ + max{0, ˆρ(gi+1|Gi)}c(OPT \ (Gi ∪ o1)) + 2ϵ|OPT \ (Gi ∪ o1)| +≤ ˆf(Gi ∪ o1) + ϵ + max{0, ˆρ(gi+1|Gi)}(gi+1|Gi)(B − c(o1)) + 2ϵ|OPT \ (Gi ∪ o1)| +≤ ˆf(Gi ∪ o1) + max{0, ˆρ(gi+1|Gi)}(gi+1|Gi)(B − c(o1)) + (2 ˜K − 1)ϵ. +Rearranging terms gives the desired result. +Now we are ready to prove Theorem 8 (robustness of Greedy+Max algorithm). Applying +Theorem 11 (Greedy+Max inequality) for i = ℓ, and recalling that ℓ is chosen as the +index of the last greedy set such that c(Gℓ) ≤ B − c(o1) < c(Gℓ+1), +ˆf(Gℓ ∪ o1) + max{0, ˆρ(gℓ+1|Gℓ)}(B − c(o1)) ≥ f(OPT) − (2 ˜K − 1)ϵ. +(24) +22 + +From (24), we will next argue at least one of the terms in the left hand side must be large. +We will consider cases for the two terms being large. To minimize the worst-case additive +error term from the cases, we will split the cases into whether ˆf(Gℓ ∪ o1) is larger than or +equal to 1 +2f(OPT) − ( ˜K − 1 +2 + γ)ϵ, or max{0, ˆρ(gℓ+1|Gℓ}(B − c(o1)) is larger than or equal +to 1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ, where γ will be selected later to minimize the additive error δ +coefficient. +Case 1: If ˆf(Gℓ ∪ o1) ≥ 1 +2f(OPT) − ( ˜K − 1 +2 + γ)ϵ, recall that aℓ as the element selected +to augment with the greedy solution Gℓ, aℓ = arg maxe∈Ω\Gℓ ˆf(e|Gℓ), then +ˆf(Gℓ ∪ aℓ) ≥ ˆf(Gℓ ∪ o1) +≥ 1 +2f(OPT) − +� +˜K − 1 +2 + γ +� +ϵ. 
+(25) +The set S that the algorithm selects in the end will be the set with the highest mean (based +on surrogate function ˆf) among all those evaluated (both sets in the greedy process and +their augmentations). Also, its observed value ˆf(Sℓ) is at most ϵ above f(S). Thus +f(S) ≥ ˆf(S) − ϵ +≥ ˆf(Gℓ ∪ aℓ) − ϵ +≥ 1 +2f(OPT) − +� +˜K + 1 +2 + γ +� +ϵ. +(using (25)) +Case 2(a): If max{0, ˆρ(gℓ+1|Gℓ)}(B−c(o1)) ≥ 1 +2f(OPT)−( ˜K− 1 +2−γ)ϵ and ˆρ(gℓ+1|Gℓ) > +0, rearranging we have +ˆρ(gℓ+1|Gℓ) ≥ +f(OPT) +2(B − c(o1)) − ( ˜K − 1 +2 − γ)ϵ +B − c(o1) +. +(26) +23 + +Then, +ˆf(Gℓ) = ˆf(Gℓ) − ˆf(Gℓ−1) + ˆf(Gℓ−1) + · · · − ˆf(G1) + ˆf(G1) − ˆf(G0) +(telescoping sum; G0 = ∅, ˆf(G0) := 0) += +l−1 +� +j=1 +ˆf(gj+1|Gj) +(Definition of ˆf(·|·)) += +l−1 +� +j=0 +ˆρ(gj+1|Gj)c(gj+1) +(Definition of ˆρ(·|·)) +≥ +l−1 +� +j=0 +ˆρ(gℓ+1|Gj)c(gj+1) +(greedy choice of gj+1) +≥ +l−1 +� +j=0 +� +ρ(gℓ+1|Gj) − +2ϵ +c(gℓ+1) +� +c(gj+1) +≥ +l−1 +� +j=0 +� +ρ(gℓ+1|Gℓ) − +2ϵ +c(gℓ+1) +� +c(gj+1) +(submodularity of f) += +� +ρ(gℓ+1|Gℓ) − +2ϵ +c(gℓ+1) +� +c(Gℓ) +(simplifying) +≥ +� +ˆρ(gℓ+1|Gℓ) − +4ϵ +c(gℓ+1) +� +c(Gℓ) +≥ ˆρ(gℓ+1|Gℓ)c(Gℓ) − 4βϵ. +(27) +Recalling that ℓ is chosen as the index of the last greedy set that has a remaining budget +as big as the cost of the heaviest element in OPT, c(Gℓ) ≤ B − c(o1) < c(Gℓ+1), +ˆf(Gℓ+1) = ˆf(Gℓ ∪ gℓ+1) += ˆf(Gℓ) + c(gℓ+1)ˆρ(gℓ+1|Gℓ) +≥ +� +ˆρ(gℓ+1|Gℓ)c(Gℓ) − 4βϵ +� ++ c(gℓ+1)ˆρ(gℓ+1|Gℓ) +(from (27)) += ˆρ(gℓ+1|Gℓ)c(Gℓ+1) − 4βϵ +(simplifying) +≥ +1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ +B − c(o1) +c(Gℓ+1) − 4βϵ +(case 2 condition) +≥ 1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ − 4βϵ +(ℓ chosen so that c(Gℓ+1) > B − c(o1)) += 1 +2f(OPT) − +� +˜K − 1 +2 − γ + 4β +� +ϵ. +(28) +The set S that the algorithm selects at the end of the exploitation phase will be the set +with the highest empirical mean among all those explored (both sets in the greedy process +24 + +and augmented sets). Thus its empirical mean is at most ϵ above f(S). +f(S) ≥ ˆf(S) − ϵ +≥ ˆf(Gℓ+1) − ϵ +≥ 1 +2f(OPT) − +� +˜K + 1 +2 − γ + 4β +� +ϵ. +(using (28)) +Case 2(b): If max{0, ˆρ(gℓ+1|Gℓ)}(B−c(o1)) ≥ 1 +2f(OPT)−( ˜K− 1 +2−γ)ϵ and ˆρ(gℓ+1|Gℓ) ≤ +0, then the set S that the algorithm selects at the end satisfies +f(S) ≥ 0 +≥ 1 +2f(OPT) − ( ˜K − 1 +2 − γ)ϵ +(Case 2(b) condition) +≥ 1 +2f(OPT) − ( ˜K − 1 +2 − γ + 4β)ϵ. +Thus, combining cases 1 and 2, and selecting γ = 2β, the additive 1 +2-approximation error +we get by the modified Greedy+Max algorithm is at most (1 +2 + ˜K + 2β)ϵ, which concludes +the proof. +B.4 Proof for Robustness of Greedy+ +In this section, we prove Theorem 9 in Section 6 of the main paper. The following state- +ments, Lemmas 18,19 and 21, and their proofs are adapted from the proof of 1 +2(1 − 1 +e) +approximation ratio in the offline setting Khuller et al. (1999) using a value oracle. Krause +and Guestrin (2005) adapted the proof of Khuller et al. (1999) to an offline setting where +the greedy process relies on an exact oracle to evaluate individual element values and to +compare the best individual element to the set output by the greedy process, but use an +inexact value oracle (within ϵ of the correct value) to evaluate marginal densities. +The main differences arise from (i) the algorithms of Khuller et al. 
(1999); Krause +and Guestrin (2005) evaluate densities before checking for feasibility,2 leading to different +definitions of the augmented greedy sequence, necessitating us to use more care to show +analogous properties, (ii) exact value oracles for best individual elements and for selecting +OPT are used in Khuller et al. (1999); Krause and Guestrin (2005), simplifying work to +conclude the final bound for the approximation ratio α = 1 +2(1− 1 +e) and leading to a different +δ. +Recall that Theorem 9 in Section 6 of the main paper states that Greedy+ is a ( 1 +2(1 − +1 +e), 2+ ˜K +β)-robust approximation algorithm for submodular maximization problem under +a knapsack constraint. +We define Gi and gi the same as previous section. Recall that the greedy process (using +a surrogate ˆf) produces a nested sequence of subsets ∅ = G0 ⊂ G1 ⊂ · · · ⊂ GL, where +L denotes the cardinality of the set final output of the greedy process. For the proof, we +describe the greedy process as running for L + 1 iterations, though on the final iteration no +elements are added. +2. As noted in Footnote 1, concentration of estimates (i.e. the surrogate ˆf) used by C-ETC in the bandit +setting will only be for evaluated subsets, which by restriction will all be feasible. +25 + +For any action Gi−1 ∪a evaluated in iteration i of the greedy process, its marginal gains +are upper bounded by that of the best subset based on surrogate function ˆf, +f(Gi−1 ∪ a) − f(Gi−1) − 2ϵ +c(a) +≤ +ˆf(Gi−1 ∪ a) − ˆf(Gi−1) +c(a) +≤ +ˆf(Gi−1 ∪ gi) − ˆf(Gi−1) +c(gi) +(gi selected by greedy rule based on ˆf) +≤ f(Gi−1 ∪ gi) − f(Gi−1) + 2ϵ +c(gi) += f(Gi) − f(Gi−1) + 2ϵ +c(gi) +, +(29) +where (29) just uses the definition of Gi ← Gi−1 ∪ gi. We will use (29) to lower bound the +true marginal gains (i.e. in terms of f) achieved for each iteration of the greedy process. +Let ℓ ∈ {1, . . . , L + 1} denote the ���rst iteration for which there was an element a′ ∈ +Ω\Gℓ−1 whose cost exceeds the remaining budget (c(a′)+c(Gℓ−1) > B) (thus subset Gℓ−1∪a′ +was not sampled), yet whose marginal density was higher than the marginal density of the +chosen element gℓ up to ±2ϵ normalized by the cost, specifically, for ℓ ≤ L, +f(Gℓ−1 ∪ a′) − f(Gℓ−1) − 2ϵ +c(a′) +> f(Gℓ−1 ∪ aℓ) − f(Gℓ−1) + 2ϵ +c(ar) +. +(30) +If there is no such iteration ℓ < L+1, then for ℓ = L+1, we take the element a′ maximizing +the term on the left hand side of (30), +a′ = arg max +a∈Ω\Gℓ−1 +f(Gℓ−1 ∪ a) − f(Gℓ−1) − 2ϵ +c(a) +. +(31) +Likewise, if there is more than one element satisfying (30) for some (earliest) iteration r, +then we also take the maximizer (31). +We define an “augmented” greedy sequence of length ℓ which matches the greedy se- +quence up to the set of cardinality ℓ, where the element a′ is selected despite violating the +budget, +{ �G0 = G0 = ∅, �G1 = G1, . . . , �Gℓ−1 = Gℓ−1, �Gℓ = Gℓ−1 ∪ {a′}} +(32) +and correspondingly enumerate the elements of �Gℓ in the order they were selected, +{�g1 = g1, . . . , �gℓ−1 = gℓ−1, �gℓ = g′}. +(33) +We first prove the following lemma, bounding the marginal gains of the augmented +greedy sequence { �G0, . . . , �Gℓ}. +Lemma 18 For all i ∈ {1, 2, · · · , ℓ}, the following inequality holds: +f( �Gi) − f( �Gi−1) ≥ c(�gi) +B +� +f(OPT) − f( �Gi−1) +� +− 2 +� +1 + +˜Kc(�gi) +B +� +ϵ. +26 + +Proof +Set any i ∈ {1, 2, · · · , ℓ}. +Let {v1, v2, . · · · , vk} = OPT \ �Gi−1. +Note that by +construction (32), we have �Gi−1 = Gi−1. 
+The difference f(OPT) − f( �Gi−1) can be bounded by the marginal gains of elements in +the set difference, +f(OPT) − f( �Gi−1) ≤ +k +� +j=1 +� +f( �Gi−1 ∪ vj) − f( �Gi−1) +� +(Fact 1) += +k +� +j=1 +� +f( �Gi−1 ∪ vj) − f( �Gi−1) − 2ϵ + 2ϵ +� += +k +� +j=1 +c(vj)f( �Gi−1 ∪ vj) − f( �Gi−1) − 2ϵ +c(vj) ++ 2kϵ +≤ +k +� +j=1 +c(vj)f( �Gi−1 ∪ �gi) − f( �Gi−1) + 2ϵ +c(�gi) ++ 2kϵ +(34) += +k +� +j=1 +c(vj)f( �Gi) − f( �Gi−1) + 2ϵ +c(�gi) ++ 2kϵ +(35) +where (34) holds by following. We consider four cases, depending on whether or not ˆf(Gi−1∪ +vj) was evaluated during the iteration i. +• Case 1 ( ˆf(Gi−1 ∪ vj) was evaluated and i < ℓ): At iteration i (necessarily i ≤ L +since no subsets were evaluated in iteration L + 1) with current greedy set Gi−1, +adding the element vj to the current greedy set was feasible, c(vj) ≤ B − c(Gi−1). +Then Greedy+ would have evaluated ˆf(Gi−1 ∪ vj). Since vj was not selected, the +chosen element gi = Gi\Gi−1 must have had a higher surrogate density ˆf(Gi−1∪vj) > +ˆf(Gi−1 ∪ gi), so for i < ℓ, for which �gi = gi by construction (33), (29) implies (34). +• Case 2 ( ˆf(Gi−1 ∪ vj) was evaluated and i = ℓ): By the reasoning in the previous +case, for the item aℓ chosen at iteration ℓ by the greedy process (due to feasibility and +having the highest surrogate density), we still have the bound (29) on true values, +which coupled with our specific construction of �gℓ (30) means +f( �Gℓ−1 ∪ vj) − f( �Gℓ−1) − 2ϵ +c(vj) +≤ f( �Gℓ−1 ∪ ar) − f( �Gℓ−1) + 2ϵ +c(ar) +(by (29)) +< f( �Gℓ−1 ∪ �gr) − f( �Gℓ−1) − 2ϵ +c(�gr) +(by construction (30)) +< f( �Gℓ−1 ∪ �gr) − f( �Gℓ−1) + 2ϵ +c(�gr) +. +• Case 3 ( ˆf(Gi−1 ∪ vj) was not evaluated and i < ℓ): At iteration i < ℓ ≤ L + 1 +with the current greedy set Gi−1, adding the element vj to the current greedy set was +27 + +not feasible, c(vj) > B −c(Gi−1). By construction of the augmented greedy sequence, +only at iteration ℓ was there an infeasible element whose surrogate marginal density +satisfied the inequality (30). Thus, for iterations i < ℓ, Gi−1 = �Gi−1 and Gi = �Gi, so +(34) holds. +• Case 4 ( ˆf(Gi−1 ∪ vj) was not evaluated and i = ℓ): For iteration i = ℓ, with +current greedy set Gi−1, the augmented greedy sequence construction implies (34). +Namely, with i = ℓ, +f( �Gℓ−1 ∪ vj) − f( �Gℓ−1) − 2ϵ +c(vj) +< f( �Gℓ−1 ∪ �gr) − f( �Gℓ−1) − 2ϵ +c(�gr) +(by (31)) +< f( �Gℓ−1 ∪ �gr) − f( �Gℓ−1) + 2ϵ +c(�gr) +. +menaing (34) holds. +We now continue lower bounding f(OPT) − f( �Gi−1), +f(OPT) − f( �Gi−1) ≤ +� +� +k +� +j=1 +c(vj)f( �Gi) − f( �Gi−1) + 2ϵ +c(�gi) +� +� + 2kϵ +(copying (35)) += +� +� +k +� +j=1 +c(vj) +� +� f( �Gi) − f( �Gi−1) + 2ϵ +c(�gi) ++ 2kϵ +≤ B f( �Gi) − f( �Gi−1) + 2ϵ +c(�gi) ++ 2kϵ +(OPT is feasible, so �k +j=1 c(vj) ≤ B) +≤ +B +c(�gi) +� +f( �Gi) − f( �Gi−1) +� ++ 2 +� B +c(�gi) + ˜K +� +ϵ. +(rearranging; k ≤ ˜K) +Multiplying both sides by c(�gi) +B +and rearranging finishes the proof. +We unravel the recurrence in Theorem 18 to lower bound f( �Gi). +Lemma 19 For all i ∈ {1, 2, · · · , ℓ}, +f( �Gi) ≥ +� +�1 − +i� +j=1 +(1 − c(�gj) +B +) +� +� f(OPT) − 2(β + ˜K)ϵ. +Remark 20 The steps to unravel the recurrence to obtain the first term (coefficient of +f(OPT)) is the same as the proof for the analogous result in the offline setting Khuller +et al. (1999). The second term (with ϵ) is due to working with marginal densities of a +28 + +surrogate function ˆf. 
The basic steps for working with that second term is the same as +Krause and Guestrin (2005), though we use a looser bound β; in Krause and Guestrin +(2005) we think there may be a mistake in applying the induction step (with “c(Xi)” fixed +for different i in the proof), though they were loosely bounded with β later on. +Proof +The proof will follow by induction. +We first show the base case i = 1 using +Theorem 18. +f( �G1) = f( �G1) − f( �G0) +(f is normalized; �G0 = ∅) +≥ c(�g1) +B +� +f(OPT) − f( �G0) +� +− 2 +� +1 + +˜Kc(�g1) +B +� +ϵ +(using Theorem 18) += +� +1 − +� +1 − c(�g1) +B +�� +f(OPT) − 2 +� +1 + +˜Kc(�g1) +B +� +ϵ +(36) +where (36) follows from rearranging. For the second term in (36), using that +1 + +˜Kc(�g1) +B +≤ +B +c(�g1) +� +1 + +˜Kc(�g1) +B +� +(since +B +c(�g1) ≥ 1) += +B +c(�g1) + ˜K +≤ +B +cmin ++ ˜K += β + ˜K, +(37) +then +f( �G1) ≥ +� +1 − +� +1 − c(�g1) +B +�� +f(OPT) − 2 +� +1 + +˜Kc(�g1) +B +� +ϵ +(copying (36)) +≥ +� +1 − +� +1 − c(�g1) +B +�� +f(OPT) − 2(β + ˜K)ϵ. +(using (37)) +This completes the base case of i = 1. +29 + +We next consider i > 1. Unraveling the recurrence shown in Theorem 18, +f( �Gi) = f( �Gi) − f( �Gi−1) + f( �Gi−1) +≥ +� +c(�gi) +B +� +f(OPT) − f( �Gi−1) +� +− 2 +� +1 + +˜Kc(�gi) +B +� +ϵ +� ++ f( �Gi−1) +(using Theorem 18) += +�c(�gi) +B +� +f(OPT) − 2 +� +1 + +˜Kc(�gi) +B +� +ϵ + +� +1 − c(�gi) +B +� +f( �Gi−1) +(rearranging) += +� +1 − (1 − c(�gi) +B ) +� +f(OPT) − 2 +� +1 + +˜Kc(�gi) +B +� +ϵ ++ +� +1 − c(�gi) +B +� +f( �Gi−1) +(rearranging) +≥ +� +1 − (1 − c(�gi) +B ) +� +f(OPT) − 2 +� +1 + +˜Kc(�gi) +B +� +ϵ ++ +� +1 − c(�gi) +B +� � +� +� +�1 − +i−1 +� +j=1 +(1 − c(�gj) +B +) +� +� f(OPT) − 2(β + ˜K)ϵ +� +� +(induction step) += +� +�1 − (1 − c(�gi) +B ) + +� +1 − c(�gi) +B +� � +�1 − +i−1 +� +j=1 +(1 − c(�gj) +B +) +� +� +� +� f(OPT) +− 2 +� +1 + +˜Kc(�gi) +B ++ +� +1 − c(�gi) +B +� +(β + ˜K) +� +ϵ +(rearranging) += +� +�1 − +i� +j=1 +(1 − c(�gj) +B +) +� +� f(OPT) +− 2 +� +1 + β − β c(�gi) +B ++ ˜K +� +ϵ. +(38) +For the second term in (38), using that +β c(�gi) +B += +B +cmin +c(�gi) +B +(def. of β) += c(�gi) +cmin +≥ 1, +(39) +then +−2 +� +1 + β − β c(�gi) +B ++ ˜K +� +ϵ = −2 +� +β + ˜K +� +ϵ + 2 +� +β c(�gi) +B +− 1 +� +ϵ +(rearranging) +≥ −2 +� +β + ˜K +� +ϵ. +(using (39)) +30 + +Applying this to (38) completes the proof. +The inequality in Theorem 19 for the augmented greedy set of cardinality ℓ can be +further simplified. We will use the following observations. +Lemma 21 The following inequality holds: +f( �Gℓ) ≥ (1 − 1 +e)f(OPT) − 2(β + ˜K)ϵ. +Proof Applying i = ℓ to Theorem 19 and bounding the coefficient for f(OPT), +f( �Gℓ) ≥ +� +�1 − +ℓ� +j=1 +(1 − c(�gj) +B +) +� +� f(OPT) − 2(β + ˜K)ϵ +≥ +� +�1 − +ℓ� +j=1 +(1 − c(�gj) +c( �Gℓ) +) +� +� f(OPT) − 2(β + ˜K)ϵ +(by construction, c( �Gℓ) > B) +≥ +� +�1 − +ℓ� +j=1 +(1 − c( �Gℓ)/ℓ +c( �Gℓ) +) +� +� f(OPT) − 2(β + ˜K)ϵ +(using Fact 2) += +� +1 − (1 − 1 +ℓ )ℓ +� +f(OPT) − 2(β + ˜K)ϵ +(simplifying) +≥ +� +1 − 1 +e +� +f(OPT) − 2(β + ˜K)ϵ. +(using Fact 3) +Using the aforementioned lemmas, we are now ready to complete the proof for Theorem +3 (robustness of Greedy+ algorithm). We will bound the value of set GL using the results +on the augmented greedy set (32) of cardinality ℓ, and in turn bound the value of the set +S, the final output of Greedy+. +Recall that Greedy+ chooses the set S to be either the best individual element +(based on ˆf) a∗ ← arg maxe∈Ω ˆf(e) or the output of the greedy process GL. 
Let aOPT = +arg maxe∈Ω f(e) denote the element with the highest value under f. Then +f(a∗) ≥ ˆf(a∗) − ϵ +≥ ˆf(aOPT) − ϵ +(by definition of a∗) +≥ f(aOPT) − 2ϵ. +(40) +By construction (32), �Gℓ includes one more element a′ than �Gℓ−1 (and a′ maximizes +(31)). By submodularity, the marginal gain of a′ is bounded by f(a′) and in turn by the +31 + +best individual element based on surrogate function ˆf, +f( �Gℓ−1) + f(aOPT) ≥ f( �Gℓ−1) + f(a′) +(by definition of aOPT) +≥ f( �Gℓ−1) + +� +f( �Gℓ−1 ∪ a′) − f( �Gℓ−1) +� +(by submodularity) += f( �Gℓ−1 ∪ a′) += f( �Gℓ) +(by construction (32)) +≥ (1 − 1 +e)f(OPT) − 2(β + ˜K)ϵ, +(41) +where (41) follows from Theorem 21. +Also by construction (32), the greedy and augmented greedy processes match up to and +including the set of cardinality ℓ − 1, so +f(GL) ≥ f(Gℓ−1) +(monotonicity) += f( �Gℓ−1). +(By construction (32)) +Thus, +f(GL) + f(aOPT) ≥ f( �Gℓ−1) + f(aOPT) +≥ (1 − 1 +e)f(OPT) − 2(β + ˜K)ϵ. +(using (41)) +At least one of f(GL) and f(aOPT) is at least half of the value of the right hand side, +max{f(GL), f(aOPT)} ≥ 1 +2(1 − 1 +e)f(OPT) − (β + ˜K)ϵ +(42) +Thus, for the chosen set S +f(S) ≥ ˆf(S) − ϵ += max{ ˆf(GL), ˆf(a∗)} − ϵ +≥ max{ ˆf(GL), ˆf(aOPT)} − ϵ +(a∗ is the element with largest ˆf value) +≥ max{f(GL) − ϵ, f(aOPT) − ϵ} − ϵ +(element-wise dominance) += max{f(GL), f(aOPT)} − 2ϵ +≥ 1 +2(1 − 1 +e)f(OPT) − (β + ˜K)ϵ − 2ϵ +(from (42)) += 1 +2(1 − 1 +e)f(OPT) − (2 + β + ˜K)ϵ. +which completes the proof. +B.5 Proof for Robustness of PartialEnumeration +Now we analyze the PartialEnumeration algorithm for submodular maximization under +a knapsack constraint proposed in Sviridenko (2004); Khuller et al. (1999). Recall that +Theorem 7 in Section 6 of the main paper states PartialEnumeration is a (1 − 1 +e, 4 + +32 + +2 ˜K + 2β)-robust approximation algorithm for submodular maximization under a knapsack +constraint. +Proof Assume |OPT| > 3, otherwise the algorithm finds a (1, 2)-robust approximation, so it +is also a (1− 1 +e, 2( ˜K+β))-robust approximation for non-trivial cases where ˜K ≥ 1 and β ≥ 1. +Enumerate the elements of the optimal solution as OPT = {Y1, · · · , Ym}, corresponding to +the order they would be selected by the simple greedy algorithm (iteratively selecting the +element with the largest marginal gain, not the largest marginal density) +Yi+1 = arg max +Y ∈OPT +f({Y1, · · · , Yi, Y }) − f({Y1, · · · , Yi}), +(43) +and let R = {Y1, Y2, Y3}. Consider the iteration where the algorithm considers R. Define +the function +f′(A) = f(A ∪ R) − f(R). +(44) +f′ is a non-decreasing submodular set function with f′(∅) = 0, and the optimal solution +(with budget B − c(R)) is OPT \ R since for any set S with cost c(S) ≤ B − c(R), +f′(OPT \ R) = f(OPT ∪ R) − f(R) +(def of f′) += f(OPT) − f(R) +(R ⊆ OPT by construction) +≥ f(S ∪ R) − f(R) += f′(S). +Hence we can apply Greedy+ algorithm to f′ (based on noisy evaluations). Let gℓ be the +first element from OPT \ R which could not be added due to budget constraints, and let +A = {g1, · · · , gℓ−1} be first ℓ−1 elements selected by Greedy+ algorithm. Let G = A∪R. +Using Theorem 21, we get +f′(A ∪ gℓ) ≥ (1 − 1 +e)f′(OPT \ R) − 2(β′ + ˜K′)ϵ, +where β′ = B−c(R) +c′ +min , ˜K′ = min{n − 3, β′} and c′ +min = mine∈Ω\R c(e). Simple calculation can +show that β′ ≤ β and ˜K′ ≤ ˜K. Thus, +f′(A ∪ gℓ) ≥ (1 − 1 +e)f′(OPT \ R) − 2(β + ˜K)ϵ, +From the definition of f′, we have f(G) = f′(A) + f(R). Let ∆ = f′(A ∪ gℓ) − f′(A). We +have +f′(A) + ∆ ≥ (1 − 1 +e)f′(OPT \ R) − 2(β + ˜K)ϵ. 
+(45) +Further observe that elements in OPT are ordered that for all 1 ≤ i ≤ 3, +f({Y1, · · · , Yi}) − f({Y1, · · · , Yi−1}) +≥f({Y1, · · · , Yi−1, gℓ}) − f({Y1, · · · , Yi−1}) +(ordering rule) +≥f(R ∪ A ∪ gℓ) − f(R ∪ A) +({Y1, · · · , Yi−1} ⊆ R when 1 ≤ i ≤ 3 and submodularity) +=f(R ∪ A ∪ gℓ) − f(R) − (f(R ∪ A) − f(R)) +=f′(A ∪ gℓ) − f′(A) +=∆. +33 + +By telescoping sum, f(R) ≥ 3∆. Now we get +f(G) = f(R) + f′(A) +≥ f(R) + (1 − 1 +e)f′(OPT \ R) − 2(β + ˜K)ϵ − ∆ +≥ f(R) + (1 − 1 +e)f′(OPT \ R) − 2(β + ˜K)ϵ − f(R)/3 +≥ (1 − 1 +3)f(R) + (1 − 1 +e)f′(OPT \ R) − 2(β + ˜K)ϵ +≥ (1 − 1 +e) +� +f′(OPT \ R) + f(R) +� +− 2(β + ˜K)ϵ +(e ≤ 3) += (1 − 1 +e)f(OPT) − 2(β + ˜K)ϵ. +(definition of f′) +The output of the algorithm is not necessarily G because the values of the evaluated triplets +are based on surrogate function ˆf. Denote O as the output of the algorithm and denote G′ +as the best evaluated set (with respect to ˆf) with size ℓ + 2 (same as G). We must have +that ˆf(G′) ≥ ˆf(G). Also denote the final set (until violating budget) continuing G′ as G′′. +We have, +f(O) ≥ ˆf(O) − ϵ +≥ ˆf(G′′) − ϵ +(selection rule of the algorithm) +≥ f(G′′) − 2ϵ +≥ f(G′) − 2ϵ +(G′ ⊆ G′′ and monotonicity of f) +≥ ˆf(G′) − 3ϵ +≥ ˆf(G) − 3ϵ +≥ f(G) − 4ϵ +≥ (1 − 1 +e)f(OPT) − (4 + 2β + 2 ˜K)ϵ, +finishing the proof. +C. Proof for Regret of C-ETC +In this section, we prove Theorem 3 in Section 4 of the main paper. We restate the theorem: +For the sequential decision making problem defined in Section 2 and T ≥ 2 +√ +2N +δ +, the expected +cumulative α-regret of C-ETC using an (α, δ)-robust approximation algorithm as subroutine +is at most O +� +δ +2 +3 N +1 +3 T +2 +3 log(T) +1 +3 +� +, where N upper-bounds the number of value oracle queries +made by the offline algorithm A. +C.1 Overview and Notations +We will separate the proof into two cases. The first case is for when the clean event E +happens, which we will show in Theorem 24 happens with high probability. Under the +34 + +clean event, using the fact that the offline algorithm is an (α, δ)-robust approximation, C- +ETC’s chosen set S for the exploitation phase will nonetheless be near-optimal. The second +case is when the complementary event happens, which occurs with low probability. +The proof structure analyzing a high-probability “clean event” where empirical estimates +are sufficiently concentrated around their means is analogous to that for the unstructured +non-combinatorial setting (see for instance, Section 1.2 in (Slivkins, 2019)). However, un- +like the ETC procedure for non-combinatorial MAB problems, C-ETC makes sequences of +decisions during exploration. Furthermore, the combinatorial action space, non-linearity +of the reward function, and lack of extra feedback (like marginal gains) make the problem +challenging. Even in the special setting of deterministic rewards, the standard MAB prob- +lem becomes trivial (finding the largest of n base arms) while the problem we considered +are NP-hard. +Recap that for any (feasible) action A, ft(A) denotes a (random) reward at time t for the +agent taking that action, f(A) denotes the expected value for action A. Let ¯ft(A) denote +the empirical mean of rewards received from playing action A up to and including time t. In +the following, we will drop the subscript t from the empirical mean, writing ¯f(A) when it is +clear from context that action A has been played m times. Also, we use Ai, i ∈ {1, · · · , N} +denotes the i-th action the algorithm samples. We further denote Ti, i ∈ {1, . . . 
, N} as the +time step when the sampling of the i-th action has been determined, or Ai has been played +m times. For notation consistency, we also denote T0 = 0 and TN+1 = T. +C.2 Probability of the Clean Event +Now we define events that are important in our analysis. Recall that for each action A being +explored, the m rewards are i.i.d. with mean f(A) and bounded in [0, 1]. Thus, we can +bound the deviation of the (unbiased) empirical mean ¯f(Ai) from the expected value f(Ai) +for each action played. Specifically, we can use a two-sided Hoeffding bound for bounded +variables. +Remark 22 For convenience, we assume the reward function bounded in [0, 1], but the +result can be generalized to the case where the deviation of the true reward and the expected +reward has a light tailed distribution (e.g., sub-Gaussian). +Lemma 23 (Hoeffding’s inequality) Let X1, · · · , Xn be independent random variables +bounded in the interval [0, 1], and let ¯X denote their empirical mean. Then we have for any +ϵ > 0, +P +��� ¯X − E[ ¯X] +�� ≥ ϵ +� +≤ 2exp +� +−2nϵ2� +. +(46) +By C-ETC, each sampled action will be played the same number of times, denoted by m, +so we consider bounding the probabilities of equal-sized confidence radii rad := +� +log(T)/2m +for all the actions played during exploration. +We next analyze the probability of the event that the empirical means of all actions +played during exploration are concentrated around their statistical means within a radius +rad. Denote the corresponding events for each action played having empirical means con- +centrated around their respective statistical means as Ei, +Ei := +� +{ +�� ¯f(Ai) − f(Ai) +�� < rad}, +i ∈ {1, · · · , N}. +(47) +35 + +Define the clean event E to be the event that the empirical means of all actions played in +the exploration phase are within rad of their corresponding statistical means: +E := E1 ∩ · · · ∩ EN. +(48) +Lemma 24 The probability of the clean event E (48) satisfies: +P(E) ≥ 1 − 2N +T . +Proof +Applying the Hoeffding bound Theorem 23 to the empirical mean ¯f(Ai) of m +rewards for action Ai and choosing ϵ = rad = +� +log(T)/2m gives +P( ¯Ei) = P +��� ¯f(Ai) − f(Ai) +�� ≥ rad +� +≤ 2exp +� +−2mrad2� += 2exp (−2m(log(T)/2m)) += 2exp (− log(T)) += 2 +T . +(49) +Then, we can bound the probability of clean events +P(E) = P(E1 ∩ · · · ∩ EN) += 1 − P( ¯E1 ∪ · · · ∪ ¯EN) +(De Morgan’s Law) +≥ 1 − +N +� +i=1 +P( ¯Ei) +(union bounds) +≥ 1 − 2N +T . +(using (49)) +C.3 Near Optimality of the final S (Exploitation Phase Action) +In Theorem 24, we showed that the clean event E will happen with high probability. When +the clean event E happens, we have | ¯f(A) − f(A)| ≤ rad for all evaluated action A. For an +online algorithm (with output S) using an (α, δ)-robust approximation as subroutine, we +have +f(S) ≥ αf(OPT) − δ · rad. +(50) +C.4 Final Regret +Now we are ready to show the regret of C-ETC (Theorem 3 in Section 4 of the main paper). +36 + +Case 1: clean event E happens +In the first case we analyse the expected regret under the condition that the clean event E +happens. In this section, all expectations will be conditioned on E, but to simplify notation +we will write E[·] instead of E[·|E] in some cases. +First we can break up the expected α-regret conditioned on E into two parts, one for +the first L exploration iterations, and the second for the exploitation iteration. Although +the number of actions taken per iteration and the number of iterations of the greedy is not +known a priori, we can upper bound the duration. 
Also recall that ft(At) is the random +reward for taking action At, which itself is random, depending on empirical means of actions +in earlier iterations. +E[R(T)|E] = αTf(OPT) − +T +� +t=1 +E[ft(At)] += αTf(OPT) − +T +� +t=1 +E[E[ft(At)|At]] +(law of total expectation) += αTf(OPT) − +T +� +t=1 +E[f(At)] +(f(·) defined as expected reward) += +T +� +t=1 +(αf(OPT) − E[f(At)]) +(rearranging) += +N +� +i=1 +m (αf(OPT) − E[f(Ai)]) +� +�� +� +Exploration phase ++ +T +� +t=TN+1 +(αf(OPT) − E[f(At)]) +� +�� +� +Exploitation phase += +N +� +i=1 +m (αf(OPT) − E[f(Ai)]) + +T +� +t=TN+1 +(αf(OPT) − E[f(S)]) . +(51) +Case 1 (clean event): Bounding exploration regret: +We will separately bound the +regret incurred from the exploration and exploitation. We begin with bounding regret from +exploration, +N +� +i=1 +m (αf(OPT) − E[f(Ai)]) +≤ +N +� +i=1 +m (α − 0) +(rewards are bounded in [0, 1]) +≤ Nm. +(52) +Case 1 (clean event): Bounding exploitation regret: +We next bound the regret +incurred during the exploitation iteration. Since the set S used during exploitation is a +random variable, we can take the expectation of (50) (conditioned on event E), to bound +37 + +the expected instantaneous regret for each time step of the exploitation iteration, +αf(OPT) − E[f(S)] ≤ δrad. +(53) +Using a loose bound for the duration of the exploitation iteration, T − TL + 1 < T, +T +� +t=TN+1 +(αf(OPT) − E[f(S)]) ≤ +T +� +t=TN+1 +δrad +(using (53)) +≤ Tδrad. +(54) +Case 1 (clean event): Bounding total regret: +Then the expected cumulative regret +(51) can be bounded as +E[R(T)|E] = +N +� +i=1 +m (αf(OPT) − E[f(Ai)]) + +T +� +t=TN+1 +(αf(OPT) − E[f(S)]) (copying (51)) +≤ Nm + Tδrad +(using (52), (54)) +Plugging in the formula for the confidence radius rad = +� +log(T)/2m, we have +E[R(T)|E] ≤ Nm + Tδ +� +log(T)/2m +We want to optimize m, the number of times each action is played. Denoting the regret +bound (55) as a function of m +g(m) = Nm + Tδ +� +log(T)/2m, +(55) +then +g′(m) = N − 1 +2Tδ +� +log(T)/2m−3/2. +(56) +Setting g′(m) = 0 and solving for m, +m∗ = δ2/3T 2/3 log(T)1/3 +2N2/3 +. +(57) +We next check the second derivative, +g′′(m) = 3 +4δT +� +log(T)/2m−5/2. +(58) +For positive values of m, g′′(m) > 0, thus g(m) reaches a minimum at (57). +Since m is the number of times actions are played, we (trivially) need m ≥ 1 and m to +be an integer. We choose +m† = +� +δ2/3T 2/3 log(T)1/3 +2N2/3 +� +. +(59) +38 + +Since from (58) we have that g′′(m) > 0 for positive m, g(m∗) ≤ g(m†). For T ≥ 2 +√ +2N +δ +, +we have m∗ ≥ 1. +Plugging (59) back in to (55), +E[R(T)|E] ≤ m†N + Tδ +� +log(T)/2m† +((55) with m† samples for each action) += ⌈m∗⌉N + Tδ +� +log(T)/2⌈m∗⌉ +≤ ⌈m∗⌉N + Tδ +� +log(T)/2m∗ +(Since ⌈m∗⌉ ≥ m∗) +≤ 2m∗N + Tδ +� +log(T)/2m∗ +(Since m∗ ≥ 1, ⌈m∗⌉ ≤ 2m∗) += 2δ2/3T 2/3 log(T)1/3 +2N2/3 +N ++ Tδ +� +log(T)/2 +� +δ2/3T 2/3 log(T)1/3 +2N2/3 +�−1/2 +(using (57)) += 3δ2/3N1/3T 2/3 log(T)1/3 +(60) += O +� +δ +2 +3 N +1 +3 T +2 +3 log(T) +1 +3 +� +. +In conclusion, the expected α-regret of C-ETC using an (α, δ)-robust approximation as +subroutine is upper bounded by (60) if the clean event E happens. +Case 2: clean event E does not happen +We next derive an upper bound for the expected α-regret for case that the event E does +not happen. By Theorem 24, +P( ¯E) = 1 − P(E) ≤ 2N +T . +Since the reward function ft(·) is upper bounded by 1, the expected α-regret incurred under +¯E for a horizon of T is at most T, +E[R(T)| ¯E] ≤ T. 
(61)
Putting it all together
Combining Cases 1 and 2, we have
E[R(T)] = E[R(T) | E] · P(E) + E[R(T) | Ē] · P(Ē)    (law of total expectation)
≤ 3δ^{2/3} N^{1/3} T^{2/3} log(T)^{1/3} · 1 + T · (2N/T)    (using (60), Theorem 24, and (61))
= O(δ^{2/3} N^{1/3} T^{2/3} log(T)^{1/3}).
This concludes the proof.

Algorithm 2 Online Greedy for Opaque Feedback Model (OGo)
Input: set of base arms Ω, horizon T, cost c(a) for each arm, budget B
Initialize n ← |Ω|, cmin ← min_{a∈Ω} c(a), β ← B/cmin, γ ← n^{1/3} β (log(n)/T)^{1/3}, ϵ ← sqrt(β log(n)/(γT))
Initialize ω_1 ← ones(β, n)
for t ∈ [1, · · · , T] do
    S_t ← ∅
    l ← zeros(β, n)    // loss
    Randomly sample a value ξ ∼ Uniform([0, 1])
    if ξ ≤ γ then
        e ∼ Uniform({1, · · · , β})
        for i ∈ [1, · · · , e − 1] do    // experts before e exploit
            Select an arm a with probability ω_t[i, a] / Σ_j ω_t[i, j], re-sampling if a ∈ S_t
            S_t ← S_t ∪ {a} with probability cmin/c(a); otherwise S_t is unchanged
        end for
        a ∼ Uniform({1, · · · , n} \ S_t)    // expert e explores
        S_t ← S_t ∪ {a}
        Play action S_t, observe f_t(S_t)
        Update l[i, j] ← cmin f_t(S_t)/c(a) for i = e and all j ≠ a    // feed cmin f_t(S_t)/c(a) back to expert e, associated with action a
        Update ω_{t+1}[i, j] ← ω_t[i, j] exp(−ϵ l[i, j]) for all pairs of i and j
    else    // exploitation, with probability 1 − γ
        for i ∈ [1, · · · , β] do    // every expert exploits
            Select an arm a with probability ω_t[i, a] / Σ_j ω_t[i, j], re-sampling if a ∈ S_t
            S_t ← S_t ∪ {a} with probability cmin/c(a); otherwise S_t is unchanged
        end for
        Play action S_t, observe f_t(S_t)
        ω_{t+1}[i, j] ← ω_t[i, j]    // a loss of 0 is fed back to every expert-action pair, so no update
    end if
end for

D. Implementation of Algorithm OGo
In this section we describe implementation details and parameter selection for the OGo algorithm of Streeter and Golovin (2008). The choice of exploration probability is given by the original paper: γ = n^{1/3} β (log(n)/T)^{1/3}, where β = B/cmin. Note that the original paper uses B instead of β, because it assumes the minimum cost is 1; here we generalize to arbitrary non-negative costs. The parameter ϵ is the learning rate for the Randomized Weighted Majority (WMR) expert algorithm (Arora et al., 2012). It is chosen by setting the derivative of the regret upper bound to zero, which gives ϵ = sqrt(log(n)/T_e), where T_e is the time spent updating expert e. Since the algorithm explores with probability γ and there are β expert algorithms, we have T_e ≈ γT/β, so we pick ϵ = sqrt(β log(n)/(γT)). In the experiments there are many cases where the chosen γ is large, or even larger than 1, so we cap the exploration probability γ at 1/2 to avoid exploring too much. Note that, unlike the hard budget in our setting, OGo only requires the budget to be satisfied in expectation, so in general it may choose sets that exceed the budget. Algorithm 2 gives the pseudocode of our implementation of OGo.
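As a concrete illustration of the parameter choices just described, the following is a minimal Python sketch (assuming NumPy); the function name is ours, and the cap of 1/2 on the exploration probability is the one stated above.

import numpy as np

def ogo_parameters(n, costs, budget, horizon):
    # Hyperparameters for the OGo implementation described above (a sketch; names are ours).
    c_min = float(np.min(costs))                 # minimum arm cost
    beta = budget / c_min                        # beta = B / c_min
    gamma = n ** (1 / 3) * beta * (np.log(n) / horizon) ** (1 / 3)
    gamma = min(gamma, 0.5)                      # cap the exploration probability at 1/2
    eps = np.sqrt(beta * np.log(n) / (gamma * horizon))   # WMR learning rate
    return beta, gamma, eps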
E. Comments on Lower bounds of Submodular CMAB
For the setting we explore in this paper, with stochastic (or even adversarial) knapsack-constrained combinatorial MAB with submodular expected rewards and only bandit feedback, it remains an open question whether Õ(T^{1/2}) expected cumulative α-regret is possible (ignoring n and β). Both Streeter and Golovin (2008) and Niazadeh et al. (2021) analyze lower bounds for the adversarial setting. However, Streeter and Golovin (2008) obtain bounds for 1-regret (it is NP-hard in the offline setting to obtain an approximation ratio better than 1 − 1/e). Niazadeh et al. (2021) obtain Ω̃(T^{2/3}) lower bounds for the harder setting where feedback is only available during “exploration” rounds chosen by the agent, who incurs an associated penalty.

F. Dealing with Small Time Horizons in Experiments
In Section 6, we used N = K̃n as an upper bound on the number of function evaluations for both C-ETC-K and C-ETC-Y, where n is the number of base arms and K̃ is an upper bound on the cardinality of any feasible set. When the time horizon T is small, it is possible that the exploration phase will not finish, because the formula optimizing m (the number of plays for each action queried by A) uses a loose bound on the exploitation time. When this is the case, we select the largest m (closest to the formula) for which we can guarantee that exploration will finish. Recall that for C-ETC-K and C-ETC-Y, the number of oracle calls can only be upper bounded in advance.
We first calculate m† using (59):
m† = ⌈ δ^{2/3} T^{2/3} log(T)^{1/3} / (2 K̃^{2/3} n^{2/3}) ⌉.
Note that a (slightly tighter) upper bound on the number of subsets evaluated during the exploration phase (with K̃ bounding the number of iterations of the greedy process) is
N ≤ n + (n − 1) + · · · + (n − K̃ + 1) = (n − K̃/2 + 1/2) K̃.
We compare (n − K̃/2 + 1/2) K̃ m† with T.
• Case 1. If (n − K̃/2 + 1/2) K̃ m† < T, C-ETC can finish exploring, and we select m = m†.
• Case 2. If (n − K̃/2 + 1/2) K̃ m† ≥ T, the algorithm may not be able to finish exploring. In this case we find a new m so that exploration is guaranteed to finish: we select the largest m (closest to m†) such that the exploration time is upper bounded by T,
m = T / ((n − K̃/2 + 1/2) K̃).
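The following is a minimal Python sketch of this two-case selection of m, under the assumptions stated here: the function name is ours, and we round the Case 2 ratio down to an integer (and keep it at least 1), which the text above leaves implicit.

import math

def choose_m(delta, n, K_tilde, horizon):
    # m-dagger from (59): ceil( delta^{2/3} T^{2/3} log(T)^{1/3} / (2 K_tilde^{2/3} n^{2/3}) )
    m_dagger = math.ceil(
        delta ** (2 / 3) * horizon ** (2 / 3) * math.log(horizon) ** (1 / 3)
        / (2 * K_tilde ** (2 / 3) * n ** (2 / 3))
    )
    # Tighter bound on the number of subsets evaluated during exploration
    num_evals = (n - K_tilde / 2 + 0.5) * K_tilde
    if num_evals * m_dagger < horizon:
        return m_dagger                          # Case 1: exploration finishes with m = m-dagger
    # Case 2: shrink m so exploration is guaranteed to finish within T
    return max(1, math.floor(horizon / num_evals))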
G. Basic Facts
Fact 1 For a monotonically non-decreasing submodular set function f defined over subsets of Ω, we have for arbitrary subsets A, B ⊆ Ω,
f(B) − f(A) ≤ Σ_{j ∈ B\A} [f(A ∪ {j}) − f(A)].
Fact 2 (Khuller et al., 1999) For x_1, · · · , x_n ∈ R_+ such that Σ_{i=1}^n x_i = A, the function 1 − Π_{i=1}^n (1 − x_i/A) achieves its minimum at x_1 = x_2 = · · · = x_n = A/n.
Fact 3 For k ≥ 1,
1 − (1 − 1/k)^k ≥ 1 − 1/e.

H. Other Related Work for Adversarial CMAB with Knapsack Constraints
Streeter and Golovin (2008) propose and analyze an algorithm for adversarial CMAB with submodular rewards, full-bandit feedback, and a knapsack constraint (though the constraint holds only in expectation, taken over randomness in the algorithm). We discuss this in more detail in the supplemental material, here only highlighting a few key points; we also use it as a baseline in our experiments in Section 7. The authors adapted a simpler greedy algorithm than the one we adapt (Khuller et al., 1999), using an ϵ-greedy exploration-type framework. We provide evidence in our experiments that their algorithm requires large horizons to learn. The offline algorithm they adapted achieves an approximation ratio of (1 − 1/e) for budgets that exactly match the cost used up by the greedy solution, but otherwise does not achieve a constant approximation (Khuller et al., 1999).
In (Golovin et al., 2014), the authors propose an algorithm for the adversarial setting with submodular rewards when there is a matroid constraint (neither knapsack nor matroid constraints are special cases of the other).

I. Related Work on Stochastic Submodular CMAB with Semi-Bandit Feedback
There are also a number of works that require additional “semi-bandit” feedback. For combinatorial MAB with submodular rewards, a common type of semi-bandit feedback is marginal gains (Lin et al., 2015; Yue and Guestrin, 2011b; Yu et al., 2016; Takemori et al., 2020b), which enable the learner to take actions of maximal cardinality or budget, receive a corresponding reward, and gain information not just on the set but on individual elements. In the full-bandit setting we consider, to greedily build a solution we need to spend time taking small-cardinality actions to estimate their quality, incurring regret.

J. Experiments with Song Recommendation
We test our methods on the application of song recommendation using the Million Song Dataset (Bertin-Mahieux et al., 2011). In this problem, the agent aims to recommend a bundle of songs that is liked by as many users as possible.
Data Set Description and Experiment Details
From the Million Song Dataset, we extract the 20 most popular songs and the 100 most active users. As in Yue and Guestrin (2011a), we model the system as having a set of topics (or genres) G with |G| = d, and for each item e ∈ Ω there is a feature vector x(e) := (P_g(e))_{g∈G} ∈ R^d that represents the information coverage on the different genres. For each genre g, we define the probabilistic coverage function f_g(S) = 1 − Π_{e∈S} (1 − P_g(e)) and define the reward function f(S) = Σ_i w_i f_i(S) with linear coefficients w_i. The vector w := [w_1, . . . , w_d] represents the user’s preferences over genres. In calculating P_g(e) and w, we use the same formulas as for w̄(e, g) and θ∗ in Hiranandani et al. (2020). Like Takemori et al. (2020a), we define the cost of a song as its length in seconds. For each user, the stochastic reward of a set S is sampled from a Bernoulli distribution with parameter f(S); for the total reward, we take the average over all users. The plots report statistics taken over 10 runs.
Results and Discussion
Figures 2a and 2b show average cumulative regret curves for C-ETC-K (in blue), C-ETC-Y (in orange), and OGo (in green) for different horizons T when the budget constraint B is 500 and 800, respectively. Figures 2c and 2d are instantaneous reward plots over a single horizon T = 215,443. Again, C-ETC significantly outperforms OGo for all time horizons and budgets considered. We again estimated the slopes for both methods on log-log scale plots. Over the horizons tested, OGo’s cumulative regret (averaged over ten runs) has a growth rate above 0.85. The growth rates of C-ETC-K for budgets 500 and 800 are 0.70 and 0.73, respectively; the growth rates of C-ETC-Y for budgets 500 and 800 are 0.70 and 0.71, respectively.
Figure 2: Plots for the song recommendation example. (a) and (b) compare cumulative regret as a function of the time horizon T. (c) and (d) show moving averages (window size 100) of the instantaneous reward as a function of t. The gray dashed lines in (a) and (b) represent y = aT^{2/3} for various values of a, for visual reference. The gray dashed lines in (c) and (d) represent the expected reward of the action chosen by an offline greedy algorithm.
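To make the reward construction above concrete, the following is a minimal Python sketch of the probabilistic coverage objective and the per-user Bernoulli reward it induces, assuming NumPy; the array layout (P[e, g] = P_g(e)) and the function names are our own illustration, not code from the paper.

import numpy as np

def expected_reward(S, P, w):
    # f(S) = sum_g w_g * f_g(S), with f_g(S) = 1 - prod_{e in S} (1 - P_g(e))
    coverage = 1.0 - np.prod(1.0 - P[list(S), :], axis=0)   # f_g(S) for every genre g
    return float(w @ coverage)

def sample_reward(S, P, w, rng=None):
    # One user's stochastic reward: a Bernoulli draw with parameter f(S)
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.binomial(1, min(1.0, expected_reward(S, P, w))))

Here P is an n × d array of per-genre coverage probabilities and w a length-d preference vector (so f(S) stays in [0, 1] when the weights sum to at most one); the total reward reported in the experiments averages such draws over all users.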