diff --git "a/MNAyT4oBgHgl3EQfs_kD/content/tmp_files/2301.00584v1.pdf.txt" "b/MNAyT4oBgHgl3EQfs_kD/content/tmp_files/2301.00584v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/MNAyT4oBgHgl3EQfs_kD/content/tmp_files/2301.00584v1.pdf.txt" @@ -0,0 +1,6681 @@ +Selective Conformal Inference with FCR Control +Yajie Baoa, Yuyang Huob, Haojie Rena and Changliang Zoub∗ +aSchool of Mathematical Sciences, Shanghai Jiao Tong University +Shanghai, P.R. China +bSchool of Statistics and Data Science, Nankai University +Tianjin, P.R. China +January 3, 2023 +Abstract +Conformal inference is a popular tool for constructing prediction intervals (PI). We consider here +the scenario of post-selection/selective conformal inference, that is PIs are reported only for individuals +selected from an unlabeled test data. To account for multiplicity, we develop a general split conformal +framework to construct selective PIs with the false coverage-statement rate (FCR) control. We first +investigate the Benjamini and Yekutieli (2005)’s FCR-adjusted method in the present setting, and show +that it is able to achieve FCR control but yields uniformly inflated PIs. We then propose a novel solution +to the problem, named as Selective COnditional conformal Predictions (SCOP), which entails performing +selection procedures on both calibration set and test set and construct marginal conformal PIs on the +selected sets by the aid of conditional empirical distribution obtained by the calibration set. Under +a unified framework and exchangeable assumptions, we show that the SCOP can exactly control the +FCR. More importantly, we provide non-asymptotic miscoverage bounds for a general class of selection +procedures beyond exchangeablity and discuss the conditions under which the SCOP is able to control +the FCR. As special cases, the SCOP with quantile-based selection or conformal p-values-based multiple +testing procedures enjoys valid coverage guarantee under mild conditions. Numerical results confirm the +effectiveness and robustness of SCOP in FCR control and show that it achieves more narrowed PIs over +existing methods in many settings. +Keywords: Conditional empirical distribution; Distribution-free; Non-exchangeable conditions; Post- +selection inference; Prediction intervals; Split conformal. +∗Corresponding Author: nk.chlzou@gmail.com +1 +arXiv:2301.00584v1 [stat.ME] 2 Jan 2023 + +1 +Introduction +To improve the prediction performance in modern data, many sophisticated machine learning algorithms +including various “black-box” models are proposed. While often witnessing empirical success, quantifying +prediction uncertainty is one of the major issues for interpretable machine learning. Conformal inference +(Vovk et al., 1999, 2005) provides a powerful and flexible tool to quantify the uncertainty of predictions. +Consider a typical setting that we observe one labeled data set Dl = {(Xi, Yi)}2n +i=1 and a set of unla- +belled/test samples Du = {Xi}2n+m +i=2n+1 whose outcomes {Yi}2n+m +i=2n+1 are unobserved. Generally, suppose all +(Xi, Yi) ∈ X × Y are i.i.d from some unknown distribution, and µ(x) := Y | X = x as the prediction model +associated with (X, Y ), which is usually estimated by the labeled data Dl. For any Xj ∈ Du and a given +miscoverage level α, standard conformal prediction methods (Lei et al., 2018), yield a prediction interval (PI) +with distribution-free coverage guarantee, PIα(Xj), +P(Yj ∈ PIα(Xj)) ≥ 1 − α, +under independent and identically distributed (i.i.d) (or exchangeable data) assumptions. 
With the development of big data, making predictive inference on all available data ($\mathcal{D}_u$) is either unnecessary or inefficient in many applications. For example, in recruitment decisions, only selected viable candidates get into the interview process (Faliagka et al., 2012; Shehu and Saeed, 2016). In drug discovery trials, researchers select promising candidates for further clinical trials based on their predicted activity (Carracedo-Reboredo et al., 2021; Dara et al., 2021). Related applications also appear in financial investment and scientific discovery (Jin and Candès, 2022). In such problems, the most common workflow is to first select a subset of interest through some statistical/machine learning algorithm, and then perform statistical inference only on the selected samples.

Formally, letting $\hat{\mathcal{S}}_u \subseteq \{2n+1, \ldots, 2n+m\}$ be the selected subset, our goal is to construct a PI of $Y_j$ for each $j \in \hat{\mathcal{S}}_u$. As pointed out by Benjamini and Yekutieli (2005), ignoring the multiplicity in the construction of post-selection intervals results in distorted average coverage. In the context of post-selection inference, in which confidence intervals for multiple selected parameters/variables are reported, Benjamini and Yekutieli (2005) pioneered the false coverage-statement rate (FCR) criterion to account for multiplicity. The FCR, an analog of the false discovery rate (FDR), can readily be adapted to the present conformal inference setting. It is defined as the expected ratio of the number of reported PIs failing to cover their respective true outcomes to the total number of reported PIs, that is,
$$\mathrm{FCR} := \mathbb{E}\left[\frac{|\{j \in \hat{\mathcal{S}}_u : Y_j \notin \mathrm{PI}_j\}|}{\max\{|\hat{\mathcal{S}}_u|, 1\}}\right], \qquad (1)$$
where $\mathrm{PI}_j$ is the PI for the selected sample $j \in \hat{\mathcal{S}}_u$. Benjamini and Yekutieli (2005) provided a selection-agnostic method which adjusts the confidence level by multiplying $\alpha$ by a quantity related to the proportion of selected candidates among all candidates, and then constructs marginal confidence intervals at the adjusted level for each selected candidate. We hereafter call this the FCR-adjusted method. Accordingly, one may expect the FCR-adjusted PIs to enjoy valid FCR control. However, due to the dependence structure among the $\mathrm{PI}_{|\hat{\mathcal{S}}_u|\alpha/m}(X_j)$'s, the results in Benjamini and Yekutieli (2005) are not directly applicable in the setting of conformal inference. Please refer to Section 2.1 for detailed discussions and rigorous theory. We notice that Weinstein and Ramdas (2020) also discussed the selective inference problem under the framework of conformal prediction. The authors suggested using the FCR-adjusted method; however, they did not provide theoretical or empirical investigations.

While the FCR-adjusted approach achieves FCR control and is widely used, it is generally known to yield uniformly inflated confidence intervals (Weinstein et al., 2013). This is because, when calculating the noncoverage probabilities of the confidence intervals, the adjusted intervals do not take the selection event into account. Along this line, Weinstein et al. (2013), Zhao and Cui (2020) and Zhao (2022) further proposed methods to narrow the adjusted confidence intervals by incorporating more selection information. Among others, Fithian et al. (2014), Lee et al.
(2016) and Taylor and Tibshirani (2018) proposed constructing conditional confidence intervals for each selected variable and showed that the selective error rate can be controlled given that the selected set equals some deterministic subset. However, those methods either require tractable conditional distribution assumptions or are only applicable to particular prediction algorithms, such as under normality assumptions or for the LASSO, which limits their applicability in conformal inference. Fortunately, by virtue of the availability of $\mathcal{D}_l$, distribution/model-agnostic conditional prediction intervals with theoretical guarantees can be achieved.

1.1 Our contributions

In this paper, we develop a novel conformal framework, named Selective COnditional conformal Predictions (SCOP), to construct post-selection prediction intervals while controlling the FCR. Our method stems from split conformal inference (Lei et al., 2018; Fithian and Lei, 2020), where the labeled data $\mathcal{D}_l$ is split into two disjoint parts: one serves as the training set for obtaining a prediction model $\hat\mu(X)$, and the other serves as the calibration set for estimating the distribution of the discrepancy between $Y$ and $\hat\mu(X)$. The key ingredient of our proposal is to perform a pre-specified selection procedure on both the calibration set and the test set, and to construct marginal conformal PIs on the selected sets with the help of the conditional empirical distribution obtained from the calibration set. The proposed SCOP procedure is model- and distribution-agnostic, in the sense that it can wrap around any prediction algorithm with commonly used selection procedures to construct PIs.

The main contributions of the paper can be summarized as follows:

• Firstly, we investigate the FCR-adjusted method in the setting of conformal inference and show that it is able to achieve FCR control under mild conditions, which lays the foundation for our subsequent development of SCOP.

• Secondly, under a unified framework and exchangeability assumptions, we show that SCOP exactly controls the FCR at the target level.

• Thirdly, we provide non-asymptotic miscoverage bounds for a general class of selection procedures beyond exchangeability, termed ranking-based procedures. This broadens the scope of SCOP in both theoretical guarantees and practical use. To address the non-exchangeability between the post-selection test set and the calibration set, we introduce a virtual post-selection calibration set in our proofs, and then quantify the conditional miscoverage gap between the virtual calibration and the real calibration in SCOP. This new technique may be of independent interest for conformal prediction with non-exchangeable data.

• Finally, we illustrate the easy coupling of SCOP with commonly used prediction algorithms. Numerical experiments indicate that it yields more accurate FCR control than existing methods, while offering narrower prediction intervals.

1.2 Connections to existing works

Post-selection inference. Post-selection inference on a large number of variables has attracted considerable research attention. Besides the references mentioned above, a relevant direction is the splitting-based strategy for high-dimensional inference.
The number of variables is first reduced to a manageable size using one part of the data, and confidence intervals or significance tests are then constructed from low-dimensional estimates computed on the other part of the data with the selected variables. See Wasserman and Roeder (2009), Rinaldo et al. (2019), Du et al. (2021) and the references therein. One potentially related work is Chen and Bien (2020), in which the authors construct confidence intervals for regression coefficients after removing potential outliers from the data. Our paradigm differs substantially from those works, as we focus on post-selection inference for sample selection rather than variable selection, and existing work on variable selection is difficult to extend to the present problem due to its requirements on model or distributional assumptions.

Conformal prediction. The building block of SCOP is the conformal inference framework, which has been well studied in many settings, including non-parametric regression (Lei et al., 2013), quantile regression (Romano et al., 2019), high-dimensional regression (Lei et al., 2018) and classification (Sadinle et al., 2019; Romano et al., 2020), among others. More comprehensive reviews can be found in Shafer and Vovk (2008), Zeni et al. (2020) and Angelopoulos and Bates (2021). Conventionally, conformal PIs enjoy a distribution-free marginal coverage guarantee under the assumption that the data are exchangeable. However, exchangeability may be violated in practice, and the violation is more severe in post-selection conformal inference because the selection procedure might be determined by the labelled data, the test data, or both. In such situations, one particularly difficult issue is that the selected set $\hat{\mathcal{S}}_u$ is random and has a complex dependence structure with the labelled data and test data. Conformal inference beyond exchangeability has attracted attention (Tibshirani et al., 2019; Lei et al., 2021; Candès et al., 2021). In particular, Barber et al. (2022) proposed a general framework for conformal inference when the data cannot be treated as exchangeable and theoretically characterized the coverage deviation from the exchangeable case. However, how to decouple the dependence to achieve FCR control in the present framework remains a challenge.

Taking a different but related multiple-testing perspective, Bates et al. (2021) proposed a method to construct conformal p-values with data splitting and applied it to detect outliers with finite-sample FDR control. Zhang et al. (2022) extended that method and proposed a Jackknife implementation combined with automatic model selection. Jin and Candès (2022) considered a scenario where one aims to select individuals of interest from the test sample and proposed a conformal p-value based method to control the FDR. Those existing works are not concerned with the construction of PIs, which differs essentially from our focus.

1.3 Organization and notations

The remainder of this paper is organized as follows. We introduce the FCR-adjusted prediction and SCOP for valid FCR control in Section 2. Section 3 presents the theoretical properties of SCOP for ranking-based procedures. Numerical results and real-data examples are presented in Section 4. Section 5 concludes the paper, and the technical proofs are relegated to the Supplementary Material.

Notations. For a positive integer $n$, we use $[n]$ to denote the index set $\{1, 2, \ldots, n\}$.
Let $A = \{A_i : i = 1, \ldots, n\}$ be a set of $n$ real numbers, and let $\mathcal{S} \subseteq [n]$ be an index subset. We use $A^{\mathcal{S}}_{(\ell)}$ to denote the $\ell$-th smallest value in $\{A_i : i \in \mathcal{S}\}$, and $\mathbf{1}\{\cdot\}$ to denote the indicator function. For a real random sequence $X_n$ and a non-negative deterministic sequence $a_n$, we write $X_n = O_p(a_n)$ if for any $\epsilon > 0$ there exists some constant $C > 0$ such that $\mathbb{P}(|X_n| > C a_n) \leq \epsilon$. Throughout, notations with subscript $c$ or $u$ refer to quantities depending on the calibration set or the test set, respectively.

2 Selective conditional conformal prediction

Denote the index sets of the labelled data $\mathcal{D}_l$ and the test data $\mathcal{D}_u$ by $\mathcal{L} = \{1, \ldots, 2n\}$ and $\mathcal{U} = \{2n+1, \ldots, 2n+m\}$. The main prediction method studied in this paper is built upon the split conformal framework (Vovk et al., 2005; Lei et al., 2018), also called "inductive conformal prediction". That is, we randomly split $\mathcal{D}_l$ into two disjoint parts, the training set $\mathcal{D}_t$ and the calibration set $\mathcal{D}_c$, with $n$ samples each. We first train a prediction model $\hat\mu(X)$ on $\mathcal{D}_t$, and then compute the empirical quantiles of the residuals $R_i = |Y_i - \hat\mu(X_i)|$ on the calibration set $\mathcal{D}_c$. For $X_j \in \mathcal{D}_u$, the $(1-\alpha)$-marginal conformal PI is
$$\mathrm{PI}^{\mathrm{M}}_j = \left[\hat\mu(X_j) - Q_{\mathcal{C}}(1-\alpha),\ \hat\mu(X_j) + Q_{\mathcal{C}}(1-\alpha)\right], \qquad (2)$$
where $Q_{\mathcal{C}}(1-\alpha)$ is the $\lceil(1-\alpha)(n+1)\rceil$-st smallest value in $R_{\mathcal{C}} = \{R_i = |Y_i - \hat\mu(X_i)| : i \in \mathcal{C}\}$. Under the i.i.d. (or, more generally, exchangeable) assumption on $\mathcal{D}_c \cup \{(X_j, Y_j)\}$, the marginal PI in (2) enjoys the coverage guarantee $\mathbb{P}(Y_j \notin \mathrm{PI}^{\mathrm{M}}_j) \leq \alpha$.

Let $g : \mathcal{X} \to \mathbb{R}$ be a plausible score function, which can be user-specified or estimated from the training data $\mathcal{D}_t$. A particular selection procedure $\mathcal{S}$ can be applied to $g(X_i)$ for $i \in \mathcal{U}$ to find the samples of interest. For simplicity, denote $T_i = g(X_i)$; the $X_i$'s with smaller values of $T_i$ tend to be chosen. Denote the selected set by $\hat{\mathcal{S}}_u = \{i \in \mathcal{U} : T_i \leq \hat\tau\}$, where $\hat\tau$ is the threshold. Different selection procedures $\mathcal{S}$ can be chosen from different perspectives, and we group the selection threshold $\hat\tau$ into three types.

• (Fixed threshold) $\hat\tau$ is user-specified or independent of the whole data. For example, $\hat\tau = t$, where $t$ is either known a priori or obtained from a process independent of $\mathcal{D}_c \cup \mathcal{D}_u$.

• (Self-driven threshold) $\hat\tau$ depends only on the scores $\{T_i : i \in \mathcal{U}\}$. This type includes Top-K selection, which chooses the $K$ individuals with the smallest $T_i$ values in the test set, and quantile-based selection, which chooses a given proportion of individuals with the smallest $T_i$ values in the test set (Fithian and Lei, 2020).

• (Calibration-assisted selection) $\hat\tau$ relies on the calibration set. For example, $\hat\tau$ may be a quantile of the true responses $Y_i$ in the calibration set, or a quantile based on both the calibration and test sets. In particular, one may employ multiple testing procedures to achieve error rate control, such as FDR control via the Benjamini–Hochberg (BH) procedure (Benjamini and Hochberg, 1995). For this type, $\{T_i : i \in \mathcal{C}\}$ is required to approximate the distribution of $\{T_i : i \in \mathcal{U}\}$.

Our goal is to construct conformal PIs for the selected subset $\hat{\mathcal{S}}_u$ with FCR control at level $\alpha \in (0, 1)$.

2.1 Adjusted conformal prediction

We first adapt Benjamini and Yekutieli (2005)'s FCR-adjusted method to the present setting. Define
$$M^j_{\min} := \min_{y}\left\{|\hat{\mathcal{S}}^{T_j \leftarrow y}_u| : j \in \hat{\mathcal{S}}^{T_j \leftarrow y}_u\right\},$$
where $\hat{\mathcal{S}}^{T_j \leftarrow y}_u$ denotes the selected subset when $T_j$ is replaced by the value $y$. The FCR-adjusted conformal PIs
amount to marginally constructing wider $1 - \alpha M^j_{\min}/m$ level PIs instead of the $1-\alpha$ level in (2), i.e.,
$$\mathrm{PI}^{\mathrm{AD}}_j = \left[\hat\mu(X_j) - Q_{\mathcal{C}}(1 - \alpha M^j_{\min}/m),\ \hat\mu(X_j) + Q_{\mathcal{C}}(1 - \alpha M^j_{\min}/m)\right], \quad j \in \hat{\mathcal{S}}_u. \qquad (3)$$
Notice that, given $\hat\mu(\cdot)$ and $\hat{\mathcal{S}}_u$, the $\mathrm{PI}^{\mathrm{AD}}_j$'s are not independent of each other because they all rely on the empirical quantile obtained from $\mathcal{D}_c$; therefore the proofs in Benjamini and Yekutieli (2005) do not extend readily to our setting. The following result demonstrates that the FCR-adjusted approach successfully controls the FCR for any selection threshold that is independent of the calibration set given the training set.

Proposition 2.1. Suppose that, given $\mathcal{D}_t$, $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$ are independent random variables and the selection threshold $\hat\tau$ is independent of $\mathcal{D}_c$. Then the FCR value of the FCR-adjusted method in (3) satisfies $\mathrm{FCR}^{\mathrm{AD}} \leq \alpha$.

For many plausible selection rules, such as fixed-threshold selection, $M^j_{\min}$ can be replaced by the cardinality of the selected subset $|\hat{\mathcal{S}}_u|$. In practice, for ease of computation, one may prefer this simplification, even though it lacks a theoretical guarantee for many data-dependent selection rules.

Figure 1: The densities of $R_i$ for $i \in \mathcal{D}_c$ (in blue), $R_i$ for $i \in \hat{\mathcal{S}}_c$ (in green) and $R_j$ for $j \in \hat{\mathcal{S}}_u$ (in red). There are $2n = 400$ labeled data and $m = 200$ test data generated from a linear model with heterogeneous noise, where the details of the model are given in Section 4.1. The selection rule is $\hat{\mathcal{S}} = \{k : \hat\mu(X_k) \leq -1\}$.

The FCR-adjusted method is known to be quite conservative (Weinstein et al., 2013) because it does not incorporate the selection event into the calculation. Take Top-K selection as an intuitive example. The selected set $\hat{\mathcal{S}}_u$ is fixed with $|\hat{\mathcal{S}}_u| = K$, and the FCR can be written as
$$\mathrm{FCR} = \frac{1}{K}\sum_{j \in \mathcal{U}} \mathbb{P}\left(j \in \hat{\mathcal{S}}_u,\ Y_j \notin \mathrm{PI}_j\right). \qquad (4)$$
Since the marginal $\mathrm{PI}^{\mathrm{AD}}_j$ reaches the $1 - \alpha K/m$ confidence level for any fixed $K$, the FCR-adjusted method achieves FCR control via
$$\mathrm{FCR}^{\mathrm{AD}} = \frac{1}{K}\sum_{j \in \mathcal{U}} \mathbb{P}\left(j \in \hat{\mathcal{S}}_u,\ Y_j \notin \mathrm{PI}^{\mathrm{AD}}_j\right) \leq \frac{1}{K}\sum_{j \in \mathcal{U}} \mathbb{P}\left(Y_j \notin \mathrm{PI}^{\mathrm{AD}}_j\right) \leq \alpha,$$
where the first inequality might be rather loose. A simple yet effective remedy is to use conditional calibration.
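For concreteness, the following is a minimal sketch (not the authors' code) of the FCR-adjusted construction in (3), using the simplification $M^j_{\min} = |\hat{\mathcal{S}}_u|$ discussed above; `predict` is any fitted regression function and the boolean mask `selected_test`, marking $\hat{\mathcal{S}}_u$, is assumed to be produced by whatever selection rule is in use.

```python
import numpy as np

def fcr_adjusted_pi(predict, X_calib, y_calib, X_test, selected_test, alpha=0.1):
    """FCR-adjusted PIs as in (3), with M_min^j replaced by |S_u-hat|."""
    m = len(X_test)
    n_sel = int(np.sum(selected_test))
    alpha_adj = alpha * n_sel / m                # adjusted miscoverage level
    resid = np.sort(np.abs(y_calib - predict(X_calib)))
    n = len(resid)
    k = int(np.ceil((1 - alpha_adj) * (n + 1)))
    # Cap at the largest residual when k > n (the exact rule gives an
    # infinite interval in that corner case).
    q = resid[min(k, n) - 1]                     # Q_C(1 - alpha |S_u-hat| / m)
    center = predict(X_test[selected_test])
    return center - q, center + q
```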
2.2 Selective conditional conformal prediction (SCOP)

We start by decomposing the FCR according to the contribution of each sample in the selected set $\hat{\mathcal{S}}_u$, given as $\mathbb{P}(Y_j \notin \mathrm{PI}_j \mid j \in \hat{\mathcal{S}}_u)\,\mathbb{P}(j \in \hat{\mathcal{S}}_u)$. Notice that the FCR is naturally controlled at level $\alpha$ if the conditional control $\mathbb{P}(Y_j \notin \mathrm{PI}_j \mid j \in \hat{\mathcal{S}}_u) \leq \alpha$ holds, which sheds light on the construction of conditional conformal PIs.

In the regime of conformal inference, the conditional uncertainty of $|Y_j - \hat\mu(X_j)|$ given $j \in \hat{\mathcal{S}}_u$ can be reliably approximated using the calibration set $\mathcal{D}_c$, enabling us to construct model/distribution-agnostic conditional PIs. To be specific, we apply the selection algorithm $\mathcal{S}$ to the fitted score values $\{T_i = g(X_i) : i \in \mathcal{C}\}$ and obtain the post-selection calibration set $\hat{\mathcal{S}}_c = \{i \in \mathcal{C} : T_i \leq \hat\tau\}$ with the same threshold $\hat\tau$. Since $\hat{\mathcal{S}}_c$ is formed via the same selection criterion as $\hat{\mathcal{S}}_u$, we can use the residuals $R_i$ for $i \in \hat{\mathcal{S}}_c$ to approximately characterize the conditional uncertainty of $R_j$ for $j \in \hat{\mathcal{S}}_u$. To visualize this effect, we consider a linear model with heterogeneous noise, where we use ordinary least squares for prediction and select $\hat{\mathcal{S}}_u = \{j \in \mathcal{U} : \hat\mu(X_j) \leq -1\}$. Figure 1 displays the densities of $R_i$ for $i \in \mathcal{D}_c$, of $R_i$ for $i \in \hat{\mathcal{S}}_c$, and of $R_j$ for $j \in \hat{\mathcal{S}}_u$. The selection procedure significantly distorts the distribution of the residuals, but the conditional uncertainty on $\hat{\mathcal{S}}_u$ is well approximated by that on $\hat{\mathcal{S}}_c$.

The conditional conformal PI for $j \in \hat{\mathcal{S}}_u$ can accordingly be constructed as
$$\mathrm{PI}^{\mathrm{SCOP}}_j = \left[\hat\mu(X_j) - Q_{\hat{\mathcal{S}}_c}(1-\alpha),\ \hat\mu(X_j) + Q_{\hat{\mathcal{S}}_c}(1-\alpha)\right], \qquad (5)$$
where $Q_{\hat{\mathcal{S}}_c}(1-\alpha)$ is the $\lceil(1-\alpha)(|\hat{\mathcal{S}}_c|+1)\rceil$-st smallest value in $R_{\hat{\mathcal{S}}_c} = \{R_i : i \in \hat{\mathcal{S}}_c\}$. We refer to this procedure as Selective COnditional conformal Prediction (SCOP) and summarize it in Algorithm 1.

The following theorem shows that SCOP controls the FCR at $\alpha$ for exchangeable selection procedures. Further, if the selection scores $T_i$ are continuous (or almost surely distinct), we obtain a lower bound on the FCR value, guaranteeing that SCOP is nearly exact up to $O(n^{-1})$.

Theorem 1. Suppose $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$ are exchangeable random variables, and the threshold $\hat\tau$ is also exchangeable with respect to $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$. For each $j \in \mathcal{U}$, the conditional miscoverage probability is bounded by
$$\mathbb{P}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \leq \alpha. \qquad (6)$$
Further, the FCR value of the SCOP algorithm is controlled: $\mathrm{FCR}^{\mathrm{SCOP}} \leq \alpha$. In addition, if $T_i$ follows a continuous distribution for $i \in \mathcal{C} \cup \mathcal{U}$ and $\mathbb{P}(|\hat{\mathcal{S}}_u| > 0) = 1$, we also have
$$\mathbb{P}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \geq \alpha - \frac{1}{n+1}$$
and $\mathrm{FCR}^{\mathrm{SCOP}} \geq \alpha - \frac{1}{n+1}$.

Under the exchangeability assumption, these FCR results match the marginal miscoverage results for original conformal PIs (Vovk et al., 2005). The theorem relies on exchangeability in two ways: the fitted selection scores $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$ are exchangeable, and the selection threshold $\hat\tau$ is assumed to keep the same value when swapping $T_j$ and $T_k$ for any $j, k \in \mathcal{C} \cup \mathcal{U}$. The former is commonly assumed in conformal inference and holds easily when the data are i.i.d. given $\mathcal{D}_t$ (Lei et al., 2018). The latter imposes restrictions on the selection procedure and is fulfilled by several practical thresholds. The simplest case is the fixed threshold. Another popular example is $\hat\tau$ being a quantile of $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$. However, many selection procedures are excluded, such as Top-K selection: there the threshold $\hat\tau$ is determined only by the test data $\mathcal{U}$, which does not treat the data points from the calibration and test sets symmetrically. We will next explore the effectiveness of the proposed SCOP for more general selection procedures.

Algorithm 1 Selective COnditional conformal Prediction (SCOP)

Input: Labeled set $\mathcal{D}_l$, test set $\mathcal{D}_u$, selection procedure $\mathcal{S}$, target FCR level $\alpha \in (0, 1)$.

Step 1 (Splitting and training) Split $\mathcal{D}_l$ into a training set $\mathcal{D}_t$ and a calibration set $\mathcal{D}_c$ of equal size $n$. Fit the prediction model $\hat\mu(\cdot)$ and the score function $g$ (if needed) on $\mathcal{D}_t$.

Step 2 (Selection) Compute the scores $T_{\mathcal{C}} = \{T_i = g(X_i) : i \in \mathcal{C}\}$ and $T_{\mathcal{U}} = \{T_i = g(X_i) : i \in \mathcal{U}\}$. Apply the selection procedure $\mathcal{S}$ to $T_{\mathcal{C}} \cup T_{\mathcal{U}}$ and obtain the threshold value $\hat\tau$. Obtain the post-selection subsets $\hat{\mathcal{S}}_u = \{i \in \mathcal{U} : T_i \leq \hat\tau\}$ and $\hat{\mathcal{S}}_c = \{i \in \mathcal{C} : T_i \leq \hat\tau\}$.

Step 3 (Calibration) Compute the residuals $R_{\hat{\mathcal{S}}_c} = \{R_i = |Y_i - \hat\mu(X_i)| : i \in \hat{\mathcal{S}}_c\}$. Find the $\lceil(1-\alpha)(|\hat{\mathcal{S}}_c|+1)\rceil$-st smallest value of $R_{\hat{\mathcal{S}}_c}$, $Q_{\hat{\mathcal{S}}_c}(1-\alpha)$.

Step 4 (Construction) For each $j \in \hat{\mathcal{S}}_u$, construct $\mathrm{PI}^{\mathrm{SCOP}}_j = [\hat\mu(X_j) - Q_{\hat{\mathcal{S}}_c}(1-\alpha),\ \hat\mu(X_j) + Q_{\hat{\mathcal{S}}_c}(1-\alpha)]$.

Output: Prediction intervals $\{\mathrm{PI}^{\mathrm{SCOP}}_j : j \in \hat{\mathcal{S}}_u\}$.
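As a reference sketch of Steps 2-4 of Algorithm 1 (assuming the threshold $\hat\tau$ returned by the selection procedure $\mathcal{S}$ is already available), the core of SCOP fits in a few lines of Python; `predict` and `score` stand in for the fitted $\hat\mu$ and $g$ from Step 1 and are placeholders, not library functions.

```python
import numpy as np

def scop_pi(predict, score, X_calib, y_calib, X_test, tau, alpha=0.1):
    """Steps 2-4 of Algorithm 1 (SCOP), given a selection threshold tau.

    Selection is applied to BOTH sets with the same threshold, and the
    conformal quantile is computed on the post-selection calibration
    residuals only.
    """
    sel_c = score(X_calib) <= tau                # S_c-hat
    sel_u = score(X_test) <= tau                 # S_u-hat
    resid = np.sort(np.abs(y_calib[sel_c] - predict(X_calib[sel_c])))
    n_c = len(resid)
    k = int(np.ceil((1 - alpha) * (n_c + 1)))
    q = resid[k - 1] if k <= n_c else np.inf     # Q_{S_c-hat}(1 - alpha)
    center = predict(X_test[sel_u])
    return sel_u, center - q, center + q         # mask and PI endpoints
```

The design mirrors the intuition behind Figure 1: calibrating on $\hat{\mathcal{S}}_c$ rather than all of $\mathcal{C}$ targets the distribution of residuals conditional on being selected.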
Remark 2.1. In predictive inference, several works have considered approximately constructing the conditional PI (Chernozhukov et al., 2021; Feldman et al., 2021), i.e.,
$$\mathbb{P}\left(Y_j \notin \mathrm{PI}(X_j) \mid X_j = x\right) \leq \alpha, \quad \text{for any } x \in \mathcal{X}. \qquad (7)$$
However, it is well known that achieving "fully" conditional validity as in (7) is impossible in the distribution-free regime (Lei et al., 2013; Foygel Barber et al., 2020). Our conditional miscoverage control in (6) is a weaker guarantee than (7), since we only condition on the selection events. For more discussion of these two conditional guarantees, we refer to Appendix B of Weinstein and Ramdas (2020). SCOP leverages the post-selection calibration set to approximate the selective conditional distribution of the residuals, which helps achieve better conditional coverage. In addition, the conditional calibration of SCOP provides an anti-conservative lower bound on the FCR value in the continuous case.

3 Ranking-based selection

In this section, we consider a general class of selection procedures, named ranking-based selection, and discuss the conditions under which the proposed SCOP is able to control the FCR. In Sections 3.1 and 3.2, we discuss FCR control for self-driven and calibration-assisted selection procedures, respectively. Then, in Section 3.3, we demonstrate the effectiveness of the SCOP procedure when selection procedures based on conformal p-values are used.

We begin with some general assumptions and notations. For simplicity, suppose $T_i \in [0, 1]$ and that the selection algorithm $\mathcal{S}$ applied to $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$ outputs a ranking threshold $\hat\kappa \in [m]$. That is, the selection threshold is $\hat\tau = T^{\mathcal{U}}_{(\hat\kappa)}$, the $\hat\kappa$-th smallest value in $T_{\mathcal{U}} = \{T_j : j \in \mathcal{U}\}$. The selected subset of the test set can then be rewritten as
$$\hat{\mathcal{S}}_u = \left\{j \in \mathcal{U} : T_j \leq T^{\mathcal{U}}_{(\hat\kappa)}\right\}. \qquad (8)$$
The ranking-based procedure in (8) covers many practical examples, such as Top-K selection, quantile-based selection, step-up procedures (Fithian and Lei, 2020) and the well-known BH procedure¹ (Benjamini and Hochberg, 1995).

With ranking-based selection, we have $|\hat{\mathcal{S}}_u| = \hat\kappa$, which is usually random and coupled with each test sample $X_j \in \mathcal{D}_u$. To decouple the dependence, we introduce Lemma 1, which controls the FCR by conditioning on the leave-one-out data set $\mathcal{D}_{u,-j}$, the test set $\mathcal{D}_u$ without sample $j$. Denote by $\mathbb{E}_{\mathcal{D}_{u,-j}}[\cdot]$ and $\mathbb{P}_{\mathcal{D}_{u,-j}}(\cdot)$ the conditional expectation and probability given $\mathcal{D}_{u,-j}$. Let $\hat\kappa^{j \leftarrow t_u}$ be the ranking threshold obtained from the selection algorithm $\mathcal{S}$ after replacing $T_j$ with some deterministic value $t_u \in [0, 1]$.

Lemma 1. Suppose $|\hat{\mathcal{S}}_u| > 0$ almost surely and $\hat\kappa = \hat\kappa^{j \leftarrow t_u}$ holds for any $j \in \hat{\mathcal{S}}_u$. If the conditional false coverage probability satisfies
$$\left|\mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}_j \mid j \in \hat{\mathcal{S}}_u\right) - \alpha\right| \leq \Delta(\mathcal{D}_{u,-j}), \qquad (9)$$
where $\Delta(\mathcal{D}_{u,-j})$ depends only on the data set $\mathcal{D}_{u,-j}$, then we have
$$|\mathrm{FCR} - \alpha| \leq \mathbb{E}\left[\frac{1}{|\hat{\mathcal{S}}_u|}\sum_{j \in \hat{\mathcal{S}}_u} \Delta(\mathcal{D}_{u,-j})\right].$$

The leave-one-out technique often appears in the literature on FDR control under dependence (Heesen and Janssen, 2015; Fithian and Lei, 2020; Luo et al., 2022). The key is to decompose the FCR into a sum of conditional miscoverage probabilities of each candidate given the other test samples, i.e.,
$$\mathrm{FCR} = \mathbb{E}\left[\sum_{j \in \mathcal{U}} \frac{1}{\hat\kappa^{j \leftarrow t_u}}\, \mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}_j \mid j \in \hat{\mathcal{S}}_u\right)\, \mathbb{P}_{\mathcal{D}_{u,-j}}\left(j \in \hat{\mathcal{S}}_u\right)\right].$$
We may regard the term $\Delta(\mathcal{D}_{u,-j})$ in (9) as the individual FCR contribution of the $j$-th candidate in $\hat{\mathcal{S}}_u$. The detailed proof of Lemma 1 is deferred to Appendix C.1.

Next, we introduce two universal assumptions under which the conditional false coverage probability bound in (9) holds, allowing us to control the FCR of SCOP with ranking-based selection. Denote the cumulative distribution functions (CDFs) of $R_i$ and $T_i$ by $F_R(\cdot)$ and $F_T(\cdot)$, respectively. Let $F_{(R,T)}(\cdot, \cdot)$ be the joint CDF of $(R_i, T_i)$.

Assumption 1. The score function $g$ depends only on the training set. Suppose $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$ and $\{R_i : i \in \mathcal{C} \cup \mathcal{U}\}$ are both i.i.d. continuous random variables. There exists some $\rho \in (0, 1)$ such that
$$\frac{d}{dr} F_{(R,T)}\left(F_R^{-1}(r), F_T^{-1}(t)\right) \geq \rho t$$
holds for any $t \in (0, 1)$ and $r \in (0, 1)$.

¹The BH procedure is also an example of a step-up procedure; see Fithian and Lei (2020).

Assumption 2. There exists some deterministic value $t_u \in [0, 1]$ such that $\hat\kappa^{j \leftarrow t_u} = \hat\kappa$ holds for any $j \in \hat{\mathcal{S}}_u$, and $\hat\kappa^{j \leftarrow t_u} \leq \hat\kappa + I_u$ holds for any $j \in \mathcal{U} \setminus \hat{\mathcal{S}}_u$ and some positive integer $I_u \leq m$.

To facilitate our technical development, Assumption 1 imposes a mild distributional assumption on the joint CDF of $(R_i, T_i)$. It is worth noticing that the same selected sets $\hat{\mathcal{S}}_u$ and $\hat{\mathcal{S}}_c$ are obtained if one applies the ranking-based selection procedure to the transformed scores $\{F_T(T_i) : i \in \mathcal{U}\}$ instead of the scores $\{T_i : i \in \mathcal{U}\}$. Also, the transformed residuals $\{F_R(R_i) : i \in \mathcal{C} \cup \mathcal{U}\}$ preserve the order of the residuals $\{R_i : i \in \mathcal{C} \cup \mathcal{U}\}$ in the conformal coverage control. Therefore, without loss of generality, we may assume $T_i \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}([0,1])$ and $R_i \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}([0,1])$ for $i \in \mathcal{C} \cup \mathcal{U}$ in the theoretical analysis. The condition on the CDF in Assumption 1 then reduces to $\frac{d}{dr} F_{(R,T)}(r, t) \geq \rho t$, which appears quite weak.

For Assumption 2, one can verify that $\hat\kappa = \hat\kappa^{j \leftarrow t_u}$ on the event $\{j \in \hat{\mathcal{S}}_u\}$ in many cases, such as the quantile-based selection procedure and the BH procedure. For selection procedures with a fixed ranking threshold, such as quantile-based and Top-K selection, Assumption 2 is clearly satisfied with $t_u = 0$ and $I_u = 0$. For the BH procedure based on conformal p-values, taking $t_u = 0$ for $j \in \hat{\mathcal{S}}_u$ leads to a smaller p-value $p_j$; by the property of the BH procedure, assigning $p_j$ a smaller value does not change the rejection set for $j \in \hat{\mathcal{S}}_u$ (Fithian and Lei, 2020), and hence $\hat\kappa^{j \leftarrow 0} = \hat\kappa$ for $j \in \hat{\mathcal{S}}_u$. In Section 3.3, we also show that $I_u = O_p(\log m)$ for the BH procedure. From now on, we write $\hat\kappa^{(j)} = \hat\kappa^{j \leftarrow t_u}$ for simplicity.

3.1 FCR control for self-driven selection

When self-driven selection procedures are used, the samples in the selected calibration set $\hat{\mathcal{S}}_c$ and the selected test set $\hat{\mathcal{S}}_u$ are not exchangeable, but the original samples from the calibration set and the test set are. The following theorem provides delicate bounds on the conditional miscoverage gap of SCOP.

Theorem 2. Under Assumptions 1 and 2, for any absolute constant $C \geq 1$, if $8C\log n / (n\, T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}) \leq 1$ holds almost surely, the conditional miscoverage probability can be bounded as
$$\mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \leq \alpha + \Delta(\mathcal{D}_{u,-j}),$$
and
$$\mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \geq \alpha - 2\Delta(\mathcal{D}_{u,-j}),$$
where
$$\Delta(\mathcal{D}_{u,-j}) = \frac{8C\log n}{\rho\, T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)} - I_u)}}\left(\frac{6C\log n}{n\, T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}} + \frac{T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})} - T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)}-1)}}{T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}}\right) + \frac{2\left(T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})} - T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)} - I_u)}\right)}{T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)} - I_u)}}. \qquad (10)$$
Our theorem is closely connected to Theorem 2a of Barber et al. (2022). Both theorems assess how deviations from the "idealized" exchangeability affect the actual miscoverage level. However, the interpretations are very different. Barber et al. (2022) showed that when the test and calibration samples are non-exchangeable, the miscoverage gap can be bounded by an error term involving the total variation distance between the two samples. In SCOP, by contrast, the deviation comes from the possible violation of the similarity between the distributions of $\{R_i : i \in \hat{\mathcal{S}}_c\}$ and $\{R_j : j \in \hat{\mathcal{S}}_u\}$.

Remark 3.1. The technical difficulty in proving Theorem 2 lies in coping with the dependence between $\hat{\mathcal{S}}_c$ and $\hat{\mathcal{S}}_u$. To address this problem, we introduce the virtual post-selection test set $\hat{\mathcal{S}}^{(j)}_u = \{i \in \mathcal{U} : T_i \leq T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}\}$ and the virtual post-selection calibration set $\hat{\mathcal{S}}^{(j)}_c = \{i \in \mathcal{C} : T_i \leq T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}\}$. We denote the corresponding virtual conformal PI constructed from $\hat{\mathcal{S}}^{(j)}_c$ by $\mathrm{PI}_j(\hat{\mathcal{S}}^{(j)}_c)$. For clarity, we rewrite the real conformal PI constructed from $\hat{\mathcal{S}}_c$ in Algorithm 1 as $\mathrm{PI}_j(\hat{\mathcal{S}}_c) \equiv \mathrm{PI}^{\mathrm{SCOP}}_j$. Notice that the threshold $T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}$ and the virtual selected calibration set $\hat{\mathcal{S}}^{(j)}_c$ are independent of the test candidate $j$. Therefore, the test candidate $j$ and a calibration candidate $k$ are exchangeable in the set $\hat{\mathcal{S}}^{(j)}_c \cup \{j\}$ under the selection conditions. It remains to control two conditional miscoverage gaps: $\mathbb{P}_{\mathcal{D}_{u,-j}}(Y_j \notin \mathrm{PI}_j(\hat{\mathcal{S}}^{(j)}_c) \mid j \in \hat{\mathcal{S}}^{(j)}_u) - \alpha$ and $\mathbb{P}_{\mathcal{D}_{u,-j}}(Y_j \notin \mathrm{PI}_j(\hat{\mathcal{S}}_c) \mid j \in \hat{\mathcal{S}}_u) - \mathbb{P}_{\mathcal{D}_{u,-j}}(Y_j \notin \mathrm{PI}_j(\hat{\mathcal{S}}^{(j)}_c) \mid j \in \hat{\mathcal{S}}^{(j)}_u)$; the former can be bounded as in conventional conformal inference.

Our theorem shows that tight control of the deviation term $\Delta(\mathcal{D}_{u,-j})$ in (10) leads to effective FCR control. Next we interpret the bound carefully and present more explicit settings in which the FCR attains, or is very close to, the nominal level. Observe that controlling $\Delta(\mathcal{D}_{u,-j})$ boils down to upper-bounding the difference $T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})} - T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)}-I_u)}$ and lower-bounding the denominator $T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)}-I_u)}$. To guarantee that the denominator stays away from 0, we impose the following assumption on $\hat\kappa$.

Assumption 3. The ranking threshold satisfies $\hat\kappa \geq \gamma m$ for some $\gamma \in (0, 1)$.

The lower bound on $\hat\kappa$ in Assumption 3 is mild and reasonable, since FCR control is extremely difficult when $|\hat{\mathcal{S}}_u|/n = \hat\kappa/n = o(1)$ for a small level $\alpha$. Applying the well-known representation of spacings between consecutive order statistics (cf. Lemma D.1) to $\{T_i\}_{i \in \mathcal{U}\setminus\{j\}}$, together with Assumption 3, we obtain the following FCR control result for self-driven selection procedures.

Theorem 3. Under Assumptions 1-3, if $\gamma > I_u/m$, the FCR value of SCOP with self-driven selection procedures satisfies
$$\mathrm{FCR}^{\mathrm{SCOP}} = \alpha + O\left(\frac{\log^2(n \vee m)}{\rho\gamma(\gamma - I_u/m)}\left(\frac{I_u}{m} + \frac{1}{n}\right)\right).$$

In the asymptotic regime, $\mathrm{FCR}^{\mathrm{SCOP}}$ is exact if $I_u = o(m)$, that is, $\lim_{(n,m)\to\infty} \mathrm{FCR}^{\mathrm{SCOP}} = \alpha$. Recalling that $I_u = 0$ for quantile-based and Top-K selection, Theorem 3 guarantees that the FCR of SCOP with such selection procedures attains the target level at a nearly optimal rate (up to a logarithmic factor).

3.2 FCR control for calibration-assisted selection

For calibration-assisted selection procedures, the analysis is more complex because the ranking threshold $\hat\kappa$ depends also on the calibration set $\mathcal{D}_c$.
This means a more tractable ranking threshold is needed to decouple the dependence on the selected samples and the calibration samples simultaneously. For any $j \in \hat{\mathcal{S}}_u$ and $k \in \mathcal{C}$, let $\hat\kappa^{(j,k)} \equiv \hat\kappa^{j \leftarrow t_u, k \leftarrow t_c}$ be the ranking threshold obtained by replacing $T_j$ with $t_u$ and $T_k$ with $t_c$ simultaneously. The virtual post-selection calibration set is further defined as $\hat{\mathcal{S}}^{(j,k)}_c = \{i \in \mathcal{C}\setminus\{k\} : T_i \leq T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)})}\}$.

The following assumption, an analog of Assumption 2, restricts the change in the ranking threshold after replacing one calibration score.

Assumption 4. There exist some $t_c \in \mathbb{R}$ and some positive integer $I_c \leq m$ such that $\hat\kappa \leq \hat\kappa^{k \leftarrow t_c} \leq \hat\kappa + I_c$ holds for any $k \in \mathcal{C}$.

The following theorem parallels Theorem 2.

Theorem 4. Under Assumptions 1-4, for calibration-assisted selection, the conditional miscoverage probability of SCOP satisfies
$$\mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \leq \alpha + \mathbb{E}_{\mathcal{D}_{u,-j}}\left[\max_k \Delta(\mathcal{D}^{(j,k)})\right],$$
and
$$\mathbb{P}_{\mathcal{D}_{u,-j}}\left(Y_j \notin \mathrm{PI}^{\mathrm{SCOP}}_j \mid j \in \hat{\mathcal{S}}_u\right) \geq \alpha - 2\,\mathbb{E}_{\mathcal{D}_{u,-j}}\left[\max_k \Delta(\mathcal{D}^{(j,k)})\right],$$
where
$$\Delta(\mathcal{D}^{(j,k)}) := \frac{2\, T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)})}\left(R^{\hat{\mathcal{S}}^{(j,k)}_c}_{(U^{(j,k)})} - R^{\hat{\mathcal{S}}^{(j,k)}_c}_{(L^{(j,k)})}\right)}{\left(T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)} - I_u - I_c)}\right)^2} + \frac{4\left(T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)})} - T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)} - I_u - I_c)}\right)}{T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j)})}} + \frac{d^{(j,k)}}{|\hat{\mathcal{S}}^{(j,k)}_c| + 1}, \qquad (11)$$
with $d^{(j,k)} = \sum_{i \in \hat{\mathcal{S}}^{(j,k)}_c} \mathbf{1}\{T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)} - I_c - 1)} < T_i \leq T^{\mathcal{U}\setminus\{j\}}_{(\hat\kappa^{(j,k)})}\}$, $U^{(j,k)} = \lceil(1-\alpha)(|\hat{\mathcal{S}}^{(j,k)}_c| + 2)\rceil + d^{(j,k)}$ and $L^{(j,k)} = \lceil(1-\alpha)(|\hat{\mathcal{S}}^{(j,k)}_c| + 2 - d^{(j,k)})\rceil - 2$.

Remark 3.2. All the terms in (11) are independent of the samples $j \in \hat{\mathcal{S}}_u$ and $k \in \mathcal{C}$. The quantity $d^{(j,k)}$ measures the size difference between the real calibration set $\hat{\mathcal{S}}_c$ and the virtual calibration set $\hat{\mathcal{S}}^{(j,k)}_c$. The term $R^{\hat{\mathcal{S}}^{(j,k)}_c}_{(U^{(j,k)})} - R^{\hat{\mathcal{S}}^{(j,k)}_c}_{(L^{(j,k)})}$ represents the largest possible distance between the corresponding quantiles in $\hat{\mathcal{S}}_c$ and $\hat{\mathcal{S}}^{(j,k)}_c$, which can be bounded within $\hat{\mathcal{S}}^{(j,k)}_c$ conditionally on the data set $\mathcal{D}_{u,-j}$. The remaining parts of $\max_k \Delta(\mathcal{D}^{(j,k)})$ depend on differences between thresholds from $T_{\mathcal{U}\setminus\{j\}}$.

Equipped with the conditional miscoverage gap in Theorem 4, we obtain the FCR control result for SCOP with calibration-assisted selection in the following theorem.

Theorem 5. Under Assumptions 1-4, if $\gamma > (I_c + I_u)/m$, the FCR value of SCOP with calibration-assisted selection satisfies
$$\mathrm{FCR}^{\mathrm{SCOP}} = \alpha + O\left(\frac{\log^2(n \vee m)}{\rho\left(\gamma - (I_c + I_u)/m\right)^2}\left(\frac{I_c + I_u}{m} + \frac{1}{n}\right)\right).$$

Similar to the results for self-driven selection, SCOP controls the FCR around the target value with a small gap. To decouple the dependence between the ranking threshold and the calibration set, an additional term $I_c/m$ appears in Theorem 5, reflecting the effect of replacing one calibration sample. If $I_c \vee I_u = o(m)$, then we may take $m = \exp\{o(n^{1/2})\}$ and have $\lim_{(n,m)\to\infty} \mathrm{FCR}^{\mathrm{SCOP}} = \alpha$.

3.3 Prediction-oriented selection with conformal p-values

We now discuss the implementation of SCOP with a special calibration-assisted selection procedure: selection via multiple testing based on conformal p-values. The concept of the conformal p-value was proposed by Vovk et al. (2005). Like conformal PIs, conformal p-values enjoy model/distribution-free properties. Recently, conformal p-values have been applied to sample selection from a multiple-testing perspective, such as in Bates et al. (2021) and Jin and Candès (2022).
In particular, Jin and Candès (2022) investigated the prediction-oriented selection problem, which aims to select samples whose unobserved outcomes exceed some specified value while controlling the proportion of falsely selected units. This problem can be viewed as the following multiple hypothesis tests: for $i \in \mathcal{U}$ and some $b_0 \in \mathbb{R}$,
$$H_{0,i}: Y_i \geq b_0 \quad \text{v.s.} \quad H_{1,i}: Y_i < b_0.$$
By choosing a monotone function $g_0 : \mathbb{R}^+ \to [0,1]$, one can take the score function as $g(x) = g_0(\hat\mu(x) - b_0)$ and compute the conformity scores $\{T_i = g(X_i) : i \in \mathcal{C} \cup \mathcal{U}\}$.

Denote the null set of calibration samples by $\mathcal{C}_0 = \{i \in \mathcal{C} : Y_i \geq b_0\}$ and its size by $n_0 = |\mathcal{C}_0|$. Given the conformity scores $\{T_i : i \in \mathcal{C}_0\}$ in the calibration set, the conformal p-value for each test data point can be calculated as²
$$p_j := p(X_j) = \frac{1 + |\{i \in \mathcal{C}_0 : T_i \leq T_j\}|}{n_0 + 1}, \quad \text{for } j \in \mathcal{U}. \qquad (12)$$
To control the FDR at a target level $\beta \in (0,1)$, we may apply the BH procedure to $\{p_j : j \in \mathcal{U}\}$ and obtain the rejection set $\hat{\mathcal{S}}_u$.

Proposition 3.1. Let $p^{\mathcal{U}}_{(1)} \leq \cdots \leq p^{\mathcal{U}}_{(m)}$ be the order statistics of the conformal p-values in the test set $\mathcal{U}$. For any $i \in \mathcal{U}$, it holds that $\{p_i \leq p^{\mathcal{U}}_{(\hat\kappa)}\} = \{T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$.

Proposition 3.1 indicates that using the conformal p-values in (12) to obtain $\hat{\mathcal{S}}_u$ is equivalent to using the conformity scores in $T_{\mathcal{U}}$ with the same ranking threshold $\hat\kappa$, that is, $\hat{\mathcal{S}}_u = \{i \in \mathcal{U} : p_i \leq p^{\mathcal{U}}_{(\hat\kappa)}\} \equiv \{i \in \mathcal{U} : T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$. Further, we can obtain the post-selection calibration set as $\hat{\mathcal{S}}_c = \{i \in \mathcal{C} : T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$. Therefore, we can frame BH procedures with conformal p-values as calibration-assisted selection in the sense of Section 3.2.

²Under our continuity assumption, we present the form of the conformal p-value without ties in the conformity scores. For the tie-breaking form, please refer to Bates et al. (2021) for more details.

To study FCR control with selection procedures based on conformal p-values, we consider the more general class of step-up procedures introduced by Fithian and Lei (2020). Let $0 \leq \delta(1) \leq \cdots \leq \delta(m) \leq 1$ denote an increasing sequence of thresholds; the ranking threshold for step-up procedures is chosen as
$$\hat\kappa = \max\left\{r : p^{\mathcal{U}}_{(r)} \leq \delta(r)\right\}, \qquad (13)$$
where $p^{\mathcal{U}}_{(r)}$ is the $r$-th smallest conformal p-value. In particular, the BH procedure takes $\delta(r) = r\beta/m$. We summarize SCOP with step-up selection procedures in Algorithm 2.

Algorithm 2 SCOP under selection with conformal p-values

Input: Training data $\mathcal{D}_t$, calibration data $\mathcal{D}_c$, test data $\mathcal{D}_u$, threshold sequence $\{\delta(r) : r \in [m]\}$.

Step 1 Fit the prediction model $\hat\mu(\cdot)$ and the score function $g(\cdot)$ on $\mathcal{D}_t$. Compute the score values $T_{\mathcal{C}} = \{T_i = g(X_i) : i \in \mathcal{C}\}$ and $T_{\mathcal{U}} = \{T_i = g(X_i) : i \in \mathcal{U}\}$.

Step 2 Compute the conformal p-values $\{p_i : i \in \mathcal{U}\}$ according to (12) based on $\mathcal{D}_{\mathcal{C}_0}$. Apply the BH procedure with target level $\beta$ to $T_{\mathcal{U}}$ and obtain $\hat\kappa$ as in (13). Obtain the post-selection subsets $\hat{\mathcal{S}}_u = \{i \in \mathcal{U} : T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$ and $\hat{\mathcal{S}}_c = \{i \in \mathcal{C} : T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$.

Step 3 Compute the residuals $R_{\hat{\mathcal{S}}_c} = \{R_i = |Y_i - \hat\mu(X_i)| : i \in \hat{\mathcal{S}}_c\}$. Find the $\lceil(1-\alpha)(|\hat{\mathcal{S}}_c|+1)\rceil$-st smallest value of $R_{\hat{\mathcal{S}}_c}$, denoted by $Q_{\hat{\mathcal{S}}_c}(1-\alpha)$.

Step 4 Construct the PI for each $j \in \hat{\mathcal{S}}_u$ as $\mathrm{PI}_j = [\hat\mu(X_j) - Q_{\hat{\mathcal{S}}_c}(1-\alpha),\ \hat\mu(X_j) + Q_{\hat{\mathcal{S}}_c}(1-\alpha)]$.

Output: $\{\mathrm{PI}_j : j \in \hat{\mathcal{S}}_u\}$.

To adapt Assumptions 2 and 4, we can simply take $\hat\kappa^{(j)} = \hat\kappa^{j \leftarrow 0}$ and $\hat\kappa^{(k)} = \hat\kappa^{k \leftarrow 1}$, i.e., replace $T_j$ by 0 for $j \in \mathcal{U}$ and $T_k$ by 1 for $k \in \mathcal{C}$, respectively. From Lemma 1 in Fithian and Lei (2020), we have $\hat\kappa^{(j)} = \hat\kappa$ for any $j \in \hat{\mathcal{S}}_u$ in step-up procedures.
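As a concrete reference for Step 2, the following is a minimal sketch (with our own variable names, not the authors' code) of the conformal p-values in (12) and the step-up threshold in (13); for BH at level $\beta$, the `delta` array would be $\delta(r) = r\beta/m$.

```python
import numpy as np

def conformal_pvalues(T_null_calib, T_test):
    """Conformal p-values as in (12); T_null_calib holds the scores of the
    calibration nulls C_0 = {i : Y_i >= b_0}."""
    T0 = np.sort(np.asarray(T_null_calib))
    n0 = len(T0)
    # #{i in C_0 : T_i <= T_j} via binary search on the sorted null scores
    counts = np.searchsorted(T0, T_test, side="right")
    return (1.0 + counts) / (n0 + 1.0)

def step_up_kappa(pvals, delta):
    """Step-up ranking threshold (13): kappa-hat = max{r : p_(r) <= delta(r)}.

    `delta` must be an increasing array of the same length m as `pvals`;
    e.g. delta = beta * np.arange(1, m + 1) / m recovers the BH procedure.
    """
    p_sorted = np.sort(pvals)
    hits = np.nonzero(p_sorted <= delta)[0]
    return int(hits[-1]) + 1 if hits.size else 0  # 0 means nothing selected
```

Given `kappa = step_up_kappa(pvals, delta)` with `kappa > 0`, the selected sets are $\hat{\mathcal{S}}_u = \{j : T_j \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$ and $\hat{\mathcal{S}}_c = \{i \in \mathcal{C} : T_i \leq T^{\mathcal{U}}_{(\hat\kappa)}\}$, after which Steps 3-4 proceed exactly as in Algorithm 1.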
The next proposition characterizes the magnitudes of $\hat\kappa^{(j)} - \hat\kappa$ for $j \notin \hat{\mathcal{S}}_u$ and of $\hat\kappa^{(k)} - \hat\kappa$ for $k \in \mathcal{C}$.

Proposition 3.2. Suppose $\{X_i : i \in \mathcal{C} \cup \mathcal{U}\}$ are i.i.d. continuous random variables. Let $\Omega(r) = \{\ell \in [n_0] : \delta(r) < \frac{\ell+1}{n_0+1} \leq \delta(r+1)\}$. For step-up procedures (13) using the conformal p-values defined in (12) and any absolute constant $C > 1$:

1. For any $j \in \hat{\mathcal{S}}_u$, we have $\hat\kappa^{(j)} = \hat\kappa$. In addition, for any $j \in \mathcal{U} \setminus \hat{\mathcal{S}}_u$,
$$\hat\kappa^{(j)} - \hat\kappa \leq 12C\log m + \frac{8Cm\log m}{n_0 + 1}\max_{r \geq \lceil\gamma m\rceil} \ldots$$

4 Numerical results

4.1 Simulations

$\cdots - 0.4 + 4(X^{(1)} - 1)\,\mathbf{1}\{X^{(2)} \leq -0.4\}$. The noise is $\epsilon \sim N(0, 1)$ and independent of $X$ too.

We fix the labeled data size $2n = 400$ at first and split it equally into $\mathcal{D}_t$ and $\mathcal{D}_c$. The regression model $\hat\mu(\cdot)$ is fitted on $\mathcal{D}_t$ by ordinary least squares (OLS) for Scenario A, a support vector machine (SVM) for Scenario B, and a random forest (RF) for Scenario C. The SVM and RF are implemented by the R packages ksvm and randomForest with default parameters. In most cases, the selection score is taken to be the prediction value itself, i.e., $T_i = \hat\mu(X_i)$.

We want to select a subset via $\hat{\mathcal{S}}_u = \{i \in \mathcal{U} : T_i \leq \hat\tau\}$, where $\hat\tau$ is the threshold. To illustrate the wide applicability of the proposed method, several selection thresholds $\hat\tau$ are considered.

(1) T-cal(q): the q%-quantile of the true responses $Y$ in the calibration set, i.e., $\hat\tau$ is the q%-quantile of $\{Y_i : i \in \mathcal{D}_c\}$;

(2) T-test(q): the q%-quantile of the predicted responses $\hat\mu(X)$ in the test set, i.e., $\hat\tau = T^{\mathcal{U}}_{(qm/100)}$;

(3) T-exch(q): the q%-quantile of the predicted responses $\hat\mu(X)$ in both the calibration and test sets, i.e., $\hat\tau$ is the q%-quantile of $T_{\mathcal{C}\cup\mathcal{U}} = \{T_i : i \in \mathcal{D}_c \cup \mathcal{D}_u\}$;

(4) T-cons(b0): a pre-determined constant value $b_0$, i.e., $\hat\tau = b_0$;

(5) T-pos(b0, β): the prediction-oriented selection of Jin and Candès (2022), where one selects those test samples with response $Y$ smaller than $b_0$ while controlling the FDR at level $\beta = 0.2$. Here the threshold $\hat\tau$ is computed by the BH procedure with conformal p-values as in Section 3.3;

(6) T-top(K): the K-th smallest value of the predicted responses $\hat\mu(X)$ in the test set, i.e., $\hat\tau = T^{\mathcal{U}}_{(K)}$;

(7) T-clu: a popular choice of threshold based on clustering, where the boundary value of two putative groups is a natural cut point. Specifically, we seek the threshold $\hat\tau$ that minimizes the within-group sum of squares,
$$\hat\tau = \arg\min_{t \in T_{\mathcal{C}\cup\mathcal{U}}} \sum_{i \in \mathcal{S}_1(t)} \left(T_i - \bar{T}_{\mathcal{S}_1(t)}\right)^2 + \sum_{k \in \mathcal{S}^c_1(t)} \left(T_k - \bar{T}_{\mathcal{S}^c_1(t)}\right)^2,$$
where $\mathcal{S}_1(t) = \{i \in \mathcal{C} \cup \mathcal{U} : T_i \leq t\}$, $\mathcal{S}^c_1(t) = (\mathcal{C} \cup \mathcal{U}) \setminus \mathcal{S}_1(t)$ and $\bar{T}_{\mathcal{S}_1(t)}$ is the sample mean over $\mathcal{S}_1(t)$.

Among all the considered threshold choices, only T-exch(q), T-cons(b0) and T-clu satisfy exchangeability with respect to $\{T_i : i \in \mathcal{C} \cup \mathcal{U}\}$. The threshold T-top(K) is a special case of T-test(q) with $K = [qm/100]$. We apply SCOP to construct PIs for the selected individuals in $\hat{\mathcal{S}}_u$ with target FCR level $\alpha = 10\%$. Two benchmarks are included for comparison. The first directly constructs a $(1-\alpha)$-marginal prediction interval as in (2) for each selected sample based on the whole calibration set; we refer to this method as ordinary conformal prediction (OCP) and note that it takes no account of the selection effects. The second is the FCR-adjusted conformal prediction (ACP), which builds a $1 - \alpha|\hat{\mathcal{S}}_u|/m$ level PI as in (3) for each selected individual. Performance is compared in terms of the FCR and the average length (AL) of the constructed PIs over 1,000 repetitions.
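A sketch of how these two metrics can be computed for a single replication is given below (illustrative names, not the authors' code); averaging the false coverage proportion (FCP) over repetitions estimates the FCR in (1).

```python
import numpy as np

def fcp_and_length(y_test, selected, lower, upper):
    """False coverage proportion and average length over the selected PIs.

    `lower`/`upper` are the PI endpoints for the selected test units; the
    per-run FCP averages across runs to the empirical FCR.
    """
    y_sel = y_test[selected]
    if len(y_sel) == 0:
        return 0.0, np.nan        # max{|S_u-hat|, 1} convention in (1)
    miss = (y_sel < lower) | (y_sel > upper)
    return float(miss.mean()), float((upper - lower).mean())
```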
Figure 2: Empirical FCR (%) (top row) and average PI length (bottom row) for the quantile-based thresholds T-cal, T-test and T-exch under Scenarios A-C, with the quantile level q% on the horizontal axis. The black dashed line represents the target FCR level 10%.

We first fix the size of the test data $\mathcal{D}_u$ at $m = 200$ and consider the three quantile-based thresholds T-cal(q), T-test(q) and T-exch(q). Figure 2 displays the estimated FCR and AL of the PIs as the quantile level q% varies from 20% to 100%. Across all settings, it is evident that SCOP delivers quite accurate FCR control and narrower PIs. As expected, OCP yields the same PI length in all settings and controls the FCR only at q% = 100%, the situation in which all test data are included without selection. This is understandable, since OCP builds marginal PIs from the whole calibration set without accounting for the selection procedure, so its PI length is $2Q_{\mathcal{C}}(1-\alpha)$ as in (2). ACP results in considerably conservative FCR levels, and accordingly performs poorly in terms of the AL of the PIs in most settings. This is not surprising, as ACP marginally constructs much wider $1 - \alpha|\hat{\mathcal{S}}_u|/m$ level PIs to ensure FCR control below the target level.

In Table 1, we present the results for the remaining four thresholds: T-cons(b0), T-pos(b0, β), T-top(K) and T-clu. Here we fix the constant $b_0$ for both T-cons(b0) and T-pos(b0, β) at the 30%-quantile of the true responses $Y_i$, and choose the target FDR level β = 20% for T-pos(b0, β) and K = 60 for T-top(K). It can be seen that the FCR levels of SCOP are close to the nominal level, and SCOP achieves satisfactorily narrow PIs under all scenarios. OCP leads to quite different FCR levels but the same average PI length across the different selection thresholds. This corroborates the insight that OCP is unable to give a valid coverage guarantee on the selected individuals. In contrast, the FCRs of ACP are overly conservative, and in turn its PI lengths are considerably inflated.

Table 1: Comparisons of empirical FCR (%) and average length (AL) under different scenarios and thresholds with target FCR α = 10%. The sample sizes of the calibration set and the test set are fixed at n = m = 200.

|            |     | T-con(b0)          | T-pos(b0, 20%)     | T-top(60)          | T-clu              |
|            |     | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  |
| Scenario A | FCR | 10.02 | 14.27 | 4.87 | 7.31 | 13.56 | 2.51 | 9.75 | 15.53 | 4.85 | 9.78 | 10.16 | 5.07 |
|            | AL  | 11.77 | 9.91 | 14.77 | 15.52 | 9.91 | 22.53 | 12.02 | 9.91 | 15.02 | 10.15 | 9.91 | 12.86 |
| Scenario B | FCR | 9.77 | 17.41 | 7.24 | 9.61 | 16.33 | 7.28 | 9.63 | 17.70 | 6.86 | 9.75 | 14.98 | 7.41 |
|            | AL  | 5.86 | 4.70 | 6.43 | 5.73 | 4.70 | 6.25 | 5.95 | 4.70 | 6.53 | 11.88 | 4.70 | 5.99 |
| Scenario C | FCR | 9.74 | 13.07 | 3.62 | 9.70 | 12.06 | 3.88 | 10.03 | 12.77 | 3.92 | 9.89 | 12.66 | 3.67 |
|            | AL  | 5.82 | 5.27 | 7.41 | 5.72 | 5.27 | 7.19 | 5.73 | 5.27 | 7.23 | 5.77 | 5.27 | 7.33 |

Finally, we evaluate the effect of different sizes of the calibration and test sets under Scenario C by varying $n$ and $m$ from 100 to 200. Four selection thresholds are included: T-test(30%), T-pos(−1.2, 20%), T-top(60) and T-clu. The results are reported in Table 2.
Table 2: Empirical FCR (%) and AL values under Scenario C with different combinations of (n, m). The target FCR level is α = 10%.

| (n, m)    |     | T-test(30%)        | T-pos(−1.2, 20%)   | T-top(60)          | T-clu              |
|           |     | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  |
| (100,100) | FCR | 9.70 | 14.54 | 4.49 | 9.31 | 14.48 | 4.40 | 9.89 | 8.95 | 5.41 | 9.73 | 15.03 | 4.34 |
|           | AL  | 7.22 | 6.25 | 8.69 | 7.31 | 6.25 | 8.743 | 6.12 | 6.25 | 7.31 | 7.31 | 6.25 | 8.8 |
| (200,100) | FCR | 9.89 | 12.68 | 3.69 | 9.75 | 12.48 | 3.75 | 10.00 | 8.05 | 4.62 | 10.14 | 13.24 | 3.48 |
|           | AL  | 5.75 | 5.26 | 7.26 | 5.77 | 5.26 | 7.23 | 4.94 | 5.26 | 6.14 | 5.82 | 5.26 | 7.39 |
| (100,200) | FCR | 9.60 | 14.55 | 4.44 | 9.23 | 14.63 | 4.39 | 9.51 | 14.55 | 4.44 | 9.66 | 15.03 | 4.26 |
|           | AL  | 7.32 | 6.30 | 8.72 | 7.41 | 6.30 | 8.75 | 7.34 | 6.30 | 8.72 | 7.37 | 6.30 | 8.85 |
| (200,200) | FCR | 9.88 | 12.35 | 3.83 | 9.69 | 12.18 | 3.81 | 9.80 | 12.35 | 3.83 | 9.76 | 12.71 | 3.64 |
|           | AL  | 5.76 | 5.29 | 7.29 | 5.75 | 5.29 | 7.27 | 5.77 | 5.29 | 7.29 | 5.81 | 5.29 | 7.40 |

We see that all three methods tend to yield narrower PIs as the calibration size $n$ increases. However, SCOP performs much better than OCP and ACP in terms of FCR control across all settings. This clearly demonstrates the efficiency of the proposed SCOP: it is a data-driven method that enables FCR control for a wide range of selection procedures, while tending to build relatively narrow PIs at the $1-\alpha$ level.

4.2 Real data applications

4.2.1 Drug discovery

Early stages of drug discovery aim at finding drug-target pairs with high binding affinity for a specific target from a pool of candidates (Santos et al., 2017). It is important to provide reliable PIs for those drug-target pairs with high predicted binding affinity; after screening, an effective subset of interest can be selected for further clinical trials (Huang et al., 2022). In this example, we apply the proposed SCOP to construct PIs of binding affinities for promising drug-target pairs while achieving FCR control. We consider the DAVIS dataset (Rogers and Hahn, 2010), which contains 25,772 drug-target pairs. Each pair includes the binding affinity, the structural information of the drug compound and the amino acid sequence of the target protein. The drugs and targets are first encoded into numerical features through the Python library DeepPurpose (Huang et al., 2020), and the responses are taken as the log-scale affinities. We randomly sample 2,000 observations as the calibration set and another 2,000 as the test set, and use the remaining observations as the training set to fit a small neural network with 3 hidden layers trained for 5 epochs.

Our goal is to build PIs for drug-target pairs in the test set whose predicted affinities exceed a specific threshold. We consider three different thresholds: the 70%-quantile of the true responses in the calibration set (T-cal(70%)), the 70%-quantile of the predicted affinities in the test set (T-test(70%)), and selecting those drug-target pairs with affinities larger than 9.21 while controlling the FDR at the 0.2 level (T-pos(9.21, 20%)). The target FCR level is α = 10%.

To evaluate the performance of SCOP, we also build PIs for the selected candidates by OCP and ACP. Figure 3 shows boxplots of the FCP and average PI length for the three methods over 100 runs. As illustrated, SCOP has a stable FCP close to the nominal level across all threshold choices. In comparison, both OCP and ACP result in conservative FCP levels.
The lengths of the PIs constructed by OCP and ACP are much wider than those of SCOP. In fact, the responses (log-scale affinities) range over (−5, 10), so such wide PIs would barely provide useful information for further clinical trials.

Figure 3: Boxplots of the values of FCP (%) and PI length for the drug discovery example. The black dashed line represents the target FCR level 10% and the red rhomboid dot denotes the average value.

4.2.2 House price analysis

We apply SCOP to the prediction of house prices of interest. As an economic indicator, a better understanding of house prices can provide meaningful suggestions to researchers and decision makers in the real estate market (Anundsen and Jansen, 2013). In recent decades, business analysts have used machine learning tools to forecast house prices and determine investment strategies (Park and Bae, 2015). In this example, we use our method to build PIs for those house prices which exceed certain thresholds.

We consider a house price prediction dataset from Kaggle³, which contains 4,251 observations after removing missing data. The data record the house price and covariates describing house area, location, building year and so on. In each repetition we randomly sample 1,500 observations and split them equally into training, calibration and test sets. We first train a random forest model to predict the house prices and consider three different thresholds to select test observations with high predicted prices: T-test(70%), T-pos(0.6, 20%) and T-clu. For example, the threshold T-pos(0.6, 20%) means that one selects observations with house prices larger than 0.6 million under FDR control at the 20% level. After selection, we construct PIs with α = 10%. Table 3 reports the empirical FCR level and the lengths of the PIs over 500 replications. We observe that both SCOP and ACP achieve valid FCR control, but SCOP yields narrower PIs than ACP. The FCRs of OCP are much inflated, which implies that many test samples with truly high house prices are not covered. The two examples demonstrate that the proposed SCOP works well for building PIs of selected samples in practical applications.

³The data are available from https://www.kaggle.com/datasets/shree1992/housedata.

Table 3: Empirical FCRs (%) and average lengths of PIs for the house price dataset with α = 10%.

|     | T-test(70%)        | T-pos(0.6, 20%)    | T-clu              |
|     | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  | SCOP | OCP  | ACP  |
| FCR | 9.91 | 21.75 | 9.09 | 9.78 | 34.74 | 7.27 | 10.08 | 39.71 | 8.49 |
| AL  | 1.06 | 0.67 | 1.12 | 1.58 | 0.67 | 2.64 | 1.72 | 0.67 | 1.98 |

5 Concluding remarks

We have investigated the FCR control problem in the scenario of selective conformal inference. The validity of the FCR-adjusted method is verified in the predictive setting, and our proposed SCOP procedure is shown to be widely applicable, with rigorous theoretical guarantees, both for selection with exchangeable thresholds and for non-exchangeable ranking-based selection.

To conclude, we point out several directions for future work. First, we have focused mainly on using the residuals as nonconformity scores for the construction of PIs.
In fact, our framework readily extends to more general nonconformity scores, such as those based on quantile regression (Romano et al., 2019) or distributional regression (Chernozhukov et al., 2021); similar theoretical results remain valid for these more general methods. Second, as split conformal introduces extra randomness from data splitting and reduces the effectiveness of model training, one may consider implementations via the Jackknife and cross-validation to refine the prediction intervals (Barber et al., 2021). SCOP is applicable in such regimes, but certain stability conditions on the predictive algorithms are necessary, and the theoretical guarantees of SCOP there require further investigation. Third, it is of interest to adapt SCOP to the online setting, where one encounters an infinite sequence of samples ordered by time.

References

Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.

André K. Anundsen and Eilev S. Jansen. Self-reinforcing effects between housing prices and credit. Journal of Housing Economics, 22(3):192–212, 2013.

Barry C. Arnold, Narayanaswamy Balakrishnan, and Haikady Navada Nagaraja. A First Course in Order Statistics. SIAM, 2008.

Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Predictive inference with the jackknife+. The Annals of Statistics, 49(1):486–507, 2021.

Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. arXiv preprint arXiv:2202.13415, 2022.

Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, and Matteo Sesia. Testing for outliers with conformal p-values. arXiv preprint arXiv:2104.08279, 2021.

Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 57(1):289–300, 1995.

Yoav Benjamini and Daniel Yekutieli. False discovery rate–adjusted multiple confidence intervals for selected parameters. Journal of the American Statistical Association, 100(469):71–81, 2005.

Emmanuel J. Candès, Lihua Lei, and Zhimei Ren. Conformalized survival analysis. arXiv preprint arXiv:2103.09763, 2021.

Paula Carracedo-Reboredo, Jose Liñares-Blanco, Nereida Rodríguez-Fernández, Francisco Cedrón, Francisco J. Novoa, Adrian Carballal, Victor Maojo, Alejandro Pazos, and Carlos Fernandez-Lozano. A review on machine learning approaches and trends in drug discovery. Computational and Structural Biotechnology Journal, 19:4538–4558, 2021.

Shuxiao Chen and Jacob Bien. Valid inference corrected for outlier removal. Journal of Computational and Graphical Statistics, 29(2):323–334, 2020.

Victor Chernozhukov, Kaspar Wüthrich, and Yinchu Zhu. Distributional conformal prediction. Proceedings of the National Academy of Sciences, 118(48):e2107794118, 2021.

Suresh Dara, Swetha Dhamercherla, Surender Singh Jadav, CH Babu, and Mohamed Jawed Ahsan. Machine learning in drug discovery: a review. Artificial Intelligence Review, pages 1–53, 2021.

Lilun Du, Xu Guo, Wenguang Sun, and Changliang Zou. False discovery rate control under general dependence by symmetrized data aggregation. Journal of the American Statistical Association, pages 1–15, 2021.

Evanthia Faliagka, Kostas Ramantas, Athanasios Tsakalidis, and Giannis Tzimas.
Application of machine +learning algorithms to an online recruitment system. In Proc. International Conference on Internet and +Web Applications and Services, pages 215–220, 2012. +Shai Feldman, Stephen Bates, and Yaniv Romano. Improving conditional coverage via orthogonal quantile +regression. Advances in Neural Information Processing Systems, 34:2060–2071, 2021. +William Fithian and Lihua Lei. Conditional calibration for false discovery rate control under dependence. +arXiv preprint arXiv:2007.10438, 2020. +William Fithian, Dennis Sun, and Jonathan Taylor. Optimal inference after model selection. arXiv preprint +arXiv:1410.2597, 2014. +Rina Foygel Barber, Emmanuel J Candès, Aaditya Ramdas, and Ryan J Tibshirani. The limits of distribution- +free conditional predictive inference. Information and Inference: A Journal of the IMA, 10(2):455–482, 08 +2020. +Philipp Heesen and Arnold Janssen. Inequalities for the false discovery rate (fdr) under dependence. Electronic +Journal of Statistics, 9(1):679–716, 2015. +Kexin Huang, Tianfan Fu, Lucas M Glass, Marinka Zitnik, Cao Xiao, and Jimeng Sun. Deeppurpose: a deep +learning library for drug–target interaction prediction. Bioinformatics, 36(22-23):5545–5547, 2020. +Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao +Xiao, Jimeng Sun, and Marinka Zitnik. Artificial intelligence foundation for therapeutic science. Nature +Chemical Biology, 18(10):1033–1036, 2022. +Ying Jin and Emmanuel J Candès. +Selection by prediction with conformal p-values. +arXiv preprint +arXiv:2210.01408, 2022. +Jason D Lee, Dennis L Sun, Yuekai Sun, and Jonathan E Taylor. Exact post-selection inference, with +application to the lasso. The Annals of Statistics, 44(3):907–927, 2016. +Jing Lei, James Robins, and Larry Wasserman. Distribution-free prediction sets. Journal of the American +Statistical Association, 108(501):278–287, 2013. +Jing Lei, Max G’Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. Distribution-free +predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, +2018. +24 + +Lihua Lei, Emmanuel J Candès, et al. Conformal inference of counterfactuals and individual treatment effects. +Journal of the Royal Statistical Society Series B (Statistical Methodology), 83(5):911–938, 2021. +Yixiang Luo, William Fithian, and Lihua Lei. Improving knockoffs with conditional calibration. arXiv preprint +arXiv:2208.09542, 2022. +Byeonghwa Park and Jae Kwon Bae. Using machine learning algorithms for housing price prediction: The +case of fairfax county, virginia housing data. Expert Systems with Applications, 42(6):2928–2934, 2015. +Rolf-Dieter Reiss. Approximate distributions of order statistics: with applications to nonparametric statistics. +Springer science & business media, 2012. +Alessandro Rinaldo, Larry Wasserman, and Max G’Sell. +Bootstrapping and sample splitting for high- +dimensional, assumption-lean inference. The Annals of Statistics, 47(6):3438–3469, 2019. +David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and +Modeling, 50(5):742–754, 2010. +Yaniv Romano, Evan Patterson, and Emmanuel Candès. Conformalized quantile regression. Advances in +Neural Information Processing Systems, 32:3543–3553, 2019. +Yaniv Romano, Matteo Sesia, and Emmanuel Candès. Classification with valid and adaptive coverage. +Advances in Neural Information Processing Systems, 33:3581–3591, 2020. 
+Mauricio Sadinle, Jing Lei, and Larry Wasserman. Least ambiguous set-valued classifiers with bounded error +levels. Journal of the American Statistical Association, 114(525):223–234, 2019. +Rita Santos, Oleg Ursu, Anna Gaulton, A Patrícia Bento, Ramesh S Donadi, Cristian G Bologa, Anneli +Karlsson, Bissan Al-Lazikani, Anne Hersey, Tudor I Oprea, et al. A comprehensive map of molecular drug +targets. Nature Reviews Drug Discovery, 16(1):19–34, 2017. +Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, +9(3):371–421, 2008. +Muhammad Ahmad Shehu and Faisal Saeed. An adaptive personnel selection model for recruitment using +domain-driven data mining. Journal of Theoretical and Applied Information Technology, 91(1):117, 2016. +Jonathan Taylor and Robert Tibshirani. Post-selection inference for-penalized likelihood models. Canadian +Journal of Statistics, 46(1):41–61, 2018. +Ryan J Tibshirani, Rina Foygel Barber, Emmanuel Candès, and Aaditya Ramdas. Conformal prediction +under covariate shift. Advances in Neural Information Processing Systems, 32:2530–2540, 2019. +25 + +Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Springer +Science & Business Media, 2005. +Volodya Vovk, Alexander Gammerman, and Craig Saunders. Machine-learning applications of algorithmic +randomness. In International Conference on Machine Learning, pages 444–453, 1999. +Larry Wasserman and Kathryn Roeder. High dimensional variable selection. The Annals of Statistics, 37(1): +2178–2201, 2009. +Asaf Weinstein and Aaditya Ramdas. Online control of the false coverage rate and false sign rate. In +International Conference on Machine Learning, pages 10193–10202, 2020. +Asaf Weinstein, William Fithian, and Yoav Benjamini. Selection adjusted confidence intervals with more +power to determine the sign. Journal of the American Statistical Association, 108(501):165–176, 2013. +Gianluca Zeni, Matteo Fontana, and Simone Vantini. Conformal prediction: a unified review of theory and +new challenges. arXiv preprint arXiv:2005.07972, 2020. +Yifan Zhang, Haiyan Jiang, Haojie Ren, Changliang Zou, and Dejing Dou. Automs: Automatic model selection +for novelty detection with error rate control. In Advances in Neural Information Processing Systems, 2022. +Haibing Zhao. General ways to improve false coverage rate-adjusted selective confidence intervals. Biometrika, +109(1):153–164, 2022. +Haibing Zhao and Xinping Cui. Constructing confidence intervals for selected parameters. Biometrics, 76(4): +1098–1108, 2020. +26 + +Supplementary Material for “Selective conformal inference with +FCR control” +A +Auxiliary lemmas +Variants of the following lemma often appears in the conformal inference literature (Vovk et al., 2005; Lei +et al., 2018; Romano et al., 2019; Barber et al., 2021, 2022), which is also called the inflation of quantiles. +Here we restate it in the deterministic form. +Lemma A.1. Let x(⌈n(1−α)⌉) is the ⌈n(1 − α)⌉-smallest value in {xi ∈ R : i ∈ [n]}. Then for any α ∈ (0, 1), +it holds that +1 +n +n +� +i=1 +1 +� +xi > x(⌈n(1−α)⌉) +� +≤ α. +If all values in {xi : i ∈ [n]} are distinct, it also holds that +1 +n +n +� +i=1 +1 +� +xi > x(⌈n(1−α)⌉) +� +≥ α − 1 +n, +Next lemma characterizes the change of order statistics after dropping one of the samples, which is very +useful in the theory of conformal inference (see Lemma 2 in Romano et al. (2019)). +Lemma A.2. 
For almost surely distinct random variables x1, ..., xn, let {x(r) : r ∈ [n]} be order statistics of +{xi : i ∈ [n]}, and {x[n]\{j} +(r) +: r ∈ [n − 1]} be the order statistics of {xi : i ∈ [n] \ {j}}, then we have: +(1) x(k) ≤ x[n]\{j} +(k) +≤ x(k+1). +(2) +� +xj ≤ x(k) +� += {xj ≤ x[n]\{j} +(k) +}. +Lemma A.3. Suppose all values in TC ∪ TU are almost surely distinct, ˆκ(j) = ˆκ holds for any j ∈ ˆSu and +ˆκ(j) ≤ ˆκ + Iu holds for any j ∈ U \ ˆSu, then we have +(1) For any j ∈ U, TU\{j} +(ˆκ(j)−Iu) ≤ TU +(ˆκ); +(2) For any j ∈ ˆSu, TU\{j} +(ˆκ(j)−1) ≤ TU +(ˆκ) ≤ TU\{j} +(ˆκ(j)) . +The proof of Lemma A.3 is deferred to Section D.1. The next two lemmas are corollaries of the well-known +spacing representation of consecutive random variables (c.f. Lemma D.1), and the proofs can be found in D.2 +and D.3. +Lemma A.4. Suppose Ui +i.i.d. +∼ Unif([0, 1]) and let U(1) ≤ · · · ≤ U(n) be the corresponding order statistics. For +any absolute constant C ≥ 1, it holds that +P +� +� +max +0≤ℓ≤n−1 +� +U(ℓ+1) − U(ℓ) +� +≥ +1 +1 − 2 +� +log(n∨m) +n+1 +log(n ∨ m) +n + 1 +� +� ≤ 2(n ∨ m)−C. +27 + +Lemma A.5. Let ˆSc(t) = {i ∈ C : Ti ≤ t}. If +d +drF(R,T )(r, t) ≥ ρt holds, then for any absolute constant C ≥ 1, +we have +P +� +� +max +0≤ℓ≤| ˆ +Sc(t)|−1 +� +R +ˆ +Sc(t) +(ℓ+1) − R +ˆ +Sc(t) +(ℓ) +� +≥ 1 +ρ +1 +1 − 2 +� +C log(n∨m) +| ˆ +Sc(t)|+1 +2C log(n ∨ m) +| ˆSc(t)| + 1 +� +� ≤ 2(n ∨ m)−C. +Lemma A.6 is used to bound the change in the size of the selected calibration set after changing threshold. +We defer the proof to Section D.4. Lemma A.7 is used to lower bound the size of selected calibration set with +arbitrary threshold t ∈ (0, 1), and Lemma A.8 is used to guarantee the threshold in SCOP will be far away +from 0 under Assumption 3. The proofs can be found in Section D.5 and D.6. +Lemma A.6. Let ˆSc(t2) = {i ∈ C : Ti ≤ t2} and Zi = 1 {t1 < Ti ≤ t2} − t2−t1 +t2 +for some 0 ≤ t1 < t2 ≤ 1. +Then we have +P +� +� +1 +| ˆSc(t2)| +������ +� +i∈ ˆ +Sc(t2) +Zi +������ +≥ 2 +� +eC log(n ∨ m) +| ˆSc(t2)| +� +t2 − t1 +t2 ++ 2eC log(n ∨ m) +| ˆSc(t2)| +� +� ≤ 2(n ∨ m)−C. +Lemma A.7. Let ˆSc(t) = {i ∈ C : Ti ≤ t}. For any C ≥ 1, if 8C log(n ∨ m)/(nt) ≤ 1, we have +P +� +| ˆSc(t)| ≥ n · t +2 +� +≤ (n ∨ m)−C. +Lemma A.8. For any fixed γ ∈ (0, 1) and any j ∈ U, if 8C log(n ∨ m)/(mγ) ≤ 1, we have +P +� +TU\{j} +⌈γm⌉ ≤ γ +2 +� +≤ 2(n ∨ m)−C. +B +Proof of the results in Section 2 +B.1 +Proof of Proposition 2.1 +Proof. For simplicity, we assume Dt is fixed. Let Av,r be the event: r PIs are constructed, and v of these do +not cover the corresponding true responses. Let NPIj denote the event that {Yj ̸∈ PIAD(Xj)}. It holds that +P (Av,r) = 1 +v +m +� +j=1 +P(Av,r, NPIj). +This claim is proved by the Lemma 1 in Benjamini and Yekutieli (2005). Note that ∪r +v=1Av,r is a disjoint +union of events such that | ˆSu| = r and |Nu| = v, where Nu := {j ∈ �Su : Yj ̸∈ PIAD +j +} is the set of constructed +PIs not covering their true labels. By the definition of FCR, we have +FCR = +m +� +r=1 +r +� +v=1 +v +r P (Av,r) = +m +� +r=1 +r +� +v=1 +1 +r +m +� +j=1 +P (Av,r, NPIj) = +m +� +r=1 +m +� +j=1 +1 +r P +� +| ˆSu| = r, NPIj +� +. +(B.1) +For each j ∈ ˆSu, we define the following events +M(j) +k +:= +� +M j +min = k +� +for +k = 1, ..., m. +28 + +Recalling the definition of M j +min = miny +� +| ˆSTj←y +u +| : j ∈ ˆSTj←y +u +� +, we have M j +min ≤ | ˆSu| if j ∈ ˆSu. 
Following +the proof of Theorem 1 in Benjamini and Yekutieli (2005) and using the decomposition (B.1), we have +FCR = +m +� +r=1 +m +� +j=1 +1 +r +m +� +l=1 +P +� +| ˆSu| = r, j ∈ ˆSu, M(j) +l , Yj ̸∈ PIAD +j +(Xj) +� +(i) += +m +� +r=1 +m +� +j=1 +r +� +l=1 +1 +r P +� +| ˆSu| = r, j ∈ ˆSu, M(j) +l , Yi ̸∈ PIAD +j +(Xj) +� +≤ +m +� +j=1 +m +� +r=1 +r +� +l=1 +1 +l P +� +| ˆSu| = r, j ∈ ˆSu, M(j) +l , Yi ̸∈ PIAD +j +(Xj) +� +(ii) += +m +� +j=1 +m +� +l=1 +1 +l +m +� +r=l +P +� +| ˆSu| = r, j ∈ ˆSu, M(j) +l , Yj ̸∈ PIAD +j +(Xj) +� += +m +� +j=1 +m +� +k=1 +1 +k P +� +M(j) +k , Yj ̸∈ PIAD +j +(Xj) +� +(iii) += +m +� +j=1 +m +� +k=1 +1 +k P +� +M(j) +k +� +P +� +Yj ̸∈ PIAD +j +(Xj) +� +, +(iv) +≤ +m +� +j=1 +m +� +k=1 +1 +k P +� +M(j) +k +� kα +m = α, +where (i) holds due to Mmin(T(−j)) ≤ | ˆSu|, (ii) follows from the interchange of summations over l and r, (iii) +holds since M (j) +k +is independent of the calibration set Dc and the sample j, and (iv) is true because of the +marginal coverage guarantee that for any j ∈ [m], +P +� +Yj ̸∈ PIAD +j +� +≤ α∗ +j = αMmin(TU\{j}) +m +. +(B.2) +Here we emphasize that the miscoverage probability P(Yj ̸∈ PIAD +j +(Xj)) in (iv) does not depend on the selection +condition since the summation is over [m]. Consequently, we may utilize the exchangeability between the +sample j and the calibration set to verify (B.2). +B.2 +Proof of Theorem 1 +Proof. For any given non-empty subsets Su ⊆ U and Sc ⊆ C, and for any j ∈ Su, if it holds that +α − +1 +n + 1 ≤ P +� +Yj ̸∈ PIj +��� ˆSc = Sc, ˆSu = Su +� +≤ α. +(B.3) +29 + +Following Lemma 2.1 in Lee et al. (2016), we can decompose the FCR value with non-empty ˆSu as +FCR0 = E +�� +j∈ ˆ +Su 1 {Yj ̸∈ PIj} +| ˆSu| +���| ˆSu| ̸= 0 +� += E +� +E +�� +j∈ ˆ +Su 1 {Yj ̸∈ PIj} +| ˆSu| +��� ˆSu, ˆSc +� ���| ˆSu| ̸= 0 +� += E +� +� 1 +| ˆSu| +� +j∈ ˆ +Su +P +� +j ̸∈ PIj +�� ˆSu, ˆSc +� ��| ˆSu| ̸= 0 +� +� +≤ E +� +� 1 +| ˆSu| +� +j∈ ˆ +Su +α +��| ˆSu| ̸= 0 +� +� += α, +where the last inequality holds due to the right hand side of (B.3). The FCR value can be controlled by +FCR = FCR0 ×P +� +| ˆSu| > 0 +� +≤ α. +Similarly, using the left hand side of (B.3) we can also obtain that FCR0 ≥ α− +1 +n+1. Further, if P(| ˆSu| > 0) = 1, +we can also obtain the lower bound of the FCR value by +FCR = FCR0 ×P +� +| ˆSu| > 0 +� += FCR0 ≥ α − +1 +n + 1. +Therefore, it suffices to verify (B.3) for SCOP under exchangeable assumption. +According to the construction of conformal PI and Lemma A.2, given ˆSc = Sc, we know that +{Yj ̸∈ PIj} = +� +Rj > QSc(1 − α) +� += +� +Rj > QSc∪{j}(1 − α) +� +, +where QSc∪{j}(1 − α) is the ⌈(1 − α)(|Sc| + 1)⌉-st smallest value in {Ri : i ∈ Sc ∪ {j}}. Next we will suppress +the dependency on 1 − α. Invoking Lemma A.1 to Sc ∪ {j}, we have +P +� +Rj > QSc∪{j}��� ˆSc = Sc, ˆSu = Su +� +≤ α + +1 +|Sc| + 1E +� � +k∈Sc +1 +� +Rj > QSc∪{j}� +− 1 +� +Rk > QSc∪{j}� ��� ˆSu = Su, ˆSc = Sc +� += α + +1 +|Sc| + 1 +� +k∈Sc +∆j,k, +(B.4) +where +∆j,k := P +� +Rj > QSc∪{j}��� ˆSc = Sc, ˆSu = Su +� +− P +� +Rk > QSc∪{j}��� ˆSc = Sc, ˆSu = Su +� +. +For any j ∈ Su and k ∈ Sc, we denote +ESu,j(ˆτ) = +� +� +� +� +i∈Su\{j} +{Ti ≤ ˆτ} , +� +i∈U\Su +{Ti > ˆτ} +� +� +� , +ESc,k(ˆτ) = +� +� +� +� +i∈Sc\{k} +{Ti ≤ ˆτ} , +� +i∈C\Sc +{Ti > ˆτ} +� +� +� . +30 + +Then the selection condition can be equivalently written as +� +ˆSc = Sc, ˆSu = Su +� += {Tk ≤ ˆτ, Tj ≤ ˆτ, ESu,j(ˆτ), ESc,k(ˆτ)} . 
+As a consequence, we have +∆j,k = P +� +Rj > QSc∪{j}|Tk ≤ ˆτ, Tj ≤ ˆτ, ESu,j(ˆτ), ESc,k(ˆτ) +� +− P +� +Rk > QSc∪{j}|Tk ≤ ˆτ, Tj ≤ ˆτ, ESu,j(ˆτ), ESc,k(ˆτ) +� +. +(B.5) +Let ˆτ(j,k) be the threshold obtained by swapping sample j ∈ U and k ∈ C. Under our assumption, we know +ˆτ(j,k) = ˆτ with probability 1. It follows that +{Tk ≤ ˆτ, Tj ≤ ˆτ, ESu,j(ˆτ), ESc,k(ˆτ)} = +� +Tk ≤ ˆτ(j,k), Tj ≤ ˆτ(j,k), ESu,j(ˆτ(j,k)), ESc,k(ˆτ(j,k)) +� +. +Therefore, we can guarantee ∆j,k = 0 from (B.5). +C +Proof of the results in Section 3 +In this section, we use Eu,−j[·] and Pu,−j(·) to denote the expectation and probability given data set Du,−j. +C.1 +Proof of Lemma 1 +Proof. According to the definition of FCR, we have +FCR = E +� +� 1 +| ˆSu| +� +j∈ ˆ +Su +1 {Yj ̸∈ PIj} +� +� (i) += E +� +�� +j∈U +1 +� +j ∈ ˆSu +� +ˆκ +1 {Yj ̸∈ PIj} +� +� +(ii) += E +� +�� +j∈U +1 +ˆκ(j) 1 +� +j ∈ ˆSu +� +1 {Yj ̸∈ PIj} +� +� += E +� +�� +j∈U +1 +ˆκ(j) Pu,−j +� +Yj ̸∈ PIj +��j ∈ ˆSu +� +Pu,−j +� +j ∈ ˆSu +� +� +� +≤ E +� +�� +j∈U +(α + ∆(Du,−j)) +1 +� +j ∈ ˆSu +� +ˆκ(j) +� +� +(iii) += α · E +� +�� +j∈U +1 +� +j ∈ ˆSu +� +| ˆSu| +� +� + E +� +� 1 +| ˆSu| +� +j∈ ˆ +Su +∆(Du,−j) +� +� += α + E +� +� 1 +| ˆSu| +� +j∈ ˆ +Su +∆(Du,−j) +� +� , +where the equality (i) holds due to | ˆSu| = ˆκ (c.f. the definition of ˆSu in (8)), (ii) and (iii) come from the +assumption ˆκ = ˆκ(j) under the event j ∈ ˆSu. The lower bound can be derived similarly. +31 + +C.2 +Self-driven selection +In this subsection, we provide the proofs for the results of self-driven selection procedures, where the ranking +threshold ˆκ only depends on the test set Du. +C.2.1 +Proof of Theorem 2 +Proof of Theorem 2. Recall the definitions +ˆS(j) +c += +� +i ∈ C : Ti ≤ TU\{j} +(ˆκ(j)) +� +, +Q +ˆ +S(j) +c +∪{j} = R +ˆ +S(j) +c +∪{j} +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉). +Invoking Lemma A.1, we know it holds that +α − +1 +| ˆS(j) +c | + 1 +≤ +1 +| ˆS(j) +c | + 1 +� +i∈ ˆ +S(j) +c +∪{j} +1 +� +Ri > Q +ˆ +S(j) +c +∪{j}� +≤ α. +(C.1) +In addition, we also know {j ̸∈ PIj} = {Rj > R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉)} = {Rj > R +ˆ +S(j) +c +∪{j} +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉)} by the +construction of PIj and Lemma A.2. Rearranging the inequality in the right hand side of (C.1) gives +1 {j ̸∈ PIj} = 1 +� +Rj > Q +ˆ +Sc∪{j}� +≤ α − +1 +| ˆS(j) +c | + 1 +� +i∈ ˆ +S(j) +c +∪{j} +1 +� +Ri > Q +ˆ +S(j) +c +∪{j}� ++ 1 +� +Rj > Q +ˆ +Sc∪{j}� += α − +1 +| ˆS(j) +c | + 1 +� +k∈ ˆ +S(j) +c +� +1 +� +Rk > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rj > Q +ˆ +S(j) +c +∪{j}�� ++ 1 +� +Rj > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +S(j) +c +∪{j}� +. +(C.2) +Given the dataset Du,−j, taking expectation on both sides of (C.2) conditional on the event {j ∈ ˆSu} (that is +{Tj ≤ TU +(ˆκ)}) yields +Pu,−j +� +Yj ̸∈ PIj +���Tj ≤ TU +(ˆκ) +� +≤ α − Eu,−j +� +� +1 +| ˆS(j) +c | + 1 +� +k∈ ˆ +S(j) +c +� +1 +� +Rk > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rj > Q +ˆ +S(j) +c +∪{j}�� ���Tj ≤ TU +(ˆκ) +� +� ++ Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +� +. +(C.3) +Rearranging the inequality in the left hand side of (C.1), we can also have +Pu,−j +� +Yj ̸∈ PIj +���Tj ≤ TU +(ˆκ) +�� +≥ α − +1 +| ˆS(j) +c | + 1 +− Eu,−j +� +� +1 +| ˆS(j) +c | + 1 +� +k∈ ˆ +S(j) +c +� +1 +� +Rk > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rj > Q +ˆ +S(j) +c +∪{j}�� ���Tj ≤ TU +(ˆκ) +� +� ++ Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +� +. +(C.4) +Next we introduce three lemmas to control the additional terms except α in (C.3) and (C.4). 
And we +defer their proofs to Section C.3.1, C.3.2 and C.3.3. +32 + +Lemma C.1. Under the conditions of Theorem 2, we have +������ +Eu,−j +� +� +1 +| ˆS(j) +c | + 1 +� +k∈ ˆ +S(j) +c +� +1 +� +Rj > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rk > Q +ˆ +S(j) +c +∪{j}�� ���Tj ≤ TU +(ˆκ) +� +� +������ +≤ 2 +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +Lemma C.2. Under the conditions of Theorem 2, we have +���Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +���� +≤ 4C log(n ∨ m) +ρTU\{j} +(ˆκ(j)−Iu) +� +�12C log(n ∨ m) +nTU\{j} +(ˆκ(j)) ++ +2 +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +� +TU\{j} +(ˆκ(j)) +� +� + 3(n ∨ m)−C +TU\{j} +(ˆκ(j)−Iu) +. +Lemma C.3. Under the conditions of Theorem 2, we have +Eu,−j +� +1 +| ˆS(j) +c | + 1 +���Tj ≤ TU +(ˆκ) +� +≤ +TU\{j} +(ˆκ(j)) +TU\{j} +(ˆκ(j)−Iu) +� +� +2 +nTU\{j} +(ˆκ(j)) + 2 ++ (n ∨ m)−C +� +� . +Armed with Lemmas C.1, C.2 and C.3, we can obtain the result of Theorem 2 after some simplifications. +A remark on the construction of virtual calibration set. +To decouple the dependence on the candidate +j, we introduce the virtual calibration set ˆS(j) +c +by using the threshold TU\{j} +(ˆκ(j)) , where ˆκ(j) is independent of +test sample j. Another good property of ˆS(j) +c +is that ˆSc ⊆ ˆS(j) +c . In Assumption 3, we assume ˆκ ≥ γm holds +almost surely. Let us consider the following naive virtual calibration set, +ˆSnaive +c +:= +� +i ∈ C : Ti ≤ TU\{j} +(⌈γm⌉) +� +. +Even though ˆSnaive +c +possesses two good properties of ˆS(j) +c , but we cannot use ˆSnaive +c +in our proof. Notice that +| ˆSnaive +c +| − | ˆSc| = +� +i∈ ˆ +Snaive +c +1 +� +TU +(ˆκ) < Ti ≤ TU\{j} +(⌈γm⌉) +� +. +Given Du,−j, the conditional expectation of the indicator function is +Eu,−j +� +1 +� +TU +(ˆκ) < Ti ≤ TU\{j} +(⌈γm⌉) +� +|Ti ≤ TU\{j} +(⌈γm⌉) +� += +TU\{j} +(⌈γm⌉) − TU +(ˆκ) +TU\{j} +(⌈γm⌉) +. +Since the ranking threshold ˆκ can be very close to m, the conditional expectation above may scale as Op(1) +according to the spacing representation (c.f. Lemma D.1). As a consequence, we cannot have an op(1) gap for +FCR control. In Section C.3.2, we show the corresponding conditional expectation of our careful chosen ˆS(j) +c +scales as Op( log(n∨m) +m +). This explains why we cannot use ˆSnaive +c +as the virtual calibration set in the proof, and +ˆS(j) +c +is indeed a nontrivial construction. +33 + +C.2.2 +Proof of Theorem 3 +Proof. Recall that, +∆(Du,−j) = 8C log(n ∨ m) +ρTU\{j} +(ˆκ(j)−Iu) +� +�12C log(n ∨ m) +nTU\{j} +(ˆκ(j)) ++ +2 +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +� +TU\{j} +(ˆκ(j)) +� +� + 2 +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +From Lemma A.4, we can guarantee that +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) ≤ Iu × +max +1≤ℓ≤m−2 +� +TU\{j} +(ℓ+1) − TU\{j} +(ℓ) +� +≤ 2CIu log(n ∨ m) +m +, +holds with probability at least 1 − (n ∨ m)−C. In addition, by Lemma A.8, we have +TU\{j} +(ˆκ(j)−Iu) ≥ TU\{j} +(⌈γm⌉−Iu) ≥ TU +(⌈γm⌉−Iu) ≥ TU +(⌈(γ−Iu/m)m⌉) ≥ γ − Iu/m +2 +holds with probability at least 1 − (n ∨ m)−C. Then we have +E +� +max +1≤j≤m ∆(Du,−j) +� +≲ +log2(n ∨ m) +ργ(γ − Iu/m) +� 1 +m + 1 +n +� ++ Iu log(n ∨ m) +m(γ − Iu/m). +Then the conclusion follows from Lemma 1 immediately. +C.3 +Deferred Proofs of Section C.2 +C.3.1 +Proof of Lemma C.1 +Proof. 
We first notice that +Eu,−j +� +� +1 +| ˆS(j) +c | + 1 +� +k∈ ˆ +S(j) +c +� +1 +� +Rj > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rk > Q +ˆ +S(j) +c +∪{j}�� ���j ∈ ˆSu +� +� += +� +Sc⊆[m] +Pu,−j +� +ˆS(j) +c += Sc +� +|Sc| + 1 +× +Eu,−j +� � +k∈Sc +1 +� +Rj > QSc∪{j}� +− 1 +� +Rk > QSc∪{j}� ���Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +. +(C.5) +Next we will bound the conditional expectation term in (C.5). From the definition of ˆS(j) +c , we know +{ ˆS(j) +c += Sc} = +� � +k∈Sc +{Tk ≤ TU\{j} +(ˆκ(j)) } +� � +� +� +� +k∈C\Sc +{Tk > TU\{j} +(ˆκ(j)) } +� +� . +Since both sample j ∈ ˆSu and k ∈ Sc are independent of TU\{j} +(ˆκ(j)) , we can guarantee that +Pu,−j +� +Rk > Q +ˆ +S(j) +c +∪{j}�� ˆS(j) +c += Sc, Tj ≤ TU\{j} +(ˆκ(j)) +� += +Pu,−j +� +Rk > QSc∪{j}, � +i∈Sc∪{j} +� +Ti ≤ TU\{j} +(ˆκ(j)) +�� +Pu,−j +�� +i∈Sc∪{j} +� +Ti ≤ TU\{j} +(ˆκ(j)) +�� += +Pu,−j +� +Rj > QSc∪{j}, � +i∈Sc∪{j} +� +Ti ≤ TU\{j} +(ˆκ(j)) +�� +Pu,−j +�� +i∈Sc∪{j} +� +Ti ≤ TU\{j} +(ˆκ(j)) +�� += Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}�� ˆS(j) +c += Sc, Tj ≤ TU\{j} +(ˆκ(j)) +� +, +(C.6) +34 + +where the penultimate equality holds since the distribution of (Xj, Yj) and (Xk, Yk) are same. It follows that +Eu,−j +� � +k∈Sc +1 +� +Rj > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rk > QSc∪{j}� ���Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� += +� +k∈Sc +� +Pu,−j +� +Rk > QSc∪{j}���Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− Pu,−j +� +Rk > QSc∪{j}���Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� � +− +|Sc| +|Sc| + 1 +� +Pu,−j +� +Rj > QSc∪{j}���Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− Pu,−j +� +Rj > QSc∪{j}���Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� � ++ +� +k∈Sc +� +Pu,−j +� +Rk > QSc∪{j}���Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− Pu,−j +� +Rj > QSc∪{j}���Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� � +=: +� +k∈Sc +∆k + +|Sc| +|Sc| + 1∆j + 0. +(C.7) +For ∆k, we have the following upper bound +∆k = +Pu,−j +� +Rk > QSc∪{j}, Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− +Pu,−j +� +Rk > QSc∪{j}, Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +(i) +≤ +Pu,−j +� +Rk > QSc∪{j}, TU\{j} +(ˆκ(j)−Iu) < Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +(ii) +≤ +Pu,−j +� +TU\{j} +(ˆκ(j)−Iu) < Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ) +, ˆS(j) +c += Sc +� +(iii) +≤ +Pu,−j +� +TU\{j} +(ˆκ(j)−Iu) < Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)−Iu), ˆS(j) +c += Sc +� +(iv) += +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +, +(C.8) +where (i) and (iii) hold since TU\{j} +(ˆκ(j)−Iu) ≤ TU +(ˆκ) ≤ TU\{j} +(ˆκ(j)) (c.f. Lemma A.3), (ii) follows from the conclusion +(2) of Lemma A.2, and (iv) follows from the with firstly conditioning on Du,−j ∪ Dc. 
Similarly, we can obtain +35 + +the following lower bound +∆k = +Pu,−j +� +Rk > QSc∪{j}, Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− +Pu,−j +� +Rk > QSc∪{j}, Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +(v) +≥ +Pu,−j +� +Rk > QSc∪{j}, Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +� +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� − 1 +� +� +(vi) +≥ − +Pu,−j +� +TU +(ˆκ) < Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +≥ − +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)) +, +(C.9) +where (v) holds because TU +(ˆκ) ≤ TU\{j} +(ˆκ(j)) , and (vi) is true since the term in the bracket is negative. Combining +(C.8) and (C.9), we have +|∆k| ≤ +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +(C.10) +For ∆j, we have the following upper bound +∆j = +Pu,−j +� +Rj > QSc∪{j}, Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +− +Pu,−j +� +Rj > QSc∪{j}, Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +≤ +Pu,−j +� +Rj > QSc∪{j}, Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +� +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +− 1 +� +� +≤ +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +− 1 +≤ +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +The lower bound of ∆j can be derived as +∆j ≥ +Pu,−j +� +Rj > QSc∪{j}, Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +− +Pu,−j +� +Rj > QSc∪{j}, Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +≥ − +Pu,−j +� +TU\{j} +(ˆκ(j)−Iu) < Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)) , ˆS(j) +c += Sc +� +≥ − +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)) +. +36 + +Hence we have +|∆j| ≤ +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +(C.11) +Plugging (C.10) and (C.11) into (C.7), we have +�����Eu,−j +� � +k∈Sc +1 +� +Rj > Q +ˆ +S(j) +c +∪{j}� +− 1 +� +Rk > QSc∪{j}� ���Tj ≤ TU +(ˆκ), ˆS(j) +c += Sc +������ +≤ (|Sc| + 1) · +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−Iu) +TU\{j} +(ˆκ(j)−Iu) +. +Taking consideration of (C.5), we can complete the proof of Lemma C.1. +C.3.2 +Proof of Lemma C.2 +Proof. From TU +(ˆκ) ≤ TU\{j} +(ˆκ(j)) in Lemma A.3, we know ˆSc ⊆ ˆS(j) +c . Then for any j ∈ ˆSu, we have +| ˆS(j) +c | − | ˆSc| = +� +i∈ ˆ +S(j) +c +1 +� +TU +(ˆκ) < Ti ≤ TU\{j} +(ˆκ(j)) +� +≤ +� +i∈ ˆ +S(j) +c +1 +� +TU\{j} +(ˆκ(j)−1) < Ti ≤ TU\{j} +(ˆκ(j)) +� +=: d(j), +(C.12) +where the inequality holds since TU\{j} +(ˆκ(j)−1) ≤ TU +(ˆκ) holds for j ∈ ˆSu (c.f. Lemma A.3). Recall that +Q +ˆ +Sc∪{j} = R +ˆ +Sc∪{j} +(⌈(1−α)(| ˆ +Sc|+1)⌉), +Q +ˆ +S(j) +c +∪{j} = R +ˆ +S(j) +c +∪{j} +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉). +Next we bound the rank of value R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) in the set ˆS(j) +c . 
It follows from ˆSc ⊆ ˆS(j) +c +and (C.12) that +� +i∈ ˆ +S(j) +c +1 +� +Ri ≤ R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) +� +≥ +� +i∈ ˆ +Sc +1 +� +Ri ≤ R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) +� += ⌈(1 − α)(| ˆSc| + 1)⌉ +≥ ⌈(1 − α)(| ˆS(j) +c | − d(j) + 1)⌉, +and +� +i∈ ˆ +S(j) +c +1 +� +Ri ≤ R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) +� +≤ d(j) + +� +i∈ ˆ +Sc +1 +� +Ri ≤ R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) +� += d(j) + ⌈(1 − α)(| ˆSc| + 1)⌉ +≤ d(j) + ⌈(1 − α)(| ˆS(j) +c | + 1)⌉. +Two bounds above indicate that +R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|−d(j)+1)) ≤ R +ˆ +Sc +(⌈(1−α)(| ˆ +Sc|+1)⌉) ≤ R +ˆ +S(j) +c +(d(j)+⌈(1−α)(| ˆ +S(j) +c +|+1)). +(C.13) +37 + +By the right hand side of (C.13), we have +Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +� +(i) += Pu,−j +� +Rj > Q +ˆ +Sc +���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +���Tj ≤ TU +(ˆκ) +� +≥ −Eu,−j +� +1 +� +Rj > R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉) +� +− 1 +� +Rj > R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉+d(j)) +� ���Tj ≤ TU +(ˆκ) +� +(ii) +≥ − +Pu,−j +� +R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉) < Rj ≤ R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉+d(j)), Tj ≤ TU\{j} +(ˆκ(j)) +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)−Iu) +� +(iii) +≥ − +Eu,−j +� +R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉+d(j)) − R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉) +� +TU\{j} +(ˆκ(j)−Iu) +, +(C.14) +where (i) comes from the conclusion (2) of Lemma A.2 by dropping j, (ii) holds due to TU\{j} +(ˆκ(j)−Iu) ≤ TU +(ˆκ) ≤ +TU\{j} +(ˆκ(j)) for any j ∈ U (c.f. Lemma A.3), and (iii) holds by dropping event {Tj ≤ TU\{j} +(ˆκ(j)) }. By the left hand +side of (C.13), we similarly have +Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +� +≤ +Eu,−j +� +R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉) − R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1−d(j))⌉) +� +TU\{j} +(ˆκ(j)−Iu) +. +(C.15) +Applying Lemma A.7, with probability at least 1 − (n ∨ m)−C, it holds +| ˆS(j) +c | ≥ n · TU\{j} +(ˆκ(j)) − C +� +n log(n ∨ m) +� +TU\{j} +(ˆκ(j)) +� +1 − TU\{j} +(ˆκ(j)) +� +≥ n · TU\{j} +(ˆκ(j)) +� +�1 − C +� +� +� +�log(n ∨ m) +nTU\{j} +(ˆκ(j)) +� +� ≥ 1 +2n · TU\{j} +(ˆκ(j)) , +(C.16) +where we used the assumption 8C log(n ∨ m)/(nTU\{j} +(ˆκ(j)) ) ≤ 1 almost surely. Given Du,−j, applying Lemma +A.6 with t1 = TU\{j} +(ˆκ(j)−1) and t2 = TU\{j} +(ˆκ(j)) , we can guarantee that +d(j) +| ˆS(j) +c | += +1 +| ˆS(j) +c | +� +i∈ ˆ +S(j) +c +� +�1 +� +TU\{j} +(ˆκ(j)−1) < Ti ≤ TU\{j} +(ˆκ(j)) +� +− +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) +� +� + +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) +≤ 2 +� +eC log(n ∨ m) +| ˆS(j) +c | +� +� +� +� +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) ++ 2eC log(n ∨ m) +| ˆS(j) +c | ++ +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) +≤ 6 +� +� +� +�C log(n ∨ m) +nTU\{j} +(ˆκ(j)) +� +� +� +� +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) ++ 6C log(n ∨ m) +nTU\{j} +(ˆκ(j)) ++ +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +TU\{j} +(ˆκ(j)) +≤ 12C log(n ∨ m) +nTU\{j} +(ˆκ(j)) ++ +2 +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +� +TU\{j} +(ˆκ(j)) +=: ∆1(Du,−j), +(C.17) +38 + +holds with probability at least 1 − 2(n ∨ m)−C. 
Given Du,−j, with probability at least 1 − 3(n ∨ m)−C, we +have +R +ˆ +S(j) +c +(⌈(1��α)(| ˆ +S(j) +c +|+1)⌉) − R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|−d(j)+1)⌉) = +⌈(1−α)(| ˆ +S(j) +c +|+1)⌉+d(j)−1 +� +ℓ=⌈(1−α)(| ˆ +S(j) +c +|−d(j)+1)⌉ +R +ˆ +S(j) +c +(ℓ+1) − R +ˆ +S(j) +c +(ℓ) +≤ d(j) +max +0≤ℓ≤| ˆ +S(j) +c +| +� +R +ˆ +S(j) +c +(ℓ+1) − R +ˆ +S(j) +c +(ℓ) +� +(i) +≤ +d(j) +| ˆS(j) +c | +· 1 +ρ +| ˆS(j) +c | +1 − 2 +� +C log(n∨m) +| ˆ +S(j) +c +|+1 +2C log(n ∨ m) +| ˆSc(t)| + 1 +(ii) +≤ ∆1(Du,−j) · 1 +ρ +2C log(n ∨ m) +1 − 2 +� +C log(n∨m) +| ˆ +S(j) +c +|+1 +(iii) +≤ 4C log(n ∨ m) +ρ +∆1(Du,−j), +(C.18) +where (i) follows from applying Lemma A.5 with t = TU\{j} +(ˆκ(j)) , (ii) comes from (C.17), and (iii) comes from +(C.16) and the assumption 8C log(n ∨ m)/(nTU\{j} +(ˆκ(j)) ) ≤ 1 almost surely. Similarly, we also have +R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉+d(j)) − R +ˆ +S(j) +c +(⌈(1−α)(| ˆ +S(j) +c +|+1)⌉) ≤ d(j) +max +0≤ℓ≤| ˆ +S(j) +c +| +� +R +ˆ +S(j) +c +(ℓ+1) − R +ˆ +S(j) +c +(ℓ) +� +≤ 4C log(n ∨ m) +ρ +∆1(Du,−j). +(C.19) +Plugging the upper bound (C.18) and (C.19) into (C.15) and (C.14) respectively, we can guarantee +Pu,−j +� +Rj > Q +ˆ +Sc∪{j}���Tj ≤ TU +(ˆκ) +� +− Pu,−j +� +Rj > Q +ˆ +S(j) +c +∪{j}���Tj ≤ TU +(ˆκ) +� +≤ 4C log(n ∨ m) +ρTU\{j} +(ˆκ(j)−Iu) +� +�12C log(n ∨ m) +nTU\{j} +(ˆκ(j)−Iu) ++ +2 +� +TU\{j} +(ˆκ(j)) − TU\{j} +(ˆκ(j)−1) +� +TU\{j} +(ˆκ(j)) +� +� + 3(n ∨ m)−C +TU\{j} +(ˆκ(j)) +. +C.3.3 +Proof of Lemma C.3 +Proof. Notice that +Eu,−j +� +1 +| ˆS(j) +c | + 1 +���Tj ≤ TU +(ˆκ) +� += +n +� +s=0 +1 +s + 1 +Pu,−j +� +| ˆS(j) +c | = s, Tj ≤ TU +(ˆκ) +� +Pu,−j +� +Tj ≤ TU +(ˆκ) +� +(i) +≤ +n +� +s=0 +1 +s + 1 +Pu,−j +� +| ˆS(j) +c | = s, Tj ≤ TU\{j} +(ˆκ(j)) +� +Pu,−j +� +Tj ≤ TU\{j} +(ˆκ(j)−Iu) +� +(ii) += +n +� +s=0 +1 +s + 1 +TU\{j} +(ˆκ(j)) Eu,−j +� +1 +� +| ˆS(j) +c | = s +�� +TU\{j} +(ˆκ(j)−Iu) += +TU\{j} +(ˆκ(j)) +TU\{j} +(ˆκ(j)−Iu) +Eu,−j +� +1 +| ˆS(j) +c | + 1 +� +, +(C.20) +39 + +where (i) holds due to TU +(ˆκ) ≥ TU\{j} +(ˆκ(j)−Iu) (c.f. Lemma A.3), and (ii) comes from the independence between Tj +and ˆS(j) +c +such that +Pu,−j +� +| ˆS(j) +c | = s, Tj ≤ TU\{j} +(ˆκ(j)) +� += Eu,−j +� +E +� +1 +� +Tj ≤ TU\{j} +(ˆκ(j)) +� +1 +� +| ˆS(j) +c | = s +� ��Du,−j, Dc +�� += TU\{j} +(ˆκ(j)) Eu,−j +� +1 +� +| ˆS(j) +c | = s +�� +. +From the relation (C.16), we know +Pu,−j +� +�| ˆS(j) +c | ≤ +nTU\{j} +(ˆκ(j)) +2 +� +� ≤ (n ∨ m)−C. +Together with (C.20), we have +Eu,−j +� +1 +| ˆS(j) +c | + 1 +���Tj ≤ TU +(ˆκ) +� +≤ +TU\{j} +(ˆκ(j)) +TU\{j} +(ˆκ(j)−Iu) +� +� +2 +nTU\{j} +(ˆκ(j)) + 2 ++ (n ∨ m)−C +� +� . +C.4 +Calibration-assisted selection +In this subsection, we provide the proofs for the results of calibration-assisted selection procedures, where the +ranking threshold ˆκ depends on the test set Du and the calibration set Dc. From now on, we use E−(j,k)[·] +and P−(j,k)[·] to denote the expectation and probability given Dc,−k and Du,−j. +C.4.1 +Proof of Theorem 4 +Proof. From the definition of Q +ˆ +Sc∪{j} and Lemma A.1, we know that +α − +1 +| ˆSc| + 1 +≤ +1 +| ˆSc| + 1 +� +i∈ ˆ +Sc∪{j} +1 +� +Ri > Q +ˆ +Sc∪{j}� +≤ α. +(C.21) +In addition, we also know {j ̸∈ PIj} = {Rj > Q +ˆ +Sc∪{j}}. Rearranging the right hand side of (C.21) gives +1 {j ̸∈ PIj} = 1 +� +Rj > Q +ˆ +Sc∪{j}� +≤ α − +1 +| ˆSc| + 1 +� +i∈ ˆ +Sc∪{j} +1 +� +Ri > Q +ˆ +Sc∪{j}� ++ 1 +� +Rj > Q +ˆ +Sc∪{j}� += α − +1 +| ˆSc| + 1 +� +k∈ ˆ +Sc +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� +. 
+(C.22) +Given the dataset Du,−j, taking expectation on both sides of (C.22) conditional on the event Tj ≤ TU +(ˆκ) yields +Pu,−j +� +Yj ̸∈ PIj +���Tj ≤ TU +(ˆκ) +� +≤ α − Eu,−j +� +� +1 +| ˆSc| + 1 +� +k∈ ˆ +Sc +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� ���Tj ≤ TU +(ˆκ) +� +� . +(C.23) +40 + +Similarly, from the left hand side of (C.21), we can have +Pu,−j +� +Yj ̸∈ PIj +���Tj ≤ TU +(ˆκ) +� +≥ α − Eu,−j +� +� +1 +| ˆSc| + 1 +� +k∈ ˆ +Sc +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� ���Tj ≤ TU +(ˆκ) +� +� +− Eu,−j +� +1 +| ˆSc| + 1 +���Tj ≤ TU +(ˆκ) +� +. +(C.24) +For any j ∈ ˆSu and k ∈ C, we introduce a new selected virtual calibration set w.r.t. to (j, k), +ˆS(j,k) +c += +� +i ∈ C \ {k} : Ti ≤ TU\{j} +ˆκ(j,k) +� +. +According to Assumptions 2 and 4, we know +ˆκ(j,k) = ˆκj←tu,k←tc ≥ ˆκj←tu = ˆκ(j). +Together with Lemmas A.3 and A.2, we also know +TU +(ˆκ) ≤ TU\{j} +(ˆκ(j)) ≤ TU\{j} +(ˆκ(j,k)). +(C.25) +Hence we can guarantee ˆSc ⊆ ˆS(j,k) +c +∪ {k} if k ∈ ˆSc. Denote +∆(j,k) = P−(j,k) +� +Rk > Q +ˆ +Sc∪{j}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− P−(j,k) +� +Rj > Q +ˆ +Sc∪{j}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +=: ∆(j,k) +k +− ∆(j,k) +j +. +(C.26) +Notice that +������ +Eu,−j +� +� +1 +| ˆSc| + 1 +� +k∈ ˆ +Sc +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� ��Tj ≤ TU +(ˆκ) +� +� +������ += +������ +Eu,−j +� +�� +k∈C +1 +� +k ∈ ˆSc +� +| ˆS(j,k) +c +| + 1 +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� ��Tj ≤ TU +(ˆκ) +� +� +������ ++ +������ +Eu,−j +� +�� +k∈C +| ˆSc| − | ˆS(j,k) +c +| +| ˆS(j,k) +c +| + 1 +1 +� +k ∈ ˆSc +� +| ˆSc| + 1 +� +1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +Sc∪{j}�� ��Tj ≤ TU +(ˆκ) +� +� +������ +(i) +≤ Eu,−j +�� +k∈C +P−(j,k) +� +k ∈ ˆSc|Tj ≤ TU +(ˆκ) +� +��∆(j,k)�� +| ˆS(j,k) +c +| + 1 +� ++ Eu,−j +� +�max +k∈C +���| ˆSc| − | ˆS(j,k) +c +| +��� +| ˆS(j,k) +c +| + 1 +� +� += Eu,−j +� +� +���∆(j,k)��� E−(j,k) +� +� +� +k∈C 1 +� +k ∈ ˆSc +� +| ˆS(j,k) +c +| + 1 +���Tj ≤ TU +(ˆκ) +� +� +� +� + Eu,−j +� +�max +k∈C +���| ˆSc| − | ˆS(j,k) +c +| +��� +| ˆS(j,k) +c +| + 1 +� +� +(ii) +≤ Eu,−j +� +max +k∈C +���∆(j,k)��� +� ++ Eu,−j +� +�max +k∈C +���| ˆSc| − | ˆS(j,k) +c +| +��� +| ˆS(j,k) +c +| + 1 +� +� , +(C.27) +41 + +where (i) follows from the tower rule and | ˆS(j,k) +c +| is independent of samples j and k, and (ii) holds since +| ˆSc| ≤ | ˆS(j,k) +c +| + 1. Further, we can decompose ∆(j,k) +j +in (C.26) as +∆(j,k) +j += +� +P−(j,k) +� +Rj > Q +ˆ +Sc∪{j}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� � ++ +� +P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� � ++ P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +=: ∆(j,k) +j,1 ++ ∆(j,k) +j,2 ++ P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +. 
+(C.28) +Similarly, for ∆(j,k) +k +in (C.26), we have +∆(j,k) +k += +� +P−(j,k) +� +Rk > Q +ˆ +Sc∪{j}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� � ++ +� +P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� � ++ P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +=: ∆(j,k) +k,1 ++ ∆(j,k) +k,2 ++ P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +. +(C.29) +Using the identical distribution of sample j and k and the fact ˆS(j,k) +c +, TU\{j} +(ˆκ(j,k)) are independent of sample j +and k, we have +P−(j,k) +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +=P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}��Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +. +Taking account of (C.26), (C.28) and (C.29), the equality above results in +∆(j,k) = ∆(j,k) +k,1 ++ ∆(j,k) +k,2 +− +� +∆(j,k) +j,1 ++ ∆(j,k) +j,2 +� +. +(C.30) +Since ˆSc \ {k} ⊆ ˆS(j,k) +c +, under the event j ∈ ˆSu, we know that +| ˆS(j,k) +c +| − | ˆSc| + 1 ≤ +� +i∈ ˆ +S(j,k) +c +1 +� +TU +(ˆκ) < Ti ≤ TU\{j} +(ˆκ(j,k)) +� += +� +i∈ ˆ +S(j,k) +c +1 +� +TU +(ˆκ(j)) < Ti ≤ TU\{j} +(ˆκ(j,k)) +� +≤ +� +i∈ ˆ +S(j,k) +c +1 +� +TU\{j} +(ˆκ(j,k)−Ic−1) < Ti ≤ TU\{j} +(ˆκ(j,k)) +� +=: d(j,k), +(C.31) +where the last inequality holds since ˆκ(j,k) ≤ ˆκ(j) + Ic. +42 + +Bound ∆(j,k) +j,1 +and ∆(j,k) +k,1 . +We first bound the rank of value Q +ˆ +Sc∪{j} in the set {Ri : i ∈ ˆS(j,k) +c +}. For k ∈ ˆSc +(that is, Tk ≤ TU +(ˆκ)), we have +� +i∈ ˆ +S(j,k) +c +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� +≥ +� +i∈ ˆ +S(j,k) +c +∪{j,k} +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� +− 2 +(i) +≥ +� +i∈ ˆ +Sc∪{j} +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� +− 2 += ⌈(1 − α)(1 + | ˆSc|)⌉ − 2 +(ii) +≥ ⌈(1 − α)(| ˆS(j,k) +c +| − d(j,k) + 2)⌉ − 2, +(C.32) +and +� +i∈ ˆ +S(j,k) +c +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� +≤ +� +i∈ ˆ +S(j,k) +c +∪{j,k} +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� +(iii) +≤ +�� +i∈ ˆ +Sc∪{j} +1 +� +Ri ≤ Q +ˆ +Sc∪{j}� ++ d(j,k) − 1 +≤ ⌈(1 − α)(2 + | ˆS(j,k) +c +|)⌉ + d(j,k) − 1, +(C.33) +where (i), (ii) holds due to ˆSc ∪ {j} ⊆ ˆS(j,k) +c +∪ {j, k}, and (ii), (iii) comes from (C.31). Similarly, we have +� +i∈ ˆ +S(j,k) +c +1 +� +Ri ≤ Q +ˆ +S(j,k) +c +∪{j,k}� +≥ +� +i∈ ˆ +S(j,k) +c +∪{j,k} +1 +� +Ri ≤ Q +ˆ +S(j,k) +c +∪{j,k}� +− 2 += ⌈(1 − α)(2 + | ˆS(j,k) +c +|)⌉ − 2, +(C.34) +and +� +i∈ ˆ +S(j,k) +c +1 +� +Ri ≤ Q +ˆ +S(j,k) +c +∪{j,k}� +≤ +� +i∈ ˆ +S(j,k) +c +∪{j,k} +1 +� +Ri ≤ Q +ˆ +S(j,k) +c +∪{j,k}� += ⌈(1 − α)(2 + | ˆS(j,k) +c +|)⌉. +(C.35) +Combining (C.32)-(C.35), we can guarantee +���Q +ˆ +Sc∪{j} − Q +ˆ +S(j,k) +c +∪{j,k}��� ≤ max +� +R +ˆ +S(j,k) +c +(⌈(1−α)(2+| ˆ +S(j,k) +c +|)⌉+d(j,k)−1) − R +ˆ +S(j,k) +c +(⌈(1−α)(2+| ˆ +S(j,k) +c +|)⌉−2), +R +ˆ +S(j,k) +c +(⌈(1−α)(2+| ˆ +S(j,k) +c +|)⌉) − R +ˆ +S(j,k) +c +(⌈(1−α)(| ˆ +S(j,k) +c +|−d(j,k)+2)⌉−2) +� +≤ R +ˆ +S(j,k) +c +(⌈(1−α)(| ˆ +S(j,k) +c +|+2)⌉+d(j,k)) − R +ˆ +S(j,k) +c +(⌈(1−α)(| ˆ +S(j,k) +c +|+2−d(j,k))⌉−2) +=: R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)). +(C.36) +In addition, using Assumptions 2 and 4, we know for any j ∈ U, it holds that +TU +(ˆκ) ≥ TU\{j} +(ˆκ(j)−Iu) ≥ TU\{j} +(ˆκ(j,k)−Iu−Ic). 
+(C.37) +43 + +Since the samples j and k are independent of R +ˆ +S(j,k) +c +(U (j,k)) and R +ˆ +S(j,k) +c +(L(j,k)), we have +|∆(j,k) +k,1 | ≤ E−(j,k) +����1 +� +Rk > Q +ˆ +Sc∪{j}� +− 1 +� +Rk > Q +ˆ +S(j,k) +c +∪{j,k}���� +��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +(i) +≤ P−(j,k) +� +R +ˆ +S(j,k) +c +(L(j,k)) < Rk ≤ R +ˆ +S(j,k) +c +(U (j,k)) +��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +(ii) +≤ +P−(j,k) +� +R +ˆ +S(j,k) +c +(L(j,k)) < Rk ≤ R +ˆ +S(j,k) +c +(U (j,k)), Tj ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)−Iu−Ic), Tk ≤ TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +≤ +TU\{j} +(ˆκ(j,k)) +� +R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)) +� +� +TU\{j} +(ˆκ(j,k)−Iu−Ic) +�2 +, +(C.38) +where (i) comes from (C.36) and (ii) holds due to (C.37). Similarly, we can also have +|∆(j,k) +j,1 | ≤ E−(j,k) +����1 +� +Rj > Q +ˆ +Sc∪{j}� +− 1 +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}���� +��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +≤ P−(j,k) +� +R +ˆ +S(j,k) +c +(L(j,k)) < Rj ≤ R +ˆ +S(j,k) +c +(U (j,k)) +��Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +≤ +P−(j,k) +� +R +ˆ +S(j,k) +c +(L(j,k)) < Rj ≤ R +ˆ +S(j,k) +c +(U (j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)−Iu−Ic), Tk ≤ TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +≤ +TU\{j} +(ˆκ(j,k)) +� +R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)) +� +� +TU\{j} +(ˆκ(j,k)−Iu−Ic) +�2 +. +(C.39) +Bound ∆(j,k) +j,2 +and ∆(j,k) +k,2 . +For ∆(j,k) +j,2 , we have the following upper bound +∆(j,k) +j,2 +(i) +≤ +P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}, Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +− +P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}, Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +(ii) +≤ 1 − +P−(j,k) +� +Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� += +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), TU +(ˆκ) < Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� ++ +P−(j,k) +� +TU +(ˆκ) < Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU +(ˆκ) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +(iii) +≤ +2 +� +TU\{j} +(ˆκ(j,k)) − TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +TU\{j} +(ˆκ(j)) +, +(C.40) +44 + +where (i) and (ii) holds since TU\{j} +(ˆκ(j,k)) ≥ TU +(ˆκ) in (C.25), and (iii) comes from (C.37) and the towerrule. +Similarly, we can get the lower bound of ∆(j,k) +j,2 +as +∆(j,k) +j,2 +≥ +P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}, Tj ≤ TU +(ˆκ), Tk ≤ TU +(ˆκ) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +− +P−(j,k) +� +Rj > Q +ˆ +S(j,k) +c +∪{j,k}, Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +≥ − +P−(j,k) +� +TU +(ˆκ) < Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU +(ˆκ) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +− +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), TU +(ˆκ) < Tk ≤ TU\{j} +(ˆκ(j,k)) +� +P−(j,k) +� +Tj ≤ TU\{j} +(ˆκ(j,k)), Tk ≤ TU\{j} +(ˆκ(j,k)) +� +≥ − +2 +� +TU\{j} +(ˆκ(j,k)) − TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +TU\{j} +(ˆκ(j)) +. +(C.41) +Applying the same calculations in (C.40) and (C.41) to ∆(j,k) +k,2 , we can get +|∆(j,k) +j,2 |, |∆(j,k) +k,2 | ≤ +2 +� +TU\{j} +(ˆκ(j,k)) − TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +TU\{j} +(ˆκ(j)) +. +(C.42) +Conclusion. 
+Substituting (C.39), (C.38) and (C.42) into (C.30), we have +|∆(j,k)| ≤ +2TU\{j} +(ˆκ(j,k)) +� +R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)) +� +� +TU\{j} +(ˆκ(j,k)−Iu−Ic) +�2 ++ +4 +� +TU\{j} +(ˆκ(j,k)) − TU\{j} +(ˆκ(j,k)−Iu−Ic) +� +TU\{j} +(ˆκ(j)) +, +where U (j,k) = ⌈(1 − α)(| ˆS(j,k) +c +| + 2)⌉ + d(j,k) and L(j,k) = ⌈(1 − α)(| ˆS(j,k) +c +| + 2 − d(j,k))⌉ − 2. In addition, from +(C.31), we know +���| ˆSc| − | ˆS(j,k) +c +| +��� +| ˆS(j,k) +c +| + 1 +≤ +d(j,k) +| ˆS(j,k) +c +| + 1 +. +Plugging two upper bounds above into (C.27) can finish the proof. +C.4.2 +Proof of Theorem 5 +Proof of Theorem 5. Denote ˆSc(κ) = +� +i ∈ C \ {k} : Ti ≤ TU\{j} +(κ) +� +. From the definition of ˆS(j,k) +c +, we can write +ˆS(j,k) +c += ˆSc(ˆκ(j,k)). In addition, we write +d(j,k) = +� +i∈ ˆ +S(j,k) +c +1 +� +TU\{j} +(ˆκ(j,k)−Ic−1) < Ti ≤ TU\{j} +(ˆκ(j,k)) +� +=: d(j,k)(ˆκ(j,k)). +45 + +Then we have +R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)) = +U (j,k)−1 +� +ℓ=L(j,k) +R +ˆ +S(j,k) +c +(ℓ+1) − R +ˆ +S(j,k) +c +(ℓ) +≤ +� +d(j,k) + 2 +� +max +1≤ℓ≤| ˆ +S(j,k) +c +|−1 +� +R +ˆ +S(j,k) +c +(ℓ+1) − R +ˆ +S(j,k) +c +(ℓ) +� += +� +d(j,k)(ˆκ(j,k)) + 2 +� +max +1≤ℓ≤| ˆ +Sc(ˆκ(j,k))|−1 +� +R +ˆ +Sc(ˆκ(j,k)) +(ℓ+1) +− R +ˆ +Sc(ˆκ(j,k)) +(ℓ) +� +≤ +max +⌈γm⌉≤κ≤m +�� +d(j,k)(κ) + 2 +� +max +1≤ℓ≤| ˆ +Sc(κ)|−1 +� +R +ˆ +Sc(κ) +(ℓ+1) − R +ˆ +Sc(κ) +(ℓ) +�� +. +(C.43) +Given Du,−j, applying Lemma A.6 with t1 = TU\{j} +(κ−Ic−1) and t2 = TU\{j} +(κ) +, we can have +d(j,k)(κ) ≤ | ˆSc(κ)| +� +�TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +TU\{j} +(κ) ++ 6C log(n ∨ m) +| ˆSc(κ)| +� +� , +(C.44) +holds with probability at least 1 − (n ∨ m)−C. In addition, given Du,−j, applying Lemma A.5, we get +max +1≤ℓ≤| ˆ +Sc(κ)|−1 +� +R +ˆ +Sc(κ) +(ℓ+1) − R +ˆ +Sc(κ) +(ℓ) +� +≤ 1 +ρ +1 +1 − 2 +� +C log(n∨m) +| ˆ +Sc(κ)|+1 +2C log(n ∨ m) +| ˆSc(κ)| + 1 +, +(C.45) +holds with probability at least 1 − (n ∨ m)−C. Substituting (C.44) and (C.45) into (C.43) and using union +bound, with probability at least 1 − (n ∨ m)−C+2, we have +R +ˆ +S(j,k) +c +(U (j,k)) − R +ˆ +S(j,k) +c +(L(j,k)) ≤ +max +⌈γm⌉≤κ≤m +� +� +� +1 +ρ +2C log(n ∨ m) +1 − 2 +� +C log(n∨m) +| ˆ +Sc(κ)|+1 +� +�TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +TU\{j} +(κ) ++ 6C log(n ∨ m) +| ˆSc(κ)| +� +� +� +� +� +(i) +≤ +max +⌈γm⌉≤κ≤m +� +� +� +� +� +� +� +1 +ρ +2C log(n ∨ m) +1 − 2 +� +C log(n∨m) +nTU\{j} +(κ) +/2+1 +� +�TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +TU\{j} +(κ) ++ 6C log(n ∨ m) +nTU\{j} +(κ) +/2 +� +� +� +� +� +� +� +� +� +(ii) +≤ 4C log(n ∨ m) +ρ +� +� +max⌈γm⌉≤κ≤m +� +TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +� +TU\{j} +(κ) ++ 6C log(n ∨ m) +nTU\{j} +(κ) +/2 +� +� +(iii) +≤ 4C log(n ∨ m) +ρTU\{j} +(⌈γm⌉) +�4CIc log(n ∨ m) +m + 1 ++ 12C log(n ∨ m) +n +� +, +(C.46) +where (i) follows from Lemma A.7, (ii) comes from Lemma A.8, and (iii) follows from Lemma A.4. Using +Lemma A.4 again, we have +TU\{j} +(ˆκ(j,k)) − TU\{j} +(ˆκ(j,k)−Iu−Ic) ≤ (Ic + Iu) +max +1≤ℓ≤m−2 +� +TU\{j} +(ℓ+1) − TU\{j} +(ℓ) +� +≤ 4C(Ic + Iu) log(n ∨ m) +m +, +(C.47) +46 + +holds with probability at least 1 − 2(n ∨ m)−C. 
With probability at least 1 − (n ∨ m)−C+2, we can guarantee +that +d(j,k) +| ˆS(j,k) +c +| + 1 +(i) +≤ +max +⌈γm⌉≤κ≤m +� +d(j,k)(κ) +| ˆSc(κ)| + 1 +� +≤ +max +⌈γm⌉≤κ≤m +� +� +� +TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +TU\{j} +(κ) ++ 6C log(n ∨ m) +| ˆSc(κ)| +� +� +� +(ii) +≤ +max +⌈γm⌉≤κ≤m +� +� +� +TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +TU\{j} +(κ) ++ 6C log(n ∨ m) +nTU\{j} +(κ) +/2 +� +� +� +≤ +max⌈γm⌉≤κ≤m +� +TU\{j} +(κ) +− TU\{j} +(κ−Ic−1) +� +TU\{j} +(⌈γm⌉) ++ 12C log(n ∨ m) +nTU\{j} +(⌈γm⌉) +(iii) +≤ 8CIc log(n ∨ m) +γm ++ 24C log(n ∨ m) +nγ +, +(C.48) +where (i) holds due to (C.44), (ii) comes from Lemma A.7, and (iii) comes from Lemma A.4 and A.8. Plugging +(C.46), (C.47) and (C.48) into (11), we can get +∆(−(j,k)) ≲ +log2(n ∨ m) +ρ(κ − (Ic + Iu)/m)2 +�Ic + Iu +m + 1 + 1 +n +� +, +where we also used the fact TU\{j} +(⌈γm⌉) ≤ TU\{j} +(ˆκ(j,k)) since ˆκ(j,k) ≥ ˆκ ≥ ⌈γm⌉. Then the conclusion follows from +Lemma 1 immediately. +C.5 +Prediction-oriented selection with conformal p-values +In section 3.3, we split the calibration set according the null hypothesis and alternative hypothesis, that is +C = C0 ∪ C1. Denote n0 = |C0| and n1 = |C1|. Let us first recall the definition of conformal p-value: +pj = 1 + � +i∈C0 1 {Ti ≤ Tj} +n0 + 1 +. +Therefore, the ranking threshold ˆκ only depends on the test set Du and the null calibration set Dc0. The +selected calibration set is defined as +ˆSc1 = +� +i ∈ C1 : Ti ≤ TU +(ˆκ) +� +. +In the following proof, we will fix the null calibration set Dc0 = {(Xi, Yi) : i ∈ C, Yi ≤ b0}, and then the +randomness of ˆκ only comes from the test set. Moreover, the index set C1 is also fixed once given Dc0. +C.5.1 +Proof of Proposition 3.1 +Proof. Notice that, there may exist ties in {pj : j ∈ U}, but there is no tie in {Tj : j ∈ U}. Let Rank(pj) = +� +i∈U 1 {pi ≤ pj} and Rank(Tj) = � +i∈U 1 {Ti ≤ Tj} be the ranks of pj and Tj in the test set, respectively. +47 + +Then we have Rank(pj) ≥ Rank(Tj), which means +� +pj ≤ pU +(ˆκ) +� += {Rank(pj) ≤ ˆκ} =⇒ {Rank(Tj) ≤ ˆκ} = +� +Tj ≤ TU +(ˆκ) +� +. +By the definition of ˆκ = max +� +τ : pU +(τ) ≤ δ(τ) +� +, we know +� +Tj ≤ TU +(ˆκ) +� += +� +Tj ≤ max +� +Ti : pi = pU +(ˆκ) +�� +=⇒ +� +pj ≤ pU +(ˆκ) +� +. +Therefore, we have proved that {p(Xj) ≤ pU +(τ)} = {Tj ≤ TU +(τ)} for any j ∈ U. +C.5.2 +Proof of Proposition 3.2 +Proof. Conclusion 1. +For any j ∈ ˆSu, the Lemma 1 in Fithian and Lei (2020) has showed ˆκ = ˆκ(j) +for the step-up procedures. Next, we prove the conclusion for j ∈ U \ ˆSu. Denote ri as the rank of Ti +in the set TU. Denote r(j) +i +as the rank of Ti in the set TU but the value of Tj is replaced by 0, that is +{0, T1, ..., Tj−1, Tj+1, ..., Tm}. Then we know +r(j) +i += ri + 1 for ri < rj; and r(j) +i += ri for ri > rj. +(C.49) +Recall the definition of ˆκ in step-up procedures, ˆκ = max{r : +pU +(r) ≤ δ(r)}. Together with (C.49) and +ˆκ ≥ ⌈γm⌉ in Assumption 3, we have +ˆκ(j) − ˆκ = +max +ˆκ+1≤r≤rj +� +r : δ(r) < pU +(r) ≤ δ(r + 1) +� +− ˆκ += +rj +� +r=ˆκ+1 +1 +� +δ(r) < pU +(r) ≤ δ(r + 1) +� +≤ +max +⌈γm⌉≤r≤m−1 +m +� +i=1 +1 {δ(r) < pi ≤ δ(r + 1)} . +(C.50) +Given the calibration set, we know {pi : i ∈ U} are i.i.d. random variables with +PDc +� +pi = +k +n0 + 1 +� += PDc +� � +k∈C0 +1 {Tk ≤ Ti} = k − 1 +� += PDc +� +TC0 +(k−1) < Ti ≤ TC0 +(k) +� += TC0 +(k) − TC0 +(k−1), +where TC0 +(k) is the k-th smallest value in TC0. Now we denote Ω(r) = {k ∈ [n0] : δ(r) < +k +n0+1 < δ(r + 1)}. +Then we have +PDc (δ(r) < pi ≤ δ(r + 1)) = +� +k∈Ω(r) +PDc +� +pi = +k +n0 + 1 +� += +� +k∈Ω(r) +TC0 +(k) − TC0 +(k−1) =: ω(r). 
+(C.51) +Similar to Lemma A.6, given Dc, we can verify that for any C ≥ 1, +����� +1 +m +m +� +i=1 +1 {δ(r) < pi ≤ δ(r + 1)} − ω(r) +����� ≤ 2eC log m +m ++ 2 +� +eCω(r) log m +m +, +(C.52) +48 + +holds with probability at least 1 − (n ∨ m)−C. Invoking maximal spacing in Lemma A.4, we have +P +� +ω(r) ≥ min +�2C|Ω(r)| log m +n0 + 1 +, 1 +�� +≤ m−C. +(C.53) +Combining (C.50)-(C.53), we can guarantee that +ˆκ(j) − ˆκ ≤ 4eC log m + 4m +max +1≤r≤m−1 ω(r) +≤ 4eC log m + 4m +max +1≤r≤m−1 +��2C|Ω(r)| log(n ∨ m) +n0 + 1 +� +∧ 1 +� +, +holds with probability at least 1 − 2m−C+1. +Conclusion 2. If we replace Tk with 1 for any k ∈ C, the corresponding p-value is +p(k) +i += +1 + � +l∈C0\{k} 1 {Tl ≤ Ti} +n0 + 1 +, +for i ∈ U. +It indicates that 0 ≤ pi − p(k) +i +≤ 1/(n0 + 1) for any i ∈ U. Hence we have +ˆκ(k) − ˆκ = +max +ˆκ+1≤r≤m +� +r : δ(r) < pU +(r) ≤ δ(r) + +1 +n0 + 1 +� +− ˆκ +≤ +max +1≤r≤m−1 +m +� +i=1 +1 +� +δ(r) < pi ≤ δ(r) + +1 +n0 + 1 +� +. +Notice that +���� +� +k ∈ [n0] : δ(r) < k + 1 +n0 + 1 ≤ δ(r) + +1 +n0 + 1 +����� ≤ 1. +Similar arguments yield that +ˆκ(k) − ˆκ ≤ 4eC log m + 8Cm log m +n0 + 1 +, +holds with probability at least 1 − 2m−C+1. +D +Proof of Auxiliary Lemmas +D.1 +Proof of Lemma A.3 +Proof. Let Rank(Ti) for i ∈ U be the rank of Ti in the test set {Ti : i ∈ U}. Notice that, ˆκ(j) = ˆκ if +Rank(Tj) ≤ ˆκ, and ˆκ(j) ≤ ˆκ + Iu if Rank(Tj) > ˆκ. Next we discuss the value of TU\{j} +(ˆκ(j)) in different scenarios: +• If Rank(Tj) > ˆκ, then TU\{j} +(ˆκ(j)−Iu) ≤ TU\{j} +(ˆκ) += TU +(ˆκ). +• If Rank(Tj) = ˆκ, then TU\{j} +(ˆκ(j)) = TU\{j} +(ˆκ) += TU +(ˆκ+1) and TU\{j} +(ˆκ(j)−1) = TU\{j} +(ˆκ−1) = TU +(ˆκ−1). +• If Rank(Tj) < ˆκ, then TU\{j} +(ˆκ(j)) = TU\{j} +(ˆκ) += TU +(ˆκ+1) and TU\{j} +(ˆκ(j)−1) = TU\{j} +(ˆκ−1) = TU +(ˆκ). +Then the conclusion follows immediately. +49 + +D.2 +Proof of Lemma A.4 +Lemma D.1 (Representation of spacing (Arnold et al., 2008)). Let U1, · · · , Un +i.i.d. +∼ Unif([0, 1]), and U(1) ≤ +U(2) ≤ · · · ≤ U(n) be their order statistics. Then +� +U(1) − U(0), · · · , U(n+1) − U(n) +� d= +� +V1 +�n+1 +k=1 Vk +, · · · , +Vn+1 +�n+1 +k=1 Vk +� +, +where U0 = 0, U(n+1) = 1, and V1, · · · , Vn+1 +i.i.d. +∼ Exp(1). +Lemma D.2 (Quantile transformation of order statistics, Theorem 1.2.5 in Reiss (2012)). Let X1, · · · , Xn be +i.i.d. random variables with CDF F(·), and U1, · · · , Un +i.i.d. +∼ Unif([0, 1]), then +� +F −1(U(1)), · · · , F −1(U(n)) +� d= +� +X(1), · · · , X(n) +� +. +Fact 1. For the random variable X ∼ Exp(λ), it holds that P (X ≥ x) = e−λx. +Fact 2. For the random variable X ∼ χ2 +ν, it holds that P (X − ν ≥ x) ≤ e−νx2/8. +Proof of Lemma A.4. Using the spacing representation in Lemma D.1, we have +PDu +� +� max +0≤ℓ≤n +� +U(ℓ+1) − U(ℓ) +� +≥ +1 +1 − 2 +� +C log(n∨m) +n+1 +C log(n ∨ m) +n + 1 +� +� += P +� +� max +0≤ℓ≤n +Vℓ +�n+1 +i=1 Vi +≥ +1 +1 − 2 +� +C log(n∨m) +n+1 +2C log(n ∨ m) +n + 1 +� +� +≤ P +� +max +0≤ℓ≤n Vℓ ≥ 2C log(n ∨ m) +� ++ P +� +1 +n + 1 +n+1 +� +i=1 +Vi ≤ 1 − 2 +� +C log(n ∨ m) +n + 1 +� +, +(D.1) +where U(0) = 0 and U(n+1) = 1. By the tail probability of Exp(1) in Fact 1 and union bound, we know +P +� +max +0≤ℓ≤n Vℓ ≥ 2C log(n ∨ m) +� +≤ (n + 1) P (V1 ≥ 2C log(n ∨ m)) += (n + 1) (n ∨ m)−2C ≤ (n ∨ m)−C, +(D.2) +holds for any C ≥ 1. In addition, we know that 1 +2 +�n+1 +i=1 Vi ∼ Γ(n + 1, 2) +d= χ2 +2(n+1). Using the tail bound of +χ2 distribution with ν = 2(n + 1) in Fact 2, we have +P +� +1 +n + 1 +n+1 +� +i=1 +Vi ≤ 1 − 2 +� +C log(n ∨ m) +n + 1 +� +≤ (n ∨ m)−C. 
+(D.3) +Substituting (D.2) and (D.3) into (D.1) gives the desired result. +D.3 +Proof of Lemma A.5 +Proof. Notice that, for any Sc ⊆ C, we can write +� +ˆSc(t) = Sc +� += +� � +i∈Sc +{Ti ≤ t} +� � +� +� � +i∈C\Sc +{Ti > t} +� +� . +(D.4) +50 + +Then for any i ∈ Sc, we have +P +� +Ri ≤ r +�� ˆSc(t) = Sc +� += P +� +Ri ≤ r +��Ti ≤ t +� += F(R,T )(r, t) +t +=: G(r). +Hence, given ˆSc(t) = Sc ⊆ C, {Ri}i∈Sc are i.i.d. random variables with the common CDF G(·). Applying +Lemma D.2, we know there exist Ui +i.i.d. +∼ Unif([0, 1]) for i ∈ Sc such that +� +RSc +(1), · · · , RSc +(|Sc|)| ˆSc(t) = Sc +� d= +� +G−1(U(1), · · · , G−1(U(|Sc|) +� +. +(D.5) +Let G−1(·) be the inverse function of G(·), and use our assumption on +d +drF(R,T )(r, t) ≥ ρt, we can get +d +drG−1(r) = +� d +drG(r) +�−1 += +t +d +drF(R,T )(r, t) ≤ 1 +ρ. +(D.6) +Then for any x ≥ 0, we have +P +� +max +0≤ℓ≤| ˆ +Sc(t)|−1 +� +R +ˆ +Sc(t) +(ℓ+1) − R +ˆ +Sc(t) +(ℓ+1) +� +≥ x +ρ +��� ˆSc(t) = Sc +� +(i) += P +� +max +0≤ℓ≤|Sc|−1 +� +RSc +(ℓ+1) − RSc +(ℓ) +� +≥ x +ρ +��� +� +i∈Sc +{Ti ≤ t} +� +(ii) += P +� +max +0≤ℓ≤|Sc|−1 +� +G−1(U(ℓ+1)) − G−1(U(ℓ)) +� +≥ x +ρ +� +(iii) +≤ P +� +max +0≤ℓ≤|Sc|−1 +� +U(ℓ+1) − U(ℓ) +� +≥ x +� +, +(D.7) +where (i) holds due to (D.4) and the fact that {Ti}i∈Sc are independent of {Ti}i∈C\Sc, (ii) follows from (D.5), +and (iii) comes from (D.6). Invoking Lemma A.4, we can finish the proof. +D.4 +Proof of Lemma A.6 +Proof of Lemma A.6. For any absolute constant C ≥ 1, we divide the subsets of C into two groups: +Ξ1 = +� +Sc ⊆ C : |Sc| > C log(n ∨ m) · +t2 +t2 − t1 +� +, +Ξ2 = +� +Sc ⊆ C : |Sc| ≤ C log(n ∨ m) · +t2 +t2 − t1 +� +. +Then for any x, y ≥ 0, it holds that +P +� +� +1 +| ˆSc(t2)| +������ +� +i∈ ˆ +Sc(t2) +Zi +������ +≥ x + y +� +� ≤ P +� +� +1 +| ˆSc(t2)| +������ +� +i∈ ˆ +Sc(t2) +Zi +������ +≥ x1 +� +ˆSc(t2) ∈ Ξ1 +� ++ y1 +� +ˆSc(t2) ∈ Ξ2 +� +� +� += +� +Sc⊆C +P +� +� +1 +| ˆSc(t2)| +������ +� +i∈ ˆ +Sc(t2) +Zi +������ +≥ x1 +� +ˆSc(t2) ∈ Ξ1 +� ++ y1 +� +ˆSc(t2) ∈ Ξ2 +� �� ˆSc(t2) = Sc +� +� × P +� +ˆSc(t2) = Sc +� += +� +Sc∈Ξ1 +P +� +1 +|Sc| +����� +� +i∈Sc +1 {t1 < Ti ≤ t2} +����� ≥ x +�� ˆSc(t2) = Sc +� +P +� +ˆSc(t2) = Sc +� ++ +� +Sc∈Ξ2 +P +� +1 +|Sc| +����� +� +i∈Sc +1 {t1 < Ti ≤ t2} +����� ≥ y +�� ˆSc(t2) = Sc +� +P +� +ˆSc(t2) = Sc +� +. +(D.8) +51 + +According to the definition of ˆSc(t2), we have +� +ˆSc(t2) = Sc +� += +� +� +� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� +� . +It implies that for any i ∈ Sc, +P +� +t1 < Ti ≤ t2 +��� ˆSc(t2) = Sc +� += P +� +t1 < Ti ≤ t2 +��Ti ≤ t2 +� += P (t1 < Ti ≤ t2) +P (Ti ≤ t2) += t2 − t1 +t2 +, +where the first equality follows from the independence of the samples in C. Recall the definition Zi = +1 {t1 < Ti ≤ t2} − t2−t1 +t2 +, then for any i ∈ Sc we have +E +� +Zi +��� ˆSc(t2) = Sc +� += E +� +�Zi +��� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� += P +� +t1 < Ti ≤ t2 +��� ˆSc(t2) = Sc +� +− t2 − t1 +t2 += 0. +(D.9) +Next we will bound the conditional moment generating function of � +i∈Sc Zi. 
For any λ > 0, we have +E +� +eλ � +i∈Sc Zi +��� ˆSc(t2) = Sc +� += E +� +� � +i∈Sc +eλZi +��� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� += E +� +E +� � +i∈Sc +eλZi +���Ti ≤ t2, {Tl}l∈Sc\{i} +� ��� +� +l∈Sc +{Tl ≤ t2} +� += E +� +� +� +l∈Sc\{i} +eλZlE +� +eλZi +���Ti ≤ t2 +� ��� +� +l∈Sc +{Tl ≤ t2} +� +� +(i) +≤ +� +1 + λ2E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +�� +E +� +� +� +l∈Sc\{i} +eλZl +��� +� +l∈Sc +{Tl ≤ t2} +� +� +(ii) += +� +1 + λ2E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +�� +E +� +� +� +l∈Sc\{i} +eλZl +��� +� +l∈Sc\{i} +{Tl ≤ t2} +� +� +≤ +� +i∈Sc +� +1 + λ2E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +�� +(iii) +≤ exp +� +λ2 � +i∈Sc +E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +�� +, +(D.10) +where (i) holds since (D.9) and the basic inequality ey − 1 − y ≤ y2e|y|, (ii) holds due to the independence, +and (iii) comes from the basic inequality 1 + y ≤ ey. Notice that +� +i∈Sc +E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +� += |Sc|E +� +Z2 +1eλ|Z1|���T1 ≥ t2 +� +≤ eλ|Sc|E +� +1 {t1 < Ti ≤ t2} +���Ti ≤ t2 +� += eλ|Sc|t2 − t1 +t2 +=: K2 +Sc(λ), +(D.11) +52 + +where the inequality holds since |Z1| ≤ 1 {t1 < Ti ≤ t2} ≤ 1. Using Markov’s inequality and (D.10), for any +z ≥ 0, we can get +P +� +�� +i∈Sc +Zi ≥ 2KSc(1)z +��� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� += P +� +�eλ � +i∈Sc Zi ≥ e2λKSc(1)z��� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� +≤ e−2λKSc(1)zE +� +�eλ � +i∈Sc Zi +��� +� +l∈Sc +{Tl ≤ t2} , +� +l∈C\Sc +{Tl > t2} +� +� +(i) +≤ e−2λKSc(1)z exp +� +λ2 � +i∈Sc +E +� +Z2 +i eλ|Zi|���Ti ≤ t2 +�� +(ii) +≤ e−2λKSc(1)z+λ2K2 +Sc(λ), +where (i) comes from (D.10), and (ii) comes from the definition of KSc(λ) in (D.11). By the definition of Ξ1, +we know that KSc(1)2 ≥ C log(n ∨ m) for any Sc ∈ Ξ1. Taking z = (C log(n ∨ m))1/2, λ = +z +KSc(1) ≤ 1, then +we have +P +������ +� +i∈Sc +Zi +����� ≥ 2KSc(1)z +��� ˆSc(t2) = Sc +� +≤ 2 exp +� +−2z2 + z2 K2 +Sc(λ) +K2 +Sc(1) +� +≤ 2e−z2 = 2(n ∨ m)−C. +(D.12) +For Sc ∈ Ξ2 such that KSc(1)2 < C log(n ∨ m), applying (D.11) with λ = 1 gives +P +������ +� +i∈Sc +Zi +����� ≥ 2eC log(n ∨ m) +��� ˆSc(t2) = Sc +� +≤ 2e−2eC log(n∨m)E +� +e +� +i∈Sc Zi +��� ˆSc(t2) = Sc +� +≤ 2e−2eC log(n∨m)+K2 +Sc(1) +≤ 2e−2eC log(n∨m)+eC log(n∨m) +≤ 2(n ∨ m)−C. +(D.13) +Taking x = 2KSc(1)(C log(n ∨ m))1/2 and y = 2eC log(n ∨ m), then plugging (D.12) and (D.13) into (D.8) +gives +P +� +� +1 +| ˆSc(t2)| +������ +� +i∈ ˆ +Sc(t2) +Zi +������ +≥ 2 +� +eC log(n ∨ m) +| ˆSc(t2)| +� +t2 − t1 +t2 ++ 2eC log(n ∨ m) +| ˆSc(t2)| +� +� +≤ 2(n ∨ m)−C +� +� � +C⊆Ξ1 +P +� +ˆSc(t2) = Sc +� ++ +� +C⊆Ξ2 +P +� +ˆSc(t2) = Sc +� +� +� += 2(n ∨ m)−C. +Thus we have complete the proof. +53 + +D.5 +Proof of Lemma A.7 +Proof. By the definition of ˆSc(t), we have +| ˆSc(t)| − nt = +� +i∈C +1 {Ti ≤ t} − nt = +� +i∈C +1 {Ti ≤ t} − P (Ti ≤ t) . +Applying Hoeffding’s inequality, we have +P +����| ˆSc(t)| − nt +��� ≥ 2C +� +n log(n ∨ m) +� +≤ (n ∨ m)−C. +Using the assumption 8C log(n ∨ m)/(nt) ≤ 1, we can finish the proof. +D.6 +Proof of Lemma A.8 +Proof. Invoking the spacing representation in Lemma D.1, we have +P +� +TU\{j} +(⌈γm⌉) ≤ γ +2 +� += P +��⌈γm⌉ +i=1 +Vi +�m +k=1 Vi +≤ γ +2 +� += P +� +� +1 +⌈γm⌉ +�⌈γm⌉ +i=1 +Vi +1 +m +�m +k=1 Vk +≤ γ +2 +m +⌈γm⌉ +� +� +≤ P +� +� +1 +⌈γm⌉ +�⌈γm⌉ +i=1 +Vi +1 +m +�m +k=1 Vk +≤ 1 +2 +� +� +≤ P +� +� +1 +⌈γm⌉ +⌈γm⌉ +� +i=1 +Vi − 1 ≤ −1 +4 +� +� + P +� +1 +m +m +� +k=1 +(Vk − 1) ≥ 1 +2 +� +≤ (n ∨ m)−C. +54 +