Lecture 11
- consistent: given accurate predictions, the online algorithm should perform close to the optimal offline algorithm, and
- robust: if the prediction is wrong, the online algorithm's performance should still be close to that of the best online algorithm without predictions.
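These two properties are usually traded off via a confidence parameter. As a hedged illustration (the ski-rental problem with a predicted number of ski days is a standard example of this trade-off, but it is not from these notes; the threshold rule and the parameter `lam` below are assumptions): renting until a prediction-dependent threshold and then buying yields roughly $(1+\lambda)$-consistency and $(1+1/\lambda)$-robustness.

```python
import math

def ski_rental_with_prediction(buy_cost, true_days, predicted_days, lam):
    """Illustrative sketch: rent until a threshold day, then buy.

    If the prediction says skiing lasts long enough to justify buying,
    buy early (day ceil(lam * buy_cost)); otherwise buy late
    (day ceil(buy_cost / lam)). Returns the algorithm's total cost.
    """
    if predicted_days >= buy_cost:
        threshold = math.ceil(lam * buy_cost)
    else:
        threshold = math.ceil(buy_cost / lam)
    if true_days < threshold:
        return true_days                   # rented every day, never bought
    return (threshold - 1) + buy_cost      # rented, then bought on day `threshold`

def opt(buy_cost, true_days):
    """Offline optimum: buy on day one or rent forever, whichever is cheaper."""
    return min(buy_cost, true_days)

# Accurate prediction -> cost close to OPT (consistency);
# wildly wrong prediction -> cost still within a bounded factor (robustness).
B, lam = 100, 0.5
for true_days, predicted in [(200, 200), (200, 1), (10, 200)]:
    alg = ski_rental_with_prediction(B, true_days, predicted, lam)
    print(f"true={true_days} pred={predicted}: ALG={alg}, OPT={opt(B, true_days)}")
```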
Lecture 12
- Uniform distribution: All outcomes are equally likely: $\text{Pr}(X=k)=\frac{1}{|\text{codom}(X)|}$.
- Geometric distribution: Number of trials until the first success in a sequence of Bernoulli trials: $\text{Pr}(X=k)=(1-p)^{k-1}p$, $\mathbb{E}[X]=\frac{1}{p}$.
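As a quick sanity check of the stated expectation $\mathbb{E}[X]=\frac{1}{p}$, a minimal simulation sketch (the function name and parameters are illustrative, not from the lecture):

```python
import random

def geometric_sample(p, rng):
    """Number of Bernoulli(p) trials up to and including the first success."""
    trials = 1
    while rng.random() >= p:
        trials += 1
    return trials

rng = random.Random(0)
p, n_samples = 0.2, 100_000
mean = sum(geometric_sample(p, rng) for _ in range(n_samples)) / n_samples
print(f"empirical mean ≈ {mean:.3f}, theory 1/p = {1 / p}")  # ≈ 5.0
```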
Lecture 13
- Yao’s Principle, informal: The worst-case performance of the best randomized algorithm is equal to the average performance of the best deterministic algorithm on the worst distribution of the inputs. Formally:
- If $$\frac{\min_{\text{Alg}\in\mathcal{A}}\mathbb{E}_{\text{Adv}}\left[\text{Alg}(\mathcal{I})\right]}{\mathbb{E}_{\text{Adv}}\left[\text{Opt}(\mathcal{I})\right]}\geq c,$$ in other words, if even the best deterministic algorithm has expected cost at least $c$ times the expected optimal cost under the adversarial input distribution $\text{Adv}$, then every randomized algorithm $\text{Rand}$ satisfies $$\frac{\mathbb{E}_{\text{Rand}}\left[\text{Rand}(I)\right]}{\text{Opt}(I)}\geq c$$ on a worst-case input $I$.
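A minimal numerical sketch of the cost-minimization form of the principle, with the $\text{Opt}$ normalization dropped (i.e., treating $\text{Opt}(I)=1$ for all inputs) and a hypothetical 2×2 cost matrix: the adversary's best input distribution against deterministic algorithms lower-bounds the worst-case expected cost of every randomized algorithm. A grid search over distributions stands in for the exact minimax computation.

```python
# Toy cost matrix: rows = deterministic algorithms, columns = inputs.
# (Illustrative numbers, not from the lecture.)
C = [[1.0, 3.0],
     [3.0, 1.0]]

def grid(steps=100):
    """Distributions (q, 1-q) on two points, sampled on a grid."""
    return [(i / steps, 1 - i / steps) for i in range(steps + 1)]

# Adversary side: for each input distribution p, the best deterministic
# algorithm's expected cost; the adversary picks the worst p (max over p).
yao_lower = max(
    min(sum(p[i] * C[a][i] for i in range(2)) for a in range(2))
    for p in grid()
)

# Randomized side: for each mixture q over the algorithms, the worst
# input's expected cost; the best randomized algorithm minimizes this.
rand_worst = min(
    max(sum(q[a] * C[a][i] for a in range(2)) for i in range(2))
    for q in grid()
)

print(yao_lower, rand_worst)  # both 2.0: the bound is tight on this matrix
```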
- Coupon Collector’s Problem: If each box of breakfast cereal contains a coupon, and there are $n$ different coupons, how many boxes do you need to buy in expectation to collect all $n$ coupons? Answer: $nH_n$, where $H_n=\sum_{i=1}^{n}\frac{1}{i}$ is the $n$-th harmonic number. (After collecting $i-1$ distinct coupons, each box yields a new one with probability $\frac{n-i+1}{n}$, so the expected wait for the $i$-th is $\frac{n}{n-i+1}$; summing over $i$ gives $nH_n$.)
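The $nH_n$ answer can be checked empirically; a minimal simulation sketch (names are illustrative):

```python
import random

def boxes_needed(n, rng):
    """Buy boxes until all n coupon types have been seen at least once."""
    seen, boxes = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))  # each box holds a uniformly random coupon
        boxes += 1
    return boxes

rng = random.Random(0)
n, trials = 50, 10_000
empirical = sum(boxes_needed(n, rng) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))
print(f"empirical ≈ {empirical:.1f}, n*H_n = {n * harmonic:.1f}")
```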
Theorem
No randomized online algorithm for paging is better than $H_k$-competitive in expectation, even when $M = k + 1$. For each randomized algorithm $\mathcal{A}$