Lecture 11

- *consistent:* given more accurate predictions, the online algorithm should perform close to the optimal offline algorithm, and
- *robust:* if the prediction is wrong, the performance of the online algorithm should be close to that of the online algorithm without predictions (one common formalization is sketched below).
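One common way to make these two requirements precise (an assumed formalization, not necessarily the exact one used in the lecture) is: an online algorithm with predictions is *$\alpha$-consistent* and *$\beta$-robust* if, for every request sequence $\sigma$ with prediction error $\eta$,

$$
\mathrm{ALG}(\sigma) \le \alpha \cdot \mathrm{OPT}(\sigma) \quad \text{whenever } \eta = 0,
\qquad\text{and}\qquad
\mathrm{ALG}(\sigma) \le \beta \cdot \mathrm{OPT}(\sigma) \quad \text{for arbitrary predictions,}
$$

where $\beta$ is typically within a constant factor of the competitive ratio achievable without predictions, so that a bad prediction cannot make the algorithm much worse than the best prediction-free online algorithm.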

Lecture 12

- *Uniform distribution:* All values are equally likely: $\text{Pr}(X=k)=\frac{1}{|\text{codom}(X)|}$.
- *Geometric distribution:* Number of trials until the first success in a sequence of Bernoulli trials: $\text{Pr}(X=k)=(1-p)^{k-1}p$, $\mathbb{E}[X]=\frac{1}{p}$ (an empirical check is sketched after this list).
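As a quick sanity check (not part of the lecture notes), the following minimal Python sketch samples the geometric distribution by repeating Bernoulli($p$) trials until the first success and compares the empirical mean to the predicted $\mathbb{E}[X] = 1/p$; for $p = 0.25$ the mean should come out near $4$.

```python
import random

def geometric_sample(p, rng=random):
    """Number of Bernoulli(p) trials up to and including the first success."""
    trials = 1
    while rng.random() >= p:    # a failure occurs with probability 1 - p
        trials += 1
    return trials

if __name__ == "__main__":
    p, n = 0.25, 100_000
    empirical_mean = sum(geometric_sample(p) for _ in range(n)) / n
    print(f"empirical mean ~ {empirical_mean:.3f}, predicted E[X] = 1/p = {1 / p:.3f}")
```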

Lecture 13

Theorem. No randomized online algorithm for paging is better than $H_k$-competitive in expectation, even when $M = k + 1$. For each randomized algorithm $\mathcal{A}$
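As an illustration (not part of the lecture), the Python sketch below simulates the standard hard instance behind this lower bound: with $M = k+1$ pages and uniformly random requests, any online algorithm faults with probability at least $\frac{1}{k+1}$ per request, while the offline optimum (Belady's rule) faults roughly once every $(k+1)H_{k+1}$ requests, so the empirical fault ratio comes out near $H_k$. The random-eviction online policy and the parameter choices are illustrative assumptions, not the algorithm from the lecture.

```python
import random
from math import fsum

def harmonic(k):
    """k-th harmonic number H_k = 1 + 1/2 + ... + 1/k."""
    return fsum(1.0 / i for i in range(1, k + 1))

def online_random_eviction(requests, k):
    """Illustrative online policy: on a fault with a full cache, evict a uniformly random page."""
    cache, faults = set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.remove(random.choice(tuple(cache)))
            cache.add(p)
    return faults

def offline_belady(requests, k):
    """Offline optimum (Belady): on a fault, evict the page whose next request is furthest away."""
    n = len(requests)
    next_use = [n] * n          # next_use[i] = index of the next request to requests[i], or n if none
    last_seen = {}
    for i in range(n - 1, -1, -1):
        next_use[i] = last_seen.get(requests[i], n)
        last_seen[requests[i]] = i
    cache, faults = {}, 0       # page -> index of its next request
    for i, p in enumerate(requests):
        if p not in cache:
            faults += 1
            if len(cache) == k:
                victim = max(cache, key=cache.get)   # page requested furthest in the future
                del cache[victim]
        cache[p] = next_use[i]
    return faults

if __name__ == "__main__":
    k, n = 8, 200_000
    requests = [random.randrange(k + 1) for _ in range(n)]   # M = k + 1 pages, uniform requests
    ratio = online_random_eviction(requests, k) / offline_belady(requests, k)
    print(f"empirical fault ratio ~ {ratio:.2f} vs H_k = {harmonic(k):.2f}")
```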