
The Minimax Theorem
... Proof. Without loss of generality, assume v = 0. Indeed, if v ≠ 0, Lemma 2 allows us to consider the matrix A − v1 instead of A. Our goal is to show that w = 0. Lemma 1 tells us 0 ≤ w, so we will assume 0 < w and look for a contradiction. Let C = { Ay : y is a probability vector }. Just as in the pr ...
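The "without loss of generality, v = 0" step rests on Lemma 2: subtracting a constant v from every entry of the payoff matrix shifts the game value by exactly v. A small numerical sanity check of that fact (not a proof) is sketched below; the matching-pennies matrix and the grid search over the row player's mixed strategies are illustrative assumptions, not taken from the text.

```python
# Sanity check of the shift lemma: value(A - c*1) = value(A) - c.
# For a 2x2 game, min over the column player's mixed strategies of
# x^T A y is linear in y, so it is attained at a pure column; we can
# therefore approximate max_x min_y x^T A y by a grid search over
# x = (p, 1-p).

def value(A, steps=2000):
    """Approximate game value max_x min_j (x^T A)_j for a 2x2 matrix A."""
    best = float("-inf")
    for i in range(steps + 1):
        p = i / steps
        col0 = p * A[0][0] + (1 - p) * A[1][0]
        col1 = p * A[0][1] + (1 - p) * A[1][1]
        best = max(best, min(col0, col1))
    return best

A = [[1.0, -1.0], [-1.0, 1.0]]   # matching pennies: value 0 at x = (1/2, 1/2)
c = 0.75
A_shift = [[a - c for a in row] for row in A]

print(value(A))        # ≈ 0.0
print(value(A_shift))  # ≈ -0.75, i.e. value(A) - c
```

The grid search is crude but sufficient here because the optimal strategy x = (1/2, 1/2) lies exactly on the grid.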
Test 2 Working with Polynomials
... Donkey Kong is competing in a shot-put challenge at the Olympics. His throw can be modeled by the function h(t) = −5t² + 8.5t + 1.8, where h is the height, in metres, of a shot-put t seconds after it is thrown. Determine the remainder when h(t) is divided by (t – 1.4). What does this value represent ...
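By the Remainder Theorem, the remainder on dividing h(t) by (t − 1.4) is h(1.4), the height of the shot-put 1.4 seconds after release. A short sketch, assuming only the model given in the problem:

```python
# Remainder Theorem check: dividing h(t) by (t - a) leaves remainder h(a).

def h(t):
    """Height model from the problem: h(t) = -5t^2 + 8.5t + 1.8."""
    return -5 * t**2 + 8.5 * t + 1.8

def remainder(coeffs, a):
    """Synthetic division of a polynomial (coefficients in descending
    order) by (t - a); by Horner's rule the final value is the remainder."""
    r = 0.0
    for c in coeffs:
        r = r * a + c
    return r

print(remainder([-5, 8.5, 1.8], 1.4))  # ≈ 3.9, agreeing with h(1.4) up to rounding
```

So the remainder is 3.9, which represents the height in metres of the shot-put 1.4 seconds after it is thrown.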
Condition estimation and scaling
... not a backward stable algorithm. Even if we could compute A⁻¹ essentially exactly, only committing rounding errors when storing the entries and when performing the matrix-vector multiplication, we would find fl(A⁻¹b) = (A⁻¹ + F)b, where |F| ≤ n εmach |A⁻¹|. But this corresponds to a backward error ...
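The practical symptom of this instability is that solving Ax = b through an explicitly computed inverse can leave a residual b − Ax̂ far larger than machine precision would suggest when A is ill conditioned. A minimal pure-Python sketch, assuming an illustrative 2×2 matrix not taken from the text:

```python
# Illustration: inversion-based solves on an ill-conditioned matrix.
# A = [[1, 1], [1, 1 + delta]] has det = delta and kappa_inf(A) ~ 4/delta,
# so perturbations F in the computed inverse are amplified enormously.

delta = 1e-12
A = [[1.0, 1.0], [1.0, 1.0 + delta]]
b = [2.0, 2.0 + delta]  # chosen so the exact solution is x = (1, 1)

# Explicit inverse via the 2x2 cofactor formula (note the cancellation
# already present in computing det).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

# x_hat = fl(A^{-1} b)
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]

# Residual of the inversion-based solve, infinity norm.
residual = max(abs(b[0] - (A[0][0] * x[0] + A[0][1] * x[1])),
               abs(b[1] - (A[1][0] * x[0] + A[1][1] * x[1])))

# kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf, computed directly.
norm_A = max(abs(A[0][0]) + abs(A[0][1]), abs(A[1][0]) + abs(A[1][1]))
norm_Ainv = max(abs(Ainv[0][0]) + abs(Ainv[0][1]),
                abs(Ainv[1][0]) + abs(Ainv[1][1]))
kappa = norm_A * norm_Ainv

print(kappa)     # on the order of 4e12
print(residual)
```

With κ(A) near 10¹², the bound |F| ≤ n εmach |A⁻¹| permits a backward error of roughly κ(A)·εmach relative to A, which is why inversion-based solves are avoided in favour of backward stable factorizations.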