8.13 Cryptography

• Introduction: Secret writing means code.
• A simple code: Let the letters a, b, c, …, z be represented by the numbers 1, 2, 3, …, 26. A sequence of letters can then be written as a sequence of numbers. Arrange these numbers into an m × n matrix M, and select a nonsingular m × m matrix A. The message sent is Y = AM; the receiver recovers the original message as M = A⁻¹Y.

8.14 An Error-Correcting Code

• Parity encoding: add an extra bit so that the number of ones in the code word is even.

Example 2
Encode (a) W = (1 0 0 0 1 1) and (b) W = (1 1 1 0 0 1).
Solution
(a) W contains three ones, so the extra bit is 1, making the number of ones 4 (even). The code word is C = (1 0 0 0 1 1 1).
(b) W already contains four ones (even), so the extra bit is 0. The encoded word is C = (1 1 1 0 0 1 0).

Fig 8.12

Example 3
Decode (a) R = (1 1 0 0 1 0 1) and (b) R = (1 0 1 1 0 0 0).
Solution
(a) The number of ones is 4 (even), so we simply drop the last bit to get (1 1 0 0 1 0).
(b) The number of ones is 3 (odd): a parity error has occurred.

• Hamming code: a word W = (w1 w2 w3 w4) is encoded as C = (c1 c2 w1 c3 w2 w3 w4), where c1, c2, and c3 denote the parity check bits.

Encoding (all arithmetic mod 2):
c1 = w1 + w2 + w4
c2 = w1 + w3 + w4
c3 = w2 + w3 + w4

or, in matrix form,

( c1 )   ( 1 1 0 1 ) ( w1 )
( c2 ) = ( 1 0 1 1 ) ( w2 )   (mod 2)
( c3 )   ( 0 1 1 1 ) ( w3 )
                     ( w4 )

Example 4
Encode the word W = (1 0 1 1).
Solution
With w1 = 1, w2 = 0, w3 = 1, w4 = 1,
c1 = 1·1 + 1·0 + 0·1 + 1·1 = 2 = 0 (mod 2)
c2 = 1·1 + 0·0 + 1·1 + 1·1 = 3 = 1 (mod 2)
c3 = 0·1 + 1·0 + 1·1 + 1·1 = 2 = 0 (mod 2)
so c1 = 0, c2 = 1, c3 = 0 and C = (0 1 1 0 0 1 1).

• Decoding: the syndrome of a received word R is S = HRᵀ (mod 2), where

    ( 0 0 0 1 1 1 1 )
H = ( 0 1 1 0 0 1 1 )
    ( 1 0 1 0 1 0 1 )

If S = 0, then R is a code word.

Example 5
Compute the syndrome of (a) R = (1 1 0 1 0 0 1) and (b) R = (1 0 0 1 0 1 0).
Solution
(a) S = HRᵀ = (1+0+0+1, 1+0+0+1, 1+0+0+1)ᵀ = (0 0 0)ᵀ (mod 2), so we conclude that R is a code word. Dropping the check bits of (1 1 0 1 0 0 1), we get the decoded word (0 0 0 1).
(b) S = HRᵀ = (1+0+1+0, 0+0+1+0, 1+0+0+0)ᵀ = (0 1 1)ᵀ (mod 2). Since S ≠ 0, the received message R is not a code word.
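The Hamming (7,4) encoding and syndrome check described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the text; the helper names `encode`, `syndrome`, and `decode` are my own.

```python
# Hamming (7,4): encode a word W = (w1 w2 w3 w4) as
# C = (c1 c2 w1 c3 w2 w3 w4), then locate a single flipped bit
# via the syndrome S = H R^T (all arithmetic mod 2).

def encode(w):
    w1, w2, w3, w4 = w
    c1 = (w1 + w2 + w4) % 2          # parity over positions 1, 3, 5, 7
    c2 = (w1 + w3 + w4) % 2          # parity over positions 2, 3, 6, 7
    c3 = (w2 + w3 + w4) % 2          # parity over positions 4, 5, 6, 7
    return [c1, c2, w1, c3, w2, w3, w4]

H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(r):
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

def decode(r):
    s = syndrome(r)
    pos = s[0] * 4 + s[1] * 2 + s[2]   # syndrome read as a binary number
    r = list(r)
    if pos:                            # nonzero syndrome: flip the bad bit
        r[pos - 1] ^= 1
    return [r[2], r[4], r[5], r[6]]    # drop check bits 1, 2, and 4

print(encode([1, 0, 1, 1]))        # Example 4: -> [0, 1, 1, 0, 0, 1, 1]
print(decode([1, 0, 0, 1, 0, 1, 0]))  # Example 6: -> [1, 0, 1, 0]
```

Note that `decode` corrects at most one flipped bit, which is all the (7,4) Hamming code guarantees.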
Let E = (e1 e2 e3 e4 e5 e6 e7), where ei = 1 if noise changes the ith bit and ei = 0 if noise does not change the ith bit. Then R = C + E, so Rᵀ = Cᵀ + Eᵀ and

HRᵀ = H(Cᵀ + Eᵀ) = HCᵀ + HEᵀ = 0 + HEᵀ = HEᵀ,

where

      ( e4 + e5 + e6 + e7 )
HEᵀ = ( e2 + e3 + e6 + e7 )
      ( e1 + e3 + e5 + e7 )

Thus, if exactly one bit is in error, the syndrome S, read as a binary number, gives the position of that bit.

Example 6
For R = (1 0 0 1 0 1 0), the syndrome is

    ( 0 )
S = ( 1 )
    ( 1 )

which is the binary representation of 3, so the third bit is in error. Changing this zero to a one gives the code word C = (1 0 1 1 0 1 0). Dropping the first, second, and fourth (check) bits of C, we arrive at the decoded message (1 0 1 0).

8.15 Method of Least Squares

• Example 2: Given the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5), we want to fit the function f(x) = ax + b. Then
a + b = 1
2a + b = 3
3a + b = 4
4a + b = 6
5a + b = 5

Let

    ( 1 )      ( 1 1 )
    ( 3 )      ( 2 1 )
Y = ( 4 ), A = ( 3 1 )
    ( 6 )      ( 4 1 )
    ( 5 )      ( 5 1 )

so that

AᵀA = ( 55 15 ),  AᵀY = ( 68 )
      ( 15  5 )         ( 19 )

The best solution of AX = Y in the least-squares sense is

X = (AᵀA)⁻¹AᵀY = (1/50) (   5 −15 ) ( 68 ) = ( 1.1 )
                        ( −15  55 ) ( 19 )   ( 0.5 )

so a = 1.1 and b = 0.5. For this line the sum of the squared errors is

E = [1 − f(1)]² + [3 − f(2)]² + [4 − f(3)]² + [6 − f(4)]² + [5 − f(5)]²
  = [1 − 1.6]² + [3 − 2.7]² + [4 − 3.8]² + [6 − 4.9]² + [5 − 6]² = 2.7

The fitted function is y = 1.1x + 0.5. Fig 8.15

8.16 Discrete Compartmental Models

• The general two-compartment model:

dx/dt = −(F12 + F10) c1(t) + F21 c2(t) + I(t)
dy/dt = F12 c1(t) − (F21 + F20) c2(t)

Fig 8.16

• Discrete compartmental model: let

    ( x1 )      ( y1 )
X = ( x2 ), Y = ( y2 )
    (  ⋮ )      (  ⋮ )
    ( xn )      ( yn )

Then
y1 = x1 + (amount of tracer entering 1) − (amount of tracer leaving 1)
   = x1 + (τ12 x2 + τ13 x3 + … + τ1n xn) − (τ21 + τ31 + … + τn1) x1
   = (1 − τ21 − τ31 − … − τn1) x1 + τ12 x2 + … + τ1n xn
   = τ11 x1 + τ12 x2 + … + τ1n xn

and similarly
y2 = τ21 x1 + τ22 x2 + … + τ2n xn
⋮
yn = τn1 x1 + τn2 x2 + … + τnn xn

that is, Y = TX, where T = (τij) is the transfer matrix:

( y1 )   ( τ11 τ12 … τ1n ) ( x1 )
( y2 ) = ( τ21 τ22 … τ2n ) ( x2 )
(  ⋮ )   (  ⋮          ⋮ ) (  ⋮ )
( yn )   ( τn1 τn2 … τnn ) ( xn )

Fig 8.17

Example 1
• See Fig 8.18. The initial amounts are 100, 250, and 80 for the three compartments.
Solution
For compartment 1 (C1): 20% goes to C2 and 0% to C3, so 80% remains in C1. For C2: 5% goes to C1 and 30% to C3, so 65% remains in C2. For C3: 25% goes to C1 and 0% to C2, so 75% remains in C3. Fig 8.18

That is,
New C1 = 0.8C1 + 0.05C2 + 0.25C3
New C2 = 0.2C1 + 0.65C2 + 0C3
New C3 = 0C1 + 0.3C2 + 0.75C3

We get the transfer matrix

    ( 0.8 0.05 0.25 )
T = ( 0.2 0.65 0    )
    ( 0   0.3  0.75 )

Then one day later,

         ( 0.8 0.05 0.25 ) ( 100 )   ( 112.5 )
Y = TX = ( 0.2 0.65 0    ) ( 250 ) = ( 182.5 )
         ( 0   0.3  0.75 ) (  80 )   ( 135   )

• Note: m days later, Y = TᵐX0, since
X1 = TX0, X2 = TX1, X3 = TX2, …, Xn+1 = TXn
and therefore
X2 = T(TX0) = T²X0, X3 = T(T²X0) = T³X0, …, Xn = TⁿX0, n = 1, 2, …

Example 2
With

     ( 20 )      ( 0.85 0.01 0   0 )
X0 = ( 60 ), T = ( 0.05 0.98 0.2 0 )
     ( 15 )      ( 0.1  0    0.8 0 )
     ( 20 )      ( 0    0.01 0   1 )

we obtain

           ( 0.85 0.01 0   0 ) ( 20 )   ( 17.6 )
X1 = TX0 = ( 0.05 0.98 0.2 0 ) ( 60 ) = ( 62.8 )
           ( 0.1  0    0.8 0 ) ( 15 )   ( 14.0 )
           ( 0    0.01 0   1 ) ( 20 )   ( 20.6 )

For the symmetric matrix

    (  7  4 −4 )
A = (  4 −8 −1 )
    ( −4 −1 −8 )

we have λ = −9, −9, 9. For λ = −9,

               ( 16  4 −4 | 0 )                 ( 1 1/4 −1/4 | 0 )
(A + 9I | 0) = (  4  1 −1 | 0 )  row operations ( 0 0    0   | 0 )
               ( −4 −1  1 | 0 )                 ( 0 0    0   | 0 )

so k1 = −(1/4)k2 + (1/4)k3, which yields the eigenvectors

     ( 0 )       (  1 )
K1 = ( 1 ), K2 = ( −4 )
     ( 1 )       (  0 )

For λ = 9,

               ( −2   4  −4 | 0 )                 ( 1 0 4 | 0 )
(A − 9I | 0) = (  4 −17  −1 | 0 )  row operations ( 0 1 1 | 0 )
               ( −4  −1 −17 | 0 )                 ( 0 0 0 | 0 )

so k1 = −4k3 and k2 = −k3, which yields

     ( −4 )
K3 = ( −1 )
     (  1 )

Recall that if A is a symmetric n × n matrix, the eigenvectors corresponding to distinct (different) eigenvalues are orthogonal. Observe that K3 · K1 = K3 · K2 = 0, but K1 · K2 = −4 ≠ 0.

Using the Gram-Schmidt method, take V1 = K1 and

V2 = K2 − (K2 · V1 / V1 · V1) V1 = K2 − (−4/2) V1 = K2 + 2V1 = (1 −2 2)ᵀ

Now we have an orthogonal set, and we can normalize it:

( 0    )   (  1/3 )   ( −4/(3√2) )
( 1/√2 ) , ( −2/3 ) , ( −1/(3√2) )
( 1/√2 )   (  2/3 )   (  1/(3√2) )

Hence

    ( 0     1/3  −4/(3√2) )
P = ( 1/√2 −2/3  −1/(3√2) )
    ( 1/√2  2/3   1/(3√2) )

is an orthogonal matrix.
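The day-by-day iteration of Section 8.16 (X1 = TX0, X2 = TX1, …) is easy to check numerically. The sketch below is a minimal illustration, with `mat_vec` as a hypothetical helper; it reproduces the one-day step of Example 1 using plain Python lists.

```python
# Discrete compartmental model of Example 1: X_{m+1} = T X_m,
# so the state m days later is X_m = T^m X_0.

def mat_vec(T, x):
    """Matrix-vector product with nested lists."""
    return [sum(t * xi for t, xi in zip(row, x)) for row in T]

T = [[0.8, 0.05, 0.25],
     [0.2, 0.65, 0.0 ],
     [0.0, 0.3,  0.75]]

x0 = [100.0, 250.0, 80.0]   # initial amounts in C1, C2, C3
x1 = mat_vec(T, x0)          # one day later: (112.5, 182.5, 135.0)

# Each column of T sums to 1, so the total amount (430) is conserved.
print(x1, sum(x1))
```

Iterating `mat_vec` m times gives TᵐX0, matching the note above.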
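The Gram-Schmidt step and the orthogonality of P can also be verified numerically. The sketch below is an illustration (the helpers `dot` and `normalize` are my own): it builds P from K1, K2, K3 and checks that PᵀAP is diagonal with entries −9, −9, 9, as the eigenvalues require.

```python
# Orthogonal diagonalization of the symmetric matrix A above:
# apply Gram-Schmidt to the eigenvectors of the repeated eigenvalue,
# normalize, and check that P^T A P = diag(-9, -9, 9).
import math

A = [[ 7,  4, -4],
     [ 4, -8, -1],
     [-4, -1, -8]]

K1 = [ 0.0,  1.0, 1.0]   # eigenvector for lambda = -9
K2 = [ 1.0, -4.0, 0.0]   # eigenvector for lambda = -9 (not orthogonal to K1)
K3 = [-4.0, -1.0, 1.0]   # eigenvector for lambda = 9

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Gram-Schmidt: V2 = K2 - (K2.V1 / V1.V1) V1 = K2 + 2 V1 = (1, -2, 2)
V1 = K1
c = dot(K2, V1) / dot(V1, V1)            # = -4/2 = -2
V2 = [k - c * v for k, v in zip(K2, V1)]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

cols = [normalize(V1), normalize(V2), normalize(K3)]
P = [[cols[j][i] for j in range(3)] for i in range(3)]  # eigenvectors as columns

# D = P^T A P should be diag(-9, -9, 9)
AP = [[sum(A[i][k] * P[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
D  = [[sum(P[k][i] * AP[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print([round(D[i][i], 6) for i in range(3)])   # [-9.0, -9.0, 9.0]
```

The off-diagonal entries of D come out as numerical zeros, confirming that the columns of P are orthonormal eigenvectors of A.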