Outline of the Pre-session
Tianxi Wang
[email protected]
1. Tuesday (21/09): Linear Algebra
2. Wednesday (22/09): Calculus
3. Thursday (23/09): Unconstrained Optimization
4. Friday (24/09): Equality Constrained Optimization - 1
5. Monday (27/09) and Tuesday (28/09): Equality Constrained Optimization - 1 & Inequality Constrained Optimization
Lecture 1: Linear Equations and Linear Algebra
Systems of Linear Equations: Chapters 6-9.
Systems of linear equations (SLE) are a very important tool in economics.
Part of their appeal is due to the fact that they are simple to deal with and
deliver exact solutions. Although the world is likely to be nonlinear, linear
systems are useful approximations.
We can write a system of linear equations in matrix form. We will study
some basic properties of a square matrix. These properties will also be
useful when we study differential equations.
Example. A company earns before-tax profits of 100,000. It has agreed to
contribute 10 percent of its after-tax profits to the Red Cross. It must pay a
state tax of 5 percent of its profits after the Red Cross donation, and a
federal tax of 40 percent of its profits after the donation and state taxes
are paid. How much does the company pay in state taxes, federal taxes, and the Red
Cross donation?
We need to structure our analysis. Let C, S, and F be charitable
contribution, state tax and federal tax, respectively.
After-tax profits: 100,000 − (S + F), so that C = 0.10[100,000 − (S + F)]
State tax: S = 0.05[100,000 − C]
Federal tax: F = 0.40[100,000 − (C + S)]
Putting these together we have a system of linear equations:
C + 0.1S + 0.1F = 10,000    (1)
0.05C + S = 5,000    (2)
0.4C + 0.4S + F = 40,000    (3)
Solving, we obtain C = 5,956, S = 4,702, and F = 35,737.
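To double-check the arithmetic, here is a minimal NumPy sketch (my own illustration, not part of the original notes) that solves system (1)-(3) numerically.

```python
# Solve the tax/donation example (1)-(3) numerically, ordering the unknowns as (C, S, F).
import numpy as np

A = np.array([
    [1.00, 0.10, 0.10],   # C + 0.1S + 0.1F = 10,000
    [0.05, 1.00, 0.00],   # 0.05C + S       =  5,000
    [0.40, 0.40, 1.00],   # 0.4C + 0.4S + F = 40,000
])
b = np.array([10_000.0, 5_000.0, 40_000.0])

C, S, F = np.linalg.solve(A, b)
print(f"C = {C:,.0f}, S = {S:,.0f}, F = {F:,.0f}")
# Rounded output: C = 5,956, S = 4,702, F = 35,737
```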
Example: IS-LM Analysis. IS-LM analysis is Sir John Hicks's interpretation
of the basic elements of John Maynard Keynes' classic work, The General
Theory of Employment, Interest, and Money.
Description of the economy: no imports, exports, or other leakages. The value
of total production equals total spending, which in turn equals total national
income. We denote this Y. Total spending Y can be decomposed into
consumption C, investment I, and government spending G:
Y = C + G + I
Consumer spending is proportional to total income: C = bY, where b ∈ (0, 1)
is the marginal propensity to consume; thus s = (1 − b) is the marginal
propensity to save. Firms' investment I is decreasing in the interest rate r:
I = I0 − ar, where a is the marginal efficiency of capital.
The IS schedule is a summary of the real side of the economy (consumption,
investment and saving decisions) and it is given by
Y = bY + (I0 − ar) + G
or
sY + ar = I0 + G
The LM schedule is a summary of the money market: money supply equals
money demand. Money supply, Ms, is determined outside the system. Money
demand depends on:
Transactions (precautionary) demand, Md: as national income increases so
does the demand for funds, i.e., Md = mY.
Speculative demand, which summarizes the portfolio choice of an investor between
holding bonds with a return r and holding money, which is liquid but earns no
interest: M0 − hr.
So the LM schedule is
Ms = mY + M0 − hr
Equilibrium occurs when production equilibrium and monetary equilibrium
are simultaneously satisfied
sY + ar = I0 + G    (4)
mY − hr = Ms − M0    (5)
which is a linear system.
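As an illustration, the sketch below (my addition, not from the notes; the parameter values s, a, m, h, I0, G, Ms, M0 are purely hypothetical) solves system (4)-(5) for the equilibrium income Y and interest rate r.

```python
# Solve the 2x2 IS-LM system (4)-(5) for (Y, r) under illustrative parameter values.
import numpy as np

s, a = 0.25, 50.0       # marginal propensity to save, interest sensitivity of investment
m, h = 0.20, 40.0       # income and interest sensitivity of money demand
I0, G = 100.0, 200.0    # autonomous investment and government spending
Ms, M0 = 260.0, 60.0    # money supply and autonomous money demand

A = np.array([[s,  a],   # sY + ar = I0 + G
              [m, -h]])  # mY - hr = Ms - M0
b = np.array([I0 + G, Ms - M0])

Y, r = np.linalg.solve(A, b)
print(f"Equilibrium income Y = {Y:.1f}, interest rate r = {r:.3f}")
# With these assumed parameters: Y = 1100.0, r = 0.500
```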
Definition. A system of m linear equations in n unknowns is
a11x1 + ... + a1nxn = b1
...
am1x1 + ... + amnxn = bm
The aij and bi are parameters (coefficients), and the xj are the unknowns (variables).
Examples
Unique solution:
x1 + x2 = 2
x1 − x2 = 0
No Solution
x1 + x2 = 2
x1 + x2 = 4
Many Solutions
x1 + x2 = 2
2x1 + 2x2 = 4
Questions
When does a system of linear equations have a solution?
When a solution exists, how many solutions are there?
Is there an efficient algorithm that computes the solutions?
We will focus on systems that have a unique solution. For this, we need to
impose enough structure on the problem: for n unknowns, we will need n
linearly independent equations.
It is easy to describe these conditions in matrix form.
It is more compact to write a system of linear equations in matrix form:
a11x1 + a12x2 = b1
a21x1 + a22x2 = b2
can be rewritten as
\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}
\]
or
Ax = b,
where A is the coefficient matrix, and x and b are two vectors; x is the
vector of unknowns.
Basic Matrix Algebra
Addition
\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
+
\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}
=
\begin{pmatrix} a_{11}+c_{11} & a_{12}+c_{12} \\ a_{21}+c_{21} & a_{22}+c_{22} \end{pmatrix}
\]
Subtraction
\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
-
\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}
=
\begin{pmatrix} a_{11}-c_{11} & a_{12}-c_{12} \\ a_{21}-c_{21} & a_{22}-c_{22} \end{pmatrix}
\]
Scalar Multiplication
\[
k \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
=
\begin{pmatrix} ka_{11} & ka_{12} \\ ka_{21} & ka_{22} \end{pmatrix}
\]
Matrix Multiplication
\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}
=
\begin{pmatrix} a_{11}c_{11}+a_{12}c_{21} & a_{11}c_{12}+a_{12}c_{22} \\ a_{21}c_{11}+a_{22}c_{21} & a_{21}c_{12}+a_{22}c_{22} \end{pmatrix}
\]
Identity Matrix: I is such that AI = A and IA = A; hence
\[
I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]
Associative Law
(A + B) + C = A + (B + C)
(AB)C = A(BC)
Commutative Law
A+B=B+A
Distributive Law
A(B + C) = AB + AC
(A + B)C = AC + BC
Transpose Matrix:
\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}^{T}
=
\begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix}
\]
Properties of Transpose Matrix:
(A + B)T = AT + BT
(A − B)T = AT − BT
(AT )T = A
(rA)T = rAT
(AB)T = BT AT
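The operations above are easy to experiment with numerically. Below is a minimal NumPy sketch (my own illustration, not from the notes) of the 2 × 2 operations, including a numerical check of AI = A and (AC)T = CT AT.

```python
# Basic 2x2 matrix algebra with NumPy: addition, subtraction, scalar and
# matrix multiplication, identity, and transpose properties.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.array([[0.0, 1.0],
              [5.0, 2.0]])
k = 3.0
I = np.eye(2)                               # identity matrix

print(A + C)                                # addition
print(A - C)                                # subtraction
print(k * A)                                # scalar multiplication
print(A @ C)                                # matrix multiplication
print(np.allclose(A @ I, A))                # AI = A  -> True
print(np.allclose((A @ C).T, C.T @ A.T))    # (AC)^T = C^T A^T  -> True
```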
Definition. A row of a matrix has k leading zeros if the first k elements of
the row are all zeros and the (k + 1)-th element is non-zero. A matrix is in row
echelon form if each row has more leading zeros than the row preceding it.
With elementary row operations we can always go from a matrix A to its
row echelon representation, B.
Definition The Rank of a Matrix A is the number of non-zero rows in its
row echelon form B.
Example. The matrix with rows (1, 2, 3), (0, 4, 5), and (0, 0, 0) is in row echelon form: each row has more leading zeros than the one before it. It has two non-zero rows, so its rank is 2.
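Numerically, the rank can be computed directly; here is a short sketch (my addition, assuming NumPy; np.linalg.matrix_rank uses a singular-value criterion rather than explicit row reduction, but it returns the same number).

```python
# Compute the rank of two small matrices with NumPy.
import numpy as np

B = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 0.0]])    # already in row echelon form
print(np.linalg.matrix_rank(B))    # 2: two non-zero rows

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])         # second row is twice the first
print(np.linalg.matrix_rank(A))    # 1
```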
Some basic results on systems of linear equations.
Result 1. A linear system of equations must have either no solution, one solution, or infinitely
many solutions. So, if a system has more than one solution it has infinitely many.
Result 2. If a system has a unique solution then it has at least as many
equations as unknowns. If a system has more unknowns than equations
then it has either no solutions or infinitely many solutions.
A homogeneous system is Ax = 0. Note that this system always has a
solution, namely x = 0. Hence, Result 2 immediately implies that a homogeneous system
with more unknowns than equations has infinitely many solutions.
Result For any given vector b, the system Ax = b has a solution if and
only if rank A equals the number of rows of A.
Result For any given vector b, the system Ax = b has at most one solution
if and only if rank A equals the number of columns of A.
These last two results imply that
Result. For any given vector b, the system Ax = b has a unique solution
if and only if rank A equals the number of columns of A, which in turn equals
the number of rows of A. A matrix A with these properties is said to be
nonsingular.
Note that a necessary condition for a unique solution is that the coefficient
matrix A has the same number of rows and columns. This is the definition
of a square matrix. If the rank of a square matrix equals the number of
its rows, then we say that the rank of A is maximal. In view of the result
above, this is a crucial condition for the existence of a unique solution. As we
will see, to check the maximal rank property we will use the notion of the
determinant of a matrix.
Summary Consider the linear system Ax = b.
I. If the number of equations < number of unknowns, then: (Ia) Ax = 0
has infinitely many solutions; (Ib) for any given b, Ax = b has 0 or infinitely
many solutions; (Ic) if rank A = number of equations, then Ax = b has
infinitely many solutions.
II. If the number of equations > number of unknowns, then: (IIa) Ax = 0
has one or infinitely many solutions; (IIb) for any given b, Ax = b has 0,
one or infinitely many solutions; (IIc) if rank A = number of unknowns, then
Ax = b has 0 or one solution.
III. If the number of equations = number of unknowns, then: (IIIa) Ax = 0
has one or infinitely many solutions; (IIIb) for any given b, Ax = b has 0,
one or infinitely many solutions; (IIIc) if rank A = number of unknowns, then
Ax = b has exactly one solution.
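The summary can be checked on the three 2 × 2 examples given earlier. The sketch below is my own illustration; it also reports the rank of the augmented matrix [A | b], a standard diagnostic that is not used in the notes but separates the "no solution" and "infinitely many solutions" cases.

```python
# Compare rank A with the number of unknowns and with rank [A | b] for the
# three example systems (unique, none, infinitely many solutions).
import numpy as np

systems = {
    "unique":   (np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([2.0, 0.0])),
    "none":     (np.array([[1.0, 1.0], [1.0,  1.0]]), np.array([2.0, 4.0])),
    "infinite": (np.array([[1.0, 1.0], [2.0,  2.0]]), np.array([2.0, 4.0])),
}

for name, (A, b) in systems.items():
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(name, "rank A =", rank_A, "rank [A|b] =", rank_Ab)
# unique:   rank A = 2 = number of unknowns -> exactly one solution
# none:     rank A = 1 < rank [A|b] = 2     -> no solution
# infinite: rank A = 1 = rank [A|b]         -> infinitely many solutions
```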
Algebra for Square Matrices
Definition. For a square matrix A, A−1 is an inverse of A if A−1A =
AA−1 = I.
Definition. If A has an inverse we say that A is invertible.
Result. A square matrix A can have at most one inverse.
Proof. Suppose that B and C are both inverses of A. Then
C = CI = C(AB) = (CA)B = IB = B
Result. If a square matrix A is invertible then it is nonsingular. Hence, the system
Ax = b has a unique solution given by x = A−1b.
To see this note that
Ax = b
A−1Ax = A−1b
Ix = A−1b
x = A−1b
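Here is a minimal NumPy sketch (my illustration) of this computation, using the earlier "unique solution" example; in numerical practice one calls np.linalg.solve rather than forming A−1 explicitly.

```python
# Solve Ax = b via the inverse, and compare with the direct solver.
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0]])              # the "unique solution" example above
b = np.array([2.0, 0.0])

A_inv = np.linalg.inv(A)
print(np.allclose(A_inv @ A, np.eye(2)))  # A^{-1} A = I  -> True
print(A_inv @ b)                          # x = A^{-1} b = [1. 1.]
print(np.linalg.solve(A, b))              # same solution, computed more stably
```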
Result. If a square matrix is nonsingular then it is invertible.
Hence, a square matrix is nonsingular if and only if it is invertible, if and only
if it has maximal rank.
Determinant of a Matrix. The notion of the determinant of a matrix is useful
because it gives a simple criterion to check whether a square matrix is
nonsingular. In fact,
Result. A square matrix is nonsingular if and only if its determinant is
nonzero.
The determinant of a matrix is defined inductively. Take a scalar a (a 1 × 1
matrix). The inverse of a is 1/a, which is well defined if and only
if a ≠ 0. So, we can define the determinant of a as det(a) = a, and this
satisfies the property that a is invertible if and only if its determinant is nonzero.
For a 2 × 2 matrix, A−1 is defined if and only if a11a22 − a12a21 ≠ 0. Thus
\[
\det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}
\]
For a 3 × 3 matrix the determinant of A is
a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a12a21a33 − a11a23a32
In general, there is a simple algorithm to find the determinant of a matrix.
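For instance, NumPy computes determinants of any order; the sketch below (my addition, on hypothetical matrices) checks the 2 × 2 and 3 × 3 formulas above against np.linalg.det.

```python
# Verify the 2x2 and 3x3 determinant formulas numerically.
import numpy as np

A2 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
print(np.linalg.det(A2))             # -2.0 = 1*4 - 2*3

A3 = np.array([[2.0, 0.0, 1.0],
               [1.0, 3.0, 0.0],
               [0.0, 1.0, 4.0]])
a = A3
formula = (a[0,0]*a[1,1]*a[2,2] + a[0,1]*a[1,2]*a[2,0] + a[0,2]*a[1,0]*a[2,1]
           - a[0,2]*a[1,1]*a[2,0] - a[0,1]*a[1,0]*a[2,2] - a[0,0]*a[1,2]*a[2,1])
print(formula, np.linalg.det(A3))    # both 25.0 -> A3 is nonsingular
```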
Cramer's Rule
Let A be a nonsingular square matrix. Then the unique solution to the
system Ax = b is
\[
x_i = \frac{|B_i|}{|A|}, \qquad i = 1, \ldots, n,
\]
where Bi is the matrix A with the right-hand side vector b
replacing the i-th column of A. In our 2 × 2 example
\[
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{|A|}, \qquad
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{|A|}
\]
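A short sketch (my own illustration) implementing Cramer's rule for a nonsingular square system and checking it against np.linalg.solve:

```python
# Cramer's rule: replace the i-th column of A by b, take determinants,
# and divide by |A|.
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for nonsingular square A using Cramer's rule."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        B_i = A.copy()
        B_i[:, i] = b                 # b replaces the i-th column of A
        x[i] = np.linalg.det(B_i) / det_A
    return x

A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([2.0, 0.0])
print(cramer_solve(A, b))             # [1. 1.]
print(np.linalg.solve(A, b))          # same answer via Gaussian elimination
```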