Scientific Computing
Martin Lotz
School of Mathematics
The University of Manchester
Projects 2, November 21, 2014
Outline
- Overview
- Projects
Projects
1. Compressed Sensing
2. Discontinuous Galerkin Methods
3. American Options
4. Revenue Management of Car parks
5. Random Numbers, Stochastic Simulation and the Gillespie Algorithm
6. Efficient Sparse Matrix-Vector Multiplication
Project structure
Projects consist of roughly three parts:
- Introduction and Theory
- Exercises
  - These involve programming tasks
- Report
  - The report will draw upon the work done in the exercises
  - You are generally given more freedom in how you approach a particular problem.
Project structure
- Virtually all projects require the use of vectors and matrices
- 2 of the projects deal with PDEs
- 2 are related to finance
Compressed Sensing
[Figure: Compressed image]  [Figure: Block DCT]
≈ 95% of the coefficients are 0.
Given that most data is compressible, shouldn't it be possible to acquire it already in compressed form?
The sparse recovery problem
- In its simplest form, Compressed Sensing is about finding a sparse solution to a system of linear equations:
      Ax = b,    (3.1)
  where A is an m × n matrix and m < n.
- Is it possible to solve this system if the number of equations satisfies m ≪ n?
- Surprisingly, if (3.1) has a k-sparse solution, then it can be recovered from m ∼ k equations, which can be much smaller than n!
- An algorithm: Normalised Iterative Hard Thresholding (NIHT)
Phase transitions
- The success of NIHT at recovering a k-sparse vector depends on whether the number of equations m is above or below a certain threshold.
[Figure: probability of success (0 to 1) plotted against the number of equations m (0 to 200), for k = 50 and n = 200; successive slides highlight the points m = 25, 50, 75, 100, 125, 150, 175 and 200 along the curve.]
Tasks
- Implement Gradient Descent and the NIHT algorithm (a sketch of the NIHT iteration follows this list)
  - Uses the MVector and Matrix classes
  - Makes use of sorting, but you can use the standard library for this
- Test the performance of the algorithm, both for speed and recovery success
- Find the parameter values for which NIHT is able to successfully recover sparse solutions
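As a rough guide, here is a minimal sketch of the NIHT iteration x ← H_k(x + α Aᵀ(b − Ax)), where H_k keeps the k largest-magnitude entries. It uses plain std::vector in place of the course's MVector and Matrix classes, and the normalised stepsize rule shown is one standard choice, not necessarily the exact variant in the project notes.

```cpp
// Sketch of NIHT with std::vector standing in for MVector/Matrix.
#include <algorithm>
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;  // row-major m x n matrix

Vec multiply(const Mat& A, const Vec& x) {           // y = A x
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

Vec multiplyT(const Mat& A, const Vec& x) {          // y = A^T x
    Vec y(A[0].size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < y.size(); ++j)
            y[j] += A[i][j] * x[i];
    return y;
}

// Hard threshold H_k: zero all but (at most) the k largest-magnitude
// entries. This is where the sorting mentioned above comes in.
void hardThreshold(Vec& x, std::size_t k) {
    if (k >= x.size()) return;
    Vec mag(x.size());
    std::transform(x.begin(), x.end(), mag.begin(),
                   [](double v) { return std::abs(v); });
    std::nth_element(mag.begin(), mag.begin() + k, mag.end(),
                     std::greater<double>());
    const double cutoff = mag[k];  // the (k+1)-th largest magnitude
    for (double& v : x)
        if (std::abs(v) <= cutoff) v = 0.0;
}

// Iterate x <- H_k(x + alpha * A^T (b - A x)) starting from x = 0.
Vec niht(const Mat& A, const Vec& b, std::size_t k,
         int maxIter = 500, double tol = 1e-8) {
    Vec x(A[0].size(), 0.0);
    for (int it = 0; it < maxIter; ++it) {
        Vec r = b;                                   // residual r = b - A x
        const Vec Ax = multiply(A, x);
        for (std::size_t i = 0; i < r.size(); ++i) r[i] -= Ax[i];
        if (std::sqrt(std::inner_product(r.begin(), r.end(),
                                         r.begin(), 0.0)) < tol) break;
        Vec g = multiplyT(A, r);                     // gradient direction
        // Normalised stepsize: restrict g to the current support estimate.
        Vec gS = g;
        hardThreshold(gS, k);
        const Vec AgS = multiply(A, gS);
        const double num =
            std::inner_product(gS.begin(), gS.end(), gS.begin(), 0.0);
        const double den =
            std::inner_product(AgS.begin(), AgS.end(), AgS.begin(), 0.0);
        const double alpha = (den > 0.0) ? num / den : 1.0;
        for (std::size_t j = 0; j < x.size(); ++j) x[j] += alpha * g[j];
        hardThreshold(x, k);                 // project back to k-sparse
    }
    return x;
}
```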
Discontinuous Galerkin Methods
- The goal is to solve a one-dimensional transport equation
      ∂u/∂t + ∂f(u)/∂x = 0,
  where f(u) is a flux function, on a bounded interval.
- A numerical method, the Discontinuous Galerkin Method, will be employed to solve this problem.
- The spatial domain is split into N elements, which may be connected to each other, and the problem is solved on these elements.
Discretisation
- Galerkin methods are based on the weak form
      ∫ (∂u/∂t + ∂f(u)/∂x) v(x) dx = 0.
- On each element, the equation can be turned into a coupled (2 × 2) system of ordinary differential equations.
- These differential equations can be solved using methods from the previous ODE project, or alternatively by an iterative scheme based on a finite difference approximation.
- The project also involves numerical integration.
Tasks
- Create a class AdvectionElement for modelling the domain (a possible skeleton is sketched after this list).
- Implement a timestepping scheme to update the values of the function u at the boundaries of the elements with time.
- Use the discontinuous Galerkin method to solve the advection equation
      ∂u/∂t + C ∂u/∂x = 0
  and the inviscid Burgers' equation
      ∂u/∂t + u ∂u/∂x = 0
  on x ∈ [0, 2π] with periodic boundary conditions, and produce graphs of the solutions.
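To make the first task concrete, here is a hypothetical skeleton of what AdvectionElement might look like. The member names, the linear two-node representation and the upwind interface flux are illustrative assumptions, not the project's prescribed interface.

```cpp
// Hypothetical AdvectionElement skeleton (member names are assumptions).
class AdvectionElement {
public:
    double X[2];  // coordinates of the element's left and right edges
    double U[2];  // unknown u stored at the left and right edges

    // Neighbouring elements; with periodic boundaries the chain wraps round.
    AdvectionElement* Left_neighbour_pt = nullptr;
    AdvectionElement* Right_neighbour_pt = nullptr;

    double C = 1.0;  // advection speed (assumed name)

    virtual ~AdvectionElement() = default;

    // Linear interpolation of u inside the element, s in [0, 1].
    double interpolated_u(double s) const { return U[0] + s * (U[1] - U[0]); }

    // Flux function: f(u) = C*u for advection; a derived class would
    // override this with 0.5*u*u for the inviscid Burgers' equation.
    virtual double flux(double u) const { return C * u; }

    // Simple upwind numerical flux at the interface between this element's
    // right edge and its right neighbour, assuming flow to the right.
    double interface_flux_right() const { return flux(U[1]); }
};
```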
American Options
- An option gives the holder the right to buy or sell an underlying asset at a particular price in the future.
- An American option is an option that allows the holder to sell or buy the asset at any time before expiry.
- The pricing of American options is an important and long-standing problem in mathematical finance.
- Based on the Black-Scholes model, the problem can be transformed into a non-linear PDE.
- This non-linear PDE is to be solved numerically using an adapted tri-diagonal matrix solver.
American Options
- The Black-Scholes pricing framework for a derivative V and underlying S is given by
      ∂V/∂t = ½ σ² S² ∂²V/∂S² + (r − d) S ∂V/∂S − rV,
  where r is the risk-free interest rate, d the dividend rate and σ the volatility.
- The initial condition for an American call option is V(S, τ = 0) = max(S − X, 0), but the boundary condition is not so easy, as we don't know when the holder will exercise the option!
American Options
- The project involves numerical techniques such as
  - Numerical schemes such as Crank-Nicolson
  - Solving large sparse systems of linear equations (a tridiagonal solver sketch follows below)
- A further application involves convertible bonds.
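For orientation, here is a minimal sketch of the Thomas algorithm for tridiagonal systems, the kind of solve a Crank-Nicolson step of the Black-Scholes PDE leads to. The early-exercise constraint of American options requires adapting such a solver; the sketch does not attempt that, and the vector names a, b, c, d are illustrative.

```cpp
// Thomas algorithm sketch: solve T x = d for tridiagonal T with
// subdiagonal a, diagonal b and superdiagonal c (a[0], c[n-1] unused).
// Vectors are taken by value so the elimination works on local copies.
#include <vector>

std::vector<double> thomasSolve(std::vector<double> a, std::vector<double> b,
                                std::vector<double> c, std::vector<double> d) {
    const std::size_t n = d.size();
    // Forward sweep: eliminate the subdiagonal.
    for (std::size_t i = 1; i < n; ++i) {
        const double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    // Back substitution.
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0;)
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}
```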
Revenue Management of Car parks
- Revenue management arises in applications such as online purchases or bookings.
- A simple example to be studied is the car park model:
  - A car park can hold a finite number of cars at any given time.
  - Customers can reserve parking slots in advance or on the go, and for any duration.
  - The goal is to devise a pricing system and allocate spaces to different kinds of customers.
Revenue Management of Car parks
- Each booking has a booking time, an arrival time and a departure time.
- The revenue generated by the car park is a function of all bookings, the duration of bookings, and a cost function.
- Incoming bookings are modelled using a Poisson process (a simulation sketch follows below).
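As a minimal sketch of the last point: a Poisson process with rate λ has independent exponential inter-arrival times, so booking times up to a horizon T can be generated as below. The names lambda and T are illustrative assumptions.

```cpp
// Sketch: booking arrival times from a rate-lambda Poisson process on [0, T].
#include <random>
#include <vector>

std::vector<double> simulateArrivals(double lambda, double T,
                                     std::mt19937& rng) {
    std::exponential_distribution<double> gap(lambda);  // Exp(lambda) gaps
    std::vector<double> times;
    for (double t = gap(rng); t < T; t += gap(rng))
        times.push_back(t);  // record each arrival before the horizon
    return times;
}
```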
Revenue Management of Car parks
- There are different kinds of customers:
  - Customers may book well in advance and stay longer.
  - Customers may arrive unannounced and pay on the spot.
- What is the best way to assign places and prices to different kinds of customers?
- The solution could be a simple model based on two types of customers, or an adaptive one based on current free spaces and expected revenue.
Gillespie Algorithm
- The Gillespie algorithm is an approach to stochastic simulation in chemical reaction networks.
- The discussion is based on the Michaelis-Menten system of enzyme catalysis.
- While in large systems of molecules a differential equation can model the system, in small systems one deals with individual molecules.
Gillespie Algorithm
- Given is a vector
      n(t) = (N_S(t), N_E(t), N_C(t), N_P(t))
  that describes the populations of the different kinds of molecules (substrate, enzyme, complex, product) that interact through reactions.
- The populations of the molecules change with time, and this is described by a Markov process.
- Gillespie's algorithm is a way to describe the evolution of the particles in time (a sketch of a single step follows below).
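For a flavour of the method, here is a minimal sketch of one Gillespie step: the waiting time to the next reaction is exponential with rate a₀ (the sum of the propensities), and the reaction that fires is chosen with probability proportional to its propensity. The Reaction struct and its fields are illustrative placeholders to be filled in for the Michaelis-Menten reactions, not the project's code.

```cpp
// One Gillespie (stochastic simulation algorithm) step, sketched.
#include <limits>
#include <random>
#include <vector>

struct Reaction {
    double propensity;        // a_i, computed from the current populations
    std::vector<int> change;  // stoichiometric change applied to n(t)
};

// Advance the population vector n by one reaction event; returns the
// elapsed time tau, or infinity if no reaction can fire.
double gillespieStep(std::vector<int>& n,
                     const std::vector<Reaction>& reactions,
                     std::mt19937& rng) {
    double a0 = 0.0;
    for (const auto& r : reactions) a0 += r.propensity;
    if (a0 <= 0.0) return std::numeric_limits<double>::infinity();

    // Waiting time until the next reaction is Exp(a0).
    std::exponential_distribution<double> waiting(a0);
    const double tau = waiting(rng);

    // Choose reaction j with probability a_j / a0 and apply its change.
    std::uniform_real_distribution<double> unif(0.0, a0);
    const double u = unif(rng);
    double cumulative = 0.0;
    for (const auto& r : reactions) {
        cumulative += r.propensity;
        if (u <= cumulative) {
            for (std::size_t k = 0; k < n.size(); ++k) n[k] += r.change[k];
            break;
        }
    }
    return tau;
}
```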
Efficient Sparse Matrix-Vector Multiplication
- Almost all of the previous projects involved linear systems in one form or another.
- Multiplying a matrix with a vector is one of the most basic operations in all of scientific computing!
- When the matrix is sparse, performance can be enhanced dramatically by using efficient sparse representations of the matrix.
Efficient Sparse Matrix-Vector Multiplication
- Two important storage formats are Compressed Sparse Row (CSR) and Block Compressed Sparse Row (BCRS).
- The basic idea is that only the non-zero values are stored in an array, and the positions of these values are stored separately (see the CSR sketch below).
- The project involves creating Matrix classes for the different storage formats, implementing matrix-vector multiplication, and testing the performance through experiments.
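To illustrate the idea for CSR, here is a minimal sketch of the format and a matrix-vector product that touches only the stored non-zeros. The member names (values, colIndex, rowStart) are illustrative assumptions, not the Matrix class interface the project requires.

```cpp
// Sketch of the CSR format and y = A x over the non-zeros only.
#include <vector>

struct CSRMatrix {
    std::vector<double> values;  // non-zero entries, stored row by row
    std::vector<int> colIndex;   // column index of each stored value
    std::vector<int> rowStart;   // offset into values where each row begins;
                                 // rowStart.size() == number of rows + 1
};

std::vector<double> multiply(const CSRMatrix& A,
                             const std::vector<double>& x) {
    std::vector<double> y(A.rowStart.size() - 1, 0.0);
    for (std::size_t i = 0; i + 1 < A.rowStart.size(); ++i)
        for (int k = A.rowStart[i]; k < A.rowStart[i + 1]; ++k)
            y[i] += A.values[k] * x[A.colIndex[k]];  // skip zeros entirely
    return y;
}
```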