CSci 4223 Lecture 14
March 20, 2013
Topics: Lambda calculus

1 Lambda calculus

We'll conclude this section of the course by looking at a programming language that we saw briefly while we looked at type inference: the lambda calculus. It was originally introduced in the 1930s by Alonzo Church (who was born in Washington, DC). It's called a "calculus" because it's a method of calculating by symbolic manipulation, much like the integral calculus of Newton and Leibniz.

The pure lambda calculus contains just anonymous function definitions (called abstractions), function application, and variables. In ML, the syntax would therefore be

    x            variable
    e1 e2        application
    fn x => e    abstraction

But there's a slightly different syntax that is standard in the study of lambda calculus:

    x            variable
    e1 e2        application
    λx.e         abstraction

The only real change is that fn is replaced by λ, and => by a dot.

Abstractions extend as far to the right as possible. For example, λx.x λy.y is the same as λx.(x λy.y), and is not the same as (λx.x) (λy.y). Application is left-associative. For example, e1 e2 e3 is the same as (e1 e2) e3. These rules are actually the same as in the ML syntax. But when in doubt, use parentheses to make the parsing of a lambda expression clear.

The pure lambda calculus is strictly a subset of SML: every lambda calculus program is a legal SML program. That shouldn't be surprising. What might be surprising is that every SML program can be compiled down to a pure lambda calculus program! (Though that's not exactly what the compiler actually does; it wouldn't be efficient.) That's true for many other functional languages, too. Lambda calculus is the assembly language of functional programming.

2 Bindings

The symbol λ is a binding operator, as it binds a variable within some scope. Every occurrence of a variable x in an expression is either bound or free. Variable x is bound in e in the expression λx.e.
If x is not bound, then it is free. For example, in

    λx.(x (λy.y a) x) y

both occurrences of x are bound, the first occurrence of y is bound, a is free, and the last y is also free because it is outside the scope of λy.

A closed expression (also called a combinator) is one in which all identifiers are bound. If an expression has some free variables, then you do not have a complete program, as the values of those variables cannot be determined. So a well-formed program in the lambda calculus must be closed.

The names of bound variables are not important, which is something you're familiar with from integral calculus. Consider the integrals ∫₀⁷ x² dx and ∫₀⁷ y² dy. They describe the same integral, even though one uses variable x and the other y. We can change the name of the bound variable without changing the value of the integral. In the same way, λx.x is the same function as λy.y. If expressions e1 and e2 differ only in the names of their bound variables, then the two expressions are called alpha equivalent. (The etymology of "alpha" here is unimportant and unenlightening.)

3 Semantics

The type-checking rules for the lambda calculus are exactly what you'd expect from our study of ML:

• A variable x has whatever type t is recorded in the current static environment for x. If the environment doesn't have a type for x, then there is a type error (and x is free).
• If e1 has type t → t′, and e2 has type t, then e1 e2 has type t′.
• If e has type t′ in a static environment in which x has type t, then λx.e has type t → t′. There might be many types t for which this holds; the type inferencer we discussed last class will infer the most lenient type for t possible.

Likewise, the evaluation rules (using closures) are what you'd expect:

• Variable x evaluates to whatever value is bound to x in the current dynamic environment. (Again, if x isn't bound, then there is an error.)
• If e1 evaluates to value v1, and v1 is a function closure with code part λx.e and environment part E, and e2 evaluates to v2, then application e1 e2 evaluates to the result of evaluating e in E extended to map x to v2.
• Abstraction λx.e immediately evaluates to a function closure. The code part is λx.e, and the environment part is the current dynamic environment.

However, there's another way of stating the evaluation rules that is more elegant, and indeed more standard for the lambda calculus. It doesn't involve function closures, but instead uses substitution. The key idea is that we'd like to regard (λx.e1) e2 as equivalent to e1 with e2 substituted for every (free) occurrence of x. For example (assuming for the moment that the lambda calculus also had integers and multiplication), we'd like to regard (λx.x ∗ 2) 5 as equivalent to 5 ∗ 2.

We write e1{x := e2} to mean e1 with all free occurrences of x replaced with e2. (There are many other notations for substitution, e.g. [x ↦ e2]e1, or [e2/x]e1, or e1[e2/x], or e1{e2/x}.) Using that notation, our key idea is that we'd like (λx.e1) e2 and e1{x := e2} to be equivalent. This equivalence is called beta equivalence. (The etymology of "beta" here is again unimportant and unenlightening.)

The evaluation rules¹ (using substitution) are as follows:

• If e1 evaluates to λx.e, and e2 evaluates to v2, and e{x := v2} evaluates to v, then application e1 e2 evaluates to v. Note that no dynamic environment is used at all in this rule.
• Abstraction λx.e immediately evaluates to just λx.e. That is, a function is already a value; there is no computation remaining. Note that no closure is created, and that no computation is performed in body e.
• Variable x does not evaluate to anything and immediately results in an error. For this to happen, x must have been free in the expression that is being evaluated. Since it was free, there is no value that can be associated with it.
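The substitution-based rules can be made concrete in a few lines of code. The sketch below is in Python rather than the lecture's SML, and the tagged-tuple representation of expressions is my own assumption; substitution is written to rename bound variables, anticipating the capture problem discussed in section 4.

```python
# A tiny substitution-based evaluator for the pure lambda calculus.
# Expressions are tagged tuples: ("var", x), ("lam", x, e), ("app", e1, e2).

import itertools

_fresh = itertools.count()

def fresh(x):
    """A variable name guaranteed not to clash with any existing one."""
    return f"{x}_{next(_fresh)}"

def subst(e, x, v):
    """e{x := v}; bound variables are renamed so no free variable of v is captured."""
    tag = e[0]
    if tag == "var":
        return v if e[1] == x else e
    if tag == "app":
        return ("app", subst(e[1], x, v), subst(e[2], x, v))
    # abstraction λy.body
    _, y, body = e
    if y == x:
        return e  # x is shadowed: no free occurrences of x inside
    z = fresh(y)  # rename the bound variable first
    return ("lam", z, subst(subst(body, y, ("var", z)), x, v))

def evaluate(e):
    tag = e[0]
    if tag == "lam":
        return e                          # an abstraction is already a value
    if tag == "app":
        f = evaluate(e[1])                # e1 must evaluate to λx.e
        v2 = evaluate(e[2])               # e2 evaluates to a value v2
        assert f[0] == "lam", "applied a non-function"
        return evaluate(subst(f[2], f[1], v2))  # evaluate e{x := v2}
    raise NameError(f"free variable {e[1]}")    # a free variable is an error

# (λx.x) (λy.y) evaluates to λy.y
identity = ("lam", "x", ("var", "x"))
result = evaluate(("app", identity, ("lam", "y", ("var", "y"))))
```

Note that, exactly as in the rules, no dynamic environment appears anywhere: all the work is done by substitution.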
¹ These are the call-by-value semantics for the lambda calculus. Other semantics, such as call-by-name and call-by-need, are possible. We cover call-by-value because it's the semantics that SML uses.

These rules are, in many ways, simpler than the rules using closures: the substitution rules don't need a dynamic environment, and don't need function closures. All they need is substitution.

4 Substitution

Defining substitution e1{x := e2} is trickier than you might expect, because the expressions involved in the substitution might share some variable names. In fact, many mathematicians (including Newton, Church, and Gödel) tried to define substitution but got it wrong. It took until around 1950 for the first correct definition to appear.

As a first attempt, consider this tempting (but incorrect) definition of substitution:

    x{x := e}        = e
    y{x := e}        = y
    (e1 e2){x := e}  = (e1{x := e}) (e2{x := e})
    (λy.e′){x := e}  = λy.(e′{x := e})              (if x ≠ y)

Unfortunately, this definition produces the wrong result when there are free variables inside an abstraction. For example, according to this definition, (λy.x){x := y} = (λy.y). The broken definition of substitution has changed something that is not the identity function (λy.x) into something that is the identity function (λy.y); free variable x has accidentally become bound by the λy binder. You could think of that as x being "captured" by the λy. That conflicts with a basic intuition about functions, which is that the names of bound variables shouldn't matter.

To fix this problem, we need to revise the definition of substitution so that, when substituting inside an abstraction, free variables don't accidentally become bound. We can do that by changing just the last line of the definition.
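The capture bug is easy to reproduce by transcribing the broken definition directly into code. This is an illustration in Python (the tuple representation is an assumption, not the lecture's notation):

```python
# Demonstrating the capture bug in the naive definition of substitution.
# Expressions are tagged tuples: ("var", x), ("lam", x, e), ("app", e1, e2).

def naive_subst(e, x, v):
    """The broken e{x := v}: substitutes under a λ without renaming."""
    tag = e[0]
    if tag == "var":
        return v if e[1] == x else e
    if tag == "app":
        return ("app", naive_subst(e[1], x, v), naive_subst(e[2], x, v))
    _, y, body = e
    if y == x:
        return e
    return ("lam", y, naive_subst(body, x, v))  # no renaming: y may capture v's free variables

# (λy.x){x := y}: the free y is captured, and we wrongly get the identity λy.y
captured = naive_subst(("lam", "y", ("var", "x")), "x", ("var", "y"))
```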
The following definition correctly implements capture-avoiding substitution:

    x{x := e}        = e
    y{x := e}        = y
    (e1 e2){x := e}  = (e1{x := e}) (e2{x := e})
    (λy.e′){x := e}  = λz.((e′{y := z}){x := e})    (if x ≠ y, where z is a fresh variable name)

The idea is that the y in λy.e′ is first renamed to a new variable name z that is not used in e′ or e. (And, of course, x ≠ z.) Then the substitution {x := e} can safely be performed without accidentally capturing any free variables. According to this definition,

    (λy.x){x := y} = λz.((x{y := z}){x := y}) = λz.(x{x := y}) = λz.y.

5 Encodings

The pure lambda calculus has only functions as values. It's possible, however, to encode all the usual values that you would expect to find in a full-featured programming language.

Natural numbers. There are many possible encodings of natural numbers (i.e., the non-negative integers) into the lambda calculus. Here we'll see one invented by Church. A Church numeral n is an encoding of natural number n as an expression in the pure lambda calculus:

    0 = λs. λz. z
    1 = λs. λz. s z
    2 = λs. λz. s (s z)
    3 = λs. λz. s (s (s z))
    etc.

So a Church numeral n is a function that takes another function s and an argument z, and applies s to z a total of n times. If you think of s as meaning "successor" and z as meaning "zero", this makes a lot of sense: n is the application of successor to 0 a total of n times, which should of course produce n.

Now we can define some standard arithmetic operations:

    SUCC  = λn. λs. λz. s (n s z)
    PLUS  = λm. λn. m SUCC n
    TIMES = λm. λn. m (PLUS n) 0

SUCC takes a Church numeral n and returns another Church numeral; that is, it yields a function that takes arguments s and z and applies s repeatedly to z. We get the right number of applications of s to z by first passing s and z as arguments to n, then explicitly applying s one more time to the result. PLUS takes two Church numerals m and n as arguments, then applies SUCC m times to n, yielding the sum of m and n.
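Because the encodings are nothing but nested abstractions, they transcribe almost directly into Python lambdas. The sketch below is an illustration for experimentation, not part of the lecture; the decoding helper to_int and all the names are assumptions.

```python
# Church numerals as Python lambdas. A numeral n applies s to z a total of n times.

zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))       # SUCC: one extra application of s
plus = lambda m: lambda n: m(succ)(n)                 # PLUS: apply SUCC m times to n
times = lambda m: lambda n: m(plus(n))(zero)          # TIMES: apply (PLUS n) m times to 0

def to_int(n):
    """Decode a Church numeral by counting the applications of s, starting from 0."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
five = plus(three)(succ(succ(zero)))
```

Passing the ordinary successor function and 0 to a numeral recovers the natural number it encodes, which is exactly the "apply s to z a total of n times" reading above.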
TIMES takes two Church numerals m and n as arguments, then applies (PLUS n) to 0 a total of m times. Repeated addition is exactly what multiplication is, intuitively.

Pairs. We define a pair constructor and two destructors (i.e., selectors) as follows:

    PAIR   = λa. λb. λf. f a b
    FIRST  = λp. p (λx. λy. x)
    SECOND = λp. p (λx. λy. y)

PAIR takes two arguments a and b, which are the components of the pair, and returns a function. That function itself takes a function f as an argument, then applies f to a and b. Essentially, PAIR is wrapping its two arguments for later extraction. FIRST takes a pair p as an argument and passes it the function λx. λy. x, which extracts the first component of the pair. Likewise, SECOND extracts the second component.

Local variables. One feature that seems to be missing from the pure lambda calculus is the ability to declare local variables. For example, in SML we can introduce a new local variable with a let expression:

    let val x = e1 in e2 end

According to the evaluation rules for let, we expect this expression to evaluate e1 to a value v, then evaluate e2 in a dynamic environment that maps x to v. We can construct a lambda expression that behaves the same way:

    (λx.e2) e1

So let expressions are actually just syntactic sugar for application of an abstraction!

Acknowledgements

This lecture builds on materials from Profs. Andrew Myers and Nate Foster at Cornell University.