Notes on Slick
Created 03/06/14
Updated 03/18/14, Updated 06/03/14, Updated 09/30/14, Updated 10/04/14, Updated 12/09/14, Updated 02/12/15
Updated 09/20/15, Updated 10/01/15
http://stackoverflow.com/questions/31062288/scala-slick-3-0-creating-table-and-then-inserting-rows
Introduction
Slick is Typesafe’s modern database query and access library for Scala. It allows you to work with stored data
almost as if you were using Scala collections while at the same time giving you full control over when a database
access happens and which data is transferred. You can also use SQL directly.
val limit = 10.0
// Your query could look like this:
( for( c <- coffees; if c.price < limit ) yield c.name ).list
// Or, using plain SQL string interpolation:
sql"select COF_NAME from COFFEES where PRICE < $limit".as[String].list
// Both queries result in SQL equivalent to:
// select COF_NAME from COFFEES where PRICE < 10.0
When using Scala instead of raw SQL for your queries you benefit from compile-time safety and compositionality.
Slick can generate queries for different back-end databases including your own, using its extensible query compiler.
Get started learning Slick in minutes using the Hello Slick template in Typesafe Activator.
Resources
Slick requires Scala 2.10 or later.
The current GA version is 3.0.3, released in September 2015.
A prior GA version was 2.1.0, released in August 2014.
There is a very interesting “book” being written at https://mackler.org/LearningSlick2/
Concepts
Write a subclass of Table – you create one of these objects for each entity, acting as a proxy or gateway for
persisting the instances.
You use case classes for the instances. Each table specifies the case class to instantiate when records are fetched.
A case class in this context:
- contains only a set of fields – typically no behavior methods
- is quick to create
- comes with hashCode, equality, and serialization support built in
- acts much like a data transfer object
A small example follows.
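For instance, a minimal case-class row type might look like this (the Coffee fields here simply mirror the examples later in these notes):

// A row type: just fields, no behavior; equals/hashCode/toString come for free
case class Coffee(name: String, price: Double)

val c1 = Coffee("Colombian", 7.99)
val c2 = c1.copy(price = 8.49)  // cheap to create modified copies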
Slick then provides abstractions for the database, database connection, query-forming objects, etc.
There are two mapping models:
1) The Lifted Embedding is the standard API for type-safe queries and updates in Slick. In this case, you write
classes whose values are transformed ("lifted") into the world of persistence. The term Slick uses for a value in
the lifted world is "Rep" (short for "representation").
2) The experimental Direct Embedding is available as an alternative to the Lifted Embedding.
For query building, the concept of lifting also applies; even expressions such as ":cost > 8.0" are lifted into a
tree of Reps, such as Rep(":cost"), Rep(>), Rep(8.0).
Most of this user manual (and all of the production code at InsideVault) focuses on the Lifted Embedding.
For writing your own SQL statements you can use the Plain SQL API.
Example of Lifted Embedding
The name Lifted Embedding refers to the fact that you are not working with standard Scala types (as in the direct
embedding) but with types that are lifted into a Rep type constructor. This becomes clear when you compare the
types of a simple Scala collections example
case class Coffee(name: String, price: Double)
val coffees: List[Coffee] = //...

val l = coffees.filter(_.price > 8.0).map(_.name)
//                       ^       ^          ^
//                     Double  Double     String
... with the types of similar code using the lifted embedding:
class Coffees(tag: Tag) extends Table[(String, Double)](tag, "COFFEES") {
  def name = column[String]("COF_NAME")
  def price = column[Double]("PRICE")
  def * = (name, price)
}
val coffees = TableQuery[Coffees]

val q = coffees.filter(_.price > 8.0).map(_.name)
//                       ^           ^          ^
//                  Rep[Double] Rep[Double] Rep[String]
All plain types are lifted into Rep. The same is true for the table row type Coffees which is a subtype of
Rep[(String, Double)]. Even the literal 8.0 is automatically lifted to a Rep[Double] by an implicit
conversion because that is what the > operator on Rep[Double] expects for the right-hand side. This lifting is
necessary because the lifted types allow us to generate a syntax tree that captures the query computations. Getting
plain Scala functions and values would not give us enough information for translating those computations to SQL.
Database connector classes
There are distinct connector (driver) classes for H2, MySQL, Postgres, and other databases, along with classes to
create and manage connections to them.
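As a sketch (Slick 2.x), switching databases mostly means importing a different driver's API; the in-memory H2 URL below is just a placeholder:

// Import the driver-specific API (swap in MySQLDriver, PostgresDriver, etc. as needed)
import scala.slick.driver.H2Driver.simple._

// Create a Database object and open a session in which queries can run
val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")
db.withSession { implicit session =>
  // queries run here
}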
Example
// The main application
object HelloSlick extends App {

  // The query interface for the Suppliers table
  val suppliers: TableQuery[Suppliers] = TableQuery[Suppliers]

  // The query interface for the Coffees table
  val coffees: TableQuery[Coffees] = TableQuery[Coffees]

  // Create a connection (called a "session") to a MySQL database
  Database.forURL("jdbc:mysql://localhost:3306/slick01", user = "developer", password = "123456",
    driver = "com.mysql.jdbc.Driver").withSession { implicit session =>

    // Create the schema by combining the DDLs for the Suppliers and Coffees tables
    // using the query interfaces
    (suppliers.ddl ++ coffees.ddl).create

    /* Create / Insert */

    // Insert some suppliers
    suppliers += (101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199")
    suppliers += (49, "Superior Coffee", "1 Party Place", "Mendocino", "CA", "95460")
    suppliers += (150, "The High Ground", "100 Coffee Lane", "Meadows", "CA", "93966")
  }
}
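As a sketch of reading the data back (this would go inside the same withSession block, and assumes the six-column tuple-typed Suppliers table from the Hello Slick template):

// Fetch all supplier rows as tuples and print them
val allSuppliers: List[(Int, String, String, String, String, String)] = suppliers.list
allSuppliers.foreach(println)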
New Features in Version 3
The Version 3 API
Slick, Typesafe's database query and access library for Scala, received a major overhaul in the just-released version
3.0. Also dubbed "Reactive Slick", it introduces a new API to compose and run database queries in a reactive way.
"Reactive" here means that it supports the Reactive Streams API, an "initiative to provide a standard for
asynchronous stream processing with non-blocking back pressure" (other implementations include Akka,
MongoDB, RxJava, Vert.x, etc.), and that it supports building applications according to the Reactive Manifesto.
Database queries in Slick can be written as if they were operating on Scala collections, supporting the same
vocabulary and similar operations (even though this illusion fails on more complex expressions). As a bonus,
because the queries are not just plain strings, they are being type-checked by the compiler. An example of a
simple join of two tables and a where-clause:
val q2 = for {
  c <- coffees if c.price < 9.0
  s <- suppliers if s.id === c.supID
} yield (c.name, s.name)
The equivalent SQL could look like this:
select c.COF_NAME, s.SUP_NAME from COFFEES c, SUPPLIERS s
where c.PRICE < 9.0 and s.SUP_ID = c.SUP_ID;
While the query API has remained largely unchanged in Slick 3, a query now returns a DBIOAction instead of an
actual result (the old APIs are deprecated and will be removed in 3.1). Such actions can then be combined and in the
end are passed to the database, where they will eventually be run.
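A minimal sketch of the Slick 3 style, assuming an in-memory H2 database and a coffees TableQuery like the one in the earlier lifted embedding example:

import slick.driver.H2Driver.api._
import scala.concurrent.Future

val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")

// Building the query is unchanged; .result now yields a DBIOAction...
val action: DBIO[Seq[String]] = coffees.filter(_.price < 9.0).map(_.name).result

// ...which is passed to the database and eventually produces a Future of the result
val names: Future[Seq[String]] = db.run(action)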
New Features in Version 2
These are the major new features added since Slick 1.0.1:
- A code generator that reverse-engineers the database schema and generates all code required for working with Slick.
- A new driver architecture to allow support for non-SQL, non-JDBC databases.
- Table definitions in the Lifted Embedding use a new syntax which is slightly more verbose but also more robust and logical, avoiding several pitfalls from earlier versions.
- Table definitions (and their * projections) are no longer restricted to flat tuples of columns. They can use any type that would be valid as the return type of a Query. The old projection concatenation methods ~ and ~: are still supported but not imported by default.
- In addition to Scala tuples, Slick supports its own HList abstraction for records of arbitrary size. You can also add support for your own record types with only a few lines of code. All record types can be used everywhere (including table definitions and mapped projections) and they can be mixed and nested arbitrarily.
- Soft inserts are now the default, i.e. AutoInc columns are automatically skipped when inserting with +=, ++=, insert and insertAll. This means that you no longer need separate projections (without the primary key) for inserts. There are separate methods forceInsert and forceInsertAll in JdbcProfile for the old behavior.
- A new model for pre-compiled queries replaces the old QueryTemplate abstraction. Any query (both actual collection-valued Query objects and scalar queries) or function from Column types to such a query can now be lifted into a Compiled wrapper. Lifted functions can be applied (without having to recompile the query), and you can use either monadic composition of Compiled values or just get the underlying query and use that for further composition. Pre-compiled queries can now be used for update and delete operations in addition to querying. (A small sketch follows this list.)
- threadLocalSession has been renamed to dynamicSession and the corresponding methods have distinct names (e.g. withDynSession vs the standard withSession). This allows the use of the standard methods without extra type annotations.
- Support for server-side Option conversions (e.g. .getOrElse on a computed Option column).
- Some changes to the API to bring it closer to Scala Collections syntax.
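A sketch of such a pre-compiled, parameterized query (Slick 2.x, reusing the Coffees table from the lifted embedding example above):

// Compile the parameterized query once; applying it later does not recompile it
val coffeesCheaperThan = Compiled { (limit: Column[Double]) =>
  coffees.filter(_.price < limit).map(_.name)
}

// Apply with a concrete parameter and run inside a session
val cheapNames = coffeesCheaperThan(9.0).run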
Defining the models
In Slick 1.0 tables were defined by a single val or object (called the table object) and the * projection was
limited to a flat tuple of columns that had to be constructed with the special ~ operator:
// --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
object Suppliers extends Table[(Int, String, String)]("SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def street = column[String]("STREET")
  def * = id ~ name ~ street
}
In Slick 2.0 you need to define your table as a class that takes an extra Tag argument (the table row class) plus an
instance of a TableQuery of that class (representing the actual database table). Tuples for the * projection can
use the standard tuple syntax:
class Suppliers(tag: Tag) extends Table[(Int, String, String)](tag, "SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def street = column[String]("STREET")
  def * = (id, name, street)
}
val suppliers = TableQuery[Suppliers]
* (star) Projection
See http://stackoverflow.com/questions/13906684/scala-slick-method-i-can-not-understand-so-far
This returns the default projection – which is how you describe "all the columns (or computed values) I am usually
interested in".
Your table could have several fields; you only need a subset for your default projection. The default projection must
match the type parameters of the table.
Now about <> and Bar.unapply
This provides what are called Mapped Projections.
Here is an example (following the StackOverflow answer above, which uses the older Slick 1.x ~ syntax):

case class Bar(id: Option[Int] = None, name: String)

object BarTable extends Table[Bar]("bar") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  // Every table needs a * projection with the same type as the table's type parameter
  def * = id.? ~ name <> (Bar, Bar.unapply _)
}
So far we've seen how Slick allows you to express queries in Scala that return a projection of columns (or computed
values). When executing these queries you have to think of the result row of a query as a Scala tuple. The type
of the tuple will match the Projection that is defined (by your for-comprehension as in the previous example, or by
the default * projection). This is why field1 ~ field2 returns a projection of type Projection2[A, B], where A is
the type of field1 and B is the type of field2.
Which lets you do:
Query(BarTable).list.map(_.name)
Instead of:
Query(BarTable).list.map { case (_, name) => name }
Note that this example uses list.map instead of mapResult just for explanation's sake.
Using the Code Generator
See more details at http://slick.typesafe.com/doc/2.0.0-RC1/code-generation.html
This will create the model files for you.
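A sketch of invoking the generator standalone (Slick 2.0); the connection settings and target package below are placeholders for this project's MySQL database:

import scala.slick.model.codegen.SourceCodeGenerator

object GenerateTables extends App {
  SourceCodeGenerator.main(Array(
    "scala.slick.driver.MySQLDriver",                                      // Slick driver
    "com.mysql.jdbc.Driver",                                               // JDBC driver
    "jdbc:mysql://localhost:3306/slick01?user=developer&password=123456",  // database URL
    "src/main/scala",                                                      // output folder
    "com.example.models"                                                   // package for the generated code
  ))
}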
Using Queries
Look again at Table 2.1, “The value of the suppliers Relation, represented as a table”.
While the names of each attribute appearing in the column headings might suggest the meaning of the columns, in
the context of the hypothetical business using this relation, these data are intended to represent reality, specifically
an instantaneous state of the enterprise. As such, the meaning of the data can only be known with reference to
a predicate, which may be expressed as a natural-language denotation of that predicate. Here is an English
denotation of the predicate of the suppliers relation:
Example 4.8. English expression of the predicate for the suppliers relation
psuppliers: The supplier identified as snum is named sname, has a quality rating of status, and is located in city.
Here is a tabular representation of a different relation from the same database:
Table 4.1. The value of the shipments Relation, represented as a table

snum: SID    pnum: PID    qty: INTEGER
S1           P1           300
S1           P2           200
S1           P3           400
S1           P4           200
S1           P5           100
S1           P6           100
S2           P1           300
S2           P2           400
S3           P2           200
S4           P2           200
S4           P5           200
S4           P4           300
S4           P5           400
Example 4.9. English expression of the predicate for the shipments relation
pshipments: The supplier identified as snum ships the part identified as pnum in quantities of qty units.
The predicates for the suppliers and shipments relations can be conjoined, yielding the following predicate:
Example 4.10. Conjunction of the predicates for the suppliers and shipments relations
psuppliers ∧ pshipments: The supplier identified as snum is named sname, has a quality rating of status, is
located in city, and ships the part identified as pnum in quantities of qty units.
The conjoined predicate given in Example 4.10, "Conjunction of the predicates for the suppliers and shipments
relations", is the predicate of the relation that is the value of the join operation whose operands are the suppliers
and shipments relations.
In order to demonstrate joins, we must define an extension of the Table class for the shipments relation, just as we
did for the suppliers relation:
Example 4.11. Defining a Table for the shipments relation
case class Shipment(snum: String, pnum: String, qty: Int)

class Shipments(tag: Tag) extends Table[Shipment](tag, "shipments") {
  def snum = column[String]("snum")
  def pnum = column[String]("pnum")
  def qty = column[Int]("qty")
  def * = (snum, pnum, qty) <> (Shipment.tupled, Shipment.unapply _)
  def supplier = foreignKey("SUP_FK", snum, suppliers)(_.snum)
}
val shipments = TableQuery[Shipments]
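To illustrate, the join corresponding to the conjoined predicate could be written as below (a sketch; it assumes the suppliers table defines snum and sname columns, as in the book's earlier example):

// Implicit join: pair each shipment with its supplier and project a few columns
val supplierShipments = for {
  s  <- suppliers
  sh <- shipments if sh.snum === s.snum
} yield (s.snum, s.sname, sh.pnum, sh.qty)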
Inserting
In Slick 1.0 you used to construct a projection for inserting from the table object:
// --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
(Suppliers.name ~ Suppliers.street) insert ("foo", "bar")
Since there is no raw table object any more in 2.0 you have to use a projection from the table query:
suppliers.map(s => (s.name, s.street)) += ("foo", "bar")
Note the use of the new += operator for API compatibility with Scala collections. The old name insert is still
available as an alias.
Slick 2.0 will now automatically exclude AutoInc fields by default when inserting data. In 1.0 it was common to
have a separate projection for inserts in order to exclude these fields manually:
// --------------------- Slick 1.0 code -- does not compile in 2.0 ---------------------
case class Supplier(id: Int, name: String, street: String)

object Suppliers extends Table[Supplier]("SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("SUP_NAME")
  def street = column[String]("STREET")
  // Map a Supplier case class:
  def * = id ~ name ~ street <> (Supplier.tupled, Supplier.unapply)
  // Special mapping without the 'id' field:
  def forInsert = name ~ street <> (
    { case (name, street) => Supplier(-1, name, street) },
    { sup => (sup.name, sup.street) }
  )
}
Suppliers.forInsert.insert(mySupplier)
This is no longer necessary in 2.0. You can simply insert using the default projection and Slick will skip the auto-incrementing id column:
case class Supplier(id: Int, name: String, street: String)

class Suppliers(tag: Tag) extends Table[Supplier](tag, "SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("SUP_NAME")
  def street = column[String]("STREET")
  def * = (id, name, street) <> (Supplier.tupled, Supplier.unapply)
}
val suppliers = TableQuery[Suppliers]

suppliers.insert(mySupplier)
If you really want to insert into an AutoInc field, you can use the new methods forceInsert and
forceInsertAll.
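For example (a sketch, using the Suppliers mapping just above):

// Write an explicit id value instead of letting the database generate one
suppliers.forceInsert(Supplier(999, "Manual Id Coffee Co.", "1 Override Way"))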
New in 2.0: Using HList to support tables with more than 26 columns
The above examples are all built around case classes and tuples, which are limited to 22 fields in Scala 2.10.
So 2.0 offers HList as a way to define tables with more columns than that. Here is an example:
// An Animals table with 4 columns: name, color, age, weight
class Animals(tag: Tag)
    extends Table[HCons[String, HCons[String, HCons[Int, HCons[Double, HNil]]]]](tag, "ANIMALS") {
  def name: Column[String] = column[String]("NAME", O.PrimaryKey)
  def color: Column[String] = column[String]("COLOR")
  def age: Column[Int] = column[Int]("AGE")
  def weight: Column[Double] = column[Double]("WEIGHT")
  def * = name :: color :: age :: weight :: HNil
}
The differences are on the first and last line of this example code.
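Inserting and reading rows then uses HList values instead of tuples. A sketch (it assumes Slick's heterogenous-collection types – HCons, HNil and the :: syntax – are imported, and that a session is in scope):

val animals = TableQuery[Animals]

// Insert a row as an HList rather than a tuple
animals += ("Lion" :: "gold" :: 5 :: 190.5 :: HNil)

// Rows come back as HLists as well; fields are accessed positionally
val firstRow = animals.list.head
println(firstRow.head)  // "Lion"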
Practical Issues
Connection Pools
Slick does not provide a connection pool implementation of its own. When you run a managed application in some
container (e.g. JEE or Spring), you should generally use the connection pool provided by the container. For standalone applications you can use an external pool implementation like DBCP, c3p0 or BoneCP.
Note that Slick uses prepared statements wherever possible but it does not cache them on its own. You should
therefore enable prepared statement caching in the connection pool’s configuration and select a sufficiently large
pool size.
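As a sketch, an external pool can be wired into Slick 2.x via a DataSource; c3p0 is used here purely as an illustration, and the URL and credentials are the placeholders used earlier in these notes:

import com.mchange.v2.c3p0.ComboPooledDataSource
import scala.slick.driver.MySQLDriver.simple._

val ds = new ComboPooledDataSource
ds.setDriverClass("com.mysql.jdbc.Driver")
ds.setJdbcUrl("jdbc:mysql://localhost:3306/slick01")
ds.setUser("developer")
ds.setPassword("123456")
// Enable prepared statement caching in the pool, as recommended above
ds.setMaxStatementsPerConnection(50)

val db = Database.forDataSource(ds)
db.withSession { implicit session =>
  // queries run here
}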
Appendix A: Learning programs created
Slick01
Based on the tutorial (tables Coffee and Supplier); then I added a "customer" table, then used HList to create the
"Animal" and "Monster" tables. This has 26 columns in it.
SC55
Completed in early October 2014. Uses Slick 2 to define entities for Challenge and Activity. Also defines
ChallengeTable and ActivityTable classes.
For comparison, also uses DBOperation, manually generated queries, and marshaling methods for Leaderboard
entities.