Informix 12.10.xC4 (and a little of xC3 too)
A Hybrid Multi-Tenant Database with JSON at REST
Scott Pickett
WW Informix Technical Sales
For questions about this presentation contact [email protected]
© 2015 IBM Corporation
Agenda
 Multi Tenancy Support
 REST API
 Other Enhancements
2
© 2015 IBM Corporation
Agenda
 Multi Tenancy Support
 REST API
 Other Enhancements
3
© 2015 IBM Corporation
Current Informix Architecture
 Well suited for Isolated Tenancy
– Optimally tuned Informix instance
– Typically a single database in a production environment
– Informix does allow multiple databases in a single instance
– All databases share available resources (CPU / memory / disk)
 Consolidated and Shared data
– All data organized together
– Data inter-mingled with other database data
 All resources shared by all the databases
– Query processing can impact other databases
– No way to restrict resource utilization by database
4
© 2015 IBM Corporation
Multi-Tenancy in Informix
 Multiple tenant databases within a
single instance of Informix
– Assign resources (cpu / storage)
specific to tenant databases
– Segregate data specific to each tenant
– Informix instance will share certain
resources across all tenants (pio / lio
etc)
– Capability to associate VPs specific to
tenant databases
5
A single instance of the
software runs on a server,
serving multiple client organizations (tenants).
With a multi-tenant architecture,
software is designed to virtually
partition its data and
configuration, and each client
organization works with a
customized virtual application.
© 2015 IBM Corporation
Advantages of a Multi-tenant environment
 Shared costs
– License cost could be shared across all the tenants
– Several tenants are allocated only a portion of the software and hardware
resources
– Ideally suited for SaaS / PaaS environments
– Cost effective when compared with virtualization
– Ideal for service providers
 Secure and isolated data for each tenant
– Informix treats each tenant database as an independent unit from the
application perspective
– Storage spaces are specific to each tenant
 Typically database objects are the same in all tenant databases.
– Makes for easier implementation in a cloud environment
– Capability to alter schema (if required) just for tenant(s)
6
© 2015 IBM Corporation
Multi-Tenant Architecture - Features
• 1 tenant = 1 database
• Segregated and dedicated storage for tenants
– Storage provisioning can be used to allocate dbspaces on demand for a multi-tenant system
– Dbspaces allocated as a small size, but set to auto-expand
– New dbspace maximum size can now be set to prevent expansion above a
certain size.
• Tenants could share CPU resources using CPU VPs or have user-defined
VPs per tenant
– Tenants, however, will share VP resources for non-user-defined VPs (aio, pio,
lio, net, etc.)
• Capability to specify session limits on…
– Locks
• Manage tenants with a graphical view (OAT)
– A tenant can see only information relevant to that specific tenant
– Storage information for that tenant
– CPU utilization for that tenant
7
© 2015 IBM Corporation
Multi-Tenant Storage
 Storage provisioning can be used to allocate dbspaces on demand
for a multi-tenant system
 Dbspaces allocated as a small size, but set to auto-expand
 New dbspace maximum size can now be set to prevent expansion
above a certain size.
8
© 2015 IBM Corporation
Multi-Tenancy
9
© 2015 IBM Corporation
Creating a Tenant
execute function task("create dbspace from storagepool", "tenant1_1", "10m");
execute function task("create dbspace from storagepool", "tenant1_2", "10m");
execute function task("create sbspace from storagepool", "tenant1_3", "10m");
execute function task("tenant create", "tenant1",
'{ "DBSPACE":"tenant1_1,tenant1_2",
   "SBSPACE":"tenant1_3",
   "VPCLASS":"VPCLASS tenant1,num=1",
   "SESSION_LIMIT_LOCKS":"2000" }');
10
© 2015 IBM Corporation
Creating a Tenant
execute function task("tenant create", "tenant1",
'{
  "DBSPACE":"dbslist",
  "BLOBSPACE":"dbslist",
  "SBSPACE":"dbslist",
  "VPCLASS":"vpclass definition",
  "DBSPACETEMP":"dbslist",
  "SBSPACETEMP":"sblist",
  "SESSION_LIMIT_LOCKS":"value",
  "LOGMODE":"db log mode definition",
  "CASE":"case",
  "LOCALE":"locale definition",
  "CATALOGS":"dbslist"
}');
11
© 2015 IBM Corporation
Multi-Tenant – Limiting Database Creation
 DBCREATE_PERMISSION
– Specifies the users who can create databases (by default, any user can)
Example
DBCREATE_PERMISSION informix
12
© 2015 IBM Corporation
Multi-Tenancy Management
 Use SQL Administration API command to manage tenant databases
EXECUTE FUNCTION task('tenant create', 'companyA',
'{dbspace:"companyA_dbs1, companyA_dbs2, companyA_dbs3",
sbspace:"companyA_sbs",
vpclass:"tvp_A,num=6",
dbspacetemp:"companyA_tdbs",
session_limit_locks:"1000", logmode:"ansi", locale:"fr_ca.8859-1"}');
EXECUTE FUNCTION task('tenant update', 'companyA',
'{dbspace:"companyA_dbs4, companyA_dbs5", sbspace:"companyA_sbs3",
vpclass:"tvp_B"}');
The tenant database gains two dbspaces and an sbspace.
EXECUTE FUNCTION task('tenant drop', 'companyA');
You cannot simply "drop database database_name" a tenant database. You must 'tenant
drop' a database that was created as a tenant.
Note: The options for the tenant database are treated as a JSON document.
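These admin API calls can also be issued programmatically from any client that can run SQL against the sysadmin database. Below is a minimal, illustrative Python sketch using pyodbc; the DSN name ifx_sysadmin, and the assumption that it points at the instance's sysadmin database, are hypothetical and not part of the product documentation.
import pyodbc

# Assumption: an ODBC DSN named "ifx_sysadmin" connects to the instance's sysadmin database
conn = pyodbc.connect("DSN=ifx_sysadmin", autocommit=True)
cur = conn.cursor()

# The tenant options are passed as a single JSON document, as noted above
options = ('{dbspace:"companyA_dbs1, companyA_dbs2, companyA_dbs3", '
           'sbspace:"companyA_sbs", vpclass:"tvp_A,num=6", '
           'dbspacetemp:"companyA_tdbs", session_limit_locks:"1000", logmode:"ansi"}')

cur.execute("EXECUTE FUNCTION task('tenant create', 'companyA', ?)", options)
print(cur.fetchone()[0])   # task() returns a status message
conn.close()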
Return to Agenda
13
© 2015 IBM Corporation
The vpclass option for task/admin() tenant update (1)
 vpclass_name is limited to 8 characters.
 A maximum of 200 tenant virtual processor classes can be created.
 If the vpclass_name is unique,
– You create a new tenant virtual processor class.
 If the vpclass_name exists,
– The tenant database shares the class with other tenant databases.
 If the vpclass property has an existing value,
– You change the virtual processor class for the tenant database.
 If you do not include the num=vps property,
– one virtual processor is started.
14
© 2015 IBM Corporation
The vpclass option for task/admin() tenant update (2)
 From a couple of slides earlier:
EXECUTE FUNCTION task('tenant update', 'companyA',
'{dbspace:"companyA_dbs4, companyA_dbs5", sbspace:"companyA_sbs3",
vpclass:"tvp_B"}');
 This much is certain:
– The instance gains two dbspaces and an sbspace.
 From the previous slide:
– If the vpclass_name is unique,
• You create a new tenant virtual processor class.
– If the vpclass_name exists,
• The tenant database shares the class with other tenant databases.
– If the vpclass property has an existing value,
• You change the virtual processor class for the tenant database.
– If you do not include the num=vps property,
• One virtual processor is started.
15
© 2015 IBM Corporation
Agenda
 Multi Tenancy Support
 REST API
 Other Enhancements
16
© 2015 IBM Corporation
REST Definition
17
© 2015 IBM Corporation
REST Architecture
 Distributed communication architecture
 Widely popular in cloud environments
 REST
– An architectural style for web-based communication
– Permits clients to communicate with servers in a uniform manner
– Represents resources (databases in this case) as URIs
– The architecture uses the HTTP protocol
– A set of operations (GET/POST/PUT/DELETE) permits manipulation of resources
 RESTful architectures are stateless
– Server does not maintain any client context between transactions
– Transaction must contain all information necessary to satisfy the particular
request.
– Makes RESTful architectures more reliable and helps them scale
The strength of REST
REST is an architectural style, not a protocol or an implementation. REST has some core principles, but
in the end, it's an abstraction, not a specific implementation.
(Source: http://www.ibm.com/developerworks/library/os-understand-rest-ruby/)
18
© 2015 IBM Corporation
Access Informix from REST API Clients
 Directly connect applications or devices that communicate through
the REST API to Informix
– No client side drivers needed, freedom from client dependency
– Web based applications can connect seamlessly to the database using HTTP
protocol
– Create connections by configuring the wire listener for the REST API
– Use MongoDB and SQL queries against JSON and BSON document
collections, traditional relational tables, and time series data
– The REST API uses MongoDB syntax and returns JSON documents
– Widely popular in Cloud / IOT architectures
– Simplify web application development in Cloud environments
 A subset of the HTTP protocol (GET / POST / DELETE / PUT) is supported
– POST method maps to the MongoDB insert or create command
– GET method maps to the MongoDB query command
– PUT method maps to the MongoDB update command
– DELETE method maps to the MongoDB delete command
19
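As an illustration of this mapping, here is a minimal Python sketch using the requests library against a wire listener assumed to be running locally on port 8080 (the listener.port used in the rest.properties example later in this presentation); the database and collection names are placeholders.
import requests

BASE = "http://localhost:8080"   # REST wire listener (listener.type=rest)

# POST -> MongoDB insert/create: create a database, a collection, then a document
requests.post(BASE + "/", json={"name": "mydb", "locale": "en_us.utf8"})
requests.post(BASE + "/mydb", json={"name": "people"})
requests.post(BASE + "/mydb/people", json={"firstName": "John", "lastName": "Doe", "age": 31})

# GET -> MongoDB query: list databases, then query the collection
print(requests.get(BASE + "/").json())
print(requests.get(BASE + "/mydb/people", params={"query": '{age:31}'}).json())

# PUT -> MongoDB update: change John's age
requests.put(BASE + "/mydb/people", params={"query": '{firstName:"John"}'},
             json={"$set": {"age": 32}})

# DELETE -> MongoDB delete: drop the collection
requests.delete(BASE + "/mydb/people")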
© 2015 IBM Corporation
Access Informix from REST API Clients (contd)
20
© 2015 IBM Corporation
Wire Listener & REST (1)
 The wire listener is a mid-tier gateway server that enables
communication between MongoDB applications and the Informix®
database server.
 The wire listener is provided as an executable JAR file that is named
$INFORMIXDIR/bin/jsonListener.jar. The JAR file provides access to
the MongoDB API and REST API.
 You can connect to a JSON collection by using the REST API.
 When a client is connected to the wire listener by using the REST API,
each database is registered, along with session events such as creating or
dropping a database.
 If a REST request refers to a database that exists but is not registered,
the database is registered and a redirect to the root of the database is
returned.
21
© 2015 IBM Corporation
Wire Listener & REST (2)
 The JSONListener.properties file has an optional parameter called
listener.type. It specifies the type of wire listener to start:
– The default is mongo, which connects the wire listener to the MongoDB API
• listener.type=mongo
 To use the REST API, connect the wire listener to the REST API by
specifying the following parameter value:
– listener.type=rest
 There are some new REST-related optional parameters for the
JSONListener.properties file that may be needed.
22
© 2015 IBM Corporation
Multiple wire listeners configuration (1)
 You can run more than one wire listener at the same time to access
both the MongoDB API and the REST API by creating a properties file for each:
– Create each properties file in the $INFORMIXDIR/etc directory using the
$INFORMIXDIR/etc/jsonListener-example.properties file as a template.
– Customize each properties file and assign a unique name:
• The url parameter must be specified, either in each individual properties file or in the
file that is referenced by the include parameter.
• Optional: Specify the include parameter to reference another properties file. The path can
be relative or absolute.
• If you have multiple properties files, you can avoid duplicating parameter settings in
the multiple properties files by specifying a subset of shared parameters in a single
properties file, and the unique parameters in the individual properties files.
– Start the wire listeners.
23
© 2015 IBM Corporation
Multiple wire listeners configuration (2) - Example
 The same url, authentication.enable, and security.sql.passthrough
parameters are used to run two separate wire listeners:
 Create a properties file named shared.properties that includes the
following parameters:
– url=jdbc:informix-sqli://localhost:9090/sysmaster:INFORMIXSERVER=lo_informix1210;
  authentication.enable=true
  security.sql.passthrough=true
 Create a properties file for use with the MongoDB API that is named
mongo.properties, with the parameter setting
include=shared.properties included:
– include=shared.properties
  listener.type=mongo
  listener.port=27017
 Create a properties file for use with the REST API that is named
rest.properties, with the parameter setting include=shared.properties
included:
– include=shared.properties
  listener.type=rest
  listener.port=8080
24
© 2015 IBM Corporation
Multiple wire listeners configuration (3) - Example
 Start the wire listeners by using the command line:
– java -jar jsonListener.jar -start -config mongo.properties -config
rest.properties
25
© 2015 IBM Corporation
HTTP: POST
 The POST method maps to the MongoDB insert or create command.
Method   Path                             Description
POST     /                                Create a database
POST     /databaseName                    Create a collection
                                          (databaseName – database name)
POST     /databaseName/collectionName     Create a document
                                          (databaseName – database name,
                                          collectionName – collection name)
26
© 2015 IBM Corporation
HTTP: POST – Create a database
 With the locale specified.
 Request: Specify the POST method:
– POST /
 Data: Specify database name mydb and an English UTF-8 locale:
– {name:"mydb",locale:"en_us.utf8"}
 Response: A successful response does not contain any data.
27
© 2015 IBM Corporation
HTTP: POST – Collection Creation
 Creates a collection in the mydb database.
 Request: Specify the POST method and the database name as mydb:
– POST /mydb
 Data: Specify the collection name as bar:
– {name:"bar"}
 Response: The following response indicates that the operation was
successful:
– {"msg":"created collection mydb.bar","ok":true}
28
© 2015 IBM Corporation
HTTP: POST – Relational Table Creation
 This example creates a relational table in an existing database.
 Request: Specify the POST method and stores_mydb as the database:
– POST /stores_mydb
 Data: Specify the table attributes:
– { name: "rel", columns: [{name:"id",type:"int",primaryKey:true,},
{name:"name",type:"varchar(255)"}, {name:"age",type:"int",notNull:false}]}
 Response: The following response indicates that the operation was
successful:
– {msg: "created collection stores_mydb.rel" ok: true}
29
© 2015 IBM Corporation
HTTP: POST – Insert a Single Document
 Inserts a document into an existing collection.
 Request: Specify the POST method, mydb database, and people
collection:
– POST /mydb/people
 Data: Specify John Doe age 31:
– {firstName:"John",lastName:"Doe",age:31}
 Response: Because the _id field was not included in the document,
the automatically generated _id is included in the response. Here is a
successful response:
– {"id":{"$oid":"537cf433559aeb93c9ab66cd"},"ok":true}
30
© 2015 IBM Corporation
HTTP: POST – Insert Multiple Documents
 This example inserts multiple documents into a collection.
 Request: Specify the POST method, mydb database, and people
collection:
– POST /mydb/people
 Data: Specify John Doe age 31 and Jane Doe age 31:
– [{firstName:"John",lastName:"Doe",age:31},
{firstName:"Jane",lastName:"Doe",age:31}]
 Response: Here is a successful response:
– {ok: true}
31
© 2015 IBM Corporation
HTTP: GET
 The GET method maps to the MongoDB query command.
Method   Path                                           Description
GET      /                                              List all databases
GET      /databaseName                                  List all collections in a database
                                                        (databaseName – database name)
GET      /databaseName/collectionName?queryParameters   Query a collection
                                                        (databaseName – database name,
                                                        collectionName – collection name,
                                                        queryParameters – the query parameters)
The supported Informix queryParameters are batchSize, query, fields, and sort.
These map to the equivalent MongoDB batchSize, query, fields, and sort parameters.
32
© 2015 IBM Corporation
HTTP: GET – List All Databases on the Server
 Specify the GET method and forward slash (/):
– GET /
 Data: None.
 Response: Here is a successful response:
– [ "mydb" , "test“]
33
© 2015 IBM Corporation
HTTP: GET – List All Collections in a Database
 Request: Specify the GET method and mydb database:
– GET /mydb
 Data: None.
 Response: Here is a successful response:
– ["bar"]
34
© 2015 IBM Corporation
HTTP: GET – Show Sorted Data from a Collection
 This example sorts the query results in ascending order by age.
 Request: Specify the GET method, mydb database, people collection,
and query with the sort parameter.
– The sort parameter specifies ascending order (age:1), and filters id (_id:0) and
last name (lastName:0) from the response
– GET /mydb/people?sort={age:1}&fields={_id:0,lastName:0}
 Data: None.
 Response: The first names are displayed in ascending order with the
_id and lastName filtered from the response:
– [{"firstName":"Sherry","age":31}, {"firstName":"John","age":31},
{"firstName":"Bob","age":47}, {"firstName":"Larry","age":49}]
35
© 2015 IBM Corporation
HTTP: PUT
 The PUT method maps to the MongoDB update command.
Method   Path                                           Description
PUT      /databaseName/collectionName?queryParameters   Update a document
                                                        (databaseName – database name,
                                                        collectionName – collection name,
                                                        queryParameters – the supported Informix
                                                        queryParameters are query, upsert, and
                                                        multiupdate. These map to the equivalent
                                                        MongoDB query, upsert, and multi
                                                        parameters, respectively.)
36
© 2015 IBM Corporation
HTTP: PUT – Document Update in a Collection
 Update the value for Larry in an existing collection, from age 49 to 25:
– [{"_id":{"$oid":"536d20f1559a60e677d7ed1b"},"firstName":"Larry"
,"lastName":"Doe","age":49},{"_id":{"$oid":"536d20f1559a60e677d7ed1c"}
,"firstName":"Bob","lastName":"Doe","age":47}]
 Request: Specify the PUT method and query the first name Larry:
– PUT /mydb/people?query={firstName:"Larry"}
 Data: Specify the MongoDB $set operator with age 25:
– {"$set":{age:25}}
 Response: Here is a successful response:
– {"n":1,"ok":true}
37
© 2015 IBM Corporation
HTTP: DELETE
 The DELETE method maps to the MongoDB delete command.
Method   Path                                          Description
DELETE   /                                             Delete all databases
DELETE   /databaseName                                 Delete a database
                                                       (databaseName – database name)
DELETE   /databaseName/collectionName                  Delete a collection
                                                       (databaseName – database name,
                                                       collectionName – collection name)
DELETE   /databaseName/collectionName?queryParameter   Delete a document
                                                       (databaseName – database name,
                                                       collectionName – collection name,
                                                       queryParameter – the query parameter)
The supported Informix queryParameter is query, which maps to the equivalent
MongoDB query parameter.
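As a sketch of the last row of this table (deleting documents that match a query parameter) from Python, with the listener assumed at localhost:8080:
import requests

# Delete the documents in mydb.people whose lastName is Doe
resp = requests.delete("http://localhost:8080/mydb/people",
                       params={"query": '{lastName:"Doe"}'})
print(resp.json())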
38
© 2015 IBM Corporation
HTTP: DELETE (1) – Database Deletion
 Delete a database called mydb.
 Request: Specify the DELETE method and the mydb database:
– DELETE /mydb
 Data: None.
 Response: Here is a successful response:
– {msg: "dropped database"ns: "mydb"ok: true}
39
© 2015 IBM Corporation
HTTP: DELETE (2) – Collection deletion
 This example deletes a collection from a database.
 Request: Specify the DELETE method, mydb database, and bar
collection:
– DELETE /mydb/bar
 Data: None.
 Response: Here is a successful response:
– {"msg":"dropped collection""ns":"mydb.bar""ok":true}
40
© 2015 IBM Corporation
Agenda
 Multi Tenancy Support
 REST API
 Other Enhancements
41
© 2015 IBM Corporation
Storage Optimization Improvements
 Parallelized Storage Optimization
– Compress, uncompress, and repack table and index data in parallel
– Parallel operations reduce maintenance windows
– Examples:
EXECUTE FUNCTION task("table compress rows parallel","dental");
EXECUTE FUNCTION task("fragment uncompress rows parallel","14680071");
EXECUTE FUNCTION task("table update_ipa parallel","auto");
EXECUTE FUNCTION task("index compress repack shrink parallel","ind5", "customer", "jayson");
42
© 2015 IBM Corporation
Limit LOCKS per session
 Limit the number of LOCKS per session
– Useful in multi-tenant environments and in embedded applications
– New ONCONFIG parameter SESSION_LIMIT_LOCKS
– Prevents users from acquiring too many locks per session
– Has no impact for a DBSA user
– Can also use IFX_SESSION_LIMIT_LOCKS along with SET ENVIRONMENT
– Caution:
• For sessions with the Repeatable Read isolation level, make sure there are enough locks
for users
• In an ER environment a low value can interfere with transactions generated by non-DBSA users
– Minimum value is 500 and maximum is 2^31 - 1
– Takes effect with onmode -wf or onmode -wm as well
43
© 2015 IBM Corporation
Improved memory pool allocation for ER
 Aids in efficient ER transaction throughput
 Better memory pool allocation handling for ER
 New ONCONFIG parameter CDR_MEM
– 0 - The traditional method of memory allocation
• Use this setting when resource allocation is more important than performance
– 1 - Prevents multiple threads from simultaneously accessing a memory pool
• In large-scale ER environments significant performance improvements occur,
because memory allocation is done by multiple threads that work in parallel
– 2 - Improves performance at the cost of increased memory usage
• Memory allocation requests are increased to the closest fixed-block size, so that free
memory blocks can be found faster.
• Memory pools are not associated with specific CPU virtual processors, so memory
can be freed directly to the memory pool.
44
© 2015 IBM Corporation
SQL Compatibility: Limit Clause (1)
 Helps 3rd party tools (Tableau / Business objects / Cognos)
 New LIMIT clause after the optional ORDER BY clause in a SELECT
statement
 Use the LIMIT clause to specify the maximum number of rows the
query can return
– The LIMIT clause is the same as the LIMIT option, except that the LIMIT option is part of
the projection clause of the SELECT statement
SELECT a, b FROM tab1 LIMIT 10;
SELECT *
FROM TABLE(MULTISET(SELECT * FROM employees ORDER BY
employee_id LIMIT 10 )) vt(x,y), tab2
WHERE tab2.id = vt.x;
SELECT a, b FROM tab1 LIMIT 10
UNION
SELECT a, b FROM tab2;
45
© 2015 IBM Corporation
SQL Compatibility: Limit Clause (2)
 Limitations:
– Cannot use with a view
– Cannot use in nested SELECT statements
– Cannot use in a singleton SELECT (where max = 1) within an SPL routine
– Cannot use where embedded SELECT statements are used as expressions
46
© 2015 IBM Corporation
GLS Improvements
 With GLS 6.00xC4, specify the display widths that DB-Access and
other character-based Informix applications use for characters in the
Unicode Private Use Area (PUA) ranges:
– First, set the new IFX_PUA_DISPLAY_MAPPING environment variable
– Second, create a mapping file:
• $INFORMIXDIR/gls/etc/pua.map.
• List each character in the file followed by the character representation display width.
 Valid display widths in the file are:
– 1 (halfwidth character representation, default) or
– 2 (fullwidth character representation).
 If there is not a display width for a character in the file, the default is
halfwidth.
47
© 2015 IBM Corporation
Other Server Improvements
 Limit size of extendable storage spaces
– Use SQL Administration API functions to set the limits
– Arguments are space name / new create size / new extend size / max size
– Useful for multi-tenant Informix instances
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace3", "60000",
"10000", "100000");
dbspace3 is modified with a new create size of 60000 KB when expanding from
the storage pool entry for the storage space, a new extend size of
10000 KB, and a maximum total size of 100000 KB.
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace8", "20",
"1.5");
dbspace8 is modified with a create size of 20% of the storage space, an
extend size of 1.5% of the storage space, and no maximum size defined.
48
© 2015 IBM Corporation
Automatic removal of in-place alters
 Removing outstanding in-place alter operations improves
performance
 Remove outstanding in-place alter operations for tables or fragments
with:
– IBM OpenAdmin Tool (OAT) for Informix
– Update_ipa or fragment update_ipa argument of the admin() or task() SQL
administration command.
– Previously, a dummy UPDATE was done on tables to remove outstanding in-place alter operations.
 Speed up operations by including the parallel option with the
table update_ipa or fragment update_ipa argument of the admin() or
task() SQL administration command.
 A prerequisite for reverting to an earlier version of Informix.
 An example showing removal of pending in-place alter operations on
a table that is named auto in parallel:
EXECUTE FUNCTION task("table update_ipa parallel","auto");
49
© 2015 IBM Corporation
Enhanced JSON Compatibility
 Create and store Timeseries data:
– With the REST API or the MongoDB API
 Access BSON data from JDBC client applications
 Quickly export relational tables to BSON or JSON documents
– Allows for moving data between environments / architectures
50
© 2015 IBM Corporation
Text Search of JSON Documents (NoSQL)
 Use the Basic Text Search (BTS) database extension to:
– Perform basic text search on columns that have JSON or BSON data types
– Create the BTS index on JSON or BSON data types through SQL with the
CREATE INDEX statement or on BSON data types through the Informix
extension to MongoDB with the createTextIndex command
– Control how JSON and BSON columns are indexed by including JSON index
parameters when you create the basic text search index.
– Run a basic text query on JSON or BSON data with the bts_contains() search
predicate in SQL queries or the $ifxtext query operator in JSON queries.
51
© 2015 IBM Corporation
Include JSON documents in timeseries
 Schema flexibility for Timeseries applications
– Add key-value pairs as data elements
– Create a time series that contains JSON documents
– JSON documents are unstructured data
– No schema changes required if the structure of the data changes
 Easy to load and query the data
 Provides performance benefit over classic timeseries implementation
 Allows for multiple timeseries for each device
52
© 2015 IBM Corporation
Include JSON documents in timeseries (contd)
 Use MongoDB and REST API clients
– Can load JSON documents directly from MongoDB and REST API clients
without formatting the data.
 Application compatibility
– No need for schema migration, therefore minimal changes required for the
application.
 Variable schema
– Storing data as unstructured data in JSON documents prevents the need to
update your schema or your application.
– For example, if you have sensors that monitor every machine in a factory, when
you add a machine, the new sensors might collect different types of data than
existing sensors. Greater flexibility.
 Simplified schema
– If the schema for time-based data includes more columns than each record
typically uses, or if data records typically contain many NULL values, storing the data
as unstructured JSON documents makes perfect sense.
– For example, if there are 50 different measurements but each sensor collects
only 5 of those measurements, each record will have 45 NULL values.
53
© 2015 IBM Corporation
Questions
54
© 2015 IBM Corporation
Bufferpool Enhancements (cont’d)
 When auto tuning the bufferpool:
– Analyze bufferpool cache hit ratio every 5 minutes
– Examine two metrics
• Foreground Writes
 Indicates the bufferpool is too small to handle the current activity
 LRU flushing needs to be more aggressive
• Cache Hit Ratio
 A low cache hit ratio means we need more buffers to maintain the working set
of data
55
© 2015 IBM Corporation
Viewing Buffer Pool Memory
56
onstat -g seg
© 2015 IBM Corporation
Auto Tuning of the Bufferpool – onstat -g buf
57
© 2015 IBM Corporation
Automatic Physical Log Tuning
 Auto tuning only occurs if the physical log is stored in a separate
“plogspace” which is set to expandable
execute function task ("create plogspace from storagepool", "plog",
"1gb");
onspaces -c -P plog -p /dev/disk -o 0 -s 1000000
 Automatically grows the physical log by 10% when checkpoint
blocking is triggered because the server ran out of physical log
space.
 Automatically grows the physical log by 10% when 5 of the last 20
checkpoints were triggered because the server was running short of physical
log space
58
© 2015 IBM Corporation
Complex Query Processing
 Use view folding mechanisms
 Avoid materialized views or large temporary tables
 More efficient query processing through 3rd-party BI tools
59
© 2015 IBM Corporation
Other Server Improvements
 Improve performance with private memory caches
 Default mode for VP_MEMORY_CACHE_KB is STATIC
– To change it to DYNAMIC make changes to ONCONFIG
VP_MEMORY_CACHE_KB 800, DYNAMIC
 Best Practice Tip:
– If using DYNAMIC, memory caches grow and use more memory
– Consider setting SHMTOTAL to a specific value instead of 0
 Complex Query processing improvements
– Use View folding mechanisms
– Avoid materialized views or large temporary tables
– More efficient query processing through 3rd-party BI tools
60
© 2015 IBM Corporation
Other Enterprise Replication (ER) improvements
 ER now supports Hertz and Compressed timeseries
 New Event alarm for blocked replicate transactions
– New event alarm 33003 appears
• If ER transactions are blocked when a replicated table is in alter mode
61
© 2015 IBM Corporation
Create Timeseries using REST or Mongo DB Wire
Listener (1)
 Application developers can create a time series through the wire
listener:
– Choose a predefined calendar from the system.timeseries.calendar collection
or create a calendar by adding a document to the system.timeseries.calendar
collection.
– Create a TimeSeries row type by adding a document to the collection
system.timeseries.rowType
– Create a container by adding a document to the system.timeseries.container
collection.
– Create a time series table with the time series table format syntax.
– Instantiate the time series by creating a virtual table with the time series virtual
table format syntax.
– Load time series data by inserting documents into the virtual table.
– Query and update the data using the virtual table.
62
© 2015 IBM Corporation
Create Timeseries using REST or Mongo DB Wire
Listener (2)
 The following restrictions apply when you create a time series
through the wire listener:
– Cannot define hertz or compressed time series.
– Cannot define rolling window containers.
– Cannot load time series data through a loader program. Load time series data
through a virtual table.
– Cannot run time series SQL routines or methods from the time series Java
class library. Operate on the data through a virtual table.
63
© 2015 IBM Corporation
Create Time series using REST API or Mongo API
(contd…)
 Create a time series calendar named ts_10min by adding the
following document to the system.timeseries.calendar collection with
the REST API POST method or the MongoDB API insert method:
{name:"ts_10min", calendarStart:"2014-01-01 00:00:00", patternStart:"2014-0101 00:00:00", pattern:{type:"minute",
intervals:[{duration:1,on:true},
{duration:9,on:false}]}}
 Create a TimeSeries row type named reading by adding the following
document to the system.timeseries.rowType collection with the REST
API POST method or the MongoDB API insert method:
{name:"reading",fields:[{name:"tstamp", type:"datetime year to fraction(5)"},
{name:"temp", type:"float"}, {name:"hum", type:"float"}]}
64
© 2015 IBM Corporation
Enhanced JSON Compatibility (1)
 Informix now supports the following MongoDB 2.4 features:
– Cursor support to query large volumes of data
– Text search of string content in collections and tables
– Geospatial indexes and queries
– Pipeline aggregation operators
– The array update modifiers: $each, $slice, and $sort
 Perform the following new tasks that extend MongoDB functionality
in your JSON application:
– Import and export data directly with the wire listener by using the Informix JSON
commands
• exportCollection and
• importCollection.
– Configure a strategy for calculating the size of your database by using the
Informix extension to the MongoDB listDatabases command:
• sizeStrategy option or
• command.listDatabases.sizeStrategy property.
65
© 2015 IBM Corporation
Enhanced JSON Compatibility (2)
 Customize the behavior of the wire listener by setting new properties.
For example:
– Control Logging
– Caching
– Timeout
– Memory Pools
– Maximum Document Size
66
© 2015 IBM Corporation
Other JSON / BSON features
 Transaction support for a session. This command binds or unbinds a
connection to the current MongoDB session in a database.
db.runCommand( {transaction : "enable" } )
db.runCommand( {transaction : "commit" } )
db.runCommand( {transaction : “rollback" } )
 Export JSON collections from the wire listener to a file
db.runCommand( {exportCollection: "c" , file: "/tmp/export.out" ,
format:"json"} )
 Import JSON collections using the wire listener from a file.
db.runCommand( {importCollection: "c" , file: "/tmp/import.out" ,
format:"json"} )
db.runCommand( {exportCollection: "c" , file: "/tmp/import.out" ,
format:"jsonArray"} )
 Easily access BSON data on the Informix server from JDBC client
applications through the IfxBSONObject class.
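Because the wire listener speaks the MongoDB wire protocol, these commands can also be issued from a standard MongoDB driver. A minimal pymongo sketch, assuming the MongoDB listener from the earlier example is running on port 27017 and that authentication is not required; the collection name and file path are placeholders.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)   # MongoDB wire listener (listener.type=mongo)
db = client["mydb"]

db.command({"transaction": "enable"})      # bind the connection to a transaction
db["c"].insert_one({"k": 1})
db.command({"transaction": "commit"})

# Export the collection to a file (path is resolved where the wire listener runs)
db.command({"exportCollection": "c", "file": "/tmp/export.out", "format": "json"})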
67
© 2015 IBM Corporation
Create Timeseries using REST or Mongo DB Wire
Listener
 Application developers can create a time series through the wire
listener:
– Choose a predefined calendar:
• From the system.timeseries.calendar collection or
• Create a calendar by adding a document to the system.timeseries.calendar
collection.
– Create a TimeSeries row type by adding a document to the
system.timeseries.rowType collection.
– Create a container by adding a document to the system.timeseries.container
collection.
– Create a time series table with the time series table format syntax.
– Instantiate the time series by creating a virtual table with the time series virtual
table format syntax.
– Load time series data by inserting documents into the virtual table.
– Query and update the data using the virtual table.
68
© 2015 IBM Corporation
Create Time series using REST API or Mongo API
(contd…)
 Create a container named c_0 in the dbspace dbspace1 by adding
the following document to the system.timeseries.container collection
with the REST API POST method or the MongoDB API insert method:
{name:"c_0", dbspaceName:"dbspace1", rowTypeName:"reading",
firstExtent:1000, nextExtent:500}
 Create the time series table named ts_data1 by running the REST API
POST method or the MongoDB API create method with the following
table format:
{name:"ts_data1", columns:[{name:"id", type:"int", primaryKey:true,
notNull:true},
{name:"ts", type:"timeseries(reading)"}]}
69
© 2015 IBM Corporation
Create Time series using REST API or Mongo API
(contd…)
 Create the virtual table that is named ts_data1_v by running the
REST API POST method or the MongoDB API create method with the
following table format:
{name:"ts_data1_v", timeseriesVirtualTable: {baseTableName:"ts_data1",
newTimeseries:"calendar(ts_10min), origin(2014-01-01 00:00:00.00000),
container(c_0)", virtualTableMode:0, timeseriesColumnName:"ts"}}
 Load records into the time series by inserting the following
documents into the ts_data1_v virtual table with the REST API POST
method or the MongoDB API insert method:
{ id: 1, temp: 15.0, hum: 20.0}
{ id: 1, temp: 16.2, hum: 19.0}
{ id: 1, temp: 16.5, hum: 22.0}
70
© 2015 IBM Corporation
Create Time series using REST API or Mongo API
(cont’d…)
 List all device IDs:
GET /demo/$cmd?query={distinct:"ts_data_v",key:"loc_esi_id"}
 List device IDs greater than 10:
GET /stores_demo/$cmd?query={distinct:"ts_data_v",key:"loc_esi_id",query:{value:{"$gt":10}}}
 Find data for a specific device ID:
GET /stores_demo/ts_data_v?query={loc_esi_id:"4727354321046021"}
 Find and sort data with multiple qualifications
GET /stores_demo/ts_data_v?query={"$and":[{loc_esi_id:"4727354321046021"},{value:{"$gt":100.0}},{direction:"P"}]}&fields={tstamp:1,value:1}&sort={value:-1}
71
© 2015 IBM Corporation
Enhancements to the timeseries Java class library (1)
 Define time series objects with the new builder classes
 String representations of SQL statements no longer needed
 Use builder classes to:
– Reduce the possibility of errors
– Improve usability
– Methods in the Java class library run faster
72
© 2015 IBM Corporation
Enhancements to the timeseries Java class library (2)
 Ability to determine whether the definitions of two calendars or
calendar patterns are the same.
 Create calendar patterns and calendars with new
IfmxCalendarPattern.Builder and IfmxCalendar.Builder classes.
 Create and manage containers with the new TimeSeriesContainer
and TimeSeriesContainer.Builder classes.
 Create TimeSeries row types with the new TimeSeriesRowType and
TimeSeriesRowType.Builder classes.
 Create a simpler custom type map that uses a PatternClassMap
instead of individual entries for each data type with the new
TimeSeriesTypeMap and TimeSeriesTypeMap.Builder classes.
73
© 2015 IBM Corporation
Enhancements to the timeseries Java class library
 Insert data into a time series with the new IfmxTimeSeries.Builder
class.
 Modify data and process query results because the results of queries
on time series data are now JDBC updatable result sets.
 Distinguish between case sensitive and case insensitive databases
and make multiple updates within a row.
 Convert the time series data to the appropriate time zone on the
client.
 Select and update data by specifying similar data types instead of the
exact data types:
– Data is implicitly cast during read and write operations.
– Previously, transactions that did not specify the exact data types failed.
74
© 2015 IBM Corporation
Text Search of JSON Documents (NoSQL)
 To create an index named myidx in the mytab relational table on the
title and abstract columns:
db.runCommand( { createTextIndex: "mytab", name:"myidx",
key:{"title":"text", "abstract":"text"}, options : {} } )
 To create an index named articlesIdx on the articles collection by
using the bts parameter all_json_names="yes":
db.runCommand( { createTextIndex: "articles", name:"articlesIdx",
options : {all_json_names : "yes"} } )
 If text indexes are created by using the Informix createTextIndex
command, query them by using the Informix $ifxtext query operator.
 If text indexes are created by using the MongoDB syntax for text
indexes, query them by using the MongoDB $text query operator
75
© 2015 IBM Corporation
ODBC / JDBC / ESQL/C to handle BSON data type
 APIs to exchange JSON/BSON data between the application and the
database server as a character or binary stream
 The application developer is responsible for parsing the data suitably
 For JDBC:
– New Informix Extension class IfxBSON will be introduced that can be used to
access BSON/JSON data
– IfxBSON will implement the java.sql.SQLData interface, which means users can get
access to the stream using the writeSQL() and readSQL() methods
– IfxBSON will also introduce a few more utility methods
– The implementation would use a factory pattern to let the user choose which provider to
use to manipulate the BSON data.
– JDBC driver will embed org.bson.BSON library as default provider to
manipulate the BSON data, for both encode and decode operations.
76
© 2015 IBM Corporation
Include JSON documents in timeseries
Subject: Create a TimeSeries subtype that is named ts_data_j
Implementation:
CREATE ROW TYPE ts_data_j(tstamp datetime year to fraction(5), sensor_data BSON);

Subject: Create a time series table that is named tstable_j
Implementation:
CREATE TABLE IF NOT EXISTS tstable_j(id int not null primary key,
ts timeseries(ts_data_j)) LOCK MODE ROW;

Subject: Create a container that is named container_b in a dbspace
Implementation:
EXECUTE PROCEDURE TSContainerCreate('container_b', 'dbspace1', 'ts_data_j', 512, 512);

Subject: Create a time series with a JSON document
Implementation:
INSERT INTO tstable_j VALUES(1, 'origin(2014-01-01 00:00:00.00000), calendar(ts_15min),
container(container_b), [({"v1":1.5, "v2":20.5})]');

Subject: Create a virtual table that is named virt_tstable_j by running the
TSCreateVirtualTab procedure
Implementation:
EXECUTE PROCEDURE TSCreateVirtualTab('virt_tstable_j', 'tstable_j');
77
© 2015 IBM Corporation
Include JSON documents in timeseries
Subject: Insert the rows through the virtual table. You must explicitly cast the JSON
data to the JSON data type; then cast the data to the BSON data type.
Implementation:
INSERT INTO virt_tstable_j VALUES (1, "2014-01-01 00:15:00.00000", ('{"v1":2.0, "v2":17.4}'::json)::bson);
INSERT INTO virt_tstable_j VALUES (1, "2014-01-01 00:30:00.00000", ('{"v1":1.9, "v2":20.2}'::json)::bson);
INSERT INTO virt_tstable_j VALUES (1, "2014-01-01 00:45:00.00000", ('{"v1":1.8, "v2":19.7}'::json)::bson);
INSERT INTO virt_tstable_j VALUES (1, "2014-01-01 01:00:00.00000", ('{"v1":2.1, "v2":20.1}'::json)::bson);

Subject: Select data from the virtual table virt_tstable_j
Implementation:
SELECT id, tstamp, sensor_data::json FROM virt_tstable_j;
78
© 2015 IBM Corporation
Support Spatial Type and Index (JSON)
 New functionality – catching up with Mongo API
 Allow for defining spatial data type and index creation
 Translate spatial data type / index creation to Informix Spatial R-Tree
 With Mobile apps using NoSQL, this feature is important
 Better performance / functionality / Mongo API compatibility
 Helps location based services (LBS)
79
© 2015 IBM Corporation
Cloud Backup Administration (OAT)
 In a Cloud environment, several servers could be set up with
sharding.
 Server backup needs to be simplified across the multiple
servers in this setup.
 Extends OAT Backup administration to multiple servers.
 Currently, the sharding mechanism is implemented using Enterprise
Replication.
 In a Cloud Environment, Flexible Grid will be used to execute the
commands to configure the backups across the servers.
80
© 2015 IBM Corporation
OAT Enhancements
 Monitor the tenant databases of a multi-tenant Informix instance
– Tenant Information can be found in Space Administration
 Perform in-place alter removal operations
 Propagate backup configurations to all nodes in a grid
81
© 2015 IBM Corporation
Questions
82
© 2015 IBM Corporation