SQL Server 2014 Mission Critical Performance, Level 300 Deck
Transcript
SQL Server 2014 and the Data Platform
George Walters, Technical Solutions Professional, Data Platform ([email protected])

Hardware Trends
Moore's Law still delivers more transistors, and therefore more cores, but CPU clock rates have stalled. Meanwhile, the cost of RAM continues to drop.
[Chart: US$ per GB of PC-class memory, 1990-2011, on a log-scale axis from $1,000,000 down to $10. Source: http://www.gotw.ca/publications/concurrency-ddj.htm]

In-Memory OLTP
A new high-performance, memory-optimized OLTP engine, integrated into SQL Server and architected for modern hardware trends:
• Integrated into the SQL Server relational database
• Full ACID support
• Memory-optimized indexes (no B-trees, no buffer pool)
• Non-blocking, multi-version optimistic concurrency control (no locks or latches)
• T-SQL compiled to native code

Customer benefits:
• Low latency
• Up to 10x improvement in performance
• 2-5x improvement in scalability
• Leverages existing investments in SQL Server

Is the application suited for in-memory processing?
• The application is "OLTP-like"
• Porting is simplified if the client app needs no changes to the communication stack, parameter passing, or result-set generation
• Natively compiled code is 10-30x more efficient
• Reduced log bandwidth and contention
• Log latency remains
• Checkpoints are background, sequential I/O

Architecture
[Diagram: SQL Server.exe combining existing SQL Server components (TDS handler and session management; parser, catalog, algebrizer, and optimizer; proc/plan cache for ad hoc T-SQL and stored procedures; interpreter for T-SQL, query plans, and expressions; buffer pool for tables and indexes) with In-Memory OLTP components (the In-Memory OLTP compiler; natively compiled stored procedures and schema, emitted as generated DLLs; the In-Memory OLTP engine for memory-optimized tables and indexes; query interop). Storage: memory-optimized table filegroup, transaction log, and data filegroup.]

Main-Memory Optimized Tables
Hardware trend: steadily declining memory prices and NVRAM. Benefit: high-performance data operations.
• Optimized for in-memory data
• Indexes (hash and range) exist only in memory
• No buffer pool
• Stream-based storage for durability

Table constructs and limitations:
• Fixed schema: no ALTER TABLE; you must drop, recreate, and reload the table
• No LOB data types; row size is limited to 8,060 bytes
• No constraint support other than the primary key
• No IDENTITY or computed columns, no CLR, etc.

Data and table size considerations:
• Size of a table = (row size × number of rows)
• Size of a hash index = (bucket_count × 8 bytes)
• Maximum size for SCHEMA_AND_DATA tables = 512 GB

I/O for Durability
The durability option per table is SCHEMA_ONLY vs.
SCHEMA_AND_DATA. Durable (SCHEMA_AND_DATA) tables use the memory-optimized filegroup (data and delta files) and the transaction log for database recovery.

Creating a memory-optimized table:

CREATE TABLE [Customer](
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- hash index
    [Name] NVARCHAR(250) NOT NULL
        INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),             -- secondary indexes are specified inline
    [CustomerSince] DATETIME NULL
)
WITH (MEMORY_OPTIMIZED = ON,          -- this table is memory optimized
      DURABILITY = SCHEMA_AND_DATA);  -- this table is durable

High Concurrency
Hardware trend: many-core processors. Benefit: frictionless scale-up.
• Multi-version optimistic concurrency control with full ACID support
• The core engine uses lock-free algorithms
• No lock manager, latches, or spinlocks

Impact of no locks or latches:
• Write-write conflicts: design the application for this condition with try/catch retry logic
• Applications dependent on locking may not be good candidates; an application lock can be taken instead
• Multiple versions of records increase the memory needed by memory-optimized tables; garbage collection reclaims old versions
• Transaction scope: the Snapshot, Repeatable Read, and Serializable isolation levels are supported
• Validation occurs at commit time; again, retry logic is needed to deal with failure

Two Ways to Access Memory-Optimized Tables
Interpreted T-SQL access:
• Can access both memory- and disk-based tables
• Less performant, but offers virtually the full T-SQL surface area
• When to use: ad hoc queries, reporting-style queries, and speeding up app migration
Natively compiled stored procedures:
• Can access only memory-optimized tables
• Maximum performance, with a limited T-SQL surface area
• When to use: OLTP-style operations and performance-critical business logic

Manageability
Backup and restore, failover clustering, AlwaysOn, DMVs, catalog views, Perfmon counters, XEvents, and SSMS all work with In-Memory OLTP.

In-Memory in the Data Warehouse
Traditionally, data is stored row-wise: heaps, B-trees, key-value stores.
"By using SQL Server 2012 In-Memory ColumnStore, we were able to extract about 100 million records in 2 or 3
seconds versus the 30 minutes required previously." (Atsuo Nakajima, Asst. Director, Bank of Nagoya)

In-Memory ColumnStore
• Data is stored column-wise, in both memory and on disk
• Built in to the core RDBMS engine

Customer benefits:
• 10-100x faster
• Reduced design effort
• Works on customers' existing hardware
• Easy upgrade; easy deployment

Storage Model
Data is stored column-wise:
• Each page stores data from a single column
• Highly compressed, so more data fits in memory
• Each column can be accessed independently: fetch only the columns needed, which can dramatically decrease I/O

Row Groups and Segments
• A segment contains the values of one column for a set of rows
• Segments for the same set of rows comprise a row group
• Segments are compressed; some compress more than others
• Each segment is stored in a separate LOB; the segment is the unit of transfer between disk and memory
• At query time, whole segments can be skipped (segment elimination), and the query processor uses vector (batch-mode) operators over column vectors with a bitmap of qualifying rows

Clustered and Updatable ColumnStore (new in 2014)
Space for a 101-million-row table plus indexes:
• Table with customary indexing: 19.7 GB
• Table with customary indexing (page compression): 10.9 GB
• Table with no indexing: 5.0 GB
• Table with no indexing (page compression): 4.0 GB
• Table with columnstore index: 6.9 GB
• Clustered columnstore: 1.8 GB

How It Works
• CREATE CLUSTERED COLUMNSTORE organizes and compresses the data into the ColumnStore
• BULK INSERT creates new ColumnStore row groups
• INSERT places rows in the row store (a heap); when the row store is big enough, a new ColumnStore row group is created
• DELETE marks rows in the deleted bitmap
• UPDATE is a delete plus an insert
• Most data remains in ColumnStore format
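As a sketch, the lifecycle above maps to ordinary DDL. The table, index, and partition names here are hypothetical (not from the deck), and COLUMNSTORE_ARCHIVE applies the extra archival compression layer per partition:

```sql
-- Hypothetical fact table; names are illustrative.
-- Organize and compress the table into a clustered columnstore index.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- INSERT/UPDATE/DELETE now work directly against the columnstore table;
-- trickle inserts land in the row store until a row group can be created.

-- Optionally add archival compression, per partition, during a rebuild.
ALTER INDEX cci_FactSales ON dbo.FactSales
    REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```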
ColumnStore Archival Compression
• Adds a layer of compression on top of the inherent compression used by columnstore
• Shrinks on-disk database sizes by up to 27%
• Compression is applied per partition and can be set either during index creation or during a rebuild

ColumnStore Enhancements Summary
What's being delivered:
• Clustered and updatable in-memory columnstore
• ColumnStore archive option for data compression
• Global batch aggregation
Main benefits:
• Real-time, super-fast data warehouse engine
• Queries can continue while updating, without dropping and recreating the index or switching partitions
• Huge disk space savings due to compression; archival per-partition compression can compress data 5x-15x
• Better performance and more memory-efficient batch query processing, using batch mode rather than row mode

Buffer Pool Extension
What's being delivered:
• Use of non-volatile drives (SSDs) to extend the buffer pool
• NUMA-aware large-page and BUF array allocation
• Scalability improvements for systems with more than 8 sockets
Main benefits:
• Improves OLTP query performance with no application changes
• No risk of data loss (clean pages only)
• Easy configuration, optimized for OLTP workloads on commodity servers (32 GB RAM)

Example:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\EXAMPLE.BPE', SIZE = 50 GB);

Query Processing Enhancements
What's being delivered:
• New cardinality estimator
• Incremental statistics per partition
• Parallel SELECT INTO
Main benefits:
• Better query performance: better choice of query plans, plus faster and more frequent statistics refresh at the partition level
• Consistent query performance
• Better supportability: two steps (decision making, then execution) enable better query-plan troubleshooting
• Significantly faster loading into a table using the parallel operation

Online Operations
What's being delivered:
• Single-partition online index rebuild
• Managed lock priority for table SWITCH and online index rebuild
• Built-in diagnostics
Main benefits:
• Increased concurrency and application availability
• New partition-level granularity for online index rebuild allows lower resource (CPU/memory) usage for customers who need 24/7 access
• Gives customers greater control over the impact on running transactions when using partition switch or online index rebuild commands

Security Enhancements
What's being delivered:
• Three new permissions: CONNECT ANY DATABASE, IMPERSONATE ANY LOGIN, SELECT ALL USER SECURABLES
Main benefits:
• Greater role separation
• Ability to create new roles for database administrators who are not sysadmin (super-user)
• Ability to create new roles for users or apps with specific purposes
Examples:
1. Database administrators cannot see user data
2. Database administrators cannot modify logins
3. Database administrators cannot impersonate logins
4. New roles (e.g. auditors) that can read all data, but are not database administrators
5. New roles that can read all metadata for monitoring purposes (e.g. SCOM) but cannot see user data
6. New roles for a middle-tier app to impersonate any login except sa, sysadmin, or other high-privilege logins

-- Create a login
CREATE LOGIN Mary WITH PASSWORD = 'strongpassword'

-- Create a new server role; we will limit its access
CREATE SERVER ROLE MyLimitedAdmin AUTHORIZATION sysadmin

-- Add login Mary to the newly created server role
ALTER SERVER ROLE MyLimitedAdmin ADD MEMBER Mary

-- Grant CONTROL SERVER permission to the new server role to manage the instance
GRANT CONTROL SERVER TO MyLimitedAdmin

-- However, deny the new server role the ability to see any user data; it can still
-- do other DBA tasks, such as backing up the databases
DENY SELECT ALL USER SECURABLES TO MyLimitedAdmin

Resource Governor
Goals:
• Differentiate workloads and monitor resource usage per group
• Limit controls to allow throttled execution, or to prevent or minimize the probability of runaway queries
• Prioritize workloads and provide predictable execution
• Specify resource boundaries between workloads
What's being delivered:
• Max/min IOPS per volume for Resource Governor pools
• DMVs and Perfmon counters for I/O statistics per pool, per volume
• SSMS IntelliSense updates for the new T-SQL
• SMO and DOM updates for the new T-SQL and objects
Main benefits:
• Better isolation (CPU, memory, and I/O) for multi-tenant workloads
• Guaranteed performance in private cloud and hosting scenarios

Sysprep for Clustered Instances
What's being delivered:
• Extensions to SQL Server Sysprep functionality to support image-based deployment of clustered SQL Server instances
Main benefits:
• Supports full automation of SQL Server failover cluster deployment scenarios, reducing deployment times
• Combined, these features let customers automate provisioning of SQL Server failover clusters both on-premises and through Infrastructure as a Service (IaaS)
• Built on top of the SQL Server 2012 SP1 CU2 Sysprep enhancements

AlwaysOn Enhancements
What's being delivered:
• Number of secondaries increased from 4 to 8
• Increased availability of readable secondaries
• Support for Windows Server 2012 Cluster Shared Volumes (CSV)
• Enhanced diagnostics
Main benefits:
• Further scale out read workloads across possibly geo-distributed replicas
• Use readable secondaries despite network failures (important in geo-distributed environments)
• Improve SAN storage utilization and avoid the drive-letter limitation (max 24 drives) via CSV paths
• Increase resiliency of storage failover and ease troubleshooting

SQL Server Journey to the Cloud
• Backup to Azure
• Smart Backup
• AlwaysOn replica in Azure
• SQL Server with Azure Storage integration
• SQL Server with Azure IaaS
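The per-volume I/O controls described above can be sketched with T-SQL along these lines; the pool and workload group names are hypothetical, while the options and the DMV are the SQL Server 2014 additions:

```sql
-- Reserve and cap physical I/O per disk volume for a tenant's pool.
CREATE RESOURCE POOL TenantPool
    WITH (MIN_IOPS_PER_VOLUME = 200,
          MAX_IOPS_PER_VOLUME = 1000);

-- Route the tenant's sessions into the pool via a workload group.
CREATE WORKLOAD GROUP TenantGroup USING TenantPool;
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- New DMV: per-pool, per-volume I/O statistics.
SELECT * FROM sys.dm_resource_governor_resource_pool_volumes;
```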
Backup to Azure - Manual
• What's being delivered
• Main benefit

Backup to Azure - Automatic
What's being delivered:
• An agent that manages and automates SQL Server backup policy
Main benefits:
• Large-scale management with no need to manage backup policy
• Minimal knobs: control the retention period
• Context-aware, e.g. workload throttling
• Manage the whole instance, or particular databases
• Leverages Backup to Azure: inherently off-site, geo-redundant, minimal storage costs, zero hardware management

Example:
EXEC smart_admin.sp_set_db_backup
    @database_name = 'TestDB',
    @storage_url = <storage url>,
    @retention_days = 30,
    @credential_name = 'MyCredential',
    @enable_backup = 1;

AlwaysOn Replica in Azure
What's being delivered:
• A wizard to add a replica in a Windows Azure VM
Main benefits:
• Easily deploy one or more replicas of your databases to Windows Azure
• No need for a DR site (hardware, rent, operations)
• Use these replicas for disaster recovery and for offloading workloads (reads, backups)
[Diagram: a Windows cluster hosting an availability group, with the PRIMARY replica on-premises and SECONDARY replicas on-premises and in the cloud]

SQL Server with Azure Storage Integration
What's being delivered:
• Native support for SQL Server database files stored as Windows Azure blobs; a first step toward moving your on-premises SQL Server databases to the Azure environment
Main benefit:
• Delivers an incremental path to databases in the cloud.
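As a sketch of that integration, database files can be created directly as Azure blobs; the storage account, container, and SAS secret below are hypothetical placeholders, not values from the deck:

```sql
-- Credential named after the container URL, holding a shared access signature.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<sas-token>';

-- Create a database whose files live in Azure Storage.
CREATE DATABASE TestDB
ON (NAME = TestDB_data,
    FILENAME = 'https://myaccount.blob.core.windows.net/data/TestDB.mdf')
LOG ON (NAME = TestDB_log,
    FILENAME = 'https://myaccount.blob.core.windows.net/data/TestDB.ldf');
```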
Can move one database at a time, without application changes.
Additional benefits:
• Save storage cost: on-premises computing with bottomless off-site storage in Azure
• Increase HA and DR: database files stored in Azure storage are backed by the Azure Storage SLA, and disaster recovery is fast, using a database attach operation with no need to restore data
• Maintain on-premises control of security: the TDE key stays on-premises while the encrypted data lives in Azure Storage

Deploy Database to Windows Azure VM
What's being delivered:
• A new wizard to deploy a database to SQL Server in an Azure VM
Main benefits:
• Easy to use: perfect for DBAs new to Azure and for ad hoc scenarios
• Complexity hidden: detailed Azure knowledge is not needed
• Almost no overhead: the defining factor for transfer time is database size

Use SQL 2014 today!
Call to action: www.microsoft.com/sqlserver

© 2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.