ODBC Overview
Open Database Connectivity (ODBC) is a widely accepted application programming interface (API) for
database access. It is based on the Call-Level Interface (CLI) specifications from Open Group and ISO/IEC for
database APIs and uses Structured Query Language (SQL) as its database access language.
ODBC is designed for maximum interoperability - that is, the ability of a single application to access
different database management systems (DBMSs) with the same source code. Database applications call
functions in the ODBC interface, which are implemented in database-specific modules called drivers. The
use of drivers isolates applications from database-specific calls in the same way that printer drivers isolate
word processing programs from printer-specific commands. Because drivers are loaded at run time, a user
only has to add a new driver to access a new DBMS; it is not necessary to recompile or relink the
application.
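To make the architecture concrete, the following minimal C sketch shows an application calling ODBC functions to connect to a data source, run a query, and fetch the results. The data source name (SampleDSN), credentials, SQL statement, and table are assumptions for illustration only, and error handling is kept to a minimum; the same code works against any DBMS whose driver is installed, because the driver named by the data source is loaded at run time.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;
    SQLCHAR name[128];
    SQLLEN len;

    /* Allocate an environment handle and declare ODBC 3.x behavior. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

    /* Allocate a connection handle and connect to a configured data
       source; the driver that data source names is loaded at run time. */
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"SampleDSN", SQL_NTS,
                                  (SQLCHAR *)"user", SQL_NTS,
                                  (SQLCHAR *)"password", SQL_NTS))) {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* Execute a statement; the SQL text is passed to the driver, which
       sends it to the DBMS. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);

    /* Fetch each row and read the first column as a character string. */
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
        printf("%s\n", (char *)name);
    }

    /* Free handles in reverse order of allocation. */
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}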
Why Was ODBC Created?
Historically, companies used a single DBMS. All database access was done either through the front end of
that system or through applications written to work exclusively with that system. However, as the use of
computers grew and more computer hardware and software became available, companies started to
acquire different DBMSs. The reasons were many: people bought what was cheapest, what was fastest, what
they already knew, what was newest on the market, or what worked best for a single application. Other reasons
were reorganizations and mergers, which left departments that previously had a single DBMS with several.
The issue grew even more complex with the advent of personal computers. These computers brought in a
host of tools for querying, analyzing, and displaying data, along with a number of inexpensive, easy-to-use
databases. From then on, a single corporation often had data scattered across a myriad of desktops,
servers, and minicomputers, stored in a variety of incompatible databases, and accessed by a vast number
of different tools, few of which could get at all of the data.
The final challenge came with the advent of client/server computing, which seeks to make the most efficient
use of computer resources. Inexpensive personal computers (the clients) sit on the desktop and provide
both a graphical front end to the data and a number of inexpensive tools, such as spreadsheets, charting
programs, and report builders. Minicomputers and mainframe computers (the servers) host the DBMSs,
where they can use their computing power and central location to provide quick, coordinated data access.
How then was the front-end software to be connected to the back-end databases?
A similar problem faced independent software vendors (ISVs). Vendors writing database software for
minicomputers and mainframes were usually forced to write one version of an application for each DBMS
or write DBMS-specific code for each DBMS they wanted to access. Vendors writing software for personal
computers had to write data access routines for each different DBMS with which they wanted to work. This
often meant that a huge amount of resources was spent writing and maintaining data access routines rather
than applications, and applications were often sold not on their quality but on whether they could access
data in a given DBMS.
What both sets of developers needed was a way to access data in different DBMSs. The mainframe and
minicomputer group needed a way to merge data from different DBMSs in a single application, while the
personal computer group needed this ability as well as a way to write a single application that was
independent of any one DBMS. In short, both groups needed an interoperable way to access data; they
needed open database connectivity.
What Is ODBC?
Many misconceptions about ODBC exist in the computing world. To the end user, it is an icon in the
Microsoft® Windows® Control Panel. To the application programmer, it is a library containing data access
routines. To many others, it is the answer to all database access problems ever imagined.
First and foremost, ODBC is a specification for a database API. This API is independent of any one DBMS or
operating system; although this manual uses C, the ODBC API is language-independent. The ODBC API is
based on the CLI specifications from Open Group and ISO/IEC. ODBC 3.x fully implements both of these
specifications — earlier versions of ODBC were based on preliminary versions of these specifications but did
not fully implement them — and adds features commonly needed by developers of screen-based database
applications, such as scrollable cursors.
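As a brief illustration of one of those additions, the fragment below requests a scrollable cursor and then moves around a result set with SQLFetchScroll, the kind of navigation a screen-based application needs when paging through rows. It assumes a statement handle already allocated on an open connection, as in the earlier sketch, and an illustrative table name; return-code checking is omitted, and not every driver supports scrollable cursors, which is one reason ODBC also offers the cursor library mentioned later in this overview.

#include <sql.h>
#include <sqlext.h>

/* Browse a result set with a scrollable cursor (ODBC 3.x). */
void browse(SQLHSTMT stmt)
{
    /* Ask for a scrollable cursor instead of the default forward-only
       cursor before executing the statement. */
    SQLSetStmtAttr(stmt, SQL_ATTR_CURSOR_SCROLLABLE,
                   (SQLPOINTER)SQL_SCROLLABLE, 0);

    SQLExecDirect(stmt, (SQLCHAR *)"SELECT id, name FROM customers", SQL_NTS);

    /* Move through the result set the way a user paging through a
       screen-based application might. */
    SQLFetchScroll(stmt, SQL_FETCH_FIRST, 0);      /* first row        */
    SQLFetchScroll(stmt, SQL_FETCH_NEXT, 0);       /* following row    */
    SQLFetchScroll(stmt, SQL_FETCH_ABSOLUTE, 10);  /* jump to row 10   */
    SQLFetchScroll(stmt, SQL_FETCH_LAST, 0);       /* last row         */

    SQLCloseCursor(stmt);
}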
The functions in the ODBC API are implemented by developers of DBMS-specific drivers. Applications call
the functions in these drivers to access data in a DBMS-independent manner. A Driver Manager manages
communication between applications and drivers.
Although Microsoft provides a driver manager for computers running Microsoft Windows® 95 and later,
has written several ODBC drivers, and calls ODBC functions from some of its applications, anyone can write
ODBC applications and drivers. In fact, the vast majority of ODBC applications and drivers available today
are written by companies other than Microsoft. Furthermore, ODBC drivers and applications exist on the
Macintosh® and a variety of UNIX platforms.
To help application and driver developers, Microsoft offers an ODBC Software Development Kit (SDK) for
computers running Windows 95 and later that provides the driver manager, installer DLL, test tools, and
sample applications. Microsoft has teamed with Visigenic Software to port these SDKs to the Macintosh and
a variety of UNIX platforms.
It is important to understand that ODBC is designed to expose database capabilities, not supplement them.
Thus, application writers should not expect that using ODBC will suddenly transform a simple database into
a fully featured relational database engine. Nor are driver writers expected to implement functionality not
found in the underlying database. An exception to this is that developers who write drivers that directly
access file data (such as data in an Xbase file) are required to write a database engine that supports at least
minimal SQL functionality. Another exception is that the ODBC component of the Windows SDK, formerly
included in the Microsoft Data Access Components (MDAC) SDK, provides a cursor library that simulates
scrollable cursors for drivers that implement a certain level of functionality.
Applications that use ODBC are responsible for any cross-database functionality. For example, ODBC is not
a heterogeneous join engine, nor is it a distributed transaction processor. However, because it is DBMS-independent, it can be used to build such cross-database tools.