International Journal on Scientific Research in Emerging TechnologieS, Vol. 2 Issue 4, July 2016
ISSN: 1934-2215
Design and Implementation of TARF: Attribute-Based Data Sharing

Kowsalya S.
Department of CSE,
Meenakshi Ramaswamy Engineering College,
Tamil Nadu, India.
[email protected]

Rajadurai N.
Assistant Professor, Department of CSE,
Meenakshi Ramaswamy Engineering College,
Tamil Nadu, India.
[email protected]

Makuru M.
Head of the Department/CSE,
Meenakshi Ramaswamy Engineering College,
Tamil Nadu, India.
[email protected]
Abstract— With the recent adoption and diffusion of the data sharing paradigm in distributed systems such as online social networks or cloud computing, there have been increasing demands and concerns for distributed data security. One of the most challenging issues in data sharing systems is the enforcement of access policies and the support of policy updates. Ciphertext-policy attribute-based encryption (CP-ABE) is becoming a promising cryptographic solution to this issue. It enables data owners to define their own access policies over user attributes and to enforce those policies on the data to be distributed. However, this advantage comes with a major drawback known as the key escrow problem: the key generation center could decrypt any message addressed to specific users by generating their private keys. This is not suitable for data sharing scenarios where the data owner would like to make their private data accessible only to designated users. In addition, applying CP-ABE in a data sharing system introduces another challenge with regard to user revocation, since the access policies are defined only over the attribute universe. Therefore, in this study, we propose a novel CP-ABE scheme for a data sharing system by exploiting the characteristics of the system architecture. The proposed scheme features the following achievements: (1) the key escrow problem is solved by an escrow-free key issuing protocol, constructed using secure two-party computation between the key generation center and the data storing center; (2) fine-grained user revocation per attribute is achieved by proxy encryption, which takes advantage of selective attribute group key distribution on top of ABE. The performance and security analyses indicate that the proposed scheme is efficient for securely managing the data distributed in the data sharing system.

Algorithms— CP-ABE (Ciphertext-Policy Attribute-Based Encryption)

OBJECTIVE
To improve the security and efficiency of data sharing. Attribute-based encryption (ABE) is a promising cryptographic approach that achieves fine-grained data access control. It provides a way of defining access policies based on different attributes of the requester, the environment, or the data object. In particular, ciphertext-policy attribute-based encryption (CP-ABE) enables an encryptor to define the attribute set over a universe of attributes that a decryptor needs to possess in order to decrypt the ciphertext, and to enforce it on the contents. Thus, each user with a different set of attributes is allowed to decrypt different pieces of data per the security policy. This effectively eliminates the need to rely on the data storage server to prevent unauthorized data access, which is the traditional access control approach, such as the reference monitor.
INTRODUCTION
Recent developments in network and computing technology enable many people to easily share their data with others using online external storage. People can share their lives with friends by uploading private photos or messages to online social networks such as Facebook and MySpace, or upload highly sensitive personal health records (PHRs) to online data servers such as Microsoft HealthVault and Google Health for ease of sharing with their primary doctors or for cost savings. As people enjoy the advantages of these new technologies and services, their concerns about data security and access control also grow. Improper use of the data by the storage server, or unauthorized access by outside users, could be a potential threat to their data. People would like to make their sensitive or private data accessible only to authorized people holding the credentials they specify.

Attribute-based encryption (ABE) is a promising cryptographic approach that achieves fine-grained data access control [3],[4],[5],[6]. It provides a way of defining access policies based on different attributes of the requester, the environment, or the data object. In particular, ciphertext-policy attribute-based encryption (CP-ABE) enables an encryptor to define the attribute set over a universe of attributes that a decryptor needs to possess in order to decrypt the ciphertext, and to enforce it on the contents [5]. Thus, each user with a different set of attributes is allowed to decrypt different pieces of data per the security policy. This effectively eliminates the need to rely on the data storage server to prevent unauthorized data access, which is the traditional access control approach, such as the reference monitor [1].

Nevertheless, applying CP-ABE in a data sharing system raises several challenges. In CP-ABE, the key generation center (KGC) generates private keys of users by applying the KGC's master secret keys to each user's associated set of attributes. The major benefit of this approach is that it largely reduces the need for processing and storing public key certificates under a traditional public key infrastructure (PKI). However, the advantage of CP-ABE comes with a major drawback known as the key escrow problem: the KGC can decrypt every ciphertext addressed to specific users by generating their attribute keys. This is a potential threat to data confidentiality and privacy in data sharing systems. Another challenge is key revocation. Since some users may change their associated attributes over time, or some private keys might be compromised, key revocation or update for each attribute is necessary to keep the system secure. This issue is even more difficult in ABE, since each attribute is conceivably shared by multiple users (henceforth, we refer to such a set of users as an attribute group). This implies that revocation of any attribute, or of any single user in an attribute group, affects all users in the group. It may result in a bottleneck during the rekeying procedure, or in security degradation due to the windows of vulnerability.

SYSTEM ANALYSIS
Existing System
With the recent adoption and diffusion of the data sharing paradigm in distributed systems such as online social networks or cloud computing, there have been increasing demands and concerns for distributed data security. One of the most challenging issues in data sharing systems is the enforcement of access policies and the support of policy updates. Ciphertext-policy attribute-based encryption (CP-ABE) is becoming a promising cryptographic solution to this issue. It enables data owners to define their own access policies over user attributes and to enforce those policies on the data to be distributed. However, the advantage comes with a major drawback known as the key escrow problem: the key generation center could decrypt any message addressed to specific users by generating their private keys. This is not suitable for data sharing scenarios where the data owner would like to make their private data accessible only to designated users. In addition, applying CP-ABE in the data sharing system introduces another challenge with regard to user revocation, since the access policies are defined only over the attribute universe.
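The CP-ABE access rule described above — a decryptor succeeds only if its attribute set satisfies the encryptor's policy — can be modeled with a small sketch. This is illustrative only: the function name and the nested-tuple policy encoding are assumptions, and a real CP-ABE scheme enforces the policy cryptographically rather than by comparing plaintext attribute sets.

```python
# Toy model of CP-ABE access-policy semantics (illustration only; real
# CP-ABE enforces this check inside the decryption algorithm itself).

def satisfies(policy, attributes):
    """policy: an attribute string, or a nested tuple ('AND'|'OR', sub, ...)."""
    if isinstance(policy, str):
        return policy in attributes
    op, *subs = policy
    results = (satisfies(s, attributes) for s in subs)
    return all(results) if op == 'AND' else any(results)

# The encryptor defines the policy over a universe of attributes.
policy = ('AND', 'doctor', ('OR', 'cardiology', 'radiology'))

print(satisfies(policy, {'doctor', 'cardiology'}))  # True
print(satisfies(policy, {'doctor'}))                # False
print(satisfies(policy, {'nurse', 'radiology'}))    # False
```

Only the second user set lacks a required attribute branch; per the security policy, each user with a different attribute set can decrypt different pieces of data.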
Proposed System
To overcome the drawbacks of the existing system, we propose a novel CP-ABE scheme for a data sharing system by exploiting the characteristics of the system architecture. The proposed scheme features the following achievements: (1) the key escrow problem is solved by an escrow-free key issuing protocol, constructed using secure two-party computation between the key generation center and the data storing center; (2) fine-grained user revocation per attribute is achieved by proxy encryption, which takes advantage of selective attribute group key distribution on top of ABE. The performance and security analyses indicate that the proposed scheme is efficient for securely managing the data distributed in the data sharing system.

System Requirements:
Hardware Interfaces
  Hard Disk Capacity : 120 GB
  Memory (RAM)       : 1 GB
  Processor          : Pentium IV
  Mouse              : Wireless Mouse
  Keyboard           : Wireless Keyboard
  Monitor            : LCD Monitor

A. Software Interfaces
  Operating System    : Windows 2000/98/XP/Vista
  Business Components : C#.Net, Html, JavaScript
  Connectivity        : Internet/Intranet Connectivity

Module Description:
1) User:
Registration Phase:
A login form is provided for the user. It offers one option for members who have already registered, who can directly enter the username and password provided by the administrator of this site, and another for new users who wish to register. For registration, the following details are gathered from the user: Age, Phone number, E-Mail id, Username, Password. When they agree to the terms and conditions provided and submit, their details are updated in the database and they are considered valid members.

File Uploading Phase:
Sender:
Here the sender is the admin. When a user requests a file, the admin sends that file in an encrypted format together with a secret key.
Receiver:
The receiver receives the encrypted file with the secret key from the admin and decrypts the file, so that the receiver can view the file in a secure manner. Data sharing between the sender and the receiver is thus secured by the encryption and decryption technique.

2) Key generation center:
ABE comes in two flavors, called key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE). In KP-ABE, attributes are used to describe the encrypted data and policies are built into users' keys, while in CP-ABE the attributes are used to describe users' credentials, and an encryptor determines a policy on who can decrypt the data. Between the two approaches, CP-ABE is more appropriate for the data sharing system because it puts the access policy decisions in the hands of the data owners. We therefore use the ciphertext-policy ABE (CP-ABE) technique proposed in this paper. We propose a novel CP-ABE scheme for a secure data sharing system, which features the following achievements. First, the key escrow problem is resolved by a key issuing protocol that exploits the characteristics of the data sharing system architecture. The key issuing protocol generates and issues user secret keys by performing a secure two-party computation (2PC) protocol between the KGC and the data storing center, each using its own master secret.
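The sender/receiver flow above (the admin encrypts a requested file with a secret key; the receiver decrypts it with the same key) can be sketched with a simple symmetric cipher. The paper does not name a cipher, so the SHA-256-based XOR keystream below is purely an assumed illustration and not production-grade cryptography.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (illustration only)."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh nonce is prepended so the same key can protect many files.
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

secret_key = secrets.token_bytes(32)          # shared out of band by the admin
ct = encrypt(secret_key, b'patient record')   # sender (admin) side
pt = decrypt(secret_key, ct)                  # receiver side
print(pt)  # b'patient record'
```

In the proposed system the secret key itself would be protected by the CP-ABE layer, so only users whose attributes satisfy the policy can recover it.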
3) Key generation protocol:
In the key generation protocol, a secret key is generated between the key generation center and the data storing center. The KGC and the data storing center are involved in the following key generation protocol. The key generation center takes some random input, generates a key, and issues it to the data storing center. The data storing center then takes that key as input, generates another key, and issues it to the key generation center. Then, the KGC and the data storing center engage in a secure 2PC protocol, where the KGC's private input is (rt, β) and the data storing center's private input is α. The secure 2PC protocol returns a private output x = (α + rt)β to the data storing center.

4) Escrow-Free Key Issuing Protocol:
The KGC and the data storing center are involved in the user key issuing protocol. In the protocol, a user is required to contact both parties before getting a set of keys. The KGC is responsible for authenticating a user and issuing attribute keys to him if the user is entitled to the attributes. The secret key is generated through the secure 2PC protocol between the KGC and the data storing center. They engage in the arithmetic secure 2PC protocol with master secret keys of their own, and issue independent key components to a user. The user is then able to generate the whole secret key from the key components separately received from the two authorities. The secure 2PC protocol deters them from knowing each other's master secrets, so that neither of them can generate the whole secret key of a user alone.

CP-ABE SCHEME:
Attribute-based encryption (ABE) is a promising cryptographic approach that achieves fine-grained data access control. It provides a way of defining access policies based on different attributes of the requester, the environment, or the data object. In particular, ciphertext-policy attribute-based encryption (CP-ABE) enables an encryptor to define the attribute set over a universe of attributes that a decryptor needs to possess in order to decrypt the ciphertext, and to enforce it on the contents. Thus, each user with a different set of attributes is allowed to decrypt different pieces of data per the security policy. This effectively eliminates the need to rely on the data storage server to prevent unauthorized data access, which is the traditional access control approach, such as the reference monitor. Nevertheless, applying CP-ABE in the data sharing system has several challenges. In CP-ABE, the key generation center (KGC) generates private keys of users by applying the KGC's master secret keys to each user's associated set of attributes. The major benefit of this approach is that it largely reduces the need for processing and storing public key certificates under a traditional public key infrastructure (PKI). However, the advantage of CP-ABE comes with a major drawback known as the key escrow problem: the KGC can decrypt every ciphertext addressed to specific users by generating their attribute keys. This is a potential threat to data confidentiality and privacy in data sharing systems. Another challenge is key revocation. Since some users may change their associated attributes over time, or some private keys might be compromised, key revocation or update for each attribute is necessary to keep the system secure. This issue is even more difficult in ABE, since each attribute is conceivably shared by multiple users (henceforth, we refer to such a set of users as an attribute group). This implies that revocation of any attribute, or of any single user in an attribute group, affects all users in the group. It may result in a bottleneck during the rekeying procedure, or in security degradation due to the windows of vulnerability.
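The arithmetic at the heart of the key generation protocol above — the secure 2PC returning x = (α + rt)β to the data storing center — can be checked directly. This sketch only verifies the algebra modulo an assumed group order; in the actual protocol the value is computed inside a secure 2PC so that neither party learns the other's private input, and the group parameters are fixed by the scheme.

```python
import secrets

# Illustrative arithmetic of the key issuing step, NOT a secure 2PC:
# in the protocol, x = (alpha + r_t) * beta is computed *inside* a secure
# two-party computation, so the KGC never learns alpha and the data
# storing center never learns (r_t, beta). The prime modulus q below is a
# hypothetical stand-in for the scheme's group order.
q = 2**127 - 1

beta = secrets.randbelow(q)   # KGC master secret
r_t = secrets.randbelow(q)    # KGC per-user randomness
alpha = secrets.randbelow(q)  # data storing center master secret

x = (alpha + r_t) * beta % q  # private output delivered to the storing center

# Neither master secret alone determines x: both parties' inputs enter the
# product, which is why the user needs key components from both authorities.
print(x == (alpha * beta + r_t * beta) % q)  # True
```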
Data Flow Diagram:
[Figure: data flow diagram. The admin and the user each log in, creating an account if none exists. The admin uploads a file to multiple users and sends it with a private key. The user's mobile number is checked, the private key is obtained, the received file details are viewed, and the file is downloaded.]
Usecase Diagram:
[Figure: use case diagram. Actors: Admin and User. Use cases: Create an Account, Login, Upload File to Multiple Users, Generate Private Key, File Maintenance, Search Admin/User, Download File with Private Key, and View Received Files.]

Activity Diagram:
[Figure: activity diagram. The admin or user logs in, creating an account if none exists. On success, the admin uploads files to multiple users and generates a private key, and the file is sent. If the private key exists, the user downloads the file; otherwise the file is not received.]

SYSTEM TESTING
Unit testing
Unit testing is performed prior to integration of the unit into a larger system. The flow is:

Coding and debugging -> unit testing -> integration testing

A "program unit" stands for a routine or a collection of routines implemented by an individual programmer. It might even be a stand-alone program or a functional unit of a larger program. A program unit must be tested with functional tests, performance tests, stress tests and structure tests. Functional tests refer to executing the code with standard inputs, for which the results will occur within the expected boundaries. Performance tests determine the execution time spent in various parts of the unit, the response time, device utilization and throughput; performance testing helps in tuning the system. Stress tests drive the system to its limits; they are designed to intentionally break the unit. Structure tests verify logical execution along different execution paths. Functional, performance and stress tests are collectively known as "black box testing", while structure testing is referred to as "white box" or "glass box" testing.

Program errors can be classified as missing path errors, computational errors and domain errors. Even if it looks like all possible execution paths have been tested, there might still exist some more paths. A missing path error occurs when a branching statement and the associated computations are accidentally omitted; missing paths can be detected only against the functional specification. A domain error occurs when a program traverses the wrong path because of an incorrect predicate in a branching statement. When a test case fails to detect a computational error, the result is said to be a coincidental error.

Debugging
Debugging is eliminating the cause of known errors. Commonly used debugging techniques are induction, deduction and backtracking. Debugging by induction involves the following steps:
1. Collect all the information about test details and test results.
2. Look for patterns.
3. Form one or more hypotheses and rank/classify them.
4. Prove/disprove the hypotheses.
5. Re-examine.
6. Implement appropriate corrections.
7. Verify the corrections; re-run the system and test again until satisfactory.

Debugging by deduction involves the following steps:
- List possible causes for the observed failure.
- Use the available information to eliminate various hypotheses.
- Prove/disprove the remaining hypotheses.
- Determine the appropriate corrections.
- Carry out the corrections and verify.

Debugging by backtracking involves working backward in the source code from the point where the error was observed, running additional test cases and collecting more information.

System Testing
System testing involves two activities: integration testing and acceptance testing. The integration strategy stresses the order in which modules are written, debugged and unit tested. Acceptance testing involves functional tests, performance tests and stress tests to verify requirements fulfillment. System checking examines the interfaces, decision logic, control flow, recovery procedures, throughput, capacity and timing characteristics of the entire system.

Integration testing
Integration testing strategies include bottom-up (traditional), top-down and sandwich strategies. Bottom-up integration consists of unit testing, followed by subsystem testing, followed by testing of the entire system. Unit testing tries to discover errors in modules. Modules are tested independently in an artificial environment known as a "test harness". Test harnesses provide the data environments and calling sequences for the routines and subsystems that are being tested in isolation. A disadvantage of bottom-up testing is harness preparation, which can sometimes take almost 50% or more of the coding and debugging effort for a smaller product.
After testing all the modules independently and in isolation, they are linked and executed in one single integration run. This is known as the "big bang" approach to integration testing; isolating the sources of errors is difficult in the big bang approach. Top-down integration starts with the main routine and one or two immediately subordinate lower-level routines. After thorough checking, the top level becomes a test harness for its immediate subordinate routines. Top-down integration offers the following advantages:
1. System integration is distributed throughout the implementation phase; modules are integrated as they are developed.
2. Top-level interfaces are tested first.
3. Top-level routines provide a natural test harness for lower-level routines.
4. Errors are localized to the new modules and interfaces that are being added.

Though top-down integration seems to offer better advantages, it may not be applicable in certain situations. Sometimes it may be necessary to test certain critical low-level modules first; in such situations, a sandwich strategy is preferable. Sandwich integration is mostly top-down, but bottom-up techniques are used on some modules and subsystems. This mixed approach retains the advantages of both strategies.

Acceptance testing
Acceptance testing involves the planning and execution of functional tests, performance tests and stress tests in order to check whether the implemented system satisfies the requirements specification. Quality assurance people as well as customers may simultaneously develop acceptance tests and run them. In addition to functional and performance tests, stress tests are performed to determine the limits of the system developed. For example, a compiler may be tested for symbol table overflow, or a real-time system may be tested to find how it responds to multiple interrupts of different or equal priorities. Acceptance test tools include a test coverage analyzer and a coding standards checker. A test coverage analyzer records the control paths followed for each test case. A timing analyzer reports the time spent in various regions of the source code under different test cases. Coding standards are stated in the product requirements; manual inspection is usually not an adequate mechanism for detecting violations of coding standards.

System Testing
Software testing is an important element of software quality assurance and represents the ultimate review of specification, design and coding. The user tests the developed system, and changes are made according to their needs. The testing phase involves testing the developed system using various kinds of data: elaborated test data is prepared and the system is exercised with that data. Errors noted during testing are corrected.

B. Testing Objectives
- Testing is a process of executing a program with the intent of finding errors.
- A good test is one that has a high probability of finding an undiscovered error.

Testing is vital to the success of the system. System testing is the stage of implementation which ensures that the system works accurately before live operation commences. System testing makes the logical assumption that if all parts of the system are correct, the goals will be successfully achieved.

Effective Testing Prerequisites
An overall test plan for the project is prepared before the start of coding.

C. 1) Types of Testing Done
Integration Testing
D. Validation Testing
This project is tested using sample data to verify that it produces the correct sample output.
Recovery Testing
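The unit-testing ideas above — functional tests with standard inputs, structure tests along each execution path, and guards against domain errors — can be illustrated with Python's unittest module standing in as a miniature test harness. The unit under test and all names below are hypothetical.

```python
import unittest

def classify_age(age: int) -> str:
    """Hypothetical unit under test, with two branches to cover."""
    if age < 0:
        raise ValueError('age cannot be negative')
    return 'minor' if age < 18 else 'adult'

class ClassifyAgeTest(unittest.TestCase):
    # Functional test: standard input at the expected boundary.
    def test_adult_boundary(self):
        self.assertEqual(classify_age(18), 'adult')

    # Structure test: exercise the other execution path.
    def test_minor_path(self):
        self.assertEqual(classify_age(0), 'minor')

    # Domain-error guard: an invalid predicate input raises instead of
    # silently traversing the wrong path.
    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

# Run the suite programmatically, as a test harness would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClassifyAgeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```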
This project is tested using correct data input to verify that it produces the correct, valid output without any errors.

Security Testing
This project uses a password to secure the data.

E. Test Data and Input
Taking various types of data, we perform the above tests. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study is tested using that data while applying the above testing and correction methods. The system has been verified and validated by running with both:
i) live data, and
ii) test data.

Run With Test Data
In this case the system was run with some sample data. Specification testing was also done for each condition or combination of conditions.

Run With Live Data
The system was tested with the data of the old system for a particular period. The new reports were then verified against the old ones.

About .NET:
The .NET Framework is a programming model of the .NET environment for building, deploying, and running Web-based applications, smart client applications, and XML Web services. It enables developers to focus on writing the business logic code for their applications. The .NET Framework includes the common language runtime and class libraries.

The core features of the .NET Framework are:
- Simplified Programming Model
- Simplified Deployment
- Programming Language Integration
- Garbage Collection
- Consistent Error Handling

Simplified Programming Model
All operating system services are accessed through a common object-oriented programming model (e.g., file access, data access, threading and graphics). The CLR removes many cumbersome concepts.

Simplified Deployment
Installing most .NET applications involves copying files to a directory and adding a shortcut to the Start menu, desktop, or Quick Launch bar, without registry access. No more GUIDs, ClassIDs, etc. are required; to uninstall, just delete the files.

Language Interoperability
The CLR allows different programming languages to share types. The CLR provides a Common Type System (CTS), a set of standards that defines the basic data types that IL understands. Each .NET-compliant language maps its data types to these standard data types, which makes it possible for two languages to communicate with each other by passing and receiving parameters. For example, the CTS defines a type Int32, an integral data type of 32 bits (4 bytes), which is mapped by C# through int and by VB.NET through its Integer data type. The Common Language Specification (CLS) defines the minimum set of features that all .NET languages targeting the CLR must support.

Garbage Collection
The CLR automatically tracks all references to memory. When a block of memory no longer has any "live" references to it, it can be released and reused (collected).

Consistent Error Handling
Traditional Win32 programming incorporates many different error handling mechanisms: status codes, GetLastError, HRESULTs and structured exceptions. In the CLR, all failures are reported via exceptions.

Dot Net Framework Components
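The "consistent error handling" point above — exceptions replacing ad hoc status codes — is language-general, so it can be sketched outside .NET. The Python contrast below uses hypothetical function names and stands in for the CLR behaviour it describes; it is not .NET code.

```python
# Contrast of the two error-reporting styles discussed above.
# Function names and the config path are hypothetical.

# Status-code style (Win32-like): the caller must remember to check the code.
def read_config_status(path):
    try:
        with open(path) as f:
            return 0, f.read()      # 0 == success
    except OSError:
        return -1, None             # error code the caller may silently ignore

# Exception style (CLR-like): a failure cannot be silently ignored.
def read_config(path):
    with open(path) as f:           # raises OSError on failure
        return f.read()

code, text = read_config_status('/nonexistent/app.cfg')
print(code)  # -1

try:
    read_config('/nonexistent/app.cfg')
except OSError:
    print('failure reported via exception')
```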
The components of the .NET Framework are:
- Common Language Runtime (CLR): a common type system for all languages and a rich runtime environment.
- Rich class libraries (.NET Framework): base class libraries, ADO.NET and XML; Windows Forms for rich Win32 applications.
- Web application platform ASP.NET: rich, interactive pages and powerful web services.

Dot Net Framework Architecture
Framework Class Library
The .NET Framework contains a huge collection of reusable types (classes, enumerations and structures). These types are used for common tasks such as IO, memory and thread management. A complete object-oriented toolbox is available with Visual Studio .NET.

Common Language Runtime
The Common Language Runtime (CLR) is a runtime engine that manages .NET code (such as C# applications). It provides features such as memory management, thread management, object type safety and security, and is a part of the .NET Framework. Managed code is code that targets the CLR, written in any .NET language, including C#, Visual Basic, C++, Java, COBOL, etc.

Assemblies
Assemblies are the basic unit of any application: a logical grouping of one or more modules or files, and the smallest unit of reuse, security and versioning.
International Journal on Scientific Research in Emerging TechnologieS, Vol. 1 Issue 5, July 2016
directly by compiler (e.g. DCCIL.exe, CSC.exe, and

ISSN: 1934-2215
To build all communication on industry standards to
VBC.exe). It contains resource data (strings, icons, etc.) and
ensure that code based on the .NET Framework can
are loaded at runtime based on user locale. Assemblies can be
integrate with any other code.
private or shared. It can also be stored in Global Assembly
Microsoft .NET Framework
The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:
 To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
 To provide a code-execution environment that minimizes software deployment and versioning conflicts.
 To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
 To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.
Namespaces
A namespace is a logical container for types. It is designed to eliminate name collisions. An assembly can contribute to multiple namespaces, and multiple assemblies can contribute to a common namespace.
The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic.
Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to
© 2016, IJSRETS All Rights Reserved
Page | 52
embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage.
The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.
F. Features of the Common Language Runtime
The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.
With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.
In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references.
The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance. Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.
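The automatic memory management just described can be seen in a short sketch. This example is illustrative only (not from the original text); it allocates objects without ever freeing them and lets the garbage collector reclaim them:

```csharp
using System;

class MemoryDemo
{
    static void Main()
    {
        // Allocate many short-lived objects; no explicit free is ever written.
        for (int i = 0; i < 100000; i++)
        {
            var buffer = new byte[1024]; // becomes unreachable each iteration
        }

        // The garbage collector reclaims unreachable objects automatically,
        // which removes the memory-leak and dangling-reference classes of
        // error discussed above.
        GC.Collect();
        Console.WriteLine("Managed heap after collection: {0} bytes",
                          GC.GetTotalMemory(true));
    }
}
```

The exact heap size printed will vary by runtime version and platform; the point is that no deallocation code appears anywhere in the program.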
.NET Framework Class Library
The .NET Framework class library is a collection of reusable
types that tightly integrate with the common language runtime.
The class library is object oriented, providing types from
which your own managed code can derive functionality. This
not only makes the .NET Framework types easy to use, but
also reduces the time associated with learning new features of
the .NET Framework. In addition, third-party components can
integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework. As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:
 Console applications.
 Scripted or hosted applications.
 Windows GUI applications (Windows Forms).
 ASP.NET applications.
 XML Web services.
 Windows services.
G. Client Application Development
Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers.
Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications.
The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs. For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.
For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of
many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.
Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.
ACTIVE X DATA OBJECTS.NET
a) ADO.NET Overview
ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind. ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the Dataset, Data Reader, and Data Adapter.
The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the Dataset -- that is separate and distinct from any data stores. Because of that, the Dataset functions as a standalone entity. You can think of the Dataset as an always disconnected record set that knows nothing about the source or destination of the data it contains. Inside a Dataset, much like in a database, there are tables, columns, relationships, constraints, views, and so forth.
A Data Adapter is the object that connects to the database to fill the Dataset. Then, it connects back to the database to update the data there, based on operations performed while the Dataset held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the Data Adapter, which provides a bridge to retrieve and save data between a Dataset and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store.
The XML-based Dataset object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through the same set of standard APIs exposed through the Dataset and its subordinate objects.
While the Dataset has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the Dataset to and from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .NET Framework provide four basic objects: the Command, Connection, Data Reader and Data Adapter. In the remaining sections of this document, we'll walk through each part of the Dataset and the OLE DB/SQL Server .NET Data Providers explaining what they are, and how to program against them.
The following sections will introduce you to some objects that have evolved, and some that are new. These objects are:
 Connections. For connection to and managing transactions against a database.
 Commands. For issuing SQL commands against a database.
 Data Readers. For reading a forward-only stream of data records from a SQL Server data source.
 Datasets. For storing, remoting and programming against flat data, XML data and relational data.
 DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
When dealing with connections to a database, there are two different options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).
Connections
Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SQLConnection. Commands travel over connections and resultsets are returned in the form of streams which can be read by a DataReader object, or pushed into a DataSet object.
Commands
Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SQLCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values as part of your command syntax. For example, you might issue an INSERT statement against the Northwind database.
DataReaders
The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page.
DataSets and DataAdapters
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases, or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or user input can all be placed into DataSet objects. Then, as changes are made to the DataSet they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via WebServices. In fact, a DataSet with a schema can actually be compiled for type safety and statement completion.
DataAdapters (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server database. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.
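To make the four provider objects concrete, the following is a hedged C# sketch using the SQL Server .NET Data Provider. The Northwind sample database, its Customers columns, and the connection string are assumptions; the inserted values are illustrative only:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        // Placeholder connection string; adjust server and credentials.
        string connString =
            "Server=(local);Database=Northwind;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();

            // Command: a parameterized INSERT against the Customers table.
            SqlCommand insert = new SqlCommand(
                "INSERT INTO Customers (CustomerID, CompanyName) " +
                "VALUES (@id, @name)", conn);
            insert.Parameters.AddWithValue("@id", "DEMO1");
            insert.Parameters.AddWithValue("@name", "Demo Company");
            insert.ExecuteNonQuery();

            // DataReader: a forward-only stream of rows over the connection.
            SqlCommand select = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader[0], reader[1]);
            }
        }

        // DataAdapter: fills a disconnected DataSet and can later push
        // changes back with Update.
        var adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", connString);
        var ds = new DataSet();
        adapter.Fill(ds, "Customers");
        Console.WriteLine("Rows cached in the DataSet: {0}",
                          ds.Tables["Customers"].Rows.Count);
    }
}
```

Note that the DataReader holds the connection open while it streams rows, whereas the DataSet filled by the adapter remains usable after the connection is closed.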
The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a select statement. However, this run-time generation requires an extra round-trip to the server in order to gather required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance.
1. ADO.NET is the next evolution of ADO for the .NET Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.
H. Server Application Development
Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server.
The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.
Server-side managed code
ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.
XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet.
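An XML Web service of the kind just described can be sketched as a classic ASMX service. This is an illustrative example only; the service name, namespace URI, and method are not taken from the original text:

```csharp
// HelloService.asmx.cs -- an illustrative ASMX web service. The class
// and method names are examples, not part of the paper's system.
using System.Web.Services;

[WebService(Namespace = "http://example.org/")]
public class HelloService : WebService
{
    // Exposed over SOAP and described by WSDL, so it can be consumed
    // by any client application, not just browsers.
    [WebMethod]
    public string Hello(string name)
    {
        return "Hello, " + name;
    }
}
```

Hosted under IIS, the runtime generates the WSDL description for this class automatically, which is what lets other applications discover and call the service.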
ASP.Net
If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms pages in any language that supports the .NET Framework. In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL (the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework.
If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS.
Active Server Pages.NET
ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET offers several important advantages over previous Web development models:
 Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code.
 World-Class Tool Support. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.
 Power and Flexibility. Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform is available to Web application developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application or partition your application across many languages. Further, common language runtime interoperability guarantees that your existing investment in COM-based development is preserved when migrating to ASP.NET.
 Simplicity. ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code and to handle events in a simple, Visual Basic-like forms processing model. Additionally, the common language runtime simplifies development, with managed code services such as automatic reference counting and garbage collection.
 Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework application is deployed to a server simply by copying the necessary files to the server. No server restart is required, even to deploy or replace running compiled code.
 Scalability and Availability. ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your application constantly available to handle requests.
 Customizability and Extensibility. ASP.NET delivers a well-factored architecture that allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom-written component. Implementing custom authentication or state services has never been easier.
 Security. With built-in Windows authentication and per-application configuration, you can be assured that your applications are secure.
a) Language Support
The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript.
What is ASP.NET Web Forms?
The ASP.NET Web Forms page framework is a scalable common language runtime programming model that can be used on the server to dynamically generate Web pages.
Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides:
 The ability to create and use reusable UI controls that can encapsulate common functionality and thus reduce the amount of code that a page developer has to write.
 The ability for developers to cleanly structure their page logic in an orderly fashion (not "spaghetti code").
 The ability for development tools to provide strong WYSIWYG design support for pages (existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming requests. (Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests.)
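A minimal .aspx page of the kind just described might look like the following sketch. The control names and list items are illustrative assumptions, modeled on a form that collects a name and a category and posts back to itself:

```html
<%@ Page Language="C#" %>
<html>
  <body>
    <!-- runat="server" turns the form and its controls into
         server controls whose state survives postbacks. -->
    <form runat="server">
      Name: <asp:TextBox id="Name" runat="server" />
      Category:
      <asp:DropDownList id="Category" runat="server">
        <asp:ListItem>psychology</asp:ListItem>
        <asp:ListItem>business</asp:ListItem>
        <asp:ListItem>popular_comp</asp:ListItem>
      </asp:DropDownList>
      <asp:Button Text="Lookup" runat="server" />
    </form>
  </body>
</html>
```

On first request the ASP.NET runtime compiles this file into a .NET Framework class; clicking the button posts the form back to the same page.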
An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx (no modification of code is required). For example, a simple HTML page might collect a user's name and category preference and then perform a form postback to the originating page when a button is clicked.
ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <% %> code render blocks that can be intermixed with HTML content within an .aspx file. These code blocks execute in a top-down manner at page render time.
Code-Behind Web Forms
ASP.NET supports two methods of authoring dynamic pages. The first is the method shown in the preceding samples, where the page code is physically declared within the originating .aspx file. An alternative approach--known as the code-behind method--enables the page code to be more cleanly separated from the HTML content into an entirely separate file.
Introduction to ASP.NET Server Controls
In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET page developers can use ASP.NET server controls to program Web pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to the server. This control state is not stored on the server (it is instead stored within an <input type="hidden"> form field that is round-tripped between requests). Note also that no client-side script is required.
In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the <asp:adrotator> control can be used to dynamically display rotating ads on a page.
1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data validation.
SQL SERVER
DATABASE
A database management system, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, Paradox, IMS, and SQL Server. These systems allow users to create, update and extract information from their database.
A database is a structured collection of data. Data refers to the characteristics of people, things and events. SQL Server stores each data item in its own fields. In SQL Server, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name.
During an SQL Server Database design project, the
ISSN: 1934-2215
Data Abstraction
analysis of your business needs identifies all the fields or
A major purpose of a database system is to provide
attributes of interest. If your business needs change over time,
users with an abstract view of the data. This system hides
you define any additional fields or change the definition of
certain details of how the data is stored and maintained. Data
existing fields.
abstraction is divided into three levels.
Physical level: This is the lowest level of abstraction, at which one describes how the data are actually stored.

Conceptual level: At this level of abstraction, all the attributes and the data that are actually stored are described, together with the entities and the relationships among them.

View level: This is the highest level of abstraction, at which one describes only part of the database.

Advantages of RDBMS

- Redundancy can be avoided
- Inconsistency can be eliminated
- Data can be shared
- Standards can be enforced
- Security restrictions can be applied
- Integrity can be maintained
- Conflicting requirements can be balanced
- Data independence can be achieved

SQL Server Tables

SQL Server stores records relating to each other in a table. Different tables are created for the various groups of information. Related tables are grouped together to form a database.

Primary Key

Every table in SQL Server has a field or a combination of fields that uniquely identifies each record in the table. The unique identifier is called the primary key, or simply the key. The primary key provides the means to distinguish one record from all others in a table. It allows the user and the database system to identify, locate and refer to one particular record in the database.

Relational Database

Sometimes all the information of interest to a business operation can be stored in one table. SQL Server makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes SQL Server a relational database management system, or RDBMS. It stores data in two or more tables and enables you to define relationships between the tables.
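The employee-to-department example above can be made concrete with a short script. SQLite is used here purely for illustration (the same SQL runs against SQL Server with minor changes), and the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two related tables: the dept_id column links each employee
# record to a record in the departments table.
cur.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
cur.execute("INSERT INTO departments VALUES (1, 'CSE')")
cur.execute("INSERT INTO employees VALUES (10, 'Kowsalya', 1)")

# A single query matches each employee to the department they work in.
cur.execute("""
    SELECT e.name, d.name
    FROM employees e
    JOIN departments d ON e.dept_id = d.dept_id
""")
print(cur.fetchall())  # [('Kowsalya', 'CSE')]
```

The relationship lives in the data (matching dept_id values), not in any physical link between the tables, which is the defining property of the relational model described above.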
Foreign Key

When a field in one table matches the primary key of another table, it is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table.
Referential Integrity
Not only does SQL Server allow you to link multiple
tables, it also maintains consistency between them. Ensuring
that the data among related tables is correctly matched is
referred to as maintaining referential integrity.
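Referential integrity enforcement can be demonstrated with a small sketch. SQLite is used here only for illustration (it requires the `PRAGMA foreign_keys` switch; SQL Server enforces FOREIGN KEY constraints by default), and the schema is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable enforcement in SQLite

conn.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT,
        dept_id INTEGER REFERENCES departments(dept_id)  -- foreign key
    )
""")
conn.execute("INSERT INTO departments VALUES (1, 'CSE')")
conn.execute("INSERT INTO employees VALUES (10, 'Rajadurai', 1)")  # OK: dept 1 exists

# An employee pointing at a non-existent department would leave the
# related tables incorrectly matched, so the DBMS rejects the insert.
try:
    conn.execute("INSERT INTO employees VALUES (11, 'Makuru', 99)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

The database, not the application, guarantees that every foreign key value has a matching primary key, which is what "maintaining referential integrity" means in practice.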
Disadvantages of DBMS

A significant disadvantage of the DBMS approach is cost. In addition to the cost of purchasing or developing the software, the hardware has to be upgraded to allow for the extensive programs and the workspace required for their execution and storage. While centralization reduces duplication, the lack of duplication requires that the database be adequately backed up so that in case of failure the data can be recovered.
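The backup requirement noted above can be automated. As a hedged illustration (using Python's sqlite3 module rather than SQL Server's own BACKUP facilities), a database can be copied while it remains online:

```python
import sqlite3

# A live database that must survive a failure.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
live.execute("INSERT INTO records VALUES (1, 'critical data')")
live.commit()

# Copy the live database into a backup while it stays available.
# In practice the target would be a file on separate storage.
backup = sqlite3.connect(":memory:")
live.backup(backup)

# After a failure, the data can be recovered from the backup copy.
print(backup.execute("SELECT payload FROM records").fetchall())  # [('critical data',)]
```

Because centralization removes redundant copies, a scheduled job of this kind is the only remaining redundancy, which is exactly why the text stresses adequate backups.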
FEATURES OF SQL SERVER (RDBMS)

SQL SERVER is one of the leading database management systems (DBMS) because it is the only database that meets the uncompromising requirements of today's most demanding information systems. From complex decision support systems (DSS) to the most rigorous online transaction processing (OLTP) applications, even applications that require simultaneous DSS and OLTP access to the same critical data, SQL Server leads the industry in both performance and capability.

SQL SERVER is a truly portable, distributed and open DBMS that delivers unmatched performance, continuous operation and support for every database.

Unmatched Performance

The most advanced architecture in the industry allows the SQL SERVER DBMS to deliver unmatched performance.
SQL SERVER RDBMS is a high performance, fault tolerant DBMS which is specially designed for online transaction processing and for handling large database applications. SQL SERVER with the transaction processing option offers two features which contribute to a very high level of transaction processing throughput: sophisticated concurrency control and the elimination of I/O bottlenecks.

Sophisticated Concurrency Control

Real world applications demand access to critical data. With most database systems, applications become "contention bound", in which performance is limited not by CPU power or by disk I/O, but by users waiting on one another for data access. SQL Server employs full, unrestricted row-level locking and contention-free queries to minimize, and in many cases entirely eliminate, contention wait times.
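The benefit of row-level over table-level locking can be sketched conceptually. The Python sketch below is an analogy using thread locks, not SQL Server's actual lock manager; with one lock per row, writers touching different rows never wait on one another:

```python
import threading

# Row-level locking analogy: one lock per row, so concurrent updates
# to different rows proceed without blocking each other. A single
# table-wide lock would instead serialize all four updates below,
# making the workload "contention bound".
row_locks = {row_id: threading.Lock() for row_id in range(4)}
table = {row_id: 0 for row_id in range(4)}

def update_row(row_id: int, value: int) -> None:
    # Only the targeted row is locked for the duration of the write.
    with row_locks[row_id]:
        table[row_id] = value

threads = [threading.Thread(target=update_row, args=(i, i * 10)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table)  # {0: 0, 1: 10, 2: 20, 3: 30}
```

The finer the lock granularity, the smaller the chance that two transactions need the same lock at the same time, which is why row-level locking reduces contention wait times.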
No I/O Bottlenecks

SQL Server's fast commit, group commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most one sequential write to the log file on disk. On high throughput systems, one sequential write typically group-commits multiple transactions. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when written from memory to disk.

Enterprise-wide Data Sharing

The unrivaled portability and connectivity of the SQL SERVER DBMS enable all the systems in the organization to be linked into a singular, integrated computing resource.

Portability

SQL SERVER is fully portable to more than 80 distinct hardware and operating system platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms. This portability gives complete freedom to choose the database server platform that meets the system requirements.

Open Systems

SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications and third party software products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.

Distributed Data Sharing

SQL Server's networking and distributed database capabilities allow access to data stored on a remote server with the same ease as if the information was stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate.

CONCLUSION

The enforcement of access policies and the support of policy updates are important challenging issues in data sharing systems. In this study, we proposed an attribute-based data sharing scheme to enforce fine-grained data access control by exploiting the characteristics of the data sharing system. The proposed scheme features a key issuing mechanism that removes key escrow during key generation. The user secret keys are generated through a secure two-party computation such that no curious key generation center or data storing center can derive the private keys individually. Thus, the proposed scheme
enhances data privacy and confidentiality in the data sharing system against any system managers as well as adversarial outsiders without corresponding (enough) credentials. The proposed scheme can perform an immediate user revocation on each attribute set while taking full advantage of the scalable access control provided by the ciphertext-policy attribute-based encryption. Therefore, the proposed scheme achieves more secure and fine-grained data access control in the data sharing system. We demonstrated that the proposed scheme is efficient and scalable enough to securely manage user data in the data sharing system.

Future Work

Though the data sharing technique has many advantages, it can be further improved by applying additional algorithms that increase its efficiency. Another direction for future work is to allow proxy servers to update user secret keys without disclosing user attribute information.