Unit – I
Introduction: Managing in the Information Age. Evolution of IT Management – Types of
Information Systems – Internet-Based Business Systems – Value Chain Reconstruction for
E-Business – IT Management Challenges and Issues – Critical Success Factors for IT Managers.
Unit - II
Hardware Software And Communication: Computing Hierarchy – Input – Output Technologies –
Hardware Issues – System Architecture – Operating Systems – Network Operating Systems –
Grid Computing – Mobile Computing – Ubiquitous Computing – Application Programming –
Managing Application Development – Data Resources – Managing Data Resources – Problem of
Change and Recovery.
Unit - III
Communication Technology: Communication Technology – WWW – Intranets – Extranets –
Voice Networks Data Communication Networks – Last Mile – Wireless System – Web Hosting –
Application Service Providers.
Unit – IV
IT Applications: Enterprise Resource Planning – Enterprise System – Expert System – Decision
Support System – Neural Networks – Executive Information System – Customer Relationship
Management System – Supply Chain Management Systems – Knowledge Management – Data
Warehousing – Data Mining – Virtual Reality – E-Business and Alternatives. E-Business
Expectations and Customer Satisfaction.
Unit - V
IT Management: IT Strategy Statements – Planning Models for IT Managers – Legislation and
Industry Trends – Independent Operations – Headquarters Driven – Intellectual Synergy –
Integrated Global IT – IT Investment – Estimating Returns – IT Value Equation – Pricing
Framework – Hardware and Software Buying – Factors of IT Management – Implementation Control –
Security – Quality – Ethical Issues – Chief Information Officer.
UNIT-I
Evolution of IT Management:
Federal agencies rely extensively on information technology (IT) to perform basic missions.
Arguably, public administration should be driving the theory, policy, and practice for managing
these increasingly important resources. This is especially true as public organizations move to
electronic government. Despite some maturation in the literature for managing IT in federal
agencies in the last several years, public administration has contributed little to this effort. Other
academic fields, such as information science and business administration, along with practitioners
from the federal government and related contractors, have recently contributed more to the theory
and practice of IT management at the federal level than public administration has. This chapter analyzes
federal IT management literature from several academic disciplines and government documents.
The analysis compares federal IT management with a normative model of management maturity
focusing on the strategic objectives for IT and related management approaches. Public
administration’s minimal contribution to federal IT management raises profound questions about
whether federal agencies are performing at a level commensurate with public expectations in an
information age.
Types of Information Systems:
An Information System (IS) is a system composed of people and computers that processes or
interprets information. [1][2][3] The term is also sometimes used in more restricted senses to refer to
only the software used to run a computerized database or to refer to only a computer system.
Information systems is an academic study of systems with a specific reference to information
and the complementary networks of hardware and software that people and organizations use to
collect, filter, process, create, and distribute data. An emphasis is placed on an Information
System having a definitive Boundary, Users, Processors, Stores, Inputs, Outputs and the
aforementioned communication networks.[4]
Any specific information system aims to support operations, management and decision
making.[5][6] An information system is the information and communication technology (ICT) that
an organization uses, and also the way in which people interact with this technology in support of
business processes.[7]
Some authors make a clear distinction between information systems, computer systems, and
business processes. Information systems typically include an ICT component but are not purely
concerned with ICT, focusing instead on the end use of information technology. Information
systems are also different from business processes. Information systems help to control the
performance of business processes.[8]
Alter[9][10] argues for advantages of viewing an information system as a special type of work
system. A work system is a system in which humans or machines perform processes and activities
using resources to produce specific products or services for customers. An information system is
a work system whose activities are devoted to capturing, transmitting, storing, retrieving,
manipulating and displaying information.[11]
As such, information systems inter-relate with data systems on the one hand and activity systems
on the other. An information system is a form of communication system in which data represent
and are processed as a form of social memory. An information system can also be considered a
semi-formal language which supports human decision making and action.
Information systems are the primary focus of study for organizational informatics.
Information technologies are a very important and malleable resource available to
executives.[17] Many companies have created the position of Chief Information Officer (CIO), who
sits on the executive board with the Chief Executive Officer (CEO), Chief Financial
Officer (CFO), Chief Operating Officer (COO) and Chief Technical Officer (CTO). The CTO
may also serve as CIO, and vice versa. The Chief Information
Security Officer (CISO) focuses on information security management.
The six components that must come together in order to produce an information system are:
1. Hardware: The term hardware refers to machinery. This category includes the computer
itself, which is often referred to as the central processing unit (CPU), and all of its support
equipment. Among the support equipment are input and output devices, storage devices,
and communications devices.
2. Software: The term software refers to computer programs and the manuals (if any) that
support them. Computer programs are machine-readable instructions that direct the
circuitry within the hardware parts of the system to function in ways that produce useful
information from data. Programs are generally stored on some input / output medium,
often a disk or tape.
3. Data: Data are facts that are used by programs to produce useful information. Like
programs, data are generally stored in machine-readable form on disk or tape until the
computer needs them.
4. Procedures: Procedures are the policies that govern the operation of a computer system.
"Procedures are to people what software is to hardware" is a common analogy that is used
to illustrate the role of procedures in a system.
5. People: Every system needs people if it is to be useful. Often the most overlooked
element of the system, people are probably the component that most influences the
success or failure of information systems. This includes "not only the users, but those
who operate and service the computers, those who maintain the data, and those who
support the network of computers" (Kroenke, D. M., 2015, MIS Essentials, Pearson
Education).
6. Feedback: Feedback is another component of an IS; it is output returned to the system so
that its operation can be adjusted (although this component is not strictly necessary for the
system to function).
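To make the interplay of these six components concrete, here is a minimal Python sketch (the class and the example values are our own illustration, not taken from any cited source) that models an information system as a record holding all six parts:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InformationSystem:
    # The six components described above; the field names are illustrative.
    hardware: List[str]             # machinery: CPU, input/output, storage, communications
    software: List[str]             # programs (and their manuals)
    data: dict                      # facts stored in machine-readable form
    procedures: List[str]           # policies governing operation
    people: List[str]               # users, operators, maintainers, support staff
    feedback: Optional[str] = None  # optional: output fed back to adjust the system

# A hypothetical payroll system assembled from the six components.
payroll_is = InformationSystem(
    hardware=["server", "workstations", "printer"],
    software=["payroll package", "operating system"],
    data={"employees": 120, "pay_period": "monthly"},
    procedures=["month-end run checklist", "backup policy"],
    people=["payroll clerks", "IT support", "data administrator"],
)
print(payroll_is)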
Data is the bridge between hardware and people. This means that the data we collect is only data
until we involve people; at that point, it becomes information.
Types of information system
A four level pyramid model of different types of information systems based on the different
levels of hierarchy in an organization
The "classic" view of Information systems found in the textbooks[18] in the 1980s was of a
pyramid of systems that reflected the hierarchy of the organization, usually transaction processing
systems at the bottom of the pyramid, followed by management information systems, decision
support systems, and ending with executive information systems at the top. Although the pyramid
model remains useful, since it was first formulated a number of new technologies have been
developed and new categories of information systems have emerged, some of which no longer fit
easily into the original pyramid model.
Some examples of such systems are:
- data warehouses
- enterprise resource planning
- enterprise systems
- expert systems
- search engines
- geographic information systems
- global information systems
- office automation.
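As a small illustration of the classic pyramid described above, the following Python sketch (the dictionary itself is our own construction; the level names follow the paragraph above) maps each organizational level to its typical system type:

# Classic four-level pyramid: bottom (operational) to top (executive).
pyramid = {
    1: ("Operational staff", "Transaction processing systems (TPS)"),
    2: ("Middle management", "Management information systems (MIS)"),
    3: ("Senior management", "Decision support systems (DSS)"),
    4: ("Executives", "Executive information systems (EIS)"),
}

for level in sorted(pyramid):          # print from the bottom of the pyramid upward
    audience, system = pyramid[level]
    print(f"Level {level}: {audience} -> {system}")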
A computer(-based) information system is essentially an IS using computer technology to carry
out some or all of its planned tasks. The basic components of a computer-based information
system are:
- Hardware: the devices like the monitor, processor, printer and keyboard, all of which work
together to accept, process, and show data and information.
- Software: the programs that allow the hardware to process the data.
- Databases: the gathering of associated files or tables containing related data.
- Networks: a connecting system that allows diverse computers to distribute resources.
- Procedures: the commands for combining the components above to process information and
produce the preferred output.
The first four components (hardware, software, database, and network) make up what is known as
the information technology platform. Information technology workers could then use these
components to create information systems that watch over safety measures, risk and the
management of data. These actions are known as information technology services.[19]
Certain information systems support parts of organizations, others support entire organizations,
and still others support groups of organizations. Recall that each department or functional area
within an organization has its own collection of application programs, or information systems.
These functional area information systems (FAIS) are supporting pillars for more general IS,
namely business intelligence systems and dashboards. As the name suggests, each
FAIS supports a particular functional area within the organization, e.g. accounting IS, finance IS,
production/operations management (POM) IS, marketing IS, and human resources IS. In finance
and accounting, managers use IT systems to forecast revenues and business activity, to determine
the best sources and uses of funds, and to perform audits to ensure that the organization is
fundamentally sound and that all financial reports and documents are accurate. Other types of
organizational information systems are FAIS, Transaction processing systems, enterprise resource
planning, office automation system, management information system, decision support system,
expert system, executive dashboard, supply chain management system, and electronic commerce
system. Dashboards are a special form of IS that support all managers of the organization. They
provide rapid access to timely information and direct access to structured information in the form
of reports. Expert systems attempt to duplicate the work of human experts by applying reasoning
capabilities, knowledge, and expertise within a specific domain.
Internet Based Business Systems:
One of the newest Internet technologies designed to organize company databases is integrated,
Internet-based business information systems, which are starting to be implemented, particularly at
Fortune 500 and other large companies. The goal of these systems is to provide comprehensive
data warehousing delivered with extreme efficiency. This means that, in correctly designed
systems, information only needs to be entered once, regardless of whether it is a contact or a
property record.
The Building Blocks Internet-based systems integrate three main sources of company
information: the Internet, intranet, and extranet.
Internet. As most everyone knows, the Internet holds the public area of a company Web site that
anyone can access. It should contain marketing information, general company information,
service offerings, contact data, and relevant market information.
Intranet. In contrast, only company personnel are allowed to access an intranet, which is the
private area of a company Web site. Numerous applications are found in this area, including
contact management programs, company calendars, company rosters, commission management
programs, and listing management programs.
Extranet. Finally, the extranet is a private area of a company Web site that clients, outside
contractors, and business affiliates can access. Extranets often are password-protected and also
may have other hardware- and software-based security systems. This area should contain
information that the company feels is appropriate to share with its outside contacts, such as
company rosters and project management cabinets, which are central places to share files or other
information relating to a particular project.
Along with offering access to the Internet, intranet, and extranet, Internet-based systems also
provide access to the multiple applications that a company may use; all components are one
system but are accessed differently by different people.
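One way to picture how the three zones differ is as a simple access-control table. The Python sketch below is a hypothetical illustration only; the role names and content lists are invented for the example:

# Map each zone of the company web site to who may access it and what it typically holds.
ZONES = {
    "internet": {"audience": {"public"},
                 "content": ["marketing info", "service offerings", "contact data"]},
    "intranet": {"audience": {"employee"},
                 "content": ["company calendar", "rosters", "commission programs"]},
    "extranet": {"audience": {"employee", "client", "contractor", "affiliate"},
                 "content": ["project cabinets", "shared rosters"]},
}

def can_access(role: str, zone: str) -> bool:
    """Return True if the given role may see the given zone."""
    audience = ZONES[zone]["audience"]
    return "public" in audience or role in audience

print(can_access("client", "extranet"))   # True
print(can_access("client", "intranet"))   # False
print(can_access("visitor", "internet"))  # True (public zone)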
How It Works While contact management programs are popular industry tools (see “Staying
Connected,” CIRE , May/June 2001), the future is poised to incorporate more information in one
place. For instance, overlay programs exist now for contact programs that allow some property
data integration with contact management information. But because they are add-ons, some
systems could work better if all of the components were part of the philosophy from the
beginning.
A good information management system should integrate all company databases, as well as
assimilate into the company's Web site. In other words, an employee should be able to access all
information relevant to a transaction, even if he is not in the office.
It is relatively easy to make Internet-based systems accessible to multiple people, in various
offices worldwide. This means that armed with a browser and a reasonably fast Internet
connection, company personnel can access a centralized, well-organized, comprehensive
company database from anywhere.
Updating information in the database also is easy. For example, if a client's address changes, only
one person has to input the new information and the entire database is updated automatically.
Likewise, if someone has a meeting with the client, notes from that conversation can be posted
immediately. To most organizations, this is a massive change and improvement in company
communications.
Value Chain Reconstruction for E-Business:
With the rapid development of computer network technology and information technology, e-commerce
has emerged and gradually become a main operating model of the world economy. In this new business
model, the external and internal conditions of enterprises have changed fundamentally; this has had
a profound impact on the traditional business value chain model and has challenged enterprises'
competitive ability. In order to survive and develop amid fierce market competition, enterprises
must reconstruct their value chains to suit the new business model. Based on Porter's value chain,
this section analyzes the defects of the traditional enterprise value chain and the impact of
e-commerce upon it, then puts forward a new value chain operating model for the e-commerce
environment, which can meet enterprises' development needs in the new era and enhance their
competitive advantages, and finally offers some suggestions for transforming the traditional value
chain into an e-commerce value chain.
IT Management Challenges and issues:
IT project managers face challenges on a daily basis: a new issue here, a concern with resource
scheduling there. It’s just part of the job, and for many, sorting out problems is one of the
highlights of this varied role. However, the project management role is evolving into one that
requires leadership and a broader awareness of IT issues. As a result there are some wider
challenges facing IT project managers outside of individual projects. Here are 4 things that
project managers can’t afford to overlook.
It’s “the business”
You probably hear this a lot in your organisation too: “the business wants this” or “the business
thinks that.” There’s often a split between “the business” and “IT”. Well, it’s not “the business” –
it’s your company, and you work there too. IT isn’t some special unit; it’s just another department
in the business, so start talking about it as such.
Stop using terminology that sets IT and IT projects outside of normal business discussions. You
all work for the same firm with the same corporate goals!
Understand priorities
We’d all like to think that our projects are the most important but in reality that isn’t the case.
Good IT project managers understand where their projects fall in the grand scheme of things.
Some will be top priority, some won’t. Have an appreciation of all the projects in the IT portfolio
and how your project fits in.
Top priority projects will (and should) grab all the available resources and senior management
attention. Lower priority projects make do. If that’s your project, you’ll have to quickly come to
terms with the fact that you can’t get the resources you want or make the progress that you want
all the time. You may have to complete your project deliverables more slowly or find ways to cut
your budget in order to support higher profile projects achieving what they need to for the
corporate good. Don’t worry, karma will thank you for it one day.
Talk to your portfolio or programme office manager or other project managers and establish the
relative priority of all the IT projects. Then manage your project accordingly.
Understand value
Do you really understand what value your project is delivering? If you can’t explain it to someone
else, then you should question why you are working on it. Of course, there will be some projects
with a corporate mandate because someone on high decided it was the right thing to do. The
business case might not stack up and the benefits might be flaky. But it’s your role as a project
leader to gently challenge whether this is the best use of company resources.
You should also be able to link your work on this project to the overall company objectives. How
does this initiative help your business achieve its goals? If it doesn’t, shouldn’t you be working
on something that does?
Read and understand the business case for your project and be able to articulate the benefits to the
rest of the project team.
Involve users
IT projects are rarely just IT projects – they have an impact on multiple business areas. It’s not
acceptable to make a list of requirements at the beginning of the project and then shut yourself
away in the IT office and only emerge to talk to the users when there’s a product for them to look
at.
Project management approaches are moving away from this and towards more Agile ways of
working with users as an integral part of the project team from the beginning. Think of them as
customers of the project management process and customers of what you are delivering instead of
users, and then try to offer them the customer service you would expect from any other service
organisation.
Reframe users and other stakeholders as project customers. Get them involved from the beginning
and seconded on to the project team if possible.
Critical Success Factors for IT Managers:
The success of the approaches IT departments implement hinges on the following factors:
- Open communication lines. IT departments and their business counterparts should set up a
communication system that actively involves all stakeholders. This allows IT to get feedback
from the business side in order to formulate the best solutions possible; on the other hand, an
open communication line with their technical counterparts enables business decision-makers to
identify and take advantage of the available technical knowledge base for better organizational
and market performance.
- Business requirements analysis. IT's exposure to the business allows it to identify business
needs, which should be the key drivers behind most aspects of its operations. CIOs are best
positioned to frame projects, infrastructures, and systems according to the needs of their
primary clients. The success of IT as a business strategy is judged on how it helps meet
business objectives.
- Expectation management. Both sides should be realistic about their expectations of each
other. This can be achieved through the two success factors already mentioned: communication
and requirements. Business managers should know the limitations of IT, and that solutions do
not come cheap; where in-house resources for application development and maintenance are
insufficient, third parties may need to be engaged to fulfill business needs. On the other hand,
IT should be aware of the technical, and sometimes financial, limitations of business
operations. For example, introducing new systems into the enterprise IT landscape means
training batches of end users, which in turn means additional work for end-user documentation
and training design.
- Organizational protocols and sponsorship. Internal protocols do affect the success of
IT-business alignment. Sadly, protocols do not necessarily mean processes; in most traditional
institutions, protocols mean "just how things are done." The way to navigate layers of
bureaucracy, where they exist, is to identify key personnel and project sponsors who understand
and can articulate the justifications for IT projects as business strategies. Where all
decision-makers must stamp their signatures on all IT ventures, CIOs should find the right
people to champion their causes through coherent analyses of business needs and presentations
of business solutions and the hoped-for success criteria.
SHORT ANSWER QUESTIONS:
1. Define Data and Information.
2. What is Data Processing?
3. What are the differences between Data and Information?
4. Define ROM, PROM, and EPROM.
5. What is the difference between main memory and auxiliary memory?
6. What is data retrieval?
7. Differentiate between RAM and ROM.
LONG ANSWER QUESTIONS:
1. Explain Data Processing and its types.
2. Define data retrieval and explain various data retrieval techniques.
3. Explain data storage.
4. Explain the importance of computers.
5. Describe various computer-related jobs in the software and hardware industry.

Unit - II
COMPUTING HIERARCHY:
Directory (Computing):
In computing, a directory is a file system cataloging structure which contains references to
other computer files, and possibly to other directories. On many computers, directories are known
as folders or drawers, by analogy with a workbench or the traditional office filing cabinet.
(The term catalog was used on the Apple II, the Commodore 128 and some other early home
computers as a command for displaying disk contents; the file systems used by these machines did
not support hierarchical directories.)
Files are organized by storing related files in the same directory. In a hierarchical file system (that
is, one in which files and directories are organized in a manner that resembles a tree), a directory
contained inside another directory is called a subdirectory. The terms parent and child are often
used to describe the relationship between a subdirectory and the directory in which it is cataloged,
the latter being the parent. The top-most directory in such a file system, which does not have a
parent of its own, is called the root directory.
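These parent/child/root relationships can be explored directly with Python's standard pathlib module. A minimal sketch, assuming a Unix-like path purely for illustration:

from pathlib import Path

p = Path("/usr/local/bin")            # a hypothetical path on a Unix-like system
print(p.name)                         # 'bin'        -- the directory itself
print(p.parent)                       # /usr/local   -- its parent directory
print(list(p.parents))                # [/usr/local, /usr, /] -- the chain up to the root
print(Path("/").parent == Path("/"))  # True: the root directory has no parent above it
# Path.iterdir() would list the children (files and subdirectories) of an existing directory.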
Historically, and even on some modern embedded systems, the file systems either have no
support for directories at all or only have a "flat" directory structure, meaning subdirectories are
not supported; there is only a group of top-level directories each containing files. The first
popular fully general hierarchical file system was that of Multics. This type of file system was an
early research interest of Dennis Ritchie. Most modern Unix-like systems, especially Linux, have a
standard directory structure defined by the Filesystem Hierarchy Standard.
In many operating systems, programs have an associated working directory in which they
execute. Typically, file names accessed by the program are assumed to reside within this
directory if the file names are not specified with an explicit directory name.
Some operating systems restrict a user's access to only their home directory or project directory,
thus isolating their activities from all other users. In early versions of Unix the root directory was
the home directory of the root user, but modern Unix usually uses another directory such as /root
for this purpose.
Folder metaphor
The name folder, presenting an analogy to the file folder used in offices, was used in a
hierarchical file system design for the Electronic Recording Machine, Accounting (ERMA) Mark
1, published in 1958,[3] as well as by the Xerox Star;[4] it is used in almost all modern operating
systems' desktop environments. Folders are often depicted with icons which visually resemble
physical file folders.
There is a difference between a directory, which is a file system concept, and the graphical user
interface metaphor that is used to represent it (a folder). For example, Microsoft Windows uses
the concept of special folders to help present the contents of the computer to the user in a fairly
consistent way that frees the user from having to deal with absolute directory paths, which can
vary between versions of Windows, and between individual installations. Many operating
systems also have the concept of "smart folders" that reflect the results of a file system search or
other operation. These folders do not represent a directory in the file hierarchy. Many email
clients allow the creation of folders to organize email. These folders have no corresponding
representation in the file system structure.
If one is referring to a container of documents, the term folder is more appropriate. The term
directory refers to the way a structured list of document files and folders is stored on the
computer. The distinction can be due to the way a directory is accessed; on Unix systems,
/usr/bin/ is usually referred to as a directory when viewed in a command line console, but if
accessed through a graphical file manager, users may sometimes call it a folder.
Operating systems that support hierarchical file systems (practically all modern ones) implement
a form of caching of recent pathname lookups in RAM. In the Unix world, this is usually called the
Directory Name Lookup Cache (DNLC), although it is called the dcache on Linux.[5]
For local file systems, DNLC entries normally expire only under pressure from other more recent
entries. For network file systems a coherence mechanism is necessary to ensure that entries have
not been invalidated by other clients.
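The idea behind a directory name lookup cache can be sketched in a few lines of Python. This is a toy illustration only; real DNLC/dcache implementations live in the kernel and also handle invalidation, permissions and concurrency:

from collections import OrderedDict

class PathLookupCache:
    """Toy pathname-to-inode cache with least-recently-used eviction."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.entries = OrderedDict()       # path -> inode number

    def lookup(self, path: str, slow_resolver):
        if path in self.entries:
            self.entries.move_to_end(path)  # recently used: keep it around
            return self.entries[path]
        inode = slow_resolver(path)         # stands in for walking the on-disk directories
        self.entries[path] = inode
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        return inode

# Hypothetical "slow" resolver standing in for an on-disk directory walk.
fake_disk = {"/usr/bin/python": 101, "/etc/hosts": 202}
cache = PathLookupCache()
print(cache.lookup("/etc/hosts", fake_disk.get))   # miss: resolved from "disk"
print(cache.lookup("/etc/hosts", fake_disk.get))   # hit: served from the cache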
INPUT OUTPUT TECHNOLOGIES:
HARDWARE:
Hardware can be classified into the following broad categories: input, processing, storage, and
output. Input technologies are used to convert data into computer readable form, either
automatically or through varying degrees of human involvement. Processing technologies are
contained within the "black box" itself and are used to convert raw data into meaningful
information. Data storage technologies are employed to either temporarily or permanently store
data. Finally, output technologies come into play in making information available to the end user.
Conventional "hard copy" output in the form of paper, as well as "soft copy" output on the
computer screen are two of the most common output options used in computer-based information
systems. There are a number of recent developments in hardware which have revolutionized
input, processing, storage, and output technologies. Multimedia technologies in general, and
optical disks or CD-ROM in particular, have become extremely popular while their costs continue
on a downward spiral. Note that the discussion of hardware that follows focuses primarily,
although not exclusively, on microcomputer technology simply because you will very likely be
interacting with microcomputers either in a stand alone or a networked environment.
Input technology
There are a number of technologies available for entering data into computer systems. Older
technologies, some of which are still in use, require extensive human involvement. Newer
technologies for data input require less extensive human involvement. Some new technologies
almost entirely automate the process of converting data into computer-readable form.
An example of a relatively old technology is a keying device used for data entry. There are some
variations of keying devices but they all involve manual entry of data using a keyboard attached
to some device. The keyboard could be attached either to a tape device or a disk device.
Additionally, the keying device may or may not be directly connected to the computer's central
processing unit (CPU). When the keying device is connected to the CPU then data entry is said to
occur on-line. When the keying device is not connected to the CPU then data entry is said to
occur off-line. With off-line data entry, the data are stored temporarily on a tape or a disk and are
read into the CPU at some later stage.
Input devices commonly used in personal computing environments include the mouse and its
variants such as the trackball, the track pad, and the TrackPoint. These devices involve
manipulating a hand-held device to move the cursor on the computer screen. The devices also
allow the user to select objects and perform actions by clicking on buttons either attached to or
adjacent to the mouse, trackball, track pad, and TrackPoint devices. A mouse is an
opto-mechanical device in which movements of a ball underneath the mouse cause
corresponding movements of the cursor on the computer screen. A trackball is simply an inverted
mouse in which the user directly manipulates the ball rather than moving the entire mouse to
cause movement in the encased ball. The track pad device presents the user with a small flat panel
about three inches square. The user moves his or her finger on the pad to control the movement of
the cursor on the screen. Finally, a track point is an eraser-head like device wedged between keys
near the center of the keyboard. The user presses the TrackPoint in the direction the cursor should
be moved on the screen. On a mouse, the buttons to be clicked by the user are placed on the top
of the mouse itself. For the trackball, track pad, and track point, the buttons are typically placed
below and to the left and right of either the ball, the pad, or the track point. Today's graphical user
interface (GUI) operating systems almost require the use of a mouse or similar device.
Light pens and touch screen devices are also used for input in certain applications. A light pen is
a small, photosensitive device that is connected to a computer. By moving the light pen over a
computer screen, the user can in effect manipulate data in the computer system. Touch screen
devices, commonly used in airports and large hotels, allow the user to simply touch the computer
screen with a finger to make selections. Another technology for input that has recently matured
is audio input or voice input. It is now possible to speak to a computer not only to issue
commands but also to enter data into the computer system. At the heart of this technology is voice
recognition software that is capable of recognizing words spoken clearly into a microphone.
Although many strides have been made in voice recognition technology, most systems typically
require the user to "train" the software to recognize the user's voice since the same word may
sound very different to the computer as a function of differences in pronunciation, tone, and
inflection. In addition to audio input, video input is also possible where a video or still camera
transmits moving or static images directly into the computer system. It is important to recognize,
however, that audio and video data streams take up enormous amounts of storage space.
Let us now turn to input devices which automate, to varying degrees, the task of entering data into
a computer system. Bar code scanners, optical character readers (OCR), and magnetic ink
character readers (MICR) are all designed to automatically read data. A bar code scanner is a
special device designed to read the Universal Product Code symbol (UPC) attached to a product.
This UPC is attached to most goods sold today. An OCR device works much like a bar code
scanner except that it is designed to read characters that are imprinted in a specific manner. A
MICR device is used by banks and other financial institutions to automatically read the
magnetically coated characters imprinted at the bottom of checks, deposit slips, and similar
documents. A key advantage of these devices is that data entry is fast and virtually error free. Bar
code scanners in particular have fostered huge efficiencies in the checkout lanes at grocery and
department stores. A related input technology is the point-of-sale (POS) device, which reads the
bar code of products being sold and instantaneously triggers a series of actions such as updating
inventory, reorder levels, and perhaps even a purchase order or a production schedule. Thus, more
than simply automating the task of entering data, a POS device goes on to perform related actions
based on the automatically entered data.
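The chain of actions a POS device sets off can be sketched as follows; the product code, quantities and reorder rule are invented for the example:

# Toy inventory keyed by UPC bar code; reorder when stock falls to the reorder level.
inventory = {"012345678905": {"name": "cereal", "stock": 12, "reorder_level": 10}}
purchase_orders = []

def scan_at_checkout(upc: str, quantity: int = 1):
    item = inventory[upc]
    item["stock"] -= quantity                     # update inventory automatically
    if item["stock"] <= item["reorder_level"]:    # trigger a purchase order if needed
        purchase_orders.append({"upc": upc, "name": item["name"], "order_qty": 24})

scan_at_checkout("012345678905", 3)
print(inventory["012345678905"]["stock"])  # 9
print(purchase_orders)                     # one reorder has been generated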
Page and hand held scanners are other input devices which can be used to automatically enter
text and graphics into a computer system. Scanning photographs or other images into the
computer system results in the creation of a graphics file which can then be edited using
appropriate software. Scanning text is very useful when combined with OCR software that can
convert the images of characters into editable text. Many organizations are using scanners to
digitize paper documents received from external sources such as invoices from vendors. It is thus
possible, at least in theory, to have an entirely "paperless" office where all data inputs are
converted into computer readable form and all information outputs are delivered to users
electronically. The following table lists the input devices described above.
Input Devices
On-line keying device
Off-line keying device
Mouse
Trackball
Track pad
TrackPoint
Light pen
Touch screen device
Audio input
Bar code scanner
OCR reader
MICR reader
Processor technology
Having discussed alternative input technologies let us now turn our attention to processor
technology. At the core of any computer system is the central processing unit (CPU). The CPU
is comprised of a control unit and an arithmetic-logic unit (ALU). As its name suggests, it is
the ALU that performs all the calculations and comparisons. In essence, the ALU is the number
crunching unit that performs the bulk of the work within a computer system. The control unit,
which is synchronized to the computer's internal clock, receives program instructions and
coordinates the functioning of the ALU. The speed of operation of the control unit is a function of
the speed of the computer's clock unit which oscillates at a frequency of several million cycles
per second. For example, the clock on a 100 megahertz (MHz) processor oscillates at a speed of
100 million cycles per second. Thus, the speed of the clock unit is one determinant of the speed of
a computer since the operation of the CPU is synchronized to the internal clock. The I/O bus is
simply a channel over which information flows between the CPU and peripheral devices like a
modem, hard drive or serial port. The following diagram shows the CPU and its interaction with
the memory components within a typical computer system.
As shown in the above diagram, typically only the control unit and the ALU are housed on the
processor chip; the memory unit is external to this chip. The memory unit comprises electronic
registers that temporarily hold data and instructions both before and after they are processed by
the ALU. Each location on the memory unit has a unique address, and the ALU accesses a
memory location by activating the address of that location. The memory unit in the CPU is also
referred to as primary memory or random access memory (RAM). Today, microcomputers are
typically equipped with a minimum of 32 megabytes of RAM, or approximately 32 million bytes
of storage. Many PCs come configured with 64 or 128 megabytes of RAM. High end
workstations and mainframe computers house anywhere from 128 megabytes to over 1 gigabyte
of RAM. Most present day CPUs contain another type of memory called cache memory. Cache
memory, which is relatively small compared to RAM, is used to store data and instructions that
are likely to be needed by the ALU. Access to cache memory is about four times faster than
accessing RAM. On a well designed processor, the ALU will find what it needs in cache memory
95% of the time. Due to the high cost of cache memory, microcomputers rarely house more than
512 kilobytes of cache memory and typically house only 256 kilobytes of cache memory. Data
and instructions stored in both RAM and cache memory are lost when the power supply to the
CPU is turned off.
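The payoff of a 95% cache hit rate can be quantified with the usual average-access-time arithmetic. The 10 ns and 70 ns figures below are illustrative values consistent with the access times quoted later in this unit:

# Average memory access time = hit_rate * cache_time + miss_rate * RAM_time
hit_rate = 0.95
cache_ns = 10    # typical SRAM (cache) access time
ram_ns = 70      # typical DRAM access time (60 to 80 ns)

average_ns = hit_rate * cache_ns + (1 - hit_rate) * ram_ns
print(f"Average access time: {average_ns:.1f} ns")             # 13.0 ns
print(f"Speed-up over RAM alone: {ram_ns / average_ns:.1f}x")  # about 5.4x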
The memory unit and the CPU (control unit and ALU) communicate via a channel. This channel
or data path is called the internal bus. There are three manifestations of this internal bus.
The data bus sends data and instructions back and forth between the memory unit and the
processor unit. The address bus identifies the memory locations that will be accessed during the
next processing cycle. Finally, the control bus is used to carry signals from the control unit
which direct the operation of the ALU. The width of the internal bus, or the data path, is another
factor that determines the speed of the CPU. Older buses were 16 bit, but newer buses are 32 and
even 64 bits. Thus, wide data paths and fast clock units contribute to faster CPUs.
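To see why bus width and clock speed together determine throughput, a rough back-of-the-envelope calculation (idealized: one transfer per clock cycle, no protocol overhead) can be written as:

def peak_bus_bandwidth(width_bits: int, clock_mhz: float) -> float:
    """Idealized peak bandwidth in megabytes per second: width * clock / 8."""
    return width_bits / 8 * clock_mhz

# Comparing bus generations mentioned above (idealized figures only):
for width, clock in [(16, 66), (32, 66), (64, 100)]:
    print(f"{width}-bit bus at {clock} MHz: {peak_bus_bandwidth(width, clock):.0f} MB/s peak")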
The bus system on Intel based personal computers today typically supports both the Industry
Standard Architecture (ISA) and the Peripheral Component Interconnect (PCI) standard. With
PCI comes enhanced data throughput, automatic component and adapter configuration and
processor independence. PCI also supports "Plug and Play". This feature allows a user to add
devices to a computer, like a sound card, without having to physically configure and set-up the
card. This is accomplished with a device that supports "Plug and Play" and an operating system
like Windows 98 that recognizes "Plug and Play" devices.
Today, each peripheral device needs its own port, usually gained through one of a few add-in
slots available on the PC motherboard. To install all but the most fundamental peripherals – likely
a new internal modem, TV Card or SCSI disk drive -- the user must open the case and insert a
board. Often, switches must be set, jumper wires configured or the different physical connectors,
such as serial or parallel, matched. In contrast, simplicity and ease stand at the center of
the Universal Serial Bus (USB). Drawing its intelligence from the host PC, USB ports
automatically detect when devices are added or removed, which, unlike with conventional add-in
slots, can be done with the power on and without having to re-boot the system. Moreover,
offering true plug-and-play operation, the Universal Serial Bus automatically determines what
host resources, including driver software and bus bandwidth, each peripheral needs and makes
those resources available without user intervention. Lastly, the Universal Serial Bus specification
defines a standardized connector and socket, which all peripherals can use, thus eliminating the
existing mixture of connector types. With the new bus, only one peripheral device, say the
keyboard, needs to plug directly into the PC. The other devices simply connect into either an
expansion hub built into the keyboard or monitor or into a stand-alone USB box. Typical devices
that will connect to the Universal Serial Bus include telephones or telephone network, modem,
printer, microphone, digital speakers, writing stylus, joystick, mouse, scanner and digital camera.
Another technology that facilitates devices on the PC is FireWire (IEEE
1394). This technology is most predominantly found on the Macintosh, although some use is
finding its way onto Intel-based PCs. FireWire is like the Universal Serial Bus. While the USB
is great for lower-speed multimedia peripherals, FireWire is aimed at higher speed multimedia
peripherals such as video camcorders, music synthesizers and hard disks. FireWire is likely the
future of computer I/O technology. Together, FireWire and USB radically simplify I/O
connections for the user. The age of SCSI, dedicated serial, modem ports and analog video is fast
coming to a close.
Let us now relate the above discussion of processor technology specifically to microcomputers. In
1981, Intel provided the processor for the first serious personal computer, the IBM Personal
Computer (PC). This machine used the Intel 8088 microprocessor, running at a then lightning-fast
4.77 megahertz. In the intervening years the PC and PC compatibles used Intel 80286, 80386 and
80486 processors. Each of these microprocessors brought more speed and capabilities.
Intel’s successor to the 80486 was the Pentium processor. This widely-used microprocessor was
first offered in 1993. The Pentium processor was a technological marvel when it was first
introduced -- it had over 3 million transistors integrated on a chip about 2 square inches! The bus
size of the Pentium is 64 bit in contrast to the 32 bit bus of the older Intel 486 processor. It has a
built-in math coprocessor, also called the floating point unit (FPU). This FPU is dedicated to
the task of performing all of the arithmetic calculations, thus freeing the ALU unit to perform
other tasks such as logical comparisons and execution of other instructions such as fetching and
delivering data. Through its superscalar architecture, the Pentium processor could perform two
operations in one clock cycle. In terms of the clock speeds, the fastest Pentium processor operated
at 233 MHz. The Pentium processor had 16 kilobytes of Level 1 cache memory (i.e., on the chip)
-- 8 K of the cache is dedicated for data and 8 K for instructions. Note that Level 1 (L1), which is
integrated into the microprocessor chip, and Level 2 (L2) cache, which is usually on a separate
chip, will be discussed in the next subsection.
The Pentium Pro processor, which is now obsolete, had a clock speed of 200 MHz. In order to
substantially enhance the Pentium Pro's ability to handle multimedia applications, Internet
applications and enhanced storage devices like DVD, Intel developed its MMX (Multimedia
extension). This technology is an extension to the Intel Architecture (IA) instruction set. The
technology uses a single instruction, multiple data technique to speedup multimedia and
communications software by processing multiple data elements in parallel.
The Pentium II processor was released in early May 1997. The Pentium II can be thought of as a
Pentium Pro with MMX support. The Pentium II initially came in versions with clock speeds of
233, 266, 300, and 333 MHz. All of these Pentium processors came with a 66 MHz internal bus.
In April of 1998, a new generation of Pentium II processors, code named Deschutes, was
released. These Pentium II processors are now available in speeds of 350, 400, and 450 MHz and
are based upon the .25 micron manufacturing process. This innovative process makes it possible
for these CPUs to include over 7.5 million transistors resulting in more power in less space. The
450 MHz Pentium II processor was released in the fourth quarter of 1998. Aside from the higher
CPU speeds, the most significant change in PCs based on these new processors is a shift from a
66 MHz system bus to a 100 MHz system bus. This is the first time Intel has increased bus speed
since the debut of the Pentium processor. The major benefit of the 100 MHz bus in Pentium II
PCs is that it provides 50 percent more bandwidth to memory. Thus, the peripheral devices, like
the hard drive, will be able to communicate faster with the programs running in RAM.
The latest variant of the Pentium II processor is the Pentium II Xeon processor, which is intended
specifically for servers. The Xeon family of processors have large, fast caches (1 and 2 MB L2
caches) and include a number of other architectural features to enhance the performance of
servers. Also at the time of announcing the 350 and 400 MHz Pentium II processor technology,
Intel announced the availability of 233 and 266 MHz Pentium II mobile technology for laptop
computers. This is the first time in Intel's history that laptop and desktop technologies were at the
same relative level. Previously, technology for laptops lagged that of the technology found in
desktop machines. This also suggests that for the first time, laptops can begin to rival desktops in
terms of processor speed and also in terms of other capabilities (the availability of larger hard
drives, zip drives and DVD drives).
Intended mainly for low-cost PCs, Intel released the Celeron processor in June 1998. In its initial
version, the Celeron processor lacked an L2 cache. Intel had hoped that the Celeron processor
would stave off competition from low-cost chip producers like AMD. However, given the low
market acceptance of the initial release of the Celeron processor, Intel added a 128K L2 cache to
the Celeron processor. This processor is still offered by Intel, in speeds ranging from 500 to 700
MHz. A low-power version of this processor, intended for portable computers, is also offered in
speeds as high as 650 MHz. A mobile version of the Celeron processor is offered by Intel and is
used in some laptop applications.
In early 1999 Intel released the Pentium III microprocessor. The Pentium III is faster than the
Pentium II, especially for applications written to take advantage of the new set of instructions
encoded into the Pentium III for multimedia applications (code-named "Katmai" instructions).
These 70 new computer instructions make it possible to run 3-D, imaging, streaming video,
speech recognition, and audio applications more quickly. The Pentium III is currently offered in
clock speeds ranging from 650 MHz to a blazing 1 gigahertz!
The latest variant of the Pentium processor, the Pentium 4, has recently been released by
Intel. Available in speeds of 1.3, 1.4, or 1.5 gigahertz, the Pentium 4 processor features a speedy
400 MHz system bus. This processor is designed to deliver enhanced performance for
applications such as Internet audio and streaming video, image processing, video content
creation, speech, 3D, CAD, games, multi-media, and multi-tasking user environments. Thus, the
processor is targeted towards “power users” and PC gaming enthusiasts, rather than for general
purpose uses (for which the Pentium III processor offers more than adequate performance).
Mobile variations of the Pentium III processor now exist and run at speeds up to 850 MHz. Prior
to the appearance of this technology, owning a laptop typically meant that your machine was not
quite as fast as a desktop. With the mobile Pentium III processor, this is no longer true.
In contrast to the Pentium and Pentium Pro processors, Motorola's PowerPC 750 processor,
running from 200 to 400 MHz, is a reduced instruction set (RISC) processor. RISC implies that
the processor recognizes only a limited number of assembly language instructions, but is
optimized to handle those instructions. The Pentium processor is considered a complex
instruction set processor (CISC) which means that it is capable of recognizing and executing a
wider variety of instructions. As a result of being optimized to handle fewer instructions, a RISC
processor can outperform a conventional CISC processor when running software that has been
appropriately designed to take advantage of the RISC processor. Tests have shown that a RISC
processor can be as much as 70% faster than a CISC processor. Like the Pentium, the PowerPC
also has on-chip cache, but the size of the cache is 32K rather than 16K. The PowerPC was also
designed using superscalar architecture, but can perform three operations in one clock cycle (as
opposed to the Pentium's two instructions per cycle). The following table summarizes the above
discussion on microcomputer processor technology.
Storage technology
Temporary storage
In our discussion of processor technology we have already discussed temporary storage of data
and instructions within the CPU. The memory unit, or random access memory (RAM), is the
main location for temporary storage of data and instructions. Many of today's most complex
software programs require large amounts of RAM to operate. Cache memory is another type of
temporary internal storage of data and instructions. A third type of memory is read-only memory
(ROM) which, as the name suggests, cannot be altered by the user.
For an application software program to run, it must first be loaded into the computer's RAM. The
main RAM unit is sometimes referred to as dynamic RAM or DRAM to distinguish it
from static RAM or SRAM which refers to the computer's cache memory (to be discussed a little
later). As indicated above, many software programs require a minimum amount of RAM in order
to successfully load and run. Program instructions are loaded into primary memory, RAM, from a
secondary storage device, typically a magnetic disk drive (referred to as a hard drive) or
a floppy disk drive. As needed, data requiring processing are also loaded into RAM. These data
and instructions can be transferred to and processed by the computer's arithmetic-logic unit
(ALU) very quickly, as directed by the control unit. Access times to RAM are expressed in
nanoseconds (a nanosecond, ns, is a billionth of a second) and typically range from 60 to 80 ns;
the lower the ns number, the faster the access time. Eventually, data are written back from RAM
to the secondary storage device - either a hard drive or a floppy drive. The size of RAM dictates
the number of application programs that can be run simultaneously. The larger the RAM, the
greater the number of programs that can be run concurrently. Applications also run faster when
the size of RAM is large
because data and instructions needed for processing are more likely to be found in RAM, which
can be accessed very quickly, than on the secondary storage device, to which access is
considerably slower.
As recently as two years ago, most microcomputers were equipped with asynchronous DRAM.
In asynchronous mode, the CPU sends a request to RAM which then fulfills the request. These
two steps occur in one clock cycle. Synchronous DRAM (SDRAM) which is more expensive
than asynchronous DRAM is now commonly available for PCs. Synchronous DRAM stores the
requested data in a register and can receive the next data address request while the CPU is reading
data from the previous request. The CPU and RAM can therefore be synchronized to the same
clock speed; hence the term "synchronous" DRAM. Systems equipped with SDRAM can
significantly outperform systems with conventional DRAM. An advanced type of memory,
usually used only for servers because of its high cost, is Error Checking and
Correcting (ECC) memory. This type of memory can find and automatically correct certain
types of memory errors, thereby providing greater data integrity. By contrast, non-ECC memory
would result in a system crash when a memory error is encountered. RDRAM, short for Rambus
DRAM, is a type of memory (DRAM) developed by Rambus, Inc. Whereas the fastest current
memory technologies used by PCs (SDRAM) can deliver data at a maximum speed of about 100
MHz, RDRAM transfers data at up to 600 MHz. RDRAM is touted by some as the preferred
replacement for SDRAM. However, RDRAM remains very expensive and is therefore found
mainly in high-end workstations.
Cache memory can significantly improve system performance. Cache memory, also referred to
as static RAM (SRAM), is an area of very high speed memory which stores data and instructions
that are likely to be needed. When the ALU needs data and/or instructions, it first accesses cache
memory and accesses RAM only if the needed data or instructions were not found in cache
memory. However, more often than not the ALU will find the needed data and instructions in
cache memory. Why does cache memory speed up processing? Whereas dynamic RAM (DRAM)
is typically accessed at the rate of 60 to 80 ns, cache memory -- static RAM (SRAM) -- can be
accessed at under 10 ns. Thus, access times to cache memory are six to seven times faster than
that for RAM. Most processors in today's computers include a certain amount of cache memory
built into the chip itself. As discussed earlier, the Intel Pentium processor comes with 16K of
cache memory built into the chip, 8K of which is used for data and 8K for instructions. Cache
memory that is integrated onto the chip itself is referred to as Level 1 (or simply L1) cache.
Other than cache memory built into the chip, the system board can also house external cache
memory (i.e., external to the processor chip) on a separate chip. While this external cache,
referred to as Level 2 (L2) cache, is somewhat slower than the cache built into the chip, it speeds
up processing nevertheless. L2 cache can be either asynchronous or synchronous. In an
asynchronous cache design, the CPU sends an address request to the cache which looks it up and
returns the result. All three of these steps occur in one clock cycle. Asynchronous cache is
adequate for computers with clock speeds under 100 MHz. But at speeds of 100 MHz and above,
the three steps simply cannot be performed in one clock cycle. The solution is synchronous cache,
a variation of which is called pipeline burst cache. In these designs, the address request, access,
and return steps are spread over more than one clock cycle. In this manner, cache accesses can
occur while the CPU is reading data from the previous access thereby speeding up the process.
Instructions that direct the computer's operations when power is turned on are stored
in ROM. These instructions involve checking the memory registers, determining which devices
are connected to the computer, and loading the operating system into RAM. Unlike RAM and
cache memory, the contents of ROM are not lost when power is turned off (a small long-life
battery provides sufficient power to retain ROM instructions). In older computers, ROM
instructions were stored in a chip housed on the system board and could be upgraded only by
replacing the ROM chip. In newer computers, ROM instructions are stored in a special type of
memory located on the system board, referred to as "flash" memory. The ROM instructions
located in flash memory can be easily upgraded via a diskette. The various types of memory are
summarized in the following table.
Memory Types
Cache memory: High-speed memory used to store data and instructions that are likely to be
required in the next cycle. Cache memory represents the speediest type of memory.
RAM: Random access memory; used to temporarily store data and instructions to run applications
and the operating system.
ROM: Read-only memory; used to permanently store instructions required upon boot-up. "Flash"
ROM instructions facilitate easy upgrades.
SDRAM: Synchronous DRAM; speedier than asynchronous DRAM because the CPU does not have to wait
for the next instruction.
RDRAM: The likely next replacement for SDRAM. Currently much more expensive than SDRAM.
ECC: Error checking and correcting memory. A very expensive type of memory used mainly for
servers.
Permanent Storage
Let us now turn to a discussion of permanent storage of data. The three primary media for
permanent storage of data are magnetic tapes, magnetic disks, and optical disks (also referred to
as compact digital disks, or CD-ROM). Magnetic tape is a low cost sequential storage medium.
While the low cost is an advantage, the major drawback of tape is that data must be accessed in
sequence. Thus, to access a record in the tenth block on a tape, the first nine blocks must be
traversed - the tenth block cannot be directly accessed. Although magnetic tapes were used
extensively in the early days of computing, the dramatic drop in the cost of magnetic disks has
relegated tape to be used primarily for backup purposes. Most computer systems use tape drives
for periodic backup of data. In case of system or magnetic disk failure, data can be restored from
the backup tape. On mainframe computer systems tapes are stored in the form of reels, but on
microcomputers tapes are housed within cartridges and are thus more compact and durable.
Magnetic disks, also referred to as hard disks, are more expensive than magnetic tape but have
the advantage of random or direct access. A record in the tenth block can be directly accessed
without accessing or traversing the first nine blocks. Access times for magnetic disks are
expressed in thousandths of a second (milliseconds, or ms). Current magnetic disks support
access times under 12 ms, with some as low as 7 ms. Records stored on magnetic disks are
overwritten when they need to be updated. Magnetic disk drives are sealed units with one or more
disk surfaces. Each surface has a number of concentric circles or "tracks." Each track in turn is
divided into a number of sectors which is where the data are stored. Thus, a record's address
would comprise the disk surface, the track number, and the sector number at which it is located.
Hard disks for microcomputer applications rotate at high speed, anywhere from 5,400
revolutions per minute (rpm) to 7,200 rpm and all the way up to 10,000 rpm. The capacity of
magnetic disk drives varies, but 10 gigabytes is considered a bare minimum.
[Figure: the inside of a disk drive, with the top of the case removed to show the disk platters.]
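As a rough illustration of how spindle speed affects access time, the Python sketch below
(illustrative only; the rpm values match those quoted above, and the sample surface/track/sector
address is hypothetical) estimates the average rotational latency as half a revolution:

    # Average rotational latency is half a revolution; faster spindles shave
    # milliseconds off every random access.
    def avg_rotational_latency_ms(rpm):
        revolutions_per_ms = rpm / 60000.0
        return 0.5 / revolutions_per_ms

    for rpm in (5400, 7200, 10000):
        print("{} rpm -> ~{:.1f} ms average rotational latency".format(rpm, avg_rotational_latency_ms(rpm)))

    # A record's address, as described above, can be modeled as a
    # (surface, track, sector) triple; the values here are hypothetical.
    record_address = {"surface": 1, "track": 214, "sector": 9}
    print(record_address)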
There are two primary types of interfaces to magnetic disks in microcomputers. The first and
cheaper type is Ultra ATA. "ATA" stands for Advanced Technology Attachment and is
synonymous with extended integrated drive electronics (EIDE). As the name suggests, the
circuits that control the drive are integrated onto the drive and the connector simply provides the
channel between the drive and the system board. Disk drives with the Ultra ATA interface are
capable of a maximum data throughput of 33 megabytes per second (MB/s). However, the
average data throughput of Ultra ATA drives is still under 20 MB/s. Ultra ATA drives can have
a capacity as high as 80 GB, and drive capacities above 20 GB are quite common on present day
PCs.
The second and more expensive type of interface is the small computer system interface (SCSI,
pronounced "scuzzy"), which typically requires a separate controller card. The latest incarnation
of the SCSI interface is Ultra2 SCSI, which is capable of a burst data throughput of 80 MB/s.
Also commonly available is Ultra SCSI, which offers a sustained data throughput of 40 MB/s.
Since Ultra2 and Ultra SCSI drives have a much higher data throughput, they are often chosen for
servers. Regarding disk access times, however, it should be noted that Ultra ATA and Ultra SCSI
drives have similar sub 10 ms access times since both interface types can have drives spinning at
7,200 revolutions per minute (rpm). Like Ultra ATA drives, SCSI drives also have large
capacities. The largest capacity SCSI drive available today is 50 GB. By way of a cost
comparison, a good quality 18 GB Ultra SCSI drive cost around $190 whereas a good quality 20
GB Ultra ATA drive cost under $100 as of January 2001.
Whereas magnetic disks are usually permanently affixed within a computer system, floppy disks
are transportable and thereby permit data to be copied and moved between computer systems.
Floppy disks have one magnetic disk shielded in a hard plastic case, a read-write opening behind
a metal shutter, and a write-protect notch that can be used to make the diskette "read only"
preventing both accidental erasure of files on the diskette and writing of data onto the diskette.
The capacity of floppy diskettes on microcomputers is 1.44 megabytes, which can prove to be
very limiting considering that file sizes greater than 1.44 megabytes are very frequently
encountered in present day systems. Access times to data on floppy diskettes are considerably
slower when compared to accessing the same data on hard drives. Substantial reading from and
writing to a floppy diskette can severely detract from the performance of a data processing
system.
The "Zip" drive from Iomega is fast becoming the industry standard replacement for the 1.44
diskette. Many of the major personal computer vendors like Dell, Gateway, and Compaq, are
offering the ZIP drive as one of the default diskette drives or as an option. These drives exist in
either 100MB or 250MB versions and are USB devices. The 250MB drive is downward
compatible with the 100MB disk. The size of Zip disks is about the same as a 3.5" diskette. These
drives also support the FireWire technology as an option. The street price of the 250MB drive is
approximately is $179.
A competitor to the Iomega Zip drive is the Imation LS-120 (also referred to as the Super Disk).
It is also an alternative to the 3.5" floppy disk drive and holds 120 MB of data on a single disk.
Unlike the Zip drive, the LS-120 is fully compatible with conventional 3.5" floppy diskettes -- it
can read from and write to 3.5" floppy diskettes. For even larger data storage needs, Iomega
offers the Jaz drive with a capacity of 2 gigabytes. Although more expensive than the Zip drive,
Jaz drives have access times in the 10-12 millisecond range (comparable to that of regular hard
drives). These drives support SCSI and FireWire technology.
A technology that has become very popular on personal computers in recent years is the compact
digital disk, also called optical disk and CD-ROM (for "compact disk - read only memory"). One
compact disk (CD) can store approximately 650 megabytes of data. Given this large capacity,
today's multimedia applications employing audio and video clips, which are extremely data
intensive, are being almost exclusively distributed on CDs. The "read-only" nature of CDs
indicates that conventional CDs cannot be written on and therefore cannot be used and re-used to
store data. However, a variant of conventional CDs, called recordable CDs or "CD-R" has
recently been developed. CD-R drives, which cost approximately $250, can not only create
multimedia CDs, but they can also write compressed data files on a CD. Thus, a CD-R drive can
also be used as a backup device, with each CD holding about 1.3 gigabytes of data in compressed
form. A CD-R disk costs approximately $1. A variation of CD-R technology is CD-RW (for
"rewritable") which, as the name suggests, can rewrite data onto special CD-RW disks. These
CD-RW disk drives, as well as the disks, are slightly more expensive than CD-R drives and disks.
A single-speed CD-ROM drive can transfer data at the rate of about 150 kilobytes per second. A
12X speed CD-ROM drive can transfer data at roughly twelve times the rate of single speed
drives (1,800 kilobytes per second). Today, variable speed (12/24X, 13/32X, or 17/48X) CD-ROM
drives are commonly available, with the fastest CD-ROM drive spinning at 72X. Note that
a "17/48X" CD-ROM drive spins at a minimum of 17 times faster than a single-speed drive and
a maximum of 48 times faster. Access times to CD-ROM disks are considerably higher (i.e.,
slower access) than to magnetic disks (hard disks). A 13/32X variable speed CD-ROM drive
would have an access time of about 75 ms and a transfer rate of about 5 megabytes per second.
Many magnetic disk drives have access times below 10 ms and transfer rates of 33.3 MB/s. Due to their
large capacities, most software manufacturers distribute their products on CDs. Users find it more
convenient to install new software simply by inserting one CD rather than switching multiple
floppy diskettes in and out of the floppy disk drive. Programs that deliver a sizable amount of
sound and graphics also benefit from the high speed CD ROM drives. Permanent storage options
are summarized in the table below.
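As a quick check on the "X" rating convention just described, the following illustrative Python
snippet (a sketch using the approximate 150 KB/s single-speed figure from the text; the chosen
multipliers are just examples) converts speed multipliers to transfer rates:

    # 1X is roughly 150 KB/s, so an NX drive moves about N * 150 KB/s.
    SINGLE_SPEED_KBPS = 150

    for speed in (1, 12, 32, 48):
        rate_kbps = speed * SINGLE_SPEED_KBPS
        print("{}X -> ~{} KB/s (~{:.1f} MB/s)".format(speed, rate_kbps, rate_kbps / 1024.0))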
In the same way that CDs supplanted vinyl LPs, a new technology, DVD, will replace CD-ROM.
DVD has been termed digital video disk or digital versatile disk. High capacity storage for the
personal computer is on the verge of a major product shift. This technology provides for high
capacity, interoperability and backward compatibility. DVD-ROM drives are backward
compatible with CD-ROMs. With 4.7GBs per DVD disk (equivalent to 7 CD-ROMs or over
3,000 floppy diskettes), a typical DVD-ROM drive transfers DVD-ROM data at up to 13,500
KB/sec (10X) and CD-ROM data at up to 6,000 KB/sec (40X). Access times are roughly 110 ms
(DVD) and 80 ms (CD). The technology used for the personal computer and for the home
electronics market is the same. For example, movies on DVD disk will play on both your
television and on your PC. This approach will let you use a DVD drive without losing your
investment in CD-ROMs. DVD will eventually make CD-ROMs and laser disks obsolete.
Viewing a DVD movie on your personal computer can be a fun experience. If your laptop
computer has a DVD drive, this can be a particularly nice way to pass the time on a long plane
flight. The cost of these drives for personal computers will typically be found in the $150 to $300
range.
Storage Options

  Magnetic tape -- Slow sequential access. Used primarily for backup purposes.
  Magnetic disk -- Fast access times (under 10 milliseconds). Capacities up to 80 GB for Ultra
  ATA drives and up to 50 GB for Ultra SCSI drives.
  Floppy disk -- 3.5" disk can store 1.44 MB of data. Slow access and limited storage capacity.
  Optical disk (CD-ROM) -- Used to distribute software and for multimedia applications. Can
  store 650 MB of data. Read only device.
  CD-R -- Recordable CDs. Drives and disks are significantly more expensive than CD-ROM drives
  and disks.
  CD-RW -- Compact disk-rewritable. Users may overwrite files on these CDs. CD-RW disks are
  backward compatible with standard CD-ROM drives.
  DVD -- A major product shift. Substantial storage capacity: 4.7 GB per layer and up to 17 GB
  per disk.
  Iomega Zip drive -- Alternative to the 3.5" 1.44 MB drive. Can store 100 MB of data; the most
  recent version can store 250 MB. External (portable) or internal variants.
  Imation LS-120 drive -- Alternative to the 3.5" 1.44 MB drive. Can store 120 MB of data.
  Backward compatible with 3.5" diskettes.
  Iomega Jaz drive -- Removable disks can store 2 GB of data. More expensive than Zip drives.
  Access times comparable to those of hard drives.
Output technology
The two broad categories of output technology are hard copy output and soft copy output. As the
name suggests, hard copy output involves printing out the desired output on paper. There are a
number of options available for obtaining hard copy output, which we will discuss below. Soft
copy output involves displaying the output on the user's computer screen (also called the "video
display terminal"). A number of characteristics determine the quality of the soft copy output.
These will also be discussed later.
Hard copy output options
Printers can be broadly classified into two categories: impact printers and non-impact
printers. Dot matrix printers are impact printers and generate output by forming characters from
a matrix of pins which then strike an inked ribbon. Although dot matrix printers are slow and
noisy, and are only slightly cheaper than ink jet and low end laser printers, they are still in use
because of one significant advantage over ink jet and laser printers - dot matrix printers can
generate multiple copies simultaneously. This feature is particularly useful for printing out
invoices, receipts, orders, and similar documents when multiple copies are almost always
required. The speed of printing of dot matrix printers is measured in terms of the number of
characters per second (cps) that are printed.
Ink jet printers are one type of non-impact printers. An ink jet printer generates output by
shooting a microscopic jet of ink from the print head onto the paper. The ink is of special quality
and dries almost instantly. Although the quality of ink jet printing is very good, the printed
images will appear somewhat smudged when regular copier/printer paper is used. Special high
gloss paper, which is more expensive, results in better quality output. Ink jet printers available
today provide inexpensive color printing. While some low cost color ink jet printers require the
user to change the ink cartridge from black to color, other more expensive ones can automatically
switch between printing in color and printing black only using a single ink cartridge. Like dot
matrix printers, ink jet printers also print a character at a time. Print resolutions of ink jet printers
are expressed in terms of dots per inch (dpi). Expect resolutions of 600 to 1200 dpi even for
inexpensive printers. The speed of a mid-range ink jet printer is roughly nine pages per minute in
black and six pages per minute in color.
A laser printer uses laser beams to create an image of the output on a photosensitive drum. The
drum, which contains toner ink, then rolls against a sheet of paper to transfer the image onto the
paper. Laser printers thus print an entire page at one time. The print resolution of laser printers is
also expressed in terms of dpi. Three hundred dpi is the minimum resolution of laser printers,
while 600 dpi is common even in relatively low cost laser printers. High end laser printers, which
cost in excess of $1,000, can generate output at 1,200 dpi. In terms of speed, laser printers print at
a minimum of four pages a minute, while speeds of 8, 12, 17, and 22 pages per minute are not
uncommon for business laser printers. A recent trend in laser printers is the falling cost of color
laser printers. Previously costing over $5,000, good quality color laser printers can now be
purchased for as little as $1,500.
Soft copy output
The quality of soft copy output, i.e., screen or video display, is a function of the video card and
the monitor. Let us examine each of these issues.
Video card: In a microcomputer, the processing tasks related to video display are usually handled
either by a dedicated video card that fits into a slot on the system board or by a special chip
integrated onto the system board. The latest interface for the video card is the accelerated
graphics port (AGP). Prior to the development of the AGP, the video card interface used the
peripheral component interconnect (PCI) bus. AGP cards are up to four times faster than cards
using the PCI bus -- they offer up to 533 MB/s in contrast to 133 MB/s on the PCI bus. The
amount of memory in the video card is another characteristic that determines the speed and
quality of video display. Two megabytes of RAM for video is considered a bare minimum, with
four and even eight megabytes increasingly becoming the norm. Memory reserved for video
display determines the number of colors that can be displayed on the screen at a particular screen
resolution; the more memory, the more colors can be displayed.
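The relationship between video memory, resolution, and color depth can be sketched with a small
calculation. The Python snippet below is illustrative only; it counts just the frame buffer
(width x height x bytes per pixel) and ignores any additional memory a particular video card
might use:

    # Frame buffer size = width * height * bytes per pixel.
    def framebuffer_mb(width, height, bits_per_pixel):
        return width * height * (bits_per_pixel / 8.0) / (1024.0 ** 2)

    for width, height in ((640, 480), (800, 600), (1024, 768), (1280, 1024)):
        for bpp in (8, 16, 24):  # 256 colors, 65,536 colors, 16.7 million colors
            print("{}x{} at {}-bit color needs ~{:.1f} MB of video memory".format(
                width, height, bpp, framebuffer_mb(width, height, bpp)))

This is why a card with two megabytes of video RAM cannot show as many colors at 1280 x 1024 as
it can at 640 x 480.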
Monitor: The resolution of a microcomputer's monitor is expressed in terms of the number of
columns by the number of rows that are displayed. Standard VGA (video graphics array)
resolution displays 640 columns by 480 rows. Super VGA resolution is 800 x 600, while
extended VGA is 1024 x 768. Even higher resolutions of 1280 x 1024 are available on
certain monitors. Note, however, that although higher resolutions translate to crisper images, the
size of the characters being displayed shrinks proportionately. Thus, the higher resolutions almost
require monitors substantially larger than the standard 14" or 15" monitors. The highest resolution
recommended for a 15" monitor is Super VGA (800 x 600). Seventeen inch monitors are more
expensive, but much easier on the eye if resolutions higher than Super VGA are to be used
continually. Seventeen and even 19 inch monitors are becoming default options on personal
computer systems sold today. For certain computer aided design and graphics applications, a 21"
monitor is very useful.
The monitor's refresh rate -- the number of times per second that the screen is repainted or
refreshed -- is expressed in terms of hertz (Hz), or cycles per second. The higher the refresh rate at a
certain resolution the more likely the display will be flicker free. In terms of the size of each dot
or pixel on the monitor, the smaller the dot pitch the crisper will be the characters displayed on
the monitor. Good quality monitors have dot pitches of .28 mm, .26 mm, or less. Newer monitors
are also typically rated as being "energy star compliant" which means that they consume less
power and can automatically shut off after a certain period of inactivity. Energy star compliant
monitors also typically emit less radiation--a critical consideration for users likely to be in front
of a computer monitor for a considerable portion of the work day. The latest advance in PC
displays is the flat-panel TFT (thin-film transistor) display. These displays, typically found on
notebook computers, offer a space saving alternative to conventional monitors while still offering
exceptional display quality. However, flat-panel displays are still considerably more expensive
than traditional monitors.
Apart from hard and soft copy output, the sound card present on most microcomputers offers an
output option. For example a microcomputer with a sound card and CD or DVD drive can play
an audio CD. An electronic piano keyboard can interface with a computer using a MIDI (musical
instrument digital interface) port on the sound card. Thus, with support software and
a MIDI port, an electronic piano keyboard can be used as an input device to a microcomputer.
Musical selections previously input can be played back as outputs.
SOFTWARE
Having discussed a considerable number of hardware terms and concepts, let us now turn to a
discussion of computer software. The most basic definition of software is that it comprises
instructions that the hardware can execute. The two broad categories of software are systems
software and applications software. Systems software consists of the operating system and other
utility programs that allow application programs to interact with computer hardware. Applications
software consists of programs written to process the user's data and convert it into information.
The relationship between applications software and systems software is easily understood in the
context of an application designed to convert the user's data into meaningful information. Let us
assume that a user has designed an application program to process payroll time tickets resulting in
the printing of employee paychecks. The time tickets represent data that needs to be processed.
The application program sends the data and the program instructions detailing how the data is to
be processed to the operating system. The operating system in turn directs the hardware devices
(i.e., the central processing unit) to perform the functions necessary to process the data and return
the results to the user (i.e., display the results on the computer screen or output to the printer).
Systems software
The various types of systems software include the operating system, utility programs, and
language translators. The operating system manages and directs the functioning of all CPU and
peripheral components. Allocating resources such as memory and processor time for tasks is one
of the primary functions of the operating system. Tasks such as writing data from primary
memory to secondary storage devices such as disk and tape drives are also handled by the
operating system. As needed by application programs, the operating system allocates memory and
processor time for the specific tasks that need to be performed in execution of the user's
application program.
Three capabilities of operating systems are noteworthy: (1) multitasking, (2) multiprogramming,
and (3) multiprocessing. Most present day operating systems such as Unix and OpenVMS for
mainframe computers, and Windows 95/98 and the Macintosh System 8 for personal computers,
are capable of multitasking. Multitasking is the ability of the operating system to concurrently
handle the processing needs of multiple tasks. Thus, the user can perform word processing
functions at the same time that the spreadsheet program prints a large file. Both personal
computers and mainframe computers can perform multitasking. Mainframe computers alone are
capable of multiprogramming. In a multi-user mainframe computing environment,
multiprogramming is the ability to rapidly switch back and forth between different users' jobs.
Each user receives a response very quickly, giving the user the impression that the computer is
dedicated to that user's job. The immense speed of the mainframe computer allows it to switch
between jobs very quickly, but at any one instant the computer is processing only one job.
Another related ability of both mainframes and high end personal computers
is multiprocessing which is the ability to simultaneously control multiple processors within the
same computing system. Whereas typical computers have only one CPU, a multiprocessing
computer actually has several CPUs that are linked together. Only very complex scientific and
mathematical processing jobs require multiprocessing. Some advanced servers can also benefit
from multiple CPUs.
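The following minimal Python sketch is not how an operating system implements these
capabilities, but it illustrates their flavor: two tasks making progress concurrently
(multitasking) and a check of how many CPUs are available for multiprocessing. The task names
are hypothetical:

    import os
    import threading

    def word_processing():
        print("editing a document...")

    def print_spreadsheet():
        print("printing a large spreadsheet file...")

    # "Multitasking": both tasks are in flight at the same time.
    tasks = [threading.Thread(target=word_processing),
             threading.Thread(target=print_spreadsheet)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()

    # "Multiprocessing": how many CPUs could work be scheduled onto?
    print("CPUs available on this system:", os.cpu_count())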
The two most popular operating systems for personal computers today are Microsoft
Windows and the Macintosh Operating System 8 (Mac OS 8). Since its release in August 1995,
Microsoft's Windows 95 operating system has been adopted by more than 20 million users. The
current version of Windows is Windows 98, which was released in July 1998. The new integrated
Internet user interface in Windows 98 allows users not only the simplicity of surfing the Web, but
also the ability to find and access information quickly on their local network or intranet. Windows
98 enables users to take advantage of innovative new hardware designs such as the Universal
Serial Bus. The next version of the consumer-oriented Windows operating system, which is
being called Windows "Millennium Edition," will be released later in the year 2000. For
corporate users, Microsoft released Windows 2000 in February 2000. Windows 2000 is a full 32
bit operating system and is a much more stable operating system than Windows 98. It is intended
primarily for the networked environments commonplace in most businesses today. Windows
2000 sports a number of advances over the previous version (called Windows NT 4.0), especially
in the security arena. It is offered in two flavors -- Windows 2000 server (the upgrade to NT 4.0
server) and Windows 2000 professional (the upgrade to NT 4.0 workstation).
For its part, Apple has released its Mac OS 8.1, the first major upgrade to its operating system
since 1984. Apple currently offers version 9.0. This OS is a critical part of Apple's drive to
recapture market share. IBM's OS/2 Warp 4 operating system has earned critical acclaim in the
computing industry but little acceptance in the marketplace. All of these personal computer
operating systems have a "graphical user interface" or GUI. These operating systems allow most
functions to be performed by pointing and clicking using devices such as a mouse or a trackball.
Programs, files, and peripheral devices such as printers and disk drives are all represented by
icons on the screen.
Linux has recently been receiving significant interest in the market place, notably as a competitor
to Windows. Linux (often pronounced "lynn-ucks") is a UNIX-like operating system that was
designed to provide personal computer users a free or very low-cost operating system comparable
to traditional and usually more expensive UNIX systems. Linux has a reputation as a very
efficient and fast-performing system. Linux's kernel (the central part of the operating system) was
developed by Linus Torvalds at the University of Helsinki in Finland. To complete the operating
system, Torvalds and other team members made use of system components developed by
members of the Free Software Foundation for the GNU project.
Linux is a remarkably complete operating system, including a graphical user interface, X
Window System, TCP/IP, the Emacs editor, and other components usually found in a
comprehensive UNIX system. Although copyrights are held by various creators of Linux's
components, Linux is distributed under the Free Software Foundation's copyleft stipulations that
mean any copy is in turn freely available to others. Red Hat and VA Linux are two popular
vendors offering distributions of the Linux operating system. Dell Computer Corporation offers
Linux as a preloaded option on some of its computers. Linux is sometimes suggested as a
possible publicly-developed alternative to the desktop predominance of Microsoft Windows.
Although Linux is popular among users already familiar with UNIX, it remains far behind
Windows in numbers of users.
Utility programs are the second category of systems software. Mini-programs for performing
commonly used functions like formatting disks, compressing files, scanning for viruses, and
optimizing the hard disk are some examples of utility programs. In essence, utility programs
complement the operating system by providing functions that are not already built into the
operating system. Third party vendors typically provide suites of utility programs that extend
the functionality of the operating system.
The third category of systems software is language translators. Assemblers, interpreters, and
compilers are the three types of language translators. As the term implies, a language translator
takes a program written by the user, which is called the source code, and converts the source code
into machine language, which is called the object code. The source code program is written in a
programming language with English-like syntax, using a text editor or a word processor capable
of creating an ASCII (text) file. The
computer's hardware can only understand machine language commands (object code) which are
in binary code consisting of 0s and 1s.
Interpreters convert source code into object code one line at a time. Some versions of the BASIC
(Beginner's All-purpose Symbolic Instruction Code) programming language used an interpreter
for execution. The interpreter must be invoked each time the program is to be run. An assembler
is used to convert an assembly language program, rarely used these days, to machine language.
Assembly language is referred to as a "second generation" programming language (machine
language is considered to be the "first generation" programming language). Compilers are used to
convert the source code of "third generation" programs such as COBOL (Common Business
Oriented Language), Pascal, C and C++ into object code. Unlike interpreters, compilers process
the entire source code file and create an object code or executable file if the program is
successfully compiled. Interpreters, assemblers, and compilers check the source code program for
syntax errors (logic errors can be detected only by running test data and comparing the actual
results to expected results). An interpreter indicates the syntax error and simply does not execute
the line of code. A compiler generates a listing file highlighting each line of code with syntax
errors. A successful compilation will generate an object file. The object file is then linked to other
needed object libraries and the output of this process is an executable file. Debuggers are useful
utility programs that allow programmers to process a program one step at a time while examining
how variables change values during execution. Debuggers thus assist in the detection of logic
errors. Once a program is successfully compiled and an executable file is created, the user can run
the program simply by executing the resulting executable file (.exe file); the source code file is
not required to run the program. In fact, in most applications it will be appropriate to distribute
only the executable file to users without providing them with the source code.
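As a loose analogy to the compile-then-run sequence described above, the Python sketch below
uses the built-in compile() function and the dis module. Note the hedge: Python translates
source into bytecode for its own interpreter rather than into native object code, but the ideas
of a translation step, syntax errors caught at translation time, and logic errors surfacing only
at run time carry over. The payroll function is hypothetical:

    import dis

    source = "def net_pay(hours, rate):\n    return hours * rate\n"

    # "Compilation": translate the source text into lower-level instructions.
    code_object = compile(source, filename="payroll.py", mode="exec")
    dis.dis(code_object)  # inspect the translated instructions

    # Syntax errors are caught at translation time...
    try:
        compile("def broken(:\n    pass\n", filename="broken.py", mode="exec")
    except SyntaxError as err:
        print("syntax error caught at compile time:", err)
    # ...whereas logic errors only appear when the program is run against test data.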
Programming languages
In the above discussion of language translators we have already discussed first, second, and third
generation programming languages. To repeat, machine language programming using 0s and 1s is
the first generation programming language. Assembly language using cryptic symbols
comprises the second generation programming language, in which the assembly language
program needed to be "assembled" or converted to machine language. Third generation
languages use plain English syntax to create the source code program that must then be compiled
to create an object program or executable file. COBOL, Pascal, Visual Basic, and C are examples
of third generation languages.
Fourth generation languages, referred to as 4GLs, are even more high level than third
generation languages and use a very English-like syntax. Third generation languages are
procedural languages in that the user must specify exactly how data is to be accessed and
processed in order to generate desired output. In contrast, 4GLs are non-procedural, meaning that
the user simply specifies what is desired (i.e., procedural details regarding how the data should be
processed need not be provided). FOCUS and SQL (structured query language) are two examples
of 4GLs. SQL (pronounced "sequel") is a very popular 4GL and is fast becoming the standard
language for interacting with relational database systems.
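A brief, self-contained illustration of this non-procedural style is sketched below using SQL
through Python's built-in sqlite3 module. The customers table and its rows are hypothetical; the
point is that the query states what is wanted without spelling out how the rows are to be
located:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, balance REAL, credit_limit REAL)")
    conn.executemany(
        "INSERT INTO customers VALUES (?, ?, ?)",
        [("Acme Co.", 1200.0, 5000.0), ("Brown LLC", 4800.0, 5000.0), ("Chen Inc.", 300.0, 2000.0)],
    )

    # Declarative request: which customers are within 10% of their credit limit?
    for name, balance in conn.execute(
        "SELECT name, balance FROM customers WHERE balance >= 0.9 * credit_limit"
    ):
        print(name, balance)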
Both third and fourth generation languages adopt the perspective that data are separate from
processes. Data are stored in repositories and programs specify the processing steps that modify
data. A radically different viewpoint is adopted by object-oriented programming
languages (OOPL). Rather than focusing on data versus processes, an OOPL simply focuses on
the objects of interest in a particular domain. For example, in a sales order processing application
the objects of interest would be customers, inventory, and orders. For each object, an OOPL
defines attributes that need to be stored and also the processing methods that would be used to
modify those attributes. For example, a "customer" object might have the following attributes:
name, address, phone number, balance, and credit limit. The methods associated with the
customer object might be add new (to add a new customer), add balance (to increase the
customer's balance to reflect a credit sale), deduct balance (to decrease the customer's balance to
reflect a collection received from the customer), and show balance (to show the customer's
current outstanding balance). In an OOPL, the attributes and the methods are defined together in
one package. This property of OOPLs is called encapsulation.
Objects can communicate with one another by means of messages passed between them. For
example, when a new sales order is placed, a new instance of the "orders" object is created. After
this new order instance has been created, messages would be passed to the "customer" object to
update the customer's balance, and also the "inventory" object to decrease the on-hand quantity of
the items ordered (and presumably to be shipped). In effect, the messages passed between objects
trigger methods that have been defined and stored internally within each object. Another unique
feature of OOPL is polymorphism. The same message passed to different objects might result in
different actions, depending on the exact specification of the method invoked within each object
as a result of the message. For example, a "depreciate" message passed to several asset objects
might result in different actions as a function of the depreciation method defined for that asset.
A third feature unique to OOPL is inheritance. New objects can be created based on existing
objects. The new objects can simply inherit the attributes and methods already defined for an
existing object. Attributes and methods unique to the new object would be defined within the new
object. As an example, a new "international customer" object can be created by inheriting the
attributes and methods of an existing "customer" object. Only attributes and methods unique to
international customers, such as the country and currency, would have to be defined in the new
"international customers" object. In this manner, OOPL facilitates code reusability thereby
simplifying the process of developing new applications. In summary, OOPLs have three unique
features: (1) encapsulation, (2) polymorphism, and (3) inheritance. Smalltalk and C++ are two
popular object-oriented programming languages.
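The customer example above can be sketched in a few lines of Python, itself an object-oriented
language, to show all three features together. This is an illustrative sketch only; the
attribute names and amounts are assumptions based on the text's example:

    class Customer:
        # Encapsulation: attributes and the methods that act on them live together.
        def __init__(self, name, address, phone, balance=0.0, credit_limit=1000.0):
            self.name = name
            self.address = address
            self.phone = phone
            self.balance = balance
            self.credit_limit = credit_limit

        def add_balance(self, amount):      # credit sale
            self.balance += amount

        def deduct_balance(self, amount):   # collection received
            self.balance -= amount

        def show_balance(self):
            return "{}: {:.2f}".format(self.name, self.balance)


    class InternationalCustomer(Customer):
        # Inheritance: reuse everything above; add only what is unique.
        def __init__(self, name, address, phone, country, currency, **kwargs):
            super().__init__(name, address, phone, **kwargs)
            self.country = country
            self.currency = currency

        def show_balance(self):
            # Polymorphism: the same "show balance" message, a different action.
            return "{}: {:.2f} {}".format(self.name, self.balance, self.currency)


    customers = [
        Customer("Acme Co.", "12 Main St", "555-0100"),
        InternationalCustomer("Chen Inc.", "8 Harbour Rd", "555-0199", "Singapore", "SGD"),
    ]
    for c in customers:
        c.add_balance(250)          # record a credit sale
        print(c.show_balance())     # same message, object-specific behavior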
Applications software
Writing a program using a programming language such as C, C++, COBOL or Visual Basic is
one way of converting raw data into useful information. However, the vast majority of users
would more than likely use an applications software package, such as a spreadsheet or database
program, to perform common data processing tasks. Applications software packages are designed
with a host of features and are easily customizable to meet almost any user need. The two broad
categories of applications software are (1) general purpose applications software such as word
processing, spreadsheet, database, graphics, and communications, and (2) special purpose
applications software such as accounting software packages. Both categories of software
packages have several offerings for both the PC and Macintosh platforms.
General purpose applications software
You are probably already very familiar with word processing and spreadsheet software, and
possibly with database software as well. Microsoft Word, Corel WordPerfect, and Lotus Ami Pro
are the leading word processing software packages. Microsoft Excel, Corel Quattro Pro, and
Lotus 1-2-3 are the leading spreadsheet packages. Microsoft Access, Corel Paradox, and Lotus
Approach are among the major database software packages. All of these software packages listed
are for the Microsoft Windows operating system, the latest version of which is Windows 98.
Software programs such as Microsoft PowerPoint and Lotus Freelance Graphics are used to
create presentation graphics. Data graphics -- presenting data graphically - is a function included
in most spreadsheet software.
Communications software and fax software such as Symantec’s WinFax Pro are also quite
popular. However the functionality provided by such software packages is increasingly being
integrated into the operating system, obviating the need to obtain separate software packages for
that functionality. For example, Microsoft's Windows 95 operating system includes accessories
for dialing in to remote computers using a modem and also for sending and receiving faxes using
a fax modem. Other types of applications software include project management software such as
Microsoft Project, personal information managers such as Lotus Organizer, and
scheduling/meeting software such as Microsoft Schedule +.
Special purpose applications software
Although there are a host of special purpose applications software packages, such as packages for
keeping track of real estate listings, we will focus exclusively on accounting software packages.
Accounting software packages can be broadly categorized into three groups. The first
category comprises low end packages for use by small businesses, which are little more than
sophisticated electronic checkbooks. Packages like Intuit Quicken, QuickBooks Pro, Peachtree,
and Microsoft Money fall into this category. Many home users find packages like Quicken to be
very useful for tracking their checking account use and to manage their finances. Some of these
packages can be used by small businesses and include some very basic accounting functions.
Most of these software packages can be purchased for under $200. Installing and configuring
these low end packages is also relatively easy.
The second category comprises mid-range packages such as Macola, Great Plains Dynamics,
and SBT. These packages can cost anywhere from $5,000 to $15,000 and usually require the
expertise of a consultant or a "value added reseller" (VAR) to install and configure the package.
Most medium sized businesses will likely find that one of these packages will meet their
accounting information processing needs. It should be noted that these packages are considered
"modular" in that separate modules, such as inventory, payroll, and general ledger, can often be
purchased separately. Subsequently, when the company grows and intends to automate additional
accounting processes, the remaining modules from the same package can be purchased and
integrated along with the existing modules. The first two categories of software almost always
use proprietary file management systems to manage the necessary files within the software
packages. The data files are accessible only through the file manager interfaces provided by the
accounting software package.
The third category comprises high end packages such as SAP, Oracle Applications, PeopleSoft,
and Baan. These software packages are referred to as enterprise resource planning (ERP) systems
since they typically span the entire enterprise and address all of the enterprise's resources.
Depending on the configuration, these packages can cost a company several hundreds of
thousands of dollars. Taking into account the cost of analyzing and redesigning existing business
processes, the cost of implementing an ERP system can run into millions of dollars! Just how
much more sophisticated are ERP systems relative to some of the other packages in the first two
categories? Take SAP for example. This complex software is ideally suited for multinational
companies that have operations in different countries with different currencies and accounting
conventions. Employees throughout the world can obtain access to data regardless of where the
data is located. SAP also automatically handles foreign currency translations as well as the
reconciliations that are necessary between countries that have different accounting conventions. A
key feature of ERP systems is cross-functional integration. For example, for a manufacturing
enterprise, an ERP system like SAP can be configured to automatically react to the creation of a
new customer order by (1) updating the production schedule, (2) updating the shipping schedule,
(3) ordering any needed parts, and (4) updating online sales analysis reports to reflect the new
order. Without an ERP system, the four procedures indicated would have to be performed by
employees in at least four different departments (sales, production, inventory, purchasing)
perhaps using four different information systems. It is precisely this fragmentation of information
systems across the company that ERP systems are designed to correct. Thus, the key advantage
of an ERP system is the integration of related business processes. This cross-functional
integration is enabled chiefly using relational database technology. You can therefore imagine
that ERP systems such as SAP must indeed be very complex.
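The cross-functional idea can be sketched in miniature. The Python fragment below is emphatically
not SAP or any vendor's API; it is a toy illustration in which a single hypothetical
post_customer_order call drives the four updates listed above against shared in-memory data:

    # Shared "modules" drawing on one common data store.
    production_schedule, shipping_schedule, purchase_orders, sales_report = [], [], [], []
    inventory = {"widget": 40}

    def post_customer_order(item, quantity):
        production_schedule.append((item, quantity))            # (1) update production schedule
        shipping_schedule.append((item, quantity))               # (2) update shipping schedule
        if inventory.get(item, 0) < quantity:                    # (3) order any needed parts
            purchase_orders.append((item, quantity - inventory.get(item, 0)))
        sales_report.append((item, quantity))                    # (4) update online sales analysis

    post_customer_order("widget", 100)
    print(production_schedule, shipping_schedule, purchase_orders, sales_report, sep="\n")

In a real ERP system the shared store is a relational database rather than in-memory lists, which
is what allows the data to be queried and analyzed outside the package itself.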
Unlike accounting packages in the first two categories, the high end packages such as SAP almost
always use relational databases to store the raw data. Thus, the data is accessible not only via the
accounting package, but also through the relational database management system. This ability to
access the data via the database management system allows for much greater flexibility in
accessing and analyzing the data. Data structures therefore fall into two broad categories:
file-oriented data structures, such as the file managers alluded to above, and database-oriented
data structures, such as relational databases.
Hardware Issues:
General Hardware Issues
If you need assistance with any of these steps, or you cannot resolve your issue, please contact a
member of DIDE IT for further help.
My computer freezes or is behaving strangely
- Try restarting your computer. Many basic problems can be resolved easily and quickly
  this way.
- Press the Ctrl & Alt & Del keys on your keyboard together at the same time. This should
  bring up a menu that will allow you to run Task Manager. In Task Manager, switch to the
  Applications tab. Highlight any programs with the status 'Not Responding' and choose End
  Task. You may be asked to confirm if you want to end the unresponsive program, so choose
  Yes. Do this for all programs that are not responding.
- If all else fails and you cannot shutdown/restart your computer, then hold down the power
  button on the machine until it forcibly turns off. Wait a few seconds and then turn it back on
  again.
My computer doesn't power up
- Check that all the cables are securely plugged into the back of the machine and the
  monitor.
- Check that the power cables are plugged into a power socket and the socket has been
  turned on.
- Try using a different power socket or, if you are using a power extension strip, plug the
  power cable directly into a power socket in the wall.
- Replace the power cable with one that you know works.
- Check if there are any lights on at the front of the machine:
  - If there are lights on the machine but not the monitor, then it's probably a monitor issue.
  - If there are lights on the monitor but not the machine, then it's probably a machine issue.
  - If there are no lights on anything, then it may be possible there is a local power cut.
- With laptops, try removing the power cable and the battery. Hold down the power button
  for about ten seconds, and then plug the battery and power cable in again. Press the power
  button to see if it switches on.
Nothing appears on the monitor
- Make sure both the computer and monitor are on.
- Make sure the monitor is securely plugged into the computer.
- Make sure the power cable is firmly plugged into the monitor.
- Some computers have multiple display ports, so make sure you have plugged the monitor into
  the correct one. Try each one in turn, switching the monitor off then on in between moves.
- Most monitors have a status window displayed when you turn it on. Check if you can see this
  status window when you press the power button on the monitor. You can also try this with
  the menu button on the monitor, which should bring up an options menu on the screen. This
  shows the screen is working ok, so it may be an issue with the video cable from the monitor
  or the machine itself.
- Check the brightness & contrast levels of the monitor via the menu button, to make sure it has
  not been set too dark.
- Move the mouse and press any key on the keyboard to make sure the screensaver hasn't
  activated or that the computer hasn't gone into standby/hibernation mode.
Non-system disk or disk error at boot
- Remove any floppy disks, CD/DVD discs and USB memory sticks or external hard drives and
  try booting up again.
- If you can hear a repeated scraping or clunking noise, power off the computer as soon as
  possible, as there may be a physical problem with the hard disk and you may lose data.
Keyboard/Mouse does not work
- Make sure the keyboard/mouse is firmly plugged into the back of the computer.
- Try unplugging one or both, and then reinserting it into the back of the computer.
- Try plugging your USB keyboard/mouse into a different USB socket.
- Replace the keyboard/mouse with one that you know works.
- If you cannot see any lights on your keyboard when you press the Caps Lock or Num Lock
  key, it may be a dead keyboard.
- Make sure there is no dirt or fluff clogging up either the optical laser or roller ball on the
  underside of your mouse. It may require a clean.
- If you are using a wireless keyboard/mouse, try pressing the reset button on the device or
  replace the batteries.
GRID COMPUTING:
Grid computing is the collection of computer resources from multiple locations to reach a
common goal. The grid can be thought of as a distributed system with non-interactive workloads
that involve a large number of files. Grid computing is distinguished from conventional high
performance computing systems such as cluster computing in that grid computers have each node
set to perform a different task/application. Grid computers also tend to be more heterogeneous
and geographically dispersed (thus not physically coupled) than cluster computers. Although a
single grid can be dedicated to a particular application, commonly a grid is used for a variety of
purposes. Grids are often constructed with general-purpose grid middleware software libraries.
Grid sizes can be quite large.
Grids are a form of distributed computing whereby a “super virtual computer” is composed of
many networked loosely coupled computers acting together to perform large tasks. For certain
applications, “distributed” or “grid” computing, can be seen as a special type of computing that
relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces,
etc.) connected to a computer network (private or public) by a conventional network interface,
such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many
processors connected by a local high-speed computer bus.
Grid computing combines computers from multiple administrative domains to reach a common
goal or to solve a single task; the grid may then disappear just as quickly.
One of the main strategies of grid computing is to use middleware to divide and apportion pieces
of a program among several computers, sometimes up to many thousands. Grid computing
involves computation in a distributed fashion, which may also involve the aggregation of
large-scale clusters.
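A toy, single-machine sketch of this divide-and-apportion strategy is given below in Python. It
uses a process pool as a stand-in for grid nodes; real grid middleware also handles scheduling,
security, and data movement across administrative domains, none of which is modeled here, and
the workload itself is hypothetical:

    from concurrent.futures import ProcessPoolExecutor

    def work_unit(chunk):
        # Stand-in for a non-interactive, compute-heavy piece of a larger job.
        return sum(i * i for i in chunk)

    def split(n, pieces):
        step = n // pieces
        return [range(i * step, (i + 1) * step) for i in range(pieces)]

    if __name__ == "__main__":
        chunks = split(1000000, 8)
        with ProcessPoolExecutor() as pool:   # workers standing in for grid nodes
            partials = list(pool.map(work_unit, chunks))
        print("combined result:", sum(partials))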
The size of a grid may vary from small confined to a network of computer workstations within a
corporation, for example to large, public collaborations across many companies and networks.
"The notion of a confined grid may also be known as intra-nodes cooperation whilst the notion of
a larger, wider grid may thus refer to an inter-nodes cooperation".
This technology has been applied to computationally intensive scientific, mathematical, and academic
problems through volunteer computing, and it is used in commercial enterprises for such diverse
applications as drug discovery, economic forecasting, seismic analysis, and back office data
processing in support for e-commerce and Web services.
Coordinating applications on Grids can be a complex task, especially when coordinating the flow
of information across distributed computing resources. Grid workflow systems have been
developed as a specialized form of a workflow management system designed specifically to
compose and execute a series of computational or data manipulation steps, or a workflow, in the
Grid context.
Market segmentation of the grid computing market
For the segmentation of the grid computing market, two perspectives need to be considered: the
provider side and the user side:
The provider side
The overall grid market comprises several specific markets. These are the grid middleware
market, the market for grid-enabled applications, the utility computing market, and the
software-as-a-service (SaaS) market.
Grid middleware is a specific software product, which enables the sharing of heterogeneous
resources, and Virtual Organizations. It is installed and integrated into the existing infrastructure
of the involved company or companies, and provides a special layer placed among the
heterogeneous infrastructure and the specific user applications. Major grid middlewares are
Globus Toolkit, gLite, and UNICORE.
Utility computing refers to the provision of grid computing and applications as a service,
either as an open grid utility or as a hosting solution for one organization or a VO. Major players
in the utility computing market are Sun Microsystems, IBM, and HP.
Grid-enabled applications are specific software applications that can utilize grid infrastructure.
This is made possible by the use of grid middleware, as pointed out above.
Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one
or more providers.” (Gartner 2007) Additionally, SaaS applications are based on a single set of
common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a
Pay As You Go (PAYG) model or a subscription model that is based on usage. Providers of SaaS
do not necessarily own the computing resources themselves, which are required to run their SaaS.
Therefore, SaaS providers may draw upon the utility computing market. The utility computing
market provides computing resources for SaaS providers.
The user side
For companies on the demand or user side of the grid computing market, the different segments
have significant implications for their IT deployment strategy. The IT deployment strategy as
well as the type of IT investments made are relevant aspects for potential grid users and play an
important role for grid adoption.
MOBILE COMPUTING:
Mobile computing is human–computer interaction by which a computer is expected to be
transported during normal usage. Mobile computing involves mobile communication, mobile
hardware, and mobile software. Communication issues include ad hoc and infrastructure
networks as well as communication properties, protocols, data formats and concrete technologies.
Hardware includes mobile devices or device components. Mobile software deals with the
characteristics and requirements of mobile applications.
Mobile Computing is "taking a computer and all necessary files and software out into the
field". There are several different dimensions under which mobile computers can be defined: in
terms of physical dimensions; in terms of how devices may be hosted; in terms of when the
mobility occurs; in terms of how devices are networked; in terms of the type of computing that
is performed.
In terms of dimensions, mobile computers tend to be planar and tend to range in size from
centimeters to decimeters. A mobile computer may itself be mobile, e.g., embedded in a robot or
vehicle that is mobile, or it may not be mobile itself but be carried by a mobile host, e.g., a
mobile phone carried by a mobile human. The most flexible mobile computer is one that can move
during its operation or user session, but this depends in part on the range of any wireless
network it is connected to. A tablet or laptop computer connected via Wi-Fi can move while
staying connected within the range of its WLAN transmitter. To move between WLANs in different
locations, the device must interrupt, suspend, or close its current user session before
connecting to another WLAN transmitter in another session. A device
such as a tablet or mobile phone can move much further while staying connected within the range
of a GSM network as it can seamlessly move between multiple GSM transmitters or Base
stations. Mobile computers may also support or form part of a more local network that moves as
the devices, i.e., mobile computers may also be used as part of a Wireless Body Area
Network, Wireless Personal Area Network or a piconet.
The most common forms of mobile computing devices are as follows.
- portable computers: compact, lightweight units with a full character set keyboard, primarily
  intended as hosts for software that may be parameterized, such as laptops, notebooks,
  notepads, etc.
- mobile phones: devices with a restricted key set, primarily intended for (but not restricted
  to) vocal communications, such as cell phones, smart phones, phone pads, etc.
- smart cards: cards that can run multiple applications, but typically payment, travel and
  secure area access
- wearable computers: devices mostly limited to functional keys and primarily intended to host
  software agents, such as watches, wristbands, necklaces, keyless implants, etc.
The existence of these classes is expected to be long lasting and complementary in personal
usage, with none replacing the others in all features of convenience.
Limitations
- Range & Bandwidth: Mobile Internet access is generally slower than direct cable connections,
  using technologies such as GPRS and EDGE, and more recently HSDPA and HSUPA 3G and 4G
  networks. These networks are usually available within range of commercial cell phone towers.
  Higher speed wireless LANs are inexpensive but have very limited range.
- Security standards: When working mobile, one is dependent on public networks, requiring
  careful use of a VPN. Security is a major concern for mobile computing standards deployed
  across a fleet of devices, since the VPN can be attacked through the large number of networks
  interconnected through the line.
- Power consumption: When a power outlet or portable generator is not available, mobile
  computers must rely entirely on battery power. Combined with the compact size of many
  mobile devices, this often means unusually expensive batteries must be used to obtain the
  necessary battery life.
- Transmission interferences: Weather, terrain, and the range from the nearest signal point can
  all interfere with signal reception. Reception in tunnels, some buildings, and rural areas is
  often poor.
- Potential health hazards: People who use mobile devices while driving are often distracted
  from driving and are thus assumed more likely to be involved in traffic accidents.[3] (While
  this may seem obvious, there is considerable discussion about whether banning mobile device
  use while driving reduces accidents or not.[4][5]) Cell phones may interfere with sensitive
  medical devices. Questions concerning mobile phone radiation and health have been raised.
- Human interface with device: Screens and keyboards tend to be small, which may make them
  hard to use. Alternate input methods such as speech or handwriting recognition require
  training.
Ubiquitous computing:
The word "ubiquitous" can be defined as "existing or being everywhere at the same
time," "constantly encountered," and "widespread." When applying this concept to technology,
the term ubiquitous implies that technology is everywhere and we use it all the time.
Because of the pervasiveness of these technologies, we tend to use them without thinking about
the tool. Instead, we focus on the task at hand, making the technology effectively invisible to the
user.
Ubiquitous computing (ubicomp) is a concept in software engineering and computer science where
computing is made to appear everywhere and anywhere. In contrast to desktop computing,
ubiquitous computing can occur using any device, in any location, and in any format. A user
interacts with the computer, which can exist in many different forms, including laptop computers,
tablets and terminals in everyday objects such as a fridge or a pair of glasses. The underlying
technologies to support ubiquitous computing include Internet, advanced middleware, operating
system, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile
protocols, location and positioning and new materials.
This new paradigm is also described as pervasive computing, ambient intelligence,[1] ambient
media or 'everyware'.[3] Each term emphasizes slightly different aspects. When primarily
concerning the objects involved, it is also known as physical computing, the Internet of Things,
haptic computing,[4] and 'things that think'. Rather than propose a single definition for
ubiquitous computing and for these related terms, a taxonomy of properties for ubiquitous
computing has been proposed, from which different kinds or flavors of ubiquitous systems and
applications can be described.[5]
Ubiquitous computing touches on a wide range of research topics, including distributed
computing, mobile computing, location computing, mobile networking, context-aware computing,
sensor networks, human-computer interaction, and artificial intelligence.
Ubiquitous computing presents challenges across computer science: in systems design and
engineering, in systems modeling, and in user interface design. Contemporary human-computer
interaction models, whether command-line, menu-driven, or GUI-based, are inappropriate and
inadequate to the ubiquitous case. This suggests that the "natural" interaction paradigm
appropriate to a fully robust ubiquitous computing has yet to emerge - although there is also
recognition in the field that in many ways we are already living in a ubicomp world (see also the
main article on Natural User Interface). Contemporary devices that lend some support to this
latter idea include mobile phones, digital audio players, radio-frequency identification tags,
GPS, and interactive whiteboards.
Mark Weiser proposed three basic forms for ubiquitous system devices (see also smart device):
tabs, pads and boards.
• Tabs: wearable centimeter-sized devices
• Pads: hand-held decimeter-sized devices
• Boards: meter-sized interactive display devices.
These three forms proposed by Weiser are characterized by being macro-sized, having a planar
form and incorporating visual output displays. If we relax each of these three characteristics
we can expand this range into a much more diverse and potentially more useful range of
ubiquitous computing devices. Hence, three additional forms for ubiquitous systems have been
proposed:[5]
• Dust: miniaturized devices can be without visual output displays, e.g. Micro-Electro-Mechanical
Systems (MEMS), ranging from nanometers through micrometers to millimeters. See also Smart dust.
• Skin: fabrics based upon light-emitting and conductive polymers and organic computer devices
can be formed into more flexible, non-planar display surfaces and products such as clothes
and curtains; see OLED display. MEMS devices can also be painted onto various surfaces so
that a variety of physical-world structures can act as networked surfaces of MEMS.
• Clay: ensembles of MEMS can be formed into arbitrary three-dimensional shapes as artifacts
resembling many different kinds of physical object (see also Tangible interface).
In his book The Rise of the Network Society, Manuel Castells suggests that there is an ongoing
shift from already-decentralized, stand-alone microcomputers and mainframes towards entirely
pervasive computing. In his model of a pervasive computing system, Castells uses the example of
the Internet as the start of a pervasive computing system. The logical progression from that
paradigm is a system where that networking logic becomes applicable in every realm of daily
activity, in every location and every context. Castells envisages a system where billions of
miniature, ubiquitous inter-communication devices will be spread worldwide, "like pigment in the
wall paint".
Ubiquitous computing may be seen to consist of many layers, each with their own roles, which
together form a single system:
Layer 1: task management layer
• monitors the user's task, context and index
• maps the user's task to the need for services in the environment
• manages complex dependencies
Layer 2: environment management layer
• monitors a resource and its capabilities
• maps service needs and user-level states to specific capabilities
Layer 3: environment layer
• monitors a relevant resource
• manages reliability of the resources.
Issues
Privacy is easily the most often-cited criticism of ubiquitous computing (ubicomp), and may be
the greatest barrier to its long-term success.
These are the kinds of privacy principles that have been established by the industry - but over the
past two years, we have been trying to understand whether such principles reflect the concerns of
the ordinary citizen.
Public policy problems are often “preceded by long shadows, long trains of activity”, emerging
slowly, over decades or even the course of a century. There is a need for a long-term view to
guide policy decision making, as this will assist in identifying long-term problems or
opportunities related to the Ubiquitous Computing environment. This information can reduce
uncertainty and guide the decisions of both policy makers and those directly involved in system
development (Wedemeyer et al. 2001).
One important consideration is the degree to which different opinions form around a single
problem. Some issues may have strong consensus about their importance, even if there are great
differences in opinion regarding the cause or solution. For example, few people will differ in their
assessment of a highly tangible problem with physical impact such as terrorists using new
weapons of mass destruction to destroy human life. The problem statements outlined above that
address the future evolution of the human species or challenges to identity have clear cultural or
religious implications and are likely to have greater variance in opinion about them.[16]
There is still the issue of what 'content' actually is in a ubiquitous environment. Whereas in other
media environments the interface is clearly distinct, in a ubiquitous environment 'content' differs.
Artur Lugmayr defined such a smart environment by describing it as ambient media. It is
constituted by the communication of information in ubiquitous and pervasive environments. The
concept of ambient media relates to ambient media form, ambient media content, and ambient
media technology. Its principles have been established by Artur Lugmayr and are manifestation,
morphing, intelligence, and experience.
Short Answer:
1. What is Grid computing?
2. What is a high level language?
3. What is Ubiquitous computing?
4. What is an operating system?
5. Explain about Data Resources.
Long Answers:
1. Explain system software and application software.
2. Discuss Managing Application Development.
3. Explain the concept of Input-Output technologies.
4. Briefly explain the concept of System Architecture.
Unit - III
Communication Technology: Communication Technology
Information and communications technology (ICT) is often used as an extended synonym
for information technology (IT), but is a more specific term that stresses the role of unified
communications[1] and the integration of telecommunications (telephone lines and wireless
signals), computers as well as necessary enterprise software, middleware, storage, and audiovisual systems, which enable users to access, store, transmit, and manipulate information.[2]
The term ICT is also used to refer to the convergence of audio-visual and telephone
networks with computer networks through a single cabling or link system. There are large
economic incentives (huge cost savings due to elimination of the telephone network) to merge the
telephone network with the computer network system using a single unified system of cabling,
signal distribution and management.
WWW – Intranets – Extranets:
Internet: This is the world-wide network of computers accessible to anyone who knows their
Internet Protocol (IP) address - the IP address is a unique set of numbers (such as 209.33.27.100)
that defines the computer's location. Most people will have accessed a computer using a name
such as http://www.hcidata.com. Before this named computer can be accessed, the name needs to be
resolved (translated) into an IP address. To do this your browser (for example Netscape or Internet
Explorer) will access a Domain Name Server (DNS) computer to look up the name and return an IP
address - or issue an error message to indicate that the name was not found. Once your browser has
the IP address it can access the remote computer. The actual server (the computer that serves up
the web pages) does not reside behind a firewall - if it did, it would be an Extranet. It may
implement security at a directory level so that access is via a username and password, but
otherwise all the information is accessible. To see typical security, have a look at a sample
secure directory - the username is Dr and the password is Who (both username and password are case
sensitive).
Intranet: This is a network that is not available to the world outside of the Intranet. If the
Intranet network is connected to the Internet, the Intranet will reside behind a firewall and, if
it allows access from the Internet, will be an Extranet. The firewall helps to control access
between the Intranet and Internet to permit access to the Intranet only to people who are members
of the same company or organisation. In its simplest form, an Intranet can be set up on a networked
PC without any PC on the network having access via the Intranet network to the Internet. For
example, consider an office with a few PCs and a few printers all networked together. The network
would not be connected to the outside world. On one of the drives of one of the PCs there would be
a directory of web pages that comprise the Intranet. Other PCs on the network could access this
Intranet by pointing their browser (Netscape or Internet Explorer) to this directory - for example
U:\inet\index.htm. From then onwards they would navigate around the Intranet in the same way as
they would get around the Internet.
Extranet: An Extranet is actually an Intranet that is partially accessible to authorized outsiders.
The actual server (the computer that serves up the web pages) will reside behind a firewall. The
firewall helps to control access between the Intranet and Internet, permitting access to the
Intranet only to people who are suitably authorized. The level of access can be set to different
levels for individuals or groups of outside users. The access can be based on a username and
password or an IP address (a unique set of numbers such as 209.33.27.100 that defines the computer
that the user is on).
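The name-resolution step described above, in which the browser asks a Domain Name Server (DNS) to translate a name such as www.hcidata.com into an IP address, can be sketched with Python's standard library. This is an illustrative sketch only; the host name simply reuses the example from the text, and the lookup fails gracefully if no network access is available.

import socket

def resolve(host_name):
    """Ask the system's DNS resolver for the IP address behind a host name."""
    try:
        return socket.gethostbyname(host_name)   # e.g. returns a dotted address such as 209.33.27.100
    except socket.gaierror:
        return None                              # the equivalent of the browser's "name was not found" error

print(resolve("www.hcidata.com"))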
Voice Networks Data Communication Networks:
A telecommunications network is a collection of terminal nodes, links and any intermediate
nodes which are connected so as to enable telecommunication between the terminals.[1]
The transmission links connect the nodes together. The nodes use circuit switching, message
switching or packet switching to pass the signal through the correct links and nodes to reach the
correct destination terminal.
Each terminal in the network usually has a unique address so messages or connections can be
routed to the correct recipients. The collection of addresses in the network is called the address
space.
Examples of telecommunications networks are:[2]
• computer networks
• the Internet
• the telephone network
• the global Telex network
• the aeronautical ACARS network
Benefits of Telecommunications and Networking
Telecommunications can greatly increase and expand resources to all types of people. For
example, businesses need a greater telecommunications network if they plan to expand their
company. With Internet, computer, and telephone networks, businesses can allocate their
resources efficiently. These core types of networks will be discussed below:
Computer Network: A computer network consists of computers and devices connected to one
another. Information can be transferred from one device to the next. For example, an office filled
with computers can share files together on each separate device. Computer networks can range
from a local network area to a wide area network. The difference between the types of networks
is the size. These types of computer networks work at certain speeds, also known as broadband.
The Internet network can connect computers worldwide.
Internet Network: Access to the Internet allows users to reach many resources. Over time, the
Internet has come to supplement, and in some cases replace, printed reference material, enabling
users to discover information almost instantly and apply concepts to different situations. The
Internet can be used for recreational, governmental, educational, and other purposes. Businesses in
particular use the Internet network for research or to service customers and clients.
Telephone Network: The telephone network connects people to one another and can be used in a
variety of ways. Many businesses use the telephone network to route calls and/or service their
customers. Some businesses use a telephone network on a greater scale through a private branch
exchange (PBX), a private switching system that routes calls within the organization and connects
them to the public network; call routing and servicing may also be handed over to a business that
specializes in handling calls for other companies. The majority of the time, the telephone network
is used around the world for recreational purposes.
Last Mile:
The last mile or last kilometer is a widely accepted phrase used in the telecommunications, cable
television and Internet industries to refer to the final leg of the telecommunications network that
delivers connectivity to retail end-users (customers).
Wireless System:
The World Wireless System was a turn-of-the-20th-century proposed communication and
electrical power delivery system designed by the inventor Nikola Tesla, based on his theories of
using the Earth and/or the Earth's atmosphere as an electrical conductor.
The second result demonstrated how energy can be made to go through space without any
connecting wires. The wireless energy transmission effect involves the creation of an electric
field between two metal plates, each being connected to one terminal of an induction coil’s
secondary winding. A gas discharge tube was used as a means of detecting the presence of the
transmitted energy. Some demonstrations involved lighting two partially evacuated tubes in an
alternating electrostatic field while they were held in the hand of the experimenter.[14]
In his wireless transmission lectures Tesla proposed the technology could include the
telecommunication of information.
Web Hosting:
A web hosting service is a type of Internet hosting service that allows individuals and
organizations to make their website accessible via the World Wide Web. Web hosts are
companies that provide space on a server owned or leased for use by clients, as well as providing
Internet connectivity, typically in a data center. Web hosts can also provide data center space and
connectivity to the Internet for other servers located in their data center, called colocation, also
known as Housing in Latin America or France.
The scope of web hosting services varies greatly. The most basic is web page and small-scale file
hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The
files are usually delivered to the Web "as is" or with minimal processing. Many Internet service
providers (ISPs) offer this service free to subscribers. Individuals and organizations may also
obtain Web page hosting from alternative service providers. Personal web site hosting is typically
free, advertisement-sponsored, or inexpensive. Business web site hosting often has a higher
expense depending upon the size and type of the site.
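As a small, illustrative sketch of the basic hosting workflow mentioned above (uploading pages to the host's server via FTP), the following Python snippet uses the standard ftplib module. The host name, credentials and file names are placeholders for a hypothetical hosting account, not details from the text.

from ftplib import FTP

def upload_page(local_file="index.html", remote_file="index.html"):
    """Upload one local web page to the (hypothetical) hosting account's web root."""
    with FTP("ftp.example-host.com") as ftp:            # placeholder web host
        ftp.login(user="site_owner", passwd="secret")   # placeholder credentials
        with open(local_file, "rb") as fh:
            ftp.storbinary(f"STOR {remote_file}", fh)   # transfer the file over FTP

# upload_page()   # commented out: would transfer index.html to the placeholder host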
Application Service Providers:
An Application Service Provider (ASP) is a business providing computer-based services to
customers over a network; such as access to a particular software application (such as customer
relationship management) using a standard protocol (such as HTTP).
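A minimal sketch of the ASP model from the customer's side: the client reaches the provider's hosted application over a standard protocol (HTTP) using Python's urllib. The URL is a placeholder standing in for the provider's application address, not a real service.

from urllib import request, error

def call_hosted_app(url="http://example.com/"):      # placeholder for the ASP's application URL
    """Fetch a response from the provider's hosted application over HTTP."""
    try:
        with request.urlopen(url) as resp:
            return resp.status, resp.read()[:200]    # status code and the first bytes of the reply
    except error.URLError as exc:
        return None, exc.reason                      # network unavailable or host unreachable

print(call_hosted_app())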
Short Answer:
1. List out the types of operating systems.
2. What is single user operating system?
3. What is multi user operating system?
4. List out the different kinds of DOS.
5. What are the types of Command in DOS?
Long Answer:
1. What are the types of Operating System?
2. Explain the components of Internet.
3. Explain the levels of Internet Connectivity.
4. Explain FTP and its features.
5. Explain Telnet and its features.
Unit - IV
IT APPLICATIONS:
ENTERPRISE SYSTEM:
Enterprise systems (ES) are large-scale application software packages that support business
processes, information flows, reporting, and data analytics in complex organizations. While ES
are generally packaged enterprise application software (PEAS) systems, they can also be bespoke,
custom developed systems created to support a specific organization's needs.
Types of enterprise systems include:
• enterprise resource planning (ERP) systems,
• customer relationship management (CRM) software, and
• supply chain management (SCM) systems.
Although data warehousing or business intelligence systems are enterprise-wide packaged
application software often sold by ES vendors, since they do not directly support execution of
business processes, they are often excluded from the term.
Enterprise systems are built on software platforms, such as SAP’s NetWeaver and Oracle’s
Fusion, and databases.
From a hardware perspective, enterprise systems are the servers, storage and associated software
that large businesses use as the foundation for their IT infrastructure. These systems are designed
to manage large volumes of critical data. These systems are typically designed to provide high
levels of transaction performance and data security.
In the book "Enterprise Information Systems: Contemporary Trends and Issues," David Olson
states that enterprise systems integrate a number of different applications, protocols and formats.
In doing so, an enterprise system allows companies to integrate business processes, such as sales,
deliveries and accounts receivable, by sharing information across business functions and
employee hierarchies. Enterprise systems can replace multiple independent systems that may or
may not interact with other systems and that process data to support particular business functions
or processes. For example, enterprise resource planning supports the entire sales process that
includes pre-sales activities, sales orders, inventory sourcing, deliveries, billing and customer
payments. Enterprise resource planning, supply chain management and customer relationship
management systems are each examples of enterprise systems.
Customer Relationship Management(CRM)
Customer relationship management systems were developed to address the need to raise a sales
department’s productivity and make the management of a company’s customers an effective way
to increase sales. With CRM functions, such as sales opportunity management, a company learns
more about its customers’ needs and buying behavior and combines this information with market
information to enhance the quality of the company’s marketing plans and sales forecasts. Other
attributes of the CRM system, including the integration of this system with other systems and
system access via mobile devices, allow employees to update and compare data regardless of the
system it’s in and to access information from any client site or other location. Equally important,
CRM supports mass e-mail communications and automates the sales process workflow to
improve employee productivity.
CRM products come with many features and tools and it is important for a company to choose a
product based on their specific organizational needs. Most vendors will present information on
their respective websites.

• Features: These are what the product actually does and what value it can provide to an
organization.
• Support: Many CRM vendors have a basic level of support which generally only includes
email and/or access to a support forum.[2][3][4] Telephone support is often charged in either an
annual or ad hoc pricing strategy. Some companies offer on-site support for an extra premium.[5]
• Pricing: This will be shown either per-user[4] or as a flat price for a number of users.[6]
Vendors charge annually, quarterly, or monthly with variable pricing options for different features.
• Demonstration Periods: Many vendors offer a trial period and/or online demonstrations.
CRM software
CRM software consolidates customer information and documents into a single CRM database so
business users can more easily access and manage it. The other main functions of this software
include recording various customer interactions (over email, phone calls, social media or other
channels, depending on system capabilities), automating various workflow processes such as
tasks, calendars and alerts, and giving managers the ability to track performance and productivity
based on information logged within the system.
Common features of CRM software include:
• Marketing automation: CRM tools with automation capabilities can automate repetitive
tasks to enhance marketing efforts to customers at different points in the lifecycle. For
example, as sales prospects come into the system, the system might automatically send them
marketing materials, typically via email or social media, with the goal of turning a sales lead
into a full-fledged customer (a small sketch follows this list).
• Sales force automation: Also known as sales force management, sales force automation is
meant to prevent duplicate efforts between a salesperson and a customer. A CRM system can
help achieve this by automatically tracking all contact and follow-ups between both sides.
• Contact center automation: Designed to reduce tedious aspects of a contact center agent's
job, contact center automation might include pre-recorded audio that assists in customer
problem-solving and information dissemination. Various software tools that integrate with the
agent's desktop tools can handle customer requests in order to cut down the time of calls and
simplify customer service processes.
• Geolocation technology, or location-based services: Some CRM systems include technology
that can create geographic marketing campaigns based on customers' physical locations,
sometimes integrating with popular location-based GPS apps. Geolocation technology can also
be used as a networking or contact management tool in order to find sales prospects based on
location.
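The following Python sketch illustrates the marketing automation feature referred to above: when a new prospect appears in the lead table, a templated e-mail is composed and the interaction is logged by updating the lead's status. The lead records, addresses and SMTP host are illustrative assumptions, not the API of any real CRM product.

from email.message import EmailMessage
import smtplib

crm_leads = [  # hypothetical in-memory stand-in for a CRM lead table
    {"name": "A. Buyer", "email": "a.buyer@example.com", "status": "new"},
    {"name": "B. Client", "email": "b.client@example.com", "status": "contacted"},
]

def build_welcome_mail(lead):
    """Compose the templated marketing message for one new lead."""
    msg = EmailMessage()
    msg["Subject"] = "Thanks for your interest"
    msg["From"] = "sales@example.com"
    msg["To"] = lead["email"]
    msg.set_content(f"Hello {lead['name']},\n\nHere is the product brochure you requested.")
    return msg

def run_automation(leads, smtp_host="smtp.example.com"):
    """Send the welcome mail to every lead still marked 'new', then log the contact."""
    with smtplib.SMTP(smtp_host) as server:
        for lead in leads:
            if lead["status"] == "new":
                server.send_message(build_welcome_mail(lead))
                lead["status"] = "contacted"   # record the interaction in the CRM data

# run_automation(crm_leads)   # commented out: would contact the placeholder SMTP server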
Supply Chain Management(SCM)
A supply chain refers to the collection of people, tasks, equipment, data and other resources
required to produce and move products from a vendor to a customer. Dr. Robert Handfield, the Bank
of America University Distinguished Professor of Supply Chain Management, describes supply chain
management as the management of supply chain activities by the supply chain firms in an effective
and efficient way. According to Handfield, such activities
include product development, material sourcing, production and logistics as well as the
information systems that coordinate these activities. Information flows allow supply chain
partners to coordinate their strategic and operational plans as well as the day-to-day flow of goods
and materials through the supply chain. The physical flows include the manufacture, transport
and storage of goods or materials.
Supply Chain Management(SCM) is the management of the flow of goods and services.[2] It
includes the movement and storage of raw materials, work-in-process inventory, and finished
goods from point of origin to point of consumption. Interconnected or interlinked networks,
channels and node businesses are involved in the provision of products and services required by
end customers in a supply chain.[3] Supply chain management has been defined as the "design,
planning, execution, control, and monitoring of supply chain activities with the objective of
creating net value, building a competitive infrastructure, leveraging worldwide logistics,
synchronizing supply with demand and measuring performance globally."
SCM draws heavily from the areas of operations management, logistics, procurement, and
information technology, and strives for an integrated approach.
ENTERPRISE RESOURCE PLANNING (ERP)
The enterprise resource planning system integrates software applications just as a company
integrates business processes, such as purchasing, finance, human resources and inventory
management. Within an ERP system, the integrated software modules, such as sales, quality
management and accounts receivable, communicate and share data. Each of these modules
consists of multiple applications that perform the functions required to execute particular
end-to-end business processes. For example, the sales module includes the applications necessary to
create and manage sales contracts, sales orders, sales invoices and sales order pricing. ERP
applications support not only various operational and administrative tasks, such as the creation of
an account payable or a time sheet, they may also be customized to support a number of different
industries, including oil and gas, retail and banking.
This is not a book about how to select software and install it on your computers. Rather, it’s a
book about how to implement superior business processes in your company—processes that yield
a competitive advantage.
Right now you might be thinking: “Wait a minute. The name of this book is ERP. How can it
not be about software?”
The answer is that Enterprise Resource Planning (ERP) is not software. One more time: ERP is
not software. There’s a lot of sloppy terminology flying around today in the business press, and
one misnomer is to label enterprise-wide transaction processing software systems as ERP. These
software packages support effective resource planning and make much of it feasible, but they
don’t truly do it. Plus these packages contain many business processes other than resource
planning.
Therefore, we need to trot out another acronym that does refer to software: ES. This stands for
Enterprise System or Enterprise Software. In his book Mission Critical, author Thomas H.
Davenport describes enterprise systems as “packages of computer applications that support
many, even most, aspects of a company’s information needs.”
That makes sense to us. Now for another distinction: Not all ERP business functions are
contained in the typical Enterprise Software (ES) suite.
Figure 1-1: ERP Processes versus the typical Enterprise System (ES)
• ERP processes NOT part of a typical ES: Sales Forecasting, Sales and Operations Planning,
Advanced Planning Systems, Supplier Rating Systems, Performance Metrics.
• ERP processes FOUND IN a typical ES: Master Production Scheduling, Rough-Cut Capacity Planning,
Material Requirements Planning, Capacity Requirements Planning, Distribution Requirements Planning,
Customer Order Entry and Promising.
• Non-ERP processes FOUND IN a typical ES: Accounts Receivable, Accounts Payable, General Ledger,
Cash Management, Customer Relations Management, Human Resources, Data Warehousing.
Similarly, the typical ES contains software support for business processes that are not
a part of ERP. In Figure 1-1, we can see that distinction graphically. Please note the three areas
on that diagram. The rightmost part of the figure refers to those functions contained within a
typical ES that are not part of ERP; the leftmost area is for those ERP functions not normally
supported by an ES; the area of overlap in the center references those ERP functions typically
supported by Enterprise Software.
Now let’s take a look at just what this ERP thing is all about.
WHAT IS ENTERPRISE RESOURCE PLANNING AND WHAT DOES IT DO?
Enterprise Resource Planning (ERP)—and its predecessor, Manufacturing Resource Planning
(MRP II)—is helping to transform our industrial landscape. It’s making possible profound
improvements in the way manufacturing companies are managed. It is a strong contributor to
America’s amazing economic performance of the 1990s and the emergence of the New Economy.
A half century from now, when the definitive industrial history of the twentieth century is written, the evolution of ERP will be viewed as a watershed event. Let’s describe Enterprise Resource
Planning as:
An enterprise-wide set of management tools that balances demand and supply, containing the
ability to link customers and suppliers into a complete supply chain, employing proven
business processes for decision-making, and providing high degrees of cross-functional
integration among sales, marketing, manufacturing, operations, logistics, purchasing, finance,
new product development, and human resources, thereby enabling people to run their business
with high levels of customer service and productivity, and simultaneously lower costs and
inventories; and providing the foundation for effective e-commerce.
Here are some descriptions of ERP, not definitions but certainly good examples.
Enterprise Resource Planning is a company increasing its sales by 20 percent in the face of an
overall industry decline. Discussing how this happened, the vice president of sales explained:
“We’re capturing lots of business from our competitors. We can out-deliver them. Thanks to (ERP),
we can now ship quicker than our competition, and we ship on time.”
Enterprise Resource Planning is a Fortune 50 corporation achieving enormous cost savings and
acquiring a significant competitive advantage. The vice president of logistics stated: “ERP has
provided the key to becoming a truly global company. Decisions can be made with accurate
data and with a process that connects demand and supply across borders and oceans. This
change is worth billions to us in sales worldwide.”
Enterprise Resource Planning is a purchasing department generating enormous cost reductions
while at the same time increasing its ability to truly partner with its suppliers. The director of
purchasing claimed: “For the first time ever, we have a good handle on our future requirements
for components and raw materials. When our customer demand changes, we ourselves and our
suppliers can manage changes to our schedules on a very coordinated and controlled basis. I
don’t see how any company can do effective supply chain management without ERP.”
THE EVOLUTION OF ENTERPRISE RESOURCE PLANNING
Step One—Material Requirements Planning (MRP)
ERP began life in the 1960s as Material Requirements Planning (MRP), an outgrowth of early
efforts in bill of material processing. MRP’s inventors were looking for a better method of
ordering material and components, and they found it in this technique. The logic of material
requirements planning asks the following questions:
• What are we going to make?
• What does it take to make it?
• What do we have?
• What do we have to get?
This is called the universal manufacturing equation. Its logic applies wherever things are being
produced whether they be jet aircraft, tin cans, machine tools, chemicals, cosmetics . . . or
Thanksgiving dinner.
Material Requirements Planning simulates the universal manufacturing equation. It uses the
master schedule (What are we going to make?), the bill of material (What does it take to make
it?), and inventory records (What do we have?) to determine future requirements (What do we
have to get?).
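As an illustrative sketch (not taken from the source), the following Python snippet applies this logic to assumed data: the master schedule says what will be made, the bill of material says what it takes, the inventory records say what is on hand, and netting the three tells us what has to be obtained.

master_schedule = {"table": 10}                                  # what are we going to make?
bill_of_material = {"table": {"top": 1, "leg": 4, "screw": 8}}   # what does it take to make it?
inventory = {"top": 3, "leg": 10, "screw": 100}                  # what do we have?

def net_requirements(schedule, bom, on_hand):
    """Net gross component requirements against inventory: what do we have to get?"""
    gross, net = {}, {}
    for product, qty in schedule.items():
        for component, per_unit in bom[product].items():
            gross[component] = gross.get(component, 0) + per_unit * qty
    for component, required in gross.items():
        net[component] = max(required - on_hand.get(component, 0), 0)
    return net

print(net_requirements(master_schedule, bill_of_material, inventory))
# {'top': 7, 'leg': 30, 'screw': 0}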
For a visual depiction of this and the subsequent evolutionary steps, please see Figure 1-2, a
modified version of a diagram in Carol Ptak’s recent book on ERP.
Figure 1-2: Evolution of ERP. MRP evolved into Closed-Loop MRP, which grew into MRP II, which in
turn expanded into ERP, each stage containing the previous one.
Step Two—Closed-Loop MRP
MRP quickly evolved, however, into something more than merely a better way to order. Early
users soon found that Material Requirements Planning contained capabilities far greater than
merely giving better signals for reordering. They learned this technique could help to keep order
due dates valid after the orders had been released to production or to suppliers. MRP could detect
when the due date of an order (when it’s scheduled to arrive) was out of phase with its need date
(when it’s required).
Figure 1-3: Priority vs. Capacity. Priority addresses “Which ones?” (sequence, scheduling);
capacity addresses “Enough?” (volume, loading).
This was a breakthrough. For the first time ever in manufacturing, there was a formal
mechanism for keeping priorities valid in a constantly changing environment. This is important,
because in a manufacturing enterprise, change is not simply a possibility or even a probability.
It’s a certainty, the only constant, the only sure thing. The function of keeping order due dates
valid and synchronized with these changes is known as priority planning.
Further planning capabilities were tied in: customer-order promising (demand management) and
high-level resource analysis (Rough-Cut Capacity Planning). Systems to aid in executing the plan
were also tied in: various plant scheduling techniques for the inside factory, and supplier
scheduling for the outside factory, the suppliers.
These developments resulted in the second step in this evolution: closed-loop MRP.
Closed-loop MRP has a number of important characteristics:
It’s a series of functions, not merely material requirements planning.
It contains tools to address both priority and capacity, and to support both planning and
execution.
It has provisions for feedback from the execution functions back to the planning functions. Plans
can then be altered when necessary, thereby keeping priorities valid as conditions change.
Step Three—Manufacturing Resource Planning (MRP II)
The next step in this evolution is called Manufacturing Resource Planning or MRP II (to
distinguish it from Material Requirements Planning, MRP). A direct outgrowth and extension of
closed-loop MRP, it involves three additional elements:
1. Sales & Operations Planning—a powerful process to balance demand and supply at the
volume level, thereby providing top management with far greater control over operational
aspects of the business.
2. Financial interface—the ability to translate the operating plan (in pieces, pounds, gallons,
or other units) into financial terms (dollars).
3. Simulation—the ability to ask “what-if” questions and to obtain actionable answers—in
both units and dollars. Initially this was done only on an aggregate, “rough-cut” basis, but
today’s advanced planning systems (APS) enable effective simulation at very detailed
levels.
Now it’s time to define Manufacturing Resource Planning. This definition, and the one to
follow, come from APICS, The Educational Society for Resource Management. APICS is the
leading professional society in this field, and its dictionary has set the standard for terminology
over the years.
MANUFACTURING RESOURCE PLANNING (MRP II)— A method for the effective
planning of all resources of a manufacturing company. Ideally, it addresses operational
planning in units, financial planning in dollars, and has a simulation capability to answer
“what-if” questions. It is made up of a variety of functions, each linked together: business
planning, sales and operations planning, production planning, master scheduling, material
requirements planning, capacity requirements planning, and the execution support systems for
capacity and material. Output from these systems is integrated with financial reports such as the
business plan, purchase commitment report, shipping budget, and inventory projections in
dollars. Manufacturing resource planning is a direct outgrowth and extension of closed-loop
MRP.
Step Four—Enterprise Resource Planning (ERP)
The latest step in this evolution is Enterprise Resource Planning (ERP). The fundamentals of ERP
are the same as with MRP II. However, thanks in large measure to enterprise software, ERP as a
set of business processes is broader in scope, and more effective in dealing with multiple business
units. Financial integration is even stronger. Supply chain tools, supporting business across
company boundaries, are more robust. For a graphical view of ERP, see Figure 1-5.
Let’s now look at a complete definition of ERP, based on the description we saw a few pages
back:
ENTERPRISE RESOURCE PLANNING (ERP) predicts and balances demand and supply. It
is an enterprise-wide set of forecasting, planning, and scheduling tools, which:
• links customers and suppliers into a complete supply chain,
• employs proven processes for decision-making, and
• coordinates sales, marketing, operations, logistics, purchasing, finance, product
development, and human resources.
Its goals include high levels of customer service, productivity, cost reduction, and inventory
turnover, and it provides the foundation for effective supply chain management and e-commerce. It
does this by developing plans and schedules so that the right resources (manpower, materials,
machinery, and money) are available in the right amount when needed.
Enterprise Resource Planning is a direct outgrowth and extension of Manufacturing Resource
Planning and, as such, includes all of MRP II’s capabilities. ERP is more powerful in that it: a)
applies a single set of resource planning tools across the entire enterprise, b) provides real-time
integration of sales, operating, and financial data, and c) connects resource planning approaches to
the extended supply chain of customers and suppliers.
The primary purpose of implementing Enterprise Resource Planning is to run the business, in a
rapidly changing and highly competitive environment, far better than before. How to make that
happen is what this book is all about.
THE APPLICABILITY OF ERP
ERP and its predecessor, MRP II, have been successfully implemented in companies with the
following characteristics:
• Make-to-stock
• Make-to-order
• Design-to-order
• Complex product
• Simple product
• Multiple plants
• Single plant
• Contract manufacturers
• Manufacturers with distribution networks
• Sell direct to end users
• Sell through distributors
• Businesses heavily regulated by the government
• Conventional manufacturing (fabrication and assembly)
• Process manufacturing
• Repetitive manufacturing
• Job shop
• Flow shop
• Fabrication only (no assembly)
• Assembly only (no fabrication)
• High-speed manufacturing
• Low-speed manufacturing
Within the universe of companies that make things (manufacturing enterprises), ERP has virtually
universal application. This book deals with how to implement ERP in any of the above
environments. Some people struggle with this applicability issue; they sometimes say: “We’re
different, we’re unique, it won’t work for us.” We’ve heard that a lot over the years. What we
have never heard is: “We’re different, we’re unique, Generally Accepted Accounting Principles
(GAAP) won’t work for us.” Well, ERP is the logistics analog of GAAP. It’s a defined body of
knowledge that contains the standard best practices for managing that part of the business. The
main difference between the two is that ERP and its predecessors have been with us for about four
decades; double-entry bookkeeping and its offshoots have been around for four centuries. More
on this later.
ERP AS A FOUNDATION
Today, there are a wide variety of tools and techniques that have been designed to help companies
and their people produce their products better and more efficiently. These include Lean
Manufacturing, Six Sigma Quality, Employee Involvement, Factory Automation, Design for
Manufacturability, and many more. These are excellent tools with enormous potential.
But . . . none of them will ever yield their full potential unless they’re coupled with effective
forecasting, planning, and scheduling processes. Here’s why:
It’s not good enough to be extremely efficient . . . if you’re making the wrong stuff.
It’s not good enough to make items at a very high level of quality . . .
if they’re not the ones needed.
It’s not good enough to reduce setup times and cut lot sizes . . . if bad schedules prevent knowing
what’s really needed and when.
Back in the early 1980s, a new way of thinking about manufacturing came out of Japan, and it
was truly revolutionary. In this country we’ve called it Just-In-Time (JIT), and more recently it
has evolved into Lean Manufacturing (also called Agile Manufacturing or Synchronous Flow
Manufacturing).
As with most new tools and processes, its early adherents promoted JIT with a missionary
zeal—and rightly so. This is great stuff. Some of them, however, took the approach that
MRP/MRP II was no longer necessary for companies doing JIT. The MRP establishment pushed
back and the result was a raging debate that generated a lot of heat and not much light.
Today we can see the situation much more clearly, and we feel this view has been best
articulated by Chris Gray, president of Gray Research in Wakefield, NH. Chris says that
improvements to business processes take one of three forms:
1. Improving process reliability. Six Sigma and other Total Quality tools are predominant here.
2. Reducing process complexity. Lean Manufacturing is heavily used here.
3. Coordinating the individual elements of the overall set of business processes. ERP lives
here.
Enterprise Resource Planning, when operating at a high level of effectiveness, will do several
things for a company. First, it will enable the company’s people to generate enormous benefits.
Many companies have experienced, as a direct result of ERP (or MRP II), dramatic increases in
responsiveness, productivity, on-time shipments and sales, along with substantial decreases in
lead times, purchase costs, quality problems, and inventories.
Further, ERP can provide the foundation upon which additional productivity and quality
enhancements can be built: an environment where these other tools and techniques can reach their
full potential.
Effective forecasting, planning and scheduling, knowing routinely what is needed and when via
the formal system, is fundamental to productivity. ERP is the vehicle for getting valid plans and
schedules, but not just of materials and production. It also means valid schedules of shipments to
customers, of personnel and equipment requirements, of required product development resources,
and of cash flow and profit. Enterprise Resource Planning has proven itself to be the foundation,
the bedrock, for supply chain management. It’s the glue that helps bind the company together
with its customers, distributors, and suppliers all on a coordinated, cooperative basis.
MORE ABOUT SOFTWARE
Now that we’ve kicked the ERP topic around a bit, let’s double back on the software issue.
Software for ERP is like a set of golf clubs. You could give the greatest, most expensive set of
golf clubs ever made to either one of your friendly authors, but they wouldn’t break 120. Why?
It’s simple; neither of us knows how to play golf.
On the other hand, let’s say we send Tiger Woods out on the pro tour with only a four-wood
and a sand wedge. Would Tiger win any tournaments? Not a chance. He’d never even make the
cut. The reason: To be competitive at the highest levels of the game, you need a full set of clubs
in the bag.
Two principles flow from this analogy:
1. The acquisition of the tools, of and by itself, will not make you proficient in their
use and thus will not provide a competitive advantage.
2. To be truly competitive, you need a good and reasonably complete set of tools.
Too many companies have bought an extremely expensive set of “golf clubs” (an
enterprise software system) but haven’t learned how to play golf. That’s why we read
about so many “ERP failures” in the business press. The fact of the matter is that ERP
hasn’t failed at all in those cases; it hasn’t even been attempted. Saying that ERP failed in
these cases is like saying that golf failed because one of your authors bought a $2,000 set
of golf clubs and didn’t break 120. Golf failed? Makes no sense.
EXPERT SYSTEM:
1. What is an Expert System?
2. The Architecture of Expert Systems
3. Knowledge Acquisition
4. Representing the Knowledge
5. The Inference Engine
6. The Rete-Algorithm
7. The User Interface
What is an Expert System?
Jackson (1999) provides us with the following definition:
An expert system is a computer program that represents and reasons with knowledge of
some specialist subject with a view to solving problems or giving advice.
To solve expert-level problems, expert systems will need efficient access to a substantial
domain knowledge base, and a reasoning mechanism to apply the knowledge to the
problems they are given. Usually they will also need to be able to explain, to the users
who rely on them, how they have reached their decisions.
They will generally build upon the ideas of knowledge representation, production rules,
search, and so on, that we have already covered.
Often we use an expert system shell which is an existing knowledge independent
framework into which domain knowledge can be inserted to produce a working expert
system. We can thus avoid having to program each new system from scratch.
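To make the shell idea concrete, here is a minimal, illustrative Python sketch of a knowledge-independent forward-chaining engine into which domain rules can be inserted. The toy fault-diagnosis rules and facts are assumptions for the example, not taken from any real expert system.

# Each rule is (set of conditions, conclusion); the domain knowledge lives here, not in the engine.
rules = [
    ({"printer_silent", "cable_connected"}, "power_supply_suspect"),
    ({"power_supply_suspect", "fuse_blown"}, "replace_fuse"),
]
facts = {"printer_silent", "cable_connected", "fuse_blown"}

def forward_chain(rules, facts):
    """Repeatedly fire any rule whose conditions all hold, adding its conclusion to working memory."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts          # the newly derived conclusions are the system's advice

print(forward_chain(rules, facts))  # {'power_supply_suspect', 'replace_fuse'}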
TYPICAL TASKS FOR EXPERT SYSTEMS
There are no fundamental limits on what problem domains an expert system can be built
to deal with. Some typical existing expert system tasks include:
1. The interpretation of data, such as sonar data or geophysical measurements.
2. Diagnosis of malfunctions, such as equipment faults or human diseases.
3. Structural analysis or configuration of complex objects, such as chemical compounds or computer systems.
4. Planning sequences of actions, such as might be performed by robots.
5. Predicting the future, such as weather, share prices, exchange rates.
However, these days, “conventional” computer systems can also do some of these things.
Characteristics of Expert Systems
Expert systems can be distinguished from conventional computer systems in that:
1. They simulate human reasoning about the problem domain, rather than simulating the domain itself.
2. They perform reasoning over representations of human knowledge, in addition to doing numerical
calculations or data retrieval. They have corresponding distinct modules referred to as the
inference engine and the knowledge base.
3. Problems tend to be solved using heuristics (rules of thumb) or approximate methods or
probabilistic methods which, unlike algorithmic solutions, are not guaranteed to result in a
correct or optimal solution.
4. They usually have to provide explanations and justifications of their solutions or
recommendations in order to convince the user that their reasoning is correct.
Note that the term Intelligent Knowledge Based System (IKBS) is sometimes used as a
synonym for Expert System.
DECISION SUPPORT SYSTEM(DSS):
A decision support system is an information system application that assists decision
making. DSS tends to be used in planning, analyzing alternatives, and trial-and-error
search for solutions. A DSS is a system that provides tools to managers to assist them in
solving semi-structured and unstructured problems on their own. In other words, a DSS is
an information system that supports managers in decision making. A DSS combines the
intellectual resources of individuals with the capabilities of the computer to improve the
quality of decisions.
A DSS can be defined as a computer based information system that aids a decision maker
in taking decisions for semi-structured problems.
Definition of DSS: - “A decision support system is a specialized kind of information
system which is an interactive system that supports in the decision making process of a
manager in an organization especially in semi-structured and unstructured situations. The
system utilizes information, models and data manipulation tools to help make decisions in
semi-structured to unstructured situations.”
“Decision Support System is an interactive, computer based system which supports
managers in making unstructured decisions.”
Characteristics of DSS: The characteristics of a DSS are as follows:
1. DSS focuses on providing help in analyzing situations rather than on providing the right
information in the form of various types of reports.
2. DSS is individual specific. Each decision maker can incorporate his own perceptions about the
problem and analyze its effect.
3. DSS incorporates various mathematical, statistical and operations research models.
4. DSS is only supportive in nature and human decision makers still retain their supremacy. It
does not thrust its outcomes on the decision maker.
5. DSS is effective in providing assistance to solve semi-structured problems at all levels. It is
used at first-line, middle and top level management.
6. DSS needs an effective database management system. It extensively uses databases.
7. DSS helps decision makers to carry out 'what-if' analysis.
Objectives of DSS: The objectives of a DSS are as stated below:
1. Provide assistance to decision makers in situations which are semi-structured.
2. Identify plans and potential actions to resolve problems.
3. Rank, among the solutions identified, those which can be implemented and provide a list of
viable alternatives.
Needs of DSS: DSS has become necessary for today’s manager for the following reasons:
Fast computation: A decision maker can perform a large number of computations very quickly, and at
a low cost, with the help of computer support systems.
Enhanced productivity: Decision support systems can enhance the productivity of support staff and
also enable group members to discuss problems among themselves at a distance.
Better decisions: Computer support systems can help a decision maker in arriving at a better
decision. For example, more alternatives can be evaluated, risk analysis can be performed quickly,
and views of experts from different places can be collected quickly and at a lower cost.
Data transmission: Sometimes data which is stored at different locations may need to be transmitted
quickly from distant locations. Computer support systems can search, store and transmit the
required data quickly and economically.
Components and classification of DSS
Components of DSS: The main components of a DSS are:
1. Hardware
2. Software
Hardware: Hardware consists of the parts of the computer system that can be touched; these are the
tangible parts. Without hardware, software is nothing: hardware is like the human body and software
is like the soul in the body. All input and output devices are hardware; for example, the mouse and
keyboard are hardware components.
There is no fixed hardware configuration for designing, developing, maintaining and executing a
DSS. The hardware configuration for a DSS is mainly determined by:
a) the size of the database,
b) the DBMS package which one intends to use,
c) the type of models that are being used, and
d) the ways in which reports/presentations are expected.
Software: Software is a set of computer programs that are designed and developed to perform a
specific task. Software acts as an interface between the user and the computer. Software can be
defined as a set of instructions written by a programmer to solve a problem. It can be classified as:
a) Database Management Sub-system
b) Model Management Sub-system
c) Dialogue Management Sub-system
Each of these is explained below.
a) Database Management Sub-system: Normally there are two sources of data, internal and external.
The database management system provides facilities for organizing, storing and querying these data;
it acts as an information bank. DBMS software provides facilities for database creation, for
modifying and deleting data, for manipulating the data present in the database, and for querying
the data in the database. The architecture of a database management system includes an External
Schema, a Conceptual Schema, and an Internal Schema.
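As a small, illustrative sketch (not part of the source text), the following Python snippet uses the SQLite module from the standard library to show the database management sub-system's basic job of organizing, storing and querying decision data. The table and figures are assumed for the example.

import sqlite3

conn = sqlite3.connect(":memory:")                 # an in-memory "information bank"
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("South", 95.5), ("North", 80.0)])

# Query the stored data to support a decision, e.g. total sales per region.
for region, total in conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
conn.close()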
b) Model Management Sub-system: A model represents the relationships between various parameters of
the system; it gives a mathematical description of reality. The model builder provides a structured
framework for developing models by helping decision makers. The model builder also contains a model
dictionary that maintains consistency in the definitions and use of models.
A model management subsystem provides the following:
1. A model base management system which helps in the creation and maintenance of models.
2. An external interface which permits a user to choose a model to be executed and provides
facilities for entering data.
3. An interface to the database.
c) Dialogue Management Sub-system: This acts as the gateway for the user to communicate with the
DSS. It provides menus and icons for the user to communicate effectively with the system. It
converts the queries given by the user into forms which the other subsystems can recognize and
execute. It keeps track of the activities that are being performed.
The major activities of a dialogue management subsystem are to:
1. Provide menus and icons for the user to communicate effectively with the system.
2. Provide necessary on-line, context-sensitive help to various kinds of users.
3. Convert the queries given by the user into forms which the other subsystems can recognize and
execute.
4. Keep track of the activities that are being performed.
Classification of DSS: DSS can be classified as follows:
(i) File Drawer Systems: These systems provide the user with organized information regarding
specific demands. Such a system provides on-line information and is very useful for decision making.
(ii) Data Analysis Systems: These decision systems are based on comparative analysis and make use
of a formula. Cash flow analysis, inventory analysis and personnel inventory systems are examples
of data analysis systems. Simple data processing tools and business rules are required to develop
such systems.
(iii) Information Analysis Systems: In these systems the data is analyzed and information reports
are generated. Decision makers use these reports for assessment of the situation for
decision-making. Sales analysis, accounts receivable systems and market research analysis are
examples of such systems.
(iv) Accounting Systems: These systems are not necessarily required for decision making but they
are desirable for keeping track of the major aspects of the business. These systems account for
items such as cash, inventory, personnel and so on.
(v) Model Based Systems: These systems are simulation models or optimization models for decision
making. They provide guidelines for operations or management. Product mix decisions, material mix
and job scheduling rules are examples. This is the most important type of DSS.
(vi) Solver Oriented DSS: This performs certain computations for solving a particular type of
problem. The solver could be, for example, an economic order quantity procedure for calculating an
optimal ordering quantity.
(vii) Suggestion Systems: These are used for operational purposes. They give suggestions to the
management for a particular problem. This model helps in making the required collection of data
before taking a suitable decision.
(viii) Compound DSS: This is a system that includes two or more of the basic structures explained
above. It can be built by using a set of independent DSSs, each specializing in one area.
(ix) Text Oriented DSS: A text-oriented DSS supports a decision maker by electronically keeping
track of textually represented information that has a bearing on decisions. It allows documents to
be electronically created, revised and viewed as needed. Information technologies such as document
imaging, hypertext and intelligent agents can be incorporated into this type.
Steps in constructing the DSS, and its role in business
Steps in constructing a DSS: The following steps are involved in constructing a DSS.
1. Identification of the problem: In this stage the developer and the knowledge engineer interact
to identify the problem. The following points are covered:
a) The scope and extent are analyzed.
b) A return-on-investment analysis is done.
c) The amount of resources needed is identified.
d) Areas in the problem that can give much trouble are identified and a conceptual solution of
that problem is found.
e) An overall specification is made.
2. Decision about the mode of development: Once the problem is identified, the immediate step is to
decide on the vehicle for development. The developer can use a shell or develop the system in any
programming language. In this stage various shells and tools are identified and analyzed for their
suitability. Those tools whose features fit the characteristics of the problem are analyzed in
detail.
3. Development of a prototype: Before the development of a prototype, the knowledge level needed to
solve the particular problem is decided and suitable methods are adopted in sequence. After this
the task of knowledge acquisition begins: the knowledge engineer and the developer interact
frequently and domain-specific knowledge is extracted. When the knowledge representation scheme and
the knowledge are available, a prototype is constructed.
4. Prototype validation: The prototype undergoes testing for various problems, and revision of the
prototype takes place. This is a very important step in building the DSS.
5. Planning for the full-scale system: In prototype construction, the area in the problem that can
be implemented with relative ease is the first choice, and extensive planning is done. Each
subsystem development is assigned a group leader and schedules are drawn up.
6. Final implementation, maintenance and evaluation: This is the final stage of the DSS life cycle.
The full-scale system developed is implemented once the basic resource requirements are fulfilled,
often using parallel conversion.
Role of DSS in Business
DSS is a computer-based information system for management decision makers who deal with
semi-structured problems. DSS plays an important role in business and performs various activities.
The role of DSS is explained as follows:
1. What-if analysis
2. Goal oriented
3. Risk analysis
4. Model building
5. Graphical analysis
What - if analysis: - This is the process of assessing the impart of variables. This
helps managers to be proactive rather than reactive in their decision making. This
analysis is critical for semi-structured and unstructured problems because the data
necessary to make such decisions are not available.
2.
Goal oriented: - It is process of determining the input values required to achieve a
certain goal. For example house buyers determine the monthly payment they can
afford (say for example Rs. 5000/-) and calculate the number of such payments
required to pay the desired house.
3.
Risk analysis: - Risk is the important factor which affects the business enterprise.
DSS allows managers to assess the risks associated with various alternatives.
Decisions can be classified as low risk, medium risk and high risk. A DSS is
particularly useful in medium risk and high risk environments.
4. Model building: A DSS allows decision makers to identify the most appropriate
model for solving a problem. It takes into account the input variables, the
inter-relationships among the variables, problem assumptions and constraints. For
example, a marketing manager of a television manufacturing company may be charged
with the responsibility of developing a sales forecasting model for colour TV sets.
5. Graphical analysis: This helps managers to quickly digest large volumes of data
and visualize the impacts of various courses of action. The use of graphs is
recommended when:
a) seeking a quick summary of data;
b) forecasting activities;
c) detecting trends over time; or
d) comparing points and patterns of different variables.
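A minimal Python sketch of the what-if and goal-seeking analyses described above, using the house-buyer example; the loan amounts, interest rates and payment figures are hypothetical illustrations rather than figures from the text.

def monthly_payment(principal, annual_rate, months):
    # Standard loan-amortisation formula: the fixed payment for a given principal.
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** -months)

def months_to_repay(principal, annual_rate, payment):
    # Goal seeking: how many monthly payments of a chosen size repay the loan?
    r = annual_rate / 12.0
    if payment <= principal * r:
        raise ValueError("payment too small to ever repay the loan")
    balance, months = principal, 0
    while balance > 0:
        balance = balance * (1.0 + r) - payment
        months += 1
    return months

# What-if analysis: vary the interest rate and observe the impact on the payment.
for rate in (0.08, 0.09, 0.10):
    print(rate, round(monthly_payment(500_000, rate, 240), 2))

# Goal seeking: the buyer can afford Rs. 5,000 per month; find the number of payments.
print(months_to_repay(300_000, 0.09, 5_000))

Varying the interest rate illustrates what-if analysis, while months_to_repay works backwards from the affordable payment of Rs. 5,000 to the goal, as in the house-buyer example.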
Characteristics of a Group Decision Support System, and the advantages and
applications of group decision support systems.
Group Decision Support System (GDSS): A group decision support system is a
decision support system that facilitates decision making by a team of decision
makers working as a group. The importance of collective decisions is increasingly felt
today: for major issues to be sorted out, brainstorming sessions are carried out, and the
collective pool of ideas and opinions gives final shape to a decision.
"A GDSS is an interactive, computer-based system that facilitates the solution of
unstructured problems by a set of decision makers working together as a group. A
GDSS goes beyond a DSS because the decisions are taken by a group of decision
makers, each supported by the system."
Characteristics of GDSS: The main features of a GDSS are as follows:
(i) A GDSS is goal oriented: it is designed with the goal of supporting groups of
decision makers in their work.
(ii) A GDSS is a specially designed information system.
(iii) A GDSS is easy to learn and to use.
(iv) A GDSS is designed to encourage activities such as idea generation, conflict
resolution and freedom of expression.
Types of GDSS:
Three types of computer-based support are available:
1. Decision network: This type helps the participants communicate with each other
through a network or through a central database. Application software may use
commonly shared models to provide support. The most common implementation uses
a local area network and microcomputers. The technology filters out many of the group
dynamics of a participative meeting.
2. Decision room: Participants are located at one place, i.e. the decision room. The
purpose is to enhance the participants' interactions and decision making through
computerised support, within a fixed period of time, using a facilitator.
3. Teleconferencing: Groups are composed of members or sub-groups that are
geographically dispersed; teleconferencing provides an interactive connection between
two or more decision rooms. The interaction involves the transmission of computerised
and audio-visual information. Whereas a decision network can be viewed as the use of a
local area network for decision making involving groups, the decision room is an
entirely new development. The decision room should consist of a table with networked
workstations for the purpose. Breakout rooms, used for similar discussions, are also
equipped with similar machines. A combination of overhead projector, flipchart,
photocopier and other presentation devices is provided as well.
Advantages of GDSS
1. Take better decisions.
2. Solve problems.
3. Minimize risk.
4. Collect large amounts of information.
5. Provide interactive communication.
6. Improve the decision-making process.
7. Coordinate various activities.
1. Take better decisions: Through a GDSS better decisions can be taken, because the
decisions are made by a group of decision makers supported by the system.
2. Solve problems: A GDSS provides solutions to unstructured problems by collecting
various types of information from various sources.
3. Minimize risk: A GDSS allows managers to assess the risks associated with
various alternatives. This helps managers to be proactive rather than reactive.
4. Collect large amounts of information: A GDSS collects information from various
sources for decision making. This information minimizes the risk.
5. Provide interactive communication: A GDSS provides interactive communication,
and better decisions are taken through this interaction.
6. Improve the decision-making process: A GDSS improves the decision-making
process because it is goal oriented; the goal is considered when the GDSS is designed.
7. Coordinate various activities: In a GDSS, decisions are taken by a group of
decision makers. The work is divided into different parts and each member performs
their own part, so coordination is possible.
Disadvantages of GDSS: The disadvantages of GDSS are as follows:
1. There are more chances of a clash of opinions.
2. Very large groups make the work complex.
Applications of Group Decision Support Systems
1. Meetings.
2. Marketing.
3. Banking sector.
4. Stock exchange / foreign-exchange market.
5. Brainstorming.
6. Maintaining records.
7. Assessing judgmental tasks.
8. Office automation.
9. Documentation.
10. Engineering firms.
Components of GDSS: The main components of a GDSS are explained as follows:
1. Hardware: This includes a single PC or networked PCs with keypads, a decision
room, distributed GDSS facilities, audio-visual aids, network equipment, etc.
2. Software: GDSS software includes modules to support the individual, the group,
the process and specific tasks. The software allows each individual to do private
work, with the usual collection of text and file creation, graphics, spreadsheet and
DBMS facilities.
3. Procedures: These enable ease of operation and effective use of the technology by
group members.
4. People: the group members, the facilitator and the support staff who use and
operate the system.
How a GDSS can enhance group decision making:
A GDSS helps to enhance group decision making in the following ways:
1. Improved preplanning: Improved preplanning forces an agenda that keeps the
meeting on track.
2. Increased participation: A larger number of participants results in more effective
contributions towards decisions.
3. Open, collaborative meeting atmosphere: A GDSS helps to provide an open and
collaborative meeting atmosphere, which encourages non-judgmental input from all
attendees.
4. Criticism-free idea generation: A GDSS provides criticism-free idea generation,
with more inputs and better ideas.
5. Documentation of meetings: A GDSS supports effective documentation of meetings,
which can be used for further discussion and later use.
6. Setting priorities and making decisions: A GDSS helps to set priorities and give
importance to the problems that are most critical.
NEURAL NETWORKS:
In information technology, a neural network is a system of programs and data structures
that approximates the operation of the human brain. A neural network usually involves a
large number of processors operating in parallel, each with its own small sphere of
knowledge and access to data in its local memory. Typically, a neural network is initially
"trained" or fed large amounts of data and rules about data relationships (for example, "A
grandfather is older than a person's father"). A program can then tell the network how to
behave in response to an external stimulus (for example, to input from a computer user
who is interacting with the network) or can initiate activity on its own (within the limits of
its access to the external world).
In making determinations, neural networks use several principles, including gradient-based
training, fuzzy logic, genetic algorithms, and Bayesian methods. Neural networks
are sometimes described in terms of knowledge layers, with more complex networks
generally having deeper layers. In feed-forward systems, learned relationships about data
can "feed forward" to higher layers of knowledge. Neural networks can also learn
temporal concepts and have been widely used in signal processing and time-series
analysis. Current applications of neural networks include oil-exploration data analysis,
weather prediction, the interpretation of nucleotide sequences in biology labs, and the
exploration of models of thinking and consciousness.
1. Introduction to neural networks
1.1 What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is
inspired by the way biological nervous systems, such as the brain, process information.
The key element of this paradigm is the novel structure of the information processing
system. It is composed of a large number of highly interconnected processing elements
(neurones) working in unison to solve specific problems. ANNs, like people, learn by
example. An ANN is configured for a specific application, such as pattern recognition or
data classification, through a learning process. Learning in biological systems involves
adjustments to the synaptic connections that exist between the neurones. This is true of
ANNs as well.
1.2 Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or
imprecise data, can be used to extract patterns and detect trends that are too complex to be
noticed by either humans or other computer techniques. A trained neural network can be
thought of as an "expert" in the category of information it has been given to analyse. This
expert can then be used to provide projections given new situations of interest and answer
"what
if"
questions.
Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for
training or initial experience.
2. Self-Organisation: An ANN can create its own organisation or representation of
the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and
special hardware devices are being designed and manufactured which take
advantage of this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a
network leads to the corresponding degradation of performance. However, some
network capabilities may be retained even with major network damage.
2. Human and Artificial Neurones - investigating the similarities
2.1 How the Human Brain Learns?
Much is still unknown about how the brain trains itself to process information, so theories
abound. In the human brain, a typical neuron collects signals from others through a host
of fine structures called dendrites. The neuron sends out spikes of electrical activity
through a long, thin strand known as an axon, which splits into thousands of branches. At
the end of each branch, a structure called a synapse converts the activity from the axon
into electrical effects that inhibit or excite activity in the connected neurones. When a
neuron receives excitatory input that is sufficiently large compared with its inhibitory
input, it sends a spike of electrical activity down its axon. Learning occurs by changing
the effectiveness of the synapses so that the influence of one neuron on another changes.
Components of a neuron
The synapse
2.2 From Human Neurones to Artificial Neurones
We construct these neural networks by first trying to deduce the essential features of
neurones and their interconnections. We then typically program a computer to simulate
these features. However, because our knowledge of neurones is incomplete and our
computing power is limited, our models are necessarily gross idealisations of real
networks of neurones.
The neuron model
3 Architecture of neural networks
3.1 Feed-forward networks
Feed-forward ANNs (figure 1) allow signals to travel one way only; from input to output.
There is no feedback (loops) i.e. the output of any layer does not affect that same layer.
Feed-forward ANNs tend to be straightforward networks that associate inputs with
outputs. They are extensively used in pattern recognition. This type of organisation is also
referred to as bottom-up or top-down.
3.2 Feedback networks
Feedback networks (figure 1) can have signals travelling in both directions by introducing
loops in the network. Feedback networks are very powerful and can get extremely
complicated. Feedback networks are dynamic; their 'state' is changing continuously until
they reach an equilibrium point. They remain at the equilibrium point until the input
changes and a new equilibrium needs to be found. Feedback architectures are also
referred to as interactive or recurrent, although the latter term is often used to denote
feedback connections in single-layer organizations.
Figure 4.1: An example of a simple feed-forward network
Figure 4.2: An example of a complicated network
3.3 Network layers
The commonest type of artificial neural network consists of three groups, or layers, of
units: a layer of "input" units is connected to a layer of "hidden" units, which is
connected to a layer of "output" units. (see Figure 4.1)
The activity of the input units represents the raw information that is fed into the
network.
The activity of each hidden unit is determined by the activities of the input units and
the weights on the connections between the input and the hidden units.
The behavior of the output units depends on the activity of the hidden units and the
weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct
their own representations of the input. The weights between the input and hidden units
determine when each hidden unit is active, and so by modifying these weights, a hidden
unit can choose what it represents.
We also distinguish single-layer and multi-layer architectures. The single-layer
organisation, in which all units are connected to one another, constitutes the most general
case and is of more potential computational power than hierarchically structured
multi-layer organisations. In multi-layer networks, units are often numbered by layer,
instead of following a global numbering.
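A minimal numpy sketch of the three-layer organisation described above: input units feed hidden units through one weight matrix, and the hidden units feed output units through another. The sigmoid activation, the layer sizes and the random weights are illustrative assumptions, not specified in the text.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 3 input units, 4 hidden units, 2 output units.
W_hidden = rng.normal(size=(4, 3))   # weights between input and hidden units
W_output = rng.normal(size=(2, 4))   # weights between hidden and output units

def feed_forward(x):
    # The activity of each hidden unit is determined by the input activities and
    # the input-to-hidden weights; the output units depend on the hidden
    # activities and the hidden-to-output weights.
    hidden = sigmoid(W_hidden @ x)
    output = sigmoid(W_output @ hidden)
    return output

print(feed_forward(np.array([0.5, -1.0, 2.0])))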
3.4 Perceptrons
The most influential work on neural nets in the 1960s went under the heading of
'perceptrons', a term coined by Frank Rosenblatt. The perceptron (figure 4.4) turns out to
be an MCP model (a neuron with weighted inputs) with some additional, fixed
pre-processing. Units labelled A1, A2, Aj, Ap are called association units, and their task is
to extract specific, localised features from the input images. Perceptrons mimic the basic
idea behind the mammalian visual system. They were mainly used in pattern recognition,
even though their capabilities extended a lot further.
Figure 4.4
In 1969 Minsky and Papert wrote a book in which they described the limitations of
single-layer perceptrons. The impact of the book was tremendous and caused many neural
network researchers to lose interest. The book was very well written and showed
mathematically that single-layer perceptrons could not do some basic pattern recognition
operations, such as determining the parity of a shape or determining whether a shape is
connected. What the authors did not realise, until the 1980s, is that, given the appropriate
training, multi-level perceptrons can perform these operations.
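A minimal Python sketch of a single-layer perceptron of the kind discussed above, trained with the classical perceptron learning rule on a toy linearly separable problem (the logical AND function); the learning rate, the number of epochs and the data are illustrative assumptions.

import numpy as np

# Toy training set: the logical AND function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights on the two inputs
b = 0.0           # bias (threshold)
lr = 0.1          # learning rate (assumed)

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = int(w @ xi + b > 0)
        error = target - prediction
        # Perceptron learning rule: adjust the weights in proportion to the error.
        w += lr * error * xi
        b += lr * error

print(w, b)
print([int(w @ xi + b > 0) for xi in X])   # expected: [0, 0, 0, 1]

Because AND is linearly separable, the weights settle after a few epochs; the parity and connectedness problems highlighted by Minsky and Papert cannot be learned by such a single-layer unit.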
EXECUTIVE INFORMATION SYSTEM:
An executive information system (EIS), also known as an executive support
system (ESS),[1]is a type of management information system that facilitates and supports
senior executive information and decision-making needs. It provides easy access to
internal and external information relevant to organizational goals. It is commonly
considered a specialized form of decision support system (DSS).
An EIS emphasizes graphical displays and easy-to-use user interfaces, and offers strong
reporting and drill-down capabilities. In general, EIS are enterprise-wide DSS that help
top-level executives analyze, compare, and highlight trends in important variables so that
they can monitor performance and identify opportunities and problems. EIS and data
warehousing technologies are converging in the marketplace.
In recent years, the term EIS has lost popularity in favor of business intelligence (with the
sub areas of reporting, analytics, and digital dashboards).
EIS components can typically be classified as:
- Hardware
- Software
- User interface
- Telecommunication
Hardware
When talking about computer hardware for an EIS environment, we should focus on
hardware that meets the executive's needs. The executive must be put first, and the
executive's needs must be defined before the hardware can be selected. The basic
hardware needed for a typical EIS includes four components:
1. Input data-entry devices. These devices allow the executive to enter, verify, and
update data immediately.
2. The central processing unit (CPU), which is important because it controls the
other computer system components.
3. Data storage files. The executive can use this component to save useful business
information, and it also helps the executive to search historical business
information easily.
4. Output devices, which provide a visual or permanent record for the executive to
save or read; this refers to visual output devices such as a monitor or printer.
In addition, with the advent of local area networks (LAN), several EIS products for
networked workstations became available. These systems require less support and less
expensive computer hardware. They also increase EIS information access to more
company users.
Software
Choosing the appropriate software is vital to an effective EIS. Therefore, the
software components and how they integrate the data into one system are important. A
typical EIS includes four software components:
1. Text-handling software—documents are typically text-based
2. Database—heterogeneous databases on a range of vendor-specific and open
computer platforms help executives access both internal and external data
3. Graphic base—graphics can turn volumes of text and statistics into visual
information for executives. Typical graphic types are: time series charts, scatter
diagrams, maps, motion graphics, sequence charts, and comparison-oriented
graphs (i.e., bar charts)
4. Model base—EIS models contain routine and special statistical, financial, and
other quantitative analysis
User interface
An EIS must be efficient to retrieve relevant data for decision makers, so the user
interface is very important. Several types of interfaces can be available to the EIS
structure, such as scheduled reports, questions/answers, menu driven, command language,
natural language, and input/output.
Telecommunication
As decentralization becomes the current trend in companies, telecommunications will
play a pivotal role in networked information systems. Transmitting data from one place to
another has become crucial for establishing a reliable network. In addition,
telecommunications within an EIS can accelerate the need for access to distributed data.
Applications
EIS helps executives find data according to user-defined criteria and promote
information-based insight and understanding. Unlike a traditional management
information system presentation, an EIS can distinguish between vital and seldom-used
data and track key critical activities for executives, both of which are helpful in
evaluating whether the company is meeting its corporate objectives. Having realized
these advantages, organizations have applied EIS in many areas, especially
manufacturing, marketing, and finance.
Manufacturing
Manufacturing is the transformation of raw materials into finished goods for sale, or
intermediate processes involving the production or finishing of semi-manufactures. It is a
large branch of industry and of secondary production. Manufacturing operational control
focuses on day-to-day operations, and the central idea of this process is effectiveness and
efficiency.
Marketing
In an organization, marketing executives' duty is to manage the available marketing
resources to create a more effective future. For this, they need to make judgments about
the risk and uncertainty of a project and its impact on the company in the short term and
long term. To
assist marketing executives in making effective marketing decisions, an EIS can be
applied. EIS provides sales forecasting, which can allow the market executive to compare
sales forecast with past sales. EIS also offers an approach to product price, which is found
in venture analysis. The market executive can evaluate pricing as related to competition
along with the relationship of product quality with the price charged. In summary, an EIS
software package enables marketing executives to manipulate the data by looking for
trends, performing audits of the sales data, and calculating totals, averages, changes,
variances, or ratios.
Financial
Financial analysis is one of the most important steps for companies today. Executives
need to use financial ratios and cash flow analysis to estimate trends and make
capital investment decisions. An EIS integrates planning or budgeting with control of
performance reporting, and it can be extremely helpful to finance executives. EIS focuses
on financial performance accountability, and recognizes the importance of cost standards
and flexible budgeting in developing the quality of information provided for all executive
levels.
Advantages and disadvantages
Advantages of EIS
- Easy for upper-level executives to use; extensive computer experience is not
required to operate it.
- Provides timely delivery of company summary information, so management can
make decisions promptly.
- Information that is provided is better understood.
- Improves the tracking of information.
- Offers efficiency to decision makers.
Disadvantages of EIS
- System dependent.
- Limited functionality, by design.
- Information overload for some managers.
- Benefits hard to quantify.
- High implementation costs.
- System may become slow, large, and hard to manage.
- Need for good internal processes for data management.
- May lead to less reliable and less secure data.
KNOWLEDGE MANAGEMENT:
Knowledge management (KM) is the process of capturing, developing, sharing, and
effectively using organizational knowledge. It refers to a multi-disciplined approach to
achieving organizational objectives by making the best use of knowledge.
An established discipline since 1991 (see Nonaka 1991), KM includes courses taught in
the fields of business administration, information systems, management, and library
and information sciences. More recently, other fields have started contributing to KM
research; these include information and media, computer science, public health,
and public policy. Several universities, including Kent State University and the
University of Haifa, offer dedicated Master of Science degrees in Knowledge Management.
Many large companies, public institutions and non-profit organizations have resources
dedicated to internal KM efforts, often as a part of their business strategy, information
technology, or human resource management departments. Several consulting companies
provide strategy and advice regarding KM to these organizations.
Knowledge management efforts typically focus on organizational objectives such as
improved performance, competitive advantage, innovation, the sharing of lessons learned,
integration and continuous improvement of the organisation. KM efforts overlap
with organizational learning and may be distinguished from it by a greater focus on the
management of knowledge as a strategic asset and on encouraging the sharing of
knowledge. It is an enabler of organizational learning.
Knowledge management (KM) therefore implies a strong tie to organizational goals and
strategy, and it involves the management of knowledge that is useful for some purpose
and which creates value for the organization.
Expanding upon the previous knowledge management definition, KM involves the
understanding of:
Where and in what forms knowledge exists; what the organization needs to know; how to
promote a culture conducive to learning, sharing, and knowledge creation; how to make
the right knowledge available to the right people at the right time; how to best generate or
acquire new relevant knowledge; how to manage all of these factors so as to enhance
performance in light of the organization's strategic goals and short term opportunities and
threats.
A broad range of thoughts on the KM discipline exist; approaches vary by author and
school. As the discipline matures, academic debates have increased regarding both
the theory and practice of KM, to include the following perspectives:
- Techno-centric: a focus on technology, ideally those that enhance knowledge
sharing and creation.
- Organizational: a focus on how an organisation can be designed to facilitate
knowledge processes best.
- Ecological: a focus on the interaction of people, identity, knowledge, and
environmental factors as a complex adaptive system akin to a natural ecosystem.
Regardless of the school of thought, core components of KM include people, processes,
and technology (or culture, structure, and technology), depending on the specific
perspective (Spender & Scherer 2007). Different KM schools of thought include lenses
through which KM can be viewed and explained, including:
- community of practice
- social network analysis
- intellectual capital (Bontis & Choo 2002)
- information theory (McInerney 2002)
- complexity science
- constructivism (Nanjappa & Grant 2003)
The practical relevance of academic research in KM has been questioned (Ferguson 2005)
with action research suggested as having more relevance (Andriessen 2004) and the need
to translate the findings presented in academic journals into practice (Booker, Bontis &
Serenko 2008).
Strategies
Knowledge may be accessed at three stages: before, during, or after KM-related activities.
Organizations have tried knowledge capture incentives, including making content
submission mandatory and incorporating rewards into performance measurement plans.
Considerable controversy exists over whether incentives work or not in this field and no
consensus has emerged.
One strategy to KM involves actively managing knowledge (push strategy). In such an
instance, individuals strive to explicitly encode their knowledge into a shared knowledge
repository, such as a database, as well as retrieving knowledge they need that other
individuals have provided to the repository. This is commonly known as the Codification
approach to KM.
Another strategy to KM involves individuals making knowledge requests of experts
associated with a particular subject on an ad hoc basis (pull strategy). In such an instance,
expert individual(s) can provide their insights to the particular person or people needing
this (Snowden 2002). This is commonly known as the Personalization approach to KM.
Hansen et al. propose a simple framework, distinguishing two opposing KM strategies:
codification and personalization. Codification focuses on collecting and storing codified
knowledge in previously designed electronic databases to make it accessible to the
organisation. Codification can therefore refer to both tacit and explicit knowledge. In
contrast, the personalization strategy aims at encouraging individuals to share their
knowledge directly. Information technology plays a less important role, as it is only
supposed to facilitate communication and knowledge sharing among members of an
organisation.
Other knowledge management strategies and instruments for companies include:
- Knowledge sharing (fostering a culture that encourages the sharing of information,
based on the concept that knowledge is not irrevocable and should be shared and
updated to remain relevant)
- Storytelling (as a means of transferring tacit knowledge)
- Cross-project learning
- After-action reviews
- Knowledge mapping (a map of knowledge repositories within a company accessible
by all)
- Communities of practice
- Expert directories (to enable knowledge seekers to reach the experts)
- Best practice transfer
- Knowledge fairs
- Competence management (systematic evaluation and planning of the competences of
individual organisation members)
- Proximity and architecture (the physical situation of employees can be either
conducive or obstructive to knowledge sharing)
- Master-apprentice relationships
- Collaborative technologies (groupware, etc.)
- Knowledge repositories (databases, bookmarking engines, etc.)
- Measuring and reporting intellectual capital (a way of making explicit knowledge for
companies)
- Knowledge brokers (some organisational members take on responsibility for a
specific "field" and act as the first reference on whom to talk to about a specific
subject)
- Social software (wikis, social bookmarking, blogs, etc.)
- Inter-project knowledge transfer
DATA WAREHOUSING:
In computing, a data warehouse (DW or DWH), also known as an enterprise data
warehouse (EDW), is a system used for reporting and data analysis. DWs are central
repositories of integrated data from one or more disparate sources. They store current and
historical data and are used for creating analytical reports for knowledge workers
throughout the enterprise. Examples of reports could range from annual and quarterly
comparisons and trends to detailed daily sales analyses.
The data stored in the warehouse is uploaded from the operational systems (such as
marketing, sales, etc.). The data may pass through an operational data store for additional
operations before it is used in the DW for reporting.
Types of Data Warehouses
Data mart
A data mart is a simple form of a data warehouse that is focused on a single subject (or
functional area), such as sales, finance or marketing. Data marts are often built and
controlled by a single department within an organization. Given their single-subject focus,
data marts usually draw data from only a few sources. The sources could be internal
operational systems, a central data warehouse, or external data.[1]
Online analytical processing (OLAP)
OLAP is characterized by a relatively low volume of transactions. Queries are often very
complex and involve aggregations. For OLAP systems, response time is an effectiveness
measure. OLAP applications are widely used by data mining techniques. OLAP
databases store aggregated, historical data in multi-dimensional schemas (usually star
schemas). OLAP systems typically have data latency of a few hours, as opposed to data
marts, where latency is expected to be closer to one day.
Online transaction processing (OLTP)
OLTP is characterized by a large number of short online transactions (INSERT, UPDATE,
DELETE). OLTP systems emphasize very fast query processing and maintaining data
integrity in multi-access environments. For OLTP systems, effectiveness is measured by
the number of transactions per second. OLTP databases contain detailed and current data.
The schema used to store transactional databases is the entity model (usually 3NF).
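A small pandas sketch contrasting the two workloads described above: an OLTP-style insert of one detailed, current transaction versus an OLAP-style aggregation of historical sales along two dimensions. The table columns and figures are illustrative assumptions.

import pandas as pd

# OLTP-style: detailed, current transactions (frequent short inserts and updates).
sales = pd.DataFrame({
    "order_id": [1, 2, 3],
    "store": ["North", "South", "North"],
    "quarter": ["2024Q1", "2024Q1", "2024Q2"],
    "amount": [120.0, 75.5, 210.0],
})
new_order = {"order_id": 4, "store": "South", "quarter": "2024Q2", "amount": 99.0}
sales = pd.concat([sales, pd.DataFrame([new_order])], ignore_index=True)

# OLAP-style: aggregate historical data along dimensions (store x quarter), the
# kind of multi-dimensional summary a star schema is designed to serve.
cube = sales.pivot_table(index="store", columns="quarter",
                         values="amount", aggfunc="sum", fill_value=0)
print(cube)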
Predictive analysis
Predictive analysis is about finding and quantifying hidden patterns in the data using
complex mathematical models that can be used to predict future outcomes. Predictive
analysis is different from OLAP in that OLAP focuses on historical data analysis and is
reactive in nature, while predictive analysis focuses on the future. These systems are also
used for customer relationship management (CRM).
Benefits
A data warehouse maintains a copy of information from the source transaction systems.
This architectural complexity provides the opportunity to:
- Congregate data from multiple sources into a single database so a single query engine
can be used to present data.
- Mitigate the problem of database isolation level lock contention in transaction
processing systems caused by attempts to run large, long-running analysis queries in
transaction processing databases.
- Maintain data history, even if the source transaction systems do not.
- Integrate data from multiple source systems, enabling a central view across the
enterprise. This benefit is always valuable, but particularly so when the organization
has grown by merger.
- Improve data quality, by providing consistent codes and descriptions, flagging or
even fixing bad data.
- Present the organization's information consistently.
- Provide a single common data model for all data of interest regardless of the data's
source.
- Restructure the data so that it makes sense to the business users.
- Restructure the data so that it delivers excellent query performance, even for complex
analytic queries, without impacting the operational systems.
- Add value to operational business applications, notably customer relationship
management (CRM) systems.
- Make decision-support queries easier to write.
Generic data warehouse environment
The environment for data warehouses and marts includes the following:
- Source systems that provide data to the warehouse or mart;
- Data integration technology and processes that are needed to prepare the data for
use;
- Different architectures for storing data in an organization's data warehouse or data
marts;
- Different tools and applications for the variety of users;
- Metadata, data quality, and governance processes that must be in place to ensure
that the warehouse or mart meets its purposes.
With regard to the source systems listed above, Rainer states, "A common source for the data in
data warehouses is the company’s operational databases, which can be relational
databases”.[6]
Regarding data integration, Rainer states, “It is necessary to extract data from source
systems, transform them, and load them into a data mart or warehouse”.[6]
Rainer discusses storing data in an organization’s data warehouse or data marts.[6]
Metadata are data about data. “IT personnel need information about data sources;
database, table, and column names; refresh schedules; and data usage measures".[6]
Today, the most successful companies are those that can respond quickly and flexibly to
market changes and opportunities. A key to this response is the effective and efficient use
of data and information by analysts and managers. A “data warehouse” is a repository of
historical data that are organized by subject to support decision makers in the
organization.[6] Once data are stored in a data mart or warehouse, they can be accessed.
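A minimal pandas sketch of the extract-transform-load step Rainer describes, merging two hypothetical operational sources into a single subject-oriented table with consistent country codes (one of the data-quality benefits listed earlier); the source tables and column names are assumptions for illustration.

import pandas as pd

# Extract: two hypothetical operational sources with inconsistent country codes.
crm = pd.DataFrame({"cust": ["C1", "C2"], "country": ["IN", "India"], "revenue": [100, 250]})
erp = pd.DataFrame({"cust": ["C2", "C3"], "country": ["IND", "US"], "revenue": [40, 300]})

# Transform: standardise codes and descriptions before loading.
country_map = {"IN": "India", "IND": "India", "India": "India", "US": "United States"}
staged = pd.concat([crm, erp], ignore_index=True)
staged["country"] = staged["country"].map(country_map)

# Load: a subject-oriented summary table for the warehouse or data mart.
warehouse = staged.groupby(["cust", "country"], as_index=False)["revenue"].sum()
print(warehouse)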
DATA MINING
What can data mining do?
Data mining is primarily used today by companies with a strong consumer focus - retail,
financial, communication, and marketing organizations. It enables these companies to
determine relationships among "internal" factors such as price, product positioning, or
staff skills, and "external" factors such as economic indicators, competition, and customer
demographics. And, it enables them to determine the impact on sales, customer
satisfaction, and corporate profits. Finally, it enables them to "drill down" into summary
information to view detail transactional data.
With data mining, a retailer could use point-of-sale records of customer purchases to send
targeted promotions based on an individual's purchase history. By mining demographic
data from comment or warranty cards, the retailer could develop products and promotions
to appeal to specific customer segments.
For example, Blockbuster Entertainment mines its video rental history database to
recommend rentals to individual customers. American Express can suggest products to its
cardholders based on analysis of their monthly expenditures.
WalMart is pioneering massive data mining to transform its supplier relationships.
WalMart captures point-of-sale transactions from over 2,900 stores in 6 countries and
continuously transmits this data to its massive 7.5 terabyte Teradata data warehouse.
WalMart allows more than 3,500 suppliers to access data on their products and perform
data analyses. These suppliers use this data to identify customer buying patterns at the
store display level. They use this information to manage local store inventory and identify
new merchandising opportunities. In 1995, WalMart computers processed over 1 million
complex data queries.
The National Basketball Association (NBA) is exploring a data mining application that
can be used in conjunction with image recordings of basketball games. The Advanced
Scout software analyzes the movements of players to help coaches orchestrate plays and
strategies. For example, an analysis of the play-by-play sheet of the game played between
the New York Knicks and the Cleveland Cavaliers on January 6, 1995 reveals that when
Mark Price played the Guard position, John Williams attempted four jump shots and
made each one! Advanced Scout not only finds this pattern, but explains that it is
interesting because it differs considerably from the average shooting percentage of
49.30% for the Cavaliers during that game.
By using the NBA universal clock, a coach can automatically bring up the video clips
showing each of the jump shots attempted by Williams with Price on the floor, without
needing to comb through hours of video footage. Those clips show a very successful
pick-and-roll play in which Price draws the Knicks' defense and then finds Williams for an
open jump shot.
How does data mining work?
While large-scale information technology has been evolving separate transaction and
analytical systems, data mining provides the link between the two. Data mining software
analyzes relationships and patterns in stored transaction data based on open-ended user
queries. Several types of analytical software are available: statistical, machine learning,
and neural networks. Generally, any of four types of relationships are sought:
- Classes: Stored data is used to locate data in predetermined groups. For example,
a restaurant chain could mine customer purchase data to determine when
customers visit and what they typically order. This information could be used to
increase traffic by having daily specials.
- Clusters: Data items are grouped according to logical relationships or consumer
preferences. For example, data can be mined to identify market segments or
consumer affinities.
- Associations: Data can be mined to identify associations. The beer-diaper example
is an example of associative mining (a small illustrative sketch follows this list).
- Sequential patterns: Data is mined to anticipate behavior patterns and trends. For
example, an outdoor equipment retailer could predict the likelihood of a backpack
being purchased based on a consumer's purchase of sleeping bags and hiking
shoes.
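A small Python sketch of association mining in the spirit of the beer-and-diapers example above, counting how often pairs of items appear together in hypothetical market baskets and reporting the rules whose support and confidence exceed assumed thresholds.

from itertools import combinations
from collections import Counter

# Hypothetical market baskets (one set of items per transaction).
baskets = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"bread", "milk"},
    {"beer", "chips"},
    {"diapers", "milk", "beer"},
]

pair_counts = Counter()
item_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

# Report rules A -> B whose support and confidence pass the assumed thresholds.
n = len(baskets)
for (a, b), together in pair_counts.items():
    support = together / n
    for lhs, rhs in ((a, b), (b, a)):
        confidence = together / item_counts[lhs]
        if support >= 0.4 and confidence >= 0.6:
            print(f"{lhs} -> {rhs}  support={support:.2f}  confidence={confidence:.2f}")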
Data mining consists of five major elements:
- Extract, transform, and load transaction data onto the data warehouse system.
- Store and manage the data in a multidimensional database system.
- Provide data access to business analysts and information technology professionals.
- Analyze the data by application software.
- Present the data in a useful format, such as a graph or table.
Different levels of analysis are available:
- Artificial neural networks: Non-linear predictive models that learn through
training and resemble biological neural networks in structure.
- Genetic algorithms: Optimization techniques that use processes such as genetic
combination, mutation, and natural selection in a design based on the concepts of
natural evolution.
- Decision trees: Tree-shaped structures that represent sets of decisions. These
decisions generate rules for the classification of a dataset. Specific decision tree
methods include Classification and Regression Trees (CART) and Chi-Square
Automatic Interaction Detection (CHAID). CART and CHAID are decision tree
techniques used for classification of a dataset. They provide a set of rules that you
can apply to a new (unclassified) dataset to predict which records will have a
given outcome. CART segments a dataset by creating 2-way splits while CHAID
segments using chi-square tests to create multi-way splits. CART typically
requires less data preparation than CHAID.
- Nearest neighbor method: A technique that classifies each record in a dataset
based on a combination of the classes of the k record(s) most similar to it in a
historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor
technique (a small illustrative sketch follows this list).
- Rule induction: The extraction of useful if-then rules from data based on
statistical significance.
- Data visualization: The visual interpretation of complex relationships in
multidimensional data. Graphics tools are used to illustrate data relationships.
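A minimal Python sketch of the nearest-neighbour method listed above, classifying a new record by majority vote among its k most similar historical records; the toy records, the two-feature representation and k = 3 are assumptions.

from collections import Counter
import math

# Hypothetical historical records: (feature vector, class label).
history = [
    ((1.0, 1.0), "low risk"),
    ((1.2, 0.8), "low risk"),
    ((0.9, 1.1), "low risk"),
    ((3.0, 3.2), "high risk"),
    ((3.1, 2.9), "high risk"),
]

def classify(record, k=3):
    # k-nearest neighbour: majority class among the k closest historical records.
    nearest = sorted(history, key=lambda item: math.dist(record, item[0]))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

print(classify((1.1, 0.9)))   # expected: "low risk"
print(classify((2.8, 3.0)))   # expected: "high risk"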
The Scope of Data Mining
Data mining derives its name from the similarities between searching for valuable
business information in a large database — for example, finding linked products in
gigabytes of store scanner data — and mining a mountain for a vein of valuable ore. Both
processes require either sifting through an immense amount of material, or intelligently
probing it to find exactly where the value resides. Given databases of sufficient size and
quality, data mining technology can generate new business opportunities by providing
these capabilities:
- Automated prediction of trends and behaviors: Data mining automates the
process of finding predictive information in large databases. Questions that
traditionally required extensive hands-on analysis can now be answered directly
from the data, quickly. A typical example of a predictive problem is targeted
marketing. Data mining uses data on past promotional mailings to identify the
targets most likely to maximize return on investment in future mailings. Other
predictive problems include forecasting bankruptcy and other forms of default,
and identifying segments of a population likely to respond similarly to given
events.
- Automated discovery of previously unknown patterns: Data mining tools
sweep through databases and identify previously hidden patterns in one step. An
example of pattern discovery is the analysis of retail sales data to identify
seemingly unrelated products that are often purchased together. Other pattern
discovery problems include detecting fraudulent credit card transactions and
identifying anomalous data that could represent data entry keying errors.
Data mining techniques can yield the benefits of automation on existing software and
hardware platforms, and can be implemented on new systems as existing platforms are
upgraded and new products developed. When data mining tools are implemented on high
performance parallel processing systems, they can analyze massive databases in minutes.
Faster processing means that users can automatically experiment with more models to
understand complex data. High speed makes it practical for users to analyze huge
quantities of data. Larger databases, in turn, yield improved predictions.
Databases can be larger in both depth and breadth:
- More columns: Analysts must often limit the number of variables they examine
when doing hands-on analysis due to time constraints. Yet variables that are
discarded because they seem unimportant may carry information about unknown
patterns. High-performance data mining allows users to explore the full depth of a
database, without preselecting a subset of variables.
- More rows: Larger samples yield lower estimation errors and variance, and allow
users to make inferences about small but important segments of a population.
A recent Gartner Group Advanced Technology Research Note listed data mining and
artificial intelligence at the top of the five key technology areas that "will clearly have a
major impact across a wide range of industries within the next 3 to 5 years."2 Gartner also
listed parallel architectures and data mining as two of the top 10 new technologies in
which companies will invest during the next 5 years. According to a recent Gartner HPC
Research Note, "With the rapid advance in data capture, transmission and storage, largesystems users will increasingly need to implement new and innovative ways to mine the
after-market value of their vast stores of detail data, employing MPP [massively parallel
processing] systems to create new sources of business advantage (0.9 probability)."3
The most commonly used techniques in data mining are:
- Artificial neural networks: Non-linear predictive models that learn through
training and resemble biological neural networks in structure.
- Decision trees: Tree-shaped structures that represent sets of decisions. These
decisions generate rules for the classification of a dataset. Specific decision tree
methods include Classification and Regression Trees (CART) and Chi-Square
Automatic Interaction Detection (CHAID).
- Genetic algorithms: Optimization techniques that use processes such as genetic
combination, mutation, and natural selection in a design based on the concepts of
evolution.
- Nearest neighbor method: A technique that classifies each record in a dataset
based on a combination of the classes of the k record(s) most similar to it in a
historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor
technique.
- Rule induction: The extraction of useful if-then rules from data based on
statistical significance.
Many of these technologies have been in use for more than a decade in specialized
analysis tools that work with relatively small volumes of data. These capabilities are now
evolving to integrate directly with industry-standard data warehouse and OLAP
platforms.
VIRTUAL REALITY
Virtual reality is a technology which allows a user to interact with a computer-simulated
environment.
IMPACT OF VIRTUAL REALITY
Virtual reality will be integrated into daily life and activity, and will be used in
various human ways. Techniques will be developed to influence human behavior,
interpersonal communication, and cognition. As we spend more and more time in virtual
space, there will be a gradual "migration to virtual space", resulting in important changes
in economics, worldview, and culture. Virtual reality can also be used to induce body
transfer illusions.
APPLICATIONS OF VIRTUAL REALITY IN BUSINESS
Retailing and Marketing
Design and Manufacturing
Accounting and Finance
Training and Human Resources.
TYPES OF VIRTUAL REALITY
A major distinction of VR systems is the mode with which they interface to the
user. This section describes some of the common modes used in VR systems.
A.Window on World Systems (WoW)
Some systems use a conventional computer monitor to display the visual world.
This sometimes called Desktop VR or a Window on a World (WoW). This
concept traces its lineage back through the entire history of computer graphics. In
1965, Ivan Sutherland laid out a research program for computer graphics in a
paper called "The Ultimate Display" that has driven the field for the past nearly
thirty years. "One must look at a display screen," he said, "as a window through
which one beholds a virtual world. The challenge to computer graphics is to make
the picture in the window look real, sound real and the objects act real."
B. Video Mapping
A variation of the WoW approach merges a video input of the user's silhouette
with a 2D computer graphic. The user watches a monitor that shows his body's
interaction with the world.
C. Immersive Systems
The ultimate VR systems completely immerse the user's personal viewpoint inside
the virtual world. These "immersive" VR systems are often equipped with a Head
Mounted Display (HMD). This is a helmet or a face mask that holds the visual and
auditory displays. The helmet may be free ranging, tethered, or it might be
attached to some sort of a boom armature. A nice variation of the immersive
systems uses multiple large projection displays to create a 'Cave' or room in which
the viewer(s) stand. An early implementation was called "The Closet Cathedral"
for the ability to create the impression of an immense environment within a small
physical space. The Holodeck used in the television series "Star Trek: The Next
Generation" is afar term extrapolation of this technology.
D.Telepresence
Telepresence is a variation on visualizing complete computer generated worlds.
This technology links remote sensors in the real world with the senses of a human
operator. The remote sensors might be located on a robot, or they might be on the
ends of WALDO like tools. Fire fighters use remotely operated vehicles to handle
some dangerous conditions. Surgeons are using very small instruments on cables
to do surgery without cutting a major hole in their patients. The instruments have a
small video camera at the business end. Robots equipped with telepresence
systems have already changed the way deep sea and volcanic exploration is done.
NASA plans to use telerobotics for space exploration. There is currently a joint
US/Russian project researching telepresence for space rover exploration.
E. Mixed Reality
Merging the Telepresence and Virtual Reality systems gives the Mixed Reality or
Seamless Simulation systems. Here the computer generated inputs are merged
with telepresence inputs and/or the users view of the real world. A surgeon's view
of a brain surgery is overlaid with images from earlier CAT scans and real-time
ultrasound. A fighter pilot sees computer generated maps and data displays inside
his fancy helmet visor or on cockpit displays. The phrase "fish tank virtual reality"
was used to describe a Canadian VR system. It combines a stereoscopic monitor
display using liquid crystal shutter glasses with a mechanical head tracker. The
resulting system is superior to simple stereo-WoW systems due to the motion
parallax effects introduced by the head tracker.
F. Semi- Immersive
Most advanced flight, ship and vehicle simulators are semi-immersive. The
cockpit, bridge, or driving seat is a physical model, whereas the view of the world
outside is computer-generated.
DISADVANTAGES OF VIRTUAL REALITY
The disadvantages of VR are numerous. The hardware needed to create a fully
immersed VR experience is still cost prohibitive. The total cost of the machinery
to create a VR system is still the same price as a new car, around $20,000. The
technology for such an experience is still new and experimental. VR is becoming
much more commonplace but programmers are still grappling with how to interact
with virtual environments. The idea of escapism is commonplace among those
that use VR environments, and people often live in the virtual world instead of
dealing with the real one. This happens even in the low quality and fairly hard to
use VR environments that are online right now. One worry is that as VR
environments become much higher quality and immersive, they will become
attractive to those wishing to escape real life. Another concern is VR training.
Training with a VR environment does not have the same consequences as training
and working in the real world. This means that even if someone does well with
simulated tasks in a VR environment, that person might not do well in the real
world.
ADVANTAGES OF VIRTUAL REALITY
Although the disadvantages of VR are numerous, so are the advantages. Many
different fields can use VR as a way to train students without actually putting
anyone in harm's way. This includes the fields of medicine, law enforcement,
architecture and aviation. VR also helps those that can't get out of the house
experience a much fuller life. These patients can explore the world through virtual
environments like Second Life, a VR community on the Internet, exploring virtual
cities as well as more fanciful environments like J.R.R. Tolkien's Middle Earth.
VR also helps patients recover from stroke and other injuries. Doctors are using
VR to help reteach muscle movement such as walking and grabbing as well as
smaller physical movements such as pointing. The doctors use the malleable
computerized environments to increase or decrease the motion needed to grab or
move an object.
E-BUSINESS AND ALTERNATIVES:
E-business Models
The explosion in the use of the Internet has paved the way for several path-breaking
innovations. One of the most interesting and exciting aspects of this evolution is
the emergence of electronic business (e-business) as a mainstream and viable alternative
to more traditional methods of businesses being conducted today. E-business is defined as
the process of using electronic technology to do business. It is the day and age of
electronic business. Also the structure of the Web is rapidly evolving from a loose
collection of Web sites into organized market places. The phenomena of aggregation,
portals, large enterprise sites, and business-to-business applications are resulting in
centralized, virtual places, through which millions of visitors pass daily.
E-business has become standard operating procedure for the vast majority of
companies. Setting up and running an e-business, especially one that processes a large
number of transactions, requires technical, marketing and advertising expertise.
Consumers like to access products and services on a 24-by-7 basis, and the easiest way
to provide that is to move operations online. The businesses that provide the most reliable,
most functional, most user-friendly and fastest services will be the ones that succeed.
The emergence of e-commerce and its related technologies has led to the
creation of many different robust applications that are typically grouped into several
categories of e-commerce.
Business to Consumer (B2C) are applications that provide an interface from
businesses directly to their consumers. The most common example of a B2C application
is a retail web site featuring the business's products or services that can be directly
purchased by the consumer. The importance of B2C varies dramatically from company to
company. For some companies, reaching consumers has been the critical aspect of their
business. For some companies that run a chain of retail stores, B2C should be one of the
most important pieces of their Internet strategy. Even some companies that already have
third parties to distribute, market, and sell their products are not much concerned about
B2C. Many companies that never have sold directly to consumers, having realized it is
clearly much more cost efficient to open a B2C site than to open a physical store, have
begun to lean towards B2C. In this case, it becomes necessary for them to address a whole
lot of small and big issues. Still, B2C applications remain among the leading applications
of the Internet, as they are directly related to the masses.
Business to Business (B2B) - Forging new relationships between businesses is
becoming critical for businesses to survive and blossom in this increasingly fast paced
world. B2B applications provide new opportunities for businesses to leverage emerging
technologies to build their businesses. Examples of B2B applications include facilitating
transactions for goods/services between companies, selling goods/services on the Internet
to businesses, and supply chain integration. Another example is online procurement of
goods from one company to another. Legacy integration is a huge issue in B2B
applications. If existing applications such as EDI or EFT are extended to help the B2B
process, then the existing legacy applications can be a big help in moving forward. On the
other hand, if two companies want to trade data, but have dramatically different legacy
systems, legacy integration can be a challenge to overcome. There are other issues such as
security, speed, and flexibility, in B2B applications.
Business to Business to Consumer (B2B2C) is one of the emerging models of
e-commerce. B2B2C is basically defined as using B2B to help support and rejuvenate
companies attempting B2C. This is due to the fact that B2B has been an overwhelming
financial success and B2C has not performed up to the expectations. This model is poised
to do well as it capitalizes the success of B2B and the potential demand of B2C. B2B
provides a way for B2C companies to reduce costs and improve their B2C services. An
example of B2B2C is developing products to help B2C companies increase profit by
integrating inventory from the manufacturer to the distributor. An application that links
one online catalog to another would be considered a B2B2C application as it capitalizes
on both B2B and B2C.
Consumer to Consumer (C2C) - C2C is an interesting relatively new piece of
the e-commerce world. C2C applications involve consumers conducting commerce
directly with other consumers. This obviously means that the company facilitating the
transaction must find some non-traditional revenue stream. This could be a small cut of
the transaction, a service fee, advertising, or some combination of these. eBay is an
excellent example of a C2C application that is extremely popular with consumers.
Customer to Business to Consumer (C2B2C) involves consumers conducting
transactions with other consumers using a business as an intermediary.
www.autotrader.com is the best example for this sort of application. This site facilitates
the transactions of selling used cars between consumers, but also contains an inventory of
used cars to sell to the consumer.
Apart from the above categories of e-commerce applications, there are several
specific models of businesses operating on the Web. A brief description of each model follows.
Auction Model - The Web offers many different kinds of auction sites. Auction
sites act as forums through which Internet users can log-on and assume the role of either
bidder or seller. As a seller, one can post an item to sell, the minimum price he requires to
sell his item and a deadline to close the auction. As a bidder, one can search the site for
availability of the item he is seeking, view the current bidding activity and place a bid.
Also there are sites designed to search existing auction sites in order to pinpoint the
lowest prices on an available item. Auction sites themselves are only a forum for online
buying and selling, but they usually charge a commission on sales, collected from both
parties once the deal is closed.
Portal Model - Portal sites give visitors the chance to find almost everything they
are looking for in one place. They often offer news, sports, and weather, as well as the
ability to search the Web. Portals are subdivided into two kinds: horizontal portals and
vertical portals. Horizontal portals aggregate information on a broad range of topics.
Vertical portals are more specific, offering a great deal of information pertaining to a
single area of interest. Online shopping is a popular addition to the major portals. Portals
linking consumers to online merchants, online shopping malls and auction sites provide
several advantages.
Dynamic Pricing Models - The Web has changed the way business is done and
the way products are priced. There are companies which enable customers to name their
prices for travel, homes, automobiles and consumer goods. Buying in bulk has always
driven prices down and there are now Web sites that allow one to lower the price by
joining with other buyers to purchase products in large quantities to get price reduction.
There are a variety of models here: the name-your-price model, the
comparison-pricing model, the demand-sensitive pricing model, and the bartering model. E-
business allows companies to follow a variety of ways to keep prices down on the
Internet, such as rebates and offering free products and services.
Online Trading and Lending Models - Another fast-growing area of e-commerce
is online securities trading. Many brokerage houses have established a presence on the
Web. Trading sites allow one to research securities and to buy, sell, and manage all of
one's investments from the desktop. Online trading often costs less than conventional
brokerage.
The Web also offers quite a number of exciting services, including getting a loan
online, recruitment through the Web, online News services, online travel services, online
entertainment, online automotive sites, energy online, selling brain-power, online art
dealers, e-learning and e-banking.
Short Questions:
1. What is an enterprise system?
2. Explain neural networks.
3. Write short notes on supply chain management.
4. Explain e-business and its alternatives.
Long Questions:
1. Briefly explain the concept of enterprise resource planning.
2. Explain knowledge management.
3. Discuss customer relationship management.
Unit – V
IT INVESTMENT:
INVESTMENT IN INFORMATION TECHNOLOGY
What is investment?
Investment can be defined as putting money, effort, time, etc. into something to make a
profit from it or to gain some advantage.
What is Technology?
Technology can be defined as the study and knowledge of the practical, especially
industrial use of scientific discoveries or advancement.
Information Technology (IT)
Among the different kinds of technologies, information technology (IT) is one of the most
important, being widely used across different industries. It is also the fastest changing and
fastest growing technology of the day. The IT that is the latest and most relevant today can
become obsolete and irrelevant tomorrow, as soon as a more advanced version comes to
the market.
Investing in IT
As we have seen, IT is a fast-changing technology, so deciding on IT investment becomes
a very difficult task for the policy makers of an organization. There can be two
possibilities:
(i) Before investing in IT, wait for the new, more advanced version to be launched in the
market, presumably at a much cheaper price than the present one. By the time a decision
is reached regarding investment in IT, a further advanced version is already knocking at
the door, so the investment is postponed again. This process goes on and on, and
ultimately the investment in IT never materializes.
(ii) In the second scenario, the policy makers, having decided to implement IT in the
organization, instantly invest in whatever version of IT is available in the market at the
prevailing (often high) cost. But soon they realize that whatever they have bought has
become obsolete and is now a liability.
Thus we see that investing in IT requires a balanced and judicious decision by competent
policy makers, who must themselves be IT-literate.
Relationship among Technology, Investment and Business
In today’s global market no industry or business can survive without having latest
technology. Technology is the tool through which a business is going to grow Investment
is required to buy the technology. Thus we find that there is direct relation among the
three:
(i) Technology
(ii) Investment
(iii) Business.
Technology is the major driving force behind the globalization of production and changes
in the patterns of business and investment. Investment is seen as a vector of production,
technology and business expertise. Business, on the other hand, is seen both as a cause
and consequence of increased investment and technological development.
(Figure: the triangular interrelationship of Business, Technology, and Investment.)
Environmental Shifts & Technological Discontinuity
History tells us that of the 500 companies that existed in 1918, only 10% of them
survived for more than 50 years. Statistics also show that 50% of all new organizations
go out of business within five years.
Reason:
When we analyze the reasons for these failures, we find that it is the inability to adapt to a
rapidly changing environment and the lack of resources. New technologies, new products,
and changing public tastes and values put strains on any organization’s culture, policies
and the people working therein.
Technology is a major environmental factor that continuously threatens the existing
arrangements. At times, technological changes occur so radically as to constitute a
“technological discontinuity”, a sharp break in industrial practice that either enhances or
destroys the competence of firms in an industry.
Why should one invest in technology?
• Information systems can provide competitive advantage to the firm.
• A consistently strong IT-based infrastructure can, over the longer term, play an important strategic role in the life of the firm.
• Information systems are essential for the very existence of the firm.
• Survival of the firm, even at a mediocre level, demands investment in IT.
• Government regulations may require these survival investments.
Importance of IT Investment Portfolio
For the U.S. economy as a whole, IT investment represents about 25% of all capital
investment. Having invested such a huge amount in IT, it becomes important to analyze
the outcome. The foremost question that comes to mind is whether the firm is receiving a
good return on its investment (ROI) in IT or not.
A good ROI can be reflected through various factors, such as:
(i) Cost saving: The foremost impact will be felt through cost reduction in the firm's
products. This is a clear indication of a good return on investment.
(ii) Improved productivity: The productivity of employees will increase dramatically and
enhance their efficiency. This results in better employer-employee relations.
(iii) Improved quality: There will be appreciable improvement in the quality of the
products, giving them a decisive edge over other such products available in the market.
Due to this factor more and more customers will be attracted to the products of the firm
that has invested in technology.
(iv) Better customer service: Equipped with better technology than before, the firm is in a
position to render much better services to its customers. This helps in creating goodwill
for the firm in the market.
Investment v/s Return:
Now the key question is: at what cost are the benefits of implementing IT achieved? Has
the firm spent too much or too little compared to other competitors in the field? This is
essential information to know, because it will decide the viability of the firm.
The nature of the benefits may be short-term financial returns, longer-term strategic
positioning, or market share.
A second and altogether different challenge is understanding precisely how the firm's
strategic position is affected by the IT investment.
Many firms recognize that it may be necessary to accept a low financial return on
investment for the initial few years in order to establish a market-dominating strategic
position.
The Right Choice:
The key issue that decides the success of investment in IT is whether the firm has made
the right choices regarding the purchase of processor hardware, software,
telecommunications and, last but not least, human resources from the various supply
markets. Another point is whether the firm has the capability to achieve the business's
strategic objectives.
It is obvious that if poor choices are made, there will be very low return on the IT
portfolio.
Right choices generally mean being reliable, cost efficient, extendable, and supportable. But
right choices also suggest that the infrastructure must support the strategic business interests of
the firm. The challenge in addressing these issues is that there are no simple quantitative
measures of right choices.
The Evaluation:
The evaluation of a firm's investment in IT can be done broadly on the basis of the following two
types of benefits: (a) tangible benefits and (b) intangible benefits.
(a) Tangible Benefits: Tangible benefits are those which can be seen clearly and
physically felt, such as:
Cost Savings
Increased Productivity
Low operational costs
Reduction in work force
Lower computer expenses
Lower outside vendor costs
Lower electrical and professional costs
Reduction in expenses
Reduction in facility costs.
(b) Intangible Benefits: Intangible benefits are those which cannot be seen and have no
physical existence, but whose effects can be realized qualitatively, such as:
Improved resource control & utilization
Improved planning
Increased flexibility
Timely information
More information
Increased learning
Fewer legal requirements
Enhanced goodwill of the firm
Increased job satisfaction
Improved employer-employee relations
Improved decision making
Improved operations
Higher client satisfaction
Better corporate image.
ESTIMATING RETURNS:
Definition
Approximation, prediction, or projection of a quantity based on experience and/or information
available at the time, with the recognition that other pertinent facts are unclear or unknown. An
estimate is almost the same as an educated guess, and the cheapest (and least accurate) type of
modelling.
An approximation of the probable cost of a product, program, or project, computed on the basis
of available information.
Four common types of cost estimates are: (1) Planning estimate: a rough approximation of cost
within a reasonable range of values, prepared for information purposes only; also called a
ballpark estimate. (2) Budget estimate: an approximation based on well-defined (but preliminary)
cost data and established ground rules. (3) Firm estimate: a figure based on cost data sound
enough for entering into a binding contract. (4) Not-to-exceed / not-less-than estimate: the
maximum or minimum amount required to accomplish a given task, based on a firm cost
estimate.
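As an illustration only (the percentage accuracy bands below are assumptions made for this sketch, not figures from the text), the four estimate types can be modelled as progressively narrower ranges around a base cost:

```python
# Hypothetical accuracy bands for each estimate type; the text above does not
# give numeric ranges, so these figures are assumptions for illustration only.
ESTIMATE_BANDS = {
    "planning": 0.50,       # rough, information purposes only ("ballpark")
    "budget": 0.20,         # based on preliminary but well-defined cost data
    "firm": 0.05,           # sound enough to enter a binding contract
    "not_to_exceed": 0.00,  # a fixed ceiling derived from a firm estimate
}


def estimate_range(base_cost: float, estimate_type: str) -> tuple[float, float]:
    """Return a (low, high) cost range for the given estimate type."""
    band = ESTIMATE_BANDS[estimate_type]
    return base_cost * (1 - band), base_cost * (1 + band)


print(estimate_range(200_000, "planning"))  # (100000.0, 300000.0)
```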
PRICING FRAMEWORK:
This section aims to: (1) explain the factors in the pricing framework, and (2) describe the
different pricing objectives organizations have to choose from.
Prices can be easily changed and easily matched by your competitors. Consequently, your
product’s price alone might not provide your company with a sustainable competitive advantage.
Nonetheless, prices can attract consumers to different retailers and businesses to different
suppliers.
Organizations must remember that the prices they charge should be consistent with their
offerings, promotions, and distribution strategies. In other words, it wouldn’t make sense for an
organization to promote a high-end, prestige product, make it available in only a limited number
of stores, and then sell it for an extremely low price. The price, product, promotion
(communication), and placement (distribution) of a good or service should convey a consistent
image. If you’ve ever watched the television show The Price Is Right, you may wonder how
people guess the exact price of the products. Watch the video clip below to see some of the price
guessing on The Price Is Right.
The Pricing Framework
Before pricing a product, an organization must determine its pricing objectives. In other words,
what does the company want to accomplish with its pricing? Companies must also estimate
demand for the product or service, determine the costs, and analyze all factors (e.g., competition,
regulations, and economy) affecting price decisions. Then, to convey a consistent image, the
organization should choose the most appropriate pricing strategy and determine policies and
conditions regarding price adjustments. The basic steps in the pricing framework are shown in
Figure .
The Firm’s Pricing Objectives
Different firms want to accomplish different things with their pricing strategies. For example,
one firm may want to capture market share, another may be solely focused on maximizing its
profits, and another may want to be perceived as having products with prestige. Some examples
of different pricing objectives companies may set include profit-oriented objectives, sales-oriented objectives, and status quo objectives.
Earning a Targeted Return on Investment (ROI)
ROI, or return on investment, is the amount of profit an organization hopes to make given the
amount of assets, or money, it has tied up in a product. ROI is a common pricing objective for
many firms. Companies typically set a certain percentage, such as 10 percent, for ROI in a
product’s first year following its launch. So, for example, if a company has $100,000 invested in
a product and is expecting a 10 percent ROI, it would want the product’s profit to be $10,000.
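As a minimal sketch of the arithmetic in the example above, using the same $100,000 investment and 10 percent target:

```python
def targeted_profit(invested_amount: float, target_roi: float) -> float:
    """Return the profit needed to hit a targeted return on investment.

    ROI = profit / invested amount, so profit = invested amount * ROI.
    """
    return invested_amount * target_roi


# Figures from the example above: $100,000 invested, 10 percent target ROI.
print(targeted_profit(100_000, 0.10))  # 10000.0
```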
Maximizing Profits
Many companies set their prices to increase their revenues as much as possible relative to their
costs. However, large revenues do not necessarily translate into higher profits. To maximize its
profits, a company must also focus on cutting costs or implementing programs to encourage
customer loyalty.
In weak economic markets, many companies manage to cut costs and increase their profits, even
though their sales are lower. How do they do this? The Gap cut costs by doing a better job of
controlling its inventory. The retailer also reduced its real estate holdings to increase its profits
when its sales were down during the latest economic recession. Other firms such as Dell, Inc.,
cut jobs to increase their profits. Meanwhile, Wal-Mart tried to lower its prices so as to undercut
its competitors’ prices to attract more customers. After it discovered that wealthier consumers
who didn’t usually shop at Wal-Mart before the recession were frequenting its stores, Wal-Mart
decided to upgrade some of its offerings, improve the checkout process, and improve the
appearance of some of its stores to keep these high-end customers happy and enlarge its
customer base. Other firms increased their prices or cut back on their marketing and advertising
expenses. A firm has to remember, however, that prices signal value. If consumers do not
perceive that a product has a high degree of value, they probably will not pay a high price for it.
Furthermore, cutting costs cannot be a long-term strategy if a company wants to maintain its
image and position in the marketplace.
Maximizing Sales
Maximizing sales involves pricing products to generate as much revenue as possible, regardless
of what it does to a firm’s profits. When companies are struggling financially, they sometimes
try to generate cash quickly to pay their debts. They do so by selling off inventory or cutting
prices temporarily. Such cash may be necessary to pay short-term bills, such as payroll.
Maximizing sales is typically a short-term objective since profitability is not considered.
Maximizing Market Share
Some organizations try to set their prices in a way that allows them to capture a larger share of
the sales in their industries. Capturing more market share doesn’t necessarily mean a firm will
earn higher profits, though. Nonetheless, many companies believe capturing a maximum amount
of market share is downright necessary for their survival. In other words, they believe if they
remain a small competitor they will fail. Firms in the cellular phone industry are an example. The
race to be the biggest cell phone provider has hurt companies like Motorola. Motorola holds only
10 percent of the cell phone market, and its profits on its product lines are negative.
Maintaining the Status Quo
Sometimes a firm’s objective may be to maintain the status quo or simply meet, or equal, its
competitors’ prices or keep its current prices. Airline companies are a good example. Have you
ever noticed that when one airline raises or lowers its prices, the others all do the same? If
consumers don’t accept an airline’s increased prices (and extra fees) such as the charge for
checking in with a representative at the airport rather than checking in online, other airlines may
decide not to implement the extra charge and the airline charging the fee may drop it.
Companies, of course, monitor their competitors’ prices closely when they adopt a status quo
pricing objective.
HARDWARE SOFTWARE BUYING:
This section is designed to help storage managers with their storage software and hardware
decisions. It contains information on managing, implementing and maintaining storage
technologies to help IT professionals with their storage software and hardware purchases.
This guide to buying storage hardware and software covers hard disks, tape drives, disk storage,
and virtual storage.
Table of contents:
 Hard disks
 Tape drives
 Disk storage
 Virtual storage
 Further Resources
Hard disks
A hard disk stores and provides access to large amounts of data. The data is stored
magnetically on the surface of the disk and recorded in concentric circles, called tracks,
on a set of stacked platters.
Laptops remotely wiped and tracked at Camden Borough Council
Automated alerts let the IT manager know when a user's hard drive is becoming full. When free
space drops below 10%, the user can be contacted for assessment and maintenance.
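A minimal sketch of this kind of automated free-space alert, using Python's standard shutil module; the 10% threshold comes from the description above, while the path and the notification mechanism (a simple print) are placeholders:

```python
import shutil

FREE_SPACE_THRESHOLD = 0.10  # alert when free space drops below 10%, as described above


def check_free_space(path: str = "/") -> None:
    """Print a maintenance alert if free space on the drive falls below the threshold."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    if free_fraction < FREE_SPACE_THRESHOLD:
        # In a real deployment this might email the IT manager instead of printing.
        print(f"ALERT: only {free_fraction:.0%} free on {path}; contact user for maintenance.")


check_free_space("/")
```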
How to manage Virtual Hard Disk data with encapsulation
If you think encapsulating Virtual Hard Disk data is the best way to manage Hyper-V storage,
pass-through disks might be an easier option for you. Find out how pass-through disks can
back up and restore huge VHD files.
A how-to on Hard-disk erasure: HD Derase and Secure Erase
Learn the importance of correctly sanitizing a drive, in order for all sensitive electronic data to be
destroyed.
If hard disk drive areal density is limited, how much further can a spinning disk go?
A hard disk is limited by the number of edges and transitions between states, so we take a look at
how much further a spinning disk can go.
How to justify the cost of a solid-state drive (SSD)
We put solid-state drives up against hard disk drives and work out the best use cases for SSDs.
Learn why applications, designed to impact revenue, are a perfect fit.
Tape drives
A tape drive is designed to store computer data on a magnetic tape. This is typically used for
backup and archiving. A tape drive works on the same basis as a tape recorder, in that both
record data on a length of flexible magnetic material. This data can be read from and erased by
the tape drive. Data is written to the tape in one of two ways – either through helical scan,
where tracks are recorded diagonally across the tape, or through linear tape technology, where
tracks run parallel along the length of the tape.
Personal data of 34,000 customers misplaced by Morgan Stanley
Morgan Stanley’s compact disks went missing, with the details of 34,000 customers on them.
The password-protected but unencrypted disks disappeared whilst in transit.
Bid of £2.6 billion for Hitachi Global Storage Technologies from Western Digital
Hard drive vendor Western Digital offered a bid of £2.6 billion to purchase Hitachi Global Storage
Technologies. The bid brought an end to the Japanese HDD vendor's previous preparations for an
IPO.
Is mainframe tape backup outdated nowadays?
According to this systems integrator, the UK will soon start to abandon its outdated 1980s-style
backup technologies.
Why firms are avoiding encryption on backup tapes and databases
Companies are ignoring database and tape encryption due to cost and complexity, according to
the results of this survey.
Zurich receives data breach fine
After Zurich Insurance UK outsourced some of its customer data to Zurich Insurance Company
South Africa Ltd, the company had to admit to the loss of 46,000 records during a routine tape
transfer. The unencrypted back-up tape was lost in August 2008, and as a result Zurich Insurance
Plc was forced to pay a record fine.
A technical guide to your tape backup
If you don't believe tape is dead, here's a guide to how best to use the technology for backup.
Disk storage
Disk storage refers to data that is recorded on a surface layer. The data is stored and recorded by
electronic, optical and magnetic methods. As it is recorded, the data lies across one, or
sometimes several, round rotating platters.
Why folder and file encryption isn’t as safe as full disk encryption
How to ensure that a lost corporate laptop doesn’t cause a data breach. Full disk encryption vs.
file and folder encryption – which one is safest and easiest to use?
European storage budgets remain low
Storage budgets continue to shrink, with most of the spending going to disk systems, according
to SearchStorage.co.UK's Purchasing Intentions survey.
Head to head: Tape archive vs disk archive
At EMC World the vendor’s message was “tape sucks,” however several other vendors still
claim tape is necessary. Find out which one is best.
The cost of disks cut by Compellent Data Progression: Radiology company case study
Read how Compellent’s Data Progression software expanded data storage through to cheaper tier
3 and SATA disks save this radiology firm cash to spend elsewhere.
How to conquer virtual desktop I/O bottlenecks
Learn how to tackle virtual desktop I/O bottlenecks with flash drives and solid state disks (SSD).
Virtual storage
Virtual storage is a reference to memory being extended beyond main storage into what is
called secondary storage, managed by system software. The way it is managed means programs
still treat this designated storage as if it were main storage.
VMware vSphere and HP 3PAR Utility Storage combine to make HP 3PAR Storage
Systems
VMware and 3PAR have combined forces to offer server virtualisation and virtualised storage.
The tier 1 HP 3PAR Storage System is designed for virtual data centre and cloud computing
environments.
How data centres have expanded to accommodate a demand for more data storage
Due to an explosion in data, data centres have had to expand to cope with more and more data
storage. We take a look at how data centres have coped with this rapid expansion.
Teradata makes an SSD, hard disk and virtual storage cocktail
Teradata used its Universe show in Barcelona to announce its new Active EDW 6680 data
warehouse.
Job overlap: What exactly does a WAN manager do in terms of storage?
There are several areas where a storage manager overlaps with a WAN manager, so find out
where job responsibilities overlap for data deduplication, disaster recovery (DR), remote site
acceleration and optimisation.
How to implement SAN virtualisation
Best practices on how to implement SAN virtualisation. Find out about storage virtualisation and
which products to consider.
FACTORS OF IT MANAGEMENT:
The 5 Key Success Factors Of Business
(1) Managing and developing people - People today want some direction and structure, but they
also want freedom and encouragement to develop their skills and knowledge. Effectively
managing people requires balancing constraining forces (providing direction, structure,
organization, some rules) with liberating forces (encourage personal growth, development and
creativity). If you as manager/leader err too much in one direction or the other, your organization
will be either too rigid or too chaotic. To make it more complicated, each person has a different
set of needs for structure vs. freedom, order vs. opportunity, logic vs. personal values, factual
information vs. meaning and connections, and so on. Effective managers do not manage all
people the same, except for some basic rules. They manage each person according to what he or
she needs, what motivates them to do their best. This can be complicated but is essential for
success.
(2) Strategic focus - In today’s rap
idly changing world, it’s not
just enough to have a purpose for existing. Leaders have to focus the organization’s resources on
the greatest opportunities, which shift with each new day. Just run through your mind what has
happened in the world or your organization in the past year or two, and you’ll understand what
we mean by the reality of constant change. Doors open and doors close. Major customers or
income sources can change or even go out of business at any time. So it’s necessary for leaders
to keep focused on the desired end results such as increased sales and profits, or more satisfied
customers, while constantly steering the organization across the stormy waters of the
marketplace. The job of focused leaders is to connect and align all the Success Factors for
optimum performance.
(3) Operations, or what people do all day - What the people in your organization do day in and
day out to create value for customers, to earn or justify income, strongly determines whether you
succeed or fail. Like the other Top 5 Success Factors, you can’t separate operations from
strategic focus which gives direction, people which do the work, customers who pay the money
and physical resources to do the work. Effective operations ensure that customers get exactly
what they want at the right time, the right price and the right quality. Thus effective operations
management focuses on what is called cycle time (producing a product or service from start to
finish), cost control, and quality control (which requires some form of measurement). Strategic
focus is largely externally oriented, operations largely internally oriented. Both need to be totally
in sync with each other – not something that happens automatically but rather requiring constant
effort. This is why communication is the true lifeblood of a successful organization – a high flow
of information so everyone and everything is connected. Easy to say, hard to do.
(4) Physical resources - Finances, facilities and equipment are the big 3 physical resources. If
you don’t have enough money, you can’t start or sustain an organization. And one of the biggest
expenses is providing adequate facilities and equipment for people to work in and with.
Experienced managers learn that cash flow is king. It doesn’t matter how much customers owe
you, it’s when their money enters your bank account so you can use it to sustain the organization.
Failing to manage cash flow is the No. 1 reason for business failure. Too many business owners
leave the money up to someone else and can easily get blind-sided when suddenly the money
isn’t there to keep the doors open. And in a few rare, unfortunate cases, the person tracking the
money embezzles or cooks the books, then you really are in trouble. Likewise nice facilities can
be energizing, something to feel proud about, but also very expensive. The economy is always
cyclical, and if you buy or lease really nice facilities when times are good, paying for them can
be difficult or impossible in a downturn.
(5) Customer relations - Customers are where the money comes from, so in many ways this is
the most important success factor. As the famous business guru Peter Drucker said years ago,
The purpose of a business is to get and keep customers. Getting customers involves marketing –
indeed this success factor includes all kinds of marketing and sales. The key to successful
customer relations is to give them what they need, not just what you want to sell. Effective sales
and marketing begins with asking existing and potential customers what they need, what problem
they want solved or deficiency filled. By keeping in touch with customers and asking these
questions often, you’ll do a better job of developing customer loyalty and keeping competitors
away. In the broadest sense customer relations can be considered the organization’s relationships
with the external world. It involves tracking competitor actions, analyzing changes in the market
environment, and adapting accordingly. This is closely linked to Strategic Focus.
Five Key Elements to Managing Teams
As explained by Patrick Lencioni’s in “The Five Dysfunctions of a Team”, senior executives,
middle management and assigned team leaders, must foster and expect that team member
activities include the following characteristics:
1. Trust among team members
Building trust takes time. If trust is lacking it must be the responsibility of the team leader to
focus first on building trust, i.e. getting team members to open up (among the team) and expose
their weaknesses and fears to each other. In some cases, a team building exercise can be utilized.
In certain business cases, due to time pressures, the leader may have to take responsibility for
building trust or change the team to achieve the necessary level of trust for team success. Until
everyone is willing to trust the other members of the team, progress towards team success will be
limited.
2. Prepare to engage in debate around ideas.
Disagreements can lead to conflict, but conflict can be good. If ideas are not presented and
debated, the team will miss opportunities to find the best solutions to problems. Respect for the
thoughts and ideas of the other team members will be developed through healthy debate.
3. Learn to commit to decisions and plans of action.
Team results will only come about as a result of team commitment to team decisions; this
includes agreeing on the specifics of action plans. If some team members are not consistent with
their commitments, the team will not succeed.
4. Hold one another accountable against their plans.
Team members must be prepared to check among themselves to assure progress and overcome
obstacles to progress. Ad hoc meetings may be necessary to coordinate actions between
departments or groups to assure progress.
5. Focus on achieving collective results.
The vision and/or mission of the team must be accepted by all the team members and critical
goals viewed as the collective responsibility of the team. If a return to profitability is a critical
goal of an executive team, priorities and time commitments must be pulled from elsewhere.
Focusing on results that do not support the critical goal(s) of the team will lead to team failure.
Mr. Lencioni’s diagnosis is helpful in understanding team dynamics. Yet, a straight forward
prescription for building successful teams is to A. Build attitudes of trust among team members,
B. Communicate openly among team members, and C. Focus on common goals that are related
to a clear purpose. The purpose, of course, must be based on the business vision, values and
mission of the company or, at the very least, the specific mission assigned the team by company
management.
IMPLEMENTATION CONTROL:
Implementation
Missteps in the implementation phase of a marketing plan can be disastrous. Implementation
means execution, or the actual steps the company will take to promote its business. These steps
may include running ads, launching a website or sending direct mail. If the implementation isn't
completed according to plan, the company won't achieve its strategic objectives. The best ideas
still need to be enacted. The implementation phase of the marketing plan makes sure the
marketing activities happen in the correct time and sequence for success.
Evaluation
The evaluation step of a marketing plan focuses on analyzing quantitative and qualitative metrics
associated with the implementation and strategy. Quantifiable metrics are those to which
numbers can be attached, such as the numbers of sales leads obtained, customers reached and
dollar amounts achieved. Qualitative factors include measures of customer satisfaction.
Evaluating the marketing plan means looking at the data and examining whether or not the
company achieved its strategy objectives from the implementation phase. If it did, the steps can
be replicated for future success. If not, changes can be made to improve performance and results.
Control
Controls are necessary for the evaluation phase. Controls established during the creation of the
marketing plan provide benchmarks to assess how well the plan accomplished its goals. Controls
are like goals; they give the company something to aim for when enacting the plan. Controls may
include measures such as the marketing budgets and market share.
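As a rough sketch of how controls act as benchmarks during evaluation (the metric names and target figures below are illustrative assumptions, not taken from this text):

```python
# Illustrative benchmarks set in the marketing plan; names and figures are assumptions.
controls = {"sales_leads": 500, "market_share_pct": 12.0, "marketing_budget": 50_000}
actuals = {"sales_leads": 430, "market_share_pct": 12.5, "marketing_budget": 55_000}

for metric, target in controls.items():
    actual = actuals[metric]
    # Budget is a ceiling rather than a floor, so invert the comparison for it.
    if metric == "marketing_budget":
        status = "met" if actual <= target else "missed"
    else:
        status = "met" if actual >= target else "missed"
    print(f"{metric}: target {target}, actual {actual} -> {status}")
```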
Implementation of Management Control Systems : Overview
There is no certainty that management control systems will always be effective, either in terms of
design or in terms of implementation. These systems can only increase the probability of
achievement of organizational objectives of effectiveness, efficiency, accuracy of financial
reporting, and compliance.
Management controls should be integrated or in-built into the organization's activities. These inbuilt control systems will influence the organization's capability to achieve its objectives and also
help in improving the quality of its business operations. There are five components of
management control - control environment, risk assessment, control activities, information and
communication, and monitoring the control system.
Control activities refer to the policies and procedures that are used in an organization to provide
a reasonable assurance that the directions and instructions given by the management are followed
appropriately.
Control activities differ depending on the business environment, organizational objectives,
complexity in business operations, the people involved in the implementation of these activities,
and organizational structure and culture. Conducting meetings helps in improving decision
making and also in reducing the time taken for the decision-making process. Four different types
of meetings which serve different purposes are: the daily check-in, the weekly tactical, the
monthly strategic, and the quarterly off-site review.
Information systems will not be effective without proper communication between the different
levels of management. Communication is not only required to pass on the information but is also
necessary for coordination of work, assigning responsibilities, etc. Two types of communications
- internal communication and external communication - take place in any organization.
The management controls are designed in such a way that the control activities involved are
monitored on a continuous basis or separately. Continuous monitoring helps the organization by
offering feedback on whether the control components are effective or ineffective. Separate
assessment of activities helps in understanding the effectiveness of the control system as a whole
and, in turn, of the continuous monitoring processes. The most important factor while
implementing control systems is that the organizations should have proper processes in place to
identify, communicate, follow up, and rectify discrepancies (if any) in the set plans and
objectives.
Management control is implemented by a number of people both internal and external to the
organization. Each of them plays a different role and has different responsibilities toward the
effective implementation of a management control system. The entities internal to the
organization are the management, the board of directors, the internal auditors, and most of the
employees; the entities external to the organization include external auditors, regulatory bodies,
customers, suppliers, and financial analysts.
Control is a process that is executed by people, and the relevant procedures should be practiced
thoughtfully, rather than mechanically. Consistency of execution is another major requirement
for the success of the administration of management control systems in an organization. The
issues faced in implementation can be those which hinder the management control process or
dysfunctional consequences of implementing the management control system.
Some issues that hinder management control process are: lack of proper organizational structure,
management style, well-defined hierarchy, etc.; lack of proper person-job and person-reward fit;
deficiencies in training and developing employees; collusion between the controlled person and
the controlling person; illegitimate use of management authority; and lack of proper
communication.
The implementation and administration of management control systems can lead to
dysfunctional consequences that are counterproductive to the achievement of organizational
objectives. It is necessary to closely monitor the control system to see whether it is actually
motivating managers and employees to act in the interest of the organization, so that necessary
corrective actions may be taken in the design and/or implementation. Some dysfunctional
consequences of management control systems are excessive quantification and attempt to
measure all possible measures, presence of standard operating procedures curbing innovation,
and data manipulation.
The control requirements change depending on which stage of the life cycle the organization is
in. Organizations usually go through five different phases of development and growth - the
creativity phase, the direction phase, the decentralization phase, the coordination phase, and the
collaboration phase. Transition from one phase to another is a difficult process for an
organization as it involves changing the rules for the functioning of the organization, the control
systems and procedures, as well as the way in which it will react and adapt to the external
environment.
In the creativity phase, the decision-making power lies with the owners and communication is
informal. In the direction phase, the organization adopts a functional structure with revenue
centers and cost centers; it implements accounting, budgeting, and inventory management
systems; there is formalization of communication and incentive schemes. In the decentralization
phase, profit centers are created; managers are motivated through increased autonomy and
incentives; and internal control and reporting systems help monitor the activities of lower level
managers. In the coordination phase, organizations adopt a divisional or product structure with
investment centers; proper systems for monitoring and control are put in place; strategic
decisions are centralized; and incentives are linked to organizational performance. In the
collaboration phase, a matrix structure is adopted; teamwork, social controls, and self-discipline
are highly emphasized; incentives are based on team performance; and focus is on innovation
and collaborative problem-solving. In addition to organizational growth, decline, or turnaround,
change can also take place when an existing control system used by an organization is modified
or a completely new control system is implemented.
SECURITY
Security is the degree of resistance to, or protection from, harm. It applies to any vulnerable and
valuable asset, such as a person, dwelling, community, nation, or organization.
As noted by the Institute for Security and Open Methodologies (ISECOM) in the OSSTMM 3,
security provides "a form of protection where a separation is created between the assets and the
threat." These separations are generically called "controls," and sometimes include changes to
the asset or the threat.
Perceived security compared to real security
Perception of security may be poorly mapped to measurable objective security. For example,
the fear of earthquakes has been reported to be more common than the fear of slipping on the
bathroom floor although the latter kills many more people than the former.[2] Similarly, the
perceived effectiveness of security measures is sometimes different from the actual security
provided by those measures. The presence of security protections may even be taken for security
itself. For example, two computer security programs could be interfering with each other and
even cancelling each other's effect, while the owner believes s/he is getting double the protection.
Security theater is a critical term for deployment of measures primarily aimed at raising
subjective security without a genuine or commensurate concern for the effects of that measure on
objective security. For example, some consider the screening of airline passengers based on
static databases to have been Security Theater and Computer Assisted Passenger Prescreening
System to have created a decrease in objective security.
Perception of security can increase objective security when it affects or deters malicious
behavior, as with visual signs of security protections, such as video surveillance, alarm systems
in a home, or an anti-theft system in a car such as a vehicle tracking system or warning sign.
Since some intruders will decide not to attempt to break into such areas or vehicles, there can
actually be less damage to windows in addition to protection of valuable objects inside. Without
such advertisement, an intruder might, for example, approach a car, break the window, and then
flee in response to an alarm being triggered. Either way, perhaps the car itself and the objects
inside aren't stolen, but with perceived security even the windows of the car have a lower chance
of being damaged.
Categorizing security
There is an immense literature on the analysis and categorization of security. Part of the reason
for this is that, in most security systems, the "weakest link in the chain" is the most important.
The situation is asymmetric since the 'defender' must cover all points of attack while the attacker
need only identify a single weak point upon which to concentrate.






Computer security
Internet security
Application security
Data security
Information security
Network security
Computer security
Computer security, also known as cyber security or IT security, is security applied to
computing devices such as computers and smart phones, as well as computer networks such as
private and public networks, including the whole Internet. The field includes all the processes
and mechanisms by which digital equipment, information and services are protected from
unintended or unauthorized access, change or destruction, and is of growing importance due to
the increasing reliance on computer systems in most societies.[1] It includes physical security to
prevent theft of equipment and information security to protect the data on that equipment. Those
terms generally do not refer to physical security, but a common belief among computer security
experts is that a physical security breach is one of the worst kinds of security breaches as it
generally allows full access to both data and equipment.
Cyber security is the process of applying security measures to ensure confidentiality, integrity,
and availability of data. Cyber security attempts to assure the protection of assets, which includes
data, desktops, servers, buildings, and most importantly, humans. The goal of cyber security is to
protect data both in transit and at rest. Countermeasures can be put in place in order to increase
the security of data. Some of these measures include, but are not limited to, access control,
awareness training, audit and accountability, risk assessment, penetration testing, vulnerability
management, and security assessment and authorization.
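As one small, concrete illustration of a countermeasure aimed at the integrity goal mentioned above, the following sketch verifies that a file has not been altered by comparing cryptographic hashes, using Python's standard hashlib module; it is illustrative only, not a complete control:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def integrity_check(path: str, known_good_digest: str) -> bool:
    """Return True if the file still matches its previously recorded digest."""
    return sha256_of_file(path) == known_good_digest
```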
Vulnerabilities
Main article: Vulnerability (computing)
A vulnerability is a weakness which allows an attacker to reduce a system's information
assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw,
attacker access to the flaw, and attacker capability to exploit the flaw. To exploit a vulnerability,
an attacker must have at least one applicable tool or technique that can connect to a system
weakness. In this frame, vulnerability is also known as the attack surface.
A large number of vulnerabilities are documented in the Common Vulnerabilities and Exposures
(CVE) database.
Vulnerability management is the cyclical practice of identifying, classifying, remediating, and
mitigating vulnerabilities. This practice generally refers to software vulnerabilities in computing
systems.
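A minimal sketch of the identify/classify/prioritize portion of that cycle; the record fields and the simple scoring below are assumptions for illustration and not a real CVE or CVSS implementation:

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    cve_id: str          # e.g. an identifier drawn from the CVE database
    severity: float      # simplified 0-10 score; real programmes often use CVSS
    exploit_known: bool  # True if a working exploit is publicly known
    patched: bool        # True once remediation has been applied


def remediation_queue(vulns: list[Vulnerability]) -> list[Vulnerability]:
    """Order unpatched vulnerabilities, exploitable and severe ones first."""
    open_vulns = [v for v in vulns if not v.patched]
    return sorted(open_vulns, key=lambda v: (v.exploit_known, v.severity), reverse=True)
```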
A security risk may be classified as a vulnerability. The use of vulnerability with the same
meaning of risk can lead to confusion. The risk is tied to the potential of a significant loss. There
can also be vulnerabilities without risk, like when the asset has no value. A vulnerability with
one or more known (publicly or privately) instances of working and fully implemented attacks is
classified as an exploitable vulnerability- a vulnerability for which an exploit exists. To exploit
those vulnerabilities, perpetrators (individual hacker, criminal organization, or a nation state)
most commonly use malware (malicious software), worms, viruses, and targeted attacks.
Different scales exist to assess the risk of an attack. In the United States, authorities use the
Information Operations Condition (INFOCON) system. This system is scaled from 5 to 1
(INFOCON 5 being a harmless situation and INFOCON 1 representing the most critical
threats).
To understand the techniques for securing a computer system, it is important to first understand
the various types of "attacks" that can be made against it. These threats can typically be classified
into one of the categories in the section below.
Backdoors
A backdoor in a computer system, a cryptosystem or an algorithm, is a method of bypassing
normal authentication, securing remote access to a computer, obtaining access to plaintext, and
so on, while attempting to remain undetected. A special form of backdoor based on asymmetric
encryption, known as a kleptographic attack, remains useless to a reverse engineer even after it
is detected and analyzed.
The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a
modification to an existing program or hardware device. A specific form of backdoor is a
rootkit, which replaces system binaries and/or hooks into the function calls of an operating system to
hide the presence of other programs, users, services and open ports. It may also fake information
about disk and memory usage.
Denial-of-service attack
Main article: Denial-of-service attack
Unlike other exploits, denial of service attacks are not used to gain unauthorized access or
control of a system. They are instead designed to render it unusable. Attackers can deny service
to individual victims, such as by deliberately entering a wrong password enough consecutive
times to cause the victim account to be locked, or they may overload the capabilities of a
machine or network and block all users at once. These types of attack are, in practice, difficult to
prevent, because the behavior of whole networks needs to be analyzed, not just the behavior of
small pieces of code. Distributed denial of service (DDoS) attacks, where a large number of
compromised hosts (commonly referred to as "zombie computers", used as part of a botnet with,
for example, a worm, Trojan horse, or backdoor exploit to control them) are used to flood a
target system with network requests, thus attempting to render it unusable through resource
exhaustion, are common. Another technique to exhaust victim resources is through the use of an
attack amplifier, where the attacker takes advantage of poorly designed protocols on third-party
machines, such as NTP or DNS, in order to instruct these hosts to launch the flood. Some
vulnerabilities in applications or operating systems can be exploited to make the computer or
application malfunction or crash to create a denial-of-service.
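The paragraph above describes how attackers overload a machine or network with requests; one common partial mitigation on the application side (not discussed in the text, and no substitute for network-level defences) is per-client rate limiting, sketched here as a simple token bucket:

```python
import time


class TokenBucket:
    """Allow roughly `rate` requests per second per client, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow_request(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected; this client is over its rate limit
```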
Direct-access attacks
Common consumer devices that can be used to transfer data surreptitiously.
An unauthorized user gaining physical access to a computer (or part thereof) can perform many
functions or install different types of devices to compromise security, including operating system
modifications, software worms, keyloggers, and covert listening devices. The attacker can also
easily download large quantities of data onto backup media, like CD-R/DVD-R or portable
devices such as flash drives, digital cameras or digital audio players. Another common technique
is to boot an operating system contained on a CD-ROM or other bootable media and read the
data from the hard drive(s) this way. The only way to prevent this is to encrypt the storage media
and store the key separate from the system. Direct-access attacks are the only type of threat to air
gapped computers in most cases.
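As a minimal illustration of the advice above to encrypt storage media and keep the key separate, the sketch below encrypts a file with the third-party cryptography package (Fernet symmetric encryption) before it is written to backup media; in practice full-disk encryption tools are the more usual answer, and the paths shown in the usage comment are placeholders:

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography


def encrypt_backup(source_path: str, encrypted_path: str, key_path: str) -> None:
    """Encrypt a file before it is copied to backup media, storing the key elsewhere.

    The key file should live on a separate, secured system, never on the
    backup media itself.
    """
    key = Fernet.generate_key()
    with open(key_path, "wb") as key_file:
        key_file.write(key)

    fernet = Fernet(key)
    with open(source_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(encrypted_path, "wb") as dst:
        dst.write(ciphertext)


# Example usage (paths are placeholders):
# encrypt_backup("customer_records.db", "/mnt/backup/customer_records.enc", "/secure/offsite/backup.key")
```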
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private conversation, typically between
hosts on a network. For instance, programs such as Carnivore and NarusInsight have been used
by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines
that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped
upon via monitoring the faint electro-magnetic transmissions generated by the hardware;
TEMPEST is a specification by the NSA referring to these attacks.
Spoofing
Spoofing of user identity describes a situation in which one person or program successfully
masquerades as another by falsifying data.
Tampering
Tampering describes an intentional modification of products in a way that would make them
harmful to the consumer.
Repudiation
Repudiation describes a situation where the authenticity of a signature is being challenged.
Information disclosure
Information disclosure (privacy breach or data leak) describes a situation where information,
thought to be secure, is released in an untrusted environment.
Privilege escalation
Privilege escalation describes a situation where an attacker gains elevated privileges or access to
resources that are normally restricted from them.
Exploits
Main article: Exploit (computer security)
An exploit is a software tool designed to take advantage of a flaw in a computer system. This
frequently includes gaining control of a computer system, allowing privilege escalation, or
creating a denial of service attack. The code from exploits is frequently reused in Trojan horses
and computer viruses. In some cases, a vulnerability can lie in certain programs' processing of a
specific file type, such as a non-executable media file. Some security web sites maintain lists of
currently known unpatched vulnerabilities found in common programs.
Social engineering and Trojans
Main article: Social engineering (security)
See also: Category: Cryptographic attacks
A computer system is no more secure than the persons responsible for its operation. Malicious
individuals have regularly penetrated well-designed, secure computer systems by taking
advantage of the carelessness of trusted individuals, or by deliberately deceiving them, for
example sending messages that they are the system administrator and asking for passwords. This
deception is known as social engineering.
In the world of information technology there are different types of cyber attack, such as code
injection into a website or the use of malware (malicious software) such as viruses and Trojans.
Attacks of these kinds are counteracted by managing or improving the affected product. But there
is one last type, social engineering, which does not directly target the computers but rather their
users, who are often called "the weakest link". This type of attack is capable of achieving
similar results to other classes of cyber attacks by going around the infrastructure established to
resist malicious software; since it is more difficult to anticipate or prevent, it is often a
more efficient attack vector.
The main aim is to convince the user, by psychological means, to disclose secrets such
as passwords, card numbers, etc. by, for example, impersonating a bank, a contractor, or a
customer.[3]
Indirect attacks
An indirect attack is an attack launched by a third-party computer. By using someone else's
computer to launch an attack, it becomes far more difficult to track down the actual attacker.
There have also been cases where attackers took advantage of public anonymizing systems, such
as the Tor onion router system.
Computer crime
Computer crime refers to any crime that involves a computer and a network.[4]
Vulnerable areas
Computer security is critical in almost any industry which uses computers.[5]
Financial systems
Web sites that accept or store credit card numbers and bank account information are prominent
hacking targets, because of the potential for immediate financial gain from transferring money,
making purchases, or selling the information on the black market. In-store payment systems and
ATMs have also been tampered with in order to gather customer account data and PINs.
Utilities and industrial equipment
Computers control functions at many utilities, including coordination of telecommunications, the
power grid, nuclear power plants, and valve opening and closing in water and gas networks. The
Internet is a potential attack vector for such machines if connected, but the Stuxnet worm
demonstrated that even equipment controlled by computers not connected to the Internet can be
vulnerable to physical damage caused by malicious commands sent to industrial equipment (in
that case uranium enrichment centrifuges) which are infected via removable media. In 2014, the
Computer Emergency Readiness Team, a division of the Department of Homeland Security,
investigated 79 hacking incidents at energy companies.[6]
Aviation
The aviation industry is especially important when analyzing computer security because the
involved risks include human life, expensive equipment, cargo, and transportation infrastructure.
Security can be compromised by hardware and software malpractice, human error, and faulty
operating environments. Threats that exploit computer vulnerabilities can stem from sabotage,
espionage, industrial competition, terrorist attack, mechanical malfunction, and human error.[7]
The consequences of a successful deliberate or inadvertent misuse of a computer system in the
aviation industry range from loss of confidentiality to loss of system integrity, which may lead to
more serious concerns such as exfiltration (data theft or loss) and network and air traffic control
outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military
systems that control munitions can pose an even greater risk.
A successful attack does not need to be very high-tech or well funded; a power outage at an
airport alone can cause repercussions worldwide.[8] One of the easiest and, arguably, the most
difficult to trace security vulnerabilities is achievable by transmitting unauthorized
communications over specific radio frequencies. These transmissions may spoof air traffic
controllers or simply disrupt communications altogether.[9] Controlling aircraft over oceans is
especially dangerous because radar surveillance only extends 175 to 225 miles offshore. Beyond
the radar's sight, controllers must rely on periodic radio communications with a third party.[10]
Another attack vector of concern is onboard wifi systems.[11]
Consumer devices
Desktop computers and laptops are commonly infected with malware, either to gather passwords
or financial account information, or to construct a botnet to attack another target. Smart phones,
tablet computers, smart watches, and other mobile devices have also recently become targets for
malware.
Many smart phones have cameras, microphones, GPS receivers, compasses, and accelerometers.
Many Quantified Self devices, such as activity trackers, and mobile apps collect personal
information, such as heartbeat, diet, notes on activities (from exercise in public to sexual
activities), and performance of bodily functions. Wifi, Bluetooth, and cell phone network devices
can be used as attack vectors, and sensors might be remotely activated after a successful attack.
Many mobile applications do not use encryption to transmit this data, nor to protect usernames
and passwords, leaving the devices and the web sites where data is stored vulnerable to
monitoring and break-ins.[12]
Hacking techniques have also been demonstrated against home automation devices such as the
Nest thermostat.[12]
Large corporations
Data breaches at large corporations have become common, largely for financial gain through
identity theft. Notably, the 2014 Sony Pictures Entertainment hack was allegedly carried out by
the government of North Korea or its supporters, in retaliation for an unflattering caricature and
fictional assassination of supreme leader Kim Jong-un.
Automobiles
With physical access to a car's internal controller area network, hackers have demonstrated the
ability to disable the brakes and turn the steering wheel.[13] Computerized engine timing, cruise
control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver assistance
systems make these disruptions possible, and self-driving cars go even further. Connected cars
may use wifi and Bluetooth to communicate with onboard consumer devices, and the cell phone
network to contact concierge and emergency assistance services or get navigational or
entertainment information; each of these networks is a potential entry point for malware or an
attacker.[13] Researchers in 2011 were even able to use a malicious compact disc in a car's stereo
system as a successful attack vector,[14] and cars with built-in voice recognition or remote
assistance features have onboard microphones which could be used for eavesdropping. A 2015
report by U.S. Senator Edward Markey criticized manufacturers' security measures as inadequate
and also highlighted privacy concerns about driving, location, and diagnostic data being
collected, which is vulnerable to abuse by both manufacturers and hackers.[15]
Government
Military installations have been the target of hacks; vital government infrastructure such as
traffic light controls, police and intelligence agency communications, and financial systems are
also potential targets as they become computerized.
Financial cost of security breaches
Serious financial damage has been caused by security breaches, but because there is no standard
model for estimating the cost of an incident, the only data available is that which is made public
by the organizations involved. “Several computer security consulting firms produce estimates of
total worldwide losses attributable to virus and worm attacks and to hostile digital acts in
general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only)
to $226 billion (for all forms of covert attacks). The reliability of these estimates is often
challenged; the underlying methodology is basically anecdotal.”[16]
However, reasonable estimates of the financial cost of security breaches can actually help
organizations make rational investment decisions. According to the classic Gordon-Loeb Model
analyzing the optimal investment level in information security, one can conclude that the amount
a firm spends to protect information should generally be only a small fraction of the expected
loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[17]
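As a hedged illustration of this idea, the short Python sketch below applies the widely cited
Gordon-Loeb rule of thumb that the optimal security investment should generally not exceed
roughly 1/e (about 37 percent) of the expected loss; the breach probability and loss figures are
hypothetical, chosen only for the arithmetic.

import math

# Hypothetical figures for illustration only.
potential_loss = 1_000_000      # value at risk if a breach occurs
breach_probability = 0.15       # estimated chance of a breach this year

expected_loss = potential_loss * breach_probability    # 150,000
investment_ceiling = expected_loss / math.e             # roughly 55,000

print(f"Expected loss:      {expected_loss:,.0f}")
print(f"Investment ceiling: {investment_ceiling:,.0f} (about 37% of expected loss)")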
Insecurities in operating systems have led to a massive black market for rogue software. An
attacker can use a security hole to install software that tricks the user into buying a product. At
that point, an affiliate program pays the affiliate responsible for generating that installation about
$30. The software is sold for between $50 and $75 per license.[18]
Reasons
There are many similarities (yet many fundamental differences) between computer and physical
security. Just like real-world security, the motivations for breaches of computer security vary
between attackers, sometimes called hackers or crackers. Some are thrill-seekers or vandals (the
kind often responsible for defacing web sites); similarly, some web site defacements are done to
make political statements. However, some attackers are highly skilled and motivated with the
goal of compromising computers for financial gain or espionage. An example of the latter is
Markus Hess (more diligent than skilled), who spied for the KGB and was ultimately caught
because of the efforts of Clifford Stoll, who wrote a memoir, The Cuckoo's Egg, about his
experiences.
For those seeking to prevent security breaches, the first step is usually to attempt to identify what
might motivate an attack on the system, how much the continued operation and information
security of the system are worth, and who might be motivated to breach it. The precautions
required for a home personal computer are very different from those for a bank's Internet banking
systems, and different again for a classified military network. Other computer security writers
suggest that, since an attacker using a network need know nothing about you or what you have
on your computer, attacker motivation is inherently impossible to determine beyond guessing. If
true, blocking all possible attacks is the only plausible action to take.
Computer protection (countermeasures)
There are numerous ways to protect computers, including utilizing security-aware design
techniques, building on secure operating systems and installing hardware devices designed to
protect the computer systems.
In general, a countermeasure is a measure or action taken to counter or offset another one. In
computer security a countermeasure is defined as an action, device, procedure, or technique that
reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the
harm it can cause, or by discovering and reporting it so that corrective action can be taken.[19][20]
The InfosecToday glossary[21] offers an alternate definition of countermeasure.
Security and systems design
Although there are many aspects to take into consideration when designing a computer system,
security can prove to be very important. According to Symantec, in 2010, 94 percent of
organizations polled expected to implement security improvements to their computer systems, with
42 percent claiming cyber security as their top risk.[22]
At the same time, many organizations are improving security and many types of cyber criminals
are finding ways to continue their activities. Almost every type of cyber attack is on the rise. In
2009 respondents to the CSI Computer Crime and Security Survey admitted that malware
infections, denial-of-service attacks, password sniffing, and web site defacements were
significantly higher than in the previous two years.
Security measures
A state of computer "security" is the conceptual ideal, attained by the use of the three processes:
threat prevention, detection, and response. These processes are based on various policies and
system components, which include the following:
 User account access controls and cryptography can protect system files and data,
respectively.
 Firewalls are by far the most common prevention systems from a network security
perspective as they can (if properly configured) shield access to internal network
services, and block certain kinds of attacks through packet filtering. Firewalls can be
either hardware- or software-based.
 Intrusion Detection Systems (IDSs) are designed to detect network attacks in progress
and assist in post-attack forensics, while audit trails and logs serve a similar function for
individual systems.
 "Response" is necessarily defined by the assessed security requirements of an individual
system and may cover the range from simple upgrade of protections to notification of
legal authorities, counter-attacks, and the like. In some special cases, complete
destruction of the compromised system is favored, as it may happen that not all the
compromised resources are detected.
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit
procedure. A firewall can be defined as a way of filtering network data between a host or a
network and another network, such as the Internet, and can be implemented as software running
on the machine, hooking into the network stack (or, in the case of most UNIX-based operating
systems such as Linux, built into the operating system kernel) to provide real time filtering and
blocking. Another implementation is a so-called physical firewall which consists of a separate
machine filtering network traffic. Firewalls are common amongst machines that are permanently
connected to the Internet.
However, relatively few organizations maintain computer systems with effective detection
systems, and fewer still have organized response mechanisms in place. As a result, as Reuters
points out: “Companies for the first time report they are losing more through electronic theft of
data than physical stealing of assets”.[24] The primary obstacle to effective eradication of cyber
crime could be traced to excessive reliance on firewalls and other automated "detection" systems.
Yet it is basic evidence gathering by using packet capture appliances that puts criminals behind
bars.
Difficulty with response
Responding forcefully to attempted security breaches (in the manner that one would for
attempted physical security breaches) is often very difficult for a variety of reasons:
 Identifying attackers is difficult, as they are often in a different jurisdiction to the systems
they attempt to breach, and operate through proxies, temporary anonymous dial-up
accounts, wireless connections, and other anonymizing procedures which make
backtracking difficult and are often located in yet another jurisdiction. If they
successfully breach security, they are often able to delete logs to cover their tracks.
 The sheer number of attempted attacks is so large that organizations cannot spend time
pursuing each attacker (a typical home user with a permanent (e.g., cable modem)
connection will be attacked at least several times per day, so more attractive targets can
be presumed to see many more). Note, however, that most of the sheer bulk of these
attacks are made by automated vulnerability scanners and computer worms.
 Law enforcement officers are often unfamiliar with information technology, and so lack
the skills and interest in pursuing attackers. There are also budgetary constraints. It has
been argued that the high cost of technology, such as DNA testing, and improved
forensics mean less money for other kinds of law enforcement, so the overall rate of
criminals not getting dealt with goes up as the cost of the technology increases. In
addition, the identification of attackers across a network may require logs from various
points in the network, and in many countries the release of these records to law
enforcement (except when voluntarily surrendered by a network administrator or a
system administrator) requires a search warrant and, depending on the circumstances,
the legal proceedings required can be drawn out to the point where the records are either
regularly destroyed or the information is no longer relevant.
Reducing vulnerabilities
Computer code is regarded by some as a form of mathematics. It is theoretically possible to
prove the correctness of certain classes of computer programs, though the feasibility of actually
achieving this in large-scale practical systems is regarded as small by some with practical
experience in the industry; see Bruce Schneier et al.
It is also possible to protect messages in transit (i.e., communications) by means of cryptography.
One method of encryption—the one-time pad—is unbreakable when correctly used. This method
was used by the Soviet Union during the Cold War, though flaws in their implementation
allowed some cryptanalysis; see the Venona project. The method uses a matching pair of key codes, securely distributed, which are used once and only once to encode and decode a single
message. For transmitted computer encryption this method is difficult to use properly (securely),
and highly inconvenient as well. Other methods of encryption, while breakable in theory, are
often virtually impossible to directly break by any means publicly known today. Breaking them
requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the
transmission), or some other extra cryptanalytic information.
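The one-time pad itself is simple enough to sketch. The following Python fragment is illustrative
only (a real pad must be generated, distributed, and destroyed securely); it XORs a message with
a random key of equal length, and applying the same operation with the same pad recovers the
message.

import secrets

def one_time_pad(data: bytes, pad: bytes) -> bytes:
    # XOR each byte with the pad; the same call encrypts and decrypts.
    assert len(pad) >= len(data), "the pad must be at least as long as the message"
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))      # random key, used once and only once
ciphertext = one_time_pad(message, pad)
assert one_time_pad(ciphertext, pad) == message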
Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information.
Even in a highly disciplined environment, such as in military organizations, social engineering
attacks can still be difficult to foresee and prevent.
Trusting computer program code to behave securely has been pursued for decades. It has proven
difficult to determine what code 'will never do.' Mathematical proofs are elusive in part because
it is so difficult to define secure behavior even notionally, let alone mathematically. In practice,
only a small fraction of computer program code is mathematically proven, or even goes through
comprehensive information technology audits or inexpensive but extremely valuable computer
security audits, so it is usually possible for a determined hacker to read, copy, alter or destroy
data in well secured computers, albeit at the cost of great time and resources. Few attackers
would audit applications for vulnerabilities just to attack a single specific system. It is possible to
reduce an attacker's chances by keeping systems up to date, using a security scanner or/and
hiring competent people responsible for security. The effects of data loss/damage can be reduced
by careful backing up and insurance. However software-based strategies have not yet been
discovered for protecting computers from adequately funded, dedicated malicious attacks.
Security by design
Main article: Secure by design
Security by design, or alternately secure by design, means that the software has been designed
from the ground up to be secure. In this case, security is considered as a main feature.
Some of the techniques in this approach include:
 The principle of least privilege, where each part of the system has only the privileges that
are needed for its function. That way, even if an attacker gains access to that part, they
have only limited access to the whole system.
 Automated theorem proving to prove the correctness of crucial software subsystems.
 Code reviews and unit testing, approaches to make modules more secure where formal
correctness proofs are not possible.
 Defense in depth, where the design is such that more than one subsystem needs to be
violated to compromise the integrity of the system and the information it holds.
 Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe
for the equivalent in safety engineering). Ideally, a secure system should require a
deliberate, conscious, knowledgeable and free decision on the part of legitimate
authorities in order to make it insecure.
 Audit trails tracking system activity, so that when a security breach occurs, the
mechanism and extent of the breach can be determined. Storing audit trails remotely,
where they can only be appended to, can keep intruders from covering their tracks.
 Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept
as short as possible when bugs are discovered.
Security architecture
The Open Security Architecture organization defines IT security architecture as "the design
artifacts that describe how the security controls (security countermeasures) are positioned, and
how they relate to the overall information technology architecture. These controls serve the
purpose to maintain the system's quality attributes: confidentiality, integrity, availability,
accountability and assurance services".
Techopedia defines security architecture as "a unified security design that addresses the
necessities and potential risks involved in a certain scenario or environment. It also specifies
when and where to apply security controls. The design process is generally reproducible." The
key attributes of security architecture are:
 the relationship of different components and how they depend on each other.
 the determination of controls based on risk assessment, good practice, finances, and legal
matters.
 the standardization of controls.
Hardware protection mechanisms
See also: Computer security compromised by hardware failure
While hardware may be a source of insecurity, such as with microchip vulnerabilities
maliciously introduced during the manufacturing process, hardware-based or assisted computer
security also offers an alternative to software-only computer security. Using devices and
methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling
USB ports, and mobile-enabled access may be considered more secure due to the physical access
(or sophisticated backdoor access) required in order to be compromised. Each of these is covered
in more detail below.
 USB dongles are typically used in software licensing schemes to unlock software
capabilities,[29] but they can also be seen as a way to prevent unauthorized access to a
computer or other device's software. The dongle, or key, essentially creates a secure
encrypted tunnel between the software application and the key. The principle is that an
encryption scheme on the dongle, such as Advanced Encryption Standard (AES) provides
a stronger measure of security, since it is harder to hack and replicate the dongle than to
simply copy the native software to another machine and use it. Another security
application for dongles is to use them for accessing web-based content such as cloud
software or Virtual Private Networks (VPNs).[30] In addition, a USB dongle can be
configured to lock or unlock a computer.
 Trusted platform modules (TPMs) secure devices by integrating cryptographic
capabilities onto access devices, through the use of microprocessors, or so-called
computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to
detect and authenticate hardware devices, preventing unauthorized network and data
access.
 Computer case intrusion detection refers to a push-button switch which is triggered when
a computer case is opened. The firmware or BIOS is programmed to show an alert to the
operator when the computer is booted up the next time.
 Drive locks are essentially software tools to encrypt hard drives, making them
inaccessible to thieves.[33] Tools exist specifically for encrypting external drives as well.
 Disabling USB ports is a security option for preventing unauthorized and malicious
access to an otherwise secure computer. Infected USB dongles connected to a network
from a computer inside the firewall are considered by Network World as the most
common hardware threat facing computer networks.
 Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of
cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy
(LE), Near field communication (NFC) on non-iOS devices and biometric validation such
as thumb print readers, as well as QR code reader software designed for mobile devices,
offer new, secure ways for mobile phones to connect to access control systems. These
control systems provide computer security and can also be used for controlling access to
secure buildings.
Secure operating systems
Main article: Security-focused operating system
One use of the term "computer security" refers to technology that is used to implement secure
operating systems. Much of this technology is based on science developed in the 1980s and used
to produce what may be some of the most impenetrable operating systems ever. Though still
valid, the technology is in limited use today, primarily because it imposes some changes to
system management and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel technology that can guarantee that
certain security policies are absolutely enforced in an operating environment. An example of
such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling
of special microprocessor hardware features, often involving the memory management unit, to a
special correctly implemented operating system kernel. This forms the foundation for a secure
operating system which, if certain critical parts are designed and implemented correctly, can
ensure the absolute impossibility of penetration by hostile elements. This capability is enabled
because the configuration not only imposes a security policy, but in theory completely protects
itself from corruption. Ordinary operating systems, on the other hand, lack the features that
assure this maximal level of security. The design methodology to produce such secure systems is
precise, deterministic and logical.
Systems designed with such methodology represent the state of the art of computer security
although products using such security are not widely known. In sharp contrast to most kinds of
software, they meet specifications with verifiable certainty comparable to specifications for size,
weight and power. Secure operating systems designed this way are used primarily to protect
national security information, military secrets, and the data of international financial institutions.
These are very powerful security tools and very few secure operating systems have been certified
at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to
"unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS
LAN). The assurance of security depends not only on the soundness of the design strategy, but
also on the assurance of correctness of the implementation, and therefore there are degrees of
security strength defined for COMPUSEC. The Common Criteria quantifies security strength of
products in terms of two components, security functionality and assurance level (such as EAL
levels), and these are specified in a Protection Profile for requirements and a Security Target for
product descriptions. None of these ultra-high assurance secure general purpose operating
systems have been produced for decades or certified under Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security
functions that are implemented robustly enough to protect DoD and DoE classified information.
Medium assurance suggests it can protect less valuable information, such as income tax
information. Secure operating systems designed to meet medium robustness levels of security
functionality and assurance have seen wider use within both government and commercial
markets. Medium robust systems may provide the same security functions as high assurance
secure operating systems but do so at a lower assurance level (such as Common Criteria levels
EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are
implemented flawlessly, and that such systems are therefore less dependable. These systems are found in use on web
servers, guards, database servers, and management hosts and are used not only to protect the data
stored on these systems but also to provide a high level of protection for network connections
and routing services.
Secure coding
Main article: Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a
domain for its own execution, and capable of protecting application code from malicious
subversion, and capable of protecting the system from subverted code, then high degrees of
security are understandably not possible. While such secure operating systems are possible and
have been implemented, most commercial systems fall in a 'low security' category because they
rely on features not supported by secure operating systems (like portability, and others). In low
security operating environments, applications must be relied on to participate in their own
protection. There are 'best effort' secure coding practices that can be followed to make an
application more resistant to malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a
few known kinds of coding defects. Common software defects include buffer overflows, format
string vulnerabilities, integer overflow, and code/command injection. These defects can be used
to cause the target system to execute putative data. However, the "data" contain executable
instructions, allowing the attacker to gain control of the processor.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord,
"Secure Coding in C and C++").[37] Other languages, such as Java, are more resistant to some of
these defects, but are still prone to code/command injection and other software defects which
facilitate subversion.
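To make the injection risk concrete, the hedged Python sketch below contrasts an unsafe and a
safer way of invoking an external command; the ping example and host strings are hypothetical,
but the pattern (never let untrusted input reach a shell) is the standard mitigation for command
injection.

import subprocess

def ping_unsafe(host: str) -> str:
    # Vulnerable: with shell=True, a host like "8.8.8.8; rm -rf /" injects extra commands.
    return subprocess.run("ping -c 1 " + host, shell=True,
                          capture_output=True, text=True).stdout

def ping_safe(host: str) -> str:
    # Safer: the argument list goes directly to the program, so the host string
    # is never interpreted by a shell.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout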
Another bad coding practice occurs when an object is deleted during normal operation yet the
program neglects to update any of the associated memory pointers, potentially causing system
instability when that location is referenced again. This is called a dangling pointer, and the first
known exploit for this particular problem was presented in July 2007. Before this publication the
problem was known but considered to be academic and not practically exploitable.[38]
Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically
achievable, insofar as the code (ideally, read-only) and data (generally read/write) generally
tend to have some form of defect.
Capabilities and access control lists
Main articles: Access control list and Capability (computers)
Within computer systems, two of many security models capable of enforcing privilege separation
are access control lists (ACLs) and capability-based security. Using ACLs to confine programs
has been proven to be insecure in many situations, such as if the host computer can be tricked
into indirectly allowing restricted file access, an issue known as the confused deputy problem. It
has also been shown that the promise of ACLs of giving access to an object to only one person
can never be guaranteed in practice. Both of these problems are resolved by capabilities. This
does not mean practical flaws exist in all ACL-based systems, but only that the designers of
certain utilities must take responsibility to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems, while commercial OSs
still use ACLs. Capabilities can, however, also be implemented at the language level, leading to
a style of programming that is essentially a refinement of standard object-oriented design. An
open source project in the area is the E language.
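As a minimal sketch of the ACL idea discussed above (the users and resources are hypothetical),
the following Python fragment checks whether a user may perform an action on a resource; real
systems add groups, inheritance, deny rules, and auditing.

acl = {
    "/payroll/salaries.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "/public/readme.txt": {"*": {"read"}},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Combine the user's own entry with any wildcard entry for the resource.
    entries = acl.get(resource, {})
    allowed = entries.get(user, set()) | entries.get("*", set())
    return action in allowed

assert is_allowed("bob", "/payroll/salaries.db", "read")
assert not is_allowed("bob", "/payroll/salaries.db", "write")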
The most secure computers are those not connected to the Internet and shielded from any
interference. In the real world, the most secure systems are operating systems where security is
not an add-on.
Hacking back
There has been a significant debate regarding the legality of hacking back against digital
attackers (who attempt to breach, or successfully breach, an individual's, entity's, or nation's computer).
The arguments for such counter-attacks are based on notions of equity, active defense,
vigilantism, and the Computer Fraud and Abuse Act (CFAA). The arguments against the practice
are primarily based on the legal definitions of "intrusion" and "unauthorized access", as defined
by the CFAA. As of October 2012, the debate is ongoing.
Notable computer security attacks and breaches
Some illustrative examples of different types of computer security breaches are given below.
Robert Morris and the first computer worm
Main article: Morris worm
In 1988, only 60,000 computers were connected to the Internet, and most were mainframes,
minicomputers and professional workstations. On November 2, 1988, many started to slow
down, because they were running a malicious code that demanded processor time and that spread
itself to other computers - the first Internet "computer worm".[40] The software was traced back to
23-year-old Cornell University graduate student Robert Tappan Morris, Jr., who said he wanted
'to count how many machines were connected to the Internet'.
Rome Laboratory
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome
Laboratory, the US Air Force's main command and research facility. Using Trojan horses,
hackers were able to obtain unrestricted access to Rome's networking systems and remove traces
of their activities. The intruders were able to obtain classified files, such as air tasking order
systems data and furthermore able to penetrate connected networks of National Aeronautics and
Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some
Defense contractors, and other private sector organizations, by posing as a trusted Rome center
user.[41]
TJX loses 45.7m customer credit card details
In early 2007, American apparel and home goods company TJX announced that it was the victim
of an unauthorized computer systems intrusion[42] and that the hackers had accessed a system that
stored data on credit card, debit card, check, and merchandise return transactions.[43]
Stuxnet attack
The computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear
centrifuges[44] by disrupting industrial programmable logic controllers (PLCs) in a targeted attack
generally believed to have been launched by Israel and the United States[45][46][47][48] although
neither has publicly acknowledged this.
Global surveillance disclosures
Main article: Global surveillance disclosures (2013–present)
In early 2013, thousands of classified documents[49] were disclosed by NSA
contractor Edward Snowden. Called the "most significant leak in U.S. history"[50] it also revealed
for the first time the massive breaches of computer security by the NSA, including deliberately
inserting a backdoor in a NIST standard for encryption[51] and tapping the links between
Google's data centers.
Target And Home Depot Breaches by Rescator
In 2013 and 2014, a Russian/Ukrainian hacking ring known as "Rescator" broke into Target
Corporation computers in 2013, stealing roughly 40 million credit cards, and then Home Depot
computers in 2014, stealing between 53 and 56 million credit card numbers. Warnings were
delivered at both corporations, but were ignored; physical security breaches using self-checkout
machines are believed to have played a large role. “The malware utilized is absolutely
unsophisticated and uninteresting,” says Jim Walter, director of threat intelligence operations at
security technology company McAfee - meaning that the heists could have easily been stopped
by existing antivirus software had administrators responded to the warnings. The size of the
thefts has resulted in major attention from state and federal United States authorities, and the
investigation is ongoing.
Legal issues and global regulation
Conflict of laws in cyberspace has become a major cause of concern for the computer security
community. Some of the main challenges and complaints about the antivirus industry are the
lack of global web regulations and of a common base of rules by which to judge, and eventually
punish, cyber crimes and cyber criminals. There is no global cyber law or cyber security treaty
that can be invoked for enforcing global cyber security issues.
International legal issues of cyber attacks are complicated in nature. For instance, even if an
antivirus firm locates the cyber criminal behind the creation of a particular virus, piece of
malware, or other form of cyber attack, the local authorities often cannot take action due to a
lack of laws under which to prosecute. This is mainly caused by the fact that
many countries have their own regulations regarding cyber crimes. Authorship attribution for
cyber crimes and cyber attacks has become a major problem for international law enforcement
agencies.[62]
"[Computer viruses] switch from one country to another, from one jurisdiction to another —
moving around the world, using the fact that we don't have the capability to globally police
operations like this. So the Internet is as if someone [had] given free plane tickets to all the
online criminals of the world." (Mikko Hyppönen) The use of dynamic DNS, fast flux, and
bulletproof servers has added its own complexities to this situation.
Businesses are eager to expand to less developed countries due to the low cost of labor, says
White et al. (2012). However, these countries are the ones with the fewest Internet safety
measures in place, and their Internet Service Providers are not as focused on implementing such
safety measures. Instead, they are putting their main focus on expanding their business, which
exposes them to an increase in criminal activity.
In response to the growing problem of cyber crime, the European Commission established the
European Cybercrime Centre (EC3). The EC3 effectively opened on 1 January 2013 and will be
the focal point in the EU's fight against cyber crime, contributing to faster reaction to online
crimes. It will support member states and the EU's institutions in building an operational and
analytical capacity for investigations, as well as cooperation with international partners.
Internet security
Internet security is a branch of computer security specifically related to the Internet, often
involving browser security but also network security at a more general level, as it applies to other
applications or operating systems as a whole. Its objective is to establish rules and measures to
use against attacks over the Internet. The Internet represents an insecure channel for exchanging
information leading to a high risk of intrusion or fraud, such as phishing. Different methods have
been used to protect the transfer of data, including encryption.
Types of security
Network layer security
TCP/IP, the combination of the Transmission Control Protocol (TCP) and the Internet Protocol (IP),
also known as the Internet protocol suite, can be made secure with the help of cryptographic
methods and protocols. These protocols include Secure Sockets Layer (SSL), succeeded by
Transport Layer Security (TLS) for web traffic, Pretty Good Privacy (PGP) for email, and IPsec
for network layer security.
Internet Protocol Security (IPsec)
This protocol is designed to protect communication in a secure manner using TCP/IP (the
Internet protocol suite). It is a set of security extensions developed by the Internet Engineering
Task Force (IETF), and it provides security and authentication at the IP layer by transforming
data using encryption. Two main types of transformation form the basis of IPsec: the
Authentication Header (AH) and the Encapsulating Security Payload (ESP). These two
protocols provide data integrity, data origin authentication,
and anti-replay service. These protocols can be used alone or in combination to provide the
desired set of security services for the Internet Protocol (IP) layer.
The basic components of the IPsec security architecture are described in terms of the following
functionalities:
 Security protocols for AH and ESP
 Security association for policy management and traffic processing
 Manual and automatic key management for the Internet key exchange (IKE)
 Algorithms for authentication and encryption
The set of security services provided at the IP layer includes access control, data origin integrity,
protection against replays, and confidentiality. The algorithm allows these sets to work
independently without affecting other parts of the implementation. The IPsec implementation is
operated in a host or security gateway environment giving protection to IP traffic.
Security token
Some online sites offer customers the ability to use a six-digit code which randomly changes
every 30–60 seconds on a security token. The security token has built-in mathematical
computations and manipulates numbers based on the current time built into the device. This
means that every thirty seconds there is only a certain array of numbers possible which would be
correct to validate access to the online account. The website that the user is logging into would
be made aware of that device's serial number and would know the computation and correct time
built into the device to verify that the number given is indeed one of the handful of six-digit
numbers that works in that given 30–60 second cycle. After 30–60 seconds the device will
present a new random six-digit number which can be used to log into the website.
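The Python sketch below shows, in the spirit of the time-based one-time password scheme
standardized in RFC 6238, how such a six-digit code can be derived from a shared secret and the
current 30-second window; the secret is hypothetical, and a production system should rely on a
vetted library.

import hashlib, hmac, struct, time

def six_digit_code(secret: bytes, interval: int = 30) -> str:
    counter = int(time.time()) // interval            # current 30-second window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

print(six_digit_code(b"secret shared with the token at enrolment"))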
Electronic mail security (E-mail)
Background
Email messages are composed, delivered, and stored in a multiple step process, which starts with
the message's composition. When the user finishes composing the message and sends it, the
message is transformed into a standard format: an RFC 2822 formatted message. Afterwards, the
message can be transmitted. Using a network connection, the mail client, referred to as a mail
user agent (MUA), connects to a mail transfer agent (MTA) operating on the mail server. The
mail client then provides the sender’s identity to the server. Next, using the mail server
commands, the client sends the recipient list to the mail server. The client then supplies the
message. Once the mail server receives and processes the message, several events occur:
recipient server identification, connection establishment, and message transmission. Using
Domain Name System (DNS) services, the sender’s mail server determines the mail server(s) for
the recipient(s). Then, the server opens up a connection(s) to the recipient mail server(s) and
sends the message employing a process similar to that used by the originating client, delivering
the message to the recipient(s).
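The steps above map closely onto what a mail client does in code. The hedged Python sketch
below uses the standard library's smtplib and email modules; the server name, port, and
credentials are placeholders for the example.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()                         # compose an RFC 2822 formatted message
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Status report"
msg.set_content("The nightly backup completed successfully.")

# The mail client (MUA) connects to its mail transfer agent (MTA).
with smtplib.SMTP("mail.example.org", 587) as mta:
    mta.starttls()                           # protect the client-to-server hop
    mta.login("alice", "app-password")       # provide the sender's identity
    mta.send_message(msg)                    # recipients are taken from the headers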
Pretty Good Privacy (PGP)
Pretty Good Privacy provides confidentiality by encrypting messages to be transmitted or data
files to be stored using an encryption algorithm such as Triple DES or CAST-128. Email messages
can be protected by using cryptography in various ways, such as the following:
 Signing an email message to ensure its integrity and confirm the identity of its
sender.
 Encrypting the body of an email message to ensure its confidentiality.
 Encrypting the communications between mail servers to protect the
confidentiality of both message body and message header.
The first two methods, message signing and message body encryption, are often used together;
however, encrypting the transmissions between mail servers is typically used only when two
organizations want to protect emails regularly sent between each other. For example, the
organizations could establish a virtual private network (VPN) to encrypt the communications
between their mail servers over the Internet. Unlike methods that can only encrypt a message
body, a VPN can encrypt entire messages, including email header information such as senders,
recipients, and subjects. In some cases, organizations may need to protect header information.
However, a VPN solution alone cannot provide a message signing mechanism, nor can it provide
protection for email messages along the entire route from sender to recipient.
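As a rough illustration of message-body encryption (not PGP itself), the sketch below uses the
Fernet recipe from the third-party Python "cryptography" package, which combines symmetric
encryption with an integrity check; key distribution, the hard part that PGP solves with public
keys, is simply assumed here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in practice, exchanged or derived securely
cipher = Fernet(key)

body = b"Quarterly figures attached. Do not forward."
token = cipher.encrypt(body)             # confidentiality and integrity of the body
assert cipher.decrypt(token) == body     # only holders of the key can read it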
Multipurpose Internet Mail Extensions (MIME)
MIME transforms non-ASCII data at the sender's site to Network Virtual Terminal (NVT) ASCII
data and delivers it to the client's Simple Mail Transfer Protocol (SMTP) to be sent through the
Internet. The SMTP server at the receiver's side receives the NVT ASCII data and delivers it to
MIME to be transformed back to the original non-ASCII data.
Message Authentication Code
A message authentication code (MAC) is a cryptographic technique that uses a secret key to
compute a short tag over a message. The receiver, using the same secret key as the sender,
recomputes the MAC value and compares it with the one received. The message authentication
code protects both a message's data integrity as well as its authenticity.[6]
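A minimal example of this, using Python's standard hmac module (the shared secret and message
are hypothetical):

import hashlib, hmac

secret = b"key shared by sender and receiver"
message = b"Transfer 100 units to account 42"

tag = hmac.new(secret, message, hashlib.sha256).hexdigest()   # sender computes the MAC

def verify(msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)         # constant-time comparison

assert verify(message, tag)
assert not verify(b"Transfer 999 units to account 13", tag)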
Firewalls
A computer firewall controls access between networks. It generally consists of gateways and
filters which vary from one firewall to another. Firewalls also screen network traffic and are able
to block traffic that is dangerous. Firewalls act as the intermediate server between SMTP and
Hypertext Transfer Protocol (HTTP) connections.
Role of firewalls in web security
Firewalls impose restrictions on incoming and outgoing Network packets to and from private
networks. Incoming or outgoing traffic must pass through the firewall; only authorized traffic is
allowed to pass through it. Firewalls create checkpoints between an internal private network and
the public Internet, also known as choke points (borrowed from the identical military term for a
combat-limiting geographical feature). Firewalls can create choke points based on IP source and
TCP port number. They can also serve as the platform for IPsec. Using tunnel mode capability,
firewalls can be used to implement VPNs. Firewalls can also limit network exposure by hiding the
internal network system and information from the public Internet.
Types of firewall
Packet filter
A packet filter is a first generation firewall that processes network traffic on a packet-by-packet
basis. Its main job is to filter traffic from a remote IP host, so a router is needed to connect the
internal network to the Internet. The router is known as a screening router, which screens packets
leaving and entering the network.
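A toy packet-filter rule table, sketched in Python under assumed networks and ports, shows the
screening idea: each packet's source address and destination port are matched against ordered
rules, with a default deny.

import ipaddress

RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 22, "allow"),    # SSH from the internal network
    (ipaddress.ip_network("0.0.0.0/0"), 80, "allow"),     # web traffic from anywhere
    (ipaddress.ip_network("0.0.0.0/0"), None, "deny"),    # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for network, port, verdict in RULES:
        if addr in network and (port is None or port == dst_port):
            return verdict
    return "deny"

assert filter_packet("10.1.2.3", 22) == "allow"
assert filter_packet("203.0.113.7", 22) == "deny"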
Stateful packet inspection
Main article: circuit-level gateway
In a stateful firewall the circuit-level gateway is a proxy server that operates at the network level
of the Open Systems Interconnection (OSI) model and statically defines what traffic will be
allowed. Circuit proxies forward network packets (formatted units of data) containing a
given port number, if the port is permitted by the algorithm. The main advantage of a proxy
server is its ability to provide Network Address Translation (NAT), which can hide the user's IP
address from the Internet, effectively protecting all internal information from the Internet.
Application-level gateway
An application-level firewall is a third generation firewall where a proxy server operates at the
very top of the OSI model, the IP suite application level. A network packet is forwarded only if a
connection is established using a known protocol. Application-level gateways are notable for
analyzing entire messages rather than individual packets of data when the data are being sent or
received.
Application security
Application security (short: AppSec) encompasses measures taken throughout the code's lifecycle to prevent gaps in the security policy of an application or the underlying system
(vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance
of the application.
Applications control only the use of the resources granted to them, and not which resources are
granted to them. They, in turn, determine the use of these resources by users of the application
through application security.
The Open Web Application Security Project (OWASP) and the Web Application Security
Consortium (WASC) publish updates on the latest threats which impair web-based applications.
This aids developers, security testers and architects in focusing on better design and mitigation
strategies. The OWASP Top 10 has become an industry norm for assessing web applications.
Mobile application security
OWASP, a leading application security industry authority, has acknowledged and prioritized the
need for mobile application security, and recommended binary protection to mitigate the
business and technical risks that mobile apps face. See Mobile Security Project - Top Ten Mobile
Risks for Top Ten Mobile Risks based on new vulnerability statistics in the field of mobile
applications.
The proportion of mobile devices providing open platform functionality is expected to continue
to increase in future. The openness of these platforms offers significant opportunities to all parts
of the mobile eco-system by delivering the ability for flexible program and service delivery
options that may be installed, removed or refreshed multiple times in line with the user’s needs
and requirements. However, with openness comes responsibility: unrestricted access to mobile
resources and APIs by applications of unknown or untrusted origin could result in damage to the
user, the device, the network or all of these, if not managed by suitable security
architectures and network precautions. Application security is provided in some form on most
open OS mobile devices (Symbian OS,[2] Microsoft, [3] BREW, etc.). Industry groups have also
created recommendations including the GSM Association and Open Mobile Terminal Platform
(OMTP).[4]
There are several strategies to enhance mobile application security, including:
 Application whitelisting
 Ensuring transport layer security
 Strong authentication and authorization
 Encryption of data when written to memory
 Sandboxing of applications
 Granting application access on a per-API level
 Processes tied to a user ID
 Predefined interactions between the mobile application and the OS
 Requiring user input for privileged/elevated access
 Proper session handling
Security testing for applications
Security testing techniques scour for vulnerabilities or security holes in applications. These
vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented
throughout the entire software development life cycle (SDLC) so that vulnerabilities may be
addressed in a timely and thorough manner. Unfortunately, testing is often conducted as an
afterthought at the end of the development cycle.
Vulnerability scanners, and more specifically web application scanners, otherwise known as
penetration testing tools (i.e. ethical hacking tools) have been historically used by security
organizations within corporations and security consultants to automate the security testing of HTTP
requests/responses; however, this is not a substitute for the need for actual source code review.
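In the spirit of such a scanner, though far simpler, the Python sketch below fetches a URL with
the standard library and reports a few commonly recommended response headers that are
missing; the header list and target URL are illustrative, and this is no substitute for real testing or
code review.

from urllib.request import urlopen

EXPECTED = ["Strict-Transport-Security", "X-Content-Type-Options", "Content-Security-Policy"]

def missing_security_headers(url: str) -> list:
    with urlopen(url) as response:                     # fetch the page
        present = {name.lower() for name in response.headers.keys()}
    return [h for h in EXPECTED if h.lower() not in present]

print(missing_security_headers("https://example.com"))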
Physical code reviews of an application's source code can be accomplished manually or in an
automated fashion. Given the common size of individual programs (often 500,000 lines of code
or more), the human brain can not execute a comprehensive data flow analysis needed in order to
completely check all circuitous paths of an application program to find vulnerability points. The
human brain is suited more for filtering, interrupting and reporting the outputs of automated
source code analysis tools available commercially versus trying to trace every possible path
through a compiled code base to find the root cause level vulnerabilities.
Security certifications
There are a number of certifications available for security professionals to demonstrate their
knowledge in the subject matter (e.g. Certified Information Systems Security Professional,
Certified Information Security Manager, etc.); however, the usefulness of security certifications,
and of certifications in general, typically receives mixed reviews from experienced professionals.
Data security
Data security means protecting data, such as a database, from destructive forces and from the
unwanted actions of unauthorized users.
Data security technologies
Disk encryption
Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk
encryption typically takes form in either software (see disk encryption software) or hardware
(see disk encryption hardware). Disk encryption is often referred to as on-the-fly encryption
(OTFE) or transparent encryption.
Software versus hardware-based mechanisms for protecting data
Software-based security solutions encrypt the data to protect it from theft. However, a malicious
program or a hacker could corrupt the data in order to make it unrecoverable, making the system
unusable. Hardware-based security solutions can prevent read and write access to data and hence
offer very strong protection against tampering and unauthorized access.
Hardware based security or assisted computer security offers an alternative to software-only
computer security. Security tokens such as those using PKCS#11 may be more secure due to the
physical access required in order to be compromised. Access is enabled only when the token is
connected and the correct PIN is entered (see two-factor authentication). However, dongles can be
used by anyone who can gain physical access to them. Newer technologies in hardware-based
security aim to solve this problem by offering stronger protection for data.
Working of hardware-based security: A hardware device allows a user to log in, log out and set
different privilege levels by doing manual actions. The device uses biometric technology to
prevent malicious users from logging in, logging out, and changing privilege levels. The current
state of a user of the device is read by controllers in peripheral devices such as hard disks. Illegal
access by a malicious user or a malicious program is interrupted by hard disk and DVD controllers
based on the current state of the user, making illegal access to data impossible. Hardware-based
access control is more secure than protection provided by the operating systems as operating
systems are vulnerable to malicious attacks by viruses and hackers. The data on hard disks can
be corrupted after a malicious access is obtained. With hardware-based protection, software
cannot manipulate the user privilege levels. It is impossible for a hacker or a malicious program
to gain access to secure data protected by hardware or perform unauthorized privileged
operations. This assumption is broken only if the hardware itself is malicious or contains a
backdoor.[2] The hardware protects the operating system image and file system privileges from
being tampered. Therefore, a completely secure system can be created using a combination of
hardware-based security and secure system administration policies.
Backups
Backups are used to ensure data which is lost can be recovered from another source. It is
considered essential to keep a backup of any data in most industries and the process is
recommended for any files of importance to a user.
Data masking
Data Masking of structured data is the process of obscuring (masking) specific data within a
database table or cell to ensure that data security is maintained and sensitive information is not
exposed to unauthorized personnel. This may include masking the data from users (for example
so banking customer representatives can only see the last 4 digits of a customer's national identity
number), developers (who need real production data to test new software releases but should not
be able to see sensitive financial data), outsourcing vendors, etc.
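A hedged sketch of the last-four-digits masking mentioned above, in Python (the field format is
hypothetical):

def mask_identity_number(value: str, visible: int = 4) -> str:
    # Keep only the trailing digits visible; replace the rest before the data
    # leaves the production environment.
    digits = [c for c in value if c.isdigit()]
    return "*" * (len(digits) - visible) + "".join(digits[-visible:])

assert mask_identity_number("1234-5678-9012") == "********9012"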
Data erasure
Data erasure is a method of software-based overwriting that completely destroys all electronic
data residing on a hard drive or other digital media to ensure that no sensitive data is leaked
when an asset is retired or reused.
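A simplified single-file overwrite in Python illustrates the software-based approach; note that on
SSDs and journaling file systems overwriting gives no guarantee, so certified erasure tools or
full-disk encryption are generally preferred.

import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))    # overwrite the contents with random bytes
            f.flush()
            os.fsync(f.fileno())         # push the write to the device
    os.remove(path)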
INFORMATION SECURITY
Information security, sometimes shortened to InfoSec, is the practice of defending information
from unauthorized access, use, disclosure, disruption, modification, perusal, inspection,
recording or destruction. It is a general term that can be used regardless of the form the data may
take (e.g. electronic, physical).
IT security
Sometimes referred to as computer security, Information Technology security is information
security applied to technology (most often some form of computer system). It is worthwhile to
note that a computer does not necessarily mean a home desktop. A computer is any device with a
processor and some memory. Such devices can range from non-networked standalone devices as
simple as calculators, to networked mobile computing devices such as smartphones and tablet
computers. IT security specialists are almost always found in any major enterprise/establishment
due to the nature and value of the data within larger businesses. They are responsible for keeping
all of the technology within the company secure from malicious cyber attacks that often attempt
to breach critical private information or gain control of the internal systems.
Information assurance
Information assurance is the act of ensuring that data is not lost when critical issues arise. These issues include but are
not limited to: natural disasters, computer/server malfunction, physical theft, or any other
instance where data has the potential of being lost. Since most information is stored on
computers in our modern era, information assurance is typically dealt with by IT security
specialists. One of the most common methods of providing information assurance is to have an
off-site backup of the data in case one of the mentioned issues arise.
Threats
Computer system threats come in many different forms. Some of the most common threats today
are software attacks, theft of intellectual property, identity theft, theft of equipment or
information, sabotage, and information extortion. Most people have experienced software attacks
of some sort. Viruses, worms, phishing attacks, and trojan horses are a few common examples of
software attacks. The theft of intellectual property has also been an extensive issue for many
businesses in the IT field. Intellectual property is the ownership of property that usually carries
some form of legal protection.
Network security
Network security consists of the provisions and policies adopted by a network administrator to
prevent and monitor unauthorized access, misuse, modification, or denial of a computer network
and network-accessible resources. Network security involves the authorization of access to data
in a network, which is controlled by the network administrator. Users choose or are assigned an
ID and password or other authenticating information that allows them access to information and
programs within their authority. Network security covers a variety of computer networks, both
public and private, that are used in everyday jobs conducting transactions and communications
among businesses, government agencies and individuals. Networks can be private, such as
within a company, and others which might be open to public access. Network security is
involved in organizations, enterprises, and other types of institutions. It does what its title explains:
it secures the network, as well as protecting and overseeing the operations being carried out. The most
common and simple way of protecting a network resource is by assigning it a unique name and a
corresponding password.
Network security concepts
Network security starts with authenticating, commonly with a username and a password. Since
this requires just one detail authenticating the user name (i.e., the password), this is sometimes
termed one-factor authentication. With two-factor authentication, something the user 'has' is also
used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor
authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan).
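For the one-factor (password) case, the hedged Python sketch below shows the usual server-side
pattern: the password is never stored directly, but salted and hashed with a slow key-derivation
function and later compared in constant time; the iteration count and example password are
arbitrary choices.

import hashlib, hmac, os

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("guess", salt, digest)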
QUALITY
What is Quality?
The term "quality" has a relative meaning. This is expressed by the ISO definition: "The totality
of features and characteristics of a product or service that bear on its ability to satisfy stated or
implied needs". In simpler words, one can say that a product has good quality when it "complies
with the requirements specified by the client". When projected on analytical work, quality can be
defined as "delivery of reliable information within an agreed span of time under agreed
conditions, at agreed costs, and with necessary aftercare". The "agreed conditions" should
include a specification as to the precision and accuracy of the data which is directly related to
"fitness of use" and which may differ for different applications. Yet, in many cases the reliability
of data is not questioned and the request for specifications omitted. Many laboratories work
according to established methods and procedures which are not readily changed and have
inherent default specifications. Moreover, not all future uses of the data and reports can be
foreseen so that specifications about required precision and accuracy cannot even be given.
Consequently, this aspect of quality is usually left to the discretion of the laboratory. However,
all too often the embarrassing situation exists that a laboratory cannot evaluate and account for
its quality simply because the necessary documentation is lacking.
In the ensuing discussions numerous activities aimed at maintaining the production of quality are
dealt with. In principle, three levels of organization of these activities can be distinguished. From
the top down these levels are:
1. Quality Management (QM)
2. Quality Assurance (QA)
3. Quality Control (QC)
Quality Management
Quality Management is the assembly and management of all activities aimed at the production of
quality by organizations of various kinds. In the present case this implies the introduction and
proper running of a "Quality System" in laboratories. A statement of objectives and policy to
produce quality should be made for the organization or department concerned (by the institute's
directorate). This statement also identifies the internal organization and responsibilities for the
effective operation of the Quality System.
Quality Management can be considered a somewhat wider interpretation of the concept of "Good
Laboratory Practice" (GLP). Therefore, inevitably the basics of the present Guidelines largely
coincide with those of GLP. These are discussed below in Section 1.5.
Note. An even wider concept of quality management is presently coming into vogue: "Total
Quality Management" (TQM). This concept includes additional aspects such as leadership style,
ethics of the work, social aspects, relation to society, etc. For an introduction to TQM the reader
is referred to Parkany (1995).
Quality Assurance
Proper Quality Management implies consistent implementation of the next level: Quality
Assurance. The ISO definition reads: "the assembly of all planned and systematic actions
necessary to provide adequate confidence that a product, process, or service will satisfy given
quality requirements." The result of these actions aimed at the production of quality, should
ideally be checked by someone independent of the work: the Quality Assurance Officer. If no
QA officer is available, then usually the Head of Laboratory performs this job as part of his
quality management task. In case of special projects, customers may require special quality
assurance measures or a Quality Plan.
Quality Control
A major part of the quality assurance is the Quality Control defined by ISO as "the operational
techniques and activities that are used to satisfy quality requirements." An important part of
quality control is Quality Assessment: the system of activities used to verify whether the quality
control activities are effective, in other words, an evaluation of the products themselves.
Quality control is primarily aimed at the prevention of errors. Yet, despite all efforts, it remains
inevitable that errors are made. Therefore, the control system should have checks to detect
them. When errors or mistakes are suspected or discovered, it is essential that the "Five Ws" are
traced:
- what error was made?
- where was it made?
- when was it made?
- who made it?
- why was it made?
Only when all these questions are answered can proper action be taken to correct the error and
prevent the same mistake from being repeated.
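One simple way to make sure the Five Ws are actually captured is to record each error as a structured entry in an error log. The following Python sketch is a hypothetical illustration only; the field names and the example values are invented, not a prescribed format.

from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ErrorRecord:
    """One entry in a hypothetical laboratory error log, answering the Five Ws."""
    what: str                    # what error was made?
    where: str                   # where was it made?
    when: datetime               # when was it made?
    who: str                     # who made it?
    why: str                     # why was it made?
    corrective_action: str = ""  # completed once all five questions have been answered

error_log: List[ErrorRecord] = []
error_log.append(ErrorRecord(
    what="Expired calibration standard used",
    where="Spectrophotometer, room 12",
    when=datetime(2024, 3, 5, 10, 30),
    who="Analyst A",
    why="Stock register was not checked before starting the batch",
    corrective_action="Fresh standard prepared; register check added to the batch protocol",
))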
The techniques and activities involved in Quality Control can be divided into four levels of
operation:
1. First-line control: Instrument performance check.
2. Second-line control: Check of calibration or standardization.
3. Third-line control: Batch control (control sample, identity check).
4. Fourth-line control: Overall check (external checks: reference samples, interlaboratory
exchange programmes).
Because the first two control levels both apply to the correct functioning of the instruments they
are often taken together and then only three levels are distinguished. This designation is used
throughout the present Guidelines:
1. First-line control: Instrument check / calibration.
2. Second-line control: Batch control (a minimal sketch of such a check follows this list).
3. Third-line control: External check.
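As an illustration of second-line (batch) control, the sketch below checks a control-sample result against Shewhart-style warning (mean ± 2s) and action (mean ± 3s) limits derived from earlier results. It is a minimal, assumed example: the figures are invented, and the limits in a real laboratory would come from its own control charts.

import statistics
from typing import List

def batch_control_check(result: float, history: List[float]) -> str:
    """Classify a control-sample result against Shewhart-style limits
    (mean +/- 2s = warning, mean +/- 3s = action) built from earlier results."""
    mean = statistics.mean(history)
    s = statistics.stdev(history)
    deviation = abs(result - mean)
    if deviation > 3 * s:
        return "ACTION: reject the batch and investigate"
    if deviation > 2 * s:
        return "WARNING: accept the batch, but watch the next runs"
    return "IN CONTROL: accept the batch"

# Hypothetical control-sample history (e.g. % organic carbon of a reference soil).
previous_results = [1.52, 1.48, 1.50, 1.47, 1.53, 1.49, 1.51, 1.50]
print(batch_control_check(1.50, previous_results))   # IN CONTROL: accept the batch
print(batch_control_check(1.62, previous_results))   # ACTION: 0.12 is well beyond the 3s limit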
It will be clear that producing quality in the laboratory is a major enterprise requiring a
continuous human effort and input of money. The rule of thumb is that 10-20% of the total costs of
analysis should be spent on quality control. Therefore, for quality work at least four conditions
should be fulfilled:
- means are available (adequate personnel and facilities)
- efficient use of time and means (costs aspect)
- expertise is available (answering questions; aftercare)
- upholding and improving level of output (continuity)
In quality work, management aspects and technical aspects are inherently intertwined, and
for a clear insight and the proper functioning of the laboratory these aspects have to be broken down
into their components. This is done in the ensuing chapters of this manual.
Good Laboratory Practice (GLP)
Quality Management in the present context can be considered a modern version of the hitherto
much used concept "Good Laboratory Practice" (GLP) with a somewhat wider interpretation.
The OECD Document defines GLP as follows: "Good Laboratory Practice (GLP) is concerned
with the organizational process and the conditions under which laboratory studies are planned,
performed, monitored, recorded, and reported."
Thus, GLP requires a laboratory to work according to a system of procedures and protocols.
This implies that the organization of the activities and the conditions under which they take place are
controlled, reported and filed. GLP is a policy for all aspects of the laboratory which influence
the quality of the analytical work. When properly applied, GLP should then:
- allow better laboratory management (including quality management)
- improve efficiency (thus reducing costs)
- minimize errors
- allow quality control (including tracking of errors and their cause)
- stimulate and motivate all personnel
- improve safety
- improve communication possibilities, both internally and externally.
The result of GLP is that the performance of a laboratory is improved and its work is effectively
controlled. An important aspect is also that the standards of quality are documented and can be
demonstrated to authorities and clients. This results in an improved reputation for the laboratory
(and for the institute as a whole). In short, the message is:
- say what you do
- do what you say
- do it better
- be able to show what you have done
The basic rule is that all relevant plans, activities, conditions and situations are recorded and that
these records are safely filed and can be produced or retrieved when necessary. These aspects
differ strongly in character and need to be attended to individually.
As an assembly, the documents involved constitute a so-called Quality Manual. This then
comprises all relevant information on:
- Organization and Personnel
- Facilities
- Equipment and Working materials
- Analytical or testing systems
- Quality control
- Reporting and filing of results.
Since institutions having a laboratory are of divergent natures, there is no standard format and
each has to make its own Quality Manual. The present Guidelines contain examples of forms,
protocols, procedures and artificial situations. They will at least need to be adapted, and many new
ones will have to be drawn up according to specific needs, but all have to fulfil the basic
requirements of usefulness and verifiability.
As already indicated, the guidelines for Quality Management given here are mainly based on the
principles of Good Laboratory Practice as they are laid down in various relevant documents such
as ISO and ISO/IEC guides, ISO 9000 series, OECD and CEN (EN 45000 series) documents,
national standards (e.g. NEN standards)*, as well as a number of text books. The consulted
documents are listed in the Literature. Use is also made of documents developed by institutes
which have obtained accreditation or are working towards this. This concerns mainly so-called
Standard Operating Procedures (SOPs) and Protocols. Sometimes these documents are hard to
acquire as they are classified information for reasons of competitiveness. The institutes and
persons which cooperated in the development of these Guidelines are listed in the
Acknowledgements.
* ISO: International Organization for Standardization; IEC: International Electrotechnical
Commission; OECD: Organization for Economic Cooperation and Development; CEN: European
Committee for Standardization; EN: European Standard; NEN: Dutch Standard.
ETHICAL ISSUES
List of Ethical Issues in Business:
Fundamental Issues
The most fundamental or essential ethical issues that businesses must face are integrity and trust.
A basic understanding of integrity includes the idea of conducting your business affairs with
honesty and a commitment to treating every customer fairly. When customers perceive that a
company is exhibiting an unwavering commitment to ethical business practices, a high level of
trust can develop between the business and the people it seeks to serve. A relationship of trust
between you and your customers may be a key determinant of your company's success.
Diversity Issues
According to the HSBC Group, "the world is a rich and diverse place full of interesting cultures
and people, who should be treated with respect and from whom there is a great deal to learn." An
ethical response to diversity begins with recruiting a diverse workforce, enforces equal
opportunity in all training programs and is fulfilled when every employee is able to enjoy a
respectful workplace environment that values their contributions. Maximizing the value of each
employee's contribution is a key element in your business's success.
Decision-Making Issues
According to Santa Clara University, the following framework for ethical decision-making is a
useful method for exploring ethical dilemmas and identifying ethical courses of action:
"recognizes an ethical issue, gets the facts, evaluates alternative actions, makes a decision and
tests it and reflects on the outcome." Ethical decision-making processes should center on
protecting employee and customer rights, making sure all business operations are fair and just,
protecting the common good and making sure individual values and beliefs of workers are
protected.
Compliance and Governance Issues
Businesses are expected to fully comply with environmental laws, federal and state safety
regulations, fiscal and monetary reporting statutes and all applicable civil rights laws. The
Aluminium Company of America's approach to compliance issues states, "no one may ask any
employee to break the law, or go against company values, policies and procedures." ALCOA's
commitment to compliance is underpinned by the company's approach to corporate governance;
"we expect all directors, officers and other Alcoans to conduct business in compliance with our
Business Conduct Policies."
Examples of Ethical Issues in Business
Discrimination
You're the boss in a predominantly male environment. The presence of a new female employee
stirs up conflict because your company has not had a chance to conduct sensitivity training.
Some of your male employees make inappropriate remarks to your new employee. She
complains to you; in response, you sanction those responsible for the conduct. You also wonder
if it would be wise to move your new female employee to another position where she would be
less likely to draw attention. Treating your female employee differently based on her gender or in
response to a harassment complaint may be considered discriminatory and unethical conduct.
Side Deals
You're a business manager with an employment contract. The contract requires you to work
solely for your employer and use your talents to attract new clients to the business. If you begin
attracting more clients than you believe your employer can reasonably handle, you may wonder
if there would be an ethical issue with your diverting that excess business elsewhere and taking
the commission. If you don't, at minimum, disclose the idea to your employer, you will likely be
in breach of both your contractual and ethical duties.
Partners
You're a partner in a business and see a great deal of profitability on the horizon. You don't
believe that your partner deserves to profit from the business' future success, because you don't
like his personality. You may wonder if you could simply take his name off the bank accounts,
change the locks and continue without him. If you proceed with this course of action, you would
likely be in violation of your ethical and legal obligation to act in good faith concerning your
partner. The better course of action may be to simply buy out his interest in the business.
Gross Negligence
You're on the board of directors for a publicly traded corporation. You and your fellow board
members, in hopes of heading off early for the holidays, rush through the investigatory process
involved in a much-anticipated merger. As a board member, you have a duty to exercise the
utmost care respecting decisions that affect the corporation and its shareholders. Failing to
properly investigate a matter that affects their interests could be viewed as gross negligence
supporting a breach of your ethical and legal duty of care.
Short Answers:
1. Define MIS
2. Define Decision Support System.
3. What is meant by transaction processing?
4. What are the three management support levels in MIS?
5. Define Expert System. Give any two application tools.
Long Answers:
1. Briefly explain IT strategy statements.
2. Explain the concept of independent operations.
3. What is estimating returns?