1. Why is a computer known as a data processor?
Data processing is the series of steps by which data is converted into useful information that can be used by an individual or by any number of users. A computer is an electronic device that is capable of making computations and logic decisions at high speed. It accepts data, stores data, processes data according to a set of instructions, and retrieves the data when required. Hence it is known as a data processor.
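The cycle described above can be sketched in a few lines of Python (the function and key names here are illustrative choices, not a standard API):

# A minimal sketch of the data-processing cycle: accept data, store it,
# process it according to instructions, and retrieve the result when required.
def accept_data():
    return [72, 65, 90, 58]                        # accept: raw examination marks

def process(data):
    return sum(data) / len(data)                   # instructions: compute the average

storage = {}                                       # store: holds data and results

storage["data"] = accept_data()                    # accept and store the data
storage["information"] = process(storage["data"])  # turn data into information
print(storage["information"])                      # retrieve when required -> 71.25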
2. Explain in brief the various generations in computer technology.
A generation refers to the state of improvement in the product development process. The term is also used for the successive advancements in computer technology. With each new generation, the circuitry has become smaller and more advanced than in the previous generation. As a result of this miniaturization, the speed, power, and memory of computers have proportionally increased.
(i) First Generation (1946-1954): In 1946 there was no 'best' way of storing instructions and data in a computer memory. There were four competing technologies for providing computer memory: electrostatic storage tubes, acoustic delay lines (mercury or nickel), magnetic drums and disks, and magnetic core storage. Digital computers using electronic valves (vacuum tubes) are known as first generation computers. The high cost of vacuum tubes prevented their use for main memory; acoustic delay lines, which store information in the form of propagating sound waves, were used instead. The vacuum tube, developed by Lee De Forest in 1908, consumed a lot of power. These computers were large in size, and writing programs on them was difficult. Some of the computers of this generation were ENIAC (Electronic Numerical Integrator and Calculator), EDVAC (Electronic Discrete Variable Automatic Computer), and EDSAC (Electronic Delay Storage Automatic Computer).
(ii) Second Generation (1955-1964): Around 1955 a device called the transistor replaced the bulky vacuum tubes of the first generation. Transistors are smaller than vacuum tubes and have a higher operating speed. They have no filament and require no heating, and their manufacturing cost was also very low. Thus the size of the computer was reduced considerably. It was in the second generation that the concepts of the Central Processing Unit (CPU), memory, programming languages, and input and output units were developed. Programming languages such as COBOL and FORTRAN were developed during this period. Some of the computers of the second generation were:
1. IBM 1620: Its size was smaller as compared to first generation computers and it was mostly used for scientific purposes.
2. IBM 1401: Its size was small to medium and it was used for business applications.
3. CDC 3600: Its size was large and it was used for scientific purposes.
(iii) Third Generation (1964-1977): This generation began with the development of a small chip with a capacity of about 300 transistors. These Integrated Circuits (ICs) are popularly known as chips. A single IC has many transistors, resistors, and capacitors built on a single thin slice of silicon, so the size of the computer was further reduced. Some of the computers developed during this period were the IBM-360, ICL-1900, IBM-370, and VAX-750. Higher level languages such as BASIC (Beginner's All-purpose Symbolic Instruction Code) were developed during this period. Computers of this generation were small in size and low in cost, with large memory and very high processing speed. Very soon ICs were replaced by LSI (Large Scale Integration) chips; an IC containing about 100 components is called an LSI chip.
(iv) Fourth Generation: An IC containing about 100 components is called LSI (Large Scale Integration), and one with more than 1000 such components is called VLSI (Very Large Scale Integration). Fourth generation computers use Large Scale Integrated Circuits (LSICs) built on a single silicon chip, called microprocessors. The development of the microprocessor made it possible to place a computer's entire central processing unit (CPU) on a single chip. These computers are called microcomputers. Later, Very Large Scale Integrated Circuits (VLSICs) replaced LSICs. Thus the computer which occupied a very large room in earlier days can now be placed on a table. The personal computer (PC) that you see in your school is a fourth generation computer. Main memory used fast semiconductor chips of up to 4 Mbit capacity. Hard disks were used as secondary memory. Keyboards, dot matrix printers, etc. were developed. Operating systems such as MS-DOS, UNIX, and Apple's Macintosh became available. Object-oriented languages such as C++ were developed.
(v) Fifth Generation (1991 onwards): Fifth generation computers use ULSI (Ultra-Large Scale Integration) chips, in which millions of transistors are placed on a single IC. 64-bit microprocessors have been developed during this period, along with dataflow and EPIC processor architectures. Both RISC and CISC designs are used in modern processors. Memory chips and flash memory up to 1 GB, hard disks up to 600 GB, and optical disks up to 50 GB have been developed. Fifth generation digital computers aim at artificial intelligence.
3. Write a short note on the Fifth Generation of computers. What makes it different from Fourth Generation computers?
Fifth Generation - Present and Beyond:
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Dartmouth Conference. Artificial intelligence includes:
Games Playing: programming computers to play games such as chess and checkers.
Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).
Natural Language: programming computers to understand natural human languages.
Neural Networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
Robotics: programming computers to see and hear and react to other sensory stimuli.
Fifth generation computers use ULSI (Ultra-Large Scale Integration) chips, in which millions of transistors are placed on a single IC. 64-bit microprocessors have been developed during this period, along with dataflow and EPIC processor architectures. Both RISC and CISC designs are used in modern processors. Memory chips and flash memory up to 1 GB, hard disks up to 600 GB, and optical disks up to 50 GB have been developed.
Fourth Generation vs. Fifth Generation
Fifth generation computers use ULSI (Ultra-Large Scale Integration) chips, while fourth generation computers use Large Scale Integrated Circuits (LSICs) built on a single silicon chip, called microprocessors.
Fifth generation computers use memory chips of up to 1 GB, while fourth generation main memory used fast semiconductor chips of up to 4 Mbit capacity.
Fourth generation programming languages are designed for a specific application domain, while fifth generation programming languages are designed to allow computers to solve problems by themselves. Fourth generation programming languages (4GLs) are languages developed with a specific goal in mind, such as developing commercial business applications. 4GLs followed 3GLs (third generation programming languages, which were the first high-level languages); they are closer to human-readable form and are more abstract. Fifth generation programming languages (which followed 4GLs) are programming languages that allow programmers to solve problems by defining certain constraints, as opposed to writing a specific algorithm.
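To make the contrast concrete, here is a minimal sketch in Python (illustrative only; real fifth generation languages such as Prolog use dedicated inference or constraint engines, and the names square_root, solve, and the search domain below are our own choices). The first function encodes a specific algorithm; the second merely declares constraints and lets a generic search find a value that satisfies them:

import math

# 3GL/4GL style: the programmer writes the specific algorithm.
def square_root(n):
    return math.isqrt(n)

# 5GL style (illustrative): the programmer states constraints; a generic
# solver searches the domain for a value that satisfies all of them.
def solve(constraints, domain):
    return next(x for x in domain if all(c(x) for c in constraints))

print(square_root(2025))                   # 45, computed by an explicit algorithm
print(solve([lambda x: x * x == 2025,      # 45, found purely from the
             lambda x: x > 0],             # declared constraints
            range(1, 10000)))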
4. Why did the size of computers get reduced in the third generation?
In 1958, Jack Kilby, an engineer with Texas Instruments, developed the Integrated Circuit (IC). The Integrated Circuit combined three electronic components onto a small silicon disc made from quartz rock. Scientists later managed to fit more and more components onto a single chip, called a semiconductor. As more components were squeezed onto the chip, computers became ever smaller. Another third generation development was the use of an operating system (OS) that allowed computers to run multiple programs at once, with a central program that monitored and coordinated the computer's memory.
5. Give short notes on the following:
(a) Versatility
Versatility means that a computer is capable of performing almost any task, provided the task can be reduced to a series of logical steps.
(b) Storage
A storage device is a hardware device capable of holding information. There are two kinds of storage devices used in computers: a primary storage device, such as computer RAM, and a secondary storage device, such as a computer hard drive. Secondary storage can be removable, internal, or external; a Drobo is one example of an external secondary storage device.
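The distinction is easy to see in code. In this minimal Python sketch (the file name is our own illustrative choice), a variable lives in primary storage (RAM) and vanishes when the program exits, while a file lives on a secondary storage device and persists:

data = "useful information"              # primary storage: volatile RAM

with open("records.txt", "w") as f:      # secondary storage: persistent,
    f.write(data)                        # survives power-off and restarts

with open("records.txt") as f:           # retrieve later, even after the
    print(f.read())                      # in-memory variable is long gone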
(c) Slide Rule
The slide rule is a mechanical analog computer. It is used primarily for multiplication and
division, and also for functions such as roots, logarithms and trigonometry, but is not normally used for
addition or subtraction. Slide rules come in a diverse range of styles and generally appear in a linear or
circular form with a standardized set of markings (scales) essential to performing mathematical
computations. Slide rules manufactured for specialized fields such as aviation or finance typically feature
additional scales that aid in calculations common to that field. The user determines the location of the
decimal point in the result, based on mental estimation. Scientific notation is used to track the decimal
point in more formal calculations. Addition and subtraction steps in a calculation are generally done
mentally or on paper, not on the slide rule. Most slide rules consist of three linear strips of the same
length, aligned in parallel and interlocked so that the central strip can be moved lengthwise relative to
the other two. The outer two strips are fixed so that their relative positions do not change. Some slide
rules ("duplex" models) have scales on both sides of the rule and slide strip, others on one side of the
outer strips and both sides of the slide strip (which can usually be pulled out, flipped over and reinserted
for convenience), still others on one side only ("simplex" rules). A sliding cursor with a vertical alignment
line is used to find corresponding points on scales that are not adjacent to each other or, in duplex
models, are on the other side of the rule. The cursor can also record an intermediate result on any of the
scales. Circular slide rules come in two basic types: one with two cursors, and another with a free dish and one cursor.
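The principle at work can be shown in a few lines of Python (a sketch of the idea, not a model of any particular rule; the function name is our own). A slide rule multiplies by adding lengths proportional to logarithms, since log10(a) + log10(b) = log10(a * b):

import math

def slide_rule_multiply(a, b):
    length = math.log10(a) + math.log10(b)   # slide the scales: add two lengths
    return 10 ** length                      # read the product off the result scale

print(slide_rule_multiply(2.0, 3.0))         # ~6.0; on a real rule the user
                                             # places the decimal point mentally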
(d) Babbage's Analytical Engine
With the construction project stalled, and freed from the nuts and bolts of detailed construction, Babbage conceived, in 1834, a more ambitious machine, later called the Analytical Engine, a general-purpose programmable computing engine. The Analytical Engine had many essential features found in the modern digital computer. It was programmable using punched cards, an idea borrowed from the Jacquard loom used for weaving complex patterns in textiles. The Engine had a 'Store' where numbers and intermediate results could be held, and a separate 'Mill' where the arithmetic processing was performed. It had an internal repertoire of the four arithmetical functions and could perform direct multiplication and division. It was also capable of functions for which we have modern names: conditional branching, looping (iteration), microprogramming, parallel processing, latching, polling, and pulse-shaping, amongst others, though Babbage nowhere used these terms. It had a variety
of outputs including hardcopy printout, punched cards, graph plotting and the automatic production of
stereotypes - trays of soft material into which results were impressed that could be used as molds for
making printing plates. The logical structure of the Analytical Engine was essentially the same as that
which has dominated computer design in the electronic era - the separation of the memory (the 'Store')
from the central processor (the 'Mill'), serial operation using a 'fetch-execute cycle', and facilities for
inputting and outputting data and instructions. Calling Babbage 'the first computer pioneer' is not a
casual tribute.
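The logical structure just described is easy to sketch in modern code. The following Python fragment (a hypothetical instruction format of our own devising, not Babbage's punched-card notation) shows a 'Store' of addressed variables, a 'Mill' with the four arithmetical functions, and a serial fetch-execute cycle:

# The Store: numbered variables holding data and intermediate results.
store = {"V0": 6, "V1": 7, "V2": 0}

# A 'program' of punched-card-like instructions: (operation, src1, src2, dest).
program = [
    ("MUL", "V0", "V1", "V2"),             # V2 <- V0 * V1
    ("PRINT", "V2", None, None),           # hardcopy output of a result
]

def mill(op, a, b):
    # The Mill: the internal repertoire of the four arithmetical functions.
    return {"ADD": a + b, "SUB": a - b, "MUL": a * b, "DIV": a // b}[op]

pc = 0
while pc < len(program):                   # serial fetch-execute cycle
    op, s1, s2, dest = program[pc]         # fetch the next instruction
    if op == "PRINT":
        print(store[s1])                   # execute: output (prints 42)
    else:
        store[dest] = mill(op, store[s1], store[s2])   # execute: arithmetic
    pc += 1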
6. Distinguish between a Microcomputer and a Mainframe computer.
A mainframe is designed to handle multiple processes (and users) simultaneously (true multitasking) at a fairly decent speed, while a microcomputer can handle one user at a time.
The word 'mainframe' has its origin in the early computers, which were big in size and required a large framework to house them, while microcomputers are small in size and utilize microprocessors.
Mainframes can be very expensive, depending on specifications, while microcomputers have minimal cost; the low cost and almost unlimited applications of the microcomputer have made it the darling of the computer industry.
Mainframe computers have large storage capacities of several million words, while microcomputers have low storage capacity and a slower operation rate.
A mainframe's secondary storage devices are directly accessible by the computer, while a microcomputer uses secondary storage devices such as a CD drive or hard disk.
Mainframe systems can have more than one CPU and can support a large number of terminals, while the CPU of a microcomputer is usually contained in one chip.