MWANGI ERIC MUTURI
B66/35505/2010
ASSIGNMENT 1
1. Why is a computer known as a data processor?
A computer is a machine that performs tasks such as mathematical calculations
or electronic communication. It is known as a data processor because it accepts
raw data as input, processes that data according to a set of instructions, and
produces meaningful information as output.
2. Explain in brief the various generations of computer technology.
The history of computer development is often referred to in reference to the
different generations of computing devices. Each of the five generations of
computers is characterized by a major technological development that
fundamentally changed the way computers operate, resulting in increasingly
smaller, cheaper, more powerful and more efficient and reliable devices.
The first computers used vacuum tubes for circuitry and magnetic drums for
memory, and were often enormous, taking up entire rooms. They were very
expensive to operate and in addition to using a great deal of electricity, generated
a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level
programming language understood by computers, to perform operations, and
they could only solve one problem at a time. Input was based on punched cards
and paper tape, and output was displayed on printouts.
The vacuum tube was an extremely important step in the advancement of
computers. Vacuum tubes grew out of Thomas Edison's work on the light bulb
and worked in a very similar way. Their purpose was to act as an amplifier
and a switch. Without any moving parts, vacuum
tubes could take very weak signals and make the signal stronger (amplify it).
Vacuum tubes could also stop and start the flow of electricity instantly (switch).
These two properties made the ENIAC computer possible.
The UNIVAC and ENIAC computers are examples of first-generation computing
devices. The UNIVAC was the first commercial computer delivered to a business
client, the U.S. Census Bureau in 1951.
In the second generation, transistors replaced vacuum tubes. The transistor
was invented in 1947 but did not
see widespread use in computers until the late 1950s. The transistor was far
superior to the vacuum tube, allowing computers to become smaller, faster,
cheaper, more energy-efficient and more reliable than their first-generation
predecessors. Though the transistor still generated a great deal of heat that
could damage the computer, it was a vast improvement over the vacuum
tube. Second-generation computers still relied on punched cards for input and
printouts for output.
Second-generation computers moved from cryptic binary machine language to
symbolic, or assembly, languages, which allowed programmers to specify
instructions in words. High-level programming languages were also being
developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in their memory,
which moved from a magnetic drum to magnetic core technology.
Development of the integrated circuit was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called
semiconductors, which drastically increased the speed and efficiency of
computers.
Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating
system, which allowed the device to run many different applications at one time
with a central program that monitored the memory. Computers for the first time
became accessible to a mass audience because they were smaller and cheaper
than their predecessors.
The microprocessor brought the fourth generation of computers, as thousands of
integrated circuits were built onto a single silicon chip. What in the first
generation filled an entire room could now fit in the palm of the hand. The Intel
4004 chip, developed in 1971, located all the components of the computer—from
the central processing unit and memory to input/output controls—on a single
chip.
As these small computers became more powerful, they could be linked together
to form networks, which eventually led to the development of the Internet.
Fourth generation computers also saw the development of GUIs, the mouse and
handheld devices.
Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that
are being used today. The use of parallel processing and superconductors is
helping to make artificial intelligence a reality. Quantum computation and
molecular and nanotechnology will radically change the face of computers in
years to come. The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and self-organization.
3. Write a short note on the fifth generation of computers. What makes it
different from fourth-generation computers?
In the fifth generation, development is continually directed towards expanding
memory size, using very large-scale integration (VLSI) techniques, and
increasing the speed of processors. This increasing power is allowing the
pursuit of new lines of development in computer systems.
The major differences from the fourth generation are an increase in storage
capacity and the incorporation of artificial intelligence, which has increased
the efficiency of computers by enabling a degree of self-organization.
4. Why did the size of computer get reduced in third generation computer?
In the third generation, there was the development of integrated circuits.
Transistors were miniaturized and placed on silicon chips, called semiconductors,
which drastically increased the speed and efficiency of computers. This led to a
decrease in the size of the computer and reduced the cost of fixing or assembly of
the computers.
5. Give short notes on the following:
(a) Versatility
A computer's versatility is determined by its ability to handle multiple processes
and programs simultaneously. For a computer to be versatile, it must have a
current operating system, contemporary hardware and a suitable set of
peripherals.
(b) Storage
In a computer, storage is the place where data is held in an electromagnetic or
optical form for access by a computer processor. There are two general usages.
One, storage is frequently used to mean the devices and data connected to the
computer through input/output operations - that is, hard disk and tape systems
and other forms of storage that don't include computer memory and other
in-computer storage. For the enterprise, the options for this kind of storage
are of much greater variety and expense than those related to memory.
Two, in a more formal usage, storage has been divided into: primary storage,
which holds data in memory (sometimes called random access memory or RAM)
and other "built-in" devices such as the processor's L1 cache, and secondary
storage, which holds data on hard disks, tapes, and other devices requiring
input/output operations.
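The distinction above can be illustrated with a short Python sketch (illustrative only; the file name and values are arbitrary). Data held in a list lives in primary storage (RAM) and is accessed directly, while keeping the same data on disk requires explicit input/output operations.

```python
import os
import tempfile

# Primary storage: the list is held in RAM and accessed directly.
data = [1, 2, 3, 4]

# Secondary storage: writing to and reading from disk are explicit
# input/output operations.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write(",".join(str(n) for n in data))

with open(path) as f:
    restored = [int(n) for n in f.read().split(",")]

print(restored == data)  # the round trip through disk preserves the data
```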
(c) Slide Rule
The slide rule, also known colloquially in the United States as a slipstick, is a
mechanical analog computer. The slide rule is used primarily for multiplication
and division, and also for functions such as roots, logarithms and trigonometry,
but is not normally used for addition or subtraction.
Slide rules come in a diverse range of styles and generally appear in a linear or
circular form with a standardized set of markings (scales) essential to performing
mathematical computations. Slide rules manufactured for specialized fields such
as aviation or finance typically feature additional scales that aid in calculations
common to that field. It is a device consisting of two logarithmically scaled rules
mounted to slide along each other so that multiplication, division, and other more
complex computations are reduced to the mechanical equivalent of addition or
subtraction.
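The principle behind the logarithmic scales can be sketched in Python (an illustrative model, not a description of any particular slide rule). Sliding one scale along another adds physical lengths proportional to logarithms, and adding logarithms is equivalent to multiplying the underlying numbers.

```python
import math

def slide_rule_multiply(a, b):
    # Sliding the scales adds the lengths log(a) + log(b);
    # the result position reads off as a * b.
    return math.exp(math.log(a) + math.log(b))

def slide_rule_divide(a, b):
    # Division subtracts the lengths: log(a) - log(b) = log(a / b).
    return math.exp(math.log(a) - math.log(b))

print(round(slide_rule_multiply(3, 4), 6))  # 12.0
print(round(slide_rule_divide(12, 4), 6))   # 3.0
```

Like a physical slide rule, this model gives approximate results (here limited by floating-point precision rather than by how finely the scales are engraved).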
(d) Babbage’s Analytical Engine
The Analytical Engine was the first design for a fully automatic calculating
machine. It was conceived by British computing pioneer Charles Babbage
(1791-1871), who first had the idea of an advanced calculating machine to
calculate and print mathematical tables in 1812. Conceived in 1834, the
Analytical Engine was
designed to evaluate any mathematical formula and to have even higher powers
of analysis than his original Difference engine of the 1820s. Only part of the
machine as a trial piece was completed before Babbage's death in 1871.
The Analytical Engine incorporated an arithmetic logic unit, control flow in the
form of conditional branching and loops, and integrated memory, making it the
first design for a general-purpose computer that could be described in modern
terms as Turing-complete.
6. Distinguish between Microcomputer and Mainframe computer.
A mainframe computer is a data processing system employed mainly in large
organizations for various applications, including bulk data processing, process
control, industry and consumer statistics, enterprise resource planning, and
financial transaction processing. The term originally referred to the large cabinets
that housed the central processing unit and main memory of early computers.
Later, the term was used to distinguish high-end commercial machines from less
powerful units.
A microcomputer is a complete computer on a smaller scale and is generally a
synonym for the more common term, personal computer or PC, a computer
designed for an individual. A microcomputer contains a microprocessor (a central
processing unit on a microchip), memory in the form of read-only memory and
random access memory, I/O ports and a bus or system of interconnecting wires,
housed in a unit that is usually called a motherboard.