VUTIYA KEVIN
F21/1926/2012
FEB 116 computer applications assignment
Question 1.
What is a computer? And why is it known as a data processor?
A computer is an electronic device that accepts data, processes it according to stored
instructions, and produces information as output. It is known as a data processor because it can
do the following (a short illustration follows the list):
i. Accept data.
ii. Store data.
iii. Process data as desired.
iv. Retrieve the stored data whenever the need arises.
v. Present results in the desired format.
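A minimal sketch of this accept-store-process-retrieve-present cycle, written in Python purely
for illustration (all function and variable names here are hypothetical, not any standard API):

# Toy illustration of the five data-processing steps.
storage = {}                        # stands in for the computer's memory

def accept_data(key, value):        # 1. accept data
    storage[key] = value            # 2. store data

def process_data(key):              # 3. process data as desired
    return sum(storage[key])

def retrieve_data(key):             # 4. retrieve stored data when needed
    return storage[key]

accept_data("marks", [56, 72, 81])
total = process_data("marks")
# 5. present results in the desired format
print(f"Stored: {retrieve_data('marks')}, total: {total}")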
Question 2.
Explain in brief the various generations in computer technology.
First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were
often enormous, taking up entire rooms. They were very expensive to operate and in addition to
using a great deal of electricity, generated a lot of heat, which was often the cause of
malfunctions.
First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a
time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercially produced computer; it was delivered to its first client, the
U.S. Census Bureau, in 1951.
Second Generation (1956-1963) Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat, which could damage the computer, it
was a vast improvement over the vacuum tube. Second-generation computers still relied on
punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of
COBOL and FORTRAN. These were also the first computers that stored their instructions in
their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which
drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.
Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room
could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer—from the central processing unit and memory to input/output
controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation computers
also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development,
though there are some applications, such as voice recognition, that are being used today. The use
of parallel processing and superconductors is helping to make artificial intelligence a reality.
Quantum computation and molecular and nanotechnology will radically change the face of
computers in years to come. The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and self-organization.
Question 3.
Write a short note on the fifth generation of computers. And what makes it different from the
fourth generation?
Fifth computer generation
The Fifth Generation Computer Systems (FGCS) project was an initiative by Japan's Ministry
of International Trade and Industry, begun in 1982, to create a fifth generation computer that
was to perform much of its computation using massively parallel processing. It was to be the
result of a massive government/industry research project in Japan during the 1980s.
It aimed to create an "epoch-making computer" with supercomputer-like performance and to
provide a platform for future developments in artificial intelligence.
The term "fifth generation" was intended to convey the system as being a leap beyond existing
machines. In the history of computing hardware, computers using vacuum tubes were called the
first generation; transistors and diodes, the second; integrated circuits, the third; and those using
microprocessors, the fourth. Whereas previous computer generations had focused on increasing
the number of logic elements in a single CPU, the fifth generation, it was widely believed at the
time, would instead turn to massive numbers of CPUs for added performance.
How the fifth generation in computer technology differed from the fourth generation:
• In the fifth generation, computers are proposed to work automatically, without the
user’s command, unlike fourth generation computers.
• Computers in the fifth generation can perform calculations using massively parallel
computing, as sketched below.
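As an illustration of the massively parallel idea (this is not FGCS software, only a minimal
sketch using Python's standard multiprocessing module), one large calculation can be divided
among several CPU cores that work at the same time:

# Split one large calculation across several CPU cores.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]      # divide the work four ways
    with Pool(processes=4) as pool:
        results = pool.map(partial_sum, chunks)  # each core works in parallel
    print(sum(results))                          # combine the partial results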
Question 4.
The ICs and other internal computer components were relatively small in size, so the computer
itself became small. The general demand for more portable computers also led to this.
Question 5.
Give notes on the following:
a. Versatility.
Refers to the ability of a computer to perform many different functions; generally it can be
described as the computer's inherent flexibility and adaptability to a wide variety of tasks.
b. Storage:
Computers have a very large storage capacity, especially in what is commonly referred
to as secondary storage. In the event that this space is exhausted, it is easy to
expand it.
c. Slide rule:
The slide rule, often nicknamed a "slipstick", is a mechanical analog computer consisting
of calibrated strips, usually a fixed outer pair and a movable inner one, with a sliding
window called the cursor. It was the most commonly used calculation tool in science and
engineering. Its use began to wane as computers were introduced, starting in the
1950s, and the scientific calculator made it largely obsolete by the early 1970s.
Despite their similar appearance, a slide rule serves a purpose different from that of a
standard ruler: a ruler measures physical distances and aids in drawing straight lines,
while a slide rule performs mathematical operations (see the sketch after these notes).
d. Babbage’s analytical engine:
The Analytical Engine was a proposed mechanical general-purpose computer designed
by English mathematician Charles Babbage
It was first described in 1837 as the successor to Babbage's Difference Engine, a design
for a mechanical computer. The Analytical Engine incorporated an arithmetic logic unit,
control flow in the form of conditional branching and loops, and integrated memory,
making it the first design for a general-purpose computer that could be described in
modern terms as Turing-complete.
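The principle behind the slide rule is that its scales are logarithmic: sliding one scale along
another adds lengths, and adding logarithms multiplies numbers, since log a + log b = log(ab).
A minimal Python sketch of that principle (illustrative only; slide_rule_multiply is a
hypothetical helper, not a real library function):

import math

# A slide rule multiplies by adding logarithms:
# log10(a) + log10(b) = log10(a * b), so a * b = 10 ** (log10(a) + log10(b)).
def slide_rule_multiply(a, b):
    return 10 ** (math.log10(a) + math.log10(b))

print(slide_rule_multiply(3, 7))   # about 21, to slide-rule precision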
Question 6.
Distinguish between a microprocessor and a mainframe.
A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single
integrated circuit (IC), or at most a few integrated circuits. It is a multipurpose, programmable device
that accepts digital data as input, processes it according to instructions stored in its memory, and
provides results as output. It is an example of sequential digital logic, as it has internal memory.
Microprocessors operate on numbers and symbols represented in the binary numeral system. A
mainframe, by contrast, is a data processing system employed mainly in large organizations for various
applications, including bulk data processing, process control, industry and consumer statistics,
enterprise resource planning, and financial transaction processing. Mainframes typically use
proprietary operating systems, and a growing number now also run Unix or Linux.
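To make the "processes it according to instructions stored in its memory" point concrete, here is
a minimal sketch of a fetch-decode-execute cycle in Python. The three-instruction machine is
invented purely for illustration and does not correspond to any real microprocessor's
instruction set.

# Toy fetch-decode-execute loop over a program held in memory.
program = [
    ("LOAD", 5),     # put 5 in the accumulator
    ("ADD", 3),      # add 3 to the accumulator
    ("PRINT", None), # output the accumulator
]

accumulator = 0
pc = 0                            # program counter
while pc < len(program):
    op, arg = program[pc]         # fetch and decode the next instruction
    if op == "LOAD":              # execute it
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "PRINT":
        print(accumulator)        # prints 8
    pc += 1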