REPORT ON MINI COMPUTERS
PRESENTED TO
PROF.ADNAN KHALID
PRESENTED BY
NIDA IRFAN
L1F10BBAM0452
AHSAN AKRAM
L1F10BBAM0003
USMAN BASHIR
L1F10BBAM0101
AMBER SHAHEEN
L1S10BBAM0053
GHULAM HASSAN
L1F10BBAM0277
Minicomputers
Minicomputers are computers that fall somewhere in between a microcomputer and a mainframe
computer. In times past, the minicomputer was typically a standalone device that was ideal for
use by small and mid-sized businesses that needed more power and memory than could be
obtained with microcomputers, but did not need the resources provided by mainframes. More
recently, a minicomputer is thought of as a server that is part of a larger network.
In the early years of computer technology, a hierarchy of computer sizes and types was used to
define the level of operation needed for different types of applications. The levels ranged from
embedded systems that functioned more or less automatically to parallel processing systems
that were capable of supporting a huge network of interconnected computers and performing a
wide array of tasks simultaneously. The minicomputer tended to sit somewhat low on this
hierarchy, in that the device was considered limited in ability.
The original structure for a minicomputer was a simple computer system that was equipped with
essential programs and functions that would handle simple tasks, such as word processing. The
minicomputer was equipped with terminals that made it possible to attach peripheral devices to
the system, such as a printer. However, the minicomputer usually did not have the hardware or
software that would allow it to be integrated into a larger network. Even so, if there was no
need to use comprehensive applications or interact with other systems, the minicomputer was
often sufficient.
Over time, the concept of a minicomputer has become somewhat obsolete. As technology has
continued to evolve, many tasks that were once the exclusive province of the larger and more
powerful mainframe computers have been assumed by the workstation computers of today. Still,
the mainframe remains in existence, although its purpose and function are often associated with
the role of a large server supporting a network of workstations. In like manner, the minicomputer
has morphed into a server that is ideal for smaller and more localized networks.
Minicomputers are used for scientific and engineering computations, business-transaction
processing, file handling, and database management, and are often now referred to as small or
midsize servers.
HISTORY OF MINICOMPUTERS
Packard Bell PB250
This was the first machine in the collection. It was built around 1961
and predates true minicomputers, since it was rather slow and, at
$50,000 for a typical system, too expensive. Otherwise it is like a
mini in that it is small and takes little power. It is a 22-bit serial
machine with acoustic delay-line memory. The basic I/O device was a
Friden Flexowriter.
DEC PDP-8 Classic
Introduced in 1965, this is usually considered the first true
minicomputer because it was the first computer for less than $25,000. It
was also much faster than earlier machines such as the PB250. It is the
key (though not the first) member of Digital's highly successful PDP-8 family.
Data General SuperNOVA
Data General was an early competitor to DEC. This fast 16-bit mini had
some primitive protection features. Although most of the
minicomputers in the collection were made by Digital, I do have this
DG and an HP 2100A.
DEC PDP-11
The PDP-11 came out in 1970 and gradually displaced the PDP-8
family. It was a 16-bit machine with an elegant instruction set. The
collection includes this first model, the PDP-11/20, and the next and
more powerful model, the PDP-11/45.
Minicomputers pretty much died out when the microprocessor
took over. They went upscale to machines like the VAX, which
eventually grew to mainframe size. Some minicomputer architectures lived on as
microprocessors: the PDP-8 in the Intersil 6100 series chips and the PDP-11 in the
LSI-11 processors.
Structure of minicomputers
Minicomputers were built from integrated circuits (ICs), so-called "chips". This technology
of constructing logic gates was invented independently by Jack Kilby and Robert Noyce
in the late 1950s. They imprinted circuit networks on insulating material and used semiconductor
material - such as silicon or germanium - to perform the actual logical operations. The
advantages of integrated circuits over transistors were significant: they were not only
smaller and faster, but also more reliable and consumed less power. Furthermore, it was now
possible to automate the production of these new chips, which made them more widely available
at a considerably lower price. J. Presper Eckert, co-inventor of the ENIAC, noted in 1991: "I
didn't envision we'd be able to get as many parts on a chip as we finally got." In fact, in 1965
Gordon Moore, who was then working as a semiconductor engineer and went on to co-found Intel
three years later, observed that the complexity of integrated circuits doubles every year (Moore's
Law). This bold statement held until the late 1970s, at which point development slowed to a
doubling of complexity every 18 months.
Minicomputer architecture: The DEC PDP-11
Digital Equipment Corporation - DEC for short - was the leader in the minicomputer market,
both financially and technologically. Their most successful line of products was the PDP
series, PDP standing for Programmed Data Processor. As already mentioned, these computers
were relatively cheap, with prices in the tens of thousands of dollars, which made them
quite popular among universities.
Furthermore, they introduced interactive computing, meaning that for the first time the user was
given direct feedback while working. Up until then, programs were usually entered on
punch cards and the result of the computation was eventually punched onto other cards or
printed out. The minicomputers used the line printer and later introduced the cathode-ray tube
(CRT), or monitor, as a way of giving quick feedback to the user. As multiple terminals
(combinations of input and output devices) were attached to the same main unit, a need arose to
handle the input, output and processing of all the terminals simultaneously. This is why
minicomputers such as the PDP-11 came equipped with a time-sharing operating system capable
of handling multiple tasks at the same time.
The DEC PDP-11 was the predecessor of the VAX-11. Its larger sibling, the PDP-10, was
delivered with the TOPS-10 (Time-sharing OPerating System) operating system and the
MACRO-10 assembler; at MIT, an operating system of their own, ITS (Incompatible
Time-sharing System), was developed for those machines.
The main hardware features of the PDP-11 were:
o 8 general-purpose registers labeled R0 to R7, with R7 also being used as the program
counter
o 64K memory with a 16-bit address
o fixed-point integers in two's complement notation, both 8 bits (1 byte) and 16 bits (1 word)
long
o floating-point numbers both 32 and 64 bits long
o external UNIBUS for bi-directional, asynchronous data communication between devices
The inner view of DEC PDP-11
Because of the UNIBUS architecture, it was possible to build the components of the PDP-11 in a
modular way. This had the additional benefit that the system could be refitted with numerous
extensions, thus enhancing its overall capabilities. Below is a list of some of the available
extensions:
o Extended Arithmetic Element (EAE) for fixed-point multiply, divide and shift
o Extended Instruction Set (EIS) enhancing the CPU to perform the operations above
using general-purpose registers
o Floating Point Processor (FPP) operating on six associated registers
o Floating Instruction Set (FIS) using a stack architecture for floating-point operations
o FASTBUS and high-speed memory, connecting a high-speed memory between CPU and
UNIBUS
o Cache memory interposed between CPU and main memory
o Memory management extending the physical address space
o MASSBUS for high-speed Input/Output
4. Software
The most important single piece of software that ran on these machines was, of
course, the operating system. Up until the early 1970s, operating systems were written
in tight, machine-specific assembler code. This was considered necessary because
only by hand-coding the innermost loops was it possible to achieve the machine's
optimal performance. The obvious drawback of this approach was that the code
could not be reused if another architecture differed in even the slightest way.
While IBM did quite a good job of keeping things compatible, DEC wasn't very
concerned about this issue. In fact, even within the PDP series the machine designs
were quite different; for example, the length of a word in memory varied between
12 and 36 bits as time went on.
In the mid-1960s Bell Labs, together with MIT and General Electric, began work on a
new kind of operating system called "Multics". The main goal was to hide
the complexity of the computer from programmers as well as users. However, Bell Labs
withdrew from the project in 1969 after disagreements with its associates.
Fortunately, Ken Thompson, who had been on the team that worked on Multics,
implemented parts of the project and some of his own ideas on a salvaged PDP-7
and called the resulting system "Unix". Roughly at the same time, Dennis M. Ritchie
developed the C programming language, which has some of its roots in Thompson's
earlier B language, used for the first Unix tools. Between 1972 and 1974,
Thompson and Ritchie reimplemented Unix in C, constantly improving the
system. A first version of Unix was presented at a symposium on operating system
principles at Purdue University in 1973.
The Unix operating system and the C programming language broke with the old
principle that system-level programming should only be done in
assembler. This new approach, while not initially achieving the same performance,
had another huge advantage: once the C compiler had been modified to produce
machine code for a new architecture, all programs written until then could easily be
recompiled and would run on the new platform without major problems. This was
impossible with assembly code.
The concept of making software portable between different platforms eliminated the
need to start working from scratch for each new architecture. Unix, being the
first portable operating system, was immensely successful, and already by 1975 it ran
on several different platforms including the IBM System/370, Honeywell 6000 and
Interdata 8/32. It also became possible to easily port common tools to all platforms.
Taken together with the standardized interface of system calls through which Unix
gives access to the machine, these two factors contributed considerably to making
programming much easier than ever before.
5. Conclusion
The world of computers changed dramatically during the 1970s. Starting from the
mainframe world of the 1950s and 1960s, only ten years later not only had the
hardware changed considerably, but new concepts had also rendered old
assumptions obsolete: it was now important for systems to be compatible with each
other, interactivity had become a prerequisite, and, as far as software was concerned,
portability was the new way to go.
Yet the 1970s should be considered just a transitional phase. As technology
advances ever more rapidly, it becomes obvious that soon enough new ideas will be
challenging these assumptions once again. The PC desktop revolution is just around
the corner.
SOURCES
www.piercefuller.com
www.inf.fu-berlin.de