OPERATING SYSTEMS
CPU: The CPU is the brains of the computer. Sometimes referred to simply as the
processor or central processor, the CPU is where most calculations take place.
Registers: A register is a special, high-speed storage area within the CPU. All data must be
represented in a register before it can be processed. For example, if two numbers are to
be multiplied, both numbers must be in registers, and the result is also placed in a
register. (A register can instead contain the address of the memory location where the
data is stored rather than the actual data itself.) A sketch illustrating register use
follows the list of register classes below.
There are several classes of registers according to their content:
Data registers are used to store integer numbers (see also Floating Point
Registers, below). In some older and simpler CPUs, a special data register, the
accumulator, is used implicitly for many operations.
Address registers hold memory addresses and are used to access memory. In
some CPUs, a special address register is an index register, although often these
hold numbers used to modify addresses rather than holding addresses.
Conditional registers hold truth values often used to determine whether some
instruction should or should not be executed.
General purpose registers (GPRs) can store both data and addresses, i.e., they
are combined Data/Address registers.
Floating point registers (FPRs) are used to store floating point numbers in many
architectures.
Constant registers hold read-only values (e.g., zero, one, pi, ...).
Vector registers hold data for vector processing done by SIMD instructions
(Single Instruction, Multiple Data).
Special purpose registers hold program state; they usually include the
program counter (aka instruction pointer), stack pointer, and status register
(aka processor status word).
Instruction registers store the instruction currently being executed.
Index registers are used for modifying operand addresses during the run
of a program.
In some architectures, model-specific registers (also called machine-specific
registers) store data and settings related to the processor itself. Because their
meanings are attached to the design of a specific processor, they cannot be
expected to remain standard between processor generations.
Registers related to fetching information from random access memory, a
collection of storage registers located on separate chips from the CPU (unlike
most of the above, these are generally not architectural registers):
Memory buffer register
Memory data register
Memory address register
Memory Type Range Registers
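To make the role of data registers and the ALU concrete, here is a minimal C sketch; the "instructions" in the comments are generic pseudo-assembly for an imaginary load/store machine, not the output of any particular compiler or the instruction set of any real CPU.

    /* Illustrative only: the pseudo-assembly below is invented for this example. */
    int multiply(int a, int b)
    {
        /* A typical CPU would execute something like:
             LOAD  R1, [a]        ; copy a from memory into data register R1
             LOAD  R2, [b]        ; copy b from memory into data register R2
             MUL   R3, R1, R2     ; the ALU multiplies the two register values
             STORE [result], R3   ; write the result register back to memory   */
        int result = a * b;
        return result;
    }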
ALU: The arithmetic logic unit (ALU) is a digital circuit that performs
arithmetic operations (such as addition and subtraction) and logic operations
(such as exclusive OR) on two numbers. The ALU is a fundamental building
block of the central processing unit of a computer.
Control Unit: A control unit is the part of a CPU or other device that directs its
operation. The outputs of the unit control the activity of the rest of the device. A
control unit can be thought of as a finite state machine.
Application Program: An application program (sometimes shortened to
application) is any program designed to perform a specific function directly for
the user or, in some cases, for another application program. Examples of
application programs include word processors; database programs; Web
browsers; development tools; drawing, paint, and image editing programs; and
communication programs.
System Software: System software is a generic term referring to any
computer software which manages and controls the hardware so that
application software can perform a task. It is an essential part of the computer
system. An operating system is an obvious example, while an OpenGL or
database library are less obvious examples. System software contrasts with
application software, which are programs that help the end-user to perform
specific, productive tasks, such as word processing or image manipulation. If
system software is stored on non-volatile storage such as integrated circuits, it
is usually termed firmware.
First Generation Computers: In first generation computers, the operating
instructions or programs were built specifically for the task for which the computer
was manufactured. Machine language was the only way to tell these
machines to perform operations. These computers were very difficult to program,
and even more so when malfunctions occurred. First generation
computers used vacuum tubes and magnetic drums (for data storage).
Second Generation Computers: In second generation computers, the
instructions (programs) could be stored inside the computer's memory. High-level
languages such as COBOL (Common Business-Oriented Language) and
FORTRAN (Formula Translator) were used, and they are still used for some
applications nowadays.
Third Generation Computers: Although transistors were a great improvement
over vacuum tubes, they still generated heat that damaged the
sensitive parts of the computer. The integrated circuit (IC) was invented in
1958 by Jack Kilby. It combined electronic components onto a small silicon
disc made from quartz. Further advances made it possible to fit even
more components on a single chip, or semiconductor. In third generation
computers, the operating systems allowed the machines to run many different
applications, which were monitored and coordinated by the operating system
resident in the computer's memory.
Fourth Generation Computers: Fourth generation computers are the modern day
computers. Their size started to go down with improvements in integrated
circuits. Very Large Scale Integration (VLSI) and Ultra Large Scale Integration (ULSI)
ensured that millions of components could be fitted onto a small chip. This reduced
the size and price of computers while increasing their power, efficiency and reliability.
Fifth Generation Computers: The Fifth Generation Computer Systems project
(FGCS) was an initiative by Japan's Ministry of International Trade and Industry,
begun in 1982, to create a "fifth generation computer" (see history of computing
hardware) that was intended to perform large amounts of computation using massive
parallelism. It was to be the end result of a massive government/industry research
project in Japan during the 1980s. It aimed to create an "epoch-making computer"
with supercomputer-like performance and usable artificial intelligence capabilities.
POST: Power-on self-test (POST) is the common term for a computer's, router's or
printer's pre-boot sequence. The same basic sequence is present on all computer
architectures. It is the first step of the more general process called initial program
load (IPL), booting, or bootstrapping.
BIOS: BIOS (basic input/output system) is the program a personal computer's
microprocessor uses to get the computer system started after you turn it on. It also
manages data flow between the computer's operating system and attached devices
such as the hard disk, video adapter, keyboard, mouse, and printer.
BOOTSTRAP Program: A bootstrap program is the small initial program, typically
stored in ROM, that runs when a computer is powered on. It initializes the hardware
and then loads the operating system kernel into memory and transfers control to it,
a process known as booting or bootstrapping.
Super Computer: A supercomputer is a computer that leads the world in
terms of processing capacity, particularly speed of calculation, at the time of its
introduction. The term "Super Computing" was first used by the New York World
newspaper in 1929 to refer to large custom-built tabulators IBM made for
Columbia University.
Mainframe Computer: Mainframes (often colloquially referred to as Big Iron)
are computers used mainly by large organizations for critical applications,
typically bulk data processing such as census, industry/consumer statistics,
ERP, and financial transaction processing.
Workstation: A workstation, such as a Unix workstation, RISC workstation or
engineering workstation, is a high-end desktop or deskside microcomputer
designed for technical applications. Workstations are intended primarily to be
used by one person at a time, although they can usually also be accessed
remotely by other users when necessary.
Workstations usually offer higher performance than is normally seen on a
personal computer, especially with respect to graphics, processing power,
memory capacity and multitasking ability.
Hardware: There are several differences between computer hardware and
software. However, the fundamental difference between hardware and software
is that hardware is a physical device, something that you're able to touch and
see. For example, the computer monitor you're viewing this text on or the
mouse you're using to navigate is considered computer hardware.
Software: Software is code and instructions that tell a computer and/or
hardware how to operate. This code can be viewed and executed using a
computer or other hardware device. However, without any hardware, software
would not exist. An example of software is Microsoft Windows, an operating
system that allows you to control your computer and other programs that run on
it. Another example of software is the Internet browser you're using to view this
page.
Von Neumann Architecture of a Computer: The von Neumann architecture is a
computer design model that uses a single storage structure to hold both
instructions and data. The term describes such a computer, which implements a
Universal Turing machine, and the common "referential model" of specifying
sequential architectures, in contrast with parallel architectures.The separation
of storage from the processing unit is implicit in the von Neumann architecture.
The term "stored-program computer" is generally used to mean a computer of
this design.
Program Counter: The program counter (also called the instruction pointer,
part of the instruction sequencer in some computers) is a register in a
computer processor which indicates where the computer is in its instruction
sequence. Depending on the details of the particular machine, it holds either
the address of the instruction being executed, or the address of the next
instruction to be executed. The program counter is automatically incremented
for each instruction cycle so that instructions are normally retrieved sequentially
from memory. Certain instructions, such as branches and subroutine calls and
returns, interrupt the sequence by placing a new value in the program counter.
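As a rough illustration of how the program counter drives instruction sequencing, here is a hypothetical C sketch of a toy fetch-execute loop; the instruction encoding and the names (pc, memory, OP_JUMP, and so on) are invented for this example and do not describe any real processor.

    #include <stdio.h>

    /* Toy machine: each "instruction" is one int in a made-up encoding. */
    enum { OP_HALT = 0, OP_PRINT = 1, OP_JUMP = 2 };

    int main(void)
    {
        int memory[] = { OP_PRINT, OP_PRINT, OP_JUMP, 4, OP_HALT };
        int pc = 0;                       /* program counter: index of the next instruction */

        for (;;) {
            int instruction = memory[pc]; /* fetch the instruction the PC points at */
            pc = pc + 1;                  /* normal case: advance to the next instruction */

            if (instruction == OP_HALT) {
                break;
            } else if (instruction == OP_PRINT) {
                printf("executing instruction at %d\n", pc - 1);
            } else if (instruction == OP_JUMP) {
                pc = memory[pc];          /* branch: place a new value in the PC */
            }
        }
        return 0;
    }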
Instruction Register: In computing, an instruction register is the part of a
CPU's control unit that stores the instruction currently being executed. In simple
processors each instruction to be executed is loaded into the instruction register
which holds it while it is decoded, prepared and ultimately executed, which can
take several steps.
Non-Volatile Memory: Non-volatile memory, also known as non-volatile storage, is computer memory that can
retain the stored information even when not powered. Examples of non-volatile
memory include read-only memory, flash memory, most types of magnetic
computer storage devices (e.g. hard disks, floppy disk drives, and magnetic
tape), optical disc drives, and early computer storage methods such as paper
tape and punch cards.
Non-volatile memory is typically used for the task of secondary storage, or long-term
persistent storage. The most widely used form of primary storage today is
a volatile form of random access memory (RAM), meaning that when the
computer is shut down, anything contained in RAM is lost. Unfortunately, most
forms of non-volatile memory have limitations that make them unsuitable for
use as primary storage. Typically, non-volatile memory either costs more or
performs worse than volatile random access memory.
Volatile Memory: Volatile memory, also known as volatile storage, is
computer memory that requires power to maintain the stored information, unlike
non-volatile memory which does not require a maintained power supply. Most
forms of modern random access memory are volatile storage, including
dynamic random access memory and static random access memory. Content
addressable memory and dual-ported RAM are usually implemented using
volatile storage. Early volatile storage technologies include delay line memory
and Williams tube.
Random Access Memory: Random Access Memory (usually known by its
acronym, RAM) is a type of data storage used in computers. It takes the form of
integrated circuits that allow the stored data to be accessed in any order — that
is, at random and without the physical movement of the storage medium or a
physical reading head.
Read Only Memory: Read-only memory (often referred to as its acronym
ROM) is a class of storage media used in computers and other electronic
devices. Because it cannot (easily) be written to, its main uses lie in the
distribution of firmware (software that is very closely related to hardware, and
not likely to need frequent upgrading).
BUS: In computer architecture, a bus is a subsystem that transfers data or
power between computer components inside a computer or between computers
and typically is controlled by device driver software. Unlike a point-to-point
connection, a bus can logically connect several peripherals over the same set
of wires. Each bus defines its set of connectors to physically plug devices,
cards or cables together.
SCSI: Short for Small Computer System Interface, SCSI is a parallel
interface standard used by Apple Macintosh computers, PCs, and many UNIX
systems for attaching peripheral devices to computers. Nearly all Apple
Macintosh computers, excluding only the earliest Macs and the recent iMac,
come with a SCSI port for attaching devices such as disk drives and printers.
SCSI interfaces provide for faster data transmission rates (up to 80
megabytes per second) than standard serial and parallel ports. In addition, you
can attach many devices to a single SCSI port, so that SCSI is really an I/O bus
rather than simply an interface.
Diff between RAM & ROM: As the name implies, you cannot write to read-only
memory; your system can only read it during normal operation. The
computer manufacturer preprograms the data; the data will remain intact
without any power. Random-access memory allows the CPU to read and write
information at any time. This information is erased, however, when you turn the
system's power off. We can further differentiate RAM chips between Static RAM
(SRAM) and Dynamic RAM (DRAM), depending upon whether the data in the
cells needs to be refreshed. (SRAM doesn't need to be refreshed.)
Primary Memory: Primary storage, or internal memory, is computer
memory that is accessible to the central processing unit of a computer without
the use of the computer's input/output channels. Primary storage is used to store
data that is likely to be in active use. Primary storage is typically very fast, as in
the case of RAM.
Secondary Memory:
Secondary memory (or secondary storage) is the slowest and cheapest form
of memory. It cannot be processed directly by the CPU. It must first be copied
into primary storage (also known as RAM ).
Secondary memory devices include magnetic disks like hard drives and
floppy disks; optical disks such as CDs and CD-ROMs; and magnetic tapes,
which were the first forms of secondary memory.
Cache Memory: Cache memory is random access memory (RAM) that a
computer microprocessor can access more quickly than it can access regular
RAM. As the microprocessor processes data, it looks first in the cache memory
and if it finds the data there (from a previous reading of data), it does not have
to do the more time-consuming reading of data from larger memory.
Hit Ratio (in the context of cache memory): The chief measurement of a
cache's effectiveness, namely the percentage of all memory accesses that are
satisfied by data already in the cache.
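For example (with made-up numbers), if a program makes 1,000 memory accesses and 950 of them are satisfied from the cache, the hit ratio is 950 / 1000 = 95%. A minimal C sketch of the bookkeeping; in reality the counts come from hardware performance counters or a cache simulator, not from application code:

    #include <stdio.h>

    int main(void)
    {
        long hits = 950;     /* hypothetical counts */
        long misses = 50;
        double hit_ratio = (double)hits / (double)(hits + misses);
        printf("hit ratio = %.1f%%\n", hit_ratio * 100.0);  /* prints 95.0% */
        return 0;
    }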
Compiler: A compiler is a computer program (or set of programs) that
translates text written in a computer language (the source language) into
another computer language (the target language). The original sequence is
usually called the source code and the output called object code.
Translator: A program that translates source code into object code.
Assembler: a computer program that translates between lower-level
representations of computer programs, typically from assembly language into machine code.
Interpreter: a program designed to run other non-executable programs directly,
executing their source code without first compiling it into machine code.
Virtual Machine: a virtual machine is software that creates a virtualized
environment between the computer platform and its operating system, so that
the end user can operate software on an abstract machine.
Loader: In computing, a loader is a program that performs the functions of a
linker program and then immediately schedules the resulting executable
program for action (in the form of a memory image), without necessarily saving
the program as an executable file.
Linker: a linker or link editor is a program that takes one or more objects
generated by compilers and assembles them into a single executable program.
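As a sketch of how the compiler, assembler, linker and loader fit together on a typical Unix-like system; the file names main.c and util.c and the function greet() are invented for this illustration:

    /* main.c -- example source file.
     *
     * Typical build steps on a Unix-like system:
     *   cc -c main.c      -> main.o   (compiler/assembler produce object code)
     *   cc -c util.c      -> util.o
     *   cc main.o util.o  -> a.out    (linker combines the objects into one executable)
     * When a.out is run, the loader places the executable image in memory and
     * starts it, and the dynamic linker resolves any shared-library references.
     */
    void greet(void);   /* defined in util.c; the linker resolves this reference */

    int main(void)
    {
        greet();
        return 0;
    }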
Boot Block: An area of a disk containing the information needed to load the
operating system and start the computer.
Binary Code: Computers use the binary system to work with data. All data in
the computer is stored in binary code as 1's and 0's (bits). For example, the
decimal number 13 is stored as the bit pattern 1101.
Object code: The source code consists of instructions in a particular language, like
C or FORTRAN. Computers, however, can only execute instructions written in a
low-level language called machine language. To get from source code to machine
language, the programs must be transformed by a compiler. The compiler produces
an intermediary form called object code. Object code is often the same as or similar
to a computer's machine language.
Dynamic Linking: Dynamic linking defers much of the linking process until a
program starts running. It provides a variety of benefits that are hard to get
otherwise:
Dynamically linked shared libraries are easier to create than static linked shared
libraries.
Dynamically linked shared libraries are easier to update than static linked shared
libraries.
The semantics of dynamically linked shared libraries can be much closer to those of
unshared libraries.
Dynamic linking permits a program to load and unload routines at runtime, a facility
that can otherwise be very difficult to provide.
There are a few disadvantages, of course. The runtime performance costs of
dynamic linking are substantial compared to those of static linking, since a large
part of the linking process has to be redone every time a program runs. Every
dynamically linked symbol used in a program has to be looked up in a symbol table
and resolved. (Windows DLLs mitigate this cost somewhat.)
Dynamic libraries are also larger than static libraries, since the dynamic ones have
to include symbol tables.
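On POSIX systems, a program can load a shared library and resolve a symbol at runtime through the dlopen/dlsym interface. A minimal sketch; the library name libdemo.so and the function demo_hello are hypothetical placeholders, and on Linux the program is linked with -ldl:

    #include <stdio.h>
    #include <dlfcn.h>   /* POSIX dynamic-linking interface */

    int main(void)
    {
        /* "libdemo.so" and "demo_hello" are made-up names for this sketch. */
        void *handle = dlopen("./libdemo.so", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look the symbol up in the library's symbol table and call it. */
        void (*demo_hello)(void) = (void (*)(void))dlsym(handle, "demo_hello");
        if (demo_hello != NULL) {
            demo_hello();
        }

        dlclose(handle);  /* unload the library when it is no longer needed */
        return 0;
    }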
Operating Systems: An operating system (OS) is a set of computer
programs that manage the hardware and software resources of a computer.
Batch Processing: Batch processing is the execution of a series of programs
("jobs") on a computer without human interaction, when possible.
Interactive Computing: interactive computing refers to software which
accepts input from humans — for example, data or commands. Interactive
software includes most popular programs, such as word processors or
spreadsheet applications.
Time Sharing: Time-sharing refers to sharing a computing resource among
many users by multitasking.
Multiprogramming: Multiprogramming is a rudimentary form of parallel
processing in which several programs are run at the same time on a
uniprocessor.
Multiprocessing: Multiprocessing is a generic term for the use of two or more
central processing units (CPUs) within a single computer system.
Parallel Processing: In computing, parallel processing is the simultaneous use of
more than one CPU or processor core to execute a program. Ideally, the work is
divided into independent parts that run at the same time on different processors,
and the partial results are then combined, so the overall computation finishes
sooner than it would on a single processor.
Interrupt: an interrupt is an asynchronous signal from hardware indicating the
need for attention or a synchronous event in software indicating the need for a
change in execution. A hardware interrupt causes the processor to save its
state of execution via a context switch, and begin execution of an interrupt
handler. Software interrupts are usually implemented as instructions in the
instruction set, which cause a context switch to an interrupt handler similarly to
a hardware interrupt.
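Hardware interrupt handlers run inside the kernel, but POSIX signals offer a rough user-space analogy: an asynchronous event (here, Ctrl-C delivering SIGINT) suspends the normal flow of the program and runs a handler. A minimal, hedged sketch:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Handler invoked asynchronously when SIGINT arrives, loosely analogous
       to an interrupt handler. write() is used because it is async-signal-safe. */
    static void on_interrupt(int signo)
    {
        (void)signo;
        const char msg[] = "caught SIGINT, resuming main loop\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void)
    {
        signal(SIGINT, on_interrupt);    /* register the handler */
        for (int i = 0; i < 5; i++) {    /* normal flow of execution */
            printf("working... (press Ctrl-C to trigger the handler)\n");
            sleep(1);
        }
        return 0;
    }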
Trap: In an operating system, a trap is a synchronous interrupt raised when a
program performs an exceptional operation (such as dividing by zero or accessing
invalid memory) or executes a special trap instruction; control transfers to the
operating system, which is how system calls are commonly invoked.
Monolithic System: A monolithic architecture is one in which processing, data and
the user interface all reside on the same system.
System Call / Monitor Call: A system call is the mechanism used by an
application program to request a service from the operating system.
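On a Unix-like system, for example, a program asks the kernel to perform output by invoking the write system call (usually through the C library wrapper):

    #include <unistd.h>   /* declares the write() wrapper around the system call */

    int main(void)
    {
        /* write(2) asks the operating system to copy these bytes to file
           descriptor 1 (standard output); the kernel performs the actual I/O. */
        const char msg[] = "hello from a system call\n";
        write(1, msg, sizeof msg - 1);
        return 0;
    }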
Kernel: The kernel is the core component of an operating system; it manages the
system's resources and the communication between hardware and software. (The
KERNAL, spelled with an "a", is Commodore's name for the ROM-resident operating
system core in its 8-bit home computers, from the original PET of 1977, via the
extended, but strongly related, versions used in its successors: the VIC-20,
Commodore 64, Plus/4, C16, and C128.)
Microkernel: A microkernel is a minimal computer operating system kernel
providing only basic operating system services (system calls), while other
services (commonly provided by kernels) are provided by user-space programs
called servers.
Device driver: A device driver, or a software driver is a specific type of
computer software, typically developed to allow interaction with hardware
devices.
Command Line Interface: A command line interface or CLI is a tool for
interacting with computers, often using a text terminal or remote shell client
software, such as PuTTY.
GUI: A program interface that takes advantage of the computer's graphics
capabilities to make the program easier to use.
Resource: In an operating system, a resource is any physical or virtual component of
limited availability, such as CPU time, memory, disk space, or I/O devices, that the
system allocates among processes. The term is also used in other contexts, including:
Resource (Web), anything identified by a Uniform Resource Identifier
Resource (Macintosh), data associated with a Mac OS file
Resource (Windows), data embedded in EXE and DLL files
Resource (Java), application data
Shell: a shell is a piece of software that provides an interface for users
(command line interpreter).
Synchronization: Synchronization (or Sync) is a problem in timekeeping
which requires the coordination of events to operate a system in unison.
Process Control Block: A Process Control Block (PCB, also called Task
Control Block or Task Struct) is a data structure in the operating system kernel
containing the information needed to manage a particular process. The PCB is
"the manifestation of a process in an operating system".[1]
Thread: Threads are a way for a program to fork (or split) itself into two or more
simultaneously (or pseudo-simultaneously) running tasks. Threads and
processes differ from one operating system to another, but in general, the way
that a thread is created and shares its resources is different from the way a
process does.
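A minimal POSIX-threads sketch in C, showing a program splitting itself into two concurrently running tasks that share one address space (compile with -pthread); the function name worker is invented for this example:

    #include <pthread.h>
    #include <stdio.h>

    /* Function run by the second thread; it shares the process's memory,
       so it can see the same global data as main(). */
    static void *worker(void *arg)
    {
        printf("hello from the worker thread (%s)\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, "shared address space"); /* split off a thread */
        printf("hello from the main thread\n");
        pthread_join(tid, NULL);   /* wait for the worker to finish */
        return 0;
    }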
Short Term Scheduling: Short term scheduling is concerned with the allocation of
CPU time to processes in order to meet some pre-defined system performance
objectives. The definition of these objectives (the scheduling policy) is an overall
system design issue, and determines the "character" of the operating system
from the user's (i.e. the buyer's) point of view, giving rise to the traditional
distinctions among "multi-purpose, time-shared", "batch production", "real-time"
systems, and so on.
Throughput: the rate of completion of processes (processes completed per
unit time). This is a "raw" measure of how much work is performed, since it
depends on the execution length of processes, but it's obviously affected by the
scheduling policy.
Long Term Scheduling: Long term scheduling determines which
programs are admitted to the system for execution and when, and which ones
should be exited.
Medium Term Scheduling: Medium term scheduling determines when
processes are to be suspended and resumed.
Parent process: A parent process is a computer process that has created one
or more child processes.
Child Process: A child process is a computer process created by another
process (the parent process).
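On Unix-like systems a parent creates a child with the fork system call; a minimal C sketch:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* duplicate the calling (parent) process */
        if (pid == 0) {
            printf("child:  my pid is %d, my parent is %d\n", getpid(), getppid());
        } else if (pid > 0) {
            printf("parent: my pid is %d, I created child %d\n", getpid(), pid);
            wait(NULL);                /* wait for the child process to exit */
        } else {
            perror("fork");            /* fork failed */
            return 1;
        }
        return 0;
    }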
Interprocess Communication: Inter-Process Communication (IPC) is a set
of techniques for the exchange of data between two or more threads in one or
more processes. Processes may be running on one or more computers
connected by a network. IPC techniques are divided into methods for message
passing, synchronization, shared memory, and remote procedure calls (RPC).
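One of the simplest message-passing IPC mechanisms on Unix-like systems is the pipe. A hedged sketch in C, in which a parent process sends a short message to its child:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                    /* fds[0] = read end, fds[1] = write end */
        char buf[64];

        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        if (fork() == 0) {             /* child: read the message from the pipe */
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            return 0;
        }

        close(fds[0]);                 /* parent: write the message and wait */
        const char msg[] = "hello over the pipe";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }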
Response Time: the interval of time from the moment a service is requested
until the response begins to be received. In time-shared, interactive systems
this is a better measure of responsiveness from a user's point of view than
turnaround time, since processes may begin to produce output early in their
execution.
Turnaround time: the interval between the submission of a process and the
completion of its execution, including the actual running time, plus the time
spent sleeping before being dispatched or while waiting to access various
resources. This is the appropriate responsiveness measure for batch
production, as well as for time-shared systems that maintain multiple batch
queues, sharing CPU time among them.
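A small worked example with made-up numbers: suppose three batch jobs are all submitted at time 0 and run one after another, taking 2, 4 and 6 seconds of CPU time. The jobs then complete at times 2, 6 and 12, so the turnaround times are 2, 6 and 12 seconds (average about 6.67 s) and the throughput is 3 jobs / 12 s = 0.25 jobs per second. The C sketch below simply encodes this arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical first-come-first-served schedule; all jobs arrive at time 0. */
        double burst[] = { 2.0, 4.0, 6.0 };      /* CPU time needed by each job */
        int n = 3;
        double clock = 0.0, total_turnaround = 0.0;

        for (int i = 0; i < n; i++) {
            clock += burst[i];                   /* job i finishes at this time */
            total_turnaround += clock;           /* turnaround = finish time - arrival (0) */
        }

        printf("average turnaround = %.2f s\n", total_turnaround / n);  /* 6.67 s */
        printf("throughput         = %.2f jobs/s\n", n / clock);        /* 0.25   */
        return 0;
    }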