Lecture 11
Operating Systems: I/O Management and File Systems

Topics

1. I/O Management and File Systems
2. Where I/O Fits In
3. I/O Device Types
4. Device Type Features
5. Architectural Support for I/O in Systems
6. Evolution of I/O Functions
7. Bus Support for DMA Control
8. DMA and Busy Cycles
9. DMA Configurations
10. DMA Channels
11. Buffering
12. Buffering Performance
13. Anatomy of a Disk
14. Raw Disk Sector Addressing
15. Access Performance of Disks
16. Disk Scheduling Policies
17. Disk Free Space Management
18. Disk Block Allocation Methods
19. Contiguous Allocation
20. Linked Allocation
21. File Allocation Tables
22. Indexed Schemes
23. Unix File System Layout
24. Unix INODE Usage

Where I/O Fits In

I/O devices (also called peripherals) are the third part of the von Neumann architecture. Peripherals are responsible for permitting the computer to exchange information with its external environment.

Figure: the CPU (PC, IR, MAR, MBR, I/O AR, I/O BR, ALU) and memory (instructions and data) connected through I/O buffers to the peripherals (persistent store and interface devices).

I/O Device Types

I/O devices may communicate with either devices or users. Devices can be categorized according to what they interface with:

1. Human readable
2. Machine readable
3. Data communication

Device Type Features

Devices differ according to their:

1. Data rate
2. Application
3. Control complexity
4. Unit of transfer
5. Data representation
6. Error conditions
7. Storage media
8. Removability of media

Architectural Support for I/O in Systems

Historically the choices for I/O support included:

1. Programmed I/O
2. Interrupt-driven I/O
3. Direct Memory Access (DMA)

The current trend for block-oriented devices is DMA.

Evolution of I/O Functions

Traditionally users make a system call to get I/O. The following methods are used:

1. Direct processor-controlled I/O
2. Polling software module-level control
3. Interrupt-driven software module-level control
4. Interrupt-driven DMA
5. Separate I/O processor using main memory
6. Separate I/O processor with local memory

Disk controllers and modern network cards all fall into the last category.

Bus Support for DMA Control

The data bus typically needs to be augmented with additional lines to support DMA.

Figure 2: a typical DMA block diagram, showing the data count, data register, address register, and control logic connected to the data lines, address lines, DMA request, DMA acknowledge, and interrupt signals.

DMA and Busy Cycles

The bus scheduling for DMA access is typically more flexible than for an interrupt. DMA just has to block memory access; interrupts have to avoid instruction restart whenever possible.

Figure: DMA vs. interrupt scheduling within the instruction cycle (fetch instruction, decode instruction, fetch operand, execute instruction, store result, process interrupt). DMA breakpoints fall between processor cycles; interrupt breakpoints fall at instruction boundaries.

DMA Configurations

DMA can share a common bus with the CPU, be detached, or have its own bus. Some popular DMA configurations:

(a) Single-bus, detached DMA
(b) Single-bus, integrated DMA-I/O
(c) Separate I/O bus

DMA Channels

Systems requiring sustained I/O bandwidth may give DMA controllers dedicated channels, with instruction streams flowing to the controllers along the channels. Multiplexors can be used to select the I/O channel.

Figure: (a) a selector channel and (b) a multiplexor channel, each connecting I/O controllers to main memory and the CPU.

Buffering

Hardware and software can use buffering to store data in fast memory pending I/O to overcome latency.
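The payoff of buffering can be sketched with a toy timing model (a sketch; the function names and per-block times are illustrative, not from the lecture). With a buffer, transfer of the next block overlaps computation on the current one, so each block costs the larger of the two times plus a memory copy, rather than their sum.

```python
def unbuffered_time(n_blocks, T, C):
    """Each block: transfer time T, then compute time C, strictly in sequence."""
    return n_blocks * (T + C)

def buffered_time(n_blocks, T, C, M):
    """With a buffer, the next transfer overlaps the current computation;
    each block costs max(C, T) plus a memory-copy time M."""
    return n_blocks * (max(C, T) + M)

# Illustrative values: T = 10, C = 6, M = 1 time units, 100 blocks.
print(unbuffered_time(100, 10, 6))   # 1600
print(buffered_time(100, 10, 6, 1))  # 1100
```

When the copy cost M approaches T, buffering stops paying off, which is the "excessive copying" concern raised below.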
Figure: buffering configurations between an I/O device, the operating system, and a user process: (a) no buffering, (b) single buffering, (c) double buffering, (d) circular buffering.

Buffering Performance

Let

- M be the time for a memory copy,
- C be the computing time between input requests, and
- T be the time to transfer a block out to the peripheral.

The cost of unbuffered I/O is T + C. The cost of buffered I/O is max(C, T) + M.

Historically M << T; however, recent trends in high-performance computing have shown that excessive copying degrades performance in practice. In general, buffering levels out I/O performance over short-term variations.

Anatomy of a Disk

Hard disks (also called Winchester disks) are often the large-capacity media of choice in modern systems.

Figure: hard disk components: platters carrying tracks (a track at the same position on every platter forms a cylinder), a rotating spindle, and read/write heads on a moving arm.

Raw Disk Sector Addressing

Typically the sectors on a disk are assigned sequential addresses along the tracks. Let b be the block address of a sector.

Table 1: Parameters of Sector Computation

Parameter  Meaning
b          the block address (to find)
i          cylinder of sector
j          surface of sector
k          position of sector within its track
s          number of sectors per track
t          tracks per cylinder

b = k + s * (j + i * t)    [1]

Access Performance of Disks

Seek time is the time to move the arm to the desired track.
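Equation [1] can be written directly in code (a sketch; the disk geometry values are illustrative):

```python
def sector_block_address(i, j, k, s, t):
    """b = k + s * (j + i * t): cylinder i, surface j, sector k within
    its track, with s sectors per track and t tracks per cylinder."""
    return k + s * (j + i * t)

# Illustrative geometry: 16 sectors per track, 4 tracks per cylinder.
# Cylinder 2, surface 1, sector 5: 5 + 16 * (1 + 2 * 4) = 149.
print(sector_block_address(i=2, j=1, k=5, s=16, t=4))  # 149
```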
Let Ts be the seek time, n be the number of tracks traversed, m be the time to traverse a single track, and s be the startup time. Then:

Ts = m * n + s    [2]

Transfer time is the time it takes to scan the information off the disk. Letting T be the transfer time, b be the number of bytes to transfer, N be the sector size in bytes, and r be the revolutions per second:

T = b / (r * N)    [3]

The average access time, Ta, is:

Ta = Ts + 1 / (2 * r) + b / (r * N)    [4]

Disk Scheduling Policies

In practice disk access times are highly sensitive to the previous head position. Most disk scheduling algorithms focus on this.

Figure: head-position traces over time (tracks 0 to 199) for (a) FIFO, (b) SSTF, (c) SCAN, and (d) C-SCAN schedules.

Disk Free Space Management

Available space needs to be found quickly for storage. Typically blocks are:

1. Corrupted (hardware failure, not available for use)
2. Free
3. Used

Typically either bitmaps are used, or a list structure, sometimes with run-length encoding (assuming larger contiguous spaces). The list structure may be contiguous or linked. The lists are stored starting in a fixed block within the partition (for uniform access at boot time), and may chain into other blocks.

Disk Block Allocation Methods

1. Contiguous
2. Linked list (and FAT)
3. Indexed

Contiguous Allocation

This requires that a file n blocks long occupy n contiguous blocks. External fragmentation and file placement are difficult problems.

Figure: a contiguous allocation scheme over blocks 0 to 31, with the directory:

File   Start  Length
count  0      2
tr     14     3
mail   19     6
list   28     4
f      6      2

Linked Allocation

Each block maintains a pointer to the next block, with an entry in the system directory block for the head of each file.
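Reaching logical block n of a linked file means following n pointers from the head block, which is why random access under linked allocation is slow (a sketch; the chain and directory contents are illustrative):

```python
def read_block(chain, directory, filename, n):
    """Return the disk block holding logical block n of a file by
    following per-block next pointers; -1 marks the end of the file."""
    block = directory[filename]      # head of the file's chain
    for _ in range(n):               # one pointer chase (and seek) per hop
        block = chain[block]
        if block == -1:
            raise IndexError("past end of file")
    return block

# An illustrative chain: 9 -> 16 -> 1 -> 10 -> 25, then end of file.
chain = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
directory = {"jeep": 9}
print(read_block(chain, directory, "jeep", 3))  # 10
```

Each hop can land anywhere on the disk, so n logical blocks may cost n seeks.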
Free blocks get placed on a special free linked list. Performance can be bad due to excessive seeks.

Figure: a linked allocation scheme over blocks 0 to 31. The directory entry for file "jeep" records start block 9 and end block 25; the chain runs 9 -> 16 -> 1 -> 10 -> 25, with -1 marking the end of the file.

File Allocation Tables

File allocation tables use a linked list stored in a table with one entry per block, kept in a dedicated block on the partition. This is used in MS-DOS, OS/2, and MS Windows (probably NT too). Redundant storage of the links on disk provides a recovery mechanism.

Figure: a File Allocation Table (FAT) scheme. The directory entry for file "test" records start block 217; the FAT chains 217 -> 618 -> 339 -> end-of-file, with -1 marking unused entries.

Indexed Schemes

Each file has an index block, which is an array of pointers to contiguous disk regions. When a file is allocated, all index pointers are initialized to nil. As space is needed, blocks are appended to the index tree structure. Some options include:

1. Linked
2. Multilevel indexed
3. Combined

Unix File System Layout

The Unix file system dedicates low-order blocks in a partition for:

1. Boot block (for bootable partitions)
2. Superblock (the root of the file system's inode hierarchy)
3. Inodes (indices to blocks on disk for the file system)

Figure: Unix file system partitions: a disk drive is divided into partitions; each filesystem holds boot block(s), a superblock, the i-list (inode ... inode), and directory blocks and data blocks.

Unix INODE Usage

The inodes form a tree of the blocks in a file.

Figure: the i-list, directory blocks, and data blocks; a directory inode points at directory blocks whose entries pair inode numbers with filenames, and a file inode points at its first, second, and third data blocks.

Directory blocks contain inode information for files in the directory. The Unix file system is a directed rooted graph (but not a tree or DAG). Partitions can be added by mounting them (i.e., inserting them into the directory structure). MS-DOS does not have mounting.
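The reach of an inode's index tree can be sketched by counting the blocks it can address (a sketch; the 12 direct pointers, 4096-byte blocks, and 4-byte pointers are illustrative parameters, not values from the lecture):

```python
def max_file_size(block_size, ptr_size, n_direct):
    """Blocks addressable through direct, single-indirect, double-indirect,
    and triple-indirect pointers in a combined (Unix-style) scheme."""
    p = block_size // ptr_size           # pointers that fit in one index block
    blocks = n_direct + p + p**2 + p**3  # direct + 1-, 2-, 3-level indirect
    return blocks * block_size           # maximum file size in bytes

# With these illustrative parameters the triple-indirect level dominates,
# giving a limit of roughly 4 terabytes.
print(max_file_size(block_size=4096, ptr_size=4, n_direct=12))
```

Note how each extra level of indirection multiplies the reach by the number of pointers per block, which is why a small fixed inode can index very large files.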
Figure 14: Inodes and directories in the Unix file system. The directory block for inode 1267 holds entries pairing inode numbers with filenames: "." (1267), ".." (its parent), and "testdir" (2549); the directory block for inode 2549 holds "." (2549) and ".." (1267), and inode 2549 points at its data block.
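The directory walk in Figure 14 can be sketched as a name-to-inode lookup, in the spirit of the kernel's path-resolution routine (a sketch; the tables mirror the figure's inode numbers, and treating 1267 as the root is an assumption for the example):

```python
# Each directory inode maps entry names to inode numbers, as in Figure 14.
dirs = {
    1267: {".": 1267, "..": 1267, "testdir": 2549},  # assume 1267 is the root
    2549: {".": 2549, "..": 1267},
}

def lookup(path, root=1267):
    """Resolve a path one component at a time via directory blocks."""
    inode = root
    for name in path.strip("/").split("/"):
        if name:                      # skip empty components from "//" or "/"
            inode = dirs[inode][name]
    return inode

print(lookup("/testdir"))      # 2549
print(lookup("/testdir/.."))   # 1267
```

Mounting a partition amounts to grafting another such table in at some directory, which is how the rooted graph grows.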