For a direct-mapped cache, a main memory address is viewed as consisting of three fields. List and define the three fields.
One field identifies a unique word or byte within a block of main memory. The remaining two fields specify one of the blocks of main memory. These two fields are a line field, which identifies one of the lines of the cache, and a tag field, which identifies one of the blocks that can fit into that line.

3. For a set-associative cache, a main memory address is viewed as consisting of three fields. List and define the three fields.
One field identifies a unique word or byte within a block of main memory. The remaining two fields specify one of the blocks of main memory. These two fields are a set field, which identifies one of the sets of the cache, and a tag field, which identifies one of the blocks that can fit into that set.

4. Consider a 32-bit microprocessor that has an on-chip 16-KByte four-way set-associative cache. Assume that the cache has a line size of four 32-bit words. Draw a block diagram of this cache showing its organization and how the different address fields are used to determine a cache hit/miss.
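As a sketch of how the hit/miss check works, the address can be split into tag, set, and offset fields in a few lines of code. This assumes a 16-KByte capacity (consistent with the worked figures of 1024 block frames and 256 sets), 16-byte lines, and four-way associativity; the function name is illustrative only.

```python
# Sketch: decoding a 32-bit address for an assumed 16-KByte, four-way
# set-associative cache with 16-byte lines (1024 frames, 256 sets).

OFFSET_BITS = 4   # 16-byte line -> 4 offset bits
SET_BITS = 8      # 256 sets -> 8 set bits
TAG_BITS = 32 - SET_BITS - OFFSET_BITS  # remaining 20 bits

def split_address(addr):
    """Return (tag, set_index, offset) for a 32-bit address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    set_index = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_index, offset

# On a lookup, the set field selects one of the 256 sets; the tag is
# compared against the four tags stored in that set; a match is a hit.
tag, set_index, offset = split_address(0xABCDE8F8)
print(hex(tag), hex(set_index), hex(offset))  # 0xabcde 0x8f 0x8
```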
Where in the cache is the word from memory location ABCDE8F8 mapped?
Block frame size = 16 bytes = four 32-bit words
Number of block frames in cache = 16 KBytes / 16 Bytes = 1024
Number of sets = Number of block frames / Associativity = 1024 / 4 = 256 sets
Tag (20 bits) = ABCDE, Set (8 bits) = 8F, Offset (4 bits) = 8; the word maps to set 8F.

5. What is the general relationship among access time, memory cost, and capacity?
Faster access time, greater cost per bit; greater capacity, smaller cost per bit; greater capacity, slower access time.

6. What are the differences among sequential access, direct access, and random access?
Sequential access: Memory is organized into units of data, called records. Access must be made in a specific linear sequence.
Direct access: Individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach the general vicinity plus sequential searching, counting, or waiting to reach the final location.
Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant.

7.
What is the difference between DRAM and SRAM in terms of characteristics such as speed, size, and cost?
SRAMs generally have faster access times than DRAMs. DRAMs are less expensive and smaller than SRAMs.

8. Explain why one type of RAM is considered to be analog and the other digital.
A DRAM cell is essentially an analog device using a capacitor; the capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as 1 or 0. An SRAM cell is a digital device, in which binary values are stored using traditional flip-flop logic-gate configurations.

9.
What are the differences among EPROM, EEPROM, and flash memory?
EPROM is read and written electrically; before a write operation, all the storage cells must be erased to the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed by shining an intense ultraviolet light through a window that is designed into the memory chip. EEPROM is a read-mostly memory that can be written into at any time without erasing prior contents; only the byte or bytes addressed are updated. Flash memory is intermediate between EPROM and EEPROM in both cost and functionality.
Like EEPROM, flash memory uses an electrical erasing technology. An entire flash memory can be erased in one or a few seconds, which is much faster than EPROM. In addition, it is possible to erase just blocks of memory rather than an entire chip. However, flash memory does not provide byte-level erasure. Like EPROM, flash memory uses only one transistor per bit, and so achieves the high density (compared with EEPROM) of EPROM.

10. Design a 16-bit memory of total capacity 8192 bits using SRAM chips of size 64 x 1 bit. Give the array configuration of the chips on the memory board, showing all required input and output signals for assigning this memory to the lowest address space.
The design should allow for both byte and 16-bit word access. 8192/64 = 128 chips, arranged in 8 rows by 16 columns (one column per bit of the 16-bit word).

11. Consider a dynamic RAM that must be given a refresh cycle 64 times per ms. Each refresh operation requires 150 ns. What percentage of the memory's total operating time must be given to refreshes?
In 1 ms, the time devoted to refresh is 64 x 150 ns = 9600 ns. The fraction of time devoted to memory refresh is (9.6 x 10^-6 s)/(10^-3 s) = 0.0096, which is approximately 1%.

12. Briefly define the seven RAID levels.
0: Nonredundant.
1: Mirrored; every disk has a mirror disk containing the same data.
2: Redundant via Hamming code; an error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks.
3: Bit-interleaved parity; similar to level 2, but instead of an error-correcting code, a simple parity bit is computed for the set of individual bits in the same position on all of the data disks.
4: Block-interleaved parity; a bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
5: Block-interleaved distributed parity; similar to level 4 but distributes the parity strips across all disks.
6: Block-interleaved dual distributed parity; two different parity calculations are carried out and stored in separate blocks on different disks.

13. How is redundancy achieved in a RAID system?
For RAID level 1, redundancy is achieved by having two identical copies of all data. For higher levels, redundancy is achieved by the use of error-correcting codes.

14. What are the major functions of an I/O module?
Control and timing.
Processor communication. Device communication. Data buffering. Error detection.

15. List and briefly explain three techniques for performing I/O.
Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding.
Interrupt-driven I/O: The processor issues an I/O command on behalf of a process, continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work.
The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed.
Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.

16. The DMA mechanism can be configured in a variety of ways. List and explain each configuration.
Configuration 1: Single bus, detached DMA controller. Each transfer uses the bus twice: I/O to DMA, then DMA to memory. The CPU is suspended twice.
Configuration 2: Single bus, integrated DMA controller. The controller may support more than one device. Each transfer uses the bus once: DMA to memory. The CPU is suspended once.
Configuration 3: Separate I/O bus. The bus supports all DMA-enabled devices.

17. A direct memory access (DMA) module is transferring characters to memory using cycle stealing, from a device transmitting at 9600 bps. The processor is fetching instructions at the rate of 1 million instructions per second (MIPS).
Based on the information given, determine how much the processor will be slowed down due to the DMA activity.
At 9600 bps with 8-bit characters, the device delivers 1200 characters per second, so the DMA module steals 1200 memory cycles per second. Relative to the 10^6 instructions executed per second, the slowdown is 1200/1,000,000 = 0.12%.

18. What is the difference between memory-mapped I/O and isolated I/O?
With memory-mapped I/O, there is a single address space for memory locations and I/O devices. The processor treats the status and data registers of I/O modules as memory locations. With isolated I/O, a command specifies whether the address refers to a memory location or an I/O device. The full range of addresses may be available for both.

19. What is an operating system?
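The cycle-stealing arithmetic from question 17 can be reproduced with a short script. The 8-bits-per-character figure and the one-stolen-cycle-per-character model are assumptions, not stated in the problem:

```python
# Sketch of the DMA cycle-stealing slowdown for question 17.
# Assumptions: 8 bits per character, one stolen memory cycle per
# transferred character, one memory cycle per executed instruction.

bits_per_second = 9600
bits_per_char = 8
instructions_per_second = 1_000_000  # 1 MIPS

stolen_cycles_per_second = bits_per_second // bits_per_char  # 1200
slowdown = stolen_cycles_per_second / instructions_per_second

print(f"{slowdown:.2%}")  # 0.12%
```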
The operating system (OS) is the software that controls the execution of programs on a processor and that manages the processor's resources.

20. What are the major types of operating system (OS) scheduling?
i. Long-term scheduling: The decision to add to the pool of processes to be executed.
ii. Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory.
iii. Short-term scheduling: The decision as to which available process will be executed by the processor.

21. Consider a computer system with both segmentation and paging.
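The combined segmentation-and-paging scheme that question 21 refers to can be illustrated with a minimal translation sketch. All field widths, table contents, and names here are hypothetical, chosen only to show how a virtual address passes through a segment's page table:

```python
# Minimal sketch of address translation under combined segmentation and
# paging. Every width and table entry below is a hypothetical example:
# virtual address = segment number | page number | page offset.

PAGE_OFFSET_BITS = 12   # assumed 4-KByte pages
PAGE_NUMBER_BITS = 8    # assumed up to 256 pages per segment

# Hypothetical tables: each segment has its own page table mapping
# page numbers to physical frame numbers.
segment_page_tables = {
    0: {0: 7, 1: 3},    # segment 0: page 0 -> frame 7, page 1 -> frame 3
}

def translate(vaddr):
    """Translate a virtual address to a physical address."""
    offset = vaddr & ((1 << PAGE_OFFSET_BITS) - 1)
    page = (vaddr >> PAGE_OFFSET_BITS) & ((1 << PAGE_NUMBER_BITS) - 1)
    segment = vaddr >> (PAGE_OFFSET_BITS + PAGE_NUMBER_BITS)
    frame = segment_page_tables[segment][page]  # fault handling omitted
    return (frame << PAGE_OFFSET_BITS) | offset

# Segment 0, page 1, offset 0x34 -> frame 3, offset 0x34
print(hex(translate((0 << 20) | (1 << 12) | 0x34)))  # 0x3034
```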