
Posts

Demand Paging In Operating System

⇰  DEMAND PAGING :-  A demand paging system is basically an advanced version of a paging system with swapping, where processes reside in secondary memory and pages are loaded into main memory only on demand, not in advance. When we want to execute a process, we swap it into memory, but rather than swapping the entire process in at once we bring in only the pages that are actually needed.

Let us consider an executable program loaded from disk into memory. One option is to load the entire program into physical memory at program execution time, but we may not initially need the entire program in memory. Suppose a program starts with a list of available options from which the user is to select. Loading the entire program into memory results in loading the executable code for all options, regardless of whether or not an option is ultimately selected by the user. An alternative strategy is to load pages only as they are needed. This technique is known as demand paging, and it is commonly used in virtual memory systems.
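A minimal sketch of the idea in C, assuming a toy page table in which each entry carries a valid bit; the page count, frame allocation, and load_from_disk() helper are hypothetical and only illustrate loading a page on first access:

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_PAGES 8

    struct page_entry {
        bool valid;   /* true once the page is resident in a physical frame */
        int  frame;   /* frame number, meaningful only when valid */
    };

    static struct page_entry page_table[NUM_PAGES];
    static int next_free_frame = 0;

    /* Hypothetical stand-in for reading the page from secondary storage. */
    static int load_from_disk(int page) {
        printf("page fault: loading page %d from disk\n", page);
        return next_free_frame++;       /* assume a free frame is always available */
    }

    /* Access a page, bringing it into memory only when it is first demanded. */
    static int access_page(int page) {
        if (!page_table[page].valid) {  /* page fault */
            page_table[page].frame = load_from_disk(page);
            page_table[page].valid = true;
        }
        return page_table[page].frame;
    }

    int main(void) {
        access_page(3);   /* faults and loads page 3 */
        access_page(3);   /* already resident, no fault */
        access_page(5);   /* faults and loads page 5 */
        return 0;
    }

Pages 3 and 5 are loaded only when first touched; the second access to page 3 finds the valid bit set and causes no fault.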
Recent posts

Logical VS Physical Address Space In Operating System

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit is commonly referred to as a physical address. The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses; in this case, we usually refer to the logical address as a virtual address.

The set of all logical addresses generated by a program is a logical address space, and the set of all physical addresses corresponding to these logical addresses is a physical address space. Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU). The user program generates only logical addresses and thinks that the process runs in locations 0 to max; however, these logical addresses must be mapped to physical addresses before they are used.
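A minimal sketch of the simplest MMU scheme, assuming a single relocation (base) register plus a limit register; the register values are hypothetical and only illustrate how a logical address is turned into a physical one at run time:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical MMU state: relocation (base) and limit registers. */
    struct mmu {
        unsigned long base;
        unsigned long limit;   /* size of the process's logical address space */
    };

    /* Translate a logical address, trapping if it falls outside the limit. */
    static unsigned long translate(const struct mmu *m, unsigned long logical) {
        if (logical >= m->limit) {
            fprintf(stderr, "trap: logical address %lu out of range\n", logical);
            exit(EXIT_FAILURE);
        }
        return m->base + logical;   /* physical = base + logical */
    }

    int main(void) {
        struct mmu m = { .base = 14000, .limit = 4096 };
        printf("logical 346 -> physical %lu\n", translate(&m, 346));  /* 14346 */
        return 0;
    }

The user program only ever sees addresses 0 to limit-1; the MMU adds the base transparently.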

Atomic Transaction In Operating System

All the synchronization techniques we have studied so far are essentially low level, like semaphores. They require the programmer to be involved with all the details of mutual exclusion, critical-region management, deadlock prevention, and crash recovery. A higher-level abstraction exists and is widely used in distributed systems; we will call it an atomic transaction, or simply a transaction. The term atomic action is also widely used. The mutual exclusion of critical sections ensures that critical sections are executed atomically: if two critical sections are executed concurrently, the result is equivalent to their sequential execution in some unknown order.

In many cases we would like to make sure that a critical section forms a single logical unit of work that either is performed in its entirety or is not performed at all. An example is a funds transfer, in which one account is debited and another is credited. Clearly, it is essential for data consistency that either both the credit and the debit occur, or that neither occurs.
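A minimal sketch of that funds-transfer requirement, assuming POSIX threads; the account structure and transfer() function are illustrative only, and the single lock plus the up-front balance check stand in for a real transaction's commit/abort logic, so that either both updates happen or neither does:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct account { long balance; };

    static pthread_mutex_t bank_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Perform the debit and the credit as one logical unit of work. */
    static bool transfer(struct account *from, struct account *to, long amount) {
        bool committed = false;
        pthread_mutex_lock(&bank_lock);
        if (from->balance >= amount) {   /* abort before touching anything */
            from->balance -= amount;     /* debit  */
            to->balance   += amount;     /* credit */
            committed = true;
        }
        pthread_mutex_unlock(&bank_lock);
        return committed;
    }

    int main(void) {
        struct account a = { 100 }, b = { 0 };
        printf("first transfer:  %s\n", transfer(&a, &b, 60) ? "committed" : "aborted");
        printf("second transfer: %s\n", transfer(&a, &b, 60) ? "committed" : "aborted");
        printf("a = %ld, b = %ld\n", a.balance, b.balance);
        return 0;
    }

The second transfer aborts without debiting or crediting anything, which is exactly the all-or-nothing behaviour a transaction must provide.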

Boot Process In Operating System

BOOT BLOCK :-  For a computer to start running, for instance when it is powered up or rebooted, it needs an initial program to run. This initial program, known as the bootstrap program, needs to be simple. It must initialize all aspects of the system, from CPU registers to device controllers and the contents of main memory, and then start the operating system.

To do this job, the bootstrap program finds the operating system kernel on disk, loads the kernel into memory, and then jumps to the initial address to begin operating-system execution.

USE OF ROM :-  In most of today's computers the bootstrap is stored in ROM, for the following reasons :-

i > ROM needs no initialization, and it sits at a fixed location from which the processor can start executing when powered up or reset.
ii > ROM is read-only memory, so it cannot be infected by a computer virus.

The problem is that changing this bootstrap code requires changing the ROM hardware chips.

Classical Problem Of Synchronization

There are many classical problems in synchronization that serve as examples of the large class of concurrency-control problems. These problems are used for testing nearly every newly proposed synchronization method or scheme. Let us take some of the problems one by one :-

The Bounded-Buffer Problem :-  The bounded-buffer problem is commonly used to illustrate the power of synchronization primitives. It is also known as the producer-consumer problem: producers produce items and consumers consume them, but both share a fixed number of buffer slots and use one slot at a time. The classic solution creates two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers respectively. The code of the producer and the consumer is sketched below :-
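A minimal sketch of that solution, assuming POSIX counting semaphores (semaphore.h) and pthreads; the buffer size, item values, and the mutex protecting the buffer indices are illustrative additions:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 5

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;            /* next slot to fill / to empty */

    static sem_t empty_slots;              /* counts empty buffer slots */
    static sem_t full_slots;               /* counts full buffer slots  */
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < 10; item++) {
            sem_wait(&empty_slots);        /* wait for an empty slot */
            pthread_mutex_lock(&mutex);
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&mutex);
            sem_post(&full_slots);         /* one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 10; i++) {
            sem_wait(&full_slots);         /* wait for a full slot */
            pthread_mutex_lock(&mutex);
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&mutex);
            sem_post(&empty_slots);        /* one more empty slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty_slots, 0, BUFFER_SIZE);  /* all slots start empty */
        sem_init(&full_slots, 0, 0);             /* no slot starts full   */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The "empty" semaphore blocks the producer when all slots are full, and the "full" semaphore blocks the consumer when all slots are empty, exactly as described above.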

Disk Management in O/S

DISK FORMATTING :-  Disk formatting is the process of configuring a data storage medium such as a hard drive, floppy disk, or flash drive for initial use. Any existing files on the drive are erased by formatting. Disk formatting is usually done before the initial installation of an O/S or before installation of a new O/S, and also when additional storage is needed in the computer.

A new magnetic disk is a blank slate: it is just a platter of magnetic recording material. Before a disk can store data, it must be divided into sectors that the disk controller can read and write. This process is called low-level formatting, or physical formatting. Low-level formatting fills the disk with a special data structure for each sector. The data structure for each sector typically consists of "a header, a data area and a trailer".

Header and trailer :-  The header and trailer contain information used by the disk controller, such as a sector number and an error-correcting code (ECC).

Data area :-  Most sectors have a data area of 512 bytes, which holds the user data.
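A minimal sketch of that per-sector layout as a C structure, assuming a 512-byte data area; the field names and ECC width are illustrative and do not correspond to any real controller's format:

    #include <stdint.h>

    #define SECTOR_DATA_SIZE 512

    /* Illustrative structure written to each sector by low-level formatting. */
    struct sector {
        uint32_t sector_number;              /* header: locates the sector      */
        uint8_t  data[SECTOR_DATA_SIZE];     /* data area: bytes the O/S stores */
        uint32_t ecc;                        /* trailer: error-correcting code  */
    };

When the controller reads a sector it recomputes the ECC over the data area and compares it with the stored value to detect, and possibly correct, errors.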

Semaphores In Process Synchronization

⇰  Semaphores :-  A semaphore is a method, or tool, used to prevent race conditions. A race condition can cause loss of data or even a deadlock situation, and the semaphore is one of the methods for preventing these conditions. The semaphore was proposed by Dijkstra in 1965 and is a very significant technique for managing concurrent processes. It is a useful tool in the prevention of race conditions, but the use of a semaphore is never by itself a guarantee that a program is free from these problems.

A semaphore is an integer variable that is used in a mutually exclusive manner by various concurrent cooperating processes in order to achieve synchronization; hence the semaphore is one way to achieve synchronization. It is basically a non-negative variable shared between threads, used to solve the critical-section problem and to achieve process synchronization in a multiprocessing environment. A semaphore supports two standard operations, as follows :-
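A minimal sketch of those two operations, wait() and signal(), written as the classical busy-waiting definition; in a real implementation (for example POSIX sem_wait/sem_post) the waiting process blocks instead of spinning, and the test and decrement are performed atomically:

    typedef struct { volatile int value; } semaphore;

    /* wait (P): block (here, spin) until a resource is available, then take it.
     * The test and the decrement must execute atomically in a real system. */
    void semaphore_wait(semaphore *s) {
        while (s->value <= 0)
            ;                /* busy wait until some process signals */
        s->value--;
    }

    /* signal (V): release a resource, allowing one waiting process to proceed. */
    void semaphore_signal(semaphore *s) {
        s->value++;
    }

A binary semaphore initialized to 1 behaves as a mutex lock, while a counting semaphore initialized to N controls access to N instances of a resource.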

Disk Scheduling and Types of Disk Scheduling Algorithms In O/S

DISK SCHEDULING :-  It is the responsibility of the operating system to use the hardware efficiently. For magnetic disks, the access time has two major components, the seek time and the rotational latency. The disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.

We can improve both the access time and the bandwidth by managing the order in which disk I/O requests are serviced. Whenever a process needs I/O to or from the disk, it issues a system call to the operating system. The request specifies several pieces of information :-

• Mode of operation (input or output).
• Disk address for the transfer.
• Memory address for the transfer.
• The number of sectors to be transferred.

If the desired disk drive and controller are available, the request can be serviced immediately. If the drive or controller is busy, any new requests for service will be placed in the queue of pending requests for that drive.
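A minimal sketch of such a request and of first-come first-served (FCFS) servicing, assuming the disk address is simplified to a cylinder number; the queue contents and the starting head position are hypothetical, and the code only totals the head movement:

    #include <stdio.h>
    #include <stdlib.h>

    /* The pieces of information listed above, gathered into one request. */
    struct disk_request {
        int   is_output;       /* mode of operation: 0 = input, 1 = output  */
        long  cylinder;        /* disk address for the transfer (simplified) */
        void *memory_address;  /* buffer for the transfer */
        int   sector_count;    /* number of sectors to be transferred */
    };

    /* Service the pending queue in arrival order, returning total head movement. */
    static long fcfs_head_movement(const struct disk_request *q, int n, long head) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += labs(q[i].cylinder - head);
            head = q[i].cylinder;
        }
        return total;
    }

    int main(void) {
        struct disk_request queue[] = {
            { 0,  98, NULL, 1 }, { 1, 183, NULL, 1 }, { 0,  37, NULL, 1 },
            { 0, 122, NULL, 1 }, { 1,  14, NULL, 1 },
        };
        printf("FCFS total head movement: %ld cylinders\n",
               fcfs_head_movement(queue, 5, 53));
        return 0;
    }

Scheduling algorithms such as SSTF or SCAN reorder this queue to reduce the total head movement; FCFS simply services requests in the order they arrive.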

Disk Structure Of Mass Storage Devices

Mass-storage magnetic disk drives are addressed as large one-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer. The size of a logical block is usually 512 bytes, although low-level formatting can set a different logical block size, such as 1,024 bytes.

The one-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially. Sector 0 is the first sector of the first track on the outermost cylinder. Using this mapping, we can, at least in theory, convert a logical block number into an old-style disk address that consists of a cylinder number, a track number within that cylinder, and a sector number within that track.

In practice, it is difficult to perform this translation, for two reasons.

i> Most disks have some defective sectors, but the mapping hides this by substituting spare sectors from elsewhere on the disk.
ii> The number of sectors per track is not a constant on some drives.

Let's look more closely at the second reason. On media that use constant linear velocity (CLV), the density of bits per track is uniform.
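A minimal sketch of the in-theory conversion mentioned above, assuming an idealized drive with a fixed number of tracks per cylinder and sectors per track (so ignoring spare sectors and zoned recording); the geometry constants are hypothetical:

    #include <stdio.h>

    /* Hypothetical, idealized disk geometry. */
    #define TRACKS_PER_CYLINDER 16   /* number of recording surfaces */
    #define SECTORS_PER_TRACK   63

    struct chs { long cylinder, track, sector; };

    /* Map a logical block number to a cylinder/track/sector address. */
    static struct chs block_to_chs(long block) {
        struct chs addr;
        addr.sector   = block % SECTORS_PER_TRACK;
        addr.track    = (block / SECTORS_PER_TRACK) % TRACKS_PER_CYLINDER;
        addr.cylinder = block / (SECTORS_PER_TRACK * TRACKS_PER_CYLINDER);
        return addr;
    }

    int main(void) {
        struct chs addr = block_to_chs(123456);
        printf("cylinder %ld, track %ld, sector %ld\n",
               addr.cylinder, addr.track, addr.sector);
        return 0;
    }

On a real drive this simple arithmetic breaks down for exactly the two reasons listed above.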

Critical Section Problem In Operating System

⇰  CRITICAL SECTION :-  The critical section is the part of a program where shared resources or data are accessed by various cooperating processes. The shared resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device.

Therefore we can say that when more than one process accesses the same code segment, that segment is known as the critical section. Access to the resources or data in the critical section must be synchronized to maintain data consistency. Each process has a segment of code, called its critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that when one process is executing in its critical section, no other process is allowed to enter or execute in its critical section.

An atomic action is required for a critical section: only one process can execute in its critical section at a time, and all the other processes have to wait to execute in their critical sections.
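A minimal sketch of that rule, assuming pthreads, with a mutex acting as the entry and exit sections; the shared counter and the number of threads are illustrative:

    #include <pthread.h>
    #include <stdio.h>

    static long shared_counter = 0;                  /* the shared data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);               /* entry section */
            shared_counter++;                        /* critical section */
            pthread_mutex_unlock(&lock);             /* exit section */
        }
        return NULL;                                 /* remainder section */
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expected 400000)\n", shared_counter);
        return 0;
    }

Because only one thread at a time holds the lock, the increments never interleave and the final count is always 400000; without the lock the result would vary from run to run.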