
Posts

Mass-Storage Structure In Operating System

In this post, we will discuss a general overview of the physical structure of secondary and tertiary storage devices.  Magnetic Disks :-    Magnetic disks provide the bulk of secondary storage for modern computer systems. Each disk platter has a flat circular shape, like a CD. Platter diameters generally range from 1.8 to 3.5 inches. The two surfaces of a platter are covered with a magnetic material. We store information by recording it magnetically on the platters. A read–write head “flies” just above each surface of every platter. The heads are attached to a disk arm that moves all the heads as a unit. The surface of a platter is logically divided into circular tracks, which are subdivided into sectors. The set of tracks that are at one arm position makes up a cylinder. There may be thousands of concentric cylinders in a disk drive, and each track may contain hundreds of sectors. When the disk is in use, a drive motor spins it at high speed. Most drives rotate 60 to 250 times per second.
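To make the geometry concrete, here is a minimal Python sketch (my own illustration, not code from the post) that computes the capacity of an idealized drive and maps a logical block address to a (cylinder, head, sector) triple. The geometry numbers are hypothetical, and real drives use zoned recording, so sectors per track is not actually uniform.

```python
# A minimal sketch of the cylinder/track/sector geometry described above,
# assuming a simplified drive with uniform geometry (hypothetical numbers).

def disk_capacity(cylinders, heads, sectors_per_track, sector_size=512):
    """Total capacity in bytes for an idealized cylinder-head-sector geometry."""
    return cylinders * heads * sectors_per_track * sector_size

def lba_to_chs(lba, heads, sectors_per_track):
    """Map a logical block address to a (cylinder, head, sector) triple.
    Sectors are conventionally numbered starting at 1."""
    cylinder = lba // (heads * sectors_per_track)
    head = (lba // sectors_per_track) % heads
    sector = (lba % sectors_per_track) + 1
    return cylinder, head, sector

if __name__ == "__main__":
    # Hypothetical geometry: 10,000 cylinders, 8 surfaces, 500 sectors per track.
    print(disk_capacity(10_000, 8, 500))                   # 20480000000 bytes (~20 GB)
    print(lba_to_chs(123_456, heads=8, sectors_per_track=500))  # (30, 6, 457)
```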

Interrupts In Operating System

  An interrupt is the mechanism by which modules like I/O or memory may interrupt the normal processing of the CPU. Clicking a mouse, dragging the cursor, or printing a document are all cases where an interrupt is generated. REQUIREMENT OF INTERRUPT :-   External devices are comparatively slower than the CPU. Without interrupts, the CPU would waste a lot of time waiting for external devices to catch up with its speed, which decreases the efficiency of the CPU. Hence, interrupts are required to eliminate this limitation. ADVANTAGES :- It increases the efficiency of the CPU. It decreases the waiting time of the CPU. It stops the wastage of instruction cycles. DISADVANTAGES :-   The CPU has to do extra work to handle an interrupt and then resume its previous program. An interrupt is a condition that halts the microprocessor temporarily to work on a different task and then return to its previous task. An interrupt is an event or signal that requests the attention of the CPU. This halt allows peripheral devices to get the CPU's attention when they need service.
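As a rough illustration of the mechanism, here is a small Python sketch (my own, not from the post) that models an interrupt vector table and a dispatcher which saves the interrupted work, runs the device's handler, and then resumes. The IRQ numbers and handlers are hypothetical.

```python
# A minimal software model of interrupt dispatch: a vector table maps
# interrupt numbers to handlers, and the dispatcher saves/restores the
# "current task" around the handler call. All names here are hypothetical.

interrupt_vector = {}          # IRQ number -> handler function

def register_handler(irq, handler):
    interrupt_vector[irq] = handler

def dispatch_interrupt(irq, current_task):
    saved_context = current_task          # save state of interrupted work
    handler = interrupt_vector.get(irq)
    if handler is not None:
        handler()                         # service the device
    return saved_context                  # resume the previous task

register_handler(1, lambda: print("keyboard: key pressed"))
register_handler(14, lambda: print("disk: transfer complete"))

task = "compiling program"
task = dispatch_interrupt(1, task)        # CPU is briefly diverted, then resumes
print("resumed:", task)
```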

Thrashing In Operating System

  Thrashing is a condition or situation in which the system spends the major part of its time servicing page faults, while the actual processing done is negligible.    "A process is said to be thrashing if it is spending more time servicing page faults than executing." The basic idea is that if a process is allocated too few frames, there will be too many, too frequent page faults. As a result, no useful work gets done by the CPU and CPU utilization falls drastically. The long-term scheduler then tries to improve CPU utilization by loading more processes into memory, thereby increasing the degree of multiprogramming. This further decreases CPU utilization, triggering a chain reaction of higher page-fault rates followed by ever higher degrees of multiprogramming, called thrashing. In the above diagram, up to some point (say λ) the CPU utilization is very high and rises with the degree of multiprogramming; beyond that point it drops sharply as the system starts to thrash.
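The toy Python model below (my own illustration, not from the post, with made-up numbers) shows the shape of that curve: utilization climbs while every process can keep its working set in memory, then collapses once frames per process drop below what each process needs.

```python
# A toy model of CPU utilization vs. degree of multiprogramming.
# TOTAL_FRAMES, WORKING_SET and FAULT_COST are assumed illustrative values.

TOTAL_FRAMES = 100        # frames available to user processes
WORKING_SET = 20          # frames each process really needs (assumed)
FAULT_COST = 3            # how steeply utilization drops once faulting starts

def cpu_utilization(num_processes):
    frames_each = TOTAL_FRAMES / num_processes
    if frames_each >= WORKING_SET:
        # everyone fits: more processes -> more overlap of CPU and I/O
        return min(1.0, 0.2 * num_processes)
    # thrashing: the fewer frames each process has, the more it faults
    fault_rate = (WORKING_SET - frames_each) / WORKING_SET
    return max(0.0, 1.0 - fault_rate * FAULT_COST)

for n in (1, 3, 5, 7, 10, 20):
    print(n, round(cpu_utilization(n), 2))   # rises to 1.0 at n=5, then collapses
```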

Replacement of Page in O/S

  The number of frames allocated to a process can also change dynamically, depending on whether you use global replacement or local replacement when replacing a page on a page fault. 1. Local replacement :-     When a process needs a page which is not in memory, it can bring in the new page and allocate it a frame from its own set of allocated frames only. Advantage :-    The pages in memory for a particular process, and its page-fault ratio, are affected by the paging behaviour of only that process. Disadvantage :-    In local replacement the number of frames allocated to a process does not change, so a low-priority process may hinder a high-priority process by not making its frames available to the high-priority process. 2. Global replacement :-      When a process needs a page which is not in memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process, i.e. one process can take a frame from another.
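The difference boils down to which pool of frames the victim is chosen from, as the Python sketch below shows (my own illustration; the frame ownership data and the least-recently-used victim policy are hypothetical simplifications).

```python
# Local vs. global replacement: the only change is the candidate pool
# from which a victim frame is chosen.

def pick_victim(frames, faulting_pid, policy):
    """frames: list of dicts like {'pid': owner, 'last_used': tick}."""
    if policy == "local":
        candidates = [f for f in frames if f["pid"] == faulting_pid]
    else:  # "global": any frame, even one owned by another process
        candidates = frames
    return min(candidates, key=lambda f: f["last_used"])

frames = [
    {"pid": 1, "last_used": 10},
    {"pid": 1, "last_used": 40},
    {"pid": 2, "last_used": 5},   # oldest overall, but belongs to process 2
]
print(pick_victim(frames, faulting_pid=1, policy="local"))   # {'pid': 1, 'last_used': 10}
print(pick_victim(frames, faulting_pid=1, policy="global"))  # {'pid': 2, 'last_used': 5}
```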

Heap Sort In Data Structure

    ⇰  HEAP SORT :- Heap sort is another sorting technique, applied to a binary tree. It is a comparison-based sorting technique built on the heap data structure, and it is similar to selection sort. But first of all we need to know about the heap data structure. A binary heap is a complete binary tree where items are stored in a special order such that each root node is compared with its children and arranged accordingly. The heap is of two types :- a min heap (the value of a root node is less than or equal to its children) and a max heap (the value of a root node is greater than or equal to its children). A heap can be represented by a binary tree or by an array.   ⇰  ALGORITHM HEAP SORT :- Step1 :- Write the root in the output list. Step2 :- Write the last element at the root. Step3 :- Heapify the tree. Step4 :- Repeat the three steps until the heap is empty.   ⇰  IMPLEMENTATION OF HEAP SORT :- Let us take a heap :- We take the root element and put it into the list, as in the sketch below.
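Here is a runnable max-heap sort in Python following the steps above (my own implementation, not code from the post).

```python
# Heap sort: build a max heap, then repeatedly move the root (the maximum)
# to the end of the array and re-heapify the remaining elements.

def heapify(a, n, i):
    """Sift the element at index i down so the subtree rooted at i is a max heap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    # Step 0: build a max heap from the unsorted array.
    for i in range(n // 2 - 1, -1, -1):
        heapify(a, n, i)
    # Steps 1-4: swap root with the last element, shrink the heap, re-heapify.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)
    return a

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```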

Frame Allocation Algorithm In Operating System

In this post we learn the basic algorithms of frame allocation. There are basically two algorithms for frame allocation :-  1. Equal allocation.  2. Proportional allocation.  1. Equal allocation :-    The easiest way to split m frames among n processes is to give everyone an equal share, m/n frames (ignoring, for the moment, frames needed by the O/S). For example, if there are 93 frames and 5 processes, each process gets 18 frames. The 3 leftover frames can be used as a free-frame buffer pool. This scheme is called equal allocation. DISADVANTAGE :-   In a system with processes of varying sizes, it does not make much sense to give each process an equal number of frames.   Allocating a large number of frames to a small process just wastes the allocated but unused frames. 2. Proportional allocation :-     Consider a system with a 1 KB frame size. If a small student process of 10 KB and an interactive database of 127 KB are the only two processes running in a system with 62 free frames, proportional allocation splits the frames in proportion to the sizes of the processes, as in the sketch below.
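The proportional rule is a_i = (s_i / S) × m, where s_i is a process's size, S the total size of all processes, and m the number of free frames. The Python sketch below is my own illustration; the 10 KB and 127 KB process sizes come from the example above, and 62 free frames is the usual textbook figure assumed for it.

```python
# Equal vs. proportional frame allocation.

def equal_allocation(m, num_processes):
    share = m // num_processes
    return [share] * num_processes          # leftover frames go to a buffer pool

def proportional_allocation(m, sizes):
    total = sum(sizes)                      # S = sum of s_i
    return [s * m // total for s in sizes]  # a_i = (s_i / S) * m, truncated

print(equal_allocation(93, 5))                 # [18, 18, 18, 18, 18], 3 left over
print(proportional_allocation(62, [10, 127]))  # [4, 57]
```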

Allocation Of Frames In Operating System

 In this post we will cover the topic :- allocation of frames. As we know, an important aspect of an operating system is virtual memory, which is implemented using the demand-paging technique. Demand paging necessitates the development of a page-replacement algorithm and a frame-allocation algorithm. Frame-allocation algorithms are used when you have multiple processes; they help decide how many frames are allocated to each process. Consider a single-user system with 128 KB of memory composed of frames 1 KB in size. This system has 128 frames. The operating system may take 35 KB, leaving 93 frames for the user processes.  Under pure demand paging, all 93 frames would initially be put on the free-frame list. When a user process started execution, it would generate a sequence of page faults. The first 93 page faults would all get free frames from the free-frame list. When the free-frame list was exhausted, a page-replacement algorithm would be used to select one of the 93 in-memory pages to be replaced with the 94th, and so on.
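The Python sketch below (my own illustration, not from the post) replays that scenario: the first faults simply consume the free-frame list, and only once it is empty does a replacement policy have to pick a victim.

```python
# Pure demand paging: free frames first, replacement only after they run out.

free_frames = list(range(93))      # 93 free frames after the OS takes its share
page_table = {}                    # page number -> frame number

def handle_page_fault(page, choose_victim):
    if free_frames:
        frame = free_frames.pop()              # cheap case: a free frame exists
    else:
        victim = choose_victim(page_table)     # expensive case: replacement
        frame = page_table.pop(victim)
    page_table[page] = frame

# choose_victim is a placeholder policy; here we evict the lowest-numbered page.
for p in range(95):
    handle_page_fault(p, choose_victim=lambda pt: min(pt))

print(len(page_table), "pages resident,", len(free_frames), "free frames")  # 93, 0
```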

Merge Sort In data Structure

    ⇰  MERGE SORT :- Merge sort is another sorting technique. Like quick sort, it is based on the divide-and-conquer approach: it breaks a list down into several sub-lists until each sub-list consists of a single element, and then merges those sub-lists in a manner that results in a sorted list. Its worst-case time complexity is Ο(n log n).    It divides the input list into two sub-lists, calls itself for the two sub-lists, and then merges the two sorted sub-lists. Suppose we have two sub-lists: listA with n elements and listB with m elements. The operation that combines the elements of listA and listB into a single result_list with R = n + m elements is called merging.     If the elements of listA and listB are in sorted order, then the result_list is also in sorted order. If listA and listB are not in sorted order, then we have to sort them first, and then both lists can be merged.
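Here is a runnable merge sort in Python matching the description above (my own implementation, not code from the post).

```python
# Merge sort: split the list in half, sort each half recursively,
# then merge the two sorted halves.

def merge(listA, listB):
    """Combine two sorted lists of n and m elements into one sorted list of n + m."""
    result = []
    i = j = 0
    while i < len(listA) and j < len(listB):
        if listA[i] <= listB[j]:
            result.append(listA[i]); i += 1
        else:
            result.append(listB[j]); j += 1
    result.extend(listA[i:])
    result.extend(listB[j:])
    return result

def merge_sort(items):
    if len(items) <= 1:                 # a single element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide: sort each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)           # conquer: merge the sorted halves

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```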

Page Replacement In Operating System

As we know, if we want to increase the degree of multiprogramming, we over-allocate memory using the demand-paging technique. But if the CPU demands a page which is not present in memory, a page fault occurs. In this situation the O/S takes control from the CPU and replaces a page in memory with the demanded page; control then returns to the CPU, which continues execution on the demanded page. Therefore, we can say page replacement occurs when the CPU demands a page which is not in memory and memory is full, so one page must be replaced in memory by the demanded page. A page fault happens when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory. Page replacement takes the following approach. If no frame is free, we find one that is not currently being used and free it. We can free a frame by writing its contents to swap space and changing the page table to indicate that the page is no longer in memory.
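That freeing step can be sketched in a few lines of Python (my own illustration; the page-table layout and the swap_out call are hypothetical stand-ins for the real structures and disk write).

```python
# Freeing a victim frame: write its contents to swap space if modified,
# then mark the page as no longer present in the page table.

def free_frame(victim_page, page_table, swap_out):
    entry = page_table[victim_page]
    if entry["dirty"]:                       # only modified pages must be written back
        swap_out(victim_page, entry["frame"])
    entry["present"] = False                 # page is no longer in memory
    return entry["frame"]                    # this frame can now hold the new page

page_table = {7: {"frame": 3, "present": True, "dirty": True}}
frame = free_frame(
    7, page_table,
    swap_out=lambda p, f: print(f"page {p} written from frame {f} to swap"),
)
print("freed frame", frame)
```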

Page Replacement Algorithms In O/S

There are many different page-replacement algorithms; every operating system probably has its own replacement scheme.  Let us look at the various types of page-replacement algorithms in this post. There are the following types of page-replacement algorithms :-   1. FIFO page replacement.   2. Optimal page replacement.   3. LRU page replacement. In general, we want the algorithm with the lowest page-fault rate. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. The string of memory references is called a reference string. We can generate reference strings artificially by using a random-number generator.  We use the following reference string for a memory with three frames :-      7 0 1 2 0 3 4 2 3 0 3 2 0 1 7 0 1 1. FIFO page replacement :-    It is the simplest page-replacement algorithm. Its full form is First-In, First-Out. FIFO page replacement associates with each page the time when that page was brought into memory.
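The Python sketch below is my own implementation of FIFO replacement; the reference string and the three frames come from the example above. It always evicts the page that has been in memory the longest and counts the resulting page faults.

```python
# FIFO page replacement: evict the oldest resident page on every fault.

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()        # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()            # evict the page brought in earliest
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 4, 2, 3, 0, 3, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(refs, 3))            # 11 page faults with 3 frames
```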