
Posts

Virtual Memory In Operating System

We have discussed before various memory management strategies used in computer systems. All these memory management strategies have the same goal :- to keep many processes in memory simultaneously to allow multiprogramming. Here we will read about the overall concept of Virtual Memory. Virtual Memory is a storage allocation scheme in which secondary memory can be treated as part of main memory. The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available. Virtual memory is implemented using both hardware and software. It maps memory addresses, called virtual addresses, into physical addresses in computer memory. Virtual memory is a technique that allows the execution of processes that are not completely loaded in memory. One major advantage of this scheme is that programs can be larger than physical memory. This technique frees the programmer from the concerns of memory-storage limitations.
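The mapping of virtual addresses to physical addresses can be sketched as follows. This is a minimal illustration only, assuming a single-level page table and a 4 KiB page size; the table contents are hypothetical, not from any real system.

```python
# Minimal sketch of virtual-to-physical address translation.
PAGE_SIZE = 4096  # 4 KiB pages (illustrative choice)

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    """Map a virtual address to a physical address via the page table."""
    vpn = virtual_address // PAGE_SIZE    # virtual page number
    offset = virtual_address % PAGE_SIZE  # offset within the page
    frame = page_table[vpn]               # a missing entry would be a "page fault"
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

In a real system the hardware (MMU) performs this lookup on every memory reference, and a missing entry triggers the page-fault handling discussed in the next post.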

Performance Of Demand Paging In O/S

  As we know, when we load the entire program into physical memory at program execution time, this is known as paging. When we load pages of a program only as they are needed, this is known as demand paging, which is commonly used in virtual memory systems. With demand paging, virtual memory pages are loaded only when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory (RAM). So, demand paging can significantly affect the performance of a computer system. To see how demand paging affects performance, let's compute the 'effective access time' for a demand-paged memory. For most computer systems, the memory-access time, denoted by ma, ranges from 10 to 200 nanoseconds. As long as we have no page faults, the effective access time is equal to the memory access time. But if a page fault occurs, we must first read the relevant page from disk and then access the desired word.
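The effective access time is a weighted average of the fast and slow paths: with page-fault rate p, effective access time = (1 - p) × ma + p × (page-fault service time). A small sketch, using the common textbook figures of ma = 200 ns and an 8 ms fault service time (these numbers are illustrative):

```python
def effective_access_time(p, ma_ns, fault_ns):
    """Effective access time for a demand-paged memory.
    p        : page-fault rate (0 <= p <= 1)
    ma_ns    : memory access time in nanoseconds
    fault_ns : page-fault service time in nanoseconds
    """
    return (1 - p) * ma_ns + p * fault_ns

# Even a 1-in-1000 fault rate dominates the average:
print(effective_access_time(0.001, 200, 8_000_000))  # ~8199.8 ns, a ~40x slowdown
```

This shows why keeping the page-fault rate extremely low matters: the disk is so much slower than RAM that even rare faults dominate the average.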

Process Synchronization & Race Condition

       ⇰  SYNCHRONIZATION :- Process Synchronization is simply a way to coordinate processes that share something or use shared data. Synchronization occurs in an operating system among cooperating processes. Before starting the discussion we have to know the types of processes. There are two types of process according to sharing :- i> Independent process. ii> Cooperating process. i> Independent process :- Processes which do not share anything with other processes are called independent processes. Execution of one process will not affect another process. ii> Cooperating process :- Processes which share something with other processes, such as a common variable, memory, code or resources (CPU, printer). Execution of a particular process can therefore affect other processes, and if cooperating processes are not properly synchronised, that can create problems such as data inconsistency. While executing many cooperating processes, process synchronization helps to maintain shared data consistency.
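The data-inconsistency problem can be sketched with two threads updating one shared counter. This is a minimal illustration using Python threads standing in for cooperating processes; the lock is what makes the read-modify-write safe, and removing it can produce a wrong (inconsistent) total.

```python
# Sketch of a race on shared data, fixed with a lock.
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:        # without this, two threads can interleave mid-update
            counter += 1  # read-modify-write on shared data

t1 = threading.Thread(target=deposit, args=(100_000,))
t2 = threading.Thread(target=deposit, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000 with the lock; without it, often less
```

The `counter += 1` line is really three steps (read, add, write), and it is the interleaving of those steps between two processes that creates the race condition named in the title.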

Quick Sort In Data Structure

    ⇰   QUICK SORT :- Quick sort is one type of sorting technique. It follows the divide and conquer approach. Quick sort treats an array as a list of elements. When the sorting process begins, it selects the list's middle element as the pivot. The algorithm then divides the list into two sub-lists, one with the elements that are less than the pivot and the other with elements greater than or equal to the pivot. The algorithm then recursively invokes or calls itself with both sub-lists. Each time the sorting algorithm is invoked, it further divides the elements into smaller sub-lists. This algorithm is quite efficient for large-sized data. In quick sort the middle element of the list is generally considered as the pivot, but we can take the pivot in different ways as follows :- i> Pick the first element as pivot. ii> Pick the last element as pivot. iii> Pick a random element as pivot. iv> Pick the middle element as pivot (the approach we are using here).
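The middle-pivot scheme above can be sketched as follows. Note one deviation from the two-list description: the sketch uses a three-way partition (less / equal / greater) so that elements equal to the pivot do not recurse forever; it is a common variant of the same divide-and-conquer idea.

```python
def quick_sort(items):
    """Quick sort with the middle element as pivot (returns a new list)."""
    if len(items) <= 1:
        return items                        # a 0- or 1-element list is sorted
    pivot = items[len(items) // 2]          # middle element as pivot
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    # Recursively sort each sub-list, then join around the pivot group.
    return quick_sort(less) + equal + quick_sort(greater)

print(quick_sort([4, 3, 5, 2, 1]))  # [1, 2, 3, 4, 5]
```

An in-place version that swaps elements around the pivot is more memory-efficient; the list-building form is used here only because it mirrors the "divide into two sub-lists" description directly.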

Monitor In Process Synchronization

     ⇰   MONITOR :- The monitor is one way of achieving synchronization. The monitor is supported by programming languages to achieve mutual exclusion between processes. Although semaphores provide a convenient and effective mechanism for process synchronization, using semaphores incorrectly can cause timing errors that are difficult to detect. A monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true. Monitors also have a mechanism for signaling other threads that their condition has been met. Monitors provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met. Another definition of a monitor is a thread-safe class, object or module that wraps around a mutex in order to safely allow access to a method or variable by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion.
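A monitor in the "thread-safe class wrapping a mutex" sense can be sketched with Python's `threading.Condition`, which bundles a mutex with wait/signal operations. The bounded buffer below is a standard illustrative example, not from the original post.

```python
# Sketch of a monitor: every method runs under one mutex, and Condition
# provides the wait / signal mechanism described above.
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.cond = threading.Condition()   # a mutex plus wait/notify

    def put(self, item):
        with self.cond:                     # enter the monitor (mutual exclusion)
            while len(self.items) >= self.capacity:
                self.cond.wait()            # give up exclusive access until signaled
            self.items.append(item)
            self.cond.notify_all()          # signal threads waiting on the condition

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()
            item = self.items.pop(0)
            self.cond.notify_all()
            return item

buf = BoundedBuffer(2)
buf.put("a")
print(buf.get())  # a
```

Note the `while` (not `if`) around each `wait()`: a signaled thread re-checks its condition after reacquiring the lock, which is the standard discipline for condition variables.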

Fragmentation In Operating System

  As processes are loaded and removed from memory during allocation, the free memory space is broken into little pieces. After some time a process cannot be allocated memory because of the many small leftover spaces. This occurrence is called fragmentation. In this condition there are many small, unused holes scattered throughout memory. Therefore we can say fragmentation occurs when many of the free blocks are too small to satisfy any request. Memory fragmentation is of two types :- i> Internal Fragmentation. ii> External Fragmentation. i> Internal Fragmentation :- It is a type of memory fragmentation. Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is never used. ii> External Fragmentation :- It is a type of memory fragmentation.
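Internal fragmentation is easy to quantify. A small sketch, assuming a hypothetical allocator that hands out memory only in fixed 1 KiB blocks (the block size is illustrative):

```python
# Sketch of internal fragmentation under a fixed allocation unit.
import math

BLOCK_SIZE = 1024  # hypothetical fixed allocation unit, in bytes

def internal_fragmentation(request_bytes):
    """Bytes wasted inside the blocks allocated for one request."""
    blocks = math.ceil(request_bytes / BLOCK_SIZE)  # round the request up
    return blocks * BLOCK_SIZE - request_bytes      # the unused tail

print(internal_fragmentation(2500))  # 3 blocks = 3072 bytes allocated, 572 wasted
```

External fragmentation, by contrast, is not inside any allocation: it is the sum of the free holes between allocations that are individually too small to satisfy a request.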

Deadlock Handling In Operating System

     ⇰ DEADLOCK  AVOIDANCE :- As we know, deadlock-prevention algorithms prevent deadlocks by limiting how requests can be made. The limits ensure that at least one of the necessary conditions for deadlock cannot occur. Side effects of deadlock prevention are low device utilization and reduced throughput. Avoidance of deadlock requires additional information about how resources are to be requested. For example, in a system with one tape drive and a printer, the system might need to know that process P will request first the tape drive and then the printer before releasing both resources, whereas process Q will request first the printer and then the tape drive. With this prior information about the complete sequence of requests and releases of resources, the system can decide for each request whether or not the process should wait in order to avoid a possible future deadlock. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
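The safety check at the heart of this model (the Banker's-algorithm idea) can be sketched as follows: a state is safe if some ordering of the processes lets every one of them acquire up to its declared maximum and finish. This is a simplified sketch, not a full request-granting algorithm.

```python
# Sketch of the Banker's-style safety check for deadlock avoidance.
def is_safe(available, max_need, allocated):
    """available: free units of each resource type;
    max_need / allocated: one list per process, same length as available."""
    work = list(available)
    finished = [False] * len(max_need)
    progressed = True
    while progressed:
        progressed = False
        for i, (need, has) in enumerate(zip(max_need, allocated)):
            remaining = [n - a for n, a in zip(need, has)]
            if not finished[i] and all(r <= w for r, w in zip(remaining, work)):
                # Process i can run to completion, then releases everything it holds.
                work = [w + a for w, a in zip(work, has)]
                finished[i] = True
                progressed = True
    return all(finished)

# Two resource types (tape drive, printer), processes P and Q each
# declaring a maximum need of one of each, nothing allocated yet:
print(is_safe([1, 1], [[1, 1], [1, 1]], [[0, 0], [0, 0]]))  # True (safe)
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits, which is exactly the per-request decision described above.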

Bubble Sort in Data Structure

       ⇰  BUBBLE SORT :- The bubble sorting technique is a very simple sorting technique. This technique is not efficient in comparison to other sorting techniques, but for smaller lists it works fine. This sorting technique is comparison-based: each pair of adjacent elements is compared and the elements are swapped if they are not in the desired order. Its worst case complexity is Ο(n^2), where n is the number of items in the list. Suppose we want to sort the elements of a list in ascending order; bubble sort loops through the elements in the list, compares the adjacent elements and moves the smaller element towards the top of the list. EXAMPLE :- Let us take an unsorted list :- 4 3 5 2 1. ⇰  ALGORITHM OF BUBBLE SORT :- Variables used :- list = array of elements, n = number of elements in the list, I, J, temp = local variables. Step 1 :- Initialize I = 1.
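The nested-loop algorithm sketched above translates directly into code; here the list/n/temp variables of the pseudocode become Python names, and Python's tuple assignment replaces the explicit temp swap.

```python
def bubble_sort(items):
    """Bubble sort in ascending order, sorting the list in place."""
    n = len(items)
    for i in range(n - 1):                 # after pass i, the last i items are placed
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:    # adjacent pair out of order?
                items[j], items[j + 1] = items[j + 1], items[j]  # swap them
    return items

print(bubble_sort([4, 3, 5, 2, 1]))  # [1, 2, 3, 4, 5]
```

Each pass "bubbles" the largest remaining element to the end of the list, which is why the inner loop can shrink by one element per pass.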

Tree Traversal

        ⇰   TREE TRAVERSAL :- Tree traversal refers to the process of visiting each node in a tree data structure exactly once. You might, for instance, want to add all the values in the tree or find the largest one. For all these operations, you will need to visit each node of the tree exactly once. Linear data structures like arrays, stacks, queues and linked lists have only one logical way to read the data, but a hierarchical data structure like a tree can be traversed in different ways. Following are the generally used techniques for traversing trees :- i> Breadth First Traverse. ii> Depth First Traverse. i> Breadth First Traverse :- Breadth first traverse of a tree is the technique of printing all the nodes of a tree level by level. It is also called Level Order Traversal. Here if we traverse the above tree by the breadth first traverse technique, the traversal will be level wise, in the order :- 1 2 3 4 5 6. ii> Depth First Traverse.
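Level order traversal is usually implemented with a queue. A minimal sketch, assuming a simple binary-tree node with value/left/right fields and a tree shaped to produce the 1 2 3 4 5 6 order mentioned above:

```python
# Sketch of breadth-first (level order) traversal using a queue.
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def level_order(root):
    """Visit every node level by level, left to right."""
    order = []
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()        # take the oldest discovered node
        order.append(node.value)
        if node.left:
            queue.append(node.left)   # enqueue children for the next level
        if node.right:
            queue.append(node.right)
    return order

tree = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6)))
print(level_order(tree))  # [1, 2, 3, 4, 5, 6]
```

Swapping the FIFO queue for a stack (or for recursion) turns this into a depth first traverse, the second technique listed.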