
Contiguous Memory Allocation


  The main memory must accommodate both the operating system and the various user processes, so we need to allocate main memory in the most efficient way possible. Memory is usually divided into two partitions: one for the operating system and one for the user processes. We can place the operating system in either low memory or high memory. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well.

  We usually want several user processes to reside in memory at the same time, so we need to decide how to allocate the available memory to the processes waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next process. The same idea applies to files on disk: if all the logical blocks of a file are stored in contiguous physical blocks on the hard disk, that scheme is also known as contiguous allocation. Contiguous memory allocation can be implemented with the help of two registers: the base and limit registers.
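To make the role of the base and limit registers concrete, here is a minimal sketch (not taken from the post or any particular system) of how hardware could relocate and protect a contiguous partition. The register values and addresses used are invented examples.

/* Sketch of relocation and protection with base and limit registers.
 * The register values below are made-up examples.                     */
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical hardware registers loaded by the dispatcher. */
static unsigned long base_register  = 300040;  /* start of the partition */
static unsigned long limit_register = 120900;  /* size of the partition  */

/* Every CPU-generated (logical) address is checked against the limit
 * and then added to the base to form the physical address.             */
bool translate(unsigned long logical, unsigned long *physical)
{
    if (logical >= limit_register) {
        /* Addressing outside the partition: real hardware would trap
         * to the operating system with an addressing error.            */
        return false;
    }
    *physical = base_register + logical;
    return true;
}

int main(void)
{
    unsigned long phys;
    if (translate(1000, &phys))
        printf("logical 1000 -> physical %lu\n", phys);
    if (!translate(200000, &phys))
        printf("logical 200000 -> trap: outside the partition\n");
    return 0;
}

Every legal logical address simply becomes base + offset, which is why a whole process can be loaded anywhere in memory as long as the space is contiguous.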

 When a process wants to execute, it requests memory. The size of the process is then compared with the amount of contiguous main memory available. If a sufficiently large contiguous block is found, the process is allocated memory and starts executing. Otherwise, it is added to a queue of waiting processes until sufficient contiguous free memory becomes available.
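The following small sketch simulates that decision with a first-fit policy: free memory is tracked as a list of holes, a process is loaded into the first hole large enough for it, and a process that fits in no hole is left waiting. All hole sizes, addresses, and request sizes below are invented for illustration.

/* First-fit contiguous allocation over a list of free holes (sketch). */
#include <stdio.h>

#define MAX_HOLES 10

struct hole { unsigned long start, size; };

static struct hole holes[MAX_HOLES] = {
    { 0,    100 },   /* a 100 KB hole at address 0    */
    { 300,  500 },   /* a 500 KB hole at address 300  */
    { 1000, 200 },   /* a 200 KB hole at address 1000 */
};
static int hole_count = 3;

/* Try to place a process of the given size; return its start address,
 * or -1 if no single hole is big enough (the process keeps waiting).   */
long allocate(unsigned long size)
{
    for (int i = 0; i < hole_count; i++) {
        if (holes[i].size >= size) {
            long start = (long)holes[i].start;
            holes[i].start += size;   /* shrink the hole */
            holes[i].size  -= size;
            return start;
        }
    }
    return -1;  /* stays in the input queue */
}

int main(void)
{
    unsigned long requests[] = { 120, 450, 600 };
    for (int i = 0; i < 3; i++) {
        long addr = allocate(requests[i]);
        if (addr >= 0)
            printf("process of %lu KB loaded at %ld KB\n", requests[i], addr);
        else
            printf("process of %lu KB must wait (no hole large enough)\n",
                   requests[i]);
    }
    return 0;
}

Notice that the 600 KB request is refused even though the holes together contain more than 600 KB of free space; no single hole is big enough. This is the external fragmentation problem listed among the disadvantages below.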


ADVANTAGES:-  
  i] This scheme is simple to implement. 
 ii] This scheme provides very good read performance.
iii] It also supports random (direct) access, as the sketch below shows.
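To see why random access is cheap under contiguous allocation, here is a hedged sketch: the disk block holding any byte of a file can be computed directly from the file's starting block, with no index or pointer chasing. The block size and starting block are made-up example values.

/* Direct computation of a block address under contiguous allocation. */
#include <stdio.h>

int main(void)
{
    unsigned long start_block = 19;    /* first block of the file   */
    unsigned long block_size  = 512;   /* bytes per block (assumed) */
    unsigned long byte_offset = 5000;  /* byte we want to read      */

    /* One division and one addition locate the block. */
    unsigned long block  = start_block + byte_offset / block_size;
    unsigned long within = byte_offset % block_size;

    printf("byte %lu is in block %lu, at offset %lu within the block\n",
           byte_offset, block, within);
    return 0;
}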



DISADVANTAGES:-
   i] The memory (or, for file allocation, the disk) becomes externally fragmented.
  ii] The degree of multiprogramming is reduced.
 iii] It may be difficult to let a file grow, since the blocks following it may already be in use.


CONTIGUOUS V/S NON-CONTIGUOUS MEMORY ALLOCATION :-



Click here for Types of memory allocation


Share, follow, and please comment if you find anything incorrect or want to share more information about the topic discussed above.
