
Deadlock Handling In Operating System

     ⇰ DEADLOCK AVOIDANCE :-

As we know, deadlock-prevention algorithms prevent deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur. Possible side effects of deadlock prevention are low device utilization and reduced system throughput. Deadlock avoidance, in contrast, requires additional information about how resources are to be requested.

 For example, in a system with one tape drive and one printer, the system might need to know that process P will request first the tape drive and then the printer before releasing both resources, whereas process Q will request first the printer and then the tape drive. With this knowledge of the complete sequence of requests and releases for each process, the system can decide for each request whether or not the process should wait in order to avoid a possible future deadlock.

 The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this a priori information, it is possible to construct an algorithm that ensures that the system will never enter a deadlocked state. A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist.
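The classic algorithm of this kind is the banker's algorithm, whose safety check repeatedly looks for a process whose remaining need can be satisfied from the currently available resources. Below is a minimal sketch of such a safety check in Python; the function name is_safe, the structures available, max_need and allocation, and the sample numbers are illustrative assumptions, not something taken from this post.

# Sketch of a banker's-style safety check: a state is "safe" if some ordering of
# the processes lets every process obtain its declared maximum and then release
# everything it holds. All names and numbers below are illustrative assumptions.

def is_safe(available, max_need, allocation):
    n = len(max_need)                  # number of processes
    m = len(available)                 # number of resource types
    work = list(available)             # resources currently free
    finish = [False] * n

    while True:
        progressed = False
        for i in range(n):
            # remaining need of process i for each resource type
            need = [max_need[i][j] - allocation[i][j] for j in range(m)]
            if not finish[i] and all(need[j] <= work[j] for j in range(m)):
                # pretend process i runs to completion and releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
        if not progressed:
            break
    return all(finish)                 # safe only if every process can finish

# Hypothetical example: 5 processes, 3 resource types
print(is_safe(available=[3, 3, 2],
              max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
# prints True: the state is safe, so the corresponding requests may be granted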



     ⇰ DEADLOCK DETECTION :-

If a system does not apply either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment the system may provide :-
   ∘ An algorithm that examines the state of the system to determine whether a deadlock has occurred.
   ∘ An algorithm to recover from the deadlock. 

To detect deadlock, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in it. A deadlock exists in the system if and only if the wait-for graph contains a cycle. Hence, to detect a deadlock, the algorithm searches the graph for a cycle.
[Figure: Wait-for graph (from CPS 356 lecture notes on deadlock)]
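As a concrete illustration, here is a minimal sketch in Python of a depth-first-search cycle check on a wait-for graph. The dictionary-of-lists representation, the function name has_cycle, and the three sample processes are assumptions made purely for illustration.

# Sketch of deadlock detection on a wait-for graph.
# Nodes are processes; an edge Pi -> Pj means Pi is waiting for a resource held by Pj.
# A deadlock exists if and only if the graph contains a cycle.

def has_cycle(wait_for):
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / on current DFS path / finished
    colour = {p: WHITE for p in wait_for}

    def dfs(p):
        colour[p] = GREY
        for q in wait_for.get(p, []):
            if colour.get(q, WHITE) == GREY:    # back edge: q is already on the path
                return True
            if colour.get(q, WHITE) == WHITE and dfs(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and dfs(p) for p in wait_for)

# Hypothetical example: P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock
graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(has_cycle(graph))    # True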
If deadlocks occur frequently, then the detection algorithm should be invoked frequently. One option is to invoke the deadlock-detection algorithm every time a request for allocation cannot be granted immediately. In that case we can identify not only the set of deadlocked processes but also the specific process that caused the deadlock.

Of course, invoking the deadlock-detection algorithm for every request incurs considerable overhead in computation time. A less expensive alternative is simply to invoke the algorithm at defined intervals.



   ⇰ RECOVERY FROM DEADLOCK :-


When a detection algorithm determines that a deadlock exists, one possibility is to inform the operator that a deadlock has occurred and to let the operator deal with the deadlock manually. Another possibility is to let the system recover from the deadlock automatically.


 There are two options for breaking a deadlock. One is simply to abort one or more processes to break the circular wait. The other is to preempt some resources from one or more of the deadlocked processes.

To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes. 
• Abort all deadlocked processes. This method clearly will break the deadlock cycle, but at great expense. 
• Abort one process at a time until the deadlock cycle is eliminated (a sketch of this approach follows the list below). 
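Here is a minimal sketch of the second method, continuing the hypothetical wait-for-graph representation used in the detection sketch above: find a cycle, abort one process on it, reclaim its resources (modelled here by removing its node and the edges pointing to it), and repeat until no cycle remains. The helper names find_cycle and recover_by_termination and the naive victim choice are illustrative assumptions; real systems choose victims based on priority, computation time already consumed, and the resources a process holds.

# Sketch of recovery by process termination: abort one process at a time until
# the wait-for graph is cycle-free. Victim selection is deliberately naive here.

def find_cycle(wait_for):
    # returns a list of processes forming one cycle, or None if the graph is acyclic
    def dfs(p, path, on_path):
        on_path.add(p)
        path.append(p)
        for q in wait_for.get(p, []):
            if q in on_path:
                return path[path.index(q):]     # the cycle portion of the current path
            if q not in visited:
                visited.add(q)
                cycle = dfs(q, path, on_path)
                if cycle:
                    return cycle
        on_path.discard(p)
        path.pop()
        return None

    visited = set()
    for p in list(wait_for):
        if p not in visited:
            visited.add(p)
            cycle = dfs(p, [], set())
            if cycle:
                return cycle
    return None

def recover_by_termination(wait_for):
    aborted = []
    while (cycle := find_cycle(wait_for)):
        victim = cycle[0]                       # naive choice: first process on the cycle
        aborted.append(victim)
        wait_for.pop(victim, None)              # the system reclaims the victim's resources,
        for edges in wait_for.values():         # so no process waits on it any longer
            if victim in edges:
                edges.remove(victim)
    return aborted

# Hypothetical example: P1 -> P2 -> P3 -> P1 is a cycle; P4 merely waits on P1
graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1"]}
print(recover_by_termination(graph))    # ['P1'] : aborting P1 breaks the cycle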

To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.



Click here for a discussion of deadlock and its characterization.

Share, follow, and please comment if you find anything incorrect or want to share more information about the topic discussed above.

