Operating System | Critical Section

Critical Section:

In simple terms, a critical section is a group of instructions/statements, or a region of code, that needs to be executed atomically, such as code that accesses a shared resource (a file, an input or output port, global data, etc.).

In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read it (i.e., a data race across threads), the result is unpredictable.

Access to such shared resources (shared memory, shared files, shared ports, etc.) has to be synchronized. A few programming languages have built-in support for synchronization.

It is critical to understand the importance of race conditions when writing kernel-mode code (a device driver, a kernel thread, etc.), since the programmer can directly access and modify kernel data structures.

A simple solution to the critical section problem can be sketched as shown below:

acquireLock();
Process Critical Section
releaseLock();

A thread must acquire the lock prior to executing the critical section. The lock can be held by only one thread at a time. There are various ways to implement the lock in the above pseudo code.
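
As one concrete sketch (not part of the original pseudo code), the lock can be realized with a POSIX threads mutex in C; the shared counter, loop count, and two worker threads below are assumptions chosen purely for illustration.

#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                          /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* the lock */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* acquireLock()              */
        shared_counter++;             /* critical section           */
        pthread_mutex_unlock(&lock);  /* releaseLock()              */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", shared_counter);  /* with the lock, always 200000 */
    return 0;
}

Without the mutex calls, the two threads would race on shared_counter and the printed value would be unpredictable.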


Operating System | Process Management | CPU Scheduling

Scheduling of processes/work is done so that the work finishes on time.

Below are the different times defined with respect to a process.

Arrival Time:        Time at which the process arrives in the ready queue.
Completion Time:     Time at which the process completes its execution.
Burst Time:          Time required by the process for CPU execution.
Turn Around Time:    Difference between completion time and arrival time.
     Turn Around Time = Completion Time - Arrival Time
Waiting Time (W.T.): Difference between turn around time and burst time.
     Waiting Time = Turn Around Time - Burst Time
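
For example (numbers chosen purely for illustration), a process that arrives at time 0, needs a burst of 4 units, and completes at time 10 has a turn around time of 10 - 0 = 10 and a waiting time of 10 - 4 = 6.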

Why do we need scheduling?
A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted and CPU is free during this time. In multiprogramming systems, one process can use CPU while another is waiting for I/O. This is possible only with process scheduling.

Objectives of Process Scheduling Algorithm

Max CPU utilization [Keep CPU as busy as possible]
Fair allocation of CPU.
Max throughput [Number of processes that complete their execution per time unit]
Min turnaround time [Time taken by a process to finish execution]
Min waiting time [Time a process waits in ready queue]
Min response time [Time from submission of a process until it produces its first response]

Different Scheduling Algorithms

First Come First Serve (FCFS): Simplest scheduling algorithm that schedules according to arrival times of processes.
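
As a rough sketch of how the times defined above come out under FCFS (the three processes and their arrival/burst values below are assumptions chosen for illustration):

#include <stdio.h>

/* FCFS: processes run to completion in order of arrival, without preemption. */
int main(void)
{
    int arrival[] = {0, 1, 2};           /* assumed arrival times, already sorted */
    int burst[]   = {5, 3, 8};           /* assumed burst times */
    int n = 3;
    int time = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];            /* CPU idles until the process arrives */
        time += burst[i];                 /* completion time of process i */
        int tat = time - arrival[i];      /* turn around time */
        int wt  = tat - burst[i];         /* waiting time */
        printf("P%d: completion=%d turnaround=%d waiting=%d\n", i + 1, time, tat, wt);
    }
    return 0;
}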

Shortest Job First (SJF): The process with the shortest burst time is scheduled first.

Shortest Remaining Time First (SRTF): It is the preemptive mode of the SJF algorithm, in which jobs are scheduled according to the shortest remaining time.

Round Robin Scheduling: Each process is assigned a fixed time slice (time quantum) in a cyclic way.
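
A minimal Round Robin sketch in C, assuming for simplicity that all processes arrive at time 0; the burst times and quantum are illustrative values, not from the original text:

#include <stdio.h>

/* Round Robin with a fixed time quantum; all processes assumed to arrive at time 0. */
int main(void)
{
    int burst[] = {5, 3, 8};              /* assumed burst times */
    int n = 3, quantum = 2;
    int remaining[3], completion[3];
    int time = 0, done = 0;

    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {     /* visit the ready processes cyclically */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                completion[i] = time;     /* process i finishes here */
                done++;
            }
        }
    }

    for (int i = 0; i < n; i++)
        printf("P%d: completion=%d waiting=%d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}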

Priority Based Scheduling (Non-Preemptive): In this scheduling, processes are scheduled according to their priorities, i.e., the highest priority process is scheduled first. If the priorities of two processes match, they are scheduled according to arrival time.

Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response ratio is scheduled next. This algorithm avoids starvation.

Response Ratio = (Waiting Time + Burst time) / Burst time
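
For example (figures assumed purely for illustration), a process that has waited 6 units and needs a burst of 3 units has a response ratio of (6 + 3) / 3 = 3, while a newly arrived process with the same burst has a ratio of (0 + 3) / 3 = 1, so the longer-waiting process is picked first.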

Multilevel Queue Scheduling: Processes are placed in different queues according to their priority. Generally, high priority processes are placed in the top-level queue. Processes in a lower-level queue are scheduled only after the processes in the top-level queue have completed.

Multilevel Feedback Queue Scheduling: It allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.

Some useful facts about Scheduling Algorithms:
1) FCFS can cause long waiting times, especially when the first job takes too much CPU time.

2) Both SJF and Shortest Remaining Time First algorithms may cause starvation. Consider a situation in which a long process is in the ready queue and shorter processes keep arriving.

3) If the time quantum for Round Robin scheduling is very large, then it behaves the same as FCFS scheduling.

4) SJF is optimal in terms of average waiting time for a given set of processes, i.e., average waiting time is minimum with this scheduling, but the problem is how to know/predict the burst time of the next job.


Operating System | Process Scheduler

There are three types of process schedulers.

1. Long Term or Job Scheduler: It brings new processes to the 'Ready' state. It controls the degree of multiprogramming, i.e., the number of processes present in the ready state at any point in time.

2. Short Term or CPU Scheduler: It is responsible for selecting one process from the ready state and scheduling it on the running state. Note: the short term scheduler only selects the process to schedule; it does not load the process onto the CPU.
The dispatcher is responsible for loading the process selected by the short term scheduler onto the CPU (Ready to Running state); context switching is done by the dispatcher only. A dispatcher does the following:
1) Switching context.
2) Switching to user mode.
3) Jumping to the proper location in the newly loaded program.

3. Medium Term Scheduler: It is responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa).


Largest Sum Contiguous Subarray

Write an efficient C program to find the sum of the contiguous subarray within a one-dimensional array of numbers which has the largest sum.

Kadane's Algorithm:
Initialize:
    max_so_far = 0
    max_ending_here = 0
Loop for each element a[i] of the array:
    (a) max_ending_here = max_ending_here + a[i]
    (b) if (max_ending_here < 0) then max_ending_here = 0
    (c) if (max_so_far < max_ending_here) then max_so_far = max_ending_here
Return max_so_far
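
A minimal C implementation of the pseudocode above (the sample array is an assumption chosen for illustration):

#include <stdio.h>

/* Kadane's algorithm: largest sum of a contiguous subarray.
 * Mirrors the pseudocode above; this classic formulation assumes
 * the array contains at least one non-negative element. */
int max_subarray_sum(const int a[], int n)
{
    int max_so_far = 0;
    int max_ending_here = 0;

    for (int i = 0; i < n; i++) {
        max_ending_here += a[i];
        if (max_ending_here < 0)
            max_ending_here = 0;            /* drop a negative running prefix */
        if (max_so_far < max_ending_here)
            max_so_far = max_ending_here;   /* record the best sum seen so far */
    }
    return max_so_far;
}

int main(void)
{
    int a[] = {-2, -3, 4, -1, -2, 1, 5, -3};   /* example input, assumed */
    int n = sizeof(a) / sizeof(a[0]);
    printf("Maximum contiguous sum is %d\n", max_subarray_sum(a, n));  /* prints 7 */
    return 0;
}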
