MCS-041 June 2018

1. (a) What is a Critical Section ? Give a monitor solution to the Dining philosophers' problem and explain.

Answer : - A Critical Section is the part of a program in which a process accesses shared resources. The shared resource may be any resource in the computer, such as a memory location, a data structure, the CPU or an I/O device.

A critical section cannot be executed by more than one process at the same time, so the operating system faces the problem of allowing processes into, and keeping them out of, their critical sections.

The critical section problem is the problem of designing a protocol that the processes can use to cooperate, so that a race condition among them can never arise.

In order to synchronize cooperating processes, our main task is to solve the critical section problem. We need to provide a solution in such a way that the following conditions are satisfied.

Requirements of Synchronization mechanisms

Mutual Exclusion - If one process is executing in its critical section, then no other process is allowed to execute in its critical section at the same time.
Progress - If no process is executing in its critical section, only those processes that wish to enter their critical sections take part in deciding which one enters next, and this decision cannot be postponed indefinitely.
Bounded Waiting - There is a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted.
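
The question also asks for a monitor solution to the Dining Philosophers' problem. Below is a minimal sketch of the classical monitor-based solution in Python; it is illustrative only, using one threading.Lock for the monitor's implicit mutual exclusion and one Condition per philosopher for the monitor's condition variables (the class and method names are assumptions, not from the syllabus text).

```python
import threading

N = 5
THINKING, HUNGRY, EATING = 0, 1, 2

class DiningPhilosophersMonitor:
    """Monitor sketch: one lock provides the monitor's mutual exclusion,
    and each philosopher waits on his own condition variable."""

    def __init__(self, n=N):
        self.n = n
        self.state = [THINKING] * n
        self.lock = threading.Lock()
        self.can_eat = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        # Philosopher i may start eating only if he is hungry and
        # neither of his neighbours is currently eating.
        left, right = (i - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == HUNGRY
                and self.state[left] != EATING
                and self.state[right] != EATING):
            self.state[i] = EATING
            self.can_eat[i].notify()

    def pickup(self, i):              # called before eating
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:
                self.can_eat[i].wait()

    def putdown(self, i):             # called after eating
        with self.lock:
            self.state[i] = THINKING
            self._test((i - 1) % self.n)   # a neighbour may now be able to eat
            self._test((i + 1) % self.n)
```

Each philosopher i repeatedly thinks, calls pickup(i), eats and calls putdown(i). Because both forks are acquired atomically inside the monitor, no two neighbours ever eat at the same time and the circular wait that leads to deadlock with individually picked-up forks cannot occur; starvation of an unlucky philosopher is still possible unless an additional fairness rule is added.
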
1. (b) Describe Linked and Indexed allocation for disk space allocation.

Answer : -

Linked Allocation

In linked allocation, each file is a linked list of disk blocks, which may be scattered anywhere on the disk. The directory entry contains a pointer to the first (and optionally the last) block of the file, and each block contains a pointer to the next block. There is no external fragmentation, but only sequential access is efficient and the pointers consume disk space.

Indexed Allocation

In indexed allocation, all the pointers for a file are brought together into one location called the index block. The directory entry points to the index block, whose i-th entry gives the disk address of the i-th block of the file. This supports direct access without external fragmentation, at the cost of the space overhead of the index block.
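
As a rough illustration of the access pattern each scheme supports, the sketch below uses made-up block numbers: with linked allocation the chain of pointers must be followed block by block, while with indexed allocation the i-th block of the file is found with a single lookup in the index block.

```python
# Linked allocation (hypothetical): each block stores the number of the next
# block of the file; None marks the end of the file.
next_block = {7: 12, 12: 3, 3: None}          # file occupies blocks 7 -> 12 -> 3

def linked_blocks(start):
    order, block = [], start
    while block is not None:                   # sequential access only
        order.append(block)
        block = next_block[block]
    return order

# Indexed allocation (hypothetical): one index block lists all data blocks.
index_block = [7, 12, 3]

def nth_block(i):
    return index_block[i]                      # direct access to any block

print(linked_blocks(7))    # [7, 12, 3]
print(nth_block(2))        # 3 - found without walking the chain
```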


1. (c) Consider the page reference string :
1, 2, 3, 4, 2, 5, 3, 4, 2, 6, 7, 8, 7, 9, 7, 8, 2, 5, 4 and 9
Calculate how many page faults would occur for LRU and FIFO page replacement algorithms, when the number of frames is 3. Assume all frames are initially empty.

Answer : - Tracing the reference string through 3 initially empty frames gives 14 page faults with FIFO replacement and 16 page faults with LRU replacement.
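
The counts can be verified with a short simulation. The sketch below is plain Python, assuming nothing beyond the reference string and the 3 empty frames given in the question; the FIFO queue and the LRU recency order are kept explicitly.

```python
from collections import OrderedDict, deque

refs = [1, 2, 3, 4, 2, 5, 3, 4, 2, 6, 7, 8, 7, 9, 7, 8, 2, 5, 4, 9]
FRAMES = 3

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())    # evict the oldest-loaded page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0              # keys kept in recency order
    for page in refs:
        if page in memory:
            memory.move_to_end(page)               # hit: mark most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)         # evict least recently used
            memory[page] = None
    return faults

print("FIFO page faults:", fifo_faults(refs, FRAMES))   # 14
print("LRU  page faults:", lru_faults(refs, FRAMES))    # 16
```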


1. (d) What is RPC ? Describe the steps involved in the execution of a RPC.

Answer : - A Remote Procedure Call (RPC) is an interprocess communication technique used in client-server based applications. It allows a program to call a procedure that resides in another address space, commonly on another machine on the network, as if it were a local subroutine or function call.

The client issues a request message that the RPC mechanism translates and sends to the server; the request names the remote procedure to be executed together with its parameters. When the server receives the request, it executes the procedure and sends the result back to the client.

The implementation of RPC involves five elements :

  1. Client
  2. Client Stub
  3. RPC-Runtime
  4. Server Stub
  5. Server

Client - The user process that initiates the remote procedure call; to the client the call looks like an ordinary local call.

Client Stub - Packs (marshals) the procedure parameters into a request message and hands it to the RPC runtime; it also unpacks the result when the reply arrives.

RPC-Runtime - Handles the transmission of request and reply messages between the client machine and the server machine, including retransmission, acknowledgement and routing.

Server Stub - Unpacks (unmarshals) the parameters from the request message, calls the actual procedure on the server, and packs the result into the reply message.

Server - Executes the requested procedure and returns the result to its stub.
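
As a concrete (if simplified) illustration, Python's standard xmlrpc package exposes exactly these roles: the ServerProxy object acts as the client stub, the SimpleXMLRPCServer dispatcher acts as the server stub, and the HTTP/XML transport between them plays the part of the RPC runtime. The procedure name add and port 8000 below are arbitrary examples.

```python
# server.py - the server and server stub
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):                        # the actual remote procedure
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")  # the server stub dispatches "add" calls here
server.serve_forever()
```

and, run separately, the client side:

```python
# client.py - the client and client stub
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")   # client stub
print(proxy.add(2, 3))   # looks like a local call; marshalling, transport and
                         # unmarshalling happen inside the stub and runtime
```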




2. (a) Define a Process. Explain various states of a process. How does a process differ from a thread ?

Answer : - A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four sections - stack, heap, text and data.

Stack - The process stack contains temporary data such as method/function parameters, return addresses and local variables.
Heap - This is memory that is dynamically allocated to the process during its run time.
Text - This section contains the compiled program code, i.e. the executable instructions of the program.
Data - This section contains the global and static variables.


Process Life Cycle

When a process executes, it passes through different states. The various states of a process are as follows -

New - This is the initial state, when a process is first started/created.
Ready - After creation, the process enters the ready state, i.e. it is loaded into main memory. The process here is ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a ready queue.
Running - Once the process has been assigned to a processor by the OS scheduler, its state is set to running and the processor executes its instructions.
Waiting - The process moves into the waiting state if it needs to wait for a resource, such as user input or a file to become available. Once the I/O operation is completed, the process goes back to the ready state.
Terminated or Exit - Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.


Difference between Process and Thread

A process is an independent program in execution with its own address space, whereas a thread is a lightweight unit of execution that exists within a process. All threads of a process share the same address space, code section, data section and open files, while processes do not share these with one another. As a result, creating a thread, switching between threads and communicating between threads are much cheaper than the corresponding operations on processes, but a fault in one thread can affect every other thread of the same process (see the sketch below).
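
A small sketch makes the address-space difference visible: threads created with Python's threading module share the parent's global data, while a child created with multiprocessing gets its own copy, so the parent never sees the child's update (the counter variable and bump function are hypothetical; a real multithreaded program would also protect the counter with a lock).

```python
import threading
import multiprocessing

counter = 0

def bump():
    global counter
    counter += 1

if __name__ == "__main__":
    # Threads run inside the same process and share its data section,
    # so both increments are visible to the main thread.
    t1, t2 = threading.Thread(target=bump), threading.Thread(target=bump)
    t1.start(); t2.start(); t1.join(); t2.join()
    print("counter after two threads :", counter)     # 2

    # A child process has its own address space; its increment happens in
    # its own copy of counter, so the parent's value does not change.
    p = multiprocessing.Process(target=bump)
    p.start(); p.join()
    print("counter after the process :", counter)     # still 2
```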

2. (b) With the help of a neat diagram, explain segmented paging and paged segmentation.

Answer : -

Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In segmented paging, the logical address space of a process is divided into variable-size segments, and each segment is further divided into fixed-size pages, which are loaded into equal-sized frames of main memory.

Advantages of Segmented Paging

- The page table size is reduced, because each segment keeps its own page table covering only that segment.
- Segments need not be stored contiguously in memory, so external fragmentation is eliminated.

Disadvantages of Segmented Paging

- Internal fragmentation can still occur in the last page of a segment.
- Address translation needs extra memory references (the segment table and then a page table), which slows memory access unless a TLB is used.
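
A minimal address-translation sketch (hypothetical table contents, 1 KB pages) shows the two-level lookup that segmented paging performs: a logical address (segment, page, offset) is resolved first through the segment table and then through that segment's page table.

```python
PAGE_SIZE = 1024   # assumed page size

# Segment table: segment number -> page table of that segment.
# Page table:    page number    -> frame number in main memory.
segment_table = {
    0: {0: 5, 1: 9},      # hypothetical mappings
    1: {0: 2},
}

def translate(segment, page, offset):
    page_table = segment_table[segment]     # 1st lookup: the segment table
    frame = page_table[page]                # 2nd lookup: that segment's page table
    return frame * PAGE_SIZE + offset       # physical address

print(translate(segment=0, page=1, offset=100))   # frame 9 -> 9*1024 + 100 = 9316
```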




3. (a) Consider the following set of processes with the length of CPU burst time given in milliseconds :

Process    Burst Time (msec)    Arrival Time (msec)    Priority
P1         24                   0                      4
P2         7                    3                      3
P3         6                    5                      2
P4         10                   10                     1

(i) Draw Gantt chart for FCFS, SJF and RR (quantum = 4) scheduling algorithms.

Answer : -

FCFS (in order of arrival) :
| P1 (0-24) | P2 (24-31) | P3 (31-37) | P4 (37-47) |

SJF (non-preemptive; at t = 24 the ready processes are P2, P3 and P4, and the shortest burst is picked each time) :
| P1 (0-24) | P3 (24-30) | P2 (30-37) | P4 (37-47) |

RR, quantum = 4 (a newly arriving process joins the ready queue ahead of the preempted one) :
| P1 (0-4) | P2 (4-8) | P1 (8-12) | P3 (12-16) | P2 (16-19) | P4 (19-23) | P1 (23-27) | P3 (27-29) | P4 (29-33) | P1 (33-37) | P4 (37-39) | P1 (39-43) | P1 (43-47) |

(ii) Calculate the average waiting time and turnaround time for each of the above mentioned algorithms.

Answer : -

FCFS : waiting times P1 = 0, P2 = 21, P3 = 26, P4 = 27, so average waiting time = 74 / 4 = 18.5 msec; turnaround times P1 = 24, P2 = 28, P3 = 32, P4 = 37, so average turnaround time = 121 / 4 = 30.25 msec.

SJF : waiting times P1 = 0, P2 = 27, P3 = 19, P4 = 27, so average waiting time = 73 / 4 = 18.25 msec; turnaround times P1 = 24, P2 = 34, P3 = 25, P4 = 37, so average turnaround time = 120 / 4 = 30 msec.

RR (quantum = 4) : waiting times P1 = 23, P2 = 9, P3 = 18, P4 = 19, so average waiting time = 69 / 4 = 17.25 msec; turnaround times P1 = 47, P2 = 16, P3 = 24, P4 = 29, so average turnaround time = 116 / 4 = 29 msec.
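
The figures above can be checked with a short simulation. The sketch below is plain Python; it assumes non-preemptive SJF and, for RR, that a process arriving during a time slice joins the ready queue ahead of the preempted process, the same conventions used in the Gantt charts above.

```python
from collections import deque

# (name, arrival time, burst time) taken from the table in the question
procs = [("P1", 0, 24), ("P2", 3, 7), ("P3", 5, 6), ("P4", 10, 10)]

def metrics(completion):
    """Average waiting and turnaround time from per-process completion times."""
    tat = {name: completion[name] - arr for name, arr, _ in procs}
    wait = {name: tat[name] - burst for name, _, burst in procs}
    n = len(procs)
    return sum(wait.values()) / n, sum(tat.values()) / n

def fcfs():
    t, completion = 0, {}
    for name, arr, burst in sorted(procs, key=lambda p: p[1]):
        t = max(t, arr) + burst
        completion[name] = t
    return completion

def sjf():                       # non-preemptive shortest job first
    t, completion = 0, {}
    while len(completion) < len(procs):
        ready = [p for p in procs if p[1] <= t and p[0] not in completion]
        if not ready:            # CPU idle until the next arrival
            t = min(p[1] for p in procs if p[0] not in completion)
            continue
        name, _, burst = min(ready, key=lambda p: p[2])
        t += burst
        completion[name] = t
    return completion

def round_robin(q=4):
    t, completion = 0, {}
    remaining = {name: burst for name, _, burst in procs}
    arrivals, i, ready = sorted(procs, key=lambda p: p[1]), 0, deque()
    while len(completion) < len(procs):
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append(arrivals[i][0]); i += 1
        if not ready:            # CPU idle until the next arrival
            t = arrivals[i][1]; continue
        name = ready.popleft()
        run = min(q, remaining[name])
        t += run
        remaining[name] -= run
        # processes that arrived during this slice queue ahead of the preempted one
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append(arrivals[i][0]); i += 1
        if remaining[name] == 0:
            completion[name] = t
        else:
            ready.append(name)
    return completion

for label, comp in [("FCFS", fcfs()), ("SJF", sjf()), ("RR q=4", round_robin())]:
    avg_wait, avg_tat = metrics(comp)
    print(f"{label}: average waiting = {avg_wait} ms, average turnaround = {avg_tat} ms")
```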


3. (b) Explain Access Matrix and Mandatory Access Control Security Models.

Answer : -

Access Matrix - The access matrix is a general model of protection in which the rows represent domains (subjects) and the columns represent objects such as files, devices or other domains. Each entry Access(i, j) lists the set of operations that a process executing in domain i is allowed to perform on object j. The protection policy is changed by adding or removing rights in the entries, and special rights such as owner, copy and control govern who may modify the matrix itself.

Mandatory Access Control (MAC) - In the mandatory access control model, access decisions are made by the system rather than by the owner of an object. Every subject is given a clearance and every object a classification (security label), and a subject may access an object only if the policy's relation between the labels holds, as in the Bell-LaPadula model, where a subject may not read objects above its clearance nor write to objects below it. Unlike discretionary access control, users cannot override or pass on these rights.
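
For the access matrix, a tiny sketch (hypothetical domains, objects and rights) shows how a request is checked against the entry Access(domain, object):

```python
# Hypothetical access matrix: rows are domains, columns are objects,
# and each entry is the set of rights the domain holds on that object.
access_matrix = {
    "D1": {"file1": {"read"},          "file2": {"read", "write"}},
    "D2": {"file1": {"read", "write"}, "printer": {"print"}},
}

def allowed(domain, obj, right):
    """An operation is permitted only if the right appears in Access(domain, obj)."""
    return right in access_matrix.get(domain, {}).get(obj, set())

print(allowed("D1", "file2", "write"))   # True
print(allowed("D1", "file1", "write"))   # False - "write" is not in Access(D1, file1)
```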


4. (a) Explain RAID with different levels. Give the features of each level.

Answer : - RAID is a technology that is used to increase the performance and/or reliability of data storage. RAID stands for Redundant Array of Independent Disks. A RAID system consists of two or more drives working in parallel. There are different RAID levels, each optimized for a specific situation.


RAID 0 – striping

In a RAID 0 system data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This performance can be enhanced further by using multiple controllers, ideally one controller per disk.

Advantages

- RAID 0 offers great read and write performance, because the I/O load is spread over all the drives.
- All storage capacity is usable; there is no overhead.
- The technology is easy to implement.

Disadvantages

- RAID 0 is not fault-tolerant. If one drive fails, all data in the array is lost, so it should not be used for critical data.


RAID 1 – mirroring

Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror drive (or set of mirror drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.

Advantages

- RAID 1 offers excellent read speed and a write speed comparable to that of a single drive.
- If a drive fails, data does not have to be rebuilt; it only has to be copied from the surviving drive, so recovery is very simple.

Disadvantages

- The effective storage capacity is only half of the total drive capacity, because all data is written twice, which makes it expensive.


RAID 5 – striping with parity

RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16 drives. Data blocks are striped across the drives and on one drive a parity checksum of all the block data is written. The parity data are not written to a fixed drive, they are spread across all drives. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data.

Advantages

- Read transactions are fast, and data remains accessible after a single drive failure.
- Storage is used efficiently: only one drive's worth of capacity is spent on parity.

Disadvantages

- Write transactions are slower because the parity has to be calculated for every write.
- If a drive fails, rebuilding the array on a replacement drive can take a long time, and a second failure during the rebuild destroys the data.
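
The parity described above is a plain XOR across the data blocks of a stripe, which is what makes single-drive recovery possible. A minimal sketch with three hypothetical 4-byte blocks:

```python
from functools import reduce

def xor_blocks(*blocks):
    """XOR equally sized blocks byte by byte (this is the RAID 5 parity)."""
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"     # data blocks of one stripe
parity = xor_blocks(d0, d1, d2)            # stored on one of the drives

# If the drive holding d1 fails, its contents are rebuilt from the
# surviving data blocks and the parity block.
recovered = xor_blocks(d0, d2, parity)
assert recovered == d1
print("recovered block:", recovered)
```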


RAID 6 – striping with double parity

RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chance that two drives break down at exactly the same moment is of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the array will survive even that second failure.

Advantages

- Read transactions are fast, and the array survives the failure of two drives at the same time.
- Data remains accessible even while a failed drive is being rebuilt.

Disadvantages

- Write transactions are slower than RAID 5, because two parity blocks have to be calculated.
- At least four drives are required, and rebuilding a failed drive can still take a long time.


RAID 10 – combining mirroring and striping

It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.

Advantages

- It combines the speed of striping with the redundancy of mirroring.
- If one drive in a mirrored pair fails, the rebuild is very fast: data only has to be copied from the surviving mirror rather than recalculated from parity.

Disadvantages

- Half of the total storage capacity goes to mirroring, so it is an expensive way to obtain redundancy.




4. (b) Explain the concept of Thrashing. How can we prevent it ?

Answer : - Thrashing is a situation in which a process spends more time on paging or swapping activity than on its actual execution. When thrashing occurs, the CPU is kept so busy swapping pages that it cannot respond to user programs as much as required.


Causes of Thrashing

Initially, when CPU utilization is low, the process scheduling mechanism loads many processes into memory at the same time so that the degree of multiprogramming can be increased. In this situation there are more processes in memory than there are available frames, so each process is allocated only a limited number of frames.

When a higher-priority process arrives in memory and no frame is free, a page belonging to another process is moved out to secondary storage and the freed frame is allocated to the higher-priority process.

In other words, as memory fills up, processes start to spend most of their time waiting for their required pages to be swapped in, and CPU utilization drops again because most of the processes are waiting for pages.


How to prevent Thrashing

In order to prevent thrashing, we first need to know how many frames a process really needs at any given time. The working-set model is a technique used to reduce the causes of thrashing.

The working-set model starts by looking at how many frames a process is actually using. It is based on a locality model of process execution: as a process executes, it moves from locality to locality, where a locality is a set of pages that are actively used together. If every process is given enough frames to hold its current locality, thrashing is avoided; if the total demand exceeds the number of available frames, a process is suspended (swapped out) instead.
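
A minimal sketch of the working-set idea (hypothetical reference string, window size delta = 4): the working set at time t is the set of distinct pages referenced in the last delta references, and if the sum of the working-set sizes of all processes exceeds the number of available frames, the operating system swaps out a process instead of letting thrashing begin.

```python
def working_set_sizes(refs, delta):
    """Size of the working set after each reference, using a window of
    the last `delta` page references."""
    sizes = []
    for t in range(len(refs)):
        window = refs[max(0, t - delta + 1): t + 1]
        sizes.append(len(set(window)))
    return sizes

refs = [1, 2, 1, 3, 4, 4, 4, 2, 5, 5, 5, 3]   # hypothetical reference string
print(working_set_sizes(refs, delta=4))
# The process needs roughly max(sizes) frames to keep its current locality
# resident; giving it fewer frames than that is what leads to thrashing.
```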




4. (c) Discuss the SCAN and C-SCAN disk scheduling algorithms. List the advantages of SCAN over the C-SCAN algorithm.

Answer : -

SCAN Disk Scheduling

It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular direction till the end of the disk, servicing all the requests coming in its path, and then it turns back and moves in the reverse direction, servicing the requests coming in that path.

Example
Consider the following disk request sequence for a disk with 200 tracks (numbered 0 to 199) -
65, 48, 39, 08, 99, 164, 152, 38, 124
Assume the read/write head is initially at track 65 and is moving towards track 0.

Number of tracks moved by the head
= [(65 - 48) + (48 - 39) + (39 - 38) + (38 - 8) + (8 - 0) + (99 - 0) + (124 - 99) + (152 - 124) + (164 - 152)]
= [17 + 9 + 1 + 30 + 8 + 99 + 25 + 28 + 12]
= 229


C-SCAN Disk Scheduling

In the C-SCAN algorithm, the disk arm moves in a particular direction servicing requests until it reaches the last cylinder in that direction; it then jumps to the last cylinder of the opposite end without servicing any request on the way, and resumes servicing the remaining requests while moving in the same direction as before.

Example
Consider the same disk request sequence for a disk with 200 tracks (numbered 0 to 199) -
65, 48, 39, 08, 99, 164, 152, 38, 124
Assume the read/write head is initially at track 65 and is moving towards track 0.

Number of tracks moved by the head
= [(65 - 48) + (48 - 39) + (39 - 38) + (38 - 8) + (8 - 0) + (199 - 0) + (199 - 164) + (164 - 152) + (152 - 124) + (124 - 99)]
= [17 + 9 + 1 + 30 + 8 + 199 + 35 + 12 + 28 + 25]
= 364

Advantages of SCAN over C-SCAN : SCAN services requests in both directions of head movement, so it normally produces less total head movement than C-SCAN, which makes a long, unproductive jump back to the opposite end of the disk after every sweep. C-SCAN's benefit is a more uniform waiting time, but in terms of total seek distance SCAN is cheaper, as the example above shows (229 tracks against 364 tracks).
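
Both totals can be recomputed with a small helper. The sketch below assumes the head starts at track 65 and moves towards track 0, and that the C-SCAN wrap is a full jump from track 0 back to track 199:

```python
def scan_movement(requests, head):
    """SCAN with the head moving towards track 0: sweep down to 0,
    then reverse and sweep up to the highest pending request."""
    upper = [r for r in requests if r > head]
    return (head - 0) + (max(upper) if upper else 0)

def cscan_movement(requests, head, max_track):
    """C-SCAN with the head moving towards track 0: sweep down to 0,
    jump to the last track, then continue downwards to the lowest
    pending request that lies above the starting position."""
    upper = [r for r in requests if r > head]
    return (head - 0) + max_track + ((max_track - min(upper)) if upper else 0)

requests = [65, 48, 39, 8, 99, 164, 152, 38, 124]
head, max_track = 65, 199          # 200 tracks, numbered 0..199

print("SCAN   head movement:", scan_movement(requests, head))              # 229
print("C-SCAN head movement:", cscan_movement(requests, head, max_track))  # 364
```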




5. Write short notes on the following :

(a) Multiprocessor Operating System

Answer : - A multiprocessor is a computer system in which two or more central processing units (CPUs) exist, with each CPU sharing the common main memory (RAM) as well as the peripherals. This makes simultaneous processing of programs possible.

Multiprocessor operating systems commonly follow the symmetric multiprocessing (SMP) model, in which each processor runs an identical copy of the operating system and these copies communicate with one another as peers. The alternative is asymmetric multiprocessing, in which each processor is assigned a specific task and a master processor controls the system; this scheme is referred to as a master-slave relationship. Multiprocessor systems are economically beneficial compared with multiple single-processor systems, because the processors can share peripherals, power supplies and other devices.

The main advantage of a multiprocessor system is that more work gets done in a shorter period. It also provides reliability: if one processor fails, the whole system does not stop; it only slows down.

(b) Fault Tolerance in Distributed Systems

Answer : -


(c) Characterization of Deadlock

Answer : - There are four conditions that must hold simultaneously for a deadlock to occur :

Mutual Exclusion - At least one resource must be held in a non-sharable mode; only one process can use the resource at a time.
Hold and Wait - A process is holding at least one resource and is waiting to acquire additional resources that are currently held by other processes.
No Preemption - A resource cannot be forcibly taken away from a process; it can be released only voluntarily by the process holding it.
Circular Wait - There exists a set of waiting processes {P0, P1, ..., Pn} such that P0 is waiting for a resource held by P1, P1 for a resource held by P2, and so on, with Pn waiting for a resource held by P0.
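
The circular-wait condition is easy to reproduce with two locks. In the sketch below (Python threading; task1 and task2 are hypothetical), the two lock-acquisition orders form a cycle, so if both tasks run concurrently all four conditions hold at once and the threads block forever; making every task acquire the locks in the same global order breaks the circular-wait condition and prevents the deadlock.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def task1():
    with lock_a:              # holds A (mutual exclusion) ...
        time.sleep(0.1)
        with lock_b:          # ... and waits for B (hold and wait)
            pass

def task2():
    with lock_b:              # holds B ...
        time.sleep(0.1)
        with lock_a:          # ... and waits for A  ->  cycle A -> B -> A
            pass

# Starting both tasks together would normally deadlock, because neither lock
# can be preempted and each thread waits for the lock the other one holds:
#     threading.Thread(target=task1).start()
#     threading.Thread(target=task2).start()
```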


(d) Overlays and Swapping

Answer : -

Overlays

The main problem with fixed partitioning is that the size of a process is limited by the maximum size of a partition, which means a process larger than the largest partition can never be loaded. To solve this problem, an older technique called overlays was used.

The idea behind overlays is that a running process does not need its complete program in memory at the same time; it uses only some part of it at a time. So only the part that is currently required is loaded, and once that part is finished it is unloaded (pulled back) and the next required part is loaded and run in its place.


Swapping

Swapping is a mechanism in which a process can be temporarily swapped (moved) out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Although performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for this reason swapping is also known as a technique for memory compaction.




