MCS-041 June 2018

1. (a) What is a Critical Section ? Give a monitor solution to the Dining philosophers' problem and explain.

Answer : - A Critical Section is the part of a program that accesses shared resources. The resource may be anything shared in a computer, such as a memory location, a data structure, a file, or an I/O device.

Because the critical section must not be executed by more than one process at the same time, the operating system faces difficulty in deciding when to allow and disallow processes from entering it.

The critical section problem is the problem of designing a set of protocols which ensure that a race condition among the processes can never arise.

In order to synchronize the cooperative processes, our main task is to solve the critical section problem. We need to provide a solution in such a way that the following conditions can be satisfied.

Requirements of Synchronization mechanisms

• Primary

1. Mutual Exclusion - Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside the critical section, then no other process may enter the critical section.

2. Progress - Progress means that if a process does not need to execute in the critical section, it must not stop other processes from getting into the critical section.

• Secondary

1. Bounded Waiting - There must be a bound on how long a process waits to get into the critical section. No process should wait endlessly to enter the critical section.

2. Architectural Neutrality - Our mechanism must be architecturally neutral. It means that if our solution works on one architecture, it should run on the other ones as well.
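The question also asks for a monitor solution to the dining philosophers problem, which the text above does not show. A minimal monitor-style sketch in Python (an illustrative version of the classic pickup/putdown/test structure, using one lock plus a per-philosopher condition variable to imitate a monitor; the names are ours, not from the source) might look like:

```python
import threading

THINKING, HUNGRY, EATING = 0, 1, 2

class DiningMonitor:
    """Monitor-style solution: one lock guards the shared state array,
    and each philosopher has a condition variable on that same lock."""
    def __init__(self, n=5):
        self.n = n
        self.state = [THINKING] * n
        self.lock = threading.Lock()
        self.cond = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        # Philosopher i may eat only if hungry and neither neighbour eats.
        left, right = (i - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == HUNGRY and
                self.state[left] != EATING and self.state[right] != EATING):
            self.state[i] = EATING
            self.cond[i].notify()

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:   # wait until both forks are free
                self.cond[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i - 1) % self.n)     # a blocked neighbour may now eat
            self._test((i + 1) % self.n)

m = DiningMonitor(5)
m.pickup(0)    # neighbours are thinking, so philosopher 0 eats at once
m.pickup(2)
m.putdown(0)
```

Because both forks are acquired atomically inside the monitor, no philosopher can hold one fork while waiting for the other, which rules out the circular-wait deadlock.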

1. (b) Describe Linked and Indexed allocation for disk space allocation.

1. (c) Consider the page reference string :
1, 2, 3, 4, 2, 5, 3, 4, 2, 6, 7, 8, 7, 9, 7, 8, 2, 5, 4 and 9
Calculate how many page faults would occur for LRU and FIFO page replacement algorithms, when the number of frames is 3. Assume all frames are initially empty.
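The source does not work this question out, but the counts can be checked with a short simulation (an illustrative sketch, not part of the original answer):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, faults = [], 0
    for p in refs:
        if p not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)                 # evict the oldest-loaded page
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU (OrderedDict tracks recency)."""
    recent, faults = OrderedDict(), 0
    for p in refs:
        if p in recent:
            recent.move_to_end(p)            # mark as most recently used
        else:
            faults += 1
            if len(recent) == frames:
                recent.popitem(last=False)   # evict least recently used
            recent[p] = True
    return faults

refs = [1, 2, 3, 4, 2, 5, 3, 4, 2, 6, 7, 8, 7, 9, 7, 8, 2, 5, 4, 9]
```

With 3 initially empty frames this simulation gives 14 faults for FIFO and 16 for LRU.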

1. (d) What is RPC ? Describe the steps involved in the execution of a RPC.

Answer : - A Remote Procedure Call (RPC) is an interprocess communication technique that is used for client-server based applications. It allows a client to invoke a subroutine or function that executes on a remote server as if it were a local call.

A client has a request message that the RPC translates and sends to the server. This request may be a procedure or a function call to a remote server. When the server receives the request, it sends the required response back to the client.

Implementation of RPC involves the following five elements :

1. Client
2. Client Stub
3. RPC-Runtime
4. Server Stub
5. Server

Client

• A client is a user process which initiates an RPC.
• The client makes a normal call that will invoke a corresponding procedure in the Client Stub.

Client Stub

• On receipt of a call request from the client, it packs specifications of the target procedure and arguments into a message and asks the local RPC-Runtime to send it to the Server Stub.
• On receipt of the result of procedure execution, it unpacks the result and passes it to the client.

RPC-Runtime

• Transmission of messages between client and the server machine across the network is handled by RPC-Runtime.
• It performs Retransmission, Acknowledgement, Routing and Encryption.
• The RPC-Runtime on the server machine receives the message containing the result of procedure execution from the Server Stub and transmits it to the client machine; the RPC-Runtime on the client machine receives that message and passes it to the Client Stub.
• The RPC-Runtime also receives call request messages from the client machine and passes them to the Server Stub.

Server stub

• On receipt of a call request message from the local RPC-Runtime, it unpacks and makes a normal call to invoke the required procedure in the server.
• On receipt of the result of procedure execution from the server, it packs the result into a message and then asks the local RPC-Runtime to send it to the client stub.

Server

• When a call request is received from the server stub, the server executes the required procedure and returns the result to the server stub.
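The five elements above can be sketched in miniature. In this toy version (an assumption-laden illustration, not a real RPC library) `pickle` stands in for marshalling and a plain function call stands in for the network transport handled by RPC-Runtime:

```python
import pickle

# The "server" side: a procedure and a registry the server knows about.
def add(a, b):
    return a + b

PROCEDURES = {"add": add}

def server_stub(request):
    """Unpacks the request and makes a normal local call in the server."""
    msg = pickle.loads(request)
    result = PROCEDURES[msg["proc"]](*msg["args"])
    return pickle.dumps(result)      # pack the result for the trip back

def rpc_runtime_transport(request):
    # A real RPC-Runtime would send these bytes across the network and
    # handle retransmission, acknowledgement, routing and encryption;
    # here it is just a direct call to the server stub.
    return server_stub(request)

def client_stub(proc_name, *args):
    """Packs the call specification and arguments into a message,
    hands it to the runtime, and unpacks the result for the client."""
    request = pickle.dumps({"proc": proc_name, "args": args})
    reply = rpc_runtime_transport(request)
    return pickle.loads(reply)
```

From the client's point of view, `client_stub("add", 2, 3)` looks like an ordinary function call, which is exactly the illusion RPC is designed to create.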

2. (a) Define a Process. Explain various states of a process. How does a process differ from a thread ?

Answer : - A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four sections - stack, heap, text and data.

Stack - The process stack contains temporary data such as method/function parameters, return addresses and local variables.

Heap - This is memory that is dynamically allocated to a process during its run time.

Text - This includes the compiled program code, together with the current activity represented by the value of the Program Counter and the contents of the processor's registers.

Data - This section contains the global and static variables.

Process Life Cycle

When a process executes, it passes through different states. The various states of a process are as follows -

New - This is the initial state when a process is first started/created.

Ready - After creation, the process enters the ready state, i.e. it is loaded into main memory. The process here is ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue of ready processes.

Running - Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

Waiting - The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available. Once the Input/Output operation is completed, the process goes back to the ready state.

Terminated - Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

• A program in execution is often referred to as a process. A thread is a subset (part) of a process.
• A process may consist of multiple threads. A thread is the smallest part of the process that can execute concurrently with other parts (threads) of the same process.
• A process is sometimes referred to as a task. A thread is often referred to as a lightweight process.
• A process has its own address space. A thread uses the process's address space and shares it with the other threads of that process.
• New threads are easily created. However, the creation of a new process requires duplication of the parent process.
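The shared-address-space point can be seen in a few lines. In this sketch (illustrative names, Python's `threading` module) four threads update one global variable that lives in the process's address space, so they must synchronize with a lock:

```python
import threading

counter = 0                      # lives in the process's shared address space
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # without the lock, updates could interleave
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Separate processes would each get their own copy of `counter`; threads see the same one because they share the process's data section.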

2. (b) With the help of a neat diagram, explain segmented paging and paged segmentation.

Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed size pages.

• Pages are smaller than segments.

• Each Segment has a page table which means every program has multiple page tables.

• The logical address is represented as Segment Number (base address), Page Number and Page Offset.

• Segment Number - It points to the appropriate entry in the segment table.
• Page Number - It Points to the exact page within the segment.
• Page Offset - Used as an offset within the page frame.

Advantages

• It reduces memory usage.
• Page table size is limited by the segment size.
• The segment table has only one entry corresponding to each actual segment.
• There is no external fragmentation.
• It simplifies memory allocation.

Disadvantages

• Internal fragmentation will be there.
• The complexity level is much higher as compared to plain paging.
• Page tables need to be stored contiguously in memory.
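The two-level lookup described above (segment number selects a page table, page number selects a frame, offset stays the same) can be sketched as follows. The tables and page size here are hypothetical values chosen for illustration:

```python
PAGE_SIZE = 1024                # assume 1 KB pages

segment_tables = {              # segment number -> that segment's page table
    0: {0: 5, 1: 9},            # segment 0 has two pages, in frames 5 and 9
    1: {0: 2},                  # segment 1 has one page, in frame 2
}

def translate(segment, page, offset):
    """Resolve a (segment, page, offset) logical address to a physical one."""
    if offset >= PAGE_SIZE:
        raise ValueError("offset larger than a page")
    frame = segment_tables[segment][page]   # segment table, then page table
    return frame * PAGE_SIZE + offset
```

Note that each segment owns its page table, so the page table can never be larger than its segment, which is the "page table size is limited by the segment size" point above.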

3. (a) Consider the following set of processes with the length of CPU burst time given in milliseconds :

Process    Burst Time (msec)    Arrival Time (msec)    Priority
P1         24                   0                      4
P2         7                    3                      3
P3         6                    5                      2
P4         10                   10                     1

(i) Draw Gantt chart for FCFS, SJF and RR (quantum = 4) scheduling algorithms.

(ii) Calculate the average waiting time and turnaround time for each of the above mentioned algorithms.
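The Gantt charts are not reproduced here, but the FCFS part can be checked with a short sketch. Reading the table as P1 (burst 24, arrival 0), P2 (7, 3), P3 (6, 5), P4 (10, 10) — an assumption about how the flattened digits group — FCFS waiting and turnaround times come out as:

```python
def fcfs(processes):
    """processes: list of (name, burst, arrival), given in arrival order.
    Returns {name: (waiting_time, turnaround_time)}."""
    time, result = 0, {}
    for name, burst, arrival in processes:
        start = max(time, arrival)       # CPU may sit idle until arrival
        finish = start + burst
        result[name] = (start - arrival, finish - arrival)
        time = finish
    return result

procs = [("P1", 24, 0), ("P2", 7, 3), ("P3", 6, 5), ("P4", 10, 10)]
times = fcfs(procs)
avg_wait = sum(w for w, _ in times.values()) / len(times)
avg_tat = sum(t for _, t in times.values()) / len(times)
```

Under these assumed values, the FCFS average waiting time is 18.5 msec and the average turnaround time is 30.25 msec; SJF and RR would need their own simulations.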

3. (b) Explain Access Matrix and Mandatory Access Control Security Models.

4. (a) Explain RAID with different levels. Give the features of each level.

Answer : - RAID is a technology that is used to increase the performance and/or reliability of data storage. RAID stands for Redundant Array of Independent Disks. A RAID system consists of two or more drives working in parallel. There are different RAID levels, each optimized for a specific situation.

• RAID 0 – striping
• RAID 1 – mirroring
• RAID 5 – striping with parity
• RAID 6 – striping with double parity
• RAID 10 – combining mirroring and striping

RAID 0 – striping

In a RAID 0 system data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This performance can be enhanced further by using multiple controllers, ideally one controller per disk.

Advantages

• RAID 0 offers great performance, both in read and write operations. There is no overhead caused by parity controls.
• All storage capacity is used; there is no overhead.

Disadvantages

• RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.

RAID 1 – mirroring

Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror drive (or set of mirror drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.

Advantages

• RAID 1 offers excellent read speed and a write speed that is comparable to that of a single drive.
• If a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.

Disadvantages

• The main disadvantage is that the effective storage capacity is only half of the total drive capacity, because all data get written twice.
• Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed drive can only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.

RAID 5 – striping with parity

RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16 drives. Data blocks are striped across the drives and on one drive a parity checksum of all the block data is written. The parity data are not written to a fixed drive, they are spread across all drives. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data.

Advantages

• Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).
• If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.

Disadvantages

• Drive failures have an effect on throughput, although this is still acceptable.
• This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, data are lost forever.
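The parity mechanism behind RAID 5 is a bytewise XOR across the data blocks of a stripe: XOR-ing the parity with the surviving blocks reproduces the lost one. A minimal sketch (illustrative, ignoring striping layout and rotation of the parity drive):

```python
from functools import reduce

def parity_block(blocks):
    """Bytewise XOR of all blocks in a stripe; this is the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Reconstruct the lost block: XOR of parity with the survivors."""
    return parity_block(surviving_blocks + [parity])

d1, d2, d3 = b"abcd", b"wxyz", b"1234"   # one stripe across three data drives
p = parity_block([d1, d2, d3])           # stored on the fourth drive
```

This also shows why only a single drive failure is survivable: with two unknowns, one XOR equation is not enough, which is exactly the gap RAID 6's second parity closes.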

RAID 6 – striping with double parity

RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.

Advantages

• Like with RAID 5, read data transactions are very fast.
• If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.

Disadvantages

• Write data transactions are slower than in RAID 5 due to the additional parity data that have to be calculated.
• Drive failures have an effect on throughput, although this is still acceptable.
• This is complex technology. Rebuilding an array in which one drive failed can take a long time.

RAID 10 – combining mirroring and striping

It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.

Advantages

• If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is very fast, since all that is needed is copying the data from the surviving mirror to a new drive.

Disadvantages

• Half of the storage capacity goes to mirroring, so compared to large RAID 5 or RAID 6 arrays, this is an expensive way to have redundancy.

4. (b) Explain the concept of Thrashing. How can we prevent it ?

Answer : - Thrashing is a situation that occurs when a process spends more time on paging or swapping activity than on its actual execution. During thrashing, the CPU is so busy with swapping that it cannot respond to user programs as much as is required.

Causes of Thrashing

Initially, when CPU utilization is low, the process scheduling mechanism loads many processes into memory at the same time so that the degree of multiprogramming can be increased. In this situation there are more processes in memory than there are available frames, so only a limited number of frames can be allocated to each process.

When a higher-priority process arrives in memory and no free frame is available, a page belonging to another process is moved out to secondary storage, and the freed frame is allocated to the higher-priority process.

In other words, as memory fills up, processes start to spend a lot of time waiting for their required pages to be swapped in, and CPU utilization becomes low again because most of the processes are waiting for pages.

How to prevent Thrashing

In order to prevent thrashing, we first need to know how many frames a process really needs at any given time. The working-set model is a technique used to reduce the causes of thrashing.

The working-set technique starts by looking at how many frames a process is actually using. It is based on a locality model of process execution: as a process executes, it moves from locality to locality, where a locality is a set of pages that are actively used together.

4. (c) Discuss any SCAN and C-SCAN disk scheduling algorithms. List the advantages of SCAN over C-SCAN algorithm.

SCAN Disk Scheduling

It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular direction until the end, servicing all the requests coming in its path, and then it turns back and moves in the reverse direction, servicing the requests coming in that path.

Example
Consider the following disk request sequence for a disk with 200 tracks, with the head initially at track 65 and moving toward track 0 -
65, 48, 39, 08, 99, 164, 152, 38, 124

Number of tracks moved by the head
= [(65 - 48) + (48 - 39) + (39 - 38) + (38 - 8) + (8 - 0) + (99 - 0) + (124 - 99) + (152 - 124) + (164 - 152)]
= [17 + 9 + 1 + 30 + 8 + 99 + 25 + 28 + 12]
= 229

C-SCAN Disk Scheduling

In the C-SCAN algorithm, the arm of the disk moves in a particular direction, servicing requests until it reaches the last cylinder; it then jumps to the last cylinder of the opposite end without servicing any requests on the way, and starts moving in the original direction again, servicing the remaining requests.

Example
Consider the following disk request sequence for a disk with 200 tracks, with the head initially at track 65 and moving toward track 0 -
65, 48, 39, 08, 99, 164, 152, 38, 124

Number of tracks moved by the head
= [(65 - 48) + (48 - 39) + (39 - 38) + (38 - 8) + (8 - 0) + (200 - 0) + (200 - 164) + (164 - 152) + (152 - 124) + (124 - 99)]
= [17 + 9 + 1 + 30 + 8 + 200 + 36 + 12 + 28 + 25]
= 366
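Both head-movement totals can be reproduced with a short sketch. This follows the conventions of the worked examples above (head starts at 65 moving toward 0, the upward SCAN sweep stops at the farthest request, and the C-SCAN wrap is counted as a 200-track seek to "track 200"), which are assumptions read off from the arithmetic rather than universal definitions:

```python
def scan_movement(start, requests):
    """SCAN, head moving toward 0 first: sweep down to track 0, then
    reverse and service the remaining requests up to the farthest one."""
    upper = [r for r in requests if r > start]
    return start + (max(upper) if upper else 0)

def cscan_movement(start, requests, size=200):
    """C-SCAN under the example's counting: sweep down to 0, one jump of
    `size` tracks to the far end, then service downward to the nearest
    remaining request."""
    upper = [r for r in requests if r > start]
    return start + size + (size - min(upper) if upper else 0)

requests = [65, 48, 39, 8, 99, 164, 152, 38, 124]
```

As for SCAN's advantage over C-SCAN here: SCAN services requests on the return sweep instead of making an unproductive full-stroke jump, which is why its total movement (229) is lower than C-SCAN's (366) for this sequence.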

5. Write short notes on the following :

(a) Multiprocessor Operating System

Answer : - Multiprocessor is a computer system in which two or more central processing units (CPUs) exists, each CPU sharing the common main memory (RAM) as well as the peripherals. Due to this, the simultaneous processing of programs is possible.

In a multiprocessing system, the symmetric multiprocessing model is commonly used: each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. In the asymmetric model, by contrast, each processor is assigned a specific task, and a master processor controls the system; this scheme is referred to as a master-slave relationship. A multiprocessor system is economically beneficial compared to multiple single-processor systems, because the processors can share peripherals, power supplies, and other devices.

The main advantage of a multiprocessor system is that more work gets done in a shorter period. It also provides reliability in case one processor fails: the whole system does not stop, it only slows down.

(b) Fault Tolerance in Distributed Systems

Answer : - There are four conditions that must be present simultaneously for a deadlock to occur :

• Mutual Exclusion - At least one resource must be held in a non-shareable mode, that is, only one process can use the resource at a time. If another process wants to access that resource, it must wait until the resource has been released.

• Hold and Wait - A process is currently holding at least one resource and requesting additional resources which are being held by other processes.

• No Preemption - A resource cannot be taken from a process unless the process releases the resource.

• Circular Wait - A set of processes are waiting for each other in circular form. In general, there is a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, . . . . . , Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

(d) Overlays and Swapping

Overlays

The main problem with fixed partitioning is that the size of a process is limited by the maximum size of the partition, which means a process can never span more than one partition. In order to solve this problem, an early solution called Overlays was used.

The idea behind overlays is that a running process does not use the complete program at the same time; it uses only some part of it. The overlay concept says: load only the part you currently require, and once that part is done, unload it, bring in the next part you require, and run it.

Swapping

Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or move) to secondary storage (disk) and make that memory available to other processes. At some later time, the system swaps back the process from the secondary storage to main memory.

Though performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
