MCS-041 ASSIGNMENT SOLUTION (2019-20)


Q1. Assume you have the following jobs to execute with one processor :

Process | Processing Time | Arrival Time
P1      | 15              | 0
P2      | 10              | 5
P3      | 07              | 10
P4      | 16              | 12
P5      | 04              | 13

Calculate the turnaround time, waiting time, average turnaround time, average waiting time, throughput and processor utilization for the above set of processes, which arrive at the arrival times shown in the table, with processing times given in milliseconds, using the FCFS, SJF, RR (with quantum 2) and SRTN scheduling algorithms. Also draw their corresponding Gantt charts.

Answer : -

First Come First Serve (FCFS)

Waiting Time = Starting Time - Arrival Time
Turnaround Time = Waiting Time + Execution Time

Process | Processing Time | Arrival Time | Starting Time | Waiting Time | Turnaround Time
P1      | 15              | 0            | 0             | 0            | (0 + 15) = 15
P2      | 10              | 5            | (0 + 15) = 15 | (15 - 5) = 10 | (10 + 10) = 20
P3      | 07              | 10           | (15 + 10) = 25 | (25 - 10) = 15 | (15 + 7) = 22
P4      | 16              | 12           | (25 + 7) = 32 | (32 - 12) = 20 | (20 + 16) = 36
P5      | 04              | 13           | (32 + 16) = 48 | (48 - 13) = 35 | (35 + 4) = 39

Average Waiting Time = (0 + 10 + 15 + 20 + 35)/5 = 16
Average Turnaround Time = (15 + 20 + 22 + 36 + 39)/5 = 26.4


Gantt Chart of FCFS

Execute Time | Running Process | Process Queue  | Description
0            | P1              | -              | Process "P1" arrives and gets processed
5            | P1              | P2             | Process "P2" arrives and waits for its turn
10           | P1              | P2, P3         | Process "P3" arrives and waits for its turn
12           | P1              | P2, P3, P4     | Process "P4" arrives and waits for its turn
13           | P1              | P2, P3, P4, P5 | Process "P5" arrives and waits for its turn
15           | P2              | P3, P4, P5     | "P1" gets completed, so "P2" gets processed
25           | P3              | P4, P5         | "P2" gets completed, so "P3" gets processed
32           | P4              | P5             | "P3" gets completed, so "P4" gets processed
48           | P5              | -              | "P4" gets completed, so "P5" gets processed
52           | -               | -              | "P5" gets completed
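The FCFS figures above, along with the throughput and processor-utilization values the question also asks for, can be checked with a short simulation. A minimal sketch; the `procs` table is the data from the question:

```python
# Process data from the question: name -> (processing time, arrival time).
procs = {"P1": (15, 0), "P2": (10, 5), "P3": (7, 10), "P4": (16, 12), "P5": (4, 13)}

def fcfs(procs):
    """Run processes to completion in arrival order; return per-process
    (waiting time, turnaround time) and the finish time of the schedule."""
    stats, t = {}, 0
    for p in sorted(procs, key=lambda p: procs[p][1]):  # earliest arrival first
        burst, arrival = procs[p]
        start = max(t, arrival)        # CPU may be idle until the job arrives
        wait = start - arrival
        stats[p] = (wait, wait + burst)
        t = start + burst
    return stats, t

stats, finish = fcfs(procs)
avg_wait = sum(w for w, _ in stats.values()) / len(stats)
avg_tat = sum(tt for _, tt in stats.values()) / len(stats)
print(avg_wait, avg_tat)                           # 16.0 26.4
print(len(procs) / finish)                         # throughput: 5 jobs / 52 ms
print(sum(b for b, _ in procs.values()) / finish)  # utilization: 52 busy ms / 52 ms = 1.0
```

Since the CPU is never idle (P1 arrives at t = 0 and work remains until t = 52), processor utilization is 52/52 = 100% and throughput is 5/52 ≈ 0.096 processes per millisecond. Every schedule in this question finishes at t = 52 with no idle time, so those two values hold for SJF, RR and SRTN as well.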


Shortest Job First (SJF)

Process | Processing Time | Arrival Time | Starting Time | Waiting Time | Turnaround Time
P1      | 15              | 0            | 0             | 0             | (0 + 15) = 15
P5      | 04              | 13           | (0 + 15) = 15 | (15 - 13) = 2 | (2 + 4) = 6
P3      | 07              | 10           | (15 + 4) = 19 | (19 - 10) = 9 | (9 + 7) = 16
P2      | 10              | 5            | (19 + 7) = 26 | (26 - 5) = 21 | (21 + 10) = 31
P4      | 16              | 12           | (26 + 10) = 36 | (36 - 12) = 24 | (24 + 16) = 40

Average Waiting Time = (0 + 2 + 9 + 21 + 24)/5 = 11.2
Average Turnaround Time = (15 + 6 + 16 + 31 + 40)/5 = 21.6


Gantt Chart of SJF

Execute Time | Running Process | Process Queue  | Description
0            | P1              | -              | Process "P1" arrives and gets processed
5            | P1              | P2             | Process "P2" arrives and waits for its turn
10           | P1              | P2, P3         | Process "P3" arrives and waits for its turn
12           | P1              | P2, P3, P4     | Process "P4" arrives and waits for its turn
13           | P1              | P2, P3, P4, P5 | Process "P5" arrives and waits for its turn
15           | P5              | P2, P3, P4     | "P1" gets completed, so "P5" (the shortest job) gets processed
19           | P3              | P2, P4         | "P5" gets completed, so "P3" gets processed
26           | P2              | P4             | "P3" gets completed, so "P2" gets processed
36           | P4              | -              | "P2" gets completed, so "P4" gets processed
52           | -               | -              | "P4" gets completed
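The SJF schedule above can be reproduced with a minimal non-preemptive simulation (same process data as the question):

```python
# Process data from the question: name -> (processing time, arrival time).
procs = {"P1": (15, 0), "P2": (10, 5), "P3": (7, 10), "P4": (16, 12), "P5": (4, 13)}

def sjf(procs):
    """Non-preemptive SJF: whenever the CPU is free, dispatch the arrived
    process with the shortest processing time and run it to completion."""
    remaining, stats, t = dict(procs), {}, 0
    while remaining:
        ready = [p for p in remaining if remaining[p][1] <= t]
        if not ready:                                  # idle until next arrival
            t = min(a for _, a in remaining.values())
            continue
        p = min(ready, key=lambda p: remaining[p][0])  # shortest burst wins
        burst, arrival = remaining.pop(p)
        wait = t - arrival
        stats[p] = (wait, wait + burst)                # (waiting, turnaround)
        t += burst
    return stats

stats = sjf(procs)
print(sum(w for w, _ in stats.values()) / 5)    # 11.2
print(sum(tt for _, tt in stats.values()) / 5)  # 21.6
```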


Round Robin (RR)

Process | Processing Time | Arrival Time | Waiting Time | Turnaround Time
P1      | 15              | 0            | 27           | (27 + 15) = 42
P2      | 10              | 5            | 25           | (25 + 10) = 35
P3      | 07              | 10           | 24           | (24 + 7) = 31
P4      | 16              | 12           | 24           | (24 + 16) = 40
P5      | 04              | 13           | 15           | (15 + 4) = 19

Average Waiting Time = (27 + 25 + 24 + 24 + 15)/5 = 23
Average Turnaround Time = (42 + 35 + 31 + 40 + 19)/5 = 33.4


Gantt Chart of RR

Execute Time | Running Process | Process Queue  | Description
0            | P1              | -              | Process "P1" arrives and gets processed
2            | P1              | -              | As there is no process in the queue, "P1" gets processed again
4            | P1              | -              | As there is no process in the queue, "P1" gets processed again
5            | P1              | P2             | Process "P2" arrives and waits for its turn
6            | P2              | P1             | Quantum expires, so "P1" is forced out of the CPU and "P2" gets processed
8            | P1              | P2             | Quantum expires, so "P2" is forced out of the CPU and "P1" gets processed
10           | P2              | P3, P1         | Process "P3" arrives and is added to the queue; quantum expires, so "P1" is forced out and "P2" gets processed
12           | P3              | P1, P4, P2     | Process "P4" arrives and is added to the queue; quantum expires, so "P2" is forced out and "P3" gets processed
13           | P3              | P1, P4, P2, P5 | Process "P5" arrives and waits for its turn
14           | P1              | P4, P2, P5, P3 | Quantum expires, so "P3" is forced out and "P1" gets processed
16           | P4              | P2, P5, P3, P1 | Quantum expires, so "P1" is forced out and "P4" gets processed
18           | P2              | P5, P3, P1, P4 | Quantum expires, so "P4" is forced out and "P2" gets processed
20           | P5              | P3, P1, P4, P2 | Quantum expires, so "P2" is forced out and "P5" gets processed
22           | P3              | P1, P4, P2, P5 | Quantum expires, so "P5" is forced out and "P3" gets processed
24           | P1              | P4, P2, P5, P3 | Quantum expires, so "P3" is forced out and "P1" gets processed
26           | P4              | P2, P5, P3, P1 | Quantum expires, so "P1" is forced out and "P4" gets processed
28           | P2              | P5, P3, P1, P4 | Quantum expires, so "P4" is forced out and "P2" gets processed
30           | P5              | P3, P1, P4, P2 | Quantum expires, so "P2" is forced out and "P5" gets processed
32           | P3              | P1, P4, P2     | "P5" gets completed, so "P3" gets processed
34           | P1              | P4, P2, P3     | Quantum expires, so "P3" is forced out and "P1" gets processed
36           | P4              | P2, P3, P1     | Quantum expires, so "P1" is forced out and "P4" gets processed
38           | P2              | P3, P1, P4     | Quantum expires, so "P4" is forced out and "P2" gets processed
40           | P3              | P1, P4         | "P2" gets completed, so "P3" gets processed
41           | P1              | P4             | "P3" gets completed, so "P1" gets processed
42           | P4              | -              | "P1" gets completed, so "P4" gets processed
44           | P4              | -              | As there is no process in the queue, "P4" gets processed again
46           | P4              | -
48           | P4              | -
50           | P4              | -
52           | -               | -              | "P4" gets completed
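The long chart above can be verified mechanically. A minimal round-robin sketch; it encodes the chart's tie-breaking convention that a process arriving at the same instant a quantum expires is queued ahead of the preempted process:

```python
from collections import deque

# Process data from the question: name -> (processing time, arrival time).
procs = {"P1": (15, 0), "P2": (10, 5), "P3": (7, 10), "P4": (16, 12), "P5": (4, 13)}

def round_robin(procs, quantum=2):
    """Round robin with the chart's convention that new arrivals are queued
    ahead of a simultaneously preempted process. Returns name -> completion time."""
    arrivals = sorted(procs, key=lambda p: procs[p][1])
    remaining = {p: procs[p][0] for p in procs}
    queue, done, t, i = deque(), {}, 0, 0
    while len(done) < len(procs):
        while i < len(arrivals) and procs[arrivals[i]][1] <= t:
            queue.append(arrivals[i]); i += 1          # admit new arrivals
        if not queue:                                  # idle until next arrival
            t = procs[arrivals[i]][1]
            continue
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        while i < len(arrivals) and procs[arrivals[i]][1] <= t:
            queue.append(arrivals[i]); i += 1          # arrivals during the slice
        if remaining[p]:
            queue.append(p)                            # preempted: back of the queue
        else:
            done[p] = t
    return done

done = round_robin(procs)
waits = {p: done[p] - procs[p][1] - procs[p][0] for p in procs}
print(sorted(done.items()))        # completion times per process
print(sum(waits.values()) / 5, sum(done[p] - procs[p][1] for p in procs) / 5)
```

Running this reproduces the completion times in the chart (P5 at 32, P2 at 40, P3 at 41, P1 at 42, P4 at 52) and the averages above.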


Shortest Remaining Time Next (SRTN)

Process | Processing Time | Arrival Time | Waiting Time  | Turnaround Time
P1      | 15              | 0            | 0             | (0 + 15) = 15
P5      | 04              | 13           | (15 - 13) = 2 | (2 + 4) = 6
P3      | 07              | 10           | (19 - 10) = 9 | (9 + 7) = 16
P2      | 10              | 5            | (26 - 5) = 21 | (21 + 10) = 31
P4      | 16              | 12           | (36 - 12) = 24 | (24 + 16) = 40

Average Waiting Time = (0 + 2 + 9 + 21 + 24)/5 = 11.2
Average Turnaround Time = (15 + 6 + 16 + 31 + 40)/5 = 21.6


Gantt Chart of SRTN

Execute Time | Running Process | Process Queue  | Description
0            | P1              | -              | Process "P1" arrives and gets processed
5            | P1              | P2             | Process "P2" arrives and waits for its turn (its 10 ms burst is not shorter than P1's 10 ms remaining)
10           | P1              | P2, P3         | Process "P3" arrives and waits for its turn (7 ms > P1's 5 ms remaining)
12           | P1              | P2, P3, P4     | Process "P4" arrives and waits for its turn (16 ms > P1's 3 ms remaining)
13           | P1              | P2, P3, P4, P5 | Process "P5" arrives and waits for its turn (4 ms > P1's 2 ms remaining)
15           | P5              | P2, P3, P4     | "P1" gets completed, so "P5" (shortest remaining time) gets processed
19           | P3              | P2, P4         | "P5" gets completed, so "P3" gets processed
26           | P2              | P4             | "P3" gets completed, so "P2" gets processed
36           | P4              | -              | "P2" gets completed, so "P4" gets processed
52           | -               | -              | "P4" gets completed
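With this data SRTN never actually preempts anyone: at every arrival the running process already has the least remaining time, so the schedule comes out identical to SJF. A millisecond-by-millisecond sketch confirms it:

```python
# Process data from the question: name -> (processing time, arrival time).
procs = {"P1": (15, 0), "P2": (10, 5), "P3": (7, 10), "P4": (16, 12), "P5": (4, 13)}

def srtn(procs):
    """Preemptive SJF: every millisecond, run the arrived process with the
    least remaining time (ties favour the earlier arrival, so a newcomer
    never preempts a process with equal remaining time)."""
    remaining = {p: b for p, (b, a) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][1] <= t]
        if not ready:                                  # idle until next arrival
            t += 1
            continue
        p = min(ready, key=lambda p: (remaining[p], procs[p][1]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

finish = srtn(procs)
tat = {p: finish[p] - procs[p][1] for p in procs}
wait = {p: tat[p] - procs[p][0] for p in procs}
print(sum(wait.values()) / 5, sum(tat.values()) / 5)   # 11.2 21.6, same as SJF
```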




Q2. Using C programming, write a semaphore based solution to Dining Philosopher’s problem and explain the program.

Answer : -


Q3. (a) Discuss how fragmentation manifests itself in each of the following types of virtual storage system.

Answer : -

i. Segmentation

Segments are variable-sized and are placed into whatever free holes are large enough, so fragmentation here is external: as segments are allocated and freed, main memory splinters into holes too small to hold any incoming segment, even though their total size may be large.


ii. Paging

Paging is a memory management technique that permits the physical address space of a process to be non-contiguous. Because memory is allocated in fixed-size frames, fragmentation here is internal: the last page of a process is rarely full, so on average about half a frame per process is wasted.


iii. Combined segmentation and paging

Pure segmentation is not very popular and is rarely used in modern operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In segmented paging, the main memory is divided into variable-size segments, which are further divided into fixed-size pages. Since segments are now allocated page by page, external fragmentation disappears, but internal fragmentation returns in the last page of each segment.




Q3. (b) Compare direct file with indexed sequential file organization

Answer : -

Direct File Organization

Advantages of Direct File Organization

Disadvantages of Direct File Organization


Indexed Sequential File Organization

Advantages of Indexed sequential file organization

Disadvantages of Indexed sequential file organization




Q4. (a) Explain take-grant model for operating system security with an example. Also explain the mechanisms of security in WIN 2000 operating system ?

Answer : - The Take-Grant System is a model that helps in determining the protection rights (e.g., read or write) in a computer system. The Take-Grant system was introduced by Jones, Lipton, and Snyder to show that it is possible to decide on the safety of a computer system even when the number of subjects and objects is very large, or unbounded. This can be accomplished in linear time based on the initial size of the system.

The take-grant system models a protection system which consists of a set of states and state transitions. A directed graph shows the connections between the nodes of this system. These nodes are representative of the subjects or objects of the model. The directed edges between the nodes represent the rights that one node has over the linked node.

There are a total of four such rules : take, grant, create, and remove.


Example




Q4. (b) Explain Bell and La-Padula Model for security and protection. Why is security a critical issue in a distributed OS environment ?

Answer : - The Bell-LaPadula Model of protection systems deals with the control of information flow. It is a linear non-discretionary model. This model of protection consists of the following components :

The set of access rights given to a subject are the following : read-only, append, execute, and read-write.

Control Attribute - This is an attribute given to the subject that creates an object. Due to this, the creator of an object can pass any of the above four access rights of that object to any subject. However, it cannot pass the control attribute itself. The creator of an object is also known as the controller of that object.

The following restrictions are imposed by the Bell-LaPadula Model : a subject may read only objects at its own or a lower security level (the simple security property, "no read up"), and a subject may write only to objects at its own or a higher level (the ★-property, "no write down").




Q5. Write and explain an algorithm used for ordering of events in a distributed environment. Implement the algorithm with an example and explain?

Answer : -
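The source leaves this answer blank. The classic algorithm for ordering events in a distributed environment is Lamport's logical clock scheme: every process keeps a counter, increments it before each local event or send, and on receiving a message sets it to max(local clock, message timestamp) + 1, so every causally later event gets a larger timestamp. A minimal sketch (the `Process` class and the two-process scenario are illustrative, not from the source):

```python
class Process:
    """One node in the distributed system, keeping a Lamport logical clock."""
    def __init__(self, name):
        self.name, self.clock = name, 0

    def local_event(self):
        self.clock += 1            # rule 1: tick before every event
        return self.clock

    def send(self):
        self.clock += 1            # sending is itself an event
        return self.clock          # the timestamp travels with the message

    def receive(self, msg_ts):
        # rule 2: max(local, received) + 1 preserves causal order
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

p, q = Process("P"), Process("Q")
p.local_event()          # P's clock: 1
ts = p.send()            # P's clock: 2; message carries timestamp 2
q.local_event()          # Q's clock: 1 (concurrent with P's events)
q.receive(ts)            # Q's clock: max(1, 2) + 1 = 3
print(p.clock, q.clock)  # 2 3
```

Because the receive at Q is causally after the send at P, its timestamp (3) is greater than the send's (2); ties between concurrent events are conventionally broken by process ID to obtain a total order.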


Q6. Discuss in detail the Process management, Memory management, I/O management, File management and Security and Protection for the following Operating Systems :

a) WINDOWS 10

Answer : -

Process Management - A process is a program in execution. For example, when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and binary code are both programs. When we actually run the binary code, it becomes a process.

A process will need certain resources — such as CPU time, memory, files, and I/O devices — to accomplish its task. These resources are allocated to the process either when it is created or while it is executing. Every process has an ID, a number that identifies it. A process may have more than one thread. A thread is an object that identifies which part of the program is running. Each thread also has an ID (a number that identifies it).

A thread is a basic unit of CPU utilization, which consists of a program counter, a stack, and a set of registers. It is used whenever a process has multiple tasks to perform independently of the others. The benefits of multithreading are faster responsiveness and effective usage of resources.

On a machine with one processor, more than one thread can be allocated, but only one thread can run at a time. Each thread only runs a short time and then the execution is passed on to the next thread, giving the user the illusion that more than one thing is happening at once. On a machine with more than one processor, true multi-threading can take place. If an application has multiple threads, the threads can run simultaneously on different processors.
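The thread behaviour described above can be sketched with generic Python threads (illustrative only; this is not the Windows threading API):

```python
import threading

def worker(name, results):
    # each thread performs its own task independently of the others
    results[name] = sum(range(1_000))

results = {}
threads = [threading.Thread(target=worker, args=(f"T{i}", results)) for i in range(3)]
for t in threads:
    t.start()          # the scheduler interleaves (or parallelises) the threads
for t in threads:
    t.join()           # wait for every thread to finish
print(sorted(results))  # ['T0', 'T1', 'T2']
```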


Memory Management - To understand this, we will first review some fundamental concepts of memory.

Virtual Memory - Your computer has two types of memory: Random Access Memory (RAM) and virtual memory. All programs use RAM, but when there isn't enough RAM for the program you're trying to run, Windows temporarily moves information that would normally be stored in RAM to a file on your hard disk called a paging file. The amount of information temporarily stored in a paging file is also referred to as virtual memory. Using virtual memory (that is, moving information to and from the paging file) frees up enough RAM for programs to run correctly.

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.

The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.

Address Translation - Mapping a virtual address to a physical address is known as the address translation mechanism. A virtual address does not represent the actual physical location of an object in memory; instead, the system maintains a page table for each process, an internal data structure used to translate virtual addresses into their corresponding physical addresses. Each time a thread references an address, the system translates the virtual address to a physical address.

In this model, both virtual and physical memory are divided up into handy-sized chunks called pages. These pages are all the same size. Each page is given a unique number: the page frame number (PFN). For every instruction in a program, for example to load a register with the contents of a location in memory, the CPU performs a mapping from a virtual address to a physical one. Also, if the instruction itself references memory, then a translation is performed for that reference.

The address translation between virtual and physical memory is done by the CPU using page tables, which contain all the information that the CPU needs. Typically there is a page table for every process in the system. The figure (not reproduced here) shows a simple mapping between virtual addresses and physical addresses using page tables for Process X and Process Y: Process X's virtual PFN 0 is mapped into memory at physical PFN 3, and Process Y's virtual PFN 2 is mapped into physical PFN 5. Each entry in the theoretical page table contains the following information : a valid flag, the physical page frame number that the entry describes, and access control information (whether the page can be read, written or executed).
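A toy translation routine makes the mechanism concrete. The 4 KB page size and the mappings are assumptions for illustration; the Process X entry mirrors the example in the text (virtual PFN 0 maps to physical PFN 3):

```python
# Hypothetical 4 KB pages; page_table_x maps virtual PFN -> physical PFN.
PAGE_SIZE = 4096

page_table_x = {0: 3, 1: 0, 2: 1}   # assumed mappings for "Process X"

def translate(vaddr, page_table):
    vpfn, offset = divmod(vaddr, PAGE_SIZE)   # split address into page + offset
    if vpfn not in page_table:
        raise LookupError(f"page fault: virtual page {vpfn} not mapped")
    return page_table[vpfn] * PAGE_SIZE + offset

print(hex(translate(0x0123, page_table_x)))   # 0x3123: virtual PFN 0 -> physical PFN 3
```

The offset within the page is carried over unchanged; only the page frame number part of the address is rewritten, which is exactly what the hardware page-table walk does.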

Memory Management Unit (MMU) - A memory management unit (MMU) is a computer hardware unit that performs the translation of virtual memory addresses to physical addresses. The MMU is usually located within the computer's Central Processing Unit (CPU), but sometimes operates as a separate integrated circuit (IC).

Translation Lookaside Buffer (TLB) - The TLB is a cache used by modern CPUs to improve the performance of virtual-to-physical address translation. The MMU keeps the most recent translations in it. When the MMU needs to translate an address, it first checks the TLB. If the translation is found there, it is called a TLB hit; if not, the MMU walks the page table to find the entry, copies it into the TLB, and then uses it.
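The hit/miss behaviour can be sketched with a small LRU cache in front of a page-table dictionary (the capacity and mappings are hypothetical):

```python
from collections import OrderedDict

class TLB:
    """A tiny LRU translation cache in front of a page table."""
    def __init__(self, page_table, capacity=4):
        self.page_table = page_table
        self.cache = OrderedDict()          # virtual PFN -> physical PFN
        self.capacity = capacity
        self.hits = self.misses = 0

    def lookup(self, vpfn):
        if vpfn in self.cache:
            self.hits += 1
            self.cache.move_to_end(vpfn)    # LRU: mark as most recently used
            return self.cache[vpfn]
        self.misses += 1                    # TLB miss: walk the page table
        pfn = self.page_table[vpfn]
        self.cache[vpfn] = pfn
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return pfn

tlb = TLB({0: 3, 1: 0, 2: 1})
for vpfn in (0, 1, 0, 2):
    tlb.lookup(vpfn)
print(tlb.hits, tlb.misses)   # 1 3: only the second access to page 0 hits
```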

Swapping - Swapping is a mechanism in which a process can be temporarily moved out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Though swapping usually costs performance, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.




b) ANDROID Version 9.0 (PIE)

Answer : -

Process Management - In most cases, every Android application runs in its own Linux process. This process is created for the application when some of its code needs to be run, and will remain running until it is no longer needed and the system needs to reclaim its memory for use by other applications.

An unusual and fundamental feature of Android is that an application process's lifetime is not directly controlled by the application itself. Instead, it is determined by the system through a combination of the parts of the application that the system knows are running, how important these things are to the user, and how much overall memory is available in the system.

To determine which processes should be killed when low on memory, Android places each process into an "importance hierarchy" based on the components running in them and the state of those components. These process types are (in order of importance) :

1. Foreground Process - A foreground process is one that is required for what the user is currently doing. A process is considered to be in the foreground if any of the following conditions hold :

There will only ever be a few such processes in the system, and these will only be killed as a last resort if memory is so low that not even these processes can continue to run. Generally, at this point, the device has reached a memory paging state, so this action is required in order to keep the user interface responsive.

2. Visible Process - A visible process is doing work that the user is currently aware of, so killing it would have a noticeable negative impact on the user experience. A process is considered visible in the following conditions :

The number of these processes running in the system is less bounded than foreground processes, but still relatively controlled. These processes are considered extremely important and will not be killed unless doing so is required to keep all foreground processes running.

3. Service Process - A service process is one holding a Service that has been started with the startService( ) method. Though these processes are not directly visible to the user, they are generally doing things that the user cares about (such as background network data upload or download), so the system will always keep such processes running unless there is not enough memory to retain all foreground and visible processes.

Services that have been running for a long time (such as 30 minutes or more) may be demoted in importance to allow their process to drop to the cached LRU list. This helps avoid situations where very long running services with memory leaks or other problems consume so much RAM that they prevent the system from making effective use of cached processes.

4. Cached Process - A cached process is one that is not currently needed, so the system is free to kill it as desired when memory is needed elsewhere. In a normally behaving system, these are the only processes involved in memory management: a well running system will have multiple cached processes always available (for more efficient switching between applications) and regularly kill the oldest ones as needed. Only in very critical (and undesirable) situations will the system get to a point where all cached processes are killed and it must start killing service processes.

A process's priority may also be increased based on other dependencies a process has to it. For example, if process A has bound to a Service with the Context.BIND_AUTO_CREATE flag or is using a ContentProvider in process B, then process B's classification will always be at least as important as process A's.


Memory Management - The Android Runtime (ART) and Dalvik virtual machine use paging and memory-mapping to manage memory. This means that any memory an app modifies—whether by allocating new objects or touching memory-mapped pages — remains resident in RAM and cannot be paged out. The only way to release memory from an app is to release object references that the app holds, making the memory available to the garbage collector. That is with one exception: any files memory-mapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.


File Management - Most Android users use their phones just for calls, SMS, browsing and basic apps, but from the development perspective we should know about Android's internal structure. Android uses several partitions (like boot, system, recovery, data etc.) to organize files and folders on the device, just like Windows.

There are mainly six partitions in Android phones, tablets and other Android devices. Note that there might be some other partitions available; this differs from model to model, but logically the six partitions below can be found in any Android device.

Below are the SD card file system partitions.


