MCS-041 ASSIGNMENT SOLUTION (2019-20)

Q1. Assume you have the following jobs to execute with one processor :

Process   Processing Time   Arrival Time
P1        15                0
P2        10                5
P3        07                10
P4        16                12
P5        04                13

Calculate the turnaround time, waiting time, average turnaround time, average waiting time, throughput and processor utilization for the given set of processes, whose arrival times and processing times (in milliseconds) are shown in the table, using the FCFS, SJF, RR (with quantum 2) and SRTN scheduling algorithms. Also draw the corresponding Gantt charts.

First Come First Serve (FCFS)

Waiting Time = Starting Time - Arrival Time
Turnaround Time = Waiting Time + Execution Time

Process   Processing Time   Arrival Time   Starting Time    Waiting Time     Turnaround Time
P1        15                0              0                0                (0 + 15) = 15
P2        10                5              (0 + 15) = 15    (15 - 5) = 10    (10 + 10) = 20
P3        07                10             (15 + 10) = 25   (25 - 10) = 15   (15 + 7) = 22
P4        16                12             (25 + 7) = 32    (32 - 12) = 20   (20 + 16) = 36
P5        04                13             (32 + 16) = 48   (48 - 13) = 35   (35 + 4) = 39

Average Waiting Time = (0 + 10 + 15 + 20 + 35)/5 = 16
Average Turnaround Time = (15 + 20 + 22 + 36 + 39)/5 = 26.4
Throughput = 5 processes / 52 ms ≈ 0.096 processes per millisecond
Processor Utilization = (52 ms busy / 52 ms total) × 100 = 100%

The CPU is never idle here (P1 arrives at time 0 and work is always pending until time 52), so the throughput and processor utilization come out the same for all four algorithms in this question.

Gantt Chart of FCFS

Time   Running Process   Process Queue   Description
0      P1                -               Process "P1" arrives and gets processed
5      P1                P2              Process "P2" arrives and waits for its turn
10     P1                P2 P3           Process "P3" arrives and waits for its turn
12     P1                P2 P3 P4        Process "P4" arrives and waits for its turn
13     P1                P2 P3 P4 P5     Process "P5" arrives and waits for its turn
15     P2                P3 P4 P5        "P1" completes, so "P2" gets processed
25     P3                P4 P5           "P2" completes, so "P3" gets processed
32     P4                P5              "P3" completes, so "P4" gets processed
48     P5                -               "P4" completes, so "P5" gets processed
52     -                 -               "P5" completes

Shortest Job First (SJF, non-preemptive)

Process   Processing Time   Arrival Time   Starting Time    Waiting Time     Turnaround Time
P1        15                0              0                0                (0 + 15) = 15
P5        04                13             (0 + 15) = 15    (15 - 13) = 2    (2 + 4) = 6
P3        07                10             (15 + 4) = 19    (19 - 10) = 9    (9 + 7) = 16
P2        10                5              (19 + 7) = 26    (26 - 5) = 21    (21 + 10) = 31
P4        16                12             (26 + 10) = 36   (36 - 12) = 24   (24 + 16) = 40

Average Waiting Time = (0 + 2 + 9 + 21 + 24)/5 = 11.2
Average Turnaround Time = (15 + 6 + 16 + 31 + 40)/5 = 21.6

Gantt Chart of SJF

Time   Running Process   Process Queue   Description
0      P1                -               Process "P1" arrives and gets processed
5      P1                P2              Process "P2" arrives and waits for its turn
10     P1                P2 P3           Process "P3" arrives and waits for its turn
12     P1                P2 P3 P4        Process "P4" arrives and waits for its turn
13     P1                P2 P3 P4 P5     Process "P5" arrives and waits for its turn
15     P5                P2 P3 P4        "P1" completes; "P5" has the shortest job, so it gets processed
19     P3                P2 P4           "P5" completes, so "P3" gets processed
26     P2                P4              "P3" completes, so "P2" gets processed
36     P4                -               "P2" completes, so "P4" gets processed
52     -                 -               "P4" completes

Round Robin (RR)

Process   Processing Time   Arrival Time   Waiting Time   Turnaround Time
P1        15                0              27             (27 + 15) = 42
P2        10                5              25             (25 + 10) = 35
P3        07                10             24             (24 + 7) = 31
P4        16                12             24             (24 + 16) = 40
P5        04                13             15             (15 + 4) = 19

Average Waiting Time = (27 + 25 + 24 + 24 + 15)/5 = 23
Average Turnaround Time = (42 + 35 + 31 + 40 + 19)/5 = 33.4

Gantt Chart of RR

Time   Running Process   Process Queue   Description
0      P1                -               Process "P1" arrives and gets processed
2      P1                -               No process is waiting, so "P1" keeps the CPU
4      P1                -               -
5      P1                P2              Process "P2" arrives and waits for its turn
6      P2                P1              Quantum expires; "P1" is preempted and "P2" gets processed
8      P1                P2              Quantum expires; "P2" is preempted and "P1" gets processed
10     P2                P3 P1           "P3" arrives and joins the queue; quantum expires, "P1" is preempted and "P2" gets processed
12     P3                P1 P4 P2        "P4" arrives and joins the queue; quantum expires, "P2" is preempted and "P3" gets processed
13     P3                P1 P4 P2 P5     Process "P5" arrives and waits for its turn
14     P1                P4 P2 P5 P3     Quantum expires; "P3" is preempted and "P1" gets processed
16     P4                P2 P5 P3 P1     Quantum expires; "P1" is preempted and "P4" gets processed
18     P2                P5 P3 P1 P4     Quantum expires; "P4" is preempted and "P2" gets processed
20     P5                P3 P1 P4 P2     Quantum expires; "P2" is preempted and "P5" gets processed
22     P3                P1 P4 P2 P5     Quantum expires; "P5" is preempted and "P3" gets processed
24     P1                P4 P2 P5 P3     Quantum expires; "P3" is preempted and "P1" gets processed
26     P4                P2 P5 P3 P1     Quantum expires; "P1" is preempted and "P4" gets processed
28     P2                P5 P3 P1 P4     Quantum expires; "P4" is preempted and "P2" gets processed
30     P5                P3 P1 P4 P2     Quantum expires; "P2" is preempted and "P5" gets processed
32     P3                P1 P4 P2        "P5" completes, so "P3" gets processed
34     P1                P4 P2 P3        Quantum expires; "P3" is preempted and "P1" gets processed
36     P4                P2 P3 P1        Quantum expires; "P1" is preempted and "P4" gets processed
38     P2                P3 P1 P4        Quantum expires; "P4" is preempted and "P2" gets processed
40     P3                P1 P4           "P2" completes, so "P3" gets processed
41     P1                P4              "P3" completes, so "P1" gets processed
42     P4                -               "P1" completes, so "P4" gets processed
44     P4                -               No process is waiting, so "P4" keeps the CPU
46     P4                -               -
48     P4                -               -
50     P4                -               -
52     -                 -               "P4" completes

Shortest Remaining Time Next (SRTN)

For this workload, SRTN produces the same schedule as non-preemptive SJF: whenever a new process arrives, P1's remaining time never exceeds the newcomer's burst time, so no preemption ever occurs.

Process   Processing Time   Arrival Time   Waiting Time     Turnaround Time
P1        15                0              0                (0 + 15) = 15
P5        04                13             (15 - 13) = 2    (2 + 4) = 6
P3        07                10             (19 - 10) = 9    (9 + 7) = 16
P2        10                5              (26 - 5) = 21    (21 + 10) = 31
P4        16                12             (36 - 12) = 24   (24 + 16) = 40

Average Waiting Time = (0 + 2 + 9 + 21 + 24)/5 = 11.2
Average Turnaround Time = (15 + 6 + 16 + 31 + 40)/5 = 21.6

Gantt Chart of SRTN

Time   Running Process   Process Queue   Description
0      P1                -               Process "P1" arrives and gets processed
5      P1                P2              "P2" arrives needing 10 ms; "P1" also has 10 ms remaining, so no preemption
10     P1                P2 P3           "P3" arrives needing 7 ms; "P1" has only 5 ms remaining, so it keeps the CPU
12     P1                P2 P3 P4        Process "P4" arrives and waits for its turn
13     P1                P2 P3 P4 P5     "P5" arrives needing 4 ms; "P1" has only 2 ms remaining, so it keeps the CPU
15     P5                P2 P3 P4        "P1" completes; "P5" has the shortest remaining time, so it gets processed
19     P3                P2 P4           "P5" completes, so "P3" gets processed
26     P2                P4              "P3" completes, so "P2" gets processed
36     P4                -               "P2" completes, so "P4" gets processed
52     -                 -               "P4" completes

Q2. Using C programming, write a semaphore based solution to Dining Philosopher’s problem and explain the program.

Q3. (a) Discuss how fragmentation manifests itself in each of the following types of virtual storage systems.

• Segmentation
• Paging
• Combined segmentation and paging

i. Segmentation

• A Memory Management technique in which memory is divided into variable sized chunks which can be allocated to processes. Each chunk is called a Segment.

• The mapping of the logical address to the physical address is done with the help of the segment table.

• Segment table is divided into three parts : Segment Number, Base Address and Segment Limit.

• Segment Number - Used as an index into the segment table, which contains the Base Address and Limit of each segment in physical memory.
• Base Address - Starting address of the corresponding segment in main memory.
• Segment Limit - The length of the segment.

• Because segments are variable-sized, segmentation manifests external fragmentation: as segments are allocated and freed, free memory is broken into holes, each of which may be too small for a new segment even though the total free space would suffice. Compaction or placement policies (first-fit, best-fit) are needed to counter it.

ii. Paging

Paging is a memory management technique that permits the physical address space of a process to be non-contiguous.

• Physical memory is divided into fixed size blocks called frames.

• Logical memory is divided into blocks of the same size called pages.

• A frame has the same size as a page, and it is a place where a (logical) page can be (physically) placed.

• The size of a page is always a power of 2, typically between 512 bytes and 8,192 bytes per page.

• Every address generated by the CPU is divided into two parts: Page number (p) and Page offset (d)

• Page number - Used as an index into a page table which contains the base address of each page in physical memory.
• Page offset - Combined with the base address to define the physical memory address that is sent to the memory unit.

• Paging eliminates external fragmentation, since any free frame can hold any page, but it manifests internal fragmentation: a process rarely needs an exact multiple of the page size, so on average about half of its last page is wasted.

iii. Combined segmentation and paging

Pure segmentation is not very popular and is rarely used in operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In Segmented Paging, the main memory is divided into variable size segments which are further divided into fixed size pages.

• Pages are smaller than segments.

• Each Segment has a page table which means every program has multiple page tables.

• The logical address is represented as Segment Number (base address), Page Number and Page Offset.

• Segment Number - Selects the entry in the segment table, which points to that segment's page table.
• Page Number - Points to the exact page within the segment.
• Page Offset - Used as an offset within the page frame.

• Combined segmentation and paging removes external fragmentation, because a segment no longer needs contiguous memory, but it retains paging's internal fragmentation, now in the last page of every segment. It also adds table overhead, since each segment carries its own page table.

Q3. (b) Compare direct file with indexed sequential file organization

Direct File Organization

• Direct file organization is also known as random or relative file organization.
• In a direct file, all records are stored on a direct-access storage device, such as a hard disk. The records are placed throughout the file without regard to order.
• The records do not need to be in sequence because they are updated directly and rewritten back to the same location.
• This organization is useful for immediate access to large amounts of information, such as large databases.
• Record addresses are usually computed with a hashing algorithm, so this organization is also called hashed file organization.

Advantages of Direct file organization

• Direct files suit online transaction processing (OLTP) systems, such as an online railway reservation system.
• Sorting of the records is not required.
• The desired record is accessed immediately.
• Several files can be updated quickly.
• It gives better control over record allocation.

Disadvantages of Direct file organization

• A direct file does not provide a built-in backup facility.
• It is expensive.

Indexed Sequential File Organization

• Indexed sequential file organization combines features of both sequential and direct file organization.
• In an indexed sequential file, records are stored in sequence by a primary key on a direct-access device such as a magnetic disk, and an index allows individual records to be located directly.
• The file can have multiple keys, which may be alphanumeric; the key on which the records are ordered is called the primary key.
• The data can be accessed either sequentially or randomly using the index. The index is stored in a file and read into memory when the file is opened.

Advantages of Indexed sequential file organization

• Both sequential and random access to records are possible.
• Records are accessed very quickly if the index table is properly organized.
• Records can be inserted in the middle of the file.
• It provides quick access for both sequential and direct processing.
• It reduces the degree of sequential search.

Disadvantages of Indexed sequential file organization

• Indexed sequential files require unique keys and periodic reorganization.
• Searching the index adds time to every data access or retrieval.
• It requires more storage space.
• It is expensive because it requires special software.

Q4. (a) Explain the take-grant model for operating system security with an example. Also explain the mechanisms of security in the WIN 2000 operating system.

Answer : - The Take-Grant System is a model that helps in determining the protection rights (e.g., read or write) in a computer system. The Take-Grant system was introduced by Jones, Lipton, and Snyder to show that it is possible to decide on the safety of a computer system even when the number of subjects and objects is very large, or unbounded. This can be accomplished in linear time based on the initial size of the system.

The take-grant system models a protection system which consists of a set of states and state transitions. A directed graph shows the connections between the nodes of this system. These nodes are representative of the subjects or objects of the model. The directed edges between the nodes represent the rights that one node has over the linked node.

There are four such rules :

• The take rule allows a subject to take the rights that another node holds.

• The grant rule allows a subject to grant its own rights to another node.

• The create rule allows a subject to create new nodes (subjects or objects).

• The remove rule allows a subject to remove rights it has over another node.

Example - Suppose subject S has the take right over subject A, and A holds the read right over object O. By the take rule, S can take the read right from A and thereby gain read access to O. If instead A has the grant right over S, A can use the grant rule to pass its read right over O to S directly. Safety questions ("can S ever obtain right r over O?") are answered by checking whether some sequence of rule applications in the graph produces such an edge.

Security mechanisms in Windows 2000 - Windows 2000 authenticates users at logon (Kerberos v5 is the default network authentication protocol) and attaches to every process an access token containing the user's security identifier (SID) and group SIDs. Every object carries a security descriptor with a discretionary access control list (DACL); the Security Reference Monitor checks each access against it. Additional mechanisms include auditing through system ACLs, the Encrypting File System (EFS), and centralized policy management through Active Directory.

Q4. (b) Explain Bell and La-Padula Model for security and protection. Why is security a critical issue in a distributed OS environment ?

Answer : - The Bell-LaPadula model of protection systems deals with the control of information flow. It is a linear, non-discretionary model. This model of protection consists of the following components :

• A set of subjects, a set of objects, and an access control matrix.
• Several ordered security levels. Each subject has a clearance and each object has a classification that attaches it to a security level. Each subject also has a current clearance level, which may not exceed its assigned clearance level; thus a subject can only operate at or below its assigned clearance level.

The set of access rights given to a subject are the following :

• Read (The subject can only read the object but cannot write to it.)
• Append (The subject can only write to the object but cannot read it.)
• Execute (The subject can execute the object but can neither read nor write.)
• Read-Write (The subject has both read and write permissions to the object.)

Control Attribute - This is an attribute given to the subject that creates an object. Due to this, the creator of an object can pass any of the above four access rights of that object to any subject. However, it cannot pass the control attribute itself. The creator of an object is also known as the controller of that object.

The following restrictions are imposed by the Bell-LaPadula Model :

• Reading Down - A subject has read access only to objects whose security level is at or below its current clearance level. This prevents a subject from getting access to information available at security levels higher than its current clearance level.
• Writing Up - A subject has append access only to objects whose security level is at or above its current clearance level. This prevents a subject from passing information down to levels lower than its current level.

Security is a critical issue in a distributed OS environment because users and resources are spread across machines connected by a network: messages can be intercepted, modified, or replayed in transit; there is no single trusted kernel mediating every access; and each node must authenticate remote users and services before granting access. A compromise of any one node or communication link can therefore expose resources throughout the entire system.

Q5. Write and explain an algorithm used for ordering of events in a distributed environment. Implement the algorithm with an example and explain it.

Q6. Discuss in detail the Process management, Memory management, I/O management, File management and Security and Protection for the following Operating Systems :

a) WINDOWS 10

Process Management - A process is a program in execution. For example, when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and binary code are both programs. When we actually run the binary code, it becomes a process.

A process will need certain resources — such as CPU time, memory, files, and I/O devices — to accomplish its task. These resources are allocated to the process either when it is created or while it is executing. Every process has an ID, a number that identifies it. A process may have more than one thread. A thread is an object that identifies which part of the program is running. Each thread also has an ID (a number that identifies it).

A thread is a basic unit of CPU utilization, which consists of a program counter, a stack, and a set of registers. It is used whenever a process has multiple tasks to perform independently of the others. The benefits of multithreading are faster responsiveness and effective usage of resources.

On a machine with one processor, more than one thread can be allocated, but only one thread can run at a time. Each thread only runs a short time and then the execution is passed on to the next thread, giving the user the illusion that more than one thing is happening at once. On a machine with more than one processor, true multi-threading can take place. If an application has multiple threads, the threads can run simultaneously on different processors.

Memory Management - To understand this, we will review some fundamental concepts of memory.

Virtual Memory - Your computer has two types of memory, Random Access Memory (RAM) and Virtual Memory. All programs use RAM, but when there isn't enough RAM for the program you're trying to run, Windows temporarily moves information that would normally be stored in RAM to a file on your hard disk called a Paging File. The amount of information temporarily stored in a paging file is also referred to as virtual memory. Using virtual memory, in other words, moving information to and from the paging file, frees up enough RAM for programs to run correctly.

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.

The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.

In this model, both virtual and physical memory are divided up into handy sized chunks called pages. These pages are all the same size. Each of these pages is given a unique number; the Page Frame Number (PFN). For every instruction in a program, for example to load a register with the contents of a location in memory, the CPU performs a mapping from a virtual address to a physical one. Also, if the instruction itself references memory then a translation is performed for that reference.

The address translation between virtual and physical memory is done by the CPU using page tables, which contain all the information the CPU needs. Typically there is a page table for every process in the system. For example, Process X's virtual PFN 0 might be mapped to physical PFN 3, while Process Y's virtual PFN 2 is mapped to physical PFN 5. Each entry in the theoretical page table contains the following information :

• The virtual PFN,
• The physical PFN that it maps to,
• Access control information for that page.

Memory Management Unit (MMU) - A memory management unit (MMU) is a computer hardware unit that performs the translation of virtual memory addresses to physical addresses. The MMU is usually located within the computer's Central Processing Unit (CPU), but sometimes operates as a separate integrated circuit (IC).

Translation Lookaside Buffer (TLB) - The TLB is a cache used by modern CPUs to speed up virtual-to-physical address translation; the MMU keeps its most recent translations in it. When the MMU needs to translate an address, it first checks the TLB. If the translation is found, it is a TLB hit; otherwise the MMU walks the page table, copies the entry into the TLB, and then completes the translation.

Swapping - Swapping is a mechanism in which a process can be temporarily swapped out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Although swapping usually costs performance, it helps in running multiple large processes in parallel, which is why swapping is also known as a technique for memory compaction.

b) ANDROID Version 9.0 (PIE)

Process Management - In most cases, every Android application runs in its own Linux process. This process is created for the application when some of its code needs to be run, and will remain running until it is no longer needed and the system needs to reclaim its memory for use by other applications.

An unusual and fundamental feature of Android is that an application process's lifetime is not directly controlled by the application itself. Instead, it is determined by the system through a combination of the parts of the application that the system knows are running, how important these things are to the user, and how much overall memory is available in the system.

To determine which processes should be killed when low on memory, Android places each process into an "importance hierarchy" based on the components running in them and the state of those components. These process types are (in order of importance) :

1. Foreground Process - A foreground process is one that is required for what the user is currently doing. A process is considered to be in the foreground if any of the following conditions hold :

• It is running an Activity at the top of the screen that the user is interacting with (its onResume( ) method has been called).

• It has a Service that is currently executing code in one of its callbacks (Service.onCreate( ), Service.onStart( ), or Service.onDestroy( )).

There will only ever be a few such processes in the system, and these will only be killed as a last resort if memory is so low that not even these processes can continue to run. Generally, at this point, the device has reached a memory paging state, so this action is required in order to keep the user interface responsive.

2. Visible Process - A visible process is doing work that the user is currently aware of, so killing it would have a noticeable negative impact on the user experience. A process is considered visible in the following conditions :

• It is running an Activity that is visible to the user on-screen but not in the foreground (its onPause( ) method has been called). This may occur, for example, if the foreground Activity is displayed as a dialog that allows the previous Activity to be seen behind it.

• It has a Service that is running as a foreground service, through Service.startForeground( ) (which is asking the system to treat the service as something the user is aware of, or essentially visible to them).

• It is hosting a service that the system is using for a particular feature that the user is aware of, such as a live wallpaper, input method service, etc.

The number of these processes running in the system is less bounded than foreground processes, but still relatively controlled. These processes are considered extremely important and will not be killed unless doing so is required to keep all foreground processes running.

3. Service Process - A service process is one holding a Service that has been started with the startService( ) method. Though these processes are not directly visible to the user, they are generally doing things that the user cares about (such as background network data upload or download), so the system will always keep such processes running unless there is not enough memory to retain all foreground and visible processes.

Services that have been running for a long time (such as 30 minutes or more) may be demoted in importance to allow their process to drop to the cached LRU list. This helps avoid situations where very long running services with memory leaks or other problems consume so much RAM that they prevent the system from making effective use of cached processes.

4. Cached Process - A cached process is one that is not currently needed, so the system is free to kill it as desired when memory is needed elsewhere. In a normally behaving system, these are the only processes involved in memory management: a well-running system will have multiple cached processes always available (for more efficient switching between applications) and regularly kill the oldest ones as needed. Only in very critical (and undesirable) situations will the system get to a point where all cached processes are killed and it must start killing service processes.

A process's priority may also be increased based on other dependencies a process has to it. For example, if process A has bound to a Service with the Context.BIND_AUTO_CREATE flag or is using a ContentProvider in process B, then process B's classification will always be at least as important as process A's.

Memory Management - The Android Runtime (ART) and Dalvik virtual machine use paging and memory-mapping to manage memory. This means that any memory an app modifies—whether by allocating new objects or touching memory-mapped pages — remains resident in RAM and cannot be paged out. The only way to release memory from an app is to release object references that the app holds, making the memory available to the garbage collector. That is with one exception: any files memory-mapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.

• Garbage collection - A managed memory environment, like the ART or Dalvik virtual machine, keeps track of each memory allocation. Once it determines that a piece of memory is no longer being used by the program, it frees it back to the heap, without any intervention from the programmer. The mechanism for reclaiming unused memory within a managed memory environment is known as garbage collection. Garbage collection has two goals: find data objects in a program that cannot be accessed in the future; and reclaim the resources used by those objects.

Android’s memory heap is a generational one, meaning that there are different buckets of allocations that it tracks, based on the expected life and size of an object being allocated.

• Share Memory - In order to fit everything it needs in RAM, Android tries to share RAM pages across processes. It can do so in the following ways :

• Each app process is forked from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes). To start a new app process, the system forks the Zygote process then loads and runs the app's code in the new process. This approach allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.

• Most static data is memory-mapped into a process. This technique allows data to be shared between processes, and also allows it to be paged out when needed.

• In many places, Android shares the same dynamic RAM across processes using explicitly allocated shared memory regions (either with ashmem or gralloc).

• Allocate and Reclaim App Memory - The Dalvik heap is constrained to a single virtual memory range for each app process. This defines the logical heap size, which can grow as it needs to but only up to a limit that the system defines for each app.

The logical size of the heap is not the same as the amount of physical memory used by the heap. When inspecting your app's heap, Android computes a value called the Proportional Set Size (PSS), which accounts for both dirty and clean pages that are shared with other processes—but only in an amount that's proportional to how many apps share that RAM. This (PSS) total is what the system considers to be your physical memory footprint.

The Dalvik heap does not compact the logical size of the heap, meaning that Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. However, the system can still reduce physical memory used by the heap. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise. So, paired allocations and deallocations of large chunks should result in reclaiming all (or nearly all) the physical memory used. However, reclaiming memory from small allocations can be much less efficient because the page used for a small allocation may still be shared with something else that has not yet been freed.

• Restrict App Memory - To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it can receive an OutOfMemoryError.

• Switch Apps - When users switch between apps, Android keeps apps that are not foreground—that is, not visible to the user or running a foreground service like music playback— in a least-recently used (LRU) cache.

File Management - Most Android users use their phones just for calls, SMS, browsing and basic apps, but from a development perspective we should know about Android's internal structure. Android uses several partitions (like boot, system, recovery, data etc.) to organize files and folders on the device, just like Windows OS.

There are mainly 6 partitions in Android phones, tablets and other Android devices. Note that there might be some other partitions available; it differs from model to model, but logically the 6 partitions below can be found on any Android device.

• /boot - It includes the Android kernel and the ramdisk. The device will not boot without this partition.

• /system - As the name suggests, this partition contains the entire Android OS. This includes the Android GUI and all the system applications that come pre-installed on the device.

• /recovery - This is specially designed for backup. The recovery partition can be considered as an alternative boot partition, that lets the device boot into a recovery console for performing advanced recovery and maintenance operations on it.

• /data - Also called the userdata partition. This partition contains the user's data like contacts, SMS, settings and all the Android applications that the user has installed.

• /cache - This is the partition where Android stores frequently accessed data and app components. Wiping the cache doesn't affect your personal data but simply gets rid of the existing data there, which gets automatically rebuilt as you continue using the device.

• /misc - This partition contains miscellaneous system settings in the form of on/off switches. These settings may include CID (Carrier or Region ID), USB configuration and certain hardware settings etc. This is an important partition; if it is corrupt or missing, several of the device's features will not function normally.

Below are the SD card file system partitions.

• sdcard - This is not a partition on the internal memory of the device but rather the SD card. In terms of usage, this is your storage space to use as you see fit, to store your media, documents, ROMs etc. on it.

• sd-ext - This is not a standard Android partition, but has become popular in the custom ROM scene. It is basically an additional partition on your SD card that acts as the /data partition. It is especially useful on devices with little internal memory allotted to the /data partition. Thus, users who want to install more programs than the internal memory allows can make this partition and use it for installing their apps.
