• Internal Code :
  • Subject Code : ICT 201
  • University : Kings Own Institute
  • Subject Name : IT Computer Science

Contents

Question 1.

First Come First Serve.

Round Robin.

Shortest Process Next (SPN).

Shortest Remaining Time.

Shortest Process Next.

Question 2.

Question 3.

References.

Question 1

First Come First Serve

In FCFS, the process that arrives first is executed first. It is non-preemptive in nature.

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| A       | 0            | 3          | 3               | 3               | 0            |
| B       | 3            | 8          | 11              | 8               | 0            |
| C       | 4            | 3          | 14              | 10              | 7            |
| D       | 7            | 14         | 28              | 21              | 7            |
| E       | 12           | 2          | 30              | 18              | 16           |
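The FCFS figures above can be reproduced with a short simulation. Below is a minimal Python sketch (function and variable names are illustrative); processes are `(name, arrival, burst)` tuples and the result maps each name to (completion, turnaround, waiting):

```python
def fcfs(processes):
    """First Come First Serve: run jobs in arrival order, non-preemptively."""
    time = 0
    results = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst       # wait for arrival, then run to completion
        turnaround = time - arrival
        results[name] = (time, turnaround, turnaround - burst)
    return results

jobs = [("A", 0, 3), ("B", 3, 8), ("C", 4, 3), ("D", 7, 14), ("E", 12, 2)]
# fcfs(jobs)["C"] -> (14, 10, 7): completion, turnaround, waiting
```

Note that turnaround time is always completion minus arrival, and waiting time is turnaround minus burst, which is how each row of the table can be checked.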

Round Robin

In Round Robin, each process runs for at most one time quantum before being preempted and moved to the back of the ready queue; it is preemptive in nature. The completion times below are consistent with a time quantum of 3 units.

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| A       | 0            | 3          | 3               | 3               | 0            |
| B       | 3            | 8          | 19              | 16              | 8            |
| C       | 4            | 3          | 9               | 5               | 2            |
| D       | 7            | 14         | 30              | 23              | 9            |
| E       | 12           | 2          | 17              | 5               | 3            |
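A minimal Round Robin sketch in Python that reproduces the table (names are illustrative; the assumed tie-break is that jobs arriving during a slice enter the ready queue ahead of the preempted job):

```python
from collections import deque

def round_robin(processes, q):
    """Round Robin with quantum q; processes are (name, arrival, burst) tuples."""
    arrivals = sorted(processes, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in processes}
    queue, results, time, i = deque(), {}, 0, 0
    while len(results) < len(processes):
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i]); i += 1      # admit newly arrived jobs
        if not queue:
            time = arrivals[i][1]                  # CPU idle until next arrival
            continue
        name, arrival, burst = queue.popleft()
        run = min(q, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i]); i += 1      # these go ahead of the preempted job
        if remaining[name] == 0:
            turnaround = time - arrival
            results[name] = (time, turnaround, turnaround - burst)
        else:
            queue.append((name, arrival, burst))   # back of the ready queue
    return results

jobs = [("A", 0, 3), ("B", 3, 8), ("C", 4, 3), ("D", 7, 14), ("E", 12, 2)]
```

With `round_robin(jobs, 3)`, process E (arriving at 12) is queued ahead of the preempted B, which is why E completes at 17 and B at 19.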

Shortest Process Next (SPN)

In this algorithm, whenever the CPU becomes free, the shortest job that has already arrived is executed first and runs to completion. It is non-preemptive in nature.

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| A       | 0            | 3          | 3               | 3               | 0            |
| B       | 3            | 8          | 11              | 8               | 0            |
| C       | 4            | 3          | 14              | 10              | 7            |
| D       | 7            | 14         | 30              | 23              | 9            |
| E       | 12           | 2          | 16              | 4               | 2            |
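A minimal non-preemptive SPN sketch in Python (names are illustrative): whenever the CPU is free, the arrived job with the smallest burst runs to completion.

```python
def spn(processes):
    """Shortest Process Next; processes are (name, arrival, burst) tuples."""
    pending = sorted(processes, key=lambda p: p[1])
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = pending[0][1]              # idle until the next arrival
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst first
        pending.remove(job)
        name, arrival, burst = job
        time += burst
        turnaround = time - arrival
        results[name] = (time, turnaround, turnaround - burst)
    return results

jobs = [("A", 0, 3), ("B", 3, 8), ("C", 4, 3), ("D", 7, 14), ("E", 12, 2)]
```

At time 3 only B has arrived, so B runs despite being long; at time 14, E (burst 2) is chosen ahead of D (burst 14).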

Shortest Remaining Time

At the end of each time unit, the scheduler checks which ready process has the shortest remaining service time, and that process runs next. It is the preemptive version of SPN.

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| A       | 0            | 3          | 3               | 3               | 0            |
| B       | 3            | 8          | 14              | 11              | 3            |
| C       | 4            | 3          | 7               | 3               | 0            |
| D       | 7            | 14         | 30              | 23              | 9            |
| E       | 12           | 2          | 16              | 4               | 2            |
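A minimal preemptive SRT sketch in Python (names are illustrative; ties in remaining time are assumed to favor the currently listed order, which keeps B running when E arrives at time 12):

```python
def srt(processes):
    """Shortest Remaining Time: one-time-unit steps, preemptive SPN."""
    info = {name: (arrival, burst) for name, arrival, burst in processes}
    remaining = {name: burst for name, _, burst in processes}
    time, results = 0, {}
    while len(results) < len(processes):
        ready = [n for n in remaining
                 if remaining[n] > 0 and info[n][0] <= time]
        if not ready:
            time += 1                         # CPU idle this time unit
            continue
        n = min(ready, key=lambda n: remaining[n])
        remaining[n] -= 1                     # run n for one time unit
        time += 1
        if remaining[n] == 0:
            arrival, burst = info[n]
            turnaround = time - arrival
            results[n] = (time, turnaround, turnaround - burst)
    return results

jobs = [("A", 0, 3), ("B", 3, 8), ("C", 4, 3), ("D", 7, 14), ("E", 12, 2)]
```

C (remaining 3) preempts B (remaining 7) at time 4, which is why C finishes at 7 and B not until 14.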

Shortest Process Next

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| A       | 0            | 3          | 3               | 3               | 0            |
| B       | 3            | 8          | 14              | 11              | 3            |
| C       | 4            | 3          | 7               | 3               | 0            |
| D       | 7            | 14         | 30              | 23              | 9            |
| E       | 12           | 2          | 16              | 4               | 2            |

Question 2

Virtual memory is space on the hard drive, managed by the operating system, that is used to supplement physical memory when the RAM limit has been reached. This is transparent to application software: address translation is handled in hardware by the memory management unit, and pages moved out to disk hold intermediate results that are brought back into RAM when they are needed again. As a guideline, virtual memory should be larger than physical RAM, but not more than about three times the size of physical memory. A system with a given amount of physical RAM and virtual memory will not necessarily offer the same performance as one with slightly different memory sizes (McKinley, 2016).

If physical memory is small relative to the virtual memory in use, the system can spend a large proportion of its CPU time swapping data back and forth. Virtual memory of at least 1.5 times the size of physical memory is commonly recommended for good performance. Distributed operating systems use memory more efficiently than a traditional OS. In particular, this is because distributed shared memory is essentially a resource-management component of the OS that implements the shared-memory model in a distributed system without physically shared memory (Kumar et al., 2016). In essence, it offers a single virtual address space across the distributed system. It hides data movement and offers a better abstraction for sharing data, which means that a programmer does not need to worry about transferring memory from one machine to another. Additionally, it allows sophisticated structures to be passed by reference, simplifying algorithm development.

Memory management in Windows Server is implemented in a microkernel, as opposed to Linux, where it is implemented in a monolithic kernel (Seo, Kim, & Kim, 2017). This negatively affects the execution time of Windows Server system calls. Windows Server likewise uses per-process working sets as its replacement policy, in contrast to Linux, which uses a global replacement policy. The Win32 API enables Windows Server applications to manage virtual memory more directly than Linux, which does not expose virtual-memory allocation to the same degree (Seo, Kim, & Kim, 2017). This is an advantage in that the developer has more control, but a disadvantage in that the program is given a lot of responsibility. All things considered, Windows Server handles memory management well because it can share memory through memory-mapped files, so the contents of physical files can easily be shared between processes.

Static partitioning is more likely to suffer from internal fragmentation, because more memory is allocated to each process than it needs, and the desired partition size may not be divisible by the minimum unit of allocation. Dynamic partitioning is more likely to suffer from external fragmentation, because it gives each process exactly as much memory as it needs from a larger chunk of free memory, leaving behind fragments that are often too small to use. Dynamically partitioned machines may use compaction to reduce external fragmentation.

Swapping is the act of keeping an entire process in main memory for a period of time and then writing it back out to disk, and vice versa. It is used when the system does not have enough main memory to hold all of the currently active processes. Assuming that an application is one program and therefore a single process, swapping will not allow an application (process) requiring 16M of memory to run on a machine with 8M of RAM (main memory), because the entire process is too large to fit into main memory.

TLBs that support ASIDs are known as tagged TLBs. In TLBs that do not support ASIDs, when you switch to another address space you need to flush the TLB (invalidate all entries), because you cannot tell whether a TLB entry belongs to the current address space: the TLB is shared between all processes, but each page-table entry is valid for only one process. This takes a long time. In tagged TLBs, you can simply change the current address-space ID and instead check, on each page reference, whether the address space of a TLB entry matches the current address-space ID. This reduces the time it takes to perform a context switch. In addition, it allows multiple processes to keep hot TLB entries, preventing expensive TLB misses after a context switch.
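The difference can be illustrated with a toy tagged-TLB model in Python; the class and method names are illustrative and not a real MMU interface:

```python
class TaggedTLB:
    """Toy tagged TLB: entries are keyed by (ASID, virtual page number),
    so a context switch only changes current_asid instead of flushing."""
    def __init__(self):
        self.entries = {}          # (asid, vpn) -> frame number
        self.current_asid = 0

    def switch(self, asid):
        self.current_asid = asid   # context switch: no flush required

    def insert(self, vpn, frame):
        self.entries[(self.current_asid, vpn)] = frame

    def lookup(self, vpn):
        # None models a TLB miss; a hit requires the ASID tag to match
        return self.entries.get((self.current_asid, vpn))
```

After a switch to another ASID and back, the first process's entries are still hot, which is exactly the benefit described above.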

A two-level page table maps virtual page numbers to physical frame numbers. It differs from a simple page-table array in that not every entry has memory allocated for it: a two-level page table only allocates a second-level table if that table contains a resident virtual page. As a result, two-level page tables use considerably less memory than a simple page table on average. However, they require two memory references to look up a mapping instead of one, so they save memory at the expense of performance.

With a 32-bit (4 GB) virtual address space and 4 KB pages/frames, the top-level page table (an array of 1024 pointers to second-level tables) is indexed by the most significant bits of the virtual address, VADDR[31:22]. The selected second-level table entry (a page-table entry holding a frame number and a valid bit) is indexed by the next bits, VADDR[21:12], and the low bits, VADDR[11:0], give the byte offset within the frame.
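A toy Python model of this two-level lookup, assuming the 32-bit/4 KB layout above (the data structures and the page-fault handling are illustrative):

```python
def split_vaddr(vaddr):
    """Split a 32-bit virtual address for 4 KB pages."""
    top    = (vaddr >> 22) & 0x3FF   # bits 31..22: top-level index (0..1023)
    second = (vaddr >> 12) & 0x3FF   # bits 21..12: second-level index (0..1023)
    offset = vaddr & 0xFFF           # bits 11..0: offset within the page
    return top, second, offset

def translate(page_dir, vaddr):
    """page_dir: 1024 slots, each None or a 1024-entry second-level table
    whose entries are None or a frame number."""
    top, second, offset = split_vaddr(vaddr)
    second_table = page_dir[top]     # None means no second-level table allocated
    if second_table is None or second_table[second] is None:
        raise KeyError("page fault")
    frame = second_table[second]
    return (frame << 12) | offset    # physical address
```

The memory saving shows up in `page_dir`: unused 4 MB regions of the address space cost one `None` slot instead of 1024 page-table entries.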

An inverted page table is a list of pages indexed by frame number, with one entry per physical frame. The virtual page number is hashed to look up the page table, and the matching entry (found by following the hash chain when there are collisions) identifies the frame number. Its size is proportional to the size of physical memory rather than the virtual address space. This is especially important on 64-bit machines, where the virtual address space is 2^64 bytes (16 exabytes); a multi-level page table covering such a space would need a very large number of levels of indirection.
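A toy inverted page table in Python with hash chaining (the class shape is illustrative; real implementations index fixed-size frame entries rather than Python lists):

```python
class InvertedPageTable:
    """One hash bucket per physical frame; entries are keyed by
    (process id, virtual page number), so the table size tracks
    physical memory, not the virtual address space."""
    def __init__(self, num_frames):
        self.buckets = [[] for _ in range(num_frames)]   # hash chains

    def map(self, pid, vpn, frame):
        b = hash((pid, vpn)) % len(self.buckets)
        self.buckets[b].append((pid, vpn, frame))

    def lookup(self, pid, vpn):
        b = hash((pid, vpn)) % len(self.buckets)
        for p, v, frame in self.buckets[b]:              # walk the chain
            if (p, v) == (pid, vpn):
                return frame
        raise KeyError("page fault")
```
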

If the page size is small, the working-set size (measured in memory used, not in the number of pages) will be smaller, because each page reflects current memory usage more accurately; there is less extraneous data inside each page. If the page size is large, the working set grows, because large pages pull in extraneous data alongside the data actually in use.

Thrashing occurs when the total working-set size of all processes exceeds the size of physical memory. As a result, a process page faults, replaces a page, and blocks waiting on I/O. Then the next process also page faults, replaces a page, and blocks waiting for its faulting page to be loaded from disk. This repeats across processes, leading to a rise in page faults and a drop in CPU usage, which can be used to detect thrashing. To recover from thrashing, suspend a few processes so that their resident sets move to the backing store; this frees physical memory so that the remaining processes can load the missing pages of their working sets. Thrashing can also be avoided by installing more RAM.

1) Optimal - Evicts the page that will not be used for the longest time in the future. Impossible to implement, since it requires knowledge of future references; it is used only as a theoretical reference point against which to compare other algorithms.

2) Least Recently Used (LRU) - Evicts the page that has not been used for the longest time. Hard to implement efficiently, because every memory reference would need a timestamp.

3) Clock page replacement - Sets a per-page "reference" bit to 1 whenever the page is used. When searching for a victim, sweep the frames in a circle: clear reference bits that are 1, and evict the first page found whose bit is already 0; the next search resumes from where the last one stopped. An efficient, implementable approximation of LRU that is used in practice.

4) FIFO - Evicts the longest-resident page. It does not consider actual memory usage in its selection, and can evict pages that the system uses frequently.
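The clock policy in item 3 can be sketched as a single victim-selection pass in Python (the data layout and function name are illustrative):

```python
def clock_replace(frames, ref_bits, hand):
    """Sweep from `hand`, clearing reference bits that are 1, and evict the
    first frame whose bit is already 0. Returns (victim_index, new_hand)
    so the next search resumes where this one stopped."""
    n = len(frames)
    while True:
        if ref_bits[hand] == 1:
            ref_bits[hand] = 0         # give the page a second chance
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n
```

If every bit is 1, the sweep clears them all and wraps back to evict the frame it started from, which is the algorithm's worst case (one full revolution).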

Monitor the page-fault frequency of an application and compare it to an ideal page-fault frequency. If an application's page-fault frequency is too high, give it an extra frame; if it is too low, take one away.
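A minimal sketch of that feedback rule; the threshold values are illustrative assumptions, not part of the original text:

```python
def adjust_frames(fault_rate, frames, low=0.02, high=0.10):
    """Page-fault-frequency control: fault_rate is faults per reference.
    Grant an extra frame when faults are too frequent, reclaim one when
    they are rare, otherwise leave the allocation alone."""
    if fault_rate > high:
        return frames + 1          # too many faults: grow the allocation
    if fault_rate < low and frames > 1:
        return frames - 1          # very few faults: shrink the allocation
    return frames
```
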

Question 3

Yes, the system is deadlocked. Each process holds at least one resource but is waiting for a resource that is currently held by another process.

| Process | Holding | Waiting For |
|---------|---------|-------------|
| P1      | R2      | R1          |
| P2      | R1, R2  | R3          |
| P3      | R3, R4  | R2          |

P1 cannot proceed because it needs R1, which is held by P2 and will be released only when P2 terminates.

P2 is waiting for resource R3, which is held by P3, and will not release resources R1 and R2 until it obtains R3, so it cannot terminate.

P3 holds R3 and R4, but is waiting for an instance of resource R2, which is held by P1 and P2.

  1. Therefore, the system is deadlocked.

  2. All three processes are blocked.

  3. To detect whether a particular state S is a deadlock state, you must determine whether the processes blocked in S remain permanently blocked. This can be accomplished using a technique called graph reduction.

A graph reduction grants all grantable requests of the unblocked processes in state S; each such process then runs to completion without requesting any more resources, frees all of its resources, and exits. These releases may unblock previously blocked processes, which proceed in the same way. This repeats until no processes remain, that is, until every process has terminated or every remaining process is blocked. In the latter case, the original state S is a deadlock state.

To reduce the graph:

Repeat the following steps until no unblocked processes remain.

  1. Select an unblocked process p.

  2. Delete p together with all of its request and assignment edges.

However, in our case, since all processes are blocked, we cannot reduce the given resource-allocation graph at all.
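Graph reduction on this example can be sketched in Python. This is a simplified model for the exercise (a process counts as unblocked only when none of the resources it waits for is held by another active process), not a general multi-instance detection algorithm:

```python
def deadlocked(holding, waiting):
    """Repeatedly remove any process whose request is grantable; the
    processes that can never be removed are deadlocked. holding and
    waiting map each process to sets of resource names."""
    active = set(holding)
    progress = True
    while progress:
        progress = False
        held = {r for p in active for r in holding[p]}
        for p in list(active):
            # p is unblocked if nothing it waits for is held by another process
            if not (waiting[p] & (held - holding[p])):
                active.remove(p)       # p runs to completion, freeing its resources
                progress = True
                break
    return active                      # non-empty set => these processes are deadlocked

holding = {"P1": {"R2"}, "P2": {"R1", "R2"}, "P3": {"R3", "R4"}}
waiting = {"P1": {"R1"}, "P2": {"R3"}, "P3": {"R2"}}
```

Here `deadlocked(holding, waiting)` returns all three processes: no reduction step can remove any of them, which matches the conclusion above.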

References

Kumar, P., Raj, R., Reyaz, A., & Rajiv, P. (2016). Operating systems: Demand-based modularity. In 2016 International Conference on Computing, Communication and Automation (ICCCA) (pp. 878-883). IEEE.

McKinley, K. S. (2016). Next-Generation Virtual Memory Management. ACM SIGPLAN Notices, 51(7), 107-107.

Seo, S., Kim, J., & Kim, S. M. (2017). An Analysis of Embedded Operating Systems: Windows CE, Linux, VxWorks, uC/OS-II and OSEK/VDX. International Journal of Applied Engineering Research, 12(18), 7976-7981.
