
Computer Organization and Architecture

Table of Contents

Introduction.

Memory management

Virtual Memory.

Resource Allocation Graph.

Conclusion.

References.

Introduction to Elastic Memory Management for Cloud Data Analytics

This report covers memory management and virtual memory. The memory management section discusses concepts of memory allocation and page replacement. The virtual memory section covers related concepts of secondary memory, showing how virtual memory works and what its advantages are. A set of CPU scheduling processes is given, and their waiting times and turnaround times are calculated. Finally, a resource allocation graph is analysed with respect to the deadlock condition, and a reduced graph is presented showing how the deadlock can be resolved.

Memory Management

Main memory refers to the physical memory of a system, usually RAM, as distinguished from external storage such as disk drives. Only programs that have been copied into main memory can be executed, and the system can directly change the contents of main memory. The operating system is responsible for handling primary memory and for moving processes back into main-memory space for execution (McKeen et al., 2016). It keeps track of every memory location, whether allocated to a process or free, tracks allocation and the freeing of space, and decides which process gets memory and when. The major parts of main-memory management include:

Process space

Each process is given a logical address space, and the process refers to its code and data through logical addresses. For example, a process using 32-bit addressing has an address range from 0x00000000 to 0x7FFFFFFF; this corresponds to 2^31 addresses, so the theoretical size of the space is 2 GB (Liu et al., 2019). The operating system takes care of mapping logical addresses to physical addresses at memory allocation time. Three kinds of addresses are used before and after allocation:

Symbolic addresses: the addresses used in source code, such as variable names, instruction labels and constants; these are the basic elements of a program.

Relative addresses: produced at compile time, when the compiler converts symbolic addresses into relative addresses.

Physical addresses: generated at load time, when the program is placed into main memory for execution.
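The mapping from relative to physical addresses can be sketched with a base (relocation) register and a limit register; the register values below are hypothetical, chosen only for illustration:

```python
# Hypothetical relocation sketch: a base (relocation) register and a
# limit register map a relative (logical) address to a physical one.
BASE = 0x4000   # assumed start of the process in physical memory
LIMIT = 0x1000  # assumed size of the process's logical address space

def translate(logical_addr: int) -> int:
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError(f"address {logical_addr:#x} outside process space")
    return BASE + logical_addr

print(hex(translate(0x0042)))  # -> 0x4042
```

The limit check is what gives memory protection: any reference outside the process's own space traps before it can touch another process's memory.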

Swapping

Swapping is a memory-management mechanism in which a process is temporarily moved out of main memory to secondary storage, and later brought back into main memory to continue execution. Swapping can also be used as a memory-compaction technique (Kannan et al., 2017). The total swap time is the time taken to move the process out to disk plus the time taken to bring it back into main memory. For example, if two processes are running under the operating system, the first process may be swapped out to secondary memory so that the second process can be swapped into main memory.

Memory allocation

Memory can be divided into two regions: low memory, where the operating system usually resides, and high memory, where user processes are held. Memory allocation, the assignment of parts of memory to processes, can likewise be divided into single-partition and multiple-partition schemes. In single-partition allocation, a relocation register holds the physical base address and a limit register protects user processes from one another and from changes to operating-system code: each logical address must fall below the limit, and the relocation register's value is added to it to form the physical address. In multiple-partition allocation, main memory is divided into several fixed partitions, each containing exactly one process (Darte, Isoard and Yuki, 2016). When a partition becomes free, a process is selected and loaded into it, and when that process terminates, the partition is freed for a new process.
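The multiple-partition scheme can be sketched as a first-fit allocator; the partition sizes and process names below are hypothetical:

```python
# First-fit allocation over fixed partitions (multiple-partition scheme):
# each partition holds at most one process, and a terminating process
# frees its partition for a new one. Sizes/names are illustrative only.
partitions = [100, 500, 200, 300]        # partition sizes in KB (assumed)
occupant = [None] * len(partitions)      # which process holds each partition

def allocate(pid, size):
    """Place pid in the first free partition big enough, or return None."""
    for i, cap in enumerate(partitions):
        if occupant[i] is None and cap >= size:
            occupant[i] = pid
            return i
    return None    # no free partition is large enough

def free(pid):
    """On termination, release the partition so a new process can use it."""
    for i, p in enumerate(occupant):
        if p == pid:
            occupant[i] = None

allocate("P1", 212)   # lands in the 500 KB partition (index 1)
allocate("P2", 112)   # lands in the 200 KB partition (index 2)
free("P1")            # partition 1 becomes free for a new process
```

First-fit leaves internal fragmentation (P2 wastes 88 KB of its 200 KB partition), which is the usual trade-off of fixed partitioning.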

Virtual Memory

The operating system provides virtual memory, which offers address space beyond what the physical memory of the system contains (Oukid et al., 2017). Virtual memory uses space on the hard disk as an extension of RAM; its major purpose is to extend physical memory storage while providing memory protection. It comes into play when processes are distributed across main memory and secondary memory (Cai et al., 2020): when main memory is full and occupied by other running processes, the extra space holds the parts of a process that do not fit. Virtual memory is commonly implemented through demand paging and segmentation. A few of its key concepts are given below:

Demand Paging

In a demand-paging system, a process's pages reside in secondary memory and are loaded into main memory only when they are referenced. At a context switch, the operating system does not copy the pages of the new program into memory in advance; as Wang and Balazinska (2017) observe, the new program begins executing by fetching its pages only as they are needed. Demand paging has several advantages: it places no practical limit on the degree of multiprogramming, and it makes a large virtual memory available to each process (Schlichting and Frankland, 2017). When a program references a page that is not available in main memory, because it was swapped out or never loaded, the reference causes a page fault, and control transfers to the operating system, which demands the page back into main memory.

Page replacement

Page-replacement algorithms are the techniques an operating system uses to decide which memory page to swap out and write to disk when a page of memory must be allocated. Replacement occurs at the time of a page fault when no free page can satisfy the allocation, either because none is available or because the count of free pages is lower than the count of required pages (Wang and Balazinska, 2017).
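The report does not fix a specific replacement policy, so the sketch below assumes simple FIFO replacement: when no frame is free, the page that has been resident longest is swapped out.

```python
# FIFO page replacement sketch (FIFO is an assumption; other policies
# such as LRU or optimal would give different fault counts).
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # no free frame: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 9
```

The reference string above is the classic one used to demonstrate Belady's anomaly: with 4 frames, FIFO actually incurs 10 faults, more than with 3.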

2.

Process    Priority    Arrival Time    Service Time
A          3           0               3
B          2           3               8
C          1           3               3
D          2           7               14
E          2           7               2

B.

The table below lists the processes sorted by priority (1 = highest):

Process    Priority    Arrival Time    CPU cycle time
C          1           3               3
B          2           3               8
D          2           7               14
E          2           7               2
A          3           0               3

CPU waiting time generally refers to the time a process spends waiting until the resources it needs become free. As an example, consider three processes sharing a total of five resources. Processes 1 and 2 each require 2 resources, while process 3 requires 3 resources. Resource management schedules processes 1 and 2 first, and process 3 afterwards (Gentine et al., 2018). Process 3 must therefore wait until processes 1 and 2 have finished executing and released the resource units they were using.

Waiting time

The waiting time of each ongoing process is calculated from the turnaround time identified below.

Turnaround time

Turnaround time is calculated with the formula (exit time - arrival time).

From the arrival times above: A = 3 ms, B = 1 ms, C = 0 ms, D = 1 ms, E = 2 ms.

The calculated waiting times of the processes are:

A = (27 - 7) = 20 ms, B = (3 - 3) = 0 ms, C = (3 - 3) = 0 ms, D = (8 + 3) - 7 = 6 ms and E = (25 - 14) = 11 ms.

Turnaround time (waiting time + service time) is then: A = (20 + 3) = 23 ms, B = (0 + 8) = 8 ms, C = (0 + 3) = 3 ms, D = (6 + 14) = 20 ms and E = (11 + 2) = 13 ms.
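The turnaround figures can be checked programmatically by recomputing turnaround = waiting + service from the waiting times stated above (which are the report's own figures):

```python
# Recompute turnaround time as waiting + service, using the service
# times from the process table and the waiting times stated above.
service = {"A": 3, "B": 8, "C": 3, "D": 14, "E": 2}
waiting = {"A": 20, "B": 0, "C": 0, "D": 6, "E": 11}

turnaround = {p: waiting[p] + service[p] for p in service}
print(turnaround)   # {'A': 23, 'B': 8, 'C': 3, 'D': 20, 'E': 13}
```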

1. Feedback

The time quantum for this scheme is q = 3.

2. Highest Response Ratio Next

A = 20/3 = 6.67, B = 0/8 = 0, C = 0/3 = 0, D = 23/14 = 1.64, E = 13/2 = 6.5
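Highest Response Ratio Next is commonly defined with the response ratio (waiting + service) / service; the sketch below applies that standard formulation to the waiting times stated earlier, so its values differ from the plain ratios above:

```python
# HRRN sketch using the common definition
#   response ratio = (waiting + service) / service
# Waiting and service times are taken from the tables above.
service = {"A": 3, "B": 8, "C": 3, "D": 14, "E": 2}
waiting = {"A": 20, "B": 0, "C": 0, "D": 6, "E": 11}

ratios = {p: (waiting[p] + service[p]) / service[p] for p in service}
next_up = max(ratios, key=ratios.get)   # highest ratio is dispatched next
print(next_up, round(ratios[next_up], 2))
```

Because the ratio grows with waiting time, HRRN favours short jobs like Shortest Process Next but cannot starve long ones.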

3. Round Robin

A= (21-12) + (24-21) + (12-4) =20

B= (3-1) =2

C= (0-0) + (15-3) =12

D= (6-2) + (15-6) + (18-15) =16

E= (9-3) + (18-9) + (21-18) =18
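The round-robin waiting times can also be obtained by simulation. The sketch below uses quantum q = 3 and the arrival/service times from the table; note that the results depend on tie-breaking and on whether a preempted process is requeued before or after newly arrived ones, so they may differ from the hand calculation above:

```python
# Round-robin simulation, quantum q = 3. Convention assumed here:
# processes arriving during a time slice enter the ready queue before
# the preempted process is requeued.
from collections import deque

procs = {"A": (0, 3), "B": (3, 8), "C": (3, 3), "D": (7, 14), "E": (7, 2)}

def round_robin(procs, q=3):
    remaining = {p: s for p, (a, s) in procs.items()}
    order = sorted(procs, key=lambda p: procs[p][0])   # by arrival time
    ready, t, finish, i = deque(), 0, {}, 0
    while len(finish) < len(procs):
        while i < len(order) and procs[order[i]][0] <= t:
            ready.append(order[i]); i += 1
        if not ready:                       # CPU idle until the next arrival
            t = procs[order[i]][0]
            continue
        p = ready.popleft()
        run = min(q, remaining[p])
        t += run
        remaining[p] -= run
        while i < len(order) and procs[order[i]][0] <= t:
            ready.append(order[i]); i += 1  # arrivals during this slice
        if remaining[p] > 0:
            ready.append(p)                 # preempted: back of the queue
        else:
            finish[p] = t
    # waiting time = finish - arrival - service
    return {p: finish[p] - a - s for p, (a, s) in procs.items()}

print(round_robin(procs))
```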

Shortest Remaining Time / Shortest Process Next

[Gantt charts of the two schedules for processes A-E over time units 0-13; the chart layout could not be recovered from the extracted text.]

Shortest Remaining Time

Process    Burst time    Arrival Time
C          3             3
B          8             3
D          14            7
E          2             7
A          3             0

At time 0, process A starts, and it needs two more execution units to complete.

At point 1, processes C, B and A take place.

At point 2, processes C, B and A continue.

At point 3, processes C and B have completed their execution.

At points 4, 5, 6 and 7, processes D and E take place, each waiting in turn for a unit one by one (Kalhauge and Palsberg, 2018). During these four units of time the processes continue their execution; after that, process A takes place and completes its execution, and processes D and E then continue their execution.
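The Shortest Remaining Time schedule can be simulated unit by unit; tie-breaking here is assumed to favour the earlier table entry, so the finish times may differ slightly from the hand-drawn chart:

```python
# Unit-by-unit Shortest Remaining Time simulation: at each time unit the
# arrived process with the least remaining service time runs (this is
# preemptive shortest-job-first). Ties break in table order (assumed).
def srt(procs):
    remaining = {p: s for p, (a, s) in procs.items()}
    t, finish = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:            # nothing has arrived yet: advance the clock
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    return finish                # completion time of each process

procs = {"A": (0, 3), "B": (3, 8), "C": (3, 3), "D": (14, 7), "E": (7, 2)}
procs = {"A": (0, 3), "B": (3, 8), "C": (3, 3), "D": (7, 14), "E": (7, 2)}
print(srt(procs))
```

Under these assumptions E preempts B at time 7 because its 2 remaining units beat B's 7, which is exactly the behaviour that distinguishes SRT from non-preemptive Shortest Process Next.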

Resource Allocation Graph

Here, three processes are scheduled in a resource allocation graph together with their resources. A few rules apply when analysing such a graph. The first concerns the deadlock condition: determine whether each resource in the graph has a single instance or multiple instances.

It can be seen that resource 2 is not a single-instance resource: it has multiple instances, which have been allocated to both process 1 and process 2 (Wang and Balazinska, 2017). Analysis of the graph shows that the system is in a deadlock state. Process 1 first needs resource 1 and afterwards must use resource 2, while process 2 has to wait for the period during which resource 2 is in use (Biggs et al., 2017). Process 3 likewise has to wait until resource 2 is released. In short, resource 2 is needed by all three processes, and every process waits until the resource becomes free, which forms a deadlock situation.

Processes 2 and 3 are blocked with respect to the resources already in use. Process 2 is blocked because it has two request (input) edges and one assignment (output) edge (Duo et al., 2020); since resource 2 cannot be obtained, P2 remains in a blocked, waiting state for the duration of the ongoing process. Similarly, P3 is blocked with three request edges and one assignment edge, which leaves it as another blocked process.

Reduction of resource allocation

The diagram above shows the graph after some of the resource-blocking issues have been reduced. The resources can now be used by the individual processes, although each process still experiences some waiting until the resources it needs are released.
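Deadlock detection on a resource allocation graph reduces to cycle detection: when every resource has a single instance, a cycle implies deadlock (with multiple instances, a cycle is only a necessary condition). The edges below are illustrative only, not the report's exact figure:

```python
# Model the resource allocation graph as a directed graph:
#   process -> resource  = request edge
#   resource -> process  = assignment edge
# A cycle found by depth-first search indicates circular wait.
def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GREY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GREY:
                return True               # back edge: cycle found
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Illustrative graph: P1 holds R1 and requests R2; P2 holds R2 and
# requests R1 -- a circular wait, hence deadlock.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True
```

Graph reduction, as applied in the section above, is the converse operation: repeatedly remove processes whose requests can all be granted; if every edge can be erased, the graph is deadlock-free.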

Conclusion on Elastic Memory Management for Cloud Data Analytics

This report has discussed multiple concepts of memory organisation, including virtual memory and its working structure. For the given CPU schedule, the waiting time, turnaround time and round-robin scheduling have been calculated, along with Shortest Remaining Time and Shortest Process Next. A process table has been included showing which process is scheduled to start first and which processes follow. The resource allocation graph has been assessed against the deadlock condition, identifying which processes are blocked; from this analysis it can be stated that the graph represents a deadlock condition.

References for Elastic Memory Management for Cloud Data Analytics

Biggs, D., Holden, M.H., Braczkowski, A., Cook, C.N., Milner-Gulland, E.J., Phelps, J., Scholes, R.J., Smith, R.J., Underwood, F.M., Adams, V.M. and Allan, J., 2017. Breaking the deadlock on ivory. Science, 358(6369), pp.1378-1381.

Cai, W., Wen, H., Beadle, H.A., Kjellqvist, C., Hedayati, M. and Scott, M.L., 2020, June. Understanding and optimizing persistent memory allocation. In Proceedings of the 2020 ACM SIGPLAN International Symposium on Memory Management (pp. 60-73).

Darte, A., Isoard, A. and Yuki, T., 2016, March. Extended lattice-based memory allocation. In Proceedings of the 25th International Conference on Compiler Construction (pp. 218-228).

Duo, W., Jiang, X., Karoui, O., Guo, X., You, D., Wang, S. and Ruan, Y., 2020. A deadlock prevention policy for a class of multithreaded software. IEEE Access, 8, pp.16676-16688.

Gelado, I. and Garland, M., 2019, February. Throughput-oriented GPU memory allocation. In Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming (pp. 27-37).

Gentine, P., Pritchard, M., Rasp, S., Reinaudi, G. and Yacalis, G., 2018. Could machine learning break the convection parameterization deadlock?. Geophysical Research Letters, 45(11), pp.5742-5751.

Kalhauge, C.G. and Palsberg, J., 2018. Sound deadlock prediction. Proceedings of the ACM on Programming Languages, 2(OOPSLA), pp.1-29.

Kannan, S., Gavrilovska, A., Gupta, V. and Schwan, K., 2017, June. Heteroos: Os design for heterogeneous memory management in datacenter. In Proceedings of the 44th Annual International Symposium on Computer Architecture (pp. 521-534).

Liu, L., Yang, S., Peng, L. and Li, X., 2019. Hierarchical hybrid memory management in OS for tiered memory systems. IEEE Transactions on Parallel and Distributed Systems, 30(10), pp.2223-2236.

McKeen, F., Alexandrovich, I., Anati, I., Caspi, D., Johnson, S., Leslie-Hurd, R. and Rozas, C., 2016. Intel® Software Guard Extensions (Intel® SGX) support for dynamic memory management inside an enclave. In Proceedings of the Hardware and Architectural Support for Security and Privacy 2016 (pp. 1-9).

Oukid, I., Booss, D., Lespinasse, A., Lehner, W., Willhalm, T. and Gomes, G., 2017. Memory management techniques for large-scale persistent-main-memory systems. Proceedings of the VLDB Endowment, 10(11), pp.1166-1177.

Schlichting, M.L. and Frankland, P.W., 2017. Memory allocation and integration in rodents and humans. Current opinion in behavioral sciences, 17, pp.90-98.

Wang, J. and Balazinska, M., 2017. Elastic memory management for cloud data analytics. In 2017 USENIX Annual Technical Conference (USENIX ATC 17) (pp. 745-758).
