Overview of cloud computing with literature review


Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol in system diagrams as an abstraction for the complex infrastructure it represents. Cloud computing entrusts remote services with a user's data, software and computation. It consists of hardware and software resources made available on the Internet as managed third-party services; these services typically provide access to advanced software applications and high-end networks of server computers.

Fig. 1.1: Structure of cloud computing

How Cloud Computing Works:

The goal of cloud computing is to apply traditional supercomputing, or high-performance computing power normally used by military and research facilities to perform tens of trillions of computations per second, to consumer-oriented applications such as financial portfolios, to deliver personalized information, to provide data storage or to power large, immersive computer games.

Cloud computing uses networks of large groups of servers, typically running low-cost consumer PC technology, with specialized connections to spread data-processing chores across them. This shared IT infrastructure contains large pools of systems that are linked together. Virtualization techniques are often used to maximize the power of cloud computing.

Characteristics and Service Models:

The salient characteristics of cloud computing, based on the definitions provided by the National Institute of Standards and Technology (NIST), are outlined below:

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service provider.

Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms.

Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service. Resource usage can be managed, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. A short sketch after the figure below illustrates how such metering can drive elastic scaling.

Fig. 1.2: Characteristics of cloud computing
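Measured service and rapid elasticity are easiest to see together: a metering signal is sampled, and an elasticity policy provisions or releases instances in response. The following is a minimal, hypothetical Python sketch; the Instance class, the 70%/30% utilization thresholds, and the sample_utilization callback are illustrative assumptions rather than any provider's actual API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Instance:
    """Stand-in for a provisioned virtual machine (hypothetical)."""
    instance_id: int

@dataclass
class AutoScaler:
    """Illustrative rapid-elasticity policy driven by a metering signal."""
    sample_utilization: Callable[[], float]  # metering: average utilization in [0, 1]
    scale_out_threshold: float = 0.70        # assumed thresholds, for illustration only
    scale_in_threshold: float = 0.30
    min_instances: int = 1
    instances: List[Instance] = field(default_factory=list)
    next_id: int = 0

    def provision(self) -> None:
        self.instances.append(Instance(self.next_id))
        self.next_id += 1

    def release(self) -> None:
        if len(self.instances) > self.min_instances:
            self.instances.pop()

    def tick(self) -> None:
        """One control-loop step: measure usage, then scale out or in."""
        load = self.sample_utilization()
        if load > self.scale_out_threshold:
            self.provision()
        elif load < self.scale_in_threshold:
            self.release()

# Usage: feed the scaler a synthetic load signal and watch the pool resize.
loads = iter([0.9, 0.85, 0.75, 0.4, 0.2, 0.1])
scaler = AutoScaler(sample_utilization=lambda: next(loads))
scaler.provision()  # start with one instance
for _ in range(6):
    scaler.tick()
    print(len(scaler.instances), "instance(s) running")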

Service Models:

Cloud computing comprises three different service models, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three service models, or layers, are completed by an end-user layer that encapsulates the end-user perspective on cloud services; the model is shown in the figure below, and a brief sketch after the figure makes the responsibility split concrete. If a cloud user accesses services on the infrastructure layer, for instance, she can run her own applications on the resources of a cloud infrastructure and remains responsible for the support, maintenance and security of these applications herself. If she accesses a service on the application layer, these tasks are normally taken care of by the cloud service provider.

Fig. 1.3: Structure of service models
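One way to make the responsibility split between user and provider concrete is a small lookup structure over the three layers. The sketch below is illustrative only; the task names and their assignment follow the conventional IaaS/PaaS/SaaS description above, not any particular provider's terms of service.

from enum import Enum

class ServiceModel(Enum):
    IAAS = "Infrastructure-as-a-Service"
    PAAS = "Platform-as-a-Service"
    SAAS = "Software-as-a-Service"

# For each layer, tasks the user keeps versus tasks the provider takes over.
# Illustrative split following the common IaaS/PaaS/SaaS convention.
RESPONSIBILITIES = {
    ServiceModel.IAAS: {
        "user": ["application code", "runtime", "application support, maintenance and security"],
        "provider": ["virtual machines", "storage", "networking"],
    },
    ServiceModel.PAAS: {
        "user": ["application code"],
        "provider": ["runtime", "operating system", "virtual machines", "storage", "networking"],
    },
    ServiceModel.SAAS: {
        "user": ["configuration and data entry"],
        "provider": ["application", "runtime", "operating system", "infrastructure"],
    },
}

def who_manages(model: ServiceModel, task: str) -> str:
    """Return 'user' or 'provider' for a given task under a given service model."""
    for party, tasks in RESPONSIBILITIES[model].items():
        if task in tasks:
            return party
    return "not listed"

print(who_manages(ServiceModel.IAAS, "application code"))  # -> user
print(who_manages(ServiceModel.SAAS, "application"))       # -> provider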

 Benefits of cloud computing:

1. Achieve economies of scale: increase volume output or productivity with fewer people, and your cost per unit, project or product plummets.

2. Reduce spending on technology infrastructure: Maintain easy access to your information with minimal upfront spending. Pay as you go based on demand.

3. Globalize your workforce on the cheap: People worldwide can access the cloud, provided they have an Internet connection.

4. Streamline processes: Get more work done in less time with fewer people.

5. Reduce capital costs: There’s no need to spend big money on hardware, software or licensing fees.

6. Improve accessibility: You have access anytime, anywhere, making your life so much easier.

7. Monitor projects more effectively: Stay within budget and ahead of completion cycle times.

8. Less personnel training is needed: It takes fewer people to do more work on a cloud, with a minimal learning curve on hardware and software issues.

9. Minimize licensing new software: Stretch and grow without the need to buy expensive software licenses or programs.

10. Improve flexibility: You can change direction without serious “people” or “financial” issues at stake.

 Advantages:

1. Price: Pay for only the resources used.

2. Security: Cloud instances are isolated in the network from other instances for improved security.

3. Performance: Instances can be added instantly for improved performance. Clients have access to the total resources of the Cloud's core hardware.

4. Scalability: Auto-deploy cloud instances when needed.

5. Uptime: Uses multiple servers for maximum redundancy. In case of server failure, instances can be automatically created on another server.

6. Control: Able to login from any location. Server snapshot and a software library lets you deploy custom instances.

7. Traffic: Deals with spikes in traffic through quick deployment of additional instances to handle the load.

1.2 Problem Definition

To address the problem of deduplication with differential privileges in cloud computing, we consider a hybrid cloud architecture consisting of a public cloud and a private cloud. Unlike existing data deduplication systems, the private cloud is involved as a proxy to allow data owners/users to securely perform duplicate checks with differential privileges. Such an architecture is practical and has attracted much attention from researchers. The data owners only outsource their data storage by utilizing the public cloud, while the data operation is managed in the private cloud.

A new deduplication system supporting differential duplicate checks is proposed under this hybrid cloud architecture, in which the outsourced data resides in the public cloud. A user is only allowed to perform the duplicate check for files marked with the corresponding privileges.
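As a rough illustration of the duplicate-check flow, the private cloud can act as a proxy that verifies a user's privilege before asking the public cloud whether a file tag already exists under that privilege. The sketch below is a simplified assumption of how such a check might look (file tags as SHA-256 hashes, privileges as plain strings, in-memory dictionaries standing in for the two clouds); it is not the actual construction of the proposed scheme.

import hashlib

class PublicCloud:
    """Stand-in for public cloud storage: keeps an index of file tags per privilege."""
    def __init__(self):
        self.index = {}  # privilege -> set of file tags stored under that privilege

    def has_duplicate(self, tag: str, privilege: str) -> bool:
        return tag in self.index.get(privilege, set())

    def store(self, tag: str, privilege: str) -> None:
        self.index.setdefault(privilege, set()).add(tag)

class PrivateCloudProxy:
    """Stand-in for the private cloud: checks privileges before forwarding the duplicate check."""
    def __init__(self, public_cloud: PublicCloud, user_privileges: dict):
        self.public_cloud = public_cloud
        self.user_privileges = user_privileges  # user -> set of privileges

    def duplicate_check(self, user: str, data: bytes, privilege: str) -> bool:
        if privilege not in self.user_privileges.get(user, set()):
            raise PermissionError(f"{user} lacks privilege {privilege!r}")
        tag = hashlib.sha256(data).hexdigest()  # file tag derived from content
        return self.public_cloud.has_duplicate(tag, privilege)

# Usage sketch: Alice may run duplicate checks on 'finance' files; Bob may not.
public = PublicCloud()
public.store(hashlib.sha256(b"report contents").hexdigest(), "finance")
proxy = PrivateCloudProxy(public, {"alice": {"finance"}, "bob": {"hr"}})
print(proxy.duplicate_check("alice", b"report contents", "finance"))  # True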

Chapter 2

LITERATURE REVIEW

2. LITERATURE REVIEW

2.1 Preliminaries

In this section, we first define the notations used in this paper and then review some secure primitives used in our secure deduplication scheme.

2.1.1 Symmetric encryption

Symmetric encryption uses a common secret key κ to encrypt and decrypt information. A symmetric encryption scheme consists of three primitive functions: KeyGen(1^λ) → κ is the key generation algorithm that generates κ using the security parameter 1^λ; Enc(κ, M) → C is the symmetric encryption algorithm that takes the secret key κ and the message M and then outputs the ciphertext C; and Dec(κ, C) → M is the symmetric decryption algorithm that takes the secret key κ and the ciphertext C and then outputs the original message M.
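As a concrete, hedged example of this KeyGen/Enc/Dec interface, the Fernet recipe from the Python cryptography package exposes exactly these three operations; the choice of Fernet here is illustrative, since the scheme above is not tied to any particular cipher.

from cryptography.fernet import Fernet

# KeyGen(1^lambda) -> kappa: generate a fresh symmetric secret key.
kappa = Fernet.generate_key()

# Enc(kappa, M) -> C: encrypt message M under the secret key kappa.
M = b"message to protect"
C = Fernet(kappa).encrypt(M)

# Dec(kappa, C) -> M: decrypt ciphertext C back to the original message.
assert Fernet(kappa).decrypt(C) == M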

2.2 Convergent Encryption
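Convergent encryption derives the encryption key from the content itself (typically a hash of the message), so identical plaintexts produce identical ciphertexts and can be deduplicated even though they are stored encrypted. A minimal sketch follows, assuming SHA-256 for key derivation and AES-GCM with a content-derived nonce; it is illustrative only and omits the tag generation and proof-of-ownership steps a real system would add.

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    """Encrypt data under a key derived from the data itself (sketch).

    Identical plaintexts yield identical ciphertexts, so the storage side
    can deduplicate without ever seeing the plaintext.
    """
    key = hashlib.sha256(data).digest()        # convergent key K = H(M)
    nonce = hashlib.sha256(key).digest()[:12]  # deterministic 96-bit nonce
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    return key, ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Two users encrypting the same file independently produce the same ciphertext,
# which is what makes deduplication of encrypted data possible.
k1, c1 = convergent_encrypt(b"identical file contents")
k2, c2 = convergent_encrypt(b"identical file contents")
assert c1 == c2 and convergent_decrypt(k1, c1) == b"identical file contents"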

With the significant advances in information and communications technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility. This computing utility, like the other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest one is known as cloud computing. Hence, in this paper, we define cloud computing and provide the architecture for creating clouds with market-oriented resource allocation by leveraging technologies such as virtual machines.

We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain service level agreement (SLA)-oriented resource allocation.

Representative cloud platforms, especially those developed in industry, are presented along with our current work towards realizing market-oriented resource allocation of clouds as realized in the Aneka enterprise cloud technology. Furthermore, we highlight the difference between high-performance computing (HPC) workload and Internet-based services workload. We also describe a meta-negotiation infrastructure to establish global Cloud exchanges and markets, and illustrate a case study of harnessing 'Storage Clouds' for high-performance content delivery. Finally, we conclude with the need for convergence of competing IT paradigms to deliver our 21st-century vision.

Cloud computing is a new and promising paradigm delivering IT services as computing utilities. As clouds are designed to provide services to external users, providers need to be compensated for sharing their resources and capabilities. In this paper, we have proposed an architecture for market-oriented allocation of resources within Clouds.

We have also presented a vision for the creation of a global Cloud exchange for trading services. Moreover, we have discussed some representative platforms for Cloud computing covering the state of the art. In particular, we have presented various Cloud efforts in practice from the market-oriented perspective to reveal its emerging potential for the creation of third-party services, such as a meta-negotiation infrastructure for global Cloud exchanges and high-performance content delivery via 'Storage Clouds', to enable the successful adoption of Cloud computing.

The state-of-the-art Cloud technologies have limited support for market-oriented resource management, and they need to be extended to support negotiation of QoS between users and providers, to establish mechanisms and algorithms for allocation of VM resources to meet SLAs, and to manage risks associated with the violation of SLAs. Furthermore, interaction protocols need to be extended to support interoperability between different Cloud service providers. In addition, we need programming environments and tools that allow rapid creation of Cloud applications. Data centers are known to be expensive to operate and they consume huge amounts of electric power; for example, the Google data center consumes as much power as a city such as San Francisco.

As Clouds are emerging as next-generation data centers and aim to support ubiquitous service-oriented applications, it is important that they are designed to be energy efficient to reduce both their power bill and their carbon footprint on the environment. To achieve this at the software systems level, we need to investigate new techniques for allocation of resources to applications depending on the quality-of-service expectations of users and the service contracts established between consumers and providers.

As Cloud platforms become ubiquitous, we expect the need for internetworking them to create market-oriented global Cloud exchanges for trading services. Several challenges need to be addressed to realize this vision. They include: a market-maker for bringing service providers and consumers together; a market registry for publishing and discovering Cloud service providers and their services; clearing houses and brokers for mapping service requests to providers who can meet QoS expectations; and a payment management and accounting infrastructure for trading services.

We also need to address regulatory and legal issues, which go beyond technical issues. Some of these issues are explored in related paradigms such as Grids and service-oriented computing systems; hence, rather than competing, these past developments need to be leveraged for advancing Cloud computing. Also, Cloud computing and other related paradigms need to converge so as to produce unified and interoperable platforms for delivering IT services as the 5th utility to individuals, organizations, and corporations.

2.3 Proof of Ownership

The basic idea behind Cloud computing is that resource providers offer elastic resources to end users. We intend to answer one key question to the success of Cloud computing: in the Cloud, can small- or medium-scale scientific computing communities benefit from the economies of scale? Our research contributions are three-fold. First, we propose an enhanced scientific public cloud model (ESP) that encourages small- or medium-scale research organizations to rent elastic resources from a public cloud provider. Second, on the basis of the ESP model, we design and implement the DawningCloud system, which can consolidate heterogeneous scientific workloads on a cloud site. Third, we propose an innovative emulation methodology and perform a comprehensive evaluation.

However, there is a prominent shortcoming of the dedicated system model: for peak loads, a dedicated cluster system cannot provide enough resources, while for light loads many resources sit idle. Recently, as resource providers, several pioneering computing companies have adopted the concept of infrastructure as a service, among which Amazon EC2 contributed to popularizing the infrastructure-as-a-service paradigm. A new term, Cloud, is used to describe this new computing paradigm. In this paper, we adopt the terminology from the paper by B. Sotomayor et al. to describe different types of clouds: public clouds offer a publicly accessible remote interface for the masses to create and manage virtual machine instances within the provider's proprietary infrastructure.

We have answered one key question to the success of Cloud computing in scientific communities: can small- or medium-scale organizations benefit from the economies of scale? Our contributions are four-fold. First, we proposed a dynamic service provisioning (ESP) model in Cloud computing. In the ESP model, a resource provider can create specific runtime environments on demand for MTC or HTC service providers, while a service provider can resize dynamic resources. Second, on the basis of the ESP model, we designed and implemented an enabling system, DawningCloud, which provides automatic management for heterogeneous MTC and HTC workloads. Third, our experiments show that for typical MTC and HTC workloads, MTC and HTC service providers and the resource service provider can benefit from the economies of scale on a Cloud platform. Finally, using an analytical approach, we verify that irrespective of specific workloads, DawningCloud can achieve the economies of scale on Cloud platforms.

2.4 Identification Protocol

Parallel dataflow programs generate enormous amounts of distributed data that are short-lived, yet are critical for completion of the job and for good run-time performance. We call this class of data intermediate data. This work is the first to address intermediate data as a first-class citizen, specifically targeting and minimizing the effect of run-time server failures on the availability of intermediate data, and thus on performance metrics such as job completion time.

We propose new design techniques for a new storage system called ISS (Intermediate Storage System), implement these techniques within Hadoop, and experimentally evaluate the resulting system. Under no failure, the performance of Hadoop augmented with ISS (i.e., job completion time) turns out to be comparable to base Hadoop.

Under a failure, Hadoop with ISS outperforms base Hadoop and incurs up to 18% overhead compared to base no-failure Hadoop, depending on the testbed setup. We have shown the need for, presented requirements towards, and designed a new intermediate storage system (ISS) that treats intermediate data in dataflow programs as a first-class citizen in order to tolerate failures. We have shown experimentally that the existing approaches are insufficient in satisfying the requirements of data availability and minimal interference. We have also shown that our asynchronous rack-level selective replication mechanism is effective and masks interference very well.

An identification protocol can be described in two phases: Proof and Verify. In the Proof stage, a prover/user U can demonstrate his identity to a verifier by performing some identification proof related to his identity. The input of the prover/user is his private key skU, which is sensitive information, such as the private key of the public key in his certificate or his credit card number, that he would not like to share with other users. The verifier performs the verification with input of public information pkU related to skU. At the conclusion of the protocol, the verifier outputs either accept or reject to denote whether the proof has passed or not. There are many efficient identification protocols in the literature, including certificate-based and identity-based identification.
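A simple way to instantiate the Proof and Verify phases is a signature-based challenge-response: the verifier sends a random challenge, the prover signs it with skU, and the verifier checks the signature under pkU. The sketch below uses Ed25519 from the Python cryptography package; the 32-byte challenge and the overall framing are illustrative assumptions, not the specific protocol used in the scheme.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Setup: the prover/user U holds the private key skU; the verifier knows pkU.
sk_u = Ed25519PrivateKey.generate()
pk_u = sk_u.public_key()

# Proof phase: the verifier issues a random challenge, the prover signs it with skU.
challenge = os.urandom(32)
proof = sk_u.sign(challenge)

# Verify phase: the verifier checks the proof against pkU and outputs accept/reject.
try:
    pk_u.verify(proof, challenge)
    print("accept")
except InvalidSignature:
    print("reject")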

Fig. 2.4.1: Architecture for Authorized Deduplication
