Abstract—Mobile and cloud computing are converging as the prominent technologies leading the change to the post-personal-computing (PC) era. The cloud offers vast computational power, while the computing ability of mobile devices is limited, and balancing this gap raises a number of issues when implementing cloud computing for mobile devices. These issues relate to limited resources, to the network, and to the security of mobile users and clouds. Some of the issues are:
Limited resources: The limited resources of a mobile device make the use of cloud computing on it difficult. The basic limitations are limited computing power, limited battery life, and low-quality displays.
Network-related issues: All processing in Mobile Cloud Computing (MCC) is performed over the network, so issues such as bandwidth, latency, availability, and heterogeneity arise.
Low bandwidth: Bandwidth is one of the big issues in MCC, since radio resources in wireless networks are much scarcer than in traditional wired networks.
Availability: Service availability is a more important issue in MCC than in cloud computing over wired networks. Mobile users may not be able to connect to the cloud to obtain a service because of traffic congestion, network failures, or loss of signal.
I. INTRODUCTION
Advancements in computing technology have expanded the
usage of computers from desktops and mainframes to a wide
range of mobile and embedded applications. Examples of these
applications are surveillance, environmental sensing, GPS
navigation, mobile phones, autonomous robots, etc. Many of
these applications run on systems with limited resources. For
example, mobile phones are battery powered. Environmental
sensors have small physical sizes, slow processors, and small
amounts of storage. Most of these applications use wireless
networks, whose bandwidths are orders of magnitude lower than those of wired networks. Meanwhile, increasingly complex programs are running on these systems, ranging from video processing on mobile phones to object recognition on mobile robots. Thus there is an increasing gap between the demand
for complex programs and the availability of limited resources.
Offloading is a solution to augment the capabilities of these mobile systems by migrating computation to more resourceful computers (i.e., servers). This is different from the traditional
client-server architecture, where a thin client always migrates
computation to a server. Computation offloading is also
different from the migration model used in grid computing
and multiprocessor systems, where processes may be migrated
for load balancing. The main difference is that computation
offloading migrates programs to servers outside of the user's
immediate computing environment. Migration of processes
for grid computing occurs from one computer to another
within the same computing environment, i.e., the grid [11].
Offloading is in principle similar to SETI@home [12], where
requests are sent to surrogates for performing computation.
The difference is that SETI@home is a large-scale distributed computing effort involving several thousands of users, whereas
offloading is typically used to augment the computational
capability of a resource constrained device for a single user.
The terms ‘surrogate computing’ and ‘cyber foraging’ are
also used to describe computation offloading.
Early offloading systems saw limited adoption, primarily because of limitations in wireless networks, mainly their low bandwidths. At the turn of the millennium, the focus moved to developing efficient algorithms for deciding whether offloading would benefit mobile users. More recently, improvements in virtualization technology, network bandwidths, and cloud computing infrastructures have made computation offloading more practical.
Offloading can save energy and improve performance
on mobile systems. However, this depends on many parameters, such as the amount of data exchanged over the network and the network bandwidth. Multiple algorithms
have been proposed to make offloading decisions to increase
efficiency [13,14,15,16,17]. The decisions are usually made
by analyzing parameters including server loads, available
memory, bandwidths, server speeds, and the amounts of data
exchanged between servers and mobile systems. The solutions
include partitioning programs and predicting parametric
variations in application behavior and execution environment.
Offloading requires access to resourceful computers for
short durations through networks, wired or wireless. Servers
may use virtualization to provide offloading services so
that different programs and their data can be isolated and
protected. Isolation and protections have motivated research
on developing infrastructures for offloading at various
granularities. Offloading may be performed at the level of
methods, tasks, applications, or virtual machines. Java RMI,
.NET remoting, and RPC (remote procedure call) are several
mechanisms enabling offloading at the class and object level.
Several techniques have been proposed to enable offloading
at the virtual-machine level; for example, Chun and Maniatis
use cloud computing to enable offloading. Cloud computing
allows elastic resources and offloading to multiple servers; it is
an enabler for computation offloading. Various infrastructures
and solutions have been proposed to improve offloading; they address issues such as transparency to users, privacy, security, and mobility [4].
II. BACKGROUND
A. Why should we consider offloading?
1) Improving Performance: Offloading becomes an attrac-
tive solution for meeting response time requirements on mobile
systems as applications become increasingly complex. Another
goal is meeting real-time constraints. For example, a navigating
robot may need to recognize an object before it collides with
the object; if the robot's processor is too slow, the computation may need to be offloaded. Another application is context-aware computing, where multiple streams of data from different sources such as GPS, maps, accelerometers, and temperature sensors need to be analyzed together in order to obtain real-time information about a user's context. In many of these
scenarios, the limited computing speeds of mobile systems can
be enhanced by offloading.
2) Saving Energy: Energy is a primary constraint for
mobile systems. A survey of 7,000 users across 15 countries
showed that 75% of respondents said better battery life is the
main feature they want [9,10]. Smartphones are no longer used only for voice communication; instead, they are used for capturing and watching videos, gaming, web surfing, and many other purposes. As a result, these systems
will likely consume more power and shorten the battery life.
Even though battery technology has been steadily improving,
it has not been able to keep up with the rapid growth of power
consumption of these mobile systems. Battery life may be extended by migrating the energy-intensive parts of the computation to servers.
B. What should we consider while offloading?
Minimized memory usage: The memory cost of the resident service cannot exceed the available memory on the mobile device.
Minimized energy usage: For the offloaded services, the energy consumed by offloading should not be greater than the energy consumed by not offloading. The energy cost of offloading some parts to a remote cloud can be expressed as the sum of the energy consumed while waiting for the results from the cloud and the energy consumed transferring (sending and receiving) the services to be offloaded, along with any additional data needed on the remote cloud.
Minimized execution time: The local execution time can be expressed as the ratio of CPU instructions to local CPU frequency, while the remote execution time consists of the time consumed by the remote CPU, file transmission, and the overhead of the offloading middleware.
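As a rough illustration of these three criteria, the following Python sketch checks whether offloading a service is admissible and worthwhile; the parameter names and the simple cost models are assumptions made here for illustration only.

    # Hedged sketch of the three offloading criteria described above.
    def local_time(instructions, local_cpu_hz):
        # Local execution time: CPU instructions / local CPU frequency.
        return instructions / local_cpu_hz

    def remote_time(instructions, remote_cpu_hz, data_bytes, bandwidth_bps, middleware_overhead_s):
        # Remote execution time: remote CPU time + file transmission + middleware overhead.
        return (instructions / remote_cpu_hz
                + 8 * data_bytes / bandwidth_bps
                + middleware_overhead_s)

    def should_offload(service):
        # 1) Memory: the resident service must fit in the device's available memory.
        if service["resident_memory_mb"] > service["available_memory_mb"]:
            return False
        # 2) Energy: offloading (waiting + transferring) must not cost more than local execution.
        e_offload = service["energy_wait_j"] + service["energy_transfer_j"]
        if e_offload >= service["energy_local_j"]:
            return False
        # 3) Time: remote execution (CPU + transfer + middleware) must beat local execution.
        t_local = local_time(service["instructions"], service["local_cpu_hz"])
        t_remote = remote_time(service["instructions"], service["remote_cpu_hz"],
                               service["data_bytes"], service["bandwidth_bps"],
                               service["middleware_overhead_s"])
        return t_remote < t_local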
III. INFRASTRUCTURE FOR MOBILE CLOUD COMPUTING
The previous sections describe the conditions under which offloading computation can improve performance, save energy, or both.
A. Interoperability
Different types of resource-constrained devices may interact and connect across different types of networks to one or many servers. For example, devices like the iPhone switch to a 3G connection when there is no WiFi network available. Since the 3G radio is typically slower and consumes more power than WiFi, the offloading decision may vary based on the available network. Moreover, offloading may be possible between systems of different computational capabilities; it is important to hide these interactions from the user [18,19,20].
B. Mobility and Fault Tolerance
Offloading relies on wireless networks and servers; thus it
is important to handle failures and to focus on reliable services.
Fault tolerance enables the system to continue executing the
application in the event of network congestion or failure, or
server failure.
C. Privacy and Security
Privacy is a concern because users' programs and data are sent to servers that are not under the users' control. Security
is an issue because a third party may access confidential data.
Many studies have been conducted to protect outsourced data.
Solutions include steganography, homomorphic encryption, and hardware-based secure execution. Most of these solutions have
limitations in their applications: for example, encryption keys
may be too large and dramatically increase the amount of
data. Also, efficient computation on encrypted data is still a
debatable topic.
D. Context Awareness
This refers to the device being able to perceive the user's state and surroundings and infer context information. This is important because the mechanism of offloading may vary depending on the user's location and context [21,22].
E. Offloading Tasks to the Cloud
An important question arises when we talk about offloading to the cloud: do mobile phones with slow computational capabilities and a good Internet connection benefit from cloud-based mobile applications, in terms of improved execution time, in comparison to mobile phones with great computational capabilities and a slow Internet connection?
In theory, mobile phones that could be considered slow, or old, would be more suitable for offloading functions to the cloud. Computation-heavy tasks would take long to execute locally because of insufficient hardware, but if the network connection were fast, it would be quick to transfer data to the cloud servers, where it would be processed. On the other hand, there are fast mobile phones with slow network connections. In this case the mobile phone can more easily carry out computation-heavy tasks itself, while transferring a task to the cloud servers would take a long time because of the slow network connection. Therefore, fast phones with a bad connection would be less suitable for cloud offloading.
Whether offloading pays off depends on factors such as the computational task, the type of mobile phone, and the network connection. At some point the tasks are too big for the mobile phone to execute locally, and depending on whether the phone and the network connection are slow or fast, it will be more or less suitable to offload to the cloud.
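This trade-off can be illustrated with a back-of-the-envelope Python sketch; the task size, phone speeds, and network bandwidths below are made-up numbers chosen only to show the two extremes.

    # Illustrative only: task of 2e9 CPU cycles, 5 MB of data, cloud server at 10 GHz.
    def t_local(cycles, phone_hz):
        return cycles / phone_hz

    def t_offload(cycles, cloud_hz, data_bits, bandwidth_bps):
        return data_bits / bandwidth_bps + cycles / cloud_hz

    cycles, data_bits, cloud_hz = 2e9, 5e6 * 8, 10e9

    # Slow phone (0.5 GHz) with a fast network (20 Mbit/s): offloading pays off.
    print(t_local(cycles, 0.5e9), t_offload(cycles, cloud_hz, data_bits, 20e6))   # 4.0 s vs ~2.2 s

    # Fast phone (2.5 GHz) with a slow network (1 Mbit/s): local execution wins.
    print(t_local(cycles, 2.5e9), t_offload(cycles, cloud_hz, data_bits, 1e6))    # 0.8 s vs ~40.2 s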
IV. PROPOSED ALGORITHMS
1) Based on Lyapunov Optimization: [27]
The problem of optimal application component partitioning
for offloading is NP-complete [23]. This algorithm is used to
provide low-latency user interaction under different wireless conditions. The algorithm can dynamically offload part of an application's computation to a dedicated server according to changes in the wireless environment; its offloading decisions are driven by Lyapunov optimization.
Data Structures Used: A component is said to be un-offloadable when it involves user interaction or handles access to local I/O devices. The application is considered to have one un-offloadable component (if multiple such components exist, they can be combined into one), while there are N offloadable components [23].
A weighted directed graph is used to relate the components. Each vertex of the graph denotes a component, and the weight of each edge gives the size of the data migrated between the corresponding components. The component with index 0 is considered un-offloadable, while the other N components are offloadable.
The application execution is considered complete when the last component returns its result to the initial component, i.e., component 0. Whenever there is a request to execute the application, the controller inside the device determines which components should be executed locally and which ones remotely.
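A minimal sketch of this data structure, with invented component indices and data sizes, could look as follows (vertex 0 is the un-offloadable component):

    # Weighted directed graph of application components.
    # graph[u][v] = size in bytes of the data migrated from component u to component v.
    # Component 0 is un-offloadable; components 1..N are offloadable.
    graph = {
        0: {1: 20_000},          # component 0 sends 20 KB to component 1
        1: {2: 500_000},         # component 1 sends 500 KB to component 2
        2: {0: 4_000},           # the last component returns its result to component 0
    }

    # The controller's decision is a placement: True = run remotely, False = run locally.
    placement = {0: False, 1: True, 2: True}   # component 0 must stay on the device

    def migrated_bytes(graph, placement):
        # Data crossing the device/server boundary under a given placement.
        return sum(size
                   for u, edges in graph.items()
                   for v, size in edges.items()
                   if placement[u] != placement[v])

    print(migrated_bytes(graph, placement))    # 24000 bytes cross the wireless link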
Type of Mobile Network: The time taken to transfer data between the server and the device often dominates the total execution time, which can violate the time constraint. The data transmission rate between the mobile device and the server therefore has a significant impact on offloading decisions. The handheld device is assumed to be mobile, and the available wireless service keeps changing over time, depending on the current location of the device. As an example network environment, a 3G network is assumed to be available at all locations, while the availability of a Wi-Fi network is conditional, and the data rate of each network changes with location. During the execution of an application request, the available network of the mobile user does not change. When multiple networks are available, the controller in the mobile device chooses the best network (e.g., the Wi-Fi network that gives the highest data transfer rate) before offloading.
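The network-selection step can be pictured as a small helper run by the on-device controller before offloading; the locations and data rates below are illustrative assumptions, not values from [27].

    # Hedged sketch: pick the available network with the highest data rate.
    # 3G is assumed to be available everywhere; Wi-Fi availability depends on location.
    def best_network(location, wifi_coverage, rates_mbps):
        available = ["3G"]
        if location in wifi_coverage:
            available.append("WiFi")
        return max(available, key=lambda net: rates_mbps[(net, location)])

    rates_mbps = {("3G", "street"): 2.0, ("3G", "cafe"): 1.5, ("WiFi", "cafe"): 25.0}
    print(best_network("cafe", {"cafe"}, rates_mbps))     # 'WiFi'
    print(best_network("street", {"cafe"}, rates_mbps))   # '3G'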
2) Based on Dynamic Programming: [28]
It is important to have an offloading algorithm with low time complexity that reaches the optimal solution fast enough for offloading decisions to be made in real time; a fast offloading algorithm is necessary to guarantee the efficiency of the MCC system. The O(n^2) computational complexity of the DPOA solver makes it much faster in cases where computing the optimal offloading decision is the priority, while the exponential computational complexity of B&B limits its practical use.
A Linear Programming (LP) solver based on the Branch and Bound (B&B) algorithm is used in the schemes of [24], [25] and [26]. B&B is a feasible approach for solving integer linear problems when the number of branches is not large, but the number of candidate solutions grows exponentially with the number of subprograms, which means a higher time complexity of O(2^n).
Data Structures Used: A Dynamic Programming (DP)
table is built in this DPOA system to calculate the shortest
path for offloading decisions.
M(i,j) stores the actual location of the (i + j)th offloadable component of an application. The actual location can differ from (i + j), since there may be un-offloadable components within the application. For example, suppose an application contains four components and the second component is un-offloadable. Then, for the DP table, M(0,1) = M(1,0) = 1, M(0,2) = M(2,0) = M(1,1) = 3, and M(3,0) = M(0,3) = M(2,1) = M(1,2) = 4. Only the offloadable methods are considered in the DP table, and thus the sum of i and j never exceeds N_M (the total number of offloadable components within the application, excluding the un-offloadable component).
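The construction of M can be reproduced directly from this description; the short Python sketch below, written here for illustration rather than taken from [28], rebuilds the four-component example and yields exactly the values listed above.

    # Build M(i, j): the actual index of the (i + j)th offloadable component.
    # offloadable[k] tells whether component k+1 of the application is offloadable.
    def build_M(offloadable):
        positions = [k + 1 for k, flag in enumerate(offloadable) if flag]
        n_m = len(positions)                      # number of offloadable components
        M = {}
        for i in range(n_m + 1):
            for j in range(n_m + 1 - i):
                if i + j >= 1:                    # (0, 0) has no associated component
                    M[(i, j)] = positions[i + j - 1]
        return M

    # Four components, the second one un-offloadable.
    M = build_M([True, False, True, True])
    print(M[(0, 1)], M[(1, 0)])                          # 1 1
    print(M[(0, 2)], M[(2, 0)], M[(1, 1)])               # 3 3 3
    print(M[(3, 0)], M[(0, 3)], M[(2, 1)], M[(1, 2)])    # 4 4 4 4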
T(i,j) associates with the cell (i,j) the minimal cost of all the offloadable and un-offloadable components up to the (i + j)th offloadable component of an application. The row index i denotes the number of transitions made between the mobile device and the cloud server, while the column index j denotes the number of steps in which no transition is made. For example, in an application containing four components, where the second component is un-offloadable, T(2,1) includes three parts: the execution cost of the second component on the device itself, and the minimum transition cost and execution cost of the other components under the condition of two transitions between the device and the server.
System Model: The DPOA framework consists of several parts. The profiler and application analyzer are used to calculate the cost of a method when running on the mobile device, according to CPU speed and application software complexity. The network profiler calculates a method's transition cost in terms of the network characteristics and the data sent by each method. The constraint analyzer provides the indexes of the application methods that are not offloadable. With this information, the DPOA solver calculates, ahead of time, the offloading decisions for a mobile application and how it should execute, before the application program runs.
Once the wireless environment or the components under execution change, the related offloading decision has to be recalculated. The DPOA solver supports dynamic updates of the offloading decision: the only thing that needs to be adjusted before reprocessing a program is the related cost of the components in the DP table, after which a new offloading decision can be recalculated and applied quickly.
3) Based on Multi-Parameter Decision: [29]
Mobile applications are characterized by constraints on latency, computational complexity, and memory requirements that should all be met. Therefore, an optimization approach centered on a single parameter does not ensure the user's expected Quality of Experience (QoE). Additionally, it is known that not all tasks can be offloaded; some require specific information available only on the mobile handset, or use dedicated hardware that is only available locally.
This algorithm tackles the complexity of multi-parameter optimization, which is considered a major bottleneck of classical multi-parameter optimization techniques. A sequential approach is chosen, where decisions are taken one after another according to a decision tree. This approach allows a multitude of parameters to be introduced into the decision process, and multi-parameter optimization is approached through a multi-fold task classification.
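A minimal sketch of such a sequential, tree-like decision is given below; the ordering of the checks and the threshold names are assumptions made here for illustration and are not the exact decision tree of [29].

    # Hedged sketch of a sequential decision tree for one task.
    # Each check either settles the decision or passes the task to the next check.
    def decide(task, device, channel):
        if not task["offloadable"]:               # feasibility: some tasks must stay local
            return "local"
        if task["memory_mb"] > device["free_memory_mb"]:
            return "offload"                      # cannot be held locally at all
        if task["latency_budget_s"] < channel["round_trip_s"]:
            return "local"                        # the radio link alone would miss the deadline
        if device["battery_pct"] < 20 and channel["good_conditions"]:
            return "offload"                      # save the battery when the channel is good
        return "local"

    task = {"offloadable": True, "memory_mb": 50, "latency_budget_s": 0.5}
    device = {"free_memory_mb": 200, "battery_pct": 15}
    channel = {"round_trip_s": 0.1, "good_conditions": True}
    print(decide(task, device, channel))          # 'offload'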
System Model: An LTE system with k users, each served by either a macro base station or a femtocell base station within a distance d, is considered. The uplink connection between the UE (user equipment) and the serving base station, with a bandwidth B, is considered here. The instantaneous uplink bit rate is maximized based on an adaptive modulation and coding scheme (AMC) [30]. A parameter in the channel model indicates the SNR margin required to guarantee the minimum error rate.
Each UE (user equipment) is parametrized as follows. A Rayleigh channel model is adopted, with a path loss exponent, noise power N_0, and fading channel coefficient h_k; a perfect estimation of h_k is assumed. The channel is assumed to be constant for a whole transmission period, with coherence time T_c, and a constant transmission power P_Tx is used over the defined channel.
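Under these assumptions, the instantaneous uplink rate is often approximated by a Shannon-style expression with an SNR margin; the sketch below uses that common approximation rather than the exact AMC mapping of [30], so it illustrates the channel model rather than reproducing the paper's formula.

    import math

    # Illustrative rate model: R = B * log2(1 + SNR / margin), where
    # SNR = P_Tx * |h_k|^2 * d^(-alpha) / N_0, with a Rayleigh fading coefficient h_k.
    def uplink_rate_bps(B_hz, p_tx_w, h_k, d_m, alpha, n0_w, snr_margin):
        snr = p_tx_w * abs(h_k) ** 2 * d_m ** (-alpha) / n0_w
        return B_hz * math.log2(1 + snr / snr_margin)

    # Made-up numbers: 10 MHz bandwidth, 0.2 W transmit power, 50 m to the femtocell.
    print(uplink_rate_bps(B_hz=10e6, p_tx_w=0.2, h_k=0.8, d_m=50,
                          alpha=3.5, n0_w=1e-10, snr_margin=3.0))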
Applications are launched by any user at the mobile handset. Activity on the application side is modeled as a Poisson process whose rate gives the number of applications launched in a time window T_w. An application call generates a burst of tasks which have to be computed. Each generated task is a set of instructions that has to be executed with a required memory m and a maximum latency L_max. A binary parameter indicates whether the task is offloadable (1) or un-offloadable (0), and the percentage of tasks that cannot be offloaded is defined by a further parameter. If a task is offloadable, a parameter N indicates the number of bits to send to the femtocell when the task is decided to be offloaded.
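Collecting these per-task parameters in one place, a hypothetical task record (the field names are chosen here for readability and are not the paper's notation) might look like the following.

    from dataclasses import dataclass

    @dataclass
    class Task:
        instructions: int        # number of instructions to execute
        memory_mb: float         # required memory m
        max_latency_s: float     # maximum latency L_max
        offloadable: bool        # True if the task may be offloaded
        upload_bits: int = 0     # N: bits to send to the femtocell if offloaded

    # One task from a burst generated by an application call (illustrative values).
    t = Task(instructions=5_000_000, memory_mb=12.0,
             max_latency_s=0.2, offloadable=True, upload_bits=400_000)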
For each task, its energy consumption is computed. For tasks that are computed locally, the mobile handset's energy consumption is evaluated as the product of the number of instructions to be executed and the energy consumption per instruction. For tasks that are offloaded, the energy consumption of the mobile handset's physical components is evaluated based on the mobile-user LTE power consumption model proposed by Jensen et al. in [31].
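The local-versus-offloaded energy bookkeeping can be summarized as follows; the local term follows the product rule stated above, while the offloaded term is reduced to transmit power times airtime, a deliberate simplification of the LTE power model of Jensen et al. [31].

    # Energy for a locally executed task: instructions * energy per instruction.
    def local_energy_j(instructions, energy_per_instruction_j):
        return instructions * energy_per_instruction_j

    # Simplified energy for an offloaded task: transmit power * time on air.
    # (A stand-in for the full LTE power model of [31].)
    def offload_energy_j(upload_bits, uplink_rate_bps, p_tx_w):
        return p_tx_w * (upload_bits / uplink_rate_bps)

    # Example with made-up values: 5e6 instructions at 1 nJ each vs. 400 kbit at 5 Mbit/s.
    print(local_energy_j(5_000_000, 1e-9))            # 0.005 J
    print(offload_energy_j(400_000, 5e6, 0.2))        # 0.016 J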
Major Considerations: This algorithm allows one to take offloading decisions depending on the offloading feasibility, the memory and latency requirements of the tasks, the resources available locally at the mobile handset (computational capacity, memory, battery), the energy involved in the offloading trade-off, and the prevailing radio channel conditions.