1 Introduction and Background
1.1. Introduction
In recent years, mobile phone usage has increased significantly across the world. According to statistics from 2013, 73% of internet users accessed internet resources via a mobile device, and this percentage was projected to reach nearly 90% by 2017 [1]. The number of mobile phones, as well as their integrated sensors, processing capacity, battery life, and overall hardware quality, is also growing. Due to their convenience and smartness, they have become prevalent in our daily lives and may even lead to overuse by consumers [2]. Nowadays, we carry our phones with us everywhere, at any given time. Yet although phones are becoming relatively smarter, they are still not smart enough to operate automatically based on the user's current situation. This problem, together with the increasing number of integrated sensors, provides new opportunities for the development of applications [herein referred to as 'apps']. These apps are essentially computer programs running on smartphones that can sense the user's operating conditions and surrounding environment, known as context [3], and react accordingly [4, 5].
Inferring a user's context from such raw sensor data (low-level context) has been an emerging research opportunity. Context encompasses where you are, who you are with, and what resources are nearby [3]. The most significant reason for today's smartphone revolution is their capability to provide context-aware services. Mobile apps generally have access to rich context, as almost all smartphones currently on the market are equipped with location sensors, motion sensors, Bluetooth, and Wi-Fi. By exploiting these data, apps can achieve 'context awareness', which can significantly increase their capabilities and value and can truly make our phones smarter and more of a proactive personal assistant. Such phones would thus make our daily lives much simpler.
Context-aware mobile apps are capable of providing services without requiring user intervention, which is highly consistent with ubiquity [6]; as such, they are an important and valuable step down this path. They employ contextual information collected from sensors to proactively provide the user with valuable information, and they do so with minimal effort on the part of the user. They can sense and react based on the contextual data they have access to. Through the trends they observe over the course of device usage, and/or through feedback provided by the user, such apps can adapt, evolve, and become smarter and more useful over time. These days, they are popular for a diverse range of services, from health care to lifestyle monitoring to participatory sensing. According to Juniper Research, the number of smartphone and tablet apps that use contextual or location data will near 7.5 billion by 2019, up from 2.8 billion in 2014 [7].
One challenge of context awareness is learning and adapting to a dynamic environment with a new class of applications that are aware of the context in which they run. People perform many different activities in their daily lives. These activities differ in terms of location, time, and surrounding environment (noise level, network connectivity, social situation, etc.), and these parameters constantly change from activity to activity. Context-aware apps adapt to such changes over time. A system with these capabilities can examine the computing environment and react to changes in it.
Another challenge is the fact that most phones have limited battery power. Different context-aware apps provide different services by employing different sensors on the phone. Because apps generally do not share data with each other, each app that queries a sensor to detect context consumes a substantial amount of energy [8-10]. Therefore, with each app requesting multiple contexts, battery life decreases at an alarming rate when such apps run on a phone. Context sensing is expensive in terms of energy because it involves various sensors, and it is especially expensive when individual context-aware apps detect context independently. This becomes a real-world problem due to the increasing usage and number of continuous context-aware apps.
1.2. Background
In the early 1990s, through research at Xerox PARC, the concept of the context-aware system was introduced and became part of the pervasive computing paradigm. It comprised three essential aspects [11]:
Where you are? Who you are with? What resources are nearby?
This context-aware system was characterized by its ability to act and react with other devices; as such, it was known as a system that can adapt to changes in location, objects, and collections of people over a given period of time [12].
1.2.1. Context
According to Merriam-Webster [13], context is described as 'the circumstances surrounding an act or event.' At any given time of day, a person performs various activities, which a context-aware system must take into account to be a successful application. In the past, before the mass use of mobile devices, the notion of contextual needs was fairly narrow and limited in use. Desktop users, however, performed tasks under conditions that were inconvenient because computers were immobile and tethered to a power source and/or other networks.
As stated by Van Laerhoven and Aidoo [14], ‘the notion of context is very broad and incorporates lots of information, not just about the current location, but also about the current activity, or even the inner state of the person describing it. As a consequence, people can describe their contexts in different ways, even if they are in the same location doing the same things.’
For a context-aware system, location is the most important and complex concept, as it depends not only on the actual location but also on other variables such as objects and people. The physical relationship between various objects has been clearly pointed out by Brumitt and Shafer [15]. They suggest understanding 'the location of the person, their physical relationship to the devices around them, and the various consequences of the current state of the world.' In view of this definition, context must take into account all aspects in the propinquity of the user, such as people, devices, and physical objects, together with the capabilities of the user's device.
Schilit, Adams, and Want [3] wrote about a broader definition of context, describing context-aware systems as accommodating to 'the location of use, the collection of nearby people, hosts, and accessible devices, as well as to changes to such things over time.' They further described the important aspects of context as 'where you are, who you are with, and what resources are nearby.'
Schmidt, Beigl, and Gellersen [16] presented a fairly complete working model for context in the domain of mobile computing. Two main and broad categories of context space features were proposed, Human Factors and Physical Environment, which were further subdivided as shown in Figure 1:
Figure 1: Context Model[16]
Furthermore, in additional research, Abowd and Mynatt [17] presented the 'five W's' as a minimal set of necessary context information:
- Who: the user and other people in the environment
- What: perception and interpretation of human activity
- Where: location and the perceived path of the user
- When: time as an index and elapsed time
- Why: the reason a person is doing something
Abowd and Mynatt further stated that the question of 'when' needs to be dissected further to include the constant change of time, to help explicate human activity. They also acknowledged the difficulty of determining 'why' a person is doing something.
Collecting, storing, and using these types of context information is a complicated task, and the quality of the information varies depending on the collection method. Henricksen, Indulska, and Rakotonirainy [18] described a context model for pervasive computing that addresses the need to measure the quality of context information and to accurately define the relationships among context information. In addition, they outlined the importance of different aspects of context, including future planned activities.
Many researchers have had difficulty describing the complexities of context completely. In contrast, one researcher, Dey [19], proposes a more general approach. He says that context is 'all about the whole situation relevant to an application and its set of users.' He further explains context as 'any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.' Dey takes such a general approach because he considers that context should capture everything that is important in a given situation, and that any other variable is not context. However, it may be difficult and puzzling to discard aspects of context at one moment and recall them the next, since the relevant aspects of context change from situation to situation. While Dey does not explicitly consider time in this definition of context, he does consider the importance of time in earlier works [20, 21].
1.2.2. Main characteristics of context-information
When dealing with context information, the following characteristics have to be kept in mind:
- Context information comes from heterogeneous sources
- Context information might need to be enriched to be meaningful
- Context information can be contradictory
- Context information may be applicable only in certain situations
- Context information is continuously subject to change
One example used to demonstrate these characteristics, and to illustrate how context awareness can be applied, integrates context information from the environment surrounding the driver of a car that uses a navigation system.
Location-Based Services (LBS) are a class of context-aware services that determine application behavior using location context [22]. The navigation system integrated into a car is one such example; with the use of a GPS receiver, it assists a driver from his current location to a desired destination. Driving directions are provided to the driver using a set of electronic maps.
In order for a context-aware system to derive higher-level knowledge, location is usually combined with other types of context information. For example, the same car navigation system can also help find the nearest or cheapest gas station in the user's area if it has access to a database of gas station locations and prices, together with a fuel level sensor that triggers the database lookup when the need arises.
Context information may be derived from various context sources in a context-aware system. In the example of the car navigation system, the combination of information generated by its internal GPS receiver and the car's fuel sensor, together with access to a gas station database, produces the desired information.
The system must also deal with contradictory sensory information, for example when the fuel sensor indicates that the fuel tank is nearly empty although the car is still being driven. An inaccurate fuel level reading may lead to unnecessary notifications about gas stations near the user's current location.
Such information is close to useless when the driver has just passed a gas station; hence, inconsistency or incompleteness of context information may arise. It would also be problematic for the user if the car navigation system had to rely on outdated information, if location information were not updated, or if the GPS sensor failed.
As a result, context information may be applicable only in certain situations. For instance, a driver would not appreciate a notification telling him to go to a gas station when his car is parked at home after a long day of work.
One of the distinctive features of context information is that it is always subject to change. The fuel level and location of the car change constantly while driving. The frequency at which context information changes, however, is often determined by the type of context and the availability of context sensors.
Figure 2: Layers of a Context-aware System[19]
1.2.3. Context-Awareness
If we take a model of context in which relevant context factors are measured and that information is used to improve the functionality or usability of mobile applications, we end up with context-aware systems. Context-aware systems assist the user based on knowledge of the environment [23]. Such systems provide relevant information and/or services to the user, depending on the user's task [19]. Van Laerhoven and Aidoo [14] stressed that context awareness should be adaptive, given that contexts depend heavily on both the user and the application. There is an increasing amount of research on context-aware computing, much of which concentrates on the construction and testing of prototype systems that are improved by using context information, as well as systems that could not exist without taking context into account.
Simplifying, or in other words improving, the user interface has been the focus of some context-aware research. Cheverst et al. [24] discussed three ways in which context can be used to simplify the user's interaction with an interactive system. These are:
- Reducing the need for input/action by the user;
- Reducing the quantity of information that has to be processed by the user, or increasing the quality of the information presented;
- Reducing the complexity of the rules constituting the user's mental model of the system.
Researchers have begun to address the needs of mobility by formulating alternative interaction styles for such improvement. For example, Pascoe, Ryan, and Morse [25] discussed a context-aware application called 'stick-e notes,' which allows users to type messages on a mobile device and virtually attach them to their current location. Besides location, context such as weather conditions, temperature, and time of day can also be used. The format of the notes is not limited to plain text, and if the user revisits the same location, the notes reappear.
In a dynamic environment, we must consider the design of devices that derive input indirectly from the user to improve usability. Schmidt [26] discussed a vision of mobile computing where devices can 'see, hear, and feel.' Devices are used in different situational contexts and are required to act accordingly in those situations. According to Schmidt, there is a shift from explicit interaction with devices to implicit interaction, where the device understands input that is not necessarily directed at it. For example, a device might turn on automatically when grasped by a user and power down after being left alone for a certain length of time.
The user's surrounding environment also provides context data to the device. Addlesee et al. [27] are investigating systems that react to changes in the environment according to a user's preferences. They use the term sentient computing because the applications appear to share the user's perception of the environment. They created a device called the 'Bat' that determines a 3-dimensional location within a building in real time; it is either attached to equipment or carried by the user.
According to Dey's [19] definition of context-aware systems: 'A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task'. Dey identified three categories of features, illustrated in Figure 2, that a context-aware application can support:
1. Presentation of information and services to a user;
2. Automatic execution of a service for a user; and
3. Tagging of context to information to support later retrieval.
A number of application prototypes, frameworks, middleware systems, and models for describing context have been produced. The goal of context-aware computing is to allow applications to adapt seamlessly when a pre-defined context-based situation occurs, making users' lives easier as a result.
1.2.4. Mobile Context-Awareness
The field of ubiquitous computing is closely related to research on context-aware mobile computing. The aim is to enable mobile devices to provide better services for their users through the availability of contextual information. Making multiple computers available throughout the physical environment enhances computer usage. In tactical environments, the use of mobile computing devices by warfighters has also become more and more prominent. A context-aware mobile application uses contextual information to modify its behavior, adapt its interface, or filter data.
Some context-aware applications must monitor the user's context continuously or frequently. The high performance, mobility, and power of mobile devices enable such apps to collect valuable intelligence directly. By utilizing the rapidly growing set of sensors embedded in mobile phones, such as the microphone, digital compass, global positioning system (GPS), gyroscope, camera, ambient light sensor, accelerometer, proximity sensor, barometer, and air quality, chemical, and radiation sensors, it is possible to understand the phone's environmental situation or context. In particular, today's mobile devices can perform continuous sensing, as they are equipped with an increasing range of sensing, computational, storage, and communication resources.
1.2.5. Continuous Context-Aware Apps
Continuous context-aware mobile apps employ contextual information gathered from their sensors to proactively provide the user with valuable information, and they do so with minimal effort on the part of the user. The following are some well-known continuous context-aware mobile apps.
Google Maps
Generating traffic-related information in Google Maps is another implementation of context-aware systems. Google Maps collects location data from Android devices present in the same area to determine people's traveling patterns and traveling times. This information is then stored and used to calculate the travel times it provides between two locations [28].
Google Now
Google Now displays cards with information pulled from the user's Gmail account, such as flight information, package tracking information, hotel reservations, and restaurant reservations. Other additions were movie, concert, stock, and news cards based on the user's location and search history [29].
For example, Google Now can inform the user whether he/she will be late for work based on information such as the road one is on, the time of day, and traffic conditions, all provided by Google Maps [30]. Google Now is implemented as part of the Google Search application [31]. To display information more relevant to the user in the form of "cards", it recognizes and uses repeated actions that the user performs on the device (common locations, repeated calendar appointments, search queries, etc.). The system leverages Google's Knowledge Graph project [32], which produces more detailed search results by analyzing their meaning and connections, i.e., user context.
1.2.6. Challenges of Context Aware Apps
1. Security: Security became a major factor in mobile computing now that so many sensors can resolve a user's identity. Context-aware systems can be attacked by third parties to retrieve information, so hardware manufacturers need to make sure the hardware used in such contexts cannot be compromised. In addition, user data can be stolen, which raises great concern; software vendors must therefore test their applications thoroughly for any gaping holes that would allow an unauthorized party to steal user data. Sharing some critical information may harm the user even when there is consent. For example, if the information that a certain family is on vacation and no one is at home is shared, the family may be subjected to robbery.
2. Privacy: It is well known that, in order to provide better context-aware services, mobile operating systems tend to cache the user's location, which sometimes violates privacy rights. Hence, to avoid liability, most operating systems and applications ask for user consent before collecting any kind of privacy-related data.
3. Accuracy: Data fabrication is always a possibility when an application relies completely on context information to provide services, which can result in inaccurate data being introduced and incorrect information being provided to the user.
4. Infrastructure: Not every person has access to mobile devices with context-sensing hardware and software.
5. Accurate position determination in buildings: As GPS is not available indoors, Wi-Fi is usually the best option. Although Wi-Fi addresses the indoor positioning problem, it is not as accurate as GPS and, in some places, is also unavailable.
6. Heterogeneity of sources: The variety of hardware and software sensors keeps increasing, each with different behaviors and generating different types of data. Handling such data is another challenge.
7. No standard way to acquire and handle context: To the best of my knowledge, there are no available context toolkits or standard modules for studying or developing continuous context-aware apps.
8. Sensor noise: Sensors sometimes produce false readings.
9. Battery power
10. Adoption
1.3. Problem description and Statement
1.3.1. Problem Description
Typical continuous context-aware apps involve hardware sensors, which are the main source of power consumption [8-10]. This keeps context-aware apps and services power hungry. For example, consider the contexts listed in Table 1: turning on the accelerometer for only 10 seconds costs about 259 mJ. The limited phone battery will drain in a short period of time if two or three apps invoke these sensors frequently.
Context                                      | Sensors                | Sensing Energy (mJ)
IsWalking, IsDriving, IsJogging, IsSitting   | Accelerometer (10 sec) | 259
AtHome, AtOffice                             | WiFi                   | 605
IsIndoor                                     | GPS + WiFi             | 1985
IsAlone                                      | Mic (10 sec)           | 2995
InMeeting, IsWorking                         | WiFi + Mic (10 sec)    | 3505
Table 1: Energy costs for inferring a context[5]
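To make the drain rate concrete, the following back-of-the-envelope sketch (ours, not from the cited works) combines the Table 1 costs with a hypothetical per-hour request workload and an assumed 1500 mAh / 3.8 V battery to estimate what share of the battery a day of independent sensing consumes:

```python
# Back-of-the-envelope sketch: how quickly repeated context requests from
# Table 1 eat into a phone battery. The battery capacity (1500 mAh at 3.8 V)
# and the request rates below are assumptions for illustration only.

SENSING_ENERGY_MJ = {          # per-request costs from Table 1
    "accelerometer_10s": 259,
    "wifi": 605,
    "gps_wifi": 1985,
    "mic_10s": 2995,
    "wifi_mic_10s": 3505,
}

BATTERY_MJ = 1500e-3 * 3600 * 3.8 * 1000   # Ah -> J -> mJ, about 2.05e7 mJ

def daily_sensing_cost_mj(requests_per_hour):
    """requests_per_hour maps a Table-1 sensor set to its hourly request count."""
    per_hour = sum(SENSING_ENERGY_MJ[s] * n for s, n in requests_per_hour.items())
    return per_hour * 24

# Hypothetical apps polling independently (no data sharing between them):
workload = {"accelerometer_10s": 60, "gps_wifi": 12, "wifi_mic_10s": 6}
cost = daily_sensing_cost_mj(workload)
share = cost / BATTERY_MJ
print(f"daily sensing cost: {cost/1000:.0f} J ({share:.1%} of battery)")
```

Even this modest workload spends several percent of the battery on sensing alone, before any CPU, screen, or radio usage, which is the motivation for sharing and inferring sensor data instead of re-acquiring it.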
1.3.2. Problem Statement
In order to provide context-aware services at a minimum energy cost, continuous context-aware apps need a middleware that considers both energy efficiency and accuracy.
Many approaches for reducing the energy consumption of continuous context-aware applications are viable [5, 8, 9, 33-41]. Among them, the Acquisitional Context Engine (ACE) [5] and the Scalable and Energy-Efficient Context Monitoring Framework for Mobile Personal Sensor Networks (SeeMon) [4] have done preliminary work toward achieving energy efficiency and accuracy. ACE uses high-level (context-level) inference and speculative sensing, and supports only Boolean attributes. SeeMon achieves a high degree of efficiency in computation and energy consumption by combining bidirectional context monitoring with continuous detection of context. However, despite their advantages, both have limitations. While ACE gains roughly 4.2x energy savings at roughly 4.0% inaccuracy, it does not exploit the phenomenon that a single sensor reading can infer only one result at a time. In SeeMon, even though only the necessary set of sensors is requested, the continuous detection of context changes requires requesting sensor data continuously, which results in unnecessary computational costs.
A context-aware middleware that adds low-level (sensor-data-level) inference and extends the previous works (specifically ACE and SeeMon) by exploring their capabilities and limitations may surpass both in energy savings and accuracy. We propose a continuous context sensing middleware that infers context at minimum energy cost while supporting both Boolean and non-Boolean context attributes. We call it the Energy-Efficient Continuous Context Sensing Middleware (CCCenter). We will also investigate the energy efficiency and accuracy of CCCenter.
1.4. Objectives
1. Foundation work and proposal of the CCCenter system architecture
- Studying Android programming
- Investigating the two systems: ACE and SeeMon
- Exploring the capabilities and limitations of ACE and SeeMon with respect to energy saving and accuracy
- Proposing a solution and designing the architecture
- Studying the Funf open sensing framework, which we will use for raw sensor data retrieval and preprocessing
2. Implementation and Evaluation
- Implementing our continuous context sensing middleware
- Evaluating the energy consumption and accuracy of the newly proposed system
- Drawing conclusions and identifying future work
2 Related Works
Continuously acquiring data from device sensors is what makes context awareness resource intensive. The limited memory scalability, high energy consumption, and low processing power of most mobile devices present numerous challenges for mobile context-aware applications. Progress in battery technology has been relatively slow compared to the improvements made in memory scalability and processing power. Hence, several approaches have been proposed for reducing the energy consumption of sensors. The following sections explore two sensor management strategies, SeeMon and ACE, and propose the Energy-Efficient Continuous Context Sensing Middleware (CCCenter).
2.1. Acquisitional Context Engine (ACE)
ACE, developed by Suman Nath of Microsoft Research, is a more energy-efficient approach that exploits the correlations among known context attributes. ACE is a middleware [5] that dynamically learns relationships among context attributes. Through inference caching and speculative sensing, it uses the correlations among collected attributes to derive energy-intensive context attributes with minimal use of energy-intensive sensing.
2.1.1. Components of ACE
The main components of ACE are the contexters, the raw sensor data cache, the rule miner, the inference cache, and the sensing planner (see Figure 3) [5].
Contexters are the collection of modules that determine the value of a context attribute by acquiring data from the necessary sensors. The two main pieces of information required by a contexter are the name of the attribute to be determined and its energy cost.
The Raw Sensor Data Cache is a standard cache that stores the values of context attributes.
The Rule Miner maintains the user's time-stamped context history for a predefined expiration period. From this history, the relationships among the various Boolean context attributes are learned automatically and context rules are generated. Each rule requires a minimum support and confidence percentage, which can be tuned to achieve an acceptable level of accuracy. The rule miner requires frequent offloading of all context tuples to a remote server to compute the rules. Rules that are no longer valid are deleted through incremental updates on the mobile device.
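The support and confidence thresholds mentioned above can be illustrated with a small sketch (ours, not ACE's actual miner) that scores one candidate rule over a toy context history:

```python
# Illustrative sketch: computing the support and confidence of a candidate
# context rule "AtHome=True => IsDriving=False" over a toy time-ordered
# context history. The tuples below are hypothetical.

from fractions import Fraction

history = [
    {"AtHome": True,  "IsDriving": False},
    {"AtHome": True,  "IsDriving": False},
    {"AtHome": False, "IsDriving": True},
    {"AtHome": True,  "IsDriving": False},
    {"AtHome": False, "IsDriving": False},
]

def rule_stats(history, antecedent, consequent):
    """support = P(antecedent and consequent); confidence = P(consequent | antecedent)."""
    both = sum(1 for t in history
               if all(t[k] == v for k, v in antecedent.items())
               and all(t[k] == v for k, v in consequent.items()))
    ante = sum(1 for t in history
               if all(t[k] == v for k, v in antecedent.items()))
    support = Fraction(both, len(history))
    confidence = Fraction(both, ante) if ante else Fraction(0)
    return support, confidence

s, c = rule_stats(history, {"AtHome": True}, {"IsDriving": False})
print(f"support={float(s):.2f} confidence={float(c):.2f}")
```

A rule is kept only if both values exceed the configured minimums; raising the thresholds trades rule coverage for accuracy.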
The Inference Cache functions as a traditional cache with a Get/Put interface.
The Sensing Planner is used when a requested context attribute is not in the cache. It uses speculative sensing to find a sequence of proxy attributes, starting with the least expensive sensors, to determine the value of the requested context attribute.
Figure 3: Workflow in ACE[5]
2.1.1.1. Inference Caching
The Inference Cache functions like a traditional cache. It allows ACE to infer context attributes from known context attributes without acquiring sensor data. The Inference Cache provides a Get/Put interface. A Put(X, v) places the value v of a context attribute X in the cache for a predetermined amount of time. A Get(X) returns the value of X if the value is in the cache or can be inferred from context rules and the cached values of other context attributes.
ACE represents Boolean expressions by constructing expression trees. An expression tree is a Boolean AND-OR tree, where a non-leaf node represents an AND or OR operation on the values of its children and a leaf node represents a tuple. According to the context rules, for a tuple to evaluate to true, its expression tree must hold. See Figure 4 for the expression tree of the tuple Indoor = True. The Inference Cache maintains one expression tree per tuple. The sensing planner is invoked only when the expression tree cannot determine whether a context attribute is true or false.
Figure 4 : An Expression Tree for Indoor = True, shown upside down
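A minimal sketch (our own simplification, not ACE's code) shows how such a tree can be evaluated over cached tuples. Unknown leaves evaluate to None, and AND/OR use three-valued logic, so the tree can often decide a value without triggering the sensing planner; the rule relating Indoor to AtHome and InMeeting is hypothetical:

```python
# Evaluating a Boolean AND-OR expression tree over cached context tuples.
# A leaf is (attribute, expected_value); unknown attributes yield None.

def eval_tree(node, cache):
    """node = ('AND'|'OR', child, ...) or a leaf (attr, expected_value)."""
    if node[0] in ("AND", "OR"):
        vals = [eval_tree(c, cache) for c in node[1:]]
        if node[0] == "AND":
            if False in vals:
                return False                       # one False decides AND
            return True if all(v is True for v in vals) else None
        if True in vals:
            return True                            # one True decides OR
        return False if all(v is False for v in vals) else None
    attr, expected = node
    cached = cache.get(attr)                       # None: not cached, unknown
    return None if cached is None else cached == expected

# Hypothetical rule: Indoor=True if (AtHome=True OR InMeeting=True)
tree = ("OR", ("AtHome", True), ("InMeeting", True))
print(eval_tree(tree, {"AtHome": True}))    # decided from cache alone
print(eval_tree(tree, {"AtHome": False}))   # undecided -> sensing planner
```

Only the second case, where the tree returns None, would fall through to speculative sensing.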
2.1.1.2. Speculative Sensing
ACE uses speculative sensing to discover additional proxy attributes when the Inference Cache fails to determine the value of a context attribute. ACE attempts to determine the value of an expensive attribute by sensing low-cost attributes first. A sensing plan is developed to determine an optimal ordering based on each attribute's energy cost, Ci, and the likelihood that it will return a True value, Pi.
For a small number of attributes, the algorithm that evaluates a Boolean expression with minimum cost works quite well. When the number of attributes is relatively large, however, the cost of finding the best plan may exceed the cost of directly acquiring the requested attribute. In such cases, ACE uses a heuristic algorithm to rank the attributes based on Ci and Pi.
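One plausible form of such a ranking (our simplification, not ACE's exact planner) probes proxies of an expensive attribute in decreasing order of probability-of-True per unit energy, stopping at the first confirmation. The proxy names, probabilities, and the tie to IsIndoor below are hypothetical; the costs are taken from Table 1:

```python
# Heuristic speculative sensing: rank proxy attributes by P_i / C_i and
# probe them in that order until one confirms the expensive attribute.

def sensing_plan(proxies):
    """proxies: list of (name, cost_mj, p_true). Returns the probe order."""
    return sorted(proxies, key=lambda x: x[2] / x[1], reverse=True)

def speculative_probe(proxies, sense):
    """Probe in plan order; stop at the first True. Returns (result, energy spent)."""
    spent = 0
    for name, cost, _ in sensing_plan(proxies):
        spent += cost
        if sense(name):
            return True, spent
    return False, spent

# Hypothetical proxies for IsIndoor (direct GPS+WiFi acquisition: 1985 mJ):
proxies = [("AtHome_wifi", 605, 0.5), ("InMeeting_wifi_mic", 3505, 0.3)]
truth = {"AtHome_wifi": True, "InMeeting_wifi_mic": False}
result, energy = speculative_probe(proxies, lambda name: truth[name])
print(result, energy)
```

Here the cheap WiFi proxy confirms the attribute for 605 mJ instead of the 1985 mJ direct acquisition; when every proxy fails, the planner falls back to direct sensing.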
2.1.1.3. Evaluation
The use of ACE can thus significantly reduce the energy consumption of mobile devices. It mitigates sensing costs by inferring context attributes from known attributes, since it automatically learns the relationships between expensive and low-cost context attributes. By distributing context information to numerous applications, ACE can significantly reduce the sensing costs of continuous context sensing applications.
Even though ACE provides an energy-efficient solution, it also has several shortcomings. First, the current version of ACE returns the state of a context only as a Boolean attribute. Boolean values cannot represent contexts such as the user's location or the current temperature; this limits ACE to Boolean attributes such as driving, in a meeting, at home, and so on. Second, user activities are often correlated over time; for example, if the user is at home now, he cannot be at the office within the next 10 minutes. Although such correlation is very useful, ACE does not exploit it. Third, the Inference Cache uses the same expiry time for all context attributes, whereas the temporal correlation between attributes could be used to determine expiry times. Fourth, the Inference Cache currently returns only the value of an attribute, not the confidence of the result. We believe that, except for the first limitation, these limitations lead ACE to occasional inaccuracy.
2.2. SeeMon
Kang et al. [4] presented SeeMon, which achieves energy efficiency and lower computational complexity by performing context recognition only when a change occurs during context monitoring, using a hierarchical sensor management system.
This approach is based on the observation that unnecessary computation should be removed during environment monitoring. In addition, SeeMon employs a bidirectional feedback system to detect similar recognitions during computation and prevent redundant power-consuming operations. SeeMon acts as a middle-tier framework between context-aware applications and a network of sensors, and it exposes a set of APIs so that multiple applications can use it concurrently. SeeMon also provides an efficient context-based query system with strong semantic meaning, which reduces the overhead usually created by unnecessary context. The authors further introduced the concept of the Essential Sensor Set (ESS), the minimum number of sensors the system needs to satisfy a query. The energy cost of transmitting data is significantly reduced by selectively activating, according to the requested contexts, only the essential sensors that determine the truth values of Boolean conjunctive clauses. A greedy SetCover algorithm is used to determine those essential sensors.
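The ESS selection can be sketched as a standard greedy SetCover pass (our own illustration, after SeeMon's idea; the sensor-to-query coverage map below is hypothetical):

```python
# Greedy SetCover sketch: pick the fewest sensors whose readings jointly
# cover every registered context query.

def essential_sensor_set(queries, coverage):
    """queries: set of query ids; coverage: sensor -> set of query ids it answers."""
    uncovered, chosen = set(queries), []
    while uncovered:
        # pick the sensor covering the most still-uncovered queries
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            break                       # remaining queries cannot be covered
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

queries = {"IsWalking", "AtHome", "IsAlone"}
coverage = {
    "accelerometer": {"IsWalking"},
    "wifi": {"AtHome"},
    "mic": {"IsAlone", "IsWalking"},    # hypothetical: mic also detects steps
}
print(essential_sensor_set(queries, coverage))
```

Greedy SetCover is not guaranteed optimal, but it gives a logarithmic approximation at negligible cost, which is why it suits an on-device sensor manager; here two sensors (mic and wifi) cover all three queries, so the accelerometer can stay off.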
As a result, SeeMon achieves a high degree of energy efficiency on mobile devices. However, even though only the necessary set of sensors is activated, continuously detecting context changes still requires continuously requesting sensor data, which incurs unnecessary computational cost.
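The greedy SetCover step behind the Essential Sensor Set can be illustrated with a short sketch. This is a hedged illustration of the classic greedy set-cover heuristic, not SeeMon's actual code; the class name, sensor names, and coverage sets are ours.

```java
import java.util.*;

// Illustrative greedy set cover: pick sensors until every context element
// required by a query is covered. Names and data here are examples only.
public class EssentialSensorSet {
    // Greedily choose the sensor covering the most still-uncovered elements.
    public static List<String> greedyCover(Map<String, Set<String>> sensorCoverage,
                                           Set<String> required) {
        Set<String> uncovered = new HashSet<>(required);
        List<String> chosen = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String best = null;
            int bestGain = 0;
            for (Map.Entry<String, Set<String>> e : sensorCoverage.entrySet()) {
                Set<String> gain = new HashSet<>(e.getValue());
                gain.retainAll(uncovered);          // elements this sensor would add
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = e.getKey();
                }
            }
            if (best == null) break;                // remaining elements uncoverable
            uncovered.removeAll(sensorCoverage.get(best));
            chosen.add(best);
        }
        return chosen;
    }
}
```

The greedy heuristic does not always find the true minimum set, but it is a standard polynomial-time approximation for set cover, which is NP-hard in general.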
3 Proposed Solution and System Architecture (CCCenter)
3.1. Proposed Solution
Context-level (high-level) inference, together with speculative sensing, yields significant energy savings, although at the cost of a small decrease in accuracy. The strategy of continuously detecting context changes also requires continuously requesting sensor data, even if only an essential set of sensors is requested, and this incurs unnecessary computational cost.
We believe that more energy can be saved, and accuracy increased, if feature-data-level (low-level context) inference is employed together with context-level inference; see figure 5. Low-level context inference minimizes the number of sensor data requests. The technique we propose works in the same spirit as ACE and SeeMon.
Figure 5: Two level inference
3.1.1. Two-level inference
Our approach is to infer one feature of sensor data from another. A single sensor reading has many features, and each feature belongs to a different context. For example, a single GPS reading has one longitude-latitude pair, so it identifies only one location (feature) at a time. At this level we can infer that the current GPS signal shows only one feature, so there is no need to request new GPS data: any request for a GPS signal within the same time slice can be answered from the current reading as long as it has not expired. However, when context-aware apps run without a middleware, there are many consecutive GPS requests, because there is no horizontal (inter-app) communication for sharing the latest sensor readings. If two context-aware apps (App 1 and App 2) both need the GPS signal, traditionally both request the GPS sensor directly through the Android API; if App 2 requests the signal within a short window of time after App 1's request, both apps are asking for the same result, which shows only one feature. This is computationally inefficient, particularly in battery power, since GPS is the most power-hungry sensor [10]. If apps could instead share sensor data [42], they would save considerable computational resources, especially battery power. In previous works, inference was done at the high level using a computationally expensive rule-mining algorithm, and the mined rules were then used for context inference.
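The sharing just described can be sketched as a middleware-level cache with a freshness window: a second request within the same time slice is served from the cached reading instead of reactivating the sensor. The class and method names below are illustrative, not CCCenter's actual code, and the sensor read is a stand-in.

```java
import java.util.*;

// Minimal sketch of low-level sharing: two apps requesting a GPS fix within
// the same time slice get one cached reading instead of two activations.
public class SensorDataCache {
    private final long ttlMillis;
    private final Map<String, Long> timestamps = new HashMap<>();
    private final Map<String, double[]> readings = new HashMap<>();
    private int sensorRequests = 0;              // counts real sensor activations

    public SensorDataCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return a cached reading if fresh; otherwise simulate a sensor request.
    public double[] get(String sensor, long now) {
        Long ts = timestamps.get(sensor);
        if (ts != null && now - ts <= ttlMillis) {
            return readings.get(sensor);         // inferred from current data
        }
        sensorRequests++;                        // expensive acquisition
        double[] fresh = readSensor(sensor);
        timestamps.put(sensor, now);
        readings.put(sensor, fresh);
        return fresh;
    }

    public int requestCount() { return sensorRequests; }

    // Stand-in for the actual Android location/sensor call.
    private double[] readSensor(String sensor) { return new double[]{9.03, 38.74}; }
}
```

With a 5-minute window, App 2's request one minute after App 1's is answered without touching the GPS hardware at all.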
Figure 6: High level inference on work
The other scenario arises when two or more sensors' data are involved in inferring one context; in some cases we can then infer the context without requesting the other sensors' data. For example, to determine whether the user is in a meeting (InMeeting, for short), previous works require both a Wi-Fi signature and microphone data. The same Wi-Fi data may also be used to infer other contexts, such as whether the user is at home (AtHome): AtHome = {Wi-Fi data}, InMeeting = {Wi-Fi data AND microphone data}. So if a Wi-Fi signature acquired to infer AtHome shows that the user is at home, the signature cannot belong to the office, and we can infer InMeeting without requesting microphone data; see figures 6 and 7. Therefore, for better energy efficiency, CCCenter uses both kinds of inference: low-level inference for contexts that involve the same sensor, and high-level inference for contexts that do not share a sensor, as well as for speculative sensing.
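The short-circuit in the InMeeting example can be sketched in a few lines. This is an illustration of the idea, not CCCenter's implementation; we use java.lang.Boolean with null standing for "unknown", and the class and method names are ours.

```java
// Sketch of the high-level shortcut: when the Wi-Fi signature already proves
// the user is not at the office, InMeeting is decided without a mic sample.
public class HighLevelInference {
    // InMeeting = officeWifi AND meetingAudio (null = not yet sensed/unknown).
    public static Boolean inferInMeeting(Boolean officeWifi, Boolean meetingAudio) {
        if (Boolean.FALSE.equals(officeWifi)) return false; // saves a mic sample
        if (officeWifi == null || meetingAudio == null) return null; // undecided
        return officeWifi && meetingAudio;
    }
}
```

The first branch is the energy saving: a Wi-Fi signature that matched the home cluster settles InMeeting = false before the microphone is ever sampled.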
Figure 7: High level and low level inference on work
3.2. System Architecture (CCCenter)
We divided the modules into two subsystems: the sensing subsystem and the thinking subsystem. Raw data retrieval and preprocessing are carried out by funf [43]. Funf has three components: the funf manager, the pipeline, and the probes. In CCCenter, probes collect and preprocess raw data from the sensors and send it to the pipeline. The funf manager is responsible for managing probes and pipelines, so CCCenter communicates only with the pipeline and the funf manager; see figure 8.
The thinking subsystem takes preprocessed data from the sensing subsystem, infers high-level meaning (context), and passes the result on through an API. CCCenter provides two types of API: one for requesting context results as Booleans, and one for handling non-Boolean context results.
Figure 8: Abstract layered architecture for CCCenter
Upon a new request made by a context-aware app (say, App 1) by calling get(context), CCCenter first checks whether the context can be inferred without involving other components. If the inference cache can infer it, the result is returned; otherwise the speculative sensing component is triggered. This component knows the sensing costs and the best proxy sensor, i.e., the cheapest context that can be used to infer another context, so it looks for the best and cheapest proxy context for inferring the requested one. If the requested context costs, say, 2995 mJ of energy and there is a correlated context that costs less, the speculative sensing component senses the cheaper one to infer the expensive context's result. This component does not carry out sensing by itself; it triggers the contexter, which in turn queries the feature-context matrix for a fresh low-level context result. The contexter knows the necessary set of sensors for determining each requested context (for the time being, this is hard-coded). The matrix returns the requested low-level context if it has not expired; otherwise it requests fresh sensor data from the pipeline. The feature-context matrix is a component that takes sensor data, looks up where the data belongs in the feature map table, and performs low-level inference according to the feature map's information. The feature map is a component that contains pre-learned thresholds and labeled GPS and Wi-Fi signatures; see figure 9.
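The request path above (cache first, sensing only on a miss) can be condensed into a rough sketch. The component names mirror the text, but the class, its methods, and the stand-in sensing body are ours, not CCCenter's actual code.

```java
import java.util.*;

// Rough sketch of the get(context) path: try the inference cache, and only
// fall through to (simulated) speculative sensing / contexter on a miss.
public class CCCenterFlow {
    private final Map<String, Boolean> inferenceCache = new HashMap<>();
    private int sensingTriggers = 0;

    public boolean get(String context) {
        Boolean cached = inferenceCache.get(context);
        if (cached != null) return cached;       // answered with no sensing at all
        sensingTriggers++;                       // speculative sensing / contexter
        boolean fresh = senseViaContexter(context);
        inferenceCache.put(context, fresh);      // cached with a fixed expiry (elided)
        return fresh;
    }

    public int sensingTriggers() { return sensingTriggers; }

    // Stand-in for the contexter querying the feature-context matrix.
    private boolean senseViaContexter(String context) { return true; }
}
```

A second get() for the same context within the expiry window never reaches the sensing path, which is exactly where the energy saving comes from.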
4 Implementation and Evaluation
In this chapter, we discuss the development process and detailed design of CCCenter, which has been implemented on the Android platform. Section 4.1 describes the technologies and approaches used. Section 4.2 presents the evaluation results in terms of energy efficiency and accuracy.
4.1. Detailed Design and Implementation
To clarify the details of the CCCenter implementation, we first present its general workflow. CCCenter has six subcomponents: the inference cache, speculative sensing, the contexter, the rule miner, the feature map, and the feature-context matrix; see figure 10.
4.1.1. Detailed Design
4.1.1.1. Feature Extraction and Feature Level Inference
Except for location data and Wi-Fi signatures, features are extracted before bootstrap as thresholds and labeled data. There are also features that need to be localized, extracted, and stored in the feature map at bootstrap. All features are stored in SQLite database tables. In the following sections we present how this is designed and implemented as a component.
Feature Map Component
This component knows all the features of the sensor data: when new sensor data arrives, the feature-context matrix checks which feature the data belongs to before storing it and performing inference. Because the data formats differ between sensors, we maintain a separate SQLite table for each of them.
Feature-Context Matrix
The feature-context matrix is a component that requests new sensor data from the pipeline, checks which feature in the feature map table the data belongs to, and then stores it and performs inference accordingly. We maintain an SQLite table for each sensor. Whenever a data request comes from the contexter, if the results in the table have not expired they are sent to the contexter; otherwise the pipeline is triggered for new data. For the time being we use a 5-minute expiry time, since there is usually no significant change in the user's environment within such a short period. It would also be possible to determine the expiry time according to the sensor type and the user's environmental behavior, rather than using the same fixed time for all matrices.
Figure 10: Work Flow of CCCenter
4.1.1.2. Context Level Inference, Storage And Management
Contexter
The contexter is a component that collects low-level inference results from the matrices (feature-context tables) and determines the state of each context attribute using AND/OR logical operators. It then sends the result to the inference cache and the context history table. This component has one SQLite table that stores the list of context attributes, their associated probes, and the state of each attribute; see table 2.
Attribute | State | Probe list
IsWalking | Undefined / True / False | AccelerometerFeaturesProbe
IsDriving | Undefined / True / False | AccelerometerFeaturesProbe
IsJogging | Undefined / True / False | AccelerometerFeaturesProbe
IsSitting | Undefined / True / False | AccelerometerFeaturesProbe
AtHome | Undefined / True / False | WifiProbe
InOffice | Undefined / True / False | WifiProbe
IsIndoor | Undefined / True / False | LocationProbe + WifiProbe
IsAlone | Undefined / True / False | AudioFeatureProbe
InMeeting | Undefined / True / False | WifiProbe + AudioFeatureProbe
IsWorking | Undefined / True / False | WifiProbe + AudioFeatureProbe
Table 2: Context attributes implemented in CCCenter
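The AND combination the contexter performs over the three-valued probe states in table 2 can be sketched as follows. This is a hedged illustration; the enum, class, and method names are ours, not CCCenter's actual code.

```java
import java.util.*;

// Illustrative fold of per-probe results into one context attribute state,
// matching the Undefined/True/False states listed in Table 2.
public class Contexter {
    public enum State { UNDEFINED, TRUE, FALSE }

    // AND over probe states: any FALSE decides the attribute immediately,
    // while any UNDEFINED input blocks a TRUE verdict.
    public static State and(List<State> probeStates) {
        boolean sawUndefined = false;
        for (State s : probeStates) {
            if (s == State.FALSE) return State.FALSE;
            if (s == State.UNDEFINED) sawUndefined = true;
        }
        return sawUndefined ? State.UNDEFINED : State.TRUE;
    }
}
```

For InMeeting (WifiProbe + AudioFeatureProbe), a FALSE from either probe settles the attribute without waiting for the other, which complements the inference-based short-circuits described earlier.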
Rule Miner
CCCenter's rule miner is the same as ACE's. It has three subcomponents: the rules, the context history table, and a class that performs the rule-mining activity. This class fetches the context history from the context history table and mines rules using the Apriori algorithm [44]. The rule miner maintains the user's time-stamped context history for a predefined expiration period. The Apriori algorithm automatically learns the relationships between the various Boolean context attributes and generates context rules. Each rule requires a minimum support and confidence percentage, which can be tuned to achieve an acceptable level of accuracy. The rule miner periodically offloads all context tuples to a remote server to compute the rules, and existing rules that are no longer valid are deleted through incremental updates on the mobile device.
Speculative Sensing
Speculative sensing is used by ACE to discover additional proxy attributes when the Inference Cache fails to determine the value of a context attribute: ACE attempts to determine the value of an expensive attribute by first sensing a low-cost one. A sensing plan is developed to determine an optimal ordering based on each attribute's energy cost, Ci, and the likelihood Pi that the attribute will return a True value. CCCenter uses ACE's speculative sensing mechanism as-is.
For a small number of attributes, the algorithm for evaluating a Boolean expression with minimum cost works quite well. When the number of attributes is relatively large, however, the cost of finding the best plan may exceed the cost of directly acquiring the requested attribute. In such cases, ACE uses a heuristic algorithm that ranks the attributes based on Ci and Pi.
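One plausible instantiation of such a ranking is to sort attributes by the cost-to-payoff ratio Ci / Pi, so that cheap sensors likely to settle the expression come first. The exact ranking function ACE uses may differ; the sketch below, with class and field names of our choosing, only illustrates the idea.

```java
import java.util.*;

// Hedged sketch of a Ci/Pi ranking heuristic for a sensing plan.
public class SensingPlanner {
    public static class Attr {
        final String name; final double cost; final double pTrue;
        public Attr(String name, double cost, double pTrue) {
            this.name = name; this.cost = cost; this.pTrue = pTrue;
        }
    }

    // Order attributes ascending by cost / probability-of-True.
    public static List<String> plan(List<Attr> attrs) {
        List<Attr> sorted = new ArrayList<>(attrs);
        sorted.sort(Comparator.comparingDouble(a -> a.cost / a.pTrue));
        List<String> order = new ArrayList<>();
        for (Attr a : sorted) order.add(a.name);
        return order;
    }
}
```

A 2995 mJ GPS attribute thus lands at the end of the plan whenever a cheaper correlated attribute with a reasonable Pi is available.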
Inference Cache
CCCenter's Inference Cache is the same as ACE's. We maintain one SQLite table as the inference cache. The component functions like a traditional cache, allowing CCCenter to deduce context attributes from known context attributes without acquiring sensor data. The Inference Cache provides a Get/Put interface: Put(X, v) places a context attribute X with value v in the cache for a predetermined amount of time (five minutes), and Get(X) returns the value of X if it is in the cache or can be inferred from the context rules and the cached values of other context attributes.
ACE represents each Boolean expression by constructing an expression tree: a Boolean AND-OR tree in which each non-leaf node represents an AND or OR operation on the values of its children and each leaf node represents a tuple. According to the context rule, the expression tree must hold for the tuple to evaluate to true; see figure 4 for the expression tree of the tuple Indoor = True. The Inference Cache maintains one expression tree per tuple, and the sensing planner is invoked only when the expression tree cannot determine whether a context attribute's value is true or false.
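A minimal version of such an AND-OR tree over possibly-unknown cached values can be sketched as below. This is our own illustration (leaf/and/or factory methods, null for "undetermined"), not ACE's implementation, and the example rule in the usage is invented.

```java
import java.util.*;

// Minimal AND-OR expression tree: leaves read cached tuple values (possibly
// absent), internal nodes combine them; eval yields TRUE, FALSE, or null
// (undetermined). The planner would only run in the null case.
public class ExprTree {
    public interface Node { Boolean eval(Map<String, Boolean> cache); }

    public static Node leaf(String tuple) {
        return cache -> cache.get(tuple);              // null when not cached
    }
    public static Node and(Node l, Node r) {
        return cache -> {
            Boolean a = l.eval(cache), b = r.eval(cache);
            if (Boolean.FALSE.equals(a) || Boolean.FALSE.equals(b)) return false;
            if (a == null || b == null) return null;
            return true;
        };
    }
    public static Node or(Node l, Node r) {
        return cache -> {
            Boolean a = l.eval(cache), b = r.eval(cache);
            if (Boolean.TRUE.equals(a) || Boolean.TRUE.equals(b)) return true;
            if (a == null || b == null) return null;
            return false;
        };
    }
}
```

Note that a single decisive child (FALSE under AND, TRUE under OR) settles the node even when its sibling is unknown, which is what lets cached values short-circuit sensing.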
4.1.2. Technologies and Approaches used for Implementation
In this section we present the state-of-the-art tools and technologies used to implement the system.
4.1.2.1. Android
Android is an operating system designed for mobile devices; it comprises a software stack that includes middleware and key applications. Developed by Google, it is currently the most widespread mobile operating system [45]. Apps for Android are developed in Java, a programming language and computing platform first released by Sun Microsystems [46]. We used Java to develop the context-sensing middleware and Eclipse as the development environment. Middleware is software that acts as a bridge between an operating system or database and applications, especially on a network [47].
4.1.2.2. Funf Open Sensing Framework
The Funf Open Sensing Framework (Funf, for short) [43] is an extensible and scalable sensing and data-processing framework for Android phones. It is an open-source project used by many scientific research teams in different areas of research. Funf has three main components: the funf manager, the pipeline, and the probes; see figure 11. Probes are designed to collect raw sensor data. Funf v0.4 collects data using about 30 built-in probes and also provides a probe interface, which enables anyone to design their own probe according to their needs. Probes are classes that interact directly with hardware and software sensors and then send the collected data to the pipeline. A built-in pipeline collects raw data from the probes, encrypts it, archives it on the SD card, and uploads it to back-end servers. Funf applies strict privacy-protection measures by encrypting the data and hiding identifying or sensitive information about the phone's users. Below we present the components of funf, their relevance to the implementation of CCCenter, and how we leveraged them.
Funf Manager
This class inherits from android.app.Service, so it can run as an Android service, providing services to other applications and serving as the connection to the rest of the Android OS. The funf manager is responsible for managing and operating all pipelines, probes, and schedules, and for receiving alarms from the AlarmManager. To start the service, we bind to it in onCreate() using bindService(), which starts the FunfManager service; see listing 1. Here the funf manager acts as a service.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    ...
    // Bind to the service, to create the connection with FunfManager
    bindService(new Intent(this, FunfManager.class), funfManagerConn,
                BIND_AUTO_CREATE);
}
Listing 1: Binding service
The connection between the CCCenter main activity and the funf manager is created in bindService().
Pipeline
The current version of funf has only one built-in pipeline, called BasicPipeline. This pipeline is generic and does not fit our requirements, so we implemented our own pipeline, CCCenterPipeline, by implementing the Pipeline interface.
public class CCCenterPipeline implements Pipeline, DataListener {
Once the connection between the funf manager and the main activity is established, the funf manager registers the pipeline to the probes; see appendix C. We can also register the pipeline to a probe at any time, when we want probe data immediately without waiting for the schedule; see appendix D.
Probe
This class implements the probe interface and is responsible for collecting raw sensor data (both hardware and software) using the Android sensor APIs, calling sendData() to send the data to the pipeline. Funf provides a probe for each sensor; the current version has about 30 probes. Because sensor data formats vary, a probe performs some preprocessing on the raw data as it arrives from the sensors. We used these classes as-is, since all of them meet our needs.
Figure 11: Funf high-level structure
4.1.2.3. SQLite Database
SQLite is an open-source relational embedded database. "Embedded" means that the database runs inside the application it serves rather than as a separate process: its code is part of the hosting program. Structured Query Language (SQL) is used to communicate with relational databases; for example, SELECT * FROM mytable; fetches all rows from the table named "mytable". SQL is the most common and widespread database language [48]. SQLite is integrated into Android programming, and the Funf pipeline also uses SQLite for persistent data storage. Because of this convenience, and because we use the Funf pipeline, we also used SQLite in CCCenter to store both low-level and high-level information. To use SQLite, we first create a subclass of SQLiteOpenHelper called DBHelper; see listing 2.
public class DBHelper extends SQLiteOpenHelper {
    public static final int CURRENT_VERSION = 1;
    private SQLiteDatabase database = null;
    public static final String database_name = "CCCENTER.db";
Listing 2: Database Helper Class (DBHelper.java)
This class is responsible for ensuring we have the most up-to-date database structures for all four types of CCCenter tables, namely FEATURE_MAP_TABLE, FEATURE_CONTEXT_TABLE (the feature-context matrix, see figure 8), CORRELATION_TABLE, and INFERENCE_CATCHE_TABLE. Except for CORRELATION_TABLE and INFERENCE_CATCHE_TABLE, these are logical tables: collective names rather than physical tables. Physically, we created tables to store the pre-learned thresholds and labeled data. These data come from different sources and differ in format, so we designed a table for each source (hardware sensor). We integrate data from software sensors with hardware sensor data, so no separate tables are needed for software sensors. See the following listing.
public static final Table WIFI_FEATURE_MAP_TABLE = new Table("WIFI_FEATURE_MAP",
    Arrays.asList(new Column(COLUMN_FEATURE, "TEXT"),   // feature label
        new Column(COLUMN_CONTEXT, "TEXT"),             // context the feature maps to
        new Column(COLUMN_VALUE, "TEXT")));             // JSON-encoded value
public static final Table LOCATION_FEATURE_MAP_TABLE = new Table("LOCATION_FEATURE_MAP",
    Arrays.asList(new Column(COLUMN_FEATURE, "TEXT"),   // feature label
        new Column(COLUMN_CONTEXT, "TEXT"),             // context the feature maps to
        new Column(COLUMN_VALUE, "TEXT")));             // JSON-encoded value
Listing 3: Physical tables
See figures 12 and 13 for the physical representation of the above tables.
Figure 12: WIFI_FEATURE_MAP table
Figure 13: WIFI_FEATURE_CONTEXT table
4.1.2.4. Detecting user’s locations
At bootstrap, CCCenter has no knowledge of the user's geographic locations. CCCenter uses location to infer contexts that relate to where the user is, especially whether he is at home or at the office. For this purpose we used a rule-based approach [49], customized the logic of a fuel-price app [50], and designed a module that automatically detects locations that are geographically relevant to the user (the user's home and work locations). We first assume that the user has a fairly normal work schedule: the user will typically be at home between 2 and 3 AM, and typically at the office between 2 and 3 PM. Accordingly, to detect the home location, CCCenter collects location data (longitude, latitude, accuracy, and cell tower ID) using Funf's simple location probe between 2 and 3 AM for a week and clusters the readings to obtain an approximate home address; it gathers location data between 2 and 3 PM and uses the same method to detect the work location. To determine location by Wi-Fi, CCCenter also collects Wi-Fi signatures in the same manner and clusters them; see figure 14 for the logical design of the algorithm. Finally, CCCenter is aware of the locations that are geographically relevant to the user.
Figure 14: Logical design of the location detection algorithm
For the implementation of the algorithm, we designed the timeFilter(), accumulator(), and createClusters() functions. A config object in JSON format, with the structure shown in listing 4, is input to all three functions; for the time being it contains information about the time slices of interest. This lets our module filter the input data by any time slice.
{
  "uid": "some unique identifier for device/user",
  "location": [longitude, latitude, accuracy, cellTowerID],
  "WiFISignature": [BSSID, SSID, capabilities, frequency, level],
  "time": "time in user's timezone"
}
Listing 4: Config data
A. Time Filter Function
Our time filter is a function that takes location data and Wi-Fi signatures from Funf's location probe and Wi-Fi probe, respectively. It also takes the config as a parameter. It then passes on to the next function only the data collected between 2 and 3 PM and between 2 and 3 AM. See pseudocode 1.
Pseudocode 1: Time filter function
B. Accumulator Function
This function receives the data passed by timeFilter() and stores it in a bucket. Each new datum from the previous function is added to the bucket until the bucket is full; the accumulated data is then sent to the next function, createClusters(), and the bucket is emptied. See pseudocode 2.
Pseudocode 2: Accumulator Function
C. Create Cluster Function
For this function, we chose a simple way to cluster location data rather than the sophisticated 2D clustering methods of coordinate geometry. First the function finds the neighbors of each location datum within the set of locations. If some of the neighbors belong to an existing cluster, the cluster is expanded with the neighbors; if the neighbor set contains more locations than a threshold, the neighbors are added as a new cluster. The other type of data is the Wi-Fi signature; for such data, the function simply clusters the most frequent Wi-Fi signatures. Since the two data types have different attributes, we have two versions of the clustering function (see appendix A), which we finally pull together as shown in pseudocode 3.
Pseudocode 3: Create Cluster Function
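The neighbor-based clustering of 2D location points described above can be sketched as follows. The radius and minimum-size thresholds in the usage are illustrative, not the values used in the thesis, and the class name is ours.

```java
import java.util.*;

// Simplified createClusters() sketch for 2D points (longitude/latitude):
// gather each point's neighbors within a radius; a neighborhood that is
// dense enough (>= minPts points) becomes a cluster.
public class LocationClusterer {
    public static List<List<double[]>> createClusters(List<double[]> points,
                                                      double radius, int minPts) {
        List<List<double[]>> clusters = new ArrayList<>();
        boolean[] assigned = new boolean[points.size()];
        for (int i = 0; i < points.size(); i++) {
            if (assigned[i]) continue;
            List<double[]> neighbors = new ArrayList<>();
            List<Integer> idx = new ArrayList<>();
            for (int j = 0; j < points.size(); j++) {
                if (!assigned[j] && dist(points.get(i), points.get(j)) <= radius) {
                    neighbors.add(points.get(j));
                    idx.add(j);
                }
            }
            if (neighbors.size() >= minPts) {       // dense enough: new cluster
                for (int j : idx) assigned[j] = true;
                clusters.add(neighbors);
            }
        }
        return clusters;
    }

    private static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```

Over a week of 2-3 AM samples, the single dense cluster that survives the minimum-size threshold approximates the home address, while isolated readings are discarded as noise.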
4.1.2.5. Accelerometer data
To infer the states of IsWalking, IsDriving, IsJogging, and IsSitting, the contexter collects accelerometer data over a 10-second window. After the data is collected, the matrix detects the data's feature using the feature map; the feature map for the accelerometer operates on a decision tree learned from training data [34].
4.1.2.6. Microphone Data
The contexter uses two types of microphone data: non-surrounding sound and surrounding (background) sound. A non-surrounding audio sample is collected over a 10-second window, and its average signal strength is compared with pre-learned thresholds [51] from the feature map to infer whether the user IsAlone. The states of InMeeting and IsWorking are inferred from the surrounding audio sample (the acoustic background signature) and the Wi-Fi signature, both collected over a 10-second window [52].
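The IsAlone check reduces to averaging the amplitude of a window and comparing it to a threshold. The sketch below illustrates this; the threshold value in the usage is arbitrary (in CCCenter it comes from the feature map), and the class and method names are ours.

```java
// Sketch of the IsAlone feature: mean absolute amplitude of an audio window
// compared against a pre-learned threshold from the feature map.
public class AudioFeature {
    public static double averageAmplitude(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += Math.abs(s);
        return sum / samples.length;
    }

    // Quiet surroundings (below the threshold) suggest the user is alone.
    public static boolean isAlone(double[] samples, double threshold) {
        return averageAmplitude(samples) < threshold;
    }
}
```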
4.2. Evaluation
4.2.1. Energy Efficiency
We evaluated CCCenter on a ZTE Grand S running Android 4.1.2 and a Lenovo A678t running Android 4.2.2, using two contexts to evaluate energy efficiency and accuracy. We developed two versions of a continuous context-aware app. The first version uses CCCenter, obtaining the values of AtOffice and AtHome simply by requesting them through the API. The second version infers the values of AtHome and AtOffice by itself, without CCCenter. We ran both apps on both phones for 11 hours. Version 1 (with CCCenter) consumed 3.9% of the Lenovo phone's battery and 4.3% of the ZTE phone's battery; version 2 (without CCCenter) consumed 19.8% of the Lenovo's battery and 21.2% of the ZTE's. The approximate energy saving is therefore 5.08x on the Lenovo phone and 4.93x on the ZTE phone, about 5.0x on average.
Figure 17: Battery consumption of the context-aware app running on CCCenter (app version 1)
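The saving factors reported above follow from dividing the battery drain without CCCenter by the drain with it; the tiny sketch below reproduces that arithmetic (the class name is ours).

```java
// Reproduces the energy-saving arithmetic: saving factor = drain without
// CCCenter / drain with CCCenter, both as battery percentages.
public class EnergySaving {
    public static double factor(double withoutPct, double withPct) {
        return withoutPct / withPct;
    }
}
```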
4.2.2. Accuracy
We made both versions of the app save context logs, and we cross-checked them against ground truth. In our experiment, app version 1 (running on CCCenter) shows 96.06% accuracy and app version 2 (running without CCCenter) shows 96.97%, as shown in table 3 and figure 18.
Time | Ground truth: expected context (every 5 minutes) | App v1 inference result | Accuracy | App v2 inference result | Accuracy
2 PM - 3 PM | Office (12x) | Office (12x) | 100% | Office (12x) | 100%
3 PM - 4 PM | Office (12x) | Office (12x) | 100% | Office (12x) | 100%
4 PM - 5 PM | Office (12x) | Office (12x) | 100% | Office (12x) | 100%
5 PM - 6 PM | Office (4x) + on the way to restaurant and office (3x) + Restaurant (5x) | Office (5x) | 75% | Office (5x) | 75%
7 PM - 8 PM | Office (12x) | Office (11x) | 91.67% | Office (11x) | 91.67%
9 PM - 10 PM | Office (12x) | Office (12x) | 100% | Office (12x) | 100%
11 PM - 12 AM | On the way home (2x) + Home (10x) | Home (9x) | 90% | Home (10x) | 100%
1 AM - 2 AM | Home (12x) | Home (12x) | 100% | Home (12x) | 100%
3 AM - 4 AM | Home (12x) | Home (12x) | 100% | Home (12x) | 100%
5 AM - 6 AM | Home (12x) | Home (12x) | 100% | Home (12x) | 100%
6 AM - 7 AM | Home (12x) | Home (12x) | 100% | Home (12x) | 100%
Overall | | | 96.06% | | 96.97%
Table 3: Accuracy of app versions 1 and 2 compared to ground truth. App version 1 shows 3.94% inaccuracy and app version 2 shows 3.03% inaccuracy.
Wi-Fi-based location determination is less accurate than GPS, which causes the occasional inaccuracy shown above. When we left the office for the restaurant, the app kept inferring our context as "office" just after we had left, because the office Wi-Fi signal remains available some meters away from the office; this accounts for the 25% inaccuracy in that slot. The same thing happened when we returned to the office: whenever we change our current context, the previous context is sometimes inferred for a short while.
Figure 18: Accuracy of App version 1 and version 2 when compared to ground truth
5 Conclusion and Future Works
Low-level inference can further improve energy efficiency and accuracy, even though we faced challenges with the data structures of the feature-context matrix and the feature map.
Our future work includes extending CCCenter into an open context-sensing framework, since no such framework exists, and building special data structures for the feature map and the feature-context matrix. The feature-context matrix could also be leveraged for sensor data privacy.
References
[1] Statista. (2013). Statistics and facts on Mobile Internet Usage. Available: http://www.statista.com/topics/779/mobile-internet/
[2] C. Shin and A. K. Dey, “Automatically detecting problematic use of smartphones,” presented at the Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, Zurich, Switzerland, 2013.
[3] B. Schilit, N. Adams, and R. Want, “Context-Aware Computing Applications,” in Mobile Computing Systems and Applications, 1994. WMCSA 1994. First Workshop on, 1994, pp. 85-90.
[4] S. Kang, J. Lee, H. Jang, H. Lee, Y. Lee, S. Park, et al., “SeeMon: scalable and energy-efficient context monitoring framework for sensor-rich mobile environments,” presented at the Proceedings of the 6th international conference on Mobile systems, applications, and services, Breckenridge, CO, USA, 2008.
[5] S. Nath, “ACE: exploiting correlation for energy-efficient and continuous context sensing,” presented at the Proceedings of the 10th international conference on Mobile systems, applications, and services, Low Wood Bay, Lake District, UK, 2012.
[6] M. Weiser, “The computer for the 21st century,” Scientific American, pp. 78-89, 1991.
[7] Juniper Research. (2014). Context Aware to Transform Search & Discovery of Apps. Available: http://www.juniperresearch.com/press/press-releases/mobile-context-aware-technology-to-revolutionise-a
[8] H. Lu, J. Yang, Z. Liu, N. D. Lane, T. Choudhury, and A. T. Campbell, “The Jigsaw continuous sensing engine for mobile phone applications,” presented at the Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, Zürich, Switzerland, 2010.
[9] K. Lin, A. Kansal, D. Lymberopoulos, and F. Zhao, “Energy-accuracy trade-off for continuous mobile device location,” presented at the Proceedings of the 8th international conference on Mobile systems, applications, and services, San Francisco, California, USA, 2010.
[10] R. Herrmann, P. Zappi, and T. S. Rosing, “Context aware power management of mobile systems for sensing applications,” in Proceedings of the ACM/IEEE Conference on Information Processing in Sensor Networks, 2012.
[11] Wikipedia. (2015). Context-aware pervasive systems. Available: http://en.wikipedia.org/wiki/Context-aware_pervasive_systems
[12] B. N. Schilit and M. M. Theimer, “Disseminating active map information to mobile hosts,” Network, IEEE, vol. 8, pp. 22-32, 1994.
[13] Merriam-Webster. Context. Available: http://www.merriam-webster.com/dictionary/context
[14] K. V. Laerhoven and K. Aidoo, “Teaching Context to Applications,” Personal Ubiquitous Comput., vol. 5, pp. 46-49, 2001.
[15] B. Brumitt and S. Shafer, “Better Living Through Geometry,” Personal Ubiquitous Comput., vol. 5, pp. 42-45, 2001.
[16] A. Schmidt, M. Beigl, and H.-W. Gellersen, “There is more to context than location,” Computers & Graphics, vol. 23, pp. 893-901, 1999.
[17] G. D. Abowd and E. D. Mynatt, “Charting past, present, and future research in ubiquitous computing,” ACM Trans. Comput.-Hum. Interact., vol. 7, pp. 29-58, 2000.
[18] K. Henricksen, J. Indulska, and A. Rakotonirainy, “Modeling Context Information in Pervasive Computing Systems,” presented at the Proceedings of the First International Conference on Pervasive Computing, 2002.
[19] A. K. Dey, “Understanding and Using Context,” Personal Ubiquitous Comput., vol. 5, pp. 4-7, 2001.
[20] A. K. Dey, “Context-aware computing: The CyberDesk project,” in Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, 1998, pp. 51-54.
[21] A. K. Dey, “Providing architectural support for building context-aware applications,” Georgia Institute of Technology, 2000.
[22] Wikipedia. Location-based service. Available: http://en.wikipedia.org/wiki/Location-based_service
[23] J. Pascoe, N. Ryan, and D. Morse, “Using while moving: HCI issues in fieldwork environments,” ACM Trans. Comput.-Hum. Interact., vol. 7, pp. 417-437, 2000.
[24] K. Cheverst, N. Davies, K. Mitchell, and C. Efstratiou, “Using Context as a Crystal Ball: Rewards and Pitfalls,” Personal Ubiquitous Comput., vol. 5, pp. 8-11, 2001.
[25] J. Pascoe, N. Ryan, and D. Morse, “Issues in developing context-aware computing,” in Handheld and ubiquitous computing, 1999, pp. 208-221.
[26] A. Schmidt, “Implicit human computer interaction through context,” Personal technologies, vol. 4, pp. 191-199, 2000.
[27] M. Addlesee, R. Curwen, S. Hodges, J. Newman, P. Steggles, A. Ward, et al., “Implementing a sentient computing system,” Computer, vol. 34, pp. 50-56, 2001.
[28] (2013). How Google Tracks Traffic. Available: https://www.ncta.com/platform/broadband-internet/how-google-tracks-traffic/
[29] Google. (2012). Google Now. Available: https://www.google.com/landing/now/
[30] Google Inc. Google Maps. Available: https://maps.google.com/
[31] Wikipedia. Google Search. Available: http://en.wikipedia.org/wiki/Google_Search
[32] Wikipedia. Knowledge Graph. Available: http://en.wikipedia.org/wiki/Knowledge_Graph
[33] Y. Wang, J. Lin, M. Annavaram, Q. A. Jacobson, J. Hong, B. Krishnamachari, et al., “A framework of energy efficient mobile sensing for automatic user state recognition,” presented at the Proceedings of the 7th international conference on Mobile systems, applications, and services, Kraków, Poland, 2009.
[34] J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition using cell phone accelerometers,” SIGKDD Explor. Newsl., vol. 12, pp. 74-82, 2011.
[35] E. Miluzzo, N. D. Lane, K. Fodor, R. Peterson, et al., “Sensing meets mobile social networks: the design, implementation and evaluation of the CenceMe application,” presented at the Proceedings of the 6th ACM conference on Embedded network sensor systems, Raleigh, NC, USA, 2008.
[36] A. Parate, D. Ganesan, and B. M. Marlin, “CQue: A Graphical Model-based Context Querying Engine for Mobile Computing.”
[37] M. Azizyan, I. Constandache, and R. Roy Choudhury, “SurroundSense: mobile phone localization via ambience fingerprinting,” in Proceedings of the 15th annual international conference on Mobile computing and networking, 2009, pp. 261-272.
[38] D. Choujaa and N. Dulay, “Tracme: Temporal activity recognition using mobile phone data,” in Embedded and Ubiquitous Computing, 2008. EUC’08. IEEE/IFIP International Conference on, 2008, pp. 119-126.
[39] M. Wirz, D. Roggen, and G. Troster, “Decentralized detection of group formations from wearable acceleration sensors,” in Computational Science and Engineering, 2009. CSE’09. International Conference on, 2009, pp. 952-959.
[40] N. Banerjee, S. Agarwal, P. Bahl, R. Chandra, A. Wolman, and M. Corner, “Virtual compass: relative positioning to sense mobile social interactions,” in Pervasive computing, ed: Springer, 2010, pp. 1-21.
[41] H. Lu, W. Pan, N. D. Lane, T. Choudhury, and A. T. Campbell, “SoundSense: scalable sound sensing for people-centric applications on mobile phones,” in Proceedings of the 7th international conference on Mobile systems, applications, and services, 2009, pp. 165-178.
[42] H. Höpfner and K.-U. Sattler, “Cache-supported Processing of Queries in Mobile DBS,” Database Mechanisms for Mobile Applications, vol. 43, pp. 106-121, 2003.
[43] N. Aharony, A. Gardner, and C. Sumter, “Funf Open Sensing Framework.”
[44] R. Agrawal, T. Imieliński, and A. Swami, “Mining association rules between sets of items in large databases,” presented at the Proceedings of the 1993 ACM SIGMOD international conference on Management of data, Washington, D.C., USA, 1993.
[45] Google Inc. Android Developers. Available: http://developer.android.com/index.html
[46] Oracle Technology Network. Java. Available: https://www.java.com/en/
[47] Oxford Dictionaries. (2015). Middleware. Available: http://www.oxforddictionaries.com/us/definition/learner/middleware
[48] M. Owens and G. Allen, The Definitive Guide to SQLite, 2nd ed.: Apress, 2010.
[49] P. Theekakul, S. Thiemjarus, E. Nantajeewarawat, T. Supnithi, and K. Hirota, “A rule-based approach to activity recognition,” presented at the Proceedings of the 5th international conference on Knowledge, information, and creativity support systems, Chiang Mai, Thailand, 2011.
[50] R. Devaskar, “Context Aware Applications and Complex Event Processing Architecture,” Toptal.
[51] H. Lu, W. Pan, N. D. Lane, T. Choudhury, and A. T. Campbell, “SoundSense: scalable sound sensing for people-centric applications on mobile phones,” presented at the Proceedings of the 7th international conference on Mobile systems, applications, and services, Kraków, Poland, 2009.
[52] S. P. Tarzia, P. A. Dinda, R. P. Dick, and G. Memik, “Indoor localization without infrastructure using the acoustic background spectrum,” presented at the Proceedings of the 9th international conference on Mobile systems, applications, and services, Bethesda, Maryland, USA, 2011.
APPENDIX A: CREATE CLUSTER FUNCTION PSEUDOCODE
APPENDIX B: DISTANCE FUNCTION
APPENDIX C: REGISTERING THE PIPELINE TO A PROBE THROUGH THE FUNF MANAGER
private ServiceConnection funfManagerConn = new ServiceConnection() {
    @Override
    public void onServiceConnected(ComponentName name, IBinder service) {
        // Obtain the FunfManager from the bound service.
        funfManager = ((FunfManager.LocalBinder) service).getManager();
        Gson gson = funfManager.getGson();
        // Instantiate the Wi-Fi probe from an empty JSON configuration.
        wifiProbe = gson.fromJson(new JsonObject(), WifiProbe.class);
        // Look up our registered pipeline, listen for probe data, and enable it.
        pipeline = (CCCenterPipeline) funfManager
                .getRegisteredPipeline(PIPELINE_NAME);
        wifiProbe.registerListener(MainActivity.this);
        funfManager.enablePipeline(PIPELINE_NAME);
    }

    @Override
    public void onServiceDisconnected(ComponentName name) {
        funfManager = null;
    }
};
APPENDIX D: REGISTERING TO A PROBE ON DEMAND, TO GET DATA WITHOUT WAITING FOR THE SCHEDULE
@Override
protected void onStart() {
    super.onStart();
    scanBtn.setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View view) {
            if (pipeline != null) {
                // Register immediately instead of waiting for the pipeline schedule.
                wifiProbe.registerListener(MainActivity.this);
                contentTable = addRowToTable(contentTable,
                        vertical_virgule, vertical_virgule);
            }
            if (timer == null) {
                // Lazily create the timer, then poll for probe data every second.
                timer = new Timer();
                timer.schedule(task, 0, 1000);
            }
        }
    });
}
APPENDIX E: APRIORI ALGORITHM FOR ASSOCIATION RULE MINING
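The Apriori algorithm [44] finds itemsets whose support meets a minimum threshold by growing candidates level by level: frequent k-itemsets are joined into (k+1)-candidates, counted against the transactions, and pruned. As a hedged illustration only (not the thesis's own pseudocode), a minimal Java sketch of the frequent-itemset step, assuming transactions are sets of integer item IDs and support is an absolute count, might look like:

```java
import java.util.*;

// Illustrative sketch of Apriori frequent-itemset mining (not the original appendix code).
public class Apriori {

    public static List<Set<Integer>> frequentItemsets(List<Set<Integer>> transactions,
                                                      int minSupport) {
        List<Set<Integer>> result = new ArrayList<>();
        // Level 1: count individual items.
        Map<Set<Integer>, Integer> counts = new HashMap<>();
        for (Set<Integer> t : transactions)
            for (int item : t)
                counts.merge(Collections.singleton(item), 1, Integer::sum);
        List<Set<Integer>> current = prune(counts, minSupport);
        while (!current.isEmpty()) {
            result.addAll(current);
            // Generate (k+1)-candidates, count them, and prune again.
            counts.clear();
            for (Set<Integer> c : join(current))
                for (Set<Integer> t : transactions)
                    if (t.containsAll(c)) counts.merge(c, 1, Integer::sum);
            current = prune(counts, minSupport);
        }
        return result;
    }

    // Keep only itemsets whose support count meets the threshold.
    private static List<Set<Integer>> prune(Map<Set<Integer>, Integer> counts,
                                            int minSupport) {
        List<Set<Integer>> kept = new ArrayList<>();
        for (Map.Entry<Set<Integer>, Integer> e : counts.entrySet())
            if (e.getValue() >= minSupport) kept.add(e.getKey());
        return kept;
    }

    // Join pairs of frequent k-itemsets into candidate (k+1)-itemsets.
    private static List<Set<Integer>> join(List<Set<Integer>> level) {
        Set<Set<Integer>> candidates = new LinkedHashSet<>();
        int k = level.get(0).size();
        for (int i = 0; i < level.size(); i++)
            for (int j = i + 1; j < level.size(); j++) {
                Set<Integer> union = new TreeSet<>(level.get(i));
                union.addAll(level.get(j));
                if (union.size() == k + 1) candidates.add(union);
            }
        return new ArrayList<>(candidates);
    }
}
```

For example, over the transactions {1,2,3}, {1,2}, {2,3}, {1,2,3} with a minimum support of 3, the sketch reports {1}, {2}, {3}, {1,2}, and {2,3} as frequent, while {1,3} (support 2) is pruned. Association rules are then derived from these frequent itemsets by checking confidence, as in [44].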
Acknowledgement
First and foremost, I offer my thanks to God, who made this work possible.
I am very grateful to Central South University and the Chinese Scholarship Council for their sponsorship throughout my study, and to the People’s Republic of China in general.
I express my deepest gratitude to my supervisor Prof. Lu Mingming, for his advice, concern, dedication, encouragement, and formal guidance.
My special thanks also go to Mr. TianBo for his brotherly support.
I express my sincere appreciation to all of my classmates and lab mates for their encouragement and concern, especially Solomon Teferi and Hassan Al Kulani for their brotherly advice.
Finally, my heartfelt appreciation and indebtedness go to my family for their love, support, patience, and encouragement throughout my study.