Navigating tradeoffs in context sharing among the Internet of Things






This dissertation introduces new perspectives on the sharing of context (situational information) among Internet of Things (IoT) devices with differing processing power, storage capacity, communication bandwidth, and energy supply. Emerging IoT applications require devices to share information about their context with one another, often over device-to-device wireless links. However, because each IoT device has different capabilities, it may also have different priorities with respect to sharing its context with other nearby devices: low-end IoT devices with limited communication bandwidth and energy supply may prioritize a small context size (and therefore a reduced burden associated with sharing context information), while high-end IoT devices may prioritize communicating context without loss in data quality. Different IoT applications can also shape these priorities: real-time applications may prioritize fast data processing, whereas big data server applications may prioritize reduced context sizes because of the massive storage they require. Prioritizing entails tradeoffs. For example, reducing context size through compression consumes more energy, and using lossy compression for even smaller output can degrade data quality. In this dissertation, we explore these tradeoffs in sharing context among IoT devices. Specifically, we present our solutions in three stages: theory, implementation, and execution models. In the theory stage, we present our context sharing model using four strategies: we start with strategies that prioritize a single factor, either data quality or size; we then introduce a novel tunable strategy in which users can control the tradeoff factors to meet their application's requirements. We build a mathematical model, and we analyze and experiment with it to assess performance relative to tradeoff factors including size, data quality, and energy consumption.
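To make the tunable idea concrete, the sketch below shows one way a single knob could select among sharing strategies that trade size against data quality. The function name, the knob `alpha`, its thresholds, and the byte-subsampling stand-in for lossy reduction are all illustrative assumptions, not the dissertation's actual model.

```python
import zlib

def share_context(context: bytes, alpha: float) -> bytes:
    """Pick a sharing strategy from a tunable tradeoff knob.

    alpha in [0, 1]: values near 0 prioritize data quality (send raw),
    values near 1 prioritize small size (lossy reduction). The
    thresholds and strategies are illustrative only.
    """
    if alpha < 0.33:
        # Full quality, largest size, least processing energy.
        return context
    elif alpha < 0.66:
        # Lossless compression: smaller payload, quality preserved,
        # at the cost of extra compute (and thus energy).
        return zlib.compress(context, 9)
    else:
        # Stand-in for lossy summarization: drop every other byte,
        # then compress. Smallest output, degraded data quality.
        return zlib.compress(context[::2], 9)
```

A low-end sensor might run with a high `alpha` to save bandwidth and storage on the receiver, while a high-end device serving a quality-sensitive application would run with `alpha` near 0.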
An aggregation strategy, which shows excellent performance in size reduction and energy consumption, serves as our fourth strategy. In the implementation stage, we introduce a programming model for IoT devices. We stress three principles: easy access to core functions, simple extension to meet application demands, and portability across multiple platforms. We demonstrate how these considerations drive the development of the programming model by providing programming tools that realize it; developers can use these tools to build context sharing activities into their applications. Ultimately, users' applications will be deployed on a variety of IoT devices. In the third stage of this research, execution models, we categorize IoT devices into three models (tiny devices, mobile devices, and server/cloudlet devices) depending on how the programming tools are employed. We present how context sharing IoT applications can be developed, deployed, and executed within each of these execution models. We expect that IoT developers will benefit, in creating new context sharing applications, not only from the tools we present but also from the ideas behind them.
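The flavor of such a programming tool can be sketched as a small class that an application developer configures with a strategy and a set of nearby peers. The class name, method names, and message shape below are hypothetical; the dissertation's actual tools may expose a different interface.

```python
class ContextShare:
    """Hypothetical sketch of a context-sharing programming tool.

    A developer picks a strategy (e.g. raw, lossless, lossy, or
    aggregate), registers nearby devices, and calls share() to
    package context for each of them.
    """

    def __init__(self, strategy: str = "lossless"):
        self.strategy = strategy
        self._peers: list[str] = []

    def register_peer(self, peer_id: str) -> None:
        # Track a nearby device to share context with.
        self._peers.append(peer_id)

    def share(self, context: dict) -> dict:
        # Package the context under the chosen strategy and return
        # the message each registered peer would receive.
        payload = {"strategy": self.strategy, "data": context}
        return {peer: payload for peer in self._peers}
```

A tiny device, a mobile device, and a server/cloudlet device could each instantiate such a tool with the strategy that matches its own capabilities, which is the portability principle stated above.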