Browsing by Subject "Cloud computing"
Now showing 1 - 6 of 6
Item: Classification of encrypted cloud computing service traffic using data mining techniques (2011-12)
Qian, Cheng; Ghosh, Joydeep

In addition to wireless network providers' need for traffic classification, the need is increasingly common in the cloud computing environment. A data center hosting cloud computing services needs to apply priority policies and Service Level Agreement (SLA) rules at the edge of its network. Growing requirements for user privacy protection and the trend of IPv6 adoption will contribute to significant growth in encrypted cloud computing traffic. This report presents experiments applying data-mining-based Internet traffic classification methods to encrypted cloud computing service traffic. By combining TCP session-level attributes, client and host connection patterns, and cloud computing service Message Exchange Patterns (MEP), the best method identified in this report yields 89% overall accuracy.

Item: Cloud computing: security risk analysis and recommendations (2011-12)
Sachdeva, Kapil; Bagchi, Uttarayan; Walls, Stephen

Cloud computing is here to stay and is the natural progression in the evolution of our computing and collaboration needs. The easy availability of computing infrastructure is motivating a new breed of entrepreneurs to realize their ideas and deliver innovations to the masses. These innovations, however, have some serious security weaknesses. If not taken into account, these weaknesses could prove fatal for an organization's reputation and existence.
This thesis explains the potential risks associated with various types of cloud computing technologies and recommends methods to mitigate them.

Item: Google App Engine case study: a micro-blogging site (2009-12)
Kajita, Marcos Suguru; Aziz, Adnan; Khurshid, Sarfraz

Cloud computing refers to the combination of large-scale hardware resources at datacenters, integrated by system software that provides services, commonly known as Software-as-a-Service (SaaS), over the Internet. As a result of more affordable datacenters, cloud computing is slowly making its way into the mainstream business arena and has the potential to revolutionize the IT industry. As more cloud computing solutions become available, a shift is expected toward what is sometimes referred to as the Web Operating System. The Web Operating System, along with the sense of infinite computing resources in the "cloud", has the potential to bring new challenges in software engineering. The motivation of this report, which is divided into two parts, is to understand these challenges. The first part gives a brief introduction to and analysis of cloud computing. The second part focuses on Google's cloud computing platform and evaluates the implementation of a micro-blogging site using Google App Engine.

Item: A hydrologic information system for water availability modeling (2011-08)
Siler, Clark D., 1978-; Maidment, David R.; Civil, Architectural, and Environmental Engineering; McKinney, Daene; Gilbert, Robert; Hodges, Ben; Jones, Norm

Texas water availability modeling has undergone a transition from paper-based documents to digital databases and GIS maps. This transition has produced many discrete components: a water rights database, a GIS database, a monthly flow simulation model to quantify water availability, and an environmental flows assessment to quantify how much water should remain in Texas rivers.
This dissertation examines how these components can be connected by a conceptual model and automated as a Hydrologic Information System (HIS) for Texas water availability modeling using custom GIS toolsets and data processing. The HIS is defined using three tools that combine components of the conceptual model and automate the processes of water availability modeling. This dissertation also explores how desktop-based Texas water availability modeling can be informed by web services and how a services-oriented architecture for water availability modeling could be constructed. Existing hydrologic information models are used as a guide in creating an Arc Hydro Web information model as a framework for this activity. This model is demonstrated using scenarios highlighting its capabilities for representing desktop and web-informed analyses, and its functionality is illustrated through a use case of five associated component studies in the San Jacinto Basin. The shift from desktop-based analyses to web-enabled processing allows certain aspects of water availability modeling to be moved to cloud computing. The network aspects of the Texas water availability modeling environment can be informed by web services using a centrally stored network, eliminating the current system of nearly identical duplicate networks. This could foster communication and sharing of water resources models.
It is recommended that Arc Hydro Web be implemented, that aspects of water availability modeling processing become web-enabled through the combination of web processing and web services, and that additional services be developed to meet the needs of web-based water availability modeling.

Item: Mobile computing in a clouded environment (2009-12)
Rosales, Jacob Jason; Julien, Christine; Bard, William

Cloud computing has started to become a viable option for computing centers and mobile consumers seeking to reduce cost overhead and power consumption and to increase the software services available within their platforms. For instance, distributed, memory-constrained mobile devices can expand their ability to share real-time data by utilizing virtual memory located within the cloud. Cloud memory services can be configured to restrict read and write access to the shared memory pool on a partner-by-partner basis. Utilization of such resources in turn reduces hardware requirements on mobile devices while lessening power consumption for each physical resource. Within the cloud computing paradigm, computing resources are provisioned to consumers on demand and guaranteed through service level agreements. Although the idea of a computing utility is not new, its realization has come to pass as researchers and companies embark on implementing highly scalable cloud environments. As new solutions and architectures are proposed, additional use cases and consumer concerns have been revealed. These issues range from consumer security, adequate service level agreements, and vendor interoperability to cloud technology standardization. Further, the current state of the art does not adequately address these needs for mobile consumers, whose services need to be guaranteed even as they dynamically change locations.
Due to the rapid adoption of virtualization stacks and the dramatic increase in mobile computing devices, cloud providers must be able to handle logical and physical mobility of consumers. As consumers move through geographical regions, a consumer's new locale may hinder a producer's ability to uphold service level agreements, because the producer may not have physical resources located near the consumer's new locale. As a consequence, producers must either continue to provide degraded resource consumption or migrate workloads to third-party producers in order to ensure service level agreements are maintained. The goal of this report is to survey existing architectures that provide the ability to uphold service level agreements as mobile consumers move from locale to locale. Further, we propose an architecture that can be implemented alongside existing solutions to ensure consumers receive adequate service levels regardless of locality. We believe this architecture will lead to increased cloud interoperability and decreased consumer-to-producer platform coupling.

Item: Protecting sensitive information from untrusted code (2010-08)
Roy, Indrajit; Witchel, Emmett; Dahlin, Michael D.; Mazières, David; McKinley, Kathryn S.; Shmatikov, Vitaly

As computer systems support more aspects of modern life, from finance to health care, security is becoming increasingly important. However, building secure systems remains a challenge. Software continues to have security vulnerabilities, for reasons ranging from programmer errors to inadequate programming tools. Because of these vulnerabilities, we need mechanisms that protect sensitive data even when the software is untrusted. This dissertation shows that secure and practical frameworks can be built for protecting users' data from untrusted applications in both desktop and cloud computing environments.
Laminar is a new framework that secures desktop applications by enforcing policies written as information flow rules. Information flow control, a form of mandatory access control, enables programmers to express powerful, end-to-end security guarantees while reducing the amount of trusted code. Current programming abstractions and implementations of this model either compromise end-to-end security guarantees or require substantial modifications to applications, thus deterring adoption. Laminar addresses these shortcomings by exporting a single set of abstractions to control information flows through operating system resources and heap-allocated objects. Programmers express security policies by labeling data and represent access restrictions on code using a new abstraction called a security region. The Laminar programming model eases incremental deployment, limits dynamic security checks, and supports multithreaded programs that can access heterogeneously labeled data.

In large-scale, distributed computations, safeguarding information requires solutions beyond mandatory access control. An important challenge is to ensure that the computation, including its output, does not leak sensitive information about the inputs. For untrusted code, access control cannot guarantee that the output does not leak information. This dissertation proposes Airavat, a MapReduce-based system which augments mandatory access control with differential privacy to guarantee security and privacy for distributed computations. Data providers control the security policy for their sensitive data, including a mathematical bound on potential privacy violations. Users without security expertise can perform computations on the data; Airavat prevents information leakage beyond the data provider's policy. Our prototype implementation of Airavat demonstrates that several data mining tasks can be performed in a privacy-preserving fashion with modest performance overheads.
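Airavat's "mathematical bound on potential privacy violations" refers to the epsilon parameter of differential privacy. Airavat's own mechanism is defined in the dissertation; as a generic, minimal sketch of the underlying idea (not Airavat's API — the function names and counting-query example here are illustrative), a counting query can be made epsilon-differentially private by adding Laplace noise scaled to the query's sensitivity:

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials with mean `scale`
    # is Laplace(0, scale)-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means a stronger privacy guarantee but noisier answers; the data provider's "bound" is a budget on the total epsilon consumed by all queries over their data.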