Browsing by Subject "Software maintenance"
Now showing 1 - 3 of 3
Item
A comparison of black-box models for software evolution (Texas Tech University, 2002-08) Fuentetaja, Eduardo

The last thirty years have seen the birth and maturity of software evolution as a specialized field within the vast area covered by software engineering. Software evolution focuses on the study of how software systems change during their lifetime, trying to shed some light on the nature of that change and the relationships between a system's attributes from one release to the next. Early empirical studies of software evolution discovered some patterns common to the evolution of every software system that was analyzed. These invariants were formalized in the "laws of software evolution." The choice of the term "law" is intentional and expresses their universality. However, as this study intends to show, the laws have undergone multiple revisions, and some early conclusions were contradicted by later discoveries as new empirical studies of the evolution of software systems became available. One of the practical derivations of the laws is the inverse square model. The model explains the growth of evolving software systems during their lifetime, and it can be used to accurately forecast a system's size many releases into the future, which allows management to plan and allocate resources well in advance. Although many papers have presented examples of software systems whose growth is successfully modeled by the inverse square equations, some recent studies present proof of just the opposite. The QUICK application, a web-based system developed at Texas Tech, is one example of a system whose growth cannot be explained by the inverse square model. This motivated an analysis of the underlying reasons, which were ultimately found in the intrinsic limitations of the model. In addition, the present study proposes a different model, already suggested in some recent studies of software evolution, as a better alternative.
The model is referred to here as the "constant work rate" model and is based on a simple exponential relation. The present study includes a comparison of the inverse square and constant work rate models, presenting proof of the superiority of the constant work rate model.

Item
Diagnosing and tolerating bugs in deployed systems (2008-12) Bond, Michael David; McKinley, Kathryn S.

Deployed software is never free of bugs. These bugs cause software to fail, wasting billions of dollars and sometimes causing injury or death. Bugs are pervasive in modern software, which is increasingly complex due to demand for features, extensibility, and integration of components. Complete validation and exhaustive testing are infeasible for substantial software systems, so deployed software exhibits untested and unanalyzed behaviors. Software behaves differently after deployment due to different environments and inputs, so developers cannot find and fix all bugs before deploying software, and they cannot easily reproduce post-deployment bugs outside of the deployed setting. This dissertation argues that post-deployment is a compelling environment for diagnosing and tolerating bugs, and it introduces a general approach called post-deployment debugging. Techniques in this class are efficient enough to go unnoticed by users and accurate enough to find and report the sources of errors to developers. We demonstrate that they help developers find and fix bugs and help users get more functionality out of failing software. To diagnose post-deployment failures, programmers need to understand the program operations--control and data flow--responsible for failures. Prior approaches for widespread tracking of control and data flow often slow programs by a factor of two or more and increase memory usage significantly, making them impractical for online use.
We present novel techniques for representing control and data flow that add modest overhead while still providing diagnostic information directly useful for fixing bugs. The first technique, probabilistic calling context (PCC), provides low-overhead context sensitivity to dynamic analyses that detect new or anomalous deployed behavior. The second, Bell, statistically correlates control flow with data and reconstructs the program locations associated with that data. We apply Bell to leak detection, where it tracks and reports program locations responsible for real memory leaks. The third technique, origin tracking, tracks the originating program locations of unusable values such as null references by storing origins in place of the unusable values. These origins are cheap to track and are directly useful for diagnosing real-world null pointer exceptions. Post-deployment diagnosis helps developers find and fix bugs, but in the meantime, users need help with failing software. We present techniques that tolerate memory leaks, which are particularly difficult to diagnose since they have no immediate symptoms and may take days or longer to materialize. Our techniques effectively narrow the gap between reachability and liveness by providing the illusion that dead but reachable objects do not consume resources. They identify stale objects that have not been used recently and remove them from the application's and garbage collector's working sets. The first technique, Melt, relocates stale memory to disk, so it can restore objects if the program uses them later. Growing leaks eventually exhaust the disk, however, and some embedded systems have no disk. Our second technique, leak pruning, addresses these limitations by automatically reclaiming likely leaked memory. It preserves semantics by waiting until heap exhaustion to reclaim memory and then intercepting program attempts to access the reclaimed memory.
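The origin-tracking idea described in this abstract can be illustrated with a minimal sketch. The names and the Python setting here are illustrative assumptions only: the dissertation implements origin tracking inside the runtime, not at the application level as shown.

```python
import traceback

class NullOrigin:
    """Hypothetical stand-in for a null value that remembers its origin."""
    def __init__(self):
        # Record the program location that produced this "null":
        # the caller's frame, one level above __init__.
        frame = traceback.extract_stack()[-2]
        self.origin = f"{frame.filename}:{frame.lineno} in {frame.name}"

    def __getattr__(self, name):
        # Any attempt to use the unusable value reports where it came from.
        raise AttributeError(
            f"null value used (attribute {name!r}); originated at {self.origin}")

def lookup(table, key):
    # On a miss, return an origin-carrying sentinel instead of a bare None.
    value = table.get(key)
    return value if value is not None else NullOrigin()

user = lookup({"alice": "admin"}, "bob")
try:
    user.upper()       # fails on use of the "null" value...
except AttributeError as exc:
    print(exc)         # ...and the message points back to the miss in lookup()
```

The key point the sketch mirrors is that the origin is stored *in place of* the unusable value, so no extra side table is needed and the cost is paid only when a null is created.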
We demonstrate the utility and efficiency of these post-deployment debugging techniques on large, real-world programs, where they pinpoint bug causes and improve software availability. Post-deployment debugging efficiently exposes and exploits programming language semantics and opens up a promising direction for improving software robustness.

Item
The traceable lifecycle model (2010-12) Nadon, Robert Gerard; Barber, K. Suzanne; Graser, Thomas

Software systems today face many challenges that were not even imagined decades prior. These challenges include the need to evolve at a very high rate; lifecycle phase drift or erosion; the inability to prevent the butterfly effect, where the slightest change causes unimaginable side effects throughout the system; a lack of discipline in defining metrics and using measurement to drive operations; and the absence of a "silver bullet" or single solution to solve all the problems of every domain, to name just a few. This is not to say that the issues stated above are the only problems. In fact, it would be impossible to list all possible problems--software itself is infinitely flexible, bounded only by the human imagination. These are just a portion of the primary challenges today's software engineer faces. There have been attempts throughout the history of software to resolve each one of these challenges--individually, simultaneously, and in various combinations. One such attempt was to define and encapsulate the various phases within software development, which has come to be called a software lifecycle or lifecycle model. Another area of recent research has led to the hypothesis that many of these challenges can be resolved, or at least mitigated, through proper traceability methods. Virtually none of today's software components are derived completely from scratch. Rather, code reuse and software evolution make up a large portion of the software engineer's duties.
As Vance Hilderman at HighRely puts it, "Research has shown that proper traceability is vital. For high quality and safety-critical engineering development efforts however, traceability is a cornerstone not just for achieving success, but to proving it as well." So if software is not derived from scratch, having the traceability to know about its origins is invaluable. Given today's struggles, what is in store for the future software engineer? This paper attempts to quantify and answer that question (or at least to project a possibility) by proposing a new mindset and a new lifecycle model, or structural change, that may assist in tackling some of the issues referenced above.
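The two growth models compared in the first abstract above can be sketched roughly as follows. The forms assumed here are the inverse square recurrence commonly attributed to Turski, s[i] = s[i-1] + e / s[i-1]**2, and a simple exponential for the "constant work rate" model; the parameter values are illustrative and are not taken from the thesis.

```python
import math

def inverse_square(s0, e, releases):
    """Sizes under the inverse square model: each release adds e / size**2."""
    sizes = [s0]
    for _ in range(releases):
        sizes.append(sizes[-1] + e / sizes[-1] ** 2)
    return sizes

def constant_work_rate(s0, k, releases):
    """Sizes under an exponential (constant work rate) model: s0 * exp(k*i)."""
    return [s0 * math.exp(k * i) for i in range(releases + 1)]

# Inverse square growth decelerates as the system grows; exponential growth
# accelerates. Which curve better fits a system's release history is the
# question the first study asks of the QUICK application.
isq = inverse_square(100.0, 5.0e4, 10)
cwr = constant_work_rate(100.0, 0.1, 10)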