Browsing by Subject "Software"
Now showing 1 - 20 of 22
Item: A model for assessing reusable, remoting sensors in test and measurement systems (2007-05)
Laurent, Shane A.; Megel, Susan; Pyeatt, Larry D.
In a typical system, sensors communicate with a computer via a communication port, such as a serial port; however, with recent advances in programming languages and the availability of many low-cost networking devices, new methods for communicating with sensors are needed. A sensor's communication port provides an interface for connecting the sensor to a computer in order to control the sensor or acquire data. Most sensors use the Recommended Standard 232 (RS232) port to communicate with a computer, yet this port was created as an interface between computers and modems, not sensors. Using the RS232 port to communicate with a sensor requires configuring the computer's and the sensor's settings, cables, and commands. As a result, most sensors are not "plug and play," and additional time is needed to configure and develop the software that communicates with them. Models, modern computers, and networking solutions can alleviate these problems and improve how sensors are incorporated into a system's design. This thesis therefore creates a model for developing reusable remote servers that handle the communication interface to a sensor and provide a network interface through which a computer can control the sensor and acquire its data. The model can be modified to support the various communication interfaces a sensor may use, and it is expressed in the Unified Modeling Language (UML) as a set of reusable diagrams for developing a sensor subsystem. The diagrams are applied to several sensors using different development platforms and languages to create each sensor's communication software, which is deployed on a networked processor connected to the sensor; the result is a sensor subsystem that a client application on a different networked processor can connect to in order to interact with the sensor. The sensor subsystems created from the diagrams include a Heise pressure subsystem, a Vaisala temperature/humidity subsystem, and a Mettler mass-comparator subsystem. The time required to create each subsystem is measured and recorded to determine whether the diagrams can be reused to reduce the time needed to interface and communicate with a sensor through a computer. In addition, tests are conducted to determine whether sensor subsystems offer advantages in creating test and measurement systems; for example, client-application development time is measured to determine whether sensor subsystems reduce the time required to develop a complete test and measurement system, and further tests explore other potential advantages, with results compared against prior research. The tests show that using sensor subsystems to develop test and measurement systems reduced client-application development time by 60%. The sensor subsystems also let multiple client applications share a sensor, allowing it to be reused in the design and deployment of different test and measurement systems. Finally, the results show that using UML modeling tools to create diagrams for developing sensor subsystems reduces development time of a sensor's communication software by 90% or more, improves sensor configuration, and enables reuse of a sensor in other systems.
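The thesis's UML diagrams are not reproduced in this listing, but the pattern the abstract describes, wrapping a serial-attached sensor behind a small network server so client applications elsewhere can use it, can be sketched briefly. The snippet below is a minimal illustration assuming a hypothetical sensor that answers the text query "READ?" over RS232; the port name, baud rate, and command are placeholders, not the author's generated code.

```python
# Minimal sketch of a "sensor subsystem": a serial-attached sensor exposed
# over TCP so that client applications on other hosts can query it.
# Assumes the pyserial package and a hypothetical sensor that replies to "READ?".
import serial              # third-party: pip install pyserial
import socketserver

SENSOR_PORT = "/dev/ttyUSB0"   # placeholder serial device
SENSOR_BAUD = 9600             # placeholder baud rate


class SensorHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each client request line is forwarded to the sensor and the reply returned.
        request = self.rfile.readline().strip()
        if request == b"READ?":
            with serial.Serial(SENSOR_PORT, SENSOR_BAUD, timeout=1) as port:
                port.write(b"READ?\r\n")           # hypothetical sensor command
                reading = port.readline().strip()  # raw reading from the sensor
            self.wfile.write(reading + b"\n")
        else:
            self.wfile.write(b"ERR unknown command\n")


if __name__ == "__main__":
    # Any networked client can now connect to port 5000 and send "READ?".
    with socketserver.TCPServer(("0.0.0.0", 5000), SensorHandler) as server:
        server.serve_forever()
```

A production subsystem would add configuration, error handling, and a richer command set, but the division of labor, serial details on the server and a simple network protocol for clients, is the point of the model.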
Item: Chameleon : rapid deployment of adaptive communication-aware applications (2009-12)
Jun, Taesoo; Julien, Christine
Mobile ad hoc networks (MANETs) create communication links without the aid of any infrastructure, forwarding packets among mobile nodes. The MANET research community has identified several fundamental challenges, the most prominent of which is discovering an optimal route between two nodes. Existing work has proposed a plethora of routing protocols, and because each protocol implements its own philosophy and algorithm for a specific purpose, MANET routing protocols exhibit very different characteristics. Selecting a particular protocol for an application or deployment environment involves evaluating many complex, inter-dependent tradeoffs and can be an overwhelming task for an application designer, yet the decision can have a significant impact on a system's performance, cost, and responsiveness. Emerging distributed applications deployed in MANETs inherently experience highly dynamic situations, which necessitate real-time routing protocol selection in response to varying scenarios. Most relevant research relies on simulation studies or empirical analysis to select a routing protocol, requiring an infeasible amount of time and resources for real-time decision making. In my dissertation work, I designed the Chameleon framework to facilitate real-time routing protocol decisions based on given application and environmental characteristics. My approach develops analytical models for important network-layer performance measures, capturing the various inter-dependent factors that affect routing protocol behavior. I provide an analytical framework that expresses protocol performance metrics in terms of environment-, protocol-, and application-dependent parameters; this effort has resulted in detailed models for two important metrics, end-to-end delay and throughput. I specify detailed models for the parameters embedded in these models with respect to the ability of network deployers, protocol designers, and application developers to reasonably provide the information. Finally, I outline, in a systematic manner, the Chameleon software framework that integrates the analytical models with parameters specified by these three groups of stakeholders.

Item: Designing a consulting services architecture model (2015-05)
Pinkston, Jeffrey Lynn; Barber, K. Suzanne; Graser, Thomas
During my years of experience in the technology industry, it has become obvious that standard processes and methodologies within the engineering discipline are at a mature state; software engineering, however, lags behind. Most software engineering methodologies that I have studied focus on the mission of software development. It is this realization, and the need for structure, that led me to review the existing methodologies used within my company's software services organization, where the definition of what a successful software services methodology entails is rather limited. This report provides a history of the software engineering methodologies I have studied, describes an initial services method that was being developed within my organization, develops a new model that addresses previous shortcomings, and identifies additional components required to further define a strong software services-oriented delivery methodology.

Item: Electrical Demand Analysis Software Tool Suite and Automatic Report Generation for Energy Audits (2015-05-06)
Morelli, Franco Javier
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) defines an energy audit through a multi-tiered scheme distinguished by the depth of analysis. The Level 1, or walkthrough, survey is characterized by low- to no-cost energy efficiency evaluations and a list of improvement measures that warrant further inquiry. Through the Industrial Assessment Center (IAC) at Texas A&M University, the Department of Energy's Advanced Manufacturing Office collaborates with academic entities to further the goal of reducing industrial and manufacturing energy consumption; as a result, the IAC at Texas A&M University performs ASHRAE Level 1 energy audits for manufacturing plants across the Gulf Coast states. The IAC seeks to develop a series of electrical demand analysis and report generation software tools to optimize and enhance the electrical investigation inherent in establishing efficient industrial resource (electricity, water, natural gas) usage. Typically, such analyses are done from utility bill information, quantifying usage and capital charge characteristics as well as usage trends over the billing period. By instead basing the electrical analysis on 15-minute or 30-minute demand data sets, available to industrial and manufacturing clients whose service is augmented with Interval Data Recorder (IDR) meters, the IAC has developed a suite of electrical analysis tools designed to increase analysis fidelity, identify pre-visit Energy Conservation Measures (ECMs), establish unknown variables helpful in diagnosing ECMs, size system designs to optimize electrical usage, provide a simple, user-friendly interface, and increase ECM implementation. While conclusions and results for this work will not be known for some time, preliminary efforts have shown that the tools are effective in interpreting and diagnosing aberrant electrical usage; in one instance, the demand visualization tool diagnosed an issue in which a facility was being charged double its typical demand. Supporting data, along with key IAC visits, will be required to determine whether the tools are effective in increasing IAC implementation rates.
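The IAC tool suite itself is not described in implementable detail in the abstract; as a rough illustration of the kind of 15-minute interval-data summarization it automates, the sketch below uses pandas. The file name and column names are assumptions, and the calculations are generic, not the Center's tools.

```python
# Sketch: summarizing 15-minute interval demand data to spot peak demand
# and off-shift usage. The file name and column names are hypothetical.
import pandas as pd

# Expected columns: "timestamp" (15-minute readings) and "kw" (average demand).
demand = pd.read_csv("interval_demand.csv", parse_dates=["timestamp"],
                     index_col="timestamp")

monthly_peak = demand["kw"].resample("MS").max()   # billing-style monthly peak
weekend = demand[demand.index.dayofweek >= 5]      # Saturday/Sunday readings
baseload = weekend["kw"].quantile(0.10)            # rough off-shift base load

print("Monthly peak demand (kW):")
print(monthly_peak)
print(f"Estimated weekend base load: {baseload:.1f} kW")
```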
Item: An empirical study on software quality : developer perception of quality, metrics, and visualizations (2013-05)
Wilson, Gary Lynn; Kim, Miryung
Software tends to decline in quality over time, causing development and maintenance costs to rise. However, by measuring, tracking, and controlling quality during the lifetime of a software product, its technical debt can be held in check, reducing total cost of ownership. The measurement of quality faces challenges due to disagreement on the meaning of software quality, the inability to directly measure quality factors, and the lack of measurement practice in the software industry. This report addresses these challenges through a literature survey, a metrics derivation process, and a survey of professional software developers. Definitions of software quality from the literature are presented and evaluated against responses from software professionals. A Goal-Question-Metric (GQM) process is used to derive quality-targeted metrics tracing back to a set of seven code-quality subgoals, while the survey of software professionals shows that, despite agreement that metrics and metric visualizations would be useful for improving software quality, these techniques are underutilized in practice.
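The report's seven code-quality subgoals and derived metrics are not listed in the abstract. As a generic, assumption-laden illustration of what a quality-targeted source metric can look like, the sketch below computes two simple indicators (average function length and comment density) for a Python file; it is a stand-in, not the report's metric set.

```python
# Illustrative code-quality metrics for a single Python source file:
# average function length and comment density. Not the report's metrics.
import ast
import tokenize


def quality_metrics(path):
    with open(path, "r", encoding="utf-8") as f:
        source = f.read()

    # Length (in lines) of each top-level or nested function definition.
    tree = ast.parse(source)
    lengths = [node.end_lineno - node.lineno + 1
               for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

    # Count comment tokens to estimate comment density.
    with open(path, "rb") as f:
        comments = sum(1 for tok in tokenize.tokenize(f.readline)
                       if tok.type == tokenize.COMMENT)

    total_lines = source.count("\n") + 1
    return {
        "avg_function_length": sum(lengths) / len(lengths) if lengths else 0.0,
        "comment_density": comments / total_lines,
    }


if __name__ == "__main__":
    print(quality_metrics(__file__))
```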
Item: Extensible Software Architecture for a Distributed Engineering Simulation Facility (2013-03-18)
May, James F
A need has arisen for an easy-to-use, flexible, transparent, and cross-platform communication backbone for configuring and executing distributed simulations and experiments. Open source, open architecture, and custom student-written programs have extended the capabilities of educational research facilities and opened the way for the development of the architecture presented in this thesis. The architecture is known by the recursive acronym hADES: hADES Architecture for Distributed Engineering Simulation. This thesis discusses the design and implementation of the novel hADES software architecture for Ethernet and wireless IEEE 802.11 network-based distributed simulation and experiment facilities. The goal of the architecture is to facilitate rapid integration of new and legacy simulations and laboratory equipment to support undergraduate and graduate research projects as well as educational classroom activities and industrial simulations and experiments.

Item: Improving RNA folding prediction algorithms with enhanced interactive visualization software (2016-08)
Grant, Kevin Marcus; Markey, Mia Kathleen; Gutell, Robin
Software improvements from this project will enable new algorithms for RNA folding prediction to be explored. Issues with capacity, extensibility, multi-tasking, usability, efficiency, accuracy, and testing in the original program have been addressed, and the corresponding software architecture changes are discussed herein. Previously limited to just hundreds of helices, the software can now display and manipulate million-helix RNAs, and actions on large data sets, such as continuous zooming, are now feasible. A new scripting interface adds flexibility and is especially useful for repetitive tasks and software testing. Structural analysis of RNA can be streamlined using the new mechanisms for organizing experiments, running other programs, and displaying results (helices, or arbitrary text and images such as statistics). Finally, usability has been enhanced with more documentation, controls, and settings.

Item: Interactive engagement with an open source community : a study of the relationships between organizations and an open source software community (2013-05)
Sims, Jonathan Paul; Crossland, Craig; Henderson, Andrew Duane
This dissertation theoretically develops and empirically tests a model of interactive firm engagement with an open source software community. An inductive pilot study and subsequent interview analysis suggest that the nature of the relationship between a firm and an open source community varies in the degree to which the firm both "takes from" and "gives to" the community. I propose that a firm will experience direct effects from both giving to and taking from the community, and further propose that the interaction of these two behaviors, which I call interactive engagement, will lead to three firm-level consequences: an increase in the number of new products, higher levels of incremental (as opposed to radical) innovation, and shortened development and debug time. I test these hypotheses using regression analysis of questionnaire responses collected from 250 organizations that work with a popular open source software community.

Item: Mocking embedded hardware for software validation (2016-08)
Kim, Steve Seunghwan; Khurshid, Sarfraz; Bard, William
This report makes the case for unit testing embedded systems software, a practice traditionally found in application software development. While the challenges of developing and executing unit tests on embedded software are acknowledged, multiple solutions are presented. The GNU toolchain and a Texas Instruments microcontroller are used as an example embedded target, and two applications, one introductory and one more realistic, were developed for this target in the C programming language. This report details the procedure required to apply the open-source frameworks Unity and CMock to the two embedded applications. These frameworks, combined with the techniques outlined in the report, accomplished several goals of unit testing: automated validation of the embedded applications, increased code coverage, and protection against regression defects. In addition, it is shown how unit tests led to a more modular software architecture. Potential ideas for extending this research to other tools, environments, and frameworks are also discussed.
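The report itself applies the C frameworks Unity and CMock to a Texas Instruments target, and none of that code is reproduced here. As an analogous illustration of the underlying idea, replacing a hardware access function with a test double so logic can be validated off-target, the sketch below uses Python's unittest.mock; the driver, register read, and conversion are hypothetical.

```python
# Analogous illustration of hardware mocking (the report itself uses the C
# frameworks Unity and CMock). A hypothetical temperature driver is tested
# without real hardware by replacing its register-read function with a mock.
import unittest
from unittest import mock


def read_adc(channel):
    """Stand-in for a memory-mapped ADC read; unavailable off-target."""
    raise RuntimeError("hardware not present")


def temperature_celsius(channel=0):
    # Hypothetical conversion: 10 ADC counts per degree Celsius.
    return read_adc(channel) / 10.0


class TemperatureTest(unittest.TestCase):
    @mock.patch(f"{__name__}.read_adc", return_value=250)
    def test_conversion(self, fake_adc):
        self.assertAlmostEqual(temperature_celsius(), 25.0)
        fake_adc.assert_called_once_with(0)


if __name__ == "__main__":
    unittest.main()
```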
Item: Practical software testing for an FDA-regulated environment (2011-12)
Vadysirisack, Pang Lithisay; Khurshid, Sarfraz; Perry, Dewayne E.
Unlike hardware, software does not degrade over time or with frequent use, which works in software's favor. Also unlike hardware, software can be easily changed; this unique characteristic gives software much of its power, but it is also responsible for possible failures in software applications. When software is used within medical devices, software failures may result in bodily injury or death. As a result, regulations have been imposed on the makers of medical devices to ensure their safety, including the safety of the devices' software. The U.S. Food and Drug Administration requires establishment of systems and control processes to ensure quality devices, and a principal part of that quality assurance effort is testing. This paper explores the unique role of software testing in the design, development, and release of software used for medical devices and applications. It also provides practical, industry-driven guidance on medical device software testing techniques and strategies.

Item: Procain: Protein Profile Comparison with Assisting Information (2009-06-19)
Wang, Yong; Grishin, Nick
Detection of remote sequence homology is essential for the accurate inference of protein structure, function, and evolution. The most sensitive detection methods involve the comparison of evolutionary patterns reflected in multiple sequence alignments (MSAs) of protein families. We present PROCAIN, a new method for MSA comparison based on the combination of 'vertical' MSA context (substitution constraints at individual sequence positions) and 'horizontal' context (patterns of residue content at multiple positions). Based on a simple and tractable profile methodology and primitive measures for the similarity of horizontal MSA patterns, the method achieves homology detection quality comparable to a more complex advanced method employing hidden Markov models and secondary structure prediction. Adding secondary structure information further improves PROCAIN performance beyond the capabilities of current state-of-the-art tools. The potential value of the method for structure/function prediction is illustrated by the detection of subtle homology between evolutionarily distant yet structurally similar protein domains. PROCAIN, relevant databases, and tools can be downloaded from http://prodata.swmed.edu/procain/download. The web server can be accessed at http://prodata.swmed.edu/procain/procain.php.

Item: The Role of leadership in high performance software development teams (2011-12)
Ward, John Mason; Nichols, Steven Parks, 1950-; McCann, Robert Bruce, 1948-
The purpose of this research was to investigate the role of leadership in creating high performance software development teams. Of specific interest were the challenges faced by a project manager without a software engineering background: managing a non-visible process, planning projects with significant uncertainty, and working with teams that do not trust their leadership. Conclusions were drawn from the author's experience as a software development manager facing these problems and from a broad literature review of experts in the software and knowledge-worker management fields. The primary conclusion was that, until the next big breakthrough, gains in software development productivity from technology alone are limited; the only way for a group to distinguish itself as performing at the highest levels is teamwork enabled by good leadership.

Item: Second Level Cluster Dependencies: A Comparison of Modeling Software and Missing Data Techniques (2011-10-21)
Larsen, Ross Allen Andrew
Dependencies at the second level of multilevel models have never been thoroughly examined. For certain designs, first-level subjects are independent over time, but second-level subjects may exhibit nonzero covariances over time. Following a review of relevant literature, the first study investigated which widely used computer programs adequately account for these dependencies in their analyses, through a simulation study in SAS and example analyses in Mplus and LISREL. The simulation varied the numbers of subjects at the first and second levels, the number of data waves, the magnitude of effects at both levels, and the magnitude of the second-level covariance. Results showed that SAS and the MULTILEV component of LISREL analyze such data well, while Mplus does not. The second study investigated the impact of two missing data techniques, multiple imputation (MI) and full information maximum likelihood (FIML), for such designs when data are missing at the first level. They were compared in a SAS simulation study in which data were simulated with all the factors of the first study plus missing data varied in amount and pattern (missing completely at random or missing at random). Results showed that FIML is superior to MI because it produces lower bias and correctly estimates standard errors.
Item: Self-reconfiguration in self-healing systems (2008-12)
An, Jung Hoon; Shin, Michael; Zhuang, Yu
The purpose of this thesis is to suggest an approach to self-reconfiguration, the part of a self-healing mechanism in which a system reconfigures itself around anomalous objects before their anomalies are repaired. The approach assumes that the software architecture of a system is structured into components and connectors between the components. A component is self-reconfigured differently according to the object types within it, such as tasks (concurrent or active objects), connectors between tasks, and passive objects accessed by tasks, while a connector between components is self-reconfigured in response to the different object types constituting the connector. An asynchronous message queue connector between components is used to illustrate self-reconfiguration of a connector, and an elevator system with multiple elevators serves as a case study of self-reconfiguration.

Item: Session 1H | Piloting OpenProject for Digital Projects (2022-05-23)
McIntosh, Marcia
One digitization lab continues its development of project management systems by piloting the open source software OpenProject. Come hear about its many features and how the lab has customized OpenProject to track digital projects. Their test is your gain.

Item: Software fault localization with theory of evidence (2010-12)
Jordan, Adam L.; Hewett, Rattikorn; Shin, Michael; Zhang, Yuanlin
Software development is a worldwide business that affects almost all aspects of our lives. In the software development cycle, debugging is the most time-consuming phase, and within debugging, locating software faults takes the majority of the time. Automating fault localization is therefore a valuable asset to any large-scale software development effort: the larger an application scales, the more complex it becomes and the more difficult it is to manage and locate faults within the software. Automated software fault localization attempts to locate a fault with little or no human intervention; in the past, this has been done by analyzing test cases, execution sequences, logical predicates, memory states, and various other artifacts. This thesis presents a new technique for automated software fault localization based on the theory of evidence for uncertainty reasoning, used to estimate the likelihood of faulty locations. The proposed technique is evaluated and compared to the three best-performing methods presently available on a set of benchmark programs in an empirical study, which compares the methods' ability to reduce the amount of code that must be examined to locate a fault. The results show that the proposed technique performed no worse than these top techniques on all program versions in the benchmark set, with an average effectiveness measure of over 85%.
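The thesis's evidence model is not given in the abstract, but the central operation in the theory of evidence, Dempster's rule for combining two bodies of evidence over candidate fault locations, can be written down directly. The sketch below is a textbook combination over hypothetical mass assignments, not the thesis's localization technique.

```python
# Dempster's rule of combination over a small frame of discernment, here a
# set of candidate faulty statements. The mass assignments are hypothetical.
from itertools import product


def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory evidence
    # Normalize by the non-conflicting mass (assumes conflict < 1).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}


# Evidence source 1: a failing test implicates statements 3 or 7.
m1 = {frozenset({3, 7}): 0.8, frozenset({3, 7, 12}): 0.2}
# Evidence source 2: another observation points toward statements 3 or 12.
m2 = {frozenset({3, 12}): 0.6, frozenset({3, 7, 12}): 0.4}

print(combine(m1, m2))   # the largest combined mass lands on {3}
```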
Item: SQL database design static analysis (2010-12)
Dooms, Joshua Harold; Krasner, Herb; Perry, Dewayne E.
Static analysis of database design and implementation is not a new idea. Many researchers have covered the topic in detail and defined a number of metrics that are well known within the research community. Unfortunately, unlike the use of metrics in code development, these metrics have not been widely adopted within the development community; a disconnect exists between research into database design metrics and the actual use of databases in industry. This paper describes new metrics that can be used in industry to ensure that a database's current implementation supports long-term scalability, to support easily developed and maintainable code, and to guide developers toward functions or design elements that can be modified to improve the scalability of their data systems. In addition, this paper describes a tool designed to extract these metrics from SQL Server and includes feedback from professionals regarding the usefulness of the tool and the measures contained in its output.
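The paper's metric definitions are not included in the abstract. As a generic illustration of extracting a design measure from SQL Server's catalog views, the sketch below counts columns per table (a simple table-width indicator) via pyodbc; the connection string is a placeholder and the metric is an example, not one of the paper's.

```python
# Sketch: pulling a simple design metric (columns per table) from SQL Server's
# catalog views with pyodbc. Connection details are placeholders; the metric
# is an illustrative example rather than one defined in the paper.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=ExampleDb;Trusted_Connection=yes;"
)

QUERY = """
SELECT t.name AS table_name, COUNT(c.column_id) AS column_count
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
GROUP BY t.name
ORDER BY column_count DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for table_name, column_count in conn.cursor().execute(QUERY):
        # Unusually wide tables are a common starting point for design review.
        print(f"{table_name}: {column_count} columns")
```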
Item: Studies on Combining Sequence and Structure for Protein Classification (2010-01-12)
Kim, Bong-Hyun; Grishin, Nick
The ultimate goal of our research is to develop a better understanding of how proteins evolve different structures and functions. Large-scale protein clustering can provide a useful platform for identifying such principles of protein evolution. Manual classification schemes accurately group homologous proteins, but they are slow and subjective, while automatic protein clustering methods are largely based on sequence information and therefore often fail to reflect remote homologies that can be recognized from structural information. We hypothesized that combining evolutionary signals from protein sequence and 3D structure would improve automated protein classification. To test this hypothesis, we clustered proteins into evolutionary groups using both sequence and structure with a fully automated method. We developed a stringent algorithm, the self-consistency grouping (SCG) method, which clusters proteins only if all the proteins in a group are more similar to each other than to proteins outside the group. Comparison of SCG and other commonly used clustering methods against a widely accepted manual classification scheme, the Structural Classification of Proteins (SCOP), showed that SCG groups better reflect the reference classification, and in-depth analysis of SCG clusters highlights new, non-trivial evolutionary links between proteins. SCG clustering can be further developed as a reference for evolutionary classification of proteins. [Keywords: protein classification; protein evolution; fold change; homology; structural similarity; sequence similarity; bioinformatics; computational biology]

Item: A survey of feature selection methods : algorithms and software (2015-05)
Arguello, Bryan; Dimitrov, Nedialko B.; Maloney, Andy
The feature selection problem is a major component of disease surveillance, since data sources are so costly. This report describes several existing methods for performing feature selection, along with software that implements them. To make experimenting with different algorithms easy, we have created a feature selection wrapper package in Python; the wrapper allows the user to try different algorithms on the same data set and visualize the results. Experiments are performed to validate that the methods perform as expected.
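The report's Python wrapper package is not named or documented in the abstract, so its interface cannot be shown here. The sketch below instead illustrates the underlying task with scikit-learn's built-in selector on synthetic data; the dataset and parameter choices are arbitrary.

```python
# Generic feature selection illustration with scikit-learn (not the report's
# wrapper package, whose interface is not described in the abstract).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic surveillance-style data: 200 samples, 30 features, 5 informative.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Keep the 5 features with the strongest univariate association with the label.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
chosen = selector.get_support(indices=True)
print("Selected feature indices:", chosen)
```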
Item: To develop a small interfering RNA (siRNA) design and information resource to facilitate genetic manipulation of human cells (2004-05-25)
Shah, Jyoti Khetsi; Minna, John D.
Part I: Small interfering RNAs (siRNAs) have revolutionized our ability to study the effects of altering the expression of single genes in mammalian (and other) cells through targeted knockdown of gene expression. In the past, a set of rules was used to design siRNAs that worked efficiently in most cases; more recent analyses have refined these rules while attempting to determine what most closely governs siRNA functionality. I have designed and implemented a new software tool, the siRNA Information Resource ('sIR'), that incorporates the most recent refinements of the design algorithm in order to provide fast and efficient siRNA design. sIR is a web-based computational tool that takes the existing rules for designing synthetic siRNAs and puts them into a software architecture that allows a researcher to design siRNAs for any gene. It also provides a database of information about already developed siRNAs, gathered from the literature and various other sources, which should help in future siRNA-related discoveries, and it includes a scoring system that supports rational selection of efficient siRNAs. sIR was successfully validated using already designed and developed target siRNA sequences. Part II: One of the major problems in using chemotherapy to treat cancer is whether patients whose tumors do not respond to one drug would respond to another; it would therefore be very useful if the appropriate chemotherapy could be rationally selected for each patient's tumor. We ask whether tumor gene "expression signatures" detected by microarray analysis can identify a set of genes correlating with sensitivity or resistance to a particular drug. A large panel of breast cancer cell lines was tested with cisplatin, paclitaxel, vinorelbine, doxorubicin, and gemcitabine in vitro, using a colorimetric assay to determine the concentration of drug that gives 50% growth inhibition (IC50). Gene expression profiles were obtained using Affymetrix chips, and the two data sets were merged. A panel of roughly 100 genes was significantly up-regulated (4-fold or more) for each drug in resistant cells. As an alternative approach, Pearson correlations between each gene's expression data and each drug's IC50 were computed across all cell lines analyzed: a positive correlation for a gene-drug pair indicates that the gene may be associated with resistance to the drug, whereas a negative correlation associates the gene with sensitivity to the drug, and some of these genes might be associated with the drug's mechanism of action. We conclude that gene expression signatures do exist for individual breast tumor cell chemosensitivity and that these could be of clinical significance.
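The correlation step described in Part II, a Pearson correlation between a gene's expression values and a drug's IC50 across cell lines, is a standard calculation. The sketch below shows it with SciPy on made-up numbers; the values are illustrative, not data from the study.

```python
# Pearson correlation between one gene's expression and one drug's IC50
# across a panel of cell lines. The numbers below are made up for illustration.
import numpy as np
from scipy.stats import pearsonr

expression = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])   # gene expression per line
ic50 = np.array([0.8, 2.5, 0.6, 3.1, 1.9, 2.8])          # drug IC50 per line

r, p_value = pearsonr(expression, ic50)
# r > 0 would associate high expression with resistance (higher IC50),
# r < 0 with sensitivity, mirroring the interpretation in the abstract.
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```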