Browsing by Subject "Algorithm"
Item: Algorithms for the scaling toward nanometer VLSI physical synthesis (Texas A&M University, 2007-04-25). Sze, Chin Ngai.
Along the history of Very Large Scale Integration (VLSI), we have successfully scaled down the size of transistors, scaled up the speed of integrated circuits (IC), and increased the number of transistors in a chip; these are just a few examples of our achievements in VLSI scaling. It is projected to enter the nanometer [...] timing estimation and buffer planning for global routing and other early stages such as floorplanning. A novel path-based buffer insertion scheme is also included, which can overcome the weakness of the net-based approaches.
Part-2: Circuit clustering techniques with application to Field-Programmable Gate Array (FPGA) technology mapping. The problem of timing-driven n-way circuit partitioning with application to FPGA technology mapping is studied, and a hierarchical clustering approach is presented for the latest multi-level FPGA architectures. Moreover, a more general delay model is included in order to accurately characterize the delay behavior of the clusters and circuit elements.

Item: Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization (2010-07-14). Liu, Yifang.
This research addresses some critical challenges in various problems of VLSI design automation, including sophisticated solution search on DAG topologies, simultaneous multi-stage design optimization, optimization on multi-scenario and multi-core designs, and GPU-based parallel computing for runtime acceleration. Discrete optimization for VLSI design automation problems is often quite complex, due to the inconsistency and interference between solutions on reconvergent paths in a directed acyclic graph (DAG). This research proposes a systematic solution search guided by a global view of the solution space. The key idea of the proposal is joint relaxation and restriction (JRR), which is similar in spirit to mathematical relaxation techniques such as Lagrangian relaxation. Here, relaxation and restriction together provide a global view and iteratively improve the solution. Traditionally, circuit optimization is carried out in a sequence of separate optimization stages. The problem with sequential optimization is that the best solution in one stage may be worse for another. To overcome this difficulty, we take the approach of performing multiple optimization techniques simultaneously. By searching in the combined solution space of multiple optimization techniques, a broader view of the problem leads to an overall better optimization result. This research takes this approach on two problems, namely, simultaneous technology mapping and cell placement, and simultaneous gate sizing and threshold voltage assignment. Modern processors have multiple working modes, which trade off power consumption against performance, or maintain a certain performance level in a power-efficient way. As a result, the design of a circuit needs to accommodate different scenarios, such as different supply voltage settings. This research deals with this multi-scenario optimization problem using the Lagrangian relaxation technique. Multiple scenarios are taken care of simultaneously through the balance provided by Lagrangian multipliers. Similarly, multiple objectives and constraints are dealt with simultaneously by Lagrangian relaxation. This research proposes a new method to calculate the subgradients of the Lagrangian function and solve the Lagrangian dual problem more effectively. Multi-core architecture also poses new problems and challenges to design automation. For example, multiple cores on the same chip may have identical designs in some parts while differing from each other in the rest. In the case of buffer insertion, the identical part has to be carefully optimized for all the cores under different environmental parameters. This problem has much higher complexity compared to buffer insertion on single cores. This research proposes an algorithm that optimizes the buffering solution for multiple cores simultaneously, based on critical component analysis. Under intensifying time-to-market pressure, circuit optimization not only needs to find high-quality solutions but also has to come up with results fast. Recent advances in general-purpose graphics processing unit (GPGPU) technology provide massive parallel computing power. This research turns the complex computation task of circuit optimization into many subtasks processed by parallel threads. The proposed task partitioning and scheduling methods take advantage of the GPU computing power, achieving significant speedup without sacrificing solution quality.
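The subgradient-based Lagrangian dual approach mentioned in this abstract can be illustrated with a toy sizing problem. The Python sketch below is hypothetical: the candidate sizes, the cost and delay models, and the delay bound of 4.0 are invented for illustration and are not taken from the dissertation; only the overall pattern (relax a constraint into the objective, solve the inner minimization, update the multiplier along a subgradient) reflects the technique named above.

    # Hypothetical sketch of Lagrangian relaxation with projected subgradient
    # ascent; the one-gate "sizing" problem and constants are illustrative.

    def solve_inner(lam):
        """Minimize cost(x) + lam * (delay(x) - bound) over a small discrete
        candidate set; returns the minimizer and its constraint slack."""
        candidates = [1.0, 2.0, 3.0, 4.0]           # hypothetical gate sizes
        best_x, best_val = None, float("inf")
        for x in candidates:
            cost = x                                 # area/power grows with size
            delay = 10.0 / x                         # delay shrinks with size
            val = cost + lam * (delay - 4.0)         # relax "delay <= 4.0"
            if val < best_val:
                best_x, best_val = x, val
        return best_x, (10.0 / best_x - 4.0)         # x*, subgradient g(x*)

    lam = 0.0
    for k in range(1, 51):
        x, subgrad = solve_inner(lam)                # subgradient of the dual
        lam = max(0.0, lam + (1.0 / k) * subgrad)    # projected subgradient step
    print("multiplier:", lam, "chosen size:", x)

With the diminishing step size 1/k, the multiplier settles near the value at which the two feasible sizes tie, the usual behavior of projected subgradient ascent on a Lagrangian dual.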
Item: The application of machine learning methods in software verification and validation (2010-08). Phuc, Nguyen Vinh, 1955-; Ghosh, Joydeep; Khurshid, Sarfraz.
Machine learning methods have been employed in data mining to discover useful, valid, and beneficial patterns for various applications whose domains encompass business, medicine, agriculture, census, and software engineering. Focusing on software engineering, this report presents an investigation of machine learning techniques that have been utilized to predict programming faults during the verification and validation of software. Artifacts such as traces of program executions, information about test case coverage, and data pertaining to execution failures are of special interest in addressing the following concerns: completeness of test suite coverage; automation of test oracles to reduce human intervention in software testing; detection of faults causing program failures; and defect prediction in software. A survey of literature pertaining to the verification and validation of software also revealed a novel concept designed to improve black-box testing using Category-Partition for test specifications and test suites. The report includes two experiments using data extracted from source code available from the website (15) to demonstrate the application of a decision tree (C4.5) and the multilayer perceptron for fault prediction, and an example that shows a potential candidate for the Category-Partition scheme. The results from several research projects show that the application of machine learning in software testing has achieved various degrees of success in effectively assisting software developers to improve their test strategies in the verification and validation of software systems.
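The experiment shape described in this report, a decision tree and a multilayer perceptron trained to predict faulty modules, might look as follows. This is a hedged sketch: scikit-learn's DecisionTreeClassifier is a CART-family stand-in for C4.5, and the feature matrix is synthetic noise rather than the report's source-code metrics.

    # Hedged sketch: tree and MLP classifiers on synthetic "fault" data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))            # e.g., complexity/coverage metrics
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "fault" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (DecisionTreeClassifier(random_state=0),
                  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))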
Item: Distributed trigger counting algorithms (2010-12). Casas, Juan Manual, 1978-; Garg, Vijay K. (Vijay Kumar), 1963-; McCann, Bruce.
A distributed system consists of a set of N processor nodes and a finite set of communication channels. It is frequently described as a directed graph in which each vertex represents a processor node and the edges represent the communication channels. A global snapshot of a distributed system consists of the local states of all the processor nodes and all of the in-transit messages of a distributed computation. This is meaningful because it corresponds to the global state in which the local states and communication channels of all the processor nodes in the system are recorded simultaneously. A classic example where snapshots are utilized is the scenario of a failure, where the system can restart from the last global snapshot. This is an important application of global snapshot algorithms, as it forms the basis for fault tolerance in distributed programs and aids serviceability as a distributed program debugging mechanism. Another important application is checkpointing and monitoring systems, where a set of continuous global snapshots is employed to detect when a certain number of triggers have been received by the system. When the distributed system is scaled, in terms of an increase in the number of processor nodes and in the number of expected triggers, the message complexity increases and impacts the total overhead for the communication and computation of the global snapshot algorithm. In such a large distributed system, an optimal algorithm is vital so that the distributed application program employing the snapshots does not suffer performance degradation as the size of the distributed system continues to grow over time. We are interested in global snapshot algorithms that offer lower-bound message complexity and lower-bound MaxLoad messages for large values of N processor nodes and large values of W expected triggers. In this report we study and simulate the Centralized, Grid-based, Tree-based, and LayeredRand global snapshot algorithms, then evaluate the algorithms on the total number of messages (sent and received) and MaxLoad messages (sent and received) for the trigger counting problem in distributed computing. The report concludes with simulation results that compare the performance of the algorithms with respect to the total number of messages and MaxLoad messages required by each algorithm to detect when W triggers have been delivered to the distributed system.
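Of the four schemes simulated in the report, the Centralized one is the simplest to sketch: every node reports each trigger to a single coordinator, which detects when W triggers have arrived. The Python below is an illustrative simulation, not the report's code; N, W, and the uniform trigger placement are arbitrary choices.

    # Illustrative simulation of the centralized trigger-counting baseline.
    import random

    def centralized(N=16, W=1000, seed=0):
        random.seed(seed)
        load = [0] * N            # messages sent or received, per node
        total = 0
        for _ in range(W):
            node = random.randrange(N)   # trigger arrives at a random node
            if node != 0:                # non-root nodes report to the root
                load[node] += 1          # send
                load[0] += 1             # receive
                total += 1
        return total, max(load)          # total messages, MaxLoad

    total, maxload = centralized()
    print("messages:", total, "MaxLoad:", maxload)

The coordinator's load grows linearly in W here, which is exactly the imbalance that the tree-based and LayeredRand schemes evaluated in the report aim to reduce.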
Item: Extending the Petrel Model Builder for Educational and Research Purposes (2013-04-11). Nwosa, Obiajulu C.
Reservoir simulation is a very powerful tool used in the oil and gas industry to provide various functions including, but not limited to, predicting reservoir performance, conducting sensitivity analysis to quantify uncertainty, optimizing production, and overall reservoir management. Compared to the reservoirs explored in the past, current-day reservoirs are more complex in extent and structure. As a result, reservoir simulators and the algorithms used to represent dynamic systems of flow in porous media have invariably become just as complex. In order to provide the best solutions for analyzing reservoir performance, there is a need to continuously develop reservoir simulators and reservoir simulation algorithms that best represent the performance of the reservoir without compromising efficiency and accuracy. There exist several commercial reservoir simulation packages in the market that have proven to be extremely resourceful, with functionality that covers a wide range of interests in reservoir simulation, yet there is a constant need to provide better and more efficient methods and algorithms to study and manage our reservoirs. This thesis aims at bridging the gap in the framework for developing these algorithms. To this end, this project has both an educational and a research component. It is educational because it leads to a strong understanding of the topic of reservoir simulation for students, which can be daunting, especially for those who require a more direct experience to fully comprehend the subject matter. It is research focused because it will serve as the foundation for developing a framework for integrating custom-built external simulators and algorithms with the workflow of the model builder of our reservoir simulation package of choice, i.e., Petrel with the Ocean programming environment, in a seamless manner for simulating large-scale multi-physics problems of flow in highly heterogeneous porous media. Of particular interest are the areas of model order reduction and production optimization. In-house algorithms are being developed for these areas of interest, and with the completion of this project we hope to have a framework whereby we can take our algorithms, specifically developed for areas of interest, and add them to the workflow of the Petrel Model Builder. Currently, we have taken one of our in-house simulators, i.e., a two-dimensional, oil-water, five-spot water flood pattern, as a starting point and have been able to integrate it successfully into the "Define Simulation Case" process of Petrel as an additional simulation choice for an end user. In the future, we will expand this simulator with updates to improve its performance and efficiency and extend its capabilities to incorporate areas of research interest.

Item: Impact of the LZW-based common subexpression elimination algorithm on SAT-solving efficiency (2012-05). Jn Charles, Jeriah; Zhang, Yuanlin; Gelfond, Michael; Watson, Richard.
The Satisfiability (SAT) problem is the problem of finding an assignment that satisfies a given propositional formula. SAT is effective in solving many important problems in areas such as automated reasoning, computer-aided design, and planning in Artificial Intelligence. The need to solve these problems in a reduced amount of time has spurred considerable research in improving the performance of SAT solvers, resulting in many solver algorithms being created or modified. This research investigates how the removal of common subexpressions in a formula via the Lempel-Ziv-Welch (LZW)-based approach can affect the efficiency of SAT solving. By substituting new variables for common subexpressions in the original formula, we compare the results of passing the original formula and the new equivalent formula through a SAT solver. In this LZW-based approach, we modify the Lempel-Ziv-Welch data compression algorithm to find and substitute the common subexpressions in the formula.
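The core rewriting step behind this abstract, substituting a fresh variable for a common subexpression while keeping the formula equisatisfiable, can be sketched as below. This simplified version shares only a single most frequent literal pair and is not the thesis's LZW dictionary construction; the clause set and variable numbering are illustrative.

    # Hedged sketch: replace a repeated literal pair (a OR b) with a fresh
    # variable z, adding clauses for z <-> (a OR b) so satisfiability is
    # preserved. Literals are signed integers, CNF is a list of lists.
    from collections import Counter
    from itertools import combinations

    def eliminate_common_pair(clauses, next_var):
        pairs = Counter()
        for c in clauses:
            pairs.update(combinations(sorted(c), 2))
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            return clauses, next_var             # nothing worth sharing
        z = next_var                             # fresh variable for (a OR b)
        new = [[z] + [l for l in c if l not in (a, b)]
               if a in c and b in c else list(c) for c in clauses]
        new += [[-z, a, b], [-a, z], [-b, z]]    # z <-> (a OR b)
        return new, next_var + 1

    cnf = [[1, 2, 3], [1, 2, -4], [1, 2, 5]]     # (x1 v x2) shared three times
    print(eliminate_common_pair(cnf, 6))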
Item: Inversion-based petrophysical interpretation of logging-while-drilling nuclear and resistivity measurements (2013-08). Ijasan, Olabode; Torres-Verdín, Carlos.
Undulating well trajectories are often drilled to improve length exposure to rock formations, target desirable hydrocarbon-saturated zones, and enhance the resolution of borehole measurements. Despite these merits, undulating wells can introduce adverse conditions to the interpretation of borehole measurements that are seldom observed in vertical wells penetrating horizontal layers. Common examples are polarization horns observed across formation bed boundaries in borehole resistivity measurements acquired in highly deviated wells. Consequently, conventional interpretation practices developed for vertical wells can yield inaccurate results in high-angle/horizontal (HA/HZ) wells. A reliable approach to account for well trajectory and bed-boundary effects in the petrophysical interpretation of well logs is the application of forward and inverse modeling techniques, because of their explicit use of measurement response functions. The main objective of this dissertation is to develop inversion-based petrophysical interpretation methods that quantitatively integrate logging-while-drilling (LWD) multi-sector nuclear (i.e., density, neutron porosity, photoelectric factor, natural gamma ray) and multi-array propagation resistivity measurements. Under the assumption of a multi-layer formation model, the inversion approach estimates formation properties specific to a given measurement domain by numerically reproducing the available measurements. Subsequently, compositional multi-mineral analysis of inverted layer-by-layer properties is implemented for volumetric estimation of rock and fluid constituents. The most important prerequisite for efficient petrophysical inversion is fast and accurate forward models that incorporate specific measurement response functions for numerical simulation of LWD measurements. In the nuclear measurement domain, first-order perturbation theory and flux sensitivity functions (FSFs) are reliable and accurate for rapid numerical simulation. Albeit efficient, these first-order approximations can be inaccurate when modeling neutron porosity logs, especially in the presence of borehole environmental effects (tool standoff and/or invasion) and across highly contrasting beds and complex formation geometries. Accordingly, a secondary thrust of this dissertation is the introduction of two new methods for improving the accuracy of rapid numerical simulation of LWD neutron porosity measurements: (1) a neutron-density petrophysical parameterization approach for describing the formation macroscopic cross section, and (2) a one-group neutron diffusion flux-difference method for estimating perturbed spatial neutron porosity fluxes. Both methods are validated with full Monte Carlo (MC) calculations of spatial neutron detector FSFs and subsequent simulations of neutron porosity logs in the presence of LWD azimuthal standoff, invasion, and highly dipping beds. Analysis of field and synthetic verification examples with the combined resistivity-nuclear inversion method confirms that inversion-based estimation of hydrocarbon pore volume in HA/HZ wells is more accurate than conventional well-log analysis. Estimated hydrocarbon pore volume from conventional analysis can give rise to errors as high as 15% in undulating HA/HZ intervals.
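For reference, the textbook one-group neutron diffusion balance that a flux-difference method of this kind builds on can be written, in a generic form that is not necessarily the dissertation's exact formulation, as

    -\nabla \cdot \left( D(\mathbf{r})\, \nabla \phi(\mathbf{r}) \right) + \Sigma_a(\mathbf{r})\, \phi(\mathbf{r}) = S(\mathbf{r})

where \phi is the scalar neutron flux, D the diffusion coefficient, \Sigma_a the macroscopic absorption cross section, and S the source term. Perturbing the formation (standoff, invasion, a dipping bed) changes D and \Sigma_a, and solving for the flux difference between the perturbed and base problems stands in for a full Monte Carlo recalculation.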
Item: Measure-Driven Algorithm Design and Analysis: A New Approach for Solving NP-hard Problems (2010-10-12). Liu, Yang.
NP-hard problems have numerous applications in various fields such as networks, computer systems, and circuit design. However, no efficient algorithms have been found for NP-hard problems, and it has been commonly believed that none exist, i.e., that P ≠ NP. Recently, it has been observed that many real-world instances of NP-hard problems contain parameters much smaller than their input sizes. In the last twenty years, researchers have therefore been interested in developing efficient algorithms, i.e., fixed-parameter tractable algorithms, for those instances with small parameters. Fixed-parameter tractable algorithms can practically find exact solutions to problem instances with small parameters, even though those problems are considered intractable in traditional computational theory. In this dissertation, we propose a new approach to algorithm design and analysis: discovering better measures for problems. In particular, we use two measures instead of the traditional single measure, input size, to design algorithms and analyze their time complexity. For several classical NP-hard problems, we present improved algorithms designed and analyzed with this new approach. First, we show that the new approach is extremely powerful for designing fixed-parameter tractable algorithms by presenting improved fixed-parameter tractable algorithms for the 3D-matching and 3D-packing problems, the multiway cut problem, the feedback vertex set problems on both directed and undirected graphs, and the max-leaf problems on both directed and undirected graphs. Most of our algorithms are practical for problem instances with small parameters. Moreover, we show that this new approach is also good for designing exact algorithms (with no parameters) for NP-hard problems by presenting an improved exact algorithm for the well-known satisfiability problem. Our results demonstrate the power of this new approach to algorithm design and analysis for NP-hard problems. In the end, we discuss possible future directions for this new approach and other approaches to algorithm design and analysis.

Item: Multi-state PLS based data-driven predictive modeling for continuous process analytics (2012-05). Kumar, Vinay; Flake, Robert H.; Edgar, Thomas F.
Today's process control industry, which is extensively automated, generates huge amounts of process data from the sensors used to monitor the processes. These data, if effectively analyzed and interpreted, can give a clearer picture of the performance of the underlying process and can be used for its proactive monitoring. With the great advancements in computing systems, a new genre of process monitoring and fault detection systems is being developed that is essentially data-driven. The objectives of this research are to explore a set of data-driven methodologies with the motive of providing a predictive modeling framework and to apply it to process control. This project explores some of the data-driven methods being used in the process control industry, compares their performance, and introduces a novel method based on statistical process control techniques. To evaluate the performance of this novel predictive modeling technique, called Multi-state PLS (a patented continuous process analytics technique being developed at Emerson Process Management, Austin), extensive simulations were performed in MATLAB. A MATLAB graphical user interface has been developed for implementing the algorithm on data generated from the simulation of a continuously stirred blending tank. The effects of noise, disturbances, and different excitations on the performance of this algorithm were studied through these simulations. The simulations were performed first on a steady-state system and then applied to a dynamic system. Based on the results obtained for the dynamic system, some modifications have been made to the algorithm to further improve the prediction performance when the system is in a dynamic state. Future work includes applying the MATLAB-based predictive modeling technique to real production data, assessing the performance of the algorithm, and comparing it with the performance on simulated data.
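A minimal sketch of the PLS building block behind a multi-state scheme like the one described: one PLS regression per operating state, keyed so that prediction uses the model for the current state. The blending-tank data are random numbers here, and the two-state dictionary is an assumption made for illustration; this is not Emerson's patented algorithm.

    # Hedged sketch: per-state PLS regressions on simulated sensor data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    models = {}
    for state in ("steady", "dynamic"):              # hypothetical states
        X = rng.normal(size=(200, 6))                # sensor readings
        y = X @ rng.normal(size=(6, 1)) + 0.1 * rng.normal(size=(200, 1))
        models[state] = PLSRegression(n_components=3).fit(X, y)

    x_new = rng.normal(size=(1, 6))
    print(models["steady"].predict(x_new))           # predict with the model
                                                     # matching the current state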
Item: Newton's Method (2013-08). Banacka, Nicole Lynn; Luecke, John Edwin.
Root-finding algorithms have long been studied for their various applications, and Newton's Method is just one of these root-finding algorithms. This report discusses Newton's Method and aims to describe the procedures behind the method and to determine its capabilities in finding the zeros of various functions. The possible outcomes when using this method are also explained: the Newton iteration may converge to a root, diverge from the root, or enter a cycle. Modifications of the method and its applications are also described, showing the flexibility of the method for different situations.
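The iteration behind the report, x_{n+1} = x_n - f(x_n)/f'(x_n), fits in a few lines of Python; the cube-root example and the tolerances are illustrative.

    # Newton's Method: follow the tangent line to its x-intercept, repeat.
    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x                 # converged to a root
            d = fprime(x)
            if d == 0:
                raise ZeroDivisionError("flat tangent: method fails here")
            x = x - fx / d               # Newton step
        return x                         # may be cycling or diverging

    # Converges rapidly from a good starting point:
    print(newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0))  # ~1.2599

Started at x0 = 0, this same function hits the flat-tangent failure (f'(0) = 0), and f(x) = x^3 - 2x + 2 started at x0 = 0 bounces between 0 and 1 forever: concrete instances of the failure and cycling outcomes the report describes.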
Item: Plog: Its algorithms and applications (2012-05). Zhu, Weijun; Gelfond, Michael; Zhang, Yuanlin; Nelson, Rushton.
This dissertation is a contribution toward combining logical and probabilistic reasoning. The work is based on the language P-log, which combines a recently developed non-monotonic logic language, Answer Set Prolog, with the philosophy of causal Bayesian networks. The goal of this dissertation was to design and implement an inference engine for P-log and to develop a methodology for its use. As a result of this research, we built two P-log inference engines. Through experiments on various examples, we have shown the advantages of using plog2.0, a system based on a partial grounding algorithm and a concept of partial possible worlds. We introduced a new action description language, NB, which allows specifying non-deterministic actions as well as the probabilities associated with these actions. We developed an encoding that maps systems written in NB to P-log programs. We presented systematic methods of representing probabilistic diagnosis and planning problems, and algorithms for finding their solutions with P-log systems. Finally, we investigated the performance on these two applications and compared it with that of other similar systems.

Item: Practicality of algorithmic number theory (2013-08). Taylor, Ariel Jolishia; Luecke, John Edwin.
This report discusses some of the uses of algorithms within number theory. Topics examined include the applications of algorithms in the study of cryptology, the Euclidean Algorithm, prime-generating functions, and the connections between algorithmic number theory and high school algebra.
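The Euclidean Algorithm mentioned above, together with its extended form that underpins the cryptologic applications (for example, computing modular inverses), can be sketched as follows; the inputs are arbitrary illustrative integers.

    # Euclidean Algorithm and its extended form.
    def gcd(a, b):
        while b:
            a, b = b, a % b      # replace (a, b) with (b, a mod b)
        return a

    def extended_gcd(a, b):
        """Returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    print(gcd(252, 105))          # 21
    print(extended_gcd(240, 46))  # (2, -9, 47): 240*(-9) + 46*47 == 2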
Item: Sentiment-based Classification of Tweeters and University Programs (2014-08-21). Huang, Bolun.
The rapidly growing World Wide Web (WWW) is no longer a passive information provider; nowadays, Internet users themselves have become contributors to the WWW. A large amount of user-generated data, along with non-user-generated data, makes our world an informative, yet perhaps over-informed, society. The increasing amount of unorganized, disordered, unstructured, or even randomly generated data has driven the momentum of big data analysis, which aims to discover and learn the hidden patterns behind the data. In this thesis we look at two problems of mining knowledge from data. In the first project, we try to classify "democrats" and "republicans" on Twitter. We first propose a sentiment-based classification model, with the aim of addressing the problem that conventional quantitative features, such as tweet count, follower-to-following ratio, and election tweet count, cannot reflect the opinion alignment of tweeters. We therefore utilize sentiment scores over multiple topics as the feature vector in our classification model, and we propose an automatic topic selection model to learn those distinguishing topics, making the sentiment feature selection domain independent. However, the sentiment-based classification model does not perform much better than the non-sentiment model. Given this, we propose using social relationship graph information to adjust our sentiment vectors. The graph-adjusted sentiment model achieves a classification accuracy higher than 80 percent. Furthermore, we deploy a completely graph-based model, Belief Propagation (BP), on the social graph, which achieves a prediction accuracy higher than 85 percent. We conclude that the effect of the social relationship graph is more important than the sentiment of tweets for classifying users into "democrats" and "republicans". In the second project, we propose an alternative, algorithmic way to rank graduate schools, instead of the qualitative surveys that U.S. News uses. Based on the assumption that "schools tend to hire PhD graduates from better or peer schools" as their faculty members, we propose deploying link-based ranking algorithms on the "hiring graph" among universities. We refine the PageRank (PR) and Hyperlink-Induced Topic Search (HITS) algorithms by taking edge weights into consideration, as our own way to rank graduate programs. To validate our approach, we collect two separate data sets to construct the "hiring graph": faculty data for the top 50 Computer Science (CS) programs and faculty data for the top 50 Mechanical Engineering (ME) programs across the United States. By comparing our new rankings with the U.S. News ranking, we discover that some programs are either under-ranked or over-ranked by U.S. News. We also conduct extensive analysis on our data, revealing many interesting patterns and cases behind the U.S. News ranking. Finally, we conduct sensitivity analysis on each proposed algorithm to see how sensitive it is to changes in the "hiring graph".
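An edge-weighted PageRank of the kind described can be sketched on a toy "hiring graph", where an edge u -> v with weight w means that w of u's faculty earned their PhDs at v, so rank flows toward PhD-granting programs. The three-school graph, the weights, and the damping factor below are invented for illustration; the thesis's refinement may differ in details such as dangling-node handling, which this sketch omits.

    # Hedged sketch: PageRank with rank split in proportion to edge weights.
    def weighted_pagerank(edges, nodes, d=0.85, iters=100):
        rank = {v: 1.0 / len(nodes) for v in nodes}
        out_weight = {v: sum(w for (u, _, w) in edges if u == v) for v in nodes}
        for _ in range(iters):
            nxt = {v: (1 - d) / len(nodes) for v in nodes}
            for u, v, w in edges:
                if out_weight[u]:
                    nxt[v] += d * rank[u] * w / out_weight[u]  # weighted split
            rank = nxt
        return rank

    nodes = ["A", "B", "C"]
    edges = [("A", "B", 3), ("A", "C", 1), ("B", "C", 2), ("C", "A", 1)]
    print(weighted_pagerank(edges, nodes))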
Item: Standardization for intelligent detection and autonomous operation of non-structured hardware, and its application on railcar brake release operation (2015-05). Hammel, Christopher Scott; Tesar, Delbert; Ashok, Pradeepkumar.
This thesis introduces a standard framework for evaluating and planning desired autonomous (or semi-autonomous) operations, then applies the framework, in detail, to the task of automating emergency brake release before railcar decoupling. A significant hurdle to be accounted for is the lack of standardization of much of the hardware of interest in industry. Non-standardized railcar components must be formally structured as fully as possible to improve the reliability of the robotic automation. The brake release task requires either pushing or pulling a "bleed rod" that protrudes from the side of each railcar. The requirements for each step of the evaluation and planning process are laid out in this thesis as an example of how the framework should be applied to future automation tasks.

Item: Systems and Algorithms for Automated Collaborative Observation using Networked Robotic Cameras (2011-10-21). Xu, Yiliang.
The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows a master-slave control architecture, and requests for controlling robot actuation are generated entirely by human operators. Recently, fast-evolving advances in network and computer technologies, together with the decreasing size and cost of sensors and robots, enable us to further extend the MOMR system architecture to incorporate heterogeneous components such as humans, robots, sensors, and automated agents, with requests for controlling robot actuation generated by all participants. We term this the MOMR++ system. However, to reach the system's best potential and performance, many technical challenges need to be addressed. In this dissertation, we address two major challenges in MOMR++ system development. We first address the robot coordination and planning issue in the application of an autonomous crowd surveillance system. The system consists of multiple robotic pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides an overview of the scene and detects moving objects, which are required for close-up views using the PTZ cameras. When applied to the pedestrian surveillance application and compared to a previous work, the system increases the number of observed objects by over 210% in heavy traffic scenarios. The key issue here is, given a limited number of PTZ cameras (p, with p > 0) and many more observation requests (n, with n >> p), how to coordinate the cameras to best satisfy all the requests. We formulate this as a new camera resource allocation problem. Given p cameras, n observation requests, and an approximation bound [epsilon], we develop an approximation algorithm running in O(n/[epsilon]? + p?/[epsilon]?) time, and an exact algorithm, when p = 2, running in O(n?) time. We then address the automatic object content analysis and recognition issue in the application of an autonomous rare bird species detection system. We set up the system in the forest near Brinkley, Arkansas. The camera monitors the sky, detects motion, and preserves video data only for the targeted bird species. During the one-year search, the system reduced 29.41 TB of raw video data to only 146.7 MB (a reduction rate of 99.9995%). The key issue here is to automatically recognize the flying bird species. We verify the bird body axis dynamic information with an extended Kalman filter (EKF) and compare the bird's dynamic state with prior knowledge of the targeted bird species. We quantify the uncertainty in recognition due to measurement uncertainty and develop a novel Probable Observation Data Set (PODS)-based EKF method. In experiments with real video data, the algorithm achieves 95% area under the receiver operating characteristic (ROC) curve. Through the exploration of the two MOMR++ systems, we conclude that the new MOMR++ system architecture enables a much wider range of participants, enhances the collaboration and interaction between participants so that information can be exchanged among them, suppresses the chance of individual bias or mistakes in the observation process, and further frees humans from the control/observation process by providing automatic control/observation. The new MOMR++ system architecture is a promising direction for future telerobotics advances.

Item: The development and implementation of an ionic-polymer-metal-composite propelled vessel guided by a goal-seeking algorithm (Texas A&M University, 2007-09-17). Vickers, Jason Aaron.
This thesis describes the use of an ultrasonic goal-seeking algorithm with ionic polymer metal composite (IPMC), an electroactive polymer, as the actuator to drive a vessel toward a goal. The signal transmitting and receiving circuits, as well as the goal-seeking algorithm, are described in detail. Two test vessels were created: a larger vessel that contained all components necessary for autonomy, and a smaller vessel that contained only the sensors and IPMC strips, with all power and signals transmitted via an umbilical cord. To increase the propulsive efforts of the second, smaller vessel, fins were added to the IPMC strips, increasing the surface area by over 700%, which was determined to yield a 22-fold force increase. After extensive testing, it was found that the three IPMC strips, used as oscillating fins, could not generate enough propulsion to move either vessel, with or without fins. With the addition of fins, the oscillating frequency was reduced from 0.86 Hz to 0.25 Hz. However, the goal-seeking algorithm was successful in guiding the vessel toward the target, an ultrasonic transmitter. When moved manually according to the instructions given by the algorithm, the vessel successfully reached the goal. Using assumptions based on prior experiments regarding the speed of an IPMC-propelled vessel, the trial in which the goal was to the left of the axis required 18.2% more time to arrive at the goal than the trial in which the goal was to the right. This significant difference is due to the goal-seeking algorithm's means of acquiring the strongest signal. After the research had concluded and the propulsors failed to yield the desired results, many factors were considered to rationalize the observations. The operating frequency had been reduced, and it was found, by the impulse-momentum theorem, that the propulsive force was reduced proportionally. The literature surveyed addressed undulatory motion, which produces constant propulsive force, rather than oscillatory motion, which yields intermittent propulsive force. These reasons, among others, were put forward to explain the results and to show that the cause of the negative results was inherent to the actuators themselves; all rational options were considered in the attempt to obtain positive results.
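The impulse-momentum argument in the final abstract can be made explicit. Assuming each fin stroke imparts a roughly fixed momentum \Delta p and strokes repeat with period \Delta t = 1/f (an idealization of the thesis's reasoning, not its exact derivation), the average thrust is

    \bar{F}\, \Delta t = \Delta p \quad \Rightarrow \quad \bar{F} = \Delta p \cdot f

so cutting the oscillation frequency f from 0.86 Hz to 0.25 Hz scales the average propulsive force by 0.25/0.86, roughly 0.29, consistent with the proportional reduction the author cites.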