Browsing by Subject "Visualization"
Now showing 1 - 20 of 37
Item: 3D interactive pictorial maps (Texas A&M University, 2005-02-17). Naz, Asma.
The objective of my research is to revive and practice the art of traditional pictorial maps in 3D cartographic visualization. I have chosen to create both graphical and statistical pictorial maps, which can be used for tourism and data representation respectively. Traditional hand-drawn and sculptural pictorial maps by famous artists were selected as a base for my work. The goal was to recreate or imitate the style, character, and features of these traditional maps with 3D computer graphics and to analyze how effectively 3D tools can be used to communicate map information. I also wanted to explore ways to make these maps interactive on the Web and accessible to a large number of viewers. The results comprise a number of interactive 3D pictorial maps of different countries and continents. These maps are initially built with Maya, a 3D modeling package, and converted into web pages using Viewpoint technology. For the statistical maps, MEL scripts are used in Maya to take input from the user and change the shape of models accordingly to represent data. These maps are interactive and navigable and are designed to be easily accessible on the Web.

Item: An analysis of selected factors of an exploratory art program with emphasis on visualization (Texas Tech University, 1969-05). Woodson, Eleanor Purcell.
Not available.

Item: An Approach To Painterly Rendering (2014-10-28). Broussard, Garrett.
An often overlooked key component of 3D animation is the rendering engine. However, some rendering techniques are hard to implement or are too restrictive in the imagery they can produce. The goal of this thesis is to build easy-to-use software that artists can use to create stylistic animations while minimizing the technical constraints placed on the art. For this project, I present a tool that allows artists to create temporally coherent, painterly animations using Autodesk Maya and Corel Painter, and I use that tool to create proof-of-concept animations. This new rendering technique offers artists a different avenue through which to showcase their art and provides certain freedoms that current computer graphics techniques lack. Accompanying this paper are animations demonstrating possible outcomes, available through the Texas A&M online library catalog system. The painting system expands upon an algorithm designed by Barbara Meier of the Disney Research Group that spreads particles across a surface and uses those particles to define brush strokes. The first step is to infer the general syntax of Painter's commands by using Painter's ability to record a painting made by an artist. The next step is to use those commands and syntax in the automated creation of scripts that generate the paintings used for the animation. As this thesis is designed to showcase a rendering technique, I found animations made by fellow candidates for the Master of Science and Master of Fine Arts degrees in Visualization whose qualities are accented by a painterly treatment, and rendered them using this technique.

Item: Automatic Generation of Virtual Cities Based on User Defined Zoning Districts (2012-07-16). Van Maanen, Kathryn Elizabeth.
Traditionally, city maps are drafted in two dimensions, on paper or using GIS technology, and specify the placement and boundaries of different zoning districts. Two-dimensional maps place limitations on the designer, including, but not limited to, the inability to foresee areas that might be shaded by neighboring buildings. This thesis presents a prototype visualization tool for creating city maps in three dimensions. Three-dimensional city planning can be beneficial because it allows the designer to envision a proposed skyline and balance the positive space of the building mass against the negative space surrounding and between buildings. The tool allows the city planner to lay out a city map using four different zoning districts. Once drafted, the city map is populated with three-dimensional building models representing buildings commonly found in each zoning district. The purpose of this virtual environment is to help visualize a hypothetical city created under the conditions of the proposed city map.

Item: Balancing human and system visualization during document triage (2009-05-15). Bae, Soon Il.
People must frequently sort through and identify relevant materials from a large set of documents. Document triage is a specific form of information collecting in which people quickly evaluate a large set of documents from the Internet by reading (or skimming) them and organizing them into a personal information collection. During triage, people can re-read documents, progressively refine their organization, and share results with others. People usually perform triage using multiple applications in concert: a search engine interface presents lists of potentially relevant documents; a document reader displays their content; and a text editor or a more specialized application records notes and assessments. However, people often become disoriented while switching between these subtasks, which can hinder the interaction between the subtasks and distract people from focusing on documents of interest. To support document triage, we have developed an environment that infers users' interests based on their interactions with multiple applications and on an analysis of the characteristics and content of the documents they are interacting with.
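The interest-inference step described in this abstract could, in spirit, be sketched as a weighted combination of interaction signals; the signal names, weights, and data below are purely illustrative assumptions, not the actual model from the thesis:

```python
# Hypothetical sketch of inferring per-document interest from
# interaction signals during triage. Signal names and weights are
# invented for illustration, not taken from the thesis.

def interest_score(signals, weights=None):
    """Combine normalized interaction signals into a single score."""
    if weights is None:
        weights = {"dwell_time": 0.4, "scroll_depth": 0.2,
                   "annotations": 0.3, "revisits": 0.1}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

docs = {
    "doc_a": {"dwell_time": 0.9, "scroll_depth": 0.8,
              "annotations": 1.0, "revisits": 0.5},
    "doc_b": {"dwell_time": 0.2, "scroll_depth": 0.1,
              "annotations": 0.0, "revisits": 0.0},
}

# Rank documents by inferred interest, highest first.
ranked = sorted(docs, key=lambda d: interest_score(docs[d]), reverse=True)
```

In an environment like the one described, such scores would then drive coordinated visualizations across the reader, search interface, and note-taking applications.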
The inferred user interest is used to relieve disorientation by generating visualizations in multiple applications that help people find documents of interest, as well as interesting sections within documents.

Item: Black hole visualization and animation (2010-05). Krawisz, Daniel Gregory; Matzner, Richard A.; Shepley, Lawrence C.
Black hole visualization is a problem of raytracing over curved spacetimes. This paper discusses the physics of light in curved spacetimes, the geometry of black holes, and the appearance of objects as viewed through a relativistic camera (the Penrose-Terrell effect). It then discusses the computational issues of generating images of black holes with a computer. A method of determining the most efficient series of steps to calculate the value of a mathematical expression is described and used to improve the speed of the program. The details of raytracing over curved spaces not covered by a single chart are described, and a method of generating images of several black holes in the same spacetime is discussed. Finally, a series of images generated by these methods is given and interpreted.

Item: Case study assessment of 3D and 4D modeling techniques for early constructability review of transportation projects (2011-08). Schmeits, Cameron William; O'Brien, William J.; Borcherding, John.
Transportation projects are unique and face many issues, such as right-of-way (ROW) acquisition, traffic control, and utilities. To help solve some of these issues, projects should employ constructability review. Over the past 25 years, research on constructability has consistently shown substantial cost and schedule benefits. To fully obtain those benefits, constructability should be employed from the very beginning of the project, at the conceptual planning phase. One tool to support implementation is 3D and 4D visualization, but research on its benefits and applications for transportation projects still lags behind that for building projects. This thesis aims to provide a framework for how 3D and 4D visualization could play an impactful role when used in the early planning and design process. Two case studies are used to develop that framework: the Woodall Rodgers Deck Plaza and the Eastern Extension of the President George Bush Turnpike, both in Dallas, Texas. Information taken from interviews with Texas Department of Transportation staff is used to develop a list of issues for each project, as well as the impacts those issues have had. For each issue, the thesis proposes how 3D and 4D visualization, implemented during the early planning phases, could help mitigate it.

Item: Constructing a GIS-based 3D urban model using LiDAR and aerial photographs (Texas A&M University, 2005-02-17). Lin, Wei-Ming.
Due to the increasing availability of high-resolution remotely sensed imagery and detailed terrain surface elevation models, urban planners and municipal managers can now model and visualize urban space in three dimensions. The traditional approach to representing urban space is the 2D planimetric map with building footprints, facilities, and road networks. Recently, a number of methods have been developed to represent true 3D urban models, including panoramic imaging, Virtual Reality Modeling Language (VRML), and computer-aided design (CAD). These methods focus on aesthetic representation, but they lack sufficient spatial query and analytical capabilities. This research evaluates the conventional approaches to 3D urban modeling and identifies their advantages and limitations. GIS functionality is combined with 3D urban visualization techniques to develop a GIS-based urban modeling method; algorithms and techniques are explored to derive urban objects and their attributes from airborne LiDAR and high-resolution imagery; and 3D urban models of the Texas A&M University (TAMU) campus and downtown Houston are implemented using these algorithms and techniques. By adding close-range camera images and high-resolution aerial photographs as textures for urban objects, photorealistic visualization is achieved for walk-through and fly-through animations. The TAMU campus model and the downtown Houston model serve as proof of concept, demonstrating the advantages of the GIS-based approach. These two prototype applications show that the GIS-based 3D urban modeling method, coupling ArcGIS with MultiGen-Paradigm Site Builder 3D software, can realize the desired functionality in georeferencing, geographic measurement, spatial query, spatial analysis, and numerical modeling in a 3D visual environment.

Item: Control flow graph visualization and its application to coverage and fault localization in Python (2015-05). Salling, Jackson Lee; Khurshid, Sarfraz; Julien, Christine.
This report presents a software testing tool that creates visualizations of the control flow graph (CFG) from Python source code. The CFG is a representation of a program that shows the execution paths that may be taken by the machine. Similar techniques could be applied to many other languages, but the CFGs in this tool are tailored to the Python language.
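As a hypothetical illustration of the kind of analysis such a tool performs (not the report's actual implementation), Python's standard `ast` module can walk the syntax tree and count branch points, a rough proxy for how large the control flow graph grows:

```python
import ast

# Illustrative sketch only: count branching constructs in Python
# source. Each branch point adds paths to the control flow graph;
# branch points + 1 is the standard cyclomatic complexity proxy.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def cyclomatic_complexity(source):
    """Return branch-point count + 1 for the given source string."""
    tree = ast.parse(source)
    branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
    return branches + 1

SRC = """
def classify(x):
    if x > 0:
        return "pos"
    elif x < 0:
        return "neg"
    return "zero"
"""
# The elif parses as a nested ast.If, so SRC has two branch points.
```

A real CFG builder would additionally record basic blocks and the edges between them; this sketch only measures how branchy the graph would be.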
As computers get faster, tools that help programmers be effective can become more complex while still giving quick feedback, without imposing an undue performance burden. This tool explores several approaches to giving developers feedback through a visualization of the CFG. First, merely viewing a CFG gives a different perspective on the code: a programmer could juxtapose the CFG with complexity metrics during development, seeing increased complexity as the graph grows larger. Second, the tool implements a mechanism to measure code coverage of Python modules, extending the visualization to show coverage as a highlighted CFG; test coverage requirements are calculated to check node, edge, edge-pair, and prime path coverage, and a study of existing testing tools suggests that no existing tool for Python provides all of these coverage levels. Third, the tool provides an interface for adding custom highlighting of the CFG, used here to visualize fault localization; seeing the most suspicious locations identified by fault localization techniques could reduce debugging time. Results from running the tool on several popular Python packages, and on itself, show that its performance is competitive with the most popular coverage tool when measuring branch coverage. It is slightly slower on statement coverage alone, but much faster than an unoptimized version and than a logic coverage tool. The report also presents ideas for extensions, among them incorporating program repair using fault localization and mutation operators. Visualizing code as a CFG provides interesting ways to look at many software testing metrics.

Item: Creating Automated Interactive Video Playback for Studies of Animal Communications (2010-01-16). Butkowski, Trisha.
Video playback is a technique used to study the visual communication and behaviors of animals. While video playback is a useful tool, most experiments lack the ability for the visual stimulus to interact with the live animal. The limited number of interactive video playback experiments can be attributed partly to the lack of software for conducting them. To facilitate such experiments, I have created a method that combines real-time animation with video tracking software. To demonstrate the method, a prototype was created and used to conduct automated mating choice trials on female swordtail fish. The results of the trials show that this prototype can automatically create effective interactive visual stimuli. In addition, the results show that interactive video playback has a measurable effect on the female swordtail fish, Xiphophorus birchmanni.

Item: Design, Construction, and Visualization of Transparent Full Scale High Pressure Test Facility for Electronic Submersible Pumps (2013-12-10). Marchetti, Joseph Michael.
With the advent of aging oilfields and extraction in extreme conditions, artificial lift has become a necessity for making certain fields technically and economically feasible. One artificial lift method with high throughput that can be adapted to a variety of production situations is the electric submersible pump. One issue with these pumps is their natural inability to handle two-phase gas-liquid flow without considerable loss of performance or outright failure. A pump, the Baker Hughes Centrilift G470 multi-vane pump (MVP), was developed to handle two-phase flow. To understand the flow patterns and phenomena that occur in the pump over a variety of conditions, a full-scale, full-speed, moderate-pressure, transparent pump was designed and constructed at the Texas A&M University Turbomachinery Laboratory. The closed-loop test facility provides a means for flow visualization of predicted recirculation, bubble coalescence, and stagnation. The pump was built using the stereolithography (SLA) manufacturing process with a polycarbonate casing for optimal clarity and safety. High-speed photography with dedicated lighting allowed visualization through the eye of the impeller and in the channels of the diffuser. Recirculation between the blades of the impeller was observed, and within the diffuser, large recirculation zones on the suction side of the vane were observed blocking up to 75% of the diffuser channel outlet. Further analysis using advanced flow velocity measurements such as PIV or DGV will more fully characterize the pump, allowing improvement of CFD simulations and of the pump design itself.

Item: Engineering collaboration via electronic media: how to promote reflective thinking skills and visualize data with technology (2011-08). Ramos, Noel Hector; Martin, Taylor; Allen, David T.
Online discussion forums and reflective writing are proven methods for enriching conceptual understanding and are hallmarks of engineering education. However, plagiarism and many students' apprehension about contributing to online journals can undermine the effectiveness of these educational tools. Using elements of the engineering design cycle, I have created a blogging website that addresses these problems by restricting comment visibility for users and that includes a graphic visualization called a "word cloud" to supplement discussion. A prototype was tested with UTeach Engineering teachers for feedback on design and use. The critiques provided examples of classroom use, constructive design feedback, and ideas for its use as a formative assessment.
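At its core, a "word cloud" like the one mentioned above reduces to term frequencies mapped to font sizes. A minimal sketch of that idea follows; the scaling constants and the crude length-based word filter are illustrative assumptions, not the website's actual code:

```python
from collections import Counter

# Illustrative word-cloud sizing: count word frequencies and scale
# each word's font size linearly between min_pt and max_pt.
def word_cloud_sizes(text, min_pt=10, max_pt=48):
    """Map each word's frequency to a font size in points."""
    words = [w.lower().strip(".,;") for w in text.split()]
    counts = Counter(w for w in words if len(w) > 3)  # crude stop-word filter
    peak = max(counts.values())
    return {w: min_pt + (max_pt - min_pt) * c // peak
            for w, c in counts.items()}

sizes = word_cloud_sizes(
    "design cycle design feedback feedback design assessment"
)
# The most frequent word ("design") gets the largest font size.
```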
The design could be used as a pedagogical tool for investigating formative assessment in engineering education, but further research on "word cloud" visualization and journal data collection is needed to expand the current design.

Item: Exploration, Registration, and Analysis of High-Throughput 3D Microscopy Data from the Knife-Edge Scanning Microscope (2014-04-25). Sung, Chul.
Advances in high-throughput, high-volume microscopy techniques have enabled the acquisition of extremely detailed anatomical structures of human or animal organs. The Knife-Edge Scanning Microscope (KESM) is one of the first instruments to produce sub-micrometer resolution (~1 µm³) data from whole small-animal brains. Using the KESM, we successfully imaged entire mouse brains stained with Golgi (neuronal morphology), India ink (vascular network), and Nissl (soma distribution). Our data sets fill the gap left by most existing data sets, which cover only part of an organ or have orders-of-magnitude lower resolution. However, even with such unprecedented data sets, we still lack a suitable informatics platform to visualize and quantitatively analyze them. This dissertation addresses three key gaps: (1) due to the large volume (several teravoxels) and the multiscale nature of the data, visualization alone is a huge challenge, let alone quantitative connectivity analysis; (2) the uncompressed KESM data exceed a few terabytes, and to compare and combine them with data sets from other imaging modalities, the KESM data must be registered to a standard coordinate space; and (3) quantitative analysis that seeks to count every neuron in our massive, growing, and sparsely labeled data is a serious challenge. The goals of my dissertation are as follows: (1) develop an online neuroinformatics framework for efficient visualization and analysis of the multiscale KESM data sets; (2) develop a robust landmark-based 3D registration method for mapping the KESM Nissl-stained whole-mouse-brain data into the Waxholm Space (a canonical coordinate system for the mouse brain); and (3) develop a scalable, incremental learning algorithm for cell detection in high-resolution KESM Nissl data. For the web-based neuroinformatics framework, I prepared multiscale data sets at different zoom levels from the original data and extended the Google Maps API to provide atlas features such as scale bars, panel browsing, and transparent overlays for 3D rendering. I then adapted the OpenLayers API, a free mapping and layering API with functionality similar to the Google Maps API, and prepared multiscale data sets in vector graphics to improve page loading time by reducing file size. To better convey the full 3D morphology of the objects embedded in the data volumes, I developed a WebGL-based approach that complements the web-based framework with interactive viewing. For the registration work, I adapted and customized a stable 2D rigid deformation method to map our data sets into the Waxholm Space. For the analysis of neuronal distribution, I designed and implemented a scalable, effective quantitative analysis method using supervised learning, applying Principal Component Analysis (PCA) in a supervised manner and implementing the algorithm with MapReduce parallelization. I expect my frameworks to enable effective exploration and analysis of our KESM data sets.
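As a toy stand-in for the supervised cell-detection idea (the dissertation itself uses supervised PCA with MapReduce parallelization; the template-matching scheme, patch size, and data below are invented purely for illustration), one can learn a mean "cell" template from labeled image patches and score new patches against it:

```python
# Drastically simplified, hypothetical stand-in for supervised cell
# detection: average labeled cell patches into a template, then score
# candidate patches by their dot product with that template. Patches
# here are toy 4-pixel intensity vectors.

def mean_template(patches):
    """Element-wise mean of labeled example patches."""
    n = len(patches)
    return [sum(p[i] for p in patches) / n for i in range(len(patches[0]))]

def score(patch, template):
    """Dot-product similarity between a patch and the template."""
    return sum(a * b for a, b in zip(patch, template))

cell_patches = [[1.0, 0.9, 0.1, 0.0], [0.9, 1.0, 0.0, 0.1]]
template = mean_template(cell_patches)

cell_like = [1.0, 1.0, 0.0, 0.0]
background = [0.0, 0.1, 1.0, 0.9]
is_cell = score(cell_like, template) > score(background, template)
```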
In addition, I expect my approaches to be broadly applicable to the analysis of other high-throughput medical imaging data.

Item: The fluviageny, a method for analyzing temporal river fragmentation using phylogenetics (2015-05). Gordon, Andrew Lloyd; Howison, James; Arctur, David K.
Phylogenetic trees have historically been used to determine the evolutionary relatedness of organisms. In the past few decades, as increasingly powerful computational algorithms and toolsets for phylogenetic analysis have been developed, the use of these trees has expanded into other areas, including biodiversity informatics and geoinformatics. This report proposes using phylogenetic methods to create "fluviagenies": trees that represent the effects of river fragmentation over time caused by damming. Faculty at the Center for Research in Water Resources at the University of Texas developed tools and documentation for automating the creation of river segment codes ("fluvcodes") based on spatiotemporal data. Python was used to generate fluviageny trees from lists of these codes, and the resulting trees can be exported into the appropriate data formats for use with various phylogenetics programs. The Fishes of Texas database (fishesoftexas.org), a comprehensive geospatial database of Texas fish occurrences aggregated and normalized from 42 museum collections around the world, was used to create an example of how this tool might be applied to analyze and hypothesize changes in fish populations as a consequence of river fragmentation. Additionally, this paper theorizes about and analyzes past and potential future uses of phylogenetic trees in various other fields of informatics.

Item: Inference and Visualization of Periodic Sequences (2011-10-21). Sun, Ying.
This dissertation comprises four articles on the inference and visualization of periodic sequences. In the first article, a nonparametric method is proposed for estimating the period and values of a periodic sequence when the data are evenly spaced in time. The period is estimated by a "leave-out-one-cycle" version of cross-validation (CV) that complements the periodogram, a widely used tool for period estimation. The CV method is computationally simple and implicitly penalizes multiples of the smallest period, leading to a "virtually" consistent estimator. The second article is the multivariate extension, presenting a CV method for estimating the periods of multiple periodic sequences when data are observed at evenly spaced time points. The basic idea is to borrow information from other correlated sequences to improve estimation of the period of interest. We show that the asymptotic behavior of the bivariate CV is the same as that of the CV for one sequence; for finite samples, however, the better the periods of the other correlated sequences are estimated, the more substantial the improvement obtained. The third article proposes an informative exploratory tool for visualizing functional data, the functional boxplot, as well as its generalization, the enhanced functional boxplot. Based on the center-outwards ordering induced by band depth for functional data, the descriptive statistics of a functional boxplot are the envelope of the 50 percent central region, the median curve, and the maximum non-outlying envelope. In addition, outliers can be detected by the empirical rule of 1.5 times the 50 percent central region. The last article proposes a simulation-based method to adjust functional boxplots for correlation when visualizing functional and spatio-temporal data, and to detect outliers. We start by investigating the relationship between spatio-temporal dependence and the 1.5 times the 50 percent central region empirical outlier detection rule. Then, we propose to simulate observations without outliers based on a robust estimator of the covariance function of the data. We select the constant factor in the functional boxplot to control the probability of correctly detecting no outliers, and finally apply the selected factor to the functional boxplot of the original data.

Item: Information visualization: working with screens of experience (2012-05). Aler, Carolyn Jean; Lee, Gloria; Shields, David.
Information visualizations have become increasingly popular in the last decade. Viewing data visually has proved helpful in communicating or revealing information in many fields, ranging from science to journalism to art. Information is incredibly malleable; given the same data, a group of designers may make wildly different information visualizations. This malleability leads me to believe that there are certain, finite truths in data, but when designers convert data into information, they pass those truths through a screen of their experience. A reader, in turn, brings their own screen of experience through which they read an information visualization. These screens of experience create infinite ways to communicate and interpret information. This report reviews some concepts and methods that I have found helpful when creating information visualizations.

Item: Interactive musical visualization based on emotional and color theory (2009-05-15). Bowens, Karessa Natee.
Influenced by synesthesia, the creators of such "visual musics" as abstract art, color organs, abstract film, and, most recently, visualizers have attempted to illustrate correspondences between the senses. This thesis attempts to develop a framework for music visualization founded on emotional analogues between visual art and music. The framework implements audio signal spectrum analysis, mood modeling, and color theory to produce pertinent data for use in visualizations.
The research is manifest as a computer program that creates a simple visualizer. Built in Max/MSP/Jitter, a programming environment designed for musical and multimedia processing, it analyzes data and produces images in real time. The program employs spectrum analysis to extract musical data such as loudness, brightness, and note attacks from the audio signals of AIFF song files. These musical features are used to calculate the Energy and Stress of the song, which determine the general mood of the music; the mood falls into one of four general categories: Exuberance, Contentment, Depression, and Anxious/Frantic. This method of automatic mood classification achieved an eighty-five percent accuracy rate. Applying color expression theory then yields a color palette that reflects the musical mood. The color palette and the musical features are supplied to four different animation schemes to produce visuals. The visualizer generates shapes and forms in a three-dimensional environment, animates them in response to the real-time musical data, and allows user input to actively direct the creation of a variety of different visualizations. This personalization of the synesthetic effects invites viewers to actively consider their own unique associations and facilitates understanding of the phenomena of synesthesia and sensory fusion.

Item: Modeling high-genus surfaces (Texas A&M University, 2004-09-30). Srinivasan, Vinod.
The goal of this research is to develop new, interactive methods for creating very high genus 2-manifold meshes. The approaches investigated fall into two groups: interactive methods, where the user primarily controls the creation of the high-genus mesh, and automatic methods, where there is minimal user interaction and the program creates the high-genus mesh automatically. In the interactive category, two different methods have been developed. The first allows the creation of multi-segment, curved handles between two faces, which can belong to the same mesh or to geometrically distinct meshes. The second, referred to as "rind modeling," provides for easy creation of surfaces resembling peeled and punctured rinds. The automatic category also includes two methods: the first automates the process of creating generalized Sierpinski polyhedra, while the second allows the creation of Menger sponge-type meshes. Efficient, robust algorithms for these approaches, along with user-friendly tools, have been developed and implemented.

Item: Optimal monitoring and visualization of steady state power system operation (2009-06-02). Xu, Bei.
Power system operation requires accurate monitoring of electrical quantities and a reliable database describing the power system. As power system operation becomes more competitive, secure operation grows in importance and the role of state estimation becomes more critical. Recently, owing to developments in high-power electronics, new control and monitoring devices have become more common in power systems, and it is therefore necessary to model them and integrate them into existing state estimation applications. This dissertation is dedicated to exploiting these newly available control and monitoring devices, such as Flexible AC Transmission System (FACTS) devices and Phasor Measurement Units (PMUs), and to developing new algorithms that incorporate them into power system analysis applications. Another goal is to develop a 3D visualization tool that helps power system operators gain an in-depth picture of the system operating state and identify limit violations quickly and intuitively. First, an algorithm for state estimation of a power system with embedded FACTS devices is developed. This estimator can be used to estimate the system state quantities and Unified Power Flow Controller (UPFC) parameters; furthermore, it can be used to determine the controller setting required to maintain a desired power flow through a given line. In the second part of the dissertation, two methods for determining the optimal locations of PMUs are derived, one numerical and one topological. The numerical method is more effective when there are very few existing measurements, while the topology-based method is more applicable to a system with many measurements forming several observable islands. To guard against unexpected PMU failures, the numerical method is extended to account for the loss of a single PMU. In the last part of the dissertation, a 3D graphical user interface for power system analysis is developed. It supports two basic application functions, power flow analysis and state estimation, and uses different visualization techniques to represent different kinds of system information.

Item: Optimization of Single and Layered Surface Texturing (2010-07-14). Bair, Alethea S.
In visualization problems, surface shape is often a piece of data that must be shown effectively. One factor that strongly affects shape perception is texture. For example, patterns of texture on a surface can show surface orientation through foreshortening or compression of the texture marks, and surface depth through size variation under perspective projection. However, texture is generally under-used in the scientific visualization community. The benefits of using texture on single surfaces also apply to layered surfaces: layering multiple surfaces in a single viewpoint allows direct comparison of surface shape. The studies presented in this dissertation aim to find optimal methods for texturing both single and layered surfaces.
This line of research starts with open, many-parameter experiments using human subjects to find which factors are important for optimal texturing of layered surfaces. These experiments showed that texture shape parameters are very important, and that texture brightness is critical so that shading cues remain available. The optimal textures also appear to be task dependent: a feature-finding task needed relatively little texture information, but more shape-dependent tasks needed stronger texture cues.