Browsing by Subject "Image Processing"
Now showing 1 - 6 of 6
Item: 3D Multi-Field Multi-Scale Features From Range Data In Spacecraft Proximity Operations (2012-07-16) Flewelling, Brien Roy

A fundamental problem in spacecraft proximity operations is the determination of the 6-degree-of-freedom relative navigation solution between the observer reference frame and a reference frame tied to a proximal body. In the most unconstrained case, the proximal body may be uncontrolled, and the observer spacecraft has no a priori information about the body. A spacecraft in this scenario must simultaneously map the generally poorly known body being observed and safely navigate relative to it. Simultaneous localization and mapping (SLAM) is a difficult problem that has been the focus of research in recent years. The most promising approaches extract local features in 2D or 3D measurements and track them in subsequent observations by matching a descriptor. These methods exist for both active sensors such as Light Detection and Ranging (LIDAR) or laser RADAR (LADAR) and passive sensors such as CCD and CMOS camera systems. This dissertation presents a method for fusing time-of-flight (ToF) range data inherent to scanning LIDAR systems with the passive light-field measurements of optical systems, extracting features which exploit information from each sensor, and solving the unique SLAM problem inherent to spacecraft proximity operations. Scale-space analysis is extended to unstructured 3D point clouds by means of an approximation to the Laplace-Beltrami operator, which computes the scale space on a manifold embedded in 3D object space using Gaussian convolutions based on a geodesic distance weighting. The construction of the scale space is shown to be equivalent both to the application of the diffusion equation to the surface data and to the surface evolution process that results from mean curvature flow. Geometric features are localized in regions of high spatial curvature or large diffusion displacements at multiple scales.
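The geodesic-weighted Gaussian smoothing described above can be sketched as follows. This is a minimal illustration only, not the dissertation's implementation: geodesic distance is approximated here by shortest paths on a k-nearest-neighbor graph, and the function, its name, and its parameters are assumptions for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_gaussian_scale_space(points, values, sigmas, k=8):
    """Smooth a per-point signal at several scales using Gaussian
    weights on approximate geodesic (graph) distances.

    Geodesic distance is approximated by shortest paths on a
    k-nearest-neighbor graph, so smoothing follows the surface
    rather than cutting straight through 3D space.
    """
    n = len(points)
    tree = cKDTree(points)
    # First returned neighbor of each query point is the point itself.
    dists, idx = tree.query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    graph = csr_matrix((dists[:, 1:].ravel(), (rows, cols)), shape=(n, n))
    geo = shortest_path(graph, directed=False)  # dense n x n graph distances

    levels = []
    for sigma in sigmas:
        w = np.exp(-geo**2 / (2.0 * sigma**2))  # Gaussian kernel in geodesic distance
        levels.append((w @ values) / w.sum(axis=1))
    return levels  # one smoothed signal per scale
```

Differences between successive levels approximate the diffusion displacements in which the abstract localizes features.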
The extracted interest points are associated with a local multi-field descriptor constructed from measured data in object space. Defining features in object space instead of image space is shown to be an important step, making the simultaneous consideration of co-registered texture and the associated geometry possible. These descriptors, known as Multi-Field Diffusion Flow Signatures, encode the shape and multi-texture information of local neighborhoods in textured range data. Multi-Field Diffusion Flow Signatures display utility in difficult space scenarios including high-contrast and saturating lighting conditions, bland and repeating textures, and non-Lambertian surfaces. The effectiveness and utility of Multi-Field Multi-Scale (MFMS) Features described by Multi-Field Diffusion Flow Signatures are evaluated using real data from proximity operation experiments performed at the Land Air and Space Robotics (LASR) Laboratory at Texas A&M University.

Item: A photogrammetric on-orbit inspection for orbiter thermal protection system (Texas A&M University, 2006-04-12) Gesting, Peter Paul

Following the Columbia Space Shuttle accident of February 2003, the Columbia Accident Investigation Board determined the need for an on-orbit inspection system for the Thermal Protection System that accurately determines damage depth to 0.25". NASA contracted the Spacecraft Technology Center in College Station, Texas, for a proof-of-concept photogrammetric system. This system involves a high-quality digital camera placed on the International Space Station, capable of taking high-fidelity images of the orbiter as it rotates through the Rendezvous Pitch Maneuver. Because of the pitch rotation, the images are tilted at different angles. The tilt causes the damage to exhibit parallax between multiple images.
The tilted images are therefore registered to the near-vertical images using visually striking features on the undamaged surface of the Thermal Protection System that appear in multiple images taken at different tilt angles. After registration the images are relatively oriented, and features in one image are ensured to lie on the epipolar line in the other images. Features that do not lie on the undamaged surface, however, are shifted in the tilted images. These pixels are matched to the near-vertical image using a sliding-window area-matching approach, with windows matched by a least-squares error method. The change in location of a pixel in a tilted image from its expected location on the undamaged surface is called the pixel disparity. This disparity is linearly scaled using the tilt angle and the pixel sampling to determine the depth of the damage at that pixel location. The algorithm is tested on a set of damaged tiles at the Johnson Space Center in Houston, and the photogrammetric damage depth is then compared to a set of truth data provided by NASA. The photogrammetric method shows promise, with the 0.25" error limit being exceeded in only a few pixel locations. Once the camera properties are fully known from calibration, this systematic error should be reduced.

Item: Algorithms for Fluorescence Lifetime Microscopy and Optical Coherence Tomography Data Analysis: Applications for Diagnosis of Atherosclerosis and Oral Cancer (2014-05-16) Pande, Paritosh

With significant progress made in the design and instrumentation of optical imaging systems, it is now possible to perform high-resolution tissue imaging in near real time. The prohibitively large amount of data obtained from such high-speed imaging systems precludes the possibility of manual data analysis by an expert.
The paucity of algorithms for automated data analysis has been a major roadblock in both evaluating and harnessing the full potential of optical imaging modalities for diagnostic applications. This consideration forms the central theme of the research presented in this dissertation. Specifically, we investigate the potential of automated analysis of data acquired from a multimodal imaging system that combines fluorescence lifetime imaging (FLIM) with optical coherence tomography (OCT) for the diagnosis of atherosclerosis and oral cancer. FLIM is a fluorescence imaging technique that is capable of providing information about autofluorescent tissue biomolecules. OCT, on the other hand, is a structural imaging modality that exploits the intrinsic reflectivity of tissue samples to provide high-resolution 3D tomographic images. Since FLIM and OCT provide complementary information about tissue biochemistry and structure, respectively, we hypothesize that the combined information from the multimodal system would increase the sensitivity and specificity for the diagnosis of atherosclerosis and oral cancer. The research presented in this dissertation can be divided into two main parts. The first part concerns the development and application of algorithms for providing a quantitative description of FLIM and OCT images. The quantitative FLIM and OCT features obtained in the first part of the research are subsequently used to perform automated tissue diagnosis based on statistical classification models. The results of the research presented in this dissertation show the feasibility of using automated algorithms for FLIM and OCT data analysis to perform tissue diagnosis.

Item: Analytical and Experimental Studies of Drag Embedment Anchors and Suction Caissons (2011-08-08) Beemer, Ryan

There is a strong need for experimental and analytical modeling in the field of deep water offshore anchoring technologies.
Suction caissons and drag embedment anchors (DEAs) are common anchors used for mooring structures in deep water. The installation process of drag embedment anchors has been highly empirical, employing a trial-and-error methodology. In the past decade, analytical methods have been derived for modeling DEA installation trajectories; however, obtaining calibration data for these models has not been economical. A small-scale experimental apparatus, known as the Laponite Tank, was developed for this thesis. The Laponite Tank provides a quick and economical means of measuring DEA trajectories visually, and the experimental data can then be used to calibrate the models. The installation process of suction caissons has benefited from a more rational approach. Nevertheless, these methods require refinement, and removal methodology requires development. In this thesis, an algorithm for modeling suction caisson installation in clay is presented. An analytical method and modeling algorithm for the removal process of suction caissons in clay were also developed. The installation and removal models were calibrated to field data. These analytical and experimental studies can provide a better understanding of the installation of drag embedment anchors and of the installation and removal of suction caissons.

Item: Automated counting of cell bodies using Nissl stained cross-sectional images (2009-05-15) D'Souza, Aswin Cletus

Cell count is an important metric in neurological research. The loss in numbers of certain cells such as neurons has been found to accompany not only the deterioration of important brain functions but disorders such as clinical depression as well. Since the manual counting of cell numbers is a near-impossible task considering the sizes and numbers involved, an automated approach is the obvious alternative for arriving at the cell count.
In this thesis, a software application is described that automatically segments, counts, and helps visualize the various cell bodies present in a sample mouse brain by analyzing the images produced by the Knife-Edge Scanning Microscope (KESM) at the Brain Networks Laboratory. The process is described in five stages: image acquisition, pre-processing, processing, analysis and refinement, and finally visualization. Nissl staining is applied to the mouse brain sample to highlight the cell bodies of interest, namely neurons, granule cells, and interneurons. The stained brain sample is embedded in solid plastic and imaged by the KESM, one section at a time. The volume digitized by this process is the data used for segmentation. While most sections of the mouse brain tend to contain sparsely populated neurons and red blood cells, certain sections near the cerebellum exhibit a very high density and population of smaller granule cells, which are hard to segment using simpler image segmentation techniques. The sparsely populated regions are handled using a combination of connected component labeling and template matching, while the watershed algorithm is applied to the regions of very high density. Finally, the marching cubes algorithm is used to convert the volumetric data to a 3D polygonal representation. Barring a few initializations, the process proceeds with minimal manual intervention. A graphical user interface allows the user to view the processed data in 2D or 3D. The interface offers the freedom of rotating and zooming in/out of the 3D model, as well as viewing only the cells the user is interested in analyzing.
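The two-track segmentation strategy described above (connected-component labeling for sparse regions, watershed for dense ones) can be illustrated roughly as follows. This sketch uses SciPy's generic routines and illustrative parameters; the KESM-specific pipeline and the template-matching step of the thesis are not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi

def count_cells(image, threshold, dense=False):
    """Count bright blobs in a 2-D grayscale section.

    Sparse sections: plain connected-component labeling.
    Dense sections: marker-based watershed on the distance
    transform, which splits touching blobs that labeling
    alone would merge into a single component.
    """
    mask = image > threshold
    if not dense:
        _, n = ndi.label(mask)
        return n
    # Peaks of the distance transform seed one marker per cell body.
    dist = ndi.distance_transform_edt(mask)
    peaks = (dist == ndi.maximum_filter(dist, size=7)) & mask
    markers, _ = ndi.label(peaks)
    markers[~mask] = -1                            # background marker
    cost = (dist.max() - dist).astype(np.uint16)   # flood cell centers first
    labels = ndi.watershed_ift(cost, markers)
    return len(np.unique(labels[labels > 0]))
```

On a section with two overlapping cells, plain labeling reports one component while the watershed branch separates the merged pair at the waist between the two distance-transform peaks.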
The segmentation results achieved by the automated process are compared with those obtained by manual segmentation performed by an independent expert.

Item: Determination of the Presence Conditions of Pavement Markings using Image Processing (2012-10-19) Ge, Hancheng

Pavement markings, as a form of traffic control device, play a crucial role in safely guiding drivers. Restriping pavement markings is an important task in the maintenance of traffic control devices. Every year, state agencies spend a great deal of money maintaining pavement markings as the retroreflectivity or durability values of the markings fall below a minimum level. Currently, the most widely adopted method for determining the presence conditions of pavement markings is expert observation, a subjective technique that may not provide consistent and convincing results for agencies. Hence, a fast and accurate way to determine the presence conditions of pavement markings can lead to significant cost savings while ensuring driving safety. In this study, a systematic approach that can automatically determine the presence conditions of pavement markings using digital image processing techniques is presented. These techniques are used to correct geometric deformity, detect the colors of pavement markings, segment images, enhance images, detect the edge lines of ideal pavement markings, and recognize the features of pavement markings appearing in the photographs. To better implement the aforementioned techniques, a software package with a graphical user interface (GUI) has been developed as a platform to simultaneously evaluate the presence conditions of single or multiple pavement markings. The developed software package supports operations such as opening files, calibrating the camera, clipping, rotating, displaying histograms, and detecting the edge lines of ideal pavement markings. The system was tested and evaluated with the photograph datasets provided by the NTPEP Mississippi test deck.
The empirical results, when compared with the manual method and expert observation, show that the system developed in this study is accurate and reliable. Additionally, feedback from ten volunteers indicates that the interactivity of the developed software package is satisfactory. It is also concluded that the developed system, as an important reference, can help agencies make better decisions in the maintenance of pavement markings through more accurate and rapid evaluation of the presence conditions of pavement markings.