Browsing by Subject "segmentation"
Now showing 1 - 3 of 3
Item: A comparison of automated land cover/use classification methods for a Texas bottomland hardwood system using LiDAR, SPOT-5, and ancillary data (2009-05-15). Vernon, Zachary Isaac

Bottomland hardwood forests are highly productive ecosystems which perform many important ecological services. Unfortunately, many bottomland hardwood forests have been degraded or lost. Accurate land cover mapping is crucial for management decisions affecting these disappearing systems. SPOT-5 imagery from 2005 was combined with Light Detection and Ranging (LiDAR) data from 2006 and several ancillary datasets to map a portion of the bottomland hardwood system found in the Sulphur River Basin of Northeast Texas. Pixel-based, rule-based, and object-based classification techniques were used to distinguish nine land cover types in the area. The rule-based classification (84.41% overall accuracy) outperformed the other methods because it more effectively incorporated the LiDAR and ancillary datasets when needed. This output was compared to previous classifications from 1974, 1984, 1991, and 1997 to determine abundance trends in the area's bottomland hardwood forests. The classifications from 1974-1991 were conducted using identical class definitions and input imagery (Landsat MSS, 60 m), and the direct comparison demonstrates an overall declining trend in bottomland hardwood abundance. The trend levels off in 1997, when medium-resolution imagery (Landsat TM, 30 m) was first utilized, and the 2005 classification, which used SPOT-5 10 m imagery, shows an increase in bottomland hardwood from 1997 to 2005. However, when the classifications are resampled to the same resolution (60 m), the percent area of bottomland hardwood consistently decreases from 1974-2005. Additional investigation of object-oriented classification proved useful.
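The overall accuracy figures quoted above are conventionally computed from a classification's confusion matrix: the fraction of reference samples that the classifier labeled correctly. A minimal illustrative sketch (the class names and cell counts below are hypothetical placeholders, not the thesis data):

```python
def overall_accuracy(confusion):
    """Overall accuracy: sum of the diagonal (correctly classified
    samples) divided by the sum of all cells (total samples)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows: reference class, columns: predicted class (hypothetical counts).
matrix = [
    [45,  3,  2],   # bottomland hardwood
    [ 4, 38,  1],   # upland forest
    [ 1,  2, 40],   # water
]
print(f"{overall_accuracy(matrix):.2%}")  # → 90.44%
```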
A major shortcoming of object-based classification is the limited justification given for the selection of segmentation parameters. Often, segmentation parameters are arbitrarily defined using general guidelines or are determined by trying a large number of parameter combinations. This research justifies the selection of segmentation parameters through a process that uses landscape metrics and statistical techniques to determine ideal segmentation parameters. The classification resulting from these parameters outperforms the classification resulting from arbitrary parameters by approximately three to six percent in overall accuracy, demonstrating that landscape metrics can be successfully linked to segmentation parameters to create image objects that more closely resemble real-world objects and yield a more accurate final classification.

Item: Segmentation, registration, and selective watermarking of retinal images (Texas A&M University, 2006-08-16). Wu, Di

In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images to perform my study because of their rich representation of different objects (blood vessels, microaneurysms, hemorrhages, exudates, etc.) that are pathologically important and have close resemblance in shape and color. To attack this complex subject, I developed a divide-and-conquer strategy to address related issues step by step and to optimize the parameters of the different algorithm steps. Most, if not all, objects in our discussion are related. The algorithms for detection, registration, and protection of different objects need to consider how to differentiate the foreground from the background and must be able to correctly characterize the features of the image objects and their geometric properties.
To address these problems, I characterized the shapes of blood vessels in retinal images and proposed algorithms to extract their features. A tracing algorithm was developed for the detection of blood vessels along the vascular network. Due to noise interference and varying image quality, robust segmentation techniques were used for accurate characterization and verification of the objects' shapes. Based on the segmentation results, a registration algorithm was developed, which uses the bifurcation and cross-over points of blood vessels to establish the correspondence between images and derive the transformation that aligns them. A Region-of-Interest (ROI) based watermarking scheme was proposed for image authenticity. It uses linear segments extracted from the image as reference locations for embedding and detecting the watermark. Global and locally-randomized synchronization schemes were proposed for bit-sequence synchronization of the watermark. The scheme is robust against common image processing and geometric distortions (rotation and scaling), and it can detect alterations such as moving or removing image content.

Item: Segmenting Hand-Drawn Strokes (2012-07-16). Wolin, Aaron David

Pen-based interfaces utilize sketch recognition so users can create and interact with complex, graphical systems via drawn input. In order for people to draw freely within these systems, users' drawing styles should not be constrained. The low-level techniques involved in sketch recognition must then be perfected, because poor low-level accuracy can impair a user's interaction experience. Corner finding, also known as stroke segmentation, is one of the first steps in free-form sketch recognition. Corner finding breaks a drawn stroke into a set of primitive symbols such as lines, arcs, and circles, so that the original stroke data can be transformed into a more machine-friendly format.
By working with sketched primitives, drawn objects can then be described in a visual language, noting which primitive shapes have been drawn and the shapes' geometric relationships to each other. We present three new corner finding techniques that improve segmentation accuracy. Our first technique, MergeCF, is a multi-primitive segmenter that splits drawn strokes into primitive lines and arcs. MergeCF eliminates extraneous primitives by merging them with their neighboring segments. Our second technique, ShortStraw, works with polyline-only data. Polyline segments are important since many domains use simple polyline symbols formed from squares, triangles, and arrows. Our ShortStraw algorithm is simple to implement, yet more powerful than previous polyline work in the corner finding literature. Lastly, we demonstrate how a combination technique can be used to pull the best corner finding results from multiple segmentation algorithms. This combination segmenter utilizes the best corners found by other segmentation techniques, eliminating many false negatives (missed primitive segmentations) from the final, low-level results. We present the implementation and results of our new segmentation techniques, showing how they perform better than related work in the corner finding field. We also discuss the limitations of each technique, how we have sought to overcome those limitations, and where we believe the sketch recognition subfield of corner finding is headed.
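The "straw" test at the core of ShortStraw can be sketched in a few lines. This is a simplified, illustrative version of the bottom-up pass only: the published algorithm also resamples the stroke to evenly spaced points and runs a top-down repair stage, which are omitted here. The window of 3 and the 0.95-of-median threshold follow the commonly cited defaults, and the L-shaped stroke in the usage example is a hypothetical, already-resampled input.

```python
import math

def straws(points, w=3):
    """straw[i] = distance between the points w steps before and after i;
    on an evenly resampled stroke, short straws signal a corner."""
    return [math.dist(points[i - w], points[i + w])
            for i in range(w, len(points) - w)]

def shortstraw_corners(points, w=3, ratio=0.95):
    """Bottom-up pass only: report point indices whose straw is the
    local minimum of a run of straws below ratio * median straw."""
    s = straws(points, w)
    threshold = ratio * sorted(s)[len(s) // 2]  # median-based cutoff
    corners = [0]                               # stroke endpoints always count
    i = 0
    while i < len(s):
        if s[i] < threshold:
            # walk to the end of this below-threshold run
            j = i
            while j + 1 < len(s) and s[j + 1] < threshold:
                j += 1
            # the run's minimum straw marks the corner (offset by w
            # to convert a straw index back to a point index)
            corners.append(w + min(range(i, j + 1), key=lambda k: s[k]))
            i = j + 1
        else:
            i += 1
    corners.append(len(points) - 1)
    return corners

# Hypothetical L-shaped polyline sampled at unit spacing: (0,0)→(10,0)→(10,10).
pts = [(x, 0) for x in range(11)] + [(10, y) for y in range(1, 11)]
print(shortstraw_corners(pts))  # → [0, 10, 20]
```

On a straight stroke no straw falls below the threshold, so only the endpoints are returned; at a sharp turn the chord between the points w steps on either side shortens, producing the local straw minimum the detector keys on.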