Browsing by Subject "Data compression"
Now showing 1 - 5 of 5
Item
An efficient test vector compression scheme using base vector extraction and Huffman coding (2007-05)
Chan, Sio Pang, 1981-; Touba, Nur A.
This thesis presents a methodology for achieving high test data compression efficiency by extracting base vectors from the pool of uncompressed test vectors, combined with Huffman coding. The base vectors are used to generate coefficient vectors, which are associated with each test vector and are used to span the whole test space. Together, the base vectors and coefficient vectors can represent the entire test space. However, a high compression ratio cannot be achieved by simply replacing the initial test vectors with base vectors and coefficient vectors. To achieve a high compression ratio, the coefficient vectors, which consist of highly repetitive sequences, are further compressed by Huffman coding. Moreover, since each base vector contains only a single "1", it is much more efficient to store the bit position of that "1" than the entire base vector. Experimental data is also presented in this thesis to support the effectiveness of the proposed compression scheme.

Item
Fast and efficient progressive image coding and transmission using wavelet decomposition (Texas Tech University, 1999-05)
Sharma, Mohit
With the recent boom in multimedia and the Internet, image compression and techniques for progressive image transmission have become quite important. This thesis describes the concept and design of a codec for progressive image transmission highlighted by a new technique, SPIHT (Set Partitioning in Hierarchical Trees). This technique works on the principles of partial ordering by magnitude utilizing a set-partitioning sorting algorithm, ordered bit-plane transmission, and exploitation of self-similarity across different scales of an image's wavelet transform. These principles of SPIHT are no different from those described in the original EZW by J. M. Shapiro.
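The partial-ordering idea described above can be illustrated with a small sketch: coefficients are compared against a halving sequence of thresholds, and each pass records which coefficients are significant at that threshold, so large magnitudes appear in the earliest (coarsest) passes. This is a simplified illustration of ordered bit-plane transmission only, not the SPIHT set-partitioning algorithm itself; all names are hypothetical.

```python
def bit_plane_stream(coeffs):
    """Toy model of ordered bit-plane transmission: for each halving
    threshold, list the indices of coefficients significant at that
    threshold. Not full SPIHT (no set partitioning, no trees)."""
    mags = [abs(c) for c in coeffs]
    # Start at the largest power-of-two threshold not exceeding max magnitude.
    t = 1
    while t * 2 <= max(mags):
        t *= 2
    planes = []
    while t >= 1:
        significant = sorted(i for i, m in enumerate(mags) if m >= t)
        planes.append((t, significant))
        t //= 2
    return planes

# Large-magnitude coefficients surface in earlier planes, so truncating
# the stream still yields the best available approximation.
print(bit_plane_stream([9, -3, 1, 14]))
# [(8, [0, 3]), (4, [0, 3]), (2, [0, 1, 3]), (1, [0, 1, 2, 3])]
```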
But the implementation approach of SPIHT is significantly different. Here the ordering information for image data is not explicitly transmitted. Instead, the fact that the execution path of any algorithm is defined by the results of the comparisons at its branching points is exploited to obtain the ordering information at the decoder. The decoder and the encoder share not only the same sorting algorithm, but also the same execution path. Thus, the decoder can recover the ordering information from its execution path, which is identical to that of the encoder. The basic differences between EZW and SPIHT are highlighted using an example of an 8 x 8 image section.

Item
Morphological filters for image enhancement (Texas Tech University, 1996-12)
Kher, Alok
Digital images are subjected to filtering processes during the operations of noise reduction and lossy compression. Fine details are often lost or severely altered in these filtering processes. Connectivity-preserving morphological filters have been proposed in the past to remove noise while preserving thin but connected regions. However, these filters preserved regional connectivity only in restricted orientations. The present work has developed morphological filters that may be used for fast and efficient removal of noise while completely preserving connectivity information in gray-scale images. These filters are shown to satisfy the requirements of the well-behaved abstract operations of algebraic opening and closing. When applied to the problem of speckle noise reduction in synthetic aperture radar images, the new filters performed significantly better than conventional linear and nonlinear filters. The present work has also developed an image representation approach that may be used for developing high-quality lossy image compression techniques based on morphological multiresolution pyramid decomposition of images.
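Algebraic opening and closing, mentioned in the abstract above, compose erosion and dilation; for a 1-D gray-scale signal with a flat structuring element these reduce to sliding minimum and maximum filters. A minimal sketch of the standard operators (hypothetical helper names, not the connectivity-preserving filters of the thesis):

```python
def erode(signal, width):
    """Flat-structuring-element erosion: sliding minimum."""
    r, n = width // 2, len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(signal, width):
    """Flat-structuring-element dilation: sliding maximum."""
    r, n = width // 2, len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def opening(signal, width):
    # Erosion followed by dilation: removes bright spikes narrower
    # than the structuring element.
    return dilate(erode(signal, width), width)

def closing(signal, width):
    # Dilation followed by erosion: fills narrow dark spikes.
    return erode(dilate(signal, width), width)

noisy = [5, 5, 9, 5, 5, 1, 5, 5]
print(opening(noisy, 3))  # [5, 5, 5, 5, 5, 1, 5, 5] -- bright spike removed
print(closing(noisy, 3))  # [5, 5, 9, 5, 5, 5, 5, 5] -- dark spike filled
```

Note that opening suppresses only the bright spike and closing only the dark one; the connectivity-preserving filters of the thesis refine this behavior for thin connected regions.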
A pyramid decomposition technique represents an image as a pyramid of differential images that store incremental information at various resolutions. Lossy compression techniques based on pyramid decomposition often discard the first differential image component, which usually consists of a substantial amount of high-frequency noise. Complete omission of this image component can result in the loss of fine image details. The present work has developed an approach to approximately reconstruct the first differential image from its two components consisting of directional information. The simplification process is shown to be equivalent to connectivity-preserving filtering. For various standard images, the entropies of the differential images were shown to decrease by 35% to 40% for approximately 10% mean square error between the original and the reconstructed differential images.

Item
Using partial string matching for coding of hierarchical grammars (Texas Tech University, 2002-12)
Sethuraman, Radhakrishnan
Text data can be compressed by extracting repeated strings from the input. The input can then be expressed as a context-free grammar, where each repeated string is denoted by a rule of the grammar. Sequitur [3] is one such method: it derives a context-free grammar that generates the given input precisely as its language. Sequitur is a two-pass method; in the first pass, it determines a symbolic representation of the grammar, and in the second pass, it encodes this representation with an arithmetic coder to realize a very efficient compressed representation of the input. This thesis presents a complete and robust implementation of Sequitur. First, it presents the design and then gives a detailed description of the algorithm. Next, it presents a context-based coding of the grammar; the purpose of this coding method is to code the next symbol in the context of the input character, which is the last character generated by the preceding rule.
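The grammar-extraction idea can be illustrated with a much-simplified greedy digram substitution, in the spirit of Sequitur's digram-uniqueness rule but not the incremental algorithm of the thesis; all names here are hypothetical.

```python
from collections import Counter

def build_grammar(text):
    """Greedy digram substitution: repeatedly replace the most frequent
    adjacent symbol pair with a fresh rule. A toy stand-in for Sequitur's
    incremental digram-uniqueness maintenance."""
    seq, rules, next_rule = list(text), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break
        pair = pairs.most_common(1)[0][0]
        name = f"R{next_rule}"
        next_rule += 1
        rules[name] = pair
        # Left-to-right non-overlapping replacement of the chosen digram.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(symbol, rules):
    """Recursively expand a symbol back to the original text."""
    if symbol not in rules:
        return symbol
    return "".join(expand(s, rules) for s in rules[symbol])

seq, rules = build_grammar("abcabcabc")
print(seq, rules)
# The grammar generates exactly the original input as its language:
print("".join(expand(s, rules) for s in seq))  # abcabcabc
```

Sequitur itself builds the grammar incrementally in a single left-to-right pass, but the hierarchical rules it produces have the same flavor as this batch sketch.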
This coding method is known as the partial matching method, and hence this implementation of Sequitur is termed sequitur-PM here. The thesis presents a number of compression results; the first set of results shows that the implementation attains as good a compression ratio as that presented in [1] for all benchmark test inputs. The next set of results shows the further compression obtained by using the PM method. These results were on par with the published results [3].

Item
Video compression in signal-dependent noise (Texas Tech University, 1996-12)
Upadhya, Ashwin Kumar
This work investigates the performance of video compression techniques in the presence of signal-dependent noise. The signal-dependent noise sources most commonly encountered are film-grain noise and speckle. Film-grain noise degradation occurs when a photographic film is scanned for the purpose of digitization [6]. All types of coherent imaging, such as synthetic aperture radar (SAR) imagery, laser-illuminated imagery, astronomical imagery, and ultrasonic medical imagery, are affected by speckle. Noise in the video affects not only the quality of the video but also the compression scheme for the video. It is of utmost importance to improve both the quality of the video and the achievable compression, for the sake of archiving, in applications such as medical imagery. This work aims to investigate techniques to improve the quality and achievable compression of moving pictures (video) with such applications in mind. There is no real consensus yet on the "best" quality measure for determining the quality of the output video, so this work uses the standard mean square error, log mean square error, signal-to-noise ratio, and perceptual mean square error (which is modeled on the human visual system).
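The standard pixel-wise quality measures named in the last abstract (mean square error and signal-to-noise ratio) can be computed as in the following sketch; the perceptual measure requires a human-visual-system model and is omitted here. All names are illustrative.

```python
import math

def mse(original, degraded):
    """Mean square error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)

def snr_db(original, degraded):
    """Signal-to-noise ratio in decibels: signal power over error power."""
    signal_power = sum(a * a for a in original) / len(original)
    return 10 * math.log10(signal_power / mse(original, degraded))

orig = [100, 120, 130, 110]   # a tiny "frame" of pixel values
noisy = [102, 118, 133, 109]  # the same frame after degradation
print(round(mse(orig, noisy), 2))     # 4.5
print(round(snr_db(orig, noisy), 1))  # 34.7
```

In practice these are computed per frame over full images (and log MSE is simply the logarithm of the MSE); higher SNR and lower MSE indicate output closer to the original.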