Browsing by Subject "Image compression"
Now showing 1 - 20 of 24
Item: A study of measurements of blocking defects in highly-compressed images (Texas Tech University, 1997-12). Zhong, Jianqiang Norman.
Not available.

Item: Adaptive wavelet filter design for digital signal processing systems (Texas Tech University, 2000-12). Kustov, Vadim Michailovich.
The discrete wavelet transform has been used in many image/signal processing applications in recent years. However, the design of optimized and adaptive wavelet filter banks is still a significant research topic, specifically in image/signal compression. A number of wavelet-based advanced lossy compression algorithms provide high-fidelity reconstruction of input images at computationally intensive costs. The present work investigates the potential and the limitations of optimized adaptive design of two-channel perfect reconstruction filters when the signal in a channel is subjected to coarse quantization during the encoding process of such advanced compression algorithms. A real-time optimal two-channel perfect reconstruction filter bank design algorithm has been developed and implemented in a digital signal processor. The algorithm has been used in a newly developed execution-time reduction method to reduce the computational costs and data storage requirements of image compression algorithms. A reduction of execution time by two to three times has been achieved without adding appreciable distortion to the reconstructed image.

Item: Adaptive wavelet filter design for optimized image source encoding (Texas Tech University, 2002-12). Kumar, Roopesh.
Despite intensive research being conducted on the topic of adaptive filter design in general, adaptive filter design in the discrete wavelet transform (DWT) domain with specific constraints is still an active research area. The present work investigates the advantages and limitations of the design of a 2-channel perfect-reconstruction wavelet filter which is adapted and optimized under minimum energy constraints in a specific band.
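The two-channel perfect-reconstruction idea underlying the entries above can be illustrated with the simplest possible filter pair, the orthonormal Haar filters. This is only an illustrative stand-in: the theses design optimized, adaptive filters, not the fixed pair used here.

```python
from math import sqrt

# Minimal 2-channel perfect-reconstruction (PR) filter bank sketch using
# the Haar pair. The thesis filters are optimized/adaptive; Haar is only
# the simplest member of the PR family, used here for illustration.

def analysis(x):
    """Split x into downsampled lowpass (scaled sums) and highpass (scaled differences) subbands."""
    lo = [(x[2 * k] + x[2 * k + 1]) / sqrt(2) for k in range(len(x) // 2)]
    hi = [(x[2 * k] - x[2 * k + 1]) / sqrt(2) for k in range(len(x) // 2)]
    return lo, hi

def synthesis(lo, hi):
    """Invert the analysis step; for a PR bank this recovers x exactly."""
    x = []
    for l, h in zip(lo, hi):
        x.append((l + h) / sqrt(2))
        x.append((l - h) / sqrt(2))
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
lo, hi = analysis(x)
assert all(abs(a - b) < 1e-12 for a, b in zip(x, synthesis(lo, hi)))
```

In a compression pipeline the subbands `lo` and `hi` would be coarsely quantized between analysis and synthesis; perfect reconstruction then no longer holds exactly, which is precisely the regime the adaptive designs above target.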
Such a filter can be used with the quantizer and entropy encoder of a wavelet-based image encoder to give optimum performance. An optimal 2-channel conjugate quadrature filter (CQF) bank has been designed and optimized using Sequential Quadratic Programming methods. The filter bank problem is solved using recently developed optimization techniques for general nonlinear, non-convex functions. The results indicate improved performance for this method compared to the earlier-used Interior-Point optimization method.

Item: An adaptive vector quantization technique with a fuzzy distortion measure for efficient image coding (Texas Tech University, 1996-08). Pemmaraju, Suryalakshmi V.
Digital image compression techniques are currently experiencing significant growth due to diverse applications demanding efficient storage and transmission of increasing image data contents. These compression techniques involve representation of an image with a reduced number of bits per pixel by exploiting the redundancy present within an image. In lossy compression, information theory predicts that the performance of vector quantization (VQ) is superior to that obtained using scalar quantization (SQ) in optimizing the rate-distortion function. In practice, however, the existing VQ algorithms suffer from a number of serious problems, e.g., a long search process, codebook initialization, and getting trapped in local minima. This research develops an adaptive vector quantization technique for generating an optimal codebook by employing a neuro-fuzzy clustering approach to ensure minimum distortion. In addition, a multiresolution decomposition of an image is used as a preprocessing stage for transforming the image into a form that is more suitable for quantization, coding, and progressive transmission.
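The codebook-based coding that vector quantization relies on can be sketched with a plain k-means clustering of image blocks (essentially the generalized Lloyd iteration). This is only a stand-in: the thesis uses a neuro-fuzzy clustering approach, not the naive k-means and initialization shown here.

```python
# Vector-quantization sketch: build a codebook by k-means clustering of
# training blocks, then code each block as the index of its nearest
# codeword. Plain k-means is a stand-in for the thesis's neuro-fuzzy
# clustering; the initialization here is deliberately naive.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(codebook, v):
    return min(range(len(codebook)), key=lambda i: dist2(codebook[i], v))

def kmeans_codebook(vectors, k, iters=10):
    codebook = [list(v) for v in vectors[:k]]  # naive initialization
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[nearest(codebook, v)].append(v)
        # move each codeword to the centroid of its cluster
        for i, c in enumerate(clusters):
            if c:
                codebook[i] = [sum(col) / len(c) for col in zip(*c)]
    return codebook

blocks = [[0, 0], [1, 1], [0, 1], [9, 9], [8, 9], [9, 8]]
cb = kmeans_codebook(blocks, k=2)
indices = [nearest(cb, b) for b in blocks]  # the compressed representation
assert indices[0] == indices[1] == indices[2]
assert indices[3] == indices[4] == indices[5]
assert indices[0] != indices[3]
```

The compressed bitstream stores only the codeword indices (and the codebook once), which is where the rate reduction comes from; the problems the abstract lists (search cost, initialization, local minima) are all visible even in this tiny sketch.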
The multiresolution wavelet decomposition of an image is performed using Daubechies coefficients prior to vector quantization, and a multiresolution codebook scheme is used for quantizing the sub-images at different resolutions. This integrated approach of adaptive vector quantization with wavelet-based pyramid image decomposition significantly facilitates the compression and coding processes, thereby allowing higher compression ratios with acceptable visual quality. Experimental results of this new approach show significant improvement in performance as compared to variable block size vector quantization (VBQ). The superior performance of this integrated algorithm has been validated by applying it to several classes of images and comparing the performance in terms of mean-squared error (MSE), peak signal-to-noise ratio (PSNR), bit rates, and visual fidelity.

Item: Codebook optimization in vector quantization (Texas Tech University, 1999-12). Zhang, Xiaoxi.
Digital image processing techniques were introduced early in the twentieth century. One of the first applications was in improving digitized newspaper pictures sent by submarine cable between London and New York in the 1920s, which reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. In 1964, the Jet Propulsion Laboratory began using computers to improve image quality [5]. From 1964 until the present, the field of image processing has grown vigorously. It has become a prime area of research not only in electrical engineering but also in many other disciplines such as computer science, health science, and geography. However, representing a digitized image may require an enormous amount of data. Some images, like medical images, have higher resolution and therefore require even larger amounts of memory. Due to the vast amount of data associated with images and video, compression is a key technology for reducing the amount of data required to represent a digital image.
A digital image can be compressed because it contains data redundancies; when these redundancies are reduced or eliminated, the data is compressed. There are many compression methods and, normally, they can be classified into two main categories: lossless and lossy compression. This thesis focuses on vector quantization, which is a lossy compression method. Based on Shannon's theory, coding systems can perform better if they operate on vectors or groups of symbols rather than on individual symbols or samples [9]. The objective of this research was to compress images using LBG-VQ [21], the algorithm introduced by Linde, Buzo, and Gray, both in the spatial domain and in the wavelet transform domain [6], and to compare it with other vector quantization algorithms developed recently.

Item: Color image compression using wavelet transform (Texas Tech University, 1997-08). Meadows, Steven Carl.
C language coding of image compression algorithms can be a difficult and tedious task. Image compression methods are usually composed of many stages of cascaded algorithms, and each algorithm may be developed independently. This thesis addresses the problem of interfacing new image compression algorithms with older and established algorithms such as entropy coding and the discrete wavelet transform. The thesis describes ANSI C coding procedures and functions involved in implementing two entropy coding algorithms, Huffman coding and arithmetic coding. Wavelet theory is discussed as it applies to the discrete wavelet transform. The thesis also describes an ANSI C implementation of one of the newest wavelet coefficient coding techniques, embedded zerotree wavelets (EZW), developed by Jerome Shapiro.
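The entropy-coding stage mentioned above can be illustrated with a compact Huffman coder. This sketch is in Python rather than the ANSI C of the thesis, purely for brevity, and omits practical details such as the single-symbol edge case and table transmission.

```python
import heapq
from collections import Counter

# Compact Huffman-coding sketch: repeatedly merge the two lowest-frequency
# subtrees, prefixing '0'/'1' to the codewords in each, so that frequent
# symbols end up with short prefix-free codewords.

def huffman_codes(data):
    heap = [[freq, [sym, ""]] for sym, freq in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

data = "aaaaabbbc"
codes = huffman_codes(data)
encoded = "".join(codes[s] for s in data)
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])  # frequent = short
assert len(encoded) < 8 * len(data)  # beats 8 bits per symbol
```

Arithmetic coding, the other entropy coder named above, improves on this by allowing fractional bits per symbol, at the cost of a more involved implementation.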
The EZW compression performance is compared with JPEG, the still-image standard adopted by the Joint Photographic Experts Group.

Item: Comparison of lossless compression models (Texas Tech University, 1999-08). Hovhannisyan, Anahit.
With the development of multimedia and digital imaging, there is a need to reduce the cost of storage and transmission of information. The cost reduction translates into reducing the amount of data that represents the information. This thesis investigates the performance of several lossless compression algorithms widely used for image coding. The first three chapters describe these algorithms in detail, as well as give examples of well-known data compression algorithms such as Huffman coding, arithmetic coding, Dynamic Markov Coding, and run-length encoding. Finally, the relative performances of the listed algorithms are compared. The thesis also includes C++ and ANSI C implementations of the Huffman and RLE algorithms, respectively.

Item: Content-based compression of mammograms (Texas Tech University, 2001-08). Grinstead, Bradley Ian.
This thesis presents results from the content-based compression (CBC) of digitized mammograms for transmission, archiving, and, ultimately, telemammography. Unlike traditional compression techniques, CBC is a process in which the content of the data is analyzed before compression takes place. In this approach, the data is partitioned into two classes of regions and a different compression technique is performed on each class. The intended result achieves a balance between data compression and data fidelity. For mammographic images, the data is segmented into two non-overlapping regions: (1) background regions, and (2) focus-of-attention regions that contain the clinically important information.
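The two-class scheme just described can be sketched in miniature: partition the data, keep the focus-of-attention samples exactly, and coarsely quantize the background. The threshold-based segmentation and uniform quantizer here are illustrative placeholders, not the thesis's actual segmentation or coders.

```python
# Content-based compression sketch: focus-of-attention samples are kept
# losslessly ("L"); background samples are coarsely quantized ("Q").
# Threshold segmentation and uniform quantization are stand-ins for the
# thesis's actual region analysis and lossy/lossless coders.

def cbc_encode(image, threshold=50, step=32):
    encoded = []
    for p in image:
        if p >= threshold:                 # focus-of-attention: exact
            encoded.append(("L", p))
        else:                              # background: lossy, few levels
            encoded.append(("Q", p // step))
    return encoded

def cbc_decode(encoded, step=32):
    out = []
    for tag, v in encoded:
        # reconstruct quantized background at the bin midpoint
        out.append(v if tag == "L" else v * step + step // 2)
    return out

image = [10, 12, 200, 201, 8, 180]
rec = cbc_decode(cbc_encode(image))
assert rec[2] == 200 and rec[3] == 201 and rec[5] == 180  # focus preserved
```

The coarse background indices compress far better than raw pixels, which is the source of the 10-20x gains over pure lossless coding that the entry reports.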
Subsequently, the former regions are compressed using a lossy technique, which attains large reductions in data, while the latter regions are compressed using a lossless technique in order to maintain the fidelity of these regions. In this case, results show that compression ratios averaging 10-20 times greater than that of lossless compression alone can be achieved, while preserving the fidelity of the clinically important information.

Item: Design of predictive vector quantizer for image coding (Texas Tech University, 2005-05). Yin, Jie; Mitra, Sunanda; Karp, Tanja; Nutter, Brian S.
Due to the prediction loop, instability is a major difficulty in the design of a predictive quantizer. The conventional open-loop method enjoys good stability but suffers from poor performance. In contrast, the closed-loop method may be able to perform better, but it cannot guarantee complete convergence and so it is unstable. Recently, the asymptotic closed-loop approach was proposed to benefit from the stability of open-loop design while asymptotically optimizing the actual closed-loop system. In this work, all three of the above design algorithms for predictive quantization are discussed and applied to image coding. Based on the analysis of simulation results, modifications to the closed-loop and asymptotic closed-loop designs are proposed for further improvement in design quality and reliability.

Item: Energy repacking for video compression using wavelet transforms (Texas Tech University, 1997-05). Sreenath, Sreenivas Prasad.
The video compression field has gone through rapid change in the past few years. This change can be attributed to the requirement of handling and storing large amounts of video data, which has increased the need for optimal storage and compression. Many techniques have been developed to achieve better video compression. This research exploits the energy repacking capability of wavelet transforms for video compression. A wavelet transform of an original image provides an efficient way to represent the original data with a potentially very high compression rate. To evaluate the compression achievable with the wavelet transform, the results obtained from the wavelet algorithm are compared with results obtained from the standard JPEG baseline technique. The evaluation is mainly based on the log mean square error, the signal-to-noise ratio, and the bits-per-pixel requirement of the data. The results confirm that the wavelet transform yields better compression and energy repacking than the DCT. Different levels of subband decomposition are also evaluated.

Item: Fast and efficient progressive image coding and transmission using wavelet decomposition (Texas Tech University, 1999-05). Sharma, Mohit.
With the recent boom in multimedia and the Internet, image compression and techniques for progressive image transmission have become quite important. This thesis describes the concept and design of a codec for progressive image transmission highlighted by a new technique, SPIHT (Set Partitioning in Hierarchical Trees). This technique works on the principles of partial ordering by magnitude using a set partitioning sorting algorithm, ordered bit-plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. These principles of SPIHT are no different from those described in the original EZW by J. M. Shapiro, but the approach for implementation of SPIHT is significantly different. Here the ordering information for image data is not explicitly transmitted. Instead, the fact that the execution path of any algorithm is defined by the results of the comparisons at its branching points is exploited to obtain the ordering information at the decoder. The decoder and the encoder not only share the same sorting algorithm, but also the same execution path. Thus, the decoder can recover the ordering information from its execution path, which happens to be identical to that of the encoder. The basic differences between EZW and SPIHT are highlighted by taking an example of an 8 x 8 image section.

Item: Fingerprint image restoration using wavelet transformation (Texas Tech University, 1996-05). Liu, Ti-chung.
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of invariant moments as fingerprint features.
The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that the WT outperforms JPEG in the high-compression-ratio region and that the reconstructed fingerprint images yield proper classification.

Item: Image compression in signal-dependent noise (Texas Tech University, 1995-08). Shahnaz, Rubeena.
The performance of an image compression scheme is affected by the presence of noise in an image. This work mainly investigates the effects of signal-dependent noise on image compression using the JPEG image compression algorithm. Simulation results show that the achievable compression is significantly reduced in the presence of noise. The types of noise considered are signal-independent additive noise, signal-dependent film-grain noise, and speckle noise. To improve compression ratios, noisy images are pre-processed for noise suppression before compression is applied. Two approaches are used for the reduction of signal-dependent noise prior to compression. In one approach, an estimator designed specifically for a particular signal-dependent noise model is used on the noise-degraded image for noise suppression. In the second approach, the signal-dependent noise is transformed into signal-independent noise using a homomorphic transformation. An estimator designed for signal-independent noise is then used on the transformed image for noise suppression, followed by an inverse homomorphic transformation. The performances of these two pre-compression noise suppression schemes are compared using different performance criteria. Simulation results show that pre-compression noise suppression significantly increases the amount of compression obtained subsequently.
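The homomorphic approach described above can be sketched in one dimension: a log transform turns multiplicative (speckle-like) noise into additive noise, an ordinary signal-independent denoiser is applied, and the result is mapped back with an exponential. The moving-average filter here is only a placeholder for the estimator the thesis actually uses.

```python
from math import log, exp

# Homomorphic denoising sketch: log() converts multiplicative noise to
# additive noise, a simple moving average (placeholder estimator) smooths
# it, and exp() is the inverse homomorphic transformation.

def homomorphic_denoise(pixels, window=3):
    logs = [log(p) for p in pixels]        # multiplicative -> additive
    half = window // 2
    smoothed = []
    for i in range(len(logs)):
        lo, hi = max(0, i - half), min(len(logs), i + half + 1)
        smoothed.append(sum(logs[lo:hi]) / (hi - lo))
    return [exp(v) for v in smoothed]      # back to the pixel domain

noisy = [100 * f for f in (1.2, 0.8, 1.1, 0.9, 1.0)]  # speckle around 100
restored = homomorphic_denoise(noisy)
# smoothing in the log domain pulls values back toward the clean level
assert all(abs(r - 100) < max(abs(n - 100) for n in noisy) for r in restored)
```

Because the smoothed signal has far less high-frequency content than the noisy one, a subsequent JPEG-style coder spends fewer bits on it, which is the mechanism behind the improved compression the entry reports.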
The compression results for the noiseless, noisy, and restored images are compared.

Item: Image compression using locally sensitive hashing (2013-05). Chucri, Samer Gerges; Dimakis, Alexandros G.
The problem of archiving photos is becoming increasingly important as image databases grow more popular and larger in size. Consider any social networking website, where users share hundreds of photos, resulting in billions of total images to be stored. Ideally, one would like to use minimal storage to archive these images by exploiting the redundancy that they share, while not sacrificing quality. We suggest a compression algorithm that aims at compressing across images, rather than compressing images individually, a novel approach that has not been adopted before. This report presents the design of a new image database compression tool. In addition, we implement a complete system in C++ and show the significant gains achieved in some cases, where we compress 90% of the initial data. One of the main tools we use is Locally Sensitive Hashing (LSH), a relatively new technique mainly used for similarity search in high dimensions.

Item: Implementation of BCWT in GUI wavelet toolbox (2010-12). Kongara, Spandana; Karp, Tanja; Nutter, Brian; Mitra, Sunanda.
MATLAB provides different tools for image processing applications, such as the Image Processing Toolbox and the Wavelet Toolbox. The Wavelet Toolbox has different GUI interfaces for various wavelet applications, which can be accessed with the command 'wavemenu'. For image compression, the Wavelet Toolbox has a GUI tool named True Compression 2D, which can also be accessed with the command 'wc2dtool'. The user can also access a command-line function instead of the GUI toolbox using the command 'wcompress'. The toolbox and the command-line function offer different compression algorithms for compressing images, such as EZW and SPIHT.
The user can select the desired method for a particular application and compress images accordingly. The BCWT compression algorithm, proposed by Jiangling Guo, is advantageous compared with some of the existing algorithms in the Wavelet Toolbox such as EZW and SPIHT: BCWT is less complex, faster, and uses less memory. BCWT has been added to the available compression algorithms in the toolbox and the command-line function so that the user can access it in the same way as the other compression methods, and it has been made available to all users by integrating it into the GUI Wavelet Toolbox.

Item: Morphological filters for image enhancement (Texas Tech University, 1996-12). Kher, Alok.
Digital images are subjected to filtering processes during the operations of noise reduction and lossy compression, and fine details are often lost or severely altered in these filtering processes. Connectivity-preserving morphological filters have been proposed in the past to remove noise while preserving thin but connected regions; however, these filters preserved regional connectivity only in restricted orientations. The present work has developed morphological filters that may be used for fast and efficient removal of noise while completely preserving connectivity information in gray-scale images. These filters are shown to satisfy the requirements of the well-behaved abstract operations of algebraic opening and closing. When applied to the problem of speckle noise reduction in synthetic aperture radar images, the new filters performed significantly better than conventional linear and non-linear filters. The present work has also developed an image representation approach that may be used for developing high-quality lossy image compression techniques based on morphological multiresolution pyramid decomposition of images.
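The algebraic opening mentioned above can be sketched in one dimension as erosion (a running minimum) followed by dilation (a running maximum) with a flat structuring element. Note the hedge: this basic opening does not preserve connectivity; the thesis's contribution is precisely a family of filters that do.

```python
# Gray-scale morphological opening sketch (1-D, flat structuring element):
# erosion = sliding minimum, dilation = sliding maximum, opening = erosion
# then dilation. Removes bright features narrower than the element.

def erode(signal, size=3):
    half = size // 2
    return [min(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def dilate(signal, size=3):
    half = size // 2
    return [max(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def opening(signal, size=3):
    return dilate(erode(signal, size), size)

# A one-pixel bright speck (the 9) is removed; the wide plateau survives.
signal = [1, 1, 9, 1, 1, 5, 5, 5, 5, 1]
opened = opening(signal)
assert opened[2] < 9          # speck suppressed
assert opened[6] == 5         # plateau interior preserved
```

Closing is the dual (dilation then erosion) and removes narrow dark features; the pyramid decomposition in the entry stacks such operations across resolutions.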
A pyramid decomposition technique represents an image as a pyramid of differential images which store incremental information at various resolutions. Lossy compression techniques based on pyramid decomposition often discard the first differential image component, which usually consists of a substantial amount of high-frequency noise; complete omission of this component, however, can result in the loss of fine image details. The present work has developed an approach to approximately reconstruct the first differential image from its two components consisting of directional information. The simplification process is shown to be equivalent to connectivity-preserving filtering. For various standard images, the entropies of the differential images were shown to decrease by 35% to 40% for approximately 10% mean square error between the original and the reconstructed differential images.

Item: Optimization of vector quantization for large color images (Texas Tech University, 1999-05). Yang, Shuyu.
Images are produced to record and store information that people want to preserve. Visual information is more easily accepted and understood than other types of information, for example, linguistic information, and thus has become a popular way of representing information. However, compared to other types of information processing, images contain much more data and usually take more processing time and storage space. With the increasingly wide use of computers and the Internet, efficient methods of image transmission and storage are needed because of the limits of currently available Internet speeds. In other applications, such as medical image transmission and storage, video conferencing, and Video On Demand (VOD) systems, where huge amounts of image data storage or fast, real-time image transmission are demanded, image compression can provide a major solution.
Image compression involves identifying the redundancy of the information contained in images so as to reduce the amount of data needed to represent the original image, thus achieving a lower bit rate and less transmission time and storage space.

Item: Optimization of vector quantization in Hybrid Vector Scalar Quantization (HVSQ) (Texas Tech University, 2005-05). Varambally, Dheeraj B.; Mitra, Sunanda; Karp, Tanja; Krile, Thomas.
The advancement of fields like multimedia and medical imagery and the emergence of high-resolution digital cameras have necessitated the acquisition, storage, and transmission of high-resolution digital images. Storage and transmission of such images are expensive in terms of bytes and bandwidth, so there is a need to compress these images to curtail the storage and transmission budget. A wide variety of image compression schemes featuring new concepts and techniques have been proposed to yield superior compression quality. Hybrid Vector Scalar Quantization (HVSQ) is one such compression scheme, used for compressing the high-resolution cervigram image archives of the US National Library of Medicine. It is a lossy compression scheme comprising two well-known quantization techniques, vector and scalar quantization, in the wavelet domain. This thesis focuses on the implementation and optimization of the vector quantization module of HVSQ to yield higher image quality in terms of Peak Signal-to-Noise Ratio (PSNR) and lower encoding time. The thesis also evaluates the performance of vector, scalar, and hybrid vector-scalar quantization.
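PSNR, the figure of merit used throughout these entries, is worth spelling out: it is the peak signal power over the mean-squared reconstruction error, on a decibel scale. A minimal sketch for 8-bit data (peak value 255), with made-up sample values:

```python
from math import log10

# PSNR sketch: 10 * log10(peak^2 / MSE). Higher is better; identical
# images give infinite PSNR. The sample pixel values are illustrative.

def psnr(original, reconstructed, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")        # identical images
    return 10 * log10(peak * peak / mse)

orig = [52, 55, 61, 59, 79, 61, 76, 61]
recon = [50, 55, 60, 59, 80, 60, 76, 62]
print(round(psnr(orig, recon), 1))  # prints 48.1 (MSE here is exactly 1)
```

Typical lossy-coder comparisons like those in the HVSQ thesis report PSNR as a function of bit rate, so two quantizer designs can be compared at equal storage cost.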
The superior performance of HVSQ is then verified by compressing a few high-resolution natural images.