Browsing by Subject "Coding theory"
Now showing 1 - 19 of 19
Item: A software implementation progress model (Texas Tech University, 2004-08). Towell, Dwayne. Not available.

Item: A study of measurements of blocking defects in highly-compressed images (Texas Tech University, 1997-12). Zhong, Jianqiang Norman. Not available.

Item: A study of very low bit rate video coding (Texas Tech University, 1997-08). Wang, Naxin. This work focuses on the low bit rate video coding used in video conferencing systems. A simplified segmentation-based coding approach is applied to the H.263 and MPEG-1 standards. This approach uses motion information to separate the moving objects from the rest of the scene and segments the moving macroblocks into objects. Based on the features of a video conferencing sequence, the encoder is optimized so that more bits are spent on these moving objects while the others are treated as background, which saves a substantial number of bits. The encoder can select the objects by their sizes or let the viewer select them. The quality of the output is satisfactory by our observation.

Item: Adaptive wavelet filter design for optimized image source encoding (Texas Tech University, 2002-12). Kumar, Roopesh. Despite intensive research being conducted on the topic of adaptive filter design in general, adaptive filter design in the discrete wavelet transform (DWT) domain with specific constraints is still an active research area. The present work investigates the advantages and limitations of the design of a 2-channel perfect-reconstruction wavelet filter which is adapted and optimized under minimum energy constraints in a specific band. Such a filter can be used with the quantizer and entropy encoder of a wavelet-based image encoder to give optimum performance. An optimal 2-channel conjugate quadrature filter (CQF) bank has been designed and optimized using Sequential Quadratic Programming methods. The filter bank problem is solved using recently developed optimization techniques for general nonlinear, non-convex functions. The results indicate an improved performance for this method compared to the earlier-used Interior-Point optimization method.

Item: Algebraic and analytic techniques in coding theory (2015-12). Bhowmick, Abhishek; Zuckerman, David I.; Bajaj, Chandrajit; Gal, Anna; Lovett, Shachar; Price, Eric. Error correcting codes are designed to tackle the problem of reliable transmission of data through noisy channels. A major challenge in coding theory is to efficiently recover the original message even when many symbols of the received data have been corrupted. This is called the unique decoding problem of error correcting codes. More precisely, if the user wants to send K bits, the code stretches the K bits to N bits to tolerate errors in the N bits. The goal is then to recover the original K bits of the message. Often, the receiver requires only a certain part of the message. In such cases, analyzing the entire received data (word) becomes prohibitive. The challenge is to design a local decoder which queries only a few locations of the received word and outputs the part of the message required. This is known as local decoding of an error correcting code. The unique decoding problem faces a certain combinatorial barrier: there is a limit to the number of errors it can tolerate while still uniquely identifying the correct message. This is called the unique decoding radius (see the decoding sketch after this listing). A major open problem is to understand what happens if one allows for errors beyond this threshold. The goal is to design an algorithm that can recover the right message, or possibly a list of messages (preferably a small number).
This is referred to as list decoding of an error correcting code. At the core of many such codes lie polynomials. Polynomials play a fundamental role in computer science, with important applications in algorithm design, complexity theory, pseudo-randomness, and machine learning. In this dissertation, we improve our understanding of well-known classes of codes and discover various properties of polynomials. As an additional consequence, we obtain results in a suite of problems in effective algebraic geometry, including Hilbert's Nullstellensatz, the ideal membership problem, and counting rational points in a variety.

Item: An optimized vector quantization for color image compression (Texas Tech University, 1998-05). Kompella, Sastry V. S. Image data compression using vector quantization (VQ) has received a lot of attention in recent years because of its rate-distortion optimality and adaptability. A fundamental goal of data compression is to reduce the bit rate for transmission or data storage while maintaining an acceptable fidelity or image quality. The combination of subband coding and vector quantization can provide a powerful method for compressing color images. Most of the existing VQ algorithms, however, suffer from a number of serious problems, such as a long search process, codebook initialization, and getting trapped in local minima. This work investigates the development of an image compression algorithm using a variable-block-size vector quantization technique for generation of an optimal codebook by employing a neuro-fuzzy clustering approach to ensure minimum distortion. Each color image is decomposed into R, G, and B color planes prior to application of the wavelet transform and vector quantization to each color plane. Each color plane is preprocessed by performing multiresolution wavelet decomposition. The multiresolution nature of the discrete wavelet transform is utilized to decompose the images into more directionally decorrelated sub-images, which are more suitable for quantization and coding. Vector quantization is performed on each of the sub-images at different resolutions, and a multiresolution codebook scheme is utilized. This new approach to image compression facilitates generation of an improved, globally optimal codebook and a simpler search scheme. Finally, the codebooks generated from the three encoded color planes are entropy coded to obtain higher compression at minimum distortion. Each color plane codebook is decoded, and the reconstructed color planes are combined to form the final reconstructed image. The reconstructed images are compared with those of other standard compression algorithms in terms of mean square error (MSE) and peak signal-to-noise ratio (PSNR); a minimal MSE/PSNR sketch appears after this listing.

Item: Codebook optimization in vector quantization (Texas Tech University, 1999-12). Zhang, Xiaoxi. Digital image processing techniques were introduced early in the twentieth century. One of the first applications was improving digitized newspaper pictures sent by submarine cable between London and New York in the 1920s, which reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. In 1964, the Jet Propulsion Laboratory began using computers to improve image quality [5]. From 1964 until the present, the field of image processing has grown vigorously. It has become a prime area of research not only in electrical engineering but also in many other disciplines such as computer science, health science, and geography.
However, representing a digitized image may require an enormous amount of data. Some images, like medical images, have higher resolution and therefore require even larger amounts of memory. Because of the vast amount of data associated with images and video, compression is a key technology for reducing the amount of data required to represent a digital image. A digital image can be compressed because it contains data redundancies; when these redundancies are reduced or eliminated, the data is compressed. There are many compression methods, and they are normally classified into two main categories: lossless and lossy compression. In this thesis, we focus on vector quantization, which is a lossy compression method. Based on Shannon's theory, coding systems can perform better if they operate on vectors or groups of symbols rather than on individual symbols or samples [9]. The objective of this research was to compress images using LBG-VQ [21], the spatial-domain algorithm introduced by Linde, Buzo, and Gray, in both the spatial domain and the wavelet transform domain [6], and to compare it with other recently developed vector quantization techniques (a minimal LBG-style training sketch appears after this listing).

Item: Energy repacking for video compression using wavelet transforms (Texas Tech University, 1997-05). Sreenath, Sreenivas Prasad. The video compression field has gone through rapid change in the past few years. This change can be attributed to the need to handle and store large amounts of video data, which has increased the demand for optimal storage and compression. In order to achieve better video compression, many techniques have been developed. This research exploits the energy-repacking capability of wavelet transforms for video compression. A wavelet transform of an original image provides an efficient way to represent the original data with a potentially very high compression rate. To evaluate the compression achievable with the wavelet transform, the results obtained from the wavelet algorithm are compared with results obtained from the standard JPEG baseline technique. The evaluation is mainly based on the log mean square error, the signal-to-noise ratio, and the bits-per-pixel requirement of the data. The results confirm that the wavelet transform yields better compression and energy repacking than the DCT-based transform. Different levels of subband decomposition of wavelets are also evaluated.

Item: Fault tolerance in distributed systems: a coding-theoretic approach (2012-08). Balasubramanian, Bharath; Garg, Vijay K. (Vijay Kumar), 1963-; Chase, Craig M.; Julien, Christine L.; Plaxton, Greg; Touba, Nur A.; Vishwanath, Sriram. Distributed systems are rapidly increasing in importance due to the need for scalable computations on huge volumes of data. This fact is reflected in many real-world distributed applications such as Amazon's EC2 cloud computing service, Facebook's Cassandra key-value store, and Apache's Hadoop MapReduce framework. Multi-core architectures developed by companies such as Intel and AMD have further brought this to prominence, since workloads can now be distributed across many individual cores. The nodes or entities in such systems are often built using commodity hardware and are prone to physical failures and security vulnerabilities. Achieving fault tolerance in such systems is a challenging task, since it is not easy to observe and control these distributed entities.
Replication is a standard approach for fault tolerance in distributed systems. The main advantage of this approach is that the backups incur very little overhead in terms of the time taken for normal operation or recovery. However, replication is grossly wasteful in terms of the number of backups required for fault tolerance. The large number of backups has two major implications. First, the total space or memory required for fault tolerance is considerably high. Second, there is a significant cost in resources such as the power required to run the backup processes. Given the large number of distributed servers employed in real-world applications, it is a hard task to provide fault tolerance while achieving both space and operational efficiency. In the world of data fault tolerance and communication, coding theory is used as the space-efficient alternative to replication. A direct application of coding theory to distributed servers, treating the servers as blocks of data, is very inefficient in terms of the updates to the backups. This is primarily because each update to the server will affect many blocks in memory, all of which have to be re-encoded at the backups. This leads us to the following thesis statement: can we design a mechanism for fault tolerance in distributed systems that combines the space efficiency of coding theory with the low operational overhead of replication? We present a new paradigm to solve this problem, broadly referred to as fusion. We provide fusion-based solutions for two models of computation that are representative of a large class of applications: (i) systems modeled as deterministic finite state machines and (ii) systems modeled as programs containing data structures. For finite state machines, we use the notion of Hamming distances to present a polynomial-time algorithm that generates efficient backup state machines. For programs hosting data structures, we use a combination of erasure codes and selective replication to generate efficient backups for most commonly used data structures such as queues, array lists, linked lists, vectors, and maps. We present theoretical and experimental results that demonstrate the efficiency of our schemes over replication. Finally, we use our schemes to design an efficient solution for fault tolerance in two real-world applications: Amazon's Dynamo key-value store and Google's MapReduce framework.

Item: Network coding for next-generation networks (2008-05). Bhadra, Sandeep; Shakkottai, Sanjay.

Item: Oversampled multi-bit sigma-delta A/D converters (Texas Tech University, 1997-08). Kinyua, Martin K. The objective of this research is to propose sigma-delta modulators that achieve high resolution at wide bandwidths, meaning A/D conversion at rates exceeding 1 MHz with a resolution of at least 12 bits [1, p. 219]. As will become clear later, this task requires not only high-order noise shaping but also multi-bit quantization. On that basis, this thesis starts with a basic sigma-delta topology which employs multi-bit quantization with single-bit feedback, thus avoiding the very strict linearity requirements imposed on the DAC in the feedback path of a sigma-delta loop. The concept is then further extended to develop novel sigma-delta topologies that accomplish a synergetic combination of the advantages of sigma-delta and pipeline ADCs to provide wide dynamic range at wide bandwidths, a performance which may not currently be realized using either of the structures alone.
The validity of the proposed topologies is confirmed by system-level simulations.

Item: Real-time lossless compression (Texas Tech University, 2000-12). Hargrave, James Roger. This thesis covers the selection and design of a lossless coding algorithm for optimal performance of high-speed data transfer using a very large-scale integration (VLSI) implementation of the algorithm on a dedicated chip for real-time operation. The choice of the lossless coding algorithm is based on a comparative evaluation of the existing lossless coding algorithms, the computational efficiency of parallel versus serial design, and parameter optimization. Reduced-multiplication arithmetic coding is found to be the best choice for efficient real-time transmission of large data files at speeds exceeding one gigabit per second.

Item: Space frequency quantization in video compression (Texas Tech University, 2004-05). Pai, Sunil Subrao. The recent explosion in digital video storage and delivery has presented strong motivations for high-performance video compression solutions. One second of uncompressed video would require about 30 MB of disk space (30 fps x 921,600 bytes per frame). For today's PCs, a data throughput rate of 10 MB per second is considered fast. On the web, the problem is even more significant. With current storage devices and the available bandwidth, the only way to share videos is to compress them, and the higher the compression, the better. Over the past decade, wavelets have been used successfully in solving many difficult problems requiring transform-domain processing, including image compression (e.g., JPEG 2000). This thesis is an attempt to use the wavelet transform in video compression instead of the traditional discrete cosine transform (DCT)-based technique.

Item: Speaker independent real-time speech recognition system (Texas Tech University, 1998-08). Jindani, Abid M. This thesis attempts to develop a real-time, speaker-independent Automatic Speech Recognition (ASR) system. The system recognizes isolated utterances from a limited vocabulary and is small and cost-efficient enough to be incorporated into a consumer appliance. The recognition is based on zero-crossing and energy-content measurements on the speech waveforms. The algorithm segments the speech waveform into ten equally spaced intervals and performs a match with the patterns in a reference template. The system was implemented on an IBM personal computer and achieved an error rate of 0% on a vocabulary of four words from an initial ten-word database of 16 speakers (8 male and 8 female). The system recognized unknown utterances in less than 0.3 seconds.

Item: Statistical algorithms for circuit synthesis under process variation and high defect density (2007-12). Singh, Ashish Kumar, 1981-; Orshansky, Michael. As the technology scales, there is a need to develop design and optimization algorithms under various scenarios of uncertainty. These uncertainties are introduced by process variation and impact both delay and leakage. For future technologies at the end of CMOS scaling, not only process variation but also the device defect density is projected to be very high. Thus, realizing error-tolerant implementations of Boolean functions with minimal redundancy overhead remains a challenging task. The dissertation is concerned with the challenges of low-power, low-area digital circuit design under high parametric variability and high defect density.
Technology mapping provides an ideal starting point for leakage reduction because of the higher structural freedom in the choice of implementations. We first describe an algorithm for technology mapping for yield enhancement that explicitly takes parameter variability into account. We then show how leakage can be reduced by accounting for its dependence on the signal state, and develop a fast gain-based technology mapping algorithm. In some scenarios the state probabilities cannot be precise point values but are instead modeled as intervals. We extend the notion of mean leakage to the worst-case mean leakage, defined as the sum of the maximal mean leakage of circuit gates over the feasible probability realizations. The gain-based algorithm has been generalized to optimize this proxy leakage metric by casting the problem within the framework of robust dynamic programming. Testing is performed by selecting various instance probabilities for the primary inputs that deviate from the point probabilities with respect to which a point-probability-based, gain-based mapper has been run. We obtain leakage improvement for certain test probabilities with the interval-probability-based mapper over the point-probability-based mapper. Next, we present techniques based on coding theory for implementing Boolean functions in highly defective fabrics that allow us to tolerate errors to a certain degree. The novelty of this work is that the structure of Boolean functions is exploited to minimize the redundancy overhead. Finally, we propose an efficient analysis approach for statistical timing which can correctly propagate the slope in path-based statistical timing analysis. The proposed algorithm can scale up to one million paths.

Item: Three essays on the interface of computer science, economics and information systems (2007). Hidvégi, Zoltán Tibor, 1970-; Whinston, Andrew B. This thesis looks at three aspects related to the design of e-commerce systems, online auctions, and distributed grid computing systems. We show how formal verification techniques from computer science can be applied to ensure correctness of system design and implementation at the code level. Through an e-ticket sales example, we demonstrate that model checking can locate subtle but critical flaws that traditional control and auditing methods (e.g., penetration testing, analytical procedures) would most likely miss. Auditors should understand formal verification methods, encourage engineering teams to use them to create designs with less chance of failure, and even practice formal verification themselves in order to offer credible control and assistance for critical e-systems. Next, we study why many online auctions offer fixed buy prices, to understand why sellers and auctioneers voluntarily limit the surplus they can get from an auction. We show that when either the seller or the bidders are risk-averse, a properly chosen fixed permanent buy-price can increase the social surplus without decreasing the expected utility of the sellers and bidders, and we characterize the unique equilibrium strategies of uniformly risk-averse buyers in a buy-price auction. In the final chapter we look at the design of a distributed grid-computing system. We show how code instrumentation can be used to generate a witness of program execution, and show how this witness can be used to audit the work of self-interested grid agents.
Using a trusted intermediary between grid providers and customers, the audit allows payment to be contingent on successful audit results, and it creates a verified reputation history of grid providers. We show that enabling the free trade of reputations provides economic incentives for agents to perform the computations assigned, and it induces increasing effort levels as the agents' reputation increases. We show that in such a reputation market only high-type agents have an incentive to purchase a high reputation and only low-type agents use low reputations; thus the market works as a natural signaling mechanism for the agents' types.

Item: Two-dimensional image convolution by analog computation (Texas Tech University, 1997-05). Symes, Donald Allen. Vision computing has been a subject of tremendous research interest since the days of the original perceptron in the 1940s [McClelland 88]. The last decade or so has shown a great deal of progress in our understanding of, at least, the early stages of vision and of the underlying structures and processes that perform these early processes in biological systems. Despite the ever-increasing speed and power of digital computing systems, the consensus appears to be that any practical vision system will necessarily be a highly parallel, probably analog, computing system.

Item: Video compression in signal-dependent noise (Texas Tech University, 1996-12). Upadhya, Ashwin Kumar. This work investigates the performance of video compression techniques in the presence of signal-dependent noise. The signal-dependent noise sources most commonly encountered are film-grain noise and speckle. Film-grain noise degradation occurs when a photographic film is scanned for the purpose of digitization [6]. All types of coherent imaging techniques, such as synthetic aperture radar (SAR) imagery, laser-illuminated imagery, astronomical imagery, and ultrasonic medical imagery, are affected by speckle. Noise in the video affects not only the quality of the video but also the compression scheme for the video. It is of utmost importance to improve both the quality of the video and the achievable compression, for the sake of archiving, in applications such as medical imagery. This work aims to investigate techniques to improve the quality and achievable compression of moving pictures (video), keeping such applications in mind. There is no real consensus yet on the "best" quality measure to use for determining the quality of the output video, so in this work we use the standard mean square error, log mean square error, signal-to-noise ratio, and perceptual mean square error (which is modeled on the human visual system).

Item: Wireless systems incorporating full-diversity single-symbol decodable space-time block codes: performance evaluations and developments (2007-12). Lee, Hoo-jin, 1973-; Powers, Edward J.
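
The abstract for "Algebraic and analytic techniques in coding theory" above describes how a code stretches K message bits to N bits and can uniquely correct errors only up to a combinatorial limit, the unique decoding radius. The following minimal sketch illustrates that idea with a simple repetition code; the repetition code and the function names are illustrative assumptions for this listing, not the codes studied in that dissertation.

```python
# Minimal sketch: a 1-bit message repeated n times can be uniquely
# decoded as long as at most (n - 1) // 2 symbols are corrupted.

def encode_repetition(bit: int, n: int) -> list[int]:
    """Stretch a single message bit to an n-bit codeword."""
    return [bit] * n

def decode_repetition(received: list[int]) -> int:
    """Unique decoding by majority vote: succeeds whenever the number of
    flipped bits is within the unique decoding radius, (n - 1) // 2."""
    ones = sum(received)
    return 1 if ones * 2 > len(received) else 0

if __name__ == "__main__":
    n = 7
    codeword = encode_repetition(1, n)
    corrupted = codeword.copy()
    for i in range(3):          # flip 3 bits: 3 <= (7 - 1) // 2, still decodable
        corrupted[i] ^= 1
    assert decode_repetition(corrupted) == 1
    print("decoded:", decode_repetition(corrupted))
```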
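Several compression abstracts above (e.g., "An optimized vector quantization for color image compression") evaluate reconstructed images by mean square error (MSE) and peak signal-to-noise ratio (PSNR). As a reference, here is a minimal sketch of those two standard metrics for 8-bit images; it uses NumPy and is not code from any of the listed theses.

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean square error between two images of the same shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images whose maximum value is `peak`."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```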
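The vector quantization items above ("Codebook optimization in vector quantization" and "An optimized vector quantization for color image compression") build codebooks with the Linde-Buzo-Gray (LBG) algorithm or variants of it. The sketch below shows only the generic Lloyd-style iteration at the heart of LBG (assign training vectors to the nearest codeword, then recompute centroids); the splitting initialization and stopping criteria used in the actual theses are omitted, and the function name and signature are assumptions.

```python
import numpy as np

def lbg_codebook(training: np.ndarray, size: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Very simplified LBG/Lloyd iteration.

    training: (num_vectors, dim) array of training vectors.
    size: number of codewords in the codebook.
    Returns a (size, dim) codebook that locally minimizes mean squared distortion.
    """
    rng = np.random.default_rng(seed)
    # Initialize with random training vectors (real LBG uses codeword splitting).
    codebook = training[rng.choice(len(training), size, replace=False)].astype(np.float64)
    for _ in range(iters):
        # Nearest-codeword assignment under squared Euclidean distortion.
        dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Centroid update; keep the old codeword if its cell is empty.
        for k in range(size):
            cell = training[labels == k]
            if len(cell) > 0:
                codebook[k] = cell.mean(axis=0)
    return codebook
```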