Browsing by Subject "Video compression"
Now showing 1 - 12 of 12
Item An ASIC implementation of the two-dimensional Discrete Cosine Transform (Texas Tech University, 1996-08) Chen, Feng
Not available

Item Bayesian motion estimation using video frame differences (Texas Tech University, 1998-05) Panturu, Daniel I.
In modern video compression, two major research directions are under intense scrutiny because of their potential to significantly improve existing techniques. The first focuses on new orthogonal transformations of the video signal, such as wavelets, to repack signal energy more efficiently. The second concentrates on motion prediction and compensation to remove or reduce temporal redundancy between successive video frames. Motion estimation, which may refer to image-plane motion (2-D motion) or object motion (3-D motion), is one of the fundamental problems in digital video processing and has been the subject of much research since the early 1980s. Because of the ill-posed nature of the problem, motion estimation algorithms need supplementary provisions (models) about the structure of the 2-D motion field. In this study, 2-D motion estimation is formulated as a Bayesian estimation problem, and a stochastic smoothness constraint is introduced by modeling the 2-D motion vector field in terms of a Gibbs distribution. The motion vector model proposed in this dissertation is a globally smooth model based on vector Markov random fields, and the estimation criterion is maximum a posteriori (MAP) probability, in which the a posteriori probability of motion given the input data is maximized. In contrast with other studies, successive video frame differences are used in this dissertation to estimate the motion. The MAP estimation is performed through simulated annealing, in which the solution space is sampled by means of the Gibbs sampler. Bad data are eliminated using a variant of the method of local outlier rejection.
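As an illustrative aside (a minimal sketch, not the dissertation's actual implementation), the MAP criterion described above amounts to minimizing a frame-difference data term plus a Gibbs-style smoothness penalty over block motion vectors. The sketch below uses a deterministic coordinate-wise sweep (ICM) as a simple stand-in for the simulated annealing / Gibbs sampling the abstract describes; all names and parameter values are illustrative.

```python
import numpy as np

def map_motion_estimate(prev, curr, block=8, search=2, lam=0.5, sweeps=5):
    """Toy MAP-style block motion estimation.

    Data term: squared frame difference after motion compensation.
    Prior: Gibbs-style smoothness penalty on neighboring motion vectors.
    Optimized with simple deterministic sweeps (ICM), a stand-in for
    the simulated annealing / Gibbs sampler used in the dissertation.
    """
    H, W = prev.shape
    by, bx = H // block, W // block
    mv = np.zeros((by, bx, 2), dtype=int)  # per-block (dy, dx)
    cands = [(dy, dx) for dy in range(-search, search + 1)
                      for dx in range(-search, search + 1)]

    def data_cost(i, j, dy, dx):
        y, x = i * block, j * block
        y0, x0 = y + dy, x + dx
        if y0 < 0 or x0 < 0 or y0 + block > H or x0 + block > W:
            return np.inf  # candidate falls outside the previous frame
        d = curr[y:y+block, x:x+block].astype(float) - \
            prev[y0:y0+block, x0:x0+block].astype(float)
        return float(np.sum(d * d))

    def smooth_cost(i, j, v):
        # Sum of squared differences to the 4-neighborhood motion vectors.
        c = 0.0
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < by and 0 <= nj < bx:
                c += float(np.sum((v - mv[ni, nj]) ** 2))
        return c

    for _ in range(sweeps):
        for i in range(by):
            for j in range(bx):
                costs = [data_cost(i, j, dy, dx) +
                         lam * smooth_cost(i, j, np.array([dy, dx]))
                         for dy, dx in cands]
                mv[i, j] = cands[int(np.argmin(costs))]
    return mv
```

For a frame pair related by a pure one-pixel vertical shift, the interior blocks recover the (1, 0) motion vector exactly, since the data term vanishes there.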
Experimental results from applying the proposed simulated annealing algorithm and gradient-descent-based algorithms to natural and computer-generated images with natural and synthetic motion are compared.

Item DCT domain video foveation and transcoding for heterogeneous video communication (2002-05) Liu, Shizhong; Bovik, Alan C. (Alan Conrad), 1958-

Item Energy repacking for video compression using wavelet transforms (Texas Tech University, 1997-05) Sreenath, Sreenivas Prasad
The video compression field has changed rapidly in the past few years, driven by the need to handle and store large amounts of video data. The enormous volume of video data has increased the need for efficient storage and compression, and many techniques have been developed to achieve better video compression. This research exploits the energy-repacking capability of wavelet transforms for video compression. A wavelet transform of an image provides an efficient representation of the original data with a potentially very high compression rate. To evaluate the compression achievable with the wavelet transform, results obtained from the wavelet algorithm are compared with results from the standard JPEG baseline technique. The evaluation is based mainly on the log mean square error, the signal-to-noise ratio, and the bits-per-pixel requirement of the data. The results confirm that the wavelet transform yields better compression and energy repacking than the DCT. Different levels of wavelet subband decomposition are also evaluated.

Item Fast and low memory usage coding for image and video based on wavelet transform (Texas Tech University, 2007-05) Ye, Linning; Nutter, Brian; Mitra, Sunanda; Seshaiyer, Padmanabhan; Karp, Tanja
A new video codec based on three-dimensional wavelet subband coding with 3-D BCWT is presented.
This new video codec has almost identical PSNR performance to the well-known 3-D SPIHT video codec, yet it is much more computationally efficient and uses much less internal memory than 3-D SPIHT. Implementation results show that 3-D BCWT can achieve real-time decoding in a software-only implementation on a PC, and its application to volumetric medical images also achieves good performance. Although the BCWT algorithm itself uses much less memory than the SPIHT algorithm, the total system memory usage in BCWT coding remains high because of the large memory consumption of the wavelet transform. This dissertation therefore also presents the line-based BCWT algorithm, which uses the line-based wavelet transform to perform BCWT coding. Because of BCWT's backward coding feature, the line-based BCWT algorithm can significantly reduce overall system memory usage: depending on the image size, it can use less than 1% of the memory of the SPIHT algorithm and less than 2% of the memory of the original BCWT algorithm, making it extremely suitable for resource-limited platforms.

Item Foveated coding for persistics (2012-12) Bernstein, Alan Aaron; Bovik, Alan C. (Alan Conrad), 1958-; Heath, Robert W.
Persistics is an advanced framework for processing wide-area aerial surveillance video. It handles data collection, stitching of multi-sensor imagery, image registration and stabilization, motion tracking, and compression. As image sensor sizes grow, significant improvements in compression techniques are necessary to make full use of the data.
Because the information of interest in such video consists of naturally moving, point-like targets, the applicability of foveated coding to the compression problem is an interesting question. Foveated coding, a compression technique designed to be perceptually optimal for the human visual system, has several components appropriate to the persistics compression problem. Foveation is applied to persistics data in several different scenarios and with several methods. Because foveation can make good use of the persistics tracker data, a problem affecting tracker performance is also explored: the multi-sensor stitching component of persistics can generate artifacts that reduce the tracker's effectiveness, so a method for characterizing, detecting, and correcting such artifacts is desirable. These three concepts are explored, a detection method is developed, and components of these algorithms were absorbed into a more general framework for artifact correction.

Item Joint source-channel distortion modeling for image and video communication (2006) Sabir, Muhammad Farooq; Bovik, Alan C. (Alan Conrad), 1958-; Heath, Robert W., Ph. D.

Item Rate-adaptive H.264 for TCP/IP networks (Texas A&M University, 2007-09-17) Kota, Praveen
While there has always been tremendous demand for streaming video over TCP/IP networks, the nature of the application still presents challenging issues. Applications that transmit multimedia data over best-effort networks like the Internet must cope with changing network behavior; specifically, the source encoder rate should be controlled based on feedback from a channel estimator that probes the network periodically. First, one such Multimedia Streaming TCP-Friendly Protocol (MSTFP) is considered, which iteratively integrates forward estimation of network status with feedback control to closely track the varying network characteristics. Second, a network-adaptive embedded bit stream is generated using a ρ-domain rate controller.
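As a hedged illustration of ρ-domain rate control (a generic sketch, not the thesis's implementation; all names are illustrative), the model treats the bit rate as approximately linear in the fraction ρ of quantized transform coefficients that are zero, R(ρ) = θ·(1 − ρ), with the slope θ estimated from a single trial quantization:

```python
import numpy as np

def zero_fraction(coeffs, q):
    """Fraction rho of coefficients quantized to zero at step size q."""
    return float(np.mean(np.round(coeffs / q) == 0))

def fit_theta(coeffs, q_trial, bits_trial):
    """Estimate the slope theta from one observed (rate, rho) pair,
    using R = theta * (1 - rho)."""
    rho = zero_fraction(coeffs, q_trial)
    return bits_trial / (1.0 - rho)

def predict_bits(coeffs, q, theta):
    """Predict the coding bit count at step size q via R = theta * (1 - rho)."""
    return theta * (1.0 - zero_fraction(coeffs, q))
```

A coarser quantizer zeroes more coefficients (larger ρ), so the predicted rate falls, which is what a rate controller exploits when choosing q to hit a target bit budget.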
The conceptual elegance of this ρ-domain framework stems from the fact that the coding bit rate R(ρ) is approximately linear in ρ, the percentage of zeros among the quantized spatial transform coefficients, as opposed to the more traditional, complex, and highly nonlinear R(Q) characterization. Although the ρ-domain model has been successfully implemented in a few other video codecs, here its application to the emerging video coding standard H.264 is considered. Extensive experimental results show robust rate control, similar or improved Peak Signal to Noise Ratio (PSNR), and a faster implementation.

Item Scene change detection in compressed video data (Texas Tech University, 1995-08) Ambady, Balagopalan Menon
Use of digital video data is on the rise in applications such as multimedia educational and training tools, multimedia mail, networked video conferencing systems, and desktop entertainment systems. Because of the large amount of data associated with video applications, video data is compressed before storage or transmission, so the problems of content-based indexing and retrieval of compressed video data assume primary importance. Solutions to these problems require computationally efficient detection of scene changes. Previous research has shown that scene change detection techniques operating on compressed data are much faster than techniques requiring decompression before detection. We have developed new scene change detection algorithms based on block comparisons between adjacent frames of digital video data compressed in the spatial frequency domain.

Item Software design and implementation of a video data acquisition and replay system (Texas Tech University, 2004-05) St. Helene, Sigi Jessica
This thesis applies the waterfall development model to design a bespoke system for testing video signals in a test lab.
The thesis describes in depth the requirements, design, and implementation of the Video Data Acquisition and Replay (VIDAR) system. VIDAR allows video signals to be fed into a hardware system, downloaded onto a computer, and output to a video device such as a television.

Item Space frequency quantization in video compression (Texas Tech University, 2004-05) Pai, Sunil Subrao
The recent explosion in digital video storage and delivery has provided strong motivation for high-performance video compression. One second of uncompressed video would require about 28 MB of disk space (30 fps × 921,600 bytes per frame). For today's PCs, a data throughput of 10 MB per second is considered fast, and the problem is even more significant on the web. With current storage devices and available bandwidth, the only practical way to share video is to compress it, and the higher the compression, the better. Over the past decade, wavelets have been used successfully to solve many difficult problems requiring transform-domain processing, including image compression (e.g., JPEG 2000). This thesis attempts to use the wavelet transform for video compression in place of the traditional discrete cosine transform (DCT)-based technique.

Item Video compression in signal-dependent noise (Texas Tech University, 1996-12) Upadhya, Ashwin Kumar
This work investigates the performance of video compression techniques in the presence of signal-dependent noise. The signal-dependent noise sources most commonly encountered are film-grain noise and speckle. Film-grain noise degradation occurs when a photographic film is scanned for digitization [6]. All coherent imaging techniques, such as synthetic aperture radar (SAR), laser-illuminated imagery, astronomical imagery, and ultrasonic medical imagery, are affected by speckle. Noise in video affects not only its quality but also its compressibility.
For archiving purposes in applications such as medical imagery, it is of utmost importance to improve both the quality of the video and the achievable compression, and this work investigates techniques for doing so with such applications in mind. Because there is no real consensus yet on the "best" measure of output video quality, this work uses the standard mean square error, the log mean square error, the signal-to-noise ratio, and a perceptual mean square error (modeled on the human visual system).
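Several abstracts in this listing evaluate quality with the same fidelity measures (mean square error, log mean square error, signal-to-noise ratio). As a closing sketch, these can be computed as below; the helper names are illustrative, and a perceptual MSE is not shown since it would additionally require a human-visual-system weighting model.

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference frame and a distorted frame."""
    d = ref.astype(float) - dist.astype(float)
    return float(np.mean(d * d))

def log_mse(ref, dist):
    """Log mean squared error in dB-style form: 10 * log10(MSE)."""
    return 10.0 * float(np.log10(mse(ref, dist)))

def snr_db(ref, dist):
    """Signal-to-noise ratio in dB: signal power over error power."""
    signal_power = float(np.mean(ref.astype(float) ** 2))
    return 10.0 * float(np.log10(signal_power / mse(ref, dist)))
```

For example, a constant frame of value 100 distorted by adding 1 everywhere has an MSE of exactly 1 and an SNR of 40 dB.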