Codebook optimization in vector quantization

Date

1999-12

Publisher

Texas Tech University

Abstract

Digital image processing techniques were introduced early in the twentieth century. One of the first applications was improving digitized newspaper pictures sent by submarine cable between London and New York in the 1920s, which reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. In 1964, the Jet Propulsion Laboratory began using computers to improve image quality [5]. Since 1964, the field of image processing has grown vigorously. It has become a prime area of research not only in electrical engineering but also in many other disciplines, such as computer science, health science, and geography. However, representing a digitized image may require an enormous amount of data. Some images, such as medical images, have higher resolution and therefore require even larger amounts of memory.

Due to the vast amount of data associated with images and video, compression is a key technology for reducing the amount of data required to represent a digital image. A digital image can be compressed because it contains data redundancies; when these redundancies are reduced or eliminated, the data is compressed.

There are many compression methods, and they are normally classified into two main categories: lossless and lossy compression. This thesis focuses on vector quantization, a lossy compression technique. Based on Shannon's theory, coding systems can perform better if they operate on vectors, or groups of symbols, rather than on individual symbols or samples [9]. The objective of this research was to compress images using LBG-VQ [21], both in the spatial domain and in the wavelet transform domain [6], and to compare it with other recently developed vector quantization algorithms. (The LBG algorithm was introduced by Linde, Buzo, and Gray.)
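As a concrete illustration (not taken from the thesis itself), the following is a minimal Python sketch of LBG codebook training, assuming NumPy; the function name lbg_codebook, the splitting perturbation, and the stopping threshold are illustrative choices rather than parameters from this work.

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, epsilon=1e-5, perturb=0.01):
    """Train a VQ codebook with the LBG splitting procedure.

    vectors: (N, d) array of training vectors (e.g., image blocks).
    codebook_size: desired number of codewords (a power of two).
    """
    # Start from the centroid of all training vectors.
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < codebook_size:
        # Split every codeword into two slightly perturbed copies.
        codebook = np.vstack([codebook * (1 + perturb),
                              codebook * (1 - perturb)])
        prev_distortion = np.inf
        while True:
            # Nearest-neighbor partition: assign each training vector
            # to its closest codeword (squared Euclidean distance).
            d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            distortion = d2[np.arange(len(vectors)), labels].mean()
            # Centroid update: each codeword becomes the mean of its cell.
            for k in range(codebook.shape[0]):
                cell = vectors[labels == k]
                if len(cell) > 0:
                    codebook[k] = cell.mean(axis=0)
            # Stop when the relative distortion improvement is small.
            if distortion == 0 or (prev_distortion - distortion) / distortion < epsilon:
                break
            prev_distortion = distortion
    return codebook
```

For image coding, the training vectors would typically be small pixel blocks (e.g., 4x4), and each block is then encoded as the index of its nearest codeword, which is what yields the compression.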
