Multiplexing Of AVS China Video With AAC Audio Bit Streams & De-multiplexing With Lip Synchronization During Playback

Date

2011-03-03

Authors

Publisher

Electrical Engineering

Abstract

AVS China is the latest digital video coding standard released by the AVS working group of China and has been shown to be superior to earlier video coding standards such as H.261, MPEG-1, and MPEG-2 in terms of coding efficiency and complexity. The AVS standard employs modern video coding tools that mainly target standard-definition (SD) and high-definition (HD) video compression, and it aims to achieve coding efficiency similar to that of H.264/AVC with reduced complexity. The AVS video standard was developed for broadcast and storage-media applications such as digital television and digital video disks (DVD and high-definition disks), as well as broadband network multimedia applications such as video conferencing, video on demand, and IPTV. For meaningful delivery of multimedia content to the end user, an audio stream must be associated with the video stream. Among the various audio compression schemes, MPEG-2/4 advanced audio coding (AAC) is a state-of-the-art audio coding algorithm standardized by the ISO/IEC MPEG (Moving Picture Experts Group) committee. The audio quality of an AAC bit stream is superior to that of earlier audio coding standards such as MPEG-1/2 Layer 3 and AC-3, which were widely used to deliver audio content at very low bit rates. Using AVS for video and AAC for audio when transmitting digital multimedia content over a broadcast network therefore lets end users benefit from both of these leading technologies.

For proper transmission of multimedia content, the video and audio streams cannot be sent separately; they must be multiplexed before transmission. The objective of this thesis is to propose an effective method for multiplexing AVS video and AAC audio elementary streams, and for de-multiplexing the combined stream at the receiver while achieving lip synchronization between the audio and video streams during playback. Since both streams are organized frame by frame, frame numbers are used as the synchronization information during multiplexing, which helps achieve lip synchronization during playback. Two layers of packetization are adopted for multiplexing the video and audio streams before transmission, and the synchronization information is embedded in the headers of the first layer. The packetization layers conform to the MPEG-2 systems standard, which meets the requirements of various transmission channels. To prevent buffer underflow or overflow at the receiver, playback time is chosen as the reference criterion: audio and video data packets are placed in the multiplexed stream according to their playback times. The advantages and limitations of the proposed method are discussed in detail.
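The abstract does not include an implementation, but the following minimal Python sketch illustrates the two ideas it describes: a first packetization layer whose header carries the frame number as synchronization information, and a multiplexer that interleaves audio and video packets in order of their playback time so that receiver buffers neither underflow nor overflow. The frame rate, audio sampling rate, header layout, and all names (AVFrame, pack_layer1, multiplex) are assumptions made for illustration, not the thesis's actual design; the real packetization conforms to the MPEG-2 systems standard, which this sketch only loosely mirrors.

    # Hypothetical sketch of the playback-time-driven multiplexer described in
    # the abstract. All names, rates, and header fields are illustrative.
    from dataclasses import dataclass
    import heapq
    import struct

    VIDEO_FPS = 25.0            # assumed AVS video frame rate
    AAC_SAMPLES_PER_FRAME = 1024
    AAC_SAMPLE_RATE = 48000.0   # assumed audio sampling rate

    @dataclass
    class AVFrame:
        stream: str      # "video" or "audio"
        frame_no: int    # frame number used as lip-sync reference
        payload: bytes   # compressed AVS or AAC frame

        @property
        def playback_time(self) -> float:
            """Presentation time in seconds derived from the frame number."""
            if self.stream == "video":
                return self.frame_no / VIDEO_FPS
            return self.frame_no * AAC_SAMPLES_PER_FRAME / AAC_SAMPLE_RATE

    def pack_layer1(frame: AVFrame) -> bytes:
        """First packetization layer: the header carries a stream id and the
        frame number, which the de-multiplexer uses to realign audio and
        video for lip synchronization."""
        stream_id = 0xE0 if frame.stream == "video" else 0xC0
        header = struct.pack(">BIH", stream_id, frame.frame_no,
                             len(frame.payload))
        return header + frame.payload

    def multiplex(video_frames, audio_frames):
        """Interleave layer-1 packets strictly by playback time so packets
        arrive roughly when they are needed and buffers stay bounded."""
        ordered = heapq.merge(video_frames, audio_frames,
                              key=lambda f: f.playback_time)
        return b"".join(pack_layer1(f) for f in ordered)

    if __name__ == "__main__":
        video = [AVFrame("video", n, b"\x00" * 100) for n in range(50)]
        audio = [AVFrame("audio", n, b"\x00" * 40) for n in range(60)]
        stream = multiplex(video, audio)
        print(len(stream), "bytes in multiplexed stream")

Packets are emitted in nondecreasing playback-time order, which is the buffer-control criterion the abstract mentions; a second packetization layer (e.g. fixed-size transport packets) would wrap this output in a full MPEG-2 systems multiplex.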
