Browsing by Subject "Multimedia systems"
Now showing 1 - 8 of 8
Item
Architectural techniques to accelerate multimedia applications on general-purpose processors (2001-08)
Talla, Deependra, 1975-; John, Lizy Kurian

General-purpose processors (GPPs) have been augmented with multimedia extensions to improve performance on multimedia-rich workloads. These extensions operate in a single instruction multiple data (SIMD) fashion to extract data-level parallelism in multimedia and digital signal processing (DSP) applications. This dissertation comprises a comprehensive evaluation of the execution characteristics of multimedia applications on SIMD-enhanced GPPs, detection of bottlenecks in that execution, and the design and implementation of architectural techniques that eliminate or alleviate those bottlenecks to accelerate multimedia applications. Several bottlenecks are identified in the processing of SIMD-enhanced multimedia and DSP applications on GPPs. Approximately 75-85% of instructions in the dynamic instruction stream of media workloads perform no useful computation; they merely support the useful computation through address generation, address transformation/data reorganization, loads/stores, and loop branches. This leads to underutilization of the SIMD computation units, with only 1-12% of peak SIMD throughput achieved. The dissertation proposes hardware support to execute these overhead/supporting instructions efficiently by overlapping them with the useful computation instructions. A 2-way GPP with SIMD extensions augmented with the proposed MediaBreeze hardware significantly outperforms a 16-way SIMD GPP without MediaBreeze hardware on multimedia kernels. On multimedia applications, a 2-/4-way SIMD GPP augmented with MediaBreeze hardware is superior to a 4-/8-way SIMD GPP without it. These improvements come at an area cost of less than 0.3% of current GPPs and a power consumption of less than 1% of total processor power, without lengthening the processor's critical path.
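As a rough illustration of the imbalance this abstract quantifies, consider a hand-vectorized SSE dot product; the overhead/useful labeling below is our own sketch, not code or measurements from the dissertation, and the MediaBreeze hardware itself is not shown:

    /* Illustrative only: most dynamic instructions in this loop are
     * overhead (address generation, loads/stores, loop branches), not
     * useful SIMD arithmetic. */
    #include <xmmintrin.h>

    float dot(const float *a, const float *b, int n) {
        __m128 acc = _mm_setzero_ps();
        int i;
        for (i = 0; i + 4 <= n; i += 4) {              /* overhead: loop branch       */
            __m128 va = _mm_loadu_ps(a + i);           /* overhead: addr gen + load   */
            __m128 vb = _mm_loadu_ps(b + i);           /* overhead: addr gen + load   */
            acc = _mm_add_ps(acc, _mm_mul_ps(va, vb)); /* useful: SIMD multiply-add   */
        }
        float lanes[4];
        _mm_storeu_ps(lanes, acc);                     /* overhead: store/reorganize  */
        float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; i++)                             /* scalar tail for n % 4 != 0  */
            sum += a[i] * b[i];
        return sum;
    }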
Item
Bayesian motion estimation using video frame differences (Texas Tech University, 1998-05)
Panturu, Daniel I.

In modern video compression, two major research directions are under intense scrutiny because of their potential to significantly improve existing techniques. One focuses on new orthogonal transformations of the video signal, such as wavelets, to repack signal energy more efficiently. The second concentrates on motion prediction and compensation, to remove or reduce temporal redundancy in successive video frames. In this respect, motion estimation, which may refer to image-plane motion (2-D motion) or object motion (3-D motion), is one of the fundamental problems in digital video processing and has been the subject of much research since the early 1980s. Because of the ill-posed nature of the problem, motion estimation algorithms need supplementary provisions (models) about the structure of the 2-D motion field. In this study, 2-D motion estimation is formulated as a Bayesian estimation problem, and a stochastic smoothness constraint is introduced by modeling the 2-D motion vector field in terms of a Gibbs distribution. The motion vector model proposed in this dissertation is a globally smooth model based on vector Markov random fields, and the estimation criterion is maximum a posteriori (MAP) probability, in which the a posteriori probability of motion given the input data is maximized. In contrast with other studies, successive video frame differences are used to estimate the motion. The MAP estimation is performed through simulated annealing, in which the solution space is sampled by means of the Gibbs sampler. Bad data are eliminated using a variant of local outlier rejection. Experimental results of applying the proposed simulated annealing algorithm and gradient-descent-based algorithms to natural and computer-generated images with natural and synthetic motion are compared.
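In symbols (our notation, not necessarily the dissertation's), a MAP criterion with a Gibbs smoothness prior takes the standard form

    \hat{d} = \arg\max_{d} P(d \mid g)
            = \arg\max_{d} P(g \mid d)\, P(d),
    \qquad
    P(d) = \frac{1}{Z} \exp\Big( -\sum_{c \in \mathcal{C}} V_c(d) \Big),

where d is the 2-D motion vector field, g the observed frame differences, Z a normalizing constant, and the clique potentials V_c over the neighborhood system C penalize non-smooth neighboring motion vectors. Simulated annealing then maximizes P(d | g) by drawing samples with the Gibbs sampler at a gradually decreasing temperature.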
Item
Building and maintaining overlay networks for bandwidth-demanding applications (2005)
Kim, Min Sik; Lam, Simon S.

The demands of Internet applications have grown significantly in the resources and types of services they require. Overlay networks have emerged to accommodate such applications by implementing more services on top of IP (Internet Protocol). However, while overlay networks are successful in circumventing limitations of IP, building and maintaining an overlay network remains challenging. In an overlay network, participating hosts are virtually fully connected through the underlying Internet, but since the quality of overlay connections varies, performance depends on which connections are chosen. Maintaining a "good" overlay topology is therefore crucial to achieving high performance. To demonstrate how much performance can be gained through topology changes, a distributed algorithm to build an overlay multicast tree is proposed for streaming media distribution; the algorithm finds an optimal tree that maximizes the average receiver bandwidth under an abstract network model. However, increasing bandwidth does not necessarily yield a better overlay topology; interference between overlay connections must also be taken into account. Since such interference occurs when different overlay connections pass through a congested link simultaneously, detecting congestion shared by multiple overlay connections is necessary to avoid bottlenecks. For shared congestion detection, a novel technique called DCW (Delay Correlation with Wavelet denoising) is proposed. Previous techniques for detecting shared congestion have limitations when applied to overlay networks: they assume a common source or destination node, drop-tail queueing, or a single point of congestion. DCW is applicable to any pair of Internet paths without such limitations. It employs a signal-processing method, wavelet denoising, to separate queueing delay caused by network congestion from various other delay variations. The technique is evaluated through both simulations and Internet experiments, which show that for paths with a common synchronization point, DCW converges faster and is more accurate while using fewer packets than previous techniques. Furthermore, DCW is robust and accurate without a synchronization point; more specifically, it can tolerate a synchronization offset of up to one second between two packet flows.
Because DCW detects shared congestion between a pair of paths, scalability is a concern when it is used in a large-scale overlay network: clustering N paths with pairwise tests would require O(N²) time. To address this, a scalable approach to clustering Internet paths using multidimensional indexing is presented. By storing per-path data in a multidimensional space indexed with a tree-like structure, the computational complexity of clustering is reduced to O(N log N). The indexing overhead can be further improved by reducing the dimensionality of the space through the wavelet transform; computation cost stays low because the same wavelet transform serves both denoising in DCW and dimensionality reduction. The approach is evaluated using simulations and found to be effective for large N, and the tradeoff between indexing overhead and clustering accuracy is shown empirically. As a case study, an algorithm that improves overlay multicast topology is designed. Because overlay multicast forwards data without support from routers, data may be delivered multiple times over the same physical link, causing a bottleneck; this problem is more serious for applications demanding high bandwidth, such as multimedia distribution. Although such bottlenecks can be removed by overlay topology changes, a naïve approach may create bottlenecks in other parts of the network. The proposed algorithm removes all bottlenecks caused by the redundant data delivery of overlay multicast, detecting them with DCW. When the source rate is constant and the available bandwidth of each link is at least that rate, the algorithm guarantees that every node receives at the full source rate. Simulation results show that even in a network with a dense receiver population, the algorithm finds a tree that satisfies all receiving nodes, while other heuristic-based approaches often fail. A similar approach to finding and removing bottlenecks through topology changes applies to other types of overlay networks. This research will enable bandwidth-demanding applications to build more efficient overlay networks and achieve higher throughput.
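A minimal sketch of the DCW idea, under our own assumptions: denoise each path's delay series with a one-level Haar wavelet (the dissertation's denoising is more elaborate), then correlate the residuals; function names, the threshold parameters, and the single-level transform are all illustrative:

    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One-level Haar soft-threshold denoising, in place; n must be even.
     * Small detail coefficients are treated as measurement noise. */
    static void haar_denoise(double *x, size_t n, double thresh) {
        double approx[n / 2], detail[n / 2];
        for (size_t i = 0; i < n / 2; i++) {
            approx[i] = (x[2*i] + x[2*i + 1]) / sqrt(2.0);
            detail[i] = (x[2*i] - x[2*i + 1]) / sqrt(2.0);
            double d = fabs(detail[i]) - thresh;           /* soft threshold */
            detail[i] = (d > 0 ? d : 0) * (detail[i] < 0 ? -1.0 : 1.0);
        }
        for (size_t i = 0; i < n / 2; i++) {               /* inverse transform */
            x[2*i]     = (approx[i] + detail[i]) / sqrt(2.0);
            x[2*i + 1] = (approx[i] - detail[i]) / sqrt(2.0);
        }
    }

    /* Pearson correlation coefficient of two equal-length series. */
    static double correlate(const double *a, const double *b, size_t n) {
        double ma = 0, mb = 0, saa = 0, sbb = 0, sab = 0;
        for (size_t i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        for (size_t i = 0; i < n; i++) {
            saa += (a[i] - ma) * (a[i] - ma);
            sbb += (b[i] - mb) * (b[i] - mb);
            sab += (a[i] - ma) * (b[i] - mb);
        }
        return sab / sqrt(saa * sbb);
    }

    /* Two paths are flagged as sharing congestion if their denoised
     * delay series remain strongly correlated. Modifies inputs in place. */
    int shares_congestion(double *delay1, double *delay2, size_t n,
                          double noise_thresh, double corr_thresh) {
        haar_denoise(delay1, n, noise_thresh);
        haar_denoise(delay2, n, noise_thresh);
        return correlate(delay1, delay2, n) > corr_thresh;
    }

    int main(void) {
        /* Synthetic delay samples with a common low-frequency swing. */
        double d1[8] = {1, 1.1, 3, 3.1, 1, 1.05, 3, 3.2};
        double d2[8] = {2, 2.1, 4, 4.1, 2, 2.05, 4, 4.1};
        printf("shared: %d\n", shares_congestion(d1, d2, 8, 0.05, 0.8));
        return 0;
    }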
Item
DCT domain video foveation and transcoding for heterogeneous video communication (2002-05)
Liu, Shizhong; Bovik, Alan C. (Alan Conrad), 1958-

Item
Efficient management of large metadata catalogs in a ubiquitous computing environment (2012-05)
Beatty, Dan; Lopez-Benitez, Noe; Urban, Susan D.; Sill, Alan F.; Smith, Philip W.

Trends in experimental sciences such as astrophysics have led to many critically needed, non-normalized, and massive metadata catalogs that organize similarly large collections of recorded photographic and spectrographic observations. Observations of the night sky are best presented using a data model that conveys the observations, the analysis, the objects contained within the observations, and the results of analysis pertaining to those objects. Such a model is devised and referred to as the internet Flexible Image Transport System (iFITS). In addition, a set of mapping functions is defined to transform instances of the Sloan Digital Sky Survey into instances of iFITS, along with a light-weight marshaling method to transfer data between server-side and mobile instances (a minimal marshaling sketch appears at the end of this listing). Furthermore, this dissertation explores four architectures to facilitate mining the metadata catalogs containing these observations: content management; software/infrastructure/platform as a service; a context-rule-engine-based request-response loop factory; and representational state transfer (REST) based query engines.

Item
Rock star: a computer modeling and animation portfolio (Texas Tech University, 2004-05)
Davis, Brett Allan

Not available

Item
Software design and implementation of a video data acquisition and replay system (Texas Tech University, 2004-05)
Helene, Sigi Jessica St.

This thesis uses the waterfall model to design a bespoke system for testing video signals in a test lab. It describes in depth the requirements, design, and implementation of the Video Data Acquisition and Replay (VIDAR) system. VIDAR allows video signals to be fed into a hardware system, downloaded onto a computer, and output to a video system such as a television.

Item
The effects of conceptual tempo and learning styles on the reflective thinking and decision making of principals in a multimedia case simulation (Texas Tech University, 1999-12)
White, David Rutter

Multimedia-based case simulation programs present users with case-based dilemmas of practice built from multimedia and computer-based design elements. Case-based dilemmas give individuals an opportunity to bridge theory with practice, allowing them to develop the reflective thinking and problem-solving skills needed for effective decision making. Experience gained from problems encountered in case-based dilemmas proves invaluable because it can be interfaced with individuals' knowledge bases and previous experiences (Kowalski, 1995). The case simulation prototype used in this study applies a "technology-integrated, case-based design to help school leaders better scrutinize the complex leadership challenges they face in the day-to-day managing and leading of schools" (Claudet, 1998b, p. 338). The purpose of this study was to examine the technology-cognition connection of a multimedia-based case simulation program designed for principals and administrators. The effects of related cognitive styles on participants' knowledge application and specific decision-making tasks during the case simulation process were examined, to determine whether cognitive styles help predict the decision-making task performances of case simulation participants and whether the intended experiential outcomes for the targeted users of the prototype were attained.
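Referring back to the iFITS entry above: a minimal, hypothetical sketch of the kind of light-weight, fixed-layout marshaling that abstract mentions. The field names and byte layout are invented for illustration; the dissertation does not specify this format:

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        double   ra, dec;   /* observed position, degrees (invented fields) */
        uint32_t obs_id;    /* observation identifier                       */
    } Observation;

    /* Pack into a fixed-layout 20-byte buffer for transfer between a
     * server-side instance and a mobile client; unmarshaling is the
     * symmetric sequence of memcpy calls in the other direction. */
    size_t marshal(const Observation *o, unsigned char *buf) {
        memcpy(buf,      &o->ra,     sizeof o->ra);
        memcpy(buf + 8,  &o->dec,    sizeof o->dec);
        memcpy(buf + 16, &o->obs_id, sizeof o->obs_id);
        return 20;
    }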