Publisher: CRC Press, 2006, 645 pp.

A brief review of the history and the state of the art of research in the field reveals the fundamental concepts, principles and techniques used in image data compression for storage and visual communications. An important goal set fairly early by forerunners in image data compression is to minimize the statistical (including source coding, spatio-temporal and inter-scale) and psychovisual (or perceptual) redundancies of the image data, either to comply with given storage or communications bandwidth restrictions with the best possible picture quality, or to provide a given picture quality of service with the lowest possible amount of data or bit rate. This goal helped to set the course of the field and raised a series of widely researched issues, which have inspired and, in many ways, frustrated generations of researchers. Some of these issues and associated problems are better researched, understood and solved than others. Using information theory and optimization techniques, we understand reasonably well the definition of statistical redundancy and the theoretical lower bound set by Shannon's entropy in lossless image and video coding. We have modelled natural image data statistically fairly well, which has led to various optimal or suboptimal compression techniques in the least-mean-square sense. We routinely apply rate-distortion theory with the mean squared error (MSE) as the distortion measure in the design of constant-bit-rate coders. We have pushed the performance of a number of traditional compression techniques, such as predictive and transform coding, close to their limits in terms of decorrelation and energy-packing efficiency. Motion-compensated prediction has been thoroughly investigated for inter-frame coding of video and image sequences, leading to a number of effective and efficient algorithms used in practical systems.
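As a concrete illustration of the Shannon entropy bound mentioned above, the sketch below estimates the first-order entropy of an image's pixel histogram. The function name and the toy image are illustrative choices, not taken from the book; real images have strong inter-pixel dependence, so this memoryless bound is generally pessimistic compared with what context-based lossless coders achieve.

```python
import numpy as np

def entropy_bits_per_pixel(img):
    """First-order Shannon entropy H = -sum(p * log2(p)) of the pixel
    histogram: a lower bound, in bits per pixel, for memoryless lossless
    coding of this source."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size          # probabilities of symbols actually present
    return float(-np.sum(p * np.log2(p)))

# A toy 8-bit image with two equiprobable grey levels: the bound says
# no memoryless lossless code can go below 1 bit/pixel here.
img = np.array([[0, 255], [255, 0]], dtype=np.uint8)
```

Exploiting spatial correlation (e.g., by coding prediction residuals instead of raw pixels) is exactly how practical lossless coders get below this first-order figure on natural images.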
In model-, object- or segmentation-based coding, we have been trying to balance bit allocation between the coding of model parameters and that of the residual image, but we have yet to get it right. Unlike classical compression algorithms, techniques based on matching pursuit, fractal transforms and projection onto convex sets are recursive, and encode transform or projection parameters instead of either pixel or transform coefficient values. Nevertheless, they have so far failed to live up to the great expectations placed on them in terms of rate-distortion performance in practical coding systems and applications. We have long since realized that, without noticeable distortion when pictures are viewed by human subjects, much higher compression ratios can be achieved than by the best lossless coding techniques or the theoretical lower bound set by information theory. Various adaptive quantization and bit-allocation techniques and algorithms have been investigated to incorporate aspects of the human visual system (HVS), most of which focus on spatial contrast sensitivity and masking effects. Various visually weighted distortion measures have also been explored in either performance evaluation or rate-distortion optimization of image or video coders. Limited investigations have been conducted into constant-quality coder design, impeded by the lack of a commonly accepted quality metric which correlates well with subjective or perceived quality indices, such as the mean opinion score (MOS). The question "What's wrong with mean-squared error?" has long been asked of MSE and of its derivatives, such as the peak signal-to-noise ratio (PSNR), as quality or distortion measures. Nonetheless, credible and widely accepted alternative perceptually based quantitative quality and/or impairment metrics eluded us until very recently.
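To make the critique of MSE and PSNR concrete, the following sketch computes both measures for a pair of 8-bit images; the function names and the toy example are illustrative, not taken from the book. The weakness the question above points at is that these measures treat every pixel error alike, so perceptually very different distortions (a faint uniform offset vs. a highly visible localized artifact of the same energy) can score identically.

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between two images of equal shape."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB, PSNR = 10*log10(peak^2 / MSE)."""
    e = mse(ref, dist)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / e)

# Two 8-bit "images" differing by a constant offset of 4 grey levels:
# MSE = 16, PSNR = 10*log10(255^2 / 16), roughly 36.1 dB.
ref = np.full((8, 8), 128, dtype=np.uint8)
dist = ref + 4
```

Concentrating the same error energy into a single blocky patch would leave both numbers unchanged while looking much worse, which is why perceptual metrics weight errors by visibility instead.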
Consequently, attempts and claims of providing users with guaranteed or constant-quality visual services have been by and large unattainable or unsubstantiated. Lacking HVS-based quantitative quality or impairment metrics, more often than not we opt for a much higher bit rate in quality-critical visual service applications than is necessary, so that users carry extra costs; just as likely, a coding strategy may reduce one type of coding distortion or artifact at the expense of introducing or enhancing other types. One of the most challenging questions begging for an answer is how to define psychovisual redundancy for lossy image and video coding, if it can ever be defined quantitatively in a way similar to the statistical redundancy defined for lossless coding. Such a definition would help to set the theoretical lower bound for lossy image data coding at a just-noticeable distortion level compared with the original. This book attempts to address two of the issues raised above which form a critical part of theoretical research and practical system development in the field, i.e., HVS-based perceptual quantitative quality/impairment metrics for digitally coded pictures (i.e., images and videos), and perceptual picture coding.

The book consists of three parts, i.e., Part I, Fundamentals; Part II, Testing and Quality Assessment of Digital Pictures; and Part III, Perceptual Coding and Post-Processing. Part I comprises the first three chapters, covering a number of fundamental concepts, theories, principles and techniques underpinning the issues and topics addressed by this book. Chapter 1, Digital Picture Compression and Coding Structure, by Hwang, Wu and Rao, provides an introduction to digital picture compression, covering basic issues and techniques along with popular coding structures, systems and international standards for compression of images and videos. Fundamentals of human vision and vision modeling are presented by Montag and Fairchild in Chapter 2, which forms the foundation of the materials and discussions related to the HVS and its applications presented in Parts II and III on perceptual quality/impairment metrics, image/video coding and visual communications. The most recent achievements and findings in vision research that are relevant to digital picture coding engineering practice are included. Various digital image and video coding/compression algorithms and systems introduce highly structured coding artifacts or distortions, which are different from those in their analog counterparts. It is important to analyze and understand these coding artifacts in both subjective and objective quality assessment of digitally encoded images or video sequences. In Chapter 3, a comprehensive classification and analysis of the various coding artifacts in digital pictures coded using well-known techniques is presented by Yuen.

Part II of this book consists of eight chapters dealing with a range of topics regarding picture quality assessment criteria, subjective and objective methods and metrics, testing procedures, and the development of international standards in the field. Chapter 4, Video Quality Testing, by Corriveau, provides an in-depth discussion of subjective assessment methods and techniques, experimental design, and international standard test methods for digital video images in contrast to objective assessment methods, highlighting a number of critical issues and findings. Commonly used test video sequences are presented. The chapter also covers test criteria, test procedures and related issues for various applications in digital video coding and communications. Although subjective assessment methods have been well documented in the literature and standardized by the international standards bodies, there has been a renewed …

Contents:
Picture Coding and Human Visual System Fundamentals
Digital Picture Compression and Coding Structure
Fundamentals of Human Vision and Vision Modeling
Coding Artifacts and Visual Distortions
Picture Quality Assessment and Metrics
Video Quality Testing
Perceptual Video Quality Metrics — A Review
Philosophy of Picture Quality Scale
Structural Similarity Based Image Quality Assessment
Vision Model Based Digital Video Impairment Metrics
Computational Models for Just-Noticeable Difference
No-Reference Quality Metric for Degraded and Enhanced Video
Perceptual Coding and Processing of Digital Pictures
HVS Based Perceptual Video Encoders
Perceptual Image Coding
Foveated Image and Video Coding
Artifact Reduction by Post-Processing in Image Compression
Reduction of Color Bleeding in DCT Block-Coded Video
Error Resilience for Video Coding Service
Critical Issues and Challenges
A VQM Performance Metrics