2D+t Wavelet Domain Video Watermarking

Deepayan Bhowmik and Charith Abhayaratne
Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield S1 3JD, UK

Advances in Multimedia, Volume 2012 (2012), Article ID 973418, 19 pages. doi:10.1155/2012/973418. Research Article.

Received 29 November 2011; Revised 20 January 2012; Accepted 21 January 2012. Academic Editor: Chong Wah Ngo.

Copyright © 2012 Deepayan Bhowmik and Charith Abhayaratne. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel watermarking framework for scalable coded video that improves the robustness against quality scalable compression is presented in this paper. Unlike the conventional spatial-domain (t + 2D) watermarking scheme, where motion compensated temporal filtering (MCTF) is performed on the spatial frame-wise video data to decompose the video, the proposed framework applies the MCTF in the wavelet domain (2D + t) to generate the coefficients in which the watermark is embedded. Robustness against scalable content adaptation, such as Motion JPEG 2000, MC-EZBC, or H.264-SVC, is reviewed for various combinations of motion compensated 2D + t + 2D decomposition using the proposed framework.
The MCTF is improved by modifying the update step to follow the motion trajectory in the hierarchical temporal decomposition, by using direct motion vector fields in the update step and implied motion vectors in the prediction step. The results show smaller embedding distortion in terms of both peak signal-to-noise ratio and flickering metrics compared to frame-by-frame video watermarking, while the robustness against scalable compression is improved by using 2D + t over the conventional t + 2D domain video watermarking, particularly for blind watermarking schemes where the motion is estimated from the watermarked video.

1. Introduction

Several attempts have been made to extend image watermarking algorithms to video by using them either on a frame-by-frame basis or on 3D decomposed video. The initial attempts at video watermarking used frame-by-frame embedding [1-4], due to its simplicity of implementation using image watermarking algorithms. Such watermarking algorithms embed in selected frames located at fixed intervals to make them robust against frame-dropping-based temporal adaptations of video. In this case, each frame is treated separately as an individual image; hence, any image watermarking algorithm can be adopted to achieve the intended robustness. However, frame-by-frame watermarking schemes often perform poorly in terms of flickering artefacts and robustness against various video processing attacks, including temporal desynchronization, video collusion, and video compression attacks. In order to address some of these issues, the video temporal dimension is exploited using different transforms, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), or the discrete wavelet transform (DWT). These algorithms decompose the video by performing a spatial 2D transform on individual frames followed by a 1D transform in the temporal domain.
Various transforms have been proposed for 3D decomposed watermarking schemes, such as the 3D DFT domain [5], the 3D DCT domain [6], and, more popularly, multiresolution 3D DWT domain watermarking [7, 8]. A multilevel 3D DWT is performed by recursively applying the above-mentioned procedure to the low-frequency spatiotemporal subband. Various watermarking methods similar to image watermarking are then applied to suitable subbands to balance imperceptibility and robustness. 3D decomposition-based methods overcome issues such as temporal desynchronization, video format conversion, and video collusion. However, such naive subband decomposition-based embedding strategies, which ignore the motion content of the sequence during watermark embedding, often result in unpleasant flickering visual artefacts. The amount of flickering in watermarked sequences varies according to the texture, colour, and motion characteristics of the video content as well as the watermark strength and the choice of frequency subband used for watermark embedding. At the same time, these schemes are also fragile to video compression attacks, which consider the motion trajectory during compression coding. In order to address the issues stated above, we have extended image watermarking techniques to video by considering the motion and texture characteristics of the video sequence using wavelet-based motion compensated 2D + t + 2D filtering. The proposed approach evolved from the motion compensated temporal filtering- (MCTF-) based wavelet domain video decomposition concept. MCTF has been successfully used in wavelet-based scalable video coding research [9, 10]. The idea of MCTF originated from 3D subband wavelet decomposition, which is merely an extension of the spatial domain transform into the temporal domain [11]. However, 3D wavelet decomposition alone does not decouple motion information; this is addressed by temporal filtering along the motion trajectories.
This MCTF-based video decomposition technique opens a new avenue in transform domain video watermarking. A few attempts have already been made to investigate the effect of motion in video watermarking by incorporating motion compensation into the watermarking algorithms [12-14]. In these investigations, the sequence is first temporally decomposed into Haar wavelet subbands using MCTF and then spatially decomposed using the 2D DCT, resulting in the decomposition scheme widely known as t + 2D. In this paper, we aim to advance further along the line of MCTF-based wavelet coding and propose a video watermarking scheme that is robust against scalable content adaptation, such as Motion JPEG 2000, MC-EZBC, or H.264-SVC, while preserving imperceptibility. The apparent problems of the direct use of MCTF and t + 2D decompositions in watermarking are threefold. (1) In scalable video coding research, it is evident that videos with different texture and motion characteristics, and hence different spatial and temporal features, perform differently in the t + 2D domain [9] and its alternative, the 2D + t domain [15], where MCTF is performed on the 2D wavelet decomposition. Further, in 3D subband decomposition for video watermarking, MCTF is only required for the subbands where the watermarks are embedded. Hence, motion estimation and compensation at the full spatial resolution (the t + 2D case) add unnecessary complexity to the watermarking algorithm. (2) Conventional MCTF is focused on achieving higher compression and thus gives more attention to the prediction-lifting step. For watermarking, however, it is necessary to follow the motion trajectory of the content into the low-frequency temporal subband frames, in order to avoid motion mismatch in the update step of MCTF when these frames are modified by watermark embedding.
(3) The t + 2D structure offers better energy compaction in the low-frequency temporal subband, keeping most coefficient values very small or nearly zero in the high-frequency temporal subbands. This is very useful for compression but leaves very little room for watermark embedding in the high-frequency temporal subbands. Therefore, to remain robust, most of the MCTF domain watermarking schemes mentioned before embed the watermark in the lowpass temporal frames. On the other hand, 2D + t leaves more energy in the high-frequency subbands, which makes it possible to embed and recover the watermark robustly using the highpass temporal frames, improving the overall imperceptibility of the watermarked video. To overcome these shortcomings, we propose an MCTF-based 3D wavelet decomposition scheme for video sequences and offer a flexible, generalized 2D + t + 2D motion compensated temporal-spatial subband decomposition scheme using a modified MCTF for video watermarking. Using this framework, we study and analyze the merits and demerits of watermark embedding using various combinations of the 2D + t + 2D structure and propose new 2D + t video watermarking algorithms to improve the robustness against quality scalable video compression. The rest of the paper is organized as follows. In Section 2, the modified MCTF scheme is presented along with the new 2D + t + 2D subband decomposition framework. The video watermarking algorithms implemented with the different subband decomposition schemes are proposed in Section 3. The analysis of the framework is described in Section 4. The experimental results are shown and discussed in Section 5, followed by the conclusions in Section 6.

2. Motion Compensated Spatiotemporal Filtering

The generalized spatiotemporal decomposition scheme consists of two modules: (1) MCTF and (2) 2D spatial frequency decomposition.
To capture the motion information accurately, we have modified the commonly used lifting-based MCTF by tracking interframe pixel connectivity, and we use the 2D wavelet transform for spatial decomposition. In this section, we first describe the MCTF with implied motion estimation and then propose the 2D + t + 2D general framework.

2.1. MCTF with Implied Motion Estimation

We formulate the MCTF scheme with the focus on the motion trajectory-based update step as follows. Let $I_t$ be the video sequence, where $t$ is the time index in display order. We consider two consecutive frames $I_{2t}$ and $I_{2t+1}$ as the current frame ($c$) and the reference frame ($r$), respectively, following video coding terminology. In traditional motion estimation for lifting-based MCTF [9], the $I_{2t+1}$ frame is usually partitioned into nonoverlapping blocks, and for each block the motion is estimated from the $I_{2t}$ frame using a block matching algorithm. In this case, only two types of pixel connectivity are considered: (1) connected pixels and (2) unconnected pixels. When several pixels are connected to the same pixel in the reference frame, only one of them is categorized as a connected pixel. The temporal frames are derived using the subband analysis pair by replacing $I_{2t}$ with the low-frequency temporal frame (L) and $I_{2t+1}$ with the high-frequency temporal frame (H).

Connected pixels:
$$\mathrm{L}\bigl[m - \mathcal{H}_{c\to r}, n - \mathcal{V}_{c\to r}\bigr] = \frac{1}{\sqrt{2}} I_{2t+1}[m,n] + \frac{1}{\sqrt{2}} I_{2t}\bigl[m - \mathcal{H}_{c\to r}, n - \mathcal{V}_{c\to r}\bigr],$$
$$\mathrm{H}[m,n] = \frac{1}{\sqrt{2}} I_{2t+1}[m,n] - \frac{1}{\sqrt{2}} I_{2t}\bigl[m - \mathcal{H}_{c\to r}, n - \mathcal{V}_{c\to r}\bigr], \quad (1)$$
where $\mathcal{V}_{c\to r}$ and $\mathcal{H}_{c\to r}$ represent the motion vector fields: the vertical and horizontal displacements of the nonoverlapping blocks, respectively.

Unconnected pixels:
$$\mathrm{L}[m,n] = \sqrt{2}\, I_{2t}[m,n]. \quad (2)$$
For the unconnected pixels in $I_{2t+1}$, the scaled displaced frame differences are substituted to form the temporal high subband.
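As a numerical sanity check on the analysis pair (1)-(2), the following sketch treats the whole frame as a single fully connected block with one integer motion vector and uses wrap-around indexing in place of real boundary handling; the function name and simplifications are ours, not the paper's.

```python
import numpy as np

def haar_mctf_block(I2t, I2t1, dy, dx):
    """Connected-pixel Haar analysis pair of eq. (1) for one block that
    moves uniformly by (dy, dx) = (V_{c->r}, H_{c->r}).  Wrap-around
    indexing (np.roll) stands in for real boundary handling (toy only)."""
    # Warped current frame: value at [m, n] is I2t[m - dy, n - dx].
    I2t_mc = np.roll(I2t, shift=(dy, dx), axis=(0, 1))
    H = (I2t1 - I2t_mc) / np.sqrt(2.0)       # highpass frame of eq. (1)
    L_disp = (I2t1 + I2t_mc) / np.sqrt(2.0)  # L[m - dy, n - dx] of eq. (1)
    # Undo the displacement so L is stored on the I2t grid.
    L = np.roll(L_disp, shift=(-dy, -dx), axis=(0, 1))
    return L, H

# With zero motion this reduces to the plain temporal Haar transform, and
# the orthonormal pair preserves the total energy of the two input frames.
a = np.arange(16.0).reshape(4, 4)
b = a[::-1].copy()
L, H = haar_mctf_block(a, b, 0, 0)
```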
As stated in the introduction, such a traditional scheme concentrates on the prediction-lifting step in MCTF to reduce the prediction error in the high-frequency subband. This is useful in a compression scenario. In the case of watermarking, however, we must account for the object motion within the low-frequency temporal frames to avoid motion mismatch in the update step when these frames are modified by watermark embedding. To address this, we use MCTF with implied motion estimation, which provides the opportunity to embed the watermark in any chosen low- or high-frequency temporal frame. At the same time, as opposed to the traditional scheme, we consider the relative contributions of one-to-many connected pixels, which is important for capturing the motion information accurately during the MCTF operation. In the proposed scheme, the $I_{2t}$ frame is partitioned into nonoverlapping blocks, and for each block the vertical and horizontal displacements are quantified and represented as motion vector fields $\mathcal{V}_{c\to r}$ and $\mathcal{H}_{c\to r}$, respectively. Figure 1 shows an example of how four nonoverlapping blocks in the current frame ($I_{2t}$) move in different directions in the next frame ($I_{2t+1}$). In the $I_{2t}$ frame, each block can be one of two types, namely, inter- and intrablocks, where the motion is only estimated for the former block type. Similarly, in the $I_{2t+1}$ frame, any pixel can be one of three types, namely, one-to-one connected (point A), one-to-many connected (points B and C), and unconnected (point D), as shown in Figure 1, depending on its connectivity to pixels in the $I_{2t}$ frame. The connectivity follows the implied motion vector fields $\mathcal{V}_{c\leftarrow r}$ and $\mathcal{H}_{c\leftarrow r}$, which are simply the directional inverses of the original motion vector fields $\mathcal{V}_{c\to r}$ and $\mathcal{H}_{c\to r}$.
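The three pixel classes of Figure 1 can be illustrated by pushing every pixel of $I_{2t}$ along its block's motion vector and counting how many land on each pixel of $I_{2t+1}$; the sketch below uses our own hypothetical block layout and names, purely for illustration.

```python
import numpy as np

def connectivity_map(mv, frame_shape, block=2):
    """Count, for each pixel of I_{2t+1}, how many pixels of I_{2t} map
    onto it under the block motion field `mv`, given as
    {(block_row, block_col): (dy, dx)}.  The count classifies pixels as
    0 -> unconnected, 1 -> one-to-one, >1 -> one-to-many (cf. Figure 1).
    Layout and names are ours, for illustration only."""
    hits = np.zeros(frame_shape, dtype=int)
    rows, cols = frame_shape
    for (br, bc), (dy, dx) in mv.items():
        for m in range(br * block, (br + 1) * block):
            for n in range(bc * block, (bc + 1) * block):
                mm, nn = m + dy, n + dx
                if 0 <= mm < rows and 0 <= nn < cols:
                    hits[mm, nn] += 1
    return hits

# Four 2x2 blocks in a 4x4 frame; the top-right block moves left onto the
# top-left block's destination, leaving its own area uncovered.
mv = {(0, 0): (0, 0), (0, 1): (0, -2), (1, 0): (0, 0), (1, 1): (0, 0)}
hits = connectivity_map(mv, (4, 4))
# hits[0, 0] == 2 (one-to-many), hits[0, 2] == 0 (unconnected)
```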
Considering these block and pixel classifications, the lifting steps performing the temporal motion compensated Haar wavelet transform for pixels at positions $[m,n]$ in frames $I_{2t}$ and $I_{2t+1}$ (i.e., $I_{2t}[m,n]$ and $I_{2t+1}[m,n]$) are defined as follows.

Figure 1: Pixel connectivity in the $I_{2t}$ and $I_{2t+1}$ frames.

Forward Transform

The Prediction Step. For one-to-one connected pixels,
$$I'_{2t+1}[m,n] = I_{2t+1}[m,n] - I_{2t}\bigl[m + \mathcal{H}_{c\to r}, n + \mathcal{V}_{c\to r}\bigr]. \quad (3)$$
For one-to-many connected pixels,
$$I'_{2t+1}[m,n] = I_{2t+1}[m,n] - \frac{1}{J}\sum_{i=0}^{J-1} I_{2t}\bigl[m + \mathcal{H}^{i}_{c\to r}, n + \mathcal{V}^{i}_{c\to r}\bigr], \quad (4)$$
where $J$ is the total number of connections. For unconnected pixels,
$$I'_{2t+1}[m,n] = I_{2t+1}[m,n]. \quad (5)$$
The last case is similar to the no-prediction case used for intrablocks in conventional MCTF.

The Update Step. For interblocks, every pixel in an interblock is one-to-one connected with a unique pixel in $I_{2t+1}$. The update step is then computed as
$$I'_{2t}[m,n] = I_{2t}[m,n] + \frac{1}{2} I'_{2t+1}\bigl[m - \mathcal{H}_{c\leftarrow r}, n - \mathcal{V}_{c\leftarrow r}\bigr]. \quad (6)$$
For intrablocks, as there are no motion compensated connections with $I_{2t+1}$,
$$I'_{2t}[m,n] = I_{2t}[m,n]. \quad (7)$$
Finally, these lifting steps are followed by the normalization step:
$$I''_{2t}[m,n] = \sqrt{2}\, I'_{2t}[m,n], \qquad I''_{2t+1}[m,n] = \frac{1}{\sqrt{2}}\, I'_{2t+1}[m,n]. \quad (8)$$
The temporally decomposed frames $I''_{2t}$ and $I''_{2t+1}$ are the first-level low- and highpass frames and are denoted as the L and H temporal subbands. These steps are repeated for all frames in L to obtain the LL and LH subbands and continued to obtain the desired number of temporal decomposition levels.

Inverse Transform

For the inverse transform, the order of operation of the steps is reversed, as follows.
First, the decomposed coefficients are passed through an unnormalization step, followed by the inverse lifting steps:
$$I'_{2t}[m,n] = \frac{1}{\sqrt{2}}\, I''_{2t}[m,n], \qquad I'_{2t+1}[m,n] = \sqrt{2}\, I''_{2t+1}[m,n]. \quad (9)$$
The inverse update step:
$$I_{2t}[m,n] = \begin{cases} I'_{2t}[m,n] - \dfrac{1}{2} I'_{2t+1}\bigl[m - \mathcal{H}_{c\leftarrow r}, n - \mathcal{V}_{c\leftarrow r}\bigr] & \text{for interblocks}, \\ I'_{2t}[m,n] & \text{for intrablocks}. \end{cases} \quad (10)$$
The inverse prediction step:
$$I_{2t+1}[m,n] = \begin{cases} I'_{2t+1}[m,n] + I_{2t}\bigl[m + \mathcal{H}_{c\to r}, n + \mathcal{V}_{c\to r}\bigr] & \text{for one-to-one connected pixels}, \\ I'_{2t+1}[m,n] + \dfrac{1}{J}\displaystyle\sum_{i=0}^{J-1} I_{2t}\bigl[m + \mathcal{H}^{i}_{c\to r}, n + \mathcal{V}^{i}_{c\to r}\bigr] & \text{for one-to-many connected pixels}, \\ I'_{2t+1}[m,n] & \text{for unconnected pixels}. \end{cases} \quad (11)$$

2.2. 2D + t + 2D Framework

In a 3D video decomposition scheme, t + 2D is achieved by performing temporal decomposition followed by a spatial transform, whereas in the case of 2D + t, the temporal filtering is done after the spatial 2D transform. As each has its own merits and demerits, both combinations need to be analyzed in order to enhance the video watermarking performance. A common, flexible, reconfigurable framework that allows creating all such combinations is particularly useful for applications like video watermarking. Here, we propose the 2D + t + 2D framework by combining the modified motion compensated temporal filtering with the spatial 2D wavelet transform. Let $(s_1 t s_2)$ be the numbers of decomposition levels used in the 2D + t + 2D subband decomposition to obtain a 3D subband decomposition with $t$ motion compensated temporal levels and $s$ spatial levels, where $s = s_1 + s_2$. In such a scheme, the 2D DWT is first applied for an $s_1$-level decomposition.
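Because the scheme is built from lifting steps, perfect reconstruction holds exactly for any motion vectors. The sketch below (our names; a single uniform integer motion vector, all-interblock one-to-one connectivity, wrap-around boundaries) runs (3), (6), and (8) forward, then (9)-(11) backward, and recovers the input frames.

```python
import numpy as np

def mctf_forward(I2t, I2t1, dy, dx):
    # Prediction (3): I2t1'[m,n] = I2t1[m,n] - I2t[m + dy, n + dx].
    p = I2t1 - np.roll(I2t, shift=(-dy, -dx), axis=(0, 1))
    # Update (6): the implied MV is the directional inverse, so the same
    # shift direction reappears here.
    u = I2t + 0.5 * np.roll(p, shift=(-dy, -dx), axis=(0, 1))
    # Normalization (8).
    return np.sqrt(2.0) * u, p / np.sqrt(2.0)

def mctf_inverse(L, H, dy, dx):
    # Unnormalization (9).
    u, p = L / np.sqrt(2.0), np.sqrt(2.0) * H
    # Inverse update (10), then inverse prediction (11).
    I2t = u - 0.5 * np.roll(p, shift=(-dy, -dx), axis=(0, 1))
    I2t1 = p + np.roll(I2t, shift=(-dy, -dx), axis=(0, 1))
    return I2t, I2t1

rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))
L, H = mctf_forward(a, b, dy=1, dx=2)
a2, b2 = mctf_inverse(L, H, dy=1, dx=2)
# a2 == a and b2 == b up to floating-point rounding
```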
As a result, a new sequence is formed by the low-frequency spatial LL subbands of all frames. Then, the sequence of spatial LL subbands is temporally decomposed using the MCTF with implied motion estimation into $t$ temporal levels. Finally, each temporally transformed spatial LL subband is further spatially decomposed into $s_2$ wavelet levels. For a $t$-$s$ motion compensated temporal subband decomposition, the values of $s_1$ and $s_2$ are determined by considering the choice of temporal-spatial subbands used for watermark embedding. From now on, we use the exact values of $s_1$, $t$, $s_2$ to represent the various combinations of spatiotemporal decomposition, that is, $s_1 t s_2$. For example, the parameter combinations $s_1 = 0$, $t = 3$, $s_2 = 2$ (032) and $s_1 = 2$, $t = 3$, $s_2 = 0$ (230) result in the t + 2D and 2D + t motion compensated 3D subband decompositions, respectively. The same number of subband decomposition levels can also be obtained with the parameter combination $s_1 = 1$, $t = 3$, $s_2 = 1$ (131) using the proposed generalized implementation. The combination $s_1 = 0$, $t = 0$, $s_2 = 2$ (002) gives a 2D decomposition of all frames for frame-by-frame watermark embedding. The realizations of these examples are shown in Figure 2. We use the notation (LLL, LLH, LH, H) to denote the temporal subbands after a 3-level decomposition. The use of this framework in combination with watermarking algorithms is described in the next section.

Figure 2: Realization of 3-2 temporal schemes using the 2D + t + 2D framework with different parameters: (a) (032), (b) (131), (c) (230), and (d) (002).

3. Video Watermarking in 2D + t + 2D Spatiotemporal Decomposition

We propose a new video watermarking scheme by extending wavelet-based image watermarking algorithms into the 2D + t + 2D framework. In this section, we briefly revisit the wavelet-based image watermarking algorithms, followed by the proposed video watermarking scheme.
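The temporal subband notation follows directly from the dyadic splitting: each MCTF level splits the current lowpass set into L and H. A small helper (ours, purely illustrative) reproduces the labels used in the text.

```python
def temporal_subbands(levels):
    """Labels of the temporal subbands after `levels` dyadic MCTF levels.
    Each level splits the running lowpass band into L and H, so 3 levels
    yield (LLL, LLH, LH, H), matching the paper's notation."""
    low, bands = "", []
    for _ in range(levels):
        bands.insert(0, low + "H")  # highpass produced at this level
        low += "L"                  # lowpass carried to the next level
    bands.insert(0, low)            # final lowpass band
    return bands

# temporal_subbands(3) -> ['LLL', 'LLH', 'LH', 'H']
```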
Then, we analyze various combinations in the proposed video decomposition framework to decide on the video embedding parameters, namely, (1) the choice of temporal subband and (2) the motion estimation parameters used to retrieve the motion information from the watermarked video.

3.1. Wavelet-Based Watermarking

Due to its ability to provide an efficient multiresolution spatiofrequency representation of signals, the DWT became the major transform for spread-spectrum image watermarking [16-22]. A broad classification of such wavelet-based watermarking algorithms can be found in [23]. In this paper, we have chosen commonly used example algorithms to represent the nonblind and blind watermarking algorithmic classes.

3.1.1. The Nonblind Case

A magnitude alteration-based additive watermarking method is chosen as the nonblind case. In such an algorithm, coefficient values are increased or decreased depending on the magnitude of the coefficient, by making the modified coefficient a function of the original coefficient:
$$C'_{s,t}[m,n] = C_{s,t}[m,n] + \alpha\, C_{s,t}[m,n]\, W, \quad (12)$$
where $C_{s,t}[m,n]$ is the original decomposed coefficient in the $s,t$ spatiotemporal subband, $\alpha$ is the watermark weighting factor, $W$ is the watermark value to be embedded, and $C'_{s,t}[m,n]$ is the corresponding modified coefficient.

3.1.2. The Blind Case

In this category, we used an example blind watermarking algorithm as proposed in [20, 24], which modifies coefficients towards a specific quantization step, $\delta$. The method modifies the median coefficient within a nonoverlapping $3 \times 1$ running window passed through the entire selected subband of the wavelet decomposed image. At each window position, a rank-order sorting is performed on the coefficients $C_1$, $C_2$, and $C_3$ to obtain an ordered list $C_1 < C_2 < C_3$.
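A minimal sketch of the nonblind embedding rule of (12), with a toy inversion step for extraction; the paper's detector compares the test coefficients against the originals, and the helper names and the sign-based recovery are ours.

```python
import numpy as np

def embed_nonblind(C, W, alpha=0.1):
    """Eq. (12): C' = C + alpha * C * W, with a bipolar watermark W."""
    return C + alpha * C * W

def extract_nonblind(C_marked, C_orig, alpha=0.1):
    """Nonblind extraction (our illustrative inversion): with the original
    coefficients available, sign((C' - C) * sign(C)) recovers W for
    nonzero coefficients, since C' - C = alpha * C * W."""
    return np.sign((C_marked - C_orig) * np.sign(C_orig))

C = np.array([4.0, -3.0, 5.0, -2.0])   # toy subband coefficients
W = np.array([1.0, -1.0, -1.0, 1.0])   # toy bipolar watermark
Cp = embed_nonblind(C, W)
# with no attack in between, extract_nonblind(Cp, C) returns W exactly
```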
The median value $C_2$ is modified to obtain $C'_2$ as follows:
$$C'_2 = f\bigl(\gamma, C_1, C_3, \delta, W\bigr), \quad (13)$$
where $W$ is the input watermark sequence, $\gamma$ is the weighting parameter, $f(\cdot)$ denotes a nonlinear transformation, and $\delta = \gamma(|C_1| + |C_3|)/2$ is the quantization step.

3.2. The Proposed Video Watermarking Scheme

The new video watermarking scheme applies the above algorithms to the spatiotemporally decomposed video. The system block diagrams for watermark embedding, the nonblind extraction process, and the blind extraction process are shown in Figures 3, 4(a), and 4(b), respectively.

Figure 3: System blocks for the watermark embedding scheme in the 2D + t + 2D spatiotemporal decomposition.

Figure 4: System blocks for the watermark extraction scheme in the 2D + t + 2D spatiotemporal decomposition.

3.2.1. Embedding

To embed the watermark, a spatiotemporal decomposition is first performed on the host video sequence by applying a spatial 2D DWT followed by temporal MCTF for 2D + t (230), or a temporal decomposition followed by a spatial transform for t + 2D (032). In both cases, motion estimation (ME) is performed to create the motion vectors (MV), either in the spatial domain (t + 2D) or on the approximation subband in the frequency domain (2D + t), as described in Section 2.2. Other combinations, such as 131 and 002, are obtained in a similar fashion. After obtaining the decomposed coefficients, the watermark is embedded using either the nonblind (12) or the blind (13) watermarking algorithm, by selecting a temporal low- or highpass frame (i.e., LLL, LLH, etc.) and a spatial subband within the selected frame. Once the watermark is embedded, the coefficients follow the inverse spatiotemporal decomposition to reconstruct the watermarked video.

3.2.2. Extraction and Authentication

The extraction procedure follows a decomposition scheme similar to that used in embedding, and the corresponding system diagram is shown in Figure 4.
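The exact nonlinear map $f(\cdot)$ of [20, 24] is not reproduced in the text, so the sketch below substitutes a simple quantization-index-parity rule using the stated step $\delta = \gamma(|C_1| + |C_3|)/2$; treat it as an illustrative stand-in, not the published algorithm.

```python
import numpy as np

def embed_blind_window(c, wbit, gamma=0.1):
    """Stand-in for eq. (13): rank-order a 3x1 window, requantize the
    median with step delta = gamma*(|C1| + |C3|)/2, and force the parity
    of the quantization index to carry the watermark bit.  This parity
    rule is our placeholder for the nonlinear map f() of [20, 24]."""
    order = np.argsort(c)
    c1, c2, c3 = c[order]
    delta = gamma * (abs(c1) + abs(c3)) / 2.0
    if delta == 0:                  # degenerate toy case: nothing to embed
        return c.copy()
    q = np.round(c2 / delta)
    if int(q) % 2 != wbit:          # nudge the index to the wanted parity
        q += 1.0
    out = c.copy()
    out[order[1]] = q * delta       # write back the modified median
    return out

def detect_blind_window(c, gamma=0.1):
    """Blind detection: recompute delta from the window and read the
    parity of the median's quantization index."""
    c1, c2, c3 = np.sort(c)
    delta = gamma * (abs(c1) + abs(c3)) / 2.0
    return int(np.round(c2 / delta)) % 2
```

With a small $\gamma$ the modified median usually stays between $C_1$ and $C_3$, so $\delta$ is unchanged at the detector; a production scheme would have to handle the cases where it is not.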
The watermark coefficients are retrieved by applying the 2D + t + 2D decomposition to the watermarked test video. For a nonblind algorithm, the original video sequence is available at the decoder, and hence the motion vectors are obtained from the original video. After spatiotemporal filtering of the test and original videos, the coefficients are compared to extract the watermark. In the case of a blind watermarking scheme, the motion estimation is performed on the test video itself, without any prior knowledge of the original motion information. The temporal filtering is then done using the new motion vectors, and the spatiotemporal coefficients are obtained for detection. Authentication is then performed by measuring the Hamming distance ($H$) between the original and the extracted watermark:
$$H\bigl(W, W'\bigr) = \frac{1}{L} \sum_{i=0}^{L-1} W_i \oplus W'_i, \quad (14)$$
where $W$ and $W'$ are the original and the extracted watermarks, respectively, $L$ is the length of the sequence, and $\oplus$ represents the XOR operation between the respective bits.

4. The Framework Analysis in the Video Watermarking Context

Before presenting the experimental results, in this section we address the issues related to MCTF-based video watermarking within the proposed framework. Firstly, to improve imperceptibility, we investigate the energy distribution of the host video across the temporal subbands, which is useful for selecting the temporally decomposed frames during embedding. Then, we give an insight into motion retrieval for a blind watermarking scheme, where no prior motion information is available during watermark extraction; this is crucial for the robustness performance.

4.1. On Improving Imperceptibility

In wavelet domain watermarking research, it is well known that embedding in high-frequency subbands offers better imperceptibility, while low-frequency embedding provides higher robustness.
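The authentication measure of (14) is a direct transcription into code; only the helper name is ours.

```python
import numpy as np

def hamming_distance(W, W_ext):
    """Normalized Hamming distance of eq. (14): the fraction of positions
    where the original and extracted watermark bits disagree (0 = perfect
    detection, 0.5 = chance level for random bits)."""
    W, W_ext = np.asarray(W), np.asarray(W_ext)
    return float(np.mean(W ^ W_ext))

w1 = np.array([1, 0, 1, 1, 0, 0, 1, 0])
w2 = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # two of the eight bits flipped
# hamming_distance(w1, w2) -> 0.25; identical watermarks give 0.0
```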
Wavelet decompositions often compact most of the energy into the low-frequency subbands and leave little energy in the high frequencies; for this reason, high-frequency watermarking schemes are less robust to compression. Therefore, increasing the energy in the high-frequency subbands can yield a better watermarking algorithm. In analyzing our framework, we find that different 2D + t + 2D combinations vary the energy distribution in the high-frequency temporal subbands, and that this behaviour is independent of the video content. As an example, we decomposed the Foreman sequence using the 032, 131, and 230 combinations of the framework and calculated the sum of energy for the first two GOPs, each with 8 temporal frequency frames, namely, LLL, LLH, LH1, LH2, H1, H2, H3, and H4. In all cases, we calculate the energy of the low-frequency ($LL_s$) subband of the spatial decomposition. The other input parameters are an $8 \times 8$ macroblock and fixed-size block matching motion estimation with a $\pm 16$ search window. The percentage of energy (of a GOP) in each temporally decomposed frame is shown in Figure 5, and the histograms of the coefficients for 032, 131, and 230 for LLL and LLH are shown in Figure 6. The inner graph in Figure 6 is a zoomed version of the local variations, with the $y$-axis clipped to show the coefficient distribution more effectively. From the results, we can rank the energy distribution in the high-frequency temporal subbands as $(230) > (131) > (032)$. This analysis guides the selection of the optimum spatiotemporal parameters in the framework to improve the robustness while maintaining better imperceptibility. We performed the experimental simulation on 8 test videos (Foreman, Crew, News, Stefan, Mobile, City, Football, and Flower garden), and all of them follow a similar trend.

Figure 5: Percentage of energy (of a GOP) in each temporally decomposed frame.
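The per-frame figures behind Figure 5 reduce to a sum of squared coefficients per temporal frame, normalized over the GOP. A toy sketch with synthetic frames (ours, not the Foreman data) follows.

```python
import numpy as np

def energy_percentages(temporal_frames):
    """Percentage of a GOP's energy carried by each temporal subband frame
    (for a 3-level GOP the frames would be LLL, LLH, LH1, LH2, H1..H4)."""
    e = np.array([(f.astype(np.float64) ** 2).sum() for f in temporal_frames])
    return 100.0 * e / e.sum()

# Synthetic stand-in for one GOP: the lowpass frame gets the largest
# amplitude, mimicking the energy compaction discussed in the text.
rng = np.random.default_rng(1)
gop = [rng.normal(scale=s, size=(16, 16)) for s in (8, 4, 2, 2, 1, 1, 1, 1)]
pct = energy_percentages(gop)
# pct sums to 100 and the lowpass (first) frame dominates for this toy GOP
```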
The energy calculation considers the energy of the coefficients at $LL_s$ for the first two GOPs, each with 8 temporal low- and high-frequency frames, of the Foreman sequence. (a) GOP1 and (b) GOP2.

Figure 6: Histogram of coefficients at $LL_s$ for the 3rd-level temporal low- and high-frequency frames (GOP 1) for the Foreman sequence. Rows (1) and (2) represent the LLL and LLH temporal frames, respectively, and columns (1), (2), and (3) show the 032, 131, and 230 combinations of the 2D + t + 2D framework.

4.2. On Motion Retrieval

In an MCTF-based video watermarking scheme, the motion information contributes largely to the temporal decomposition along the motion trajectory. Watermark embedding in the temporal domain causes motion mismatch, which affects the decoder performance. While the original motion information is available in a nonblind watermarking scheme, motion estimation must be performed in a blind video watermarking scheme. In this case, the motion vectors are expected to be retrieved from the watermarked video without any prior knowledge of the original motion vectors (MV). Our study shows that, in such a case, more accurate motion estimation is possible by choosing the right 2D + t + 2D combination along with an optimum choice of macroblock (MB) size. At the same time, we investigated the performance with respect to the motion search range (SR); the experiments show that the SR contributes comparatively little to motion retrieval. The experiment set studies the watermark detection performance by measuring the Hamming distance for blind watermark embedding in the $LL_s$ spatial subband of the LLL and LLH temporal frames. The watermark extraction is done using various combinations of MB and SR to find the best motion retrieval parameters. The results are shown in Tables 1 and 2 as averages over the first 64 frames of the Foreman and Crew CIF-size video sequences, respectively, for the 032, 131, and 230 spatiotemporal decompositions.
Due to the limitations in macroblock size and integer-pixel motion search, the $32 \times 32$ MB search is excluded for the 131 decomposition, and the $32 \times 32$ and $16 \times 16$ MB searches are excluded for the 230 decomposition. It is noted that, in video compression schemes, $16 \times 16$ is the most commonly used MB size, while in this paper we use various other MB sizes to investigate the effect on watermark retrieval.

Table 1: Hamming distance for blind watermarking by estimating motion from the watermarked video using different macroblock sizes (MB) and search ranges (SR). Embedding at $LL_s$ on frame: (a) LLL and (b) LLH of the Foreman sequence (average of first 64 frames).

Table 2: Hamming distance for blind watermarking by estimating motion from the watermarked video using different macroblock sizes (MB) and search ranges (SR). Embedding at $LL_s$ on frame: (a) LLL and (b) LLH of the Crew sequence (average of first 64 frames).

The results show that for an MB size larger than $8 \times 8$, 2D + t outperforms t + 2D. In this context, the spatiotemporal decompositions can be ranked as $(230) > (131) > (032)$. In the case of 131 or 230, the motion is estimated in a hierarchically downsampled low-frequency subband. Therefore, the number of motion vectors reduces accordingly for a given macroblock size. This offers a two-fold advantage.

(1) Complexity. The frame over which motion is estimated is half or a quarter the size of the full-resolution frame. As a result, the searching time and computational complexity reduce significantly, as follows. Let us assume motion is estimated for an MB of $b \times b$ with an SR of $w \times w$, as shown in Figure 7. The complexity, $\mathcal{O}$, is calculated based on the number of search operations as given in (15):
$$\mathcal{O} = T (2w + 1)^2, \quad (15)$$
where $T = MN$ is the total number of pixels.
As motion is estimated only on the downsampled low-frequency component, we can rewrite (15) as
$$\mathcal{O} = \frac{M}{2^{s_1}} \cdot \frac{N}{2^{s_1}} (2w + 1)^2, \quad (16)$$
where $s_1$ is the number of first-stage spatial decomposition levels in the proposed scheme. Now, the SR $w \times w$ is constant for any given column in Tables 1 and 2, and hence it is evident that the complexity is inversely proportional to $2^{2 s_1}$:
$$\mathcal{O} \propto \frac{1}{2^{2 s_1}} (2w + 1)^2 \propto \frac{1}{2^{2 s_1}}. \quad (17)$$
Therefore, the complexity of the various spatiotemporal decompositions can be ranked as $(230) < (131) < (032)$; that is, the complexity of the proposed 2D + t scheme is much lower than that of the traditional t + 2D.

Figure 7: Exhaustive search complexity for a motion block.

(2) MV Error Reduction. At the same time, for blind motion estimation, fewer motion vectors need to be estimated at the decoder, resulting in more accurate motion estimation and higher robustness. It is evident from Tables 1 and 2 that if the same number of motion vectors is considered, that is, a $32 \times 32$ MB for 032, a $16 \times 16$ MB for 131, and an $8 \times 8$ MB for 230, the robustness performance is comparable for all three combinations. However, in the LLL subband of 2D + t, for a smaller MB, such as $4 \times 4$, more motion mismatch is observed, as the motion estimation is done in a spatially decomposed region. Using the above analysis, we designed experiments to verify the proposed video watermarking schemes for improved imperceptibility as well as robustness against scalable video compression.

5. Experimental Results and Discussion

We used the following experimental setup for the simulation of watermark embedding using the proposed generalized 2D + t + 2D motion compensated temporal-spatial subband scheme. In order to keep the watermarking strength constant across subbands, the normalization steps in the MCTF and the 2D DWT were omitted.
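The operation counts of (15)-(17) can be checked directly; the helper below (our naming) evaluates the exhaustive-search count for a CIF frame at each first-stage spatial level and confirms the factor-of-four reduction per level.

```python
def search_ops(M, N, w, s1=0):
    """Exhaustive-search operation count of eqs. (15)-(16): every pixel of
    the (s1-times dyadically downsampled) frame is visited for each of the
    (2w+1)^2 candidate displacements in a w x w search range."""
    return (M >> s1) * (N >> s1) * (2 * w + 1) ** 2

full = search_ops(352, 288, 16)           # 032: motion at full CIF resolution
half = search_ops(352, 288, 16, s1=1)     # 131: one spatial level down
quarter = search_ops(352, 288, 16, s1=2)  # 230: two spatial levels down
# each extra spatial level cuts the count by a factor of 4, as in eq. (17)
```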
Two different sets of results were obtained to show the embedding distortion and the robustness performance, using the luma component of 8 test video sequences (4:2:0 YUV): Foreman, Crew, News, Stefan, Mobile, City, Football, and Flower Garden. Within the scope of this paper, three test sequences are chosen to show the results according to their object motion activity, that is, high motion activity (Crew), medium motion activity (Foreman), and low motion activity (News). We have used one nonblind and one blind watermarking scheme as example cases, described in Section 3.1. For the simulations shown in this work, the four combinations (032), (230), (131), and (002) were used. In each case, the watermark embedding is performed on the low-frequency subband (LLs) of the 2D spatial decomposition, due to its improved robustness against compression attacks in image watermarking. In these simulations, the 9/7 biorthogonal wavelet transform was used for the 2D decompositions. Based on the analysis in the previous section, we also explored watermark embedding in a high-frequency temporal subband and investigated its robustness against compression attacks, as a high-frequency subband can offer improved imperceptibility. In the experiment sets, we chose the third-temporal-level highpass (LLH) and lowpass (LLL) frames to embed the watermark. The other video decomposition parameters are set to (1) 64 frames with a GOP size of 8, (2) an 8 × 8 macroblock size, and (3) a search window of ±16. The choices of macroblock size and search window were made by referring to the motion retrieval analysis in Section 4.2. For the embedding distortion measure, we used the peak signal-to-noise ratio (PSNR) and also measured the amount of flicker introduced by watermark embedding. Fan et al. [25] defined a quality metric to measure flicker in intracoded video sequences.
In our experiments, we measured flicker in a similar way, by calculating the difference between the average brightness values of the previous and current frames, using the flicker metric in the MSU quality measurement tool [26]. The flicker metric here compares the flicker content in the watermarked video with respect to the original video. For these metrics, a higher PSNR represents lower embedding distortion, and for flicker, lower values correspond to better distortion performance. On the other hand, the watermarking robustness is represented by the Hamming distance as defined in (14); a lower Hamming distance corresponds to better detection performance. Various scalable coded quality compression attacks are considered, namely Motion JPEG 2000, MC-EZBC scalable video coding, and the H.264/AVC scalable extension (H.264-SVC). In these experiments, the low-frequency spatial LL subband is selected within the LLL and LLH temporal subbands. Therefore, the scheme is robust against the corresponding spatial and temporal scalability; for example, the algorithm is robust against spatial scalability up to quarter resolution and temporal scaling up to LH and H frames. The results show the mean Hamming distance over the first 64 frames of the test video set. The experiments are divided into two sets, one for embedding distortion analysis and the other for robustness evaluation. In all experimental setups, we considered two watermarking algorithms, one each from the nonblind (Section 3.1.1) and blind (Section 3.1.2) categories. The weighting parameters α and γ are set to 0.1. In the case of the nonblind algorithm, the level-adaptive thresholding described in [22] is used to avoid watermark embedding in small or nearly zero coefficients, minimizing false detection. The watermarking payload is set to 2000 bits and 2112 bits, using a binary logo, for all combinations and every sequence for the nonblind and blind watermarking methods, respectively.
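The two evaluation measures just described can be sketched as follows. This is a hedged illustration: the exact flicker formula of [25, 26] may differ, and the Hamming distance follows the normalized form implied by (14):

```python
def mean_brightness(frame):
    """Average luma of a frame given as a 2D list of 8-bit samples."""
    return sum(map(sum, frame)) / (len(frame) * len(frame[0]))

def flicker_scores(original, watermarked):
    """Per-frame flicker estimate in the spirit of Fan et al. [25]: the
    frame-to-frame change in average brightness of the watermarked video,
    compared against the same change in the original.  Lower is better."""
    return [abs((mean_brightness(watermarked[t]) - mean_brightness(watermarked[t - 1]))
                - (mean_brightness(original[t]) - mean_brightness(original[t - 1])))
            for t in range(1, len(original))]

def hamming_distance(embedded_bits, extracted_bits):
    """Normalized Hamming distance used as the robustness score: the fraction
    of watermark bits that flipped.  0 means perfect extraction; 0.5 is
    chance level for a random binary watermark."""
    assert len(embedded_bits) == len(extracted_bits)
    return sum(a != b for a, b in zip(embedded_bits, extracted_bits)) / len(embedded_bits)

# Hypothetical example: one bit in four flipped after compression.
assert hamming_distance([0, 1, 1, 0], [0, 1, 0, 0]) == 0.25
```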
5.1. Embedding Distortion Analysis

The embedding distortion results in terms of PSNR are shown in Figures 8, 10, and 12 for the News, Foreman, and Crew sequences, respectively, for the nonblind and blind watermarking methods. In each figure, the x-axis shows the frame number and the y-axis represents the PSNR. The flickering results are shown in Figures 9, 11, and 13 for the News, Foreman, and Crew sequences, respectively; in these figures, the y-axis represents the flicker metric discussed in the previous section.

Figure 8: PSNR for nonblind and blind watermarking on the LLL and LLH temporal subbands for the News sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

Figure 9: Flicker metric for nonblind and blind watermarking on the LLL and LLH temporal subbands for the News sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

Figure 10: PSNR for nonblind and blind watermarking on the LLL and LLH temporal subbands for the Foreman sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

Figure 11: Flicker metric for nonblind and blind watermarking on the LLL and LLH temporal subbands for the Foreman sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

Figure 12: PSNR for nonblind and blind watermarking on the LLL and LLH temporal subbands for the Crew sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

Figure 13: Flicker metric for nonblind and blind watermarking on the LLL and LLH temporal subbands for the Crew sequence. (a) LLL (nonblind), (b) LLL (blind), (c) LLH (nonblind), and (d) LLH (blind).

From the results for the LLL subband, it is evident that although the PSNR performances are comparable, the proposed MCTF-based methods ((032), (131), and (230)) outperform frame-by-frame embedding (002) in addressing the flickering problem.
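For reference, the PSNR plotted in these figures can be computed per frame as below (the standard 8-bit definition; a minimal sketch rather than the authors' exact tooling):

```python
import math

def psnr(original, watermarked, peak=255.0):
    """PSNR in dB between an original and a watermarked frame, each a 2D
    list of luma samples.  Higher PSNR means lower embedding distortion."""
    h, w = len(original), len(original[0])
    mse = sum((original[y][x] - watermarked[y][x]) ** 2
              for y in range(h) for x in range(w)) / (h * w)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

# Hypothetical example: a uniform +1 offset gives MSE = 1, i.e. about 48.13 dB.
frame_a = [[100] * 4 for _ in range(4)]
frame_b = [[101] * 4 for _ in range(4)]
print(round(psnr(frame_a, frame_b), 2))  # 48.13
```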
In all four combinations, the sum of energy in the LLL subband is similar, resulting in comparable PSNR. However, in the proposed methods, the embedding error is propagated along the GOP due to the hierarchical temporal decomposition along the motion trajectory, and this error propagation along the motion trajectory addresses the flickering artifacts. On the other hand, for the LLH subband, due to temporal filtering, the sum of energy is lower, and the four combinations can be ranked as 032 < 131 < 230 < 002. Hence, the PSNR and flickering performance for this temporal subband can be ranked as 032 > 131 > 230 > 002. Therefore, when choosing a temporally filtered high-frequency subband, such as LLH, LH, or H, the proposed MCTF approach also outperforms frame-by-frame embedding in terms of PSNR while addressing the flickering issues. It is evident that flickering due to frame-by-frame embedding is increasingly prominent in sequences with lower motion (e.g., News > Foreman > Crew) and is successfully addressed by the proposed MCTF-based watermarking approach.

5.2. Robustness Performance Evaluation

The robustness results for the nonblind watermarking method are shown in Figures 14, 15, and 16 for the Crew, Foreman, and News sequences, respectively. The x-axis represents the compression ratio (Motion JPEG 2000) or bitrate (MC-EZBC and H.264-SVC), and the y-axis shows the corresponding Hamming distance. Columns (1) and (2) show the results for the LLL and LLH frame selections, respectively. The robustness performance shows that 2D + t, that is, any combination with temporal filtering on a spatial decomposition (i.e., (131) and (230)), outperforms a conventional t + 2D scheme. The experimental robustness results for the blind watermarking method are shown in Figures 17, 18, and 19 for the Crew, Foreman, and News sequences, respectively.
Column (1) shows results for the LLL temporal subband, while results for LLH are shown in Column (2). The rows represent the various scalability attacks: Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. In this case, the motion information is obtained from the watermarked test video. Similar to the nonblind case, 2D + t again outperforms a conventional t + 2D scheme such as that in [14]. We now analyze the obtained results by grouping them by selection of temporal subband, by embedding method, and by compression scheme.

Figure 14: Robustness performance of the nonblind watermarking scheme for the Crew sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.

Figure 15: Robustness performance of the nonblind watermarking scheme for the Foreman sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.

Figure 16: Robustness performance of the nonblind watermarking scheme for the News sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.

Figure 17: Robustness performance of the blind watermarking scheme for the Crew sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.

Figure 18: Robustness performance of the blind watermarking scheme for the Foreman sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.
Figure 19: Robustness performance of the blind watermarking scheme for the News sequence. Columns (1), (2), and (3) show robustness against Motion JPEG 2000, MC-EZBC, and H.264-SVC, respectively. Rows (1) and (2) represent embedding on the temporal subbands LLL and LLH, respectively.

5.2.1. Selection of Temporal Subband

The low-frequency temporal subband (LLL) offers higher robustness than the high-frequency LLH subband. This is due to the greater energy concentration in the LLL subband after temporal filtering. Within the LLL subband, the various spatiotemporal combinations perform almost equally, as the energy levels are nearly equal for 032, 131, and 230; however, 230 performs slightly better due to less motion-related error in the spatially scaled subband. On the other hand, for the LLH subband, we can rank the robustness performance as 230 > 131 > 032, as a result of the energy distribution ranking of these combinations in Section 4.1.

5.2.2. Embedding Method

In the nonblind case, the watermark extraction is performed using the original host video, and hence the original motion vectors are available at the extractor, which makes this scheme more robust to various scalable content adaptations. On the other hand, as explained before, the blind watermarking scheme has neither a reference to the original video sequence nor any reference motion vectors. The motion vectors are estimated from the watermarked test video itself, which results in comparatively poor robustness. The effect of motion-related error is more visible in the LLH subband, as the motion compensated temporal highpass frame is highly sensitive to motion estimation accuracy, and so is the robustness performance. As discussed in Section 4.2, in the case of 2D + t (i.e., 230), the error due to motion vectors is smaller than in the t + 2D scheme, and hence it offers better robustness (230 > 131 > 032).
5.2.3. Compression Scheme

We have evaluated our proposed algorithm against various scalable video compression schemes, that is, Motion JPEG 2000, MC-EZBC, and H.264-SVC. The first two are based on wavelet technology, whereas the more recent H.264-SVC uses layered scalability with base layer coding of H.264/AVC. In the Motion JPEG 2000 scheme, coding is performed by applying a 2D wavelet transform to each frame separately, without considering any temporal correlation between frames. In the proposed watermarking scheme, the use of the 2D wavelet transform offers a closer association with the Motion JPEG 2000 scheme and hence provides better robustness for the 2D + t combinations for LLL and LLH. Also, in the case of the LLH subband, better energy concentration offers higher robustness to Motion JPEG 2000 attacks. The robustness performance against Motion JPEG 2000 can be ranked as 230 > 131 > 032. The MC-EZBC video coder uses a motion compensated 1D wavelet transform for temporal filtering and a 2D wavelet transform for spatial decomposition. From a compression point of view, MC-EZBC usually encodes the video sequences in a t + 2D combination due to the better energy compaction in the low-frequency temporal frames. From a watermarking perspective, however, higher energy in the high-frequency subband can offer higher robustness. This argument is supported by the robustness results: the results for the LLL subbands are comparable, but a distinctive improvement is observed in the LLH subband, and based on these results the robustness ranking for MC-EZBC is 230 > 131 > 032. Finally, we have evaluated the robustness of the proposed scheme against H.264-SVC, which uses inter-/intramotion compensated prediction followed by an integer transform with properties similar to the DCT.
Although the proposed watermarking scheme and the H.264-SVC video coding scheme do not share any common technology or transform, the robustness evaluation of the proposed method against H.264-SVC has been carried out for completeness across the different scalable video compression schemes. The results show acceptable robustness. However, for the blind watermarking scheme in the LLH subband, the proposed schemes perform poorly due to blind motion estimation. Similar to the previous robustness results, based on the energy distribution and motion retrieval arguments, here we can rank the spatiotemporal combinations as 230 > 131 > 032. In one specific case, H.264-SVC usually gives preference to intraprediction for sequences with low global or local motion, as in the News sequence, and hence an exception in the robustness performance against H.264-SVC is noticed for the proposed scheme. It is evident that, due to the close association between the proposed scheme and MC-EZBC, the proposed scheme offers its best robustness against MC-EZBC-based content adaptation. To conclude this discussion, we suggest that the choice of a 2D + t watermarking scheme improves the imperceptibility and the robustness performance in a video watermarking scenario for nonblind as well as blind watermarking algorithms.

6. Conclusions

In this paper, we have presented a new motion compensated temporal-spatial subband decomposition scheme, based on the MCTF with implied motion estimation, for video watermarking. The MCTF was modified by taking the motion trajectory into account to obtain an efficient update step. The proposed 2D + t domain watermarking offers improved robustness against scalable content adaptation compared to the state-of-the-art conventional t + 2D video watermarking scheme, in nonblind as well as blind watermarking scenarios.
The robustness performance was evaluated against scalable coding-based quality compression attacks, including Motion JPEG 2000, MC-EZBC, and H.264-SVC (scalable extension). The proposed subband decomposition also offers low complexity, as the MCTF is performed only on the subbands where the watermark is embedded.

Acknowledgment

This work is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) through an EPSRC-BP Dorothy Hodgkin Postgraduate Award (DHPA).

References

[1] F. Hartung and B. Girod, "Watermarking of uncompressed and compressed video," Signal Processing, vol. 66, no. 3, pp. 283–301, 1998.
[2] H. Inoue, A. Miyazaki, T. Araki, and T. Katsura, "Digital watermark method using the wavelet transform for video data," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '99), vol. 4, pp. V-247–V-250, June 1999.
[3] G. Doërr and J. L. Dugelay, "A guide tour of video watermarking," Signal Processing, vol. 18, no. 4, pp. 263–282, 2003.
[4] M. P. Mitrea, T. B. Zaharia, F. J. Preteux, and A. Vlad, "Video watermarking based on spread spectrum and wavelet decomposition," in Wavelet Applications in Industrial Processing II, vol. 5607 of Proceedings of the SPIE, pp. 156–164, 2004.
[5] F. Deguillaume, G. Csurka, J. J. O'Ruanaidh, and T. Pun, "Robust 3D DFT video watermarking," in Security and Watermarking of Multimedia Contents, vol. 3657 of Proceedings of the SPIE, pp. 113–124, 1999.
[6] J. H. Lim, D. J. Kim, H. T. Kim, and C. S. Won, "Digital video watermarking using 3D-DCT and intracubic correlation," in Security and Watermarking of Multimedia Contents III, vol. 4314 of Proceedings of the SPIE, pp. 64–72, 2001.
[7] S. J. Kim, S. H. Lee, K. S. Moon et al., "A new digital video watermarking using the dual watermark images and 3D DWT," in Proceedings of the IEEE Region 10 Conference (TENCON '04), vol. 1, pp. 291–294, 2004.
[8] P. Campisi and A. Neri, "Video watermarking in the 3D-DWT domain using perceptual masking," in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), pp. 997–1000, September 2005.
[9] S. J. Choi and J. W. Woods, "Motion-compensated 3-D subband coding of video," IEEE Transactions on Image Processing, vol. 8, no. 2, pp. 155–167, 1999.
[10] S. T. Hsiang and J. W. Woods, "Embedded video coding using invertible motion compensated 3-D subband/wavelet filter bank," Signal Processing, vol. 16, no. 8, pp. 705–724, 2001.
[11] C. I. Podilchuk, N. S. Jayant, and N. Farvardin, "Three-dimensional subband coding of video," IEEE Transactions on Image Processing, vol. 4, no. 2, pp. 125–139, 1995.
[12] P. Vinod and P. K. Bora, "Motion-compensated inter-frame collusion attack on video watermarking and a countermeasure," IEE Proceedings on Information Security, vol. 153, no. 2, pp. 61–73, 2006.
[13] P. Vinod, G. Doërr, and P. K. Bora, "Assessing motion-coherency in video watermarking," in Proceedings of the Multimedia and Security Workshop, pp. 114–119, September 2006.
[14] P. Meerwald and A. Uhl, "Blind motion-compensated video watermarking," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '08), pp. 357–360, June 2008.
[15] Y. Andreopoulos, A. Munteanu, J. Barbarien, M. Van Der Schaar, J. Cornelis, and P. Schelkens, "In-band motion compensated temporal filtering," Signal Processing, vol. 19, no. 7, pp. 653–673, 2004.
[16] F. Huo and X. Gao, "A wavelet based image watermarking scheme," in Proceedings of the IEEE International Conference on Image Processing, pp. 2573–2576, Atlanta, Ga, USA, 2006.
[17] C. Jin and J. Peng, "A robust wavelet-based blind digital watermarking algorithm," Information Technology Journal, vol. 5, no. 2, pp. 358–363, 2006.
[18] M. A. Suhail, M. S. Obaidat, S. S. Ipson, and B. Sadoun, "A comparative study of digital watermarking in JPEG and JPEG 2000 environments," Information Sciences, vol. 151, pp. 93–105, 2003.
[19] M. Barni, F. Bartolini, and A. Piva, "Improved wavelet-based watermarking through pixel-wise masking," IEEE Transactions on Image Processing, vol. 10, no. 5, pp. 783–791, 2001.
[20] L. Xie and G. R. Arce, "Joint wavelet compression and authentication watermarking," in Proceedings of the International Conference on Image Processing (ICIP '98), vol. 2, pp. 427–431, October 1998.
[21] D. Kundur and D. Hatzinakos, "Digital watermarking using multiresolution wavelet decomposition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98), vol. 5, pp. 2969–2972, May 1998.
[22] J. R. Kim and Y. S. Moon, "Robust wavelet-based digital watermarking using level-adaptive thresholding," in Proceedings of the International Conference on Image Processing (ICIP '99), pp. 226–230, October 1999.
[23] D. Bhowmik and C. Abhayaratne, "A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks," in Wavelet Applications in Industrial Processing VI, vol. 7248 of Proceedings of the SPIE, San Jose, Calif, USA, January 2009.
[24] P. Meerwald, "Quantization watermarking in the JPEG2000 coding pipeline," in Proceedings of the 5th International Working Conference on Communication and Multimedia Security, pp. 69–79, 2001.
[25] X. Fan, W. Gao, Y. Lu, and D. Zhao, "Flicking reduction in all intra frame coding," Tech. Rep. JVT-E070, 2002.
[26] MSU Graphics & Media Lab VG, MSU Quality Measurement Tool, http://www.compression.ru/video/.