{{Refimprove|date=August 2008}}
[[Image:YUV UV plane.svg|thumb|300px|Example of U-V color plane, Y' value = 0.5, represented within RGB color gamut]]
[[Image:Barn-yuv.png|right|thumb|150px|An image along with its Y', U, and V components respectively.]]
 
'''YUV''' is a [[color space]] typically used as part of a [[color image pipeline]].  It encodes a color image or video taking human perception into account, allowing reduced bandwidth for the [[chrominance]] components; transmission errors and compression artifacts are thus masked more effectively by human perception than with a "direct" RGB representation. Other color spaces have similar properties, and the main reason to implement or investigate the properties of Y'UV is interfacing with analog or digital television or photographic equipment that conforms to certain Y'UV standards.
 
The scope of the terms Y'UV, YUV, [[YCbCr]], [[YPbPr]], etc., is sometimes ambiguous and overlapping. Historically, the terms YUV and Y'UV were used for a specific ''analog encoding'' of color information in television systems, while YCbCr was used for ''digital encoding'' of color information suited for video ''and'' still-image compression and transmission such as [[MPEG]] and [[JPEG]]. Today, the term YUV is commonly used in the computer industry to describe ''file-formats'' that are encoded using YCbCr.
 
The Y'UV model defines a [[color space]] in terms of one [[Luma (video)|luma]] (Y') and two [[chrominance]] (UV) components. The Y'UV color model is used in the [[PAL]] and [[SECAM]] [[composite video|composite color video]] standards. Previous black-and-white systems used only luma (Y') information.  Color information (U and V) was added separately via a [[sub-carrier]] so that a black-and-white receiver would still be able to receive and display a color picture transmission in the receiver's native [[black-and-white]] format.  
 
Y' stands for the [[Luma (video)|luma]] component (the brightness) and U and V are the [[chrominance]] (color) components; [[luminance]] is denoted by Y and luma by Y' – the prime symbols (') denote [[gamma compression]],<ref>Engineering Guideline EG 28, "Annotated Glossary of Essential Terms for Electronic Production," SMPTE, 1993.</ref> with "luminance" meaning perceptual ([[color science]]) brightness, while "luma" is electronic (voltage of display) brightness.
 
The [[YPbPr]] color model used in analog [[component video]] and its digital version [[YCbCr]] used in digital video are more or less derived from it, and are sometimes called Y'UV. (C<sub>B</sub>/P<sub>B</sub> and C<sub>R</sub>/P<sub>R</sub> are deviations from grey on blue–yellow and red–cyan axes, whereas U and V are blue–luminance and red–luminance differences.) The [[YIQ|Y'IQ]] color space used in the analog [[NTSC]] television broadcasting system is related to it, although in a more complex way.
 
==History==
Y'UV was invented when engineers wanted [[color television]] in a [[black-and-white]] infrastructure.<ref>Maller, Joe. [http://joemaller.com/fcp/fxscript_yuv_color.shtml RGB and YUV Color], ''FXScript Reference''</ref>  They needed a signal transmission method that was compatible with black-and-white (B&W) TV while being able to add color.  The luma component already existed as the black and white signal;  they added the UV signal to this as a solution.
 
The UV representation of chrominance was chosen over straight R and B signals because U and V are color difference signals.  This meant that in a black and white scene the U and V signals would be zero and only the Y' signal would need to be transmitted.  If R and B were to have been used, these would have non-zero values even in a B&W scene, requiring all three data-carrying signals.  This was important in the early days of color television, because holding the U and V signals to zero while connecting the black and white signal to Y' allowed color TV sets to display B&W TV without the additional expense and complexity of special B&W circuitry.  In addition, black and white receivers could take the Y' signal and ignore the color signals, making Y'UV backward-compatible with all existing black-and-white equipment, input and output.  It was necessary to assign a narrower bandwidth to the chrominance channel because there was no additional bandwidth available. If some of the luminance information arrived via the chrominance channel (as it would have if RB signals were used instead of differential UV signals), B&W resolution would have been compromised.<ref>W. Wharton & D. Howorth, ''Principles of Television Reception'', Pitman Publishing, 1971, pp 161-163</ref>
 
==Conversion to/from RGB==
Y'UV signals are typically created from [[RGB]] ([[red]], [[green]] and [[blue]]) source. Weighted values of R, G, and B are summed to produce Y', a measure of overall brightness or luminance.  U and V are computed as scaled differences between Y' and the B and R values.
 
Defining the following constants:
 
:<math>\begin{align}
W_R &= 0.299 \\
W_G &= 1 - W_R - W_B = 0.587 \\
W_B &= 0.114 \\
U_{Max} &= 0.436 \\
V_{Max} &= 0.615
\end{align}</math>
 
Y'UV is computed from RGB as follows:
 
:<math>\begin{align}
Y' &= W_R R + W_G G + W_B B \\
U &= U_{Max} \frac{B - Y'}{1 - W_B} \approx 0.492(B - Y') \\
V &= V_{Max} \frac{R - Y'}{1 - W_R} \approx 0.877(R - Y')
\end{align}</math>
 
The resulting ranges of Y', U, and V respectively are [0, 1], [-''U''<sub>Max</sub>, ''U''<sub>Max</sub>], and [-''V''<sub>Max</sub>, ''V''<sub>Max</sub>].
 
Inverting the above transformation converts Y'UV to RGB:
 
:<math>\begin{align}
R &= Y' + V \frac{1 - W_R}{V_{Max}} = Y' +  \frac{V}{0.877}\\
G &= Y' - U \frac{W_B (1 - W_B)}{U_{Max} W_G} - V \frac{W_R (1 - W_R)}{V_{Max} W_G} = Y' - \frac{0.232 U}{0.587} -  \frac{0.341 V}{0.587} = Y' - 0.395 U - 0.581 V\\
B &= Y' + U \frac{1 - W_B}{U_{Max}} =Y' +  \frac{U}{0.492}
\end{align}</math>
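As a sanity check, the forward and inverse formulas can be exercised together. The following is a minimal sketch (the class and method names are ours, not from any standard API):

```java
// Sketch: Rec. 601 analog Y'UV <-> RGB round trip using the formulas above.
public class YuvRoundTrip {
    static final double WR = 0.299, WB = 0.114, WG = 1 - WR - WB;
    static final double UMAX = 0.436, VMAX = 0.615;

    // RGB in [0,1] -> {Y', U, V}
    public static double[] toYuv(double r, double g, double b) {
        double y = WR * r + WG * g + WB * b;
        double u = UMAX * (b - y) / (1 - WB);   // ~0.492 (B - Y')
        double v = VMAX * (r - y) / (1 - WR);   // ~0.877 (R - Y')
        return new double[] { y, u, v };
    }

    // {Y', U, V} -> RGB in [0,1]
    public static double[] toRgb(double y, double u, double v) {
        double r = y + v * (1 - WR) / VMAX;
        double g = y - u * WB * (1 - WB) / (UMAX * WG)
                     - v * WR * (1 - WR) / (VMAX * WG);
        double b = y + u * (1 - WB) / UMAX;
        return new double[] { r, g, b };
    }

    public static void main(String[] args) {
        // The inverse transform recovers the original RGB values.
        double[] yuv = toYuv(0.25, 0.5, 0.75);
        double[] rgb = toRgb(yuv[0], yuv[1], yuv[2]);
        assert Math.abs(rgb[0] - 0.25) < 1e-9;
        assert Math.abs(rgb[1] - 0.5)  < 1e-9;
        assert Math.abs(rgb[2] - 0.75) < 1e-9;
        // Grey input gives U = V = 0.
        assert Math.abs(toYuv(0.5, 0.5, 0.5)[1]) < 1e-12;
        assert Math.abs(toYuv(0.5, 0.5, 0.5)[2]) < 1e-12;
    }
}
```

Note that greys map to U = V = 0, the property that made Y'UV backward-compatible with black-and-white transmission.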
 
Equivalently, substituting values for the constants and expressing them as [[Matrix (mathematics)|matrices]] gives:
 
:<math>
\begin{bmatrix} Y' \\ U \\ V \end{bmatrix}
=
\begin{bmatrix}
  0.299  &  0.587  &  0.114 \\
-0.14713 & -0.28886 &  0.436 \\
  0.615  & -0.51499 & -0.10001
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
</math>
 
:<math>
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
=
\begin{bmatrix}
1 &  0      &  1.13983 \\
1 & -0.39465 & -0.58060 \\
1 &  2.03211 &  0
\end{bmatrix}
\begin{bmatrix} Y' \\ U \\ V \end{bmatrix}
</math>
 
===Notes===
The weights used to compute Y' (top row of matrix) are identical to those used in the [[YIQ|Y'IQ]] color space.
 
Equal values of red, green and blue (i.e. levels of gray) yield 0 for U and V.  Black, RGB=(0, 0, 0), yields YUV=(0, 0, 0).  White, RGB=(1, 1, 1), yields YUV=(1, 0, 0).
 
These formulas are traditionally used in analog televisions and equipment; digital equipment such as [[HDTV]] and digital video cameras use Y'CbCr.
 
<gallery caption="UV Planes in a range of [-1,1]">
File:YUV UV plane Y0 100 percent.png|Y' value of 0
File:YUV UV plane Y0.5 100 percent.png|Y' value of 0.5
File:YUV UV plane Y1 100 percent.png|Y' value of 1
</gallery>
 
===BT.709 and BT.601===
When standardising [[high-definition video]], the [[ATSC]] chose a different formula for YCbCr than that used for standard-definition video, so when converting between SDTV and HDTV the color information has to be altered in addition to [[image scaling|scaling]] the video.

The formulae above reference [[Rec. 601]]. For HDTV, a slightly different matrix is used, in which W<sub>R</sub> and W<sub>B</sub> are replaced by the [[Rec. 709]] values:
 
:<math>
\begin{array}{rl}
W_R &= 0.2126 \\
W_B &= 0.0722 \\
\end{array}
</math>
 
This gives the following matrices:
 
:<math>
\begin{bmatrix} Y' \\ U \\ V \end{bmatrix}
=
\begin{bmatrix}
  0.2126  &  0.7152  &  0.0722 \\
-0.09991 & -0.33609 &  0.436 \\
  0.615  & -0.55861 & -0.05639
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
</math>
 
:<math>
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
=
\begin{bmatrix}
1 &  0      &  1.28033 \\
1 & -0.21482 & -0.38059 \\
1 &  2.12798 &  0
\end{bmatrix}
\begin{bmatrix} Y' \\ U \\ V \end{bmatrix}
</math>
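The matrix entries need not be tabulated separately for each standard; both sets follow from W<sub>R</sub> and W<sub>B</sub> alone. The sketch below (class and method names are ours) regenerates the U and V rows for both the Rec. 601 and Rec. 709 weights:

```java
// Sketch: derive the RGB -> Y'UV matrix rows from the luma weights alone.
public class YuvMatrix {
    static final double UMAX = 0.436, VMAX = 0.615;

    // Returns the 3x3 RGB -> Y'UV matrix for the given W_R, W_B.
    public static double[][] matrix(double wr, double wb) {
        double wg = 1 - wr - wb;
        double ku = UMAX / (1 - wb);  // scale in U = ku * (B - Y')
        double kv = VMAX / (1 - wr);  // scale in V = kv * (R - Y')
        return new double[][] {
            { wr, wg, wb },
            { -ku * wr, -ku * wg, ku * (1 - wb) },
            { kv * (1 - wr), -kv * wg, -kv * wb }
        };
    }

    public static void main(String[] args) {
        double[][] sd = matrix(0.299, 0.114);   // Rec. 601
        double[][] hd = matrix(0.2126, 0.0722); // Rec. 709
        // Entries match the Rec. 601 matrix in the text ...
        assert Math.abs(sd[1][0] - (-0.14713)) < 1e-4;
        assert Math.abs(sd[2][1] - (-0.51499)) < 1e-4;
        // ... and the Rec. 709 matrix.
        assert Math.abs(hd[1][0] - (-0.09991)) < 1e-4;
        assert Math.abs(hd[2][2] - (-0.05639)) < 1e-4;
    }
}
```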
 
==Numerical approximations==
Prior to the development of fast [[SIMD]] floating-point processors, most digital implementations of RGB->Y'UV used integer math, in particular [[Fixed-point arithmetic|fixed-point]] approximations. In the following examples, the operator "<math>a \gg b</math>" denotes a right-shift of ''a'' by ''b'' bits.
 
Traditional 8-bit representation of Y'UV with unsigned integers uses the following steps:
 
1. Basic transform
:<math>\begin{bmatrix}Y' \\ U \\ V \end{bmatrix} =
\begin{bmatrix}
  66 & 129 & 25  \\
-38 & -74 & 112 \\
112 & -94 & -18
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
</math>
2. Scale down to 8 bits with rounding
:<math>
\begin{array}{rcl}
Y' &=& (Y' + 128) \gg 8\\
U  &=& (U  + 128) \gg 8\\
V  &=& (V  + 128) \gg 8
\end{array}
</math>
3. Shift values
:<math>
\begin{array}{rcl}
Y' &=& Y' + 16\\
U  &=& U + 128\\
V  &=& V + 128
\end{array}
</math>
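Putting the three steps together gives, as a sketch (the function and class names are ours):

```java
// Sketch: 8-bit fixed-point RGB -> Y'UV using the three steps above.
public class FixedPointYuv {
    // R, G, B in [0,255]; returns {Y', U, V} as 8-bit studio-swing values.
    public static int[] rgbToYuv(int r, int g, int b) {
        // 1. Basic transform (coefficients are the float matrix scaled by 256)
        int y =  66 * r + 129 * g +  25 * b;
        int u = -38 * r -  74 * g + 112 * b;
        int v = 112 * r -  94 * g -  18 * b;
        // 2. Scale down to 8 bits with rounding
        y = (y + 128) >> 8;
        u = (u + 128) >> 8;
        v = (v + 128) >> 8;
        // 3. Shift values into the unsigned 8-bit ranges
        return new int[] { y + 16, u + 128, v + 128 };
    }

    public static void main(String[] args) {
        // White maps to the top of the [16, 235] Y' range, black to the bottom;
        // neutral colors give U = V = 128.
        int[] white = rgbToYuv(255, 255, 255);
        int[] black = rgbToYuv(0, 0, 0);
        assert white[0] == 235 && white[1] == 128 && white[2] == 128;
        assert black[0] == 16  && black[1] == 128 && black[2] == 128;
    }
}
```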
 
Y' values are conventionally shifted and scaled to the range [16, 235] (referred to as studio swing) rather than using the full range of [0, 255] (referred to as full swing).  This confusing practice derives from the MPEG standards and explains why 16 is added to Y' and why the Y' coefficients in the basic transform sum to 220 instead of 255.  U and V values, which may be positive or negative, are summed with 128 to make them always positive.<ref>{{cite book
| title = Video Demystified
| author = Keith Jack
| isbn = 1-878707-09-4
}}</ref>
 
==Luminance/chrominance systems in general==
The primary advantage of luma/chroma systems such as Y'UV, and its relatives [[YIQ|Y'IQ]] and [[YDbDr]], is that they remain compatible with black and white [[analog television]] (largely due to the work of [[Georges Valensi]]). The Y' channel saves all the data recorded by black and white cameras, so it produces a signal suitable for reception on old monochrome displays. In this case, the U and V are simply discarded. If displaying color, all three channels are used, and the original RGB information can be decoded.
 
Another advantage of Y'UV is that some of the information can be discarded in order to reduce [[Bandwidth (signal processing)|bandwidth]]. The human eye has fairly little spatial sensitivity to color: the accuracy of the brightness information of the luminance channel has far more impact on the image detail discerned than that of the other two. Understanding this human shortcoming, standards such as [[NTSC]] reduce the bandwidth of the chrominance channels considerably.  (Bandwidth is in the temporal domain, but this translates into the spatial domain as the image is scanned out.)
 
Therefore, the resulting U and V signals can be substantially "compressed". In the [[NTSC]] (Y'IQ) and [[PAL]] systems, the chrominance signals had significantly narrower bandwidth than that for the luminance. Early versions of NTSC rapidly alternated between particular colors in identical image areas to make them appear to blend together to the human eye, while all modern analogue and even most digital video standards use [[chroma subsampling]], recording a picture's color information at reduced resolution compared to the brightness information. The most common form is 4:2:2, where the ratio expresses the sampling of Y relative to U and V (or Y relative to I and Q). The 4:x:x convention dates to the very earliest color NTSC standard, which used a chroma subsampling of 4:1:1, so that the picture carried only a quarter as much resolution in color as it did in brightness. Today, only high-end equipment processing uncompressed signals uses a chroma subsampling of 4:4:4, with identical resolution for both brightness and color information.
 
The I and Q axes were chosen according to the bandwidth needed by human vision: one axis is that requiring the most bandwidth, and the other (fortuitously at 90 degrees to it) the minimum. However, true I and Q demodulation was more complex, requiring two analog delay lines, and NTSC receivers rarely used it.
 
However, this color space conversion is [[lossy compression|lossy]], particularly obvious in [[crosstalk]] from the luma to the chroma-carrying wire, and vice versa, in analogue equipment (including [[RCA connector]]s, since what they carry is analogue [[composite video]], whether YUV, YIQ, or [[CVBS]]). Furthermore, NTSC and PAL encoded color signals in a manner that causes high-bandwidth chroma and luma signals to mix with each other in a bid to maintain backward compatibility with black-and-white television equipment, which results in [[dot crawl]] and [[cross color]] artifacts. When the NTSC standard was created in the 1950s, this was not a real concern since the quality of the image was limited by the monitor equipment, not the limited-bandwidth signal being received. However, today's televisions are capable of displaying more information than is contained in these lossy signals. To keep pace with new display technologies, attempts have been made since the late 1970s to preserve more of the Y'UV signal while transferring images, such as the [[SCART]] (1977) and [[S-Video]] (1987) connectors.
 
Y'CbCr, rather than Y'UV, became the standard format for common digital [[video compression]] algorithms such as [[MPEG-2]]. Digital television and DVDs preserve their [[video compression|compressed video]] streams in the MPEG-2 format, which uses a full Y'CbCr color space while retaining the established process of chroma subsampling. The professional [[CCIR 601]] digital video format also uses Y'CbCr at the common chroma subsampling rate of 4:2:2, primarily for compatibility with previous analog video standards. This stream can easily be mixed into any output format needed.
 
Y'UV is not an [[absolute color space]]. It is a way of encoding RGB information, and the actual color displayed depends on the actual RGB colorants used to display the signal. Therefore, a value expressed as Y'UV is only predictable if standard RGB colorants are used, i.e. a fixed set of primary chromaticities: a particular set of red, green, and blue.
 
==Confusion with Y'CbCr==
 
Y'UV is often used as the term for [[YCbCr]].  However, they are different formats.  Y'UV is an analog system with scale factors different from the digital Y'CbCr system.<ref>{{cite journal|url=http://www.poynton.com/papers/YUV_and_luminance_harmful.html
|accessdate=2008-08-22
|title=YUV and luminance considered harmful
|date=1999-06-19
|first=Charles
|last=Poynton
}}</ref>
 
In digital video/image systems, Y'CbCr is the most common way to express color in a way suitable for compression/transmission. The confusion stems from computer implementations and text-books erroneously using the term YUV where Y'CbCr would be correct.
 
==Types of sampling==
 
To get a digital signal, Y'UV images can be [[sample (signal)|sampled]] in several different ways; see [[chroma subsampling]].
 
==Converting between Y'UV and RGB==
 
RGB files are typically encoded in 8, 12, 16 or 24 bits per pixel. In these examples, we will assume 24 bits per pixel, which is written as RGB888. The standard byte format is
r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3];
g1 = rgb[4];
b1 = rgb[5];
  ...
 
Y'UV files can be encoded in 12, 16 or 24 bits per pixel. The common formats are Y'UV444 (or YUV444), YUV411, Y'UV422 (or YUV422) and Y'UV420p (or YUV420). The apostrophe after the Y is often omitted, as is the "p" after YUV420p.  In terms of actual file formats, YUV420 is the most common, as the data is more easily compressed, and the file extension is usually ".YUV".
The relation between data rate and sampling (A:B:C) is defined by the sampling ratio between the Y channel and the U and V channels.<ref>[http://msdn.microsoft.com/en-us/library/windows/desktop/dd391027(v=vs.85).aspx msdn.microsoft.com, Recommended 8-Bit YUV Formats for Video Rendering]</ref><ref>[http://msdn.microsoft.com/de-de/library/windows/desktop/dd206750(v=vs.85).aspx msdn.microsoft.com, YUV Video Subtypes]</ref>
 
To convert from RGB to YUV or back, it is simplest to use RGB888 and YUV444. For YUV411, YUV422 and YUV420, the bytes need to be converted to YUV444 first.
         
YUV444   3 bytes per pixel
YUV422   4 bytes per 2 pixels
YUV411   6 bytes per 4 pixels
YUV420p  6 bytes per 4 pixels, reordered
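The frame sizes implied by the table can be computed directly; the following is an illustrative sketch (the helper names are ours):

```java
// Sketch: bytes needed per frame for the layouts listed above.
public class FrameSizes {
    public static int yuv444Bytes(int w, int h) { return w * h * 3; }      // 3 bytes/pixel
    public static int yuv422Bytes(int w, int h) { return w * h * 2; }      // 4 bytes/2 pixels
    public static int yuv411Bytes(int w, int h) { return w * h * 6 / 4; }  // 6 bytes/4 pixels
    public static int yuv420pBytes(int w, int h) { return w * h * 6 / 4; } // same total, planar

    public static void main(String[] args) {
        // For a small 6x4 image:
        assert yuv444Bytes(6, 4) == 72;
        assert yuv422Bytes(6, 4) == 48;
        assert yuv411Bytes(6, 4) == 36;
        assert yuv420pBytes(6, 4) == 36;
    }
}
```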
 
===Y'UV444 to RGB888 conversion===
 
The function [R, G, B] = Y'UV444toRGB888(Y', U, V) converts the Y'UV format to a simple RGB format.
 
The RGB conversion formulae used for Y'UV444 format are also applicable to the standard NTSC TV transmission format of YUV420 (or YUV422 for that matter). For YUV420, since each U or V sample is used to represent 4 Y samples that form a square, a proper sampling method can allow the utilization of the exact conversion formulae shown below. For more details, please see the 420 format demonstration in the bottom section of this article.
 
These formulae are based on the NTSC standard:
:<math>Y' =  0.299 \times R + 0.587 \times G + 0.114 \times B</math>
:<math>U  = -0.147 \times R - 0.289 \times G + 0.436 \times B</math>
:<math>V  =  0.615 \times R - 0.515 \times G - 0.100 \times B</math>
 
On older, non-[[SIMD]] architectures, floating point arithmetic is much slower than using fixed-point arithmetic, so an alternative formulation is:
 
:<math>C = Y' - 16</math>
:<math>D = U - 128</math>
:<math>E = V - 128</math>
 
Using the previous coefficients and noting that clamp() denotes clamping a value to the range of 0 to 255, the following formulae provide the conversion from Y'UV to RGB (NTSC version):
 
:<math>R = \mathrm{clamp}(( 298 \times C                + 409 \times E + 128) >> 8)</math>
:<math>G = \mathrm{clamp}(( 298 \times C - 100 \times D - 208 \times E + 128) >> 8)</math>
:<math>B = \mathrm{clamp}(( 298 \times C + 516 \times D                + 128) >> 8)</math>
 
Note: the above formulae actually apply to YCbCr; although the term YUV is used here, YUV and YCbCr are not strictly identical.
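A runnable sketch of this fixed-point conversion (the class and method names are ours):

```java
// Sketch: studio-swing Y'UV (YCbCr-style) -> RGB888 via the clamped
// fixed-point formulae above.
public class YuvToRgbFixed {
    static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }

    public static int[] yuvToRgb(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return new int[] { r, g, b };
    }

    public static void main(String[] args) {
        // Studio-swing white (235,128,128) and black (16,128,128)
        // expand to full-range RGB white and black.
        int[] w = yuvToRgb(235, 128, 128);
        int[] k = yuvToRgb(16, 128, 128);
        assert w[0] == 255 && w[1] == 255 && w[2] == 255;
        assert k[0] == 0 && k[1] == 0 && k[2] == 0;
    }
}
```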
 
The ITU-R version of the formulae is different:
 
:<math>Y  = 0.299 \times R + 0.587 \times G + 0.114 \times B + 0</math>
:<math>Cb = -0.169 \times R - 0.331 \times G + 0.499 \times B + 128</math>
:<math>Cr = 0.499 \times R - 0.418 \times G - 0.0813 \times B + 128</math>
 
:<math>R = \mathrm{clamp}(Y + 1.402 \times (Cr - 128))</math>
:<math>G = \mathrm{clamp}(Y - 0.344 \times (Cb - 128) - 0.714 \times (Cr - 128))</math>
:<math>B = \mathrm{clamp}(Y + 1.772 \times (Cb - 128))</math>
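With the truncated coefficients given above, a round trip through these formulae is close but not exact. A sketch (names ours):

```java
// Sketch: full-swing ITU-R style RGB <-> YCbCr using the formulae above.
public class ItuRoundTrip {
    static int clamp(double x) {
        return (int) Math.max(0, Math.min(255, Math.round(x)));
    }

    public static double[] toYcbcr(int r, int g, int b) {
        double y  =  0.299 * r + 0.587 * g + 0.114 * b;
        double cb = -0.169 * r - 0.331 * g + 0.499 * b + 128;
        double cr =  0.499 * r - 0.418 * g - 0.0813 * b + 128;
        return new double[] { y, cb, cr };
    }

    public static int[] toRgb(double y, double cb, double cr) {
        int r = clamp(y + 1.402 * (cr - 128));
        int g = clamp(y - 0.344 * (cb - 128) - 0.714 * (cr - 128));
        int b = clamp(y + 1.772 * (cb - 128));
        return new int[] { r, g, b };
    }

    public static void main(String[] args) {
        // The round trip recovers the input to within a small error.
        double[] yc = toYcbcr(200, 100, 50);
        int[] rgb = toRgb(yc[0], yc[1], yc[2]);
        assert Math.abs(rgb[0] - 200) <= 2;
        assert Math.abs(rgb[1] - 100) <= 2;
        assert Math.abs(rgb[2] - 50)  <= 2;
    }
}
```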
 
An integer-only version of the ITU-R formulae, converting YCbCr (8 bits per channel) to RGB888 with shifts and adds:
 
:<math>Cr = Cr - 128;</math>
:<math>Cb = Cb - 128;</math>
:<math>R = Y + Cr + (Cr>>2) + (Cr>>3) + (Cr>>5)</math>
:<math>G = Y - ((Cb>>2) + (Cb>>4) + (Cb>>5)) - ((Cr>>1) + (Cr>>3) + (Cr>>4) + (Cr>>5))</math>
:<math>B = Y + Cb + (Cb>>1) + (Cb>>2) + (Cb>>6)</math>
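The shift sums approximate the floating-point coefficients: 1 + 1/4 + 1/8 + 1/32 = 1.40625 ≈ 1.402 for red, and 1 + 1/2 + 1/4 + 1/64 = 1.765625 ≈ 1.772 for blue. A sketch checking this (names ours):

```java
// Sketch: the shift/add sums used above versus the float coefficients.
public class ShiftApprox {
    // ~1.402 * cr via shifts: cr + cr/4 + cr/8 + cr/32
    static int redTerm(int cr)  { return cr + (cr >> 2) + (cr >> 3) + (cr >> 5); }
    // ~1.772 * cb via shifts: cb + cb/2 + cb/4 + cb/64
    static int blueTerm(int cb) { return cb + (cb >> 1) + (cb >> 2) + (cb >> 6); }

    public static void main(String[] args) {
        // Over the positive chroma range the approximation stays within
        // a few counts of the floating-point product.
        for (int x = 0; x <= 127; x++) {
            assert Math.abs(redTerm(x)  - 1.402 * x) < 4;
            assert Math.abs(blueTerm(x) - 1.772 * x) < 4;
        }
    }
}
```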
 
===Y'UV422 to RGB888 conversion===
 
: Input:  Reads 4 bytes of Y'UV (y1, u, y2, v)
: Output: Writes 6 bytes of RGB (R, G, B, R, G, B)
 
y1 = yuv[0];
u  = yuv[1];
y2 = yuv[2];
v  = yuv[3];
<!--:<math>{YUV} = \begin{bmatrix}u & y_1 & v & y_2\end{bmatrix}</math>-->
 
Using this information, the bytes can be treated as regular Y'UV444 samples to obtain two RGB pixels:
 
rgb1 = Y'UV444toRGB888(y1, u, v);
rgb2 = Y'UV444toRGB888(y2, u, v);
<!--:<math>{RGB}_1 = \begin{bmatrix}y_1 & u & v\end{bmatrix}</math>
:<math>{RGB}_2 = \begin{bmatrix}y_2 & u & v\end{bmatrix}</math>-->
 
Y'UV422 can also be expressed in the YUY2 [[FourCC]] format code, in which each four-byte macropixel defines two image pixels.
[[Image:Yuv422 yuy2.svg|350px]].
 
===Y'UV411 to RGB888 conversion===
 
: Input:  Read  6 bytes of Y'UV
: Output: Writes 12 bytes of RGB
 
// Extract YUV components
u  = yuv[0];
y1 = yuv[1];
y2 = yuv[2];
v  = yuv[3];
y3 = yuv[4];
y4 = yuv[5];
 
rgb1 = Y'UV444toRGB888(y1, u, v);
rgb2 = Y'UV444toRGB888(y2, u, v);
rgb3 = Y'UV444toRGB888(y3, u, v);
rgb4 = Y'UV444toRGB888(y4, u, v);
<!--:<math>{YUV} = \begin{bmatrix}u & y_1 & y_2 & v & y_3 & y_4\end{bmatrix}</math>
:
:<math>{RGB}_1 = \begin{bmatrix}y_1 & u & v\end{bmatrix}</math>
:<math>{RGB}_2 = \begin{bmatrix}y_2 & u & v\end{bmatrix}</math>
:<math>{RGB}_3 = \begin{bmatrix}y_3 & u & v\end{bmatrix}</math>
:<math>{RGB}_4 = \begin{bmatrix}y_4 & u & v\end{bmatrix}</math>-->
 
The result is four RGB pixels (4×3 bytes) from 6 bytes, halving the size of the transferred data at some loss of quality.
 
===Y'UV420p (and Y'V12 or YV12) to RGB888 conversion===
 
Y'UV420p is a planar format, meaning that the Y', U, and V values are grouped together instead of interspersed.  The reason for this is that by grouping the U and V values together, the image becomes much more compressible.  When given an array of an image in the Y'UV420p format, all the Y' values come first, followed by all the U values, followed finally by all the V values.
 
The Y'V12 format is essentially the same as Y'UV420p, but it has the U and V data switched: the Y' values are followed by the V values, with the U values last.  As long as care is taken to extract U and V values from the proper locations, both Y'UV420p and Y'V12 can be processed using the same algorithm.
 
As with most Y'UV formats, there are as many Y' values as there are pixels.  Where X equals the height multiplied by the width, the first X indices in the array are Y' values that correspond to each individual pixel.  However, there are only one fourth as many U and V values.  The U and V values correspond to each 2 by 2 block of the image, meaning each U and V entry applies to four pixels.  After the Y' values, the next X/4 indices are the U values for each 2 by 2 block, and the next X/4 indices after that are the V values that also apply to each 2 by 2 block.
 
Translating Y'UV420p to RGB is a more involved process compared to the previous formats.  Lookup of the Y', U and V values can be done using the following method:
 
size.total = size.width * size.height;
y = yuv[position.y * size.width + position.x];
u = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total];
v = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total + (size.total / 4)];
rgb = Y'UV444toRGB888(y, u, v);
 
Here "/" denotes integer division.
 
[[Image:Yuv420.svg|800px]]
 
As shown in the above image, the Y', U and V components in Y'UV420 are encoded separately in sequential blocks.  A Y' value is stored for every pixel, followed by a U value for each 2×2 square block of pixels, and finally a V value for each 2×2 block.  Corresponding Y', U and V values are shown using the same color in the diagram above.  Read line-by-line as a byte stream from a device, the Y' block would be found at position 0, the U block at position x×y (6×4 = 24 in this example) and the V block at position x×y + (x×y)/4 (here, 6×4 + (6×4)/4 = 30).
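The index arithmetic above, for the same 6×4 example, as a sketch (the class and method names are ours):

```java
// Sketch: locate the Y', U and V bytes for a pixel in a Y'UV420p buffer.
public class Yuv420pIndex {
    // Returns {yIndex, uIndex, vIndex} for pixel (x, y) in a width x height frame.
    public static int[] indices(int x, int y, int width, int height) {
        int total = width * height;
        int yi = y * width + x;                           // one Y' per pixel
        int ui = total + (y / 2) * (width / 2) + (x / 2); // one U per 2x2 block
        int vi = ui + total / 4;                          // V plane follows U plane
        return new int[] { yi, ui, vi };
    }

    public static void main(String[] args) {
        // 6x4 frame, as in the diagram: U block starts at 24, V block at 30.
        int[] first = indices(0, 0, 6, 4);
        assert first[0] == 0 && first[1] == 24 && first[2] == 30;
        // Pixels (0,0), (1,0), (0,1), (1,1) share the same U and V bytes.
        int[] p = indices(1, 1, 6, 4);
        assert p[1] == 24 && p[2] == 30;
    }
}
```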
 
===Y'UV420sp (NV21) to ARGB8888 conversion===
 
This format (NV21) is the standard picture format for Android camera preview: a YUV 4:2:0 planar image with 8-bit Y samples, followed by an interleaved V/U plane with 8-bit, 2×2-subsampled chroma samples.<ref>fourcc.com YUV pixel formats [http://www.fourcc.org/yuv.php#NV21]</ref>
 
Java source code used on Android:
<source lang="Java">
/**
 * Converts a YUV420 NV21 byte array to an ARGB8888 pixel array.
 *
 * @param data   byte array in YUV420 NV21 format
 * @param width  image width in pixels
 * @param height image height in pixels
 * @return an ARGB8888 int array, one int per pixel
 */
public static int[] convertYUV420_NV21toARGB8888(byte[] data, int width, int height) {
    int size = width * height;
    int offset = size; // the interleaved V/U plane starts after the Y plane
    int[] pixels = new int[size];
    int u, v, y1, y2, y3, y4;

    // i walks the Y plane (and the output pixels) two columns at a time;
    // k walks the V/U plane, one V/U pair per 2x2 block of pixels.
    for (int i = 0, k = 0; i < size; i += 2, k += 2) {
        y1 = data[i] & 0xff;
        y2 = data[i + 1] & 0xff;
        y3 = data[width + i] & 0xff;
        y4 = data[width + i + 1] & 0xff;

        v = (data[offset + k] & 0xff) - 128;     // NV21 stores V first...
        u = (data[offset + k + 1] & 0xff) - 128; // ...then U

        pixels[i] = convertYUVtoARGB(y1, u, v);
        pixels[i + 1] = convertYUVtoARGB(y2, u, v);
        pixels[width + i] = convertYUVtoARGB(y3, u, v);
        pixels[width + i + 1] = convertYUVtoARGB(y4, u, v);

        // At the end of each even row, skip the odd row already processed above.
        if (i != 0 && (i + 2) % width == 0)
            i += width;
    }
    return pixels;
}

private static int convertYUVtoARGB(int y, int u, int v) {
    // u is Cb - 128, v is Cr - 128
    int r = y + (int) (1.402f * v);
    int g = y - (int) (0.344f * u + 0.714f * v);
    int b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (r << 16) | (g << 8) | b;
}
</source>
 
==References==
{{Reflist}}
 
==External links==
* [http://www.fourcc.org/fccyvrgb.php RGB/Y'UV Pixel Conversion]
* [http://www.fourcc.org/yuv.php Explanation of many different formats in the Y'UV family]
* Poynton, Charles. [http://www.poynton.com/Poynton-video-eng.html Video engineering]
* Kohn, Mike. [http://www.mikekohn.net/stuff/image_processing.php Y'UV422 to RGB using SSE/Assembly]
* [http://discoverybiz.net/enu0/faq/faq_YUV_YCbCr_YPbPr.html YUV, YCbCr, YPbPr color spaces.]
* [http://www.equasys.de/colorformat.html Color formats] for image and video processing - [http://www.equasys.de/colorconversion.html Color conversion] between RGB, YUV, YCbCr and YPbPr.
* [http://www.erazer.org/how-to-convert-rgb-to-yuv420p How to convert RGB to YUV420P]
* [http://code.google.com/p/libyuv/ libyuv]
* [http://code.google.com/p/pixfc-sse/ pixfc-sse] - C library of SSE-optimized color format conversions.
 
{{Color space}}
 
[[Category:Color space]]
