Composite video decoding: Theory and practice

After a few busy weeks, I’ve finally arranged enough time to cover the details behind the color composite video decoding project I featured recently. Before you proceed, I suggest you read the previous post, as well as the one before that, which covers B/W decoding (this post builds on it), and watch the video, if you haven’t already:

So, feel ready to learn about composite color coding and quadrature amplitude modulation (QAM)? Let’s get started! Before you begin, you can grab the example Excel file so you can view the diagrams below in full size and study the data and formulas yourself.

Combining luminance and color information

If we look at one line of video information (a “scanline”), it’s basically a function of two things: luminance (brightness) and chrominance (color) information, combined into a single waveform/signal like the one I showed in my first article.

If we’d like to have just a black-and-white image, encoding it would be easy: maximum voltage for white, minimum voltage for black, and the values in between would simply be shades of grey. However, if we’d like to add color information to this signal, we need to get clever. What the engineers in the 1950s did to stuff two things into one signal was to add the color as a sine-wave-modulated component on top of the luminance signal. With proper analog electronics, these two signals could then be separated at the receiving end.
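To make the idea concrete, here’s a minimal Python sketch of the scheme (my illustration, not code from the actual decoder): the two color components (I and Q in NTSC) modulate a 3.58 MHz subcarrier in quadrature, the result is added on top of the luma, and synchronous demodulation pulls the components back out. The sampling rate and signal values are toy choices.

```python
import numpy as np

FSC = 3_579_545.0      # NTSC color subcarrier frequency (Hz)
FS = 8 * FSC           # toy sampling rate: 8 samples per subcarrier cycle

def encode_scanline(luma, i_sig, q_sig):
    """Toy composite: luma plus I/Q modulated in quadrature on the subcarrier."""
    t = np.arange(len(luma)) / FS
    carrier = 2 * np.pi * FSC * t
    return luma + i_sig * np.cos(carrier) + q_sig * np.sin(carrier)

def decode_scanline(composite):
    """Recover I and Q by mixing with the carrier and low-pass filtering."""
    t = np.arange(len(composite)) / FS
    carrier = 2 * np.pi * FSC * t
    # Mixing shifts the chroma down to DC (and the luma up to the carrier)...
    i_mixed = 2 * composite * np.cos(carrier)
    q_mixed = 2 * composite * np.sin(carrier)
    # ...and averaging over one full subcarrier cycle (8 samples) removes
    # everything except the DC component, i.e. the color values.
    kernel = np.ones(8) / 8
    return (np.convolve(i_mixed, kernel, mode='same'),
            np.convolve(q_mixed, kernel, mode='same'))

# Encode a line with constant test colors, then decode it again
n = 512
luma = np.full(n, 0.5)
composite = encode_scanline(luma, np.full(n, 0.2), np.full(n, -0.1))
i_rec, q_rec = decode_scanline(composite)
print(i_rec[n // 2], q_rec[n // 2])   # ~0.2 and ~-0.1 away from line edges
```

In a real signal the demodulator’s phase has to be locked to the color burst transmitted at the start of each line; in this sketch the encoder and decoder simply share the same clock and phase, so that step is skipped.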



Color composite video decoding with PicoScope 3206B

I recently finished an improved version of the NTSC composite video decoder previously featured here at Code and Life. With the bigger buffer of the PicoScope 3206B, I was able to capture enough samples per frame to add color. A YouTube demonstration video has just finished uploading, and you can view it below.

In the video I already talk a bit about the techniques used in the new version, as well as the new features. For the readers of this blog, I’m planning a more detailed technical explanation, which I’ll put up as soon as I get the infographics for it done.

Enjoy the show! If you want to take a look at the sources (or better yet, have a 3000 series PicoScope at hand), you can grab the source package. My code is licensed under the GPL; see README.txt inside the archive for details.

Update: Technical details now published! Implementation details to be added in another post a bit later.

Realtime Composite Video Decoding with PicoScope

After getting my Raspberry Pi and successfully trying out the serial console and communication with Arduino, I wanted to see if I could use the Pi as a “display shield” for Arduino and other simpler microcontroller projects. However, this plan had a minor problem: my workstation’s monitor wouldn’t display the HDMI image from the Pi, nor did it have a composite input. Working with the Pi in my living room, which has a projector with both HDMI and composite inputs, was an option, but spreading all my gear there didn’t seem like such a good plan. But then I got a crazy idea:

The Pi has a composite output, which seems like a standard RCA connector. Presumably it’s sending out a rather straightforward analog signal. Would it be possible to digitize this signal and emulate a composite video display on the PC?

The short answer is: yes. The medium-length answer is that it either requires an expensive oscilloscope with a very large capture buffer (millions of samples), or something that can stream the data fast enough that there are enough samples per scanline to go by. It turns out my PicoScope 2204 can just manage the latter – it isn’t enough for color, but here’s what I was able to achieve (hint: you may want to set video quality to 480p):
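To put rough numbers on “enough samples per scanline” (my back-of-the-envelope figures, not from the original post), standard NTSC timings at a 150 ns sampling interval work out as follows:

```python
# Back-of-the-envelope sample budget for NTSC at a 150 ns sampling interval
LINE_US = 63.555            # NTSC scanline duration in microseconds
FIELD_LINES = 262.5         # scanlines per field (two interlaced fields per frame)
SUBCARRIER_MHZ = 3.579545   # NTSC color subcarrier frequency
DT_NS = 150                 # sampling interval in nanoseconds

samples_per_line = LINE_US * 1000 / DT_NS
samples_per_field = samples_per_line * FIELD_LINES
samples_per_chroma_cycle = 1000 / (SUBCARRIER_MHZ * DT_NS)

print(f"samples per scanline:     {samples_per_line:.0f}")          # ~424
print(f"samples per field:        {samples_per_field:.0f}")         # ~111221
print(f"samples per chroma cycle: {samples_per_chroma_cycle:.2f}")  # ~1.86
```

At under two samples per chroma cycle the 3.58 MHz color subcarrier is below the Nyquist limit, which is why this setup is monochrome-only, while a full field of luma (about 111 000 samples) fits easily into a 500 000-sample streaming capture.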

What my program does is capture a run of 500 000 samples at 150 ns intervals, analyze the data stream to see whether we have a working frame (and, because the signal is interlaced, whether we got the odd or the even field), plot it on screen, and grab a new set of data. It essentially creates a “virtual composite input” for the PC. There’s some jitter and horizontal resolution lost due to capture-rate and decoding-algorithm limitations, and the picture is monochrome, but if you consider that realtime serial decoding is considered a nice feature in oscilloscopes, this does take things to a whole new level.
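For the curious, here’s a simplified Python sketch of the scanline-slicing idea described above (not my actual implementation; the threshold and line length are illustrative). It finds horizontal sync pulses by looking for dips below the blanking level and cuts the capture into fixed-length lines:

```python
import numpy as np

SAMPLES_PER_LINE = 424    # ~63.5 us scanline at a 150 ns sampling interval
SYNC_THRESHOLD = 0.1      # volts; sync tips dip below blanking level (illustrative)

def find_sync_starts(samples):
    """Return indices where the signal falls below the sync threshold."""
    below = samples < SYNC_THRESHOLD
    edges = np.flatnonzero(~below[:-1] & below[1:]) + 1  # falling edges
    # Keep only edges roughly one scanline apart; the closer ones are the
    # equalization/serration pulses around the vertical blanking interval.
    keep = [edges[0]] if len(edges) else []
    for e in edges[1:]:
        if e - keep[-1] > SAMPLES_PER_LINE * 0.8:
            keep.append(e)
    return np.array(keep)

def slice_scanlines(samples):
    """Cut the capture into fixed-length scanlines, one per sync pulse."""
    starts = find_sync_starts(samples)
    return np.array([samples[s:s + SAMPLES_PER_LINE]
                     for s in starts if s + SAMPLES_PER_LINE <= len(samples)])

# Usage idea: normalize each row of slice_scanlines(capture) to 0..1 gray
# values and draw the rows as image scanlines.
```

Detecting the vertical sync pattern (the long, serrated pulses between fields) on top of this tells you where each field starts and whether it’s the odd or even one; after that, drawing the image is just normalizing each row’s voltages to gray values.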

Read on to learn how this is achieved, and you’ll learn a thing or two about video signals! I’ve also included full source code (consider it alpha grade) for any readers with similar equipment in their hands.
