Pixels and subpixels…

With all apologies to Ella Fitzgerald:

A-nixel a-bixel
A red, green and blue subpixel
I rendered a pixel for my work
And on the way forgot to save it

I deleted it, I deleted it
Yes, on the way I deleted it
A Riot Grrrl discovered it
And recovered it on the Internets
She was truckin’ on down the tubes
Without a single thing to do
She was peck-peck-peckin’ all around
When she spied it on the Internets

A-nixel a-bixel
She took my red, green and blue subpixel
And if she doesn’t bring it back
I think that I shall die

Look close…

Have you ever looked at your LCD display? I mean really looked at your LCD display? Up close? This weekend I spent some time with a macro lens and an Apple Cinema Display, examining the pixels and their red, green and blue subpixel elements. Although displays can adopt a number of geometries for pixels and subpixels, the common theme is the use of red, green and blue subpixels arrayed in horizontal rows of straight or angled bars. However, from an informal survey of higher end versus lower end LCD glass, it appears that angled bars improve the visual appearance of text and images and perhaps help to conceal any pixels that are, in fact, dead or stuck. Don’t quote me on that, though…

The whole field of subpixel rendering is actually quite interesting and, historically, might very well be traced back to the original Apple II. While it was not exactly the subpixel rendering we know today, I do remember being able to address the position of a pixel by a fraction of a location, or perform half-pixel shifts. Interestingly, when Steve Wozniak designed the Apple II, he used a color subcarrier approach that conceptually matches today’s LCD technology quite closely, in that it relied on a horizontal distribution of colors much like the RGB subpixels in modern LCD displays. Steve had to remind me this weekend how it was done, but to address a subpixel, “each byte had 7 bits that came out but if the highest bit of the byte was a ‘1’ instead of a ‘0’ then the 7 bits were shifted over a half bit position.” Back then I was just 12, playing around making animations of UFOs and such, or using it to make text look better on my Apple II with its green phosphor display, but others were actually doing something useful with it, and the work evolved into some of the more elaborate true subpixel rendering schemes and algorithms we know today. While Apple was one of the first to address these issues, IBM, Xerox PARC and Honeywell invested not inconsiderable effort in developing algorithms, and it should be noted that Microsoft came to the game in 1998 with its ClearType technologies, ballyhooing them as “a great leap forward”. You can read more on the background of subpixel rendering, and who came up with it first, on Steve Gibson’s site here.
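
To make Woz’s description a bit more concrete, here is a rough Python sketch of how one of those hi-res bytes breaks down. This is my own illustration, not Apple II firmware or an emulator: seven pixel bits drawn left to right starting from the least significant bit, plus a high bit that requests the half-position shift he mentions.

```python
# A rough sketch of the half-pixel shift described above, not actual
# Apple II hardware behaviour: each hi-res byte carries 7 pixel bits,
# and bit 7 delays the whole group by half a pixel position, which is
# what shifts the resulting hues on a color monitor.

def decode_hires_byte(byte):
    """Return (pixels, half_shift): seven on/off pixel bits, drawn
    left-to-right from the least significant bit, plus whether the
    high bit asks for the half-pixel delay."""
    pixels = [(byte >> bit) & 1 for bit in range(7)]  # bits 0..6, left to right
    half_shift = bool(byte & 0x80)                    # bit 7 selects the shift
    return pixels, half_shift


if __name__ == "__main__":
    # Same dot pattern with and without the high bit set: the dots are
    # identical, but the second group lands half a position to the right,
    # so the apparent colors change even though the "pixels" do not.
    print(decode_hires_byte(0x2A))  # high bit clear
    print(decode_hires_byte(0xAA))  # high bit set
```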

In short, any pixel on your LCD display is an amalgam of three subpixels, red, green and blue, arranged in that RGB order. Up close you can see the individual subpixel elements, but thanks to optical blurring and spatial integration of the signal by the retina and visual cortex, from a normal viewing distance we see them as a single color. By varying the intensity not only of each pixel as a whole but of each of its subpixel components, one can create effects, or percepts, of shape, color, position and blur that ultimately serve to improve text readability or image quality.
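
For the curious, here is a minimal Python sketch of that basic idea, assuming a plain RGB-striped panel. It is my own toy illustration, not ClearType or any vendor’s actual algorithm: a scanline of “ink coverage” sampled at one-third-pixel steps is mapped onto the individual R, G and B subpixels, with a small filter to trade a little sharpness for less color fringing.

```python
# A minimal sketch (not any vendor's actual algorithm) of treating the
# R, G and B subpixels of an RGB-striped LCD as three separately
# addressable horizontal samples.  A glyph coverage map rendered at 3x
# horizontal resolution is mapped onto subpixels, with a simple 1-2-1
# filter so a hard edge does not land entirely on one color channel.

def subpixel_render(coverage_3x):
    """coverage_3x: list of floats in [0, 1], length a multiple of 3,
    giving ink coverage at one-third-pixel steps along a scanline.
    Returns one (r, g, b) tuple per whole pixel, where 1.0 means the
    subpixel is fully lit (black ink on a white background)."""
    n = len(coverage_3x)

    # Light low-pass filter across neighbouring subpixels to reduce the
    # red/blue fringes the eye would otherwise see at sharp edges.
    filtered = []
    for i in range(n):
        left = coverage_3x[i - 1] if i > 0 else coverage_3x[i]
        right = coverage_3x[i + 1] if i < n - 1 else coverage_3x[i]
        filtered.append((left + 2 * coverage_3x[i] + right) / 4)

    pixels = []
    for i in range(0, n, 3):
        r, g, b = filtered[i:i + 3]
        # Coverage dims the corresponding subpixel: full ink -> subpixel off.
        pixels.append((1.0 - r, 1.0 - g, 1.0 - b))
    return pixels


if __name__ == "__main__":
    # A vertical stem one-third of a pixel wide, straddling two pixels:
    # whole-pixel rendering would have to grey out an entire pixel, while
    # subpixel rendering lights the individual R/G/B elements.
    scanline = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
    for px in subpixel_render(scanline):
        print(tuple(round(c, 2) for c in px))
```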

Note: Because of the precision with which this is possible on LCD displays, these technologies work better with LCDs than with traditional CRT displays, which tend to spread the “signal” over a less well defined area. Some companies (Apple Inc.) actually provide a software switch for LCD versus CRT displays.

