PseudoGrey Plus takes the PseudoGrey concept one step further. Whereas PseudoGrey can encode 1786 levels of grey within a 24-bit color image, PseudoGrey Plus ups that to 3110 unique levels. I have only been working with this idea for a week. (It is 20 Nov 2013 as I write.) Currently I envision its main utility as displaying 12-bit greyscale images on a color monitor while maintaining both the original dynamic range and most of the discrete levels of contrast.
The underlying basis of pseudogrey is a color-to-grey algorithm known as luma.
It dates back to color televisions of the 1950s and '60s. While reversing it works very
well as a greyscale-to-color technique, modern monitors treat color differently than old color
TVs. In particular, they weight the intensities of red, green, and blue more like
human eyes do. The newer algorithm is based on luminance. Here is a table
comparing the two (these are the standard Rec. 601 luma and Rec. 709 luminance weights):

                          Red      Green    Blue
  Luma (Rec. 601)         0.299    0.587    0.114
  Luminance (Rec. 709)    0.2126   0.7152   0.0722
Note in particular that the eye weights of both red and especially blue have dropped, while the effect of green on perceived intensity went up.
Simulating more than 256 levels of grey in a 24-bit image involves minor tweaks to the intensities of red, green, and blue (RGB). A normal greyscale conversion simply calculates an overall intensity for each pixel and sets the RGB values the same.
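As a small sketch (not the author's code), here is what that standard conversion looks like using the Rec. 709 luminance weights from the table above:

```python
def to_grey(r, g, b):
    """Standard greyscale conversion: compute a single overall intensity
    from the Rec. 709 luminance weights and set all three channels to it."""
    y = round(0.2126 * r + 0.7152 * g + 0.0722 * b)
    return (y, y, y)

# Every output pixel is "true grey": R == G == B.
print(to_grey(255, 0, 0))   # pure red -> (54, 54, 54)
```

Because R, G, and B always come out identical, this scheme can only ever produce 256 distinct shades.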
Perhaps the RGB values do not need to be exactly the same. Consider varying them
a bit <clears throat> from their base intensity.
The result is in-between levels of grey. In the following table, small "plusses"
are added to the base RGB value. Pseudogrey has no plusses greater than 1.
Pseudogrey Plus, however, takes advantage of the small weights of blue and red in the
luminance formula to allow those two channels to vary upward by as much as 2.
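To see why that extra headroom helps, here is a hedged sketch (not the author's code, and not a reproduction of the actual plus table) that simply counts how many distinct luminance offsets the plusses can produce per base level, using the Rec. 709 weights:

```python
from itertools import product

# Rec. 709 luminance weights for red, green, blue.
WR, WG, WB = 0.2126, 0.7152, 0.0722

def distinct_offsets(max_r, max_g, max_b):
    """Count distinct luminance offsets reachable by adding per-channel
    'plusses' of 0..max to a base grey value."""
    offsets = {round(WR * r + WG * g + WB * b, 6)
               for r, g, b in product(range(max_r + 1),
                                      range(max_g + 1),
                                      range(max_b + 1))}
    return len(offsets)

print(distinct_offsets(1, 1, 1))  # PseudoGrey-style plusses      -> 8
print(distinct_offsets(2, 1, 2))  # PseudoGrey Plus-style plusses -> 18
```

Each distinct offset is a candidate in-between shade, so letting red and blue go up to +2 more than doubles the candidates per base level. (Not all candidates survive as unique shades across neighboring base levels, which is why the article's totals are 1786 and 3110 rather than simple multiples of these counts.)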
The "12-bit low nibble" column is how you would map if your original image were 12-bit greyscale. The base intensity of each pixel is the upper 8 bits (byte), and the low-order nibble selects the "plusses" for the three RGB output channels.
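A minimal sketch of that mapping, assuming a hypothetical 16-entry plus table (the article's actual table is not reproduced here; this one merely orders the plus triples by increasing Rec. 709 luminance):

```python
# HYPOTHETICAL nibble-to-plusses table, one (r+, g+, b+) triple per
# low-nibble value 0..15, ordered by increasing luminance contribution.
# The article's real table may differ.
PLUSSES = [
    (0, 0, 0), (0, 0, 1), (0, 0, 2), (1, 0, 0),
    (1, 0, 1), (1, 0, 2), (2, 0, 0), (2, 0, 1),
    (2, 0, 2), (0, 1, 0), (0, 1, 1), (0, 1, 2),
    (1, 1, 0), (1, 1, 1), (1, 1, 2), (2, 1, 0),
]

def grey12_to_rgb(v):
    """Map a 12-bit grey level (0..4095) to a 24-bit (R, G, B) triple."""
    base = v >> 4                    # upper byte: base intensity
    pr, pg, pb = PLUSSES[v & 0xF]    # low nibble: per-channel plusses
    clamp = lambda x: min(x, 255)    # avoid overflow at the very top
    return (clamp(base + pr), clamp(base + pg), clamp(base + pb))
```

For example, `grey12_to_rgb(256)` has base 16 and nibble 0, so it yields true grey (16, 16, 16), while nonzero nibbles yield slightly off-grey in-between shades.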
Not necessarily, but you are correct that your eyes match the second set better. However, if you are converting a color image to greyscale, 1786 shades is likely more than enough. Most folks have lived for decades with only 256 shades – not even knowing they were missing anything. It does not matter that the in-between steps are not evenly spaced. The important thing is that there is some difference in lightness offered for each one of the shades. OTOH, if you are converting a 12-bit greyscale image to be viewed on a color monitor, having all 3110 unique levels is a much better choice.
In case that is not clear, let me restate. For color to greyscale, there is a significant gain in expanding to 1786 levels. Note that downstream users might further modify your image. If the maximum variance from true grey is one bit instead of two, the chances of visible color artifacting are more than halved. That all said, if you want to use all 3110 levels on your color-to-greyscale image, feel free. For viewing deep grey images (e.g. 12 or 16 bits/pixel) on a color monitor, 3110 levels is pretty darn good. While the full dynamic range of the original is preserved, some of the discrete levels are still lost. But the loss would be much greater if only 1786 levels were available.