Posted February 02, 2010
Trying to describe visual acuity in terms of discrete units (such as those used to describe image resolutions) doesn't really provide any useful information, hence the metric of cycles per degree described in your linked Wikipedia article. To expand a bit on why discrete units don't work so well for describing human vision, we should start at the smallest level of human vision, which ironically actually is a discrete unit: the individual photoreceptor cells in our retinas, commonly categorized as rods and cones. However, while it may be tempting to compare rods and cones to pixels, the comparison doesn't hold up well in practice. Firstly, rods and cones are very limited in what they can detect, and no single cell can detect the full spectrum of visible light. Rods can basically only detect whether or not light is present at all, and mainly serve to detect low levels of indirect light. Cones are divided into three types, each of which responds most strongly to a different optimal wavelength (though each also detects a range of wavelengths centered around that optimum).
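To make the cycles-per-degree idea concrete, here's a small sketch (the numbers are hypothetical, just for illustration) of how you'd bridge discrete units like pixels into the angular metric: count how many pixels fall within one degree of visual angle at a given viewing distance, then note that one cycle (a light/dark line pair) needs at least two pixels.

```python
import math

def pixels_per_degree(pixel_pitch_mm: float, viewing_distance_mm: float) -> float:
    """Number of pixels subtended by one degree of visual angle."""
    # Width covered by one degree at the viewing distance, computed with
    # tan() directly rather than a small-angle approximation.
    one_degree_width = 2 * viewing_distance_mm * math.tan(math.radians(0.5))
    return one_degree_width / pixel_pitch_mm

# Hypothetical example: a display with 0.25 mm pixel pitch viewed from 600 mm.
ppd = pixels_per_degree(0.25, 600.0)
# One full cycle needs two pixels, so this caps the displayable cycles per degree.
cpd = ppd / 2
print(f"{ppd:.1f} pixels per degree, at most {cpd:.1f} cycles per degree")
```

The point of the exercise is that the angular metric depends on viewing geometry, not on the pixel count alone, which is already a hint of why raw discrete units are a poor fit for describing vision.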
Now, upon being exposed to the kind of light it can detect, all an individual rod or cone can communicate through biological signaling is that light has been detected, and to a lesser extent a rough estimate of how much light has been detected (no wavelength/color information is transmitted). Actual color information comes from the photoreceptor cells working in clusters: numerous cells close together report different amounts of light being detected, and this is the information that actually gets passed to our brain. And here is where things start getting really complicated. The color we perceive is our brain's interpretation of the various light levels detected by neighboring cones with different optimal wavelength ranges. In low-light environments the cones can't actually send much useful data, so our brains are basically just processing the "light"/"no light" information from the rods, hence the highly muted colors we perceive in low light.
Now, if all this wasn't enough, we also have to take into account that our photoreceptor cells aren't firing uniformly (i.e. there's no definitive "refresh rate"). The various cells are basically firing off at random intervals, and in between all of these disparate data points our brains are furiously filling in information. So not only is what we see interpreted, it's also interpolated on top of that. Now, if you've followed me up to this point, hopefully you'll have started to see the crux of the matter as to why we can't describe our vision in discrete units: such units are useful for describing the representative capture of visual information, but human vision is not representative, it is entirely interpretive. We can apply metrics that measure the interpretive capture of visual information (such as cycles per degree) and use them to compare different types of interpretive vision, but these metrics will always be incompatible with metrics used to measure the representative capture of visual information.
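As a loose analogy for that interpolation step (purely illustrative, not a model of actual neural processing): photoreceptors reporting at random times is a bit like sampling a signal at irregular instants and then filling in the gaps. A minimal sketch, using a hypothetical brightness signal and plain linear interpolation:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def brightness(t: float) -> float:
    """A hypothetical 'true' brightness signal over time, in [0, 1]."""
    return 0.5 + 0.5 * math.sin(t)

# Irregular sample times stand in for asynchronously firing cells.
sample_times = sorted(random.uniform(0.0, 6.0) for _ in range(12))
samples = [(t, brightness(t)) for t in sample_times]

def interpolate(t: float) -> float:
    """Linearly interpolate between the two nearest samples."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    # Outside the sampled range: clamp to the nearest sample.
    return samples[0][1] if t < samples[0][0] else samples[-1][1]

t = 3.0
print(f"true: {brightness(t):.3f}, reconstructed: {interpolate(t):.3f}")
```

The reconstruction is usually close but never exact, and the error depends entirely on where the gaps between samples happen to fall; which is the flavor of "filling in" being described, just in a vastly simpler form.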