Kudos to you for taking up my challenge and providing a reference. I didn't know OSX was missing memory address randomization (ASLR); that seems like a significant omission on their part. Thanks for pointing it out.
I see that the source is using disclosed flaws as a metric; this poses an interesting question: How do you accurately measure something like security or vulnerability?
Statistics based on disclosed flaws will obviously look very different depending on what procedures OS vendors have for reporting internally discovered flaws, or flaws confidentially disclosed by third parties. Linux will probably always be the loser here, as its flaws are always public knowledge due to its open source nature. I don't know how often Apple and Microsoft choose to disclose flaws, but any differences between them will obviously skew the statistics. I would suspect that Microsoft tends to keep more hidden, but that's not based on anything except my preconceptions. (It gets worse; I've read stories about application vendors deliberately ignoring security flaws as long as they aren't published anywhere. Some have even threatened white-hat hackers who reported flaws to keep their mouths shut. Patches are expensive, and some people will try to avoid having to write them... I haven't heard of the OS vendors doing this sort of thing, but you never know, of course.)
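To make the bias concrete, here's a rough sketch in Python of what counting disclosed flaws actually computes. The vendors' numbers and flaw IDs here are entirely made up; the point is that the count only reflects what each vendor chose (or was forced) to disclose:

    from collections import Counter

    # Hypothetical disclosed-flaw records: (vendor, flaw_id). In practice these
    # would come from a public advisory feed such as the CVE lists.
    disclosed = [
        ("Linux", "FLAW-001"), ("Linux", "FLAW-002"), ("Linux", "FLAW-003"),
        ("Apple", "FLAW-004"),
        ("Microsoft", "FLAW-005"), ("Microsoft", "FLAW-006"),
    ]

    # The metric the source apparently uses: disclosures per vendor.
    counts = Counter(vendor for vendor, _ in disclosed)
    for vendor, n in counts.most_common():
        print(f"{vendor}: {n} disclosed flaws")

    # The catch: for Linux this list contains essentially every flaw ever
    # found (public bug trackers, open source), while for Apple and Microsoft
    # it contains only what they decided to disclose. The counts don't
    # measure the same underlying quantity.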
Another way to determine the level of security might be to measure the time between the discovery of a flaw and the release of a fix. I'm guessing Linux would be the winner here, as the open source community tends to be extremely prompt about these things. Apple and Microsoft will be slower, as they are likely to put their fixes through lengthy verification and testing procedures before publishing them. Also, Microsoft tends to release patches only once a month ("Patch Tuesday"), if I recall correctly.
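Here's a sketch of how that metric could be computed, again with entirely hypothetical dates, assuming you somehow had discovery and patch dates for each flaw:

    from datetime import date
    from statistics import median

    # Hypothetical (discovered, patched) date pairs per vendor.
    patch_history = {
        "Linux":     [(date(2011, 3, 1),  date(2011, 3, 3)),
                      (date(2011, 4, 10), date(2011, 4, 11))],
        "Microsoft": [(date(2011, 3, 1),  date(2011, 4, 12)),  # waits for the monthly patch day
                      (date(2011, 4, 2),  date(2011, 5, 10))],
    }

    for vendor, records in patch_history.items():
        days = [(patched - discovered).days for discovered, patched in records]
        print(f"{vendor}: median {median(days)} days from discovery to fix")

Even this metric has a catch: you usually only know the discovery date for flaws that were eventually made public, so it inherits the same disclosure bias as the first one.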
A third way to measure security would be to estimate the likelihood of malware infection. Here Windows is the obvious loser, as the vast majority of malware is written for that platform. In this sense, there is security in choosing an unpopular operating system.
A fourth method might be to have the operating systems go through a certification process run by an independent third party. This will invariably be very costly, so only operating systems with solid financial backing will be able to participate. Also, it's debatable whether you actually gain any useful information from such a procedure.
My point is that there are many ways of defining system security, and many ways of measuring it regardless of definition. Most methods will be biased in one way or another. A reporter with a bias writing about OS security will of course choose the metric that suits him or her best.
A final point that I just thought of: different types of users are at risk from different types of flaws. For most normal users, a virus or trojan might be the biggest risk, and infection is often a matter of the ignorant user clicking on dangerous links, not necessarily anything related to the operating system. (A.k.a. PEBKAC, "Problem Exists Between Keyboard And Chair".) For a big company like Google, on the other hand, the biggest risks might be targeted hacking from outside the company, or dishonest employees stealing data from within. For these two types of users, security priorities will be entirely different.