jpilot: Data rot does not occur only in high-load or mission-critical scenarios. It can happen anywhere, and it isn't even uncommon. If you don't check your data for corruption, you can easily spread the corrupted data to all your backups, so backups alone won't help you there. So calculating checksums and/or using file systems like ZFS or Btrfs, as mentioned before, is not a bad idea if you've got any important data that can't easily be recreated.
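To make the checksum approach concrete, here is a minimal sketch in Python, assuming a hypothetical directory my_photos. It builds a manifest of per-file digests (SHA-256 here, though any hash catches rot) that you can regenerate later and diff against the saved copy to find files that changed without being touched:

    import hashlib
    import json
    from pathlib import Path

    def hash_file(path: Path, algo: str = "sha256") -> str:
        """Stream the file through the hash in chunks, so large files work too."""
        h = hashlib.new(algo)
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(root: Path) -> dict[str, str]:
        """Map each file's relative path to its checksum."""
        return {
            str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()
        }

    if __name__ == "__main__":
        root = Path("my_photos")  # hypothetical directory with irreplaceable data
        manifest = build_manifest(root)
        Path("manifest.json").write_text(json.dumps(manifest, indent=2))

Run it once, store manifest.json alongside (or away from) the data, and re-run before each backup: any file whose digest changed without you editing it is a candidate for silent corruption, and you can restore it from a backup instead of overwriting the good copy.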
sunshinecorp: Data rot is a very specific term that encompasses very specific types of, well, data degradation. It does not include, for example, regular cache read/write errors or bad sectors. It does include things like bits losing their magnetic orientation over time and through non-use.
Data rot isn't really as specific a term as you make it out to be, but the meaning you describe is actually the one I was referring to. When you encounter bad sectors, you usually notice that something is wrong, so you can take action right away. The far worse situation is when your data is silently corrupted, and that is not limited to enterprise applications at all.
Just to be clear, I was talking about data being silently corrupted, for whatever reason; as far as I can tell, that is exactly what this thread's question was about. Using MD5 to verify data integrity works well in that situation, and its cryptographic weaknesses don't matter at all here: they only let an attacker deliberately craft colliding inputs, while random bit flips are still caught reliably.
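As a minimal sketch of what that looks like in practice (Python; the file name and recorded digest below are made up for illustration):

    import hashlib

    def md5_of(path: str) -> str:
        # Stream in chunks so large files don't have to fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file and a digest recorded back when the file was known good.
    recorded = "9e107d9d372bb6826bd81d3542a419d6"
    if md5_of("archive.tar") != recorded:
        print("checksum mismatch: archive.tar was silently corrupted")

Any accidental corruption, even a single flipped bit, changes the digest, which is all you need for this use case.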