But I jest. My strategy works for me; I just had to find a way around the problems first. Staggered backups with varying frequencies seem to have done the trick.
You're right, the "best practices" method would be to run defrags right before full backups. But Diskeeper advertises that it's best to just leave it on 24/7 so that it can defragment files very soon after they're written in a fragmented state. That's a relatively new feature, and I like the possibility that Diskeeper could defrag a file even before its first backup after being created or updated.
OTOH, if you use a file-based backup solution fragmentation becomes irrelevant. Restore a file-based backup to a fresh drive, and it will actually be *less* fragmented than an image backup/restore of a defragmented drive.
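To make that concrete, here's a toy sketch (file names and block numbers are made up) of why a file-by-file restore comes back unfragmented while an image restore faithfully reproduces the old fragmentation:

```python
# Toy model: a file's on-disk layout is a list of block numbers, and it is
# fragmented if those blocks aren't one contiguous run.

def fragments(extents):
    """Count runs of contiguous blocks in a file's block list."""
    runs = 1
    for a, b in zip(extents, extents[1:]):
        if b != a + 1:
            runs += 1
    return runs

# A file that ended up split across two regions of the original disk.
original_layout = {"report.doc": [10, 11, 57, 58]}

# An image restore copies blocks verbatim, fragmentation and all.
image_restore = dict(original_layout)

# A file-based restore writes each file into fresh contiguous free space.
next_free = 0
file_restore = {}
for name, extents in original_layout.items():
    file_restore[name] = list(range(next_free, next_free + len(extents)))
    next_free += len(extents)

print(fragments(image_restore["report.doc"]))  # 2 fragments preserved
print(fragments(file_restore["report.doc"]))   # 1 contiguous extent
```

Obviously a real filesystem allocator is more complicated, but the principle holds: the file-based restore never saw the old layout, so it can't reproduce it.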
Since my new system is so much more powerful than anything before it, there's no performance reason to throttle Diskeeper. I like the simplicity of leaving it on!
To each his own, I guess. I do not like regularly scheduled defrags since they result in extra wear and tear on the drive(s), and also carry a small risk of data corruption (especially if you are not using ECC RAM and/or do not have a UPS).
I'm not using any databases or other journaling software on this system (yet), so for now a recovery can tolerate restoring each partition to a different point in time.
All modern file systems (NTFS, ext4, etc.) use journals internally and can write updated data to blocks other than the ones occupied by the original file. This can result in extra blocks getting backed up by a block-based incremental backup tool.
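Here's a toy illustration of that point (the "disk" and block contents are invented): a block-based incremental backs up every block that changed since the last snapshot, so an out-of-place rewrite dirties more blocks than the logical change:

```python
# Toy model: a disk is a list of block contents; an incremental backup
# copies every block whose content differs from the previous snapshot.

def changed_blocks(before, after):
    """Return the indices of blocks a block-based incremental would copy."""
    return [i for i, (b, a) in enumerate(zip(before, after)) if b != a]

disk = ["boot", "A1", "A2", "free", "free"]  # file A occupies blocks 1-2

# In-place update: one block's content changes, so one block is backed up.
in_place = list(disk)
in_place[2] = "A2-v2"

# Out-of-place update: the filesystem writes the new version elsewhere and
# frees the old block, so two blocks differ from the previous snapshot.
out_of_place = list(disk)
out_of_place[2] = "free"
out_of_place[3] = "A2-v2"

print(changed_blocks(disk, in_place))      # [2]
print(changed_blocks(disk, out_of_place))  # [2, 3]
```

Journal writes themselves add a few more dirty blocks on top of this, which the same diff would also pick up.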
The current setup will probably suffice for years to come. And when I install a database (which may be later this year because I have a project in mind), I will probably have to revisit this strategy with respect to the database files and transaction logs. Possibly put them on my office data partition and exclude them from Macrium's backups, then use the batch scheduler to have the database do its own backups for correct handling of locking and concurrency.
Yeah, databases (especially if large) won't play nice with file-based incremental backups either. In fact, for databases you may be better off with the block-based incremental, provided you disable the defrag.
Thanks guys; I learned something today!