BIF wrote:
ChronoReverse wrote:
I'm fairly certain that what's presented as "contiguous" to the system isn't even necessarily so internally...
This is also true of hard drives. It may look contiguous in your defragger display, but might not be. Or vice-versa. What the host sees is just whatever mapping the controller chooses to present; the actual layout on the platters can differ.
While this is factually true, in practice sequential LBAs will tend to be laid out sequentially on the media the vast majority of the time. The exceptions are remapped bad blocks, and certain drive-managed SMR implementations, which may use a non-shingled area of the disk as a write cache and/or logically reorder the shingled zones (but not the blocks within them).
If HDD manufacturers didn't stick to a "keep LBAs sequential as much as possible" rule of thumb, then sequential transfer rates would plummet, making their performance numbers look really bad; and nobody wants that.
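To make that concrete, here's a toy back-of-the-envelope model of why physical fragmentation punishes sequential transfers. The transfer rate and seek/rotational-latency numbers are made up for illustration, not taken from any real drive:

```python
# Toy model: every break in physical contiguity costs a seek plus
# rotational latency on a spinning disk. Numbers below are assumptions
# chosen for illustration only.

SEQ_RATE_MBPS = 150.0   # hypothetical sustained sequential transfer rate
SEEK_MS = 12.0          # hypothetical average seek + rotational latency

def read_time_ms(total_mb: float, physical_runs: int) -> float:
    """Time to read total_mb laid out as physical_runs contiguous runs."""
    transfer_ms = total_mb / SEQ_RATE_MBPS * 1000.0
    return transfer_ms + physical_runs * SEEK_MS

# 100 MB in one contiguous run: ~679 ms, dominated by the transfer itself.
# The same 100 MB scattered into 500 runs: ~6.7 s, dominated by seeks.
print(read_time_ms(100.0, 1), read_time_ms(100.0, 500))
```

With numbers anywhere in this ballpark, a drive that scattered sequential LBAs across the platters would post benchmark results an order of magnitude worse, which is the incentive the post above describes.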
curtisb wrote:
just brew it! wrote:
meerkt wrote:
There was enough space for all the files it fragmented, combined, to be moved elsewhere contiguously.
Well if that was the case it was being dumb.
Except that it doesn't have anything to do with available space on the drive. It has to do with available contiguous open blocks to move the files around to efficiently/quickly defrag them. If there isn't sufficient contiguous space at the start of the process, that's where the "it'll make it worse before it makes it better" statement comes in. I think that's the point you were trying to make in your post previous to his, though.
Yes, that was my point. But meerkt claims there was space for the fragmented files "to be moved elsewhere contiguously", i.e. he believes there was a lot of contiguous free space. If that was in fact the case, then the defragger was indeed "being dumb".
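For what it's worth, the "contiguous free space" constraint is easy to illustrate. This is a hedged sketch, not any real defragger's algorithm; the free-block bitmap representation is just an assumption for illustration:

```python
# Sketch: a file can only be defragmented in a single move if the volume's
# free space contains a contiguous run of free blocks at least as long as
# the file. Plenty of total free space is not enough.

def largest_free_run(bitmap):
    """Length of the longest run of free (True) blocks in the bitmap."""
    best = cur = 0
    for free in bitmap:
        cur = cur + 1 if free else 0
        best = max(best, cur)
    return best

def can_defrag_in_one_move(file_blocks, bitmap):
    return largest_free_run(bitmap) >= file_blocks

# 50 free blocks total, but no two of them adjacent:
bitmap = [True, False] * 50
print(can_defrag_in_one_move(4, bitmap))  # → False: must shuffle data first
```

When this check fails, a defragger has to consolidate free space first by relocating other files, which is exactly the "worse before it gets better" phase described above.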
All that aside, I agree it is silly to run a defrag and stop it partway, except in unusual circumstances (e.g. thunderstorm rolling in and you're not on a UPS).
demolition wrote:
I do use defrag utils from time to time, even on flash media, on purpose. I find it useful when I am building images for various systems: after I've finished installing everything, configuring stuff, and cleaning up what I don't want, I do a defrag before resizing the partition down, and then I make an image which I can restore to another drive even if it is a smaller one.
Granted, this is a special use case, and I wouldn't set up a regular defrag job on a flash drive.
jihadjoe wrote:
Thinking back on how NAND cells have an endurance rating, it does make sense to at least move data blocks around; otherwise all the wear ends up concentrated on the unused NAND blocks, while blocks that have been constantly occupied hardly see any wear at all.
Doesn't that just add wear to cells that would otherwise be left untouched? In theory you could end up with a drive where half the cells have only a few writes while the other half are completely worn out.
Ahh, but if the cells that DON'T have long-lived files in them are wearing out from constant rewrites, it actually makes sense to move the old files there, on the assumption that this will free up fresh(er) cells for the files that are of a more transient nature. This actually makes sense if your goal is to maximize the overall useful lifetime of the drive.
That said, as I noted in a previous post, that is really the job of the SSD's internal wear-leveling algorithms, not the OS.
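For illustration, a controller's static wear-leveling pass might look something like the following sketch. The threshold, the erase-count table, and the "cold block" set are all hypothetical; real firmware is far more involved:

```python
# Toy static wear-leveling policy (hypothetical, not any real SSD firmware):
# when the spread between the most-worn and least-worn block gets too wide,
# migrate cold (rarely rewritten) data onto the most-worn block, freeing a
# fresher block to absorb future hot writes.

WEAR_SPREAD_LIMIT = 100  # assumed threshold, chosen arbitrarily

def maybe_rebalance(erase_counts, cold_blocks):
    """Return a (cold_source, worn_destination) pair to swap, or None.

    erase_counts: dict mapping block id -> erase count so far.
    cold_blocks:  set of block ids holding long-lived, rarely-written data.
    """
    most_worn = max(erase_counts, key=erase_counts.get)
    least_worn = min(erase_counts, key=erase_counts.get)
    if erase_counts[most_worn] - erase_counts[least_worn] < WEAR_SPREAD_LIMIT:
        return None  # wear is even enough; do nothing
    # Move any cold block's data onto the most-worn block.
    for blk in sorted(cold_blocks):
        if blk != most_worn:
            return (blk, most_worn)
    return None

# Block 1 has absorbed nearly all the writes; block 0 holds cold data:
print(maybe_rebalance({0: 5, 1: 900, 2: 10}, {0}))  # → (0, 1)
```

This is exactly the "move the old files onto the worn cells" reasoning from the post above, just expressed as controller policy rather than an OS-level defrag.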
***
One final thought on defragging in general: Data "at rest" on a disk platter or in a SSD's flash memory chips is protected by some heavy-duty error detection and correction algorithms. If a bit gets flipped, it'll be corrected. If a bunch of bits get flipped or lost (overwhelming the error correction), the OS will let you know the file is corrupted. When you defrag a drive, you're staging all the data that is getting moved around through system RAM. While in RAM, that data is susceptible to corruption - from malware, OS/driver/defragger bugs, and (unless you are running ECC RAM in a system that properly supports it) "soft" DRAM errors and flaky/failing DIMMs. So frequent defragging exposes you to some additional potential sources of "bitrot"...
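As an aside, a defragger that wanted to guard against in-RAM corruption could checksum each chunk before the move and verify it after it lands. This is purely illustrative, and no real defragger is claimed to work this way; `read_chunk` and `write_chunk` are hypothetical callbacks standing in for actual disk I/O:

```python
# Illustrative sketch: hash each chunk before relocating it, then re-read
# and re-hash afterwards. Any corruption picked up while the data sat in
# RAM shows up as a digest mismatch.

import hashlib

def move_with_verify(read_chunk, write_chunk, chunk_ids):
    """Relocate chunks; return the ids whose post-move hash doesn't match."""
    corrupted = []
    for cid in chunk_ids:
        data = read_chunk(cid)
        digest = hashlib.sha256(data).hexdigest()
        write_chunk(cid, data)  # the data passes through system RAM here
        if hashlib.sha256(read_chunk(cid)).hexdigest() != digest:
            corrupted.append(cid)
    return corrupted

# Demo with a dict standing in for the drive:
store = {0: b"alpha", 1: b"beta"}
print(move_with_verify(store.__getitem__, store.__setitem__, [0, 1]))  # → []
```

Of course, this only detects corruption that happens between the two hashes; it can't catch a bit that flips in RAM before the first hash is computed, which is part of why ECC RAM is the more complete answer.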