BIF
Gold subscriber
Minister of Gerbil Affairs
Topic Author
Posts: 2433
Joined: Tue May 25, 2004 7:41 pm

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 9:52 pm

ChronoReverse wrote:
I'm fairly certain that what's presented as "contiguous" to the system isn't even necessarily so internally...
This is also true of hard drives. It may look contiguous in your defragger display, but might not be. Or vice-versa. The layout you see is just whatever the controller chooses to report about what's actually on the platters.

Wow, this is an old thread. I don't think I use PD anymore. Still a good product. I liked how it gave me ways to configure the defrags so that subsequent defrags wouldn't keep moving low-update files all over the drive. None of the others did this. Norton's Speed Disk once had a similar feature, but then Symantec dumbed it down and removed the feature. So I removed the software from my life. Low use/infrequently updated files should get defragged once and then should never really need to be moved again unless they get updated again in the future.

But I only have one physical disk drive that's not for backups. Everything else is SSDs. So why bother with defrags? When that last data hard drive finally goes, I'll just buy an SSD to replace it, then restore from a backup. No big whoop.
 
jihadjoe
Gerbil Elite
Posts: 834
Joined: Mon Dec 06, 2010 11:34 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 9:56 pm

Back to GUIs and graphics: Anyone remember Central Point Software and PC Tools? I loved the graphics of their defrag tool back in the DOS days!
 
meerkt
Graphmaster Gerbil
Posts: 1319
Joined: Sun Aug 25, 2013 2:55 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 10:04 pm

jihadjoe:
Also AV software was more interesting. Some of them, anyway.

just brew it! wrote:
I don't think I've used a 3rd party defrag tool since the Win98/NT days.

Win98's defrag was more interesting than the NT ones (actually, this may be a Win95 screenshot):
Image
 
Yan
Gold subscriber
Gerbil XP
Posts: 306
Joined: Fri Dec 21, 2012 9:37 pm
Location: Ottawa

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 10:30 pm

MS-DOS 6's Defrag was actually a licensed and simplified version of Norton Utilities' Speed Disk, and I think the screenshot you posted is similar to Speed Disk's Windows version.

Edit: yes, here's NU 8's Speed Disk for Windows 3.1.
 
curtisb
Gerbil XP
Posts: 436
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 11:43 pm

meerkt wrote:
I don't want defraggers to fragment files


No one wants them to fragment files. :D


This is a case where you need to schedule some time to let it finish. As already stated, stopping it while it's working just compounds the problem. Read the link that was pasted above and pay attention to the part about the metadata having to keep track of where all the pieces are. Past a practical limit, it will start causing errors.

Sooo...long story short, let it finish or the drive being defragmented while you're working will end up being the least of your concerns. :)
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
curtisb
Gerbil XP
Posts: 436
Joined: Tue Mar 30, 2010 11:27 pm
Location: Oklahoma

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Tue Jun 25, 2019 11:51 pm

just brew it! wrote:
meerkt wrote:
just brew it! wrote:
On a nearly full or badly fragmented disk, temporarily making things worse on the way to making them better may be unavoidable.

There was enough space for all the files it fragmented, combined, to be moved elsewhere contiguously.

Well if that was the case it was being dumb.

Except that it doesn't have anything to do with available space on the drive. It has to do with available contiguous open blocks to move the files around to efficiently/quickly defrag them. If there isn't sufficient contiguous space at the start of the process, that's where the "it'll make it worse before it makes it better" statement comes in. I think that's the point you were trying to make in your post previous to his, though. :D
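That distinction (total free space vs. contiguous free space) can be sketched in a few lines. This is just a toy model, not how NTFS or PerfectDisk actually track allocation: a file can only be defragged in a single move if some free run is at least as long as the file, no matter how much free space exists overall.

```python
def largest_free_run(block_map):
    """Return (start, length) of the longest run of free blocks.

    block_map is a sequence of booleans: True = free, False = used.
    """
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i, free in enumerate(block_map):
        if free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    return best_start, best_len

def can_move_whole(block_map, file_blocks):
    """A file can be defragged in one move only if a free run fits it."""
    return largest_free_run(block_map)[1] >= file_blocks

# Plenty of total free space (8 blocks free), but the longest run is only 3,
# so a 4-block file can't be moved anywhere contiguously:
disk = [False, True, True, True, False, True, False, True,
        True, False, True, False, True, False, False, False]
```

When that check fails for every file, the only way forward is to shuffle things (temporarily fragmenting some of them) to consolidate free space first, which is the "worse before it gets better" phase.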
ASUS MAXIMUS VIII HERO | Intel Core i7-6700 | Asus STRIX GTX 970 4GB | 4 x Corsair LPX 8GB | 2 x Crucial MX200 500GB | 2 x Hitachi Deskstar 4TB | Phanteks Eclipse | Seasonic X-850 | Dell UP2516D
 
demolition
Gerbil First Class
Posts: 123
Joined: Wed Nov 01, 2017 3:27 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 4:10 am

I do use defrag utils from time to time, even on flash media, on purpose. I find them useful when I'm building images for various systems: after I've finished installing everything, configuring stuff, and cleaning up what I don't want, I run a defrag before shrinking the partition, then make an image which I can restore to another drive even if it's a smaller one.
Granted this is a special use case and I wouldn't set up a regular defrag job on a flash drive.

jihadjoe wrote:
Thinking back on how NAND cells have an endurance rating, it does make sense to at least move data blocks around otherwise all the wear ends up concentrated on the unused NAND blocks, while blocks that have been constantly occupied will hardly see any wear at all.


That just adds wear to cells that would otherwise be left untouched? In theory you could end up with a drive where half the cells have only a few writes while the other half are 100% worn. This seems like a very theoretical case, though, since it would require never touching that first half during the entire lifespan of the drive. In a real system you would sometimes reinstall the OS, which would scramble the entire thing. Even feature updates on Win10 do something similar, since they touch a lot of the system files. Perhaps some industrial system that is never updated might see this kind of uneven wear, but in these internet-connected times such systems are hard to find, since they need to be patched regularly to be allowed online.
 
jihadjoe
Gerbil Elite
Posts: 834
Joined: Mon Dec 06, 2010 11:34 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 5:02 am

demolition wrote:
That just adds wear to cells that would otherwise be left untouched? In theory you could end up with a drive where half the cells have only a few writes while the other half are 100% worn. This seems like a very theoretical case, though, since it would require never touching that first half during the entire lifespan of the drive. In a real system you would sometimes reinstall the OS, which would scramble the entire thing. Even feature updates on Win10 do something similar, since they touch a lot of the system files. Perhaps some industrial system that is never updated might see this kind of uneven wear, but in these internet-connected times such systems are hard to find, since they need to be patched regularly to be allowed online.


How often does Windows update itself? And even when it does, the majority of the files are left untouched. I'll bet a good number of files remain the same on Windows 10 from the beta right up to the current 1903. Also, some files are a lot more volatile than others. Swap might get rewritten several thousand times a day, whereas system files and structures like the MBR/GPT remain essentially unchanged for the service life of a volume. Without any wear-levelling, an SSD would end up with dead blocks and reduced capacity while the non-volatile sections remained at essentially 100% life.

Luckily for us, the drive's internal firmware likely does wear-leveling behind the scenes as JBI mentioned. I'm just not sure how good they are at properly distributing writes.
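The effect is easy to see in a toy simulation (all the numbers here are made up, nothing like real firmware): without static wear-leveling, the blocks holding long-lived data never wear at all, while occasionally swapping hot data into the least-worn cold block spreads erase counts across the whole drive, at the cost of one extra write per swap.

```python
import random

def simulate(blocks=100, static_frac=0.25, writes=20_000,
             static_wl=False, seed=1):
    """Toy flash-wear model. 'Static' blocks hold long-lived data and
    receive no host writes; the rest absorb all of them. With static_wl
    on, the firmware periodically swaps a hot block's contents into the
    least-worn static block, spreading wear across everything.
    Returns (min_wear, max_wear) over all blocks."""
    rng = random.Random(seed)
    wear = [0] * blocks
    static = set(range(int(blocks * static_frac)))
    hot = [b for b in range(blocks) if b not in static]
    for _ in range(writes):
        b = rng.choice(hot)
        wear[b] += 1
        if static_wl and wear[b] % 50 == 0:
            cold = min(static, key=wear.__getitem__)
            static.remove(cold)
            static.add(b)                # hot block now holds the static data
            hot[hot.index(b)] = cold     # cold block joins the hot pool
            wear[cold] += 1              # relocating the cold data costs a write
    return min(wear), max(wear)
```

Without static wear-leveling the minimum wear stays at zero forever; with it, every block ends up with at least some writes.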

Oh, and I finally found a PC Tools screenshot! I totally forgot that the tool was called "Compress" and not "Defrag".

Image
 
just brew it!
Gold subscriber
Administrator
Posts: 53078
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 6:38 am

BIF wrote:
ChronoReverse wrote:
I'm fairly certain that what's presented as "contiguous" to the system isn't even necessarily so internally...

This is also true of hard drives. It may look contiguous in your defragger display, but might not be. Or vice-versa. The layout you see is just whatever the controller chooses to report about what's actually on the platters.

While this is factually true, in practice sequential LBAs will tend to be laid out sequentially on the media the vast majority of the time. Exceptions are remapped bad blocks, and certain drive-managed SMR implementations, which may use a non-shingled area of the disk as a write cache and/or logically reorder the shingled zones (but not the blocks within them).

If HDD manufacturers didn't stick to a "keep LBAs sequential as much as possible" rule of thumb, then sequential transfer rates would plummet, making their performance numbers look really bad; and nobody wants that. :wink:

curtisb wrote:
just brew it! wrote:
meerkt wrote:
There was enough space for all the files it fragmented, combined, to be moved elsewhere contiguously.

Well if that was the case it was being dumb.

Except that it doesn't have anything to do with available space on the drive. It has to do with available contiguous open blocks to move the files around to efficiently/quickly defrag them. If there isn't sufficient contiguous space at the start of the process, that's where the "it'll make it worse before it makes it better" statement comes in. I think that's the point you were trying to make in your post previous to his, though. :D

Yes, that was my point. But meerkt claims there was space for the fragmented files "to be moved elsewhere contiguously", i.e. he believes there was a lot of contiguous free space. If that was in fact the case, then the defragger was indeed "being dumb".

All that aside, I agree it is silly to run a defrag and stop it partway, except in unusual circumstances (e.g. thunderstorm rolling in and you're not on a UPS).

demolition wrote:
I do use defrag utils from time to time, even on flash media, on purpose. I find them useful when I'm building images for various systems: after I've finished installing everything, configuring stuff, and cleaning up what I don't want, I run a defrag before shrinking the partition, then make an image which I can restore to another drive even if it's a smaller one.
Granted this is a special use case and I wouldn't set up a regular defrag job on a flash drive.

jihadjoe wrote:
Thinking back on how NAND cells have an endurance rating, it does make sense to at least move data blocks around otherwise all the wear ends up concentrated on the unused NAND blocks, while blocks that have been constantly occupied will hardly see any wear at all.

That just adds wear to cells that would be otherwise left untouched? In theory you could end up with a drive where half the cells only have a few writes while the remaining 50% are 100% worn.

Ahh, but if the cells that DON'T have long-lived files in them are wearing out from constant rewrites, it actually makes sense to move the old files there, on the assumption that this will free up fresh(er) cells for the files that are of a more transient nature. This actually makes sense if your goal is to maximize the overall useful lifetime of the drive.

That said, as I noted in a previous post that is really the job of the SSD's internal wear leveling algorithms, not the OS.

***

One final thought on defragging in general: Data "at rest" on a disk platter or in a SSD's flash memory chips is protected by some heavy-duty error detection and correction algorithms. If a bit gets flipped, it'll be corrected. If a bunch of bits get flipped or lost (overwhelming the error correction), the OS will let you know the file is corrupted. When you defrag a drive, you're staging all the data that is getting moved around through system RAM. While in RAM, that data is susceptible to corruption - from malware, OS/driver/defragger bugs, and (unless you are running ECC RAM in a system that properly supports it) "soft" DRAM errors and flaky/failing DIMMs. So frequent defragging exposes you to some additional potential sources of "bitrot"...
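For what it's worth, a defragger could narrow that window by hashing each block in RAM and re-reading it after the write. Here's a sketch with hypothetical read_block/write_block callables (not any real tool's API); note that it only catches corruption introduced after the hash is taken, i.e. in the write/read-back path, not a buffer that was already flipped before hashing.

```python
import hashlib

def verified_move(read_block, write_block, src, dst):
    """Copy a block and re-read it before the old copy is released.

    read_block/write_block are stand-ins for whatever I/O layer a
    defragger uses. Raises IOError if the read-back doesn't match the
    hash taken from the in-RAM buffer."""
    data = read_block(src)
    digest = hashlib.sha256(data).digest()
    write_block(dst, data)
    if hashlib.sha256(read_block(dst)).digest() != digest:
        raise IOError(f"verification failed moving block {src} -> {dst}")
```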
Nostalgia isn't what it used to be.
 
meerkt
Graphmaster Gerbil
Posts: 1319
Joined: Sun Aug 25, 2013 2:55 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 7:01 am

just brew it! wrote:
All that aside, I agree it is silly to run a defrag and stop it partway

I want to see you wait when the program says there are 2 fragments and 47KB left, yet continues reading and writing files all over the disk with no end in sight. :)

Windows' idle-time scheduled defragging stops when there's user activity, BTW.

defragging exposes you to some additional potential sources of "bitrot"...
That's usually true of normal use of the data as well (at least for document-style files). Apps tend to rewrite whole files on save.
 
just brew it!
Gold subscriber
Administrator
Posts: 53078
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 7:51 am

meerkt wrote:
defragging exposes you to some additional potential sources of "bitrot"...

That's usually true of normal use of the data as well (at least for document-style files). Apps tend to rewrite whole files on save.

Yes, of course. But in that case you are only exposing a single document, not (potentially) a significant percentage of all the data on your drive.

And if it is a document you are actively working with, you're more likely to notice the corruption soon, instead of months (or years!) down the road, potentially long after the last good backup of that file has been overwritten (depending on your backup routine).
Nostalgia isn't what it used to be.
 
Waco
Gold subscriber
Grand Gerbil Poohbah
Posts: 3083
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 8:25 am

Which is why stable overclocks on even a pure gaming machine are important. :)
Desktop: X570 Gaming X | 3900X | 32 GB | Alphacool Eisblock Radeon VII | Heatkiller R3 | Samsung 4K 40" | 1 TB NVME + 2 TB SATA + LSI (128x8) RAID
NAS: 1950X | Designare EX | 32 GB ECC | 7x8 TB RAIDZ2 | 8x2 TB RAID10 | FreeNAS | ZFS | LSI SAS
 
meerkt
Graphmaster Gerbil
Posts: 1319
Joined: Sun Aug 25, 2013 2:55 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 8:30 am

just brew it! wrote:
only exposing a single document, not (potentially) a significant percentage of all the data on your drive.

I suppose that's true.

File checksumming might come in handy (though trickier in certain system directories).
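A minimal sketch of that idea using only Python's standard library (tools like hashdeep, or a checksumming filesystem, do this properly):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file under root to its SHA-256 hex digest."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in 1 MiB chunks so large files don't load into RAM whole.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[path] = h.hexdigest()
    return manifest

def changed_files(root, manifest):
    """Paths whose hash no longer matches (changed, corrupted, or gone)."""
    current = build_manifest(root)
    return sorted(p for p, d in manifest.items() if current.get(p) != d)
```

Build the manifest in a known-good state, store it somewhere safe, and diff later. Anything changed_files reports was modified, corrupted, or deleted in the meantime; it can't tell which, and that's exactly what makes system directories tricky, since legitimate churn there is constant.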
 
just brew it!
Gold subscriber
Administrator
Posts: 53078
Joined: Tue Aug 20, 2002 10:51 pm
Location: Somewhere, having a beer

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 9:18 am

meerkt wrote:
just brew it! wrote:
only exposing a single document, not (potentially) a significant percentage of all the data on your drive.

I suppose that's true.

File checksumming might come in handy (though trickier in certain system directories).

<cue Waco extolling the virtues of ZFS>

:wink:
Nostalgia isn't what it used to be.
 
demolition
Gerbil First Class
Posts: 123
Joined: Wed Nov 01, 2017 3:27 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 9:26 am

jihadjoe wrote:
Swap might see itself re-written several thousand times a day, whereas system files and stuff like the MBR/GPT will remain essentially unchanged for the service life of a volume. Without any wear-levelling done an SSD will end up with dead blocks and reduced capacity while the non-volatile sections of it remain at essentially 100% life.
Luckily for us, the drive's internal firmware likely does wear-leveling behind the scenes as JBI mentioned. I'm just not sure how good they are at properly distributing writes.

They should all be quite good at wear-leveling by now. Early models had issues in this regard, as they would develop 'hot spots', but that was sorted out after the first couple of SSD generations.
If we say that 25% of the drive is static data (which is a really high estimate), with proper wear-leveling that just corresponds to a 25% decrease in TBW. You would need to run an SSD very hard for a long time to get anywhere near that (I would link to the SSD endurance test that TR did, but you probably know about it).
Some (most?) drive firmwares will rewrite (and thus shuffle) data automatically even if it is never written, to avoid old data going bad, as we learnt with the 840 EVO.
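The arithmetic behind that 25% figure, as a back-of-the-envelope sketch (illustrative numbers only, not any vendor's formula):

```python
def effective_tbw(rated_tbw, static_fraction):
    """Worst case: static data is never relocated, so host writes can
    only be spread across the remaining (1 - static_fraction) of the
    cells, and endurance shrinks by the same fraction."""
    return rated_tbw * (1 - static_fraction)

# A drive rated for 300 TB written, with a quarter of it pinned by static data:
# effective_tbw(300, 0.25) == 225.0
```

With static wear-leveling the firmware can do better than this, since the pinned cells eventually get rotated back into service.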
 
Waco
Gold subscriber
Grand Gerbil Poohbah
Posts: 3083
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 9:42 am

just brew it! wrote:
<cue Waco extolling the virtues of ZFS>

:wink:

I see I've gotten my spiel embedded in others' heads. Excellent. :)

demolition wrote:
Some (most?) drive firmwares will rewrite (and thus shuffle) data automatically even if it is never written, to avoid old data going bad, as we learnt with the 840 EVO.

I think this applies to essentially every SSD sold today. Stale data is eventually rewritten to new locations to avoid the error rate rising above the ECC threshold internally.

This is also why storing SSDs cold/offline is in general a bad idea for data integrity.
Desktop: X570 Gaming X | 3900X | 32 GB | Alphacool Eisblock Radeon VII | Heatkiller R3 | Samsung 4K 40" | 1 TB NVME + 2 TB SATA + LSI (128x8) RAID
NAS: 1950X | Designare EX | 32 GB ECC | 7x8 TB RAIDZ2 | 8x2 TB RAID10 | FreeNAS | ZFS | LSI SAS
 
jihadjoe
Gerbil Elite
Posts: 834
Joined: Mon Dec 06, 2010 11:34 am

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 10:06 am

demolition wrote:
(I would link to the SSD test that TR did but you probably know about it).


In hindsight the TR SSD endurance experiment wasn't a very good test. Geoff did no verification of the written data, and since each disk was rewritten as a whole, wear-leveling algorithms weren't tested either. It was basically an accelerated test of how long the drives would have lasted as surveillance drives, but without read verification of any sort.

Of course that's not to say it wasn't interesting, or that the data was useless. The test just wasn't representative of any real-world use or workload.
 
Waco
Gold subscriber
Grand Gerbil Poohbah
Posts: 3083
Joined: Tue Jan 20, 2009 4:14 pm
Location: Los Alamos, NM

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 10:24 am

The retention of the written data beyond the stated endurance would be highly suspect, I bet.
Desktop: X570 Gaming X | 3900X | 32 GB | Alphacool Eisblock Radeon VII | Heatkiller R3 | Samsung 4K 40" | 1 TB NVME + 2 TB SATA + LSI (128x8) RAID
NAS: 1950X | Designare EX | 32 GB ECC | 7x8 TB RAIDZ2 | 8x2 TB RAID10 | FreeNAS | ZFS | LSI SAS
 
Glorious
Gold subscriber
Gerbilus Supremus
Posts: 11782
Joined: Tue Aug 27, 2002 6:35 pm

Re: I've been trying Raxco Perfect Disk for NTFS Defrags

Wed Jun 26, 2019 11:35 am

jihadjoe wrote:
In hindsight the TR SSD endurance experiment wasn't a very good test. Geoff did no verification of the written data,


After a certain milestone, I'm pretty sure he added an unpowered retention test.

Obviously the amount of time was probably limited, given practical constraints, but you can't say he didn't do it at all.


So, yes, the question of what happens if you beat a drive like that and then put it on a shelf for six months wasn't really answered, but that scenario is pretty clearly a problem anyway.

EDIT: 5 days, and he started at 300TB.

https://techreport.com/review/25681/the ... at-300tb/2
