Am I the only one in the universe who's wary of having a device with 0.64*10^14 bits and a specified read-error rate of 1 in 10^14 bits?
Sure, it's a concern. Like most things in life, it's a tradeoff. A few things to consider:
1. OP in this case stated that the use case was an "internal backup", i.e. a secondary copy of data he also holds elsewhere. Furthermore, this style of "backup" has some inherent insecurities regardless (e.g. user error, a power surge, a failing PSU, or malware taking out both the original and the backup), so in the grand scheme of things the unrecoverable read error rate probably isn't increasing the risk of catastrophic data loss by much. Any truly valuable data should also be backed up elsewhere.
2. The 1-in-10^14 figure seems to be a worst case. TBH I don't think I've ever seen rates that bad in practice on a drive which wasn't demonstrably flaky (i.e. its reallocated sector count was growing). At home, my server runs a RAID-6 array with 28TB raw capacity (before RAID overhead). It does a full scrub once a month, which checks that every block on every drive of the array is readable. That means the scrub reads about 2.2*10^14 bits every month, and I've had no unrecoverable read errors in many months of operation (so far... fingers crossed).
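To put numbers on this: here's a minimal sketch of the expected hit rate if the datasheet figure were literally true, assuming each bit is an independent trial at the spec'd rate (real UREs are per-sector and correlated, so treat this as a rough model, not a prediction; the function name is my own):

```python
import math

def p_at_least_one_ure(bits_read, ure_rate=1e-14):
    """Probability of >= 1 unrecoverable read error when reading
    `bits_read` bits, modeling each bit as an independent Bernoulli
    trial at `ure_rate`. Computed stably as 1 - (1 - p)^n."""
    return -math.expm1(bits_read * math.log1p(-ure_rate))

# Reading the 0.64e14-bit drive from the question end to end:
print(f"full drive read: {p_at_least_one_ure(0.64e14):.0%}")  # ~47%
# One monthly scrub of the 28TB array (~2.2e14 bits):
print(f"monthly scrub:   {p_at_least_one_ure(2.2e14):.0%}")   # ~89%
```

By this naive model a scrub should hit an error almost every month, which is exactly why the spec looks like a conservative worst case rather than a typical rate.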
3. The (comparatively) lousy unrecoverable error rate of current drives relative to their capacity is why any serious application of HDDs for storing critical data needs RAID-6 (or more sophisticated forms of erasure coding), and a robust backup and disaster recovery plan utilizing external or (better yet) off-site storage.
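The RAID-6 point can also be sketched numerically. Under the same independent-bit assumption (again a rough model, with a hypothetical array geometry I've made up for illustration), a single-redundancy rebuild must read every surviving drive without a single URE, whereas RAID-6 still has a second parity to correct UREs found during a one-disk rebuild:

```python
import math

def p_clean_read(bits, ure_rate=1e-14):
    """Probability of reading `bits` bits with zero UREs
    (independent-bit model, worst-case datasheet rate)."""
    return math.exp(bits * math.log1p(-ure_rate))

# Hypothetical six-drive array of 8TB disks: after one disk fails,
# a single-parity (RAID-5 style) rebuild must read the five
# survivors end to end with no errors at all.
bits_per_drive = 8e12 * 8            # 8 TB in bits
p_raid5_rebuild_ok = p_clean_read(5 * bits_per_drive)
print(f"single-parity rebuild with no URE: {p_raid5_rebuild_ok:.0%}")
# With RAID-6, a URE during that rebuild is still correctable by the
# remaining parity, so one bad sector doesn't doom the whole rebuild.
```

At the worst-case rate the single-parity rebuild succeeds only a few percent of the time, which is the usual argument for double parity on large arrays.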
But... at the end of the day, you can't beat the cost per byte stored of HDDs, unless your use case is amenable to tape (which has a high up-front hardware cost, but the lowest media cost, and arguably the best archival characteristics).