Chrispy_ wrote:I've used DFS Replication in the past and it falls afoul of several real-world scenarios unless you commit to "The Microsoft Way" of data management. FWIW, this "Microsoft Way" is a moving target of best practices and outdated documentation, using customers as paying lab rats. It still feels very much tied to the SharePoint collaboration era of multiple users publishing from multiple locations, with version control gaffer-taped onto the side as a mandatory afterthought. In the right situation it's amazing, but I've never found it to do the simple job it's supposed to do when just trying to use it on enormous datasets in real time.
I manage around 100TB of cross-site replication using scheduled Robocopy mirroring, which is bomb-proof reliable and costs nothing, but it's file-level rather than bit-level replication, so it's bandwidth-intensive: any change to a file re-sends the whole file. If you have huge datasets with regular changes to large files over low bandwidth, I would look at byte-level or block-level replication software instead, and to answer those questions we probably need to know more about what platform your storage is on. Rsync will work at the byte level (its delta-transfer algorithm sends only the changed portions of a file) if that's an option for you.
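A minimal sketch of the kind of scheduled Robocopy mirror job I mean, runnable from PowerShell, cmd, or a scheduled task (the paths, thread count, and log location are hypothetical placeholders):

    # Mirror D:\Data to the remote share. /MIR deletes remote files removed at the source,
    # /Z makes copies restartable, /MT:16 uses 16 copy threads, /R and /W limit retries,
    # /NP suppresses per-file progress output, /LOG+ appends to a log file for auditing.
    robocopy "D:\Data" "\\remote-server\Data$" /MIR /Z /MT:16 /R:2 /W:5 /NP /LOG+:"C:\Logs\sync.log"

One caveat: Robocopy exit codes below 8 all indicate success (0 = nothing needed copying, 1 = files copied, etc.), so if you alert on failures, check for exit codes of 8 or higher rather than non-zero.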
Veeam's really good, but also possibly overkill. I haven't used the Community Edition, so I don't know what its limitations are or how well it works. Hopefully someone else can comment on that.
Robocopy as in the PowerShell command? This could work, as I don't need any bit-level changes, and for the documents that do change, a few extra KB per sync won't matter. Everything is on Windows machines, and the new servers will be too. Storage is a bit mangled right now, but the goal is a RAID card and mirrored drives on a SATA or SAS interface for everything. That gives lots of data protection between the daily syncs, plus at least one machine that has daily backups.
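For reference, Robocopy is a standalone executable (robocopy.exe) that ships with Windows rather than a PowerShell cmdlet, though it runs fine from PowerShell. A minimal sketch of registering the daily sync described above using PowerShell's ScheduledTasks cmdlets (the task name, paths, and run time are hypothetical placeholders):

    # Define the robocopy invocation and a daily 2 AM trigger, then register the task
    # to run as SYSTEM so it runs regardless of who is logged on.
    $action  = New-ScheduledTaskAction -Execute 'robocopy.exe' `
               -Argument '"D:\Data" "\\remote-server\Data$" /MIR /Z /R:2 /W:5 /LOG+:"C:\Logs\sync.log"'
    $trigger = New-ScheduledTaskTrigger -Daily -At 2am
    Register-ScheduledTask -TaskName 'DailyDataSync' -Action $action -Trigger $trigger -User 'NT AUTHORITY\SYSTEM' -RunLevel Highest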