Custom 4U server stores 67TB for $7,867

How do you build a 67TB file server for a fraction of the cost of commercial solutions? The people at low-cost backup service Backblaze have pulled it off, and they’ve posted a detailed guide complete with a 3D model of their custom 4U enclosure.

A single Backblaze server costs only $7,867, and it includes 45 1.5TB Seagate hard drives, four SATA controller cards, two power supplies, 4GB of RAM, six fans, and nine "multiplier backplanes." This contraption runs Debian Linux 4 using IBM’s JFS file system and three RAID-6 volumes. Since RAID 6 uses two parity drives per volume, each 4U system should have a total of 58.5TB of redundant storage.
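The arithmetic behind that figure is straightforward: 45 drives split into three 15-drive RAID-6 volumes, each of which gives up two drives’ worth of capacity to parity. Here’s a quick back-of-the-envelope check (an illustrative Python sketch built only from the numbers above, not anything from Backblaze’s own software):

```python
# Usable capacity of one Backblaze pod, per the figures quoted above
drives_total = 45        # 1.5TB Seagate drives per 4U enclosure
drive_tb = 1.5
volumes = 3              # three RAID-6 volumes of 15 drives each
parity_per_volume = 2    # RAID 6 dedicates two drives' capacity per volume to parity

raw_tb = drives_total * drive_tb                                     # 67.5 TB raw
usable_tb = (drives_total - volumes * parity_per_volume) * drive_tb  # 58.5 TB usable

print(f"raw: {raw_tb} TB, usable after parity: {usable_tb} TB")
```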

The enclosure looks like so. (The fire-engine red paint job apparently comes as standard.)

For reference, Backblaze says a petabyte’s worth of hard drives costs $81,000. Deploying those drives in this custom solution works out to $117,000 per petabyte, while the cheapest commercial solution the firm found, Dell’s PowerVault MD1000, would add up to a whopping $826,000 per petabyte.
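Those per-petabyte figures follow directly from the per-pod price: roughly 15 pods’ worth of raw disk makes up a petabyte. A rough sanity check (again an illustrative Python sketch, using only the numbers quoted above):

```python
# Rough cost-per-petabyte comparison from the quoted figures
pod_cost = 7_867          # dollars per 4U Backblaze pod
pod_raw_tb = 45 * 1.5     # 67.5 TB of raw disk per pod

pods_per_pb = 1000 / pod_raw_tb            # ~14.8 pods per raw petabyte
backblaze_per_pb = pods_per_pb * pod_cost  # ~$116,500, i.e. the quoted ~$117,000

dell_per_pb = 826_000                      # cheapest commercial option Backblaze found
print(f"Backblaze: ~${backblaze_per_pb:,.0f}/PB vs. Dell MD1000: ${dell_per_pb:,}/PB")
```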

Why share the design? Backblaze explains, "Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us. Evolving and lowering costs is critical to our continuing success."

Comments closed
    • thebeastie
    • 11 years ago

    Ah ha!
    Finally something that could hold at least half of my porno collection 😛

    Seriously though, I think it’s pretty cool. It would have to be in a proper server room to keep cool.
    I think the red color is a bad idea. It would gain more respect from the tech snobs who enjoy wasting large amounts of their companies’ money on ultra-high-end storage solutions if it were pure black and had an LED for each HD on the front of the case.

    It’s about time someone burst the storage kings’ bubble.

    • JdL
    • 11 years ago

    Looks like Cyril reads comments. I posted that story several hours before Cyril did, over here: https://techreport.com/discussions.x/17517

    • zgirl
    • 11 years ago

    Having seen some of the more advanced SAN technologies that are out there as my company prepares for a complete upgrade, I find this rather ghetto. It’s really just a NAS, and most of those $800k solutions are SANs, where you can connect multiple servers and services to one large storage platform.

    Really, the most impressive product out there is Xiotech’s ISE, where they figured out that most drives that fail haven’t really failed. They get power cycled, flashed, and/or reformatted, and bang, a failed drive is running again. Considering that 6,000 drives returned to their skunkworks netted less than 1% that had actually physically failed, they built this tech into the system. So now you don’t even have to replace a drive.

    The main issue with the above unit is going to be vibration and probably heat. Either that, or you’re pulling a lot of air to keep the drives cool, which will draw more power.

      • Anonymous Coward
      • 11 years ago

      67TB for $8k; of course it’s not a SAN. By the looks of it, it also lacks hot-swap anything. It’s almost a generic Linux box with a lot of disks. Hell, I bet it’s softraid.

      It’s still pretty awesome.

        • drsauced
        • 11 years ago

        It’s not entirely clear what kind of storage it is, besides really huge. It does run Debian, so adding iscsitarget to that shouldn’t be much of an issue, and then you’d get a mondo honking huge SAN.

        I’d wager moneh, real moneh, that it will run FreeNAS too!

          • continuum
          • 11 years ago

          Exactly. The goal here is cheap bulk storage, and for that they’ve succeeded admirably. Performance, reliability, uptime: the goals there are clearly “adequate” rather than “five nines”…

          • zgirl
          • 11 years ago

          You obviously don’t understand what a SAN is. It’s still possible, but it would have to run iSCSI, and I can’t tell what kind of NICs are in there, or how many.

          If you’re just creating a large storage box, fine. But if you want to directly connect multiple servers for clustering, mail, etc., you’re going to need a dedicated network, along with an interface that allows for carving up LUNs for independent devices. Also, you don’t run iSCSI on top of your regular network traffic, so there is still additional infrastructure cost there.

          It’s always neat to see some ingenuity, but in a high-availability environment like mine, this won’t fly. The lack of hot-swap isn’t helping either.

            • mieses
            • 11 years ago

            You don’t need a company charging a $2,000/TB price tag to implement a Lustre filesystem.

            • drsauced
            • 11 years ago

            Well, no, it does in fact support hot plugging. This is part of the SATA technology, implemented in the Silicon Image controllers.

            I do agree that for large environments an actual network just for storage is best, but it’s not the only way to do it. My thinking is that anything with iSCSI and a network card is instant SAN.

            Edit: meant for #37.

            • dextrous
            • 11 years ago

            You do in fact need your own network for an iSCSI SAN, unless you only have one box connected to it, and then what’s the point anyway?

            This thing isn’t close to enterprise-class hardware or software, but I don’t think that’s the goal here. They’re comparing apples to oranges to make the price tag look really good.

    • ssidbroadcast
    • 11 years ago

    [quoted text missing]

      • mcnabney
      • 11 years ago

      The firmware problem was solved in 2008. The Seagate .11 1.5TB is still the best bang/buck drive out there.

      • albundy
      • 11 years ago

      Agreed. Seagate left a sour taste in my mouth with the cr@p performance of their 7200.10 drives on the 3.AAK firmware. It’s the last time I buy from them. They could have at least replaced my drives, but opted not to. What goes around comes around.

      • l33t-g4m3r
      • 11 years ago

      I have a Barracuda 1.5TB; no problems.
      The Caviar Black, however, as my OS drive, refused to kick on several times and made grinding noises.
      Sounded like the bearing was shot.
      Still running it, though.
      Previously, my RE2 died instantly, with no warning.

      Statistically, 2 of 3 WD drives have gone bad for me.
      The 2TB Black looks like a good replacement for my current Black, but I’m really, really having a hard time trusting WD at this point.

    • odizzido
    • 11 years ago

    It’s a little hard to tell from one picture, but it looks like those drives would get pretty damn hot.

      • reactorfuel
      • 11 years ago

      Rackmount server = no noise concerns. With enough screaming high-power fans and industrial A/C in the room, you can keep […]

        • cygnus1
        • 11 years ago

        Yup, it’s got a visible bank of 3 fans (they look like 120+ mm), and I’d wager another bank goes on the other side of the drive area; I figure it’s off for the sake of the picture. With the top on, that’d create a pretty nice wind tunnel, even with 45 drives in the way.

        You wouldn’t even need much AC with that much airflow. Standard office AC would probably suffice.

          • highlandr
          • 11 years ago

          If you head over to the blog, they show them rack-mounted and in action, and they have 3 matching fans on the front as well (and that’s pretty much it – very plain)

          • Buub
          • 11 years ago

          As Google’s white paper from a few years ago proved, drives run just fine hot, and in fact seem to work better (i.e. lower failure rates) if you allow them to run warmer than most people expect. So I doubt there are any cooling issues.

      • Faiakes
      • 11 years ago

      Why not have a partially or completely perforated bottom with several fans […]

        • Anonymous Coward
        • 11 years ago

        In a chassis like that, I expect vibration to be more of a problem than heat.

        • jap0nes
        • 11 years ago

        Because it is meant to be used on a rack, stacked with other servers.

          • Faiakes
          • 11 years ago

          Even better. Create a design that allows uninterrupted flow of air from the bottom of the stack all the way to the top (or would that fry the top racks?)

            • Anonymous Coward
            • 11 years ago

            How is that better than front-to-back cooling?

            • just brew it!
            • 11 years ago

            If you’re in a datacenter with the A/C ducts running under a false floor, it allows you to route the cold air directly into the racks.

            • just brew it!
            • 11 years ago

            The standard for rackmount enclosures is to blow the air from front to back. Unless you’re only installing equipment from a single vendor in the rack, that’s a problem; you’d have to get all the vendors to modify their systems to comply with the bottom to top cooling scheme. And yes, cooling for the enclosures at the top would not be as effective. But with enough airflow, that isn’t an issue.

            We actually did that on a project when I worked at Fermilab, around 20 years ago. All the equipment in the racks was custom-built, so we didn’t have to deal with enclosures that couldn’t handle vertical airflow. We had ducts from the A/C routed to holes in the floor below the racks, and vented the hot air out the top. Worked great for us, but we weren’t constrained by the limitations of commercial rackmount enclosures.

    • mieses
    • 11 years ago

    The RMC5D2 chassis is a similar design supporting 45 drives. It sells for about $4-5K:
    http://www.ciara-tech.com/pdf/RMC5D2.pdf
    It would be great if the Backblaze guys sold their chassis for ~$1.5K, or at least listed their fabricator. Both the Backblaze and RMC5D2 designs seem to copy the Sun Thumpers, which are many times more expensive.

      • continuum
      • 11 years ago

      The OEM for that chassis is actually AIC, which sells the similar RSC-5D-2Q1.

      http://www.aicipc.com/ProductDetail.aspx?ref=RSC-5D-2Q1
      Not sure who actually makes the one Backblaze designed. Vibration in a chassis like that is kinda scary.

      • Goncyn
      • 11 years ago

      In their “thanks” section at the bottom of the article, they mention Protocase:
      http://www.protocase.com/

        • continuum
        • 11 years ago

        Ah, I missed that. Thanks! Protocase even mentions they build the enclosures.

        I wonder if I could get it cheaper via AIC or Chenbro or whoever, although AIC and Chenbro would inherently cost more because their standard off-the-shelf chassis include hot-swap bays.

    • drsauced
    • 11 years ago

    Big storage is kinda a big deal. We got a quote from Dell for a 20TB EMC array, for which they wanted $2,000 per terabyte. This thing looks like a pretty good deal!

      • bdwilcox
      • 11 years ago

      But with EMC, you’re not just paying for storage.

        • drsauced
        • 11 years ago

        True, most of it is paying for service. The next quote we get should be in the $800/terabyte range with reduced service levels.

    • bdwilcox
    • 11 years ago

    My friend would fill that with porn in about 2 hours.

      • no51
      • 11 years ago

      Uncompressed 1080p?

      • 0g1
      • 11 years ago

      At 100MB/sec it would take 162.5 hours. Or 6.8 days.
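      That 162.5-hour figure follows from the pod’s 58.5TB of usable space at a flat 100MB/s; a quick illustrative check in Python (decimal units assumed):

      ```python
      # Time to fill 58.5 TB of usable space at a sustained 100 MB/s
      usable_tb = 58.5
      rate_mb_per_s = 100

      seconds = usable_tb * 1_000_000 / rate_mb_per_s  # 585,000 s
      hours = seconds / 3600                           # 162.5 hours
      days = hours / 24                                # ~6.8 days
      print(f"{hours:.1f} hours, or about {days:.1f} days")
      ```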

        • Farting Bob
        • 11 years ago

        And considering they use 5-way SATA port multipliers (five drives sharing a single SATA II cable), and one of the RAID cards is plain PCI (limiting throughput even further), these systems would be terribly slow to fill up.

          • prb123
          • 11 years ago

          It’s for Internet-based backups… how fast do you think it needs to be? What I’m surprised at is the use of non-ECC memory.

            • just brew it!
            • 11 years ago

            Yeah, I agree. As cheap as DRAM has gotten, there’s really no excuse for not using ECC. I generally buy ECC RAM even for desktop systems these days.

        • bdwilcox
        • 11 years ago

        Yeah, but you don’t know my friend…

      • dpaus
      • 11 years ago

      Hey, we have the same friend! :-)

    • pmonti80
    • 11 years ago

    Error reply

    • Usacomp2k3
    • 11 years ago

    You could use 2TB drives for 33% more space, right?

      • CampinCarl
      • 11 years ago

      I think they looked at it as a cost vs. capacity thing, and decided on 1.5TB. I think 1TB would have been better, though I don’t know if you could get up to that same capacity.

        • Usacomp2k3
        • 11 years ago

        When I got the 1.5TB HDD for my WHS, it was almost the exact same $/GB as a 1TB.

    • Buzzard44
    • 11 years ago

    What a coincidence! I was just wondering how much it would cost to store my 58.5 TB of Linux distributions.

    • indeego
    • 11 years ago

    It is red because fire will enflame 99% of those drives

      • henfactor
      • 11 years ago

      Well they’re Seagate drives to begin with…

      • _Sigma
      • 11 years ago

      Makes it go faster

        • pmonti80
        • 11 years ago

        You are right sir, as everyone should know:
        “Da red wunz go fasta!”

        • Kurotetsu
        • 11 years ago

        3 times faster!
