LSI’s new SAS/SATA cards have native PCIe 3.0 support

PCI Express 3.0 slots are all over new motherboards these days. The latest and greatest graphics cards adhere to the third-gen spec, and now LSI has a family of storage products capable of taking advantage of the interface’s additional bandwidth. The company has announced a slew of SATA/SAS adapters with PCIe 3.0 connectivity: three standard HBAs and seven MegaRAID cards. All the cards appear to use variants of the LSISAS2308 (PDF), an eight-port SATA/SAS controller with a native PCI Express x8 interface.

The new controller is backward-compatible with PCIe 2.0 slots, of course, but the gen-three standard offers double the usable bandwidth per lane. With x8 links, the LSI cards have roughly 8GB/s of bandwidth on tap in each direction. Finding eight storage devices capable of saturating that pipe is going to be difficult, even with SSDs in RAID. It’s nice to have the headroom, though.
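
For a rough sense of scale, here’s a quick back-of-the-envelope sketch of those numbers (theoretical per-lane figures; real-world throughput is a bit lower once protocol overhead is factored in):

    # Approximate per-lane bandwidth in MB/s, each direction
    per_lane = {"PCIe 1.x": 250, "PCIe 2.0": 500, "PCIe 3.0": 1000}
    lanes = 8
    for gen, mbps in per_lane.items():
        print(f"{gen} x{lanes}: ~{mbps * lanes / 1000:.0f} GB/s each direction")

    # Eight 500MB/s SATA SSDs top out around 4GB/s combined, roughly half
    # of what a PCIe 3.0 x8 link can move.
    print(f"Eight 500MB/s SSDs: ~{8 * 500 / 1000:.0f} GB/s")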

In addition to its PCIe and SATA/SAS interfaces, the controller boasts dual PowerPC 440 cores running at 800MHz. There’s 4MB of on-die RAM plus a gig of cache memory that lives outside the silicon. The chip supports RAID 0, 1, 1E, and 10, although arrays can be created only on the MegaRAID cards.

You’ll have to provide your own drives for these cards, which start at $305 and scale up to $1,525. Given LSI’s ownership of SandForce, it’s probably only a matter of time before the same controller ends up on a PCIe 3.0 solid-state drive, as well.

Comments closed
    • Bensam123
    • 7 years ago

    If the card really needed more bandwidth, they could’ve made it x16 back in the PCI-E 2.0 generation. Chances are they won’t even come close to saturating an x8 PCIe 2.0 slot.

    Sorta sad, but a lot of these cards lack the throughput to take advantage of the drives you hook up to them. I don’t know if that applies to this one, but hooking eight 500MB/s SSDs up in a RAID 0 array doesn’t come close to delivering the theoretical maximum.

    Notice how they don’t list max sequential speeds or max I/Os? Almost all of the enterprise-class RAID cards are like that. They’re black boxes whose performance you just have to take on faith.

    Curious why this card doesn’t support RAID 5 and 6 even though it supports RAID 10.

      • Krogoth
      • 7 years ago

      The I/O throughput of a large array of SSDs (300-400MB/s per drive) can saturate an x8 PCIe slot.

      These cards are meant for large NAS and SAN boxes that are driving nested RAIDs (50, 51, 100, etc.).

        • Bensam123
        • 7 years ago

        Why would the I/Os alone saturate a PCIe slot? I/Os generally aren’t bandwidth-intensive so much as processing-intensive. I’ve never heard of PCIe being a bottleneck as far as I/Os go.
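
        A rough sketch of that point, using an assumed IOPS figure rather than anything from this card’s spec sheet:

            # Even heavy random I/O moves relatively little data over the link.
            iops = 100_000      # assumed aggregate random IOPS across the array
            block_size = 4096   # bytes per 4KB random request
            print(f"~{iops * block_size / 1e9:.1f} GB/s")  # ~0.4GB/s, far below an x8 PCIe 2.0 link (~4GB/s)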

        Even if they’re part of a NAS, I don’t know why they would neglect RAID 5 or 6. Doesn’t a card need RAID 5 capability just to do RAID 50? This particular card isn’t listed for RAID 50 either.

          • bcronce
          • 7 years ago

          Something like ZFS using triple mirroring can parallelize thousands of requests across the drives. Assuming the card can forward the requests fast enough, it wouldn’t be too hard to get a NAS setup that could make use of PCIe 3.0.

          Plus, PCIe 3.0 consumes about the same amount of power as 2.0 but is twice as fast, which means the link can finish its transfers and drop back into a low-power state sooner.

            • Bensam123
            • 7 years ago

            Yeah, though I was originally questioning whether the cards are capable of doing that. I was lamenting that cards never list any of the vital stats like that, and that they generally can’t hit those numbers anyway. I haven’t heard of a RAID card that has run into limitations with the PCI-E bus.

      • Cova
      • 7 years ago

      I could saturate the PCIe link easily – it would only take 1GB/s of bandwidth per port on the card. I could route each port externally to a 5U JBOD shelf with 70 HDDs. With 350 drives attached, even green drives could deliver the bandwidth. Or daisy-chain more SAS JBODs off each port – you can have 65,000 devices in a SAS domain, though have fun managing that.
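
      Taking those figures at face value, a quick sketch of the per-drive load (the drive count and link rate come from the comment above, not from any measurement):

          # 8GB/s spread across 350 drives is a very modest per-drive rate.
          link_gb_per_s = 8.0   # PCIe 3.0 x8, GB/s each direction
          drives = 350
          print(f"~{link_gb_per_s * 1000 / drives:.0f} MB/s per drive")  # ~23MB/s, easy even for green HDDs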

        • Bensam123
        • 7 years ago

        That’s assuming the card could keep up with it. Part of my post was about how cards like this never list their specifications in that regard, nor are they capable of actually pushing the limits of PCIe.

          • Cova
          • 7 years ago

          That only applies to the RAID versions of the cards. The bare HBA versions will have no issues saturating either the SAS2 links or the PCIe link, whichever fills up first. They probably won’t fill an x8 PCIe 3.0 link, but they need more than an x8 2.0 link can provide. Software RAID or a smarter filesystem (ZFS, Btrfs, ReFS, etc.) will take care of balancing the load.

      • Bauxite
      • 7 years ago

      x16 slots are rarer on server boards and a lot of configs would use multiple cards anyways.

      Regardless, moving to 3.0 has merit:
      2 SAS2 ports = 8 lanes @ 6Gbit/s = potential for 6GB/s raw.

      With just a single expander (which they’re also well known for), you can attach 24 drives to those two ports. Reaching that potential is feasible then – just open your wallet wide.
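
      A rough sketch of the math above (raw line rate; 8b/10b encoding knocks roughly 20% off in practice):

          lanes = 8            # two 4-lane SAS2 ports
          gbit_per_lane = 6    # raw SAS2 line rate per lane
          raw_gb = lanes * gbit_per_lane / 8
          print(f"~{raw_gb:.0f} GB/s raw, ~{raw_gb * 0.8:.1f} GB/s after 8b/10b encoding")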

      I also find it odd that you think LSI is not about enterprise, because their stuff is probably the most common out there. From what I see, the big vendors more often than not use their chips, though you wouldn’t always know it (many Dell/HP cards, for example).

        • Bensam123
        • 7 years ago

        That’s what Cova said as well. That’s assuming this card is capable of actually handling that kind of throughput. All of that is theoretical and not even measured synthetically. That’s part of what I was talking about.

    • Deanjo
    • 7 years ago

    “PCI Express 3.0 slots are all over new *Intel* motherboards these days.” Fixed that for you.

      • Goty
      • 7 years ago

      Well, if we want to be technical, the *slots* are the same across all mobos. 😉

    • willmore
    • 7 years ago

    Can’t multiplexers be connected to these? That would explain the apparent excess of bandwidth.

      • Waco
      • 7 years ago

      Yup. SAS drives in particular…

      • Bensam123
      • 7 years ago

      Yup… even if it has the bandwidth that doesn’t mean the card has the capacity to drive all the drives attached to it at any reasonable speed.

    • dashbarron
    • 7 years ago

    So…is the x8 the same speed across PCIe 1, 2, 3 or is x8 a different/faster denomination on PCIe 3?

      • BobbinThreadbare
      • 7 years ago

      x8 is the number of channels available; I think each PCIe version has doubled the bandwidth to each channel. So it will operate in “x8 mode” on any version, but gets more bandwidth from PCIe 3.

      • willmore
      • 7 years ago

      8x is the number of PCI-E ‘lanes’. The smallest slot is a ‘1x’ as it has one lane. The largest are the 16x which you typically see for graphics cards. 4x and 8x are more common in server hardware.

      The speed of each individual lane has doubled in each major revision of the PCI-E standard. PCI-E 1 had 250MB/s (each direction) per lane. PCI-E 2 had 500MB/s and now PCI-E 3 has 1GB/s. So, an 8x PCI-E 3 device would have 8GB/s of bandwidth each direction.

        • Bensam123
        • 7 years ago

        Largest is 32x, which you almost never see.

          • willmore
          • 7 years ago

          I keep seeing references to that, but I’ve never been able to find a picture of such a slot, or any hardware that even suggests it might support such a thing. I wonder if they’re even still in the spec. Back when PCI-E was announced, I remember a 12x and 32x size, but I don’t think I’ve seen them since. I know that 12x is no longer mentioned.

          So, yeah, pedantically, 32x is the largest size. But if you actually want to find hardware in existence, 16x is as big as they get.
