Crucial ships DDR4 for servers, desktop modules coming in August

Crucial has made several DDR4-related announcements to coincide with the Computex trade show in Taiwan. The first is that the company is now shipping DDR4 server memory designed for Haswell-EP-based Xeons. The next-gen modules run at 2133 MT/s, yielding 17 GB/s of bandwidth. They’re available in 4, 8, and 16GB capacities, and Crucial plans to scale up to 32, 64, and 128GB. Those higher-capacity modules won’t arrive until Crucial starts using 1GB DDR4 dies, though; the current modules are built with 512MB chips. Here’s what they look like:

In August, Crucial will start selling DDR4 meant for Haswell-E and its X99 sidekick. In addition to plain-Jane 2133-MT/s modules, Ballistix Elite-branded units will be available at 2666 and 3000 MT/s. The fastest modules boast 24 GB/s of bandwidth, and Crucial expects to hit even higher speeds "as the technology matures."
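These bandwidth figures fall straight out of the transfer rate: a standard DIMM has a 64-bit (8-byte) data bus, so peak bandwidth is simply MT/s × 8 bytes. A quick sketch (the helper function is our own, for illustration):

```python
# Peak bandwidth of a single 64-bit DIMM: transfers/s x 8 bytes per transfer.
def ddr_bandwidth_gbps(mt_per_s, bus_bytes=8):
    """Return peak bandwidth in GB/s (decimal) for one DIMM."""
    return mt_per_s * bus_bytes / 1000

print(ddr_bandwidth_gbps(2133))  # ~17 GB/s, the server modules above
print(ddr_bandwidth_gbps(3000))  # 24 GB/s, the fastest Ballistix parts
```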

The Ballistix UDIMMs come with a reasonably sized heat spreader that may or may not be necessary to keep the memory cool. You can test for yourself, because there’s an onboard temperature sensor that provides real-time monitoring via the accompanying utility software. The modules also support new XMP 2.0 profiles for easy configuration.

Crucial’s vanilla DDR4 for the desktop will come in a quad-channel, 32GB kit. The Ballistix stuff will be available in 4GB and 8GB sticks sold separately and in multi-channel kits. The modules all run at 1.2V and use memory built on a 25-nm process. Crucial claims they deliver "up to 40 percent more power efficiency" than DDR3, thanks both to the lower voltage and to the shrink from 30-nm to 25-nm fabrication technology.

Intel still hasn’t announced a precise launch date for Haswell-E, but if compatible DDR4 memory will be selling in August, odds are X99 motherboards won’t be far behind.

Comments closed
    • td1353l
    • 5 years ago

    Wonder what this means for APU’s…

    • Convert
    • 5 years ago

    Is it just the pictures or is the bottom (contact row) of the RAM not straight?

      • ImSpartacus
      • 5 years ago

      That’s intentional. Apparently it makes it harder to ruin a DIMM by pushing it in too hard or something like that.

    • Milo Burke
    • 5 years ago

    Cores and memory channels, impressive throughput with 512 Mb die size DIMMs and platform support. DDR3 to DDR6, with a crosstalk advantage, 64 Gb to 2 GB per utilized channel slot on board the top side. Within ranges, X99 and Skypond to reach withunder the bandwidth cap towards the header size and minimize overhead.

    When the process shrink passes and the lithography bottoms out, fab expectations consolidate.

    Cancel the variations and alternate the point-to-point distribution hub, and there is performance. Multichannel is the use, and populated DIMMs are the missing piece. Generational improvement within expected doubled to be tripled underneath the folded half of the median.

    Controller throughput.

    • brucethemoose
    • 5 years ago

    Since latency is going up, won’t DDR4 be SLOWER than DDR3 2133+ at launch?

    Why is everyone getting so excited?

      • Airmantharp
      • 5 years ago

      You have to balance latency (as measured in clock cycles) against actual clockspeed to get actual latency (as measured in nanoseconds). Here, on average, you’ll see actual latency staying largely the same between DDR, DDR2, DDR3, and now DDR4, while actual bandwidth continues to climb, despite higher ‘CAS’ and other latency-describing attributes growing generation on generation.
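The conversion Airmantharp describes is simple arithmetic: the I/O clock runs at half the transfer rate, so one cycle lasts 2000 / (MT/s) nanoseconds, and absolute CAS latency is CL times that. A minimal sketch, with illustrative (not article-sourced) timings:

```python
# Absolute CAS latency in nanoseconds from cycle count and transfer rate.
# DDR transfers twice per clock, so clock (MHz) = MT/s / 2.
def cas_latency_ns(cl_cycles, mt_per_s):
    return cl_cycles * 2000 / mt_per_s

# Hypothetical but typical-looking timings:
print(cas_latency_ns(9, 1600))   # DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(15, 2133))  # DDR4-2133 CL15 -> ~14.07 ns
```

The CAS number grows generation over generation, but the cycle time shrinks, so absolute latency moves far less than the raw CL figure suggests.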

    • slaimus
    • 5 years ago

    I remember hearing years ago that DDR4 requires all slots to be populated at all times to run at maximum bandwidth. Is this true? Does upgrading memory mean replacing all sticks at once?

      • derFunkenstein
      • 5 years ago

      All channels have to be populated for maximum bandwidth, just like with DDR3.

      • Ryhadar
      • 5 years ago

      I think what you might be thinking of is that with the DDR4 standard, only 1 DIMM can occupy a single channel, versus DDR3, where up to 2 DIMMs can occupy a single channel.

      This could mean that quad channel becomes the new norm. Or it could mean that it will usher in the era of most motherboards coming with only 2 DIMM slots to reduce complexity. =P

      • jensend
      • 5 years ago

      From [url=http://www.eetimes.com/document.asp?doc_id=1279949]an EE Times Q&A in 2012[/url]:

      [quote]K.L.: The DDR4 standard goes point-to-point instead of in parallel. Does the architecture cause you to sacrifice speed or scalability going forward?

      T.F.: We tried to make it as backward compatible as possible but also to give it some legs going forward. It does have optimizations for point-to-point but it’s also designed to be able to do multiple modules in a channel at higher data rates than DDR3. The advantage of going point to point is that it makes the memory channel fairly simple. You can save a lot of space and still get good performance. It’s also less loaded so you can run at a faster data rate and you don’t have as many reflections as opposed to having sockets that may be populated or empty in a multi-channel or multi-drop system that uses modules.[/quote]

      Not sure what may have changed since that interview, but if that’s still current, the rumors (from 2010/2011) that it would only allow point-to-point might be inaccurate. Even if multiple DIMMs per channel is allowed by the spec, though, many chipsets may go for the benefits of point-to-point. A little more digging turns up various sources saying consumer chipsets are likely to be point-to-point.

        • the
        • 5 years ago

        It is still correct that it is point-to-point. The difference is that registered and load-reduced memory contain repeater functionality to drive additional slots. This makes DDR4 similar to FB-DIMMs: DIMMs further away on a memory channel will have higher access latencies, since the signal has to travel through the closer DIMMs first.

        TI has a [url=http://www.ti.com/lit/ml/slyt534/slyt534.pdf]product brochure (PDF)[/url] that explains this a bit.

      • the
      • 5 years ago

      That stems from the one DIMM per slot limitation of unbuffered DDR4. So for consumer systems, the number of slots you have equals the number of channels you have. Each additional channel you fill up, the more bandwidth you’ll have and generally higher performance.
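The scaling described above is linear in populated channels, since each channel contributes its own 64-bit bus. A rough sketch (the function name and the DDR4-2133 speed grade are illustrative assumptions):

```python
# Aggregate peak bandwidth: one 8-byte bus per populated channel.
def platform_bandwidth_gbps(channels, mt_per_s, bus_bytes=8):
    return channels * mt_per_s * bus_bytes / 1000

for n in range(1, 5):  # 1 to 4 populated channels of DDR4-2133
    print(n, platform_bandwidth_gbps(n, 2133))
```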

    • ozzuneoj
    • 5 years ago

    So, I know that DDR3 and GDDR3 aren’t directly related, and that there were a few “GDDR4” cards around a few years back, but we still see a ton of low-end cards with GDDR3 and DDR3, and I’m wondering if mass production of DDR4 memory will have any effect on graphics cards. If you look at charts comparing memory bandwidth, low-end cards have stagnated a bit since it’s very expensive to move to GDDR5, and there’s only so far GDDR3 on 64-bit\128-bit interfaces can go.

    Anyone have any input\theories on this?

      • Sargent Duck
      • 5 years ago

      No, DDR and GDDR are two very separate things.

      I found this post [URL]http://www.techspot.com/community/topics/whats-the-difference-between-ddr3-memory-and-gddr5-memory.186408/[/URL] (second reply in, by user “dividebyzero”) to really lay out the differences.

        • ozzuneoj
        • 5 years ago

        Yes, but there are many low cost cards that advertise that they use DDR3, rather than GDDR3. I’ve searched quite a bit and I can’t find any definitive answer as to whether or not this is just an inaccuracy in advertising. I did find a discussion somewhere where someone actually emailed a graphics card company about the difference between gddr3 and ddr3 and they stated that DDR3 was slower but less expensive… indicating that it is used on graphics cards in some cases. But, honestly, who’s to say how much that random tech support person really knew about the topic.

        If DDR3 is ever used on graphics cards, then I’d think that DDR4 eventually replacing DDR3 as the lowest cost solution available would mean that the lowest of the low end will at some point see a large improvement in bandwidth.

          • UnfriendlyFire
          • 5 years ago

          But the mid/high-end GPUs would get stacked memory, which could potentially lead to a bandwidth gap greater than the one between GDDR5 and DDR3.

      • jensend
      • 5 years ago

      I’m pretty sure the last new chips to use GDDR3 were introduced in 2010. You see plenty of low end cards with DDR3, some mislabeled as GDDR3, but I imagine it’s pretty rare to actually have GDDR3 these days.

      GDDR3 is tremendously outdated- it was based on DDR2 and was introduced in 2004- and once DDR3 made it past the early adoption phase it was cheaper and better. (Before that, there were a few cards that used plain DDR2 because it was cheap, but the bandwidth was really abysmal and manufacturers couldn’t get away with too much of that.)

      GDDR4 bears no relation to DDR4- GDDR4 and GDDR5 both use technologies developed for DDR3. nV and several memory manufacturers skipped GDDR4, and the industry rapidly moved to GDDR5 because it could provide higher bandwidth with relatively narrow buses.

      DDR4 will at some point be used in graphics cards. The question is when and at what clocks and bus widths. You may not see bandwidth bottlenecks on low-end cards vanish in the near term.

        • ozzuneoj
        • 5 years ago

        Thank you for the concise reply! I had read many things about GDDR3 being related to DDR2, which did seem like an extremely antiquated technology for modern graphics cards, even low end (sub $100) ones. What you said makes a lot more sense, because most are now advertising “DDR3” rather than “GDDR3”… take a look at EVGA’s website for example.

        It’s interesting to know that DDR3 is technically better than GDDR3 though. This certainly flies in the face of all the casual forum\comment discussions I see online, but it makes far more sense given the differences in technology behind GDDR3 and DDR3. I remember having GDDR3 based cards back in 2004, like you said. It’s no surprise that modern DDR3 is better.

        It will be interesting to see DDR4 start to appear on cheaper cards at some point though. Memory bandwidth has always been a significant problem with low end cards. Remember SDR vs DDR? Or 64bit vs 128bit DDR? What a mess…

    • albundy
    • 5 years ago

    cant wait for the oversupply! its how i ended up maxing my systems capacity with DDR3-1600.

      • Airmantharp
      • 5 years ago

      Same; each 16GB in my desktop and portable workstation (DTR ‘laptop’).

      But note that our current ‘undersupply’ problem is related to production being diverted to mobile applications that present ever increasing demand; we may not see the same situation again with DDR4.

    • the
    • 5 years ago

    My issue with DDR4 is that unbuffered memory only permits one DIMM per channel. Higher capacity DIMMs will offset the reduction in memory slots for the most part.

    Really they should have just standardized registered or load reduced memory for DDR4. Those do enable more than one slot per channel. Then users would be able to have a tangible increase in maximum memory capacity.

      • Chrispy_
      • 5 years ago

      With 32GB, 64GB, and 128GB modules on the way, a dual-channel setup will net you from 64 to 256GB of RAM.

      It’ll take away the “buy half now, buy half later” approach but in my experience that never worked out anyway. Within a couple of years 1066/1333MHz was practically obsoleted and 1600/1866 became the standard. You wouldn’t have wanted to mix two sticks of 1066 with two sticks of 1866….

        • Aliasundercover
        • 5 years ago

        Compute toys are always on the way.

        • the
        • 5 years ago

        The initial wave of unbuffered DDR4 doesn’t appear to be launching beyond 32 GB. The larger 64 GB and 128 GB modules are likely registered or load-reduced modules aimed at servers. Another benefit of registered/load-reduced memory is that more ranks can be put onto a DIMM.

    • Aliasundercover
    • 5 years ago

    It seems to me the point of the Sandy Bridge E and Ivy Bridge E is the 64GB memory size. The only other way to get big memory is pay and pay and pay for server market segment margins. OK, yes, you do get 6 cores and bandwidth but you give up a generation in processor core.

    Now with DDR4 memory size will fall back to 32G. Presumably the desktop variants will stay at 2 channels and fall back to 16G from their present 32. New chip sizes will be along Real Soon now at We Saw You Coming prices.

    What an immense benefit.

      • willmore
      • 5 years ago

      Am I alone in not understanding WTH you’re saying?

        • derFunkenstein
        • 5 years ago

        I think he’s equating/confusing DDR4 32GB DIMMs with system memory capacity.

        And the mere fact I have to say “I think” means that no, you’re not alone.

      • ImSpartacus
      • 5 years ago

      I think you’re underestimating the usefulness of extra cores.

      Sure, memory bandwidth and quantity are high points for the consumer Xeon parts, but it’s silly to think that they are the only benefits.

      There’s also the extra PCIe lanes. A lot of folks enjoy having multiple GPUs.

      So I think there are quite a few more benefits to getting a Haswell E build than just the max memory.

        • Airmantharp
        • 5 years ago

        For gaming, the only real benefit will be the extra PCIe lanes, and that’s only if you have more than two discrete GPUs.

        For my content creation work 16GB is really enough, though I can see situations where it wouldn’t be.

        And for the cores? Everything I do is thoroughly multithreaded, so I expect a significant boost there, even for gaming, if clock speeds are kept appropriately high.

      • kruky
      • 5 years ago

      By my calculation, 8 times 128GB is 1TB, not 32GB. Since they’re going to make 128GB parts, my guess is you will be able to use them 😉

        • Aliasundercover
        • 5 years ago

        8 times? You only get one dimm per channel on DDR4.

        The 128GB dimm is a maybe someday server part which will not be made until memory chips with double native density are available.

        In the part of the article discussing Haswell E the memory size is 4 channels by 8GB per channel. Half what you can get right now on a Sandy or Ivy E.

          • kruky
          • 5 years ago

          How come the MSI mobo has 8 memory slots?
          [url]http://www.techpowerup.com/img/14-06-03/56a.jpg[/url]

          Edit: you do know that number of channels != number of RAM slots?

      • Milo Burke
      • 5 years ago

      Yep, I’m confused.

      • Bauxite
      • 5 years ago

      The extra lanes are pretty nice for those that can use them, and I don’t mean GPUs. 64GB is not the limit either, that is just the cutoff for unbuffered dimms.

      Network and drive controllers love the bandwidth, the 16 “good” lane limit on the socket 115x cpus is getting a bit long in the tooth. Just one nice raid card and one fast NIC and it is full, that is while using the onboard gpu as well.

      A lot of people forget that the -E covers some of the Xeon models 😉

    • UnfriendlyFire
    • 5 years ago

    Let the DDR4 vs DDR3 for desktops/laptops debate begin.

    Although it should end a lot faster compared to the DDR2 vs DDR3 debate since I doubt AMD and Intel are going to mess with the DDR3/DDR4 compatibility stuff.

      • ImSpartacus
      • 5 years ago

      DDR4 generally has a power advantage over DDR3, right?

      That alone will push adoption in battery-powered products.

      On the desktop, adoption might be a little tougher. I hope we don’t have any of those horrible motherboards that had 2 DDR2 slots and 2 DDR3 slots. I mean, what a money grab!

        • UnfriendlyFire
        • 5 years ago

        I highly suspect DDR4 is going to be like DDR3 at launch: horribly expensive.

        On mobile devices… Sure, it might help, but that same money could be used for a slightly larger battery if DDR4 is quite expensive.

        On desktops… Same performance as 2133 MHz DDR3, slightly lower power consumption, a lot higher price. Not worth it, at least until the price comes down.

        If AMD’s DDR4 controller is at least as efficient as their DDR3 controller (preferably better), and we start to see 2667-2800 MHz RAM at reasonable prices, then that would help their APUs.

        Though both Nvidia’s and AMD’s dedicated GPUs are going to get stacked memory in 2016, which would expand the bandwidth gap.

          • DarkMikaru
          • 5 years ago

          You hit the nail on the head man. Every generation SDRAM to DDR, DDR to DDR2, DDR2 to DDR3…. benchmark after benchmark showed literally 1 to 2% performance increase over the previous generation. Which is well, NOTHING in general computing terms. Seriously… only reason I’m looking forward to DDR4 is so that DDR3’s price might start to go down in time.

          I guess the only thing I’d actually care about is how much difference over all system power draw would drop vs a DDR3 system. Otherwise, I’m glad to see the progression but will continue to do as I’ve always done when new memory rolls out. Wait for the price to drop. Cause no way in hell will I be paying $200 for an 8GB kit.

            • UnfriendlyFire
            • 5 years ago

            Actually, DDR2’s price went up as DDR3’s price fell. That was because RAM manufacturers were shifting production from DDR2 to DDR3 faster than the demand for DDR2 was falling.

            • Bauxite
            • 5 years ago

            I remember pretty much every RAM transition as some of the worst possible times to buy [b]any[/b] RAM, going back to when I was spending my own money on FP/EDO SIMMs. The old stuff was not being replenished, so retailers were trying to milk it for all they could; the new stuff was in short supply and had extra "new hotness" gouging applied.

            • DarkMikaru
            • 5 years ago

            As soon as I said that I knew someone would correct me on that! 🙂 You are absolutely correct. At first, the price maintains or even climbs, but over time, as DDR4 gets mass produced, DDR3 prices will drop as DDR4 saturates the market.

            Reminds me of back in the day when a friend of mine built a P4 Rambus-based system and paid 200 dollars for 512MB! My first system was Athlon XP-based with 512MB of DDR for 60 bucks. And my machine benched faster than his. Loved it. 🙂 Anyway, I’m happy to see DDR4 make its appearance; I’m just not going to be an early adopter. Not a fan of overpaying.

        • JustAnEngineer
        • 5 years ago

        The power savings vs. [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007611%20600006050%20600000285%20600000253%20600000279%20600083963%20600006072%20600006074%20600006130%20600006127%20600006069%20600006067%20600006158%20600006157&IsNodeId=1&bop=And&Order=PRICE&PageSize=100]1.35 V DDR3[/url] aren’t going to be as great as compared to most of the 1.5+ V stuff.
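As a back-of-the-envelope check on the voltage argument: dynamic power scales roughly with the square of supply voltage, all else being equal. This is a rule-of-thumb sketch that ignores frequency, termination, and process differences, not a measured figure:

```python
# Relative DRAM power vs. a 1.5 V DDR3 baseline, using the P ~ V^2 rule of thumb.
def relative_power(volts, v_ref=1.5):
    return (volts / v_ref) ** 2

print(relative_power(1.35))  # low-voltage DDR3: ~0.81x the baseline
print(relative_power(1.2))   # DDR4 at 1.2 V:    ~0.64x the baseline
```

On this crude math, moving from 1.35 V DDR3 to 1.2 V DDR4 saves much less than moving from the 1.5 V parts.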

        • DPete27
        • 5 years ago

        Warning: 2010 article, and TomsHardware, but you should check out [url=http://www.tomshardware.com/reviews/lovo-ddr3-power,2650-9.html]this article[/url].

        • Ryhadar
        • 5 years ago

        I own a couple of those DDR2/DDR3 combo motherboards and they’ve been great. Had DDR3 in one while I was using it as my main machine, then relegated it to HTPC duty with some DDR2 that I had lying around. I’ve got a pending lanbox build that incorporates DDR2/DDR3 combo slots too, just because it won’t need the extra memory and, again, I’ve got cheap DDR2 just lying around.

        I’ve heard some horror stories about them versus motherboards that stick with only one memory type, but for my purposes they’ve been great.

      • jihadjoe
      • 5 years ago

      The double-RAM-layout boards were only possible because the MC was on the chipset, and third parties provided chipsets that supported both. I doubt Intel or AMD are going to waste die space putting both DDR3 and DDR4 MCs on upcoming CPUs, so the transition (and hopefully volume-driven price drops) should be much faster this time around.

        • ImSpartacus
        • 5 years ago

        That’s an excellent point.

        Haswell-E literally only supports DDR4, right?

          • Klimax
          • 5 years ago

          Correct. (To my knowledge)

        • Ryhadar
        • 5 years ago

        This isn’t entirely accurate. AMD had DDR2 and DDR3 memory support on their CPUs (with a built in memory controller) since the Athlon II days. In fact, most of the Phenom II line supports DDR2 and DDR3, though some didn’t (e.g. Phenom II X4 940). They only recently removed the DDR2 parts of the MC with Bulldozer and possibly Llano as well, although I can’t remember off the top of my head.

        On the Intel side, the P45 northbridge and its derivatives supported both DDR2 and DDR3. So I’m not sure where you’re getting this third-party chipset idea from. Perhaps from sometime earlier than that?

        In any case, I agree with you. I doubt AMD or Intel will support a memory controller with both DDR3 and DDR4 support on the CPU. There are more important things to use that real estate for. AMD in particular is due for some new CPUs and chipsets whenever they release updated server/consumer parts, and by that time DDR4 should be pretty mainstream, I think.

        [i]edited for grammar mistakes[/i]

          • jihadjoe
          • 5 years ago

          You’re totally right. I don’t even remember what I was thinking so I’m just gonna blame that on a lack of coffee.

    • Airmantharp
    • 5 years ago

    I’m actually looking forward to X99. X79 is just so antiquated, and I don’t like the idea of ‘upgrading’ to an Ivy-based system from my current OC’d Sandy setup; aside from the extra cores, there’s just not enough of a jump to make it worth it, and it might not even be a jump at all for gaming.

    For everything else, which for me is photography and videography, the real extra ‘umph’ will actually make a tangible difference. I’m hoping to get an eight-core sixteen-thread 32GB combo at >5.0GHz for under a grand by the holidays :).

      • ImSpartacus
      • 5 years ago

      I think Haswell-E will be fucking amazing.

      First off, X99. It’s looking like it’ll be pretty awesome and it’s likely that we’ll have it for two years, so it’s modestly “future-proofed”.

      Next, dem core counts. SnB-E & IvB-E were silly because they provided 4-core SKUs that couldn’t even outperform comparably priced #770K products. Now with the 2 extra cores at (hopefully) modest prices, we’ll finally see some performance out of these bad boys.

      Finally DDR4. Uhhh, hell yeah.

      Overall, Haswell-E feels like what the *-E CPUs should’ve always been.

        • jihadjoe
        • 5 years ago

        Hopefully the 5820k does come in at current 4820k prices and we can finally get 6-cores for about $300.

        Otherwise what I’ve read says that only the 5960X will be 8-core, so there’s really not much improvement at the x930k $600 price point. At least there’s soon going to be a proper reason to shell out for the top-end extreme edition CPU.

          • ImSpartacus
          • 5 years ago

          I think 6 cores at ~$300 would be tremendous.

          It would make me think twice before getting the $300 #770K CPU.

            • Deanjo
            • 5 years ago

            [quote]I think 6 cores at ~$300 would be tremendous.[/quote] Thuban was. 😀

            • Airmantharp
            • 5 years ago

            Limited overclocking, archaic platform support, and Core 2-era IPC. Nothing bad about it, but it wasn’t the bee’s knees when it was released either.

            • the
            • 5 years ago

            Now I gotta check how much my i7 3930k cost me. I was originally shopping for an i7 3820 as the quad core reportedly over clocked higher, was cheaper, and my expected workload benefited more from single threaded performance than parallelism. Then Micro Center was sold out of i7 3820’s so I got a i7 3930k at nearly the same price.

            • Airmantharp
            • 5 years ago

            If they can do six cores at ~US$300, then they can do eight cores at ~US$500- and that I’d get in line for.

      • HisDivineOrder
      • 5 years ago

      Given the overall speed at which CPU’s are aging these days, I expect an octa-core based around Haswell with hyperthreading bringing it up to 16 threads is going to make for a mighty impressive CPU and a decent value argument.

      Rather than constantly upgrading the mainstream CPU, you can have more and pay more on the front end for the pleasure. The system’ll last a lot longer than the mainstream because as much as we’re hitting the limits on video RAM right now with ports from “next gen” consoles, gaming PC’s are going to start hitting GTA IV-like # of core limits eventually as developers get lazy when porting from the next gen, high core-count CPU’s. When that happens, we’re all going to wish we had more than quad-cores (though not necessarily more than hexacores given the fact 2-3 cores are reserved for system use in the consoles).

      But a hexacore or an octacore? Especially a system built around DDR4 is going to have a nice, long shelf life. This is of course ignoring GPU upgrades, which would happen either way. DX12 and Adaptive Sync as a standard going forward in DP are going to be the big motivators there, which will probably facilitate upgrades to the system.

      CPU-wise, I doubt Intel will release greater than quad-core in the mainstream next year, either. With Intel increasingly delaying each new architecture by a few months, we’re pushing out the delays between new architectures and Broadwell looks to offer nothing to the desktop CPU side of the equation and more to the per watt part and the iGPU part, neither of which really benefit the PC gamer with a discrete card.

      That means if Broadwell mostly comes out toward the end of this year, then Haswell-E stands a good shot of being king of the Haswell-esque architecture pile for the rest of this year, plus most of next year when Intel inevitably delays the launch of Skylake in favor of a Broadwell refresh or something like that. That’d practically be two years of Broadwell on top of the Haswell refresh they just committed to.

      So you can either buy a Haswell/Broadwell system today and plan to upgrade in the future IF Skylake is deemed worthy of an increase in cores (doubtful) or wait a few years more to get them when Intel finally wakes up and adds the cores they should have last year. Then you’ll be upgrading much sooner than you otherwise need to.

      Or you can just buy a hexacore/octacore this year with more memory bandwidth, more PCIe slots, and lose out on only Quicksync. You pay more now, but divided over the sheer number of years the system should last you, you’ll probably pay less. This value argument would be MUCH better, though, if we weren’t being forced unnecessarily to throw out memory kits because Intel arbitrarily decided to support no DDR3 on a Haswell-based E CPU that already included the DDR3 support in earlier versions of the same CPU.

      Even with the inflated cost of DDR4, we should be able to get the 16 GB any modern system needs in the short term and upgrade for a lot less money after costs drop with increased supply.

      That said, Devil’s Canyon’s high end with its 4.0ghz baseline and 4.4 turbo with a promise of greater overclocking gains might wind up a worthwhile alternative for the crowds who still prefer DDR3 even with the core count shortcoming looming in the shadows…

        • Airmantharp
        • 5 years ago

        I look at it this way: I have a Sandy quad with 16GB of DDR3 on a Z68 board. I don’t plan on cannibalizing that system; it’ll become a second system, minus the GPUs.

        So I’ll be buying more memory anyway, and therefore I’m fine with it being the newer, faster stuff, given that I’ll actually be able to make use of the increased bandwidth across the board.
