Rumor: Nvidia working on Project Boulder server processor

At the Consumer Electronics Show in 2011, Nvidia unveiled Project Denver, a custom CPU compatible with the ARM instruction set. This processor was destined to share die space with an Nvidia GPU and deliver enough performance for desktops and servers. We haven’t heard much about Project Denver in the nearly 20 months since the initial reveal, though.

According to Bright Side of News, Project Denver will come to market in Nvidia’s Tegra 5 SoC, which will also feature a GPU code-named Maxwell. Denver isn’t the only Colorado-themed processor in the works, though. The same story cites “sources in the know,” who say a chip called Project Boulder is scheduled to arrive in 2014. This processor will purportedly target servers specifically, and it seems likely to be based on the same microarchitecture as its mile-high counterpart. The BSN post is otherwise short on specifics but suggests that Project Boulder may employ 8-16 processor cores and interface with DDR4 memory.

While it’s difficult to read too much into rumors about a product that isn’t due until 2014—especially since its apparent roots lie in a chip Nvidia has been silent about for nearly two years—there is some logic to a two-pronged processor lineup. Servers face different workloads than the desktops, notebooks, and tablets that may be ripe for Denver-based Tegra chips. There are much higher margins in the server space, too.

Comments closed
    • blastdoor
    • 7 years ago

    Nvidia and AMD will not survive as independent companies. Best case scenario would be for Qualcomm to buy AMD and Samsung to buy NV.

    Consolidation in the ARM market has to happen. It’s just too crowded right now.

      • pogsnet
      • 7 years ago
        • blastdoor
        • 7 years ago

        Qualcomm’s interest in AMD would be ATI, engineering talent, and closer ties to GlobalFoundries. The x86 license is a big turd; nobody wants to compete head to head with Intel on Intel’s turf, especially since x86 is a shrinking market.

    • ronch
    • 7 years ago

    Nice. About time ARM spreads its wings a little farther. I really don’t like the way Intel tries to keep x86 all to themselves. It’s good to see a different architecture manage to become so popular and remind Intel that x86 is not the only architecture available.

    • jdaven
    • 7 years ago

    If TSMC can deliver on their promises of 14 nm, then this is big news. ARMv8 on 14 nm, here we come. Finally, competition is coming back to the CPU market.

      • chuckula
      • 7 years ago

      “If TSMC can deliver on their promises of 14 nm” LMAO!!! Nobody ever accused you of lacking a sense of humor!

      • samurai1999
      • 7 years ago

      In two years’ time (2014), this would be on 20 nm, not 14 nm…

      • pogsnet
      • 7 years ago
        • just brew it!
        • 7 years ago

        Yes, but a lot of servers run Linux these days, and Linux already has a track record on ARM. The infrastructure (e.g. LAMP stack, NFS, Samba…) is Open Source and already ported to multiple architectures, so recompiling it for ARM shouldn’t be a huge deal. I don’t think the porting issues are as bad as you think.
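      The recompiling point is easy to illustrate. Below is a minimal sketch (not from the thread) showing that portable C source carries no architecture assumptions: the same file builds unchanged for x86 or ARM, and the target is visible only through compiler-defined macros.

      ```c
      #include <stdio.h>

      /* Portable source: the compiler targets whatever architecture it
       * was built (or cross-built) for; no source changes are needed. */
      const char *build_arch(void) {
      #if defined(__aarch64__) || defined(__arm__)
          return "ARM";
      #elif defined(__x86_64__) || defined(__i386__)
          return "x86";
      #else
          return "other";
      #endif
      }

      int main(void) {
          printf("compiled for: %s\n", build_arch());
          return 0;
      }
      ```

      Cross-compiling a project like this for ARM is typically just a matter of pointing the build at a cross toolchain (e.g. arm-linux-gnueabihf-gcc) rather than touching the source, which is why porting already-portable server software is mostly a recompile.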

    • MadManOriginal
    • 7 years ago

    Waiting for a South Park codenamed CPU.

      • Vivaldi
      • 7 years ago

      Mint-Berry.

      “the power of mint and berries, yet with a satisfying, tasty crunch!”

      Yes, I lol’ed while I wrote this.

      • Martian
      • 7 years ago

      And an overheating SP CPU can turn the cooling fan into a flamethrower and kill Kenny…

      • Diplomacy42
      • 7 years ago

      step one- product development codenamed project kenny
      step two- ?
      step three- profit

      • rrr
      • 7 years ago

      Red rocket, Red rocket, Sparky, Red rocket!

    • bcronce
    • 7 years ago

    Many Core ARM server CPU…. /drool

    Bring it on! Competition!!!

      • chuckula
      • 7 years ago

      You might be disappointed. I think Nvidia really sees Boulder in the same way that Intel sees chipsets. Boulder is there to make it easier for Nvidia to move more Tesla boards with a theoretical ability to cut prices since the Intel portion of the system is removed. (and without Jen-Hsun having to gnash his teeth at the thought of Intel still making money by providing the systems that the Tesla boards plug into). In a similar vein, the chipsets are there for Intel to have support for its own CPUs to talk to peripherals so that people will want to buy the CPU. Both are necessary, but neither one is glamorous on its own.

      • sschaem
      • 7 years ago

      What data do you have that those will be any more power efficient than even Opterons?

        • just brew it!
        • 7 years ago

        It isn’t a given, but it is likely. x86 has a handicap coming out of the gate because of the complicated instruction decode logic, which raises die area and power consumption by a non-trivial amount. OTOH x86 tends to use less memory bandwidth (and power!) for instruction fetches, since the irregular instruction encoding is actually quite efficient at physically packing instructions into memory; this partially mitigates the decode overhead.

          • bcronce
          • 7 years ago

          Intel has already shown that x86 isn’t an issue. They demoed an Atom CPU that is about 20-100% faster than the equivalent ARM offering while consuming only 10-30% more power. Not to mention it’s full-blown x86 with all the bells and whistles.

          Yes, both idle battery life and load battery life were almost identical, with nearly identical battery capacities.

    • Beelzebubba9
    • 7 years ago

    I think this makes a lot of sense for nVidia, as Intel is trying to push them out of the HPC space with Knights Corner. I doubt these cores will be anywhere near as fast as Haswell-EX (or whatever Intel has out at the time), but if they allow nVidia to market an Intel-free bootable system in which the GPU does the lion’s share of the processing, the strategy could work.

    Plus, we’re not far from the point where even mobile SoCs are ‘fast enough’ for most computing tasks. I suspect that by 2014 these nVidia cores could power PCs just fine running either Windows RT or some version of Android. I don’t see them being particularly competitive with Intel, but as a consumer it’s always nice to have options.

      • UberGerbil
      • 7 years ago

      “Intel-free” isn’t a selling point per se. Only if that allows them to be cheaper or offer some other advantage is that trait interesting to customers (and given that the customers will be OEMs and VARs/integrators like HP, the fact that Intel offers matching marketing dollars can make “Intel-free” a net negative).

      Having the GPU do the lion’s share of the processing is an advantage if the processing is highly parallel FP ops, so as you say it’s a match for HPC, where Intel keeps trying to threaten them. But that’s just one piece of the server pie, and only if the GPU can do that lion’s share faster or using less power (or, again, at lower total cost) is it a net win as far as customers are concerned. The server pie is much larger than HPC, and includes many tasks that feature branchy, serial, integer loads that GPUs traditionally haven’t handled well. So while there’s certainly a profitable niche in servers for nVidia, it’s really more of the same HPC niche they’ve already colonized with Tesla.

      Moreover, the server chips presumably are DP-oriented, whereas SP loads dominate in the consumer/gaming space (when FP is required at all, i.e. gaming); users there especially don’t appreciate having extra transistors burning power for un-needed precision. So just as nVidia found it necessary to split their discrete GPU designs, they’re probably doing the same here: the GPU found in Tegra 5 is unlikely to be the same as the one that shows up in the server chips (though obviously there will be plenty in common).

      However, I suspect you may be misreading this, and the reality may be different altogether: nVidia isn’t intending to use the GPU to do the lion’s share of processing at all. Instead, they leave the HPC niche to the product they already have (Tesla) and target Denver at the rest of the server market with a many-ARM-cores design that has a small GPU merely tacked on, as the “8-16 processor cores” quote suggests.

        • Beelzebubba9
        • 7 years ago

        I agree that Intel-free doesn’t provide benefits to consumers, but I think nVidia has to provide an option lest Intel push them out of the HPC space entirely.

        Maybe I’m just being pessimistic, but I don’t see how nVidia can hope to compete against Haswell or Broadwell in 2014 in price:perf in an average server workload.

      • sschaem
      • 7 years ago

      Nvidia will also need to build clusters & chipsets.

      SeaMicro seems keen to leverage ARM whenever it can run server workloads better than existing x86.

      Could AMD start to use Nvidia ARM CPUs in their cluster designs? 🙂

      Note: servers don’t really need SoC designs with Wi-Fi, display interfaces, GPUs, etc., so it seems the barrier to entry is very different.
      I haven’t checked, but AMD’s Jaguar might have a small enough transistor count to match ARM density.
