Leaked slide unmasks next-gen Tegra SoC

About a year has passed since Nvidia revealed its Tegra 3 SoC to the world. Rumor has it the firm will lift the curtain on its next-gen Tegra processor at the Consumer Electronics Show next month, and some information has leaked out already. A member of the Chiphell forums has posted an official-looking slide detailing some of the features of Wayne, the codename for the Tegra 3’s replacement. Tegra 3 was dubbed Kal-El, but Nvidia is channeling the Dark Knight this time around.

According to the slide, the Tegra 3’s novel 4-plus-1 CPU core arrangement will persist in the next generation. The chip’s quad-core cluster is backed by a low-power sidekick core designed to conserve battery life. The cores are referred to as Eagle, which matches the codename for ARM’s Cortex-A15 CPU.
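Nvidia hasn’t explained exactly how Wayne will shuffle work between the companion core and the main cluster, but the basic 4-plus-1 idea is easy to sketch. Here’s a minimal illustration in Python; the thresholds are purely hypothetical assumptions, not anything from the slide:

```python
# Minimal sketch of a 4-plus-1 governor (illustrative only; Nvidia's
# actual switching logic is not public). A single low-power companion
# core handles light loads; the quad-core cluster wakes as demand rises.

COMPANION_MAX_LOAD = 0.25  # assumed hand-off point, as a fraction of peak

def active_cores(load: float) -> str:
    """Pick a core configuration for a normalized load in [0, 1]."""
    if load <= COMPANION_MAX_LOAD:
        return "companion core only"  # idle and background work
    # Wake only as many main cores as the load appears to require.
    needed = min(4, max(1, round(load * 4)))
    return f"{needed} main core(s)"

for load in (0.05, 0.3, 0.6, 1.0):
    print(f"load {load:.2f} -> {active_cores(load)}")
```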

On the graphics front, the integrated GPU purportedly boasts 72 “cores,” or ALUs—six times the number in the Tegra 3’s GeForce component. The block diagram shows dedicated video decode and encode blocks with support for VP8 and H.264 HP acceleration at resolutions up to 2560×1440. 2560×1600 seems to be the highest supported display resolution for 24-bit color, and it looks like there’s a 1080p @ 120Hz mode for stereoscopic 3D applications.

The slide says the CPU and GPU will share a dual-channel memory controller, giving the chip potentially double the memory bandwidth of its single-channel predecessor. USB 3.0 is also on the menu and should be particularly valuable for convertible tablets with notebook aspirations. There are no details on clock speeds or thermal envelopes, but Wayne is supposed to be built using 28-nm process technology.
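The slide doesn’t mention memory types or clocks, but the bandwidth arithmetic behind that claim is simple. Here’s a quick sketch that assumes, hypothetically, the same DDR3L-1500 memory the fastest Tegra 3 variant supports:

```python
# Back-of-the-envelope peak bandwidth: channels x bus width x transfer rate.
# The DDR3L-1500 figure is an assumption carried over from Tegra 3.

def bandwidth_gbs(channels: int, bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given channel count and data rate."""
    return channels * (bus_bits / 8) * mt_per_s / 1e3

print(bandwidth_gbs(1, 32, 1500))  # Tegra 3-style single channel: ~6.0 GB/s
print(bandwidth_gbs(2, 32, 1500))  # dual channel: ~12.0 GB/s, double
```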

Nvidia needs a substantial upgrade in graphics horsepower to keep up with Apple’s SoCs, and it will be interesting to see if the next-gen Tegra delivers. We expect to learn more about the chip at CES next month, so stay tuned. Thanks to The Verge for the tip.

Comments closed
    • DavidC1
    • 7 years ago

    I call BS on the 20-100x increase for the Series 6 unless…

    -They aren’t mentioning power use, meaning the highest one might use 50W+ on 28nm
    -They are comparing fastest Series 6 to slowest Series 5

    Press release and marketing guys are geniuses at fooling average and above-average consumers. You could tell simply by looking at how Series 5 and its predecessor fare across the lineup. Outrageous claims come from the following equation: normal gains + synthetic benchmarks + corner-case scenarios + comparing the top-end next-gen part, arriving a few years later, against the lowest-end part currently out.

    At a fixed process node and power budget, performance does not vary by orders of magnitude, often not even by 50% (among top-tier companies, of course).

    Oh, and since Neely mentioned David Kanter’s article: I think it’s only a matter of time before Intel brings its GenX graphics to ALL CPUs, if not with Silvermont, then the next generation.

      • NeelyCam
      • 7 years ago

      I’m sure the 20-100x doesn’t consider power consumption, and 100x is most likely a corner case. This is from the Series 6 announcement:

      [quote<]"Delivering the best performance in both GFLOPS/mm2 and GFLOPS/mW, PowerVR Series6 GPUs can deliver 20x or more of the performance of current generation GPU cores targeting comparable markets. [b<]This is enabled by an architecture that is around 5x more efficient than previous generations.[/b<]"[/quote<] Even if they estimate 2x from a process node advantage, the efficiency improvement still sounds very impressive (again, unless it's a corner case).

    • DavidC1
    • 7 years ago

    Performance gain may end up nowhere close to 6x. That’s because they are moving to unified shaders.

    GeForce 8800 GTX had 128 SPs and double the performance of the 7800 GTX, which had 24 pixel and 8 vertex shaders. Tegra 3 has 8 pixel and 4 vertex shaders, for a total of 12. By that metric, the 7800 GTX has 32 SPs.

    But quadrupling the unit count only doubled the performance. So I assume the same will hold for Tegra 4, which would end up at about 3x the performance of Tegra 3.
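    Here’s that extrapolation as plain arithmetic (the SP-equivalence is my own rough metric, using the claimed figures above):

    ```python
    # DavidC1's rule of thumb: the 8800 GTX's 4x unit-count increase over
    # the 7800 GTX bought only 2x performance, so assume roughly half of
    # the raw unit scaling survives the move to unified shaders.

    g7x_sp_equiv = 24 + 8   # 7800 GTX: 24 pixel + 8 vertex shaders
    g80_sps      = 128      # 8800 GTX unified SPs

    unit_ratio       = g80_sps / g7x_sp_equiv         # 4x the units...
    observed_speedup = 2.0                            # ...for 2x the speed
    scaling_factor   = observed_speedup / unit_ratio  # 0.5

    tegra3_alus, tegra4_alus = 12, 72
    print(tegra4_alus / tegra3_alus * scaling_factor)  # 3.0x estimate
    ```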

      • MadManOriginal
      • 7 years ago

      Maybe they’ve done other things that help, like more memory bandwidth. Tegra 3 has only a single 32-bit memory channel, paired with different memory types and effective speeds across its versions, so just going to a 64-bit interface would double bandwidth and is pretty low-hanging fruit to implement.

    • mcnabney
    • 7 years ago

    H.264 but not H.265… that’s some real good advance planning there, Nvidia.

      • ludi
      • 7 years ago

      How are they supposed to support a standard that doesn’t exist yet?

    • Beelzebubba9
    • 7 years ago

    Anyone else curious as to how nVidia expects to hit a reasonable power budget with the Tegra 4, considering OEMs already seem to have thermal issues with the Qualcomm S4 Pro SoC and its much less power-hungry CPU cores?

    Also, any idea why nVidia went with four Cortex-A15 cores instead of two? With the Tegra 3 it made some sense, because nVidia was already using a well-tested, somewhat slow, low-power core and couldn’t really scale up single-threaded CPU performance. With the A15, I figured they’d wait for a die shrink before taking on the diminishing returns of a quad-core SoC.

    Regardless of power, this thing should be a beast when it comes out. I’m just not sure how they’re going to get it into a phone at 28nm.

      • MadManOriginal
      • 7 years ago

      Because QUAD CORE is a sweet marketing bullet point. Sad but true. I would MUCH rather see further development of dual cores that have a wide frequency range to balance power consumption. Quad cores are still a waste in mobile devices imo.

        • Beelzebubba9
        • 7 years ago

        Agreed. I even find them to be a waste for most PC workloads, but at least there isn’t really much of a compromise to be had when you’re not trying to run a CPU off of a battery that fits in your pocket.

      • tviceman
      • 7 years ago

      They have a separate Tegra product they’re releasing specifically for phones – Tegra Grey. Tegra 4 is probably going to be 90-95% tablets and laptop convertibles.

        • Beelzebubba9
        • 7 years ago

        Good point – I had totally forgotten about Grey.

        Dual A15s and a ~48-shader GPU could make for a very compelling smartphone SoC if nVidia executes well with the Icera baseband.

          • NeelyCam
          • 7 years ago

          A15s are pretty power-hungry... check out that Chromebook review.

            • Beelzebubba9
            • 7 years ago

            Yeah that’s my worry too. At this point in CPU/GPU design, it’s not about how much hardware you can get onto a chip, it’s how much work you can do in a given power budget.

      • tviceman
      • 7 years ago

      And also, have you not been keeping up with what Qualcomm and Samsung are doing? Quad-core Snapdragons are coming, and Samsung has its own quad-core ARM processors. Nvidia fired the first quad-core volleys, and now it has to keep up.

    • dashbarron
    • 7 years ago

    The Tegra 3 was disappointing in the GPU department; you’d think that, being a GPU company, Nvidia would make the Tegra 4’s graphics a beast.

    • chuckula
    • 7 years ago

    Unmasks Tegra 4 eh? Very punny headline there.

    P.S. –> [url<]http://www.youtube.com/watch?v=JoX-HkOcEuE[/url<]

      • dpaus
      • 7 years ago

      Except that Bruce Wayne doesn’t wear a mask….

      Now if the headline had been “Tegra 4’s secret identity revealed” I’d have given it a standing slow-clap 🙂

        • SonicSilicon
        • 7 years ago

        Sometimes Bruce does: [url<]http://www.youtube.com/watch?v=w-A04w1oLaw[/url<] (skip to about the 5-minute mark.)

        Actually, I had a similar thought to UberGerbil's: a pun on photolithography.

      • UberGerbil
      • 7 years ago

      I took “unmask” as a pun on IC [url=http://en.wikipedia.org/wiki/Photomask<]masks[/url<].

    • lilbuddhaman
    • 7 years ago

    6x the gfx power of Tegra 3? Sounds like it’ll compete, but my FIRST notion is that the follow-ups from its competitors will be doubling and tripling up themselves… Nvidia needs to catch up further…

      • Chrispy_
      • 7 years ago

      I’m tired of this FIRST rubbish.

      Congratulations on making a plainly obvious statement just so that you could fit in the magic word.

        • lilbuddhaman
        • 7 years ago

        It was obvious but needed to be said. "According to benchmarks," the Tegra 3 is way behind the competition, upwards of 10x.

        To contradict myself, though, I don’t see how these benchmarks correlate to real-world performance, especially when both Android and iOS have a terrible selection of meaningful 3D games and apps.

          • Beelzebubba9
          • 7 years ago

          iOS has a pretty fantastic selection of 3D games. I mean, you’re not going to be running Call of Medal of Black Ghost Scripted Console Focused Shooter 7: Part 2 on an iPhone, but there are a lot of 3D games available on the platform.

          No one will claim that you can re-create the PC gaming ecosystem on a phone, but there’s a ton of software out there to run.

      • Silus
      • 7 years ago

      Have you checked benchmarks? Tegra 3 is up there… not at the top anymore, but up there in the charts. With Kepler in Tegra’s GPU, you can be sure you’ll see much better graphics performance. Remember that Tegra 2 and 3 used the same architecture as the GeForce 6800s; only now has NVIDIA finally shifted to a unified architecture (Qualcomm only just did the same with its new Kraits), and that should pay off.

      Plus, as usual, NVIDIA will launch its new SoC before any of the other competitors. Samsung seems to be the only one that will have something new soon enough; Qualcomm won’t have any new chip for almost a year from now.

        • MadManOriginal
        • 7 years ago

        I don’t pay as close attention to the architecture details of mobile GPUs, probably because they are ‘all-in-one’ packaged SoC solutions, so you have to look at the overall SoC anyway. I didn’t know Tegra didn’t even have unified shaders yet, thanks!

        • Helmore
        • 7 years ago

        All Snapdragon GPUs are based on a unified shader architecture. Actually, the original Adreno 2X0-series GPUs are based in part on the technology behind the Xbox 360’s Xenos GPU, just geared toward extremely low power usage and somewhat more limited functionality.

          • Silus
          • 7 years ago

          S4s are, but not the previous ones:

          [url<]http://www.brightsideofnews.com/news/2012/7/25/qualcomm-snapdragon-s4-benchmarking-day.aspx[/url<]

          Also, Xenos was not a unified architecture. It had some concepts that can be found in a unified architecture, but it wasn't unified. It still had independent units for specific tasks.

      • tviceman
      • 7 years ago

      Apple's A6X: [url<]http://www.anandtech.com/show/6426/ipad-4-gpu-performance-analyzed-powervr-sgx-554mp4-under-the-hood[/url<]

      Qualcomm's APQ8064 with Adreno 320: [url<]http://www.anandtech.com/show/6112/qualcomms-quadcore-snapdragon-s4-apq8064adreno-320-performance-preview[/url<]

      Samsung's Exynos with ARM's Mali GPU: [url<]http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review/2[/url<]

      The A6X has far and away the fastest mobile graphics, anywhere from 3-5 times faster than Tegra 3. Overall, nothing comes particularly close to the A6X. So if Tegra 4 can live up to its 6x-faster-than-Tegra 3 claims, Nvidia will have the fastest mobile graphics solution (at least when it's introduced). Qualcomm is not likely to come out with any completely new high-end chips soon, as its quad-core Krait is still being rolled out, and Samsung's latest came with the Nexus 10. I think 2013's overall top mobile performance chip will be Tegra 4 vs. Apple's next latest and greatest.
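      Putting those claimed numbers together (all approximate figures from the reviews linked above, not measurements of mine):

      ```python
      # Rough relative standing implied by the claims above.
      tegra4_vs_tegra3 = 6.0          # Nvidia's claimed 6x over Tegra 3
      a6x_vs_tegra3    = (3.0, 5.0)   # A6X cited as 3-5x faster than Tegra 3

      for a6x in a6x_vs_tegra3:
          print(f"Tegra 4 vs. A6X: ~{tegra4_vs_tegra3 / a6x:.1f}x")
      # ~2.0x and ~1.2x: ahead of the A6X either way, if the claim holds
      ```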

        • MadManOriginal
        • 7 years ago

        This is why looking at the architectural details of parts of an SoC can be misleading. I am positive that a big reason for the A6X's graphics performance is its memory interface bandwidth rather than anything 'magical' about the architectural details. Apple isn't designing its own CPU or graphics cores entirely, just tweaking what exists at most, but it packages them as a whole with exceptional bandwidth and other features that others could match just as easily but don't.

        • NeelyCam
        • 7 years ago

        [quote<]I think 2013 the overall top mobile performance chip will be Tegra 4 vs. Apple's next latest and greatest.[/quote<]

        Actually, GPU-wise, the likely winners are those that sport PowerVR Series 6, which is supposed to be 20-100x faster than Imagination's current GPUs: [url<]http://www.imgtec.com/news/Release/index.asp?NewsID=666[/url<]

        ST-Ericsson's Nova A9600 should have one, although it may be that the chip will never be released. Also, as David Kanter mentions, Intel's 22nm chip should have one:

        [quote<]"In 2013, Intel will ship a 22nm FinFET SoC with the new, power-optimized Silvermont CPU and the recently announced [b<]PowerVR Series 6 graphics.[/b<]"[/quote<]

        [url<]http://www.realworldtech.com/medfield/5/[/url<]

        My money is on Intel's 22nm chip... assuming it is really coming out in 2013.

        EDIT: Actually, I'll take that back - I just realized this was a discussion on tablet graphics. For that, the clear winner will be this: [url<]http://www.techspot.com/news/47923-intels-next-gen-valley-view-atom-to-sport-ivy-bridge-graphics.html[/url<]

          • MadManOriginal
          • 7 years ago

          NeelyCam says Intel is teh winnar. MORE ON THIS SHOCKING DEVELOPMENT AT 11!
