Leaked specifications reveal Bay Trail-based Atom, Celeron, and Pentium processors

We’ve already taken an in-depth look at the Silvermont architecture that will underpin Intel’s next-generation Atom processors. Now, details have leaked on the specifications of some of those chips. The information first appeared on Baidu but has since disappeared. Fortunately, EXPreview captured the details, which we’ve arranged in a handy table below.

Model          Family       CPU cores  CPU clock  Max GPU clock  SDP    TDP
Atom E3840     Bay Trail-I  4          1.91GHz    792MHz         NA     10W
Atom E3823     Bay Trail-I  2          1.75GHz    792MHz         NA     8W
Atom E3822     Bay Trail-I  2          1.46GHz    667MHz         NA     7W
Atom E3821     Bay Trail-I  2          1.33GHz    533MHz         NA     6W
Atom E3810     Bay Trail-I  1          1.46GHz    400MHz         NA     5W
Pentium N3510  Bay Trail-M  4          2.00GHz    750MHz         4.5W   7.5W
Celeron N2910  Bay Trail-M  4          1.60GHz    756MHz         4.5W   7.5W
Celeron N2810  Bay Trail-M  2          2.00GHz    756MHz         4.5W   7.5W
Celeron N2805  Bay Trail-M  2          1.46GHz    667MHz         <2.5W  4.5W
Pentium J2850  Bay Trail-D  4          2.41GHz    792MHz         NA     10W
Celeron J1850  Bay Trail-D  4          2.00GHz    792MHz         NA     10W
Celeron J1750  Bay Trail-D  2          2.41GHz    792MHz         NA     10W

The leak covers multiple variants of Bay Trail, the 22-nm SoC that will serve as Silvermont’s launch vehicle. As expected, the chip will span Atom, Celeron, and Pentium families. Bay Trail-M and -D variants will purportedly have Celeron and Pentium branding, while Bay Trail-I will fuel the Atom line.

According to the leaked specifications, Bay Trail-D’s CPU frequency will scale as high as 2.4GHz. Those chips will have 10W TDPs, and I suspect they’ll appear in all-in-one and small-form-factor systems. A Bay Trail-powered NUC is probably in the cards.

The Bay Trail-M processors are the only ones with SDP ratings. Short for Scenario Design Power, SDP describes power draw in typical usage scenarios rather than under the worst-case loads that define the TDP. We first saw SDP ratings attached to Intel’s Y-series Ivy Bridge chips, which are aimed at tablets and convertibles. Bay Trail-M is probably targeted at similar systems.

If the leaked information is accurate, Bay Trail-I will power a broad range of Atom processors, from the single-core E3810 to the quad-core E3840. The lineup should cover TDP ratings from 5W to 10W, and I wouldn’t be surprised if it were restricted to cheaper devices.

Comments closed
    • MarionLima00
    • 6 years ago
      • mesyn191
      • 6 years ago

      Spammer plllz delete

      • Grigory
      • 6 years ago

      Then why don’t you do the same instead of “working” as a spammer for cents an hour?

        • Novuake
        • 6 years ago

        LOL Grigory. Funny. These posts are getting out of hand.

    • zzz
    • 6 years ago

    Quad-core Celerons, Pentiums, and Atoms? Doesn’t that effectively kill the naming convention of the i3s? Who cares about a dual-core with HT if you can buy a cheaper Pentium with four legit cores?

      • MadManOriginal
      • 6 years ago

      Anyone who knows that the Core CPUs will still be way faster, and isn’t swayed just by core count? I guess that excludes AMD fans but they wouldn’t be buying an Intel CPU in the first place :p

        • Melvar
        • 6 years ago

        I seriously doubt that even 5% of the people who buy computers know what a core is, much less MHz or IPC. They probably don’t know what Celeron, Pentium, or Core (with a capital C) represent either.

        It’s okay though. The sales staff at Best Buy would never steer them wrong.

          • MadManOriginal
          • 6 years ago

          Yes – towards the more expensive product, in this case a Core series.

        • dragontamer5788
        • 6 years ago

        Power efficiency demands a higher core count at a lower clock and a lower voltage. Dynamic power in computers is more or less Frequency * Voltage^2. If you cut the voltage and frequency in half, each core draws roughly one-eighth the power of a fully clocked core.

        So if you cut voltage and frequency in half and then give the system four cores, it will (probably) outperform one core in multithreaded tasks while the whole system still uses half the power of the original.

        Single-threaded performance suffers significantly, however. But if they’re aiming at very low power usage, that’s what they want.
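
        A minimal sketch of that arithmetic in Python (the 0.5x scaling factors are purely illustrative):

        [code<]
        # Dynamic power model: P ~ f * V^2 (capacitance folded into the constant).
        def relative_power(freq_scale, volt_scale):
            """Dynamic power relative to a core at full frequency and voltage."""
            return freq_scale * volt_scale ** 2

        one_fast_core = relative_power(1.0, 1.0)  # baseline: 1.0
        one_slow_core = relative_power(0.5, 0.5)  # 0.5 * 0.25 = 0.125, one-eighth
        four_slow_cores = 4 * one_slow_core       # 0.5: half the baseline power
        print(one_slow_core, four_slow_cores)     # 0.125 0.5
        [/code<]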

        ———-

        Of course, the consumer doesn’t care about this; it’s just an engineering tidbit I thought I’d share. Both Kabini and Bay Trail (as well as Snapdragon, Tegra, and every other low-powered device) are designed to take advantage of this fact. Expect to see higher-than-normal core counts on low-powered devices as we move into the future.

          • NeelyCam
          • 6 years ago

          [quote<]Expect to see higher-core counts than normal on low-powered devices as we move into the future.[/quote<]

          I'm not so sure about that... In most low-power devices (tablets, cell phones), single-threaded performance matters a lot. Not all code can be easily parallelized, and for general responsiveness one core needs to run pretty fast (turbo, high IPC). I don't see the benefit of having more than four cores on a cell phone chip.

          I get your point about power savings when you add cores and cut frequency/voltage (0.5x voltage scaling would be insane, but I understand this is just a mathematical example). But the efficiency gains can be largely negated by stuff like Amdahl's law, and adding more cores makes the chip more expensive and hurts yields.

          I think in the near future, core counts for cell phones and tablets will be stuck at 2 or 4 (depending on the core and power consumption profile of the gadget), unless you consider frankenchips like Exynos Octa to be an octacore chip (I don't). The focus will be more on extreme dynamic voltage/frequency scaling and turbo-like features.

          Long term, once the R&D on parallel computing has reached the mobile space, those workloads are probably going to be handled by GPU-like parallel engines instead of CPU cores (was that essentially your point?). I don't think it would make sense to have 8-16 A7 or A12 or Airmont cores on a cellphone/tablet chip.
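
          Amdahl's law makes that tradeoff concrete; here's a minimal sketch (the 75% parallel fraction is purely illustrative):

          [code<]
          # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
          # fraction of the workload that parallelizes.
          def speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          p = 0.75  # assume 75% of the work parallelizes
          for n in (1, 2, 4, 8):
              print(n, round(speedup(p, n), 2))  # 1: 1.0, 2: 1.6, 4: 2.29, 8: 2.91
          [/code<]

          Going from four to eight cores buys only another ~0.6x once the serial fraction dominates.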

            • dragontamer5788
            • 6 years ago

            [quote<]I think in the near future, core counts for cell phones and tablets will be stuck at 2 or 4 (depending on the core and power consumption profile of the gadget), unless you consider frankenchips like Exynos Octa to be an octacore chip (I don't). The focus will be more on extreme dynamic voltage/frequency scaling and turbo-like features.[/quote<]

            That's the thing: four cores is already an unusually high core count, especially when you consider just how weak each core is.

            [quote<]Long term, once the R&D on parallel computing has reached the mobile space, those workloads are probably going to be handled by GPU-like parallel engines instead of CPU cores (was that essentially your point?)[/quote<]

            That too. The industry is definitely showing movement in that direction; JavascriptCL and Renderscript, for example, are pushing that way. Qualcomm is technically part of the HSA Foundation, so there might even be some HSAIL support coming for these chips.

    • DavidC1
    • 6 years ago

    Bay Trail-I: Embedded
    Bay Trail-M: Notebook
    Bay Trail-D: Value Desktop

      • maroon1
      • 6 years ago

      There’s also Bay Trail-T for tablets and smartphones.

      The Atom Z3770 has impressive specs with only a 2W SDP. It looks like a nice successor to Clover Trail:
      [url<]http://www.anandtech.com/show/7048/silvermont-to-sell-under-atom-celeron-and-pentium-brands-24ghz-z3770-leaked[/url<]

    • ssidbroadcast
    • 6 years ago

    The [i<]Michael Bay[/i<] of Celeron processors.

      • Novuake
      • 6 years ago

      You deserve more likes!

    • cmrcmk
    • 6 years ago

    Do we have any real info on the GPUs yet? Clock speeds without shader counts and arch details are pretty meaningless. I’d love to build an HTPC with a 10W SOC, but it would have to have smooth visuals to be worth it.

      • MadManOriginal
      • 6 years ago

      They are based on Core CPU ‘Intel HD’-type graphics, so they at least have dedicated encode/decode blocks and will certainly be enough for general HTPC use. They probably won’t be up to the level of NV or AMD for advanced image quality stuff but that only matters to the pickier cinephile types.

        • Flatland_Spider
        • 6 years ago

        Getting rid of the PowerVR junk is the biggest win this design has going for it. The Intel HD graphics are a decent solution. They aren’t the dead albatross they used to be.

          • Helmore
          • 6 years ago

          PowerVR isn’t bad. It’s their Windows support that’s horrible.

            • Flatland_Spider
            • 6 years ago

            Hardware is only as good and useful as its drivers.

            Windows support = Horrible
            Linux support = non-existent

      • DavidC1
      • 6 years ago

      It’s based on Ivy Bridge’s Gen7 graphics with 4 EUs.

      I think the early estimates put graphics performance roughly on par with AMD’s Hondo Z-60.

      There’s no worry with video playback, since you can already do that pretty well on Clover Trail. For games, you’ll probably be fine if you stick to 5-6 year old titles.

    • NeelyCam
    • 6 years ago

    These guys also listed the prices for mobile versions:

    [url<]http://www.computerbase.de/news/2013-07/zwoelf-22-nm-atom-celeron-und-pentium-benannt/[/url<]

      • Unknown-Error
      • 6 years ago

      $132? OUCH!

        • dragontamer5788
        • 6 years ago

        Hopefully that’s an error… it’s 3x more expensive than Clover Trail Atoms and $50 more than current-generation Celerons.

          • willmore
          • 6 years ago

          Well, they’re making these on their newest process. They’re not using them to keep an already-paid-for older fab busy, one that would otherwise just be making chipsets and other sundries. That’s got to push the price up relative to the older Atoms, which didn’t have that ‘problem’.

        • Spunjji
        • 6 years ago

        *headdesk*

    • bjm
    • 6 years ago

    These are going to make awesome multi-purpose home server CPUs, from a custom firewall to a file server.

    • OneArmedScissor
    • 6 years ago

    Anyone know if these sync the L2 and memory controller to the core clock? I haven’t seen any hints of that beyond the significantly higher memory bandwidth.

    • Deanjo
    • 6 years ago

    Now hopefully some board manufacturer is smart enough to slap these “Pentiums” on a mATX board with a ton of SATA ports and dual Intel NICs.

      • StashTheVampede
      • 6 years ago

      Quad-core processor, 16GB of RAM support, 4-6 SATA ports, and dual NICs or a single PCIe slot. Believe!

        • Deanjo
        • 6 years ago

        I’m thinking more like 8-12 SATA ports for FreeNAS purposes (one mSATA slot would be nice too).

          • MadManOriginal
          • 6 years ago

          8-12 SATA ports… we won’t see that on consumer boards, but maybe Supermicro will make a nutty Atom board with a SAS controller or something.

            • Deanjo
            • 6 years ago

            Nah, SAS would be overkill. I bet they’d sell a ton of them to the enthusiast/prosumer/small-business market if they made one with that kind of storage expandability.

            • MadManOriginal
            • 6 years ago

            I was thinking of SAS as a way to get the maximum number of ports in limited board space, using a SAS expander cable, which still works with SATA drives. It would require additional controller chips either way, but SAS would save a ton of board space. Eight ports are doable, because that only takes one additional, cheap controller chip, but 12 SATA ports would eat a lot of board area.

            • Deanjo
            • 6 years ago

            No need for SAS if you want to go that route. A lot of SATA controllers already support FIS-based port multipliers. There would also be plenty of room on a mATX board, especially if they knocked down the number of PCIe slots, for example.

            • MadManOriginal
            • 6 years ago

            Ok, whatever, we still won’t see 12 SATA ports on a consumer Atom board… 8, maybe.

    • Star Brood
    • 6 years ago

    Now let’s see a quad-core i3?

    I wonder how the Pentium J2850 will compare to a Q6600.

      • Deanjo
      • 6 years ago

      Depends on the use. It would kick butt in performance per watt (and overall consumption) and in encryption duties, since it has AES-NI support.

      • FuturePastNow
      • 6 years ago

      If it’s still an in-order core, it’ll perform poorly compared to a Core 2 at the same clock speed. Far lower power consumption, though, and the difference will be less noticeable with very basic tasks and anything that can be offloaded to the GPU.

        • dragontamer5788
        • 6 years ago

        Bay Trail is an exciting upgrade to the Atom line because it’s an out-of-order architecture. IIRC, they got rid of Hyper-Threading, which didn’t really add much to the Atom line anyway.

          • FuturePastNow
          • 6 years ago

          Oh, good. If it’s a real processor and not a toy like previous Atom CPUs, we should see some good performance per watt.

            • Voldenuit
            • 6 years ago

            [quote<]If it's a real processor and not a toy like previous Atom CPUs, we should see some good performance per watt.[/quote<]

            I'd be more interested in performance per dollar, and I'm hoping that migrating the 'Pentium' brand to Atom doesn't take us backwards on that score.

          • Theolendras
          • 6 years ago

          I’d disagree with that. HT was the only way Atom could, in some cases, be performance-competitive with Brazos.

      • DavidC1
      • 6 years ago

      The original Atom at 1.86GHz was about equal to a 1GHz Pentium M-based core with a cut-down 512KB L2 cache.

      Even 50% higher IPC would mean the Pentium M is still faster. Realistically, I’d put the IPC somewhere around Bobcat’s.

      This is how an equivalently clocked/cored Bobcat compares to a cut-down Conroe core with 512KB of L2 cache:
      [url<]http://anandtech.com/bench/Product/328?vs=71[/url<]

      Yeah, it’s not going to fare very well.
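
      As a quick sketch of the per-clock arithmetic implied above (using only the numbers in this comparison):

      [code<]
      # 1.86GHz original Atom ~= 1GHz Pentium M, so per clock:
      atom_ipc = 1.0 / 1.86            # ~0.54x Pentium M's IPC
      silvermont_ipc = atom_ipc * 1.5  # assume the quoted 50% IPC uplift
      print(round(silvermont_ipc, 2))  # ~0.81: still below Pentium M per clock
      [/code<]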

        • Spunjji
        • 6 years ago

        That J2850 would probably be pretty decent if it wasn’t trying to be a Pentium.

    • Hattig
    • 6 years ago

    [quote<]Pentium J2850; Bay Trail-D; 4 cores; 2.41GHz CPU; 792MHz GPU; 10W TDP[/quote<]

    The 10W TDP for this SKU is not bad at all, assuming the new Atom core is far more performant than the old one and competitive with Jaguar on a per-clock basis.

      • MadManOriginal
      • 6 years ago

      If it’s about twice as fast as a Z2760 (1.8GHz, 2c/4t Clover Trail), it will have performance comparable to 15W-TDP Kabini. The slower ones might not reach that 2x mark but will almost certainly draw less power than a comparably performing Kabini/Temash. The quad cores will also show a good improvement over the 2c/4t Clover Trail in multithreaded programs.

      The 2GHz+ quad cores have a lot of potential, and the PowerVR graphics are finally going away, although graphics is the area where AMD will surely continue to have an advantage.

        • dragontamer5788
        • 6 years ago

        The real question for AMD is whether or not its HSA strategy pays off.

        [url<]http://www.anandtech.com/show/6974/amd-kabini-review/4[/url<]

        Bay Trail has a similar architecture to Kabini but comes in at a lower TDP thanks to Intel’s process advantage. However, Kabini with OpenCL beats an ultrabook-class i5 in AnandTech’s Vision and Fluid OpenCL benchmarks.

        Historically, programmers have been reluctant to change their programming habits. But Adobe Premiere is now OpenCL-accelerated, as are Sony Vegas, WinZip, and most recently... OpenOffice. If HSA pays off, AMD will benefit from its superior GPU. (Intel’s “Iris” will also benefit, but at $600+, it isn’t a mainstream chip.)

          • MadManOriginal
          • 6 years ago

          Hmm yeah, it’s cool that Kabini and Temash are good at OpenCL, but it’s like using a .22 to kill a bear. I’d think anyone who really intends to run those kinds of applications, especially for professional use where time equals money, would look at something more powerful like a discrete card anyway.

            • Deanjo
            • 6 years ago

            Exactly. Kabini and Temash’s OpenCL support is OK for things like WinZip, but it absolutely gets blown away by even 4- or 5-year-old compute-capable cards.

            • dragontamer5788
            • 6 years ago

            Not necessarily.

            The main problem with dGPUs is getting data to them: the CPU has to spend time copying all that data from main memory to the dGPU (through PCIe), the dGPU works on it, and then the results have to be transferred back. “Bandwidth”-intensive operations don’t really see a big issue, but “latency”-sensitive operations are bottlenecked not by the CPU or the dGPU but by the microseconds it takes to move data between them.

            AMD’s demo of this concept is passing a pointer between the CPU and GPU. To do that with a dGPU, you’d have to perform a deep, recursive copy. With Kabini, it’s a 4-byte pointer pass handled by the on-chip cache.

            Latency-sensitive operations will be faster on Kabini / Temash (and on Intel’s Iris, across the on-die Crystalwell memory space) because the CPU and iGPU share the same memory space. A memory copy between the CPU and iGPU is instead apparently measured in *nanoseconds*, as they share on-chip cache.

            Basically, Kabini and Temash are *much much* faster at sharing memory between the CPU and the iGPU. *Significantly* faster than even CPU and dGPU across 16x PCIe.
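
            A toy break-even model of that copy overhead (every number here is hypothetical, just round PCIe-era figures):

            [code<]
            # When does offloading a small task to a dGPU pay off?
            PCIE_BW  = 8e9    # bytes/s, roughly PCIe 2.0 x16 effective
            PCIE_LAT = 10e-6  # seconds of fixed cost per transfer (driver + DMA)

            def dgpu_time(nbytes, gpu_compute_s):
                # copy in + compute + copy out
                return 2 * (PCIE_LAT + nbytes / PCIE_BW) + gpu_compute_s

            def igpu_time(gpu_compute_s):
                # shared address space: pass a pointer, no bulk copy
                return gpu_compute_s

            # A 64KB work item the GPU finishes in 5 microseconds:
            print(dgpu_time(64 * 1024, 5e-6))  # ~4.1e-05 s, dominated by transfers
            print(igpu_time(5e-6))             # 5e-06 s
            [/code<]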

            Whether or not programmers actually take advantage of this is the question. It’s a new architecture with new performance metrics. It’s fast in a *different* way, which is the scary part for AMD: they’re relying on programmers to change their programming style to match AMD’s strengths over Intel’s.

            AMD has been working extremely hard on this strategy, however. Both the Xbox One and the PS4 have this shared CPU/iGPU architecture, so games written for the next-gen consoles will automatically be written this way. AMD has also been paying companies like Adobe to focus on OpenCL implementations optimized for its hardware. Etc., etc.

            • Deanjo
            • 6 years ago

            That advantage is quickly negated by the dGPU’s edge in pure computing performance.

            • xeridea
            • 6 years ago

            There are many tasks that see little benefit from OpenCL because of how memory has to be managed, and for the ones that can benefit, it takes very advanced programmers to do it right. Besides being a lot faster, this allows many more tasks to be accelerated where before it didn’t make sense to waste time on memory copies and coherency. And with HSA, they can use the exact same physical memory, so performance is leaps and bounds better in many situations.

            • Deanjo
            • 6 years ago

            In some loads, yes, it could be beneficial. However, once that memory transfer is done, the difference in actual computing performance can be magnitudes in the dGPU’s favor. Also, the memory bus on a dGPU is typically wider and uses faster memory than the narrower, slower memory of an integrated solution. Keep in mind as well that when you’re dealing with data sets larger than the memory available, that advantage becomes null and void.

            • dragontamer5788
            • 6 years ago

            It’s the memory transfer that’s the killer, however. The x264 project, for instance, has cited memory transfer as the primary reason OpenCL acceleration doesn’t give any advantage right now:

            [url<]http://forum.doom9.org/showthread.php?t=154773[/url<]

            A number of open-source applications are trying to go OpenCL-accelerated, but it isn’t paying off. Even x264, which benefits *significantly* from parallel computation, is failing to extract meaningful performance from dGPUs.

            • Deanjo
            • 6 years ago

            Yes, in x264’s current implementation that would hamper performance; however, they’re hardly using any onboard dGPU memory for the data set. They’ve chosen a small-set copy-and-clear path. MainConcept’s encoder SDK, by contrast, uses much larger data sets, reducing that slow copy operation to just a few per encode, which greatly reduces encoding time. x264 has simply chosen a poor implementation.

            x264 is filling a bucket with a teaspoon; MainConcept’s approach uses gallon pails.

            • dragontamer5788
            • 6 years ago

            Yes. Part of the reason is that MainConcept’s is a professional encoder with full-time programmers who can dedicate their time to rewriting the entire encoder for CUDA / OpenCL / Intel Quick Sync.

            x264, on the other hand, is an open-source project that evolves more slowly: programmers contribute patches in their spare time and can only incrementally “grow” the project in short spurts.

            EDIT: removed stuff about Bolt, I forgot that Bolt also worked with dGPUs :-p

            My main point, though, is that certain projects will find it easier to extract performance incrementally on a CPU / iGPU system, where sharing data between the CPU and iGPU carries almost no penalty.

            • Deanjo
            • 6 years ago

            Right, I agree with everything you say here. However, the fact remains that the dGPU<–>CPU transfer isn’t the primary reason for x264’s slow performance; the implementation is.

            • xeridea
            • 6 years ago

            It’s far easier to work with hUMA than to manually manage all the coherency overhead, and it won’t force you to rewrite your entire codebase to work around speed issues; that’s the entire point of hUMA. Video transcoding is just one good example. There are other workloads that plainly don’t make sense with current OpenCL but could benefit from hUMA.

            • Deanjo
            • 6 years ago

            [quote<]Video transcoding is just one good example[/quote<]

            No, it isn’t, as the hit comes primarily from the implementation, not from a limitation of capability.

            • xeridea
            • 6 years ago

            Many workloads require repeated sharing of memory between the system and the GPU. It’s not a memory bandwidth issue; it’s the overhead of all the extra logic needed to keep coherency, the data copy time, and the delay waiting for things to become usable. If your workload just isn’t feasible on the current model, you’ll see zero benefit from a dGPU. hUMA is far easier to program for because you basically never have to worry about memory coherency (it’s always coherent by design), and it’s faster for the same reason.

            One example is video transcoding, which currently gets very limited help from GPUs because of the nature of the task. If CPU and GPU already shared the same memory, it would be far easier and more beneficial to use the GPU where it currently doesn’t make sense; as it stands, the CPU can spend more time making the work GPU-friendly than it gains back.

            A good APU would be able to accelerate a great many tasks that are otherwise impossible.

            • Spunjji
            • 6 years ago

            AMD’s published figures actually show the opposite of that. Time will tell how tweaked those numbers are, but based on the disappointment that GPU compute has been thus far I’m not inclined to disbelieve them.

            • dragontamer5788
            • 6 years ago

            I see the confusion: I’m not claiming that people will run software like Sony Vegas on a Kabini. You’re right, that would be silly. Sony Vegas is already well served by the current CPU / dGPU architecture anyway.

            The real question is if typical applications, like WinZip, will continue to push HSA / OpenCL. For these more “typical” computing tasks, I expect the main holdup for OpenCL is that latency problem.

            Big tasks, like BTC mining or Sony Vegas, can be served by a dGPU right now. But smaller tasks, like file compression, don’t benefit much from the current CPU / dGPU architecture, because they’re transfer-bound: in the time it takes the CPU to move all the data to the dGPU, it would already have finished the compression itself.

            With Kabini, the CPU-iGPU bridge has been minimized. Code will have to be rewritten to take advantage of AMD’s CPU-iGPU connection: minimize copies and lean on shared memory pointers. Even WinZip’s current OpenCL implementation probably doesn’t take proper advantage of Kabini’s architecture yet.

            Which is why I call AMD’s HSA strategy risky. They have to rely on programmers to change their habits, which is a lot harder than convincing consumers to simply buy the “next chip”.

            • MadManOriginal
            • 6 years ago

            AMD just has to wait for Intel to push it. It’s unfortunate for AMD but true that it takes Intel to push something in order for it to be more widely implemented.

            • srg86
            • 6 years ago

            That’s the problem: no matter how good this idea is, AMD doesn’t have the market position to push it through. AMD64 succeeded because the products were better than Intel’s; that’s no longer the case, and AMD’s market share is relatively tiny.

            As a developer, unless I know a product will use one of these chips, I’m not really interested, no matter how clever it is; most PCs have Intel CPUs.

            • dragontamer5788
            • 6 years ago

            AMD has 100% of the Xbox One and PS4 market, however, and that’s its wildcard. It seems to be emphasizing games and HSA right now.

            HSAIL is going to be a portable assembly language, and it looks like it will be supported across Volcanic Islands (aka the Radeon 9000 series), AMD APUs, the Xbox One, and the PS4. (The Wii U, despite using an AMD GPU, probably won’t support HSA or HSAIL.) Server-wise, I bet they’re working on HSA server accelerators based on Volcanic Islands.

            OpenCL is the next wildcard. Both Intel and AMD have shown strong support for OpenCL, and frankly, OpenCL seems easier to write than juggling AVX / SSE / MMX extensions at the assembly level; it’s also portable across Intel and AMD APUs and Nvidia and AMD GPUs. Even without writing specifically for HSA, AMD’s OpenCL support appears to scale better than Intel’s implementation.

            And no matter what, OpenCL is the choice for reducing power in your applications. Running code across multiple cores at low clock speeds and voltages is more power-efficient than running it on a turbo-clocked, high-voltage CPU (Frequency * Voltage^2 is a killer for battery life). So a laptop-oriented design might prefer to put its computation-heavy parts in OpenCL, regardless of whether it runs on an Intel, AMD, or Intel+Nvidia system.

            • Theolendras
            • 6 years ago

            Not only because AMD had the superior processor at the time, but also because Intel’s 64-bit path, Itanium, was undesirable for developers.

          • NIKOLAS
          • 6 years ago

          HSA is nonsense.

          It is the AMD fanboy’s Great White Hope and it will amount to nothing.

            • xeridea
            • 6 years ago

            I’d like to see you write efficient OpenCL code for a task that requires a lot of back-and-forth with the CPU.

            • Theolendras
            • 6 years ago

            It might come to that, but the concept is great, and I think it will be successful. Just don’t expect it to be a silver bullet or a route to market (or even technical) domination for AMD. Intel could be just as good at GPGPU stuff if the market were gearing toward it.

            • Spunjji
            • 6 years ago

            Thank you for not participating in the discussion in any meaningful manner whatsoever.
