AMD unwraps its seventh-generation desktop APUs and AM4 platform

AMD’s Zen CPUs, code-named Summit Ridge, are set to arrive early next year, but there’s a lot of groundwork to be done before those next-generation processors make their entrance. Way back in January, AMD announced it would be unifying its desktop platforms for its CPUs and APUs with a new socket—AM4. Now, motherboards and chipsets with that new socket are on the move toward retail. Yesterday, the company announced that desktop AM4 motherboards and its seventh-generation desktop APUs, code-named Bristol Ridge, are now available for its OEM partners.

For some background, today’s AMD APUs require Socket FM2+ motherboards that are incompatible with their integrated-GPU-free CPU cousins, which still drop into Socket AM3+ motherboards. Each of those sockets has a range of associated chipsets, too, serving builders of all budgets. AM4 wipes away all of that cross-platform incompatibility with a single socket and motherboard lineup. Compatible APUs and CPUs will offer modern features like PCIe 3.0, DDR4 memory controllers, and support for NVMe storage devices on top of the new features that AM4-platform chipsets will include.

The first family of AM4 APUs, code-named Bristol Ridge—or “seventh-generation APUs,” as you’ll see them referred to in marketing materials—has been available in notebook PCs for a couple of months in TDPs ranging up to 45W. As a brief refresher, Bristol Ridge parts are built around AMD’s Excavator CPU core and GCN 3.0 graphics technology, but improved process technology and lots of optimization work have allowed those chips to deliver modest improvements in performance over Carrizo parts on the same 28-nm process. More importantly, Bristol Ridge parts give us our first glimpse at how AM4 motherboards might look for Zen CPUs when they arrive next year.

The lineup
Back when Carrizo APUs arrived for notebooks, AMD brought that processor family’s Excavator CPU core to the desktop with just one part: the Athlon X4 845, a 65W quad-core CPU without integrated graphics. That Athlon ran at 3.5GHz base and 3.8GHz boost clocks. In contrast, Bristol Ridge is a family of desktop APUs with Radeon graphics onboard, although one Bristol Ridge CPU will also be available in the form of the Athlon X4 950.

The extensive tweaking and tuning in Bristol Ridge is perhaps most evident in the specs of the top-end A12-9800 APU. Despite its 65W TDP, this chip’s boost clock is only 100MHz short of the 4.2GHz the 95W A10-7890K can muster. The graphics processor on this chip also runs its 512 GCN stream processors considerably faster than the ones on the A10-7890K at their quickest: 1108MHz on the Bristol part versus 866MHz on the Godavari chip.

Combined with the architectural improvements of the Excavator CPU cores over the Steamroller cores in the A10-7890K, the A12-9800 might be as fast as or faster than the Godavari part in practice (though it doesn’t have that K chip’s unlocked multiplier for easy overclocking).

AMD wanted to make it clear which of its seventh-generation APUs are lower-TDP parts, as well, so it’s bringing back the “E” suffix we’ve occasionally seen on other parts (the FX-8370E being the most prominent example). The A12-9800E, A10-9700E, and A6-9500E all slip into a 35W thermal envelope, at the expense of a couple hundred megahertz of clock speed for both their CPU cores and graphics processors.

AMD’s internal benchmarks suggest that Bristol Ridge desktop parts are nipping at the heels of Intel’s Skylake processors in everyday desktop tasks, although the usual truckload of salt should apply to all of these claims. For example, the company says that in PCMark 8 Home Accelerated, an A12-9800 APU delivers performance equivalent to a Core i5-6500. It’s worth noting, however, that the Accelerated version of PCMark 8 uses OpenCL for some of its tasks—a workload mix that might favor the AMD chip’s IGP.

That powerful Radeon IGP does seem to give the A12-9800 a leg up in the 3DMark 11 Performance test. That benchmark runs at an undemanding 1280×720 resolution, and it’s been superseded by the 3DMark Fire Strike and 3DMark Sky Diver benchmarks. Still, AMD says the A12-9800 doubles the performance of the Intel HD Graphics 530 IGP in the Core i5-6500 at 65W. The company also claims the 35W A12-9800E can deliver 88% better results in 3DMark 11 Performance versus the 35W Core i5-6500T.

Those results might make that top-end Bristol Ridge APU well-suited for basic gaming, but we’d be interested to see more user-experience-centric numbers like frame times (or even just average FPS) for these IGPs. 3DMark’s index scores might be handy for broad comparisons, but they don’t tell us anything about the actual gaming experience one might have with an AMD IGP.

For example, a $130-ish Radeon RX 460 4GB card like the one we recently reviewed turns in a 3DMark 11 Performance result of 9133 on one of our X99 testbeds, while the A12-9800’s index score is actually just 3581 on its AM4 platform, according to AMD’s materials. The RX 460 is what we would call a “solid” basic gaming experience, so we wouldn’t expect to be blown back by the performance of the IGP on even this highest-end Bristol Ridge APU.

AMD also goes out of its way to highlight the performance-per-watt of these chips relative to the Intel competition, and it’s here we need to raise a red flag. The company performed its efficiency calculations by subbing in a 91W Core i5-6600K for the 65W i5-6500 in comparison to the A12-9800, and that same Core i5-6500 for the 35W Core i5-6500T in comparison to its 35W A12-9800E. We won’t lend these results any credibility except to say that when the goalposts are this mobile, you can make any claims you want. We have no idea why AMD made this switcheroo—it’s about as subtle as dropping a hot coal in someone’s pocket. 
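The arithmetic behind that red flag is worth spelling out. The sketch below uses a made-up benchmark score (only the TDPs are the chips' rated figures) purely to show how much the choice of comparison SKU moves a naive score-per-TDP metric:

```python
# Hypothetical illustration of why the comparison SKU matters in a
# TDP-based performance-per-watt claim. The benchmark score below is
# a made-up placeholder; only the TDP figures come from spec sheets.

def perf_per_watt(score, tdp_watts):
    """Naive efficiency metric: benchmark score divided by rated TDP."""
    return score / tdp_watts

score = 5000  # assume both Intel chips post roughly the same score

# Against the 65W Core i5-6500, a 65W A12-9800 with an equal score
# would show identical "efficiency"...
eff_i5_6500 = perf_per_watt(score, 65)

# ...but sub in the 91W Core i5-6600K and the Intel side's
# perf-per-watt falls, with no change in actual performance.
eff_i5_6600k = perf_per_watt(score, 91)

# The SKU swap alone widens the apparent efficiency gap by 91/65 = 1.4x.
inflation = eff_i5_6500 / eff_i5_6600k
```

A real efficiency comparison would also need measured power draw over the benchmark run rather than rated TDPs, which is another reason these slides deserve skepticism.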

Like Intel’s recently-released Kaby Lake CPUs, Bristol Ridge offers hardware acceleration for the HEVC and VP9 video codecs. It’s worth noting that these APUs can perform hardware encoding and decoding of 4K content only for HEVC, however—VP9 support is limited to 1080p content. Kaby Lake, on the other hand, can decode 4K VP9 content in hardware. Given that the next-generation codec wars still appear to be in full swing, Bristol’s limited VP9 support may not be that important in the grand scheme of things.

 

Chipsets fill another hole in the AM4 platform puzzle

Along with its full lineup of Bristol Ridge desktop APUs, AMD is also giving us a first glance at the chipsets that will power those systems. The company isn’t introducing its enthusiast chipsets now—those will presumably be held back until the Zen launch—but it is filling out the rest of the range for us. 

The highest-end chipset AMD will be introducing in Bristol Ridge OEM systems is B350, which will take the place of the 970 chipset for AM3+ systems and the A78 chipset for FM2+ systems in the platform lineup. In turn, the A320 platform will supersede the 760G and A68H chipsets that supported low-end AM3+ and Socket FM2 systems. Straightforward enough.

One potentially interesting thing about the AM4 platform is that AMD will also be introducing small-form-factor versions of its chipsets for (we assume) equally small-form-factor PCs. The X300, B300, and A300 chipsets are all small-footprint versions of the platform for these mini-PCs. Though AMD didn’t say as much, we’re guessing that “X” chipsets will be its highest-end platform for both Bristol Ridge and Zen CPUs alike.

AMD says it pulled most of the northbridge and southbridge features from its motherboards onto the CPU with the move to AM4, a change that might explain the rather barren-looking PCBs on the prototype motherboards we’ve seen running Zen CPUs thus far. Bristol Ridge APUs will offer eight lanes of PCIe Gen3 from the CPU for graphics cards, two channels of DDR4 memory, four USB 3.0 ports, two SATA ports, and two general-purpose PCIe Gen3 lanes from the SoC that can be used for connecting peripheral devices from the motherboard or to power an NVMe M.2 slot. It seems likely that Zen CPUs will have much more to offer on these points.

In turn, the chipsets (or platform controller hubs, as Intel calls them) don’t have a lot left to do. The B350 chipset provides two USB 3.1 Gen2 ports, two USB 3.0 ports, and six USB 2.0 ports. It also offers two SATA ports, one SATA Express connector, and six lanes of general-purpose PCIe Gen2 connectivity. The B350 controller can also perform RAID 0, 1, or 10 on connected storage devices.

The lower-end A320 chipset drops one of the USB 3.1 Gen2 ports and two lanes of PCIe Gen2—and that’s about it. AMD didn’t reveal what its 300-series small-form-factor chipsets will look like yet, but the A320 does drop RAID 10 support.

What’s next

As we noted at the beginning of this piece, AMD is exclusively offering seventh-gen APUs and motherboards to its OEM partners right now. HP and Lenovo are just two of those partners that have designs lined up with these new desktop APUs inside, and more are apparently in the works. AMD says “we’ll announce channel parts later,” so we’d expect retail boxed processors and motherboards from the usual suspects to be announced separately later this year.

What’s perhaps most exciting about today’s announcement is that it gives us a good picture of what the ecosystem around Zen CPUs might look like. Builders who choose Zen CPUs and Bristol Ridge APUs will be able to avoid the labyrinthine range of coexisting-yet-incompatible motherboard and chipset offerings available for today’s Socket AM3+ CPUs and Socket FM2+ APUs. Instead, they can just purchase the motherboard with the feature set and price tag that fits best with their needs, much as Intel builders already can with the 100-series chipset lineup and Socket 1151 CPUs. That’s a major step forward in builder-friendliness. Now, AMD just needs to deliver on Zen’s performance claims, and life may be rosier for the red team. No pressure, folks.

Comments closed
    • ronch
    • 3 years ago

    Zen is gonna be the best processor ever made. It will beat every other processor ever built. It’s gonna be a beautiful thing. Let’s make PC computing great again.

    • tipoo
    • 3 years ago

    “AMD also goes out of its way to highlight the performance-per-watt of these chips relative to the Intel competition, and it’s here we need to raise a red flag. The company performed its efficiency calculations by subbing in a 91W Core i5-6600K for the 65W i5-6500 in comparison to the A12-9800, and that same Core i5-6500 for the 35W Core i5-6500T in comparison to its 35W A12-9800E. We won’t lend these results any credibility except to say that when the goalposts are this mobile, you can make any claims you want. We have no idea why AMD made this switcheroo—it’s about as subtle as dropping a hot coal in someone’s pocket. ”

    Hoo boy. Thanks for warning us off these chips, AMD.

    • TheMonkeyKing
    • 3 years ago

    Seventh generation already? Funny, every time I see “APU” I still envision:

    [url<]http://s2.quickmeme.com/img/cb/cbbd36920f4e004e8fdd56c429ea95b595fb80a94863c66b0a9346b538f9df4f.jpg[/url<]

    • ronch
    • 3 years ago

    The only way I’m getting these newfangled chipperies is if I’m looking for a low-power box to put together. Since no one expects a low-power box to be high performance, and most of these chips are just ‘good enough’ anyway, I might as well get a 35w SKU.

      • Anonymous Coward
      • 3 years ago

      I want one to keep my shiny new “Netburst” Xeons and (dead?) Prescott P4 company. I also have a classic Atom netbook gathering dust. Sadly not a single AMD processor in the house these days.

    • ronch
    • 3 years ago

    AMD, with their [b<][u<]7th[/b<][/u<]-Generation series of [u<]Accelerated[/u<] Processing Units, is clearly one generation ahead of Intel, who's still lagging behind with their 6th-generation Core parts. AMD clearly has far superior graphics technology too. Poor Intel.

      • chuckula
      • 3 years ago

      Intel is so far behind that even racing stripes can’t help at this point.
      I’m not sure that even a spoiler would be enough to catch up.

        • ronch
        • 3 years ago

        Well, red racing stripes are better than blue racing stripes. Too bad AMD already established their use of red racing stripes. Even a dozen blue stripes aren’t enough against a couple of reds.

          • chµck
          • 3 years ago

          If you’re going fast enough, red stripes will shift into the blue spectrum

            • ronch
            • 3 years ago

            If we’re talking about the visible light spectrum here, does it explain why AMD is lagging behind while Nvidia is pretty much just behind Intel in terms of technical prowess?

            • synthtel2
            • 3 years ago

            AMD’s better days were when they were green, too.

            That, and you can do finer lithography with shorter wavelengths, doubly explaining Intel’s lead?

            …. But now it’s all gone recursive. Going faster –> smaller transistors –> going faster –> smaller transistors –> AAAAA, now Intel will never have any competition! 😛

            • ronch
            • 3 years ago

            Intel is blue because it’s lonely at the top.

            Nvidia is green because they only care about money.

            AMD is red because they’re always in the red.

    • cmrcmk
    • 3 years ago

    So the chipsets support RAID 10 but only have 2 SATA ports. I guess that means they can co-opt the 2 SATA ports on the APU? Or does that mean they support SATA expanders?

    Also, at what point will chipsets become nothing more than PCIe-attached feature hubs? If that were the case, you could mix and match any chipset with any socket (even obscene configs like daisy-chaining chipsets or an AMD chipset hanging off an Intel chipset) and third-party chipsets would be easy to accommodate (e.g. ASRock could build a storage-focused “chipset” that includes 12 SATA ports or Realtek could offer a rest-of-the-system chipset with a NIC and sound codec integrated). With APUs becoming more SoC-like every year, this might be inevitable.

      • dragontamer5788
      • 3 years ago

      [quote<]So the chipsets support RAID 10 but only have 2 SATA ports[/quote<] Four SATA ports. It’s 2x SATA and 1x SATA Express. Each SATA Express port is two backwards-compatible SATA ports.

    • ronch
    • 3 years ago

    I hope mobo vendors stick some sticky notes on their boards to indicate which ports feed directly off the CPU and which take the long way home.

      • Anonymous Coward
      • 3 years ago

      Perhaps you can settle such an important performance matter with a stopwatch.

        • ronch
        • 3 years ago

        No, probably not. But if you have an SSD as your main drive, wouldn’t you want to plug it into a SATA port that goes directly into the CPU?

    • DPete27
    • 3 years ago

    No Optane support eh? I assume it’s because of scheduling. [i<]Maybe[/i<] they'll add Optane support on the enthusiast AM4 chipset?

    • The Egg
    • 3 years ago

    So AMD makes a good decision and unifies their sockets…..only to turn around and fragment the chipset into 4 skus. *rolls eyes*

    I can’t imagine that cutting down a chipset saves any meaningful amount of money. In fact, I’d think the logistics of maintaining 4 skus more than offsets any cost savings.

      • ET3D
      • 3 years ago

      SKU’s are the best way to make money, and in particular here when there’s no huge difference between them. OEM’s can offer various levels of pricing and features, and those people who don’t mind paying tons of money for a minor increase in performance or features (enthusiasts) can spend that money happily. OEM’s are happy, AMD gets lots of money for a chipset that didn’t cost a lot to produce, everything’s great.

        • The Egg
        • 3 years ago

        There are costs involved in producing 4 different chips. I’ve gotta wonder if they’re not all the same chip with 4 different modes (determined by which pins are connected, etc).

          • ET3D
          • 3 years ago

          There are costs in producing different chips, but there are also costs in designing a single chip that does everything and disabling parts of it. In any case for a chip of this low complexity I think that the costs are rather low.

          I do think there’s a chance that the A320 and B350 may be the same chip. The differences really are minor (one USB 3.1 G2 and two PCIe lanes). However I’d imagine that others are different.

          • Anonymous Coward
          • 3 years ago

          I think you’ve got them figured out. 🙂

      • Mr Bill
      • 3 years ago

      Not to save money but to segment the market.

        • chrcoluk
        • 3 years ago

        yeah they have looked at intel and nvidia, realised how both of them maximise revenue and have joined the SKU game.

        Because they will want to maximise the price enthusiasts pay whilst at the same time not scare away budget users, the answer is multiple SKU’s.

        Also that the low end boards, can save cost from vendors like asus with less power phases etc. knowing the user wont be overclocking.

        Its good and bad, it can limit certian features to the highest price segment only but the policy also lowers the entry level cost.

    • ronch
    • 3 years ago

    Guess AMD has realized that most people don’t need a bajillion SATA or USB ports. It all feels a bit spartan but I reckon for practical purposes it’s all good. Then again, AFAICT, we haven’t seen their top chipset yet. Could it have 30 USB ports and 10 SATA ports?

    • Unknown-Error
    • 3 years ago

    Is TR planning a review on these new chips?

      • ImSpartacus
      • 3 years ago

      They are probably notebook bound, so it’s tough to review.

        • chuckula
        • 3 years ago

        No, this is the announcement of desktop parts and you can tell by the power consumption numbers and the socketed motherboards that these aren’t notebook parts.

        The notebook versions of Bristol Ridge were theoretically launched in June.

      • ET3D
      • 3 years ago

      Eventually, I think yes. Since publications mostly count on OEM’s sending them hardware to review, and since full desktop systems rarely get sent to reviewer, I think it’s a good bet that we’d need to wait for the DIY release. Presumably (and hopefully) when AM4 motherboards are released they’d be reviewed alongside Bristol Ridge chips sent by AMD.

    • rechicero
    • 3 years ago

    The perf per watt graphics are just shameful. How anybody can think such a blatant manipulation was a good idea? Great way of tarnishing what it seems like an interesting product.

      • dragosmp
      • 3 years ago

      AMD can’t help itself to seem manipulative. My advice: if you can’t help to try to tweak results, be more subtle. I hope whoever approves this sort of shenanigans at AMD gets a good spanking

        • ET3D
        • 3 years ago

        It’s aimed at people who don’t understand such things, like investors and AMD fanboys.

      • Andrew Lauritzen
      • 3 years ago

      I don’t even understand what they think they are “measuring” there… none of the math even makes any sense. And of course picking SKUs with different TDPs and performance makes it all pretty meaningless anyways. A phone would wreck all of these parts in “efficiency” by that metric.

      Also… are they seriously just doing “efficiency” math based on the stated TDP values? They’re at least measuring power draw over the length of the run, right? And using appropriate workloads that saturate the relevant units… right? Oh boy…

      In all seriousness let’s see a 45W one of these up against a Skull Canyon. That’s probably a roughly reasonable comparison, although this notion of “TDP”s is quite fuzzy as people count things differently. Still, makes more sense than any of the comparisons here really.

    • crabjokeman
    • 3 years ago

    So A320, B350 and …. C380?
    WTF??? AMD should have fired more of its marketing department, especially with those hilarious “performance comparison” slides.

    Note: I still think Zen will be a well-engineered and competitive product, no matter how much woeful marketing surrounds it.

      • Zizy
      • 3 years ago

      X something, as can be seen on SFF. I imagine 390. Higher is better, doh.
      No idea why they don’t align all those numbers and have semi-consistent branding of CPUs, GPUs and motherboards, but well, this is AMD, the only people less capable than their managers is their marketing department.

      • ronch
      • 3 years ago

      I dunno about that last paragraph. Zen can be 10x faster than the competition but those AMD marketing eggheads will still probably blow it.

      • Prion
      • 3 years ago

      I was hoping for A320, B350, and C63 AMG.

        • ronch
        • 3 years ago

        You mean C63 AMD.

    • Bladespi1975
    • 3 years ago

    Very informative article. Definitely going to get one of these chips when they sale at local retailer. Something to experiment with till Zen drops. I do have one slight objection. The author compared an Rx 460 which alone is capable of 2.2 tflops with the IGP portion of the flagship(at this early point) apu. Which is a tough comparison altogether. This chip probably will give you something like 1.2 tflops I’m thinking. Which a better comparison might be made with the Xbox one(not s version). Which is widely reported to be capable of 1.4 tflops(cpu-112 GFLOPs + GPU-1.312). People forget Xbox one has an Apu also which runs most 1080p gaming at 30fps and a few others at 60fps. This new AMD apu might break even with an Xbox one since it benefits from ddr4 ram whereas Xbox one still uses ddr3 ram

      • synthtel2
      • 3 years ago

      Dual-channel DDR4-2400 only gets to 38.4 GB/s, while the XB1’s main memory alone is good for 68.3 GB/s, and its ESRAM can boost the effective numbers a fair bit above that.
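Those peak figures fall out of the standard theoretical-bandwidth formula (channels × bus width in bytes × transfer rate); a quick sketch, using the commonly cited memory configurations for each platform:

```python
def peak_bandwidth_gbs(channels, bus_width_bits, transfers_per_sec):
    """Theoretical peak DRAM bandwidth in GB/s:
    channels x bytes per transfer x transfers per second."""
    return channels * (bus_width_bits / 8) * transfers_per_sec / 1e9

# Dual-channel DDR4-2400: two 64-bit channels at 2400 MT/s -> 38.4 GB/s
am4_ddr4 = peak_bandwidth_gbs(2, 64, 2400e6)

# Xbox One main memory: a 256-bit DDR3-2133 interface -> ~68.3 GB/s
# (and that's before counting the console's ESRAM)
xb1_ddr3 = peak_bandwidth_gbs(1, 256, 2133e6)
```

Note these are theoretical peaks; sustained bandwidth in practice is lower, and on both platforms the CPU and GPU share whatever is available.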

        • Voldenuit
        • 3 years ago

        Well, you can’t go by raw numbers alone. The Xbone cpu is made up of two jaguar SOCs, each with 2 channels of DDR3 RAM, for a total of 4 channels. If a CPU core on one side wanted access to RAM that’s held in the DIMM controlled by the other SOC, I’m not sure if it can directly access it, or has to ask the other SOC to transfer it over the local bus.

        Anyone know? Either way, it would face a larger latency penalty even if it could directly access the address being controlled by the other memory controller. And let’s not forget that it has to share bandwidth with graphics, which means that in practice (gaming applications), the CPU(s) are getting much less bandwidth than our typical desktop CPUs will see, even with slower RAM.

          • Rza79
          • 3 years ago

          [quote<]The Xbone cpu is made up of two jaguar SOCs, each with 2 channels of DDR3 RAM, for a total of 4 channels. If a CPU core on one side wanted access to RAM that's held in the DIMM controlled by the other SOC, I'm not sure if it can directly access it, or has to ask the other SOC to transfer it over the local bus. [/quote<] I don't think that's true. It's the first time I hear of it and it doesn't make sense at all.

            • maxxcool
            • 3 years ago

            Here you go, [url<]http://www.extremetech.com/extreme/171375-reverse-engineered-ps4-apu-reveals-the-consoles-real-cpu-and-gpu-specs[/url<]

            • Rza79
            • 3 years ago

            Doesn’t say anything about what Voldenuit was speaking about.

          • ImSpartacus
          • 3 years ago

          Do you have a source for that?

          I wouldn’t be entirely surprised that it’s true, but this isn’t something that I’ve heard before.

            • maxxcool
            • 3 years ago

            and one for you too! 🙂 Here you go, [url<]http://www.extremetech.com/extreme/171375-reverse-engineered-ps4-apu-reveals-the-consoles-real-cpu-and-gpu-specs[/url<]

            • Voldenuit
            • 3 years ago

            No, afaik it’s conjecture based on jaguar. Expanding a single memory controller to 4 64-bit channels would presumably lead to some very dense trace packing, so some speculate that it’s 2 separate memory controllers with an interconnect bus to handle cross-core communication. Again, conjecture.

          • synthtel2
          • 3 years ago

          That would pretty well negate the benefits of being quad-channel for some pretty miniscule savings, so I doubt it. (Also because we know the GPU’s on paper and in reality memory bandwidth figures match up, so presumably it does on the CPU side too.) Regardless of the details there, both a desktop APU and the XB1 have to share memory bandwidth between CPU and GPU. Were the gap between the numbers 20 or 30%, I’d be interested in lots of little architectural things, but you have to go all the way to DDR4-4200 to match the XB1’s main memory alone. When you throw in the ESRAM’s effects, it’s incredibly lopsided.

      • ET3D
      • 3 years ago

      I think that the best comparison would be last gen APU’s. These can play many slightly older games decently at 1080p, although not at the highest quality.

      AMD said in the slides that Bristol Ridge at 65W is as powerful as last gen 95W APU’s, so even though raw GPU performance may be higher (with help of faster RAM), I wouldn’t expect a huge jump. So for now I’m assuming A10-7890 level of performance, maybe 10% more.

      • DPete27
      • 3 years ago

      XB1 renders many/most games at 900p and upscales to 1080p last I heard. Also game code is SIGNIFICANTLY more optimized “close to the metal” for XB1/PS4 than for PC since console hardware is a fixed entity, whereas the variety of PC hardware combinations is staggering.

      • tipoo
      • 3 years ago

      “People forget Xbox one has an Apu also which runs most 1080p gaming at 30fps and a few others at 60fps. ”

      900p 30 for the XBO and 1080p 30 for the PS4 has been a majority line in articles comparing their performance in different games. Simpler games hit 1080p, while even native Halo 5 had to use a dynamic resolution that even went under 900p at times and only hit 1080p when watching a wall.

        • Bladespi1975
        • 3 years ago

        Hey I was mostly taking some objections with why the author chose to compare an Rx 460 with the gpu side of this new apu in benchmark. To me it really didnt make much sense. But back to my lesser point the xbone can run some 1080p games at 30fps and a fewer still at 60fps native resolution. Yes 900p upscaled to 1080p seems to be the vast majority to your point. Anyhow all’s I’m saying is the xbone apu is like what five years old but I’m thinking this current gen of desktop apu may be more similar in terms of performance potential. It even uses a newer GCN than the xbone if I read correctly which suggests you may be able to do more than just play esports titles with these things.

    • flip-mode
    • 3 years ago

    AMD’s benchmarketing is so sleazy it belongs in the Craigslist “casual encounters w4m” section.

      • Voldenuit
      • 3 years ago

      [quote<]AMD's benchmarketing is so sleazy it belongs in the Craigslist "casual encounters w4m" section[/quote<] AMD marketing regrets the placement of recent promotional material, and insists that 'w4m' used in this context originally referred to "wafers for market". Despite the graffiti on the bathroom stalls, which AMD says was a dirty intel trick.

    • Alexko
    • 3 years ago

    So do you intend to review Bristol Ridge? If so, will you be able to get on of those OEM SKUs, or will it have to wait until retail versions are available?

    • Redocbew
    • 3 years ago

    I’m assuming AMDs big message of having One Socket to Rule Them All doesn’t preclude another new socket a year or so later? A single socket is great and all, but it’s not going to stay that way forever.

      • chuckula
      • 3 years ago

      [quote<]A single socket is great and all, but it's not going to stay that way forever.[/quote<] Why not. Just look at how future proof AM3+ has turned out to be! I mean, they debuted in [url=https://techreport.com/review/21019/bulldozer-mobos-from-asus-and-msi-sabertooth-990fx-990fxa-gd80<]May of 2011, months before you could buy Bulldozer[/url<] and here it is in late 2016 and there hasn't been a single improvement to the platform whatsoever! 5.5 years is what I call longevity alright!

        • Redocbew
        • 3 years ago

        Did you find a way to downvote yourself? That one beat the notification email to my inbox. 🙂

          • chuckula
          • 3 years ago

          Don’t worry. There’s a whole squad of shills that said things 10x worse about Kaby Lake who are more than willing to do their part to pretend that this is a real upgrade over Kaveri.

            • sweatshopking
            • 3 years ago

            hey. kaby is a disappointment. I’m all for either company, but I, and many people, were really expecting more with kaby than basically encoding.

            • derFunkenstein
            • 3 years ago

            I think Chuckula’s problem is that he is, as you said, “for” a company. I think we should stop rooting for/against companies because they’re just corporate entities that want our money.

            • cegras
            • 3 years ago

            [url<]https://techreport.com/discussion/30587/intel-kaby-lake-cpus-revealed?post=998808[/url<] Chuckula is upset. He doesn't need to be so upset. "You called my favourite thing bad names!"

            • raddude9
            • 3 years ago

            LOL, I missed that one, what a hissy fit.

            What an intel fanboi/fangrl, and it was a blatant strawman argument BTW.

            • sweatshopking
            • 3 years ago

            fully with you 100%. none of them are our friends. I just meant I didn’t care who makes which chip, and just care about perf/$

            • crabjokeman
            • 3 years ago

            Quit while you’re behind.

        • albundy
        • 3 years ago

        am3+ started from usb 2.0 and SATA 2, and upgraded to USB 3, and SATA 3. i would know, lol. i’d say those are half decent improvements for a $30 used motherboard upgrade a few years ago. i contemplated getting addon cards instead, but they would have cost more and hindered performance.

          • ronch
          • 3 years ago

          AM3+ started with USB 3.0 and SATA 3.0 out the gate, IIRC. My first AM3+ board was an MSI 990FXA-GD65 which I bought in 2012 and I reckon it was old stock from 2011, the year FX came out to plug into those AM3+ boards.

        • ronch
        • 3 years ago

        AM3+ is so future proof you could’ve bought an FX-8150 in 2011 and STILL be able to upgrade your CPU today!! Amazing!!

        /sorry can’t help it

        • Anonymous Coward
        • 3 years ago

        Actually a socket lasting so long is a good thing, and it might also be common in the future. Of course so will soldered-on CPUs.

          • ronch
          • 3 years ago

          A socket that lasts 5 years is a good thing.

          Too bad the fastest ‘normal’ TDP CPU you could plug in and probably not make your ‘normal’ board burst in flames to step up came out back in um.. 2012. FX-8370? Oh yeah, right. (/s).

      • Mr Bill
      • 3 years ago

      Because these are APU sockets, not Zen cpu sockets?

      • ET3D
      • 3 years ago

      I assume that this socket is planned to last for a while. I assume that AMD included enough spare pins to allow the chip to offer extra functionality, and that signaling is planned to support future developments in hardware.

      At least, I hope so.

    • derFunkenstein
    • 3 years ago

    [quote<]The B350 chipset provides two USB 3.1 Gen2 ports, two USB 3.0 ports, and six USB 3.0 ports[/quote<] Looks like that second 3.0 supposed to be 2.0 from the graphic (found on page 2) Just a pair of SATA connectors seems kind of bare, honestly. Not to mention how the CPU is feeding onboard ethernet and audio controllers - are those going to steal from the 8 lanes previously earmarked for graphics? I know these are entry-level APUs built with old CPU tech specifically for this new platform (and therefore they're basically placeholders), but they seem kind of starved. Especially that graphics-free part. That thing is gonna be a dog since you *have* to pair it with a graphics card.

      • faramir
      • 3 years ago

      The article says there are two extra PCIe 3.0 lanes for the on-board devices:

      “… and two general-purpose PCIe Gen3 lanes from the SoC that can be used for connecting peripheral devices from the motherboard or to power an NVMe M.2 slot …”

        • derFunkenstein
        • 3 years ago

        So I guess everything will have to get jammed into that same two lanes. Looking forward to returning to the days where audio crackles because there’s network activity and disk activity.

          • derFunkenstein
          • 3 years ago

          No, I missed that the chipset has a few lanes of PCIe 2.0. So depending on the path from the APU to the chipset, it may be fine.

      • ET3D
      • 3 years ago

      There are 2 SATA ports on the Bristol Ridge processor and 2 + 1 SATA Express on the chipsets. I assume that the enthusiast chipset will include more. Still, 4+1+NVMe will probably be enough for normal use.

    • chuckula
    • 3 years ago

    [quote<]Bristol Ridge APUs will offer eight lanes of PCIe Gen3 from the CPU for graphics cards, two channels of DDR4 memory, four USB 3.0 ports, two SATA ports, and two general-purpose PCIe Gen3 lanes from the SoC that can be used for connecting peripheral devices from the motherboard or to power an NVMe M.2 slot.[/quote<] And we'll still hear complaints about how 28 lanes of PCIe + extra DMI lanes dedicated to high-speed I/O is "gimped". [quote<] It seems likely that Zen CPUs will have much more to offer on these points.[/quote<] Damn well better, although the sockets and motherboards are at best setup for 16 lanes from all the leaked information. Additionally, if these first-run motherboards only include the wiring for an 8x PCIe slot because CHEAP then no amount of magic in the Zen SoC is going to overcome the deficiencies in the motherboard wiring.

      • MOSFET
      • 3 years ago

      It was obvious someone was going to troll that point. “Oh sh!t, I bought my Zen motherboard in August and waited for the CPU to come out in January (or whenever), and now the first-gen board doesn’t have USB 3.2 or DisplayPort 1.5 or HDMI 2.1 or SATA-900, d@mnit.”

        • synthtel2
        • 3 years ago

        [quote<]It was obvious chuckula was going to troll that point.[/quote<] FTFY.
