Overclocking Intel’s Pentium G3258 ‘Anniversary Edition’ processor

Oh man. Oh man oh man. So, yes, the desktop processor market has been kind of a sleepy place of late. The story has been Intel’s consistent dominance, AMD’s repeated struggles, and not much in the way of performance progress. Worse still, prices have stagnated for way, way too long. There’s been precious little reason to consider an upgrade. Happily, Intel has decided to inject a little excitement into things by releasing a really cheap CPU that’s completely unlocked, the Pentium G3258. I’d say they’ve nailed it: excitement achieved.

This new Pentium is an unlocked dual-core CPU based on the latest 22-nm Haswell silicon. The list price is only 72 bucks, but Micro Center had them on sale for $60. In other words, you can get a processor that will quite possibly run at clock speeds north of 4GHz—with all the per-clock throughput of Intel’s very latest CPU core—for the price of a new Call of Shooty game. I ran out and picked one up as soon as they went on sale last week. Almost seems too good to be true. But is it? Let’s have a look at how this one performs.

A 20th anniversary gift

The Pentium G3258 is an Anniversary Edition, meant to “celebrate” 20 years of the Pentium brand. Because we live in the future, it delivers way more than 20 times the performance of the original Pentium 100. The G3258’s stock clock is 3.2GHz, or 32X that of the Pentium 100. And it has dual cores, so count it at 64X. Then multiply by some amount of increased per-clock instruction throughput and sprinkle in some gains from vector math and such. Don’t forget the vast increases in cache and memory performance, either. You’re surely at 128X the peak performance of a Pentium 100 at the end of the day, by my horrible, back-of-the-napkin estimation. Perhaps much more.
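If you want to check my napkin, the whole estimate fits in a few lines of Python. The clock speeds and core counts are real; the per-clock multiplier is pure guesswork on my part:

    # Back-of-the-napkin estimate: Pentium G3258 vs. the original Pentium 100.
    # Clock speeds and core counts are real; the throughput multiplier below
    # is a guess for illustration, not a measurement.
    clock_ratio = 3.2e9 / 100e6    # 3.2GHz vs. 100MHz -> 32X
    cores = 2                      # dual cores -> 64X
    per_clock_gain = 2.0           # assumed per-clock throughput gain
    estimate = clock_ratio * cores * per_clock_gain
    print(f"Rough peak speedup vs. Pentium 100: {estimate:.0f}X")  # ~128X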

I dunno what that means, really. As a writer, though, I’m obligated to throw some big numbers at you as part of any retrospective involving the magic of Moore’s Law.

Oddly enough, the Pentium G3258 is kind of a weakling at its stock speeds. Pentium is now a “value” brand, and Intel has hobbled its low-end processors in various ways in order to keep its higher-end CPUs looking attractive. Intel has disabled a bunch of features, including Hyper-Threading, VT-d, TSX, vPro, AES-NI, and TXT. The spec sheet is like an alphabet soup of “nope.” Also, this chip has a relatively skimpy 3MB L3 cache, and its supported memory speeds top out at 1333 MT/s.

Thing is, in my view, that list of gimped specs is also a cavalcade of “don’t care.” Take the cache size, for instance. The working data set for most desktop programs is surely way less than 3MB. Larger caches are mostly helpful for sharing data between multiple cores—and with only two cores, the G3258 has less need for cache in that role. At the same time, Intel hasn’t lobotomized the QuickSync video transcoding block in this little Pentium. Video encoding is one of those few common desktop tasks where four cores is a big win, but the presence of dedicated hardware eases that worry.

More importantly, where we’re going, stock speeds don’t matter, and an abundance of hertz can make up for a whole host of other missing features. Overclocking this thing is a simple matter of twiddling a few bits in a BIOS menu.

Punch it, Chewie.

I strapped the Pentium G3258 into my Haswell test rig, which includes an Asus Z97-A motherboard and a Thermaltake NiC C5 cooler. The cooler’s specs say it can dissipate up to 230W, so I figured it should have plenty of headroom for this CPU with a 53W stock TDP.

I took the same basic approach to overclocking the G3258 that I took with Devil’s Canyon, making tweaks to the CPU multiplier and voltage in the motherboard’s firmware and leaving most other settings at “Auto.” Asus’ firmware tends to ramp up some secondary voltages automagically to improve stability while you’re overclocking, and I let it do so.

This Pentium came out of the box running at 3.2GHz and 1.04V. I fired up Prime95 to use as a load test. The Asus AISuite utility reported CPU power draw under load at just 30.8W, way less than the CPU’s max rating. Core temperatures were steady at 29°C. So yeah, early indications were good.

After just a few attempts, this G3258 was up and running stable at 4.8GHz and 1.375V. I tried for more, of course. The G3258 booted into Windows at 4.9GHz, but the blue screen of death came to visit once I ran Prime95. I tried cranking up the voltage to 1.4V and then 1.425V, but the extra juice didn’t help. 4.8GHz looked to be the practical limit.

Which is, you know, really quite nice. Our Core i7-4790K topped out at 4.7GHz and required more voltage to get there. Heck, since the Pentium’s stock clock is just 3.2GHz, this amounts to a 50% overclock. That’s a magical number us old farts associate with the ur-overclocker, ye olde Celeron 300A.

At this speed, the G3258’s temperatures rose to around 64°C under load, and AISuite estimated CPU power draw at 64.5W—not that I entirely trust that number. I stuck the test system on a power meter, and the whole thing draws 119W at the wall socket with Prime95 cranking. That’s not bad at all. I was able to dial back the NiC C5’s fan speed to about 1100 RPM, where it emits just a whisper of noise, and still keep the CPU’s core temperatures in the mid-60s.

Just for good measure, I also kicked up the memory speed to 1600 MT/s, which presented no problem at all for the G3258.

The Athlon X4 750K edges in on the action

AMD may not be competing too vigorously against Intel’s high-end CPUs these days, but when you get into budget territory—and especially unlocked CPUs with lots of bang for the buck—then you’ve just stepped into AMD’s wheelhouse. AMD’s current answer to the Anniversary Edition Pentium is the Athlon X4 750K, an unlocked quad-core processor selling for $79.99 at Newegg. AMD was kind enough to provide us with one to test against the Pentium G3258.

The X4 750K is based on Trinity silicon, which means it’s a generation or two behind the latest Kaveri chips, depending on how you’re counting. Still, the 750K’s dual “Piledriver” modules are pretty well suited for this mission. With four integer cores, two FPUs, dual 2MB L2 caches, a 3.7GHz base clock, and a 4GHz Turbo peak, the Athlon X4 brings somewhat beefier hardware to this fight than the Pentium does. The Athlon X4 officially supports DDR3 memory speeds up to 1866 MT/s, too.

In keeping with its general M.O., AMD has left this chip’s special features largely intact, so instructions like AES-NI for accelerated encryption are fully available. That’s nice. The one exception is built-in graphics. The Radeon IGP has been disabled in the Athlon X4—not that we’re likely to miss it with a discrete graphics card installed.

Even without the Radeon IGP, AMD is clearly willing to give you more hardware for your money at this price. There’s a trade-off, though, as with most AMD CPU offerings these days. The X4 750K’s default TDP rating is 100W—nearly double that of the Pentium G3258. That’s the starting point, and the 750K will likely draw even more power once it’s overclocked.

To test the X4 750K’s potential, I dropped it into an Asus A88X-Pro motherboard and attached a massive Cooler Master tower cooler.

With a little tweaking, the 750K was soon running reliably at 4.5GHz and 1.425V.

I tried pushing as high as 1.525V in an attempt to get it stable at 4.6GHz, but that wasn’t meant to be. Consistently, one particular thread in our Prime95 test would exit with a computational error. One of the four cores evidently wasn’t happy at higher clocks. 4.5GHz isn’t bad, but it’s a little less than the 4.8GHz we reached with the Richland-derived A10-6800K. Ah, well.

I used the AMD Overdrive utility to monitor the CPU’s state while overclocking. This utility doesn’t report absolute CPU temperatures, but it said there was still 32°C worth of “margin,” or headroom, in the overclocked 750K when running Prime95. The entire Athlon X4 750K test rig pulled 163W at the wall socket during this same workload.

AMD’s Piledriver can’t match Intel’s Haswell clock for clock, but at 4.5GHz, the Athlon X4 750K ought to give the Pentium G3258 a run for its money in multithreaded tests. Right? Let’s have a look.

Our testing methods

The test systems were configured like so:

Processor        AMD FX-8350               AMD A10-7850K              Athlon X4 750K
Motherboard      Asus Crosshair V Formula  Asus A88X-PRO              Asus A88X-PRO
North bridge     990FX                     A88X FCH                   A88X FCH
South bridge     SB950                     —                          —
Memory size      16 GB (2 DIMMs)           16 GB (4 DIMMs)            16 GB (4 DIMMs)
Memory type      AMD Performance Series    AMD Radeon Memory          AMD Radeon Memory
                 DDR3 SDRAM                Gamer Series DDR3 SDRAM    Gamer Series DDR3 SDRAM
Memory speed     1600 MT/s                 2133 MT/s                  1866 MT/s
Memory timings   9-9-9-24 1T               10-11-11-30 1T             10-11-11-30 1T
Chipset drivers  AMD chipset 13.12         AMD chipset 13.12          AMD chipset 13.12
Audio            Integrated SB950/ALC889   Integrated A85/ALC892      Integrated A85/ALC892
                 (Realtek 6.0.1.7233 drivers on all systems)
OpenCL ICD       AMD APP 1526.3            AMD APP 1526.3             AMD APP 1526.3
Processor        Core i5-2500K       Core i7-3770K       Core i7-4770K       Core i7-4790K,
                                                                             Pentium G3258
Motherboard      Asus P8Z77-V Pro    Asus P8Z77-V Pro    Asus Z97-A          Asus Z97-A
North bridge     Z77 Express         Z77 Express         Z97 Express         Z97 Express
Memory size      16 GB (2 DIMMs)     16 GB (2 DIMMs)     16 GB (2 DIMMs)     16 GB (2 DIMMs)
Memory type      Corsair Vengeance Pro DDR3 SDRAM (all systems)
Memory speed     1333 MT/s           1600 MT/s           1600 MT/s           1333 MT/s
Memory timings   8-8-8-20 1T         9-9-9-24 1T         9-9-9-24 1T         8-8-8-20 1T
Chipset drivers  INF update 10.0.14 with iRST 13.0.3.1001 (all systems)
Audio            Integrated          Integrated          Integrated          Integrated
                 Z77/ALC892          Z77/ALC892          Z97/ALC892          Z97/ALC892
                 (Realtek 6.0.1.7233 drivers on all systems)
OpenCL ICD       AMD APP 1526.3      AMD APP 1526.3      AMD APP 1526.3      AMD APP 1526.3

They all shared the following common elements:

Hard drive Kingston HyperX SH103S3 240GB SSD
Discrete graphics XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 14.6 beta drivers
OS Windows 8.1 Pro
Power supply Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Rendering and video encoding

Welp, this first result gives us a sense of how this story is about to unfold. When clocked at 4.8GHz, the Pentium G3258 is among the fastest CPUs you can buy in terms of single-threaded performance. Only the overclocked Core i7-4790K outperforms it, perhaps due to the 4790K’s larger 8MB L3 cache. Even when overclocked to 4.5GHz, the Athlon X4 750K can’t match the single-threaded performance of the stock-clocked Pentium. Jeez.

The contest grows closer when multiple threads are involved, but the overclocked Pentium still outperforms the overclocked Athlon slightly.

x264 encoding doesn’t scale as perfectly with multiple threads as Cinebench does, and it relies solely on the CPU cores, so QuickSync and other hardware encoders don’t get involved. Impressively, the overclocked G3258 darn near keeps pace with the eight-core FX-8350, one of AMD’s fastest desktop processors. The Pentium’s two cores at 4.8GHz also put it within reach of an enthusiast stalwart, the Core i5-2500K, a quad-core Sandy Bridge. This is crazy-fast performance for a $60 processor. Or $72. Whatever.

But how does a fast dual-core CPU perform in a modern game engine? Hmm.

Crysis 3


As usual, we’ve recorded every frame of animation in our gaming tests and are reporting results based on the entire distribution of frame times. This method lets us look much deeper than a simple FPS average would—and it reveals some interesting things about the performance of our overclocked Pentium. Click through the buttons above to see plots of the frame times from one of our three test runs for each CPU. Pay special attention to the overclocked G3258.
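For the curious, the 99th-percentile figure is simple to compute once you have the raw frame times. Here's a minimal sketch in Python, using made-up numbers rather than our actual test data:

    # Minimal sketch of the 99th-percentile frame time metric.
    # frame_times holds per-frame render times in milliseconds
    # (made-up example values, not actual test data).
    def percentile_99(frame_times):
        ordered = sorted(frame_times)
        # The frame time that 99% of frames come in at or under
        idx = min(int(len(ordered) * 0.99), len(ordered) - 1)
        return ordered[idx]

    frame_times = [14.2, 15.1, 13.8, 16.4, 33.9, 15.0, 14.7, 18.2]
    print(f"99th percentile: {percentile_99(frame_times):.1f} ms")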

In this case, the FPS average and our frame-time-focused 99th percentile metric agree: the dual-core Pentium at 4.8GHz handles our Crysis 3 test scenario pretty nicely overall, jockeying for position versus the FX-8350 and the Core i5-2500K.

What’s intriguing is how the overclocked Pentium manages this feat. Crysis 3 clearly takes advantage of four or more hardware threads when they’re available; look at how poorly the Pentium fares at stock speeds compared to the Athlon X4 and friends. Still, the G3258 more than makes up the deficit at 4.8GHz, thanks to good, old-fashioned per-core performance. Suddenly, it’s in the mix with much higher-end CPUs.

Now, check out what happens when we look closely at the hiccups, those frames of animation where the game runs slowest on each system.


Per-thread performance matters tremendously in avoiding the slowdowns that interrupt smooth gaming. The Pentium G3258 at 4.8GHz looks pretty good at our 99th percentile cutoff, but it gets even stronger during the last, most difficult 1% of frames rendered. There, it outperforms the stock-clocked Core i5-2500K and i7-4770K, and it handily outdoes any AMD CPU you can buy.


The benefits of the G3258’s killer per-thread performance are best illustrated by our “badness” metric, which looks at the time spent working on frames above a series of thresholds. The more time spent working on frames that take longer than, say, 33 milliseconds (or two display refresh intervals at 60Hz) to produce, the slower the game is likely to feel.
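In code terms, the badness metric amounts to summing the time each frame spends past a given threshold. A quick sketch, again with made-up frame times:

    # Sketch of the "badness" metric: total time spent working on frames
    # beyond a threshold. Frame times are made-up example values.
    def time_spent_beyond(frame_times_ms, threshold_ms):
        # Only the portion of each frame past the threshold counts
        return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

    frames = [14.2, 35.6, 15.1, 50.3, 16.0]
    for threshold in (16.7, 33.3, 50.0):  # 1, 2, and 3 refresh intervals at 60Hz
        print(f"Time beyond {threshold} ms: {time_spent_beyond(frames, threshold):.1f} ms")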

Conclusions

Take a second to consider what those Crysis 3 results mean. At 4.8GHz, the Pentium G3258 avoids slowdowns much more capably than even AMD’s FX-8350. Just like car guys say “there’s no replacement for displacement,” we’ve gotta admit that there’s no replacement for per-thread performance. In a great many cases, including games, the user experience relies mostly on one single, gnarly thread’s execution. With only two cores and two hardware threads at its disposal, the overclocked Pentium G3258 can still feel very snappy thanks to its combination of unusually high revs and prodigious instruction throughput in each clock cycle.

We need to do more testing, but an overclocked G3258 looks to be a truly outstanding gaming CPU—not only on a budget, but just generally compared to much more expensive CPUs.

With that said, it’s time to pile on the caveats. For one thing, we can’t assume that every Pentium G3258 will reach 4.8GHz at under 1.4V like ours did. Happily, the early reports from end users seem pretty promising, with speeds in excess of 4GHz looking common.

Beyond that, like I said, we need to do more testing. A fast dual-core, two-threaded CPU isn’t terribly common these days, and I’m curious how it performs across a range of newer games and other applications. Fortunately, I’m in a position to do something about that. We’ve already started compiling a pretty good set of results. I’ll see what we can do about adding more tests soon.

Also, although the Pentium G3258 kind of left the Athlon X4 750K in the dust in our first round of tests, AMD has newer silicon it could choose to position against the Pentium Anniversary Edition. I wouldn’t be surprised to see a more potent response from AMD in the coming weeks.

Our first taste of the Pentium G3258 has been compelling enough that we’ve cooked up two different sample builds around this processor for our next system guide update. Look for that to go online very soon. Now is a great time to build a new system. I think lots of PC hobbyists, particularly gamers, will find this chip’s combination of price and potential almost irresistible.

Comments closed
    • cdyoung1985
    • 5 years ago

    I have this CPU, and it works well; however, it will not run Dragon Age: Inquisition, as it needs 4 threads to run. I have tried OCing my G3258, getting up to 4.5GHz out of it, and it’s still not enough. You have to go with either a quad core or hyperthreading, so I just ordered the i3-4150, which is only about $100 and, from what I’ve read, will act like a quad core. It has a locked multiplier, but at least it will act like a quad core for ~1/2 the price of the i5-4690K, which to me is the next worthy step up; then there’s the i7-4790K, for around $350 normally.

    • cub_fanatic
    • 5 years ago

    Wasn’t the first Pentium a 66 or 60 MHz? I know the 75 MHz actually came out after the 100 MHz. I remember those original Pentiums. My dad got us one of the first 100 MHz chips in a custom PC. Then a few months later, my neighbor got a 75 MHz Packard Bell. I still don’t think there has been such a drastic increase in power in such a short amount of time as there was between the 486 DX/66 and the Pentium 100. Even the Pentium 66 MHz was a huge performance boost over the 486 66 MHz in every application. Now, you can’t even expect more than a 15% increase from one “generation” to another, although that probably has a lot to do with the fact that the manufacturing process is getting close to the atomic level. Between 1994 and 1995, the process shrank from 600nm to 350nm. Compare that to Intel’s current 22nm process, which is about to shift to 14nm.

    • marvelous
    • 5 years ago

    For $60 at Microcenter it can’t be beat for the price. Neither can the 8320 for $100 when on sale. One game benchmark doesn’t tell the whole truth, and neither does a lucky overclock.

    • novv
    • 5 years ago

    This CPU is good for an entry-level system and that’s all. Tell me who is just gaming these days. I mean, even if you game, you have an antivirus, maybe a torrent application, a transfer of files between your laptop and PC, or even an encoding application running in the background. A quad-core CPU is definitely a better choice. It would be very easy for TR to test if they like: start a transfer between two hard drives attached to an Intel-powered PC, and while this is happening, start a game from the SSD. Do the same on the AMD system and let’s see who’s the winner.

    • nanoflower
    • 5 years ago

    6277MHz. That’s how far one person has managed to take his G3258 using liquid nitrogen. Wish I had the setup that would allow me to try something like that out, but space is at a premium.
    [url<]http://hwbot.org/submission/2583371_dhenzjhen_hwbot_prime_pentium_g3258_5000.3_pps[/url<]

    • Kretschmer
    • 5 years ago

    It would be useful to include an i3 in all benchmarks going forward. If an i3 is sufficient for a task, users don’t have to spring for the i5 or i7. And if the i3 is consistently better off than the Pentium for a bit more dosh, it should be recommended.

    I’d imagine that most AMD FX-XXXX articles would be much more valuable with an i3 included, for example.

    • anubis44
    • 5 years ago

    Dual core without even hyperthreading? This chip’s already obsolete sitting on the store shelf. I don’t care what the current benchmarks show, the future is multi-thread optimized. And when you’re not playing games, the multi-threaded FX-8350 will cleave that little dual-core like a claymore through hot butter doing things like transcoding and compression.

    I was one of those lucky users who had a Celeron 300a that would overclock from 300MHz to 450MHz with a simple change of the FSB from 66 to 100MHz on default volts, and yes, this chip does look a little like that Celeron in terms of value, except that the lack of more than two cores or even hyperthreading is a decisive drawback compared to a 4 core processor. I’ll have to pass, and hope that Jim Keller’s brewing another AMD Athlon 64 for Intel to deal with…

      • Voldenuit
      • 5 years ago

      [quote<]the multi-threaded FX-8350 will cleave that little dual-core like a claymore through hot butter doing things like transcoding and compression[/quote<]

      Not all encoding tasks are infinitely parallelizable. It's been a long time since I've done anything intensive enough to need Avisynth, so I've been using Handbrake for my general encoding tasks these days. In Handbrake, at least, some of the filters (I believe deinterlace, IVTC and deblock) are limited to 2 threads. Going from an ageing Athlon II X4 to an i5 4670K has been an immense speedup for me, and I don't think an 8350 would be any faster for my workload; it might even be slower.

      The i5 has been a wonderfully balanced processor for my needs (some gaming, some encoding, some Matlab) and is cool running and power efficient. The 8350 may have a niche for some purely threaded tasks that don't rely on good IPC, but that niche is pretty small, and there are better options for other tasks both above and below its price point.

      EDIT: Anandtech's bench has some [url=http://www.anandtech.com/bench/CPU/1052<]4k handbrake benchies[/url<], and the 4690K (19.44 fps) beats the 8350 (17.53) with half the cores, slower default clocks and 33% lower TDP.

      • Krogoth
      • 5 years ago

      It is a budget dual-core unit that is completely unlocked. Intel has typically locked its lower-end stuff since Lynnfield/Clarkdale, because if you can overclock the lower-end stuff, you can get near-high-end performance for most purposes at a fraction of the platform cost. The Pentium G3258 proves this without a doubt.

      ~$60 CPU that yields near-$249-$299 CPU performance for most stuff if you are willing to push it a bit. Even at stock, it is a pretty good deal if you want a low-end CPU for casual stuff that doesn’t need the power and energy consumption of a higher-end unit.

    • chuckula
    • 5 years ago

    OK I’ve figured it out!

    We need to see…. [b<]MANTLE BENCHMARKS[/b<] with these bad boys. Load up a hot-clocked R9 290X and see what the difference is between D3D and Mantle. If the performance delta using an afterthought Pentium dual-core is better than what you get using the vaunted Kaveri parts that cost over twice as much, then it will take a week for me to stop laughing.

      • ClickClick5
      • 5 years ago

      Oooohhhhh stop you.

      • sschaem
      • 5 years ago

      Mantle is designed to leverage multicore and, as a side benefit, to make better use of low-power multicore designs.
      This is why companies like Microsoft and Apple have followed in AMD’s footsteps with DX12 and Metal to mimic Mantle.

      But I guess a few Intel fanboys got a laugh from your “clever” drivel.. good job… good job…
      (Time to pat yourself on the back)

        • maxxcool
        • 5 years ago

        So what you are saying is that between AMD’s shared FPUs and Intel’s dual FPUs, Intel would prevail in Mantle. And you would be right.

      • atari030
      • 5 years ago

      I’m totally shocked by this comment, as it came completely out of left field. I don’t know whether the entire TR readership could have anticipated it or not, but it’d probably be close.

    • maxxcool
    • 5 years ago

    Soooo, when do we get a full in-depth review 😉 ? Seems like a lot of the current debate might be settled that way..

    • Meadows
    • 5 years ago

    Mr Wasson, speaking of threads and whatnot, do you plan to incorporate Watch_Dogs into your suite of tests?

      • Meadows
      • 5 years ago

      Bumpity bumpbump.

    • ozzuneoj
    • 5 years ago

    So, what is the least expensive board that will allow this CPU to be overclocked like this? I know Asus said that all of their Socket 1150 boards would allow K series (or Pentium AE) overclocking, but I don’t know if there are any limitations on the really cheap models.

    If I were considering an inexpensive gaming system or HTPC, I’d be a bit put off by a $100+ motherboard to overclock a $70 CPU. Obviously, in the end, the performance may be worth it… but in many cases people have no use for the additional features a $100 Z97 offers over a $50 H81, and would gladly take the huge performance increase of the overclocked CPU. That $50 makes a far bigger difference when put toward a GPU.

    Obviously you’d want a board that can handle the extra power draw gracefully, but we’re not exactly talking about a power hungry monster here… even overclocked it should use less than a stock i5 quad under load.

      • nanoflower
      • 5 years ago

      $40 for the MSI Z97 PC Mate if you get the G3258 and the MB from Microcenter. Other vendors have deals but I don’t think any are as good as the Microcenter bundle.

        • ozzuneoj
        • 5 years ago

        Sadly there isn’t a Microcenter within 300 miles of here.

        Or… I guess it’s probably a good thing. I don’t really need to buy any more PC parts right now. 😉

    • kamikaziechameleon
    • 5 years ago

    For some reason I can’t read this and not think of my Q6600 and how easily it OC’d producing amazing single thread results for the time as well as amazing quad core load carrying ability. 🙂

    • TDIdriver
    • 5 years ago

    Any idea on how the Richland-based Athlon x4 760k stacks up?

      • Concupiscence
      • 5 years ago

      Take the 750K’s results at stock and multiply by a linear scalar (in this case, 1.12 should do it), and that will give you the 760K’s performance numbers at stock speed. I don’t know whether the 760K overclocks any better, but in a best-case scenario it might be 15% faster or so. In other words, better, but still not competitive with the G3258. Pound for pound the Pentium is a monster.

        • sschaem
        • 5 years ago

        “Monster” ?

        A dual-core Haswell at 4.8GHz just matches a dual-module Richland at 4.5GHz in Cinebench,
        341 vs 347: [url<]https://techreport.com/r.x/pentium-g3258-oc/cine-multi.gif[/url<]

        Not a sign of a monster when it can't really beat an old 32nm two-module AMD chip.

          • exilon
          • 5 years ago

          You really think that four threads barely outscoring two threads is a good thing? Have you ever heard of Amdahl?

            • sschaem
            • 5 years ago

            A CPU that barely matches another CPU cannot be called a “monster,” no matter how many logical cores it supports or doesn’t support.

            And changing the subject to make an unrelated point is very trolling… But I will bite.

            To be continued….

            • exilon
            • 5 years ago

            I think my point went over your head. You should go Google the term ‘Amdahl’ before you dig yourself deeper into the hole.

            • sschaem
            • 5 years ago

            I think you are the one who is missing the entire point of calling one product a “monster” in relation to another when in reality it doesn’t always perform any better.

            I think you need to read a bit more about hardware threads. For example, even though Intel CPUs with HT have twice the number of hardware threads / logical cores of the ones without HT, they don’t provide twice the raw compute power.
            So thinking that a module, because it exposes two logical cores, should be twice as fast is naive.

            If you bothered reading on the subject, you might find that the Haswell FPU per core is actually more powerful than the FPU in an AMD module (AVX2). And those two Haswell cores were running at 4.8GHz.

            Nothing you can say will change TR’s benchmark result.

            4.8GHz, 2 cores: score 347
            4.5GHz, 2 modules: score 341

            And that’s knowing that Cinebench prefers Intel’s architecture, and that we have 300MHz in favor of the Pentium.
            There is no sign of a true monster here, not in the slightest.

            • exilon
            • 5 years ago

            You typed all that out without even looking up what Amdahl’s Law says about 2T vs 4T?

            • sschaem
            • 5 years ago

            You are just weaseling out of the subject.

            Amdahl’s Law doesn’t prove your point that a dual-core CPU is a “monster” because it scores the same as a dual-module CPU running at a lower clock rate.

            And if you looked closely at Cinebench’s CPU usage, you would realize how wrong you are to even bring that up. This alone tells me that you have no clue what Amdahl’s law represents.

            • exilon
            • 5 years ago

            No, I was hoping that you would be able to put one and one together and learn something, but I was apparently expecting too much, so here’s the answer:

            Amdahl’s law says the minimum time needed for a task is limited by the task’s sequential fraction, even if the parallelizable portions are distributed across an infinite pool of workers. Working backwards with some algebra, we can find that the maximum speedup from 1 to N cores is limited by how much of the task can be done in parallel.

            So what does that say about 2T scoring the same as 4T in a trivially multithreaded benchmark?

            Answer: the 2T part would absolutely destroy the 4T part in other tasks with higher sequential fractions, because the speedup from having the two extra cores is lower.
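            To put rough numbers on it (a quick Python sketch; the parallel fractions are hypothetical, not measured):

                # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
                # where p is the parallelizable fraction of the task.
                def speedup(p, n):
                    return 1.0 / ((1.0 - p) + p / n)

                # Hypothetical parallel fractions, not measured values
                for p in (0.99, 0.90, 0.50):
                    print(f"p={p:.2f}: 2 cores -> {speedup(p, 2):.2f}x, "
                          f"4 cores -> {speedup(p, 4):.2f}x")

            At p=0.50, four cores only get you 1.60x vs. 1.33x for two, so a 2T part that ties a 4T part on an embarrassingly parallel benchmark pulls well ahead as the sequential fraction grows.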

            • sschaem
            • 5 years ago

            The issue of the single-threaded advantage was already covered, and is reflected in the single-threaded Cinebench test… so get up to speed and stop backpedaling.

            At root, this is about the G3258’s raw compute power, which is weak even when overclocked.
            And you can’t call a weak processor a “monster” when it benchmarks equal in compute-heavy tests to the other CPU you are comparing it with.

            “This chip A is a MONSTER compared to chip B”
            Yet they both benchmark the same in compute-heavy tests…

            Side note: Cinebench favors Intel’s FPU (SIMD) architecture.. try something like POV-Ray and see how your “monster” CPU compares.

            The lesson is over… (if you still don’t get it, re-read the thread from the first post.)

            • JumpingJack
            • 5 years ago

            Unquestionably it went over his head.

      • loophole
      • 5 years ago

      I wonder why AMD suggested a 750K rather than a 760K as their chosen fighter for the green corner of the ring… It does cost $10 more, I guess, but AMD needed every overclocked cycle they could get out of their part for this showdown. They could have had an extra 300MHz up their sleeve (if you look on hwbot, the average OC on air for the 750K is 4.5GHz, and for the 760K it’s 4.8GHz, which is exactly what Scott found)…

    • flip-mode
    • 5 years ago

    I almost want one. Impressive little beastie.

    • Ninjitsu
    • 5 years ago

    Scott, if you can, try using Arma 3 for CPU benchmarks, that really kills the CPU (with higher draw distances, ultra object visibility and high particles)…there are two benchmarks available in the Steam workshop:

    For Stratis:
    [url<]http://steamcommunity.com/sharedfiles/filedetails/?id=172475381[/url<]

    For Altis:
    [url<]http://steamcommunity.com/sharedfiles/filedetails/?id=173435011[/url<]

    • DPete27
    • 5 years ago

    AMD sent you a CPU for the Intel review, but you had to go out and buy yourself a Pentium AE….that’s ridiculous.

      • chuckula
      • 5 years ago

      [quote<]AMD sent you a CPU for the Intel review, but you had to go out and buy yourself a Pentium AE[/quote<]

      Well, the difference is that AMD had a whole bunch lying around not doing much, whereas Intel was actually shipping their chips out for sale...

      Actually, I like reviews of retail products because you know with 100% certainty that Intel wasn't sending out cherry-picked samples in order to look good in reviews.

        • DPete27
        • 5 years ago

        Very true.

      • Damage
      • 5 years ago

      AMD has been great about supporting us, even in hard times.

    • slaimus
    • 5 years ago

    1.4V will quickly degrade a 22nm chip, though. Can you report the max overclock at stock voltage?

    And too bad about Intel changing motherboards so often. I had TWO 50% overclocks in the 300A era on the same motherboard: I upgraded the 300A@450 to a Celeron 600@900, and that motherboard lasted me through the entire Pentium II and !!! era.

    • unclesharkey
    • 5 years ago

    I think of processors like this as being great for budget builds, so with that in mind: how well can it overclock with the stock heat sink, and how does it compare to an i3, which is about 30 dollars more? With the overhead of a heavy-duty heat sink, a new motherboard to reach those overclocks, and the slight increase in power draw, is it worth it over an i3?

      • nanoflower
      • 5 years ago

      I’ve already read of people getting over 4.0GHz with the stock cooler. In fact I haven’t seen a report yet of anyone not hitting at least 4.0GHz, but then most aren’t using the stock cooler. A new motherboard is needed only if you have a really old one that won’t support the G3258. Otherwise you would only be getting a new MB for the extra features or the greater OC’ing ability.

    • Bensam123
    • 5 years ago

    Definitely could use more than one game benchmark that isn’t limited by graphics, plus integrated graphics benchmarks. Perhaps a newer AMD processor with integrated graphics too. They make a dual-core Richland for $60 that’s unlocked, not even counting the ones at $50 that you could overclock by tweaking the FSB (yeah, remember the FSB?).

    Arguably the most important part is the integrated graphics at these price points, too. Throwing in a $200 graphics card changes things. That is, if this is intended to be a fair comparison and not just dragging AMD out for a kicking session.

      • nanoflower
      • 5 years ago

      Why would you test integrated graphics on the G3258? Everyone knows the IGP of the Pentium is poor and won’t hold up well in games other than the simple type like Plants vs Zombies. So any test of the IGP is going to be a waste.

      Though it might be interesting to see how the IGP performs when it is overclocked. I wonder if it also has the 50% OC’ing headroom? Either way it won’t compare to Iris Pro or the GPU side of the AMD APUs.

        • NeelyCam
        • 5 years ago

        [quote<]So any test of the IGP is going to be a waste. [/quote<] Not to Bensam, because it would show how utterly superior AMD is.

          • nanoflower
          • 5 years ago

          LOL. No one is denying AMD has the superior IGP at this price point. That’s been shown over and over again in the reviews on this site and many others. So, other than giving the AMD fans fodder to point at, I’m not sure what the point is, since no one is claiming this Pentium would do better than AMD’s APUs using built-in graphics.

          The claims for ‘superiority’ are all based around the CPU performance for the price. Especially given how easy these Pentiums are to overclock.

            • chuckula
            • 5 years ago

            [quote<]No one is denying AMD has the superior IGP at this price point.[/quote<]

            Having said that... AMD chose to send over an "equivalently priced" part that doesn't include an IGP at all. Intel's IGPs ain't anything special, but even the Pentium will show you a boot screen.

          • derFunkenstein
          • 5 years ago

          Until they do IGP tests with the 750K, which has no IGP. #boom

        • Bensam123
        • 5 years ago

        Because ultra low end PCs don’t have $200 graphics cards in them. Whether the insane OC and good CPU performance can make up for the crappy graphics is really the question here. At what point, and how often, does the balance teeter in favor of one side or another?

          • Voldenuit
          • 5 years ago

          [quote<]Because ultra low end PCs don't have $200 graphics cards in them.[/quote<]

          Plenty of gaming rigs have cheap CPUs with a $200 (which is upper midrange) GPU. Until the recent introduction of frame time/latency benchmarking by TR and PCPer, the prevailing wisdom (which was borne out by the available benchmarks at the time) was that "CPUs didn't matter," and that gamers should pour as much of the budget into the GPU as possible.

          Even today, if you were building a(n overclocked) Pentium Anniversary Edition rig, something like an R9 270X ($179) or R9 280 ($199) isn't out of the question. Now, if you had raised the bar to an R9 290X or a 780 Ti, then your statement would have made more sense.

            • Bensam123
            • 5 years ago

            I don’t think I’ve ever recommended that people put a super-low-end CPU with a super-high-end graphics card; prevailing wisdom has always been to match hardware at price points as a general rule of thumb, not to go to extremes. I don’t think I ever once recommended only a high-end graphics card (or even midrange) and a crappy processor. (The most recent example of this is the people who have been recommending insane amounts of graphics memory nowadays to match newer consoles, regardless of cost.)

            I.e., an i5 with a $200-300 graphics card, or an 8xxx series. The further outside the bounds you go, the more likely you are to run into a problem with either one or the other. The only exception to this has been business computers, depending on what they do.

            A 290X or 780 Ti is in i7 territory, completely putting aside that i7s are generally worthless (besides the hex cores) compared to i5s.

          • JumpingJack
          • 5 years ago

          The point of this article is geared toward the crowd who want to take an inexpensive processor and run it beyond specifications to extract mid- to upper-range performance. This is a gaming-system delight, because it puts more budget into the GPU and avoids the bottlenecking that would otherwise require pairing a more expensive CPU in a targeted gaming rig.

          Intel’s single-thread lead, packed into an overclocked little dual-core chip like this, performs along the lines of an 8350 at stock. This is something to take note of. This CPU essentially plays directly into the AMD marketing line of ‘cheaper’ for good gaming performance.

          It is clearly outperforming (dollar for dollar) the 8350 or the 750K in gaming, for less money and at lower power…..

          Considering how you have historically argued value, one would think that this would completely tickle your ‘gotta have it’ bone.

          Alas, the only bone getting tickled here appears to be for AMD, regardless of where the real value is…..

            • Bensam123
            • 5 years ago

            Hmmm…

        • ozzuneoj
        • 5 years ago

        I would be interested to see if the IGP can actually be overclocked a decent amount on this chip. The CPU performance is obviously way better than any similarly priced AMD offering, but if the IGP can be overclocked significantly then it may be a nice “freebie” boost for super low cost systems that don’t need a dedicated GPU, but may be used for a game here or there.

        The graphics performance of the “HD Graphics” Haswell parts should actually be similar to or better than the HD3000 graphics from Sandy Bridge. That’s obviously slow by gaming-enthusiast standards, but it’d be interesting to see if a decent overclock could make it more useful in a pinch.

        I’ve been quite surprised by my laptop’s HD4000 graphics performance in light 3D gaming situations (Minecraft, Robocraft, and some others).

      • chuckula
      • 5 years ago

      You know who disagrees with your whole made-up post?
      Those evil Intel fanboys at AMD marketing who went out of their way to intentionally send TR an Athlon X4 750K — that expressly does [b<][i<]not[/i<][/b<] include any IGP whatsoever -- for this review. It's kind of pathetic when even AMD finds your own copy-n-paste rants embarrassing and wants you to stop.

      • flip-mode
      • 5 years ago

      :such sigh:

      Bensam, what to do with you? Let go of the AMD nonsense, bro. When AMD comes out with a product that beats Intel’s chips, you’ll see it right here on TechReport. And let go of the IGP nonsense too – the article had a specific perspective on this chip that you completely ignore: it’s a $60 chip that can be overclocked to compete with the big boys. Big boy chips that use big boy graphics cards.

        • Bensam123
        • 5 years ago

        I don’t understand what any of your post has to do with integrated graphics. AMD generally does beat Intel on the low end; that’s why AMD chips are recommended in low-end builds, due to their IGP. I ask for more benchmarks focused on that topic if you’re going to crown someone king of the low end, and I get ridiculed for it.

        If you’re talking about the price-performance ratios, it beats even Intel’s high-end chips. This shouldn’t be an AMD-vs-Intel thing at that point… yet you can see the stance everyone is taking in this thread, including yourself. It’s Intel vs. AMD, completely putting aside that it’s also around the performance of i5s (although a few FPS doesn’t say a lot), which are quite a bit more expensive than an 8350 or 8320.

        • sschaem
        • 5 years ago

        It’s unclear from this review if this is truly the case (able to play with the big-boy CPUs like the i5-4690K).

        The G3258 seems to struggle more to keep 60fps in this Crysis 3 test, which some say is actually CPU friendly (because of the medium settings).

        So when the game is not configured using medium quality, four cores can be pegged.
        And this will change the outcome dramatically. That’s why I said this chip has the potential to fall apart in well-multithreaded gaming workloads.

        Example of average usage in HQ with Crysis 3:
        [url<]http://www.overclock.net/content/type/61/id/1302786/width/350/height/700/flags/LL[/url<]
        (During spikes this usage will go higher.)

        And to be noted, TR's 4.8GHz OC is not the norm; other sites seem to have maxed out at a 4.4GHz OC. So it's not unrealistic to see a ~4.3GHz target for this chip. In terms of raw compute it's like having a 2.15GHz i5, so it's going to have real problems in scenarios that require lots of compute. And this is evident in the Cinebench result: even overclocked at 4.8GHz it was the slowest of all the Intel chips tested, even behind the old and venerable i5-2500K running at stock.

      • maxxcool
      • 5 years ago

      I thought the point of benchmarking a CPU was benchmarking the CPU.. not the IGP? We can all agree the Intel IGP is slower-ish…

      *But* in Intel’s defense, and using YOUR favorite AMD line: “It is good enough for the price point it is sold at.”

      Add to that, NOBODY is buying a fake dual-core AMD CPU with 1 effective FPU to overclock and play BF4 at low res and low effects settings.

      /hugs/ 🙂 bwahahahahahahah

      • sschaem
      • 5 years ago

      I think you already know the answer to your last question…

      The reality is that this chip is weak overall in all aspects; its only saving grace is with ‘single’-threaded apps when highly overclocked. (Assuming you do get that 50% overclock…)
      Otherwise it doesn’t perform any better than a low-end 1.6GHz quad core.

    • mikepers
    • 5 years ago

    Scott, nice overview. This is going to be a popular chip. Couple of questions:

    If you had upped the settings in Crysis would the G3258 fare as well relative to the other chips? Would that depend more on the graphics card? (though I guess if you’re going to spend a lot on the graphics card maybe you’re also buying a more expensive CPU)

    Side note, what is the case you’re using? I didn’t see it specifically mentioned in the review or comments.

    Thanks!

      • Damage
      • 5 years ago

      [quote<]If you had upped the settings in Crysis would the G3258 fare as well relative to the other chips? Would that depend more on the graphics card? (though I guess if you're going to spend a lot on the graphics card maybe you're also buying a more expensive CPU)[/quote<]

      The short answer is that there's no way to tell without testing. Some folks here are claiming the G3258 would suffer in other areas/at higher settings, but I've not seen any data, and besides, almost nobody else tests gaming performance by frame time, which is what you'd want to know.

      Yes, higher settings would lean more heavily on the GPU, but you can "fix" that by dropping to a lower resolution. An important related question is: How would you play the game? With the test equipment we used, including a 1080p display and a Radeon HD 7950 (same thing as R9 280), I'd want to play at the settings we used. Lower resolutions would look blurry, and higher image quality settings are too taxing on the GPU. So some of this concern may be a little contrived, if you intend on playing with more practical settings. Especially since Crysis 3 looks great on the "medium" presets.

      I may try to test some at higher settings in Crysis 3 soon, though. Seems interesting.

      The case is an MSI promo thing that, sadly, you cannot buy. There are similar cases on the market, though. Search for open-air cases at Newegg!

        • jessterman21
        • 5 years ago

        [quote<]I may try to test some at higher settings in Crysis 3 soon, though. Seems interesting.[/quote<]

        Yay! I spent an inordinate amount of time last year tweaking commands in a config file to get Crysis 3 running with almost all the eye-candy at a constant 40fps on my i3-2100 and GTX 660.

        I'd suggest either testing CPUs with the settings on High (leaves most of the special GPU-sauce off), or at least Medium with Object and Particles on High. All the CPU-intense draw-distance and LOD settings change from about 35% to max when changing Object from Medium to High.

        • travbrad
        • 5 years ago

        [quote<]An important related question is: How would you play the game? With the test equipment we used, including a 1080p display and a Radeon HD 7950 (same thing as R9 280), I'd want to play at the settings we used.[/quote<]

        I really prefer that style of testing for gaming performance, because it's much more realistic about how people actually play games. Seeing which CPU is faster at 800x600, or which budget CPU is faster when paired with a $800 graphics card, is irrelevant, especially if you already have some other tests that focus solely on raw CPU performance. Even the 7950 is probably a higher-end card than most budget gamers would be using.

    • jessterman21
    • 5 years ago

    This is exactly why Intel never released an LGA 1155/1150 unlocked i3. It would overtake all the locked i5s in gaming performance.

      • derFunkenstein
      • 5 years ago

      I know I would have bought one instead of an i5.

        • jessterman21
        • 5 years ago

        And I never would’ve had to upgrade from my i3-2100(K)

      • Krogoth
      • 5 years ago

      In most games, but not all of them. There are some game that do take advantage of the extra threads.

    • derFunkenstein
    • 5 years ago

    50% overclocks on cheap CPUs are awesome. I never had a Celeron 300A – in 1998 I was rocking a Pentium MMX 166 that I was able to get up to an 83MHz bus for 207MHz, and a year later got one of those Evergreen CPU module things that let me plug in a K6-2 at 400MHz.

    Anyway, short story long, I had a Duron 600 that would do 1GHz and a Pentium E2140 that would do 3GHz. This is like those, which is totally awesome.

      • Concupiscence
      • 5 years ago

      The Spectra 400 CPU upgrade’s still the most impressive single improvement I’ve ever made to a PC. Jumping from a Pentium 90 to a K6-2 400 was incredible; after I slapped in a 12 MB Voodoo2 and crammed it full of RAM it was a pretty decent little LAN party box for a year or so afterward. If only it had L2 cache…

        • derFunkenstein
        • 5 years ago

        YES, that’s the module I bought. It was pretty awesome. Instead of the Voodoo2 I got a Voodoo3 2000 PCI. Mine was actually a Spectra 333, but the chip was actually a K6-2 400, so I played with the switches and made it run at 400MHz. Got it at Best Buy in late 1999, IIRC.

        • oldDummy
        • 5 years ago

        [quote<]...If only it had L2 cache...[/quote<]

        Dual Pentium Pro 166 OC'd to 200. Woot. Took over local ISP DNS with a used workstation.
        "Sir, would you please shut it down?"
        Oops, sorry.

      • sschaem
      • 5 years ago

      My best overclock was on a 68020, with a 78% overclock, from 16MHz to 25.56MHz.
      Then 33% on a P75, 50% on the Celeron 300A, 45% on a Pentium 450, then 35% on a Q6600.

      I personally think the 35% on the Q6600 was the most impressive, because I was also able to keep it at stock voltage.

    • Shambles
    • 5 years ago

    These are very interesting chips. I was on the fence about replacing the Q6600 in my HTPC with one of these, but after factoring in the cost of the motherboard + RAM (that would soon be obsolete) + wanting to downsize to a mATX/ITX case, the (I’m guessing) $80-100/year in savings from a CPU that idles 24/7 doesn’t quite justify it.

    • DPete27
    • 5 years ago

    What clocks can these things get at or around stock voltage? I’m not one of those overclockers who likes to increase voltage by 40% for an “every day driver.”

    • anotherengineer
    • 5 years ago

    [url<]http://www.newegg.ca/Product/Product.aspx?Item=N82E16819117374&cm_re=G3258-_-19-117-374-_-Product[/url<]

    73.99 + 4.99 shipping = 78.98 + 13% HST = $89.25

    Welcome to Ontario, Canada. It would be nice if we had a Microcenter here.

    So where's the $125 quad-core anniversary edition??

    • dashbarron
    • 5 years ago

    Question from a non-overclocker: what does an open-case design like the test rig’s do for temperatures? Does the lack of any directed airflow hurt thermals more than the open-air passive dissipation helps?

    I always wanted to buy an open rig like this and nail the thing to a wall, more for S n’ G than anything else. Wireless keyboards and mice nowadays….

      • travbrad
      • 5 years ago

      It depends a lot on the case design and airflow in a particular rig. In my limited experience an open test rig like that will generally run cooler than most cases, but the difference between it and a well-designed and maintained case is pretty small.

      The big downside of running an open rig like that is they tend to be louder, because there is nothing to muffle the sound of your GPU/CPU fans (which are usually the loudest fans in a PC by far). It’s not exactly ideal if you want to move your PC around either, or if you have young kids/pets poking at it.

    • Generic
    • 5 years ago

    I hope this chip is the start of a trend, if only for my reading enjoyment. 😉

    Thank you for this article; it’s been a long time since I cared enough to read an entire CPU piece that didn’t revolve around the release of a new Intel architecture.

    All that being said, I had great fun going the opposite direction last night: replacing a dual-core i5 K-series with a quad-core i7 S-series that hits the same 65W envelope, but with virtualization.

    • Ninjitsu
    • 5 years ago

    My Core 2 Quad Q8400 achieves the following in Cinebench:

    Single thread: 75
    Multi-thread: 281.

    • HisDivineOrder
    • 5 years ago

    Now it’s time for Intel to release the i3 version. 😉 Why? Just cuz.

      • Ninjitsu
      • 5 years ago

      That’ll come in 2020 (Core i3 10 year Anniversary Edition). 😉

    • Flapdrol
    • 5 years ago

    It’s an interesting CPU; if my i5-750 died, I’d probably replace it with this.

    But if it doesn’t, I’ll keep this almost-5-year-old CPU until there’s something twice as fast at the same price, which will be another 2-3 years I guess 🙁

    Too bad there’s no comparison here against a Haswell i5 at stock (cheaper board, more expensive CPU, stock cooler, probably better performance) or the legendary i7-920 at 4GHz.

    • oldDummy
    • 5 years ago

    [quote<]...The spec sheet is like an alphabet soup of "nope.".........[/quote<] Gotta love it. Interesting work. Nice job.

    • johnf
    • 5 years ago

    I’d suggest that you try testing with Thief 2014. This was the first game that was completely unplayable on my venerable E8400 + GTX 560 Ti setup. Upgrading to an i7 4771 resulted in very good graphical performance. The game mentions quad core as a hard minimum requirement and made me wonder if there might be more to the uncompressed audio Titanfall debacle than meets the eye.

    Very long time reader, first time regging / posting, thanks for the great site and the best reviews out there. Inside the second changed everything.

    Tragically Thief was really not worth the price of admission 🙁

    • christos_thski
    • 5 years ago

    Just goes to show that, for most uses, cramming in more cores is no substitute for improving per-core performance. To pick an extreme example, a dual-core in-order Atom is still shite, whichever way you cut it. Too bad the ridiculous “cram more cores into it” paradigm is now taking hold in smartphones (take a look at these positively ridiculous Mediatek “octacores” on Android phones; whatever the hell for?)

      • robliz2Q
      • 5 years ago

      More, slower cores can help where tasks need to be done to a deadline using less power. Samsung’s SSDs achieved great performance and very low power consumption using 3 ARM cores.

      Sometimes no one notices that you raced to idle faster; longer battery life does get noticed (or, in a data centre, reduced power bills).

    • Wirko
    • 5 years ago

    Scott, what’s the idle power draw, and how much does it rise at 4.8 GHz? I’m sure you took those measurements too.

      • smilingcrow
      • 5 years ago

      Yes, I’d like more info on power consumption as it helps to estimate how easy it is to cool the CPU silently in a low volume case.

        • nanoflower
        • 5 years ago

        I gather all of that will be forthcoming in a future more in-depth review. Think of this as an early taste of the future full review.

      • maxxcool
      • 5 years ago

      Less than Kaveri at 4.6GHz.

      • Damage
      • 5 years ago

      Just checked for you. The system is idling right now at 50W on the desktop. Still using SpeedStep, so CPU speeds drop to 800MHz at idle. AISuite estimates CPU power at 14W. Seems quite acceptable to me.

        • Wirko
        • 5 years ago

        Thanks.
        The 14W estimate, however, looks incredibly high for the CPU alone, and may include all the components on the mobo. I was never able to find hard numbers but a Haswell CPU should not draw more than a couple of watts when doing nothing. It was designed for mobile after all, and desktop parts have all or most power saving technologies enabled.

          • NeelyCam
          • 5 years ago

          This is overvolted – maybe speedstep doesn’t bring the voltage down when idling…?

            • travbrad
            • 5 years ago

            With Sandy Bridge it does lower the voltage when it steps down the speed, but it seems to be percentage-based rather than absolute. If you are using a higher voltage for your overclock, it will also step down to a higher voltage than usual when idling, but still much lower than what it uses at full load. It steps down to the same frequency regardless of your overclock, though (1600MHz on a 2500K).

            I don’t know if Haswell works the same way but it seems like it based on those idle power numbers.

            • NeelyCam
            • 5 years ago

            Interesting – I didn’t know that. Thanks for the info!

            • jihadjoe
            • 5 years ago

            That will usually depend on how you overvolted the thing.

            I’m no expert on Haswell, but with X79 the trick is to use offset voltage instead of just entering the desired voltage in Vcore, or worse still leaving things on “auto”.

            Overclocking procedure is where I disagree a bit with TR. TR will usually test CPU/motherboard overclocking by upping multipliers and leaving things on auto, because it’s “what most users would do,” and that’s totally valid and a good way of representing the equipment at hand. OTOH, I feel it would be much more informative if they were to really delve deep into the settings and properly fine-tune the OC in manual mode, because it would better represent peak performance and, at the same time, educate the reader on how overclocking should be done in case he buys a particular board or CPU.

            Offset voltage is just one example in particular, but I’m confident it’s well worth spending the extra effort to have a CPU that goes down to 0.9V at idle and jumps to 1.2-1.3V when overclocked, compared to one that stays at 1.3V all the time. After all, this sort of setup only needs to be done once, and it improves the experience for the entire useful life of the CPU.

    • USAFTW
    • 5 years ago

    For the first time in years, you can buy a low-end CPU, overclock it like a bastard, and get very noticeable results! What a feeling!
    As I said in the original announcement thread, AMD’s entire CPU lineup is now made redundant by this one tiny puppy!

      • Voldenuit
      • 5 years ago

      But you have to pair it with a Z97 or Z87 mobo, unless you want to play a game of microcode cat and mouse with Intel…

    • deepblueq
    • 5 years ago

    I wonder if this does tangibly better when paired with an Nvidia card, given their lower driver overhead?

    I recall some frame latency results showing Core-i3s doing a lot better than Pentiums or Celerons, which I interpret as a want for four threads for gaming.

    Let’s say a game sees two cores and spawns two threads to do the work it needs. There will be a driver thread and some other system background work interrupting this. On a Pentium/Celeron, this could impose some notable penalty in context switches, but the HT on an i3 makes it a non-issue. Further, my intuition says the background stuff is much more likely to affect frame latencies than frame rates.

    (This is all theoretical, I’m just some guy on the internet, don’t believe everything you read, yadda yadda yadda.)

    My strategy here tends to involve reducing background work as much as possible, with lean-and-mean Linux installs and Nvidia’s more efficient drivers. I’m thinking a G3258 and a GTX 870 or similar with a G-Sync monitor would be ideal for my next system.

    • NeelyCam
    • 5 years ago

    Whoa..

    Bards will be singing songs about this CPU 10 years from now.

    • DarkMikaru
    • 5 years ago

    My jaw is officially on the ground. *Cries over his beloved FX-8350* lol

    That is amazing. Even hanging with the big dogs, Intel may have shot themselves in the arse on this one. Even an AMD fan has to take his hat off to this little guy’s sheer value.

    Intel is on fire. Damn.

    • GAMER4000
    • 5 years ago

    I see you used the later x264 r2334 encoder, which supports AVX2 and FMA. However, Xbitlabs also used later versions of the same encoder, namely r2345 and r2358:

    [url<]http://www.xbitlabs.com/images/cpu/core-i5-4670k-4670-4570-4430/x264.png[/url<]
    [url<]http://www.xbitlabs.com/images/cpu/core-i3-4340-4330-4130/Charts-1/x264.png[/url<]

    Look at where the Core i3s are relative to the Core i7s; the benchmark does show some gains with HT. The Pentium dual-core you have, even at stock, seems relatively faster compared with the Core i7s than the Haswell Core i3s are in that review, despite its lower clock speed, and the FX-8350 seems to have given up in your review but not in theirs.

    Also, any chance you can test some other parts of Crysis 3 on top of the current test sequence? Welcome to the Jungle seems to do well on CPUs with better multi-threaded processing abilities, since it has animated vegetation. Pcgameshardware showed HT actually made a noticeable difference in the game, meaning a Core i7 was noticeably faster than a Core i5:

    [url<]http://www.pcgameshardware.de/Crysis-3-PC-235317/Tests/Crysis-3-CPU-Test-1068140/[/url<]

    It requires at least the 1.3 patch for the game, since the initial release of Crysis 3 had issues with HT, it appears. Some other sites tested that area (just pure FPS, though), and even the overclocked G3258 could not really beat a Core i3:

    [url<]http://pclab.pl/zdjecia/artykuly/radek/2014/pentium_g3258/charts/c3_j1920a.png[/url<]

    The Haswell Core i7s also have a noticeable lead over the AMD chips:

    [url<]http://pclab.pl/zdjecia/artykuly/radek/2014/devil/charts/def/c3_j1920a.png[/url<]

    They are also testing the game at Ultra, instead of high textures and medium settings.

    • sschaem
    • 5 years ago

    “the Pentium G3258 avoids slowdowns much more capably than even AMD’s FX-8350”

    Even at its max overclocked state, compared to a plain stock FX-8350, I see:

    Time spent beyond 16 ms:
    G3258 @ 4.8GHz – 1385
    FX-8350 – 1311

    The FX also beats it in the 99th percentile: 21.3 vs. 24.1.

    So I’m confused by the article’s conclusion.

    Is it because the 4.8GHz Pentium averages 73 FPS in Crysis vs. 72 FPS for the FX?
    Well, I wouldn’t call 1 FPS a ‘slowdown’ and then state from this that the Pentium is much more capable at gaming. (At least not for Crysis; it might be another story with other games.)
    Older single-player games should favor the Pentium, but newer multiplayer games should favor the FX. Example: Skyrim vs. BF4 multiplayer in Mantle mode (or, at some stage, DX12).

    Personally, even if I had to build a cheap gaming box, I would skip this CPU.

      • Damage
      • 5 years ago

      There is a distinction you’re missing between avoiding slowdowns and general frame rates. Would you like me to explain it to you?

        • sschaem
        • 5 years ago

        Yes. Please explain how a CPU that has a higher 99th-percentile rating (24.1 vs. 21.3)
        AND spent more time on frames beyond 16 ms (1385 vs. 1311) is a CPU that avoids slowdowns.

        This is also comparing an overclocked Pentium vs. a stock FX (even though the FX is fully unlocked).

          • Meadows
          • 5 years ago

          Stop fanboying and compare the 33 ms and 50 ms charts as well.

            • sschaem
            • 5 years ago

            I will, if you check the frame graph.

            This has nothing to do with Intel vs. AMD, but which would you prefer during gameplay?

            Smooth 60 FPS gameplay interrupted by dropped frames (we don’t have FreeSync yet),
            or, during a heavy game sequence that already has a low frame rate on either CPU, having the frames delivered about 10 ms later?

            • Meadows
            • 5 years ago

            I want as few dropped frames as possible, so Intel is better.

            Remember: when the load is light, both CPUs did “60 fps” just fine, but when things got rough, AMD’s processor worked harder for longer. That results in terrible hitches that you *will* notice more easily.

          • Damage
          • 5 years ago

          Both the FPS average and the 99th percentile frame time are meant to be general measures of performance, although the 99th percentile is more sensitive to frame times than the FPS average.

          A CPU with a higher FPS average and lower 99th percentile frametime does better in the general case. *Most* frames are rendered more quickly.

          However, it’s possible for a CPU with good general performance to fare poorly when confronted with particularly challenging frames. In our Crysis 3 results, those frames are represented as upward spikes in the individual frame time plots. You can see them. They comprise less than 1% of the frames rendered, so they don’t impact the 99th percentile score or the FPS average much at all.

          They are, however, potentially noticeable hiccups in the in-game animation. I can tell you they happen during combat action in this test sequence, with explosion animations that use lots of particles.

          In those cases, the frame time spikes–or slowdowns–are larger on the FX-8350 than they are on the Pentium G3258 at 4.8GHz. The Pentium smooths out those bumps much more effectively, so it spends less time rendering above the 50- and 33-ms frame time thresholds. Although the FX-8350 has slightly higher overall performance, it just doesn’t do as well in the most difficult cases.

          In this particular test, about 3.5% of frames take longer than 16.7 ms to render, so this threshold doesn’t just deal with the extreme edge cases. Hence its similarity to the 99th percentile results, where the FX-8350 does slightly better.

          So, to review: higher general performance for the FX-8350, but superior slowdown mitigation for the OCed Pentium G3258.
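
          If it helps, here’s a rough sketch of how these metrics can be computed from a list of frame times (illustrative Python only, not our actual tooling; the sample numbers are invented):

          # Frame-time metrics: average FPS, 99th-percentile frame time, and
          # "time spent beyond X" (counting only the portion of each frame
          # past the threshold).
          import numpy as np

          def frame_metrics(frame_times_ms, thresholds=(50.0, 33.3, 16.7)):
              t = np.asarray(frame_times_ms, dtype=float)
              fps_avg = 1000.0 * len(t) / t.sum()  # average FPS over the run
              p99 = np.percentile(t, 99)           # 99th-percentile frame time
              beyond = {x: float(np.clip(t - x, 0, None).sum()) for x in thresholds}
              return fps_avg, p99, beyond

          # Invented run: mostly smooth 14 ms frames plus ten 60 ms spikes.
          fps, p99, beyond = frame_metrics([14.0] * 990 + [60.0] * 10)
          print(f"{fps:.1f} avg FPS, 99th pct {p99:.1f} ms, "
                f"time beyond 33.3 ms: {beyond[33.3]:.0f} ms")

          Note how the ten spikes barely move the FPS average or the 99th percentile, yet they account for all of the time beyond the 33.3 ms threshold – which is exactly the distinction at issue here.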

          What seems to be happening here is that, in those tough cases, the game’s performance is limited by the execution of one or two individual threads. Thus, the OCed Pentium’s higher per-thread performance trumps the FX-8350’s combo of a larger core count with lower per-thread performance.

          This outcome is a natural consequence of AMD’s architectural decisions with Bulldozer and friends, which appear to have traded off per-thread performance (IPC * frequency) for larger numbers of integer cores/hardware threads–although I do think AMD expected to get more IPC and frequency than it has achieved, which is one reason they’re now building a new, clean-sheet CPU core with a very different set of architectural choices.

          You’re right that the FX-8350 isn’t overclocked, but it’s not really the proper comparison for the G3258. I singled it out only for emphasis. The proper direct comp is the OCed Athlon X4 750K, which is even slower than the FX, perhaps in part due to its inferior cache hierarchy, which lacks full cross-core sharing at the last level, or possibly due to the sharing of an FPU between two critical threads.

          I understand you may be somewhat frustrated by these results, being an obvious supporter of AMD. I find it frustrating that AMD isn’t more competitive, too. But I stand by my analysis, which is true regardless of what one might wish to be the case.

            • sschaem
            • 5 years ago

            Thanks for taking the time, it’s way more than I expected 🙂

            I will also clarify in greater detail why I felt compelled to comment on this article.

            Here is what I saw in the chart:

            [url<]https://imagizer.imageshack.us/v2/632x466q90/655/60db1c.png[/url<]

            For 99.3% of the frames rendered, the FX performed better (no contention on this part). But for the most difficult frames, I only saw a 10 ms delta between the 4.8GHz overclocked Pentium and the stock FX in this game test.

            And now, looking closer at the FPS chart, I see more spikes above 16 ms spaced apart in the Pentium chart in later gameplay, while the FX seems to stay under 16 ms all that time?

            [url<]https://imagizer.imageshack.us/v2/610x523q90/575/8df8d4.png[/url<]

            But then this data doesn’t matter if you did experience more stutter on the FX than the Pentium. If that is the case, your conclusion is fair (can’t argue with your gameplay experience).

            Now… not that I need to have the last word 🙂 But I think you are seeing the best the Pentium has to offer in this test, while the FX has more to unlock. I can only guess that the frame graph might look very different with BF4/Mantle (and future engines with Mantle/DX12 support).

            Yes, I do have a bias for AMD, for sure. If someone bashes an Intel CPU, I will let it fly (usually). But AMD CPUs have gotten so much bad rap, often just because it’s fashionable to bash them, that I became overprotective. As you saw in my post on your article 🙂

            • Meadows
            • 5 years ago

            You’re looking at it wrong.
            Here, let me help you: [url<]http://puu.sh/a4N2u/d65f13aee4.png[/url<]

            I’ve marked the 4 highest peaks for your convenience. It’s clear that the slowdowns on the FX are worse, because they take longer each time (resulting in very high numbers on the 33 ms and 50 ms charts). Also note that the Intel processor costs half as much, consumes half as much power, and that Crysis 3 can take advantage of more than 2 threads.

            (Edit: sorry for the clutter, but TR has not offered an individual chart for each separate result.)

            • sschaem
            • 5 years ago

            We are focusing on two issues.

            How slow does it get when things get “slow”
            & How well can the CPU deliver a steady frame rate

            My purple line shows the 16.7 ms threshold, to show how well the Pentium and FX behave at 60 FPS.

            The chart is hard to read, I agree. But it seems that for the last two-thirds of the test duration, the FX delivered uninterrupted 60 FPS gameplay while the Pentium at 4.8GHz crossed over many times.

            Note: the frame-time curve shows the Pentium @ 4.8GHz going as high as 48 ms, yet the frame-time graph we are looking at doesn’t show it going above 38 ms. So some of the lines you drew for the Pentium might be 10 ms too low.

            Interestingly, if you look at the 20 ms mark, the G3258 seems to cross over more often. (But I don’t know how much weight we can put on this.)

            What I’m getting overall is that when there is a frame drop below 30 FPS, the FX seems to be an extra 10 ms or so slower. Is this noticeable? I don’t think I could tell whether an explosion is at 15 FPS or 20 FPS… But I can tell immediately when a game drops from 60 FPS to 30 FPS. (FreeSync can’t come soon enough.)

            I still think the FX got a bad rap in the conclusion,
            even though I do see now that it was all based on the 33 and 50 ms data.

            BTW, I missed this originally: I see that the i7-4770K wasn’t spared.

            “it gets even stronger during the last, most difficult 1% of frames rendered. There, it outperforms the stock-clocked Core i5-2500K and i7-4770K”

            • Meadows
            • 5 years ago

            Insignificant.
            Dropping from 65 FPS to 50 FPS is not nearly as bad as dropping from 65 to 20 (which the FX has done several times throughout the test run).

            Losing 10-15 FPS from above 60 FPS is very often hard to notice in many games. For this reason, your argument that the FX gives “smoother 60 FPS gameplay” does not stand easily.

            At the same time, you seem to ignore the worst of the spikes, which is puzzling, especially because the difference between 15 and 20 FPS (total) *is* quite noticeable.

            • Chrispy_
            • 5 years ago

            [quote<]We are focusing on two issues.[/quote<]

            Not really. Everyone has a threshold where the game is “too slow.” It might be 16 ms (60 FPS) frames, or it might be 33 ms (30 FPS) frames. Your threshold is going to be somewhere, possibly even a different threshold for different types of games, but you’ll have a limit somewhere that you call running “too slow.”

            [quote<]How slow does it get when things get "slow"[/quote<]

            This is how much of your game time is “too slow.” The AMD architecture flat-out, categorically sucks here – with the flagship 8350 being three and six times worse than a cheap Pentium at 30 FPS and 20 FPS respectively.

            [quote<]& How well can the CPU deliver a steady frame rate[/quote<]

            What does it matter, if it’s over your threshold? Your game is already smooth [i<]to you[/i<] and you don’t notice or care about the higher framerate.

            [b<]Let’s just exaggerate to illustrate the point:[/b<] Would you like to game on an AMD processor that gets 120 FPS most of the time, but drops to 1 FPS every time you try to line up a headshot, or would you take an Intel that churns out frames at a constant 30 FPS and never ever gets a slowdown?

          • NeelyCam
          • 5 years ago

          Wow.

          I can’t believe AMD [i<]still[/i<] has fanbois

            • sschaem
            • 5 years ago

            Well, don’t you think we have enough Intel fanboys around?
            (I’m just trying to bring balance to the Force 🙂

            • chuckula
            • 5 years ago

            You do realize that by putting up a “heroic” fight where you nitpick about how AMD’s highest-end CPU can edge out a $60 throw-away Intel dualie in games*, you’re really doing more to make AMD look pathetic and desperate than anything a regular Intel fanboy could post… right?

            * And you can’t even do that very convincingly.

            • sschaem
            • 5 years ago

            “it gets even stronger during the last, most difficult 1% of frames rendered. There, it outperforms the stock-clocked Core i5-2500K and i7-4770K”

            How much is an i7-4770K again?
            From the graph, the Pentium overtook the $340 i7-4770K in the last 0.5%, and the FX in the last 0.7%.

            Do you really think you are going to have a bad gaming experience with an i7-4770K, even at stock?

            • travbrad
            • 5 years ago

            I’ve heard there are also people who just buy the best CPU for the price and don’t have any loyalty to either company because neither company has any loyalty towards them.

            It’s just a rumor though.

            • NeelyCam
            • 5 years ago

            [quote<](I'm just trying to bring balance to the GeForce :)[/quote<] ftfy

            • maxxcool
            • 5 years ago

            Marketers and paid-vertising gorillas

            • ultima_trev
            • 5 years ago

            I am an AMD fanboi, but I will admit this must be embarrassing for AMD. I really wish they’d get their **** together and make a competitive product that doesn’t require a fusion reactor to power.

      • Voldenuit
      • 5 years ago

      [quote<]I'm confused with the article conclusion.[/quote<]

      You’re either deliberately being obtuse by cherry-picking results, or have genuinely missed the conclusion in the data.

      Both the 8350 and the Pentium AE spend about the same amount of time under 60 FPS (the 16 ms mark), but the 8350 spends nearly [i<][url=https://techreport.com/r.x/pentium-g3258-oc/c3-33ms.gif<]three times as long as the Pentium under 30 fps (the 33ms mark)[/url<][/i<].

      500 ms of lost time may not sound like a huge amount, but it’s probably happening the most when a game gets most intense (explosions, enemy swarms, lots of enemy players shooting at you). If you’re playing online, this sort of stuff can get you killed.

      EDIT: Edited for snark.

        • sschaem
        • 5 years ago

        The chart shows that when the FPS drops below 60, the delta is 10 ms, not 500 ms.

        So most of the time, the FX (at stock, even though it’s fully unlocked) will deliver better 60 FPS gameplay.
        But more importantly: when there is a slowdown (below 60 FPS), both CPUs slow down,
        and the FX will, in those rare cases, take an extra 10 ms (in this specific Crysis 3 test).

        Like I mentioned, most likely the Pentium will do better in games like Skyrim, but it will do much worse in BF4 + multiplayer + Mantle.

          • Andrew Lauritzen
          • 5 years ago

          > The chart shows that when the FPS drops below 60, the delta is 10 ms, not 500 ms.

          What chart? Are you sure you understand how to read this data?

          Stop thinking in FPS, it’s confusing you… There’s a reason why TR et al. switched to primarily talking about frame times.

          Also, there’s no “better 60 FPS gameplay” unless you have a 120Hz monitor. Once you’re consistently at 60, you’re done. Dropping frames (i.e., exactly what the threshold frame-time stuff is measuring) is what hurts the smoothness of the experience. It isn’t helpful to be rendering frames in ~5 ms if you still have spikes beyond ~16, let alone up to ~100ish…

            • sschaem
            • 5 years ago

            And what do you need to reach 60 FPS? Frame times of at most 16.7 ms.
            This is why I keep mentioning the TR 16.7 ms frame-time result chart.

            You are just rephrasing my point about the 16.7 ms frame time (60 FPS gaming).

            Both chips deliver 60 FPS in this game test, apart from 1385 ms beyond the threshold for the Pentium and 1311 ms for the FX.
            So the Pentium has a slightly harder time keeping the frame rate at 60 FPS.

            But when both chips drop below 60 FPS, the FX is at most only about 10 ms behind. And that’s for very few frames… again, ~10 ms for a maxed-out overclocked chip vs. another at stock (even though it’s fully unlocked).

            This is what matters, and this is why the conclusion of this article is misleading.

            I’m not making those numbers up; they are right in the article for anyone to see.

            • Andrew Lauritzen
            • 5 years ago

            > So the Pentium has a slightly harder time keeping the frame rate at 60 FPS.
            No, that’s not how this data works. You’re either not understanding it or you are trying to “simplify” it and in the process making it mean something that it doesn’t.

            > But when both chips drop below 60 FPS, the FX is at most only about 10 ms behind.
            What are you basing that on? Looking at the frame time graphs, the Pentium hits a max of ~35 ms/frame while the FX hits 50-60 ms on several occasions. That’s not 10 ms, and that’s a large difference…

            As per my other post… I’m not sure you really understand how these numbers work and what they mean in terms of “smooth” gameplay. If you’re not really sure, it’s best to just stick to the 99th percentile numbers. They are the best single indicator of smoothness given. To interpret the “time past X” stuff you really need to understand frame time distributions and what they mean in terms of smoothness.

      • colinstu12
      • 5 years ago

      The 8350 is more than double the price of the Intel chip. That $100 could very well be put towards a beefier video card.

        • sschaem
        • 5 years ago

        That’s assuming you can get the chip to 4.8GHz. Does Intel guarantee that?

        Also, like I said, this chip is for single-threaded games. This review does not show how this chip behaves in multiplayer game workloads, especially titles like BF4 in multiplayer with a modern GFX API.

        For all we know, this chip completely falls apart.

          • nanoflower
          • 5 years ago

          Of course Intel doesn’t guarantee any overclock value. However, based on all the reviews I’ve seen, it seems that even with a cheap motherboard and stock cooling, 4.3GHz is achievable on all G3258s. Obviously you could get lucky and get a better chip, but I haven’t seen anyone unable to hit that 4.3GHz mark if they were willing to up the voltage some (but not to ludicrous speed).

            • sschaem
            • 5 years ago

            If the Pentium at 4.8GHz already doesn’t deliver 60 FPS as well as the FX,
            what’s going to happen when it’s 10% slower?
            And what happens if the unlocked FX is overclocked just 10%?

            It seems you are all in an overprotective mood about TR and missed the point.

            TR’s claim that
            “the Pentium G3258 avoids slowdowns much more capably than even AMD’s FX-8350”

            is simply incorrect using the data TR provided with that single game test.

            • Meadows
            • 5 years ago

            Do you want to get banned? Because that’s how you get banned.

            • Pwnstar
            • 5 years ago

            LOL

            • travbrad
            • 5 years ago

            Multiple people have explained to you exactly how they reached that conclusion. You should go work for Fox News with your impressive ability to ignore all facts that you don’t like.

            [url<]https://techreport.com/r.x/pentium-g3258-oc/c3-curve-amd.gif[/url<]
            [url<]https://techreport.com/r.x/pentium-g3258-oc/c3-50ms.gif[/url<]
            [url<]https://techreport.com/r.x/pentium-g3258-oc/c3-33ms.gif[/url<]
            [url<]https://techreport.com/r.x/pentium-g3258-oc/c3-16ms.gif[/url<]

            The FX-8350 has a lot more time spent above 50 ms and 33 ms frames (equivalent to dropping below 20 FPS and 30 FPS respectively). In other words, the FX-8350 is slower in the areas where you are most likely to notice slowdown. The fact that we are even comparing a $175 CPU to a $75 CPU should tell you what a great chip this is.

            So, anything getting through that noggin yet?

            • sschaem
            • 5 years ago

            If you think I’m disingenuous for not looking at the 50/33 ms stats, let’s also not ignore the 16 ms stats when drawing a conclusion.

            When you have rock-steady 60 FPS, dropping to 30 FPS will be noticeable,
            especially if it pops in and out between 60 and 30 FPS.

            And if you look at the frame-rate chart, this is what is going on with the Pentium.
            Not only do more frames drop to 30 FPS (not that many more than the FX, but still worse),
            but they seem more scattered.

            I still think the FX got too much flak, and the Pentium got overhyped –
            not just because both chips perform so closely in delivering smooth 60Hz gameplay,
            but because the FX was judged at stock speed while the Pentium was overclocked.

            Drop the Pentium to what people might actually get, and overclock the FX, and we might see the lines blur in those charts.

            On price, it’s unmatched. No argument here.

            But I would be wary of a dual-core CPU going forward, even if it can be clocked at 4GHz+.

            • travbrad
            • 5 years ago

            [quote<]But I would be wary of a dual-core CPU going forward, even if it can be clocked at 4GHz+.[/quote<]

            I personally wouldn’t buy a dual-core going forward either, but I can afford to buy better CPUs with more cores, and I do some video encoding where extra cores help a lot. For a gamer on a really tight budget it’s hard to beat the G3258 though, and there is a potential upgrade path to an i5 or i7 in the future if they can afford it later.

            [quote<]I still think the FX got too much flak, and the Pentium got overhyped – not just because both chips perform so closely in delivering smooth 60Hz gameplay, but because the FX was judged at stock speed while the Pentium was overclocked.[/quote<]

            TR did overclock AMD’s closest competitor to the G3258 (the X4 750K), and the results speak for themselves. The FX-8350 is an interesting data point, but it isn’t really even in the same category of CPU. It is more than double the price and requires a much beefier PSU, especially when overclocked (where you are extremely lucky if you get a 25% overclock out of it).

            I’m not sure what you find so unbelievable about a 4.8GHz overclock either. The vast majority of “K” CPUs (since Sandy Bridge) can hit similar speeds, and all other things being equal, CPUs with fewer cores can generally hit higher frequencies than CPUs with more cores.

            [quote<]On price, it’s unmatched. No argument here.[/quote<]

            Price is VERY important, particularly in a discussion about budget CPUs. If you take price out of the equation (like you are doing with the FX-8350), then we can start comparing Athlon X2s to Core i7s.

            • nanoflower
            • 5 years ago

            But that is the point. The Pentium beat the FX at overall performance AND it did so using less power and costing less. In any completely fair comparison, all of those factors have to be taken into account, so even if you overclocked the FX, the Pentium G3258 would win based on its lower power usage and price while delivering good performance.

            • JumpingJack
            • 5 years ago

            You clearly do not understand what is going on nor how to interpret the data.

          • Andrew Lauritzen
          • 5 years ago

          > For all we know, this chip completely falls apart.

          Well, fundamentally it can’t do worse than a 4-core/4-thread Haswell @ 2.4GHz, and will likely do better. (There are some cache-size details and such that I’m glossing over here, but yeah.) More data is always nice, of course, but don’t forget that a CPU that can handle a single thread really fast is strictly better, in terms of performance robustness, than one that requires two threads to get the same speed.

          Thus, while a “modern GFX API” and multi-threading can potentially flatten the graph and make everything less CPU-dependent to start with, it’s not going to flip the results or anything. Better threading can allow lower-frequency multi-core designs to get nearer to their peak throughput, but it’s not like it hurts higher-frequency parts…
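
          Here’s a toy Amdahl’s-law-style sketch of what I mean (every number is invented for illustration, assuming the toughest frames are dominated by serial work):

          # Made-up model: each frame has serial work that can't be split, plus
          # parallel work that divides across however many threads the engine uses.
          def frame_time(serial_ms, parallel_ms, threads, per_thread_speed):
              return (serial_ms + parallel_ms / threads) / per_thread_speed

          # Invented tough frame: 12 ms serial + 12 ms parallelizable work at
          # baseline speed. Say the OCed Pentium has ~1.6x the per-thread speed
          # but only 2 threads, vs. 8 slower threads for the FX.
          for engine_threads in (2, 8):
              fx = frame_time(12, 12, min(8, engine_threads), per_thread_speed=1.0)
              pentium = frame_time(12, 12, min(2, engine_threads), per_thread_speed=1.6)
              print(f"{engine_threads}-thread engine: "
                    f"FX ~{fx:.1f} ms, Pentium ~{pentium:.1f} ms")

          With a better-threaded engine, the FX closes the gap (18.0 ms down to 13.5 ms per tough frame in this toy model) but doesn’t flip the result, because the serial portion is still bound by per-thread speed.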

          Also, I’m not sure why you are equating “multiplayer” with “multi-threaded,” unless that’s a typo.

            • sschaem
            • 5 years ago

            The Pentium falls apart in the context of this thread’s subject: TR’s article conclusion,

            “the Pentium G3258 avoids slowdowns much more capably than even AMD’s FX-8350”

            So in the case of multithreaded games/engines, the FX has plenty of headroom while the Pentium is already shown at its best.
            You can see that in the multithreaded tests, where we see a 2x performance difference –
            even though in those tests the Pentium is overclocked to 4.8GHz while the FX is at stock.

            And thinking multithreading benefits both CPUs equally is naive.

            Mantle does not favor low-thread-count CPUs, which is what the Pentium is.
            Multithreaded games/engines don’t run better on low-thread-count CPUs; again, that’s what the Pentium is.
            But both technologies do benefit high-thread-count CPUs, which is what the FX is.

            So the Pentium might not get any benefit, but the FX surely does.

            For people who just want to find this or that to argue about: I already stated that for games like Skyrim the reverse is true…

            • Voldenuit
            • 5 years ago

            [quote<]For people who just want to find this or that to argue about: I already stated that for games like Skyrim the reverse is true...[/quote<]

            [url=http://www.bit-tech.net/hardware/2014/06/24/intel-pentium-g3258-review/5<]ORLY[/url<]

            The Pentium G3258 at stock gets better minimum framerates in Skyrim than the 8350, and at 4.8GHz wipes it out in both minimum and average.

            Just... stop already. At this point you’re just making sh-- up.

            • Meadows
            • 5 years ago

            I think that’s actually what he meant.

            • sschaem
            • 5 years ago

            Yes, thank you.

            “Older single-player games should favor the Pentium, but newer multiplayer games should favor the FX. Example: Skyrim vs. BF4 multiplayer in Mantle mode (or, at some stage, DX12).”

            I haven’t turned full fanboy just yet…

            • Andrew Lauritzen
            • 5 years ago

            No offence, but I don’t think you understand how this stuff all works… Before we continue, are you here to learn or not?

            • sschaem
            • 5 years ago

            I don’t think you got my point about how the Pentium will not scale with multithreaded APIs and engines, so let me rephrase.

            If you have a single-threaded program running on a single-threaded CPU, adding multithreading to the software will not gain you any performance. Actually, the reverse is often the case, because of the needed synchronization overhead.

            From this, it’s clear that adding multithreading to DX12, making a core-agnostic API like Mantle, or threading a game engine and game logic brings little to no value to single-core processors.

            We should agree on this. If not, let’s call it a day and move on.

            So why does this matter in this context?

            Because we are comparing and judging a dual-core CPU against an 8-‘logical core’ CPU.
            As a result, you should see a dramatic improvement on a 4+ core CPU, while the two-core gets nothing extra from multithreaded gaming engines/APIs.

            You can see a sense of this with games like Battlefield 4 and Mantle.

            The Pentium gets no benefit from this, while the FX is primed.
            Hence the gap widens in favor of the multicore…

            • nanoflower
            • 5 years ago

            It’s true that multi-core will likely become more important in the future. That being said, I doubt anyone interested in the best performance in future games is looking at using this processor in their main system for years to come. It delivers good performance today at a very good price and can easily be replaced in a year or two, if games come out that require more cores, without being out a lot of money.

    • trek205
    • 5 years ago

    PLEASE don’t ever test a game on medium when trying to look at CPU results. Some settings impact the CPU, not just the GPU, so it does not tell the full story on how a CPU will perform. Use very high settings, with no AA of course, but lower the resolution. That is the ONLY way to properly test for CPU limitations.

    Anyone looking at the Crysis 3 bench who puts a high-end GPU with this CPU is going to get a reality check on high or very high settings in parts of the game. My OCed 2500K was pegged or nearly pegged on many occasions. Even my OCed 4770K cannot keep full GPU usage in all areas, but it provided a very nice jump over the OCed 2500K, which could not even stay above 45 FPS at times. The OCed 4770K sometimes hits over 80% usage on its 8 threads in Crysis 3, and a load like that will choke a dual-core Pentium to death in many spots.

      • jessterman21
      • 5 years ago

      I have to agree – Crysis 3 Medium reduces all draw distances, decal fade, and LODs by more than half, uses half-res particles that only interact with static objects, and drops the number of dynamic lights by a third, making the game MUCH easier on the CPU.

      Crysis 3 on High/Very High is brutal to my 4.0GHz i5-3570K – so much so that I use Welcome to the Jungle as an OC stress test.

    • southrncomfortjm
    • 5 years ago

    How would this compare to the i3-3225 in my HTPC? Seems like at stock clocks or with a slight overclock, the Pentium wins for overall value due to its price. The Pentium surely wins when pushed to the max, but that would increase power draw and lessen the appeal.

    • crabjokeman
    • 5 years ago

    Intel should just let the G3258 be the last Pentium and let it retire on a high note. I notice when talking to people who aren’t tech heads that they recognize the Core name just as well or better than the Pentium name nowadays. Intel has cashed in all the prestige of the first 10 years of the Pentium name by releasing some less-than-great products (*cough* Prescott) and relegating it to second fiddle behind the Core series.

      • hbarnwheeler
      • 5 years ago

      Are you suggesting they drop the neutered dual core CPUs or just the name? If the former, I don’t see why they would given how they seem to sell a whole bunch of them. If the latter, why give up a name with such equity? If they roll these into the Core line, then the product line gets diluted and consumers have to work harder to get the CPU they want. Rolling them into the Celeron line would be bad for obvious reasons.

        • crabjokeman
        • 5 years ago

        The Celeron and Pentium lines already overlap. They’re both “neutered” dual cores. If Intel wants to make CPU selection easier, they could drop some extremely similar models (like the ones with odd clock speeds that retailers sell at the same price). But really, tech marketers who want to make things easier on consumers probably don’t exist. If I saw one, I would rub my eyes in disbelief like I had just seen a unicorn.

          • Wirko
          • 5 years ago

          There’s much overlap in other series as well. Remember the time when we could choose between 100MHz and 120MHz Pentiums and nothing in between? Intel could just ditch everything between the i5-4590S and i5-4690K, for example, without missing anything, and the smaller number of SKUs could bring prices down a little bit.

      • Chrispy_
      • 5 years ago

      Cashed in?

      Only the original Pentium and Pentium II were any good. The Pentium III was barely anything more than a Pentium II anyway, the P4 was a disaster that ruined the brand, and since then Pentium has meant “slow and cheap.”

        • crabjokeman
        • 5 years ago

        Yeah, that’s basically what I’m saying (though I’d say the PIII and Northwood P4 were good CPUs).

        The point is that people still bought Pentium-branded chips when better alternatives existed because of the Pentium name, but I think we’re past that point and the Pentium cow is done giving milk.

          • Voldenuit
          • 5 years ago

          [quote<]The point is that people still bought Pentium-branded chips when better alternatives existed because of the Pentium name[/quote<]

          Anyone who buys a PC part because of the brand name deserves what they get.

          There’s this guy at work who built an FX-8350 rig because “AMD is better for gaming.” Nuff said.

            • crabjokeman
            • 5 years ago

            I agree. Caveat emptor.

            Doesn’t change what I said in the previous comments though…

        • jihadjoe
        • 5 years ago

        I thought the Pentium III was pretty good; up until AMD finally broke the 1GHz barrier, it was neck and neck with the K7 in terms of IPC.

        I sort of agree that architecturally it wasn’t very different from the Pentium II, but then the Pentium III did bring the cache on-board for better performance (something I’m sure they learned from how well the Celeron 300A performed), so there is that.

          • Krogoth
          • 5 years ago

          That would be the Pentium III Coppermines (Socket 370) that brought the cache onto the die. 😉 The first generation of Pentium IIIs (Katmai) still rode the same cartridge platform (Slot 1) as their Pentium II predecessors, with the same cache setup. The Pentium III was really just a refined Pentium II that introduced SSE. 😉

          It’s a similar deal with AMD and the K7. The first-generation K7s used a cartridge platform (Slot A) that had the L2 cache off-die. It wasn’t until the die shrink that came with the next generation (Thunderbird) that AMD was able to bring the L2 cache onto the CPU die.

            • jihadjoe
            • 5 years ago

            Thanks. I skipped Katmai and my first Pentium III was indeed a Coppermine.

            Those days when Intel and AMD were neck and neck in terms of performance were golden.

    • Andrew Lauritzen
    • 5 years ago

    In terms of the total price story, are there cheap-ish motherboards that support overclocking this chip, though? The Z97-A is ~$150 CAD or so, which makes the price of the CPU a little bit moot…

      • Deanjo
      • 5 years ago

      There are a bunch of Z97 Gigabyte boards around the $120 CAD mark and Asrocks around the $110 mark.

      • Damage
      • 5 years ago

      Asus has enabled overclocking on a bunch of non-Z boards now:

      [url<]https://techreport.com/news/26650/asus-enables-haswell-overclocking-on-non-z-series-motherboards[/url<]

        • Deanjo
        • 5 years ago

        Ya, but you can’t be guaranteed those will continue to work. One microcode update and poof, that capability is gone.

    • Krogoth
    • 5 years ago

    Funny that a lowly dual-core Haswell at stock beats old first-generation quad-cores from the 2007-2008 era, and manages to keep up with Bloomfields if you overclock it a bit. Yet it still consumes less power.

    This guy is a dream for PC gamers who are on a tight budget.

      • chuckula
      • 5 years ago

      Whoa Krogoth… I don’t want to put words in your mouth, but are you… not unimpressed?

        • DarkMikaru
        • 5 years ago

        Clearly.. yes, he is.

    • Deanjo
    • 5 years ago

    This just solidifies one thing in regards to AMD.

    AMD, your only savior from being completely destroyed from every perspective is the DOJ not wanting Intel to become a complete monopoly. On a technical basis, you are simply outclassed. Should Intel release a bunch of unlocked low-end parts, your target audience would be gone in a heartbeat.

      • UnfriendlyFire
      • 5 years ago

      AMD still has the IGP advantage.

      Sort of…

      Though I’m wondering what the cost of an APU build vs. this Pentium build would be in order to achieve similar gaming performance. If there aren’t any budget mobos for the APU or the Pentium, then the cost argument is useless.

      I’ve always been suspicious of CPU cost-performance benchmarks because they leave out the cost of the mobos.

      You can design a cheap CPU by stripping all of the silicon down to its cores and L1/L2 cache, and require an expensive mobo to use it, because the memory controller and other stuff has to go somewhere.

        • Deanjo
        • 5 years ago

          When you are still getting sub-60 FPS at 1080p even with reduced details, I wouldn’t call it much of an advantage. It just sucks less. Of course, if Intel unleashed Iris Pro on the low end, even that advantage would quickly shrink to next to none.

          • UnfriendlyFire
          • 5 years ago

          But Intel isn’t selling any budget Iris Pros. For now.

            • Deanjo
            • 5 years ago

            They don’t have to. It is still within their capability however to do so on a whim.

            • UnfriendlyFire
            • 5 years ago

            AMD could release a 4-module Steamroller, or a desktop version of the APU that’s powering the PS4, combined with soldered GDDR5.

            But they won’t.

            • Deanjo
            • 5 years ago

            Of course, Intel can tap any of Nvidia’s GPU IP at any time as well.

            • UnfriendlyFire
            • 5 years ago

            “Hey Nvidia, mind if we copy-and-paste your Maxwell GPU onto our Iris Pros?”

            “Go home Intel, you’re drunk.”

            • Deanjo
            • 5 years ago

            Essentially, they could under the terms of the Intel/Nvidia agreement. That agreement gave Intel unfettered access to their graphics IP.

            [url<]http://www.guru3d.com/news-story/intel-will-pay-1-5-billion-in-nvidia-settlement.html[/url<]
            [url<]http://download.intel.com/pressroom/legal/Intel_Nvidia_2011_Redacted.pdf[/url<]

            Read the grant-of-rights section.

            • MathMan
            • 5 years ago

            Typically, in this context, ‘access to IP’ means ‘access to patents.’

            It doesn’t mean that Intel can just come in and copy the source code of everything.

            • maxxcool
            • 5 years ago

            Not won’t, can’t. That die is huge and completely changes the cost per wafer, and it would ruin their lineup.

      • snook
      • 5 years ago

      You read it here first: Intel will bail AMD out long before the DOJ gets involved.

        • Deanjo
        • 5 years ago

        They might as well just add AMD to the payroll.

          • HisDivineOrder
          • 5 years ago

          Didn’t Intel give AMD and nVidia bonuses for a job well done a few years back? 😉

            • Deanjo
            • 5 years ago

            They both got bonuses but only AMD’s was a “keep me off the streets” bonus.

          • Ninjitsu
          • 5 years ago

          Yeah, Intel’s expected acquisition of ATI will finally have been completed. 😀

      • dragosmp
      • 5 years ago

      The problem is that by releasing this CPU, Intel may also bin all of its own CPUs up to the 4670K – all of its far more profitable i3 and i5 lines. Sure, HT on the i3 does help, and some i3s have pretty high clocks to boot, but seriously. It looks like an amazing little CPU: low power, high performance, cheap. It just needs a high-end chipset, but some Z97 boards without many bells and whistles must exist or appear.

      I have a query @TR, since I’ve owned a quad-core CPU for many years now. Can the Pentium AE, if clocked high enough, multitask at all? I know these tests are a mess, close to impossible to set up, and hardly repeatable, but at least give a subjective opinion: would a 4.8GHz Pentium be “fluid” enough in a game if a light background task decides to start? A light task could be a download or an update – more I/O-intensive than CPU-intensive. Maybe compare that to a 4GHz Core i3 or a quad/hex AMD. If the Pentium is a tolerable multitasker, I think just about any CPU below the 4670K has lost its market.

        • Chrispy_
        • 5 years ago

        I think so, yes.

        Operating systems are running dozens of processes and services in the background already. Unless the background task you’re thinking of uses a significant amount of processor, it shouldn’t really change anything.

        • DPete27
        • 5 years ago

        TR has historically done this in their CPU reviews: run an encode while gaming. That’s probably a bit more demanding than what you’re suggesting, though.

        Unfortunately, this looks like a half-baked review. I almost never down-talk TR, but hey, you missed the launch anyway, so why not just finish your tests before you publish the review? That’s what you’ve typically done, and the gerbils don’t mind at all.

    • credible
    • 5 years ago

    Yet many laughed at me for considering this CPU for an HTPC; seems like a no-brainer now.

      • Forge
      • 5 years ago

      I agree with you, but only partially. ~50W is nice, almost mobile levels, but HTPCs can actually spin up a decent number of threads and a fairly high CPU load if you’re playing with certain functionality. On the other hand, the most important things for a current HTPC, IMO, are a mildly capable GPU configured to hardware-decode video and a good gigabit Ethernet link. My current HTPC is actually someone else’s discarded laptop: a low-end i5, 4GB of RAM, and gigabit plus the Intel HD graphics actually do a really fine job. It’s quiet, cool, and has a built-in battery backup.

      • Deanjo
      • 5 years ago

        I still do, given the wattage it chews up and the fan speeds required to keep it cool. Now put that in an HTPC case, and your thermals and noise levels just get worse.

        • credible
        • 5 years ago

        Ty, I neglected to think about that. On the other hand, there are many options for cooling in an HTPC nowadays, so I should be good to go.

          • Deanjo
          • 5 years ago

          That’s really dependent on the case. Most low-profile HTPC coolers are rated for lower-wattage processors (<45W for most of them), and they are usually not too cheap. Add in the price of a cooler, plus the need for a more expensive motherboard to overclock to get the equivalent performance of an i5 S-series at stock speeds – which can even use the bundled cooler, with lower temps and less power consumption – and that price differential starts dwindling pretty fast.

            • hbarnwheeler
            • 5 years ago

            Sincere question: Why would one need i5 performance (or even Pentium performance) in an HTPC?

            So many very low-power, budget chips can hardware-decode HD video in a variety of formats.

            • Deanjo
            • 5 years ago

            There are a few reasons why you may want an i5 in an HTPC:

            1) You may want to use it as a gaming PC for things like Steam.
            2) Recording/ripping/transcoding.
            3) Using it as a live-streaming server for other devices – web streaming, portable-device playback, other media boxes (like the capabilities of Plex or PS3 Media Server).

            • hbarnwheeler
            • 5 years ago

            Gotcha. That makes sense. I rarely see these use-cases discussed when people talk HTPCs. It’s usually all about decode capability.

            • Concupiscence
            • 5 years ago

            Yep. Occasional lag in transcoding files with Air Video HD was why I ditched the Celeron G1610 I’d thrown into our HTPC and replaced it with a Core i5 3570k when the local Micro Center sold them for $150 a pop late last year. The difference is downright incredible; unless something breaks I don’t imagine I’ll modify it in any substantial way for years.

            • credible
            • 5 years ago

            It would appear that you have convinced me to retire my i5-2500K to my HTPC. I was thinking about it a few times; it really does make sense to use it that way, unless I can win the lotto, lol.

            • Vaughn
            • 5 years ago

            Simple answer: some people do more than just watch media on their HTPC.

            Some game and some encode video!

    • Forge
    • 5 years ago

    ur-overclocker! ur-overclocker!

    I just picked up one of these puppies last night: $105.99 incl. tax for one, plus a low-end but decent Z97 motherboard to put it on. I’m headed home from work in a few minutes to see if I win the 5GHz lottery, but I’ll settle for anything north of 4GHz.

    What a deal! How bad a CPU can it be, going for $50-75??? This really is the 300A come again, albeit to a very different world.

      • Forge
      • 5 years ago

      And in case anyone wonders (I know, right? Everyone wants to know what I think about), I’m planning this as a diversionary entertainment until proper upgrades in the fall/winter. A nice high end Z97 motherboard first, then a Devil’s Canyon i7. That’ll be a good solid year or two of performance, I think.

      My poor battered i5-2500K doesn’t like the summer weather, and I’ve had to dial back all the way to stock clocks to maintain stability. Sandy is getting very aged as well.

        • yokem55
        • 5 years ago

          Hmm.. What speeds had you been running your 2500K at? Mine runs slightly downvolted at 4.4 all day long in 85-degree ambient air. I’ve been squinting hard at the 4790 reviews and I’m just not seeing much of a case to upgrade, but if I should expect reduced life from my chip, there might be more of a reason…

          • robliz2Q
          • 5 years ago

            If you’re able to downvolt, you should expect long life, as well as less heat and noise.

        • Plazmodeus
        • 5 years ago

        Forge, my 2600K is right up there with my Celeron 300 as the best overclock I’ve ever had. Three years (and counting) stable at 4.8. Yesterday I had After Effects doing a big render, all 8 threads pinned for an hour; temps hovered in the high eighties, never above ninety. Solid like a rock.

          • Krogoth
          • 5 years ago

          Most “300A”s in the field never really were overclocked. Intel just artificially crippled the chip to operate at a 66MHz FSB when the architecture and platform it rode on could safely operate at a 100MHz FSB. The deal was made sweeter by its on-die L2 that operated at full clock speed (though there was less of it), unlike the normal Pentium II cartridges, which had the L2 cache off-die running at only half the clock speed of the CPU (on the back-side bus).

          • gmskking
          • 5 years ago

          Love my 2600K. I ran mine at 5GHz for almost a year with no issues. Dialed it back to 4.4GHz now to be safe. Best CPU I’ve had since my Q6600.

        • Vaughn
        • 5 years ago

        Sounds like you need better cooling for that 2500k.

      • Wirko
      • 5 years ago

      The 300A was, er, [i<]ur[/i<], but this Pentium is simply ügly. I'm looking at the model number, not the performance. Couldn't Intel possibly come up with something like G3300K for an anniversary edition?

      • jihadjoe
      • 5 years ago

      I’m interested in the Z97 board you got for it.
      More info please?
