Review: Nvidia’s GeForce GT 640 graphics card

Nvidia’s Kepler architecture has displayed excellent, sometimes record-breaking performance since its grand debut in March. The GeForce GTX 680 was the fastest single-GPU graphics card until AMD recaptured the mojo last week, and the GeForce GTX 690 is still the undisputed king of single-card solutions, thanks to its dual GPUs (and prodigious thousand-dollar price tag).

In spite of these successes, Kepler has seemed eerily reluctant to get its feet wet in the shallow waters of the low-end market. Until earlier this month, the cheapest Kepler card Nvidia could offer cash-strapped enthusiasts was the GeForce GTX 670—which, at $400, really isn’t very cheap at all. Anything more affordable was derived from Nvidia’s older Fermi architecture and manufactured using a 40-nm fab process. All the while, AMD acolytes have had a whole lineup of state-of-the-art, 28-nm Radeons to choose from, including the $109.99 Radeon HD 7750 that came out back in February.

It couldn’t last…

…and it didn’t. In early June, Kepler finally jumped in the kiddie pool aboard the GeForce GT 640, an offering situated firmly in budget territory. The card launched at $99 and currently retails in the $99.99-109.99 range, almost squarely opposite the Radeon HD 7750. Nvidia makes a couple other versions of the GT 640, too, but those are for PC vendors, and you won’t find them listed at e-tailers like Newegg and Amazon.

In any case, Nvidia has given buyers a very real option if they want Kepler goodness minus the lofty price tags. That, in turn, raises an important question. How well does the GeForce GT 640 fare against other budget cards vying for supremacy around the hundred-dollar mark? By that we mean not just the Radeon HD 7750, but also Nvidia’s own, previous-gen offering.

More simply put, is the GeForce GT 640 a new budget wonder, or are you better off spending your lone Benjamin on another card? Let’s find out.

Here’s the star of our show, sans heatsink. Note the tiny GPU. That’s the GK107, which packs 1.3 billion transistors into a scant 118 mm² footprint using TSMC’s 28-nm fab process. The chip is slightly smaller than the Cape Verde GPU that powers AMD’s Radeon HD 7700 series. Cape Verde measures 123 mm² and plays host to 1.5 billion transistors, though some of those transistors are disabled on the 7750, which has two fewer compute units than the 7770.

The GK107 is also much smaller than Nvidia’s 40-nm GF106 and GF116 graphics chips, which drive the GeForce GTS 450 and GeForce GTX 550 Ti, respectively. Both of those older parts host just under 1.2 billion transistors in a die area of about 240 mm². Chips with physically smaller dimensions can cost less to produce, so although the GTS 450 is priced in the same neighborhood as the GT 640 now, Nvidia may have more freedom to apply future price cuts to the latter. So far, however, the GT 640 seems to be staying put at around $99.

Peer inside the GK107 with an electron microscope, and you’ll see lots of transistors and gates arranged in all kinds of crazy patterns. The diagram above provides an easier-to-parse overview of the chip’s various bits and pieces.

The GK107 features a single graphics processing cluster containing dual SMX shader multiprocessors. For reference, the GK104 chip inside the GeForce GTX 680 has four GPCs and eight SMXs. On both chips, each GPC contains two SMX units and a raster engine capable of rasterizing one triangle per clock. Each SMX has 192 arithmetic logic units (ALUs) and texture units capable of filtering 16 texels per clock cycle. The GK107’s lone GPC is backed by a single ROP partition capable of producing 16 pixels per clock. The chip also has dual 64-bit memory controllers that give the GeForce GT 640 a 128-bit path to its 2GB of DDR3 memory.

Hold on a minute—DDR3?

Yes, believe it or not, Nvidia has equipped this card with DDR3 RAM instead of the speedier GDDR5. It’s pretty sluggish DDR3, too, with an effective transfer rate of only 1782 MT/s. A version of the GT 640 with GDDR5 RAM does exist, but it’s one of those pesky cards reserved for PC vendors and not available for sale to the general public.

It doesn’t take a profound understanding of GPU architectures to guess that DDR3 could needlessly hamstring the GeForce GT 640 compared to its GDDR5-toting rivals. The peak theoretical numbers below lend weight to that notion:

                     Peak pixel   Peak bilinear  Peak bilinear   Peak shader  Peak rasterization  Memory
                     fill rate    filtering      FP16 filtering  arithmetic   rate                bandwidth
                     (Gpixels/s)  (Gtexels/s)    (Gtexels/s)     (TFLOPS)     (Mtris/s)           (GB/s)
GeForce GTS 450      13           25             25              0.7          783                 98
GeForce GTX 550 Ti   22           29             29              0.7          900                 98
GeForce GT 640       14           29             29              0.7          900                 29
Radeon HD 7750       13           26             13              0.8          800                 72

While the GeForce GT 640 compares favorably to AMD’s Radeon HD 7750 in terms of peak shader, texturing, and rasterization throughput, it falls considerably short when it comes to memory bandwidth, with only 29GB/s to the Radeon’s 72GB/s. We’ll gauge the real-world performance implications of that shortcoming in a minute, but it certainly doesn’t bode well. Graphics cards today need plenty of memory bandwidth to juggle textures and frame data.
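
For the curious, those peak figures fall straight out of the unit counts and clock speeds covered above. Here's a quick back-of-the-envelope sketch in Python, assuming the 900MHz reference core clock and the DDR3's 1782 MT/s effective transfer rate; the table's entries are simply these results rounded:

    # Back-of-the-envelope peak rates for the GeForce GT 640 (GK107),
    # using the unit counts and clocks quoted above.
    core_clock_ghz = 0.9           # 900MHz reference core clock
    smx_count = 2                  # one GPC with two SMX units
    alus_per_smx = 192
    texels_per_smx_per_clk = 16    # bilinear texels filtered per clock, per SMX
    rop_pixels_per_clk = 16        # one ROP partition, 16 pixels per clock
    mem_bus_bits = 128             # dual 64-bit memory controllers
    mem_rate_mtps = 1782           # DDR3 effective transfer rate in MT/s

    shader_tflops = smx_count * alus_per_smx * 2 * core_clock_ghz / 1000  # 2 FLOPs per ALU per clock (FMA)
    texel_rate = smx_count * texels_per_smx_per_clk * core_clock_ghz      # Gtexels/s
    pixel_rate = rop_pixels_per_clk * core_clock_ghz                      # Gpixels/s
    mem_bandwidth = mem_bus_bits / 8 * mem_rate_mtps / 1000               # GB/s

    print(f"Shader arithmetic:  {shader_tflops:.2f} TFLOPS")   # ~0.69, rounds to 0.7
    print(f"Bilinear filtering: {texel_rate:.1f} Gtexels/s")   # 28.8, rounds to 29
    print(f"Pixel fill rate:    {pixel_rate:.1f} Gpixels/s")   # 14.4, rounds to 14
    print(f"Memory bandwidth:   {mem_bandwidth:.1f} GB/s")     # 28.5, rounds to 29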

The GT 640 may have one minor trump card, and that’s its tight power envelope. Nvidia rates the card for peak power draw of only 65W, while AMD says the Radeon can draw up to 75W. The GeForce GTS 450 and GTX 550 Ti aren’t even in the same league, with respective TDPs of 106W and 116W.

The card

Our GeForce GT 640 sample was kindly volunteered by Zotac. Here it is again, this time with its single-slot heatsink and fan still firmly attached:

Zotac is known for its amped-up, er, AMP! Edition cards, but this GT 640 couldn’t be more sober. Its GPU runs at the standard 900MHz speed defined by Nvidia, and its two gigabytes of DDR3 memory are clocked at 891MHz, per the official specs. Zotac’s cooler at least looks different from the one on Nvidia’s reference design. The heatsink seems to have a fair bit more metal, which likely helps keep temperatures lower. The fan looks to be about the same size, though. It’ll have to spin quickly to generate decent airflow, and the accompanying noise may not blend into the background whoosh of a quiet desktop PC. We’ll look at noise levels in a bit.

The Zotac GeForce GT 640 sells for $109.99 at Newegg right now, which is in line with the prices of other GT 640 variants. We’re going to compare it to a stock-clocked Radeon HD 7750 and MSI’s slightly souped-up version of the GeForce GTX 550 Ti, the GTX 550 Ti Cyclone. For reference, vanilla 7750 variants sell for as little as $99.99 (or $89.99 after a mail-in rebate) at Newegg, and MSI’s GTX 550 Ti Cyclone costs $124.99 (or $109.99 after MIR) at Amazon.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test systems were configured like so:

Processor          Intel Core i5-750
Motherboard        Asus P7P55D
North bridge       Intel P55 Express
South bridge
Memory size        4GB (2 DIMMs)
Memory type        Kingston HyperX KHX2133C9AD3X2K2/4GX DDR3 SDRAM at 1333MHz
Memory timings     9-9-9-24 1T
Chipset drivers    INF update 9.1.1.1020, Rapid Storage Technology 10.5.0.1026
Audio              Integrated Via VT1828S with 6.0.1.8700a drivers
Hard drive         Crucial RealSSD C300 256GB
Power supply       Corsair HX750W 750W
OS                 Windows 7 Ultimate x64 Edition Service Pack 1

                                Driver revision      GPU core     Memory       Memory
                                                     clock (MHz)  clock (MHz)  size (MB)
MSI GeForce GTX 550 Ti Cyclone  GeForce 304.48 beta  950          1075         1024 (GDDR5)
Zotac GeForce GT 640            GeForce 304.48 beta  900          900          2048 (DDR3)
AMD Radeon HD 7750              Catalyst 12.6 beta   800          1125         2048 (GDDR5)

Thanks to Asus, Corsair, Crucial, Kingston, and Intel for helping to outfit our test rigs with some of the finest hardware available. Thanks, as well, to AMD, Nvidia, and the makers of the graphics cards used for testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.

  • We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its High quality preset.

  • We measured noise levels on our test system, sitting on an open test bench, using a TES-52 digital sound level meter. The meter was held approximately 8″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system's noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Texture filtering

Before we get into game benchmarking, let’s run through some synthetic tests to get a better sense of the GT 640’s capabilities.

                            Peak bilinear  Peak bilinear   Memory
                            filtering      FP16 filtering  bandwidth
                            (Gtexels/s)    (Gtexels/s)     (GB/s)
GeForce GTS 450             25             25              98
GeForce GTX 550 Ti          29             29              98
GeForce GTX 550 Ti Cyclone  30             30              103
GeForce GT 640              29             29              29
Radeon HD 7750              26             13              72

The GeForce GT 640’s low memory bandwidth is no handicap in these texture filtering tests. Kepler’s relatively strong texturing performance puts the GT 640 ahead of the Radeon HD 7750, especially in FP16 filtering, where the AMD card’s peak theoretical throughput is half its integer rate.

Tessellation

                            Peak rasterization  Memory bandwidth
                            rate (Mtris/s)      (GB/s)
GeForce GTS 450             783                 98
GeForce GTX 550 Ti          900                 98
GeForce GTX 550 Ti Cyclone  950                 103
GeForce GT 640              900                 29
Radeon HD 7750              800                 72

In this tessellation test, the GT 640 manages to outpace the 7750 and the older GeForce GTX 550 Ti Cyclone—a card that has both greater memory bandwidth and higher peak theoretical rasterization performance. This isn't the first time we've seen a Kepler part zoom ahead of the competition in this test.

Shader performance

                            Peak shader           Memory bandwidth
                            arithmetic (TFLOPS)   (GB/s)
GeForce GTS 450             0.7                   98
GeForce GTX 550 Ti          0.7                   98
GeForce GTX 550 Ti Cyclone  0.7                   103
GeForce GT 640              0.7                   29
Radeon HD 7750              0.8                   72

GPU computing

The GT 640 falls behind in these tests, which are chiefly designed to gauge its shader performance. Considering these three cards all have roughly comparable peak theoretical throughput, the GT 640’s low memory bandwidth is likely a bottleneck here.

Of course, the Radeon HD 7750 has a huge advantage in LuxMark regardless, even over the GeForce GTX 550 Ti. We’ve seen a similar pattern with high-end Radeon HD 7800- and 7900-series GPUs stacked up against the GTX 680 and older GeForces in this test. Nvidia has told us it doesn’t optimize its drivers for LuxMark, which likely explains the discrepancy.

Let’s now see how things shake out in some actual games.

The Elder Scrolls V: Skyrim

Our Skyrim test involved running around the town of Whiterun, starting from the city gates, all the way up to Dragonsreach, and then back down again.

We tested at a 1080p resolution using the “Medium” detail preset. (The “High” preset was playable on the GT 640, but it wasn’t completely smooth.)

Now, we should preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

For example, imagine one hypothetical second of gameplay. Almost all frames in that second are rendered in 16.7 ms, but the game briefly hangs, taking a disproportionate 100 ms to produce one frame and then catching up by cranking out the next frame in 5 ms—not an uncommon scenario. You’re going to feel the game hitch, but the FPS counter will only report a dip from 60 to 56 FPS, which would suggest a negligible, imperceptible change. Looking inside the second helps us detect such skips, as well as other issues that conventional frame rate data measured in FPS tends to obscure.
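
If you'd like to see the arithmetic behind that example, here's a quick sketch in Python using the hypothetical frame times described above:

    # One hypothetical second of gameplay: mostly smooth 16.7-ms frames,
    # one 100-ms hitch, and one 5-ms catch-up frame.
    frame_times_ms = [16.7] * 54 + [100.0, 5.0]

    total_ms = sum(frame_times_ms)                       # ~1007 ms, roughly one second
    avg_fps = 1000 * len(frame_times_ms) / total_ms

    print(f"Frames rendered: {len(frame_times_ms)}")     # 56
    print(f"Average FPS:     {avg_fps:.0f}")             # ~56 FPS -- looks nearly fine
    print(f"Worst frame:     {max(frame_times_ms)} ms")  # 100 ms -- a very noticeable hitch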

We’re going to start by charting frame times over the totality of a representative run for each system—though we conducted five runs per system to sure our results are solid. These plots should give us an at-a-glance impression of overall playability, warts and all. (Note that, since we’re looking at frame latencies, plots sitting lower on the Y axis indicate quicker solutions.)

Frame time (ms)   FPS rate
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

The GT 640 exhibits much higher and more frequent latency spikes than the Radeon HD 7750 and the GeForce GTX 550 Ti in our Skyrim test. It also produces less consistent frame times overall (as evidenced by the fatter-looking plot), which is exactly the opposite of what we want.

We can slice and dice our raw frame-time data in other ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. Though this metric doesn’t account for irregularities in frame latencies, it does give us some sense of typical performance.

Next, we can demarcate the threshold below which 99% of frames are rendered. The lower the threshold, the more fluid the game. This metric offers a sense of overall frame latency, but it filters out fringe cases.
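
For illustration, here's a minimal sketch (not our actual tooling) of how an average FPS figure and a 99th-percentile frame time can be distilled from a run's raw frame times:

    import math

    def average_fps(frame_times_ms):
        # total frames divided by total seconds
        return 1000 * len(frame_times_ms) / sum(frame_times_ms)

    def percentile(frame_times_ms, pct):
        # nearest-rank percentile: the smallest frame time that at least
        # pct percent of the frames come in at or under
        ordered = sorted(frame_times_ms)
        rank = math.ceil(pct / 100 * len(ordered))
        return ordered[rank - 1]

    run = [16.7] * 54 + [100.0, 5.0]   # the hypothetical second from earlier
    print(f"Average: {average_fps(run):.0f} FPS")          # ~56 FPS
    print(f"99th percentile: {percentile(run, 99)} ms")    # 100.0 ms for this short run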

Of course, the 99th percentile result only shows a single point along the latency curve. We can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

Our raw frame-time plots show several big spikes above 80 ms for the GT 640, and those are reflected in the latency curve.

Finally, we can rank solutions based on how long they spent working on frames that took longer than 50 ms to render. The results should ideally be “0” across the board, because the illusion of motion becomes hard to maintain once frame latencies rise above 50 ms or so. (50-ms frame times are equivalent to a 20-FPS average.) Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario.
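
Here's a minimal sketch of that "badness" measure, assuming (for illustration) that it simply sums the portion of each frame time that extends past the chosen threshold:

    # "Badness": total time spent past a frame-time threshold.
    def time_beyond(frame_times_ms, threshold_ms=50.0):
        return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

    run = [16.7] * 54 + [100.0, 5.0]   # the hypothetical second from earlier
    print(f"Time spent beyond 50 ms:   {time_beyond(run):.1f} ms")        # 50.0 ms
    print(f"Time spent beyond 33.3 ms: {time_beyond(run, 33.3):.1f} ms")  # 66.7 ms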

No matter how you slice it, the results are clear. In Skyrim, the GT 640 isn’t just slower than the Radeon HD 7750 overall; it’s also worse at keeping frame times consistent, which means it doesn’t do as good a job of maintaining the illusion of motion.

From a seat-of-the-pants perspective, Skyrim doesn’t feel completely fluid on the GT 640. While the game responds well to input, movement that should be smooth is punctuated by skips at short and regular intervals. Perhaps the GPU and its anemic memory configuration are to blame, or maybe it’s just a driver optimization issue. Either way, playing Skyrim on the GT 640 just isn’t a very good experience, even at this rather low detail preset.

Batman: Arkham City

We grappled and glided our way around Gotham, occasionally touching down to mingle with the inhabitants.

Arkham City was tested at 1080p using the “Medium” detail preset. Medium FXAA was enabled, as well, but DX11 effects were left disabled.

Frequent latency spikes are common in this particular stretch of Arkham City, likely because the game has to stream parts of the city on the fly—and this is a very rich and detailed environment.

Ignoring those spikes, we’re seeing basically the same pattern we saw in Skyrim. The GT 640 is clearly much slower than the Radeon HD 7750 and the GeForce GTX 550 Ti, as evidenced by its shorter plot (higher frame rates generate more frames in total) and frequent latency spikes. The Kepler-based GeForce’s frame times are much less consistent, too.

Yeah, the GT 640 isn’t really even in the same league as the other two cards.

Out of the three, the Radeon HD 7750 probably delivers the best overall experience. Its average frame rate may not be the highest, but it’s close enough, and the card’s frame times are more consistent, with fewer spikes than the GTX 550 Ti Cyclone.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

BF3 is a demanding game, and the GT 640 struggled with it until we bumped down the graphical preset to “Low.” We stuck with a 1080p resolution, however.

In BF3, the GT 640 doesn’t seem to have a problem maintaining relatively consistent frame times. However, those frame times are clearly higher overall than on the other cards.

Happily, none of the cards spent 50 ms or more on a single frame. Once we lower our threshold to 33.3 ms, we see the Radeon once again does a better job than its rivals of avoiding unusually long frame latencies. Considering its average FPS rate is close to the monitor’s refresh rate, I think we can give this victory to the Radeon.

Power consumption

The 7750 sips less power than the GeForce GT 640 at idle—especially when the display goes to sleep. AMD’s ZeroCore tech switches the GPU to an ultra-low-power state and disables its cooling fan in that scenario. Under load, however, the GT 640 manages the lowest power draw of the bunch. I suppose that’s not entirely unexpected, given that the GT 640 also happens to have the tightest thermal envelope of the three.

Noise levels and GPU temperatures

The numbers from our decibel meter are all pretty close, but to my ear, neither the GT 640 nor the 7750 sounds particularly quiet. Their tiny fans both emit a sort of high-pitched hiss, while the bigger spinner on the GTX 550 Ti makes a fainter whooshing noise that’s much easier to tune out. That said, keep in mind our Radeon HD 7750 isn’t a retail offering; it’s a reference card from AMD, and most (if not all) of the models in stores have different coolers.

Zotac’s little single-slot cooler keeps the GeForce GT 640 nice and chilly. The reference AMD heatsink and fan on the Radeon HD 7750 don’t do such a good job, on the other hand. Here’s hoping retail versions of the 7750 have beefier coolers that keep temperatures lower.

Conclusions

Let’s round things out with a couple of our famous scatter plots. We’re laying average performance (based on the results from the games we tested) along the Y axis and prices along the X axis. The sweet spot will be the card closest to the top left of the plot, while the worst will be closer to the bottom right. We fetched prices from Newegg and Amazon.

We can also compile a value scatter plot out of our 99th percentile frame time data. For consistency’s sake, we’ve converted the frame times to frame rates, so desirable offerings are still at the top left.
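
The conversion itself is just the reciprocal of the frame time. A tiny illustrative sketch (values chosen to match the plot key earlier, not our measured data):

    # Converting 99th-percentile frame times into equivalent frame rates.
    def to_fps(frame_time_ms):
        return 1000.0 / frame_time_ms

    for ms in (16.7, 20.0, 33.3, 50.0):
        print(f"{ms:5.1f} ms  ->  {to_fps(ms):5.1f} FPS")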

I think it’s pretty clear the retail GeForce GT 640 doesn’t belong anywhere near its introductory $99 price point. AMD’s Radeon HD 7750 delivers substantially higher performance for the same amount of money, and it does so using silicon that’s not much larger or hungrier for power. For folks shopping for a latest-generation, 28-nm GPU in this price range, the choice is clear.

The GeForce GT 640 feels like so much wasted potential. I can’t help but think it would have been much more compelling had Nvidia paired the retail card with GDDR5 RAM instead of DDR3. As we saw a few pages back, the GK107’s peak rates are all in the same league as the 7750’s, so a GDDR5-equipped version of the GT 640 could conceivably keep up with its Radeon rival. Heck, it might even be a little quicker here and there. With DDR3, however, the Nvidia card is hopelessly hamstrung.

In the end, it’s likely we’ll see the GT 640 quietly and unceremoniously work its way down the price ladder to a more reasonable level—maybe, say, $60 or $70. At the same time, I’d be surprised if the GK107 GPU on this card didn’t eventually pair up with GDDR5 RAM in a faster budget GeForce better equipped to compete at $99. The chip seems to be capable, and Nvidia should be able to loosen the power envelope and boost the clock speeds.

Comments closed
    • rrr
    • 7 years ago

    OMG, nVidia, pick up the pace at lower end too. Sure, 670 owns, but think of customers aiming for slower cards.

    Looks like my next card will be a Radeon again, sth like passive 7750 or 7770, because it’s apparently humiliating for nVidia to show decent budget card.

    • kalelovil
    • 7 years ago

    Perhaps most disappointingly, the reality of the low-end retail GPU market is that more people will buy a 2GB DDR3 card than a 1GB GDDR5 card.

    • Meadows
    • 7 years ago

    I didn’t really follow the architecture changes of Kepler, and so I’m glad I was totally proven wrong about this card’s potential.

    A while ago I was beginning to be afraid that NVidia might actually release a card that has a lower price and a higher performance, [u<]simultaneously[/u<], compared to mine. Alas, my investment remains. Disappointing, nonetheless.

    • Farting Bob
    • 7 years ago

    Even for the usually terrible value for money sub $100 range, this is pretty damn bad value.

    If your budget means sub $100 to spend on a GPU, go to the used market, far far better performance even if things like power consumption are higher.

    • kcgoat
    • 7 years ago

    for $100 there are used video cards that will blow this away. nvidia is hung up in the stock market world and is perceiving that all its customers are dumb.

      • ColeLT1
      • 7 years ago

      This ^

      I sold my (sli) GTX 460 1gb to a friend with my I7 950 and some other stuff, and built a 3570k. I was going to get a GTX 470 or similar, but instead went to ebay and went back to 2 460s 🙂 ($105 each).

      • MrBojangles
      • 7 years ago

      Second that…i just picked up a used 5850 off Craigslist for only $50.

      • brucethemoose
      • 7 years ago

      +1

      Staying 1 or 2 generations behind gives you the best bang/$.

    • FormCode
    • 7 years ago

    I have a hunch that in 3 months tops this will become the GT640LE/SE and they’ll release a GDDR5 GT640. OR. the GT650 will be a GDDR5 640 with some extra ALU’s/ROP’s.

    • Silus
    • 7 years ago

    Pretty much confirms what I was expecting, ever since it was announced…The chip is quite capable, but with such a memory type attached to it, it suffers quite a bit as it starves for memory bandwidth. A version with GDDR5 would be much, much better…

    • Anonymous Coward
    • 7 years ago

    With that limited bandwidth, maybe they should have just dropped the GPU clock and made the whole thing passive…

    • codedivine
    • 7 years ago

    Nvidia’s use of DDR3 on the desktop part is baffling. I have a laptop with 650M, which is essentially this same card but with GDDR5 and mine has 64 GB/s of bandwidth as opposed to 23 GB/s here. Why not use the same configuration on the desktop too?

    • JustAnEngineer
    • 7 years ago

    [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814161417<]$110 +0sh[/url<] [i<]or[/i<] [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814102969<]$110 +5sh -10MIR[/url<] Radeon HD7750
    [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814133460<]$110 +1sh[/url<] GeForce GT640
    [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814162094<]$114 +7sh -20MIR[/url<] GeForce GTX550Ti
    [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814127664<]$125 +7sh -15MIR[/url<] [i<]or[/i<] [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814150598<]$135 +0sh -15MIR[/url<] Radeon HD7770

    Even though it costs $10 to $15 more, the Radeon HD7770 is the best value by far.

      • JustAnEngineer
      • 7 years ago

      Newegg’s got the Asus HD7770-DC-1GD5-V2 for [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814121632<]$140 -21 code "EMCNDHB45" -10MIR[/url<] =$109AR today.

      • swaaye
      • 7 years ago

      6850 seems like the best choice in this segment.

        • Kurotetsu
        • 7 years ago

        It has been practically since it came out. Which is somewhat depressing.

        • Bombadil
        • 7 years ago

        The GTX 460 has been a better choice than the 6850.

          • BobbinThreadbare
          • 7 years ago

          No DX11 or new GPU compute stuff though.

            • Deanjo
            • 7 years ago

            The GTX-460 can run all current forms of GPU computing just fine and is DX 11.

          • swaaye
          • 7 years ago

          They seem to be disappearing from inventories but I think you are right. As long as it’s the 256-bit edition.

        • brucethemoose
        • 7 years ago

        Which is why I’m still sitting on two of em.

        Though If you can find them on sale, used 5850s are beastly cards for the $.

    • flip-mode
    • 7 years ago

    This card is so overpriced it makes the Facebook IPO price look like a steal. Also, a card this slow should not have a fan.

    Conclusion: Epic poor value.

      • derFunkenstein
      • 7 years ago

      “A card this slow” would be put into perspective if they included a card that does come fanless most of the time, like a Radeon 6450 or GeForce 420. It’s faster than you think, but still far too slow for the $100 price.

        • flip-mode
        • 7 years ago

        True. Those others are not 28nm, but still true. There are also 6670 and higher passive cards, but it’s a much larger passive cooler.

    • odizzido
    • 7 years ago

    I guess Nvidia is hoping that they can sell a few of these at $100 before word gets around about how terrible they are.

    • Bensam123
    • 7 years ago

    Designer: …so what should we do for our new budget card?
    Nvidia: Well DDR 3 is cheap wholesale right now, why don’t we strap it to that to get rid of it?
    Designer: Wont the performance be abysmal?
    Nvidia: Who cares? People will buy it anyway as it has our name on it and all of our other cards in this price range have matched price/performance pretty well. It’ll raise our quarterly profits and our investors will love us for it.
    Designer: What about the hate?
    Nvidia: We’ll just do it right the next time and flip flop every generation. No one cares about budget cards anyway.

    Sorta makes you want to strap 6 of these to a SLI setup just for the hell of it! I know it’ll be terrible. ><

    TR really needs a review section of quirky, but unlikely setups. Like running six of these in SLI and seeing what the impact of such is on a PCIE bus (let alone if it works). I think it would be an interesting academic exercise.

      • Madman
      • 7 years ago

      Awww… no SLI connector 🙁

        • Bensam123
        • 7 years ago

        dude, these can do sli over the PCIE bus! They don’t need the connector. Get with the times bro.

      • bitcat70
      • 7 years ago

      LOL! Come on TR, can you do it?

      • Bensam123
      • 7 years ago

      I honestly want to see if it even works. Sixtuple sli with budget cards? That’s just bad ass.

    • Chrispy_
    • 7 years ago

    People were asking about this on the forums, THG had a ‘review’ out a few weeks back and it looked like a turd.

    After the usual exhaustive and in-depth TR analysis, Cyril has conclusively confirmed that yes, this is a turd.

    Even if it had GDDR5 at the same $99 price, it’ll probably lose out to the $89 7750 still.

    • ULYXX
    • 7 years ago

    Eh, i think id rather say “Goodbye” to little kepler.

    ¯\_(ツ)_/¯

    • Deanjo
    • 7 years ago

    Why is it so hard for reviews to test the one ^*#^%!*% interesting aspect on these cards and that is video playback and nvenc encoding?

      • Chrispy_
      • 7 years ago

      Because there are never any changes to video playback within the same product generation.

      There are no new features that I can recall, going back all the way to the original Fermi launch. Cards that aren’t powerful enough for hardware playback at 1080p stopped existing a while back. Even Intel IGP’s have made a decent job of it for the last two generations.

        • Deanjo
        • 7 years ago

        Umm sorry you are wrong. The video decoding engine is a new one that differs from Fermi's. The PureVideo engine, for example, can now handle multiple HD streams simultaneously, as well as 4k decoding and MVC support, something Fermi and older cards (with the exception of the GT-520) did not support. As well, they now have hardware-based encoding, which is not present in older cards (not CUDA encoding but a dedicated encoding engine like Intel's Quick Sync). It is a much more capable engine than in previous generations. Even the GT-520 engine is slightly different: while it can decode 4k video, it can only output it at 1080p resolutions. The current true Kepler cards are supposed to be able to output it at full 4k as well.

        [url<]http://www.anandtech.com/show/5969/zotac-geforce-gt-640-review-/4[/url<]

          • MadManOriginal
          • 7 years ago

          Chrispy_ mixed things up in his post but you obviously missed “within the same product generation” ie: this Kepler is no different from other Kepler.

          Now, if you wanted to suggest TR do video feature testing, go right ahead as you did in post # 11. I just might suggest you do it in a nicer way.

            • Deanjo
            • 7 years ago

            [quote<]Chrispy_ mixed things up in his post but you obviously missed "within the same product generation" [/quote<]

            No I didn't miss it, because it is still wrong. Case in point: GTX-580 vs GT-520. So, a) video decoding capabilities DO change in the same generation, and b) the engine has also changed since the original Fermi.

            • Chrispy_
            • 7 years ago

            The GT-520 plays back video perfectly well. If you are not satisfied with this you are in the 0.1% minority, which is why there is never any coverage of this feature.

            [i<]Nobody cares[/i<].

      • Bensam123
      • 7 years ago

      Perhaps because these capabilities are embedded in GPUs that are now available on almost every modern CPU in existence?

      You just expect features like that to be there and work too. I think pretty much every graphics card for the last four generations has hardware encoding and decoding capabilities.

        • Deanjo
        • 7 years ago

        Well you would be wrong as well. Any GPU based encoding before was done via shader based encoding. This is not the case here. The only thing remotely close to it is intels Quick Sync which is also a dedicated hardware encoder. Plus show me an integrated graphics solution that can decode 4k video. Good luck even getting it to decode brute force on a i7 extreme.

          • Bensam123
          • 7 years ago

          Which is still done on the GPU, which is then being offloaded from the CPU. It doesn’t matter if it has a specific instruction set for it or not as long as it actually does it.

          dude, if someones hooking up a 4 megapixel monitor to a sub-$100 graphics card they deserve to be without their hardware accelerated video.

            • Deanjo
            • 7 years ago

            There are limitations as to what a shader based solution is vs dedicated encoding hardware. If you have ever done any gpu video encoding you would see that only a very limited set of functions are carried out on the GPU and the CPU still does a lot of heavy lifting. With dedicated encoding hardware it drops the encoding time, carries out more functions on the hardware and frees up the CPU tremendously.

            As far as the 4k decoding goes, pretty much any older card will choke the cpu just trying to play it back (and very badly). Decoding capability affects not only media playback for your favorite movie but also items like video editing of 4k video. 27″ and 30″ monitors with > 1080 resolutions are not uncommon now, but until the latest gen of video decoding capabilities even scaled playback of 4k video was next to impossible. If a person isn't gaming but does a lot of QHD work there would be absolutely no reason for spending the extra cash on a high end gaming card, as its strength would never be used.

            Then there is also the cases like live transcoding. Because Windows 7+ does not allow accessing of video rendering to a non logged in user you cannot use a GPU shader based transcoding for streaming purposes. With a dedicated encoder however you can have access to those functions.

            Example: I have a hauppauge colossus capture card. If I have a logged in user I can use shader based encoding (despite that the quality on all shader based encoders absolutely sucks) for rendering to a target device stream. If I do not have a logged in user, that cannot be done. Windows does however allow a dedicated encoder to transcode such as Quick Sync and nvenc even without a logged in user. Without that, to transcode for multiple devices for multiple streams it is purely up to the CPU to do all the heavy lifting which bogs the system down quite a bit (not to mention uses an enormous amount of power).

            • Bensam123
            • 7 years ago

            Sure? If someone is doing Pixar level editing on a $100 video card something is a miss.

            As far as 4k monitors go. 4k just isn’t above 1080p, it’s roughly 2x the pixels of 1080p. They run for about $1000 a pop. Not to mention no one makes 4k encodings. Heck even 1080p encodes aren’t all that popular. Most people watch 720p unless they’re watching a bluray.

            I at no point argued the technology. Just no one is going to use it, especially on a $100 video card. I don’t think there is anyone that would use it even on a $500 video card unless they’re working with source material for a movie. Heck most movies are shot at 1080p at the source when working with digital over 35mm.

            So insinuating that this video card is superior because it can do 4k encodes/decodes and video cards for the past four generations are inferior because they can [i<]only[/i<] do 1080p encoding and decoding is a apt comparison at best. You're more likely to see games outputting at 4k before movies will ever get close to that level and there is no chance in hell this video card could drive a game on its lowest settings at 4k.

            • Deanjo
            • 7 years ago

            In case you missed it Youtube even has 4k videos to view. Yes 27″ and 30″ are not full 4k monitors yet but they are still substantially larger in resolution (2560×1600 vs 1920×1080). Even scaled down decoding of 4k video to a smaller display is still beyond the capabilities for all but the latest generations of cards.

            It is far more likely a case that a person that would purchase such a card like the GT-640 is going to be doing so for regular desktop work and perhaps video work. Video editing is rarely done with graphic card powerhouses, and 4k prosumer video cameras are becoming more and more affordable ($5000 for a 4K JVC). Even if you are not rendering to 4k output in your final production, having a 4k recording will still improve your end 1080 rendering over a 1080p recording to 1080 rendering.

            [quote<]4k just isn't above 1080p, it's roughly 2x the pixels of 1080p.[/quote<]

            Actually 4k is 4x the pixels of 1080p, not two.

            3840x2160 = 8,294,400 pixels
            1920x1080 = 2,073,600 pixels
            2,073,600 x 4 = 8,294,400

            The scaling factor is 2, not the number of pixels.

            • Bensam123
            • 7 years ago

            Yeah, I guess I must’ve missed it between 240, 360, 720, and sometimes 1080p videos… Great tech demos. And movies are still filmed at 1080p for digital cinematography, with 720p being the choice encode for video files at the moment.

            $5000 isn’t affordable. Once again you’re trying to pair two extremes together. Someone is going to buy a $5000 video camera, a $1000 monitor, and then decide they want a $100 video card?

            Video editing is rarely done with a fast video card? A faster video card reduces encoding times even if it is hardware accelerated. The acceleration depends completely on the graphics hardware itself, so this card is still going to be a PoS even if it’s accelerated. You’ll still buy a faster video card to do editing, more then likely a crossfire/sli array at that. The bigger the resolution the more time it’ll save.

            That just leaves watching 4k content which is extremely scarce… It’s a great niche piece and it’s neat hobbyists are filling their HDs with gigabytes of visual data, but not something I’d ever worry about anytime soon. Definitely not something I would purchase a video card solely based off of.

            • Deanjo
            • 7 years ago

            [quote<]And movies are still filmed at 1080p for digital cinematography, with 720p being the choice encode for video files at the moment.[/quote<]

            Sorry to bust your bubble but movies are not filmed @ 1080. They are rendered and encoded to that for your home entertainment devices. Even early movies pre 1950 have equivalents that are 2,160 × 2,970. 1080 would look absolutely horrible on the big screen.

            [quote<]$5000 isn't affordable. Once again you're trying to pair two extremes together. Someone is going to buy a $5000 video camera, a $1000 monitor, and then decide they want a $100 video card?[/quote<]

            Five thousand is affordable for the prosumer. Remember that even the old VHS camcorders when they came out were $3000+. A $100 video card is very realistic in video editing, especially in the prosumer market, because gaming power has no real bearing on the video playback capabilities.

            [quote<]Video editing is rarely done with a fast video card? A faster video card reduces encoding times even if it is hardware accelerated.[/quote<]

            Big huge hunk of BS in that statement unless you are using shader based encoding, which again is absolutely horrible in terms of quality. Where you will get acceleration is with filtering and composited effects, not the encoding, as no sane person uses GPU encoding as the quality just plain sucks. As a separate independent encoding engine, the number of shaders found on the card has no more bearing on performance than it would on the quality of the audio stream.

            [quote<]The acceleration depends completely on the graphics hardware itself, so this card is still going to be a PoS even if it's accelerated.[/quote<]

            Since the encoding engine is not dependent on the computing capabilities, that is completely false. Again, it is a separate dedicated encoding engine.

            [quote<]You'll still buy a faster video card to do editing, more then likely a crossfire/sli array at that. The bigger the resolution the more time it'll save[/quote<]

            Absolutely not. SLi/Crossfire is completely 100% useless in video editing. In fact it is a complete hindrance that introduces all kinds of artifacts from synchronization and timing. Even if you decided to use two cards for shader based rendering, you would not be using them in a sli/crossfire fashion but using them separately as two independent rendering devices, just like you do not use SLi or crossfire when doing other computing tasks like folding.

            [quote<]That just leaves watching 4k content which is extremely scarce... It's a great niche piece and it's neat hobbyists are filling their HDs with gigabytes of visual data, but not something I'd ever worry about anytime soon. Definitely not something I would purchase a video card solely based off of.[/quote<]

            This 'hobbyist' currently has 12 TB of 4k footage in a few months. You don't have to be a professional to enjoy high end devices. Just take a look at the die hard gamers out there. They have no problem spending huge amounts of cash for just gaming FFS.

            • Ryu Connor
            • 7 years ago

            [url<]http://youtu.be/m-AeELuIXt0?hd=1[/url<] Set the video to original. Right click the video and select show video info. Go fullscreen and you'll see that the video is being rendered in hardware, but decoded in software. Set the video to 1080p. You'll see that the video is being rendered in hardware and decoded in hardware. I'm doing this on a GTX 590 and a Dell 3008wfp. The GF110 is apparently unable to handle 4K video decoding. You must have a 2560x1440 or 2560x1600 screen for this test to work.

            • Bensam123
            • 7 years ago

            [url<]http://www.youtube.com/watch?v=D30a61m5byk[/url<] That's the one I actually watched. It pegs all four cores of my i7-860, but I'm able to play it rather fluidly. I have a Radeon 6970 as well.

            • Ryu Connor
            • 7 years ago

            YouTube still reports software decoding (CPU) for that clip.

            That the CPU can handle it fluidly is semi-irrelevant to me. More intriguing to me that high end GPU hardware doesn’t have the capability.

            • Bensam123
            • 7 years ago

            Where do you see information on that? I was just looking at my processor usage while watching clips.

            I’m not sure, but I don’t think the relevancy to you matters. :l

            • Ryu Connor
            • 7 years ago

            As I noted in my first post. Right click on the video in YouTube and select “Show video info” from the context menu.

            [url<]http://youtu.be/p_sD_5HAJPE?hd=1[/url<]

            • Bensam123
            • 7 years ago

            Neat…

            • Krogoth
            • 7 years ago

            Because, the demand for 4K is pretty much non-existent. Just like 1080P was back when it was the new hotness.

            4K will only start to matter to video card guys once 4K become obtainable to the masses.

            • Bensam123
            • 7 years ago

            No bubble to burst sir. I’m also talking about digital film, not 35mm converted to digital. It’s sourced at 1080p for most movies.

            [url<]http://en.wikipedia.org/wiki/Digital_cinema#Technology[/url<]

            Scroll down to digital capture.

            I'm sorry, I'm not going to believe someone who will buy a $5000 camera will buy a sub-$100 bargain bin graphics card to match with it. $5000 is well out of the average consumer's price range as well. If you were talking $1000 that might be plausible for a hobbyist. You can buy a nice used car for that price.

            So, you're telling me that no one uses hardware encoding for video? I actually thought that was one of the points you were gunning for. Well, regardless of the answer, it adds to my point that no one would care what this encodes at compared to the last four generations of GPUs.

            I never said you needed to be a professional. You definitely need to be loaded with money though, and that's the complete opposite of where this graphics card is aimed. I'm a die hard gamer and I'm pretty frugal. Even then, 99% of gamers aren't going to buy a SLI/Xfire setup with $500 video cards. And that's still well under the price range of a $5000 camera.

            • Deanjo
            • 7 years ago

            [quote<]I'm sorry, I'm not going to believe someone who will buy a $5000 camera will buy a sub-$100 bargain bin graphics card to match with it. $5000 is well out of the average consumer's price range as well. If you were talking $1000 that might be plausible for a hobbyist. You can buy a nice used car for that price.[/quote<]

            Dude there are a ton of people that buy Macs with less powerful video cards, like the Mac Pro, for video editing. I never said it was the average hobbyist, just like the average tv buyer does not buy $5000 TV's or gamers that spend 5+ k on a gaming machine, but there is a large enough market for them to make such items for the enthusiast consumer. Hell, just look at some of the stereo systems kids put in junk cars and how much they spent.

            [quote<]So, you're telling me that no one uses hardware encoding for video? I actually thought that was one of the points you were gunning for, well regardless of the answer it adds to my point that no one would care what this encodes at compared to the last four generations of GPUs.[/quote<]

            Get this through your head: no sane person would use shader based acceleration. NVenc is not shader based acceleration. There is a huge difference but you can't seem to grasp that.

            [quote<]No bubble to burst sir. I'm also talking about digital film, not 35mm converted to digital. It's sourced at 1080p for most movies.[/quote<]

            Guess you missed looking at the specs of those listed cameras and the "dated info" tag.

            • Bensam123
            • 7 years ago

            Yeah, you can spend a significant amount on a hobby, that usually isn’t just ONE thing. Comparing purchases of multiple items to one item, which you can spend even more money on isn’t the same. So you think along with the $5000 camera, $1000 monitor, and the $600 array of TB drives to store the information, they’re still going to buy a $100 video card? Not to mention a rather top of the line processor they’ll need to encode (since it will still more then likely be faster).

            The whole point I was trying to make is starting to get lost in all the stipulations (which I’m guessing was your point). This video card would never be bought solely for it’s ability to decode 4k video nor do 99% of people care about that. 4k video is rare, the video card can’t encode, the overall system would be rather expensive (the large array, fast processor, and monitor) especially when comparing this video card to it.

            Not to mention you can still play 4k video fluidly on a semi-recent processor in software, which you need anyway to encode in a reasonable amount of time. Mine is a i7-860, which is roughly three years old for instance. It makes your original post seem very niche. It probably wasn’t included because most people don’t care!

            (BTW I haven’t voted down any of your comments.)

            • Deanjo
            • 7 years ago

            [quote<]Yeah, you can spend a significant amount on a hobby, that usually isn't just ONE thing. Comparing purchases of multiple items to one item, which you can spend even more money on isn't the same. So you think along with the $5000 camera, $1000 monitor, and the $600 array of TB drives to store the information, they're still going to buy a $100 video card?[/quote<]

            You buy what is needed; anything more is a waste of money. People buy 5k gaming laptops, that is one item. People buy 5k TV's and projectors, those are one item. People spend 5k on snowmobiles and ATV's. There is a ton of stuff that people spend 5k on for their hobbies. Hell, a good fanjet engine for a model airplane can easily cost twice as much. Since there is no reason to spend extra money on a card that does gaming, it would be a complete waste of money to do so. But with that money saved a person can go and spend that extra money on a larger monitor etc. A person doesn't go out and buy a prosumer audio card if they need a high end workstation for CAD. You buy the product that will do the job that you want it to do.

            [quote<]The whole point I was trying to make is starting to get lost in all the stipulations (which I'm guessing was your point). This video card would never be bought solely for it's ability to decode 4k video nor do 99% of people care about that.[/quote<]

            99% of people don't care about the gaming performance on the video card either. Who cares about the performance is the gamers, which are also a very small minority. The GT 520 sells very well. Why? Exactly for its video decoding abilities.

            [quote<]4k video is rare, the video card can't encode, the overall system would be rather expensive (the large array, fast processor, and monitor) especially when comparing this video card to it.[/quote<]

            You are wrong, this video card CAN encode with its dedicated engine. In fact it can accelerate decode and encode without even taxing the CPU, which has basically straight I/O to worry about. CPU becomes a non-factor. Having a dedicated hardware encoder is a very powerful thing. Take a look at what resolutions cellphones can encode at nowadays. It has nothing to do with the power of the CPU but has everything to do with the dedicated encoding engine that accompanies the CPU. Do you really think that a Tegra 2, for example, could do realtime decoding and encoding of 1080p relying on its cpu alone? Not gonna happen. But with its dedicated engine it can. My GT 520 sings through 4k playback decoding coupled to a very old X2 4200 with less than 15% cpu load [i<]at its lowest powerstate[/i<], something that the GTX-580 and 1090T struggle with (not to mention kicking everything into uber power suck-down load). Even the Quadro FX 3800 at work can't play back the 4k streams without some serious cpu power behind it (and it still drops frames).

            [quote<]Not to mention you can still play 4k video fluidly on a semi-recent processor in software, which you need anyway to encode in a reasonable amount of time.[/quote<]

            Again wrong, you do not need powerful processors if you have a dedicated processor. This is why a dinky little SB i3 with Quick Sync can fly by an i7 doing straight brute-force CPU encoding. Yes there is some trade-off with quality currently but it is getting very close to a CPU encode.

            Let's even take 4k out of the equation. Let's look at encoding purposes for the average joe, like transcoding for their portable device. They could have an ancient cpu and board, but by popping in a GT 640 they can substantially reduce the amount of time to encode their media for their iphone/ipad/android etc. Something that used to take hours to encode with their cpu would take minutes to complete. Even a semi-modern i3 sees huge transcoding gains (we will use my new cheapo laptop that has an i3 2350M): to transcode a 90 minute standard-def video, it takes 6 minutes to encode to an iPhone format. Now switch it over to pure cpu in the lowest quality "fast" mode and it takes 38 minutes to render. Enable Quick Sync on the same video and it takes 6 minutes again, and it isn't bogging the system down or chewing through power.

            • MadManOriginal
            • 7 years ago

            You seem to have enjoyed discussing the 4k video features of this card. No doubt they are useful to a certain small number of people (much smaller than the number to whom video games matter I’d say.)

            Neat thing about the interwebs though…there are actually multiple websites! I’m sure you’ve already seen it but if one website doesn’t provide the information you are looking for you can always check others: [url<]http://www.anandtech.com/show/5969/zotac-geforce-gt-640-review-[/url<]

            • Deanjo
            • 7 years ago

            Oh I saw the Anandtech review; unfortunately it did not cover the encoder. I just expected a more in-depth article from TR than what it put out.

            • SPOOFE
            • 7 years ago

            [quote<]I'm sorry, I'm not going to believe someone who will buy a $5000 camera will buy a sub-$100 bargin bin graphics card to match with it.[/quote<] How come they have to buy a $5000 camera? Why all this focus on a $5000 camera? You do realize one can get a camera for like 50 bucks, right?

            • sheltiephil
            • 7 years ago

            So Deanjo, do you think that for HD video editing with Adobe Premiere Elements 10 or Cyberlink PowerDirector 8 Deluxe the 2GB Nvidia Geforce GT640 card is worthwhile, even if you have the Core i7 3770K CPU with HD4000 Intel graphics and 16Gb 1600mHz RAM? I have 64bit Win 7 and programs on an OCZ Agility 3 SSD and data on a 1Tb Seagate 7200RPM SATA 3 HDD. The GT640 seems to be performing fine.

            Is the memory bandwidth of 28.5GB/second a severe limitation in video editing and encoding? Are the 384 CUDA cores, the dedicated hardware encodement and syncing more decisive in video editing? Is the limited bandwidth with DDR3 memory only a consideration for serious gamers?

            Do you think that Nvidia have really come up with a cool running card with low TDP and yet strong attributes for video playback, editing and encodement? Do these include CUDA cores, texture filters, 28nm architecture, nvenc and dedicated encodement? (Excuse my newby ignorance).

            Thanks, Deanjo, for discussing video aside from gaming with your strong knowledge. I’ve googled a lot, and haven’t been able to find one other in-depth, knowledgable discussion of graphics cards that doesn’t focus on gaming.

          • rootheday3
          • 7 years ago

          Ivy Bridge's iGPU handles 4k decode and 4k displays (via DisplayPort).

      • flip-mode
      • 7 years ago

      Bensam has a fair enough point, which surprises me after the conversation I had with him the other day. “Prosumer” with $5000 video cameras and probably $1500-5000 video editing stations are probably going to be working with FirePro or Quadro video cards. At the same time, and according to what you’re saying, this card is now a viable option for a high-end video editing workstation. That seems crazy, but if true then that’s great.

      FWIW, TechReport has always focused on gaming. It would be nice, at times, to see that focus broaden, but a broader focus isn’t necessarily useful to the majority of the audience. How many readers are editing 4k video?

        • Chrispy_
        • 7 years ago

        Heh. Here is a quiz!

        [b<]ONE OF THESE DOES NOT BELONG:[/b<]

        • $100 gaming card
        • Flamewar over useless, 'nobody-cares' features.
        • $5000 video cameras.

        Please send answers on the back of a $10 bill to <my home address>

        Competition closes when anyone actually buys a $5000 video camera, and makes a fool of themselves moaning about how their $100 gaming card doesn't have quite enough features to do what they were expecting in hardware.

          • ludi
          • 7 years ago

          I will happily send you a $10 bill. Please remit $10 in order to pay your $10 bill.

          • MadManOriginal
          • 7 years ago

          oo, oo, I know – trick question! The answer is ALL THREE.

            • Bensam123
            • 7 years ago

            You could’ve included the sentence at the end too.

          • SPOOFE
          • 7 years ago

          I have a $800 video camera that definitely challenges the quality of several $5000 camera solutions, and am certainly concerned about the video rendering element of this video card. Heck, even at SD resolutions a powerful rig can begin chugging once you apply just a handful of layers and effects.

          What do you edit?

        • Bensam123
        • 7 years ago

        Being on the opposite end of a argument often times makes it hard to see what the other person is arguing. You aren’t the only person who has said similar things about me.

        I don’t think TR focuses exclusively on gaming benchmarks. This is a small side review done by Cyril. The all encompassing architecture reviews include parallel programming benchmarks, rendering, and even folding results. I’m sure you’ve seen these though as you’re reading the same articles as I am.

      • derFunkenstein
      • 7 years ago

      That’s interesting?

        • Deanjo
        • 7 years ago

          More interesting than a card that no sane person would buy for any kind of serious gaming.

          • flip-mode
          • 7 years ago

          True enough. There’s a strong case that low end card benchmarks should focus on entirely different matters than gaming.

            • Bensam123
            • 7 years ago

            Like price… but how do you rationalize price? You need to add tangible metrics to that. People are more likely to play light games on this card than to encode and watch 4k videos. I’d say that by a couple orders of magnitude as well.

            • flip-mode
            • 7 years ago

            I agree with Deanjo that the gaming capabilities of this card are not its most interesting capabilities. I guess I’m not impressed if there are still a huge number of boobs out there that would consider buying this card for any kind of gaming. After all, this is your own logic we’re using here. It’s the same logic you used earlier – a “prosumer” should never consider this card for processing video, and by the same logic, not even a casual gamer should consider this card for gaming.

            And that’s also saying I agree with you that no prosumer would really want this card, so its video encoding prowess seems a little pointless to me.

            So we’re left with a card that falls flat at gaming but still doesn’t measure up to prosumer standards. It’s a loser.

            • Bensam123
            • 7 years ago

            Sure… You’d be willing to say of the handful of people who would actually buy this card (by mistake or otherwise), the results would be more generalizeable to light gaming over 4k video encoding?

            There are a lot of neat things in the world, that doesn’t mean they always have a realistic application though.

            • flip-mode
            • 7 years ago

            LOL, you’ve stopped making sense again. I don’t even know what that post means. I think the bottom line is Bensam is always 100% right… and by 100% right I do mean that you [i<]will[/i<] argue your point, no matter how ridiculous or trivial, until the sun burns out or whomever you're talking to finally realizes the folly of ever having entered the conversation in the first place: Such is the case with me, presently. Adios.

      • can-a-tuna
      • 7 years ago

      The only interesting aspect of this card is that will it blend.

        • derFunkenstein
        • 7 years ago

        I -1 you on all AMD or nVidia related articles just out of principle. :p

          • flip-mode
          • 7 years ago

          I + you for that.

            • Meadows
            • 7 years ago

            “I + you”, how romantic.

      • sheltiephil
      • 7 years ago

      Speed and image clarity with GT640 for video editing, rendering, encodement, capture & playback

      So Deanjo, do you think that for HD video editing with Adobe Premiere Elements 10 or Cyberlink PowerDirector 8 Deluxe the 2GB Nvidia Geforce GT640 card is worthwhile, even if you have the Core i7 3770K CPU with HD4000 Intel graphics and 16Gb 1600mHz RAM? I have 64bit Win 7 Home Premium and my programs on an OCZ Agility 3 SSD and my data on a 1Tb Seagate 7200RPM SATA 3 HDD. The GT640 seems to be performing fine.

      Is the memory bandwidth of 28.5GB/second a severe limitation in video editing and encoding? Are the 384 CUDA cores, the dedicated hardware encodement and syncing more decisive in video editing? Is the limited bandwidth with DDR3 memory only a consideration for serious gamers?

      Do you think that Nvidia have really come up with a card that runs cool with low TDP and yet strong attributes for video playback, editing and encodement? Do these include CUDA cores, texture filters, 28nm architecture, nvenc and dedicated encodement? (Excuse my newby ignorance).

      Thanks, Deanjo, for discussing video aside from gaming with your strong knowledge. I’ve googled a lot, and haven’t been able to find one other in-depth, knowledgable discussion of graphics cards that doesn’t focus on gaming.

    • crabjokeman
    • 7 years ago

    This card is only compelling for Linux users. If you’re a Penguin lover and your primary interest is video playback, this card will serve well, but so will cheaper (and passively-cooled) nvidia cards if you don’t need VDPAU D: [url<]https://en.wikipedia.org/wiki/Nvidia_PureVideo#The_Fifth_Generation_PureVideo_HD[/url<]

      • Deanjo
      • 7 years ago

      The GT520 however offers those same vdpau d capabilities and offers it in passive cooling.

      • dpaus
      • 7 years ago

      [quote<]only compelling for Linux users[/quote<] And apparently [url=http://www.phoronix.com/scan.php?page=news_item&px=MTEyNTE<]not even for them[/url<].

    • JdL
    • 7 years ago

    Memory bandwidth is abysmal. Just by looking at the specs we know it’s a huge step backward. Why are we looking at this product again? Just to get a sense for how laptops will be performing with it?

    • jdaven
    • 7 years ago

    I still don’t understand this product. Can someone explain to me how exactly going with DDR3 was the better choice over GDDR5?

      • Game_boy
      • 7 years ago

      Suppose they had lots of this specific chip with good yields, and very little of the chip that was meant to be there. To make it fit in this performance class, they could either a) downclock and fuse shaders, or b) hold it back with weaker memory. b) /also/ saves them money on memory, so they did that.

      • Coulda
      • 7 years ago

      Shortage of chips, possibly. The DDR3 version appeals more to system integrators because cheaper 2GB DDR3 looks better than 1GB GDDR5 if their customers don’t know the difference between memory types. Nvidia chose to limit the card spec to what they think will sell better. GDDR5 will come soon, I hope (it exists as an OEM version).

        • MadManOriginal
        • 7 years ago

        Ironically, the system integrator OEM version does have GDDR5.

        [url<]http://www.geforce.com/hardware/desktop-gpus/geforce-gt-640-oem/specifications[/url<]

      • slaimus
      • 7 years ago

      DDR3 has much lower power consumption than GDDR5, so that’s part of how they got the TDP lower.

      • continuum
      • 7 years ago

      Mass stupidity or really, really good drugs?

      Ok in all seriousness, DDR3 is cheaper than GDDR5… but yeah, not at this price. If I could get a GDDR5 version of this card for a low-end box then it might make sense, but then low-end boxes usually don’t need discrete graphics.

    • DancinJack
    • 7 years ago

    Goodness. There isn’t much to like about this little guy yet. Needs to be like 65 bucks to make it worth a darn.
