In the lab: EVGA’s GeForce GTX 1050 Ti Superclocked graphics card

Performance results for Nvidia's GeForce GTX 1050 and GTX 1050 Ti hit the wires today, but we didn't manage to get our hands on either of those cards before the embargo lifted. Happily, EVGA surprised us with a GeForce GTX 1050 Ti this evening. The company sent along one of its GTX 1050 Ti Superclocked cards for us to put through its paces, and we'll be getting it in our test rig ASAP.

The GTX 1050 Ti SC needs nothing but slot power to do its thing, and it's just 5.7" (14.5 cm) long, making it a seemingly perfect candidate for drop-in gaming duty in that prebuilt HP or Dell productivity machine that might be lying under the desk at home. Even with the GTX 1050 Ti's 75W thermal envelope, EVGA found room to push this GP107 chip to 1354 MHz base and 1468 MHz boost speeds, up from Nvidia's reference 1290 MHz base and 1392 MHz boost figures. We're eager to see how this mighty mite performs.
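
A quick back-of-the-envelope check of that factory overclock, as a minimal Python sketch (the clock speeds are the ones quoted above):

```python
# Factory overclock uplift for EVGA's GTX 1050 Ti SC, computed from
# the reference and Superclocked clock speeds quoted above (in MHz).
ref_base, ref_boost = 1290, 1392   # Nvidia reference clocks
sc_base, sc_boost = 1354, 1468     # EVGA Superclocked clocks

base_pct = (sc_base - ref_base) / ref_base * 100
boost_pct = (sc_boost - ref_boost) / ref_boost * 100
print(f"base clock uplift: {base_pct:.1f}%")    # ~5.0%
print(f"boost clock uplift: {boost_pct:.1f}%")  # ~5.5%
```

In other words, EVGA's factory tune is worth about a 5% clock bump over reference, all within the same 75W slot-power envelope.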

Comments closed
    • synthtel2
    • 3 years ago

    Kudos to EVGA for hooking TR up with the test hardware Nvidia won’t.

      • Leader952
      • 3 years ago

      Nvidia is not releasing a reference GTX 1050, and you expect Nvidia would screw over its board partners by seeding review sites with one partner’s card while excluding the others?

        • synthtel2
        • 3 years ago

        Admittedly, I haven’t been following how this works for other reviewers lately (I don’t read others besides occasionally Anandtech for CPU stuff), but both teams used to make sure reviewers got what they needed to have content up on launch day, and it was always clear that it was the doing of AMD or Nvidia, not their board partners. That still applied even if there wasn’t a reference board and the chipmakers had to pass along a board with some other company’s name on it.

        Recent events, in contrast to that, give a very strong impression that Nvidia is in IDGAF-mode and TR only has this card because someone at EVGA has a clue.

    • jihadjoe
    • 3 years ago

    Nvidia seems to be artificially limiting the 1050/1050 Ti to a maximum of 2002 MHz when overclocking, despite the silicon being capable of going higher.

      • DrDominodog51
      • 3 years ago

      Is it a BIOS or hardware limit?

        • jihadjoe
        • 3 years ago

        [url=https://www.techpowerup.com/reviews/MSI/GTX_1050_Gaming_X/30.html]TPU[/url] [url=https://www.techpowerup.com/reviews/MSI/GTX_1050_Ti_Gaming_X/30.html]says[/url] it's done by the driver.

    • LoneWolf15
    • 3 years ago

    Once again, a request to test HEVC (H.265) and VP9 hardware video decoding. This is a big deal for anyone who has found iGPUs and current budget GPUs to be lacking in their HTPC rig.

    I would gladly buy the non-SC version of this in a heartbeat if I find it does full 10-bit (not just 8-bit) decode in hardware, as well as VP9.

    • Mr Bill
    • 3 years ago

    Another Vial – Indeed. I approve of the inclusion of a WotC Magic card in the teaser photo as foreshadowing of the review.

    • hasseb64
    • 3 years ago

    A comparison against the latest Iris Pro would be nice; the benefit of a discrete GPU this weak should be minimal.

      • vargis14
      • 3 years ago

      This will blow away Iris IMHO

      • Jeff Kampman
      • 3 years ago

      I think you severely underestimate the potential of these cards. The GTX 1050 appears to be about as powerful as a GTX 950; the 1050 Ti is even faster still. Intel’s integrated graphics are nowhere close.

      There’s a dearth of useful benchmarks of Iris Pro out there, but using the admittedly inadequate measure of 3DMark, even a GTX 750 Ti seems much faster than the Iris Pro 580 graphics in the Skull Canyon NUC. If that’s true, these cards will be a huge upgrade over Iris Pro.

        • vargis14
        • 3 years ago

        Also, Nvidia’s extra PhysX features and generally better image quality over Intel will guarantee it is well worth the upgrade over Iris. Iris graphics is basically for laptops and AIOs, BUT that 64MB of eDRAM on die could help gamers with a dedicated card in some games on a custom build, if you can get one that is not soldered onto a motherboard, that is. But I do not feel it would be worth the price increase over other mainstream Intel CPUs, IMHO again.

        I am patiently waiting for AMD’s Zen CPUs… also praying they can compete with and beat Intel on some levels. That said, I will not hold my breath in anticipation.
        IMHO IMHO IMHO 🙂

        • christos_thski
        • 3 years ago

        I agree with Jeff here. Moreover, the Iris Pro is practically a useless metric, because Intel, in its wisdom, has seen fit to include it only on high-end CPUs instead of the low-end CPUs it might make sense on.

        • hasseb64
        • 3 years ago

        Still very weak for any 4K gaming, which is around the corner.
        Can’t see any reason to buy a weak discrete card like this; total waste.
        Better to buy a used computer if money is the problem.

          • Leader952
          • 3 years ago

          First you want a comparison against the Iris Pro. Then, when it is pointed out that the GTX 1050 Ti will blow away the Iris Pro, you change your message to say that a low-end card is not good for 4K gaming.

          Did you think that the Iris Pro was good for 4K gaming?

          • Ninjitsu
          • 3 years ago

          obvious troll is obvious.

      • praxum
      • 3 years ago

      I have long looked for one number to compare cards, and by no means is this perfect, but the “3DMark GPU Score” on Futuremark’s website can give a good idea for GPU1-vs.-GPU2 comparisons. It does not capture all the nuances of what a lower or higher CPU clock speed or core count can do, but it is a reasonable basis for comparison.

      As a proxy comparison, Futuremark has the Iris Pro 6200 at a 2380 3DMark GPU score, and since there are no GTX 1050 numbers yet, the GTX 950 at about 8500. That is roughly 3.57x the performance, at a minimum.

      That should put into perspective how much faster the card is.

      P.S. Be aware when looking up 3DMark numbers that you can easily compare the wrong numbers to each other.

      P.P.S. I have an EVGA 1050 Ti SC Gaming on the way to replace a GTX 550 Ti.

      Sources:
      [url]http://www.futuremark.com/hardware/gpu/Intel+Iris+Pro+Graphics+6200/review[/url]
      [url]http://www.futuremark.com/hardware/gpu/NVIDIA+GeForce+GTX+950/review[/url]
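
As a quick sanity check on the ratio arithmetic in the comment above (a minimal sketch using the scores quoted there, not fresh measurements):

```python
# Proxy comparison from the quoted Futuremark 3DMark GPU scores.
iris_pro_6200 = 2380  # Intel Iris Pro 6200
gtx_950 = 8500        # approximate score for the GTX 950

ratio = gtx_950 / iris_pro_6200
print(f"GTX 950 is roughly {ratio:.2f}x the Iris Pro 6200")  # ~3.57x
```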

    • Anovoca
    • 3 years ago

    Hi Jeff, might I ask what cards you will be testing this against in the benchmarks? Also, if you weren’t planning on doing so already, could you throw some 750 Ti numbers into your charts? Seeing as the 1050 Ti is the first card to fit this particular price point since the 750 Ti, I feel there are a lot of others like me looking to see just how much of an improvement it offers over the past budget offering.

    Edit: Not to mention that the 750 Ti is the third most actively used GPU on the market, according to the most recent Steam numbers 🙂

      • christos_thski
      • 3 years ago

      I agree with this. Testing against older cards is more useful than testing against newer ones with a budget GPU such as this. More users are weighing a potential 1050 upgrade against 750 Tis and 7870 Radeons than against newer, faster cards. Please try to include older GPUs!

        • DPete27
        • 3 years ago

        I’ve [i]usually[/i] been able to compare to legacy cards by interpolating between 2-3 different TR reviews, but yes, it would be nice if we could get at least one card that’s a couple of generations old in each review. It just helps tie the new card review back to reviews of older cards.

          • Anovoca
          • 3 years ago

          I’ve done this a few times, but the older the review, the less accurate the data is relative to current measurements. Some cards get better over time with driver optimization, while other cards get worse for arguably the same reason. Not to mention that changes in testing methods/equipment/software mean the results will rarely align, and if you are looking to put a % of performance gain behind it, you aren’t going to get close to reality.

            • DPete27
            • 3 years ago

            Very true, just take the Fury and Fury X performance increase noted in the recent GTX 1060 review.

      • Jeff Kampman
      • 3 years ago

      Everything in the Radeon RX 460 review will likely serve as a starting point.

        • Anovoca
        • 3 years ago

        Perfect. Thanks!

        Edit: I was worried the performance gap might be too wide to warrant (in your opinion) even posting a comparison to the older Maxwell offering. But even if that is the case, most people will want to see that in hard numbers for peace of mind before clicking that “Add to Cart” button 🙂

    • TravelMug
    • 3 years ago

    Would be nice to see some 380X results in there as well. There are some in various reviews, but they seem very inconsistent and weird. In some reviews/games, the 1050 Ti seems faster than the 380 or 380X; in others, not so much.

    And I agree with the comments that say some reasonable settings should be used for the games. My take would be to use the second-best settings: for example, Very High instead of Ultra, or High instead of Very High. That makes the most sense with these cards.

    • Ninjitsu
    • 3 years ago

    1080p benchmarks plz

    • robertsup
    • 3 years ago

    Does it use a standard PCB or a custom one? Important because of third-party custom coolers.

      • Jeff Kampman
      • 3 years ago

      There is no standard design for GTX 1050 boards, to my knowledge. Nvidia left implementation entirely up to its board partners.

        • Shobai
        • 3 years ago

        Man, I hope that someone takes up the challenge to produce a half height version of something this generation…

    • dragmor
    • 3 years ago

    Can you please include some older midrange cards like the GTX 760 or R9 280, and a few cards that are in the same price range when bought second-hand?

      • Ninjitsu
      • 3 years ago

      If you have room in your schedule (Jeff), would be really neat to see the comparison with older x50 series cards – 550, 650, 650 Ti Boost, etc.

    • DragonDaddyBear
    • 3 years ago

    Can you please bench with a budget CPU in addition to the standard rig? I think most budget buyers would have a budget CPU. Historically this strongly favors Nvidia, but AMD has been making significant strides in their drivers recently. The results could significantly change the recommended card.

      • Visigoth
      • 3 years ago

      I agree; this is a very logical test too. Nobody with a budget CPU buys a 1080 GPU, and nobody with a top-of-the-line CPU buys a 1050 GPU.

        • BurntMyBacon
        • 3 years ago

        [quote="Visigoth"]Nobody with a budget CPU buys a 1080 GPU, and neither does someone with a top-of-the-line CPU buy a 1050 GPU.[/quote]

        While I’ve seen both of the scenarios you speak of, I agree that the first is uncommon. The most (relatively) common case I’ve seen is where the buyer intended to upgrade the CPU eventually, but needed to make do with a currently owned budget CPU in the meantime. The corner case is where a buyer was convinced that a low-end Haswell CPU would provide all the CPU muscle needed to push a 780 Ti. It didn’t quite work out the way he wanted it to, but he’s still happy with it despite leaving some performance on the table.

        The second scenario is perhaps more common than you might expect. I’ve seen a lot of workhorse machines paired with low- to lower-midrange GeForce or Quadro GPUs, both to avoid Intel integrated graphics and to lower the latency of their unnecessarily flashy GUIs. It would seem that their owners can’t be troubled to dig through the settings and turn off the default effects. These machines include encoding machines, recording machines, compiling machines, and a number of others that may run Windows, Linux, or Unix. That said, only one of the machines I mentioned has anything to do with gaming.

          • Bauxite
          • 3 years ago

          I have a 950 in a “portable-ish” dual-E5 system, and would probably put a 1050 in a newer build for the updated ports. It’s the cheapest way to get HDMI 2.0 for large 4K screens, and the visualizations of the results are not really GPU-dependent; it’s running in a x8 slot anyway.

        • MOSFET
        • 3 years ago

        But Jeff is not writing a system review. He’s writing a GPU review. I prefer TR keep it as scientifically comparable as possible for as long as possible. However, I do love data points.

      • Jeff Kampman
      • 3 years ago

      I might run a couple tests on a Core i3-6100, but no guarantees.

      • Zizy
      • 3 years ago

      True, and the other value of the budget CPU review would be to answer what you get by upgrading your GPU using your existing old CPU.

    • DrDominodog51
    • 3 years ago

    Can you check the core clocks during the review please?

    I wouldn’t be surprised if it throttles under load due to power limits.

      • willmore
      • 3 years ago

      I would be. That heatsink looks similar to ones used on 90W CPUs. The biggest problem would likely be the power density of the chip–no IHS to help it out like a CPU would have.

      We’ll know soon enough, right?

        • RAGEPRO
        • 3 years ago

        IHSs tend to hurt CPUs more than help them in terms of heat dissipation.

        Think about it this way: with a bare die, you’ve got one guaranteed-inefficient thermal transfer point, between the die and the heatsink. With an IHS, you’re adding another one, between the die and the IHS. Unless the manufacturing is done to unbelievably strict tolerances (and if Intel doesn’t, I doubt EVGA would), you’ll end up with worse thermal behavior overall.

        Adding the IHS doesn’t actually help with heat dissipation at all; it’s a very poor name. It only exists to keep fumble-fingered builders from gouging, chipping, or smashing the chip when installing the heatsink.

          • psuedonymous
          • 3 years ago

          [quote]Think about it this way: with a bare die, you’ve got one guaranteed-inefficient thermal transfer point between the die and the radiator. With an IHS you’re adding on another one between the die and the IHS. Unless the manufacturing is done to unbelievably strict tolerances (and if Intel doesn’t, I doubt EVGA would) you’ll end up with worse thermal behavior overall.[/quote]

          This only applies if you consider the die a homogeneous emitter. In reality, because modern GPUs (and CPUs) vigorously power-gate during operation, there will be a whole load of transient, mobile hotspots dotted around the die. With an IHS, those hotspots can spread their energy over a larger area before coupling to the main heatsink, ejecting heat more effectively and keeping it away from the areas surrounding the hotspots (preventing “pre-heat”).

          “But,” you may ask, “wouldn’t the base of the heatsink do the same job?” The problem is that the heatsink is a big chunk of aluminum/copper, so it has a high thermal mass. By using a low-mass IHS in between, while the [i]total[/i] thermal resistance from die to heatsink is increased, it provides a quick way for energy to exit the die at the hotspots rather than spreading to surrounding areas. By keeping those areas cool, when [i]they[/i] are activated again they start from a cool state rather than a warm state, and so have more “thermal headroom” to operate within.

            • RAGEPRO
            • 3 years ago

            Eh. Apologies, friend, but I don’t buy it, because I don’t think GPUs are doing thermal monitoring at that fine-grained a level. Power-gating, yes; thermal, not so much. Ultimately the best thing you can do is remove heat from the chip as fast as possible, and you’ll do that with the best thermal interface. Increasing the overall thermal resistance is not a trade-off worth making in any case.

            Plus, if what you were saying were true, it would apply to CPUs as well. Extensive testing has shown that it’s simply not the case. Bare-die heatsink mounting performs at least as well as, and usually better than, using the IHS, even when you hook up the IHS to the die using something like Coollaboratory’s Liquid Ultra.

            • synthtel2
            • 3 years ago

            I think you’re interested in the horizontal thermal conductivity, not anything about thermal mass. How much TIM heat has to go through before hitting metal would seem to be much more important than whether that metal is an IHS or the real deal, as far as evening out die temperature goes.

            If anything, more thermal mass would help keep hotspots controlled (to the extent that they’re transient). If the ratio of thermal mass to thermal conductivity went up, that could be a problem, but that’s a property of the material, not how much of it is present. More thermal mass means a hotspot has to be more intense or last longer to get a given amount warmer than the average across the die.

            Your conclusions make perfect sense if we assume that a die -> IHS junction has lower thermal resistance than a die -> main heatsink junction, but sloppy TIM jobs don’t seem to be restricted to one or the other.
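
The series-resistance argument in this subthread can be put in numbers with a toy steady-state model. The resistance values below are invented for illustration only (real junction values depend on the TIM and mounting, and this ignores the transient-hotspot effects discussed above):

```python
# Toy steady-state model of the die -> heatsink thermal path.
# All thermal resistances (K/W) are made-up illustrative values.
power_w = 75.0        # board power from the article

# Bare die: a single TIM junction between die and heatsink base
r_bare = 0.10

# With an IHS: two TIM junctions plus conduction through the spreader
r_die_to_ihs = 0.10
r_ihs = 0.02
r_ihs_to_sink = 0.10

rise_bare = power_w * r_bare
rise_ihs = power_w * (r_die_to_ihs + r_ihs + r_ihs_to_sink)
print(f"bare die: {rise_bare:.1f} K above heatsink base")  # 7.5 K
print(f"with IHS: {rise_ihs:.1f} K above heatsink base")   # 16.5 K
```

With these (assumed) numbers, the extra junction roughly doubles the die-to-heatsink temperature rise, which is the crux of the bare-die argument; the counterargument above is that a spreader lowers the local rise at transient hotspots.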

      • Jeff Kampman
      • 3 years ago

      Took a quick look at this last night, and the card actually boosts way above its specified clocks, like 300 MHz-ish. It has no trouble holding those speeds, either.

        • DrDominodog51
        • 3 years ago

        Thanks.

    • HERETIC
    • 3 years ago

    Looking at a few reviews, it seems the perfect GPU for a 15″ 1080p lappy.
    The 1050 comes in at around 50% of a 1060, and the Ti at around 60%.
    Sammy did good…

    • thill9
    • 3 years ago

    Jeff,
    Congrats on cracking a Masterpiece!

      • Mr Bill
      • 3 years ago

      Cracking a box of Kaladesh and a [url=https://www.youtube.com/watch?v=hAzdgU_kpGo]MASTERPIECE![/url]
