
Point of View insults intelligence with GeForce 550 Ti 4GB

Ladies and gentlemen, I present to you the world’s most pointless graphics card. Nvidia partner Point of View has come up with a GeForce 550 Ti with four gigabytes of graphics memory, which makes about as much sense as putting a spoiler on a Smart car. Epic fail.

The problem with this new card, of course, is that the lowly GeForce 550 Ti doesn’t need anywhere close to that much RAM. When we tested the 550 Ti back in March, we discovered that the budget GPU is really only fast enough to play games at the relatively low resolution of 1680×1050, which works out to less than two megapixels. No wonder most of the 550 Tis on the market have just 1GB of memory.

While Point of View loads the card up with high-density memory chips, they’re not very fast ones. The GeForce 550 Ti is meant to be paired with GDDR5 memory with an effective data rate of 3.6 GT/s. Point of View hits the 4GB mark with much slower DDR3 memory capable of pushing bits at less than one third that speed.
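For the back-of-the-envelope crowd: memory bandwidth is just the data rate multiplied by the bus width. Here’s a quick sketch, assuming the 550 Ti’s stock 192-bit memory interface and taking “less than one third” at face value as roughly 1.2 GT/s for the DDR3:

[code<]
# Memory bandwidth = data rate x bus width.
# Assumptions: stock 192-bit bus; the ~1.2 GT/s DDR3 rate is an
# upper-bound guess from "less than one third" of 3.6 GT/s.
bus_bytes = 192 // 8                  # 24 bytes per transfer

gddr5 = 3.6e9 * bus_bytes / 1e9       # ~86.4 GB/s for reference GDDR5
ddr3 = 1.2e9 * bus_bytes / 1e9        # ~28.8 GB/s, at best, for this card

print(f"GDDR5: {gddr5:.1f} GB/s, DDR3: {ddr3:.1f} GB/s")
[/code<]

That’s a lot of capacity sitting behind a very narrow straw.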

Graphics card makers have long tried to dress up low-end models with obscene amounts of memory, but this is the most egregious example yet. Some consumers are easily swayed by bigger numbers, I guess. Thankfully, it looks like the card isn’t available in North America right now. According to ComputerBase, Euros can order the thing for €113.

Responses to “Point of View insults intelligence with GeForce 550 Ti 4GB”

  1. .. I think it was a troll/bait about AMD copying something from Intel (don’t remember what)

  2. The slowness of this memory might actually make it slower than just using system RAM.

  3. I gave you a -1 on that just to join in the fun. I don’t even remember what your post was about.

  4. Catchy title in an attempt to ignite a swarm of comments & drive visitor volume. I agree with Chrispy_: the card has its uses & a market for it, as a company won’t be doing this to lose money & get a bad reputation.

  5. I feel bad for her. She was tasked with something she didn’t have the knowledge to do, with a clear intention to embarrass her for the amusement of the viewers.

    It’s not like I could navigate the various creams and chemicals and layers of different make-ups to achieve the magic that turns relatively nice looking women into goddesses. If I had to do that on TV, knowing that I would be made fun of for ‘doing it wrong’, I would probably get a bit frustrated myself.

    Nobody is perfect, and making fun of people because of their ‘deficiencies’ is just mean.

  6. ACK! MY BRAIN! IT HURTS SO BAD NOW! This is why I can’t venture out into reality anymore… this is what it’s filled with!!!

  7. Average Joe? My vote goes to average Barbie, who thinks the monitor IS the computer: [url<]http://www.youtube.com/watch?v=aY_CidIS8YM[/url<]

  8. Yup, but they don’t and they WILL buy shit like this simply because it has a bigger number. The company knows this and it’s why they’re doing it.

    Everyone on here who is smart enough not to buy this card, won’t. Nothing you say, short of knowing the person who is thinking of buying this card, will change their mind. It will succeed with or without the naysaying.

    If video card manufacturers were actually interested in helping the average Joe choose a card, they would use a different naming scheme for their cards and derivatives.

  9. My experience has been that the average Joe thinks computers run on some sort of magic that will escape if the case is cracked open… The average Joe will not be buying graphics cards!

  10. And that’s the beauty of capitalism: if no one buys it, it’ll fail. Pre-price-drop HP TouchPad, anyone?

    What’s amusing about people on the internet is that they seem to think the whole world thinks (or should think) just like them.

    You and I know a 4GB frame buffer attached to a low-to-mid-range GPU is pointless. But people buy these things all the time. Consumers need to educate themselves to make a smart buying decision; it’s not up to the companies making products to do that for them.

    Take some responsibility, people: you are in charge of what you buy!

  11. Demanding products that make sense is part of democratic capitalism. I hope this one dies in a fire, because it’s a truly crippled card (in terms of bandwidth) compared to its equally priced brethren.

  12. This card has slightly under 1/3 the memory bandwidth of my CPUs. For any CUDA apps that are I/O-bound, it would be *faster* to run them on your CPU.

  13. This is democratic capitalism working as intended. If you would like communism or a corrupt dictatorship there are plenty of other countries that will let you in, live and work there.

  14. One third of the memory bandwidth, less than one tenth of the cost.
    Farms are still [i<]in[/i<], baby!

  15. I’ll be getting six of these. No, really, I will.

    I’ve been using Geforces instead of Quadros for a few years now. Sure, Quadros have better viewport performance because the Geforces are hamstrung by intentional driver crippling, but for production rendering like 3DSMax + Iray for example, the actual compute performance of a Geforce isn’t much different to a Tesla or a Quadro, as long as you can fit your whole scene within the onboard memory of the GPU. The 2GB cards were a vast improvement over the 1GB cards in this respect.

    I am wondering how much of an effect the reduced bandwidth of DDR3 will have. My experience says that the compute stuff isn’t as bandwidth hungry as frame buffer rendering, but even if it does have an impact, the move to 4GB will be worth the performance penalty.

    I’d post some benchmarks but I’m not stupid enough to have any 1GB cards in our compute boxes….

  16. We’ve seen stupid amounts of RAM before (an NV 9500 GT coming with 1GB is an infamous one; I think that card came with 256 or 384MB as stock and was held back by the GPU on anything more demanding than Windows Solitaire), but never to this extent. I have yet to see any single-GPU card come with 4GB of RAM. Not even the superclocked GTX 580s and 6970s; they top out at 2 or 3GB, and they are designed for, and capable of, running resolutions four times what a 550 is capable of.

  17. It’s still a good strategy. Morals have nothing to do with it, and I’m not encouraging people to buy this card.

  18. Not sure, but my question about the 580s (and really, any card with an increased frame buffer in a multi-GPU setup) still stands.

  19. Ultimately who has the responsibility to know what they are buying? The consumer. Caveat emptor!

    This is a fully functional product; it isn’t defective, won’t kill you, and isn’t any different than when Dell or HP or any other OEM throws some cut-down “LE” version of a graphics card in their machines. What should they do, include benchmark scores?

    Customers are always going to buy things out of some perceived notions of superiority (Monster Cables, anyone?). These types of uneducated customers are a subset of the greater market, and if this company can capitalize on that market then there is nothing wrong with that – consumers need to educate themselves so they don’t fall into these types of marketing ploys.

    But then Monster Cables wouldn’t be around…

  20. Giving credit to a marketing department for exploiting ignorance? Praising doing the wrong thing is just… wrong.

  21. And the average joe will buy this.

    I think it’s actually pretty smart from a marketing perspective.

  22. Well, DDR3 is SO cheap these days… but anyway, I haven’t seen a budget card with a frame buffer that matches the card’s performance since… my EVGA GeForce 7300 GS with 128MB of DDR2. I can’t see a budget card using more than 256MB in almost any scenario while still delivering acceptable performance. But 4 gigabytes on a GTX 550 Ti? Yikes! Not only is it 0% faster (I can see it using 768MB or so), good luck using it in any 32-bit OS and having free RAM left for running apps.

  23. As a single card, I agree, it’s stupid. However, I’ve often wondered with these kinds of products whether they’d benefit from the additional memory when you put them in SLI and, for example, play games at much higher resolutions. A single 580 might perform the same with 1.5GB and 3GB, but would two 3GB 580s perform notably better than two standard 580s in SLI, especially in multi-monitor configs?

  24. It’s about 2,123 megapixels (16:9), so if my hasty calculations are not wrong, even 16 gigs should be enough to fill all of the screen (for 2D desktop usage). This is assuming that a maximum of 8 bytes are enough for describing each pixel.
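    A quick sanity check of that arithmetic, assuming a 16:9 panel at 34560 vertical pixels and at most 8 bytes per pixel:

[code<]
# "34560p" at 16:9, with at most 8 bytes describing each pixel.
height = 34560
width = height * 16 // 9           # 61440
pixels = width * height            # 2,123,366,400 (~2,123 megapixels)
gib_needed = pixels * 8 / 2**30    # ~15.8 GiB, so 16 gigs does suffice

print(f"{pixels / 1e6:,.0f} megapixels, {gib_needed:.1f} GiB")
[/code<]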

  25. Isn’t this targeted at remote desktop configurations? RemoteFX, for example, needs up to 330MB of video RAM per client.
    No need for a fast GPU, just a lot of RAM.

    Microsoft says it allows 12 clients per GPU. And what do you know: 4GB / 12 ≈ 330MB.

    Doesn’t sound too stupid to me if you are planning to run a server with 6+ RemoteFX connections.
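    The per-client arithmetic, taking the 12-clients-per-GPU figure at face value:

[code<]
# A 4GB card split across RemoteFX's stated 12 clients per GPU.
vram_mb = 4 * 1024
clients = 12
print(f"{vram_mb / clients:.0f} MB per client")  # ~341MB, just above the ~330MB requirement
[/code<]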

    edit: well, it doesn’t seem like the non-Quadro cards support RemoteFX… So this card seems to have some very specialized uses. Anyone?

  26. Sooo stupid. I mean, a 1.5GB 580 performs just as well as a 3GB 580, plus the denser memory makes it a bit harder to overclock: a lot more transistors to run at faster speeds and timings. Nvidia is doing the same thing with their mobile cards. Take a 560M with 1.5GB of RAM: that extra 500MB doesn’t do a thing, I think; it’s just a sales pitch and a price premium. Besides, it’s a 550 Ti mobile chip with a 192-bit memory bus, but I bet it’s still better than this DDR3 Inno card. The 580M is the mobile 560 Ti chip with a 256-bit memory bus, but they decided to use 2GB of memory for a chip whose core is on average 275MHz slower than the desktop version’s.
    Snake oil!! Kill the witch!!!

  27. Why are you surprised?

    Video vendors have always put an absurd amount of VRAM on their low-end GPU models, which could never take advantage of it. It is another marketing gimmick that tries to lure customers who don’t know any better.

  28. How did they manage to get 4GB onto a 128-bit memory bus?

    Since the DDR3 chips used on graphics cards are all x16-wide, using 8x chips of 2Gbit each would only yield a total of 2GB. So, perhaps they used 16x 2Gbit chips in a dual-rank layout? Or perhaps they got 4Gbit DDR3 chips (256Mx16) at a good price and used 8x of those? I’m surprised the memory controller on the 550 Ti GPU can support 4Gbit chips; it’s pretty unlikely that NV’s GPU designers would anticipate anyone using that chip with 4GB of local memory! 😉
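    Tallying up those candidate layouts, taking the 128-bit, x16-chip premise at face value:

[code<]
# capacity = chips x density; bus width = (chips / ranks) x 16 bits.
# These layouts are guesses, not a teardown of the actual card.
configs = [
    ("8 x 2Gbit, single rank", 8, 2, 1),
    ("16 x 2Gbit, dual rank", 16, 2, 2),
    ("8 x 4Gbit (256Mx16), single rank", 8, 4, 1),
]
for name, chips, gbit, ranks in configs:
    print(f"{name}: {chips * gbit / 8:.0f} GB, {chips // ranks * 16}-bit bus")
[/code<]

    Either of the last two gets you to 4GB without widening the bus.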

  29. If my very hasty calculations are correct, 32GB of ram would only fill 6% of that screen.

  30. What did you say was [url=http://www.google.com/imgres?q=smart+car+with+spoiler&hl=en&client=opera&hs=fkN&sa=X&rls=en&channel=suggest&tbm=isch&prmd=ivns&tbnid=o0ocfJnuZmdBvM:&imgrefurl=http://www.smartsrus.com/smart_car_spoilers.htm&docid=AlQtzGwQ8UyzxM&w=615&h=500&ei=Cr5WTvS_MKjq0gG28v23DA&zoom=1&iact=hc&vpx=419&vpy=103&dur=412&hovh=202&hovw=249&tx=93&ty=104&page=1&tbnh=149&tbnw=187&start=0&ndsp=35&ved=1t:429,r:1,s:0&biw=1732&bih=942/<]ridiculous[/url<]?

  31. “If you are doing work with CUDA and only using SP then you fail as developers.”

    No. There are tons of tasks that can benefit from CUDA and OpenCL that only require single precision.

  32. We have a lot of tasks that don’t need double precision and still benefit greatly from running even on a card like this. That’s not to say we wouldn’t sometimes rather break up the problem and do more work up front, but this card does have a market.

    Image processing where you don’t want to have to break the image into tiles could see a significant benefit. Also, this card may have a low enough power draw to meet someone’s requirements.

  33. If they put this on a 560 Ti with GDDR5 memory, I would buy hundreds for the office and my development team. I can see a number of cases where having more memory on a cheap graphics card would work well for GPGPU applications, to avoid having to write code to divide up a problem, though those are very specific cases.

  34. It’s pointless because no CUDA developer would ever look to a 550 Ti for development work.

  35. He literally joined just to post that one comment on this article. Who is he trying to fool here?

  36. I’d rather jump on the bandwagon of profiting from the ignorant. Trying to eliminate ignorance is the definition of futility. 😉 Just think of it as a super huge market segment ready for the pickings!

  37. Oh, for large numbers of people, the only thing they look at on the stat page will be the amount of VRAM. This product will undoubtedly sell well. The marketing department will have a nice Christmas bonus this year, while I’m sure the hardware guys at PoV have lost all faith in the company.

  38. There undoubtedly are, but I’m sure the fact that the 4GB card has less than [b<]one third[/b<] of the memory bandwidth will start to hurt badly in those cases.

  39. It isn’t so absurd, IMHO. The price is about the same as any other 550 Ti, so why not 4GB? It’s probably useless; the card won’t ever get close to 100% memory usage, though some CAD/CAM models may be able to fill a large part of that RAM.

    PoV marketing dept: “Hmmm, it would cost us 75 cents to add another 1GB of VRAM. Might as well add 3GB over the competition and we’ll have a unique product.” As long as the end user doesn’t pay anything extra…

  40. He’s also not really endearing himself to the population if the reaction to “is there a brain in there?” being -9 is any indication.

  41. Did someone say spoiler on a Smart car?

    [url=http://www.smartsrus.com/smart_car_spoilers.htm<]http://www.smartsrus.com/smart_car_spoilers.htm[/url<]

  42. Almost any GPGPU application that can make use of that much local memory would also benefit from more shader resources. Generally speaking, if you want to load a large dataset into local memory it’s because you’re going to be doing a [i<]lot[/i<] of operations on it, so the more shaders the better. Alternatively, it's because your computations have a lot of dependencies that span the entire dataset, but in that case you'd want large [i<]fast[/i<] memory, which this card doesn't have either. I suppose there may be some folks out there with really tight budgets doing something that is mostly memory-bound yet somehow not memory-speed-dependent, but I suspect the vast majority of GPGPU applications aren't like that. So whether you're talking gaming or computing, this thing is badly unbalanced and deserves some "attitude." (Not that yours is helping)
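    A rough roofline-style break-even makes that imbalance concrete. Assuming ballpark figures of ~690 GFLOPS single precision for a stock 550 Ti, ~29 GB/s for this DDR3 card, and ~86 GB/s for the reference GDDR5 version:

[code<]
# Break-even arithmetic intensity (FLOP/byte): below this, a kernel
# is memory-bound. All three figures are ballpark assumptions.
peak_flops = 690e9
for label, bw in [("DDR3", 29e9), ("GDDR5", 86e9)]:
    print(f"{label}: memory-bound below {peak_flops / bw:.0f} FLOP/byte")
[/code<]

    Needing roughly 24 FLOPs per byte just to keep the shaders busy is a tall order for exactly the memory-hungry workloads this card’s capacity is supposedly for.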

  43. Oh hey, look, Point of View’s marketing team found the TR. Nice try, but sorry, you are wrong. CUDA cores are more important than RAM. If you are doing work with CUDA and only using SP, then you fail as developers.

  44. This is the card where they wasted their time tossing in extra quirks to make it able to deal with 1GB of RAM in the first place: a 192-bit memory interface and the number of cores on the chip would have naturally lent themselves to 768MB of memory, and I don’t think the extra 256MB does the card much good as it is.

  45. Um… if you’re trying to do serious CUDA processing without a Tesla card, it’d be nice to get a card where DP isn’t totally nerfed, i.e., a 570/580 (on those it’s only nerfed to 1/8 SP instead of 1/12 SP), or at least a card with enough functional units to do serious work, i.e., a 560/560 Ti. A 550 Ti has enough power to be OK for learning CUDA and doing a handful of useful things, but I doubt many people really want to use a card with such a narrow memory interface and so few cores for serious scientific work that requires that much memory.

  46. My Intel graphics have 32GB of RAM. It’s future proof. I’ll be able to handle 34560p monitors.

  47. Are there that many compute applications that are memory-bound but not shader-bound in this sort of situation (and hence where a relatively low-computational-power card like a GTX 550 Ti with a buttload of memory would make sense)?

    This is an area we haven’t done much research in, but I would be curious whether such applications are more commonplace than I expect.

  48. +1. Let’s just keep trying to educate people so products like this fall flat on their faces.

  49. quote: Ladies and gentlemen, I present to you the world’s most pointless graphics card.

    response: Ladies and gentlemen, I present to you the world’s most pointless attitude.

    This card is great for CUDA developers dealing with stochastic mechanisms or sparse linear algebra. Normal cards have 1.5GB, the top ones have 3GB, and without splashing out 12x more for 6GB on a Tesla/Quadro, you can now have 4GB for just over 100 euros, albeit single-precision focused. Seriously, what were you thinking when you posted that first sentence? Is there a brain in there? Hello?

  50. Son: But Father, I don’t love her!

    Father: What’s wrong with her! She’s beautiful…rich…She’s got huge……… tracts of land.

  51. Heh. Why don’t you like this card? She’s got [i<][url=http://poliisi.iki.fi/pub/fun/sounds/Monty_Python/She's%20got%20huge%20tracts%20of%20land.wav<]huge[/url<][/i<]... tracts of memory.

  52. Can any photo/video editing software use GPU RAM? Not that this is aimed at that scene, mind.

  53. Point of View ?

    Politics? Questionable NSFW content ?

    Not my choice for a graphics card company.

  54. They should just put a couple of SD slots on a video card. Expandable to 64GB!!!

    If people want huge tracts of memory, I say let them have it. With both barrels.