The real difference is rendering. When you're watching pre-made video content you're essentially watching a big stream of pictures... a non-intensive task for your computer. Switch to gaming (or anything else that isn't pre-rendered) and you have a completely different story. When you can interact with your environment, your video card has to calculate and redraw the image it produces 30+ times per second (which, by the way, is freaking amazing). Imagine having to redraw your entire landscape every time you moved your head... that's a lot of work. A lot more work than looking at a picture.
Also, read through this AnandTech article for more in-depth info.
I would like to add some things to muddy up the waters.
1) Video playback quality is subjective. If you can't notice a difference with your naked eye during playback, then everything else is irrelevant.
2) When you're playing video (as Bloodsoul explained), the images are pre-determined. Video playback DOES still require effort from the computer hardware, though. Your computer shows you pictures on your monitor; flip through a string of pictures quickly enough and you get animation (video). A picture (or any individual frame of a video) is made up of dots (pixels). At its core, your computer has to convert 1s and 0s from your hard drive (or DVD/Blu-ray disc) into colored dots on your monitor to display an image (or video frame). As I'm sure you can imagine, a video frame at 1366x768 resolution (about 1.05 million dots) takes less effort to render than a video frame at 1920x1080 (about 2.07 million dots). Although any GPU can render a 1920x1080 image (for example), a faster GPU can get that image to your monitor faster than a slower GPU. Typical movies display 24 pictures (frames) per second. To maintain "perfect" smoothness, your GPU only has to be fast enough to render 24 frames every second to the monitor/TV. As others have said, this is fairly easy by today's standards. If two different GPUs can both render 24 frames per second, there will be no difference in playback quality between them. This applies to item 3 as well.
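To put rough numbers on that, here's a quick back-of-the-envelope sketch. The resolutions and the 24 fps figure come straight from the point above; nothing here is specific to any particular card.

```python
# Back-of-the-envelope numbers for item 2: dots per frame and the
# time budget the GPU has to deliver each frame at 24 fps.

resolutions = {"1366x768": (1366, 768), "1920x1080": (1920, 1080)}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} dots per frame")   # 1,049,088 vs 2,073,600

fps = 24                 # typical movie frame rate
frame_budget = 1 / fps   # seconds the GPU has to deliver each frame
print(f"Per-frame budget at {fps} fps: {frame_budget:.3f} seconds")  # ~0.042 s
```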
3) A lot of people have been mentioning "post-processing" as a differentiating factor between the 6450 and 5770. Post-processing in video playback is meant to "improve" the image quality of a given frame in a video. Video imperfections often result from a video file being compressed to save space. Suffice it to say that the graphics card has to "catch" each frame that goes by, "look" at it, and "improve imperfections" before sending it off to the display. (There are visual examples of this in the AnandTech article linked above.) This obviously requires extra effort from the GPU. You can tell your computer how much effort it should spend "fixing imperfections" on each video frame, but the GPU will be receiving frames from the video file at 24 frames per second regardless. If the GPU cannot perform all the rendering and post-processing work required before the next frame comes in, it has to skip that next frame and finish rendering/displaying the first one. (If the GPU doesn't send an image, your monitor is black.) If the GPU has to skip a frame, you will notice a short pause (hiccup) in the video. More frames skipped = longer pause. To summarize: it takes a certain amount of GPU effort to display an image or video frame, and it takes additional effort to "post-process" video frames. That additional post-processing effort varies based on your settings. With a "weaker" GPU, depending on the post-processing requirements you've set, this load may take longer to complete than 1 second / 24 frames ≈ 0.042 seconds. If that's the case, the video on the weaker GPU will appear to stutter whereas the video on the stronger GPU may not. Technically speaking, though, if you pull the same video frame from each GPU, they will both look the same. The only caveat is that modern GPUs are smart, so if a GPU is "choking" on frames, it will automatically reduce post-processing operations to preserve smooth frame delivery.
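If you want to see the "falls behind the budget" idea in code form, here's a toy model. The per-frame timings are made-up illustrative numbers, not measurements of the 6450 or 5770, and the skip logic is deliberately simplified.

```python
# Toy model of item 3: if rendering + post-processing takes longer than the
# per-frame budget, the GPU drifts behind schedule and frames get skipped.

FPS = 24
BUDGET = 1 / FPS  # ~0.042 s per frame

def frames_skipped(render_time, post_process_time, total_frames=240):
    """Count frames a hypothetical GPU can't deliver in time (made-up costs)."""
    per_frame = render_time + post_process_time
    skipped = 0
    backlog = 0.0
    for _ in range(total_frames):
        backlog += per_frame - BUDGET    # how far behind schedule we drift
        if backlog > BUDGET:             # a whole frame's worth behind -> skip one
            skipped += 1
            backlog -= BUDGET
    return skipped

# "Stronger" GPU: plenty of headroom even with heavy post-processing settings.
print(frames_skipped(render_time=0.010, post_process_time=0.020))  # 0
# "Weaker" GPU with the same post-processing settings: over budget, stutters.
print(frames_skipped(render_time=0.030, post_process_time=0.020))  # > 0
```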
4) Games are still a series of images displayed on your monitor, much like movies. But, as Bloodsoul explained, the content of each frame (image) is no longer pre-determined. This is where the differences really become apparent between GPUs that cost $30 or $250, for example. There are staggering amounts of calculations that need to take place every fraction of a second to determine what each image should look like. Remember the resolution? A GPU can draw 1.05 million dots faster than it can draw 2.07 million dots. Less work = faster rendering times = more images displayed per second. Then there's the topic of how many images you have to display every second to give the user the illusion of "smooth" motion. One person may think that 40 images per second looks smooth, the next might think 60 images per second looks smooth. There is also "post-processing" in games, which is similar to video post-processing. (There's a rough sketch of these numbers below.) Just because you don't need a $250 GPU, though, doesn't mean that there's no need for them. Think of it this way: there are 500GB, 1,000GB, even 4,000GB hard drives available today. The major reason for this is people who store video on their hard drives. Someone who doesn't store or use video/music/games on their computer could easily get by with a 160GB hard drive, but that doesn't mean that nobody else needs the extra space.
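Here's the sketch mentioned above: a purely hypothetical look at how resolution and GPU speed trade off into frames per second in a game. The "seconds per million dots" figures are invented for the example and don't correspond to any real card.

```python
# Rough illustration of item 4: in a game the GPU builds every frame from
# scratch, so (all else equal) more dots per frame means fewer frames per
# second. The per-dot costs below are hypothetical, not benchmark results.

def estimated_fps(width, height, seconds_per_million_dots):
    dots_millions = width * height / 1_000_000
    frame_time = dots_millions * seconds_per_million_dots  # seconds per frame
    return 1 / frame_time                                  # frames per second

for secs_per_m in (0.004, 0.016):          # hypothetical "fast" vs "slow" GPU
    for w, h in ((1366, 768), (1920, 1080)):
        fps = estimated_fps(w, h, secs_per_m)
        print(f"{w}x{h} at {secs_per_m} s per million dots -> ~{fps:.0f} fps")
```

With the invented numbers above, the "slow" GPU lands around 60 fps at 1366x768 but only around 30 fps at 1920x1080, which is exactly the kind of gap where one person calls it smooth and another doesn't.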
Main: i5-3570K, ASRock Z77 Pro4-M, Asus GTX660 TOP, 500GB Crucial BX100, 2 TB Samsung EcoGreen F4, 16GB 1600MHz G.Skill @1.25V, EVGA 550-G2, Silverstone PS07B
HTPC: A8-5600K, MSI FM2-A75IA-E53, 4TB Seagate SSHD, 8GB 1866MHz G.Skill, Crosley D-25 Case Mod