Personal computing discussed
Moderators: renee, morphine, SecretSquirrel
churin wrote:OK, then what should I view to see the difference? Please exclude PC games. The purpose of this discussion is to determine how to pick a video card.
churin wrote:I have an HD 6450 and an HD 5770. I am asking what I should view on the display to tell the difference.
churin wrote:Thanks for all those responses.
Let me go off topic but continue with a question which is still very much relevant:
A video card is for displaying images on a monitor (GPU processing for non-display work is excluded for the moment). Assume the monitor in this discussion is a good one. What difference does the eye perceive, and when viewing what kind of images, between an inexpensive video card and an expensive one? Is the difference visible only in PC games? Why would one buy a $500 video card rather than a $30 one?
chuckula wrote:
NONE
bthylafh wrote:You're not even really asking the right question, but there are slight differences between the two cards' video decoders:
https://en.wikipedia.org/wiki/Unified_Video_Decoder
The 5770 has UVD 2.2, while the 6400 series has UVD 3.0. Unless you're playing videos encoded in the newer formats that UVD 3 supports, you're not going to see much if any difference.
churin wrote:There are two different types of graphical image, 2D and 3D. 2D images are those created by a camera, whether still or moving, and the viewer just passively watches them. The progress of a 3D image on the monitor, by contrast, can be controlled by the viewer in real time with an input device connected to the computer. 3D graphics are created by computer software. PC games use 3D graphics.
In the end, on a typical monitor you are seeing 2D anyways.
churin wrote:Video Encoder: The video card comes with a video encoder, and the version of the encoder may vary depending on the individual card. The reproduced image may suffer if the video decoder is older than the version used for creating the image.
The encoder has nothing to do with the output image that you see on screen, so this is not quite related to your question. Older decoders usually just won't be able to use hardware to accelerate certain formats, but modern player software will automatically fall back to software decoding, which produces the same output, just a little slower (so not all frames are delivered within the smoothness threshold) or at higher CPU/GPU utilization because of all the "roundabout" work it needs to do.
churin wrote:Shader Power: The capability for post-processing effects, which relates to reproduced video quality, tends to be greater with a more expensive video card. Post-processing cleans up artifacts introduced when reproducing a compressed image.
Blu-ray Support: There are video cards which support Blu-ray and ones which do not. If two cards are picked at random, the inexpensive one may happen to support it while the more expensive one may not.
Post-processing is usually a feature you enable in software (the exception being hardwired post-processing done at the driver level, where we don't have control). It specifies certain algorithms to apply to the output image before sending it to the monitor: things like deinterlacing and smarter interpolation. These should result in the same image between different cards if you run them in pure "software" mode. Remember there is always a software fallback, so even if the shaders/decoder on a video card do not support Blu-ray, a powerful enough CPU can handle it, giving you the same image.
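To make the software-fallback point above concrete, here is a toy sketch (not from the thread, purely illustrative) of one post-processing step, "bob" deinterlacing, done entirely on the CPU. Because no GPU is involved, the output is identical no matter which video card is installed:

```python
# Toy illustration: "bob" deinterlacing done purely in software.
# A deinterlacer like this produces the same output regardless of
# which video card is in the machine, since no GPU is involved.

def bob_deinterlace_field(frame, keep_even=True):
    """Keep one field (every other row) and double it to fill the frame."""
    rows = frame[0::2] if keep_even else frame[1::2]
    out = []
    for row in rows:
        out.append(row)      # original scan line
        out.append(row[:])   # duplicated line fills in the missing field
    return out

interlaced = [[1, 1], [9, 9], [2, 2], [8, 8]]  # rows from two interleaved fields
print(bob_deinterlace_field(interlaced))
# even-field rows [1, 1] and [2, 2], each doubled
```

Real players do the same thing with much smarter interpolation, but the principle (pure CPU math, card-independent result) is the same.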
churin wrote:For 3D, more processing power is required than for 2D. What separates a $500 card from a $30 one is this processing power for 3D images. The greater the processing power, the better the 3D image, depending on the individual PC game. If one does not play PC games, there is no need for a video card costing more than $100. A really inexpensive card may use lower-quality passive components, which tends to mean shorter hardware longevity.
"Better" in this case is not based on the same video data as in an MKV/AVI file. There is AA, AF, occlusion, higher-quality (read: larger) textures, which means the "input data" the GPU has to deal with is a lot more. In video playback the input data is the same no matter what card. When you are playing a video file, you can't simply say "drop all green colours" in the name of optimization; the data needs to be reproduced somewhat faithfully. In post-processing, you could say data is "invented/derived", so processing power will make a difference; but then again, you can always go software on that stuff, given enough CPU power.
Flying Fox wrote:In the end, on a typical monitor you are seeing 2D anyways.
Flying Fox wrote:Unless you are still outputting over analog VGA, in terms of playing 1080p or lower video there should be no discernible difference (OK, gamma, colour mapping and others aside, but that is another set of variables) between cards, especially from the same GPU vendor, if you are using the same monitor, the same playback software with the same playback settings, the same video file, and the same set of post-processing effects.
churin wrote:Can I assume that the FX-6300 is powerful enough to take over the video-processing chore and keep things under control if the load goes beyond the GPU's limit?
It's powerful enough, but unfortunately it doesn't really work that way. The software will use the GPU or the CPU; generally, it can't divide the work between the two dynamically.
churin wrote:Flying Fox wrote:In the end, on a typical monitor you are seeing 2D anyways.
I understand this, since the image is shown on a flat panel and so cannot be a 3-dimensional view. It looks like the type of graphics used for PC games is customarily called 3D graphics.
Holographic projectors are not quite here yet, so there is really no "true" 3D graphics output. Pure-software "3D graphics processing", from the days of Wolfenstein 3D up to the latest game engines, has been a collection of mathematics problems mapping data in 3D coordinates (call them "scenes" if you like) onto a 2D "plane" so that it can be sent to typical monitors. So calling modern GPU processing "3D" a fraud is a little harsh. The game/app engine sends 3D coordinates of polygons, light sources, viewing angle and perspective, depth information, textures, and other data to the GPU to calculate the "scene"; these are all data in 3 dimensions. The GPU's job is then to take all this data, plus special instructions on specific sets of data (those are your shaders), and "render a scene". In the end, there is a transformation into an array of RGB/RGBA pixels that can be dumped onto the monitor as a "frame". The goal, of course, is to calculate, "render", and send at least 60 (or 24, or 30, depending on who you talk to) of these frames to the monitor within a second, hopefully with the frames not too far apart for smoothness' sake. (I have just described the big debate we are having on FPS vs frame times here.) And then to do the same for every second while the game/app runs.
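The "3D data mapped onto a 2D plane" idea above can be shown in a few lines. This is a minimal pinhole/perspective projection sketch (an illustration, not anything a thread participant posted; real GPUs use 4x4 matrices and homogeneous coordinates, but the core math reduces to this):

```python
# Minimal sketch of projecting a 3D point onto a 2D image plane,
# the core of the "3D scene -> 2D frame" mapping described above.

def project(point3d, focal_length=1.0):
    """Project (x, y, z) onto the image plane at z = focal_length."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal_length * x / z, focal_length * y / z)

# Two points at the same (x, y) but different depths land at different
# screen positions: the farther one ends up closer to the centre.
print(project((2.0, 2.0, 2.0)))   # (1.0, 1.0)
print(project((2.0, 2.0, 4.0)))   # (0.5, 0.5)
```

The GPU repeats this kind of transform (plus lighting, texturing, and shading) for every vertex and pixel, 60-odd times per second.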
Flying Fox wrote:auxy wrote:It's powerful enough, but unfortunately it doesn't really work that way. The software will use the GPU or the CPU; generally, it can't divide the work between the two dynamically.
The OP really likes to stretch and dream stuff up.
The choice of using the GPU vs "software" (aka the CPU) to decode video is usually a setting in the player software. GPUs released in the last 2-3 years (yes, even the relatively "weak" Intel ones) should have support for most video formats like WMV, H.264, and VC-1. It is the oddball formats like RMVB or VP8 that modern GPUs may not have "native" capabilities for. Your "older" card is a 5770, which is new enough as long as you deal with mainstream formats. Otherwise, the FX-6300 should be good enough to play those videos.
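The decode-path choice described above boils down to a simple lookup: use the GPU's fixed-function decoder when it supports the format, otherwise fall back to CPU software decoding. This sketch is illustrative only; the format set is an assumption, not an exact capability table for the 5770 or any real card:

```python
# Sketch of a player's decode-path decision: GPU when the format is
# natively supported, CPU software decode otherwise. The supported-
# format set below is an assumed example, not real hardware data.

GPU_NATIVE_FORMATS = {"h264", "vc1", "mpeg2"}

def pick_decoder(fmt):
    """Return which decode path an (assumed) player would choose."""
    return "gpu" if fmt.lower() in GPU_NATIVE_FORMATS else "cpu-software"

print(pick_decoder("H264"))  # gpu
print(pick_decoder("vp8"))   # cpu-software
```

Either path yields the same decoded frames; only speed and CPU load differ, which is why the thread keeps stressing that image quality doesn't change.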
Flying Fox wrote:Still don't know why you are trying to insist there are differences. Buyer's remorse for your 5770 now that you have a much cheaper 6450?
churin wrote:Thanks for the details. BTW: is "computer animation" an alternative term for 3D graphics?
Any "moving" images on a computer screen can be loosely classified as "animation", so I don't really know what you are getting at. If you follow this line of thinking, it is all about sending at least 60 (or 24, or 30, or whatever) images/frames to the screen, while making sure the interval between each frame is small enough and more or less constant in order to achieve "smoothness".
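The FPS-vs-frame-times debate mentioned earlier can be put in numbers. In this sketch (made-up timings, purely illustrative), two one-second runs both average 50 FPS, yet one delivers frames evenly while the other has two long stalls that would read as stutter:

```python
# Two runs with identical average FPS but very different smoothness.
# All timings are invented for illustration.

def avg_fps(frame_times_ms):
    """Average frames per second over a run of per-frame times (ms)."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

smooth = [20.0] * 50                 # every frame takes 20 ms
spiky = [10.0] * 48 + [260.0] * 2    # mostly fast, two long stalls

print(avg_fps(smooth), max(smooth))  # 50.0 20.0  -> feels smooth
print(avg_fps(spiky), max(spiky))    # 50.0 260.0 -> same FPS, visible hitching
```

This is exactly why frame-time plots, not just average FPS, matter when judging "smoothness".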
churin wrote:OK, so the workload cannot be divided dynamically between the GPU and CPU: it has to be either the GPU or the CPU doing the job.
For now, there is no software that can dynamically adjust/dispatch workloads between GPU and CPU well enough in the realm of video players (someone correct me if my info is outdated). This was supposed to be the promise of AMD's Fusion concept when they acquired ATI. The main problem is that GPU processing of video data is much faster than using the CPU, and the drastic difference makes it hard to guarantee that work sent to the CPU will finish in time (remember there is a time constraint on when a "frame" needs to be sent to the monitor?). As CPUs get more powerful, this may eventually become possible.
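The deadline argument above can be sketched numerically. Assuming invented per-frame costs (2 ms on GPU, 30 ms on CPU, run in parallel), even a modest CPU share of a split frame blows the 16.7 ms budget of a 60 Hz display, which is why players pick one path rather than splitting dynamically:

```python
# Why naively splitting one frame's decode work between a fast GPU and
# a slow CPU is risky: the CPU share can miss the per-frame deadline.
# The per-frame costs here are assumed numbers for illustration only.

DEADLINE_MS = 1000.0 / 60.0    # ~16.7 ms per frame at 60 Hz
GPU_MS_PER_FRAME = 2.0         # assumed: full frame decoded on GPU
CPU_MS_PER_FRAME = 30.0        # assumed: full frame decoded on CPU

def split_frame_time(cpu_fraction):
    """Finish time for a frame when cpu_fraction of it runs on the CPU,
    the rest on the GPU, both working in parallel."""
    return max(cpu_fraction * CPU_MS_PER_FRAME,
               (1 - cpu_fraction) * GPU_MS_PER_FRAME)

print(split_frame_time(0.0) <= DEADLINE_MS)  # all-GPU easily makes the deadline
print(split_frame_time(0.8) <= DEADLINE_MS)  # the 24 ms CPU share misses it
```

With numbers this lopsided, any split that gives the CPU a meaningful share misses the deadline, so "all GPU or all CPU" ends up being the practical design.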
churin wrote:Does the need to switch between GPU and CPU happen only for 3D? I use Windows Media Player and have never needed to change such a setting. I have never played a computer game, but I sometimes watch non-interactive computer animation.
This is the part that may be confusing you. In the past, GPUs were used exclusively to solve that "3D data to 2D images" problem. As GPUs got more powerful and their designs more generalized, vendors started to include "instructions" that can be used for rendering video data. There is nothing special about "3D" or "2D"; in the end it is some silicon doing some calculations. So GPUs became able to decode videos, and in the past few years more instructions have been introduced that can be used for even more "general purpose" problems (still mainly math problems, though), which is where the term GPGPU comes from. People have started to throw problems like encryption and specialized industry-specific calculations at GPUs because of the performance they can bring compared to mainstream CPUs. Essentially, GPUs have become like the co-processors of old.
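The "shaders are special instructions applied over sets of data" idea can be mimicked in plain code. This hypothetical sketch runs a trivial per-pixel function over a frame; a real GPU executes the equivalent shader across thousands of pixels in parallel, but it is the same arithmetic:

```python
# A "pixel shader" in plain Python: one small function applied to every
# pixel of a frame. A GPU runs the same kind of function massively in
# parallel; here a list comprehension stands in for that.

def darken_shader(pixel, factor=0.5):
    """Trivial example 'shader': scale an RGB pixel toward black."""
    r, g, b = pixel
    return (int(r * factor), int(g * factor), int(b * factor))

frame = [(200, 100, 50), (10, 20, 30)]
print([darken_shader(p) for p in frame])  # [(100, 50, 25), (5, 10, 15)]
```

Whether the data represents 3D geometry, video frames, or encryption blocks, the silicon is just doing this sort of math at enormous scale, which is the whole GPGPU point.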
churin wrote:Originally I planned to get an HD 7750 for the new FX-6300 machine. The HD 6450 was taken out of my older secondary machine and used temporarily on the new machine, or so I thought. But it looks like the HD 7750 would be a waste of money for me.
The same old rule applies: if you see slowdowns, then think about an upgrade. If you are fine, then save the money for other things in life.
morphine wrote:Ehh. Why do I get the feeling that we're collectively doing someone's homework assignment?