WhatMeWorry wrote:I apologize if this has been asked a thousand times, but what prevents a manufacturer from just sticking a discrete CPU and discrete GPU on the same PCB? Is it technical, or cost, or flexibility, or space? Maybe a combination of all of those?
ludi wrote:I don't have numbers for recent products, but high-performance GPUs have commonly lived on 8-12+ layer PCBs while conventional desktop motherboards have traditionally been in the range of 4-6 layers.
Another problem is that high-performance GPUs produce heat in the same range as high-performance CPUs and have similar cooling requirements. Meanwhile, BGA-packaged chips (including GPUs and chipsets) have a tendency to pop away from the PCB if it flexes too much.
TDIdriver wrote:I'd imagine cost is the major factor, especially from R&D. Anyone recall the recent(-ish) concept board Asus put out?
http://rog.asus.com/116882012/news/asus ... u-onboard/
Airmantharp wrote:With two 8-pin and two 6-pin connectors, that Asus board could host the equivalent of a pair of GTX 680s, Titans, or most likely HD 7970s. That's a lot of power for what looks like 1/10th the cooling of the cards those GPUs normally ship on. I mean, there's 600W available between the PCIe power connectors and the PCIe slots, with less thermal dissipation area than a typical aftermarket CPU cooler designed for 150W.
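The 600W figure above follows from the per-connector limits in the PCIe spec. A quick sketch of the arithmetic (the two-slot count is an assumption about the board's layout, not something stated in the thread):

```python
# Power budget for the concept board, using standard PCIe limits.
PIN8_W = 150  # max draw per 8-pin PCIe auxiliary connector
PIN6_W = 75   # max draw per 6-pin PCIe auxiliary connector
SLOT_W = 75   # max draw through a PCIe x16 slot itself

# Two 8-pin + two 6-pin connectors, plus an assumed two x16 slots.
total = 2 * PIN8_W + 2 * PIN6_W + 2 * SLOT_W
print(total)  # 600
```

That budget is roughly what two ~250-300W high-end cards would pull under load, which is why the modest onboard cooling looks undersized.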
Forge wrote:Yes, the G200MMS and ATI Rage Pro are common choices on server boards. They output a simple text console or VGA frame buffer very reliably, they draw a single watt tops, they need only 8MB or 16MB of VRAM to operate, and they are documented to a ridiculous degree. They're also very simple designs, easy to fab even on low-end equipment, and very simple to diagnose/repair/replace.
Servers have a different set of criteria. By server criteria, that crap little G200 is superior to your GTX680 in so many ways that it is not even funny.