Semiconductors like silicon have a *negative* temperature coefficient of resistivity -- their resistance drops with increasing temperature.

Be careful with the temperature coefficient of resistivity: different device types have different coefficients. Diodes are typically negative-temperature-coefficient devices. Power MOSFETs are almost always positive temperature coefficient. Power IGBTs can be positive or negative, depending on the design of the transistor and the operating region. The metal wires within a CPU are certainly positive coefficient, and that does increase power dissipation at higher temperatures**. CPU switching transistors could be positive or negative; I don't know for sure. All of those effects are roughly linear in temperature. The subthreshold leakage referred to by Mr. Kantor is both positive and exponential in temperature.
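To see why "linear" versus "exponential" matters here, a toy comparison (every coefficient below is invented purely for illustration, not measured for any real device):

```python
import math

T_REF = 300.0   # reference temperature, K (assumed)
ALPHA = 0.004   # assumed linear temperature coefficient for metal, 1/K
T_CHAR = 30.0   # assumed characteristic temperature for leakage growth, K

def wire_resistance(r_ref, t):
    """Metal wire resistance grows roughly linearly with temperature."""
    return r_ref * (1.0 + ALPHA * (t - T_REF))

def subthreshold_leakage(i_ref, t):
    """Subthreshold leakage grows roughly exponentially with temperature."""
    return i_ref * math.exp((t - T_REF) / T_CHAR)

# A 16 K rise affects the two very differently:
print(wire_resistance(1.0, 316.0))       # ~1.06x -- a few percent
print(subthreshold_leakage(1.0, 316.0))  # ~1.70x -- tens of percent
```

With these made-up numbers, the same temperature step barely moves the linear term but grows the exponential one dramatically, which is why leakage dominates the temperature feedback story.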

So how do you separate cause from effect?

It is very easy to show that the increase in drawn power is not solely responsible for the increase in temperature. For a linear material or a stack of linear materials (like a CPU die in its package), the relationship between thermal power and temperature is \dot{Q} = U A (T_source - T_sink), where \dot{Q} is thermal power (watts), U is a property of the material (its conductivity divided by its thickness), and A is the cross-sectional area for heat transfer. The product UA is a thermal conductance, in watts per kelvin; its reciprocal 1/UA is the "thermal resistance," in kelvin of rise per watt across that interface. Anything short of a fluid interface boundary follows this law, and for constant fluid flow rate, even fluid boundaries do too. If the effect observed in the 6950 review is 9 W for a 16 K increase, and it were attributed entirely to thermal resistance, then 1/UA would be a whopping 1.8 K/W. For a 250 W card, that implies a total temperature rise above ambient of about 450 K! Since this is clearly not the case, the increased thermal power must be caused by other factors.
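The arithmetic in that paragraph, spelled out using only the numbers already quoted from the review:

```python
# Back-of-the-envelope check of the thermal-resistance argument.
delta_q = 9.0        # extra thermal power observed, W
delta_t = 16.0       # observed temperature increase, K
card_power = 250.0   # total card power, W

# If the entire 16 K rise were explained by thermal resistance alone:
r_thermal = delta_t / delta_q             # K/W
rise_at_full_power = r_thermal * card_power

print(round(r_thermal, 2))        # -> 1.78 (K/W)
print(round(rise_at_full_power))  # -> 444 (K above ambient -- absurd)
```

A ~444 K rise above ambient would melt the solder, so the premise (that the extra 9 W is purely a thermal-resistance effect) must be wrong.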

** So, why does increasing wire resistance increase power dissipation? Because the transistors in a CPU are mostly switching the gate capacitance of other transistors on and off. If the resistance in the gate drive path goes up, then more energy is consumed in switching the transistor. In ECE101 terms, the effective circuit is a source capacitor, a switch, a wire resistor, and a gate capacitor in a series loop. In the ideal case, the wire resistance is zero, and closing the switch transfers a small amount of energy (1/2 CV^2) from the large source cap to the tiny gate cap. The gate voltage rises to the threshold voltage, and the transistor turns on. If the wire resistance is non-zero, then some energy is also dissipated in that wire resistance (the integral of I^2 R dt), in addition to the energy required to charge the transistor's gate.
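The 1/2 CV^2 term above is tiny per event, but a CPU performs an enormous number of such events per second. A ballpark sketch (every number below is a made-up round figure for illustration, not data for any real CPU):

```python
# Illustrative only: capacitance, voltage, frequency, and switching count
# are assumed round numbers, not measurements of any actual chip.
C_GATE = 1e-15     # assumed effective switched capacitance per gate, F (1 fF)
V_DD = 1.0         # assumed supply voltage, V
F_CLK = 3e9        # assumed clock frequency, Hz
N_SWITCHING = 1e8  # assumed number of gates switching each clock cycle

# Energy to charge one gate per switching event: 1/2 C V^2
e_per_switch = 0.5 * C_GATE * V_DD**2   # joules, ~5e-16 J

# Total dynamic power from gate charging alone
p_dynamic = e_per_switch * F_CLK * N_SWITCHING
print(p_dynamic)  # on the order of 1e2 watts
```

Even with a femtojoule-scale energy per switch, these assumed counts multiply out to a power on the order of a hundred watts, which is why gate-charging power dominates a CPU's dissipation in a way it never does for a discrete power converter.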

My knowledge of this theory comes from power electronics, but the basic principles should also apply to CPUs. At our scale, gate drive power is tiny compared to the switching and conduction losses of the power transistors, so we don't spend much effort optimizing gate drive losses. So it was somewhat shocking to see that power dissipation in CPUs is all about the gate drive power itself.