The TR Podcast 35: Breaking the glass, surfing the music, and suing Apple

Date: Feb 14, 2009

Time: 1:17:15

Hosted by Jordan Drake

Co-Hosts: Scott Wasson, Geoff Gasior, Cyril Kowaliski, Matt Butrovich

Listen now:
Download: MP3 (53.1MB) | M4A (58.7MB)

Subscribe with RSS | Subscribe with iTunes

Show notes

Scott starts off this week’s tech discussion with tales of his recent trip to Oregon, where he talked with Intel executives about the newly announced 32nm Westmere processors. Our editors are skeptical about Intel’s motives for making Westmere into a CPU/GPU chimera, especially since—as Geoff points out—current Intel integrated graphics are still considerably slower than the competition.

Scott then leaves our show early to continue his work in Damage Labs, and TR blogger and reviewer Matt Butrovich fills in to talk about everything from Mirror’s Edge PhysX performance to a possible $99 iPhone. We also hear Geoff’s thoughts on the 2TB Western Digital Caviar Green drive and his favorite musical rhythm game.

Send in listener mail, and we’ll answer on the podcast. – jdrake@techreport.com

Tech discussion:

    32nm Intel Westmere to hit production in late 2009 (0:02:04)- Read more

    A look at PhysX in Mirror’s Edge (0:28:50)- Read more

    Riding Audiosurf’s Technicolor rollercoaster (0:35:35)- Read more

    Western Digital’s Caviar Green 2TB hard drive (0:42:53)- Read more

    Nvidia posts losses for last fiscal quarter, fiscal year (0:49:48)- Read more

    Analyst report forecasts $99 iPhone for the summer (0:52:28)- Read more

    Judge gives nod to new Psystar countersuit against Apple (1:05:05)- Read more

That’s all, folks! Check back on February 21 for the next TR podcast.

Comments closed
    • axeman
    • 11 years ago

    Let me pick on a few things (in good humour though).

    Someone’s (Jordan? definitely not Geoff) pronunciation of “chic”… “Chick”? Is that a mistake, or do Americans really not even try with French loanwords?

    Secondly… Scott said “more better”…. 😉

    I know this is not scripted or anything, but the little slip-ups are pretty funny. The content, though, is excellent. It took me a while to get into the podcasts, mostly because I’m so busy (or have too short an attention span) that I can’t find the time to listen to it all. Now they are one of my favourite features of the site. Keep up the good work!

      • ssidbroadcast
      • 11 years ago

      Yeah Jordan incorrectly pronounced /[

        • jdrake
        • 11 years ago

        How would I know my place in this world if it wasn’t for you guys keeping me honest? ;-)

        I don’t believe I used the word “chic” at all… at least, I don’t remember. I’ll just assume you’re right. However, if I was pronouncing it wrong, then I would have had to have been reading it from somewhere… any ideas?

          • Meadows
          • 11 years ago

          How are we supposed to know what you were reading?
          Try drinking less when you record yourself. Missing memory is a sign of either damage or senility. 😉

    • Farting Bob
    • 11 years ago

    Memory latency is nowhere near the top of the list of IGP limiting factors. When you’re getting single-digit FPS at low resolutions in games made several years ago, latency makes no discernible difference. I think Intel and AMD are just looking for ways to make more use of the CPU. Within a few years, Atom-sized chips will be more than adequate for the majority of desktop users. Moving the IGP into the CPU means Intel can get a larger slice of the pie, while motherboard makers will likely suffer as another feature is removed; they are already running out of useful things they can market to consumers.

    • axeman
    • 11 years ago

    On the integrated graphics and memory latency “issue”:

    I remember some people speculating that the original Athlon 64 platform would have issues with integrated graphics performance, since the memory controller was now integrated into the CPU, increasing latency compared to having the memory controller and IGP together on the northbridge. Fast-forward to right now, and it’s clear that memory latency isn’t a limiting factor in IGP performance. Nvidia’s and AMD’s integrated graphics wipe the floor with Intel’s despite Intel’s IGP being on the same chip as the memory controller. Even in situations where an AMD platform has less memory bandwidth, it still stomps Intel. So integrated graphics performance depends on pixel-pushing power first, then memory bandwidth, then memory latency, if memory latency matters at all. In fact, with discrete graphics cards, I’ve never heard anyone give a sniff about memory latency…

    • Pax-UX
    • 11 years ago

    Loved the way the audio goes to crap when Geoff starts bashing the iPhone pricing! XD

    Apple is just in vogue at the moment. People don’t need more options, that’s PC thinking! Apple is all about keeping it simple, choices only confuse the customer.

    Heroes now suxxx!

      • ssidbroadcast
      • 11 years ago

      Yeah what Geoff said was bothering me and I’m glad that jdrake called him out on it.

      ANY discounted phone you “buy” with a cellphone provider is subsidized through the contract. Even the cheapo free ones.

        • Dissonance
        • 11 years ago

        You’ve missed the point. It’s not a question of whether the cost of a phone is subsidized by a service contract (the last few phones I’ve had have been free after contract), but whether the “cost” of just the phone matters at all.

        Apple made much of the fact that the iPhone 3G was cheaper than the original, but that was only the asking price of the phone. With the mandatory contract, the 3G actually cost more over the life of the contract than the first iPhone. Any discussion of a “cheaper” potential iPhone at $99 needs to take into account Apple’s history of low-balling the phone price so they can tout a price cut, but jacking you with a more expensive contract.
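        The contract math described here can be sketched in a few lines. The dollar figures below are illustrative assumptions (roughly the widely reported US prices of the era: original iPhone at $399 with a ~$20/month data add-on, iPhone 3G at $199 with a ~$30/month add-on, 24-month term), not numbers taken from the podcast:

```python
# Total cost of ownership over a phone contract: sticker price plus
# the monthly add-on over the contract term. All dollar figures are
# illustrative assumptions, not numbers from the discussion above.
def contract_cost(phone_price, monthly_addon, months=24):
    return phone_price + monthly_addon * months

original_total = contract_cost(399, 20)  # 399 + 20*24 = 879
threeg_total = contract_cost(199, 30)    # 199 + 30*24 = 919
print(original_total, threeg_total)      # cheaper sticker, pricier overall
```

        Under these assumptions the “cheaper” 3G actually costs about $40 more over the life of the contract, which is exactly the low-ball-the-sticker-price effect Dissonance describes.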

          • ssidbroadcast
          • 11 years ago

          Fair enough. If you frame that in terms of the time-value concept of money, it’s still a win, because you are giving /[

    • rootheday
    • 11 years ago

    Re Westmere: Why is 100% of the focus on 3D performance? What about performance/watt or performance/dollar for a 45nm igp? What about media playback, thermals, battery life? What about a 2 chip solution enabling smaller form factors, one cooling system for both cpu/gpu + passive southbridge…

    Clarkdale/Arrandale 2C/4T cpus will be very competent dual core cpus – better than Core2 and can always be paired with discrete graphics for budget gaming machines. That said, these are mainstream/value parts. In the performance segment, 4C/8T Clarksfield/Lynnfield (same LGA1156 socket) will be available (and sooner) for folks who want exclusively discrete graphics and maximum performance.

    Suppose, hypothetically, that Intel’s Westmere igp matches the nVidia 9300 that everyone is gushing over today – the impression given in this podcast is that somehow that wouldn’t be adequate. How much 3D in an igp is enough? Does it have to equal a 4670? A 9800GTX? Higher?

    This has the feel of a moving target – people currently like the 9300 and 780G and bash the G45 – all released in 2008 – even though the G45 is faster than any IGP from 2007. And even though reviews of the 780G and 9300 then go on to say that, of course, real gamers should follow the price/performance curve to the sweet spot in the ~$80-$130 discrete gpu range – because nothing below that really offers decent gaming anyway.

    That kind of performance requires die area and power that are not feasible at igp price points with current silicon process technology. No amount of driver work can compensate for lack of gates/MHz.

    Neither OEMs nor mainstream/corporate buyers are willing to pay a premium for “somewhat better but still ho-hum 3D” igps – but they do value igps for reliability, media playback, and battery life. OEMs also want igps to be really cheap for entry-level pricing – they’d love to upsell to discrete at significant markup. In fact, the popularity of G33/G31 and netbooks is a testament to the flight to low-end value.

    Simple economics dictates that the value and mainstream segments will have the lowest performing graphics parts. Desire for the PC to be the preeminent gaming platform is not sufficient business justification for Intel (or anyone else) to try to force that lowest bar to be at Corvette/Porsche horsepower.

    Intel’s answer is to make higher performance happen at ~price parity by aggressively pushing its igp to new process technology at twice the pace of Moore’s law – 90nm in 2007, 65nm in 2008, 45nm in late 2009, 32nm in late 2010, 22nm in 2011. That’s risky, but it’s the only way to get to higher performance (more SPs, bigger caches, more sophisticated architectures) without blowing the die size budget.
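    The cadence claim above can be checked with quick arithmetic: a full node shrink roughly doubles transistor density (density scales as the square of the linear-dimension ratio), and Moore’s-law pacing historically delivered one such shrink about every two years, so one shrink per year is roughly twice that pace. A minimal sketch, using the node list and years quoted in the comment:

```python
# Each node step gains density roughly as (old_nm / new_nm) ** 2.
# Node sizes and years are taken from the comment above.
nodes = [(2007, 90), (2008, 65), (2009, 45), (2010, 32), (2011, 22)]

for (y0, n0), (y1, n1) in zip(nodes, nodes[1:]):
    density_gain = (n0 / n1) ** 2  # approximate transistor-density gain
    print(f"{y0}->{y1}: {n0}nm -> {n1}nm, ~{density_gain:.2f}x density in {y1 - y0} year")
```

    Each step lands close to a 2x density gain in a single year, versus the two years per doubling that classic Moore’s-law pacing implies.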

    Lastly, Intel already has a working solution for switchable graphics with both ATI and Nvidia discrete graphics on Sony and Lenovo laptops based on the GM45 – no reboot required, and the switch time isn’t anything like 20 seconds – more like 7. Obviously not instantaneous, but not debilitating either – it’s not like users will be wanting to toggle into and out of gaming mode every few minutes.

      • eitje
      • 11 years ago

      Welcome to Tech Report!

      I notice that your account doesn’t exist anymore. We’ll be sorry to see you go.

      If there was anything we could have done to keep you here, please let us know!

      • Damage
      • 11 years ago

      Hey, first let me say, congrats on landing a job at Intel. By all accounts, they are a fine employer.

      Your reply has some embedded premises and familiar arguments with which I don’t agree, but thanks for sharing this useful illustration of those things. Let me quickly address some points, to better clarify my reasons for saying what I did in our conversation on the podcast.

      You make a number of points related to silicon area and cost for integrated graphics. My frustrations with Intel’s integrated graphics aren’t purely related to this issue. Yes, the Intel IGPs are weak performers, but I think the larger problem involves a host of other things as well: feature set (supporting all of the texture formats and capabilities needed to usefully meet a standard like DX10), architectural efficiency (performance/die area), and--critically--driver software quality. All of these considerations combine to determine the larger issues, like basic compatibility with newer games and the combination of performance and image output needed to run those games competently. Intel’s IGPs have historically, consistently lagged behind ATI/AMD’s and Nvidia’s in each of these departments.

      Here’s where things were, purely on performance, last we checked:

      https://techreport.com/articles.x/15690/7

      The G45 IGP performs at somewhere between a half and a sixth of the speed of a GeForce 9300, and it also trails the AMD 790GX, 780G, and GeForce 8300. All three of the issues I've mentioned (architectural efficiency, feature set, and software) contribute to this poor performance, not just silicon area alone. Doubling the number of SPs in the G45 IGP still wouldn't bring Intel to parity with the competition on raw performance, let alone compatibility, stability, and image fidelity.

      Yes, the standard for judging IGP capability is a moving target. AMD and Nvidia improve their offerings regularly, as one would expect. But I would argue that, in a critical way, the target is moving slowly right now. Game developers have locked in on the current generation of console hardware as a baseline standard, and games haven't really pushed the boundaries in visual fidelity for a while. During this time, the other guys have made big strides toward real gaming competency in their IGPs. In this context, I think Intel is wrong to treat the goal of gaming competency as an unrealistic, unachievable one, as you do with your (ack!) car analogy. This tendency to regard gaming capability as a niche thing, separate and independent from the needs of the larger PC platform, is one of the primary problems with Intel's position as a graphics supplier for a major portion of the market. Because so very many consumer desktops and laptops ship with Intel graphics, this attitude--along with the problems with its IGPs that Intel has tolerated because of it--has stunted the growth of the PC gaming ecosystem.

      You're correct about the impact of cost considerations in one respect. Because Intel doesn't have a high-end gaming GPU, it has been unwilling to invest in the sort of robust driver development and developer relations efforts necessary to establish and maintain really solid game compatibility and performance. Nvidia and AMD IGPs benefit immensely from software work done to serve an entire lineup of GPUs. I view the Intel IGP software issue as just as intractable as any silicon issue; it has certainly endured longer than any single chip design. Things may change as the Larrabee dev effort ramps up and Larrabee-style hardware is eventually integrated into CPUs. But all indications are that the present situation will persist during Westmere's lifetime--especially because Westmere is based on very different hardware (and thus software) than Larrabee.

      Meanwhile, Intel has long used its immense market power to ensure that its IGPs ship in the majority of PCs sold. Westmere obviously ups the ante over the bundling and integration methods that Intel has used in the past; one won't be able to buy a dual-core 32nm Intel processor without an IGP. Naturally, this reality has focused our attention on the graphics component of the product--and on how its inclusion may affect the PC platform. Given everything we know about Intel's IGPs and track record, I simply cannot escape the impression that this impact will be negative. The fine chipset IGP alternatives we have now, like the GeForce 9300, will be lost. Mobile OEMs will be forced to choose a hybrid graphics scheme or the additional power draw of a discrete GPU. High-volume systems will have an even higher percentage of Intel IGPs, with all of the unfortunate consequences.

      Incidentally, the fact that a user would perceive a 20-second hybrid switch delay while you would claim seven seconds is no surprise. User perceptions of large "seams" in device operation tend to be outsized. That's why the only viable standard for high-volume hybrid graphics solutions is seamless switching, a goal that remains far distant.

      Is Westmere's integration of this particular graphics technology the end of the world? No, not even close, and the market will adapt the best it can. But it is unfortunate.

        • ssidbroadcast
        • 11 years ago

        Am I missing something? How does Damage know where this guy works?

          • Meadows
          • 11 years ago

          He’s got the /[

          • eitje
          • 11 years ago

          IP Address.

      • Voldenuit
      • 11 years ago

      l[

    • Prototyped
    • 11 years ago

    Auburndale/Havendale was /[ https://techreport.com/discussions.x/16007 http://en.expreview.com/2008/09/04/lynnfield-has-powered-on-and-booted-linux-windows-prepare-for-holiday-refresh-2009.html

    • jonybiskit
    • 11 years ago

    I wonder if Westmere is gonna be in the new console Intel is gonna be doing… hmmmmm

    • Meadows
    • 11 years ago

    Glass in Mirror’s Edge is lame.
    Discuss.

      • BoBzeBuilder
      • 11 years ago

      Meadows is lame.
      Discuss.

        • Meadows
        • 11 years ago

        I disagree.

          • 2cans
          • 11 years ago

          one for bob,
          lol

        • eitje
        • 11 years ago

        I concur (in the biblical sense).
