The TR Podcast 152: Intel’s new desktop mojo, DX12, and TR does subscriptions

The Tech Report Podcast

Date: March 23, 2014
Duration: 1:25:50

Hosted by: Jordan Drake

Co-Hosts: Scott Wasson, Geoff Gasior, and Cyril Kowaliski

MP3 (61.9MB)

RSS (MP3) | RSS (M4A)
iTunes (MP3) | iTunes (M4A)


Show notes

With Cyril reporting live from GDC in San Francisco, we kick off our program with a quick listener mail question before diving into the complete scoop on Microsoft’s long-awaited DirectX 12 API. We then move on to other GDC news, including Unreal Engine 4’s and CryEngine’s new subscription models and Intel’s slew of new enthusiast CPUs. Finally, Scott explains the ins and outs of our exciting new TR subscription model, before Geoff closes the show with his review of Crucial’s M550 SSD.

Send in listener mail, and we’ll answer on the podcast.

Follow us on Twitter – Scott | Jordan | Geoff | Cyril | The Tech Report

Listener mail/tweets:

High Resolution Monitors? – from Matthew:

“Long-time listener, first-time caller. Here is my question: why can you get a tablet (for example, the Nexus 10) with a screen resolution of 2560×1600, or even a laptop screen that is 3200×1800 (Dell XPS 15), but you cannot get 2560×1600 in a 24" monitor? Is it the manufacturers, or a software problem? Some of us just cannot fit a 30" monitor on our desk, but I would like to have a higher-DPI display. Thanks for the quality work.”
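For reference, the pixel densities Matthew is comparing are easy to work out from resolution and diagonal size. A quick sketch (the 10.055" Nexus 10 diagonal is the published spec; the 24" and 30" figures are the hypothetical desktop sizes from the question):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a display of given resolution and diagonal size."""
    diag_px = math.hypot(width_px, height_px)  # diagonal length in pixels
    return diag_px / diagonal_in

print(round(ppi(2560, 1600, 10.055)))  # Nexus 10: ~300 PPI
print(round(ppi(2560, 1600, 24)))      # 24" desktop panel: ~126 PPI
print(round(ppi(2560, 1600, 30)))      # 30" desktop panel: ~101 PPI
```

So even at 2560×1600, a 24" panel would sit around 126 PPI, well short of tablet-class density, which is part of why "high-DPI" desktop monitors lagged tablets at the time.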

Tech discussion:

  • DirectX 12 to support existing hardware; first games due in late 2015 – Read more

  • Unreal Engine 4 available to all for $19/month and 5% of gross – Read more

  • CryEngine joins the subscriptions club: royalty-free for $9.90/month – Read more

  • Teams picked: CryEngine gets Mantle, UE4 adds GameWorks support – Read more

  • Crytek demos CryEngine for Linux at GDC – Read more

  • Intel to renew commitment to desktop PCs with a slew of new CPUs – Read more

  • Intel shows portable all-in-one prototype, first Ready Mode app – Read more

  • Introducing TR subscriptions – Read more

  • Crucial’s M550 solid-state drive reviewed – Read more

That’s all, folks! We’ll see you on the next episode.

Comments closed
    • The Dark One
    • 6 years ago

    Valve’s tools have been notoriously difficult to use. Their hiring leans heavily on people with multidisciplinary skill sets, where a “fix a problem if you find it” mentality can actually work, but that doesn’t mean the result is anything an outsider would want (or know how) to use. Their big public-facing tools, the Workshop and Greenlight, seem to be headed to the point where Valve can sit back and drop all pretenses of curation. Having to support an engine would take a gazillion times more effort than those two things, and in a corporate culture where people can vote with their movable desks, I doubt they’ll make a big effort to license Source 2.

    • danny e.
    • 6 years ago

    Hey now, Cyril! This is a family podcast. 😉

    • Ninjitsu
    • 6 years ago

    How do you record these podcasts? As in, what software, etc.?

      • DancinJack
      • 6 years ago

      I think GarageBand, for the most part. No idea about the hardware, besides the Mac.

        • Ninjitsu
        • 6 years ago

        No, I mean: how do they record all the streams simultaneously?

        Audacity, for example, can record any one device’s input/output, so if I were to set up a conference call on Skype, I wouldn’t be able to record both my mic and the stereo output simultaneously.

        So they must each record their own stream and Jordan mixes them, but then how does he get the timing absolutely correct? Sync audio pulses at the start?

          • Duck
          • 6 years ago

          There are many audio interfaces available with more than just stereo input/output.

          Any real DAW software (digital audio workstation) can record any of the inputs or can record all of them simultaneously. I don’t think there is a limit to the number of inputs other than the number you have in your audio interface.

        • Dariens007
        • 6 years ago

        I thought on a previous episode they said that, because of changes to OS X, they stopped using GarageBand and started using something else?

      • Milo Burke
      • 6 years ago

      Unless they have some bizarre conference-calling technology I haven’t been able to find, it’s very likely that they use Skype to talk together, but in addition to recording the Skype output, each person records his own microphone on his own computer. Then they all email Jordan the files.

      This way, Jordan has a separate recording of each microphone in order to edit in multi-track and mix down later, and also the full Skype conversation so he knows how to line things up.

      But unlike with music or video, it’s not critical if one track is offset from another by 50-100 milliseconds. As long as they’re not talking over each other, it’s hard to tell. But I bet he could eyeball it to within a few milliseconds even without test tones playing, just by matching it up to the Skype recording.
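      The lining-up step Milo describes can even be automated. To be clear, this is not TR’s actual workflow, just a minimal NumPy sketch of the idea: cross-correlate a local recording against the master Skype track, and the correlation peak gives the offset. The array lengths and sample rate here are illustrative.

```python
import numpy as np

RATE = 44100  # samples per second, a common recording rate

def find_offset(master, track):
    """Return how many samples `track` lags behind `master`.

    Cross-correlation peaks where the two signals line up best;
    a positive result means `track` starts later than `master`.
    """
    corr = np.correlate(master, track, mode="full")
    return (len(track) - 1) - int(np.argmax(corr))

# Synthetic sanity check: the same burst of noise, 2000 samples apart.
rng = np.random.default_rng(0)
burst = rng.standard_normal(500)
master = np.zeros(8000)
master[1000:1500] = burst
track = np.zeros(8000)
track[3000:3500] = burst

lag = find_offset(master, track)
print(lag, lag / RATE)  # 2000 samples, about 0.045 s
```

      In practice a DAW user would just nudge the track by that many samples; a sync pulse (or clap) at the start of the call makes the correlation peak even sharper.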

      As for microphones, Jordan has the best: a Rode NT1a. I know because I have the same mic, and I recognized his in a picture. =] I suspect the others use headsets of varying quality.

      Once you have the separate files for each participant (assuming each is diligent enough to record himself and send the file without issue), most any digital audio workstation (or DAW) will do the trick for editing. Audacity is a popular free one for Windows, although it has a learning curve. Sony Acid is cheap and decent, as is Tracktion. More professional favorites for working with music include Pro Tools, Logic, Reason, and Reaper.

        • jdrake
        • 6 years ago

        Points for Milo! This is pretty much exactly correct.

        We use Skype to talk to each other, while each person locally records their own voice. I record using a Rode NT1a microphone, which I’ve used since day one of the podcast.

        Each person’s audio is then delivered to me, and I compile all 4 voices in Audacity (quick, easy, free). We sync everything up to the master Skype track that I record on my end.

          I also apply noise removal and a variety of other effects before delivering the final product.

          • Ninjitsu
          • 6 years ago

          Awesome, thanks both of you!

          • sigher
          • 6 years ago

          Wow, that’s pretty primitive and disappointing, I think. Surely you can just mix it live; the described setup sounds like how they would have done it in the 1940s.

    • Ninjitsu
    • 6 years ago

    I suspect Mantle may become part of DX12, if AMD has shared the concept and parts of the implementation. That would also explain why Nvidia and Intel may require new hardware to take advantage of Feature Level 12 stuff, but AMD mostly won’t (or so it seems).

      • Welch
      • 6 years ago

      I do find it strange that AMD’s own Mantle is only accessible to GCN cards, and yet DX12 states a requirement that AMD cards be GCN. If nothing else, it sounds like DX12 will have similar enough features because of those requirements, or, as you said, Mantle will in fact be packaged into DX12.

      Gotta remember that Mantle may become the favored API on other platforms too… SteamOS anyone?

        • dragmor
        • 6 years ago

        The whole point of DX12 is to speed up the Xbox One using AMD’s Mantle tech, which is already in the APU. There is a reason that Mantle uses the DirectX shader language.

        The only thing that is strange is that all of the industry players were saying there would be no future DirectX version, and now it turns out that Nvidia has been working with MS on DX12 for the last 12 months.

        • Ninjitsu
        • 6 years ago

        I believe the reason is this:
        [quote<]The interesting thing about all of this is what’s excluded: namely, AMD’s D3D11 VLIW5 and VLIW4 architectures. We’ve written about VLIW in comparison to GCN in great depth, and the takeaway from that is that, unlike any of the other architectures here, only AMD was using a VLIW design. Every architecture has its strengths and weaknesses, and while VLIW could pack a lot of hardware into a small amount of space, the inflexible scheduling inherent to the execution model was a very big part of the reason that AMD moved to GCN, along with a number of special cases regarding pipeline and memory operations.

        Now why do we bring this up? Because with GCN, Fermi, and Gen 7.5, all PC GPUs suddenly started looking a lot more alike. To be clear, there are still a number of differences between these architectures, from their warp/wavefront size to how their SIMDs are organized and what they’re capable of. But the important point is that with each successive generation, driven by the flexibility required for efficient GPU computing, these architectures have become more and more alike. They’re far more similar now than they have been since even the earliest days of programmable GPUs.[/quote<]

        Source: [url<][/url<]
