The TR Podcast 178: Going deep with the Radeon Fury X

This week's episode is dedicated to the new Radeon Fury cards, the Radeon 300-series refresh, and the Fiji GPU architecture. We take a ton of questions and dig deep into AMD's new graphics chip.

As usual, the audio version of the podcast will be made available when we have it edited. In the meantime, you can enjoy the video version above. Thanks to everybody who watched live. We appreciate your questions and participation.

If you'd like to be notified when we're recording, follow us on Twitch or simply follow me on Twitter. You can also subscribe to our YouTube channel to receive notices when we post videos there.

Comments closed
    • USAFTW
    • 6 years ago

    It’s interesting how this has turned into one of the most-commented podcasts that I can think of. Folks, it seems, are kinda looking forward to Fiji.

    • tipoo
    • 6 years ago

    So, the part about the interposer: basically, the added performance from HBM is limited by how big they can make a chip now. Which means, come on, process shrink! 28nm has lasted way too long. Gotta cram more transistors in there for more gains, now more than ever.

    • Thbbft
    • 6 years ago

    Re the podcast:

    1. The R9 Nano – AMD has stated the Nano is ‘substantially’ more powerful than the 290X. Per WCCF Tech: “In fact, AMD confirmed to us that the Radeon R9 Nano is about 85-90% of the Radeon R9 Fury X” – that puts it into direct competition with the 980, not the 970.

    2. The Quantum Computer – The graphics hardware is in the bottom section, not the top section per your guess; the top section is purely cooling. Putting the graphics in the top section would increase latency in a system targeting VR applications.

      • auxy
      • 6 years ago

      Ever seen a PCIe ribbon cable…? (´・ω・`)

        • Thbbft
        • 6 years ago

        Nope, never heard of one. Seems a pretty esoteric item for home use, and wouldn’t that 24″ of additional signal path defeat the primary purpose of the Quantum computer – at least 90 FPS + absolute minimum attainable latency for VR applications?

          • auxy
          • 6 years ago

          How is an internal computer component ‘an esoteric item for home use’? (≧∇≦)b

          It seems fairly obvious you don’t know what I’m talking about. It’s an internal ribbon cable that lets you reposition a PCIe interface. The user doesn’t mess with it. A DRAM DIMM is also ‘a pretty esoteric item for home use,’ lol.

          Furthermore, how does it defeat any purpose…?

            • Thbbft
            • 6 years ago

            I understand what you’re talking about now that you’ve mentioned it; it’s just that I’ve never seen one or thought about using one until now, nor is a PCIe ribbon cable a usual internal computer component. A home system builder using a standard case, which is the vast majority of home system builders, would have no use for such an extender cable. As in my case: in the last ten years I’ve built 20 or so systems in a variety of cases for myself, friends, and neighbors, and a PCIe extender cable never entered my mind because I’ve never NEEDED such a cable, so I never researched what was out there. It’s pretty ridiculous for you to imply a PCIe cable is a computer component commonly used in system builds or would be familiar to most home system builders.

            I stated how it defeats the purpose. The Quantum computer specifically targets VR applications, where minimizing latency is a critical consideration; increasing the signal path length of the CPU <-> GPU interface will add latency by definition. In normal use this would not matter, but in VR minimizing latency is critical to the experience, so it would matter.

            • auxy
            • 6 years ago

            It doesn’t add latency. (*´∀`*)

            • Meadows
            • 6 years ago

            Every cable adds latency, but as long as it’s a direct connection (no chip logic) and the added latency stays in the nanosecond range, you’re golden.
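
            For scale, here’s a rough back-of-envelope sketch of that point (the ~0.7c propagation factor and the 24″ length are assumptions for illustration, not datasheet figures):

            ```python
            # Rough back-of-envelope: latency added by a 24" PCIe extender,
            # compared against one frame at 90 FPS. Assumes signal propagation
            # at ~0.7c in copper -- a ballpark assumption, not a spec.
            C = 299_792_458            # speed of light, m/s
            CABLE_LENGTH_M = 0.61      # 24 inches, in meters
            PROP_FACTOR = 0.7          # assumed fraction of c in the cable

            added_s = CABLE_LENGTH_M / (PROP_FACTOR * C)
            frame_s = 1 / 90           # one frame at 90 FPS

            print(f"added cable latency: {added_s * 1e9:.1f} ns")   # ~2.9 ns
            print(f"90 FPS frame budget: {frame_s * 1e3:.1f} ms")   # ~11.1 ms
            print(f"fraction of a frame: {added_s / frame_s:.1e}")  # ~2.6e-07
            ```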

            • auxy
            • 6 years ago

            Thanks, Autism Lad! Your unmatched pedantry wins the day once more! (・∀・)

            • Meadows
            • 6 years ago

            Thanks!
            By the way, for what it’s worth, I too have no idea what this cable is that we’re talking about, or whether it includes chip logic.

            • auxy
            • 6 years ago

            [url<]http://m.newegg.com/Product/index?itemnumber=9SIA76H2GT8096[/url<]

      • exilon
      • 6 years ago

      If the Nano is 85-90% of the Fury X at 175W, what’s AMD doing wrong with the Fury X? The claim smells.

        • NoOne ButMe
        • 6 years ago

        The Nano @ 800MHz should get around 85% of the Fury X’s stock performance. Use the best bins for the Nano (I expect the Fury X to get the worst bins, tbh) and you should be able to eke out that 175 watts at 800MHz.

        I expect the Fury X to get the worse bins because those are the most leaky. Low heat lowers leakage, so the lowest bins are the “best” fit for water cooling: they gain the most value from it.

    • Arclight
    • 6 years ago

    Regarding the laptop with the new AMD chip, if you test it fast enough you can send it back afterwards and get your money back.

    • tipoo
    • 6 years ago

    Triple Fiji card? That would be 1) Insane 2) probably stupid. Frame times are suboptimal even with dual chip cards, throwing a third chip in there, I can only imagine would make it worse.

      • ImSpartacus
      • 6 years ago

      Yeah, from what I’ve heard, the idea of a dual GPU setup isn’t that bad for something like next-gen VR, because each GPU can drive the display in front of each eye and they can be allowed to run at different variable refresh rates as necessary.

      But since you don’t have three eyes…

    • NoOne ButMe
    • 6 years ago

    My only issue with your treatment of clocks from AMD is that there has never been anything showing what base level of performance Nvidia cards will deliver, or an exploration of what real boost clocks end up being, averaged across multiple cards, review and retail, from the same line of products.

    Would very much like to see a section for Nvidia cards where you turn off boost and lock the card to its minimum guaranteed clock. Maybe just on the FPS/$ chart (in theory the frame percentiles should be about the same?) as a point of minimum performance.

    Some unlucky person out there is going to end up with a card that only reaches the boost clock… That, or Nvidia’s boost clock is a made-up number that should be treated as the base clock. Either way, please have a reference for it, given that it’s sanely possible.

    • l33t-g4m3r
    • 6 years ago

    Power consumption: “Other Things”.
    So when are we going to admit that it’s due to HBM? AMD has already stated that memory sucks a good deal of power, and moving to HBM saves power. The architecture has a few minor tweaks along with the cooling, but you guys are acting like HBM did nothing to save power, and it does.

    Prices: “These prices are for people who don’t know any better”.
    Since when have you guys ever stated this for Nvidia cards? They’re absolutely price gouging the high end, and so is the 970 at $300+. The 970 is a $200-ish card, like the 660 was. Gotta love the double standard, not to mention the constant insinuation that AMD cards have horrible caveats while NV cards don’t, and they absolutely do. As for price, the cards are pretty well matched against the competition, so if there’s a problem with price, it should be addressed to NV, and why their cards are so ridiculously expensive.

    Memory capacity: Where are NV’s 8GB mid-range cards again? Yeah, buy a 980Ti. Let’s not talk about the 970’s 3.5GB either. This probably isn’t going to be an issue anytime soon, regardless.

    Benchmarks where AMD is doing well: “I think this is going to be a toss-up”. lol.

    Testing the memory limit: Nice that you’re being thorough; now, is the 970 going to be included?

    TR not getting review samples: [i<]Well gee, I wonder why?....[/i<]

    The only thing I agreed with was the ROPs, but IMO cards aren't limited by ROPs as much as they are by graphics/shader capability, though it could be a toss-up depending on what settings you use. FP16 isn't an issue as long as AMD uses fp16 demotion, but they do need to fix that eventually. Conservative raster just sounds like we're back to dx10/dx10.1, and the differences are minor.

    NV drivers: Ask Kepler users if this is really a selling point anymore. I originally switched to NV for drivers, but at this point, I can't really say that's true anymore. There might be a few niche scenarios, but it's not to the extent that you need to use NV either.

    Overall, I think AMD has finally managed a win with Fury here, and it would be really sad to see its review littered with caveats and quotes like "[i<][url=https://techreport.com/review/25602/amd-radeon-r9-290-graphics-card-reviewed<]One could conceivably make a case for the GTX 780 over the R9 290—and the 290X, for that matter.[/url<][/i<]", which in retrospect was a bad deal considering NV stopped optimizing for Kepler. So I just hope Fury's review doesn't end up with the 980Ti somehow getting an Editor's Choice award at the conclusion.

      • NoOne ButMe
      • 6 years ago

      The power savings isn’t all due to HBM, just mostly. A GDDR5 version of Fiji would likely be ~325W, with the GDDR5 being ~100 watts, so ~50 watts are saved (ish) with HBM. GDDR5 on Hawaii uses under 25% of the total power, or under 70 watts, giving about 180-190 watts for the CU/misc. Fiji has ~225 watts for CU/misc. That’s only about a 25% increase in CU power for a ~45% increase in CU count, so there are probably minor improvements in other places too. [b<]Mostly HBM still.[/b<]

      Your pricing for Nvidia's card is completely wrong and also ignores the reality of rising development costs. It's ~400mm^2 of silicon, and it has the same amount of RAM as the 290/290X running at a faster clock. It doesn't deserve a slot below maybe $250 based on economics and a few other things, and that's whenever GM204 gets EOL'd and Nvidia just needs to move stock to replace it with something. AMD's profits on a 290 selling for $225-250 are pitiful. That's bad for AMD, bad for AIBs, bad for retailers. It's also bad for consumers in the long run. The same is true switching the 290 for the 970 and AMD for Nvidia.

      As someone who uses products from AMD and Nvidia in the pretty-low and mid-high end ranges, I don't have problems with either's drivers more than the other's. There are some things I like about the control setup for CCC, and some about Nvidia's graphics options. I like Nvidia's setup more; CCC is some program that is always in the taskbar with a weird interface.

      [b<]A huge note: releasing tons of drivers DOES NOT mean they are better drivers, especially if the relative performance of two cards in a new game is similar to what it is for older games. And especially if faster drivers end up with HUGE driver issues (see Kepler for TW3, as the OP in this thread stated). As stated, I have about equal experience with both companies; AMD just needs to learn to make a decent interface QQ[/b<]
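
      Laid out as a quick sketch, using the rough estimates above (none of these wattages are measured figures):

      ```python
      # Sketch of the power-budget estimates above. Every wattage here is a
      # rough estimate from this thread, not a measured figure.
      hawaii_board_w = 250   # ~250-260 W that the 290X uses
      hawaii_mem_w = 70      # "under 25%" of Hawaii's power goes to GDDR5
      hawaii_core_w = hawaii_board_w - hawaii_mem_w   # ~180 W for CU/misc

      fiji_gddr5_board_w = 325   # hypothetical GDDR5 Fiji, estimated
      fiji_hbm_board_w = 275     # actual Fury X board power
      hbm_savings_w = fiji_gddr5_board_w - fiji_hbm_board_w   # ~50 W

      fiji_core_w = 225          # estimated CU/misc budget for Fiji
      core_growth = fiji_core_w / hawaii_core_w - 1   # ~25% more core power
      cu_growth = 4096 / 2816 - 1                     # ~45% more CUs/shaders

      print(f"HBM saves ~{hbm_savings_w} W of board power")
      print(f"core power +{core_growth:.0%} for +{cu_growth:.0%} CUs")
      # Core power grows much more slowly than CU count, so something
      # beyond the memory swap (minor tweaks, binning) must help too.
      ```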

        • Meadows
        • 6 years ago

        Your numbers are entirely off the mark.

          • NoOne ButMe
          • 6 years ago

          AMD says otherwise. Only 15-20 watts, you say? lol! 🙂
          Compared to the 290X, maybe it uses 15-20 watts less for the memory itself. Compared to a theoretical Fiji with GDDR5? ~20 watts saved in DRAM and ~35 watts in the interface between GPU and memory.

          At most, Fiji has ~30% more power specifically for non-memory stuff at its 275W board power, compared to the ~250-260W the 290X uses. Hence, there must be some architectural gains, given that CU count increased ~45% and stock clockspeed rose.

          My source: [url<]http://www.microarch.org/micro46/files/keynote1.pdf[/url<] (page 45 for this specific power usage claim.)

      • Meadows
      • 6 years ago

      I agree with your first point. TR had a write-up about HBM and I’m going to quote it verbatim:

      [quote<]"Macri did say that GDDR5 consumes roughly one watt per 10 GB/s of bandwidth. That would work out to about 32W on a Radeon R9 290X. If HBM delivers on AMD's claims of more than 35 GB/s per watt, then Fiji's 512 GB/s subsystem ought to consume under 15W at peak. A rough savings of 15-17W in memory power is a fine thing, I suppose, but it's still only about five percent of a high-end graphics cards's total power budget. Then again, the power-efficiency numbers Macri provided only include the power used by the DRAMs themselves. The power savings on the GPU from the simpler PHYs and such may be considerable."[/quote<]

      This means that Fiji saves anywhere between 15-20 W due to the new memory structure, and that basically explains 100% of the TDP difference, with no "other things". They must've forgotten about that while filming the podcast.
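
      Plugging the quoted figures in directly (DRAM chips only, per the quote; the 320 GB/s and 512 GB/s bandwidths are the cards' public specs):

      ```python
      # Plugging in the DRAM-only figures from the quote. Bandwidth numbers
      # are the public specs for the two cards.
      GDDR5_W_PER_GBPS = 1 / 10   # "one watt per 10 GB/s of bandwidth"
      HBM_GBPS_PER_W = 35         # AMD claim: ">35 GB/s per watt"

      r9_290x_dram_w = 320 * GDDR5_W_PER_GBPS   # 320 GB/s -> ~32 W
      fury_x_dram_w = 512 / HBM_GBPS_PER_W      # 512 GB/s -> <15 W

      print(f"290X GDDR5 DRAM:  ~{r9_290x_dram_w:.0f} W")
      print(f"Fury X HBM DRAM:  <{fury_x_dram_w:.1f} W")
      print(f"DRAM-only saving: ~{r9_290x_dram_w - fury_x_dram_w:.0f} W")
      # Note: per the quote itself, this covers only the DRAM chips; the
      # PHY/interface power on the GPU side is explicitly excluded.
      ```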

        • NoOne ButMe
        • 6 years ago

        Please tell me you don’t seriously believe that the 290X only uses ~30 watts for its memory subsystem in total.

        And, to quote from your snippet…
        “The power savings on the GPU from the simpler PHYs and such may be considerable.”

        But, hey. Let’s not actually comprehend what we’re posting. That would be bad.
        [this edit applied after Meadows responded to this post; I believe this is the correct post to update with said information.]

          • Meadows
          • 6 years ago

          That was said by Joe Macri, lead engineer on the GDDR and HBM standards. In the other corner, we have you, a random shill who registered last month.

          It’s his word against yours.

            • NoOne ButMe
            • 6 years ago

            LOL. So, how does the GPU interface with the memory? What zero power bus does it use? I want to know. Badly.

            I’ll give you a hint: AMD’s Tahiti and Hawaii both use in the low 20% of their power for the GDDR5 subsystem.
            Nvidia’s GK110/GM200 use over 30% for their memory subsystem.
            Nvidia’s mid-range uses in the mid 20%.

            What’s 250*.2 equal?

            The primary reason Nvidia’s memory power consumption is so high is that when you drive GDDR5 clockspeeds up, the interface to the GDDR5 draws power at an above-linear rate. Of course, that just means their architectures have been that much more power efficient than AMD’s, despite having a GDDR5 disadvantage in terms of power draw.
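
            As a toy model of that above-linear behavior, assume the usual CMOS dynamic-power relation (P ∝ f·V²) and that higher interface clocks need a voltage bump; the operating points below are illustrative, not vendor specs:

            ```python
            # Toy model of the above-linear claim: CMOS dynamic power goes as
            # P ~ f * V^2, and pushing GDDR5 interface clocks up typically
            # needs more voltage. The (clock, voltage) points below are made
            # up purely for illustration -- not vendor specifications.
            def rel_power(f_ghz: float, volts: float) -> float:
                """Relative dynamic power, P = f * V^2 (arbitrary units)."""
                return f_ghz * volts ** 2

            points = [(5.0, 1.35), (6.0, 1.43), (7.0, 1.50)]  # hypothetical
            f0, v0 = points[0]
            p0 = rel_power(f0, v0)
            for f, v in points:
                dp = rel_power(f, v) / p0 - 1
                print(f"{f:.0f} GHz eff.: clock +{f / f0 - 1:.0%}, power +{dp:.0%}")
            # clock +20% -> power +35%; clock +40% -> power +73% in this
            # model, i.e. interface power rises faster than linearly.
            ```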

            • Meadows
            • 6 years ago

            You’re a funny man. Quite some conviction too.

            • NoOne ButMe
            • 6 years ago

            Once more, from AMD’s slides on HBM and die stacking… page 45*.
            AMD’s claim is that 8Gbps GDDR5 on a 512-bit bus uses around 85 watts of power. I’ve provided you a link earlier.

            Your inability or lack of desire to actually understand what you are talking about is quite annoying. And kind of funny.

            And, what zero power memory bus does GDDR5 use? I really, REALLY, want to know.

            • Meadows
            • 6 years ago

            Ask the lead engineer quoted above.

            Edit: nobody mentioned anything about zero-power-anything.

            • NoOne ButMe
            • 6 years ago

            You’re claiming that the GDDR5 on the 290X uses ~32 watts of power, and that four stacks of HBM1 running at 1GHz effective use ~15 watts of power. THOSE FIGURES ARE CORRECT.

            You, however, are also claiming that those figures are the chip’s total power consumption for the memory subsystem, and that HBM therefore saves a total of only ~15 watts. Which is impossible unless both interfaces use zero power.

            The reality is that GDDR5 uses a high-power interface that can draw over twice the power of the actual GDDR5 memory chips. The HBM memory interface is quite simple and draws power about equal to what four stacks @ 500MHz (1GHz effective) draw.

            So, either you believe that there is no difference between the interfaces (something disproved by the AMD link I gave you), or you believe the interfaces both use zero power. Both of these assumptions are false.

            And, as I just noticed a short while ago in your quote from TechReport’s earlier article….

            [quote<]"Macri did say that GDDR5 consumes roughly one watt per 10 GB/s of bandwidth. That would work out to about 32W on a Radeon R9 290X. If HBM delivers on AMD's claims of more than 35 GB/s per watt, then Fiji's 512 GB/s subsystem ought to consume under 15W at peak. A rough savings of 15-17W in memory power is a fine thing, I suppose, but it's still only about five percent of a high-end graphics cards's total power budget. [b<]Then again, the power-efficiency numbers Macri provided only include the power used by the DRAMs themselves. The power savings on the GPU from the simpler PHYs and such may be considerable.[/b<]"[/quote<]

            Hey, look! The numbers he gave didn't include the PHYs! Like what I'm talking about! It's almost like your "proof" isn't actually proof of what you want it to be.
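
            To make the DRAM-vs-PHY split concrete, here’s the arithmetic being argued, using the ~85 W slide-deck figure cited earlier plus the estimates in this thread (none of which are independent measurements):

            ```python
            # The DRAM-vs-PHY split being argued here. The 85 W total comes
            # from the AMD slide deck linked earlier (8 Gbps GDDR5, 512-bit
            # bus); the HBM PHY figure is an estimate from this thread, not
            # a measurement.
            gddr5_total_w = 85
            gddr5_dram_w = 32                           # DRAM chips alone
            gddr5_phy_w = gddr5_total_w - gddr5_dram_w  # ~53 W of interface

            hbm_dram_w = 15    # four HBM1 stacks @ 1 GHz effective
            hbm_phy_w = 15     # estimate: HBM PHY ~= what the stacks draw
            hbm_total_w = hbm_dram_w + hbm_phy_w

            print(f"GDDR5: {gddr5_dram_w} W DRAM + {gddr5_phy_w} W PHY "
                  f"= {gddr5_total_w} W")
            print(f"HBM:   {hbm_dram_w} W DRAM + {hbm_phy_w} W PHY "
                  f"= {hbm_total_w} W")
            print(f"total saving: ~{gddr5_total_w - hbm_total_w} W")  # ~55 W
            ```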

            • Meadows
            • 6 years ago

            So you claim that they saved another 50 W thanks to the simpler power delivery and they recycled that into enabling a bigger GPU instead. Makes sense, I guess.

            Regardless, you sound a bit too twitchy and nervous. Might want to calm yourself a little. I’m not going to ruin my morning tea because of an internet misunderstanding and neither should you.

            • NoOne ButMe
            • 6 years ago

            ~50-55 watts total. =]
            And, sorry, I do get too worked up over this stuff. It’s why I generally stay away from commenting. I’ve been reading here for years; as stated, I just try to stay away from posting. Memory technology and foundries are my passion, however, which makes it hard.

            Have a good day.

    • liquid_mage
    • 6 years ago

    I like your comments about businesses and CAD programs. Businesses buy Quadros and FirePros but are paying for the drivers, which provide a high level of precision. But AutoCAD does not cost $40k… A high-end Catia package is $25k.

    Otherwise I really enjoy your podcasts and site very much.

      • Damage
      • 6 years ago

      Yeah, I wasn’t trying to give a price quote on behalf of the AutoCAD guys. Just wanted to explain that workstation apps can cost tens of thousands, while the graphics cards cost a few grand or so. It’s a different market dynamic.

        • anotherengineer
        • 6 years ago

        Ya one would think that.

        However, companies try to nickel-and-dime whenever they can. I have a Dell Latitude i7 Ivy, with an SSD and an Nvidia NVS 5200M.

        And it’s driving AutoCAD 2015 Mechanical Deluxe Suite on a 23″ 1080p TN panel.

        And I think that particular ACAD package was about $5k.

          • flip-mode
          • 6 years ago

          You’ve got to get pretty big, and into 3D, for the graphics card to be important in a CAD station. 2D AutoCAD graphics are driven by the CPU, believe it or not. AutoCAD is hideous anyway. Revit or Archicad for the win (never seen Catia).

    • NoOne ButMe
    • 6 years ago

    Statements in “ ” are reader/watcher questions.

    ~0:02:15- About Fiji tech and the Fury and Nano products
    ~0:11:45- Project Quantum info.
    ~0:14:30- About r7/r9 300 series.
    ~0:22:45- About chip generations from AMD since 28nm. (still talking about r7/r9 300 series)
    ~0:28:00- hope get 390/390x and test against 290/290x also.
    ~0:30:00- “Why Fiji chip no HDMI 2.0 and no DP1.3?”
    ~0:33:45- “What fan Fury X shipping with?”
    ~0:34:30- “Tonga memory tech in Fiji, helps with 4GB?”
    ~0:38:30- “remember AMD guy saying memory usage is bad because never put engineer on it”
    ~0:41:30- “Ever see fiji firepro?”
    ~0:43:30- “TR not getting review units?”
    ~0:46:00- Carrizo launch in notebooks. More about Review units.
    ~0:47:30- Staffing changes.
    ~0:48:00- Tech blogs/etc buying their own units v. companies letting reviewers see it.
    ~0:49:00- “Can AMD make triple Fiji card”
    ~0:50:30- “how well Fiji overclocks?”
    ~0:54:00- “you confirm Tonga 384-bit bus/Why no full Tonga?”
    ~0:58:45- “Impressions of AMD claims Fury X v. 980ti”
    ~1:02:00- Talking about Fiji architecture.
    ~1:09:00- About size/Engine of Fiji involving interposer
    ~1:16:30- “AMD losing money selling [Fury X] at $650?”
    ~1:19:30- “Think AMD launching HBM earlier keep ahead of Nvidia a significant time?”
    ~1:21:45- “What level compatibility Fiji has DX12?”
    ~1:24:15- “Think Fury/300 series == gain marketshare (& compete with Nvidia)”
    ~1:27:30- “Fiji die size? bigger GM200?”
    ~1:28:15- “DVI support on Fiji X?”
    ~1:30:30- Talking about PC gaming show
    ~1:34:00- “Will you test to find limit with Fury 4GB VRAM?”
    ~1:38:00- “About Skylake”
    /end

      • Damage
      • 6 years ago

      thanks!

        • NoOne ButMe
        • 6 years ago

        =]

      • tsk
      • 6 years ago

      You are the hero we need, but not the one we deserve right now.

      • AnotherReader
      • 6 years ago

      Long time reader here, but first time poster.

      Thanks for the index to the podcast and the link to AMD’s 2013 presentation about HBM. It is clear that replacing GDDR5 with HBM contributes significantly to Fiji’s power efficiency compared to Hawaii. However, it isn’t the whole story. AMD estimates that [url=http://www.anandtech.com/show/9266/amd-hbm-deep-dive/2<]15-20% of the Radeon R9 290X's (290 W TDP) power consumption[/url<] is for the memory subsystem. This works out to 43 to 58 W, which means the graphics engine in a full Hawaii part consumes 232 to 247 W. Fiji's memory subsystem consumes approximately 30 W, which means its graphics engine consumes about 245 W. Now, Fiji has about 53% more shading power than Hawaii. All in all, this works out to a 30 to 35% decrease in power per TFLOPS of shading arithmetic.

      This is still less efficient than Maxwell. Doing the same calculation for Maxwell yields a 20 to 25% reduction in power per fps from Fiji to Maxwell. The assumptions are: a) Titan X and Fury X having equivalent performance; b) Titan X's memory subsystem consuming about the same power as Hawaii's.

      It is unclear if Fiji uses the power reduction techniques pioneered by [url=http://www.realworldtech.com/steamroller-clocking/<]Steamroller[/url<] and refined in Carrizo, as covered by [url=https://techreport.com/review/27853/amd-previews-carrizo-apu-offers-insights-into-power-savings/2<]TechReport[/url<] earlier this year. Voltage-adaptive operation was estimated to reduce the IGP's power consumption by 10%. If this hasn't been integrated into Fiji, then that is another way to get Fiji's efficiency closer to Maxwell's.
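
      Sketched out (the memory-subsystem shares are the cited AMD estimates; the shader counts and clocks are the public 290X and Fury X specs):

      ```python
      # Sketch of the comparison above. Memory-subsystem shares are the
      # cited AMD estimates; shader counts and clocks are public specs.
      hawaii_board_w = 290
      mem_lo, mem_hi = 0.15 * hawaii_board_w, 0.20 * hawaii_board_w
      hawaii_core_w = hawaii_board_w - (mem_lo + mem_hi) / 2   # ~239 W

      fiji_core_w = 275 - 30   # 275 W board minus ~30 W HBM subsystem

      # Single-precision throughput: 2 FLOPs/shader/clock.
      hawaii_tflops = 2 * 2816 * 1.000 / 1000   # 290X:   ~5.6 TFLOPS
      fiji_tflops = 2 * 4096 * 1.050 / 1000     # Fury X: ~8.6 TFLOPS

      hawaii_w_per_tf = hawaii_core_w / hawaii_tflops   # ~42 W/TFLOPS
      fiji_w_per_tf = fiji_core_w / fiji_tflops         # ~28 W/TFLOPS

      gain = 1 - fiji_w_per_tf / hawaii_w_per_tf
      print(f"shader-power efficiency gain: ~{gain:.0%}")  # ~33%
      ```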

        • auxy
        • 6 years ago

        Titan X has a 384-bit memory bus while Hawaii’s is 512-bit. I strongly doubt their power consumption is equivalent… (;´∀`)

          • NoOne ButMe
          • 6 years ago

          Nvidia’s GDDR5 controllers are much larger than AMD’s. They also spend more power to clock them.

          Their high-end cards use over 30% of their power for VRAM.
          AMD’s wider bus with lower VRAM clocks uses under 25%.

          Or, in absolute terms: Kepler/Maxwell 384-bit bus cards generally use 70-90W for their memory in total (VRAM, interface, etc.). Hawaii uses around 60-70W for the 290X; the 390X is probably 70-80W due to running at 6GHz.

        • NoOne ButMe
        • 6 years ago

        First, you’re welcome. I realized nobody had been doing these starting with the podcast with David Kanter (the first one I actually watched, because I enjoy RWT and its forums a lot), so I hope to keep doing them. I’ve also been reading this site for a long time before making any comments. Nice to “give back,” in a manner of speaking.

        A Titan X/980 Ti with HBM (theorycraft here) probably ends up about 20% better perf/watt than the Fury X, and 30%+ better than the Fury. I think the Fury X will match the 980 Ti/Titan X in efficiency within a few percent either way for the actual configuration we have on the 980 Ti/Titan X.

        On the other hand, the R9 Nano, based on my theorycraft, ends up slightly higher than the theoretical HBM Titan X/980 Ti, by ~5%. I think Maxwell’s efficiency curve lets it go pretty high before you see drastic increases, while GCN wants to be super-wide, sitting at 750-850MHz (I think the Nano ends up ~800MHz, if it has 4096 shaders).

          • AnotherReader
          • 6 years ago

          The podcast featuring David Kanter was the first one that I watched too for similar reasons. I think the most interesting card is the R9 Nano. A short card with more performance than a stock GTX 980 sounds great.

          I had a couple of questions. Is the relationship between memory interface speed and power consumption linear or does it increase exponentially as with CPUs and GPUs?

          Your reply to auxy stated that Nvidia spends more of its power budget on memory than AMD despite having only 5% more bandwidth than the R9 290X. What are the advantages of choosing a narrower and faster bus over a wider and slower bus? One advantage would be fewer memory chips.

            • NoOne ButMe
            • 6 years ago

            Nvidia’s memory controllers have caught up and finally surpassed AMD’s for speed at 28nm. However, excluding Tahiti, AMD’s memory controllers are a lot smaller for equal bus width.

            GM200 would have to cut off at least 3 SMMs to go to a 512-bit memory controller; if they made the controller handle lower speeds only, maybe only 2. But GM200 isn’t bottlenecked by memory bandwidth in most areas thanks to great compression, so it would not be worth the cost/loss of performance.

    • Flapdrol
    • 6 years ago

    The “Project Quantum” thing has the GPUs in the bottom as well; the top is just a radiator and a fan.

    • ImSpartacus
    • 6 years ago

    I just wanted to say that I appreciate you guys still offering audio versions of this even though they understandably come later.

      • Damage
      • 6 years ago

      We’re working on speeding up delivery of the audio versions, FWIW. Bear with us.

        • Milo Burke
        • 6 years ago

        Much appreciated.

        Do you pay for the editing? If so, I could do it on a more timely basis. With better EQ and more reasonable compression to boot.

          • Damage
          • 6 years ago

          Yeah, but it’s total nepotism. My son just started as our new audio editor. 😉

          I will keep you in mind if he’s too busy or otherwise unable to produce the show for us, though!

            • Milo Burke
            • 6 years ago

            Fair enough. I like the little gyro-eater. Fill him full of baklava too.

            But if his interests go in another direction, I’ve got the software and the know-how.

            How long has he been editing? Was Jordan doing it before?

            • Damage
            • 6 years ago

            Yeah, Jordan has produced the audio version of the show, and as you may have noticed, the lag between show and production got pretty long recently. Jordan’s busy this week, so we’re trying it with Gyromancer. I think we can close that gap. The challenge is just keeping something close to Jordan’s standards for quality. We’ll be working on sorting that out.

            • chuckula
            • 6 years ago

            Is a lot of the work doing the voice overs & title music?
            I’m assuming it’s more than just ripping the audio track from the recorded video?

            • Damage
            • 6 years ago

            We record each voice track individually and Jordan mixes them while doing noise reduction and such. It’s quite the production compared to ripping the audio from the video version, and you can tell if you listen closely.

            • chuckula
            • 6 years ago

            Ooh it’s all slick and professional!!
            Yeah more power to ya. I know just enough to be dangerous in a few areas, but digital audio editing ain’t one of them.

            • Meadows
            • 6 years ago

            For what it’s worth, I could also help, I have the required studio software (legally) and some requisite experience.

            Then again, I guess your expectations about me are that I’d set fire to the recording or something.

            • sweatshopking
            • 6 years ago

            BUT WHAT A FIRE IT WOULD BE!

            • crabjokeman
            • 6 years ago

            The U.S. is all about nepotism. You should have your son change his last name to ‘Bush’ and then total success will be all but guaranteed.

            • Meadows
            • 6 years ago

            Yeah, no. Favouring relatives in generally “smaller” business operations has been customary for as long as those businesses have been in existence, possibly thousands of years even if we discount monarchies and the associated family tree issues.

            If you still think it’s a US issue in general, then you haven’t seen Eastern European politics or Russia yet, not to even mention paradises such as China or North Korea.

            • travbrad
            • 6 years ago

            North Korea? You mean where Kim Jong-un had his uncle killed? That’s an interesting form of nepotism. :p

            • Meadows
            • 6 years ago

            Think of it this way: they placed that uncle there in the first place, instead of having a random applicant.

            I’m not directly comparing North Korea to TR, not unless Mr Wasson later kills his son for treason or something.

        • _Sigma
        • 6 years ago

        This is good to hear, thanks 🙂

        • ImSpartacus
        • 6 years ago

        I’m just happy that they come eventually.

      • odizzido
      • 6 years ago

      I like them as well

    • gecko575
    • 6 years ago

    Thanks for posting this so fast! I normally don’t get a chance to join the live stream and I enjoy listening to these at work.

      • jihadjoe
      • 6 years ago

      If you want it fast, the Twitch archive is usually available immediately.

      Sometimes I manage to tune in halfway through the live stream; then I go to the archive as the broadcast finishes to watch the start.

        • gecko575
        • 6 years ago

        Well, look at that… learned something new today. Thanks, jihadjoe! I’ll use that in the future for sure.
