Larrabee architect Tom Forsyth rejoins Intel

Intel has been adding big-name semiconductor talent to its bench at a rapid clip over the past few months, and the latest addition is the return of Larrabee architect Tom Forsyth. In a tweet, Forsyth confirmed that he will be returning to the blue team as a chip architect under Raja Koduri in the Core and Visual Computing Group. Forsyth says he's “not entirely sure what [he'll] be working on just yet.”

Forsyth was one of the key architects responsible for Intel's Larrabee many-core vector processor and its accompanying instruction set, best known today as AVX-512. Intel, for its part, has made no bones about its goal of bringing high-performance discrete graphics processors to market as soon as 2020.
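
For readers curious what that instruction set looks like from the software side, here is a minimal sketch, not anything from Forsyth or Intel, using the AVX-512F intrinsics from immintrin.h: a plain vector add that handles sixteen single-precision floats (512 bits) per instruction. The function name and the assumption that the array length is a multiple of 16 are ours, purely for illustration.

    /* Minimal AVX-512F illustration: add two float arrays 16 lanes at a time.
       Hypothetical example; assumes n is a multiple of 16.
       Build with something like: gcc -O2 -mavx512f vadd.c */
    #include <immintrin.h>
    #include <stddef.h>

    void vadd_avx512(const float *a, const float *b, float *out, size_t n)
    {
        for (size_t i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);               /* load 16 floats */
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb)); /* 16 adds at once */
        }
    }

Larrabee's native LRBni instructions differed in the details, but the 512-bit, 16-lane vector model is the same basic idea that survives in AVX-512 today.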

Given Intel's fresh enthusiasm for graphics, a zeal that was apparently lacking during Larrabee's development and whose absence ultimately saw that chip relegated to the data center as a high-performance computing part, Forsyth's expertise could shape massively parallel processors that are still years away. Perhaps we'll learn just what he ended up working on once one of those multi-year chip-development cycles runs its course.

Comments closed
    • psuedonymous
    • 1 year ago

    Time to fasten down my Refined Tin Conspiracy Helmet good and proper:

    – Xeon Phi is Larrabee (in the same way Tesla V100 PCIe is Titan V). Or rather, [url=http://tomforsyth1000.github.io/blog.wiki.html#5B%5BWhy%2520didn%2527t%2520Larrabee%2520fail%253F%5D%5D]Larrabee was Xeon Phi running the DX-on-x86 program, and sitting on a PCB with a video output[/url]. The silicon is identical, and Knights Ferry and Knights Corner even have the texture units on-die but not exposed to the user.

    – At the same time Intel announced the hire of Raja Koduri, they also announced Knights Hill (the next Xeon Phi iteration) was cancelled in favour of a 'future Exascale architecture' Xeon Phi to release in 2020.

    – When Intel's Lake Crest (now called Nervana NNP-L1000) was announced, they also announced a follow-on 'Knights Crest' for release 2020 'incorporating Xeon technology'.

    – Intel have announced their "GPU products for both the data center and the client markets" will be coming 2020.

    I'm not saying it was [s]aliens[/s] the Ghost of Larrabee...

      • chuckula
      • 1 year ago

      It’s not that shocking that bits and pieces of technology get reimplemented in future products. It happens all the time.

    • watzupken
    • 1 year ago

    It is good to see a new and big competitor coming into the GPU market where there is barely any competition currently or in the near future.

    However, I do hope that Intel will pull through this time, considering that they were working on dedicated GPUs previously that never saw the light of day before getting canned.

      • Sahrin
      • 1 year ago

      Since when has Intel’s presence *added* competition to the market?

    • DeadOfKnight
    • 1 year ago

    This new group at Intel is the most exciting news in a long time; too bad we have to wait years to see the fruits of their labor.

    • Tristan
    • 1 year ago

    New Intel GPU: 200 Atom cores with 2048 bit AVX

      • chuckula
      • 1 year ago

      You like 32 cores?

      WE’LL GIVE YOU 200 COARZ!

        • Growler
        • 1 year ago

        MOAR COARZ!!!111!!

        • Redocbew
        • 1 year ago

        You want more cores?

        YOU CAN’T HANDLE MORE CORES!

      • Chrispy_
      • 1 year ago

      200 cores? THOSE ARE ROOKIE NUMBERS MAN.

      Tesla V100 has 5120 cores. You need to copy Nvidia (or AMD) and pretend that your cores are actual SMs or something and then split each SM into hundreds of massively deficient things that in no way resemble complete cores but then call them cores anyway and watch the consumers go all googly-eyed for them!

      12 billion [s]transistors[/s] cores. See how easy that was?

        • Redocbew
        • 1 year ago

        Don’t forget the swanky new block diagram. If you don’t compare and contrast the old diagram (bits and bobs of hardware around a whacking huge block of “cores”) with the new diagram (bits and bobs of hardware around an even bigger block of “cores”), then you’re clearly doing it wrong. Yeah, sure, it doesn’t really tell anyone anything about what was done internally to improve the chip overall, but it’s got more “cores”!

          • Chrispy_
          • 1 year ago

          Hell yeah. Also, make sure the new diagram looks flashier, even if you just took the old diagram and turned it upside down.

          Don’t use the old annotations, though; invent some new tech terminology by staging a multi-car pileup of trending tech words: Avocado Tensor-core Lithography.

          See, that’s good for at least a 100% core increase right there.

        • cygnus1
        • 1 year ago

        F*ck it, might as well throw the RAM [s]transistors[/s] cores into that count too.

    • NTMBK
    • 1 year ago

    Hah, I first read this article on the RSS feed, and the formatting made the content of the tweet look like your words, Jeff. I thought you were dropping some momentous news in a very nonchalant way!

      • tipoo
      • 1 year ago

      That would be something, with Scott at AMD.
      First mentors.
      Now rivals.
      Red vs Blue,
      Scott vs Jeff
      Still a better movie than TLJ.

    • tipoo
    • 1 year ago

    Now that Intel is interested in GPUs again… could someone remind them they’re squatting on the rights for this?

    [url]https://www.youtube.com/watch?v=TWNokSt_DjA[/url]

      • aspect
      • 1 year ago

      Likely pointless now. The dev team is gone. The tech for that engine is outdated and those ideas are already being put to use in other modern engines.

        • tipoo
        • 1 year ago

        I know, I know. Sometimes my 14-years-younger, 2005 sense of wonder still rears its head and wants to know more about that.

    • demani
    • 1 year ago

    What was Forsyth working on in the meantime? I’m curious what he jumped to after Larrabee.

      • chuckula
      • 1 year ago

      From the link in the article:
      [quote]I work at Oculus VR helping make the Rift the finest VR HMD in the world. I worked at Valve Software doing Virtual Reality R&D. I wrote big chunks of the Team Fortress 2 VR support for the Oculus Rift. I worked at Intel as a software and hardware architect on the Larrabee project, which has now become Knights Ferry/Corner/Landing, and is sold as the Xeon Phi. I worked at the fabulous RAD Game Tools in Seattle, where I worked on Granny3D, which is a runtime animation package and mesh export pipeline. If you're a games company, you'd be crazy not to at least get an evaluation. I worked at wonderful wonderful Muckyfoot Productions in Guildford until its demise where I worked on Urban Chaos, StarTopia and Blade II. In the past I have also worked at 3Dlabs, Sega, and Microprose, as well as run my own tiny games company you've never heard of.[/quote]

        • meerkt
        • 1 year ago

        Urban Chaos and StarTopia were nice.

        • Mr Bill
        • 1 year ago

        and… Computer Methods for Mathematical Computations, Prentice-Hall, 1977; by Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B.

        Edit: Yeah, wrong Forsythe; but the name reminded me that I have the book on my shelf.

    • WaltC
    • 1 year ago

    Larrabee was always grossly misunderstood by the so-called tech press as a “real-time ray tracer,” even though Intel denied that characterization several times before the project was cancelled. Reading the “Larrabee experts” in those days was enormously amusing: it was 99% pure speculation and wishful thinking, and it often directly contradicted what Intel was saying about Larrabee. When Intel pulled the plug I was not in the least surprised; nothing could ever live up to the ridiculous hype some people had spewed about Larrabee, hype that never came from Intel itself! I’m glad those days are gone, frankly, along with most of those “pundits”…!

      • chuckula
      • 1 year ago

      They made the mistake of running that [url=https://www.youtube.com/watch?v=XVZDH15TRro]Wolfenstein ray-traced demo[/url], which was cool for what it was, but it then got misinterpreted as "OMG 100% Ray Trace everything or else it's a failure." Even today Nvidia & AMD get away with the occasional ray-tracing demo on their highest-end hardware because everybody knows that the same hardware isn't primarily doing ray tracing, but Intel didn't have existing products on the market to provide that context.

    • tipoo
    • 1 year ago

    I had been wondering how he felt about the renewed Intel interest in dGPUs after that Larrabee blog post, and now he’s working on it!

    • chuckula
    • 1 year ago

    THANKS INTEL!
    — AMD & Nvidia

      • Srsly_Bro
      • 1 year ago

      Whenever I drive by Larrabee State Park just off the 5 in Washington, I always laugh and wonder if the park is as good as the failed product.
