Intel confirms that Xe will have ray-tracing hardware acceleration

It would have been all too easy to overlook a blog post about professional rendering and visual effects over at Intel's IT Peer Network site. Most of the post talks about Intel's rendering framework and how it gets used in the VFX industry. However, a kernel of information about the company's upcoming Xe graphics chips was buried in the post. Given how starved we are for details on the all-new lineup, it's pretty tantalizing. Here, I'll spoil it for you: Xe is going to have hardware-accelerated ray-tracing.

Rendered on Intel hardware. CPUs, though.

That's probably not a huge shocker to anyone who follows the industry. AMD's next graphics hardware will apparently have some form of ray-tracing support, and I'm sure I don't have to explain what the "RT" in "GeForce RTX" stands for. With that said, Intel's post says specifically that "the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support" for Intel's rendering framework [emphasis mine]. The presence of a feature in the datacenter-oriented parts says little about its availability to regular Joes like this writer.

Furthermore, Intel's rendering framework is largely oriented toward offline rendering, not the real-time work that RTX targets. While Intel talks at length about its open-source rendering libraries, nowhere in the post does it mention Microsoft's DXR, or Windows at all. Still, given the post's emphasis on the importance of ray-tracing for both graphics and other purposes, it's reasonable to think that Xe consumer parts could launch with DXR hardware acceleration.

I've largely glossed over the software side of Intel's announcement, but for those curious, the company says it just released an OSPRay plugin for Pixar's USD Hydra that enables "interactive, up-to-real-time, photo-realistic global-illuminated previews" in supported applications. That plugin uses the company's also-recently-released Open Image Denoise library, which is itself neural-network-based just like Nvidia's implementation. It makes use of the CPU's SSE, AVX, and even AVX-512 instructions (when available) to accelerate the denoising process. When Xe comes out, it will be interesting to see if it makes use of Open Image Denoise as-is, or whether that process will move to the GPU as in Nvidia's method.
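
To give a sense of how little code it takes to drop the denoiser into an existing CPU renderer, here is a minimal sketch of filtering one noisy HDR color buffer, based on Open Image Denoise's documented C API. The helper name and surrounding renderer are hypothetical, and the actual path tracing and buffer allocation are elided:

    /* Minimal Open Image Denoise sketch: filter one noisy HDR color buffer. */
    #include <OpenImageDenoise/oidn.h>
    #include <stdbool.h>
    #include <stdio.h>

    void denoise_frame(float* color, float* output, size_t width, size_t height)
    {
        /* The library selects its SSE/AVX/AVX-512 code paths automatically
           based on the host CPU. */
        OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
        oidnCommitDevice(device);

        /* "RT" is the generic ray-tracing denoising filter. */
        OIDNFilter filter = oidnNewFilter(device, "RT");
        oidnSetSharedFilterImage(filter, "color",  color,
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetSharedFilterImage(filter, "output", output,
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetFilter1b(filter, "hdr", true);  /* input is HDR radiance */
        oidnCommitFilter(filter);
        oidnExecuteFilter(filter);

        const char* message;
        if (oidnGetDeviceError(device, &message) != OIDN_ERROR_NONE)
            fprintf(stderr, "OIDN error: %s\n", message);

        oidnReleaseFilter(filter);
        oidnReleaseDevice(device);
    }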

Unfortunately, we've still got a while to wait before we can even guess at expected performance, price, or power figures for Xe; Intel isn't expected to launch anything until next year. Here's hoping we hear some more details as we creep closer to launch.

Comments closed
    • ronch
    • 7 months ago

    It’s good to see Ray Tracing become more and more popular as more companies try their hand at it. I wonder how good this will be compared to Nvidia and whatever AMD is working on.

    Myself, I’m good with graphics in games like Thief 4 and Skyrim. Heck I’m even ok with Thief 3-level graphics from 2004 and Unreal Tournament 2003/04. Keeps my graphics card demands modest too.

    • pogsnet1
    • 7 months ago

    “Intel isn’t expected to launch anything until next year”

    then another, another year, another year… until forgotten!

      • nanoflower
      • 7 months ago

      You can say that, but having their GPUs start to come out in 2020 makes sense, as it takes time to go from nothing (unless you count the iGPUs) to a sellable product. They seem to be right on schedule at the moment.

        • pogsnet1
        • 7 months ago

        Like what they promised with Larrabee?

    • Srsly_Bro
    • 7 months ago

    @Chuck

    Another failed Intel project. Check on Raja for everyone’s sake.

    [url<]https://www.anandtech.com/show/14295/intel-to-discontinue-movidius-neural-compute-stick[/url<]

    • Chrispy_
    • 7 months ago

    Has there been anything more concrete about the launch date for Xe?

    Has there been any indication from Intel about what performance level they’re hoping to reach?

    I’m really looking forward to Xe, just to add a bit more competition into the GPU market.

    • davidbowser
    • 7 months ago

    Is it pronounced “zee” or “ecks ee” or some random l33t way? Because my immediate reaction was “Xenon”.

      • Grahambo910
      • 7 months ago

      I guessed “schee”.

      • Wirko
      • 7 months ago

      If Cray ends up using these processors, it will be Cray Zee.

        • Srsly_Bro
        • 7 months ago

        I walk by the Cray office in downtown Seattle on my way to work. I’m waiting for someone to come out of there yelling like a lunatic.

      • robliz2Q
      • 7 months ago

      I thought it was pronounced LaraXee 😉
      Intel’s track record is a bit spotty in this area 🙂

    • FuturePastNow
    • 7 months ago

    I sure hope Xe is just a development name and not something they’re going to try to market.

    • colinstu12
    • 7 months ago

    Don’t worry it’ll be discontinued in two years.

      • chuckula
      • 7 months ago

      Two years? What kind of shill are you?!??

      [spoiler<]WE'RE CANCELING IT WHEN NAVI LAUNCHES SUCKA![/spoiler<]

    • RoxasForTheWin
    • 7 months ago

    So ray tracing is becoming the new gimmick everyone’s trying to sell? Looks like everyone gave up on affordable 4K (not that 4K was even viable; 1440p is easier to get), HDR, and variable refresh rates and needed a new marketing sticker for their boxes.

      • Laykun
      • 7 months ago

      So, from a graphics programmer, I’ll let you in on what’s going on. With traditional rasterization, using shaders, we are getting to a point where achieving the next milestone of fidelity means we are effectively ray-tracing with compute shaders; technically, a lot of your favourite modern games have already been doing it in some capacity. It solves a bunch of problems that the traditional rasterization pipeline is considered a bit clunky for.

      It’s pretty easy to say “psssh, just another gimmick”. But after playing minecraft, yes, minecraft, with the raytracing mod (which does it all with OpenGL shaders) I’m pretty much sold on raytracing solving a lot of the long-standing issues I’ve had producing games. To be specific, the thing I’m most impressed with is its ability to solve GI (global illumination). The shadows were pretty nice too; I still honestly think screenspace reflections are good enough, though. The only problem I had is that the performance is trash (1080ti, 6700k, etc.) because it’s written in a fragment shader instead of having hardware acceleration, so even on an RTX card it’d be similarly slow because it doesn’t leverage any raytracing API. We’re talking 20-30fps here. The other caveat for this example is that minecraft is the perfect candidate for raytracing, since it’s neatly divided up into a grid data structure and the pieces of that data structure barely get modified, making it ideal for tracing rays into (so if this were a traditional game I’d be getting even worse performance).

      If you check the release history of Unreal Engine 4, you’ll find they’ve tried to solve global illumination TWICE (using a traditional rendering pipeline) with experimental engine features that eventually got canned. Both approaches involved some amount of ray-tracing into data structures but had too many caveats or were too slow to be considered viable.

      It’s coming, and it’s here to solve issues with fidelity that currently have no elegant solution and have caused us to push up against the limits of graphics/compute shaders.

        • YukaKun
        • 7 months ago

        Why did you even waste so many words trying to explain it to him? I enjoyed reading your response nonetheless, but it feels like a waste of your time.

        Also, I just find it stupid when people complain about new graphical-fidelity technologies on VIDEO CARDS. It’s so contradictory that it’s laughable!

        Isn’t that the whole goddamned point of GPUs? To make things more life-like? Moaning about RayTracing is like moaning about better physics or better textures, or at the very least, in the same ballpark.

        Cheers!

        • Chrispy_
        • 7 months ago

        So what you’re saying is that RT is inevitable, but that GeForce RTX is not going to be the driving force; rather, developers looking for an easier GI solution will start to borrow more and more from the most accessible parts of Microsoft’s DirectX 12 DXR API?

        So, chicken-and-egg, the answer to which one comes first is in this case the API gaining wide adoption. I’m shocked and appalled!

          • Srsly_Bro
          • 7 months ago

          Egg came first…

          Gotta come up with a new idiom, bro.

          [url<]https://www.smithsonianmag.com/smart-news/problem-solved-the-egg-came-first-6910803/[/url<]

          • psuedonymous
          • 7 months ago

            Once you have the RT API implemented in your engine for developer use (e.g. rapid baking of cubemaps), it’s less effort to go “hey, consumers with $gruntyGPU, you can flick this switch and have RT lighting in game!”. Cards with RT acceleration are needed to get that initial foothold, and that’s what RTX does.

            • Krogoth
            • 7 months ago

            Nay, it is more like it is being pitched as an effort to make discrete GPU solutions relevant to the masses in light of iGPUs encroaching on the lower and mid portions of the discrete GPU market. It is also a way of using tensor cores on Turing/Volta silicon for graphical usage patterns instead of having to spend more R&D on a completely separate architecture/design.

            • psuedonymous
            • 7 months ago

            Except they DID spend more R&D on a completely separate design: the RT cores used to accelerate BVH traversal are NOT Tensor cores, and are completely different internally. Volta, which has the Tensor cores and simultaneous INT execution of Turing but lacks the RT cores, is not nearly as performant for ray tracing.

            • Krogoth
            • 7 months ago

            Turing is a tweaked Volta design. Engineers simply found a way to use tensor cores for graphical stuff, re-tooled them accordingly, and called them “RT cores”. It is not a completely new design despite what Nvidia marketing would like you to think. That’s why and how Turing came out so quickly after Volta taped out.

            Turing and Volta are like fraternal twins.

            • psuedonymous
            • 7 months ago

            [quote<]Engineers simply found a way to use tensor cores for graphical stuff, re-tooled them accordingly, and called them “RT cores”.[/quote<]

            This is completely false. RT cores and Tensor cores are completely and utterly different on both a schematic and hardware level, and perform completely different operations.

            Tensor cores ingest a trio of matrices and perform a single basic fixed operation on them (FMA). That is all they can do, and they only have value because 1) they can do so very fast (faster than implementing matrix FMA with individual operations) and 2) there is a class of problem that is best tackled with matrix FMA (ANNs).

            RT cores, on the other hand, are built entirely around rapid BVH tree traversal. They operate at a much more abstract level: the SM issues a ray to the RT core, and the RT core returns its intersection with the scene. All the scene-test operations required to do this are handled completely internally to the RT core and do not involve the Tensor units or any of the SM’s INT or float units.

            Ironically, [quote<]Turing is a tweaked Volta design[/quote<] is the opposite: Volta was a limited-production stopgap (a single die - GV100 - released just to address HPC market demand) with the RT cores diked out, leaving just the Tensor cores.

            • Krogoth
            • 7 months ago

            Volta isn’t a stopgap. It is the beginning of a divergence in R&D product cycles. It is the start of the general-compute-focused designs.

            Turing continues the traditional graphics-focused design pathway that we are much more familiar with.

            It is part of Nvidia’s long-term strategy of distancing themselves from graphics as their bread and butter, one that has been going on ever since Fermi.

            • Srsly_Bro
            • 7 months ago

            My boy @pseudo has a point tho. Volta was GV100, nothing else. Volta addressed a market niche and made lots of money for Nvidia. Volta was a stopgap solution for enterprise prior to Turing’s release.

            • Krogoth
            • 7 months ago

            Volta isn’t a stopgap solution. It is the trailblazer for Nvidia’s pure GPGPU designs geared towards the HPC market. It came out first because Nvidia wanted to snag more HPC marketshare and recoup the bulk of the R&D costs for both chip designs.

            The computing needs of the HPC world and graphics are vastly different. This is reflected in the resource allocation of the Volta and Turing designs. We will see this again with the upcoming 7nm refreshes, with one design focused and marketed towards the HPC crowd and another catered towards graphical needs, instead of one design that tries to do both but falls short of completely satisfying either *cough* GCN *cough*.

            • Redocbew
            • 7 months ago

            The split between pro and consumer happened before Volta. The first Pascal chip to be released was the GP100. I believe that was Nvidia’s first GPU used only in the Tesla/Quadro lines that didn’t cross over into consumer-land. The GV100 (Volta) which followed it did the same. Time will tell if Nvidia does the same with Turing and releases a TU100.

            The shifting of goalposts has left me a bit confused here, but if Volta and Turing are similar (which they are), and Turing has many features also found in Volta (which it does), then how does it make any sense that there are now Turing-based cards all over the place when Volta is supposedly all about compute and no good for gaming? The pricing might be bonkers, but the performance is pretty good overall. The picture doesn’t seem to be as simple as you make it out to be.

            Anyway, I don’t think the fact that HPC people and general consumers have different workloads is a great argument for an architecture which is HPC-specific. As it was with Pascal and now with Turing, each GPU in the line has the same general architecture, but with bigger frame buffers for pro cards, and/or HBM2 instead of GDDR5/6/X for pro cards, and FP64 hobbled in consumer cards, and so on. That’s just market segmentation. It doesn’t seem like it’s necessary to have a completely separate architecture only for HPC unless someone’s got a very specific use case that requires some oddball hardware.

            • Krogoth
            • 7 months ago

            TU100 will never exist because it effectively already exists as the “V100”. A TU100 wouldn’t offer anything over the V100 save for RTX performance, and that doesn’t need expensive HBM2 yet.

            HPC workloads have stronger preferences for FP64 and gobs of memory bandwidth/capacity (HBM2), neither of which is as useful for graphical workloads. The memory controllers for HBM2 and GDDRx are completely different. If you are going to have to use separate memory controllers, you might as well redesign/tweak the whole bloody thing if you can afford it. Nvidia has done exactly that. AMD RTG isn’t as fortunate and has been forced to use designs that attempt to satisfy both, but ultimately is outpaced in either area.

            Nvidia is going to repeat the same thing again with their 7nm refreshes.

            • Redocbew
            • 7 months ago

            [quote<]The memory controllers for HBM2 and GDDRx are completely different. If you are going to have to use separate memory controllers, you might as well redesign/tweak the whole bloody thing if you can afford it.[/quote<]

            Except they didn’t. There’s plenty of examples of a chip using multiple types of memory without needing any other changes to the architecture. I imagine that’s something they plan for up front in order to avoid exactly that problem.

            [quote<]Nvidia is going to repeat the same thing again with their 7nm refreshes[/quote<]

            You keep using those words. I do not think they mean what you think they mean.

            • psuedonymous
            • 7 months ago

            [quote<]Volta isn’t a stopgap. It is the beginning of a divergence in R&D product cycles. It is the start of the general-compute-focused designs. Turing continues the traditional graphics-focused design pathway that we are much more familiar with.[/quote<]

            Whether you call Turing "Volta but with RT cores", or call Volta "Turing without the RT cores", depends on whether you go by actual release order or development order, but it is clear the two architectures are very similar. Indeed, there is [i<]less[/i<] of a divergence now than there was between Volta and its consumer contemporary, Pascal.

        • Krogoth
        • 7 months ago

        Real-time ray-tracing has always been too expensive in computing resources for the mainstream market, ever since its inception. The mainstream market has always chosen higher resolution, more complex models/textures, and smoother animation/framerates over ray-traced rendering, despite ray-tracing being a solution to a large number of rendering problems with traditional rasterization.

        The story hasn’t really changed either. Nvidia’s RTX mode is really just a way for them to up-sell their Volta/Turing silicon’s tensor cores for graphical usage patterns. It is mostly a boon to the graphics-professional world, not mainstream customers.

        Unless there’s another massive leap in computing power, it is most likely that real-time ray-tracing will remain a pipe-dream for the mainstream market for the foreseeable future.

        • anotherengineer
        • 7 months ago

        Interesting.

        Not trying to take away from what you said,

        “With traditional rasterization, using shaders, we are getting to a point where achieving the next milestone of fidelity”

        The big question is: how important is that milestone? And when do diminishing returns come into play?

        If it’s a good game with a good soundtrack, graphics seem secondary.
        see
        SNES – Chrono Trigger
        N64 – Ocarina of Time
        even old PC games like Borderlands (a lot funner in multi-player co-op)

          • Srsly_Bro
          • 7 months ago

          Don’t forget Zelda: A Link to the Past

          • jihadjoe
          • 7 months ago

          IMO RT is not just about graphical fidelity, but also about making everything ‘just work’.

          Rasterization as it is now is a whole bunch of hacks glued together in a way that makes a scene somewhat manageable, but in terms of scientific analogies it’s like the big convoluted models of the night sky where there are all sorts of complex mechanics to explain the retrograde motions of the stars.

          RT would be like going from that middle-age mathematical soup into the world where we have Einstein’s General Relativity.

          Even if it leads to no graphical improvement, RT is worth doing just for the cleanup.

        • robliz2Q
        • 7 months ago

        Thanks for giving your insight! Usually long explanatory posts seem to get swamped by cool quips, so I’m glad this one was highlighted; I enjoyed it. 🙂

        It’s made me keener on ordering a new desktop “present to myself” system to replace a 5-year-old gaming laptop, which I’ve managed to tune with compromises to run a good number of recent games, likely, I guess, because the console hardware cycle hasn’t moved on for a good while.

      • Krogoth
      • 7 months ago

      Real-time ray-traced rendering for the mainstream market is mostly a gimmick unless we revert to 640×480/800×600 and/or late-1990s-era 3D models. The hardware isn’t quite here yet. Notice how almost all of the tech demos for real-time ray-tracing use games from that era? Not a coincidence.

      RTX mode isn’t even full real-time ray-tracing either. It is a cheaper alternative that uses limited light sources and “best-fit”/denoising algorithms rather than brute computing force. It is Nvidia’s way of using the tensor cores on Volta/Turing silicon for graphical usage patterns.

      Ray-traced rendering will most likely remain confined to professional/offline rendering throughout most of the 2020s, barring a breakthrough that results in a massive leap in computing power.

        • NovusBogus
        • 7 months ago

        It absolutely is the next big thing, but as with going from T&L to shaders, it’s going to be a bumpy road, and RTX all up and down the line was too much, too soon. In the long run NV will probably be vindicated for cashing in their technology lead to drive the industry forward, but that doesn’t change the fact that the short- to medium-term GPU market is looking pretty mediocre and the shift will probably take a lot longer than the aforementioned shader transition.

          • Redocbew
          • 7 months ago

          It’s pretty much the same thing they did with gsync. Nvidia says “hey guys! Check out this nifty new widget which you can only get from us, but will probably be everywhere a few years from now anyway!”.

            • Srsly_Bro
            • 7 months ago

            But gsync is still proprietary and free sync is a total mess in terms of standardization.

            • Redocbew
            • 7 months ago

            But they’re both forms of a variable refresh rate. I have yet to see any conclusive data that makes either freesync or gsync better than the other. I’m not even sure what that would mean without employing lots of handwaving and just calling it a “cleaner experience” or something. If the way we classify these projects is the only difference, then this is like trying to argue with a chef over whether a tomato is a fruit or a vegetable. They don’t care.

            • Srsly_Bro
            • 7 months ago

            G sync is one range, free sync has many ranges, and usually narrow ranges. No free lunches here, bro.

            I have a 27″ Dell 1080P 75hz free sync, and I can’t tell you the range. My 27″ 1440P Dell gsync has a standardized range.

            • Redocbew
            • 7 months ago

            As a consumer that makes sense, but developers don’t always have the same priorities as consumers. Developers are the chefs in my analogy there. 🙂

            If you want to say that a delay in widespread adoption caused by a tangle of competing standards and whatnot is more annoying than product-segmentation games causing the same, I guess that’s fair.

    • cynan
    • 7 months ago

    [quote<]I don't have to explain what the "RT" in "GeForce RTX" stands for[/quote<] Of course not. It's obviously named after their new sales strategy and stands for "Reasonably Thrifty". As in "Reasonably Thrifty [gamers] eXcluded"

      • Prestige Worldwide
      • 7 months ago

      I was saying “Really That eXpensive” a while back. Then I bought a 2080 anyway because I’m an idiot easily separated from his money. I am very happy with RTX in Metro Exodus though so ¯\_(ツ)_/¯ .

    • blastdoor
    • 7 months ago

    Is making consumer GPUs really Intel’s comparative advantage?

    Given their scarce manufacturing resources, I’d think Intel would only want to focus on high-margin data-center products.

      • tipoo
      • 7 months ago

      Allegedly they approached Sammy to fab these, which isn’t a rousing endorsement for the state of Intel fabs in 2020.

        • chuckula
        • 7 months ago

        That rumor is almost as spicy as when AMD entered into the hot sauce business after Capsaicin!

        • Srsly_Bro
        • 7 months ago

        Or it’s a matter of fabrication allocation…

        Silly goose

          • chuckula
          • 7 months ago

          Or the fact that this entire idiotic rumor is based solely on Raj posting a picture from Korea in front of a statue that has nothing to do with Samsung whatsoever.

          Or the fact that even if Raj is meeting with Samsung about graphics, people tend to forget something: the IC branch of Samsung is basically a [b<]memory[/b<] company. I don’t care that they have a pretty small side-gig making Exynos chips that are only used in a portion of Samsung’s own phones; that’s below the noise threshold compared to memory.

          So Intel is making a GPU, and Raj might…. MIGHT… have had a meeting with the world’s largest HBM and GDDR6 supplier. Wow, Intel is clearly failing hard here!! I love the idiocy!

            • K-L-Waster
            • 7 months ago

            Memory on GPUs is over-rated. 640 KB should be enough for anybody….

            • Srsly_Bro
            • 7 months ago

            Reusing the same dead misquoted statement ought to be done by nobody.

            • K-L-Waster
            • 7 months ago

            Same goes for nitpicking irrelevancies on every 3rd post.

            • Srsly_Bro
            • 7 months ago

            And you tried to get votes for repeating the same quote out of context in a hopeless attempt to sound like an edgy teen.

            I don’t care if ppl down vote me, bro.

            I’ll go back and give you the positive affirmation you seek with an upvote.

            • Redocbew
            • 7 months ago

            Anything Raja does on twitter is news.

      • K-L-Waster
      • 7 months ago

      As mentioned in the article, these [b<]are[/b<] data center parts. GPUs aren't just for games anymore.

        • blastdoor
        • 7 months ago

        Yes, I know, but there are references to consumer parts in the write-up.

          • Waco
          • 7 months ago

          Consumer parts will pave the way for the datacenter parts. There’s very little chance the first thing released by Intel in the heavy-duty GPU space will be something going into a rack.

            • the
            • 7 months ago

            Well, Intel has a contract to provide the Dept. of Energy with a new supercomputer leveraging Xe in 2021. These installations tend to lag a bit in delivery from when they were first announced. The first Xe part may not be for the data center, but those parts are not going to lag behind: they are in simultaneous development.

            • Waco
            • 7 months ago

            I’m well aware of A21. 🙂 I’m amazed they took that kind of risk given Intel’s track record.

            Here’s to hoping consumer parts launch hard in 2019 to vet the basics.

      • cmrcmk
      • 7 months ago

      I think Intel’s primary interest is in datacenter and workstation graphics. If they can sell a few extra units to the kinds of people who read The Tech Report, all the better, but that’s not where their focus is.

      At the PC level, they’ve already conquered the market. But look at how much money Nvidia is making with Tesla parts in servers and supercomputers. Heck, the Top 500 supercomputer list is way more interested in what kind of accelerators are being used (Tesla, Xeon Phi, FPGAs) and the network binding them all together than in what CPUs are controlling it all.

        • nanoflower
        • 7 months ago

        They’ve said they intend to enter the consumer market as well. They may well focus on the datacenter first since they already have relationships in place, but don’t be surprised if they are in the consumer market in a big way in a few years.

        • the
        • 7 months ago

        Much of the shift from CPUs to accelerators is because CPU performance has stagnated over the past few years. Increasing core count per socket doesn’t matter much when the workloads are spread across a large number of nodes with a high-speed, low-latency fabric between them. (There are other reasons why newer, higher core counts are still desirable, like performance per watt; it’s just that raw performance is no longer the primary driving factor it used to be.)

        Though Xe is going to be the foundation for their mobile graphics after Gen 11 and Ice Lake. Intel is aiming to have a common platform of x86 + Xe from top to bottom.

        The only exotic thing that looks to be truly niche for the data center market is Omni-Path, which is one of the things that made Skylake-SP actually interesting, at least. I don’t think it was as popular as Intel had hoped for in terms of sales, but on-package fabric did get attention.

    • The Egg
    • 7 months ago

    Intel is missing the perfect opportunity to name their product “Xen” and confuse the living crap out of customers…..so badly that they just decide to buy a console instead.

    Isn’t that what we like to do in this industry?

      • K-L-Waster
      • 7 months ago

      iCore XenRTXellent

      • Neutronbeam
      • 7 months ago

      Xen? Half Life 3 confirmed!

        • tipoo
        • 7 months ago

        Imagine if they struck a deal to make HL3 exclusively run on Xen GPUs, imagine the meltdowns!

          • Neutronbeam
          • 7 months ago

          Oh, the humanity!

          • highlandr
          • 7 months ago

          Epic store only for a year! (Honestly, I’d put up with that if it meant finally getting HL3)

            • drfish
            • 7 months ago

            My first thought as well. HL3 developed by Valve, financed by Sweeney, running on UE4 as an EGS exclusive would probably end life as we know it.

            • Srsly_Bro
            • 7 months ago

            And a jobless GabeN roaming the clean streets of Bellevue.

    • chuckula
    • 7 months ago

    Bear in mind that OSPRay and potentially these other projects are primarily for ray tracing in professional rendering scenarios that care about performance but care about quality even more. They don’t care about delivering game graphics with raytracing at 60FPS.

    So this isn’t a 100% confirmation of ray tracing hardware for games, but it is definitely of interest to render farms. #Weta #INeedAShillcationInNewZealand

      • Krogoth
      • 7 months ago

      “One Shill to find them, One Shill to bind them and One Shill to rule them all……”

        • Shobai
        • 7 months ago

        “and in the darkness bind them”?

        [For those playing at home, it’s apparently “rule” , “find” , “bring” , “bind” ]

      • Srsly_Bro
      • 7 months ago

      Word to pay attention to in OSPRAY is pray. Let’s see how far that gets Intel.

        • Shobai
        • 7 months ago

        Woah!

        According to my research into US pop culture, the consensus appears to be that prayer will get Intel roughly halfway there.

          • K-L-Waster
          • 7 months ago

          They’ll make it some way. Someday.

    • Captain Ned
    • 7 months ago

    How about Xer?

      • ermo
      • 7 months ago

      Too PC.

        • K-L-Waster
        • 7 months ago

        Right.

        Xer is the console part…

          • willmore
          • 7 months ago

          No, Xe is the console part, Xer is the PC part. Sheesh….

    • chuckula
    • 7 months ago

    CONFIRMED*

    * Then again, so did Larrabee.

      • Krogoth
      • 7 months ago

      “Xe = Vega and Navi done right” – Raja 2019

        • chuckula
        • 7 months ago

        Pshaw… Like we even told Raj that Navi existed before he left!

          • K-L-Waster
          • 7 months ago

          “Navi? Yeah, yeah, that’s… that’s our experimental GPS product, yeah… Navi, navigation, get it? You wouldn’t be interested, Raj, not your thing.”

        • Srsly_Bro
        • 7 months ago

        That’s how Raja got the job. Wait until you see what he actually creates, then watch him go back to AMD after a successful mission to destroy Intel from within.

        • freebird
        • 7 months ago

        and 3 years later…

          • chuckula
          • 7 months ago

          Funny, in the beginning of 2020 AMD’s best GPU will be Vega based and its newest GPUs that are only a couple of months old will be Navi based.

          If Xe is twice as big as the biggest Vega, sucks down twice the power, and performs worse…. well then I reserve the right to call it a miracle that should be revered and unthinkingly defended as a miracle.

          After all, that’s what went on for 6 years after Bulldozer launched.

            • freebird
            • 7 months ago

            Why don’t you just drag The Itanic into the discussion while you are at it…
            [url<]https://www.networkworld.com/article/3196809/the-itanic-finally-sinks.html[/url<]

            I thought Intel said once upon a time we shouldn’t need more than 32-bit desktops or Xeons. Only Itanium needed that power.

            • NovusBogus
            • 7 months ago

            That was mostly HP’s doing, though. Intel was just the enabler. Of course, now that they own a major FPGA company I shudder to think about the next harebrained idea to come out of enterprise land…

      • Srsly_Bro
      • 7 months ago

      Been thinking of going to see my family north of Seattle.

      I guarantee there will be more people at Larrabee State Park on a rainy day than there are users of Intel’s abomination and the derivatives named after it.

        • Waco
        • 7 months ago

        Likely true. While they have their niche…it’s exceedingly tiny.

      • BobbinThreadbare
      • 7 months ago

      Intel in 2 years: “we never said *real time* ray tracing”
