Hands on with Lucid’s Hydra GPU load balancer


Ever since its auspicious debut as a technology demo at last year’s fall IDF, Lucid’s Hydra chip has been an object of curiosity for us. Could this small start-up firm really create a GPU load-balancing chip that would function as smoothly as SLI and CrossFire, yet allow more leeway in mixing GPUs of different types? They’d taken on a daunting challenge, but they seemed to have a pretty good start on the problem.

Now, a little more than a year after that first IDF showing, Lucid says its Hydra 200 chip is ready to ship in consumer systems. To underscore that point, the firm recently invited us to its San Jose, California offices to experience a Hydra-based solution first-hand. We came away with our impressions of the Hydra solution in action, along with some of the first performance numbers to be released to the public.

If you’re unfamiliar with the Hydra, I suggest you read our original coverage of the chip, which introduces the basics pretty well. The basic concept is that the Hydra chip can sit on a motherboard, between the north bridge (or CPU) and the PCI Express graphics slots, and provide real-time load-balancing between two or more GPUs. The Hydra accomplishes this task by intercepting calls from a graphics API like DirectX, dynamically dividing up the workload, and then assigning a portion of the work required to draw each frame to each GPU. The Hydra then combines the results into a single, hopefully coherent image, which is then sent to the display.
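
Lucid hasn’t disclosed exactly how its driver decides what goes where, but the general flow is easy to sketch. The Python snippet below is purely illustrative; the names and the naive proportional split are our own stand-ins, not Lucid’s algorithm.

```python
# Highly simplified sketch of the general idea: intercept a frame's worth of
# work, divide it among the GPUs, and composite the results. The names and the
# proportional split are hypothetical stand-ins, not Lucid's actual method.
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    relative_power: float  # e.g. 1.0 for the faster card, 0.6 for a slower one

def split_frame_work(draw_calls, gpus):
    """Divide one frame's draw calls among GPUs in proportion to their speed."""
    total_power = sum(g.relative_power for g in gpus)
    assignments = {g.name: [] for g in gpus}
    start = 0
    for gpu in gpus:
        count = round(gpu.relative_power / total_power * len(draw_calls))
        assignments[gpu.name].extend(draw_calls[start:start + count])
        start += count
    assignments[gpus[-1].name].extend(draw_calls[start:])  # any rounding leftovers
    return assignments

# Each GPU renders its portion; the Hydra then composites the partial results
# into a single frame before sending it to the display.
frame = [f"draw_call_{n}" for n in range(100)]
work = split_frame_work(frame, [Gpu("GeForce GTX 260", 1.0), Gpu("Radeon HD 4770", 0.6)])
print({name: len(calls) for name, calls in work.items()})  # e.g. {'GeForce GTX 260': 62, 'Radeon HD 4770': 38}
```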

Several things have changed over the past year, as the Hydra has moved from a technology demo toward a real product with proper driver software. Most notably, perhaps, the first Hydra silicon demoed supported only the PCIe Gen1 standard, whereas today’s Hydra 200 series is PCIe Gen2-compliant.

In fact, the Hydra can support up to 48 lanes of PCIe 2.0 connectivity, with 16 “upstream” lanes to the host north bridge or CPU and 32 lanes intended for graphics cards. Those 32 lanes can be bifurcated into as many as four PCIe x8 connections, with several other configurations possible, including dual x16 connections and a single x16 plus dual x8s. The chip can auto-configure its PCIe connections to fit the situation, so this full range of connectivity options can be exposed on a single motherboard with the proper electrical connections and slot configuration.
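
To make those lane splits concrete, here is a quick, purely illustrative check of which slot configurations fit within the chip’s 32 downstream lanes. The helper function and candidate configurations are our own inventions, not anything from Lucid’s documentation.

```python
# Illustrative check of which graphics-slot widths fit within the Hydra 200's
# 32 downstream PCIe 2.0 lanes. The chip negotiates this itself in hardware;
# this helper and the candidate configurations are assumptions for clarity.
DOWNSTREAM_LANES = 32
SUPPORTED_WIDTHS = (8, 16)  # x8 and x16 links, per the configurations described above

def fits_lane_budget(slot_widths):
    """True if every slot uses a supported width and the total fits in 32 lanes."""
    return (all(width in SUPPORTED_WIDTHS for width in slot_widths)
            and sum(slot_widths) <= DOWNSTREAM_LANES)

for config in [(16, 16), (16, 8, 8), (8, 8, 8, 8), (16, 16, 8)]:
    print(config, fits_lane_budget(config))
# (16, 16) True, (16, 8, 8) True, (8, 8, 8, 8) True, (16, 16, 8) False
```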

The tiny Hydra chip package perched atop a holding case

To execute Lucid’s load-balancing algorithms, the Hydra chip also includes a 300MHz RISC core based on the Tensilica Diamond architecture, complete with 64K of instruction memory and 32K of data memory, both on-chip. The chip itself is manufactured by TSMC on a 65-nm fabrication process, and Lucid rates its power draw (presumably peak) at a relatively modest 6W.

Bringing the Hydra to market: Big Bang fizzles?

That’s the hardware, pretty much, which is relatively straightforward. The story of the Hydra’s current status is considerably more complex. Lucid says it has been working with a number of motherboard makers on products that will employ the Hydra chip. MSI has been furthest along in the process and, to our knowledge, is so far the only partner to reveal its plans to the public. Those plans center around a gamer-oriented motherboard based on the Intel P55 chipset dubbed the Big Bang Fuzion.

MSI’s Big Bang Fuzion board

During IDF, MSI and Lucid announced plans for a public launch of the Big Bang motherboard on October 29. As we understand it, the idea was for the Big Bang board to be available to consumers for the holiday season, complete with a driver that supported both symmetrical—two or more identical video cards—and asymmetrical—a GeForce GTX 260, say, and a GeForce 9800 GTX—configurations. A new capability made possible by Windows 7, the ability to mix Radeons and GeForces in the same GPU team, would arrive via a driver update at a later date.

However, October 29 came and went, and nothing happened—the Big Bang Fuzion wasn’t launched. Rumors flew that the board had been delayed to the first quarter of next year. A certain someone pinned the blame on Nvidia, to the surprise of no one. The charges were plausible, though, because the Hydra’s capabilities threaten Nvidia’s SLI licensing regime, whereby motherboard makers must pay tribute to Nvidia in order to enable GeForce multi-GPU configurations on their products. It’s conceivable Nvidia might have pressured MSI to delay the product.

According to a source familiar with the situation, MSI’s requirements for the Hydra driver changed abruptly after the press blitz at this past IDF, with the schedule for support of mixed-vendor GPU configurations pulled into October, a dramatic acceleration of the original plans.

When asked for comment on this story, Nvidia spokesman Ken Brown told us that Nvidia welcomes new technology and innovation, especially technologies that improve gaming, and said he wasn’t aware of Nvidia playing any role in the Big Bang Fuzion delay. Brown reiterated Nvidia’s long-standing position that what Lucid is attempting to do is “very ambitious” and “an enormous technological challenge,” a position the firm has rather curiously communicated at every opportunity. More to the point, though, he confirmed to us that Nvidia will not block its partners from producing motherboards that incorporate Lucid’s technology.

For its part, MSI issued a statement here (at the bottom of the page) citing a two-fold reason for the delay related not to the Hydra hardware but to the drivers: the need for better optimization and stability in Windows 7 and in multi-vendor GPU configs.

Everyone involved seems to agree that the Hydra 200 hardware is ready to go. Based on our brief hands-on experience with the Hydra in Lucid’s offices, though, we think MSI’s trepidation about the drivers may be warranted. Lucid gave us a preview of the mixed-vendor mode in action, and predictably, we ran into a minor glitch: the display appeared to be very dark, as if the gamma or brightness were set improperly, in DirectX 10 applications. This was a preview of that nascent functionality, though, so such things were expected at this stage.

More troubling was the obvious visual corruption we saw in DirectX 9 games when using an all-AMD mix of a Radeon HD 4890 and a Radeon HD 4770. The Lucid employees we spoke with about this problem attributed it to Windows 7, and indeed, Lucid VP of R&D and Engineering David Belz told us that Windows Vista had been the driver team’s primary focus up until the last month. Belz said they had found few differences when moving to Windows 7, but forthrightly admitted the firm might need to look into those differences further. When Belz asked what percentage of prospective Hydra buyers might wish to run Windows 7 immediately, our answer of “Uhh… 99%” seemed to surprise him. The Hydra comes attached to a new motherboard, though, so one would think that answer would be rather obvious at this point in time, even if our estimate might be overstated by a few percentage points.

Belz did express confidence that the issues we saw were rather trivial, likely not difficult to fix with software tweaks. Given what we’ve seen of the Hydra in action, we’re not inclined to disagree with that assessment.

Hands on with a Hydra box

Our testing time with the Hydra in Lucid’s offices was limited, but we did have the chance to gather some preliminary performance data and record our impressions of the solution in action.

The guts of a Big Bang Fuzion-based system

Pictured above are the guts of a demo system based on the MSI board that Lucid had running. Although it was fully operational, the Big Bang Fuzion board is an unreleased product, so we didn’t get a chance to test with it. Instead, Lucid offered up its own test chassis, which looks like so:

The box below is a regular system with a PCIe cable connected to one of its PCI Express x16 slots. The box above contains a Lucid test board with a Hydra chip and some PCIe slots. Although this looks very different from a fully integrated solution, the system topology is in fact very similar. As you can see, we had a pair of GeForce GTX 260 cards installed in the system.

We were able to test mixed-vendor performance, as well. Above is the Device Manager view with a Radeon HD 4890 installed next to a GeForce GTX 260.

Here’s how the Hydra Engine software indicates that it’s using two GPUs from different vendors. In this case, the display is connected to the GeForce rather than the Radeon, although the choice of display GPU is apparently flexible.

Lucid’s control panels for the Hydra are pretty straightforward. The second one, as you can see, allows the user to enable or disable the Hydra on a per-game basis. Lucid’s software detects installed games on the system and offers this list. Obviously, this particular box has a considerable number of games installed simultaneously. Heck, I was a little surprised it generally worked properly. This system was prepared by Lucid’s own internal QA group, and they have been testing a big swath of the most popular games internally to ensure compatibility and performance.

A preview of Hydra performance

Below are the performance results we managed to squeeze out of the Hydra during our session in Lucid’s offices. Going on-site like this to conduct testing is never our preferred situation, and as one would expect, we were limited by time and circumstance in various ways. We had to choose quickly from a limited selection of games, to limit the number of repetitions in our test runs (we tried to do two for each test, if possible), and to test with the hardware Lucid made available to us.

Still, we had time to test a handful of games on a number of different GPU configs. We even got a preview of the Hydra’s mixed-vendor capability by pairing a GeForce GTX 260 with a couple of different Radeons.

The PC we used for testing was based on a Core i7-920, a Gigabyte EX58-UD3R motherboard, and the 32-bit version of Windows 7. Oddly, the system had only two 1GB DIMMs installed, so one of the Core i7-920’s memory channels wasn’t available. In fact, when we first started testing, only 1GB of memory was available, because the second DIMM was installed in the wrong slot—an artifact of the limited setup time Lucid had for this press demo. We corrected that problem, though, and moved on with our testing with the full 2GB at our disposal.

The first game we’ll look at is Operation Flashpoint: Dragon Rising. We used FRAPS to record frame rates in this game and in FEAR 2, and to keep things very repeatable, we simply stood still at a fixed point in the game and recorded frame rates for 30 seconds. That’s not how we usually test, but it should suffice for our purposes here. Also, all of the multi-GPU configurations below make use of the Hydra for load balancing. We didn’t have the chance to test with CrossFire or SLI for comparison.
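
The FPS averages from a FRAPS capture like this are simply frames divided by elapsed time. Here’s a minimal sketch of that arithmetic, with fabricated timestamps standing in for the actual capture data:

```python
# Minimal sketch of a FRAPS-style average: frames rendered divided by elapsed
# time over the capture window. The timestamps below are fabricated for
# illustration, not our recorded data.
def average_fps(frame_timestamps_ms):
    """Average frame rate from a list of frame-start timestamps in milliseconds."""
    elapsed_s = (frame_timestamps_ms[-1] - frame_timestamps_ms[0]) / 1000.0
    frames_rendered = len(frame_timestamps_ms) - 1
    return frames_rendered / elapsed_s

# A 30-second capture at a steady ~16.7 ms per frame works out to roughly 60 FPS.
timestamps = [i * 16.7 for i in range(1800)]
print(round(average_fps(timestamps), 1))  # ~59.9
```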

In this game, the Hydra delivers performance scaling in earnest, even with asymmetrical and mixed-vendor configurations. No, we’re not seeing linear performance scaling when, say, going from a single GeForce GTX 260 to two of them, but I doubt this game is entirely GPU-limited at this resolution.
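
A handy way to quantify that sort of scaling is to divide the multi-GPU frame rate by that of the fastest single card in the mix. The numbers in this small sketch are placeholders, not our measured results.

```python
# Scaling expressed as a speedup over the fastest single card in the config.
# The frame rates here are placeholders, not our measured numbers.
def scaling_factor(multi_gpu_fps, single_gpu_fps):
    """Speedup of a multi-GPU result relative to its single-card baseline."""
    return multi_gpu_fps / single_gpu_fps

# Linear scaling for two matched cards would be 2.0; values between 1.0 and 2.0
# mean the second GPU helps, but something else (CPU, drivers, an uneven split)
# is limiting the gain.
print(round(scaling_factor(multi_gpu_fps=85.0, single_gpu_fps=55.0), 2))  # 1.55
```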

Surprisingly, the mixed-mode config with a GeForce GTX 260 and a Radeon HD 4770 outperforms dual GTX 260s. That’s unexpected, but otherwise, the Hydra’s performance is pretty much as advertised.

As I noted before, we did run into visual corruption problems with the 4890+4770 config in the two DX9 games, Operation Flashpoint: Dragon Rising and FEAR 2. We’ve reported the performance results anyhow, for the sake of completeness, but they come with that caveat.

The visible problems of the 4890+4770 config translate into performance issues here, as the pair of GPUs turns out to be slower than a single 4890. Otherwise, though, the Hydra does its thing quite well.

These last two benchmarks use DirectX 10, and as I mentioned, the mixed-mode configs with DX10 apps had much darker displays than normal, for whatever reason. They looked fine otherwise, though, and the performance scaling pictures for these two DX10 apps are very similar. Generally, the Hydra achieves good results once again, although the 4890+4770 pair’s scaling issues remain.

So what now?

Although we encountered some glitches with Lucid’s Windows 7 drivers, the Hydra appears generally to work as advertised and to be tantalizingly close to ready to ship in consumer products. No doubt a start-up like Lucid would have benefited from having its products out on store shelves in time for the holidays, but that now seems unlikely. That’s unfortunate, but it doesn’t diminish the magnitude of Lucid’s apparent accomplishment. Even if we have to wait a few months to enjoy it, the flexibility to mix any two reasonably similar graphics cards and achieve proportionally better performance should be a very nice thing to have.

And make no mistake, Lucid has undertaken a monster challenge. The keys to success are in Lucid’s load-balancing methods, the details of which remain largely a secret. SLI and CrossFire predominantly use alternate-frame rendering, where the load is interleaved between GPUs. This method presents several problems, including the fact that most newer games have frame-to-frame dependencies that will limit performance scaling. Also, AFR doesn’t scale well when the GPUs aren’t evenly matched. Lucid may employ AFR in certain cases, but its best load-distribution methods are clearly finer-grained than AFR and involve per-polygon or per-object divisions of labor. We’ve gotten a taste of how they might work with some visual demos, but for obvious reasons, Lucid is guarding the particulars of their operation. Lucid’s Belz would only reiterate that his solution seeks to understand what the application is doing and then applies the appropriate load-balancing algorithm. The method used may change from frame to frame, as the application’s needs change.
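
A toy model illustrates the contrast. The sketch below is our own simplification rather than anything Lucid has published; it compares the steady-state frame time of strict frame interleaving against a proportional split of each frame when one GPU is only half as fast as the other.

```python
# Rough illustration of why AFR struggles with mismatched cards while a
# finer-grained split does not. Both schemes are simplified hypotheticals
# (no frame queuing, no CPU limits), not Lucid's actual algorithms.

def afr_frame_time(frame_cost, gpu_speeds):
    """Strict alternate-frame rendering: whole frames interleave across GPUs,
    so the steady-state pace is set by the slowest card while faster cards idle."""
    slowest_frame_time = max(frame_cost / speed for speed in gpu_speeds)
    return slowest_frame_time / len(gpu_speeds)

def split_frame_time(frame_cost, gpu_speeds):
    """Per-frame split in proportion to GPU speed: every card works on part of
    every frame, so all of them finish their share at roughly the same time."""
    return frame_cost / sum(gpu_speeds)

frame_cost = 100.0       # arbitrary units of rendering work per frame
gpu_speeds = [1.0, 0.5]  # one fast card plus one card half as fast

print(afr_frame_time(frame_cost, gpu_speeds))    # 100.0 -> no faster than the fast card alone
print(split_frame_time(frame_cost, gpu_speeds))  # ~66.7 -> both cards contribute fully
```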

Mixing different GPU architectures, whether it be different generations from one GPU maker or a cross-vendor config, presents additional issues. For things like internal mathematical precision or antialiasing techniques, Lucid must restrict itself to exposing the lowest common denominator between the GPUs. Among other things, that means specialized antialiasing methods like AMD’s custom filter AA or Nvidia’s coverage sampled AA won’t always be available, although the base multisampled modes appear to work just fine.
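
Expressed in code, that lowest-common-denominator rule amounts to a set intersection. The mode lists in this little sketch are illustrative rather than complete capability lists for either card.

```python
# The lowest-common-denominator rule as a set intersection: only modes both
# GPUs expose can be offered to the game. These mode lists are illustrative,
# not complete capability dumps of either card.
radeon_aa_modes  = {"2x MSAA", "4x MSAA", "8x MSAA", "Custom Filter AA"}
geforce_aa_modes = {"2x MSAA", "4x MSAA", "8x MSAA", "Coverage Sampled AA"}

available_to_game = radeon_aa_modes & geforce_aa_modes
print(sorted(available_to_game))  # only the common multisampled modes survive
```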

Lucid’s goal in all of this is to maintain what Belz calls image integrity. This is no small thing. For instance, we’ve documented the differences in texture filtering algorithms from one GPU architecture to the next. If Lucid isn’t careful about how it divides up the workload for a scene, either within one frame or from frame to frame, it risks exposing visual noise caused by the differences in image output. We spent some time peering skeptically at Operation Flashpoint: Dragon Rising with a mixed-vendor config trying to detect any such problems, though, and none were apparent. The on-screen image seemed solid and looked quite good.

Maintaining pristine image integrity will naturally limit the Hydra’s load-balancing options, and so it may present a performance tradeoff. Belz tells us Lucid has chosen image integrity over raw performance in its default driver tuning, but the firm plans to expose an option to allow the user to favor performance, instead. Some folks may not be annoyed by the subtle differences between GPU output, so giving users the option seems sensible.

The reality of these technical issues underscores a point: the Hydra will require continual driver support in order to maintain compatibility with future games. The Hydra doesn’t yet support DirectX 11, for instance, and the company will have to develop that. Lucid remains focused on solid overall support for graphics APIs, though. Belz says Lucid’s approach to fixing any problems its QA team finds in a specific game is to make a general tweak to its driver. Although that fix might resolve an issue with a particular game, he insists that game-specific profiles are not employed.

Nevertheless, Lucid the company will have to succeed in order for the Hydra to become and remain a viable, useful consumer product. We’re hopeful on that front, and we look forward to getting a Hydra-based motherboard into our labs for testing soon.

Comments closed
    • CADgamer
    • 10 years ago

    I would like to know if there is a possibility of combining a workstation card with a gaming card through the Hydra setup… it seems that this configuration would appeal to a number of people that alternate work and play needs.

    • mattthemuppet
    • 10 years ago

    now what would be really interesting (to me at least 🙂 would be a Lucid version of Hybrid-something-or-other – where you have a high powered GPU and a midrange card, use the midrange GPU for 2D and basic 3D then lump both of them together for games.

    That’d be neat, especially as AMD and Nvidia seem to have abandoned that idea many moons ago.

    • yogibbear
    • 10 years ago

    So…… now when the gt300 comes out i can still use my trusty 8800gt

    WOOOHOOOO 8800gt FOREVER! xoxo

    • obarthelemy
    • 10 years ago

    I already don’t see much point in SLI and Crossfire (I’m too old for the epeen thing). This seems even more like a solution in search of a problem.

    Maybe for use by OEMs as a kind of hybrid graphics for Intel motherboards… AMD already has that.

      • StuG
      • 10 years ago

      The only reason that dual-gpu is good is if you care about keeping your graphics maxed out while not having to upgrade every generation. During the HD3870 generation I bought my first one when it came out, and waited until I could pick up another HD3870 for $80 bucks. That 90% of the time kept me in pace with my friends who all bought 4870’s for more money. Then, come the HD5870 series I bought one. Plan on following the same plan.

      Is that an entirely efficient cycle? No (higher power draw…though being solved with this new generation of cards…20w anybody?). Is it cost effective? Yes. Getting the performance of a next gen card for $90 rather than $250 is pretty nice.

      That’s my 2 cents on it, for however you feel it should be taken. Also, you don’t need to be e-peen craved to think having 2 gpu’s is cool, its just another technology thats fun to mess and tinker with. Or atleast, thats how I view it.

        • Arag0n
        • 10 years ago

        I usually go with the sell/purchase model.

        Selling my old graphic card and buying a new one.

    • Kreshna Aryaguna Nurzaman
    • 10 years ago

    Two questions:

    1) Does it have multi-GPU FSAA? Can we use the multiple video cards to crank up AA instead of frame rate?

    2) Does it work on Windows XP? 😀

    Well, I’m always interested to build an “ultimate” legacy system that is based on Windows XP, to play old games like Baldur’s Gate and Falcon 4.0. Frame rate shouldn’t be a problem with such games, but I’m looking for the highest configuration possible to crank up image quality (especially FSAA).

    Right now, it seems the only viable option is a GeForce 7950 GX2 in Quad SLI configuration. It has 32xAA and it works in XP. IIRC GeForce 9800 Quad SLI only works in Vista and above.

      • Thorburn
      • 10 years ago

      Quad-SLI was never actually particularly fast; a 5870 will outpace a pair of 9800GX2’s in pretty much anything. Those games wouldn’t need anything like that power anyway – they were designed for DirectX 5 and would probably run 1920×1200 on integrated graphics (Baldur’s Gate isn’t even 3D, is it?).

      That, and both Lucid and the existing CrossFire/SLI solutions are very much game-title dependent; multi-GPU in older games will most likely be pretty broken.

        • Kreshna Aryaguna Nurzaman
        • 10 years ago

        “multi-GPU in older games will most likely be pretty broken”

        Unless when it’s used solely for AA, isn’t it? No SFR, no AFR, just AA.

          • Meadows
          • 10 years ago

          Can you even do that?

      • Shining Arcanine
      • 10 years ago

      If you are interested in increasing image quality, you probably would be interested in ray-traced versions of those games, assuming that they were open sourced at some point.

    • geekl33tgamer
    • 10 years ago

    Now, this is cool, and it actually works?

    Lots of people are saying it’s pointless. Yeah, it copies what SLI and Crossfire have done for years. But, here’s the thing… I would hope that with continued support, it might handle the load balancing much more efficiently than existing solutions. It looks like it may not be hindered between splitting graphics data equally 3 or 4 ways too?

    Also, from a gaming standpoint (As I believe Lucids tech does not use AFR), even if you ran it with 2 identical cards like you would in say SLI, to the game, it’s none the wiser that the graphics rendering is being split.

    Some games I have can’t run in SLI, so I have one card sitting idle. That’s the biggest issue with nVidia’s AFR implementation at the moment.

    Fingers crossed this comes out, and removes some performance hurdles along the way. Ahhhhh, it’s not perfect, but it’s so close to being perfect.

    It’s probably the single biggest thing in PC gaming I have been able to get excited about in a long time 🙂

      • swaaye
      • 10 years ago

      Heh. I still think that dual GPU setups are fundamentally semi-pointless because of all of the software issues you sign up for with your purchase no matter which way you go.

    • Waco
    • 10 years ago

    Hopefully Lucid will have a booth at SC ’09 next week (though I doubt it highly). It’d be nice to have a chance to see it in person.

    EDIT: Nope, not on the exhibitor list.

    • Veerappan
    • 10 years ago

    Alright, so lots of comments in this thread have stated that this product is niche-within-a-niche, and that this product is only interesting to those with either an old card a user would like to pair with a new card, or as a replacement for SLI/X-Fire for new cards.

    How about this use case:
    Gamer X wants to have a fast computer that can play that cool new game that came out. Their computer has this Lucid chip built-in, but their current video card is a discrete Radeon 4770 that they chose for its low power use and good value. Now they want to play that new game which requires a DirectX 11 card.

    This person now has a choice to make:
    1) Don’t play the game, or use software fallbacks in rendering.
    2) Replace their video card with a DX11 card.
    3) Get whatever new DX11 card they can afford, regardless of manufacturer.

    Option 3 might actually be viable here, not only because of the vendor-agnostic nature of this chip, but also because the user would be able to disable the Hydra feature on any games that their 4770 could play just fine. By disabling this feature for games where its not needed, the new Radeon 5870 X2 that they just bought (after they won the lottery) could be left in a low power state except when its needed. Only when a game that requires DX11 is being played would the card be pulled out of an idle state to do work.

    In fact, this chip could lead to something resembling an ideal implementation of hybrid power, where the Lucid chip enables the primary GPU at all times, but tells the secondary GPUs to power down when they’re not needed.

      • bogbox
      • 10 years ago

      I think that is not the case.
      1. DX11 will not work unless both cards are DX11 cards.
      2. How big does that power supply have to be, and how many cards will it take? Two cards means at least 4x 6-pin connectors, or maybe 8-pin ones (a stable 700W at least).
      3. How driver-dependent will the Hydra be? The latest games aren’t going to be supported for a while.

        • Veerappan
        • 10 years ago

        l[

    • shaq_mobile
    • 10 years ago

    This tech opens up some pretty darn cool options. Imagine, instead of upgrading your computer or buying the mid range gpu (5850, 4850, gtx260 etc) when they come out, you can just slap an economy one into your computer each generation. that would be awesome! plus it’d make my case look even more ghetto and ridiculous. three economy gpus, six hard drives, no case side panel, no drive bay covers… sweet!

    • d0g_p00p
    • 10 years ago

    The link to the previous article (our original coverage of the chip) is broken. It leads to a “page not found” page.

    • stmok
    • 10 years ago

    $100 dollars says Nvidia will attempt to -[

      • silent ninjah
      • 10 years ago

      Instead of being happy that their older products have extended usage. Oh, no, wait, there’s no money in creating long lasting products. /em points at lightbulb manufacturers.

        • Meadows
        • 10 years ago

        Just make it bright enough and it won’t need to last a long time. That’s what being a one hit wonder is all about.

      • bfellow
      • 10 years ago

      Nvidia CEO: $100? How about $100 million then you’re on!

    • esterhasz
    • 10 years ago

    I have read some of the comments and on the question whether the Hydra is niche or not, I’d like to make the following point: the whole GPU market is really in transition right now; look at nvidia’s efforts to push GPGPU in Tesla and Ion, AMD’s Fusion effort, the whole PhysX wrangling, etc. Same with the gaming market, i.e. casual vs. hardcore and the question where actual money can be made. This is becoming a very complex ecosystem, even without figuring emerging markets (BRIC anyone?) into the equation. Success of a product like Hydra depends on a lot of factors, most importantly whether any of the big players picks it up and ramps up those economies of scale. As a startup, their strategy is probably to get bought and then to look further from there. It’s a risky bet but there might be some interesting IP in the product even if it never gets to market in a significant way…

      • Waco
      • 10 years ago

      I don’t see anything bad about Lucid’s demo of the system…nor do I think it’s suspicious they allowed each group of reviewers to play with the same box.

      Meh. If it works as promised I’ll get a board with one of them. If not, well, I can’t say I’ll be heartbroken.

    • Rakhmaninov3
    • 10 years ago

    I think there will be better ways to spend the extra money that this would cost. The drivers that would continually need to be upgraded sound like a migraine waiting to happen, also.

    Really freakin’ cool academic project, for sure, but I think the market will have better graphics processing solutions available in terms of both cost and maintenance requirements.

    • Philldoe
    • 10 years ago

    Looks like I’ll be sticking some $$$ back for this things release. Hopefully I won’t have to hand out money for a new mobo.

    • lycium
    • 10 years ago

    WOW, it actually works! i’m not the least bit interested in multi-gpu gaming, but i am astounded to see how well they’ve managed to pull this off.

    of course there are rough edges, but that’s a great first start, lucid! *thumbs up*

    • paulsz28
    • 10 years ago

    Hmm, I see your point about Hydra forcing the weaker card’s capability limitations on the stronger card. For instance, using an X1900GT with a 4870 would force the system to run at DX9/SM3.0 (X1900GT capabilities) although the 4870 can actually output DX10.1/SM4.0 images. Perhaps I missed the article saying otherwise, but it seems that would be the case.

    However, what if you could force the weaker card into a single function: physics processing, AA processing, memory slave, etc. Now, you’ve really upped the horsepower of your graphics system without having to sacrifice the capabilities of the stronger card. I realize that is a lot to ask (not all cards can do physics, memory would likely have to match, i.e., not mix GDDR3 and GDDR5), but it’s a nice goal to have.

    The “active algorithms” Hydra uses to determine the best method to process the graphics data are interesting – would like to know how they “actively” do this, although I suspect it’s a trade secret. From my perspective, they’ll need to add more “active” algorithms for Hydra to show its true potential. For instance, if a weaker card is forced into a single function, but is sitting there idle, is there some way to exploit its resources without limiting the stronger card? Not yet, but that would be a clever feature.

    I’m not sure the alternating line algorithm makes sense, considering, as the article states, some games draw frames depending on what the previous frame was (just process the difference and output the new frame). Although, perhaps this could be one of the active algorithms Lucid could implement in Hydra depending on the game. . . . Only time will tell.

    There is GREAT potential for this product. Lucid needs to ensure that the lesser capable card doesn’t inherently limit the stronger (not sure that’s even possible due to INHERENT differences in the cards). I perhaps wouldn’t fork over the dough for a mobo that had this chip integrated, but I would consider a stand-alone add-in card for when I want to buy that new 5890 but keep my 4870 running as well . . . if the add-in card isn’t as expensive as another 4870 or 5890 – at that point, you’re going head to head with CF.

    • Firestarter
    • 10 years ago

    If I had to render CGI for a living, I’d be crossing my fingers for a Hydra setup with 10+ GPUs

    • MadManOriginal
    • 10 years ago

    Id be curious to know how this works for or affects GPGPU apps. I’m thinking it’s not hard to just chop up a parallel task across GPUs so many programs could do it on their own, but if not this might be interesting in that regard. Say, stuff a bunch of single slot ‘old’ cards in to a system that have more power than newer cards for lower total cost.

      • OneArmedScissor
      • 10 years ago

      That’s kind of what I was getting at in my earlier post. If they can get this to work so well in games, where the drivers and cards are so different, it implies that it’s easier to tear down all of the walls with GPUs, no matter how great, than it is move any more ground with multi-core CPUs than has already happened.

    • eitje
    • 10 years ago

    q[

    • Freon
    • 10 years ago

    I’m impressed it works as well as covered here, but I still think this is a dead end technology.

    Niche market (high end), adds cost, ultimately limited by practical concerns even though they seemed to have jumped some amazing hurdles, will need vigilance in software updates, adds cost…

    I guess it just seems like a solution for a problem that doesn’t exist, or at least is of marginal importance in the market.

    • wingless
    • 10 years ago

    The new winning Multi-GPU configuration is: *[

    • ub3r
    • 10 years ago

    Typical case of inter-terrestrial prostitution.

    • BooTs
    • 10 years ago

    l[

    • Sahrin
    • 10 years ago

    I noticed that all of the cards tested are DX10, so the first question is how does this handle mixed tech generations. While the shift from D3D10 to D3D11 was pretty quick (3 years), we were on D3D9 for the 6 before that – but within that six year period there were at least three major changes to the API (SM2.0, SM3.0, SM3.1).

    My wager is that the typical upgrade path for a gamer who cares about this tech in the first place (read: the only people who know enough to even understand what they can do with it) is generational-range upgrades (which is to say, they buy a card from a certain range – usually Enthusiast Mid Range or Enthusiast High Performance – e.g. 5850 and 5870). If that’s the case, then I’m not sure I see the value in this. For instance, if I owned a 4870 as my current card, a 7800GTX as my prior card – and was looking to buy a 5870 – how could I possibly get an implementation of Hydra that performs without buying a new card or sacrificing features?

    While SLI and Xfire (and now Hydra) are kind of neato technologies – the pace of advances in GPU technology (doubling performance every 12-18 months) basically makes it irrelevant. My main rig currently has an X1900XT in it – the best card made in its heyday. I don’t know for sure but I’d be shocked to find that I can mix this card with my desired upgrade (5870) and still get the feature set I want (full DX11 + Video Acc). I’d have to buy a lesser card to mix it, but then on top of going through the hassle of installing and maintaining two driver sets (plus the hydra drivers) and maintaining profiles for all these games, I would have to drop another $100-$200 – for what appears to be only 10%-40% improvement over the already playable framerates of the 5870.

    I’ve always looked at the multi-gpu solutions as a total waste – inelegant and brute force solutions to a simple problem (like building a render farm out of OoO Superscalar deep and narrow CPU’s) that do nothing but waste PCB, memory and silicon.

    As near as I can tell, these technologies only drive more sales for GPU and AIB partners – not more performance of value for their customers.

      • OneArmedScissor
      • 10 years ago

      That’s like saying that a quad-core CPU must suck because a single-core version of the same architecture doesn’t do exactly what you want it to.

      This isn’t the full implementation of what the chip has to offer. It’s a glimpse of what’s to come. They got it to work with cards of different vendors. That’s a heck of a feat. You sure are discounting that quite a bit and making a lot of assumptions.

        • Sahrin
        • 10 years ago

        They did nothing of the sort. They wrote a middleware that is hardware accelerated on a very simple chip which translates API calls into a split-driver environment (sending a call to either vendor’s GPU). It may be a technical feat, but it is not an impressive technological one.

        And wrong on the quad-core point as well. A better comparison would be an advanced quad core (5870) to a less advanced four-socket system. There’s no doubt that the more advanced native quad core will perform better in almost every application (the exception being very memory bandwidth intensive ones); the quad-socket solution is a brute force “throw hardware at it until it goes away” one, the more advanced quad core uses integration and advances in design and engineering to more efficiently allocate resources (both compute and manufacturing).

    • OneArmedScissor
    • 10 years ago

    To the people wondering about how it was supposed to scale better than SLI and Crossfire:

    Thinking back now, I’m not sure they said it worked better than dual-GPU configurations.

    I think the point was that it would continue scaling well beyond just two GPUs, where SLI and Crossfire seem to flop.

    Much more interesting than comparing these Hydra results to dual-GPU SLI and Crossfire would be putting four totally different cards in a computer at once and seeing what happens.

      • DrDillyBar
      • 10 years ago

      I like this.

    • clone
    • 10 years ago

    I wonder if it would have been easier to go alternate line rendering while monitoring the hardware to watch its load balancing… if the frames began to fall or one GPU is struggling, shift more lines to the one that isn’t and monitor the situation on the fly.

    while very curious I’m losing interest because of required driver support and potential game profile requirements.

    • darryl
    • 10 years ago

    I’ll assume that once Hydra-embedded mobos are released, the tech sites will do the side-by-side comparisons between Hydra, X-fire, and SLi. I just hope some mobo mfg will make such a board which also uses the socket 1366, and not just the 1156. Somehow I doubt that will happen though.
    D

    • ChangWang
    • 10 years ago

    I would like to see an X2 type card with this chip instead of the PCIe bridge. Now that would be interesting

    • StashTheVampede
    • 10 years ago

    Cannot wait for the non-canned benchmarks. SLI vs. CF + Hydra comparisons.

    • ssidbroadcast
    • 10 years ago

    Hm. It’s way cool that you’re able to mix vendors, but I seem to remember in the older article that the performance scaling was a bit better.

    • Krogoth
    • 10 years ago

    Cool idea, but unfortunately it is doomed to be a niche within a niche.

    CF/SLI configurations are pretty rare to begin with. Never mind being in a situation where you have an old video card that you want to hook up with your newer card.

      • OneArmedScissor
      • 10 years ago

      At first glance, yes. Where it gets interesting is that it can move multi-GPU configurations out of the niche market.

      I’ve got quite a few different video cards lying around, but I’ve never bothered with Crossfire or SLI. How many people are in that boat? Squillions, no doubt.

      If this becomes a relatively cheap chip that can be slapped on all sorts of motherboards, it has a great chance.

        • Krogoth
        • 10 years ago

        I can’t see that happening.

        Dual cards or more require a larger PSU and a better cooling solution, and motherboards that support multiple 16x PCIe slots aren’t exactly cheap either.

        Multi-card setups are always going to be the tiny minority.

          • OneArmedScissor
          • 10 years ago

          You don’t need full 16x slots (especially with old cards, seriously…) to see a huge benefit. Boards with Hydra chips are probably going to have them, anyways, and more slots and PCIe lanes will become increasingly more common as PCI dies off, Hydra or no Hydra.

          PSUs are generally overpowered, and graphics cards keep becoming more efficient.

          Nobody’s computer is going to crash or burn up from adding another graphics card that’s self sufficient at cooling itself.

          Come on Mr. Krogoth, try a little harder. :p

        • DrDillyBar
        • 10 years ago

        I for example recently handme-down’d my 3870 512.

        • wira020
        • 10 years ago

        I do agree on this… think like USB3.. now it adds a little premium.. but when it became standard, it shouldnt add extra cost… well, of course this chip aint no standard or anything, but we’ll have to see how much premium it adds to the mobo when it launch..if its just around $20, then i’d say it will be alot cheaper in a year or so.. Plus, i trust msi is not the kind of company that likes to burn a hole in their customer pocket…

        And to people who says this is dead end, dont be so sure..sounds pretty one-sided to me…. this is just a beginning… the possibility is still pretty much endless.. could lead to better scaling, less dependancy of drivers update to crossfire/sli a new game, more futureproof cards, etc2…

        And if the rumors of Nvidia pressuring msi to delay this board is true, that could means there’s something that they fear with in this technology…

          • Freon
          • 10 years ago

          USB3 doesn’t really require constant software updates. It is an industry standard, not terribly complex, nothing that innovative about it. Anyone can produce USB3 compliant devices.

          I think there is an order of magnitude of difference in complexity here.

      • Skrying
      • 10 years ago

      Isn’t being in a situation where you want to pair your old and new card actually much more common? I’m, and it seems most people, are far more likely to have both a new card and their previous card lying around than to buy two cards at the same time.

      I’m more than curious to see how this setup would perform with say a HD4890 paired with a HD5780. Let’s say the games only support DirectX 10 at most. Would it work? Similar scaling as seen in the article?

        • travbrad
        • 10 years ago

        Indeed. By the time I am ready to upgrade my card, the one I have is usually no longer for sale anywhere, so getting another of the same card isn’t an option.

        Even if that old card is still available, CF/SLI have some serious limitations. If a new game comes out that doesn’t scale well you are SOL using 2 older cards. However, when combining a new card with an old one, at the very least you should be able to get the new cards performance. Games that do scale well should give a nice bonus.

        Of course this will all come down to cost, and whether it is tied to the motherboard. I only go through 1-2 video cards for any given motherboard I buy, so buying this technology repeatedly sort of defeats the purpose, unless it’s very cheap

      • Meadows
      • 10 years ago

      No.

        • flip-mode
        • 10 years ago

        sleep till Brooklyn?

      • Pettytheft
      • 10 years ago

      It all depends on the additional cost. If I can take an old video card and slap it in for some extra FPS instead of buying a matching card then it may be worth it. I was thinking that I have a 8800 that is in another machine that is not being put to good use. I could slap an old ATI X800 card in there and use the 8800 as a secondary.

    • OneArmedScissor
    • 10 years ago

    So much for bazillion-core CPUs. It won’t be long before “quad-card” will mean a lot more than “quad-core.”

    Where I think this gets really interesting is the prospect of combining discrete cards with much more powerful on-die GPUs, which will be coming at us left and right from now on.

    It will be like a free processing boost, but possibly a rather significant one, with countless uses, considering the mixing and matching that will now be possible between CPUs, integrated GPUs, and discrete GPUs.

    • sluggo
    • 10 years ago

    If I’m in Intel’s market dev group I’d be happy to pay a significant premium for this company. Major shot across the bow for their chipset competitors and their proprietary multi-GPU chipsets.

      • UberGerbil
      • 10 years ago

      Intel Capital already provided them with early-round funding.

    • flip-mode
    • 10 years ago

    Pretty darn cool. Makes multi-card rigs much, much more interesting. How long before Nvidia sues them, buys them, or employs some driver busting shenanigans?

    • DrDillyBar
    • 10 years ago

    That’s a promising first look. Thanks.

    • khands
    • 10 years ago

    I was hoping the rumors of better than SLI/Xfire performance were true, but if I recall some other benchmarks this doesn’t appear to be the case, still very nice.

      • moritzgedig
      • 10 years ago

      I think that hope was very risky and without a basis.
      I see this as a “tech demo” / “proof of concept” and an engineering success, but not a good product.

    • TurtlePerson2
    • 10 years ago

    I’m glad to see that the technology is confirmed running and more or less works as advertised.

    It would be really helpful if there was a true SLI or Crossfire setup added to the results. I’m curious to know whether Hydra outperforms SLI in a dual GTX 260 setup.

      • Creamsteak
      • 10 years ago

      I’m sure there are a ton of eager reviewers willing to answer that question once there’s a board out with it.
