Asus’ Strix Radeon R9 Fury graphics card reviewed

by Scott Wasson, Former Editor-in-Chief

New Radeons are coming into Damage Labs at a rate and in a fashion I can barely handle. The first Radeon R9 Fury card, the air-cooled sibling to the R9 Fury X, arrived less than 24 hours ago, as I write. I’m mainlining an I.V. drip consisting of Brazilian coffee, vitamin B, and methamphetamine just to bring you these words. With luck, I’ll polish off this review and get it posted before suffering a major medical event. Hallucinations are possible and may involve large, red tentacles reaching forth from a GPU cooler.

                    GPU boost   Shader       Memory     PCIe aux    Peak power   E-tail
                    clock       processors   config     power       draw         price
Radeon R9 Fury      1000 MHz    3584         4 GB HBM   2 x 8-pin   275W         $549.99*
Radeon R9 Fury X    1050 MHz    4096         4 GB HBM   2 x 8-pin   275W         $649.99

Despite my current state, the health of AMD’s new R9 Fury graphics card appears to be quite good. The Fury is based on the same Fiji GPU as the Radeon R9 Fury X, and it has the same prodigious memory subsystem powered by HBM, the stacked DRAM solution that’s likely the future of graphics memory. That means the Fury has the same ridiculous 512GB/s of memory bandwidth as its elder brother. The only real cut-downs come at the chip level. The Fiji GPU on Fury has had eight of its 64 GCN compute units deactivated, taking it down to “only” 3584 stream processors and 224 texels per clock of filtering power. The Fury’s only other concession to being second in the lineup is a 1000MHz peak clock speed, 50MHz shy of the big dawg’s.

At the end of the day, the Fury still has the second-most-powerful shader array in a consumer GPU, with 7.2 teraflops of single-precision arithmetic power on tap, and it has roughly 50% more memory bandwidth than a GeForce Titan X.
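Those headline figures fall straight out of the unit counts and clocks. Here's a quick back-of-the-envelope check (a sketch in Python, using the usual convention that a fused multiply-add counts as two flops):

```python
# Peak single-precision rate: stream processors x 2 flops per clock (FMA) x clock speed.
sps, boost_ghz = 3584, 1.0
print(sps * 2 * boost_ghz / 1000)   # 7.168 TFLOPS -- the "7.2 teraflops" figure

# HBM bandwidth: 4096-bit interface at 1 Gbps per pin (500 MHz, double data rate).
print(4096 * 1 / 8)                 # 512 GB/s
```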

The card we have on hand to test is Asus’ Strix rendition of the R9 Fury. Although all Fury X cards are supposed to be the same, AMD has given board makers the go-ahead to customize the non-X Fury as they see fit. Asus has taken that ball and run with it, slapping on its brand-spanking-new DirectCU III cooler. This beast is huge and heavy, with a pair of extra-thick heatpipes snaking through its array of cooling fins. The cooler is also one of the tallest and longest we’ve seen; it protrudes about two inches above the top of the PCIe slot cover, and the card is about 11.75″ long.

Like a number of aftermarket cards from the best manufacturers these days, the Strix R9 Fury’s fans do not spin until the GPU reaches a certain temperature. Generally, that means the fans stay completely still during everyday operation on the Windows desktop, which is excellent.

The Strix is premium in other ways, too many for me to retain in my semi-medicated state. I do recall something about Asus including a year-long license for XSplit Premium, so you can stream your Fury-powered exploits to the world. There’s also an “OC mode” that grants an extra 20MHz of GPU clock frequency at the flip of a switch. Oh, and I think those tentacle hallucinations may have been prompted in part by the throbbing light show under the Strix logo on the top of the cooler.

Asus expects the Strix R9 Fury to cost $579.99 at online retailers, a little more than AMD’s suggested base price for Fury cards. That sticker undercuts any GM200-based GeForce card, like the GTX 980 Ti and Titan X, and it’s roughly 50 bucks more expensive than hot-clocked GTX 980 cards based on the smaller GM204 GPU.

The Radeon R9 300 series, too
The R9 Fury isn’t the only new Radeon getting the treatment in Damage Labs. I’ve finally gotten my hands on a pair of R9 300-series cards and have a full set of results for you on the following pages.

The R9 390 and 390X are refreshed versions of the R9 290 and 290X before them. They're based on the same Hawaii GPU, but AMD has juiced them up a bit with a series of tweaks. First, GPU clock speeds are up by 50MHz on both cards, yielding a bit more goodness. Memory clocks are up even more than that, from 5 GT/s to 6 GT/s, thanks to the availability of newer and better GDDR5 chips. As a result, memory bandwidth jumps from 320 to 384 GB/s, putting these cards well ahead of the Titan X and anything else from the green team in terms of raw throughput. Furthermore, all R9 390 and 390X cards ship with a thunderous 8GB of GDDR5 memory onboard, just to remove any doubt.
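That bandwidth jump follows directly from Hawaii's 512-bit memory interface. A quick sanity check:

```python
bus_bits = 512                # Hawaii's memory interface width
print(bus_bits * 5 / 8)       # R9 290/290X at 5 GT/s -> 320.0 GB/s
print(bus_bits * 6 / 8)       # R9 390/390X at 6 GT/s -> 384.0 GB/s
```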

Finally, AMD says it has conducted a “complete re-write” of the PowerTune algorithm that manages power consumption on these cards. Absolute peak power draw doesn’t change, but the company expects lower power use when running a game than on the older Hawaii-based cards.

Nope, this isn’t the Fury card I just showed you. It’s the Strix R9 390X, and Asus has equipped it with the same incredibly beefy DirectCU III cooler. This card is presently selling for $450 at Newegg, so it undercuts the GeForce GTX 980 while offering substantially higher memory bandwidth and double the memory capacity.

Meanwhile, at $330, this handsome XFX R9 390 card has the GeForce GTX 970 firmly in its sights. This puppy continues a long tradition of great-looking cards from XFX. Let’s see how it stacks up.

Test notes
You’ll see two different driver revisions listed for most of the cards below. That’s because we’ve carried over many of these results from our Fury X review, but we decided to re-test a couple of games that got recent, performance-impacting updates: Project Cars and The Witcher 3. Both games were tested with newer drivers from AMD and Nvidia that include specific tweaks for those games.

The Radeon driver situation is more complex. The Catalyst 15.15 drivers we used to test the R9 390, 390X, and Fury X wouldn’t install on the R9 290 and 295 X2, so we had to use older 15.4 and 15.6 drivers for those cards. Also, AMD dropped a new driver on us at the eleventh hour, Catalyst 15.7, that is actually a little fresher than the Cat 15.15 drivers used for the 390/X and Fury X. (Cat 15.7’s internal AMD revision number is 15.20.) We weren’t able to re-test all of the Radeons with the new driver, but we did test the R9 Fury with it.

I know some of you folks with OCD are having eye twitches right now, but if you'll look at our results, I think you'll find that the driver differences don't add up to much in the grand scheme, especially since we re-tested the two newest games that have been the subject of recent optimizations.

Also, because some of you expressed a desire to see more testing at lower resolutions, we tested both The Witcher 3 and Project Cars at 2560×1440. Doing so made sense because we had trouble getting smooth, playable performance from all the cards in 4K, anyhow.

Our testing methods
Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
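If you're curious what that filter looks like in practice, here's a minimal sketch (illustrative Python; the function name and exact windowing are my own, not the precise code we run):

```python
def moving_average(frame_times_ms, window=3):
    """Smooth per-frame render times with a trailing moving average.

    This approximates the effect of Direct3D's three-frame submission
    queue on when frames actually reach the display.
    """
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A lone 55-ms spike gets spread across its neighbors rather than
# standing alone, much as the submission queue would absorb it.
print(moving_average([16.7, 16.7, 55.0, 16.7, 16.7]))
```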

We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor          Core i7-5960X
Motherboard        Gigabyte X99-UD5 WiFi
Chipset            Intel X99
Memory size        16GB (4 DIMMs)
Memory type        Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings     15-15-15-36 2T
Chipset drivers    INF update 10.0.20.0
                   Rapid Storage Technology Enterprise 13.1.0.1058
Audio              Integrated X79/ALC898 with Realtek 6.0.1.7246 drivers
Hard drive         Kingston SSDNow 310 960GB SATA
Power supply       Corsair AX850
OS                 Windows 8.1 Pro
                             Driver revision             Base clock   Boost clock   Memory clock   Memory size
                                                         (MHz)        (MHz)         (MHz)          (MB)
Asus Radeon R9 290X          Catalyst 15.4/15.6 betas    --           1050          1350           4096
Radeon R9 295 X2             Catalyst 15.4/15.6 betas    --           1018          1250           8192
XFX Radeon R9 390            Catalyst 15.15 beta         --           1015          1500           8192
Asus Strix R9 390X           Catalyst 15.15 beta         --           1070          1500           8192
Asus Strix R9 Fury           Catalyst 15.7 beta          --           1000          500            4096
Radeon R9 Fury X             Catalyst 15.15              --           1050          500            4096
GeForce GTX 780 Ti           GeForce 352.90/353.30       876          928           1750           3072
Asus Strix GTX 970           GeForce 353.30              1114         1253          1753           4096
Gigabyte GTX 980 G1 Gaming   GeForce 352.90/353.30       1228         1329          1753           4096
GeForce GTX 980 Ti           GeForce 352.90/353.30       1002         1076          1753           6144
GeForce Titan X              GeForce 352.90/353.30       1002         1076          1753           12288

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Project Cars
Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained. In fact, that’s pretty much what I did in order to test these graphics cards.


Frame time (ms)   FPS rate
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

Click the buttons above to cycle through the plots. You’ll see frame times from one of the three test runs we conducted for each card. Notice that PC graphics cards don’t always produce smoothly flowing progressions of succeeding frames of animation, as the term “frames per second” would seem to suggest. Instead, the frame time distribution is a hairy, fickle beast that may vary widely. That’s why we capture rendering times for every frame of animation—so we can better understand the experience offered by each solution.

If you click through the plots above, you'll probably gather that this game generally runs better on GeForce cards than on Radeons, for whatever reason. The plots from the Nvidia cards are generally smoother, with lower frame times and more frames produced overall. However, you'll notice that the Radeons run this game quite well, too, with almost no frame times stretching beyond about 33 milliseconds—so no single frame is slower than 30 FPS, basically.

The FPS averages and our more helpful 99th percentile frame time metric are mostly the inverse of one another here. When these two metrics align like this, that’s generally an indicator that we’re getting smooth, consistent frame times out of the cards. The one exception here is the Radeon R9 295 X2. Dual-GPU cards are a bit weird, performance-wise, and in this case, the 295 X2 simply has a pronounced slowdown in one part of the test run that contributes to its higher frame times at the 99th percentile. I noticed some visual artifacts on the 295 X2 during testing, as well.

We can understand in-game animation fluidity even better by looking at the entire “tail” of the frame time distribution for each card, which illustrates what happens with the most difficult frames.
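Deriving that tail is straightforward: sort every captured frame time and read off the values at the high percentiles. A rough sketch, with hypothetical numbers:

```python
def frame_time_at_percentile(frame_times_ms, pct):
    """Sort a run's frame times and read off the value at a given percentile."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

# Hypothetical run: 95% of frames at 16.7 ms, 5% spiking to 40 ms.
run = [16.7] * 95 + [40.0] * 5
print(frame_time_at_percentile(run, 50))   # 16.7 -- the median looks fine
print(frame_time_at_percentile(run, 99))   # 40.0 -- the tail exposes the spikes
```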


The Radeons’ curves aren’t quite as low and flat as the GeForces’, but they’re largely excellent anyhow, with a peak under 30 milliseconds.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
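In code terms, the accounting looks something like this (a sketch with hypothetical frame times; only the time past each cutoff is counted):

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Accumulate only the portion of each frame's render time past the
    threshold, so a single 60-ms frame counts 10 ms against the 50-ms mark."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

run = [16.7, 16.7, 60.0, 16.7, 35.0]       # hypothetical frame times
for cutoff in (50.0, 33.3, 16.7):          # the 20-, 30-, and 60-FPS marks
    print(cutoff, round(time_beyond(run, cutoff), 1))
```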

In this case, generally good performance means all of the cards ace the first two thresholds, without a single frame beyond the mark. The Fury spends a little more time above 16.7 ms—or 60 FPS—than the Fury X, so it’s not quite as glassy smooth, but it’s close.

Again, this game just runs better on the GeForce cards, but not every game is that way. Let’s move on.

GTA V
Forgive me for the massive number of screenshots below, but GTA V has a ton of image quality settings. I more or less cranked them all up in order to stress these high-end video cards. Truth be told, most or all of these cards can run GTA V quite fluidly at lower settings in 4K—and it still looks quite nice. You don’t need a $500+ graphics card to get solid performance from this game in 4K, not unless you push all the quality sliders to the right.


GTA V is a more even playing field between the GeForce and Radeon camps, and it’s another game where we see very nice behavior in terms of smooth, consistently quick frame delivery. The R9 Fury mixes it up with the GeForce GTX 980 here, tying in the FPS average and trailing by less than two milliseconds at the 99th percentile. The R9 390X isn’t far behind.

The most intriguing match-up here may be the Radeon R9 390 versus the GeForce GTX 970. At roughly the same price, the R9 390 has an edge in both of our general performance metrics.



Since all of the cards perform well, there’s not much drama in our “badness” metric. Very little time is spent beyond the 33-ms threshold, and none beyond 50 ms. At 16.7 ms, the R9 Fury almost exactly matches the GTX 980.

The Witcher 3
We tested with version 1.06 of The Witcher 3 this time around, at 2560×1440, and we switched to SSAO instead of HBAO+ for ambient occlusion.

These frame time plots have quite a few more spikes in them than we saw in the two previous games. On the Radeons, some of those spikes stretch to 50 or 60 ms, and those slowdowns are easy to notice while playing. The Radeons aren’t alone in this regard, though. The Kepler-based GeForce GTX 780 Ti also suffers later in the test run, in the portion where we enter the woods and encounter lots of dense vegetation.


The Fury trails the GTX 980 by just a few frames per second in the FPS average, but those slowdowns cost it in terms of the 99th-percentile frame time. There, the Fury drops behind the GTX 970.


The curves for the Maxwell-based GeForces are lower and flatter than those for other GPUs.


Our “time beyond 33 ms” results pretty much tell the story. The Maxwell-based GeForces run this game smoothly, while the Radeons and the GTX 780 Ti struggle.

Far Cry 4


The FPS average and 99th percentile scores don’t agree here. Click through the frame time plots above, and you’ll see why: intermittent, pronounced spikes from the R9 290-series and Fury cards. Conspicuously, though, the R9 390 and 390X don’t have this problem. My crackpot insta-theory is that this may be a case where the 8GB of RAM on the 390/X cards pays off. The rest of the Radeons have 4GB per GPU, and they all suffer occasional slowdowns.

That said, even if this is a memory issue, it could probably be managed in driver software. None of the GeForces have this problem, not even the GTX 780 Ti with 3GB.



At the 50-ms threshold, our “badness” metric picks up the frame time spikes on the Furies and the 290X. You can feel these slowdowns as they happen, interrupting the flow of animation.

Alien: Isolation




Every single metric points to generally good performance from all of the cards here. Without any spikes or slowdowns to worry over, we can simply say that the Fury essentially ties the GTX 980, while the R9 390X trails them both. Meanwhile, the Radeon R9 390 slightly outperforms the GeForce GTX 970.

Civilization: Beyond Earth
Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.




This automated test has a bit of a hitch in it, just as it starts, that’s most pronounced on the GeForce cards. The Mantle-powered Radeons avoid the worst of that problem. Beyond that strange little issue, though, the story is much the same as what we saw on the last page. The Fury matches up closely with the GTX 980, and the R9 390 outperforms the GTX 970.

Battlefield 4
Initially, I tested BF4 on the Radeons using the Mantle API, since it was available. Oddly enough, the Fury X’s performance was kind of lackluster with Mantle, so I tried switching over to Direct3D for that card. Doing so boosted performance from about 32 FPS to 40 FPS. The results below for the Fury X come from D3D, as do the results for the Fury.




Man, here’s another close match-up between the GTX 980 and the R9 Fury. The surprise this time around: the GTX 970 outperforms the R9 390 for once.

Crysis 3


Notice those big, 100-ms-plus spikes in the frame times for the R9 390/X and Fury cards. They’re big, and they’re hard to ignore. Let’s see what they do to our metrics.



Although the Radeon cards’ FPS averages look good, they suffer in our other metrics as a result of those pronounced slowdowns. The hiccups happen at a specific point in the test run, when I blow up a bunch of bad guys at once. Many of the Radeon cards suffer at that point, and the effect is not subtle while playing. The GeForces deliver fluid animation in this same spot, which makes for quite a contrast.

Notice something, though, in the beyond-50-ms results above: The R9 290X and 295 X2 suffer the least. You can see it in the raw frame time plots, too. My best guess about why is that AMD must have introduced some kind of regression in its newer drivers that exacerbates this problem. Perhaps they can fix it going forward.

Power consumption
Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

I’m a little surprised to see the R9 Fury system drawing 50W less than the same test rig equipped with a Fury X. The R9 Fury doesn’t require any more power under load than a GM200-based GTX 980 Ti. Then again, the Fury’s performance is generally closer to the GeForce GTX 980’s, and with a GTX 980 installed, our test system only draws 282W under load.

The story isn’t so good for the R9 390 and 390X. With the Asus Strix 390X, our test system pulls a whopping 449W while running Crysis 3. Yes, the 390X is clearly faster than the 290X before it, but it also requires an additional 82W to make it happen. And these Hawaii-based cards weren’t terribly efficient to start. The gap between the GTX 970 and the R9 390 under load is 121W. Yikes.

Noise levels and GPU temperatures
Our power and noise results are all-new for this review. I’ve managed to lower the noise floor on our GPU test rig by about 6 dBA by swapping in a quiet, PWM-controlled Cooler Master fan on the CPU cooler, so I re-tested everything in the past week. Our new setup lets us detect finer differences than we could before.

One unexpected side effect of our quieter test environment is that our sound level meter now clearly picks up on the pump whine from our Fury X. Interesting. I think the problem is more about the pitch of the noise than its volume, but we can put a dBA number to it now.

Asus’ DirectCU III cooler is incredibly effective at keeping the Strix R9 Fury cool without making much noise. Heck, it’s nearly as quiet as the Fury X’s water cooler. Asus hasn’t tuned the Strix R9 Fury to keep GPU temperatures as unusually low as on the Fury X, but 72°C is still pretty cool as these things go.

That same big DirectCU III cooler is pretty effective aboard the R9 390X, as well. Even when asked to move quite a bit more heat from this power-hungry card, it does the job while producing less noise than the Titan X. XFX’s cooler for the R9 390 is also quietly effective, though it can’t match the Asus Strix GTX 970, far and away the quietest card of the bunch.

Conclusions
As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this layout work. These overall numbers are produced using a geometric mean of the results from all of the games tested. The use of a geomean should limit the impact of outliers on the overall score, but we’ve also excluded Project Cars since those results are dramatically different enough to skew the average.
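For reference, the scoring works roughly like so (a sketch; the per-game numbers below are hypothetical placeholders, not our measured results):

```python
from math import prod   # Python 3.8+

def overall_fps(per_game_fps):
    """Geometric mean across games, limiting the pull of any single outlier."""
    return prod(per_game_fps) ** (1.0 / len(per_game_fps))

# 99th-percentile frame times (ms) convert to FPS as 1000 / t, so higher is better.
frame_times_99th = [25.1, 19.8, 33.0]      # hypothetical per-game results
print(overall_fps([1000.0 / t for t in frame_times_99th]))
```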

The R9 Fury outperforms the GeForce GTX 980 by 1.1 frames per second overall. That’s essentially a tie, folks. Since we’re talking about an FPS average here, this result is more about potential than it is about delivered performance. In that respect, these two cards are evenly matched. Most folks would stop there and draw conclusions, but we can go deeper by using a better metric of smooth animation, the 99th-percentile frame time, as we’ve demonstrated on the preceding pages. When we consider this method of measuring performance, the picture changes:

We saw substantially smoother gaming across our suite of eight test scenarios out of the GeForce GTX 980 than we did from the Radeon R9 Fury. This outcome will be nothing new for those folks who have been paying attention. Nvidia has led in this department for some time now. I continue to believe this gap is probably not etched into the silicon of today’s Radeons. Most likely, AMD could improve its performance with a round of targeted driver software optimizations focused on consistent frame delivery. Will it happen? Tough to say, but hope springs eternal.

The rest of the pieces are there for the R9 Fury to be a reasonably compelling product. The Fiji GPU continues to impress with its power efficiency compared to the Hawaii chip before it. Although the Fury X’s water cooler is nice, it apparently isn’t strictly necessary. Asus has engineered a startlingly effective air cooler for the Strix R9 Fury; it’s quieter than any of the GeForce cards we tested with Nvidia’s reference cooler.

Also, as you may have noticed, the cuts AMD has made to the Fiji GPU aboard the R9 Fury just don’t hurt very much compared to the Fury X. The gap in performance is minimal and, as I’ve mentioned, the Fury still has a 7.2-teraflop shader array and 512 GB/s of memory bandwidth lying in wait. Games don’t yet seem to be taking full advantage of the Fiji GPU’s strengths. Should games begin using Fiji’s prodigious shader power and memory bandwidth to good effect, the Fury seems well positioned to benefit from that shift. Unless you really want that water cooler, I can’t see paying the premium for the Fury X.

So the R9 Fury has its virtues. The difficult reality at present, though, is that this card is based on a bigger GPU with a larger appetite for power than the competing GeForce GTX 980. The Fury has more than double the memory bandwidth via HBM. It costs about 50 bucks more than the 980. Yet the Fury isn’t much faster across our suite of games, in terms of FPS averages, than the GTX 980—and it has lower delivered performance when you look at advanced metrics.

We should say a word about the R9 390 and 390X, as well. The XFX R9 390 we tested is something of a bright spot for AMD in this whole contest. This card’s price currently undercuts that of the Asus Strix GTX 970, yet it offers very similar performance. The value proposition is solid in those terms. The downside is that you’re looking at an additional 120W or so of system power with the R9 390 installed instead of the GTX 970. That translates into more noise on the decibel meter and more heat in the surrounding PC case—and, heck, in the surrounding room, too. Buyers will have to decide how much that difference in power, heat, and noise matters to them. We can say that the XFX card’s cooler is pretty quiet, even if it’s not as wondrous as the Strix GTX 970’s.

The R9 390X, meanwhile, offers somewhat higher performance at the cost of, well, more money—but also substantially higher power consumption. Asus has done heroic work creating a cooler that will keep this thing relatively quiet while gaming, but it’s hard to imagine a smart PC builder deciding to opt for the 390X’s additional 167W to 199W of system power draw compared to a GeForce GTX 970 or 980. Perhaps the right user, who wants maximum performance across a triple-4K display setup, would find the 390X’s 8GB of memory compelling.
