We just tried to predict the performance of Nvidia's rumored RTX 2080 and RTX 2080 Ti using a mix of public information and best guesses, and now we have the first official spec sheet for an RTX 2080 Ti thanks to a misfire from board partner PNY and a good catch from a TR staffer. The company posted a product page for its RTX 2080 Ti Overclocked XLR8 Edition early, revealing several critical specifications for the as-yet-unannounced and unreleased card.
Most critically, PNY lists a 1350-MHz base clock, a 1545-MHz boost clock, and a 285-W TDP for the card. While those clock speeds might sound low, they seem likely to continue the Pascal tradition of conservatism regarding delivered clock speeds from Nvidia products—GPU Boost 3.0 or its successor will likely push Turing cards' clocks much higher in real-world use under good cooling. The 285-W board power, on the other hand, likely reflects a 30-W allowance for the VirtualLink connector coming to Turing cards, meaning the RTX 2080 Ti itself lands close to the GTX 1080 Ti's 250-W figure.
PNY's product page also suggests that NVLink support is coming to Nvidia consumer products with the Turing generation. The page lists support for two-way NVLink, suggesting builders may be able to join a pair of RTX 2080 Tis over that coherent interconnect to create dual-card setups with a single 22-GB pool of VRAM.
Past those tantalizing details, the product page is fairly straightforward. The card will require two eight-pin PCIe connectors and could list for $1000. Presuming our guess at Titan-beating performance holds, it looks like that speed and a lack of a comparable product from AMD will result in Turing cards costing a pretty penny. We'll presumably find out more Monday.
This week at SIGGRAPH, Nvidia introduced its Turing microarchitecture, the next major advance for the green team since Pascal made its debut over two years ago. If you're not already familiar, Turing includes RT cores that accelerate certain operations related to ray tracing, as well as tensor cores that accelerate the AI operations needed to make the results of those ray-tracing calculations usable for real-time rendering, among other benefits we're likely still unaware of.
Based on information that Nvidia revealed at SIGGRAPH, some back-of-the-napkin calculations, and a waterfall of leaks today, I wanted to see how the rumored GeForce RTX 2080 and GeForce RTX 2080 Ti will stack up against today's Pascal products—at least hypothetically.
Before we begin, I want to be clear that this article is mostly speculative and something I'm doing for fun. That speculation is based on my prior knowledge of Nvidia's organization of its graphics processors and the associated resource counts at each level of the chip hierarchy. It's entirely possible that my estimates and guesstimates are wildly off. Until Nvidia reveals an architectural white paper or briefs the press on Turing, we will not know just how correct any of these estimates are, if they are correct at all. I've marked figures I'm unsure of or produced using educated guesses with question marks.
My biggest leap of faith about Turing is that its basic streaming multiprocessor (or SM) design is not fundamentally that different from those in the Volta V100 GPU of June 2017. Nvidia will almost certainly drop the FP64 capabilities of Volta from Turing to save on die area, power, and cost, since those compute-focused ALUs have practically no relevance to real-time rendering. The company needs to make room for those RT cores, among other, better things it might be doing with the die area.
Past that, though, Nvidia has already said that Turing will maintain the independent parallel floating-point and integer execution paths of Volta. Furthermore, the number of tensor cores on the most powerful Turing card revealed so far, combined with some simple GPU math, suggests the Turing SM will maintain the same number of tensor cores as that of Volta. Those signs suggest we can fairly safely speculate about Turing using the organization of the Volta SM. That leap of faith is necessary here because Nvidia hasn't revealed the texturing power of Turing yet. Volta uses four texturing units per SM, so that's the fundamental assumption I'll work with for Turing, as well.
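The GPU math in question is simple enough to sketch. If the biggest Turing part really does have 72 SMs (my guess, not a confirmed figure), its published chip-level core counts divide out to the same per-SM provisions as Volta:

```python
# Divide published chip-level core counts by SM counts to compare per-SM
# resources. The 72-SM figure for the big Turing chip is a guess; Volta V100's
# 80-SM organization is public.
chips = {
    "Volta V100": {"sms": 80, "cuda": 5120, "tensor": 640},
    "Big Turing (guessed SMs)": {"sms": 72, "cuda": 4608, "tensor": 576},
}

for name, c in chips.items():
    print(f"{name}: {c['cuda'] // c['sms']} CUDA cores/SM, "
          f"{c['tensor'] // c['sms']} tensor cores/SM")
# Both chips work out to 64 CUDA cores and 8 tensor cores per SM.
```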
I also believe, without confirmation, that Nvidia will be releasing two Turing GPUs. One, which I'll call "bigger Turing," should power the Quadro RTX 6000 and Quadro RTX 8000, as well as the purported RTX 2080 Ti. That 754-mm² chip has a 384-bit memory bus and as many as 4608 CUDA cores, and I'm guessing it's organized into 72 SMs and six Graphics Processing Clusters (or GPCs).
The "smaller Turing" apparently has a 256-bit memory bus, and it likely powers the Quadro RTX 5000 and the purported RTX 2080. That card likely has 48 SMs, organized into four GPCs. Judging by today's leaks, Nvidia seems to be using slightly cut-down chips in GeForce RTX products (likely as a result of yields). Fully-active Turing chips seem to be reserved for Quadro RTX cards.
| | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock (int8/FP16) | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Radeon RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| Radeon RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080? | ~1800? | 64? | 184/184? | 2944? | 256 | 448 GB/s | 8 GB? |
| GTX 1080 Ti | 1582 | 88 | 224/224? | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti? | ~1740? | 88? | 272/272? | 4352? | 352? | 616 GB/s? | 11 GB? |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Quadro RTX 6000 | ~1740? | 96? | 288/288? | 4608 | 384 | 672 GB/s | 24 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |
This first chart primarily shows how the move from 8 GT/s GDDR5, 10 GT/s GDDR5X, and 11 GT/s GDDR5X to 14 GT/s GDDR6 will affect our contenders, as well as their basic (estimated) resource counts. We know that Nvidia claims a 16 TFLOPS FP32 math rate for the Quadro RTX 6000's GPU, so that means a roughly 1740-MHz boost clock range. The potential RTX 2080's clock speed, on the other hand, is a total guess from the gut.
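For the curious, that boost-clock estimate falls out of the usual peak-FLOPS relationship: each shader retires one fused multiply-add (two FLOPs) per clock, so the claimed rate can be inverted to recover the clock.

```python
# Invert peak FP32 rate = CUDA cores × 2 FLOPs/clock × clock to recover the
# boost clock implied by Nvidia's 16-TFLOPS claim for the full 4608-core chip.
fp32_flops = 16e12
cuda_cores = 4608
boost_mhz = fp32_flops / (cuda_cores * 2) / 1e6
print(f"Implied boost clock: {boost_mhz:.0f} MHz")  # about 1736 MHz
```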
| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/FP16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (TFLOPS) |
| --- | --- | --- | --- | --- |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti? | 153? | 473/473? | 10.4? | 15.1? |
| Quadro RTX 6000 | 167? | 501/501? | 10.4? | 16.0? |
This second set of theoretical measurements shows that unlike the transition from the GTX 980 Ti to the GTX 1080, the RTX 2080 is unlikely to eclipse the GTX 1080 Ti in measures of traditional rasterization performance (likely how most users will first experience the card's power, as software adapts to the hybrid-rendering future that Turing promises). The 2080 could certainly come close to the 1080 Ti in texturing power and shader-math rates, but its pixel fill rate and peak rasterization rates aren't much changed from its Pascal predecessor (at least, if my guesses are right).
My guesstimates about the RTX 2080 Ti, on the other hand, suggest a real leap in performance for a Ti-class "bigger" GeForce. The texturing power of the purported 2080 Ti is quite a bit higher than even that of the Titan V's by my estimate, and its triangle throughput, peak pixel fill rate, and peak FLOPS are basically chart-topping for consumer Nvidia graphics processors. That should lead to some truly impressive performance figures, even before we consider the possibilities opened up by the card's ray-tracing acceleration hardware.
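As a sanity check, every rate in that second chart falls out of the resource counts and clocks in the first table. Taking my RTX 2080 Ti estimates (88 ROPs, 272 texture units, 4352 shaders, six GPCs rasterizing one triangle per clock each, and a ~1740-MHz boost clock, all guesses):

```python
# Theoretical graphics rates from the guessed RTX 2080 Ti resource counts.
clock_ghz = 1.74          # guessed boost clock
rops = 88                 # pixels blended per clock
texture_units = 272       # texels filtered per clock
shaders = 4352            # CUDA cores
gpcs = 6                  # triangles rasterized per clock (one per GPC)

pixel_fill = rops * clock_ghz                 # Gpixels/s
texel_rate = texture_units * clock_ghz        # Gtexels/s
raster_rate = gpcs * clock_ghz                # Gtris/s
fp32_tflops = shaders * 2 * clock_ghz / 1000  # two FLOPs (one FMA) per clock

print(f"{pixel_fill:.0f} Gpixels/s, {texel_rate:.0f} Gtexels/s, "
      f"{raster_rate:.1f} Gtris/s, {fp32_tflops:.1f} TFLOPS")
# → 153 Gpixels/s, 473 Gtexels/s, 10.4 Gtris/s, 15.1 TFLOPS
```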
Nvidia will be holding a Gamescom event this Monday, August 20, where we expect to learn all about these purported Turing GeForces. We won't be on the ground in Cologne, Germany for the event, but we will be monitoring the live stream and will bring you all the details we can as we learn them. Stay tuned.

ARM reveals client CPU ambitions with roadmap through 2020
ARM produces the basic CPU designs that power practically every smartphone and non-x86 tablet in the world. Now that the CPU IP licensing firm has tasted higher-power-envelope blood thanks to always-connected PCs from partnerships between Qualcomm, Microsoft, Asus, and HP, it wants to expand its ambitions in mobile computing to the 15-W performance class occupied by Intel and AMD U-series processors.
ARM's first step on the road to competing in these devices is the Cortex-A76 core, announced earlier this year. The Cortex-A76 promises a 35% generation-on-generation performance improvement relative to the Cortex-A75 before it, as well as a 40% power-efficiency improvement relative to that design. ARM isn't stopping with the A76, however. The company has released a CPU technology roadmap through 2020 that outlines its ambitions for client PCs.
The next high-performance ARM core for client PCs, codenamed "Deimos," will be made available to ARM's licensees in 2018. While the company didn't share much detail about this core, it's designed for foundries' 7-nm-class process technologies, it will be compatible with ARM's DynamIQ clustering technology and interconnect fabric, and it promises a 15% increase in "compute performance" over today's Cortex-A76.
The follow-on to Deimos is called Hercules, and ARM says its licensees will have access to that core IP in 2020. This core will be designed for fabrication on both foundry 7-nm and 5-nm process nodes. ARM claims the Hercules design will improve compute performance by some amount in addition to projected power reductions and area reductions of 10% over what's possible from the move to 5-nm-class processes alone.
To emphasize its readiness to jump into the client-computing market, ARM also released a tantalizing chart that suggests its upcoming Cortex-A76 core running at 3 GHz might deliver per-core SPECint 2006 performance similar to Intel's Core i5-7300U while consuming much less power. We weren't privy to the briefing where these slides were presented, but Anandtech's Andrei Frumusanu dug into some of the finer points of the presentation, and his information suggests it's worth taking some of these numbers with a grain of salt or two.
Frumusanu says ARM's less-than-5-W figure represents actual single-core power consumption under that single-threaded SPECint 2006 Speed workload, while it seems ARM simply took the bottom-line TDP from Intel's specifications for the Core i5-7300U rather than providing actual power-consumption figures—even internal ones—for the Intel system running the same workload. Intel defines TDP as the worst-case power consumption of the chip under a worst-case workload, not a single-threaded power-consumption figure as ARM seems to be comparing here. That alone should probably give us pause.
It's also worth noting that despite ARM's chest-thumping about double-digit performance gains from generation to generation, actual performance of the first PC-class products from its partners suggests there's plenty of room for improvement yet. Always-connected PCs from HP and Asus with Qualcomm Snapdragon 835 SoCs inside have been panned by reviewers who have tried them in the real world thanks to leisurely performance. The Snapdragon 835 uses older ARM A73-based Kryo 280 custom CPU cores in its high-performance arsenal, to be fair, and it's entirely possible that new cores powered by designs based on the Cortex-A76 could offer better performance in those form factors.
Even so, the point remains that Intel remains a large and slow-moving target for CPU IP developers looking to butt in on its dominance in markets from servers to notebooks. That's thanks to the fact that the blue team is still facing immense pressure to get its 10-nm process up to speed and to release next-generation architectures of its own on that process. Intel might be able to stave off some of this competition with continued improvement of the 14-nm process technology that underpins every one of its leading-edge products, but that doesn't change the fact that the Skylake core being implemented on refinements of 14-nm is a 2015-vintage product.
If Intel's 14-nm Whiskey Lake product family delivers the major boost in peak clock speeds that early leaks suggest, even ARM's projected 3.3-GHz peak speeds for A76 cores might not be enough to catch a Core i5 in the bursty, single-threaded workloads that characterize the vast majority of mobile PC usage. Still, ARM's roadmap, ambitious performance targets, and broad partner ecosystem suggest the clock is ticking if Intel wants to maintain performance leadership in the always-connected 5G PC platform of the future.

Nvidia marks the death of crypto demand in Q2 of its fiscal 2019
Nvidia reported its results for the second quarter of its fiscal 2019 today. The company pulled in $3.12 billion in revenue, up 40% year-on-year, and operating income of $1.16 billion, up 68% year-on-year. Gross margin was 63.3%, up 4.9 percentage points on the year. The company reported record revenue across all of its divisions.
The GPU business made up the vast majority of Nvidia's revenue at $2.66 billion. The company said strong performance in its gaming, professional visualization, and data-center products made up for a "substantial decline" in cryptocurrency sales. Gaming revenue was up 52% from this time last year at $1.8 billion thanks to strong sales of Pascal cards for desktops and Max-Q notebooks. Professional visualization products brought in $281 million, 20% better than a year ago, and data center revenue reached $760 million, up 83% from a year ago. Those data center results came thanks to sales of Volta products like the Tesla V100 and the DGX systems containing them, according to the company.
The company's OEM and IP bucket leaked 54% of the revenue it posted this time last year, down to $116 million, thanks to declines in demand from cryptocurrency miners for the green team's GPUs. The sequential drop of 70% in this line item underscores just how much crypto demand has faltered of late. Nvidia notes that it had predicted crypto-specific demand would be $100 million for this quarter, while actual revenues were $18 million. Furthermore, the company expects no meaningful contributions from cryptocurrency products to revenue for the remainder of its fiscal 2019. If there's a surer sign that enthusiasm for new mining power is dead, I'm not sure we'll find it.
The company's Tegra business, on the other hand, brought in $467 million, up 40% from a year ago. Tegra chips find their way into Nvidia's automotive products, embedded platforms, and most importantly, the Nintendo Switch. Tegra products for cars brought in $161 million, up 13% from a year ago, including infotainment systems, Drive PX boards, and software-development partnerships with automakers.
For its next quarter, Nvidia expects $3.25 billion in revenue, plus or minus two percent. That figure includes no income from crypto demand. Going by Nvidia's third-quarter fiscal 2018 results, that figure would represent a 23.3% year-on-year increase, suggesting the company's meteoric rise of late might be tapering off a bit. GAAP gross margin is projected at 62.6%, which would reflect a 3.1-percentage-point increase. With the release of new GeForce products imminent, we'll have to see just how much pent-up demand for next-generation gaming products the company is able to unleash as cryptocurrency demand finally seems to be dying off.

MSI WS65 mobile workstation gets dressed up for the office
MSI's workstation notebooks are getting dressed up with a toned-down new design, and the WS65 launching at SIGGRAPH this week is the first of the breed. This system is a thin 15.6" design that still squeezes plenty of power inside. MSI will let professionals get WS65s with CPUs as powerful as Intel's Core i9-8950HK and graphics processors ranging up to Nvidia's Quadro P4200.
While specs are still a little thin at the moment, we do know that the WS65 has a 1920x1080 display with 72% coverage of the NTSC gamut (or about 100% of sRGB). It sounds as though the system has two M.2 slots, one for SATA and NVMe devices and the other for NVMe devices only. The WS65 has three "USB 3.1" Type-A ports, one USB Type-C port, one HDMI 2.0 output, one mini-DisplayPort 1.4 connector, and headphone and microphone inputs. We'll learn more about the WS65 when it launches in September.

Lian Li Lancool One chassis blends the best of past and present
Lian Li made its name with massive, featureless aluminum monoliths, but nobody can ignore the RGB LED craze. Enter the Lancool One. This case's front panel takes Lian Li's signature brushed-aluminum stylings and blends them with an RGB LED accent that cleverly doubles as an ambient light source for the interior of the chassis.
That RGB LED accent shines through a cut-out on the semi-open front panel. Vents around the edges of the panel allow the included 120-mm front fan to breathe. Another 120-mm fan comes installed on the Lancool One's rear fan mount. The case has ample room for extra fans, as well. The front panel can accept two more 120-mm spinners or 140-mm air movers. The top panel can take another three 120-mm fans or two 140-mm units. Two more 120-mm fan mounts on the Lancool One's convertible PSU shroud can move air between the chambers, too.
As for cooling hardware, the Lancool One can swallow radiators as large as 280 mm or 360 mm on its front panel, another 360-mm radiator on its top panel (but no 280-mm units), and another 120-mm radiator at its rear. Tower-style air coolers as tall as 6.9" (175 mm) and graphics cards as long as 16.5" (420 mm) will find a home in the Lancool One, as well. As a mid-tower case, the Lancool One offers seven primary expansion slots and two more vertical slots for builders who want to tip their graphics cards on their sides.
For storage, the Lancool One has two dedicated 2.5" trays on the back of its motherboard, another two 2.5" mounts on top of the PSU shroud, and two 3.5" cages underneath its PSU shroud. The top, front, and bottom air intakes of the Lancool One all come with magnetic dust filters to keep builds clean, something builders will appreciate thanks to the case's tempered-glass left side panel.
Lian Li makes the Lancool One available in two versions. The standard model has traditional RGB LED lighting and no USB Type-C connector on its top panel, while the Lancool One Digital offers a USB 3.1 Gen 2 connector and fully-addressable RGB LED accents. The standard Lancool One rings in at a reasonable $89.99 on Newegg, while the Lancool One Digital commands an extra $10. Both cases are available now on Newegg.

Asus ROG Strix Scar II squeezes a 17.3" screen into a 15" chassis
13", 14", and 15.6" notebooks rule the mobile roost these days, but 17" notebooks endure for folks who care about performance above all. Asus is trying to make life easier for folks who want to lug around such large machines with its ROG Strix Scar II. This machine is built around a 17.3" display, but Asus has slimmed down three of the four screen bezels to the point that it claims the Scar II is no larger than a notebook with a 15.7" chassis.
Like the Zephyrus S announced today, the Scar II's top-end panel is a 144-Hz, 1920x1080 affair with a claimed 3-ms response time—exceptional for a notebook screen. The Scar II also boasts 100% coverage of the sRGB color space. A 60-Hz 1920x1080 model will also be available. Buyers can choose a Core i7-8750H CPU with six cores and 12 threads or a Core i5-8300H with four cores and eight threads. No matter what CPU you choose, the Scar II delivers pixels to its panel with a GeForce GTX 1060 6 GB graphics chip. The machine supports DDR4-2666 RAM in pools as large as 32 GB.
The Scar II's inch-thick body has room for NVMe SSDs ranging from 128 GB to 512 GB and 1-TB hard drives in plain-old-5400-RPM, 7200-RPM, or 5400-RPM-SSHD flavors. Owners can hook up peripherals using USB 3.1 Gen 2 Type-C and Type-A ports, three USB 3.1 Gen 1 Type-A ports, a single Mini DisplayPort 1.2 connector, an HDMI 2.0 out, and an SD card reader. The Scar II has a Gigabit Ethernet jack and 802.11ac Wave 2 Wi-Fi with a 2x2 MIMO antenna.
As with the Zephyrus S, the Scar II has four-zone RGB LED lighting on its keyboard, a dual-zone RGB LED light bar on its front edge, and another blinkenlight zone behind the ROG logo on its lid. Asus will announce prices when the machine launches in September.

Cooler Master goes long with its ML360R RGB liquid cooler
Cooler Master has put its stamp on several liquid coolers over the years, but it's never made one with a 360-mm radiator until now. The MasterLiquid ML360R RGB stretches out with an extra-long radiator that gets its airflow from three PWM fans with speed ranges of 650–2000 RPM. An array of 12 RGB LEDs encircles the top of the pump head, and each of the ML360R RGB's fans has a further eight addressable lights in its hub.
Cooler Master gives builders with lights in their eyes plenty of options for controlling the ML360R's blinkenlights. Those interested in software tweaking can work their magic through Asus Aura Sync, Gigabyte RGB Fusion, or ASRock Polychrome RGB Sync. Folks whose motherboards don't support addressable RGB LEDs can rely on Cooler Master's included RGB LED lighting controller instead. Cooler Master's MasterPlus+ software is still in beta, but it'll also provide control options for the ML360R if you want to be a guinea pig.
To keep the ML360R's coolant where it belongs, Cooler Master uses sleeved FEP tubing that looks good on top of being functional. The ML360R RGB is compatible with all recent Intel mainstream and high-end desktop sockets, as well as AMD's mainstream mounting systems through Socket AM4. Builders who want to go long can find the ML360R RGB on Newegg today for $159.99.

Asus cuts down the ROG Zephyrus S and makes it stronger than ever
Asus' ROG Zephyrus notebook turned heads last year by putting gamer-grade hardware in an ultrabook-like chassis. The concept apparently wowed enough people that Asus' designers went back and did it again. The ROG Zephyrus S keeps the keyboard-forward design and fold-out cooling system of the original Zephyrus, but it slims down the chassis even more. The Zephyrus S is just 0.62" (15.75 mm) thick at its thickest point, making it 12% thinner than even the original. It also slims down its display bezels for a clean appearance.
Despite the paring-down, the Zephyrus S doesn't sacrifice on power. The notebook relies on the one-two punch of a Core i7-8750H CPU and a GeForce GTX 1070 Max-Q graphics chip to drive a 15.6", 144-Hz 1920x1080 display with an exceptional claimed 3-ms response time. Asus says the panel covers 100% of the sRGB gamut, so gamers who need color-critical chops in their day jobs might find the Zephyrus S a capable companion. Asus will also offer a version with a full-fat mobile GeForce GTX 1060 6 GB. Buyers can also configure NVMe storage devices ranging from 256 GB to 1 TB in size, and the machine can swallow up to 24 GB of DDR4-2666 memory.
The Zephyrus S offers USB 3.1 Gen 2 ports in Type-A and Type-C flavors, one USB 3.1 Gen 1 Type-C port, and two USB 2.0 ports. Gamers can connect the Zephyrus S to external displays using an HDMI 2.0 port. As a gaming product, it's no surprise that the Zephyrus S offers four-zone RGB LED lighting on its keyboard. Another RGB LED zone shines through the "Active Aerodynamic System" vent at the back of the notebook. Asus didn't announce pricing today, but the notebook will be available next month.

Intel teases discrete graphics card on new Twitter account
Intel's plans to release a discrete graphics product in 2020 are well-known, but just what that product will look like is not at all known. We may have a very slightly better idea today thanks to the newly-inaugurated Intel Graphics Twitter account. The company tweeted a teaser video reminding PC users that its graphics products power a huge swath of screens on the planet, and it closes with the reminder that "in 2020, we will set our graphics free."
The teaser video shows us what appears to be a single-slot card of some kind, though the largely featureless and likely-rendered image doesn't offer much more to go on than that. Still, Intel says of its 2020 plan: "that's just the beginning." For now, the next year and four months (or more) can't pass quickly enough.

Samsung Exynos 5100 5G modem is the one chip to rule them all
Samsung is getting ready for 5G handsets with an all-in-one modem that can handle cellular standards of the past and future alike. The Exynos Modem 5100 claims full compliance with the 3GPP 5G NR Release 15 standards, and Samsung claims it's the first 5G modem in the industry to achieve that compliance. To prove its mettle, Samsung used the Exynos Modem 5100 to successfully place a 5G NR data call with its own base station and handset prototype.
The Exynos Modem 5100 has support for both the sub-6-GHz and mmWave spectrums that form the two pillars of 5G connectivity, and it can also transmit on 2G GSM and CDMA networks, 3G WCDMA, TD-SCDMA, HSPA, and 4G LTE networks. That broad compatibility is important since 5G-NR will have a non-standalone deployment phase requiring the use of existing cellular infrastructure ahead of the 5G NR standalone deployment phase for true next-generation cellular networks.
Samsung claims the Exynos Modem 5100 is good for maximum downlink speeds of 2 Gbps on the sub-6-GHz bands of 5G and up to 6 Gbps in mmWave environments. For 4G LTE networks, the modem can suck down data at rates of up to 1.6 Gbps. Along with the modem itself, Samsung has a complementary family of radio-frequency IC, envelope tracking, and power management parts for use in 5G devices. Samsung says the Exynos Modem 5100 will be available to interested customers by the end of this year.

Nvidia teases potential GeForce RTX 2080 launch at Gamescom
Hot on the heels of CEO Jensen Huang's Quadro RTX announcement last night, Nvidia released a teaser video for its upcoming festivities at the Gamescom convention in Cologne, Germany. That video contains a number of winks and nods to the fact that next-generation GeForce cards are coming, as spotted by PCWorld's Brad Chacos on Twitter.
Along with gauzy shots of what looks like a finned, open-style graphics cooler and a stylized backplate that looks like it could have come off any recent Founders Edition card, the video depicts gamers building and gearing up with Nvidia hardware.
The first tip-offs that new hardware might be revealed at Gamescom come from shots of our GeForce gamers chatting. One of the chatters' handles is "RoyTeX," whose capitalization almost certainly alludes to the RTX branding introduced last night on the Quadro RTX series of graphics cards.
RoyTeX also chats with fellow gamer "Not_11."
Another group of chatters includes "Mac-20" and "Eight Tee." If Nvidia was laying it on any thicker here, it could star in a Gamers Nexus thermal-paste-application-testing article.
Even the soundtrack to this teaser is suggestive: it's a version of Sam and Dave's "Hold On, I'm Comin'", which would seem as clear a sign as anything that something is indeed going to be revealed or launched at the show.
The final tip-off comes at the end of the video, where the numbers 2-0-8-0 scroll up in order to form the date of Nvidia's announcement: August 20. We'll be watching for more details as they arise.

Nvidia announces three Quadro RTX cards powered by Turing GPUs
As part of the proceedings at Nvidia's SIGGRAPH keynote this evening, the company took the wraps off the first graphics cards powered by its Turing architecture for hybrid rendering with real-time ray tracing—certainly the biggest change in the production of computer graphics since the introduction of unified shaders with the GeForce 8800 GTX in 2006.
According to David Kanter, who is on the ground at SIGGRAPH, Turing includes a new functional unit called an RT core that accelerates ray-tracing-related functions like traversing bounding volume hierarchies and handling triangle intersection. It also includes a new version of Nvidia's tensor cores to perform deep learning training and inferencing operations critical to AI denoising of ray-traced scenes.
The third pillar of Turing is its traditional shader array, assembled using groups of new Turing streaming multiprocessors (SMs). Turing chips will have as many as 4608 CUDA cores capable of performing as many as 16 TFLOPS of FP32 calculations in parallel with 16 TIPS (trillion integer operations per second). Turing parts can also operate on reduced-precision data types at rates of 125 TFLOPS for FP16, 250 TOPS for INT8, and 500 TOPS for INT4.
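Those reduced-precision figures are consistent with Volta-style tensor cores. Assuming each Turing tensor core matches Volta's 64 FMAs (128 FLOPs) per clock—an assumption, since Nvidia hasn't detailed the unit—576 of them land right around the quoted FP16 number:

```python
# FP16 tensor throughput under the assumption of Volta-style tensor cores.
tensor_cores = 576
flops_per_core_clock = 64 * 2   # 64 FMAs per clock, two FLOPs each (assumed)
clock_ghz = 1.70                # plausible sustained clock; also an assumption
tflops = tensor_cores * flops_per_core_clock * clock_ghz / 1000
print(f"~{tflops:.0f} TFLOPS FP16")  # ~125 TFLOPS
```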
Nvidia CEO Jensen Huang revealed that the largest Turing GPU so far will be 754 mm² in area, smaller than the 815 mm² Volta V100 but still a massive GPU by any stretch of the imagination. Turing chips will be paired with Samsung 16-Gb density GDDR6 memory modules.
| | Memory | Ray tracing | CUDA cores | Tensor cores |
| --- | --- | --- | --- | --- |
| Quadro RTX 8000 | 48 GB | 10 GRays/s | 4608 | 576 |
| Quadro RTX 6000 | 24 GB | 10 GRays/s | 4608 | 576 |
| Quadro RTX 5000 | 16 GB | 6 GRays/s | 3072 | 384 |
Nvidia is announcing three Quadro RTX cards this evening. The Quadro RTX 8000 will use the largest Turing GPU with 4608 CUDA cores and 576 tensor cores. It'll have a 48-GB pool of memory, and two of these cards can be paired up through NVLink to achieve a 96-GB pool of memory. Each RTX 8000 can perform 10 GRays/s of ray-tracing processing. The RTX 6000 cuts the size of the RTX 8000's memory pool in half but maintains the same provisions of CUDA cores and tensor cores as the RTX 8000. RTX 6000s can also be paired using NVLink.
The RTX 5000 most likely takes advantage of a smaller Turing GPU to do its thing. The card has 3072 CUDA cores and 384 tensor cores, and its GPU can perform 6 GRays/s of ray-tracing operations. This card has 16 GB of memory on board and can be paired up using NVLink.
Quadro RTX cards will support USB Type-C video output and the VirtualLink standard for delivering power and pixels to next-generation VR headsets over a single cable.
Nvidia estimates that the Quadro RTX 8000 with 48 GB of memory on board will have a $10,000 street price. The RTX 6000 with 24 GB of memory will run $6,300, while the Quadro RTX 5000 with 16 GB of memory will carry a $2,300 suggested price tag. Nvidia projects that the cards will become available in the fourth quarter of this year. We can't wait to find out more.

Nvidia unveils raytracing-focused Quadro RTX GPU family
At Nvidia's SIGGRAPH event this evening, CEO Jensen Huang introduced "the world's first ray-tracing GPU," the Quadro RTX. This family of graphics processors offers up to "10 gigarays per second," 16 TFLOPS of single-precision performance, and "500 trillion tensor ops per second," all courtesy of the Turing microarchitecture.
The first Turing GPU is a 754 mm² monster that comprises a traditional shader array, Nvidia tensor cores, and a new processing resource called the "RT core."
Huang said the chip can address as much as 48 GB of frame buffer, and two Quadro RTX cards with this chip can combine resources to create a 96-GB pool of coherent memory using NVLink. The Quadro RTX product Huang was showing appears to have a 384-bit interface to GDDR6 memory running at 14 Gbps for a total of 672 GB/s of memory bandwidth.
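That bandwidth figure is simply the interface width times the per-pin data rate:

```python
# GDDR6 bandwidth: bus width in bytes × per-pin data rate.
bus_bits = 384
gbps_per_pin = 14
bandwidth_gbs = bus_bits / 8 * gbps_per_pin
print(f"{bandwidth_gbs:.0f} GB/s")  # 672 GB/s
```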
Developing...

Enermax spices up Liqtech TR4 II coolers with addressable RGB LEDs
Enermax's Liqtech TR4 closed-loop liquid cooler is one of a handful of Threadripper-specific heatsink options on the market. Thanks to its full-coverage cold plate, the Liqtech TR4 has the potential to cool AMD's high-end desktop parts better than the average liquid cooler. Now, Enermax is spiffing up the Liqtech TR4 with addressable RGB LED lighting encircling its pump head. That update means a new name for the series: Liqtech TR4 II.
The addressable RGB LEDs on the Liqtech TR4 II can be controlled from a digital RGB LED header on compatible motherboards or with Enermax's included RGB LED controller. The company says its blinkenlights brain can cycle through 10 pre-set effects, as well as a range of colors, brightness levels, and speeds. Inside the pump head, Enermax touts the Liqtech TR4 II's "shunt channel technology" (notches in the cold plate's microfin array that prevent hot spots from forming), as well as its claimed flow rate of 450 liters per hour.
The Liqtech TR4 II will come in 240-mm, 280-mm, and 360-mm flavors, all of which come with PWM-controlled fans promising a 500 RPM–1500 RPM speed range and static pressure as high as 6.28 mm of H2O (in the case of the 120-mm units) and 4.8 mm of H2O (for the 140-mm spinners). The 360-mm version is already on Newegg for $159.99, so folks who need a capable cooler to go with their brand-new Threadripper 2990WXes may want to give it a look.

Radeon Pro WX 8200 promises pros lots of bang-for-the-buck
The SIGGRAPH conference kicks off today in Vancouver, and AMD is marking the occasion by launching a fresh pro graphics card. The Radeon Pro WX 8200 houses a Vega 10 GPU with 56 compute units enabled, fed by 8 GB of second-generation SK Hynix HBM2 RAM, and it'll feed displays using four Mini DisplayPort outputs.
That refined memory gives the WX 8200 512 GB/s of memory bandwidth, up from 410 GB/s on the consumer RX Vega 56. Working back from AMD's specs, that HBM2 RAM is likely running at an effective rate of 2 Gbps per pin. AMD's specified theoretical FP32 rate of 11 TFLOPS suggests the WX 8200 has a boost clock of around 1530 MHz. Both of those figures are superior to the RX Vega 56's.
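Those derived figures follow from simple arithmetic. Here's our working, assuming the usual Vega shader count (56 CUs of 64 shaders each, or 3584 stream processors), two FP32 operations per shader per clock (one fused multiply-add), and Vega 10's 2048-bit HBM2 interface from two stacks:

```python
# Working the WX 8200's likely boost clock back from AMD's 11-TFLOPS
# FP32 figure. Assumes 56 CUs x 64 shaders and one FMA (2 FP32 ops)
# per shader per clock, as on other Vega 10 parts.
stream_processors = 56 * 64
fp32_tflops = 11.0
boost_clock_mhz = fp32_tflops * 1e12 / (2 * stream_processors) / 1e6
print(round(boost_clock_mhz))  # ~1535 MHz, in line with the ~1530-MHz estimate

# HBM2 bandwidth: two stacks give a 2048-bit bus, so 2 Gbps per pin
# yields the quoted 512 GB/s.
bandwidth_gbs = 2048 * 2 / 8
print(bandwidth_gbs)  # 512.0 GB/s
```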
AMD is aggressively pitting the WX 8200 against Nvidia's roughly-$1800-at-retail Quadro P5000 in a range of workloads. The company generally promises a decent lead in performance over the P5000 for renderers like Radeon ProRender and Blender Cycles, as well as pro apps like Nuke, Premiere Pro, and Maya.
AMD believes that the future of professional graphics work lies in two new frontiers of performance that traditional metrics like those above don't capture, however: VR and simultaneous rendering and multitasking.
The company's DirectX 12-powered VRMark Cyan Room synthetic VR test result is self-explanatory. As with the other internal tests AMD presented, the company believes the WX 8200 leads the Quadro P5000 for professional use. For pros who are previewing their designs or models in VR, that performance could be an important mark in AMD's favor.
AMD also made the point that pros don't need to worry about manually tuning their systems for the best performance with the WX 8200 under blended workloads when multitasking, as one might with the Quadro P5000 and its dedicated toggles in Nvidia's control panel for managing graphics and compute workloads or focusing entirely on graphics performance.
To drive this point home, AMD mixed graphics and computing workloads all at once by running the SPECviewperf 13 benchmark on top of a Blender Cycles render task. The company claims that the SPECviewperf suite of tests generally ran at rates professionals would find acceptable for interactive use, while the Quadro P5000 failed to deliver acceptable performance in even one of the SPECviewperf tests under the same mixed workload.
The biggest selling point for the WX 8200 may be its price tag: $999, or slightly more than the retail price of the Quadro P4000 for that potentially Quadro P5000-meeting or -beating performance. The card will go up for pre-order at Newegg tomorrow and should hit e-tail shelves in early September.
AMD will be showing off the WX 8200 at SIGGRAPH alongside an improved version of Radeon ProRender that can perform hybrid rendering with ray-tracing and rasterization, as well as a heterogeneous rendering approach that can put CPU power to work on difficult areas of a rendered scene when it's needed—perhaps an ideal task for the Threadripper 2990WX that's coming to retail tomorrow, as well. Interested pros in Vancouver should check out AMD's booth for more information.

Cryorig doubles the fun with its M9 Plus and H7 Plus coolers
Cryorig has a full line of CPU coolers ready to transport heat away from the chips beneath, and its H7 Plus and M9 Plus promise to be even more capable at that job than the existing H7 and M9 heatsinks. Cryorig is powering up these towers by offering them in dual-fan, push-pull configurations for higher cooling performance out of the box.
The H7 Plus is a compact tower with three six-millimeter copper heatpipes running through its 145-mm stature. Cryorig outfits it with two of its QF120 120-mm fans with PWM control and a claimed 330 RPM–1600 RPM range. The fans and tower are designed to avoid coming into contact with RAM slots close to the CPU, too.
The M9 Plus is even more compact than the H7, at 125 mm tall. Like its taller counterpart, the M9 Plus has three six-millimeter heatpipes running through its fin stack. The M9 Plus relies on a pair of 92-mm PWM fans with a 600 RPM–2200 RPM range.
Both coolers include a PWM splitter to make controlling their dual fans simpler. They both use Cryorig's "Hive Fin" design, which purports to allow for better airflow through the fin stack. Cryorig suggests the H7 Plus can dissipate up to 150 W of thermal load, while the M9 Plus is rated for 130 W.
Cryorig says both coolers will arrive in mid-August at $44.45 for the H7 Plus and $24.45 for the M9 Plus. Switch that dollar sign for a euro symbol and you have both coolers' suggested prices for the Eurozone, including VAT.

Intel NUCs with Cannon Lake inside pop up at SimplyNUC
Intel's Core i3-8121U is the one and only Cannon Lake CPU to exit the company's foundries so far, and its implementation in actual products has been confined to a single Chinese-market Lenovo laptop. That could change soon with the NUC8i3CYSM and NUC8i3CYSN, two NUCs widely referred to as "Crimson Canyon" in rumors and leaks.
According to new product listings from SimplyNUC, the NUC8i3CYSN will come with 4 GB of soldered-down LPDDR4-2666 memory in a dual-channel configuration, while the NUC8i3CYSM will bump the memory to 8 GB. The systems are otherwise identically configured around a Core i3-8121U CPU.
Since Cannon Lake doesn't include an active integrated graphics processor, the Crimson Canyon NUCs will rely on a Radeon 540 discrete graphics processor with 2 GB of its own GDDR5 RAM. SimplyNUC will include a 1-TB mechanical hard drive with both NUCs in their base configurations. That drive occupies the NUCs' sole SATA 6 Gbps port. Builders and reviewers who want to make an expedition to Cannon Lake will also find an M.2 2280 slot for SATA or NVMe gumsticks.
On the outside, the NUCs have two HDMI 2.0b ports capable of driving two 4K displays at 60 Hz. The systems have two USB 3.0 Type-A ports on their front panels and two more of those ports around back. Intel includes a Gigabit Ethernet jack driven by one of its own i219V controllers, as well. A UHS-I SD card reader and an integrated headphone-microphone jack round out the systems' connectivity options.
SimplyNUC will sell the base NUC8i3CYSM for $574 with the aforementioned 1-TB hard drive and Windows 10 Home included, while the NUC8i3CYSN will run $529 with that same configuration. The company estimates the NUC8i3CYSM will ship in mid-September, while the lower-end NUC8i3CYSN will begin shipping in late October.

Book Lovers Day Shortbread
Samsung's Galaxy Note phones have long held a reputation as the power user's Android device, and the company is adding to that legacy today with the Galaxy Note 9. The company's next-generation flagship handset gives demanding users access to its DeX desktop interface on external monitors over a single cable, and it teaches the S Pen new tricks like remote shutter control for the phone's cameras, presentation control, video playback control, and more using Bluetooth Low Energy.
The camera inside the Note 9 hops on board the computational-photography-and-AI train with two bundles of smarts. A "Scene Optimizer" mode can perform object recognition to try to box your shot into one of 20 categories and apply the color, contrast, and exposure settings that Samsung's programmers feel are the best expression of a given scene. The camera can also identify flaws like a smudged lens, excessive flare, a blurry exposure, or a blinking subject, and it'll prompt the user to fix the issue and try again.
The camera hardware itself is practically identical to that of the Galaxy S9. The world-facing camera has a dual-sensor system with optical image stabilization on both snappers. One of the shooters is a wide-angle affair with a 12-MP sensor and a diaphragm that can switch between f/1.5 and f/2.4 apertures, while the other is a telephoto with a 12-MP resolution, 2x optical zoom (presumably versus the wide-angle snapper), and up to 10x digital zoom. Finally, the Note 9's selfie camera is an 8-MP deal with an f/1.7 lens system.
Note 9 owners will be able to view those optimized photos and jot notes on a 6.4", 2960x1440 OLED screen. Samsung gives that screen a long life with a massive 4000-mAh battery, up from 3300 mAh on the Galaxy Note 8.
American Note 9 buyers will find a Snapdragon 845 SoC inside, running at up to 2.8 GHz on its performance cores and up to 1.7 GHz on its efficiency cores. That SoC comes paired with 6 GB of RAM and 128 GB of flash storage in the base Note 9, while an upgraded version offers 8 GB of RAM and 512 GB of storage.
Both configurations have a microSD slot for up to 512 GB of extra storage. The company says the handset has a "Water Carbon Cooling" system and "AI-based performance adjusting algorithm[s]" to deliver maximum sustained performance. Buyers in other regions will get an Exynos SoC with four custom performance cores running at up to 2.7 GHz and four efficiency cores running at up to 1.7 GHz.
The Note 9 also includes a wealth of software integration and improvements. The company has partnered with Spotify to make users' playlists, music, and podcasts available across Note 9, Galaxy Watch, and Smart TV devices. According to AnandTech's live blog, Samsung's Bixby assistant is now "more conversational, personal, and useful," and it handles more powerful, context-aware queries on the Note 9. Samsung said that it's working with Spotify and Google to integrate Bixby into their products.
The American version of the Note 9 will be available in two colors: an "Ocean Blue" with a contrasting yellow S Pen and a "Lavender Purple" with a matching S Pen. Buyers in other regions will also get black and copper finishes.
The 128-GB version of the device will start at $1000 from AT&T, Sprint, T-Mobile, U.S. Cellular, Verizon Wireless and Xfinity stores, and it'll be available on August 24. Samsung will also make the device available from Amazon, Best Buy, Costco, Sam's Club, Straight Talk Wireless, Target and Walmart, as well as Samsung.com and the ShopSamsung app.
The 512-GB Note 9 will also be available August 24 for $1250 from "select retail locations" and online through AT&T, T-Mobile, Verizon, U.S. Cellular and Samsung.com. Buyers who pre-order a Note 9 from August 10 to August 23 will get a choice of AKG noise-canceling headphones or access to a special Fortnite skin and 15,000 "V-bucks" of in-game currency. Those who don't want to choose can add both the cans and the cash to their order for an extra $99. Samsung is also offering the exclusive Fortnite Galaxy skin to all Note 9 and Tab S4 owners.