Howdy, folks. We're short on time for pleasantries today, but you'll be fine. After all, you have a review to read—that of the Intel Core i9-9980XE. That's one big, seriously fast chip. Before you head out, though, grab your credit card and take a look at the selection of deals we have today.
That's all for today, folks! There's a chance you're looking for something we haven't covered. If that's the case, you can help The Tech Report by using the following referral links when you're out shopping: not only do we have a partnership with Newegg and Amazon, but we also work with Best Buy, Adorama, Rakuten, Walmart, and Sam's Club. For more specific needs, you can also shop with our links at Das Keyboard's shop.

Intel unveils XMM 8160 5G modem for a 2H 2019 arrival
Intel is shaking up its plans for its rollout of 5G modems today. Just under a year ago, the company took the wraps off its XMM 8060 modem. The XMM 8060 promised support for the 5G New Radio (5G NR) standard in both Standalone (SA) and Non-Standalone (NSA) forms, as well as backward compatibility with 2G, 3G (including CDMA), and 4G networks. Today, the company announced that it plans to pull in the launch of its new XMM 8160 modem by more than six months, to the second half of 2019, according to its press release. The XMM 8160 claims support for 5G speeds of up to 6 Gbps.
Past that, though, the XMM 8160 doesn't seem to unveil many new capabilities versus those announced for the XMM 8060. It has the same multi-mode chops promised by the 8060 across 2G, 3G, and 4G legacy networks, in addition to support for 5G NR SA and NSA across sub-6-GHz and mmWave spectra. Intel highlights the fact that the XMM 8160's backward and forward capabilities are all wrapped up in a single chip, though, and that could be an important distinguishing point for the blue team.
In contrast, Qualcomm's Snapdragon X50 modem has to piggyback on Snapdragon SoCs that have modems for LTE and other legacy standards baked in to let a Snapdragon smartphone cover all its bases. Intel's purported single-chip approach could be important for companies who want to integrate a complete backward- and forward-compatible modem into products that don't necessarily have LTE support to begin with. That covers most every non-Qualcomm notebook PC sold today, to name just one attractive market that Intel might want to help itself to.
Intel also anticipates that implementing the XMM 8160 and its supporting transceivers in products might be less demanding of board area than other early 5G implementations, an important consideration in smartphones and tablets. The company produced a not-to-scale graphic to demonstrate that a complete XMM 8160 implementation will need just the modem itself, a 5G mmWave transceiver, and a seven-mode RF transceiver to cover sub-6-GHz operation. Whether the company's supporting graphic is meant to show the advantages of the XMM 8160 versus what would have been required to implement the XMM 8060 isn't made clear, however. It's also not clear whether Intel might be illustrating what it thinks would be required for Qualcomm's partners to implement the Snapdragon X50 modem alongside another Snapdragon SoC.
It's hard not to see this graphic as a dig at Qualcomm, however, as that company's recently-introduced QTM052 5G RF transceivers only claim support for the mmWave spectrum, not sub-6-GHz bands. Early Qualcomm 5G phones might need separate RF transceivers for sub-6-GHz 5G operation and other legacy networking standards as a result, but until real devices hit the market, we won't know for sure. It is clear that Intel is fudging a bit by leaving the primary SoC of any mobile device that implements the XMM 8160 out of the picture, though, while the legacy modem for Snapdragon devices is already part of the board area occupied by those SoCs. That omission might make Intel's graphic more dramatic than actual implementations will be.
Whatever its implementation details may be, the timeline for the introduction of the XMM 8160 is fascinating on its own. Apple is Intel's largest client for modems, and it regularly releases new iPhones in the fall of most years. Intel's second-half-of-2019 release window for the XMM 8160 could suggest that the first 5G-capable iPhones are coming in that time frame. As with so much else about 5G, we'll just have to wait and see.

Das Keyboard 4Q updates a classic design with cloud smarts
Das Keyboard made its name with no-nonsense mechanical clickers whose quality could be felt with every keystroke, but a wide range of modern high-end keyboards make mechanical switches table stakes. The company has since introduced the Q software platform for lighting control and web-powered intelligent notifications, and the classic Das Keyboard 4 Professional design now boasts compatibility with that utility.
The Das Keyboard 4Q blends cloud brains with classic Cherry MX Brown switches for a tactile, non-clicky typing experience, or MX Blues for keystrokes that can be felt and heard. Each key has programmable RGB LED backlighting that can be connected to other web APIs to display information like stock price movement or weather forecast data. The Q utility can also be used to hook up the 4Q to compatible IoT devices to control smart lighting and thermostats, among other smart gear. If that's not enough control, Das Keyboard exposes a REST API for Q-enabled keyboards to let users develop their own extensions for the utility.
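For the curious, a request to that REST API might look something like the sketch below. The base URL, endpoint path, product ID, and field names here are all illustrative assumptions on our part; the real schema lives in Das Keyboard's Q API documentation.

```python
import json

# Hypothetical "signal" payload for Das Keyboard's Q REST API.
# Q_API_BASE, the "pid" value, and the field names are assumptions
# for illustration, not the documented schema.
Q_API_BASE = "http://localhost:27301/api/1.0"  # assumed local Q service URL

def build_signal(zone_id, color, message):
    """Build a payload that would light one key and attach a note to it."""
    return {
        "pid": "DK4QPID",    # assumed product identifier for the 4Q
        "zoneId": zone_id,   # which key to light, e.g. "KEY_E"
        "color": color,      # hex RGB string for the key's backlight
        "message": message,  # note surfaced when the user checks the key
    }

payload = build_signal("KEY_E", "#FF0000", "AAPL down 2% today")
body = json.dumps(payload)  # ready to POST to Q_API_BASE + "/signals"
```

A stock-tracking extension, for instance, could rebuild and re-send such a payload whenever a quote changes.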
Smarts aside, the 4Q looks both sturdy and functional. Das Keyboard caps off the case of this deck with an anodized aluminum top panel, and dedicated media buttons provide quick control over play, pause, next track, and volume functions. N-key rollover and a two-port USB 2.0 hub round out the 4Q's features. If a smart, sturdy board with understated styling appeals to you, the 4Q is available now for $199 from Das Keyboard's shop.

Zotac VR Go 2.0 gets caffeinated with a Core i7-8700T
Wireless VR adapters exist for those who want to cut the cord, but not every headset has one of those transceivers available. For those who still want to dump the fixed umbilical, Zotac has upgraded its VR Go backpack PC. Version 2.0 of the concept gets some new guts and an upgraded harness design that claims to hold the system further away from the wearer's body for better cooling performance.
The VR Go 2.0 houses Intel's six-core, 12-thread Core i7-8700T CPU. This 35-W chip has a 2.4-GHz base clock and a 4-GHz single-core boost speed. Zotac pairs the i7-8700T with 16 GB of DDR4 RAM and a GeForce GTX 1070 graphics card bearing 8 GB of GDDR5 memory. The VR Go 2.0 comes with a 240-GB M.2 SATA SSD as its primary storage option, and it has a 2.5" storage bay for further expansion.
On its top edge, the VR Go 2.0 offers three USB 3.1 Gen 1 ports, a USB 3.1 Gen 2 port, and an HDMI out for VR headsets. Three more USB 3.1 Gen 1 ports, another HDMI out, a DisplayPort out, a Gigabit Ethernet jack, and headphone and microphone jacks nestle in the right side of the backpack. 802.11ac Wi-Fi and Bluetooth 5 connectivity let the VR Go 2.0 communicate without wires.
The full VR Go 2.0 itself weighs 10 pounds (4.5 kg), about the same as the first-gen take on the design. Zotac says battery life is about the same as the first-gen model's, as well. A pair of hot-swappable batteries each provide about an hour and a half of run time. RGB LED accents on the back panel of the VR Go 2.0 let room-scale experience operators see just who is wearing a given VR Go backpack, too. Zotac didn't provide pricing for the VR Go 2.0, but we'd expect it to come in around the same $2000 mark that the original commanded.

Radeon Software 18.11.1 gets ready to roll for Battlefield V
Not to be left behind by the green team, AMD has its own set of fresh drivers for the upcoming release of Battlefield V. With Radeon Software Adrenalin Edition 18.11.1, AMD claims gamers will enjoy up to 8% faster performance in BFV compared to the Radeon Software 18.10.2 release with the RX Vega 64 at 1920x1080, and Radeon RX 580 8 GB users will enjoy up to 9% higher performance under the same conditions. In absolute terms, the company observed 142.6 FPS on average from the RX Vega 64 using BFV's ultra preset, while the RX 580 delivered 91.1 FPS using the latest driver.
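A quick sanity check on those numbers: working backward from the claimed uplifts gives the implied 18.10.2 baselines. The arithmetic below is ours, not AMD's.

```python
# Working backward from AMD's claimed Battlefield V uplifts to the
# implied Radeon Software 18.10.2 baselines at 1920x1080.
vega64_new = 142.6               # FPS, RX Vega 64, 18.11.1, ultra preset
rx580_new = 91.1                 # FPS, RX 580 8 GB, 18.11.1

vega64_old = vega64_new / 1.08   # claimed "up to 8%" gain
rx580_old = rx580_new / 1.09     # claimed "up to 9%" gain
# Implied baselines: roughly 132 FPS and 83.6 FPS, respectively.
```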
Hitman 2 arrives soon, as well, and AMD has baked some refinements for that title into the 18.11.1 release. The company says performance should improve roughly 3% from the 18.10.2 release on the RX 580 8 GB at 1920x1080. The RX 580's 57.2 average FPS on medium settings with the DirectX 11 API suggests Agent 47's return could be incredibly punishing to run.
AMD stomped out a few bugs in this release, including a situation where the Radeon Overlay didn't play well with the Windows 10 October 2018 update. Assassin's Creed Origins should no longer crash on launch or during gameplay with Windows 7. Wolfenstein II: The New Colossus should no longer exhibit graphical corruption when the player looks at lava or water. Finally, Strange Brigade should no longer crash intermittently when rendering with the DirectX 12 API.
Other bugs remain alive and well in the 18.11.1 release. The Radeon Overlay may not show up when a user tries to toggle it in Battlefield V. Systems with multiple displays may cause a laggy mouse cursor when at least one of those displays is powered off. RX Vega cards may still run with elevated memory clocks at idle. Finally, users of Microsoft's PIX tool will find that it's incompatible with this release. AMD says those users will need to use Radeon Software 18.9.3 in tandem with the tool.
If you're ready to drop into the world of Battlefield V, you should fire up Radeon Software to grab this update or head over to AMD's support site to download it directly.

Tongue Twister Day Shortbread
PC hardware and computing
Games, culture, and VR
Hacks, gadgets and crypto-jinks
Science, technology, and space news
Cheese, memes, and shiny things
Howdy, gerbils. The TR labs are humming with activity behind the scenes, but that's not all that's happening. Black Friday is coming up fast, and deals at every e-tailer are heating up. Right now it's a mild boil, but as always, we picked out the best pieces for you. It's a long one today, so get a cup of tea and your credit card.
GeForce 416.81 drivers marshal the troops for Battlefield V
The Windows 10 October Update remains a massive missing piece in Nvidia's RTX software stack, but you have to go to war with the army you have, and Battlefield V is set to arrive for the general public on November 20. Origin Access Premier subscribers will be able to take the field starting tomorrow, though, and that means it's time for a fresh GeForce driver. Version 416.81 gets GeForce gamers ready for Battlefield V, although the release makes no notes about when GeForce RTX owners should expect to be able to fire up their cards' RT cores.
Beyond Battlefield V, the 416.81 release adds support for player-versus-player-versus-environment title Hunt: Showdown, a Southern-themed shooter with a supernatural twist. This title is in Steam Early Access now, and Nvidia is supporting it with both this driver update and the addition of GeForce Experience Highlights recording. Whether you're killed or doing the killing, Highlights will automatically capture those moments so that you can share them with the world.
Nvidia has been doing some hunting and killing of its own for this release in the form of a long list of solved bugs. Microsoft Edge users with Application Guard enabled should be able to browse the web in tandem with Nvidia Surround. High multiple-monitor idle power draw with Turing cards should be tamed. Windows should no longer blue-screen when a user exits a game on an RTX 2080 Ti system with both G-Sync and non-G-Sync monitors connected. Speaking of the RTX 2080 Ti, its users should no longer see stuttering when playing back HEVC video.
Other minor issues were also stomped down. GeForce GTX 1060 and GTX 970 cards connected to receivers should no longer switch from multi-channel to stereo audio after just 5 seconds without outputting sound. Recording and streaming NVENC applications in SLI Titan X rigs should now work, too. Both ARK: Survival Evolved and Shadow of the Tomb Raider should be a little more stable, and flickering in Witcher 3 and Far Cry 5 should no longer occur. Finally, graphics corruption in Monster Hunter World when Volume Rendering is off should be gone.
There's still a small handful of open issues, though. Those lucky enough to own two RTX 2080 or RTX 2080 Ti cards in SLI could see their "single GPU response slow down after enabling/ disabling SLI, requiring system reboot" (sic). The combination of a GTX 1080 Ti and a motherboard with a PLX chip can result in watchdog violation errors. Battlefield 1 gamers might see their displays go pink when changing their monitor's refresh rate from 144 Hz to 120 Hz with HDR enabled. Mouse cursors in Firefox might still be briefly corrupted when hovering over certain links. Finally, G-Sync might not shut off after closing certain games, and unfortunate owners of GTX 780 cards might experience laggy desktops.
Whew, that was a lot of ground to cover. If you somehow need even more detail, you can check out the driver's release notes here. Otherwise, just head up to the Nvidia driver download page and grab it, or use GeForce Experience if you have it installed.

Samsung shows off Infinity Flex foldable smartphone display
Rumors of a Samsung smartphone with a foldable display have been rampant of late, and the company confirmed that it's working on such a device at the Samsung Developer Conference this afternoon. The so-called Infinity Flex display made its debut in a "pocket-sized smartphone" that can fold open to reveal a 7.3" display—just a bit short of the diagonal of other small tablets like the iPad mini 4. The company's prototype device also has a screen on one of its outer faces for use when the full-size internal display isn't needed or practical.
Samsung's push to make foldable displays the next thing in phones has Google's official blessing, too. In a post on the Android Developers blog, the company revealed support for "Foldables," a new device class that covers hardware like Samsung's prototype. Google anticipates there will be two-screen and one-screen devices using this technology, and it says it's optimizing Android to deal with cases like starting a video on the outer screen of a two-screen device and seamlessly moving that video to the internal screen when the device is unfolded.
Google says to expect foldable devices from Samsung and "several" other Android manufacturers starting next year. The firm will be sharing more information about the possibilities of foldable devices at its own Android Dev Summit running from November 7 through 8 at the Computer History Museum in Mountain View, CA.

Micron 12-Gb LPDDR4X RAM promises higher mobile performance and capacity
Mobile devices like smartphones and tablets often come with enough RAM to rival desktop PCs these days, but capacity isn't the sole measure of performance by any stretch. Mobile data rates are poised to skyrocket with 5G connectivity, and the applications that connectivity could enable may prove more and more demanding on devices' memory subsystems. Even today, imaging subsystems and AI workloads demand high-performance memory hierarchies.
To help cope with these demands, Micron is introducing what it calls its first monolithic 12-Gb (1.5 GB) LPDDR4X chip for use in mobile devices. Micron says the 12-Gb LPDDR4X packages will be produced on a 10-nm-class process for improved efficiency and lower power consumption. The company says this memory reduces power draw by as much as 10% compared to its previous-generation products while operating at a data rate of 4266 Mb/s. Finally, the density of these chips could allow for higher overall memory capacity in handsets.
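Some back-of-the-envelope math on those figures follows. The x16 channel width below is the typical LPDDR4X configuration and an assumption on our part, not a Micron spec.

```python
# Capacity and bandwidth arithmetic for Micron's 12-Gb LPDDR4X die.
chip_gbit = 12
chip_gbyte = chip_gbit / 8         # 1.5 GB per monolithic die

package_capacity = 4 * chip_gbyte  # a four-die package would yield 6 GB

pin_rate_mbps = 4266               # Micron's quoted per-pin data rate
channel_width = 16                 # assumed x16 LPDDR4X channel
channel_gbs = pin_rate_mbps * channel_width / 8 / 1000  # GB/s per channel
```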
Micron says this new memory is available today, so if you're a mobile device designer and want to take advantage, give the company a ring.

AMD Radeon Instinct MI50 and MI60 bring 7-nm GPUs to the data center
Alongside a preview of its first 7-nm Epyc CPUs built with the Zen 2 microarchitecture, AMD debuted its first 7-nm data-center graphics-processing units today. The Radeon Instinct MI50 and Radeon Instinct MI60 take advantage of a new 7-nm GPU built with the Vega architecture to crunch through tomorrow's high-performance computing, deep learning, cloud computing, and virtualized desktop applications.
As we noted with AMD's next-generation Epyc CPUs, TSMC's 7-nm process provides the red team's chip designers with a 2x density improvement versus GlobalFoundries' 14-nm FinFET process. The resulting silicon can be tuned for 50% lower power for the same performance or for 1.25x the performance in the same power envelope. In the case of the Vega chip that powers the MI50 and MI60, that process change allowed AMD to cram a marketing-approved figure of 13.2 billion transistors into a 331-mm² die, up from 12.5 billion transistors in 471 mm² on the 14-nm Vega 10.
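Curiously, those quoted figures work out to roughly a 1.5x density improvement rather than the full 2x the process allows, a reminder that the 7-nm chip isn't a straight shrink of Vega 10. The arithmetic below is ours, not AMD's.

```python
# Transistor-density arithmetic from AMD's quoted figures.
density_7nm = 13.2e9 / 331 / 1e6    # ~39.9M transistors per mm² (7-nm Vega)
density_14nm = 12.5e9 / 471 / 1e6   # ~26.5M transistors per mm² (Vega 10)

ratio = density_7nm / density_14nm  # ~1.5x achieved, vs. 2x for the process
```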
AMD didn't call this chip by an internal codename, but it's clearly a refined and tuned version of the Vega architecture we know from the gaming space. Vega DC (as I'll call it for convenience) unlocks a variety of data-processing capabilities to suit a wide range of compute demands. For those who need the highest-possible precision, Vega DC can perform double-precision floating-point math at half the rate of single-precision data types, for as much as 7.4 TFLOPS. Single-precision math proceeds at a rate of 14.7 TFLOPS. The fully-fledged version of this chip inside the Radeon Instinct MI60 crunches through half-precision floating point math at 29.5 TFLOPS, 59 TOPS for INT8, and 118 TOPS for INT4.
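Those figures follow a simple ladder: each halving of precision doubles throughput. The small deltas against AMD's quoted 7.4 and 29.5 TFLOPS come down to clock rounding, and the 4096-stream-processor count below is the full Vega configuration and an assumption on our part.

```python
# The MI60's precision ladder: each halving of data width doubles rate.
fp32_tflops = 14.7

fp64_tflops = fp32_tflops / 2   # ~7.35, matching the quoted 7.4 TFLOPS
fp16_tflops = fp32_tflops * 2   # ~29.4, matching the quoted 29.5 TFLOPS
int8_tops = fp16_tflops * 2     # ~59 TOPS
int4_tops = int8_tops * 2       # ~118 TOPS

# Implied boost clock, assuming 4096 shaders each doing one FMA (2 FLOPs):
shaders = 4096
implied_clock_ghz = fp32_tflops * 1e12 / (shaders * 2) / 1e9  # ~1.8 GHz
```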
Compared to Nvidia's PCIe version of its Tesla V100 accelerator, the Radeon Instinct MI60 seems to stack up favorably. The green team specs the V100 for 7 TFLOPS FP64, 14 TFLOPS of FP32, 28 TFLOPS of FP16, 56 TOPS of INT8, and 112 TOPS on FP16 input data with FP32 accumulation by way of the Volta architecture's tensor cores. While the two architectures are not entirely cross-comparable in their capabilities, the relatively small die and high throughput of the Radeon Instinct MI60 still impresses by this measure.
To support that blistering number-crunching capability, AMD hooks Vega DC up to 32 GB of HBM2 RAM spread over four stacks of memory. With 1024-bit-wide interfaces per stack, Vega DC can claim as much as 1 TB/s of memory bandwidth. While Tesla V100 boasts a similarly wide bus, its HBM2 memory runs at a slightly slower speed, resulting in bandwidth of 900 GB/s. AMD also claims end-to-end ECC support with Vega DC for data integrity.
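The math behind those round numbers is straightforward. The per-pin rates below are our inferences from the quoted bandwidth figures, not official specs.

```python
# HBM2 bandwidth arithmetic for the MI60's memory subsystem vs. Tesla V100.
stacks = 4
bus_bits_per_stack = 1024

vega_pin_gbps = 2.0     # inferred per-pin rate needed to hit 1 TB/s
v100_pin_gbps = 1.75    # inferred from Nvidia's 900-GB/s figure

vega_gbs = stacks * bus_bits_per_stack * vega_pin_gbps / 8   # 1024 GB/s
v100_gbs = stacks * bus_bits_per_stack * v100_pin_gbps / 8   # 896 GB/s
```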
The bleeding-edge technology doesn't stop there, either. AMD has implemented PCI Express 4.0 links on Vega DC for a 31.5 GB/s path to the CPU and main memory, or up to 64 GB/s of bi-directional transfer. On top of that, AMD builds Infinity Fabric edge connectors onto every Radeon Instinct MI50 and MI60 card that allow for 200 GB/s of total bi-directional bandwidth for coherent GPU-to-GPU communication. These Infinity Fabric links form a ring topology across as many as four Radeon Instinct accelerators.
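Those link figures fall out of the PCIe 4.0 signaling math for a x16 connection, sketched below.

```python
# PCIe 4.0 x16 link bandwidth with 128b/130b encoding.
lanes = 16
gt_per_s = 16               # PCIe 4.0 per-lane signaling rate, GT/s
efficiency = 128 / 130      # 128b/130b line-code overhead

per_direction_gbs = lanes * gt_per_s * efficiency / 8  # ~31.5 GB/s
bidirectional_gbs = per_direction_gbs * 2              # ~63 GB/s
```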
Like past Radeon data-center cards, the MI50 and MI60 will allow virtual desktop deployments using hardware-managed partitioning. Each Radeon Instinct card can support up to 16 guest VMs per card, or one VM can harness as many as eight accelerators. This feature will come free of charge for those who wish to harness it.
AMD expects the Radeon Instinct MI60 to ship to data-center customers before the end of 2018, while the Radeon Instinct MI50 will begin reaching customers by the end of the first quarter of 2019. AMD also announced its ROCm 2.0 software compute stack alongside this duo of 7-nm cards, and it expects that software to become available by the end of this year.

Saxophone Day Shortbread
At its Next Horizon event today, AMD gave us our first look at the Zen 2 microarchitecture. As one of AMD's first 7-nm products, Zen 2 will be making its debut on board the company's next-generation Epyc CPUs, code-named Rome.
According to AMD CTO Mark Papermaster, TSMC's 7-nm process offers twice the density of GlobalFoundries' 14-nm FinFET process. It can deliver the same performance as 14-nm FinFET for half the power, or 1.25 times the performance for the same power, all else being equal.
AMD is using those extra transistors to improve the basic Zen blueprint in at least two major ways. Zen 2 has an improved front-end with a more accurate branch predictor, smarter instruction pre-fetch, a "re-optimized instruction cache," and a larger op cache than its predecessor.
AMD also addressed a major competitive shortcoming of the Zen architecture for high-performance computing applications. The first Zen cores used 128-bit-wide registers to execute SIMD instructions, and in the case of executing 256-bit-wide AVX2 instructions, each Zen floating-point unit had to shoulder half of the workload. Compared to Intel's Skylake CPUs (for just one example), which have two 256-bit-wide SIMD execution units capable of independent operation, Ryzen CPUs offered half the throughput for floating-point and integer SIMD instructions.
Zen 2 addresses this shortcoming by doubling each core's SIMD register width to 256 bits. The floating-point side of the Zen 2 core has two 256-bit floating-point add units and two floating-point multiply units that can presumably be yoked together to perform two fused multiply-add operations simultaneously.
That capability would bring the Zen 2 core on par with the Skylake microarchitecture for SIMD throughput (albeit not the Skylake Server core, which boasts even wider data paths and 512-bit-wide SIMD units to support AVX-512 instructions). To feed those 256-bit-wide execution engines, AMD also widened the load-store unit, load data path, and floating-point register file to support 256-bit chunks of data.
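In per-core, per-cycle terms, the doubling works out as follows, assuming two FMA-capable SIMD pipes per core (an assumption for Zen 2 on our part, matching the first Zen's pipe count).

```python
# Peak FP32 math per core per cycle, assuming two FMA-capable SIMD
# pipes per core (an assumption for Zen 2, matching Zen's pipe count).
def fp32_flops_per_cycle(simd_bits, fma_pipes=2):
    lanes = simd_bits // 32        # FP32 lanes per SIMD unit
    return lanes * fma_pipes * 2   # an FMA counts as 2 FLOPs per lane

zen1 = fp32_flops_per_cycle(128)            # 16 FLOPs/cycle
zen2 = fp32_flops_per_cycle(256)            # 32, on par with Skylake client
skylake_server = fp32_flops_per_cycle(512)  # 64 with AVX-512
```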
At the system level, Zen 2 also represents a major change in the way Epyc CPUs are constructed. Only the CPU core complexes and associated logic will be fabricated on TSMC's 7-nm process. To talk to the outside world, next-generation Epyc packages will feature an I/O die bound to as many as eight Zen 2 "chiplets," for as many as 64 cores and 128 threads per package. This I/O die will contain memory controllers, Infinity Fabric interfaces, and presumably as much other "uncore" as AMD can get onto this cheaper, more mature silicon.
Developing...

Tuesday deals: a Ryzen 5 1500X for $140 and more
A fair day to you, gerbils. Long have I enjoyed jokes about constipation. I am not laughing anymore after having experienced serious bloatedness for the first time. All is slightly better in the world now, though, and I can finally wear a belt without major discomfort. None of that is an excuse for missing out on hardware deals, though. Here are today's top picks.
Corsair Vengeance 5180 gaming PC sets sail
If you've always wanted a Corsair-powered PC of your very own but don't know where to begin, help is on the way. The company is launching a line of prebuilt PCs under its Vengeance banner, and the first of the line is the Vengeance 5180.
This system starts with a six-core, 12-thread Intel Core i7-8700 in a B360 motherboard, cooled by one of Corsair's H100i Pro heatsinks. A GeForce RTX 2080 pushes pixels. Both motherboard and graphics card appear to be from MSI's stable, but Corsair didn't elaborate. Volatile memory comes courtesy of 16 GB of Vengeance RGB Pro RAM running at 2666 MT/s. A 480-GB Force MP300 NVMe SSD and a 2-TB, 7200-RPM hard drive handle non-volatile storage.
Corsair wraps all of those parts in its Crystal 280X RGB chassis, and it extends the light show offered by that case with one of its own RGB LED lighting strips. To juice up all that hardware, the company taps one of its CX750 power supplies. Buyers will also get a K55 RGB keyboard and Harpoon RGB mouse in the box. All of that RGB LED hardware can be coordinated through the company's iCUE utility.
Corsair asks $2399 for the Vengeance 5180, and that price includes a "comprehensive" two-year warranty with 24-hour phone support and access to dedicated technical support teams. The Vengeance 5180 is available now through Corsair's web store, and the company is throwing in a free HS50 headset with purchase for the moment.

SK Hynix officially launches its 96-layer 3D TLC NAND
As traditional silicon scaling has stopped paying dividends for flash-storage density, NAND makers have packed more and more bits into their flash packages by layering more and more sheets of flash memory on top of one another. Today, SK Hynix is joining the elite 96-layer club with its "CTF-based 4D NAND flash." While that "4D" descriptor is purely fluff, the company is in fact producing many-layered charge-trap NAND (as opposed to the floating-gate tech favored by Intel and Micron). Those 96-layer stacks allow the company to pack 512 Gb (64 GB) of TLC flash into a single memory chip.
SK Hynix's 96-layer NAND isn't just about stacks on stacks, though. According to Tom's Hardware's report on the company's Flash Memory Summit presentation, the "4D" technique moves the supporting circuitry for each flash cell under that structure rather than etching it into silicon next to the cell, a technique the company calls "periphery under cell" or PUC. SK Hynix says this move lets it make 30% smaller dies and allows it to produce 49% more bits per wafer compared to the dies that went into its 72-layer, 512-Gb products.
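A 30% smaller die alone would only buy about 43% more dies (and therefore bits, at a constant 512 Gb per die) per wafer. The remaining gap to 49% plausibly comes from smaller dies wasting less area at the wafer's edge, though that's our speculation, not SK Hynix's claim.

```python
# How much of the 49% bits-per-wafer gain a 30% die shrink explains
# on its own, holding capacity constant at 512 Gb per die.
die_shrink = 0.30
naive_gain = 1 / (1 - die_shrink) - 1  # ~0.43: 43% more dies per wafer
claimed_gain = 0.49                    # SK Hynix's quoted figure
```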
The company also claims 30% higher write and 25% higher read performance from its latest flash, presumably versus that same 72-layer stuff, although the company doesn't say by what metric it measured those figures. SK Hynix further notes that thanks in part to a "multiple gate insulators architecture," its 96-layer flash can operate at I/O rates up to 1200 Mbps at 1.2 V.
Hynix plans to introduce 1-TB SSDs using its 4D flash and in-house controllers later this year. Looking to 2019, the company plans to introduce UFS 3.0 mobile SSDs with this flash in the first half of the year, and enterprise SSDs with this technology in the second half. Hynix also plans to introduce 1-Tb flash packages using 96-layer technology and products using QLC NAND next year.

Gigabyte Z390 Designare offers pros a bevy of connectivity options
Some builders want high-end motherboards without a layer of gamer bling on top, and for those folks, Gigabyte has its Designare series. While these boards are ostensibly for creative pros, their understated looks and high-quality parts make them a good fit for any high-performance build. The company has just applied this treatment to the Z390 chipset with its Z390 Designare.
To power eighth- and ninth-gen Core CPUs in the LGA 1151 socket, Gigabyte taps 12 Vishay SiC634 integrated power stages driven by an Intersil ISL69138 PWM controller. Gigabyte turns six phases from that PWM chip into 12 using Intersil's ISL6617A doublers. To keep this VRM cool in operation, Gigabyte uses a chunky metal heatsink with a direct-contact heat pipe running over all 12 of those power stages. Power input comes courtesy of an eight-pin-plus-four-pin EPS duo. A 2-oz copper PCB helps draw heat away from the exposed pads on the bottom of those power stages, too.
High-quality VRM aside, the real action on the Z390 Designare plays out on its rear I/O panel. This board has two Thunderbolt 3 ports with all the trimmings, including support for DisplayPort input to drive single-cable displays. Gigabyte also provides two USB 3.1 Gen 2 ports, four USB 3.1 Gen 1 ports (two of which feature its DAC-Up adjustable voltage tech), two USB 2.0 ports, and a hybrid PS/2 keyboard-and-mouse port. Twin Gigabit Ethernet jacks and an integrated Intel Wireless-AC 9560 wireless radio round out those impressive connectivity options.
To support demanding storage configurations or multiple graphics cards, the Designare can split its CPU-driven PCIe lanes into a x8/x4/x4 config, allowing a high-performance graphics card and two PCIe 3.0 x4 SSDs to communicate directly with the CPU from three physical PCIe x16 slots. That connectivity comes on top of two M.2 slots with heatsinks, two PCIe 3.0 x1 slots, and six SATA ports. Demanding NVMe storage users will be pleased to find that at least one M.2 slot and its heatsink stand well clear of the primary PCIe slot to prevent throttling due to waste heat from a system's graphics card on this board.
We see little to take issue with from the Z390 Designare's loadout and layout on visual inspection, and that's a good thing given this board's $270 suggested price tag. Keep an eye out for the Z390 Designare on your favorite e-tailer's pages soon.

Updated: Cascade Lake-AP Xeon CPUs embrace the multi-chip module
After taking a little over a year to think on it, Intel appears to have decided that glue can be pretty Epyc after all. The company teased plans for a new Xeon platform called Cascade Lake Advanced Performance, or Cascade Lake-AP, this morning ahead of the Supercomputing 2018 conference. This next-gen platform doubles the cores per socket from an Intel system by joining a number of Cascade Lake Xeon dies together on a single package with the blue team's Ultra Path Interconnect, or UPI. Intel will allow Cascade Lake-AP servers to employ up to two-socket (2S) topologies, for as many as 96 cores per server.
Intel chose to share two competitive performance numbers alongside the disclosure of Cascade Lake-AP. One of these is that a top-end Cascade Lake-AP system can put up 3.4x the Linpack throughput of a dual-socket AMD Epyc 7601 platform. This benchmark hits AMD where it hurts. The AVX-512 instruction set gives Intel CPUs a major leg up on the competition in high-performance computing applications where floating-point throughput is paramount. Intel used its own compilers to create binaries for this comparison, and that decision could create favorable Linpack performance results versus AMD CPUs, as well.
AMD has touted superior floating-point throughput from its Epyc platforms in the past for two-socket systems, but those comparisons were made against Broadwell CPUs with two AVX2 execution units per core rather than the twin AVX-512 engines of Skylake Server and the derivative Cascade Lake cores. AMD also chose to use the GCC compiler for those comparisons rather than Intel's compiler suite. Intel has clearly had enough of that kind of claim from AMD, and it seems keen to reassert its chips' superiority for floating-point performance with this benchmark info.
Other decisions about configuring the systems under test will likely raise louder objections. Intel didn't note whether Hyper-Threading would be available from Cascade Lake-AP chips, and indeed, its comparative numbers against that dual-socket Epyc 7601 system were obtained with SMT off on the AMD platform. 64 active cores is nothing to sniff at, to be sure, but when a platform is capable of throwing 128 threads at a problem and one artificially slices that number in half, eyebrows are going to go up.
Update 11/5/2018 at 18:11: According to an Intel spokesperson who contacted me this evening, "it's common industry practice for Intel to disable simultaneous multithreading on processors when running STREAM and LINPACK to achieve the highest processor performance, which is why we disabled it on all processors we benchmarked." Our independent research on this point corroborates Intel's statement, as Linpack fully occupies the floating-point units of the CPU and would likely experience performance regressions from resource contention with SMT on. Point taken.
Intel also asserted that on the Stream Triad benchmark, a Cascade Lake-AP system will be able to offer 1.3x the memory bandwidth of that same 2S Epyc 7601 system with eight channels of DDR4-2666 RAM. That figure comes courtesy of 12 channels of DDR4 memory per socket, a simple doubling-up of the six memory channels available per socket from a typical Xeon Scalable processor today. Dual-socket Cascade Lake-AP systems will be able to offer an incredible 24 channels of DDR4 memory per server. Intel didn't disclose the memory speed it used to arrive at this figure, however.
Intel also teased some deep-learning performance numbers against its own products. Compared to a 2S system with Xeon Platinum 8180 CPUs, Intel projects that a 2S Cascade Lake-AP server will offer as much as 17 times the deep-learning image-inference throughput of today's systems. That figure could be related to Cascade Lake's support for the Vector Neural Network Instructions (VNNI) subset of the AVX-512 instruction set. VNNI allows Cascade Lake processors to perform the INT8 and INT16 operations important to AI inferencing.
Beyond this high-level teaser, Intel didn't specify nitty-gritty details like the inter-socket interconnect topology or the number of PCIe lanes available per socket from each Cascade Lake-AP CPU. We expect to learn more upon the official release of the Cascade Lake family of processors later this year.

AMD schedules "Next Horizon" event for November 6
A new AMD event has appeared on the company's investor relations website, as spotted by the eagle eyes at Anandtech. The company will be holding an event it calls "Next Horizon" on November 6. Although no details of this event accompany its page on the investor relations site, the "Horizon" name has some precedent among past AMD events. The "New Horizon" presentation in December 2016 proved one of the more public demonstrations of the Zen architecture ahead of the chip's launch the following March, and it was the place where AMD revealed the Ryzen nameplate for its enthusiast CPUs with Zen silicon inside.
With that historical precedent in mind, AMD could be poised to reveal some details of the next architectural revision on its roadmap, code-named Zen 2. We could also learn more about the company's other 7-nm products, like the next-generation Radeon Instinct GPU the company has been teasing for some time. According to the earnings call transcript from AMD's most recent financial results, the company plans to talk about "innovation of AMD products and technologies, specifically designed for the datacenter on industry-leading 7-nanometer process technology" at Next Horizon. That could mean some Epyc news come next Tuesday, as well. We'll be keeping our eyes on it.

The-day-after-Author's-Day Shortbread
PC hardware and computing
Games, culture, and VR
Hacks, gadgets and crypto-jinks
Science, technology, and space news
Cheese, memes, and shiny things