Google Compute Engine is now powered in part by Pascal

Google is expanding the availability of powerful compute GPUs in its Compute Engine cloud platform. The search giant is now offering beta access to its battery of Nvidia Tesla P100 compute GPUs, and its collection of Tesla K80 dual-GPU compute cards has graduated to general availability for number crunching.

Compute capabilities not shown to scale

The cloud GPU resources let Google's customers perform tasks like machine learning training and inference, geophysical data processing, simulation, seismic analysis, and other scientific computation. The Tesla P100 GPUs are based on the largest of all of Nvidia's Pascal chips, boasting 3584 stream processors at a brisk 1480 MHz boost clock rate. The Tesla K80 is based on the company's older Kepler architecture and has 4992 SPs with a maximum boost clock of 875 MHz. Google says the Tesla P100 GPUs can perform some tasks 10 times faster than the Tesla K80 cards.

Customers can spin up customizable VMs with varying amounts of CPU cores, memory, disk, and GPU resources. A single VM can have up to four Tesla P100 or K80 cards (eight Kepler GPU chips). The GPUs are available in all four Google Compute Engine regions, and can be used in either VMs or containers. The company also offers up to 3 TB of high-speed SSD storage per VM.
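As a rough illustration of how such a VM might be requested from the command line, a gcloud invocation along these lines would attach P100s at creation time. The instance name, zone, machine type, and image here are placeholders, and the exact flags may differ by gcloud release:

```shell
# Sketch: create a VM with four Tesla P100 GPUs attached.
# Instance name, zone, machine type, and image below are illustrative only.
gcloud beta compute instances create my-gpu-vm \
    --zone us-east1-c \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-p100,count=4 \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE \
    --restart-on-failure
```

GPU-equipped instances cannot live-migrate during host maintenance, hence the `TERMINATE` maintenance policy.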

The company also announced that the price to access those GPU resources can be reduced by using its sustained use discount, with a reduction of up to 30%. Reaching that discount level requires running a VM for at least 75% of the hours in a billing month, but lesser discounts can be attained with VMs that operate as little as 25% of the hours in a month.
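For a sense of the arithmetic, Google's sustained use discounts at the time were applied incrementally: each successive quarter of the month's usage was billed at a lower rate (roughly 100%, 80%, 60%, and 40% of the base price; treat that exact schedule as an assumption here). A full month of usage then nets out to about 30% off. A minimal sketch:

```python
def billed_fraction(usage: float) -> float:
    """Fraction of the full-month base price actually billed, given
    usage as a fraction of the month (0.0 to 1.0). Assumes each
    quartile of usage is billed at 100/80/60/40% of the base rate."""
    rates = [1.0, 0.8, 0.6, 0.4]
    billed = 0.0
    remaining = usage
    for rate in rates:
        portion = min(remaining, 0.25)
        billed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return billed

def effective_discount(usage: float) -> float:
    """Discount relative to paying the base rate for the hours used."""
    if usage == 0:
        return 0.0
    return 1.0 - billed_fraction(usage) / usage
```

Under that assumed schedule, running for the entire month yields a 30% effective discount, while 75% uptime works out to about 20% off the hours consumed.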

Comments closed
    • ronch
    • 2 years ago

    While AMD got HBM first, I think it’s Nvidia that will harness its potential better. Also, I haven’t been watching this space but does AMD have a product that competes with Nvidia here? If yes, how does it stack up?

      • Airmantharp
      • 2 years ago

      Nvidia was smart to avoid HBM1, and they’re certainly employing HBM2 well in the compute space while smartly avoiding the lost cause of trying to field it in consumer products that couldn’t use it to begin with.

      And the answer to AMD having a competitive product? Maybe. Not only are they running into difficulties producing Vega parts in volume, but they’re also well behind in terms of software support.

    • davidbowser
    • 2 years ago

    Disclaimer – I work for Google. My opinions are my own.

    If anyone wants to learn more at a day of Free in-person Google Cloud Platform training, there are several dates/places left on the 2017 training calendar. I will be helping run the one in NYC next week.

    [url<]https://cloudplatformonline.com/2017-Onboard-Northam.html[/url<] And for those that want to play around with the freebies [url<]https://cloud.google.com/free/[/url<]

    • R-Type
    • 2 years ago

    Just lol if you clicked on this article thinking about the programming language like I did.

      • Prion
      • 2 years ago

      lol

      (@Google powered by Turbo Pascal)

      • RickyTick
      • 2 years ago

      lol

    • NTMBK
    • 2 years ago

    Cool to see the K80, that thing is still a double precision beast.

    • chuckula
    • 2 years ago

    I can finally afford a P100!

    For a couple of hours at least.

      • jihadjoe
      • 2 years ago

      They’re only about $200 a day on Turo

        • K-L-Waster
        • 2 years ago

        “Only” — to put that in perspective, in less than a week you would pay as much as buying a 1080 TI outright….
