Amazon Announces New AWS EC2 P3 Instances
In a recent post on the AWS Blog, Amazon announced the availability of its newest Elastic Compute Cloud (EC2) instances, the P3 family. These new P3 instances are designed for processor-intensive workloads such as machine learning, deep learning, fluid dynamics, and computational finance, among others.
Amazon’s new P3 instances are powered by up to eight NVIDIA Tesla V100 GPUs, paired with custom Intel Xeon E5-2686 v4 processors clocked at up to 2.7 GHz. Each V100 GPU contains 5,120 CUDA cores along with 640 Tensor cores, which together provide several tiers of floating-point performance: up to 125 TFLOPS of mixed-precision, 15.7 TFLOPS of single-precision, and 7.8 TFLOPS of double-precision throughput.
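As a back-of-the-envelope illustration (a sketch based on the per-GPU figures above, not an official benchmark), the aggregate peak throughput of an eight-GPU instance scales linearly with GPU count:

```python
# Per-GPU peak throughput for the NVIDIA Tesla V100, in TFLOPS
MIXED_PRECISION = 125.0   # Tensor-core mixed precision
SINGLE_PRECISION = 15.7
DOUBLE_PRECISION = 7.8

def aggregate_tflops(num_gpus: int, per_gpu_tflops: float) -> float:
    """Peak aggregate TFLOPS across all GPUs in an instance."""
    return num_gpus * per_gpu_tflops

# The largest P3 instance carries 8 GPUs
for label, per_gpu in [("mixed", MIXED_PRECISION),
                       ("single", SINGLE_PRECISION),
                       ("double", DOUBLE_PRECISION)]:
    print(f"{label}-precision: {aggregate_tflops(8, per_gpu):.1f} TFLOPS")
```

Real workloads will fall well short of these theoretical peaks, but the numbers give a sense of the headroom on offer: roughly a petaflop of mixed-precision compute in a single instance.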
Available in three different sizes, the new P3 instances should be able to fit the computing needs of many organizations that require reliable high-performance computing at scale. The smallest of the new P3 instances features one NVIDIA Tesla V100 GPU, the mid-tier instance features four NVIDIA GPUs, and the largest instance features eight NVIDIA GPUs.
In its announcement, Amazon provided the following chart that details the full specifications of the latest EC2 P3 instances:
| Model | NVIDIA Tesla V100 GPUs | GPU Memory | NVIDIA NVLink | vCPUs | Main Memory | Network Bandwidth | EBS Bandwidth |
|---|---|---|---|---|---|---|---|
| p3.2xlarge | 1 | 16 GiB | n/a | 8 | 61 GiB | Up to 10 Gbps | 1.5 Gbps |
| p3.8xlarge | 4 | 64 GiB | 200 GBps | 32 | 244 GiB | 10 Gbps | 7 Gbps |
| p3.16xlarge | 8 | 128 GiB | 300 GBps | 64 | 488 GiB | 25 Gbps | 14 Gbps |
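For illustration, the specifications in the chart can be captured in code to pick the smallest instance that meets a GPU requirement (a hypothetical helper for sizing decisions, not part of any AWS SDK):

```python
# P3 instance specifications from Amazon's announcement (memory in GiB)
P3_INSTANCES = {
    "p3.2xlarge":  {"gpus": 1, "gpu_mem": 16,  "vcpus": 8,  "ram": 61},
    "p3.8xlarge":  {"gpus": 4, "gpu_mem": 64,  "vcpus": 32, "ram": 244},
    "p3.16xlarge": {"gpus": 8, "gpu_mem": 128, "vcpus": 64, "ram": 488},
}

def smallest_instance_for(min_gpus: int) -> str:
    """Return the smallest P3 instance type with at least min_gpus GPUs."""
    # dicts preserve insertion order, so iteration goes smallest to largest
    for name, spec in P3_INSTANCES.items():
        if spec["gpus"] >= min_gpus:
            return name
    raise ValueError(f"no P3 instance offers {min_gpus} GPUs")

print(smallest_instance_for(2))  # p3.8xlarge
```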
The two largest instances (p3.8xlarge and p3.16xlarge) feature an NVIDIA NVLink 2.0 connection, which links the instance’s GPUs directly and enables high-speed data transfer between them. Because the data does not have to travel through other hardware (the CPU and the PCI-Express fabric), users can see GPU-to-GPU transfer rates of up to 300 GBps.
For those wishing to run one of AWS’s new P3 instances, NVIDIA’s CUDA 9 and cuDNN 7 drivers and libraries are required. The good news is that these have already been added to the latest versions of the Windows AMI and will also be included in an upcoming version of the Amazon Linux AMI due to be released on November 7th. However, the drivers can be installed on an existing Amazon Linux AMI sooner, as they are already available as new packages in Amazon’s repositories. Users can also access the new P3 instances via the AWS Deep Learning AMIs, which come preinstalled with software that supports NVIDIA’s Tesla V100 GPUs.
While Amazon’s new EC2 P3 instances are not yet available in all regions, at the time of writing they can be launched in the US East, US West, EU, and Asia Pacific regions.
As more and more data is produced, organizations need a way to process and analyze it without large data sets or sudden influxes of data straining their computing resources. With Amazon’s new EC2 P3 instances, organizations that require a great deal of processing power can continue to collect and analyze data without worrying about bogging down their computational resources.