Traditionally, high-performance science and data analytics workloads have run on specialized clusters tailored to a small number of applications. OpenStack offers a compelling alternative: a cloud platform that accommodates a wider range of applications than specialized clusters while also reducing costs, by running steady-state workloads on-premise and providing the ability to “burst” into a variety of off-premise clouds. This session details technology and architectures to optimize OpenStack for technical and data-intensive computing, as well as techniques for tailoring HPC workloads for better portability and performance on private and public cloud platforms. Lastly, we cover case studies that demonstrate how research organizations leverage OpenStack to address these issues in pursuit of their life-changing missions. Topics include:
- Topologies for both high-speed/low-latency communication among instances and high-throughput transfers between instances and shared storage.
- Enabling a “Science DMZ” for high-throughput, wide-area data transfers.
- The use of IPv6 for instance communication and how it addresses certain limitations.
- The use of Ironic to support containers, special hardware, and bare metal workloads.
- Using resource schedulers such as Slurm and Mesos to enable meta-scheduling between specialized clusters, an on-premise OpenStack cloud, and off-premise clouds.
- Hybrid computing between private and public clouds (e.g. AWS).
- Techniques and technologies for providing scale-out file, block, and object storage.
- Techniques for tailoring HPC workloads for maximum portability and performance on cloud platforms such as OpenStack.
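To make the meta-scheduling idea above concrete, here is a minimal, hypothetical sketch of the “burst” placement decision such a meta-scheduler might make. All names (`Job`, `place_job`, the placement labels) are illustrative assumptions, not the API of Slurm, Mesos, or any real scheduler; a production system would query live cluster state rather than take it as an argument.

```python
# Hypothetical sketch of a meta-scheduler's burst decision: prefer on-premise
# capacity for steady-state work, and overflow loosely coupled jobs off-premise.
# These names are illustrative, not from Slurm, Mesos, or OpenStack.
from dataclasses import dataclass


@dataclass
class Job:
    cores: int                 # cores requested by the job
    latency_sensitive: bool    # e.g. tightly coupled MPI vs. embarrassingly parallel


def place_job(job: Job, free_onprem_cores: int) -> str:
    """Return a placement label for the job given free on-premise capacity."""
    if job.cores <= free_onprem_cores:
        # Steady-state workloads run on-premise, where they are cheapest.
        return "onprem"
    if job.latency_sensitive:
        # Tightly coupled jobs wait for the local low-latency interconnect
        # rather than paying WAN latency between cloud instances.
        return "queue-onprem"
    # Throughput-oriented jobs burst to an off-premise cloud (e.g. AWS).
    return "burst-offprem"


if __name__ == "__main__":
    print(place_job(Job(cores=64, latency_sensitive=False), free_onprem_cores=128))
    print(place_job(Job(cores=256, latency_sensitive=True), free_onprem_cores=128))
    print(place_job(Job(cores=256, latency_sensitive=False), free_onprem_cores=128))
```

The key design point, reflected in the session's case studies, is that the decision weighs both capacity and communication pattern: only work that tolerates cloud interconnect latency is a good candidate for bursting.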