Vancouver, BC
May 21-24, 2018

Event Details

Please note: All times listed below are in Pacific Time (local to Vancouver)


High Performance Ceph for Hyper-converged Telco NFV Infrastructure

The next wave of transformation for Telco Cloud is hyper-converged NFV infrastructure (NFVi), which is expected to bring cost efficiency and scale.  Storage is a key piece of any hyper-converged platform, and new solid-state storage technologies such as Intel Optane are shifting the balance between compute and storage budgets on these platforms.  Intel and its Telco partners are collaborating on new ways to leverage these technologies in Ceph to enable a low-latency, high-performance OpenStack platform for Telco NFV applications.  Join the team to hear about our contributions to Ceph and OpenStack-Helm and take away the basics of tuning hyper-converged Ceph.


What can I expect to learn?

You'll hear about our contributions to Ceph and OpenStack-Helm and take away the basics of tuning hyper-converged Ceph.  We'll cover:

• NFV infrastructure, hyper-convergence, and Ceph block storage

• Performance insights gathered from our hyper-converged Ceph PoC:
  • Hyper-converged considerations for Ceph – resource partitioning and NUMA placement (a sketch follows this list)
  • RDMA-based Ceph messenger for lower Ceph CPU utilization and improved latency
  • Intel Optane P4800X for improving tail latency
  • Performance results, VM scalability, and QoS insights

• Kubernetes-managed OpenStack with OpenStack-Helm
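
To make the resource-partitioning idea concrete, here is a minimal sketch, assuming a two-socket Linux host where the Optane/NVMe devices sit on NUMA node 0: it pins running ceph-osd daemons to that node's CPUs and leaves the other socket for VM workloads. This is illustrative only, not the presenters' tooling; production deployments more commonly express the same policy through numactl, systemd CPUAffinity=, or OpenStack-Helm chart values.

    # Hypothetical sketch (not the presenters' tooling): pin ceph-osd daemons to
    # the CPUs of one NUMA node so the other socket stays free for VM workloads.
    # Assumes Linux and that NUMA node 0 is the storage-local node.
    import os
    import subprocess

    NUMA_NODE = 0  # assumption: the Optane/NVMe devices are attached to socket 0

    def node_cpus(node):
        """Parse the kernel's cpulist (e.g. '0-7,16-23') into a set of CPU ids."""
        with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
            cpus = set()
            for part in f.read().strip().split(","):
                if "-" in part:
                    lo, hi = part.split("-")
                    cpus.update(range(int(lo), int(hi) + 1))
                else:
                    cpus.add(int(part))
            return cpus

    def osd_pids():
        """PIDs of running ceph-osd processes, discovered with pgrep."""
        out = subprocess.run(["pgrep", "-x", "ceph-osd"],
                             capture_output=True, text=True)
        return [int(p) for p in out.stdout.split()]

    if __name__ == "__main__":
        cpus = node_cpus(NUMA_NODE)
        for pid in osd_pids():
            os.sched_setaffinity(pid, cpus)  # keep each OSD on NUMA-local cores
            print(f"pinned ceph-osd {pid} to CPUs {sorted(cpus)}")

The complementary CPU set would then be reserved for the hypervisor/VM side, so storage and compute never contend for the same cores.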

Monday, May 21, 11:35am-12:15pm (6:35pm - 7:15pm UTC)
Difficulty Level: Intermediate
Solutions Architect
Accomplished systems architect with over 15 years' experience in high-end virtualized computing environments, Service Provider Mobility, Linux and open-source solutions, and complex software-defined storage infrastructure. Highly developed, comprehensive skill set with special emphasis on Open Source Software Projects, Agile DevOps, CI/CD, Platform as a Service (PaaS), and...
Intel
Tushar is a Principal Engineer - Software Architect with Intel's Data Center Group.  He has been working on open-source networking and storage technologies for over a decade; his recent contributions have been to Ceph, OpenStack Swift, Intel's Storage Performance Development Kit (SPDK), and networking in the Linux kernel. Prior to joining Intel, Tushar was a lead...
Principal Inventive Scientist
Moo-Ryong Ra joined AT&T Labs Research in 2013 and is now working as a Principal Inventive Scientist in the Cloud Platform Software Research department. He is broadly interested in solving challenging problems related to cloud platforms and software-defined storage systems. His recent projects span multiple areas – storage-centric hardware/software...
Comments
4 Reviews
Posted: 2483 days ago
Hi, thanks for an interesting talk. I didn't make it in person so was hoping to ask a follow up question here... You talked about putting quotas on Ceph OSD daemon CPU usage but didn't explain how this was implemented, I'm guessing you used cgroups? If that's the case, and furthermore assuming you used CPU share as opposed to hard CPU time limits, then how did you ensure the system was loaded enough for CPU share to impact the Ceph OSDs in the proportions tested? Or perhaps I misunderstood due to the use of the word "quota" and in fact you used CPU pinning / process affinity (though when you mentioned this it seemed to be more about NUMA optimisation)...? Thanks, b1airo
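
As background to the distinction the commenter raises, here is a hedged cgroup-v1 sketch (an illustration only, not the presenters' actual setup): cpu.shares is a relative weight that only constrains the OSDs when other cgroups are competing for CPU, whereas cpu.cfs_quota_us imposes a hard cap that applies even on an otherwise idle host.

    # Illustration only -- not the presenters' setup. Shows the two different
    # cgroup-v1 CPU controls discussed above for a group holding all ceph-osd
    # processes. Paths and values are assumptions.
    import os

    CG = "/sys/fs/cgroup/cpu/ceph-osd"  # assumed cgroup for all OSD daemons

    def write(knob, value):
        with open(os.path.join(CG, knob), "w") as f:
            f.write(str(value))

    os.makedirs(CG, exist_ok=True)

    # Proportional limit: roughly 1/4 of the default weight (1024), but it only
    # bites when other cgroups are actually contending for CPU.
    write("cpu.shares", 256)

    # Hard limit: at most 4 cores' worth of CPU time per 100 ms period,
    # regardless of how idle the rest of the host is.
    write("cpu.cfs_period_us", 100000)
    write("cpu.cfs_quota_us", 400000)

    # Each ceph-osd PID would then be added to the group, e.g.:
    # write("tasks", osd_pid)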