Elasticity, resiliency and load-balancing are among the most-wanted features for services or applications deployed on a cloud. However, it takes non-trivial effort to integrate pieces of technology from different projects, and many subtle issues arise when users build such a solution in a DIY manner.
Just think about choosing alarms from the various events and meters you can access; the complicated process of hooking these alarms to the proper operations; the need to watch your scaling cluster for node failures; the requirement for load-balancing...
This talk is about a simple and straightforward approach to deploying and managing a cluster that can be auto-scaled, load-balanced and resilient to node failures. By seamlessly integrating the clustering, telemetry, load-balancing and orchestration services, we will show how Senlin serves as the pivot for a practical, easy way to build a resource pool of web services on OpenStack.
Key takeaways:
- An end-to-end solution integrating clustering, load-balancing, auto-scaling and auto-healing: Senlin works as the pivot, automatically hiding the complicated configuration and preparation steps from users (a minimal sketch of such a setup follows this list).
- Real-life design considerations when deploying and managing a resource pool: Senlin provides OpenStack-native resource models, such as Cluster, Node and Profile, designed with extensibility in mind.
- Configuration options to evaluate when customization or adaptation is needed: any behaviour, whether on scale-out/in or on node failure, can be customized. Placement across availability zones? No problem. Recovery by recreating or rebuilding nodes? No problem.
- Known limits of solutions based on current OpenStack services: lessons learned and points of attention collected from early trial users.
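To make the takeaways concrete, here is a minimal sketch of the kind of setup the talk walks through: a server profile, a cluster built from it, and scaling and health policies attached to it. It assumes the openstacksdk clustering proxy; the cloud name, flavor, image, network and the exact spec fields and method names are illustrative assumptions and should be checked against your Senlin and SDK versions, not a definitive recipe.

```python
import openstack

# Connect to the cloud; the cloud name "devstack" (from clouds.yaml) is an assumption.
conn = openstack.connect(cloud="devstack")

# Profile describing how each cluster node (a Nova server) is built.
profile = conn.clustering.create_profile(
    name="web-server",
    spec={
        "type": "os.nova.server",
        "version": "1.0",
        "properties": {
            "name": "web",
            "flavor": "m1.small",                    # illustrative flavor
            "image": "cirros",                       # illustrative image
            "networks": [{"network": "private"}],    # illustrative network
        },
    },
)

# Cluster of web servers built from the profile, with size bounds for scaling.
cluster = conn.clustering.create_cluster(
    name="web-cluster",
    profile_id=profile.id,
    desired_capacity=2,
    min_size=1,
    max_size=5,
)

# Scaling policy: add one node on each scale-out request.
scaling = conn.clustering.create_policy(
    name="scale-out",
    spec={
        "type": "senlin.policy.scaling",
        "version": "1.0",
        "properties": {
            "event": "CLUSTER_SCALE_OUT",
            "adjustment": {"type": "CHANGE_IN_CAPACITY", "number": 1},
        },
    },
)

# Health policy: poll node status and recreate failed nodes.
# Field layout may differ slightly across Senlin releases.
health = conn.clustering.create_policy(
    name="auto-heal",
    spec={
        "type": "senlin.policy.health",
        "version": "1.0",
        "properties": {
            "detection": {"type": "NODE_STATUS_POLLING", "options": {"interval": 60}},
            "recovery": {"actions": [{"name": "RECREATE"}]},
        },
    },
)

# Attach both policies so the cluster scales and heals automatically.
for policy in (scaling, health):
    conn.clustering.attach_policy_to_cluster(cluster, policy, enabled=True)
```

A load-balancing policy can be attached the same way, which is the pattern the talk uses to keep alarm wiring, scaling actions and recovery behaviour behind Senlin rather than configured by hand.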