Cloud Architecture Patterns @ Book
I recently got interested in cloud technologies after watching videos about parallel/concurrent processing. The scalable-application patterns are becoming more important lately, along with the surge of cloud platform providers.
This book mainly uses Windows Azure for its concrete examples, but the fundamentals should apply to most providers.
“Sticky sessions” can be used to keep assigning the same server to a given user. However, the recommended cloud-native approach is stateless nodes, which scale horizontally well.
- Amazon EC2’s Elastic Load Balancing provides sticky sessions (New Elastic Load Balancing Feature: Sticky Sessions).
- Windows Azure provides a way to use sticky sessions, but it’s not a great fit, and moving to a stateless architecture is recommended (Sticky Sessions and Windows Azure).
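To make the stateless-node idea concrete, here is a minimal sketch (my own illustration, not from the book): session state lives in an external store rather than on any one web node, so the load balancer is free to send each request to a different node. The in-memory dict stands in for a real distributed store such as Redis or Azure Table Storage.

```python
# Hypothetical stand-in for an external session store (Redis, Table Storage, ...).
shared_session_store = {}

def handle_request(node_id, session_id, item=None):
    """Any node can serve any request, because state lives outside the node."""
    session = shared_session_store.setdefault(session_id, {"cart": []})
    if item is not None:
        session["cart"].append(item)
    return f"node {node_id} sees cart: {session['cart']}"

# Two different nodes serve the same user; both see the same state,
# so no sticky routing is needed.
print(handle_request(1, "user-42", "book"))
print(handle_request(2, "user-42"))
```

With sticky sessions, `node 2` would have seen an empty cart; with externalized state it sees the same cart as `node 1`, which is exactly what lets the tier scale horizontally.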
The queue workflow pattern can be used to decouple layers (e.g. the presentation layer and the service layer). One consideration is the Idempotent Component: it is easy to prescribe, but not always easy to implement.
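A common way to get idempotence with at-least-once queues is to deduplicate by message ID before applying side effects. A minimal sketch (my illustration; the ID set would need to be durable in a real system):

```python
import queue

processed_ids = set()          # would be a durable store in production
account_balance = {"alice": 0}

def process_message(msg):
    """Idempotent consumer: processing the same message twice has no extra effect."""
    if msg["id"] in processed_ids:
        return  # duplicate delivery, which at-least-once queues permit; skip it
    account_balance[msg["account"]] += msg["amount"]
    processed_ids.add(msg["id"])

q = queue.Queue()
q.put({"id": "m1", "account": "alice", "amount": 10})
q.put({"id": "m1", "account": "alice", "amount": 10})  # same message, redelivered
while not q.empty():
    process_message(q.get())
print(account_balance["alice"])  # → 10, not 20
```

The hard part in practice is making the "check ID, apply effect, record ID" sequence atomic against crashes, which is why the pattern is easy to prescribe but not always easy to implement.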
The auto-scaling pattern is useful for optimizing cost with limited effort. The trigger can be schedule-based, or dynamic logic based on metrics like “average queue length”.
- Auto-scaling shouldn’t be too responsive to workload changes. Each provider bills by a certain clock interval, such as 30 minutes or one hour, so check the billing documentation first.
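A dynamic trigger on the “average queue length” metric might look like this sketch (the thresholds and node limits are hypothetical; tune them per workload):

```python
import math

def target_node_count(avg_queue_length, msgs_per_node=100,
                      min_nodes=2, max_nodes=20):
    """Map the average-queue-length metric to a worker node count.

    msgs_per_node is a hypothetical capacity estimate: how much backlog
    one node can comfortably drain within the billing interval.
    """
    desired = math.ceil(avg_queue_length / msgs_per_node)
    return max(min_nodes, min(desired, max_nodes))

print(target_node_count(0))     # → 2  (never scale below the floor)
print(target_node_count(550))   # → 6
print(target_node_count(9999))  # → 20 (capped at the ceiling)
```

Per the note above about billing intervals, a real controller would also scale down conservatively, e.g. only releasing a node near the end of its already-paid-for clock hour.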
Database sharding provides a way to separate data into multiple nodes.
- In Windows Azure, “SQL Azure” provides sharding through “federations”. Alternatively, the NoSQL Table Storage service can be an option.
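The core of sharding is deterministically routing each record to one node by a sharding key. A minimal hash-based sketch (my illustration; SQL Azure federations actually use range partitioning on the federation key, but hashing is the simplest way to show the routing idea):

```python
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]  # hypothetical DB nodes

def shard_for(key: str) -> str:
    """Route a record to a shard by hashing its sharding key.

    Hashing the key (rather than using it directly) spreads sequential
    keys like customer IDs evenly across the nodes.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("customer-1001"))  # always the same shard for the same key
```

Routing must be stable: every reader and writer computes the same shard for the same key, so related data stays together without any central lookup.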
The cloud-native world tends to focus more on MTTR than MTBF. Many cloud services use commodity hardware with a certain error rate. However, that doesn’t mean your service has to fail frequently: a hardware failure should impact only a small fraction of it.
Busy signals need to be separated from error signals. The basic approach to a busy signal is simply to retry. The least aggressive approach is to retry with increasing delays (the delay can grow linearly or exponentially).
- Needing a retry is not so uncommon. One statistic from Azure: among 1 million 4 MB file uploads, around 1.8% of the files required at least one retry.
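The retry-with-increasing-delays idea can be sketched as exponential backoff that only retries busy signals, while real errors propagate immediately (all names here are my own; the book doesn’t prescribe a specific implementation):

```python
import random
import time

class BusySignal(Exception):
    """A transient 'busy' response (e.g. throttling / HTTP 503), worth retrying."""

def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry busy signals with exponentially increasing delays.

    Only BusySignal is retried; any other exception is a real error and
    propagates immediately, keeping busy and error signals separate.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except BusySignal:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # 0.5s, 1s, 2s, 4s, ... plus small jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Usage: an operation that reports busy twice, then succeeds.
attempts = {"n": 0}
def flaky_upload():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise BusySignal()
    return "uploaded"

print(call_with_backoff(flaky_upload, base_delay=0.01))  # → uploaded
```

A linear variant would use `base_delay * (attempt + 1)` instead; exponential backoff backs off faster when the service stays busy, which is the less aggressive choice.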