Highly Scalable/Available Serverless Architecture in AWS



 

Draft, Still Under Review!

 

With the services and features Amazon has launched recently, it is now possible to build a highly scalable and fault-tolerant serverless solution using the following AWS services. In this article I am going to describe how such a solution should be architected, and I will also include some code snippets as a proof of concept to help you get started quickly!

  • S3 (using CloudFront edge optimization)
  • AWS API Gateway (edge optimized, caching ability)
  • Lambda@Edge (using CloudFront edge optimization, caching ability)
  • DynamoDB (Global Tables with multi-master replication, DAX clusters)

 

S3 and CloudFront: 

 

Using Simple Storage Service (S3), you can host the static files generated after packaging your Angular application, for example, or just static HTML with some jQuery that calls your APIs. S3 is highly scalable and replicates your files across multiple Availability Zones (AZs), but this replication happens within a single region: if that region goes down, your whole system is down. Suppose also that you need to deploy your application worldwide, with your services accessible from different regions. If a customer in Germany tries to access your site while your S3 bucket is hosted on the US west coast (e.g. Oregon, “us-west-2”), each HTTP request/response will take a long time because the bucket is far from the customer.
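As a minimal sketch of the S3 static-hosting side, the snippet below builds the public-read bucket policy that a static website bucket needs. The bucket name "my-static-site" is a placeholder, not something from this article:

```python
import json

# Sketch: build a public-read bucket policy for S3 static website hosting.
# "my-static-site" is a placeholder bucket name.
def make_static_site_policy(bucket_name):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

policy_json = json.dumps(make_static_site_policy("my-static-site"))
# This JSON document would then be attached to the bucket, e.g. via
# the console or boto3's put_bucket_policy.
```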

 

So, would it be better to “replicate or copy” your files or resources to a region (not an AZ) near your customer’s location so they can be accessed quickly? “Replicate” is not quite the right word here; I used it only to clarify the concept. More precisely, you want to “cache” your resources at the edge location nearest to your customer. This is where CloudFront comes in: it acts as a caching proxy in front of an S3 bucket that sits far away from your customer. When a customer requests a resource, CloudFront first checks whether the nearest edge location holds a copy; if not, it fetches the resource from the origin and caches it there for any further access by the same or a different customer. Edge locations are distributed around the world, but you can still choose exactly where you want your CloudFront distribution, aka CDN, to serve your content from.
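The edge-caching behavior described above can be modeled with a toy cache-aside sketch (a simplification, not CloudFront's actual implementation): serve from the edge cache on a hit, otherwise fetch from the origin and keep a copy with a TTL.

```python
import time

# Toy model of a CloudFront edge cache: serve from the cache on a hit,
# otherwise fetch from the origin (S3 here) and cache the copy with a TTL.
class EdgeCache:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def get(self, key, fetch_from_origin):
        now = time.time()
        entry = self.store.get(key)
        if entry and entry[1] > now:      # cache hit, not yet expired
            return entry[0], "HIT"
        value = fetch_from_origin(key)    # cache miss: go to the origin
        self.store[key] = (value, now + self.ttl)
        return value, "MISS"

edge = EdgeCache()
origin = {"index.html": "<html>hello</html>"}
_, first = edge.get("index.html", origin.__getitem__)   # first request: origin
_, second = edge.get("index.html", origin.__getitem__)  # second: edge copy
```

On the first request the edge has nothing cached, so it goes back to the origin; every subsequent request within the TTL is served from the edge, which is exactly why latency drops for nearby customers.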

 

Now our customers can access resources very quickly, as long as those resources are cached at the nearest edge. We should also not ignore the fact that resources need to be accessed regularly for CloudFront to keep them in its cache; otherwise they may be evicted due to idleness.
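One simple way to keep critical resources warm, sketched below under my own assumptions (the path list and distribution domain are placeholders), is a small job that periodically requests them, e.g. from a scheduled Lambda:

```python
# Sketch of a cache "keep-warm" job: periodically request critical paths so
# the edge keeps a fresh copy. Paths and the domain below are placeholders.
CRITICAL_PATHS = ["/index.html", "/app.js", "/styles.css"]

def warm_urls(distribution_domain, paths=CRITICAL_PATHS):
    return [f"https://{distribution_domain}{p}" for p in paths]

# A scheduler (e.g. a CloudWatch Events rule firing a Lambda) would then
# fetch each URL, which counts as an access and refreshes the cached copy:
# import urllib.request
# for url in warm_urls("d111111abcdef8.cloudfront.net"):
#     urllib.request.urlopen(url)
```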

 

We should also distribute our items evenly in S3 by choosing random key prefixes, to avoid overloading specific servers on the S3 side while leaving others idle. Remember that S3 distributes your files among different servers using a consistent-hashing technique, conceptually the same one used in DynamoDB: the object key (file name) is hashed, and the hash determines which partition the file is stored in.
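To illustrate the randomized-prefix idea, the sketch below prepends a short random hex prefix to each key and uses an MD5 hash as a stand-in for S3's internal partitioning (the partition count and prefix length are made up for the demo):

```python
import hashlib
import uuid

# Sketch: prepend a short random prefix so key hashes (and hence the
# partitions they map to) spread evenly, instead of clustering on a
# sequential or date-based prefix.
def randomized_key(original_name):
    prefix = uuid.uuid4().hex[:4]  # e.g. "9f2a"
    return f"{prefix}/{original_name}"

def partition_of(key, partitions=8):
    # Stand-in for S3's internal hashing: bucket keys into N partitions.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % partitions

keys = [randomized_key(f"report-2017-{i:04d}.csv") for i in range(1000)]
spread = {partition_of(k) for k in keys}  # randomized prefixes hit all partitions
```

With sequential names like `report-2017-0001.csv`, hot traffic tends to concentrate; randomizing the prefix spreads the same 1000 keys across every partition in the demo.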

 

AWS API Gateway (Edge Optimized, Caching Ability): 

 

API Gateway gains new features almost every day. It is really convenient to deploy your API and have a CloudFront distribution launched in front of it as a CDN, which AWS calls an edge-optimized endpoint; the architecture below shows how your API is proxied by CloudFront for content distribution. Not only that, you can also spin up a backend caching layer for your API that works out of the box, without having to update the cache cluster’s OS or install any software. You can choose how much cache storage you need, from 0.5 GB up to 237 GB, based on your requirements.

As shown in the screenshot below, you can add caching for a specific stage at any time by navigating to the stage via the left-side pane of the API Gateway console. This is especially useful when you have a production environment whose customers benefit from the lower latency and reduced backend load that a caching layer brings, while in a development stage you can skip it, unless you are running benchmarks or load tests.

Please note that enabling the API cache costs money and is not covered by the free tier. You also have the option to flush all contents from the cache and to scale the cache capacity up or down at any time based on the workload.
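The same stage-cache settings can be applied programmatically. As a sketch, the function below builds the patch operations that API Gateway's UpdateStage call expects for enabling a cache cluster; the API/stage IDs in the comment are placeholders, and the call itself is not executed here:

```python
# Sketch: patch operations for API Gateway's UpdateStage call to enable a
# stage cache. Values are strings per the API; the size must be one of the
# supported cluster sizes (0.5 up to 237 GB).
def cache_patch_operations(enabled=True, size_gb="0.5"):
    return [
        {"op": "replace", "path": "/cacheClusterEnabled",
         "value": "true" if enabled else "false"},
        {"op": "replace", "path": "/cacheClusterSize", "value": size_gb},
    ]

# With boto3 this would be applied as (placeholder IDs, not executed here):
# apigw = boto3.client("apigateway")
# apigw.update_stage(restApiId="abc123", stageName="prod",
#                    patchOperations=cache_patch_operations())
# and the cache flushed with:
# apigw.flush_stage_cache(restApiId="abc123", stageName="prod")
```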

 

 

 

 

