Host your site on Amazon S3 and link it to your domain via Route 53



 

Introduction

 

You might wonder what the benefit of hosting a static website is nowadays, when we have plenty of technologies and languages for building dynamic sites. It is still useful, though: for landing pages announcing an upcoming event or a site that is about to launch, and as a recovery solution, since we can configure DNS to fail over automatically to a static site when the main website goes down. If you look closely, many dynamic sites are essentially static pages plus JavaScript that fetches resources from servers and updates portions of the page dynamically. Thanks to Amazon S3, which is highly available by design, your files are replicated to avoid data loss and to stay reachable during failures, and storage scales automatically under heavy load. Instead of provisioning new instances and worrying about scalability, fault tolerance, and high availability, we can rely on S3.

 

 

Definitions 

(if you are familiar with AWS, you can skip this section)

 

Region: a geographic area containing a collection of independent, isolated data centers, known in the AWS world as Availability Zones.

Amazon S3: short for Amazon Simple Storage Service. It is typically used to store static content; you can think of it as a big disk in the cloud.

Bucket: a container that holds your files and directories inside S3. Bucket names are globally unique in AWS; you can say they are like subdomain names within AWS. A bucket's content resides in a specific region and is replicated across the availability zones within that region. EC2 instances accessing S3 buckets in the same region usually incur no data transfer costs. By default, all objects uploaded to a bucket are private and not publicly accessible.

Route 53: the number 53 refers to the port used by DNS. What is DNS? It stands for the Domain Name System, which maps human-readable website names to the IP addresses of physical servers, exactly like a standard phone book. For example, when you type http://example.com, DNS routes that name to an IP address such as 10.34.22.188.

Hosted Zone: a container for the DNS records that link your domain to your servers. Think of it as your own phone book, published to the internet through Amazon's DNS servers.

Record Set: the actual mapping, or route, between a site or domain name and the servers behind it.

 

Let’s Start

We will start from scratch: create buckets, host files in S3, and then integrate with the Route 53 service.

 

First Bucket (example.com)

 

1. Open your AWS console and go to “S3” from “Services” in the top left corner.

2. Create a new bucket and name it after your site, e.g. msoliman.me (the bucket name MUST be the same as your website name for this to work).

 

3. Go to properties and, under the “Static website hosting” section, toggle “Use this bucket to host a website”, then specify only the index (default) HTML document.
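
If you prefer to script Steps 2 and 3 instead of clicking through the console, here is a minimal sketch using Python and boto3; the bucket name and region are placeholders, so substitute your own:

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Step 2: create the bucket; its name must match the website name.
s3.create_bucket(
    Bucket="msoliman.me",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Step 3: enable static website hosting with index.html as the default document.
s3.put_bucket_website(
    Bucket="msoliman.me",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)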

 

4. Copy the endpoint of this S3 bucket, shown in the “Static website hosting” panel, and keep it to use in Step (7).

 

5. Upload a simple HTML file with a welcome message to your new bucket, for testing purposes.

 

<!DOCTYPE html>
<html>
<head>
<title>My website</title>
</head>
<body>
<h2>Welcome to my website</h2>
</body>
</html>

 

Please note that while uploading the object you should set the group permission to allow read for “Everyone” under “Step 2: Set permissions” to avoid access-denied errors (in other words, you are making this object public and accessible by everyone). You can then press “Upload” without going through the remaining steps.
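
The upload can be scripted as well; this sketch assumes the bucket from the previous steps and an index.html in the current directory, and applies the same public-read permission described above:

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Upload index.html and make it publicly readable, the scripted
# equivalent of granting "Everyone" read access in the console.
with open("index.html", "rb") as f:
    s3.put_object(
        Bucket="msoliman.me",
        Key="index.html",
        Body=f,
        ContentType="text/html",
        ACL="public-read",
    )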

 

 

 

Sites have a mandatory naked domain name, meaning the name without any subdomain, like “example.com”. Often, though, we also want the well-known “www” subdomain, “www.example.com”, so we can optionally create a second bucket to serve it. We do not want to duplicate the files between the first bucket serving the naked domain and the second one serving “www.example.com”, so instead we will take a different approach: we will redirect all requests coming to the new bucket “www.example.com” to the original bucket “example.com” (the first bucket).

 

Second Bucket (www.example.com)

 

6. Create a new bucket and name it “www.example.com”, in the same way as described in Step (2).

 

7. Go to properties and configure static website hosting as follows: choose “Redirect requests”, paste the endpoint of the first bucket that you copied in Step (4) into “Target bucket or domain”, and set the protocol to “http”.

HINT: “Target bucket or domain” should NOT contain “http://”; it should look like “msoliman.me.s3-website-us-west-2.amazonaws.com”.
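
Step 7 can also be done programmatically; a minimal sketch, assuming the same placeholder bucket names and region as above:

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Redirect every request that reaches the "www" bucket to the first
# bucket's website endpoint, over plain HTTP.
s3.put_bucket_website(
    Bucket="www.msoliman.me",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "msoliman.me.s3-website-us-west-2.amazonaws.com",
            "Protocol": "http",
        }
    },
)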

 

DNS Configuration (Route 53)

 

8. Now it is time to configure DNS. From “Services”, go to “Route 53” under the “Networking & Content Delivery” section in the first column.

9. Create a new hosted zone and name it with your site name, e.g. msoliman.me.

10. Open the hosted zone and create a new “Record Set” of type “A”, aliased to your S3 bucket (Alias = Yes); a scripted version is sketched after the hints below.

Hints:

  • Route 53 will only display the buckets whose names match your record set name
  • You have to create and configure record sets for both example.com and www.example.com
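
For reference, the same alias record can be created with boto3. This is a minimal sketch under assumptions: YOUR_HOSTED_ZONE_ID is a placeholder for the zone created in Step (9), and Z3BJ6K6RIION7M is the fixed hosted zone ID Amazon publishes for S3 website endpoints in us-west-2 (it differs per region, so look yours up):

import boto3

route53 = boto3.client("route53")

# Create an A record for the naked domain, aliased to the S3 website
# endpoint of the matching bucket.
route53.change_resource_record_sets(
    HostedZoneId="YOUR_HOSTED_ZONE_ID",  # placeholder: the zone from Step (9)
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "msoliman.me.",
                "Type": "A",
                "AliasTarget": {
                    # Fixed zone ID for S3 website endpoints in us-west-2;
                    # use the ID published for your own region.
                    "HostedZoneId": "Z3BJ6K6RIION7M",
                    "DNSName": "s3-website-us-west-2.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)

Repeat the same call for “www.msoliman.me”, aliasing it to the “www” bucket.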

           

Now you can browse your site at either address:

http://msoliman.me

http://www.msoliman.me

The practical takeaway from this simple exercise is that we can host our static files (CSS, images, and JavaScript) in one bucket, or even in separate buckets, and use Route 53 to alias different subdomains to the bucket(s) that hold our files. In other words, instead of long, ugly URLs for our static files, we can shorten those URLs using DNS records. Suppose we have three buckets, as below:

 

  1. Images bucket (imgs.msoliman.me) with endpoint http://imgs.msoliman.me.s3-website-us-west-2.amazonaws.com
  2. JavaScript bucket (js.msoliman.me) with endpoint http://js.msoliman.me.s3-website-us-west-2.amazonaws.com
  3. CSS bucket (css.msoliman.me) with endpoint http://css.msoliman.me.s3-website-us-west-2.amazonaws.com

 

Instead of having our web pages link to those long URLs, we can apply the same concept to map subdomain names to those long endpoints by creating a record set for each one:

 

  1. “imgs.msoliman.me” record has an alias to the bucket named “imgs.msoliman.me”
  2. “css.msoliman.me” record has an alias to the bucket named “css.msoliman.me”
  3. “js.msoliman.me” record has an alias to the bucket named “js.msoliman.me”

 

Please note that if we host our JavaScript files in a different bucket or subdomain, we have to add our website as an allowed origin via the bucket's CORS configuration; otherwise, browsers will block those JavaScript files from being loaded or executed on the main website, for security reasons. As shown in the sketch below, we grant any subdomain under “msoliman.me” access to the files in this bucket or subdomain, “js.msoliman.me”.
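
Here is a minimal sketch of such a CORS rule applied with boto3; the wildcard origin is an assumption matching the “any subdomain under msoliman.me” description, and the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Allow any subdomain of msoliman.me to fetch objects from the JS bucket.
s3.put_bucket_cors(
    Bucket="js.msoliman.me",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://*.msoliman.me"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)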

 

 

One more step is worth mentioning here, which I might add to this article later: caching. We can put a CloudFront distribution in front of the S3 buckets to keep cached copies of your content close to your customers, for better performance and read throughput.

Summary

We have created two buckets named “example.com” and “www.example.com” with static website hosting enabled on both. The first holds the original content (the HTML files) and the second redirects all requests to the first. We then linked the record sets inside our hosted zone to the buckets we created.

 

