Thursday, February 24, 2011

Amazon S3 Webinar on New S3 Features

This morning I attended Amazon's webinar to learn about their new S3 features. Their newest feature, launched last week, allows you to set a bucket's default root object (i.e. index.html) and to set a default HTML error page to return for 4xx HTTP status codes. These features make it easier to host a static website on S3.
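To make the two settings concrete, here's a sketch of the website configuration the feature boils down to, expressed as the structure the S3 API accepts (the bucket name is a placeholder, and the SDK call is shown only as a comment):

```python
# The two settings the new feature exposes: a default root object and a
# default error document (bucket name below is a placeholder).
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},  # default root object
    "ErrorDocument": {"Key": "error.html"},     # returned for 4xx errors
}

# With an AWS SDK such as boto3, applying it would look roughly like:
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="example-bucket",
#     WebsiteConfiguration=website_configuration,
# )
```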

Actually watching someone walk through the Amazon console was very helpful, and I learned a few things to keep in mind.

1. If you set a bucket's default object to index.html, then the default object in each of the bucket's subfolders must have the same name (i.e. index.html). For example, to serve a default object for a subfolder, you'll need an object named subfolder/index.html.

2. Don't forget to make website objects public so that they can be read by anyone on the Web. You can set a bucket's default upload policy to public and then, later, explicitly set a specific object as private so that it will only be served to a user after authentication (i.e. a digitally signed URL with an expiration, referrer, or specific IP address).

3. The new feature maintains backward compatibility, so the API still returns XML when accessing a bucket directly, yet it returns your default object when appropriate. Amazon accomplishes this by serving your bucket's root object from a slightly different endpoint URL for website requests. This is a technical detail that will be completely transparent to anyone configuring the new feature in the AWS console and, most importantly, it's elegant in that it fully maintains backward compatibility with their APIs.
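The naming rule in point 1 falls out of how the website endpoint maps a request path to an object key — the index suffix is simply appended to any path ending in a slash. A minimal sketch of that mapping:

```python
def resolve_key(path, index_suffix="index.html"):
    """Map a request path (leading slash stripped) to the S3 object key
    the website endpoint would serve.

    The index suffix is appended to any path that ends in a slash, which
    is why every subfolder needs its own index.html object.
    """
    if path == "" or path.endswith("/"):
        return path + index_suffix
    return path

# resolve_key("") -> "index.html" (the bucket's root object)
# resolve_key("subfolder/") -> "subfolder/index.html"
# resolve_key("about.html") -> "about.html"
```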
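For point 2, a bucket policy is one way to make every website object publicly readable in a single stroke; individual objects can still be handed out privately via pre-signed URLs that carry an expiration. A sketch (the bucket name is a placeholder):

```python
import json

# A bucket policy granting anonymous read access to every object in the
# bucket (bucket name "example-bucket" is a placeholder).
public_read_policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "PublicReadForWebsite",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

policy_json = json.dumps(public_read_policy)
```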
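And for point 3, the backward compatibility comes from the same bucket being reachable at two different hostnames — the original REST endpoint, where a GET on the bucket still returns the XML key listing, and a separate website endpoint, where a GET returns your default object. A sketch (bucket name is a placeholder, and the region is assumed to be us-east-1 for illustration):

```python
bucket = "example-bucket"

# Original REST endpoint: GET on the bucket returns the XML key listing.
rest_endpoint = "http://" + bucket + ".s3.amazonaws.com/"

# Website endpoint: GET on the bucket returns the default root object.
website_endpoint = "http://" + bucket + ".s3-website-us-east-1.amazonaws.com/"
```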

Although Amazon S3 still does not allow you to configure an A record so that you can host a website at a domain's root, I got the impression that this feature will be available in the future. I'm speculating, but today's comment from Amazon was, "We're looking at ways that we can do that [host a domain's root without requiring a subdomain]."

In the meantime, Donovan Watts showed me his workaround last night, which he uses with Adjix and CloudFlare. It allows a domain's root, which normally must be a DNS A record, to be configured as a CNAME. Although I haven't tried his technique yet, I can see it in action with his short domain name.


px said...

I would venture a guess that the only way they will be able to efficiently host a root domain on S3, would be if you used their Route 53 DNS service and pointed your domain name to them for resolution. If they haven't thought of this, they should, and also they should lower the rates of the DNS service, perhaps even have a freebie level for Amazon Prime/Students.
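If Amazon did wire Route 53 to S3 this way, the record could look something like the change batch below — purely a speculative sketch, with the domain, hosted-zone ID, and endpoint all placeholders rather than anything Amazon has announced:

```python
# Hypothetical Route 53 change batch aliasing a root domain to an S3
# website endpoint. Every identifier here is a placeholder.
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",  # root domains must resolve as A records
            "AliasTarget": {
                "HostedZoneId": "ZEXAMPLE12345",  # placeholder zone ID
                "DNSName": "s3-website-us-east-1.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
```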

Joe Moreno (@JoeMoreno) said...


Wow, using Route 53 is an excellent suggestion - I hadn't considered that.

I was thinking that they'd need to load balance an IP address similar to how the DNS system load balances the 13 static IPs for the root servers. But, Route 53 could work nicely.

However they solve this problem - it's obviously the missing piece.

- Joe

sull said...

the other less desirable option is to set up a CNAME for the www subdomain to point to S3, and on your own server do a 301 redirect of non-www traffic to www.domain.tld.
i personally do not like using www. on my domains, but since you can still use non-www to save chars on twitter etc., and people do not usually look at the URL once they arrive at the destination... it's simply another approach worth mentioning.
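The 301 redirect described above can be sketched as a tiny handler — a minimal illustration, with the domain names as placeholders:

```python
def redirect_non_www(host, path):
    """Return an (http_status, location) pair that 301-redirects requests
    for the bare domain to the www subdomain (domains are placeholders)."""
    if host == "example.com":
        return 301, "http://www.example.com" + path
    return 200, None  # www traffic is served normally

# redirect_non_www("example.com", "/page")
#   -> (301, "http://www.example.com/page")
```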

tsharma said...

Amazon's new S3 Website feature is the right option for hosting a static website efficiently. Anyone can host their static website using the following tools: Bucket Explorer to set a bucket as an S3 website, and Route 53 for the DNS service.