Actually watching someone walk through the Amazon console was very helpful and I learned a few things to keep in mind.
1. If you set a bucket's default object to index.html, then the default object in each of the bucket's subfolders must have the same name (i.e. index.html). For example, if you want http://www.example.com/subfolder to return a default object, then you'll need an object named subfolder/index.html.
2. Don't forget to make website objects public so that they can be read by anyone on the Web. You can set a bucket's default upload policy to public and then, later, explicitly set a specific object as private so that it will only be served up to a user after authentication (i.e. via a digital signature with an expiration, referrer, or specific IP address).
3. The new feature maintains backward compatibility so that the API still returns XML when accessing a bucket directly, yet it'll return your default object when appropriate. They accomplish this by serving your default HTML object from a slightly different endpoint URL, such as:
http://pubs.joemoreno.com.s3-website-us-east-1.amazonaws.com
This is a technical detail that's completely transparent to anyone configuring the new feature in the AWS console and, most importantly, it's elegant in that it fully maintains backward compatibility with their APIs.
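The expiring-signature authentication mentioned in point 2 is worth seeing concretely. Here's a minimal sketch of S3's legacy query-string signing scheme, where an HMAC-SHA1 signature and an expiration timestamp are appended to the object URL (the bucket, key, and credentials below are placeholders, not real values):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def presigned_url(bucket, key, access_key, secret_key, expires):
    """Build a query-string-authenticated S3 URL (legacy V2 signing).

    `expires` is an absolute Unix timestamp after which S3 refuses
    the request. All names and credentials here are illustrative.
    """
    # The canonical string S3 signs for a simple GET in the V2 scheme.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return (
        f"https://{bucket}.s3.amazonaws.com/{key}"
        f"?AWSAccessKeyId={access_key}"
        f"&Expires={expires}"
        f"&Signature={quote(signature, safe='')}"
    )

url = presigned_url("pubs.joemoreno.com", "private/report.html",
                    "AKIAEXAMPLE", "secretkeyexample", 1700000000)
print(url)
```

Anyone holding the URL can fetch the object until the expiration passes, without the object itself being public.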
Gotcha
Although Amazon S3 still does not allow you to configure an A record so that you can host http://example.com, I got the impression that this feature will be available in the future. Although I'm speculating, today's comment from Amazon was, "We're looking at ways that we can do that [host a domain's root without requiring a subdomain]."
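Until then, pointing a subdomain at S3 is a one-line DNS record; in zone-file terms it looks something like this (the domain is an example, and note that the bucket must be named to match the hostname, e.g. www.example.com):

```
www.example.com.  IN  CNAME  www.example.com.s3-website-us-east-1.amazonaws.com.
```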
Workaround
In the meantime, Donovan Watts showed me, last night, the workaround he uses with Adjix and CloudFlare. His workaround allows a domain's root, which normally must be a DNS A record, to be configured as a CNAME. Although I haven't tried his technique yet, I can see it in action with his short domain name.
I would venture a guess that the only way they will be able to efficiently host a root domain on S3 would be if you used their Route 53 DNS service and pointed your domain name to them for resolution. If they haven't thought of this, they should, and they should also lower the rates of the DNS service, perhaps even offering a free tier for Amazon Prime/Students.
PX,
Wow, using Route 53 is an excellent suggestion - I hadn't considered that.
I was thinking that they'd need to load balance an IP address similar to how the DNS system load balances the 13 static IPs for the root servers. But, Route 53 could work nicely.
However they solve this problem - it's obviously the missing piece.
- Joe
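If Amazon ever does wire this up through Route 53, the shape of the fix might be an "alias" record that maps the zone apex to the website endpoint. A speculative sketch in Route 53's record-set JSON; the domain and hosted-zone ID are illustrative assumptions, not values from this post:

```json
{
  "Name": "example.com.",
  "Type": "A",
  "AliasTarget": {
    "HostedZoneId": "Z3AQBSTGFYJSTF",
    "DNSName": "s3-website-us-east-1.amazonaws.com.",
    "EvaluateTargetHealth": false
  }
}
```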
The other, less desirable, option is to set up a CNAME for the www subdomain pointing to S3 and, on your own server, do a 301 redirect of non-www traffic to www.domain.tld.
I personally do not like using www. on my domains, but since the non-www form still works (and saves characters on Twitter, etc.), and people do not usually look at the URL once they arrive at the destination, it's simply another approach worth mentioning.
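The 301 half of that setup is only a few lines in most web servers; a sketch in nginx terms, with the domain as a placeholder:

```nginx
server {
    listen 80;
    server_name example.com;  # the bare, non-www domain
    # Send everything to the www host, preserving the path.
    return 301 http://www.example.com$request_uri;
}
```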
Amazon's new S3 Website feature is the right option for hosting a static website efficiently. Anyone can host their static website using the following tools:
Bucket Explorer to set a bucket as an S3 website, and www.dns30.com for the Route 53 DNS service.