The first thing we noticed about S3 is that it does not offer fine-grained commands in certain areas, but we understand why: the API has to map onto the storage behind it and the type of service Amazon offers.
Another thing we noticed is that the implementation behind S3 is not very strict about the API definition. That is convenient for developers: you can make a few mistakes calling the API with wrong parameters and the system still works. However, it makes life difficult if you are trying to develop a backend that simulates the behaviour of an S3 server, as I describe later.
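To make the problem concrete, here is a minimal sketch (hypothetical code, not the real S3 implementation) of the design decision a simulated S3 backend faces: what to do with request parameters it does not recognise. A strict validator rejects them; a lenient one, matching the tolerant behaviour we observed from S3 itself, silently ignores them so that slightly-wrong clients keep working. The parameter names below are illustrative.

```python
# Hypothetical sketch: parameter handling in a simulated S3 endpoint.
# A strict backend rejects unknown parameters; a lenient one, like the
# behaviour we observed from S3, ignores them and carries on.

KNOWN_PUT_PARAMS = {"x-amz-acl", "x-amz-storage-class", "content-type"}

def validate_put_request(params, strict=False):
    """Return (accepted, ignored) params, or raise ValueError in strict mode."""
    ignored = {k for k in params if k.lower() not in KNOWN_PUT_PARAMS}
    if strict and ignored:
        raise ValueError(f"unknown parameters: {sorted(ignored)}")
    accepted = {k: v for k, v in params.items() if k.lower() in KNOWN_PUT_PARAMS}
    return accepted, ignored

# A client sends one valid header and one misspelled one.
request = {"x-amz-acl": "private", "x-amz-storage-clas": "GLACIER"}

# Lenient mode: the request succeeds, the typo is silently dropped.
accepted, ignored = validate_put_request(request)

# Strict mode would raise ValueError instead -- and break the sloppy client:
# validate_put_request(request, strict=True)
```

A backend that aims to be a drop-in replacement has to reproduce the lenient path, which is far harder to pin down than a strict one, because the set of tolerated mistakes is nowhere documented.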
I must admit Amazon did very well launching an API and infrastructure so easy to use that it attracted developers and made the API a de-facto standard. These days, everybody talks about S3.
However, as an external company working with APIs and partners, we are kind of forced to use S3 even knowing it’s not ideal. It does not answer everybody’s needs and it didn’t come from any standardisation body.
It was interesting to see how the API evolved. It started as a REST API defined by Amazon, but it has since grown wrappers in many programming languages. That made S3 even easier to use but, at the same time, more difficult to support as a backend.
Of course, being a REST API has its own benefits and issues. For example, an HTTP-based REST API is never going to match purpose-built network protocols on raw speed, something that matters when transferring large media files.