Announced during AWS re:Invent back in December, Outposts is a hybrid cloud platform that extends native AWS or VMware Cloud on AWS deployments to customers’ own data centers, effectively giving them an on-premises version of the AWS cloud. The service is somewhat similar to Microsoft Corp.’s Azure Stack offering, which enables users to run Azure cloud services within their own data centers.
With Outposts, customers can run compute and storage tasks on premises, with fully managed and configurable compute and storage racks built with AWS-designed hardware. The service lets customers run those workloads using the same AWS application programming interfaces, control planes, hardware and tools they use to connect with their other AWS applications.
Today’s announcement means customers can now use standard Amazon S3 APIs to store and retrieve their data, just as they would access data stored in a regular AWS region. It means applications, tools, scripts and utilities that already use the Amazon S3 service can now be configured to store their data locally on Outposts, Amazon said.
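In practice, repointing existing S3 code at an Outpost mostly comes down to the bucket argument. A minimal sketch, assuming hypothetical account, Outpost and access point identifiers: with S3 on Outposts the familiar bucket name is replaced by an access point ARN, while the rest of the S3 call stays the same.

```python
# Sketch of redirecting an existing S3 code path to an Outpost.
# The account ID, Outpost ID and access point name are hypothetical
# placeholders, not values from the article.

def outposts_access_point_arn(region, account_id, outpost_id, access_point):
    """Build the access point ARN S3 on Outposts uses in place of a bucket name."""
    return (f"arn:aws:s3-outposts:{region}:{account_id}:"
            f"outpost/{outpost_id}/accesspoint/{access_point}")

arn = outposts_access_point_arn(
    "us-east-1", "111122223333", "op-0123456789abcdef0", "my-ap"
)

# The same PutObject parameters a regional S3 client would use:
put_kwargs = {
    "Bucket": arn,  # access point ARN instead of a plain bucket name
    "Key": "sensor-data/2020-10-01.json",
    "Body": b'{"reading": 42}',
}

# With boto3 (not executed here):
#   s3 = boto3.client("s3")
#   s3.put_object(**put_kwargs)  # object is stored locally on the Outpost
```

Because the call shape is unchanged, existing tools and scripts need only the ARN swapped in, which is what makes the "configured to store their data locally" claim above plausible with little code churn.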
Previously, AWS Outposts workloads would always store data in whatever AWS region they were connected to, but the update means they can store and process data locally.
Amazon said the main benefits for users are lower latency and a reduction in data transfers, since tasks such as filtering, compression and pre-processing can now be performed on their data locally.
“Speaking of keeping your data local, any objects and the associated metadata and tags are always stored on the Outpost and are never sent or stored elsewhere,” AWS Technical Evangelist Martin Beeby said in a blog post. “However, it is essential to remember that if you have data residency requirements, you may need to put some guardrails in place to ensure no one has the permissions to copy objects manually from your Outposts to an AWS region.”
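One possible shape for the guardrails Beeby describes is an explicit IAM deny on writes to regional S3 buckets, attached to the principals that handle resident data. The sketch below is an illustrative assumption, not a complete residency policy; the statement ID and scope are hypothetical, and it relies on regional S3 and S3 on Outposts using separate ARN namespaces.

```python
import json

# Illustrative guardrail sketch (hypothetical, not from the article):
# an explicit Deny that blocks copying objects into regional S3 buckets,
# while leaving local s3-outposts actions untouched.
residency_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCopyToRegionalS3",  # hypothetical statement ID
            "Effect": "Deny",
            "Action": ["s3:PutObject"],
            # Regional buckets only; Outposts buckets live under arn:aws:s3-outposts
            "Resource": "arn:aws:s3:::*",
        }
    ],
}

policy_document = json.dumps(residency_guardrail)
```

Since IAM evaluates an explicit Deny ahead of any Allow, a statement like this would override broader S3 permissions a role might otherwise have.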
Amazon S3 on Outposts creates a new S3 storage class that Amazon has named S3 Outposts. The service provides users with either 48 or 96 terabytes of S3 storage capacity, with up to 100 buckets on each Outpost. All data stored on it is encrypted with SSE-S3 by default, with the option of server-side encryption using the user’s own keys.
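A hedged sketch of how those pieces fit together in code, assuming hypothetical bucket, Outpost and key values: bucket creation on an Outpost goes through the S3 Control API with the Outpost ID alongside the bucket name, while encrypting with your own key (SSE-C) is a per-request option on the data plane.

```python
import os

# Hypothetical identifiers for illustration only.
create_bucket_kwargs = {
    "Bucket": "my-outposts-bucket",
    "OutpostId": "op-0123456789abcdef0",
}
# With boto3 (not executed here):
#   s3control = boto3.client("s3control")
#   s3control.create_bucket(**create_bucket_kwargs)

# SSE-S3 is the default; supplying your own 256-bit key per request is the
# SSE-C pattern for "server-side encryption with the user's own keys":
customer_key = os.urandom(32)  # stand-in for a key from your own key store
put_with_own_key = {
    "Bucket": ("arn:aws:s3-outposts:us-east-1:111122223333:"
               "outpost/op-0123456789abcdef0/accesspoint/my-ap"),
    "Key": "records/batch-001.csv",
    "Body": b"id,value\n1,9.5\n",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": customer_key,
}
# With boto3 (not executed here):
#   s3 = boto3.client("s3")
#   s3.put_object(**put_with_own_key)
```

With SSE-C the service never stores the key, so the same key must be presented again on every read of the object.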
Constellation Research Inc. analyst Holger Mueller said Amazon S3 on Outposts is a huge step forward that significantly expands the service’s footprint.
“Given the popularity of S3 with next generation applications on AWS, this opens up many use cases for these apps to run both in the cloud and on Outposts,” Mueller said. “Outposts enables workload portability across the cloud and on-premises, bringing high performance and helping customers meet data residency requirements and best practices.”
Time-series databases are designed to store and retrieve data records that are part of a “time series,” which is a set of data points associated with timestamps. The timestamps provide critical context for each data point, showing how it relates to the others. They’re ideal for applications that generate a continuous flow of data, such as measurements from IoT sensors, as they make it possible to store large volumes of timestamped information in a format that allows fast insertion and fast retrieval to support complex analysis.
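The core pattern described above can be sketched in a few lines of plain Python: appends keyed by timestamp, with fast range retrieval via binary search over the sorted timestamps. This is only a toy illustration of the access pattern; purpose-built engines layer compression, tiering and query languages on top of the same idea.

```python
import bisect

class TimeSeries:
    """Toy in-memory time series: ordered inserts, fast timestamp-range reads."""

    def __init__(self):
        self._timestamps = []  # kept sorted at all times
        self._values = []

    def insert(self, timestamp, value):
        """Insert a data point, keeping the series ordered by timestamp."""
        i = bisect.bisect_left(self._timestamps, timestamp)
        self._timestamps.insert(i, timestamp)
        self._values.insert(i, value)

    def range_query(self, start, end):
        """Return all (timestamp, value) points with start <= timestamp < end."""
        lo = bisect.bisect_left(self._timestamps, start)
        hi = bisect.bisect_left(self._timestamps, end)
        return list(zip(self._timestamps[lo:hi], self._values[lo:hi]))

# Hypothetical IoT temperature readings (epoch seconds, degrees C):
series = TimeSeries()
for t, temp in [(1600000000, 21.5), (1600000060, 21.7), (1600000120, 21.4)]:
    series.insert(t, temp)

points = series.range_query(1600000000, 1600000100)
```

Because the timestamps stay sorted, each range query is a pair of binary searches plus a slice, which is what makes timestamp-ordered storage cheap to scan for analysis.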
Amazon Timestream was announced at AWS re:Invent 2018, and the company boasts that it can scale to process trillions of time-series events per day up to 1,000 times faster than relational databases.
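A hedged sketch of what ingesting a single IoT reading into Timestream looks like, assuming hypothetical database and table names: records follow Timestream's dimensions/measure/time model, with the timestamp carried as a string.

```python
import time

# Database and table names are hypothetical placeholders.
record = {
    "Dimensions": [
        {"Name": "device_id", "Value": "sensor-42"},
        {"Name": "site", "Value": "plant-a"},
    ],
    "MeasureName": "temperature",
    "MeasureValue": "21.7",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
    "TimeUnit": "MILLISECONDS",
}

write_kwargs = {
    "DatabaseName": "factory_telemetry",  # hypothetical
    "TableName": "sensor_readings",       # hypothetical
    "Records": [record],
}

# With boto3 (not executed here):
#   ts = boto3.client("timestream-write")
#   ts.write_records(**write_kwargs)
```

Dimensions identify the series (which sensor, which site) while the measure carries the value, so a continuous sensor feed maps naturally onto batched `Records` lists.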
Amazon Timestream is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland), with availability in additional regions expected in the future.