AWS re:Invent: Some of the Important Storage Service, Backup and Database Service updates

2022.12.09

Note: This article was published more than a year ago. The information in it may be out of date.

AWS re:Invent 2022 began with a burst of product announcements affecting Storage, Compute, Analytics, Containers, Databases, and other areas. This post focuses on the storage, backup, and database service updates.

Note: Some of the links shared in this post may be in Japanese; if so, please use Google Translate. Sorry for the inconvenience.

New for AWS Backup – Protect and Restore Your CloudFormation Stacks

When managing applications with infrastructure as code, all of an application's components are described in a single repository. It would be phenomenal if that information could be used to help protect the application. AWS Backup now supports attaching an AWS CloudFormation stack to your data protection policies.

To define an application's data protection policy, examine its components and determine which ones store data that must be protected. Databases and file systems are examples of stateful components of your application. Other components do not store data but must be restored in the event of a problem. Containers and their network configurations are examples of stateless components.

When a CloudFormation stack is used as the protected resource, all stateful components supported by AWS Backup are backed up at the same time. The backup also includes stateless resources in the stack, such as AWS Identity and Access Management (IAM) roles and Amazon Virtual Private Cloud (Amazon VPC) security groups.

There is now a single recovery point from which the application stack or individual resources can be recovered. In the event of a recovery, there is no need to combine automated tools with custom scripts and manual activities to recover and reassemble the entire application stack. AWS Backup keeps track of changes and updates data protection policies for you as you modernize and update an application managed by CloudFormation.

CloudFormation support in AWS Backup also helps demonstrate compliance with your data protection policies. Application resources can be monitored with AWS Backup Audit Manager, a feature of AWS Backup that provides auditing and reporting on data protection policy compliance. AWS Backup Vault Lock can also be used to make backups immutable where compliance requirements demand it.
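As a rough illustration, here is a minimal boto3 sketch of assigning a CloudFormation stack to an existing backup plan. The plan ID, IAM role, and stack ARN are hypothetical placeholders, not values from the announcement.

```python
import boto3

backup = boto3.client("backup")

# Assign a CloudFormation stack (hypothetical ARN) to an existing backup
# plan. All stateful resources in the stack that AWS Backup supports are
# then protected together under a single recovery point.
response = backup.create_backup_selection(
    BackupPlanId="11111111-2222-3333-4444-555555555555",  # hypothetical plan ID
    BackupSelection={
        "SelectionName": "my-app-stack",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:cloudformation:us-east-1:123456789012:stack/my-app/1a2b3c4d-0000-1111-2222-333344445555"
        ],
    },
)
print(response["SelectionId"])
```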

For more information check the link: https://aws.amazon.com/blogs/aws/new-for-aws-backup-protect-and-restore-your-cloudformation-stacks/

Amazon Redshift Supported in AWS Backup

Amazon Redshift enables customers to analyze data at any scale in the cloud, and it provides native data protection capabilities that include automatic and manual snapshots. These work well on their own, but when Redshift is combined with other AWS services, you need to configure more than one tool to manage your data protection policies.

AWS has added support for Amazon Redshift in AWS Backup to make this process easier. AWS Backup enables customers to define a centralized backup policy for managing application data protection and protecting Amazon Redshift clusters. Managing data protection across all supported services provides a consistent experience.

The centralized policies in AWS Backup provide the option to define data protection policies across all accounts within an AWS Organization in a multi-account setup. AWS Backup now includes Amazon Redshift in its auditor-ready reports to meet regulatory compliance requirements. AWS Backup Vault Lock can also be used to create immutable backups and prevent malicious or accidental changes.
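If you prefer the API over the console, an on-demand Redshift backup might look like the boto3 sketch below; the vault name, cluster ARN, and role ARN are hypothetical placeholders.

```python
import boto3

backup = boto3.client("backup")

# Start an on-demand backup of a Redshift cluster through AWS Backup.
# Vault name, cluster ARN, and role ARN are hypothetical placeholders.
job = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:redshift:us-east-1:123456789012:cluster:my-cluster",
    IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
)
print(job["BackupJobId"])
```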

For more please check: https://dev.classmethod.jp/articles/aws-backup-redshift-support/

Redshift Multi-AZ #reinvent

Amazon Redshift has announced a highly available Multi-AZ configuration. Multi-AZ cluster recovery allows Amazon Redshift to use relocation to move a cluster to another Availability Zone (AZ) without data loss or application changes.

In the event of a service interruption, this feature lets your cluster continue to operate with minimal impact. Furthermore, it comes at no additional cost.

This feature is only available on the RA3 instance types ra3.16xlarge, ra3.4xlarge, and ra3.xlplus. RA3 instances use Redshift Managed Storage (RMS) as a durable storage layer, ensuring that an up-to-date copy of your data is always available in other Availability Zones. Thanks to this RMS-based mechanism, Amazon Redshift clusters can be relocated to another Availability Zone at no cost and with no data loss.
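As a small illustration of the relocation mechanism, the boto3 sketch below enables Availability Zone relocation on an existing RA3 cluster. The cluster identifier is a hypothetical placeholder, and this uses the existing AvailabilityZoneRelocation flag of the Redshift API rather than the new Multi-AZ option itself.

```python
import boto3

redshift = boto3.client("redshift")

# Enable Availability Zone relocation on an existing RA3 cluster
# (hypothetical identifier). Because the data lives in Redshift Managed
# Storage, Redshift can move the cluster to another AZ without data loss.
redshift.modify_cluster(
    ClusterIdentifier="my-ra3-cluster",
    AvailabilityZoneRelocation=True,
)
```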

For more information and hands-on check the link: https://dev.classmethod.jp/articles/try-redshift-multi-az-preview/

AWS Launches Amazon Redshift Integration with Apache Spark

At the keynote session, Amazon Redshift Integration with Apache Spark was announced and made generally available. Engineers can use it to build Apache Spark applications that read data from and write data to their Amazon Redshift clusters.

You can get started with Amazon Redshift integration for Apache Spark in seconds and build Apache Spark applications in a variety of languages, including Java, Scala, and Python.
Your applications can now read from and write to your Amazon Redshift data warehouse without sacrificing application performance or data transactional consistency, and you can improve performance with pushdown optimizations.
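Here is a rough PySpark sketch of what reading Redshift data might look like. The integration builds on the spark-redshift community connector, and the JDBC URL, S3 temp directory, IAM role, and table name below are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-spark-example").getOrCreate()

# Read a Redshift table into a Spark DataFrame. All connection values
# below are hypothetical placeholders.
df = (
    spark.read.format("io.github.spark_redshift_community.spark.redshift")
    .option("url", "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev")
    .option("dbtable", "public.sales")
    .option("tempdir", "s3://my-bucket/redshift-temp/")
    .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-spark-role")
    .load()
)

# Pushdown optimizations let Spark delegate filters like this to Redshift.
df.filter(df.amount > 100).show()
```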

RDS for MySQL now doubles write throughput by default! And the price remains unchanged!

Amazon RDS (Amazon Relational Database Service) for MySQL now supports Amazon RDS Optimized Writes. With Optimized Writes, you can increase write throughput by up to twofold at no extra cost. This is especially beneficial for RDS for MySQL customers who have write-intensive database workloads, which are common in applications like digital payments, financial trading, and online gaming.

Using a built-in feature known as the "doublewrite buffer," MySQL protects you from data loss due to unexpected events such as a power outage. However, this method of writing takes up to twice as long, consumes twice as much I/O bandwidth, and reduces your database's throughput and performance. Starting today, Amazon RDS Optimized Writes provides up to a 2x improvement in write transaction throughput on RDS for MySQL by writing only once, while still protecting you from data loss, and at no extra cost. Optimized Writes uses the AWS Nitro System to write reliably and durably to table storage in a single step.
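To see the underlying mechanism for yourself, the sketch below connects to a MySQL instance and inspects the doublewrite setting. The endpoint and credentials are hypothetical, and the expectation that Optimized Writes shows the doublewrite buffer as disabled is an assumption on our part.

```python
import pymysql  # assumes the PyMySQL client library is installed

# Connect to an RDS for MySQL instance (hypothetical endpoint and
# credentials) and inspect the doublewrite buffer setting.
conn = pymysql.connect(
    host="mydb.abcdefgh.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
)
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite'")
    # On an instance with Optimized Writes active we would expect the
    # engine to skip the doublewrite step (an assumption on our part).
    print(cur.fetchone())
conn.close()
```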

For more: https://dev.classmethod.jp/articles/update-rds-optimized-writes/

AWS Elastic Disaster Recovery Now Supports Cross-Region and Cross-Availability Zone Failback

When enabled, AWS Elastic Disaster Recovery (DRS) keeps your operating systems, applications, and databases in a constant state of replication. AWS has announced that DRS now supports in-AWS failback, in addition to non-disruptive recovery drills and on-premises failback.

Testing and drills are frequently overlooked because they are disruptive and time-consuming. Adding automation and simplification to the mix encourages large-scale drills that better prepare you for disasters. With in-AWS failback, these tests can be run for on-premises as well as in-AWS recovery. Non-disruptive recovery drills provide assurance that recovery time objectives (RTOs) and recovery point objectives (RPOs) will be met in the event of a recovery or failback.

The automated support in this new service makes it easier and faster to fail back Amazon Elastic Compute Cloud (Amazon EC2) instances to their original Region, and both failover and failback processes (for on-premises or in-AWS recovery) can be started from the AWS Management Console.
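Recovery drills can also be started programmatically. A minimal boto3 sketch, assuming a hypothetical source server ID:

```python
import boto3

drs = boto3.client("drs")

# Launch a recovery drill for a replicated source server (hypothetical
# source server ID). isDrill=True launches drill instances without
# disrupting the ongoing replication.
response = drs.start_recovery(
    isDrill=True,
    sourceServers=[{"sourceServerID": "s-1234567890abcdef0"}],
)
print(response["job"]["jobID"])
```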

Failover vs. Failback

If outages or issues threaten the application's availability, failover switches it to another Availability Zone or even a different Region. The process of returning the application to its original on-premises location or Region is known as failback. Customers who are zone-agnostic may continue running the application in its new zone indefinitely if a failover to another Availability Zone is required. In this case, they will reverse the recovery replication so that the recovered instance can be recovered again in the future. Assume, however, that the failover was to a different Region. In that case, customers are likely to want to fail back and return to the original Region once the issues that caused the failover have been resolved.

For more please check the link: https://dev.classmethod.jp/articles/drs-cross-region-cross-az-failback/

Zero-ETL Integration Between Amazon Aurora and Amazon Redshift

Amazon Aurora now supports zero-ETL integration with Amazon Redshift, allowing for near real-time analytics and machine learning (ML) on petabytes of Aurora transactional data. Because transactional data is available in Amazon Redshift within seconds of being written into Aurora, you don't need to build and maintain complex data pipelines to perform extract, transform, and load (ETL) operations.

It unifies transactional data and analytics, removing the need to create and manage custom data pipelines between Aurora and Redshift, and the replicated data is easily accessible within Redshift. Data from multiple Aurora databases can be replicated to the same Redshift instance.
In the US East (N. Virginia) Region, Amazon Aurora zero-ETL integration with Amazon Redshift is now available in limited preview for Amazon Aurora MySQL 3 with MySQL 8.0 compatibility.
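Once replicated, the Aurora data is queried like any other Redshift table. A minimal sketch using the Redshift Data API, with hypothetical cluster, database, user, schema, and table names:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Query rows replicated from Aurora as ordinary Redshift tables.
# Cluster, database, user, schema, and table names are hypothetical.
response = redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT COUNT(*) FROM aurora_zeroetl.orders;",
)
# Fetch the result later with get_statement_result(Id=response["Id"]).
print(response["Id"])
```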

Amazon GuardDuty adds threat detection for RDS databases

Amazon GuardDuty now offers RDS Protection, which detects threats to Amazon Aurora databases. Once enabled, GuardDuty begins analyzing and monitoring login activity to existing and new databases in your account. GuardDuty administrators can enable the feature for member accounts. GuardDuty RDS Protection is available at no additional cost during the public preview.

Many organizations rely on RDS to store critical data and power applications that require a high-performance database. GuardDuty threat detection support will give these organizations more confidence in using Aurora for their important data.

Currently, GuardDuty supports the following Aurora database versions:

  • Aurora MySQL versions 2.10.2 and 3.2.1 or higher.
  • Aurora PostgreSQL versions 10.17, 11.12, 12.7, 13.3, and 14.3 or higher.
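For illustration, enabling RDS login activity monitoring on an existing detector might look like the boto3 sketch below. The detector ID is a hypothetical placeholder, and the RDS_LOGIN_EVENTS feature name is our assumption based on the GuardDuty features API; during the preview you may need to enable it from the console instead.

```python
import boto3

guardduty = boto3.client("guardduty")

# Enable RDS login activity monitoring on an existing detector.
# The detector ID is a hypothetical placeholder, and the feature name
# is our assumption based on the GuardDuty features API.
guardduty.update_detector(
    DetectorId="12abc34d567e8fa901bc2d34e56789f0",
    Features=[{"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"}],
)
```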

For more: https://dev.classmethod.jp/articles/update-new-feature-amazon-guardduty-now-supports-rds-protection-reinvent/

"Elastic Throughput" for Amazon Elastic File System.

Conventionally, EFS has offered "bursting throughput," which scales with file system size and burst credits, and "provisioned throughput," which is allocated in advance when a relatively constant throughput is required.

Elastic Throughput is a new throughput mode for Amazon Elastic File System (Amazon EFS) that provides your applications with as much throughput as they require with pay-as-you-go pricing. Elastic Throughput is intended to simplify the operation of workloads and applications on AWS by providing file storage that does not require performance provisioning.

Elastic Throughput is intended for dynamic and unpredictable workloads with difficult-to-predict performance requirements. When enabled on an Amazon EFS file system, Elastic Throughput actively manages performance to keep applications running smoothly while preventing you from overpaying for idle resources.
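Switching a file system to the new mode is a one-line change. A minimal boto3 sketch with hypothetical IDs and token:

```python
import boto3

efs = boto3.client("efs")

# Switch an existing file system (hypothetical ID) to Elastic Throughput.
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="elastic",
)

# Or create a new file system that uses it from the start.
new_fs = efs.create_file_system(
    CreationToken="my-elastic-fs",  # hypothetical idempotency token
    ThroughputMode="elastic",
)
print(new_fs["FileSystemId"])
```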

Amazon EFS is designed to provide serverless, fully elastic file storage, allowing cloud-based applications to share file data without having to worry about provisioning or managing storage capacity and performance. Amazon EFS extends its simplicity and elasticity to performance with Elastic Throughput, allowing customers to run an even broader range of file workloads. Amazon EFS is well suited to a wide range of use cases, including analytics and data science, machine learning, continuous integration, and delivery tools, content management, web serving, and SaaS applications.
Except for the AWS China Regions, Amazon EFS Elastic Throughput is available in all Regions that support EFS.

Failover Controls for Amazon S3 Multi-Region Access Points

Failover controls enable users to quickly redirect S3 data access request traffic routed through an Amazon S3 Multi-Region Access Point to an alternate AWS Region to test and build highly available applications for business continuity.

The existing Multi-Region Access Point model considers all Regions to be active and can route traffic to any of them. Users can designate Regions as active or passive using the model introduced at AWS re:Invent. Buckets in active Regions receive traffic from the Multi-Region Access Point (GET, PUT, and other requests); buckets in passive Regions do not. Amazon S3 Cross-Region Replication works regardless of whether a Region is active or passive in relation to a specific Multi-Region Access Point.
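A failover, then, amounts to submitting new routes that dial one Region down to 0 and the other up to 100. Below is a minimal boto3 sketch with hypothetical account ID, MRAP ARN, and bucket names; note that the route APIs can only be called from certain Regions.

```python
import boto3

# Multi-Region Access Point routes are managed through the S3 Control API.
s3control = boto3.client("s3control", region_name="us-west-2")

# Make the us-east-1 bucket passive (0) and the us-west-2 bucket active
# (100). Account ID, MRAP ARN, and bucket names are hypothetical.
s3control.submit_multi_region_access_point_routes(
    AccountId="123456789012",
    Mrap="arn:aws:s3::123456789012:accesspoint/example-alias.mrap",
    RouteUpdates=[
        {"Bucket": "my-bucket-us-east-1", "TrafficDialPercentage": 0},
        {"Bucket": "my-bucket-us-west-2", "TrafficDialPercentage": 100},
    ],
)
```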

Reference: https://aws.amazon.com/blogs/aws/new-failover-controls-for-amazon-s3-multi-region-access-points/
