Using the AWS Advanced JDBC Wrapper to dynamically switch between writer/reader connections for Amazon Aurora
Introduction
I'm Sato from the Cloud Business Division's Service Development Department. In my work, I'm responsible for developing the Classmethod Members Portal (CMP). CMP uses Amazon Aurora MySQL as its database. Amazon Aurora has a feature called Aurora Replicas that allows for load balancing by offloading read (SELECT) queries. Additionally, the Auto Scaling feature can automatically increase or decrease the number of replicas.
When connecting to Aurora from a web application, you typically use the cluster endpoint. Because the cluster endpoint always points to the writer instance, every query, both reads and writes, goes to the writer. While investigating whether applications could dynamically route SELECT queries to Aurora Replicas for load balancing, I found that this can be achieved with the Read Write Splitting plugin of the AWS Advanced JDBC Wrapper.
In this article, I'll introduce how to implement AWS Advanced JDBC Wrapper in Spring Boot + Kotlin and use the Read Write Splitting plugin.
What is AWS Advanced JDBC Wrapper?
The AWS Advanced JDBC Wrapper is a wrapper driver provided by AWS that extends the functionality of existing JDBC drivers. It sits on top of an underlying driver such as MySQL Connector/J or the PostgreSQL JDBC Driver and adds RDS- and Aurora-specific features through plugins.
What is the Read Write Splitting Plugin?
The Read Write Splitting plugin switches between the writer and readers through calls to Java's Connection#setReadOnly method.
When setReadOnly(true) is called, the plugin connects to a reader according to the configured reader selection strategy and routes subsequent queries to that instance.
Each subsequent setReadOnly call switches the connection between writer and reader based on the boolean value (true/false) passed as an argument.
In Spring Boot, you can control Connection#setReadOnly using the @Transactional annotation.
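To make that contract concrete, here is a toy model of the routing decision using a stand-in `Connection` built with a dynamic proxy. This only illustrates how setReadOnly drives the writer/reader choice; it is not the wrapper's implementation, and the "reader"/"writer" labels are placeholders:

```kotlin
import java.lang.reflect.Proxy
import java.sql.Connection

// Tracks the readOnly flag and derives which kind of host would be used.
class RoutingState {
    var readOnly = false
    val target: String
        get() = if (readOnly) "reader" else "writer"
}

// A stand-in Connection that only records setReadOnly/isReadOnly calls.
fun fakeConnection(state: RoutingState): Connection =
    Proxy.newProxyInstance(
        Connection::class.java.classLoader,
        arrayOf(Connection::class.java),
    ) { _, method, args ->
        when (method.name) {
            "setReadOnly" -> {
                state.readOnly = args!![0] as Boolean
                null
            }
            "isReadOnly" -> state.readOnly
            else -> null
        }
    } as Connection

val state = RoutingState()
val conn = fakeConnection(state)
println(state.target)    // writer
conn.isReadOnly = true   // the plugin would switch to a reader here
println(state.target)    // reader
conn.isReadOnly = false  // and back to the writer
println(state.target)    // writer
```

With the real wrapper, the same setReadOnly calls are issued by Spring's transaction manager, which is why the @Transactional examples later in this article work without any explicit JDBC code.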
Available Plugins
AWS Advanced JDBC Wrapper provides functionality in plugin form, allowing you to combine the plugins you need. There are quite a few plugins as listed below, but for this article, we'll mainly use the Read Write Splitting Plugin.
| Plugin Name | Description |
|---|---|
| Failover Connection Plugin | Failover functionality for Aurora and RDS Multi-AZ clusters |
| Host Monitoring Plugin | Fast detection of host connection failures |
| IAM Authentication Plugin | Database connection using IAM authentication |
| AWS Secrets Manager Plugin | Authentication credential retrieval from Secrets Manager |
| Federated Authentication Plugin | IAM connections using federated authentication |
| Okta Authentication Plugin | Authentication via Okta integration |
| Aurora Connection Tracker Plugin | Connection tracking for Aurora clusters |
| Read Write Splitting Plugin | Automatic routing to readers/writers |
| Limitless Connection Plugin | Load balancing for Aurora Limitless Database |
| Custom Endpoint Plugin | Support for custom endpoints |
| Blue/Green Deployment Plugin | Client-side support for Blue/Green deployments |
| Data Cache Plugin | Caching of SQL query results |
Implementation Method
We'll implement this with Kotlin + Spring Boot 3.x + Spring Data.
Adding Dependencies
First, add the AWS Advanced JDBC Wrapper dependency. Since we're using Aurora MySQL, we also need the MySQL Connector/J driver. We're also using Spring Data JPA in this example.

```kotlin
dependencies {
    implementation("software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.6.6")
    runtimeOnly("com.mysql:mysql-connector-j:8.2.0")
    implementation("org.springframework.boot:spring-boot-starter-data-jpa")
}
```
Application Configuration
Add a configuration file to pass parameters to the AWS JDBC Wrapper.
```yaml
spring:
  datasource:
    aws-jdbc-wrapper:
      url: ${JDBC_CONNECTION_STRING}
      username: ${DB_USERNAME}
      password: ${DB_PASSWORD}
      wrapper-plugins: initialConnection,auroraConnectionTracker,readWriteSplitting,failover2,efm2
      wrapper-dialect: aurora-mysql
      reader-host-selector-strategy: random
```
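For reference, the `${JDBC_CONNECTION_STRING}` passed in above uses the wrapper's own protocol prefix, `jdbc:aws-wrapper:mysql://`, in front of the cluster endpoint. A small sketch of the expected shape (the endpoint and database name below are placeholders):

```kotlin
// Builds a connection string in the form the AWS JDBC Wrapper expects
// for MySQL-compatible targets: jdbc:aws-wrapper:mysql://host:port/db
fun wrapperJdbcUrl(clusterEndpoint: String, database: String, port: Int = 3306): String =
    "jdbc:aws-wrapper:mysql://$clusterEndpoint:$port/$database"

println(wrapperJdbcUrl("my-cluster.cluster-xxxx.ap-northeast-1.rds.amazonaws.com", "mydb"))
// jdbc:aws-wrapper:mysql://my-cluster.cluster-xxxx.ap-northeast-1.rds.amazonaws.com:3306/mydb
```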
You can enable plugins by listing their identifiers as comma-separated values in the wrapper-plugins property. Read Write Splitting has several prerequisite plugins, so we enabled the following set: `initialConnection,auroraConnectionTracker,readWriteSplitting,failover2,efm2`.
| Plugin Name | Description |
|---|---|
| Aurora Initial Connection Strategy | Required to prevent incorrect connections to old nodes due to DNS update delays during failover |
| Failover Connection Plugin | Failover functionality for Aurora and RDS Multi-AZ clusters |
| Host Monitoring Plugin | Fast detection of host connection failures |
| Aurora Connection Tracker Plugin | Connection tracking for Aurora clusters |
| Read Write Splitting Plugin | Automatic routing to reader/writer nodes |
There's also a parameter called reader-host-selector-strategy, which allows you to set the strategy for selecting instances when there are multiple Aurora replicas. There are four possible values.
We're using random here.
| Value | Description |
|---|---|
| random | Randomly select from available reader hosts (default) |
| roundRobin | Select reader hosts in sequence |
| leastConnections | Select the reader host with the fewest connections |
| fastestResponse | Select the reader host with the fastest response |
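As a rough illustration of the difference between the first two strategies, here is a stateless model in Kotlin. The host names are placeholders, and this is only a model of the selection behavior, not the wrapper's actual implementation:

```kotlin
// Model of "random": any available reader may be chosen (the default).
fun selectRandom(hosts: List<String>): String = hosts.random()

// Model of "roundRobin": cycle through readers in order; the caller
// supplies a monotonically increasing counter.
fun selectRoundRobin(hosts: List<String>, counter: Int): String =
    hosts[counter % hosts.size]

val readers = listOf("reader-1", "reader-2", "reader-3")
println(selectRoundRobin(readers, 0)) // reader-1
println(selectRoundRobin(readers, 1)) // reader-2
println(selectRoundRobin(readers, 3)) // reader-1 (wraps around)
println(selectRandom(readers) in readers) // true
```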
Next, create a configuration class to load the YAML settings. Note that a `@ConfigurationProperties` class is not picked up automatically; register it with `@ConfigurationPropertiesScan` on the application class or `@EnableConfigurationProperties(AwsJdbcWrapperConfig::class)`.

```kotlin
package com.example.demo.config

import org.springframework.boot.context.properties.ConfigurationProperties

@ConfigurationProperties(prefix = "spring.datasource.aws-jdbc-wrapper")
data class AwsJdbcWrapperConfig(
    val url: String,
    val username: String,
    val password: String,
    val wrapperPlugins: String,
    val wrapperDialect: String,
    val readerHostSelectorStrategy: String,
    // Not set in the YAML above, so make it optional
    val clusterInstanceHostPattern: String? = null,
)
```
Creating the DataSource Class
Configure a data source that uses AWS JDBC Wrapper. Here is the complete configuration:
```kotlin
package com.example.demo.config

import com.zaxxer.hikari.HikariConfig
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.jdbc.datasource.SimpleDriverDataSource
import software.amazon.jdbc.Driver
import software.amazon.jdbc.HikariPooledConnectionProvider
import java.util.Properties
import java.util.concurrent.TimeUnit
import javax.sql.DataSource

@Configuration
class DataSourceConfig(
    private val awsJdbcWrapperConfig: AwsJdbcWrapperConfig,
) {
    @Bean
    fun dataSource(): DataSource {
        // Use SimpleDriverDataSource to bypass Spring Boot's default HikariCP,
        // since we'll be using the AWS JDBC Wrapper's internal connection pool
        val ds = SimpleDriverDataSource()
        val properties = Properties()

        // JDBC Wrapper configuration
        properties["wrapperPlugins"] = awsJdbcWrapperConfig.wrapperPlugins
        properties["wrapperDialect"] = awsJdbcWrapperConfig.wrapperDialect
        properties["readerHostSelectorStrategy"] = awsJdbcWrapperConfig.readerHostSelectorStrategy
        properties["connectTimeout"] = TimeUnit.SECONDS.toMillis(30).toString()

        ds.setDriverClass(Driver::class.java)
        ds.url = awsJdbcWrapperConfig.url
        ds.username = awsJdbcWrapperConfig.username
        ds.password = awsJdbcWrapperConfig.password
        ds.connectionProperties = properties

        // Enable the JDBC Wrapper's internal connection pool (HikariCP)
        Driver.setCustomConnectionProvider(
            HikariPooledConnectionProvider { _, _ ->
                HikariConfig().apply {
                    maximumPoolSize = 10
                    minimumIdle = 2
                    connectionTimeout = TimeUnit.SECONDS.toMillis(30)
                    idleTimeout = TimeUnit.MINUTES.toMillis(10)
                    maxLifetime = TimeUnit.MINUTES.toMillis(30)
                }
            },
        )
        return ds
    }
}
```
There are several things to be aware of when configuring DataSource to use the Read Write Splitting plugin with Spring Boot.
As mentioned in the AWS Advanced JDBC Wrapper documentation:
> The use of read/write splitting with the annotation @Transactional(readOnly = True) is only recommended for configurations using an internal connection pool. Using the annotation with any other configurations will cause a significant performance degradation.
In other words, performance issues can occur when Spring Framework's `@Transactional(readOnly = true)` is used together with Read Write Splitting but without an internal connection pool. When I ran load tests with Read Write Splitting enabled, throughput dropped to about one-tenth of normal.
The solution to this problem, as mentioned in the documentation, is:
> If you want to use the driver's internal connection pooling, we recommend that you explicitly disable external connection pools (provided by Spring). You need to check the spring.datasource.type property to ensure that any external connection pooling is disabled.
You can address this by disabling Spring's connection pooling feature and enabling AWS Advanced JDBC Wrapper's internal connection pooling.
Therefore, in the DataSource configuration, we use SimpleDriverDataSource to disable Spring Boot's default connection pool, HikariCP. Then, we enable HikariCP as the internal connection pool for AWS JDBC Wrapper using Driver.setCustomConnectionProvider and HikariPooledConnectionProvider. This allows for efficient connection management for both Writer and Reader instances. Load testing confirmed that the performance issue was resolved.
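As a related safeguard, the `spring.datasource.type` property mentioned in the documentation can also be set explicitly so that Spring Boot does not auto-configure HikariCP as an external pool. This is a sketch of one declarative alternative; the manual bean definition above already achieves the same effect:

```yaml
spring:
  datasource:
    # A non-pooling DataSource, so no external connection pool is created
    type: org.springframework.jdbc.datasource.SimpleDriverDataSource
```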
Using in Service Classes
When the Read Write Splitting plugin is enabled, queries are automatically routed to reader instances for methods annotated with @Transactional(readOnly = true) and to the writer for methods annotated with @Transactional. Note that Spring applies @Transactional through proxies, so routing only takes effect for calls entering through the proxy; a method calling another method in the same class bypasses the annotation.
```kotlin
package com.example.demo.service

import com.example.demo.entity.User // adjust to your entity's package
import com.example.demo.repository.UserRepository
import org.springframework.stereotype.Service
import org.springframework.transaction.annotation.Transactional

@Service
@Transactional(readOnly = true) // read-only by default
class UserService(
    private val userRepository: UserRepository,
) {
    // Routed to a reader
    fun findAll(): List<User> {
        return userRepository.findAll()
    }

    // Routed to the writer
    @Transactional
    fun create(name: String, email: String): User {
        val user = User(name = name, email = email)
        return userRepository.save(user)
    }
}
```
Verification
I deployed the application to ECS and verified its operation. I checked from the logs whether Read Write Splitting was working correctly. You can check the logs from ReadWriteSplittingPlugin by setting the log level to TRACE.
```yaml
logging:
  level:
    software.amazon.jdbc: TRACE
```
Checking CloudWatch Logs, I could confirm output from ReadWriteSplittingPlugin. As the message `Switched from a writer to a reader host.` shows, switching from the writer to a reader was occurring.

```
Reader connection set to 'xxx.xxxx.rds.amazonaws.com:3306/
Switched from a writer to a reader host. New reader host: 'xxx.xxxx.ap-northeast-1.rds.amazonaws.com:3306/
Successfully connected to a new reader host: 'xxx.xxxx.ap-northeast-1.rds.amazonaws.com:3306
```
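As an additional check beyond the logs, Aurora MySQL exposes the identifier of the instance that served a query through the `aurora_server_id` variable. Running the following inside a read-only transaction and a read-write transaction should return different instance names, assuming at least one replica exists:

```sql
-- Returns the identifier of the instance that served this query
-- (e.g. a reader's name inside a @Transactional(readOnly = true) method)
SELECT @@aurora_server_id;
```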
Conclusion
By using AWS Advanced JDBC Wrapper, we were able to dynamically switch between reader and writer instances when connecting to Aurora from an application. There are also many other useful plugins available, so try using the ones you need as appropriate.