
Read/Write Splitting Plugin

The Read/Write Splitting Plugin adds functionality to switch between writer/reader instances via calls to the Connection#setReadOnly method. Upon calling setReadOnly(true), the plugin will connect to a reader instance according to a reader selection strategy and direct subsequent queries to this instance. Future calls to setReadOnly will switch between the established writer and reader connections according to the boolean argument you supply to the setReadOnly method.
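As a sketch of this behavior (the cluster endpoint, credentials, and SQL below are placeholders, not values from this document):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class ReadWriteSplittingSketch {
  public static void main(String[] args) throws SQLException {
    // Hypothetical Aurora cluster endpoint; replace with your own.
    final String url =
        "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/mydb";

    final Properties props = new Properties();
    props.setProperty("wrapperPlugins", "readWriteSplitting,failover2,efm2");
    props.setProperty("user", "myUser");
    props.setProperty("password", "myPassword");

    try (Connection conn = DriverManager.getConnection(url, props)) {
      try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("UPDATE accounts SET balance = balance + 1"); // runs on the writer
      }

      conn.setReadOnly(true); // the plugin opens (or reuses) a reader connection
      try (Statement stmt = conn.createStatement();
           ResultSet rs = stmt.executeQuery("SELECT balance FROM accounts")) { // runs on the reader
        while (rs.next()) {
          System.out.println(rs.getLong(1));
        }
      }

      conn.setReadOnly(false); // switches back to the cached writer connection
    }
  }
}
```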

Plugin Availability

The plugin is available since version 1.0.1.

Loading the Read/Write Splitting Plugin

The Read/Write Splitting Plugin is not loaded by default. To load the plugin, include it in the wrapperPlugins connection parameter. If you would like to load the Read/Write Splitting Plugin alongside the failover and host monitoring plugins, the Read/Write Splitting Plugin must be listed before these plugins in the plugin chain; otherwise, failover exceptions will not be properly processed by the plugin. See the example below for how to load the Read/Write Splitting Plugin together with these plugins. Although the driver sorts plugins correctly by default (see the autoSortWrapperPluginOrder configuration parameter), this ordering requirement is still worth keeping in mind.

final Properties properties = new Properties();
properties.setProperty(PropertyDefinition.PLUGINS.name, "readWriteSplitting,failover2,efm2");

If you would like to use the Read/Write Splitting Plugin without the failover plugin, make sure you have the readWriteSplitting plugin in the wrapperPlugins property, and that the failover plugin is not part of it.

final Properties properties = new Properties();
properties.setProperty(PropertyDefinition.PLUGINS.name, "readWriteSplitting");

Warning

Do not use the readWriteSplitting, srw, and gdbReadWriteSplitting plugins together (in any combination) on the same connection!

Supplying the connection string

When using the Read/Write Splitting Plugin against Aurora clusters, you do not have to supply multiple instance URLs in the connection string. Instead, supply just the URL for the initial instance to which you're connecting. You must also include either the failover plugin or the Aurora host list plugin in your plugin chain so that the driver knows to query Aurora for its topology. See the section on loading the Read/Write Splitting Plugin for more info.
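For example (the cluster endpoint below is a placeholder), a single cluster URL with the failover plugin in the chain is sufficient:

```
jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/mydb?wrapperPlugins=readWriteSplitting,failover2,efm2
```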

Configuration Parameters

| Parameter | Value | Required | Description | Default Value |
|-----------|-------|----------|-------------|---------------|
| readerHostSelectorStrategy | String | No | The name of the strategy that should be used to select a new reader host. | random |
| cachedReaderKeepAliveTimeoutMs | Integer | No | The time in milliseconds to keep a reader connection alive in the cache. The default value of 0 means the wrapper will keep reusing the same cached reader connection. | 0 |
| verifyInitialConnectionRole | Boolean | No | Whether to verify the role of the initial connection by querying the database. When true, the plugin executes a query to confirm the connected host's actual role (writer/reader) and updates the internal host spec if it differs from the expected role. When false, this check is skipped, which can save time per connection but means the plugin trusts the endpoint's assumed role. See the Initial Connection Role Verification section for details on when it is safe to disable this check. | true |
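For instance, the parameters above can be combined as follows (the values are illustrative, not recommendations):

```java
final Properties props = new Properties();
props.setProperty("readerHostSelectorStrategy", "leastConnections"); // pick the reader with the fewest connections
props.setProperty("cachedReaderKeepAliveTimeoutMs", "300000");       // recycle the cached reader after 5 minutes
props.setProperty("verifyInitialConnectionRole", "false");           // skip the initial role-check query
```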

Using the Read/Write Splitting Plugin against non-Aurora clusters

The Read/Write Splitting Plugin is not currently supported for non-Aurora clusters.

Initial Connection Role Verification

When the Read/Write Splitting Plugin establishes an initial connection, it queries the database to verify whether the connected host is a writer or reader. This ensures the internal host spec reflects the host's actual role.

Setting verifyInitialConnectionRole to false skips this query, which can improve initial connection time. However, this means the plugin will trust the role assumed from the endpoint type without verification.

When is it safe to disable?

The role check is strongly recommended for endpoint types where the actual role may differ from the assumed role: specifically Aurora Custom Endpoints (which may route to either writers or readers), Aurora Instance Endpoints (where the URL pattern may not reflect the current role after a failover), and non-RDS endpoints (IP addresses, custom domains).

For endpoint types where the role is reliably known (e.g. cluster writer/reader endpoints, Multi-AZ cluster endpoints), the check can be safely disabled.

This role check can also be safely disabled if the Aurora Initial Connection Strategy plugin is enabled with role verification using verifyOpenedConnectionType.

See the endpoint compatibility matrix for details on which endpoint types may require this check.

Internal connection pooling

Warning

If internal connection pools are enabled, database passwords may not be verified with every connection request. The initial connection request for each database instance in the cluster will verify the password, but subsequent requests may return a cached pool connection without re-verifying the password. This behavior is inherent to the nature of connection pools in general and not a bug with the driver. ConnectionProviderManager.releaseResources can be called to close all pools and remove all cached pool connections. See InternalConnectionPoolPasswordWarning.java for more details.

Whenever setReadOnly(true) is first called on a Connection object, the read/write plugin will internally open a new physical connection to a reader. After this first call, the physical reader connection will be cached for the given Connection. By default, this cached connection will never expire, meaning all subsequent setReadOnly(true) calls on the same Connection object will keep reusing the same reader connection. If your application frequently calls setReadOnly, this may have a performance impact. There are two ways to improve performance:

  1. You can enable internal connection pooling to improve performance. When enabled, the wrapper driver will maintain an internal connection pool for each instance in the cluster. This allows the Read/Write Splitting Plugin to reuse connections that were established by setReadOnly calls on previous Connection objects.
  2. You can also use the cachedReaderKeepAliveTimeoutMs connection parameter. This sets an expiration time on the reader connection. When setReadOnly(true) is called and the reader connection has expired, the plugin will create a new reader connection using the specified reader selection strategy.

Note

Initial connections to a cluster URL will not be pooled. The driver does not pool cluster URLs because it can be problematic to pool a URL that resolves to different instances over time. The main benefit of internal connection pools is when setReadOnly is called. When setReadOnly is called (regardless of the initial connection URL), an internal pool will be created for the writer/reader that the plugin switches to and connections for that instance can be reused in the future.

The wrapper driver currently uses HikariCP and c3p0 to create and maintain its internal connection pools.

Activate and configure with built-in configuration parameters

The internal connection pool can be activated using the connectionPoolType parameter. This parameter should be added to the connection string along with additional configuration parameters to configure the pool itself. See configuration parameter documentation for more details.

For example, to activate the internal connection pool with HikariCP as the underlying implementation, the following parameter can be added. Users can add additional parameters to configure the pool as needed. These additional parameters should be prefixed with cp-. Consult the HikariCP documentation for configuration property names.

connectionPoolType=hikari&cp-MaximumPoolSize=20&cp-MinimumIdle=1

To configure the internal connection pool with c3p0 as the underlying implementation, users can use the following parameters. Consult the c3p0 documentation for more configuration parameters.

connectionPoolType=c3p0&cp-MaxConnectionAge=3600&cp-MaxPoolSize=15
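Put together, a connection URL that enables the plugin with an internal HikariCP pool might look like this (the endpoint is a placeholder):

```
jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/mydb?wrapperPlugins=readWriteSplitting&connectionPoolType=hikari&cp-MaximumPoolSize=20&cp-MinimumIdle=1
```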

Activate and configure with custom connection provider

An alternative way to activate and configure the internal connection pool is described below. This approach provides additional configurable options and may be helpful in some scenarios. The sample code here provides a useful example of how to enable this feature. The steps are as follows:

  1. Create an instance of HikariPooledConnectionProvider. The HikariPooledConnectionProvider constructor requires you to pass in a HikariPoolConfigurator function. Inside this function, you should create a HikariConfig, configure any desired properties on it, and return it. Note that the Hikari properties below will be set by default and will override any values you set in your function. This is done to enforce the desired behavior and to ensure that the read/write plugin can internally establish connections to new instances.
  • jdbcUrl (including the host, port, and database)
  • exception override class name
  • username
  • password

You can optionally pass in a HikariPoolMapping function as a second parameter to the HikariPooledConnectionProvider. This allows you to decide when new connection pools should be created by defining what is included in the pool map key. A new pool will be created each time a new connection is requested with a unique key. By default, a new pool will be created for each unique instance-user combination. If you would like to define a different key system, you should pass in a HikariPoolMapping function defining this logic. A simple example is shown below. Please see ReadWriteSplittingPostgresExample.java for the full example.

Warning

If you do not include the username in your HikariPoolMapping function, connection pools may be shared between different users. As a result, an initial connection established with a privileged user may be returned to a connection request with a lower-privilege user without re-verifying credentials. This behavior is inherent to the nature of connection pools in general and not a bug with the driver. ConnectionProviderManager.releaseResources can be called to close all pools and remove all cached pool connections.

props.setProperty("somePropertyValue", "1"); // used in getPoolKey
final HikariPooledConnectionProvider connProvider =
    new HikariPooledConnectionProvider(
        ReadWriteSplittingPostgresExample::getHikariConfig,
        ReadWriteSplittingPostgresExample::getPoolKey
    );
Driver.setCustomConnectionProvider(connProvider);

private static String getPoolKey(HostSpec hostSpec, Properties props) {
  // Include the URL, user, and somePropertyValue in the connection pool key so that a new
  // connection pool will be opened for each different instance-user-somePropertyValue
  // combination.
  final String user = props.getProperty(PropertyDefinition.USER.name);
  final String somePropertyValue = props.getProperty("somePropertyValue");
  return hostSpec.getUrl() + user + somePropertyValue;
}
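The getHikariConfig function referenced above could be sketched as follows. This is an assumption-laden sketch: the exact functional-interface signature should be checked against the wrapper's HikariPoolConfigurator API, and the pool settings are illustrative.

```java
// Sketch of a HikariPoolConfigurator function (assumed signature: HostSpec + Properties in,
// HikariConfig out). Adjust to the wrapper version you are using.
private static HikariConfig getHikariConfig(HostSpec hostSpec, Properties props) {
  final HikariConfig config = new HikariConfig();
  config.setMaximumPoolSize(10);
  config.setIdleTimeout(600000); // 10 minutes
  // Note: jdbcUrl, username, password, and the exception override class name are
  // set by the wrapper afterwards and will override any values configured here.
  return config;
}
```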
  2. Call Driver.setCustomConnectionProvider, passing in the HikariPooledConnectionProvider you created in step 1.

  3. By default, the read/write plugin randomly selects a reader instance the first time that setReadOnly(true) is called. If you would like the plugin to select a reader based on a different selection strategy, please see the Reader Selection section for more information.

  4. Continue as normal: create connections and use them as needed.

  5. When you are finished using all connections, call ConnectionProviderManager.releaseResources.

Important

You must call ConnectionProviderManager.releaseResources to close the internal connection pools when you are finished using all connections. Unless ConnectionProviderManager.releaseResources is called, the wrapper driver will keep the pools open so that they can be shared between connections.
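One way to make sure this happens (a sketch; adapt it to your application's lifecycle) is a JVM shutdown hook:

```java
// Register a JVM shutdown hook that closes all internal connection pools
// and removes all cached pool connections when the application exits.
Runtime.getRuntime().addShutdownHook(new Thread(
    ConnectionProviderManager::releaseResources));
```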

Reader Selection

To indicate which selection strategy to use, the readerHostSelectorStrategy configuration parameter can be set to one of the selection strategies in this table. The following is an example of enabling the least connections strategy:

props.setProperty(ReadWriteSplittingPlugin.READER_HOST_SELECTOR_STRATEGY.name, "leastConnections");

Reader keep-alive timeout

If no connection pool is used, reader connections created by calls to setReadOnly(true) will be cached for the entire lifetime of the Connection object. This may have a negative performance impact if your application makes frequent calls to setReadOnly(true) on the same Connection object, as all read traffic for that Connection will be directed to a single reader instance. To improve performance, you can specify a timeout for the cached reader connection using cachedReaderKeepAliveTimeoutMs. Once the reader has expired, the next call to setReadOnly(true) will create a new reader connection determined by the reader host selection strategy. The default value of 0 means the wrapper will keep reusing the same cached reader connection.

final Properties properties = new Properties();
properties.setProperty("cachedReaderKeepAliveTimeoutMs", "600000");

Note

If a connection pool is used, this setting is ignored and the lifespan of this cached connection object will be handled by the connection pool instead.

Limitations

General plugin limitations

When a Statement or ResultSet is created, it is internally bound to the database connection established at that moment. There is no standard JDBC functionality to change the internal connection used by Statement or ResultSet objects. Consequently, even if the read/write plugin switches the internal connection, any Statements/ResultSets created before this will continue using the old database connection. This bypasses the desired functionality provided by the plugin. To prevent these scenarios, an exception will be thrown if your code uses any Statements/ResultSets created before a change in internal connection. To solve this problem, please ensure you create new Statement/ResultSet objects after switching between the writer/reader.
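In other words (the query text is a placeholder):

```java
conn.setReadOnly(false);
Statement writerStmt = conn.createStatement(); // bound to the writer connection

conn.setReadOnly(true); // internal connection switches to a reader
// writerStmt.executeQuery("SELECT 1"); // would throw: this Statement still
//                                      // references the old writer connection
Statement readerStmt = conn.createStatement(); // create a fresh Statement instead
ResultSet rs = readerStmt.executeQuery("SELECT 1"); // runs on the reader
```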

Verify plugin compatibility within your driver configuration using the compatibility guide.

Session state

The plugin supports session state transfer when switching connections. All attributes mentioned in Session State are automatically transferred to the new connection.

Limitations when using Spring Boot/Framework

@Transactional(readOnly = true)

Warning

The use of read/write splitting with the @Transactional(readOnly = true) annotation is only recommended for configurations using an internal connection pool. Using the annotation with any other configuration will cause significant performance degradation.

When a method with this annotation is hit, Spring calls conn.setReadOnly(true), executes the method, and then calls setReadOnly(false) to restore the connection's initial readOnly value. Consequently, every time the method is called, the plugin switches to the reader, executes the method, and then switches back to the writer. Although the reader connection will be cached after the first setReadOnly call, there is still some overhead when switching between the cached writer/reader connections. This constant switching is not an ideal use of the plugin because it is frequently incurring this overhead. The suggested approach for this scenario is to avoid loading the Read/Write Splitting Plugin and instead use the writer cluster URL for your write operations and the reader cluster URL for your read operations. By doing this you avoid the overhead of constantly switching between connections while still spreading load across the database instances in your cluster.

Internal connection pools

If you want to use the driver's internal connection pooling, we recommend that you explicitly disable external connection pools (provided by Spring). You need to check the spring.datasource.type property to ensure that any external connection pooling is disabled. This is necessary because Spring uses datasource auto-detection by default, and it may enable external connection pooling. Using internal and external pools at the same time has not been tested and may result in problematic behaviour. The recommended configuration for a Spring application should either enable internal connection pooling or external connection pooling, but not both at once.

Spring applications are encouraged to use configuration profiles and presets optimized specifically for Spring applications. More details are available at Configuration Presets.

Example

ReadWriteSplittingPostgresExample.java demonstrates how to enable and configure Read/Write Splitting with the AWS Advanced JDBC Wrapper.

SpringHibernateBalancedReaderOneDataSourceExample demonstrates how to enable and configure the Read/Write Splitting Plugin with the AWS Advanced JDBC Wrapper. This example application uses a configuration with internal connection pooling and provides a load-balanced reader connection according to a specified reader selection strategy. @Transactional(readOnly = true) annotations in the code help the Read/Write Splitting Plugin switch between datasources.

SpringHibernateBalancedReaderTwoDataSourceExample demonstrates how to enable and configure the Read/Write Splitting Plugin with the AWS Advanced JDBC Wrapper. This example application uses a configuration with two Spring datasources, where each datasource uses internal connection pooling. The example application provides a load-balanced reader connection according to a specified reader selection strategy. The example application does not use the Read/Write Splitting Plugin. Switching between the writer datasource and reader datasource occurs with the help of the @WithLoadBalancedReaderDataSource annotation.