How do I handle a new replica in an RDS cluster using Sequelize?

We're preparing for a very big spike in traffic, but the question is also meant to be general:

Given that you can configure Sequelize to use an RDS database cluster (Aurora, in our case):

const master = { host: rdsClusterWriterEndpoint, username, password, port, database }
const replica = { host: rdsClusterReaderEndpoint, username, password, port, database }
const Sequelize = require('sequelize')
const sequelize = new Sequelize(null, null, null, {
  dialect: 'mysql',
  pool: {
    handleDisconnects: true,
    min: 0,
    max: 10,
    idle: 10000,
  },
  replication: {
    write: master,
    read: [replica],
  },
})


How can I handle adding a new RDS instance to the cluster so that it starts receiving part of the load, without restarting the application?

I dug around but couldn't find a good way to do it. DNS resolution of the endpoint seems to happen once at startup, and I haven't found a way to re-resolve it afterwards.

Has anyone found a safe way to do this?

Thanks!





1 answer


I ended up with this configuration:

const getRandomWithinRange = (min, max) => {
  min = Math.ceil(min)
  max = Math.floor(max)
  return Math.floor(Math.random() * (max - min + 1)) + min // The maximum is inclusive and the minimum is inclusive
}
const moment = require('moment')
const maxConnectionAge = moment.duration(10, 'minutes').asSeconds()
const pool = {
  handleDisconnects: true,
  min: 1, // Keep one connection open
  max: 10, // Max 10 connections
  idle: 9000, // 9 seconds
  validate: (obj) => {
    // Recycle connections periodically
    if (!obj.recycleWhen) {
      // Set an expiry on new connections and report them as valid
      obj.recycleWhen = moment().add(getRandomWithinRange(maxConnectionAge, maxConnectionAge * 2), 'seconds')
      return true
    }
    // Report the connection as invalid once it has passed its expiry,
    // so the pool discards it and opens a fresh one
    return moment().diff(obj.recycleWhen, 'seconds') < 0
  }
}
const master = { host: rdsClusterWriterEndpoint, username, password, port, database, pool }
const replica = { host: rdsClusterReaderEndpoint, username, password, port, database, pool }
const sequelize = new Sequelize(null, null, null, {
  dialect: 'mysql',
  replication: {
    write: master,
    read: [replica]
  }
})


Connections in the pool are regularly recycled, which gradually spreads traffic to new replicas added to the cluster.
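The recycling idea can be illustrated without Sequelize or moment. This is a plain-`Date` sketch of the same validate logic; `makeValidator` and the bare connection object are my own illustrative names, not part of any library API.

```javascript
// Randomized TTL, as in the answer above: jitter avoids all connections
// expiring at once.
const getRandomWithinRange = (min, max) => {
  min = Math.ceil(min)
  max = Math.floor(max)
  return Math.floor(Math.random() * (max - min + 1)) + min
}

// Returns a validate-style function: stamps a randomized expiry on first
// use, then reports the connection invalid once the expiry has passed.
const makeValidator = (maxAgeSeconds, now = () => Date.now()) => (conn) => {
  if (!conn.recycleAt) {
    const ttlSeconds = getRandomWithinRange(maxAgeSeconds, maxAgeSeconds * 2)
    conn.recycleAt = now() + ttlSeconds * 1000
    return true
  }
  return now() < conn.recycleAt
}
```

The pool calls the validator each time it hands out a connection; returning `false` makes it discard that connection and open a new one, which then resolves the cluster endpoint afresh.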



This isn't ideal: most of the time connections get recycled for no reason, and when you add replicas to cope with growing pressure on the DB you'd want them to take effect sooner rather than later. But it's my poor man's solution for now, and it helped us through a fairly large traffic surge recently.

For simplicity I'm using the same pool configuration for the master and the readers here, but they don't have to be the same.
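If you do want different pool settings per role, a tiny helper that merges per-role overrides into shared defaults keeps the config readable. `makePool` and the specific numbers below are my own illustration, not anything from Sequelize:

```javascript
// Shared defaults, matching the answer's settings.
const basePool = { handleDisconnects: true, min: 1, max: 10, idle: 9000 }

// Merge role-specific overrides over the defaults.
const makePool = (overrides = {}) => ({ ...basePool, ...overrides })

// e.g. a small pool for the writer, a bigger one for the read replicas:
const writerPool = makePool({ max: 5 })
const readerPool = makePool({ max: 20, min: 2 })
```

The resulting objects can be dropped into the `pool` property of the `write` and `read` entries of the replication config.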

If anyone has a better idea, I'm all ears ;)





