
gitlab.com/gitlab-org/gitlab-foss.git
author    Yorick Peterse <yorickpeterse@gmail.com>  2018-07-19 18:16:47 +0300
committer Yorick Peterse <yorickpeterse@gmail.com>  2018-08-06 16:20:36 +0300
commit    91b752dce63147bc99d7784d3d37865efb5e9352 (patch)
tree      447dcd9dc5efcb14af5439f247d87938daf845dc /app/workers/background_migration_worker.rb
parent    5f742eb95a0080343167469ccabfeccd3630007d (diff)
Respond to DB health in background migrations
This changes the BackgroundMigration worker so it checks the health of the database before performing a background migration. This in turn allows us to reduce the minimum interval without having to worry about blowing things up if we schedule too many migrations.

In this setup, the BackgroundMigration worker will reschedule jobs as long as the database is considered to be in an unhealthy state. Once the database has recovered, the migration can be performed.

To determine if the database is in a healthy state, we look at the replication lag of any replication slots defined on the primary. If the lag is deemed too great (100 MB by default) for too many slots, the migration is rescheduled for a later point in time.

The health checking code is hidden behind a feature flag, allowing us to disable it if necessary.
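For context, the lag check described above could look roughly like the following sketch (illustrative only, and not part of this diff: the Postgresql::ReplicationSlot model is introduced elsewhere in the same change, and the PostgreSQL 10+ WAL function names, the 100 MB default, and the "at least half of the slots" rule are assumptions here):

module Postgresql
  class ReplicationSlot < ActiveRecord::Base
    self.table_name = 'pg_replication_slots'

    # Returns true if enough replication slots lag behind the primary to make
    # running a background migration risky. Threshold and quorum are assumed
    # values for this sketch.
    def self.lag_too_great?(max = 100.megabytes)
      # Bytes between the primary's current WAL insert position and each
      # slot's restart_lsn, i.e. how far behind each slot's consumer is.
      lag = 'pg_wal_lsn_diff(pg_current_wal_insert_lsn(), restart_lsn)::bigint'

      sizes = transaction { pluck(lag) }
      lagging = sizes.count { |size| size >= max }

      # Deemed unhealthy when at least half of the slots exceed the threshold.
      lagging.positive? && lagging >= (sizes.length / 2.0).ceil
    end
  end
end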
Diffstat (limited to 'app/workers/background_migration_worker.rb')
-rw-r--r--  app/workers/background_migration_worker.rb | 61
1 file changed, 54 insertions(+), 7 deletions(-)
diff --git a/app/workers/background_migration_worker.rb b/app/workers/background_migration_worker.rb
index eaec7d48f35..7d006cc348e 100644
--- a/app/workers/background_migration_worker.rb
+++ b/app/workers/background_migration_worker.rb
@@ -6,10 +6,22 @@ class BackgroundMigrationWorker
   # The minimum amount of time between processing two jobs of the same migration
   # class.
   #
-  # This interval is set to 5 minutes so autovacuuming and other maintenance
-  # related tasks have plenty of time to clean up after a migration has been
-  # performed.
-  MIN_INTERVAL = 5.minutes.to_i
+  # This interval is set to 2 or 5 minutes so autovacuuming and other
+  # maintenance related tasks have plenty of time to clean up after a migration
+  # has been performed.
+  def self.minimum_interval
+    if enable_health_check?
+      2.minutes.to_i
+    else
+      5.minutes.to_i
+    end
+  end
+
+  def self.enable_health_check?
+    Rails.env.development? ||
+      Rails.env.test? ||
+      Feature.enabled?('background_migration_health_check')
+  end
 
   # Performs the background migration.
   #
@@ -27,7 +39,8 @@ class BackgroundMigrationWorker
       # running a migration of this class or we ran one recently. In this case
       # we'll reschedule the job in such a way that it is picked up again around
       # the time the lease expires.
-      self.class.perform_in(ttl || MIN_INTERVAL, class_name, arguments)
+      self.class
+        .perform_in(ttl || self.class.minimum_interval, class_name, arguments)
     end
   end
 
@@ -39,17 +52,51 @@ class BackgroundMigrationWorker
       [true, nil]
     else
       lease = lease_for(class_name)
+      perform = !!lease.try_obtain
+
+      # If we managed to acquire the lease but the DB is not healthy, then we
+      # want to simply reschedule our job and try again _after_ the lease
+      # expires.
+      if perform && !healthy_database?
+        database_unhealthy_counter.increment
 
-      [lease.try_obtain, lease.ttl]
+        perform = false
+      end
+
+      [perform, lease.ttl]
     end
   end
 
   def lease_for(class_name)
     Gitlab::ExclusiveLease
-      .new("#{self.class.name}:#{class_name}", timeout: MIN_INTERVAL)
+      .new(lease_key_for(class_name), timeout: self.class.minimum_interval)
+  end
+
+  def lease_key_for(class_name)
+    "#{self.class.name}:#{class_name}"
   end
 
   def always_perform?
     Rails.env.test?
   end
+
+  # Returns true if the database is healthy enough to allow the migration to be
+  # performed.
+  #
+  # class_name - The name of the background migration that we might want to
+  #              run.
+  def healthy_database?
+    return true unless self.class.enable_health_check?
+
+    return true unless Gitlab::Database.postgresql?
+
+    !Postgresql::ReplicationSlot.lag_too_great?
+  end
+
+  def database_unhealthy_counter
+    Gitlab::Metrics.counter(
+      :background_migration_database_health_reschedules,
+      'The number of times a background migration is rescheduled because the database is unhealthy.'
+    )
+  end
 end
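
Operationally, the new check only runs in production when the background_migration_health_check feature flag is enabled (development and test always run it, per enable_health_check? above). A minimal example of toggling the flag from a Rails console, using GitLab's standard Feature API:

# Enable DB health checking for background migrations (flag name taken from
# the diff above).
Feature.enable('background_migration_health_check')

# And to turn it off again if rescheduling proves too aggressive:
Feature.disable('background_migration_health_check')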