gitlab.com/gitlab-org/gitlab-foss.git
author     GitLab Bot <gitlab-bot@gitlab.com>  2020-03-09 03:08:14 +0300
committer  GitLab Bot <gitlab-bot@gitlab.com>  2020-03-09 03:08:14 +0300
commit     ed4df05ce917d6cf175aeb508b0485ae5f281a0a (patch)
tree       9313abd8fd41a78eb2bf1ffd0f95f10b69cd25cb /doc/administration/geo
parent     1bdb3fe3821fc3d222361d8b2e2ec2fea2915372 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/administration/geo')
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md  6
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md         2
-rw-r--r--  doc/administration/geo/replication/configuration.md                  4
-rw-r--r--  doc/administration/geo/replication/database.md                       2
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md                2
5 files changed, 8 insertions, 8 deletions
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index 2caaaac2b9d..6852d08cc07 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -17,7 +17,7 @@ You can restore it from backup or remove it from the **primary** node to resolve
If verification succeeds on the **primary** node but fails on the **secondary** node,
this indicates that the object was corrupted during the replication process.
Geo actively tries to correct verification failures by marking the repository to
-be resynced with a backoff period. If you want to reset the verification for
+be resynced with a back-off period. If you want to reset the verification for
these failures, you should follow [these instructions][reset-verification].
If verification is lagging significantly behind replication, consider giving
@@ -114,9 +114,9 @@ Feature.enable('geo_repository_reverification')
## Reset verification for projects where verification has failed
Geo actively tries to correct verification failures by marking the repository to
-be resynced with a backoff period. If you want to reset them manually, this
+be resynced with a back-off period. If you want to reset them manually, this
Rake task marks projects where verification failed or the checksum mismatched
-to be resynced without the backoff period:
+to be resynced without the back-off period:
For repositories:
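The hunk ends before the command itself, so it is not reproduced here verbatim. For orientation, a hedged sketch of the reset invocation, from memory of the Geo Rake tasks of this era (treat the exact task names as an assumption):

```shell
# Assumed task name, not quoted from this hunk: re-verify repositories whose
# verification failed or whose checksums mismatched, skipping the back-off period.
sudo gitlab-rake geo:verification:repository:reset

# The presumed equivalent task for wikis:
sudo gitlab-rake geo:verification:wiki:reset
```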
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index bfeb202dd9a..8af60a42fbb 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -36,7 +36,7 @@ Repository-centric strategies for using `rsync` effectively can be found in the
be adapted for use with any other file-based data, such as GitLab Pages (to
be found in `/var/opt/gitlab/gitlab-rails/shared/pages` if using Omnibus).
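As a concrete illustration of adapting that strategy, a minimal `rsync` sketch for the Pages data. The source path is the Omnibus default named above; the destination host is a placeholder, and the flags are ordinary `rsync` usage rather than the doc's own commands:

```shell
# Hedged sketch: mirror GitLab Pages data to the secondary, preserving
# ownership and permissions, and deleting files removed on the source.
sudo rsync -az --delete /var/opt/gitlab/gitlab-rails/shared/pages/ \
  root@<secondary_node_fqdn>:/var/opt/gitlab/gitlab-rails/shared/pages/
```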
-## Pre-flight checks
+## Preflight checks
Follow these steps before scheduling a planned failover to ensure the process
will go smoothly.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 74ece38d793..1434eeb61af 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -107,7 +107,7 @@ keys must be manually replicated to the **secondary** node.
scp root@<primary_node_fqdn>:/etc/ssh/ssh_host_*_key* /etc/ssh
```
- If you only have access through a user with **sudo** privileges:
+ If you only have access through a user with `sudo` privileges:
```shell
# Run this from your primary node:
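# The hunk is truncated here, so the doc's verbatim commands are not shown.
# A hedged sketch of the same idea (an assumption, not the original text):
# stream the host keys through tar over ssh, since plain scp needs root login.
# Assumes passwordless sudo for <user> on the primary node.
ssh <user>@<primary_node_fqdn> \
  "cd /etc/ssh && sudo tar -czf - ssh_host_*_key*" > geo-host-keys.tar.gz
# Then, as root on the secondary node:
tar -xzf geo-host-keys.tar.gz -C /etc/ssh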
@@ -153,7 +153,7 @@ keys must be manually replicated to the **secondary** node.
NOTE: **Note:**
The private key and public key commands should output the same fingerprint.
-1. Restart sshd on your **secondary** node:
+1. Restart `sshd` on your **secondary** node:
```shell
# Debian or Ubuntu installations
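# (continuation truncated by the hunk; a hedged sketch follows, assuming the
#  stock service names: `ssh` on Debian/Ubuntu, `sshd` on CentOS/RHEL)
sudo service ssh restart

# CentOS, RHEL, or Fedora installations
sudo service sshd restart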
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 48681d03838..f25aa0e5da8 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -469,7 +469,7 @@ work:
1. On the **primary** Geo database, enter the PostgreSQL console as an
admin user. If you are using an Omnibus-managed database, log onto the **primary**
- node that is running the PostgreSQL database (the default Omnibus database name is gitlabhq_production):
+ node that is running the PostgreSQL database (the default Omnibus database name is `gitlabhq_production`):
```shell
sudo \
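# NOTE: the hunk truncates the `sudo \` command above; its continuation is
# not reproduced here. A hedged alternative (an assumption, not the doc's
# verbatim command) is the psql wrapper bundled with Omnibus:
sudo gitlab-psql -d gitlabhq_production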
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 2eeca41724e..6a62ce23d0d 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -290,7 +290,7 @@ sudo gitlab-ctl \
This will give the initial replication up to six hours to complete, rather than
the default thirty minutes. Adjust as required for your installation.
-### Message: "PANIC: could not write to file 'pg_xlog/xlogtemp.123': No space left on device"
+### Message: "PANIC: could not write to file `pg_xlog/xlogtemp.123`: No space left on device"
Determine if you have any unused replication slots in the **primary** database. This can cause large amounts of
log data to build up in `pg_xlog`. Removing the unused slots can reduce the amount of space used in the `pg_xlog`.
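To make that check concrete, a hedged sketch for finding and removing unused slots. It assumes the Omnibus `gitlab-psql` wrapper; `pg_replication_slots` and `pg_drop_replication_slot()` are standard PostgreSQL, and `<slot_name>` is a placeholder:

```shell
# List replication slots on the primary; inactive slots pin WAL in pg_xlog.
sudo gitlab-psql -c 'SELECT slot_name, active FROM pg_replication_slots;'

# Drop an unused slot so the WAL it was retaining can be recycled.
sudo gitlab-psql -c "SELECT pg_drop_replication_slot('<slot_name>');"
```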