gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2022-11-21 15:09:07 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2022-11-21 15:09:07 +0300
commit    fbc1f4ffc29ae99f438b07872c3c00fbe7651caa (patch)
tree      8d82cc8f4037880ebd6fedee1952b48784a634ee /doc
parent    ff2606426b46aa3c5ea9fb18a6d0ce7c38aed183 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/configure.md                                                    | 8
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md                      | 2
-rw-r--r--  doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md  | 8
-rw-r--r--  doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md | 2
-rw-r--r--  doc/administration/geo/replication/geo_validation_tests.md                        | 8
-rw-r--r--  doc/administration/gitaly/configure_gitaly.md                                      | 4
-rw-r--r--  doc/development/documentation/styleguide/word_list.md                              | 30
-rw-r--r--  doc/development/migration_style_guide.md                                           | 10
-rw-r--r--  doc/update/index.md                                                                | 1

9 files changed, 40 insertions(+), 33 deletions(-)
diff --git a/doc/administration/configure.md b/doc/administration/configure.md
index 91fec753f7e..bf618345a94 100644
--- a/doc/administration/configure.md
+++ b/doc/administration/configure.md
@@ -16,7 +16,7 @@ Customize and configure your self-managed GitLab installation. Here are some qui
- [Packages](packages/index.md)
The following tables are intended to guide you to choose the right combination of capabilities based on your requirements. It is common to want the most
-available, quickly recoverable, highly performant and fully data resilient solution. However, there are tradeoffs.
+available, quickly recoverable, highly performant, and fully data resilient solution. However, there are tradeoffs.
The tables list features on the left and provide their capabilities to the right along with known trade-offs.
@@ -24,9 +24,9 @@ The tables lists features on the left and provides their capabilities to the rig
| | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
|-|--------------|----------------|-----------------|-------------|-----------------|
-|Gitaly Cluster | Very high - tolerant of node failures | RTO for a single node of 10s with no manual intervention | Data is stored on multiple nodes | Good - While writes may take slightly longer due to voting, read distribution improves read speeds | **Trade-off** - Slight decrease in write speed for redundant, strongly-consistent storage solution. **Risks** - [Does not currently support snapshot backups](gitaly/index.md#snapshot-backup-and-recovery-limitations), GitLab backup task can be slow for large data sets |
+|Gitaly Cluster | Very high - tolerant of node failures | RTO for a single node of 10 s with no manual intervention | Data is stored on multiple nodes | Good - While writes may take slightly longer due to voting, read distribution improves read speeds | **Trade-off** - Slight decrease in write speed for redundant, strongly-consistent storage solution. **Risks** - [Does not support snapshot backups](gitaly/index.md#snapshot-backup-and-recovery-limitations), GitLab backup task can be slow for large data sets |
|Gitaly Shards | Single storage location is a single point of failure | Would need to restore only shards which failed | Single point of failure | Good - can allocate repositories to shards to spread load | **Trade-off** - Need to manually configure repositories into different shards to balance loads / storage space **Risks** - Single point of failure relies on recovery process when single-node failure occurs |
-|Gitaly + NFS | Single storage location is a single point of failure | Single node failure requires restoration from backup | Single point of failure | Average - NFS is not ideally suited to large quantities of small reads / writes which can have a detrimental impact on performance | **Trade-off** - Easy and familiar administration though NFS is not ideally suited to Git demands **Risks** - Many instances of NFS compatibility issues which provide very poor customer experiences |
+|Gitaly + NFS | Single storage location is a single point of failure | Single node failure requires restoration from backup | Single point of failure | Average - NFS is not ideally suited to large quantities of small reads / writes which can have a detrimental impact on performance | **Trade-off** - Familiar administration though NFS is not ideally suited to Git demands **Risks** - Many instances of NFS compatibility issues which provide very poor customer experiences |
## Geo Capabilities
@@ -34,7 +34,7 @@ If your availability needs to span multiple zones or multiple locations, read ab
| | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
|-|--------------|----------------|-----------------|-------------|-----------------|
-|Geo| Depends on the architecture of the Geo site. It is possible to deploy secondaries in single and multiple node configurations. | Eventually consistent. Recovery point depends on replication lag, which depends on a number of factors such as network speeds. Geo supports failover from a primary to secondary site using manual commands that are scriptable. | Geo currently replicates 100% of planned data types and verifies 50%. See [limitations table](geo/replication/datatypes.md#limitations-on-replicationverification) for more detail. | Improves read/clone times for users of a secondary. | Geo is not intended to replace other backup/restore solutions. Because of replication lag and the possibility of replicating bad data from a primary, we recommend that customers also take regular backups of their primary site and test the restore process. |
+|Geo| Depends on the architecture of the Geo site. It is possible to deploy secondaries in single and multiple node configurations. | Eventually consistent. Recovery point depends on replication lag, which depends on a number of factors such as network speeds. Geo supports failover from a primary to secondary site using manual commands that are scriptable. | Geo replicates 100% of planned data types and verifies 50%. See [limitations table](geo/replication/datatypes.md#limitations-on-replicationverification) for more detail. | Improves read/clone times for users of a secondary. | Geo is not intended to replace other backup/restore solutions. Because of replication lag and the possibility of replicating bad data from a primary, customers should also take regular backups of their primary site and test the restore process. |
## Scenarios for failure modes and available mitigation paths
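To make the Gitaly Shards row above concrete, here is a minimal sketch of a sharded storage layout in `/etc/gitlab/gitlab.rb`. The storage names and paths are illustrative placeholders, not taken from this change:

```ruby
# Illustrative sharded Gitaly layout -- storage names and paths are placeholders.
git_data_dirs({
  # The "default" storage must always be defined.
  "default"  => { "path" => "/var/opt/gitlab/git-data" },
  # Extra shards; repositories must be balanced across them manually,
  # which is the trade-off called out in the table above.
  "storage1" => { "path" => "/mnt/gitaly/storage1" },
  "storage2" => { "path" => "/mnt/gitaly/storage2" },
})
```

Gitaly Cluster replaces this manual balancing with Praefect-managed replication, which is the source of the write-latency trade-off in the first row of the table.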
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 80707afacca..e9eb154d398 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -217,7 +217,7 @@ GitLab 13.9 through GitLab 14.3 are affected by a bug in which the Geo secondary
- All replication meters reach 100% replicated, 0% failures.
- All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
+ - Database replication lag is 0 ms.
- The Geo log cursor is up to date (0 events behind).
1. On the **secondary** site:
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
index 8c18aaa944d..05b7b38a51f 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
@@ -72,7 +72,7 @@ On the **secondary** site:
1. On the left sidebar, select **Geo > Sites** to see its status.
Replicated objects (shown in green) should be close to 100%,
and there should be no failures (shown in red). If a large proportion of
- objects aren't yet replicated (shown in gray), consider giving the site more
+ objects aren't replicated (shown in gray), consider giving the site more
time to complete.
![Replication status](../../replication/img/geo_dashboard_v14_0.png)
@@ -162,7 +162,7 @@ follow these steps to avoid unnecessary data loss:
- All replication meters reach 100% replicated, 0% failures.
- All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
+ - Database replication lag is 0 ms.
- The Geo log cursor is up to date (0 events behind).
1. On the **secondary** site:
@@ -215,7 +215,7 @@ follow these steps to avoid unnecessary data loss:
`initctl stop gitlab-runsvvdir && echo 'manual' > /etc/init/gitlab-runsvdir.override && initctl reload-configuration`.
- If you do not have SSH access to the **primary** site, take the machine offline and
- prevent it from rebooting. Since there are many ways you may prefer to accomplish
+ prevent it from rebooting. As there are many ways you may prefer to accomplish
this, we avoid a single recommendation. You may have to:
- Reconfigure the load balancers.
@@ -276,7 +276,7 @@ WARNING:
If you encounter an `ActiveRecord::RecordInvalid: Validation failed: Name has already been taken` error during this process, read
[the troubleshooting advice](../../replication/troubleshooting.md#fixing-errors-during-a-failover-or-when-promoting-a-secondary-to-a-primary-site).
-The `gitlab-ctl promote-to-primary-node` command cannot be used yet in
+The `gitlab-ctl promote-to-primary-node` command cannot be used in
conjunction with multiple servers, as it can only
perform changes on a **secondary** with only a single machine. Instead, you must
do this manually.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
index f9d99095951..085765ec51e 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
@@ -147,7 +147,7 @@ follow these steps to avoid unnecessary data loss:
- All replication meters reach 100% replicated, 0% failures.
- All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
+ - Database replication lag is 0 ms.
- The Geo log cursor is up to date (0 events behind).
1. On the **secondary** site:
diff --git a/doc/administration/geo/replication/geo_validation_tests.md b/doc/administration/geo/replication/geo_validation_tests.md
index f09422d1e26..a12dd8d9d68 100644
--- a/doc/administration/geo/replication/geo_validation_tests.md
+++ b/doc/administration/geo/replication/geo_validation_tests.md
@@ -198,7 +198,7 @@ The following are additional validation tests we performed.
[Validate Object storage replication using Azure based object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/348804#note_821294631):
-- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using Azure based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1mb image to a project on the primary site every second for 60 seconds. The time was then measured until a image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
+- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using Azure based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds. The time was then measured until an image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
- Outcome: When using Azure based replication the average time for an image to replicate from the primary object storage to the secondary was recorded as 40 seconds; the longest replication time was 70 seconds and the quickest was 11 seconds. When using GitLab based replication, the average time for replication to complete was 5 seconds; the longest replication time was 10 seconds and the quickest was 3 seconds.
- Follow up issue:
- [Validate Cross Region Object storage replication using Azure based object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/358154)
@@ -207,12 +207,12 @@ The following are additional validation tests we performed.
[Validate Object storage replication using AWS based object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/351463):
-- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using AWS based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1mb image to a project on the primary site every second for 60 seconds. The time was then measured until a image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
-- Outcome: When using AWS managed replication the average time for an image to replicate between sites is about 49 seconds, this is true for when sites are located within the same region and when they are further apart (Europe to America). When using Geo managed replication within the same region the average time for replication took just 5 seconds, however when replicating cross region the average time rose to 33 seconds.
+- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using AWS based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds. The time was then measured until an image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
+- Outcome: When using AWS managed replication the average time for an image to replicate between sites is about 49 seconds; this is true both when sites are located in the same region and when they are further apart (Europe to America). When using Geo managed replication in the same region, the average replication time was just 5 seconds; however, when replicating cross region, the average time rose to 33 seconds.
[Validate Object storage replication using GCP based object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/351464):
-- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using GCP based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1mb image to a project on the primary site every second for 60 seconds. The time was then measured until a image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
+- Description: Tested the average time it takes for a single image to replicate from the primary object storage location to the secondary when using GCP based object storage replication and [GitLab based object storage replication](object_storage.md#enabling-gitlab-managed-object-storage-replication). This was tested by uploading a 1 MB image to a project on the primary site every second for 60 seconds. The time was then measured until an image was available on the secondary site. This was achieved using a [Ruby Script](https://gitlab.com/gitlab-org/quality/geo-replication-tester).
- Outcome: GCP handles replication differently than other cloud providers. In GCP, the process is to create a single bucket that is either multi, dual, or single region based. This means that the bucket automatically stores replicas in a region based on the option chosen. Even when using multi region, this only replicates in a single continent, the options being America, Europe, or Asia. Currently there doesn't seem to be any way to replicate objects between continents using GCP based replication. For Geo managed replication the average time when replicating in the same region was 6 seconds, and when replicating cross region this rose to just 9 seconds.
## Other tests
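All three validation entries above use the same measurement approach: upload a 1 MB image to the primary once per second for 60 seconds and time how long each takes to appear on the secondary. Below is a rough Ruby sketch of that idea; it is not the linked geo-replication-tester script, and the hosts, token, project ID, the response field used for the upload path, and the unauthenticated availability check are simplified assumptions.

```ruby
# Rough sketch only -- the real tester is the linked geo-replication-tester
# project. Hosts, token, project ID, and the availability check are
# simplified placeholders.
require "net/http"
require "uri"
require "json"

PRIMARY    = "https://primary.example.com"
SECONDARY  = "https://secondary.example.com"
TOKEN      = ENV.fetch("GITLAB_TOKEN")
PROJECT_ID = 42

# Upload a ~1 MB image to the primary and return the path reported by the API.
def upload_image(file)
  uri = URI("#{PRIMARY}/api/v4/projects/#{PROJECT_ID}/uploads")
  request = Net::HTTP::Post.new(uri, "PRIVATE-TOKEN" => TOKEN)
  request.set_form([["file", File.open(file)]], "multipart/form-data")
  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
  JSON.parse(response.body)["full_path"]
end

# Naive availability check on the secondary; a real test must handle
# authentication and redirects rather than trusting a bare HEAD request.
def replicated?(path)
  uri = URI("#{SECONDARY}#{path}")
  Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
    http.head(uri.request_uri).is_a?(Net::HTTPSuccess)
  end
end

60.times do |i|
  path = upload_image("image_1mb.png")
  started = Time.now
  sleep 1 until replicated?(path)
  puts "upload #{i}: replicated after #{(Time.now - started).round}s"
end
```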
diff --git a/doc/administration/gitaly/configure_gitaly.md b/doc/administration/gitaly/configure_gitaly.md
index dd814b7436e..d2f5282a69a 100644
--- a/doc/administration/gitaly/configure_gitaly.md
+++ b/doc/administration/gitaly/configure_gitaly.md
@@ -937,8 +937,8 @@ gitaly['cgroups_cpu_enabled'] = true
In the previous example using the new configuration method:
-- The top level memory limit is capped at 60gb.
-- Each of the 1000 cgroups in the repositories pool is capped at 20gb.
+- The top level memory limit is capped at 60 GB.
+- Each of the 1000 cgroups in the repositories pool is capped at 20 GB.
This configuration leads to "oversubscription". Each cgroup in the pool has a much larger capacity than 1/1000th
of the top-level memory limit.
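To ground the 60 GB / 20 GB numbers above, here is a hedged `/etc/gitlab/gitlab.rb` sketch of a per-repository cgroups configuration that would produce them. The key names follow the cgroups settings documented for Omnibus GitLab around this release line; verify them against `configure_gitaly.md` for the version you run before applying.

```ruby
# Sketch of the oversubscribed cgroups example above -- verify key names
# against the documentation for your GitLab version.
gitaly['cgroups_mountpoint'] = '/sys/fs/cgroup'
gitaly['cgroups_hierarchy_root'] = 'gitaly'
gitaly['cgroups_cpu_enabled'] = true

# Top-level limit shared by all of Gitaly: 60 GB.
gitaly['cgroups_memory_bytes'] = 60 * 1024 * 1024 * 1024

# Pool of 1000 per-repository cgroups, each capped at 20 GB -- deliberately
# oversubscribed relative to 1/1000th of the top-level limit.
gitaly['cgroups_repositories_count'] = 1000
gitaly['cgroups_repositories_memory_bytes'] = 20 * 1024 * 1024 * 1024
```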
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index d80f0e0d6f0..ed1c668a7a6 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -767,20 +767,23 @@ Do not use **navigate**. Use **go** instead. For example:
([Vale](../testing.md#vale) rule: [`SubstitutionSuggestions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/SubstitutionSuggestions.yml))
-## need to, should
+## need to
-Try to avoid **needs to**, because it's wordy. If something is recommended, use **should** instead. If something is required, use **must**.
+Try to avoid **need to**, because it's wordy.
-Use:
+For example, when a variable is **required**,
+instead of **You need to set the variable**, use:
-- You should set the variable. (recommended)
-- You must set the variable. (required)
-- Set the variable. (required)
+- Set the variable.
+- You must set the variable.
-Instead of:
+When the variable is **recommended**:
+
+- You should set the variable.
+
+When the variable is **optional**:
-- You need to set the variable.
-- We recommend that you set the variable.
+- You can set the variable.
## note that
@@ -898,6 +901,15 @@ For example, you might write something like:
Use lowercase for **push rules**.
+## recommend, we recommend
+
+Instead of **we recommend**, use **you should**. We want to talk to the user the way
+we would talk to a colleague, and to avoid differentiation between `we` and `them`.
+
+- You should set the variable. (It's recommended.)
+- Set the variable. (It's required.)
+- You can set the variable. (It's optional.)
+
## register
Use **register** instead of **sign up** when talking about creating an account.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index fdd9c5f29fb..faf49b26788 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -766,12 +766,7 @@ With PostgreSQL 11 being the minimum version in GitLab 13.0 and later, adding co
the standard `add_column` helper should be used in all cases.
Before PostgreSQL 11, adding a column with a default was problematic as it would
-have caused a full table rewrite. The corresponding helper `add_column_with_default`
-has been deprecated and is scheduled to be removed in a later release.
-
-If a backport adding a column with a default value is needed for %12.9 or earlier versions,
-it should use `add_column_with_default` helper. If a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
-is involved, backporting to %12.9 is contraindicated.
+have caused a full table rewrite.
## Removing the column default for non-nullable columns
@@ -852,8 +847,7 @@ update_column_in_batches(:projects, :foo, update_value) do |table, query|
end
```
-Like `add_column_with_default`, there is a RuboCop cop to detect usage of this
-on large tables. In the case of `update_column_in_batches`, it may be acceptable
+In the case of `update_column_in_batches`, it may be acceptable
to run on a large table, as long as it is only updating a small subset of the
rows in the table, but do not ignore that without validating on the GitLab.com
staging environment - or asking someone else to do so for you - beforehand.
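As a hypothetical illustration of the guidance above, here is what adding a column with a default looks like with the standard helper on PostgreSQL 11 and later. The class, table, column, and values are invented, and the migration base class version may differ by release:

```ruby
# Hypothetical example -- class, table, column, and values are invented.
class AddShardingKeyToWidgets < Gitlab::Database::Migration[2.0]
  def change
    # On PostgreSQL 11 and later this no longer triggers a full table rewrite,
    # so the plain helper is used instead of the deprecated add_column_with_default.
    add_column :widgets, :sharding_key, :integer, default: 0, null: false
  end
end
```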
diff --git a/doc/update/index.md b/doc/update/index.md
index 410cdadff70..e810c63a7ea 100644
--- a/doc/update/index.md
+++ b/doc/update/index.md
@@ -473,6 +473,7 @@ and [Helm Chart deployments](https://docs.gitlab.com/charts/). They come with ap
### 15.6.0
+- You should use one of the [officially supported PostgreSQL versions](../administration/package_information/postgresql_versions.md). Some database migrations can cause stability and performance issues with older PostgreSQL versions.
- Git 2.37.0 and later is required by Gitaly. For installations from source, we recommend you use the [Git version provided by Gitaly](../install/installation.md#git).
### 15.5.0