gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2021-06-24 13:31:56 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2021-06-24 13:31:56 +0300
commit    159f25da0106c574f2c855b44d5ba4e46822d3a3 (patch)
tree      0c0c451079f5a737e3a45461473f45fb5f845921 /doc
parent    f1926d2aa6447173a06fee5e0a3141bea27a0d8d (diff)
Add latest changes from gitlab-org/gitlab@14-0-stable-ee
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/auditor_users.md | 17
-rw-r--r--  doc/administration/auth/ldap/ldap-troubleshooting.md | 4
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md | 53
-rw-r--r--  doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png | bin 28817 -> 0 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/replication-status.png | bin 7716 -> 0 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification-status-primary.png | bin 13329 -> 0 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification-status-secondary.png | bin 12186 -> 0 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification_status_primary_v14_0.png | bin 0 -> 28197 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification_status_secondary_v14_0.png | bin 0 -> 35270 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md | 81
-rw-r--r--  doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md | 66
-rw-r--r--  doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md | 53
-rw-r--r--  doc/administration/geo/replication/configuration.md | 28
-rw-r--r--  doc/administration/geo/replication/datatypes.md | 1
-rw-r--r--  doc/administration/geo/replication/disable_geo.md | 9
-rw-r--r--  doc/administration/geo/replication/docker_registry.md | 9
-rw-r--r--  doc/administration/geo/replication/img/geo_architecture.png | bin 53225 -> 49547 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_node_dashboard.png | bin 41734 -> 0 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_node_dashboard_v14_0.png | bin 0 -> 48805 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_node_health_v14_0.png | bin 0 -> 57973 bytes
-rw-r--r--  doc/administration/geo/replication/object_storage.md | 9
-rw-r--r--  doc/administration/geo/replication/remove_geo_site.md | 3
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md | 20
-rw-r--r--  doc/administration/geo/replication/tuning.md | 28
-rw-r--r--  doc/administration/housekeeping.md | 24
-rw-r--r--  doc/administration/img/auditor_access_form.png | bin 11910 -> 0 bytes
-rw-r--r--  doc/administration/maintenance_mode/index.md | 20
-rw-r--r--  doc/administration/operations/extra_sidekiq_processes.md | 6
-rw-r--r--  doc/administration/operations/fast_ssh_key_lookup.md | 10
-rw-r--r--  doc/administration/operations/img/sidekiq-cluster.png | bin 22576 -> 0 bytes
-rw-r--r--  doc/administration/operations/img/write_to_authorized_keys_setting.png | bin 29192 -> 0 bytes
-rw-r--r--  doc/administration/polling.md | 39
-rw-r--r--  doc/administration/raketasks/check.md | 3
-rw-r--r--  doc/administration/raketasks/project_import_export.md | 11
-rw-r--r--  doc/administration/raketasks/storage.md | 15
-rw-r--r--  doc/install/azure/index.md | 8
-rw-r--r--  doc/subscriptions/index.md | 4
-rw-r--r--  doc/topics/autodevops/upgrading_auto_deploy_dependencies.md | 75
-rw-r--r--  doc/user/admin_area/geo_nodes.md | 14
-rw-r--r--  doc/user/group/index.md | 20
-rw-r--r--  doc/user/project/members/index.md | 14
-rw-r--r--  doc/user/project/members/share_project_with_groups.md | 4
-rw-r--r--  doc/user/project/merge_requests/allow_collaboration.md | 2
-rw-r--r--  doc/user/project/settings/project_access_tokens.md | 2
44 files changed, 368 insertions(+), 284 deletions(-)
diff --git a/doc/administration/auditor_users.md b/doc/administration/auditor_users.md
index 96bfbd88ddf..5f31ed709f2 100644
--- a/doc/administration/auditor_users.md
+++ b/doc/administration/auditor_users.md
@@ -53,17 +53,16 @@ helpful:
you can create an Auditor user and then share the credentials with those users
to whom you want to grant access.
-## Adding an Auditor user
+## Add an Auditor user
-To create a new Auditor user:
+To create an Auditor user:
-1. Create a new user or edit an existing one by navigating to
- **Admin Area > Users**. The option of the access level is located in
- the 'Access' section.
-
- ![Admin Area Form](img/auditor_access_form.png)
-
-1. Select **Save changes** or **Create user** for the changes to take effect.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Users**.
+1. Create a new user or edit an existing one, and in the **Access** section
+   select **Auditor**.
+1. Select **Create user** or **Save changes**, depending on whether you
+   created a new user or edited an existing one.
To revoke Auditor permissions from a user, make them a regular user by
following the previous steps.
diff --git a/doc/administration/auth/ldap/ldap-troubleshooting.md b/doc/administration/auth/ldap/ldap-troubleshooting.md
index acafe52007b..1215d90134f 100644
--- a/doc/administration/auth/ldap/ldap-troubleshooting.md
+++ b/doc/administration/auth/ldap/ldap-troubleshooting.md
@@ -357,8 +357,8 @@ things to check to debug the situation.
LDAP yet and must do so first.
- You've waited an hour or [the configured
interval](index.md#adjusting-ldap-group-sync-schedule) for the group to
- sync. To speed up the process, either go to the GitLab group **Settings ->
- Members** and press **Sync now** (sync one group) or [run the group sync Rake
+ sync. To speed up the process, either go to the GitLab group **Group information > Members**
+ and press **Sync now** (sync one group) or [run the group sync Rake
task](../../raketasks/ldap.md#run-a-group-sync) (sync all groups).
If all of the above looks good, jump into a little more advanced debugging in the rails console.
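
For reference, a sketch of the group-sync Rake invocation linked above, assuming an Omnibus installation:

```shell
# Trigger an immediate sync of all LDAP groups instead of waiting for the interval
sudo gitlab-rake gitlab:ldap:group_sync
```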
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index 8d3745130bd..f03cd64c14e 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -58,19 +58,25 @@ Feature.enable('geo_repository_verification')
## Repository verification
-Go to the **Admin Area > Geo** dashboard on the **primary** node and expand
-the **Verification information** tab for that node to view automatic checksumming
-status for repositories and wikis. Successes are shown in green, pending work
-in gray, and failures in red.
+On the **primary** node:
-![Verification status](img/verification-status-primary.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand the **Verification information** tab for that node to view automatic checksumming
+ status for repositories and wikis. Successes are shown in green, pending work
+ in gray, and failures in red.
-Go to the **Admin Area > Geo** dashboard on the **secondary** node and expand
-the **Verification information** tab for that node to view automatic verification
-status for repositories and wikis. As with checksumming, successes are shown in
-green, pending work in gray, and failures in red.
+ ![Verification status](img/verification_status_primary_v14_0.png)
-![Verification status](img/verification-status-secondary.png)
+On the **secondary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand the **Verification information** tab for that node to view automatic verification
+ status for repositories and wikis. Successes are shown in green, pending work
+ in gray, and failures in red.
+
+ ![Verification status](img/verification_status_secondary_v14_0.png)
## Using checksums to compare Geo nodes
@@ -92,11 +98,14 @@ data. The default and recommended re-verification interval is 7 days, though
an interval as short as 1 day can be set. Shorter intervals reduce risk but
increase load and vice versa.
-Go to the **Admin Area > Geo** dashboard on the **primary** node, and
-click the **Edit** button for the **primary** node to customize the minimum
-re-verification interval:
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** for the **primary** node to customize the minimum
+ re-verification interval:
-![Re-verification interval](img/reverification-interval.png)
+ ![Re-verification interval](img/reverification-interval.png)
The automatic background re-verification is enabled by default, but you can
disable if you need. Run the following commands in a Rails console on the
@@ -141,17 +150,19 @@ sudo gitlab-rake geo:verification:wiki:reset
If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:
-1. Go to the **Admin Area > Overview > Projects** dashboard on the **primary** node, find the
- project that you want to check the checksum differences and click on the
- **Edit** button:
- ![Projects dashboard](img/checksum-differences-admin-projects.png)
+1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Overview > Projects**.
+ 1. Find the project for which you want to check the checksum differences and
+ select its name.
+ 1. On the project administration page, get the **Gitaly storage name** and
+ **Gitaly relative path**.
-1. On the project administration page get the **Gitaly storage name**, and **Gitaly relative path**:
- ![Project administration page](img/checksum-differences-admin-project-page.png)
+ ![Project administration page](img/checksum-differences-admin-project-page.png)
1. Go to the project's repository directory on both **primary** and **secondary** nodes
(the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs`
- is customized, check the directory layout on your server to be sure.
+ is customized, check the directory layout on your server to be sure:
```shell
cd /var/opt/gitlab/git-data/repositories
```
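
From there, one way to compare the two nodes is to checksum the refs in each copy of the repository and diff the output; a minimal sketch, using the Gitaly relative path found above (placeholder shown):

```shell
cd /var/opt/gitlab/git-data/repositories/<gitaly-relative-path>
# Run on both the primary and the secondary, then compare the digests
git show-ref --head | md5sum
```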
diff --git a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
deleted file mode 100644
index 85759d903a4..00000000000
--- a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/replication-status.png b/doc/administration/geo/disaster_recovery/img/replication-status.png
deleted file mode 100644
index d7085927c75..00000000000
--- a/doc/administration/geo/disaster_recovery/img/replication-status.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification-status-primary.png b/doc/administration/geo/disaster_recovery/img/verification-status-primary.png
deleted file mode 100644
index 2503408ec5d..00000000000
--- a/doc/administration/geo/disaster_recovery/img/verification-status-primary.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png b/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png
deleted file mode 100644
index 462274d8b14..00000000000
--- a/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification_status_primary_v14_0.png b/doc/administration/geo/disaster_recovery/img/verification_status_primary_v14_0.png
new file mode 100644
index 00000000000..9d2537a18bf
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/verification_status_primary_v14_0.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification_status_secondary_v14_0.png b/doc/administration/geo/disaster_recovery/img/verification_status_secondary_v14_0.png
new file mode 100644
index 00000000000..3b4ff9f393b
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/verification_status_secondary_v14_0.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index d50078da172..5c15523ac78 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -109,13 +109,16 @@ The maintenance window won't end until Geo replication and verification is
completely finished. To keep the window as short as possible, you should
ensure these processes are as close to 100% as possible during active use.
-Go to the **Admin Area > Geo** dashboard on the **secondary** node to
-review status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete
+On the **secondary** node:
-![Replication status](img/replication-status.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+ Replicated objects (shown in green) should be close to 100%,
+ and there should be no failures (shown in red). If a large proportion of
+ objects aren't yet replicated (shown in gray), consider giving the node more
+ time to complete.
+
+ ![Replication status](../replication/img/geo_node_dashboard_v14_0.png)
If any objects are failing to replicate, this should be investigated before
scheduling the maintenance window. Following a planned failover, anything that
@@ -134,23 +137,26 @@ This [content was moved to another location](background_verification.md).
### Notify users of scheduled maintenance
-On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
-message. You can check under **Admin Area > Geo** to estimate how long it
-takes to finish syncing. An example message would be:
+On the **primary** node:
-> A scheduled maintenance takes place at XX:XX UTC. We expect it to take
-> less than 1 hour.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Messages**.
+1. Add a message notifying users of the maintenance window.
+ You can check under **Geo > Nodes** to estimate how long it
+ takes to finish syncing.
+1. Select **Add broadcast message**.
## Prevent updates to the **primary** node
To ensure that all data is replicated to a secondary site, updates (write requests) need to
-be disabled on the primary site:
-
-1. Enable [maintenance mode](../../maintenance_mode/index.md).
-
-1. Disable non-Geo periodic background jobs on the **primary** node by navigating
- to **Admin Area > Monitoring > Background Jobs > Cron**, pressing `Disable All`,
- and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+be disabled on the **primary** site:
+
+1. Enable [maintenance mode](../../maintenance_mode/index.md) on the **primary** node.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. On the Sidekiq dashboard, select **Cron**.
+1. Select `Disable All` to disable non-Geo periodic background jobs.
+1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
This job re-enables several other cron jobs that are essential for planned
failover to complete successfully.
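
If you prefer to script the Maintenance Mode step, a sketch, assuming the application settings API exposes `maintenance_mode` (host and token are placeholders):

```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=true"
```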
@@ -158,23 +164,28 @@ be disabled on the primary site:
1. If you are manually replicating any data not managed by Geo, trigger the
final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all queues except those with `geo` in the name to drop to 0.
- These queues contain work that has been submitted by your users; failing over
- before it is completed, causes the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
- following conditions to be true of the **secondary** node you are failing over to:
-
- - All replication meters to each 100% replicated, 0% failures.
- - All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
- - The Geo log cursor is up to date (0 events behind).
-
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
- to verify the integrity of CI artifacts, LFS objects, and uploads in file
- storage.
+1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+ those with `geo` in the name to drop to 0.
+ These queues contain work that has been submitted by your users; failing over
+ before it is completed causes the work to be lost.
+ 1. On the left sidebar, select **Geo > Nodes** and wait for the
+ following conditions to be true of the **secondary** node you are failing over to:
+
+ - All replication meters reach 100% replicated, 0% failures.
+ - All verification meters reach 100% verified, 0% failures.
+ - Database replication lag is 0ms.
+ - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+ queues to drop to 0 queued and 0 running jobs.
+ 1. [Run an integrity check](../../raketasks/check.md) to verify the integrity
+ of CI artifacts, LFS objects, and uploads in file storage.
At this point, your **secondary** node contains an up-to-date copy of everything the
**primary** node has, meaning nothing was lost when you fail over.
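
For reference, a sketch of the integrity checks the last step points to, with task names assumed from the linked Rake task page (run on the **secondary** node):

```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```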
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
index 3227fafca0f..4cfe781c7a4 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
@@ -63,13 +63,16 @@ Before following any of those steps, make sure you have `root` access to the
**secondary** to promote it, since no automated way is provided to
promote a Geo replica and perform a failover.
-On the **secondary** node, navigate to the **Admin Area > Geo** dashboard to
-review its status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete.
+On the **secondary** node:
-![Replication status](../img/replication-status.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes** to see its status.
+ Replicated objects (shown in green) should be close to 100%,
+ and there should be no failures (shown in red). If a large proportion of
+ objects aren't yet replicated (shown in gray), consider giving the node more
+ time to complete.
+
+ ![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
If any objects are failing to replicate, this should be investigated before
scheduling the maintenance window. After a planned failover, anything that
@@ -126,11 +129,14 @@ follow these steps to avoid unnecessary data loss:
existing Git repository with an SSH remote URL. The server should refuse
connection.
- 1. On the **primary** node, disable non-Geo periodic background jobs by navigating
- to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
- and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
- This job will re-enable several other cron jobs that are essential for planned
- failover to complete successfully.
+ 1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Cron**.
+ 1. Select `Disable All` to disable any non-Geo periodic background jobs.
+ 1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+ This job will re-enable several other cron jobs that are essential for planned
+ failover to complete successfully.
1. Finish replicating and verifying all data:
@@ -141,22 +147,28 @@ follow these steps to avoid unnecessary data loss:
1. If you are manually replicating any
[data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
trigger the final replication process now.
- 1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all queues except those with `geo` in the name to drop to 0.
- These queues contain work that has been submitted by your users; failing over
- before it is completed will cause the work to be lost.
- 1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
- following conditions to be true of the **secondary** node you are failing over to:
- - All replication meters to each 100% replicated, 0% failures.
- - All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
- - The Geo log cursor is up to date (0 events behind).
-
- 1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
- 1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
- to verify the integrity of CI artifacts, LFS objects, and uploads in file
- storage.
+ 1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+ those with `geo` in the name to drop to 0.
+ These queues contain work that has been submitted by your users; failing over
+ before it is completed causes the work to be lost.
+ 1. On the left sidebar, select **Geo > Nodes** and wait for the
+ following conditions to be true of the **secondary** node you are failing over to:
+
+ - All replication meters reach 100% replicated, 0% failures.
+ - All verification meters reach 100% verified, 0% failures.
+ - Database replication lag is 0ms.
+ - The Geo log cursor is up to date (0 events behind).
+
+ 1. On the **secondary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+ queues to drop to 0 queued and 0 running jobs.
+ 1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+ of CI artifacts, LFS objects, and uploads in file storage.
At this point, your **secondary** node will contain an up-to-date copy of everything the
**primary** node has, meaning nothing will be lost when you fail over.
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
index 7f311d172ef..6caeddad51a 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_single_node.md
@@ -57,7 +57,7 @@ and there should be no failures (shown in red). If a large proportion of
objects aren't yet replicated (shown in gray), consider giving the node more
time to complete.
-![Replication status](../img/replication-status.png)
+![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
If any objects are failing to replicate, this should be investigated before
scheduling the maintenance window. After a planned failover, anything that
@@ -114,11 +114,14 @@ follow these steps to avoid unnecessary data loss:
existing Git repository with an SSH remote URL. The server should refuse
connection.
- 1. On the **primary** node, disable non-Geo periodic background jobs by navigating
- to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
- and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
- This job will re-enable several other cron jobs that are essential for planned
- failover to complete successfully.
+ 1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Cron**.
+ 1. Select `Disable All` to disable any non-Geo periodic background jobs.
+ 1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+ This job will re-enable several other cron jobs that are essential for planned
+ failover to complete successfully.
1. Finish replicating and verifying all data:
@@ -129,22 +132,28 @@ follow these steps to avoid unnecessary data loss:
1. If you are manually replicating any
[data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
trigger the final replication process now.
- 1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all queues except those with `geo` in the name to drop to 0.
- These queues contain work that has been submitted by your users; failing over
- before it is completed will cause the work to be lost.
- 1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
- following conditions to be true of the **secondary** node you are failing over to:
- - All replication meters to each 100% replicated, 0% failures.
- - All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
- - The Geo log cursor is up to date (0 events behind).
-
- 1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
- and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
- 1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
- to verify the integrity of CI artifacts, LFS objects, and uploads in file
- storage.
+ 1. On the **primary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+ those with `geo` in the name to drop to 0.
+ These queues contain work that has been submitted by your users; failing over
+ before it is completed causes the work to be lost.
+ 1. On the left sidebar, select **Geo > Nodes** and wait for the
+ following conditions to be true of the **secondary** node you are failing over to:
+
+ - All replication meters reach 100% replicated, 0% failures.
+ - All verification meters reach 100% verified, 0% failures.
+ - Database replication lag is 0ms.
+ - The Geo log cursor is up to date (0 events behind).
+
+ 1. On the **secondary** node:
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+ queues to drop to 0 queued and 0 running jobs.
+ 1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+ of CI artifacts, LFS objects, and uploads in file storage.
At this point, your **secondary** node will contain an up-to-date copy of everything the
**primary** node has, meaning nothing will be lost when you fail over.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 6d5f3e61ba0..926c4c565aa 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -196,9 +196,9 @@ keys must be manually replicated to the **secondary** node.
gitlab-ctl reconfigure
```
-1. Visit the **primary** node's **Admin Area > Geo**
- (`/admin/geo/nodes`) in your browser.
-1. Click the **New node** button.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **New node**.
![Add secondary node](img/adding_a_secondary_node_v13_3.png)
1. Fill in **Name** with the `gitlab_rails['geo_node_name']` in
`/etc/gitlab/gitlab.rb`. These values must always match *exactly*, character
@@ -209,7 +209,7 @@ keys must be manually replicated to the **secondary** node.
1. Optionally, choose which groups or storage shards should be replicated by the
**secondary** node. Leave blank to replicate all. Read more in
[selective synchronization](#selective-synchronization).
-1. Click the **Add node** button to add the **secondary** node.
+1. Select **Add node** to add the **secondary** node.
1. SSH into your GitLab **secondary** server and restart the services:
```shell
gitlab-ctl restart
```
@@ -252,24 +252,28 @@ on the **secondary** node.
Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
method to be enabled. This is enabled by default, but if you are converting an existing node to Geo, verify that it is still enabled:
-1. Go to **Admin Area > Settings** (`/admin/application_settings/general`) on the **primary** node.
-1. Expand "Visibility and access controls".
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > General**.
+1. Expand **Visibility and access controls**.
1. Ensure "Enabled Git access protocols" is set to either "Both SSH and HTTP(S)" or "Only HTTP(S)".
### Step 6. Verify proper functioning of the **secondary** node
-Your **secondary** node is now configured!
+You can sign in to the **secondary** node with the same credentials you used with
+the **primary** node. After you sign in:
-You can sign in to the _secondary_ node with the same credentials you used with
-the _primary_ node. Visit the _secondary_ node's **Admin Area > Geo**
-(`/admin/geo/nodes`) in your browser to determine if it's correctly identified
-as a _secondary_ Geo node, and if Geo is enabled.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Verify that it's correctly identified as a **secondary** Geo node, and that
+ Geo is enabled.
The initial replication, or 'backfill', is probably still in progress. You
can monitor the synchronization process on each Geo node from the **primary**
node's **Geo Nodes** dashboard in your browser.
-![Geo dashboard](img/geo_node_dashboard.png)
+![Geo dashboard](img/geo_node_dashboard_v14_0.png)
If your installation isn't working properly, check the
[troubleshooting document](troubleshooting.md).
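
A command-line sanity check can complement the dashboard; a sketch, assuming the standard Geo Rake task is available on the node:

```shell
sudo gitlab-rake gitlab:geo:check
```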
diff --git a/doc/administration/geo/replication/datatypes.md b/doc/administration/geo/replication/datatypes.md
index a1461a64518..6989765dbad 100644
--- a/doc/administration/geo/replication/datatypes.md
+++ b/doc/administration/geo/replication/datatypes.md
@@ -189,6 +189,7 @@ successfully, you must replicate their data using some other means.
|[Object pools for forked project deduplication](../../../development/git_object_deduplication.md) | **Yes** | No | No | |
|[Container Registry](../../packages/container_registry.md) | **Yes** (12.3) | No | No | Disabled by default. See [instructions](docker_registry.md) to enable. |
|[Content in object storage (beta)](object_storage.md) | **Yes** (12.4) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/13845) | No | |
+|[Infrastructure Registry for Terraform Module](../../../user/packages/terraform_module_registry/index.md) | **Yes** (14.0) | [**Yes**](#limitation-of-verification-for-files-in-object-storage) (14.0) | Via Object Storage provider if supported. Native Geo support (Beta). | Behind feature flag `geo_package_file_replication`, enabled by default. |
|[Project designs repository](../../../user/project/issues/design_management.md) | **Yes** (12.7) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/32467) | No | Designs also require replication of LFS objects and Uploads. |
|[Package Registry for npm](../../../user/packages/npm_registry/index.md) | **Yes** (13.2) | [**Yes**](#limitation-of-verification-for-files-in-object-storage) (13.10) | Via Object Storage provider if supported. Native Geo support (Beta). | Behind feature flag `geo_package_file_replication`, enabled by default. |
|[Package Registry for Maven](../../../user/packages/maven_repository/index.md) | **Yes** (13.2) | [**Yes**](#limitation-of-verification-for-files-in-object-storage) (13.10) | Via Object Storage provider if supported. Native Geo support (Beta). | Behind feature flag `geo_package_file_replication`, enabled by default. |
diff --git a/doc/administration/geo/replication/disable_geo.md b/doc/administration/geo/replication/disable_geo.md
index c71cf80d0c1..ba01c55a157 100644
--- a/doc/administration/geo/replication/disable_geo.md
+++ b/doc/administration/geo/replication/disable_geo.md
@@ -33,9 +33,12 @@ to do that.
## Remove the primary site from the UI
-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
-1. Click the **Remove** button for the **primary** node.
-1. Confirm by clicking **Remove** when the prompt appears.
+To remove the **primary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Remove** for the **primary** node.
+1. Confirm by selecting **Remove** when the prompt appears.
## Remove secondary replication slots
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
index a8628481ba7..cc0719442a1 100644
--- a/doc/administration/geo/replication/docker_registry.md
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -124,7 +124,10 @@ For each application and Sidekiq node on the **secondary** site:
### Verify replication
-To verify Container Registry replication is working, go to **Admin Area > Geo**
-(`/admin/geo/nodes`) on the **secondary** site.
-The initial replication, or "backfill", is probably still in progress.
+To verify Container Registry replication is working, on the **secondary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+ The initial replication, or "backfill", is probably still in progress.
+
You can monitor the synchronization process on each Geo site from the **primary** site's **Geo Nodes** dashboard in your browser.
diff --git a/doc/administration/geo/replication/img/geo_architecture.png b/doc/administration/geo/replication/img/geo_architecture.png
index aac63be41ff..90272537f43 100644
--- a/doc/administration/geo/replication/img/geo_architecture.png
+++ b/doc/administration/geo/replication/img/geo_architecture.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_node_dashboard.png b/doc/administration/geo/replication/img/geo_node_dashboard.png
deleted file mode 100644
index 8b9aceba825..00000000000
--- a/doc/administration/geo/replication/img/geo_node_dashboard.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_node_dashboard_v14_0.png b/doc/administration/geo/replication/img/geo_node_dashboard_v14_0.png
new file mode 100644
index 00000000000..6d183fc6bd2
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_node_dashboard_v14_0.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_node_health_v14_0.png b/doc/administration/geo/replication/img/geo_node_health_v14_0.png
new file mode 100644
index 00000000000..4c640522569
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_node_health_v14_0.png
Binary files differ
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 7dd831092a3..90a41ed3e1c 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -21,7 +21,7 @@ To have:
[Read more about using object storage with GitLab](../../object_storage.md).
-## Enabling GitLab managed object storage replication
+## Enabling GitLab-managed object storage replication
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10586) in GitLab 12.4.
@@ -31,10 +31,11 @@ This is a [**beta** feature](https://about.gitlab.com/handbook/product/#beta) an
**Secondary** sites can replicate files stored on the **primary** site regardless of
whether they are stored on the local file system or in object storage.
-To enable GitLab replication, you must:
+To enable GitLab replication:
-1. Go to **Admin Area > Geo**.
-1. Press **Edit** on the **secondary** site.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the **secondary** site.
1. In the **Synchronization Settings** section, find the **Allow this secondary node to replicate content on Object Storage**
checkbox to enable it.
diff --git a/doc/administration/geo/replication/remove_geo_site.md b/doc/administration/geo/replication/remove_geo_site.md
index a42a4c4eb47..274eb28dbc9 100644
--- a/doc/administration/geo/replication/remove_geo_site.md
+++ b/doc/administration/geo/replication/remove_geo_site.md
@@ -9,7 +9,8 @@ type: howto
**Secondary** sites can be removed from the Geo cluster using the Geo administration page of the **primary** site. To remove a **secondary** site:
-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
1. Select the **Remove** button for the **secondary** site you want to remove.
1. Confirm by selecting **Remove** when the prompt appears.
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 1fd923dbaf1..c00f523957c 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -25,8 +25,12 @@ Before attempting more advanced troubleshooting:
### Check the health of the **secondary** node
-Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
-your browser. We perform the following health checks on each **secondary** node
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+
+We perform the following health checks on each **secondary** node
to help identify if something is wrong:
- Is the node running?
@@ -35,7 +39,7 @@ to help identify if something is wrong:
- Is the node's secondary tracking database connected?
- Is the node's secondary tracking database up-to-date?
-![Geo health check](img/geo_node_dashboard.png)
+![Geo health check](img/geo_node_health_v14_0.png)
For information on how to resolve common errors reported from the UI, see
[Fixing Common Errors](#fixing-common-errors).
@@ -129,7 +133,8 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:
- Using the `gitlab_rails['geo_node_name']` setting.
- If that is not defined, using the `external_url` setting.
-This name is used to look up the node with the same **Name** in **Admin Area > Geo**.
+This name is used to look up the node with the same **Name** in the **Geo Nodes**
+dashboard.
To check if the current machine has a node name that matches a node in the
database, run the check task:
@@ -739,8 +744,11 @@ If you are able to log in to the **primary** node, but you receive this error
when attempting to log into a **secondary**, you should check that the Geo
node's URL matches its external URL.
-1. On the primary, visit **Admin Area > Geo**.
-1. Find the affected **secondary** and click **Edit**.
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Find the affected **secondary** site and select **Edit**.
1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
in `external_url "https://gitlab.example.com"` on the frontend server(s) of
the **secondary** node.
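
A quick way to read that configured value on each frontend server; a sketch:

```shell
sudo grep '^external_url' /etc/gitlab/gitlab.rb
```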
diff --git a/doc/administration/geo/replication/tuning.md b/doc/administration/geo/replication/tuning.md
index a4aad3dec68..9807f3e6444 100644
--- a/doc/administration/geo/replication/tuning.md
+++ b/doc/administration/geo/replication/tuning.md
@@ -7,20 +7,28 @@ type: howto
# Tuning Geo **(PREMIUM SELF)**
-## Changing the sync/verification capacity values
+You can limit the number of concurrent operations the nodes can run
+in the background.
-In **Admin Area > Geo** (`/admin/geo/nodes`),
-there are several variables that can be tuned to improve performance of Geo:
+## Changing the sync/verification concurrency values
-- Repository sync capacity
-- File sync capacity
-- Container repositories sync capacity
-- Verification capacity
+On the **primary** site:
-Increasing capacity values will increase the number of jobs that are scheduled.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** for the secondary node you want to tune.
+1. Under **Tuning settings**, there are several variables that can be tuned to
+ improve the performance of Geo:
+
+ - Repository synchronization concurrency limit
+ - File synchronization concurrency limit
+ - Container repositories synchronization concurrency limit
+ - Verification concurrency limit
+
+Increasing the concurrency values will increase the number of jobs that are scheduled.
However, this may not lead to more downloads in parallel unless the number of
-available Sidekiq threads is also increased. For example, if repository sync
-capacity is increased from 25 to 50, you may also want to increase the number
+available Sidekiq threads is also increased. For example, if repository synchronization
+concurrency is increased from 25 to 50, you may also want to increase the number
of Sidekiq threads from 25 to 50. See the
[Sidekiq concurrency documentation](../../operations/extra_sidekiq_processes.md#number-of-threads)
for more details.
diff --git a/doc/administration/housekeeping.md b/doc/administration/housekeeping.md
index 9668b7277c2..a89e8a2bad5 100644
--- a/doc/administration/housekeeping.md
+++ b/doc/administration/housekeeping.md
@@ -9,25 +9,27 @@ info: To determine the technical writer assigned to the Stage/Group associated w
GitLab supports and automates housekeeping tasks within your current repository,
such as compressing file revisions and removing unreachable objects.
-## Automatic housekeeping
+## Configure housekeeping
GitLab automatically runs `git gc` and `git repack` on repositories
-after Git pushes. You can change how often this happens or turn it off in
-**Admin Area > Settings > Repository** (`/admin/application_settings/repository`).
+after Git pushes.
-## Manual housekeeping
+You can change how often this happens or turn it off:
-The housekeeping function runs `repack` or `gc` depending on the
-**Housekeeping** settings configured in **Admin Area > Settings > Repository**.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Repository**.
+1. Expand **Repository maintenance**.
+1. Configure the Housekeeping options.
+1. Select **Save changes**.
-For example in the following scenario a `git repack -d` will be executed:
+For example, in the following scenario a `git repack -d` will be executed:
- Project: pushes since GC counter (`pushes_since_gc`) = `10`
- Git GC period = `200`
- Full repack period = `50`
When the `pushes_since_gc` value is 50, a `repack -A -d --pack-kept-objects` runs; similarly, when
-the `pushes_since_gc` value is 200 a `git gc` runs.
+the `pushes_since_gc` value is 200 a `git gc` runs:
- `git gc` ([man page](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-gc.html)) runs a number of housekeeping tasks,
such as compressing file revisions (to reduce disk space and increase performance)
@@ -38,12 +40,6 @@ the `pushes_since_gc` value is 200 a `git gc` runs.
Housekeeping also [removes unreferenced LFS files](../raketasks/cleanup.md#remove-unreferenced-lfs-files)
from your project on the same schedule as the `git gc` operation, freeing up storage space for your project.
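
For illustration, a sketch of the underlying Git commands named in the example above, should you ever need to run them by hand (the repository path is a placeholder):

```shell
cd /var/opt/gitlab/git-data/repositories/<relative-path>.git
sudo -u git git repack -A -d --pack-kept-objects  # the repack run at push 50 in the example
sudo -u git git gc                                # the full gc run at push 200 in the example
```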
-To manually start the housekeeping process:
-
-1. In your project, go to **Settings > General**.
-1. Expand the **Advanced** section.
-1. Select **Run housekeeping**.
-
## How housekeeping handles pool repositories
Housekeeping for pool repositories is handled differently from standard repositories.
diff --git a/doc/administration/img/auditor_access_form.png b/doc/administration/img/auditor_access_form.png
deleted file mode 100644
index c179a7d3b0a..00000000000
--- a/doc/administration/img/auditor_access_form.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/maintenance_mode/index.md b/doc/administration/maintenance_mode/index.md
index c73a49287db..2f5d366f927 100644
--- a/doc/administration/maintenance_mode/index.md
+++ b/doc/administration/maintenance_mode/index.md
@@ -21,10 +21,11 @@ Maintenance Mode allows most external actions that do not change internal state.
There are three ways to enable Maintenance Mode as an administrator:
- **Web UI**:
- 1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Settings > General**.
+ 1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
You can optionally add a message for the banner as well.
-
- 1. Click **Save** for the changes to take effect.
+ 1. Select **Save changes**.
- **API**:
@@ -44,9 +45,11 @@ There are three ways to enable Maintenance Mode as an administrator:
There are three ways to disable Maintenance Mode:
- **Web UI**:
- 1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
-
- 1. Click **Save** for the changes to take effect.
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Settings > General**.
+ 1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
+ You can optionally add a message for the banner as well.
+ 1. Select **Save changes**.
- **API**:
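
The API route is presumably the application settings endpoint; a sketch for disabling Maintenance Mode (host and token are placeholders):

```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=false"
```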
@@ -166,7 +169,10 @@ Background jobs (cron jobs, Sidekiq) continue running as is, because background
[During a planned Geo failover](../geo/disaster_recovery/planned_failover.md#prevent-updates-to-the-primary-node),
it is recommended that you disable all cron jobs except for those related to Geo.
-You can monitor queues and disable jobs in **Admin Area > Monitoring > Background Jobs**.
+To monitor queues and disable jobs:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
### Incident management
diff --git a/doc/administration/operations/extra_sidekiq_processes.md b/doc/administration/operations/extra_sidekiq_processes.md
index ed89d11da75..b910a789d29 100644
--- a/doc/administration/operations/extra_sidekiq_processes.md
+++ b/doc/administration/operations/extra_sidekiq_processes.md
@@ -87,10 +87,10 @@ To start multiple processes:
sudo gitlab-ctl reconfigure
```
-After the extra Sidekiq processes are added, navigate to
-**Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
+To view the Sidekiq processes in GitLab:
-![Multiple Sidekiq processes](img/sidekiq-cluster.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
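
You can also confirm the extra processes from the command line; a sketch:

```shell
# One line per running Sidekiq process
ps aux | grep '[s]idekiq'
```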
## Negate settings
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 8acc40da4ab..bb0756cf948 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -104,11 +104,13 @@ In the case of lookup failures (which are common), the `authorized_keys`
file is still scanned. So Git SSH performance would still be slow for many
users as long as a large file exists.
-You can disable any more writes to the `authorized_keys` file by unchecking
-`Write to "authorized_keys" file` in the **Admin Area > Settings > Network > Performance optimization** of your GitLab
-installation.
+To disable any more writes to the `authorized_keys` file:
-![Write to authorized keys setting](img/write_to_authorized_keys_setting.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Network**.
+1. Expand **Performance optimization**.
+1. Clear the **Write to "authorized_keys" file** checkbox.
+1. Select **Save changes**.
Again, confirm that SSH is working by removing your user's SSH key in the UI,
adding a new one, and attempting to pull a repository.
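
A minimal sketch of that confirmation (hostname is a placeholder):

```shell
# Should greet you by username if the key lookup succeeds
ssh -T git@gitlab.example.com
```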
diff --git a/doc/administration/operations/img/sidekiq-cluster.png b/doc/administration/operations/img/sidekiq-cluster.png
deleted file mode 100644
index 3899385eb8f..00000000000
--- a/doc/administration/operations/img/sidekiq-cluster.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/operations/img/write_to_authorized_keys_setting.png b/doc/administration/operations/img/write_to_authorized_keys_setting.png
deleted file mode 100644
index f6227a6057b..00000000000
--- a/doc/administration/operations/img/write_to_authorized_keys_setting.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/polling.md b/doc/administration/polling.md
index f6732b8edc6..d3f558eeaaa 100644
--- a/doc/administration/polling.md
+++ b/doc/administration/polling.md
@@ -9,23 +9,24 @@ info: To determine the technical writer assigned to the Stage/Group associated w
The GitLab UI polls for updates for different resources (issue notes, issue
titles, pipeline statuses, etc.) on a schedule appropriate to the resource.
-In **[Admin Area](../user/admin_area/index.md) > Settings > Preferences > Real-time features**,
-you can configure "Polling
-interval multiplier". This multiplier is applied to all resources at once,
-and decimal values are supported. For the sake of the examples below, we will
-say that issue notes poll every 2 seconds, and issue titles poll every 5
-seconds; these are _not_ the actual values.
+To configure the polling interval multiplier:
-- 1 is the default, and recommended for most installations. (Issue notes poll
- every 2 seconds, and issue titles poll every 5 seconds.)
-- 0 disables UI polling completely. (On the next poll, clients stop
- polling for updates.)
-- A value greater than 1 slows polling down. If you see issues with
- database load from lots of clients polling for updates, increasing the
- multiplier from 1 can be a good compromise, rather than disabling polling
- completely. (For example: If this is set to 2, then issue notes poll every 4
- seconds, and issue titles poll every 10 seconds.)
-- A value between 0 and 1 makes the UI poll more frequently (so updates
- show in other sessions faster), but is **not recommended**. 1 should be
- fast enough. (For example, if this is set to 0.5, then issue notes poll every
- 1 second, and issue titles poll every 2.5 seconds.)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Preferences**.
+1. Expand **Real-time features**.
+1. Set a value for the polling interval multiplier. This multiplier is applied
+ to all resources at once, and decimal values are supported:
+
+ - `1.0` is the default, and recommended for most installations.
+ - `0` disables UI polling completely. On the next poll, clients stop
+ polling for updates.
+ - A value greater than `1` slows polling down. If you see issues with
+ database load from lots of clients polling for updates, increasing the
+ multiplier from 1 can be a good compromise, rather than disabling polling
+ completely. For example, if you set the value to `2`, all polling intervals
+ are multiplied by 2, which means that polling happens half as frequently.
+ - A value between `0` and `1` makes the UI poll more frequently (so updates
+ show in other sessions faster), but is **not recommended**. `1` should be
+ fast enough.
+
+1. Select **Save changes**.
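
The same setting can presumably also be changed through the application settings API; a sketch with placeholder host and token:

```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/application/settings?polling_interval_multiplier=2"
```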
diff --git a/doc/administration/raketasks/check.md b/doc/administration/raketasks/check.md
index 7f344a00f72..f7c91aa6b47 100644
--- a/doc/administration/raketasks/check.md
+++ b/doc/administration/raketasks/check.md
@@ -207,8 +207,7 @@ above.
### Dangling commits
`gitlab:git:fsck` can find dangling commits. To fix them, try
-[manually triggering housekeeping](../housekeeping.md#manual-housekeeping)
-for the affected project(s).
+[enabling housekeeping](../housekeeping.md).
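
For reference, the detection task itself, as invoked on Omnibus installations:

```shell
sudo gitlab-rake gitlab:git:fsck
```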
If the issue persists, try triggering `gc` via the
[Rails Console](../operations/rails_console.md#starting-a-rails-console-session):
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index cd6ffc957b1..80321d75d66 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -50,8 +50,13 @@ Note the following:
- Importing is only possible if the version of the import and export GitLab instances are
compatible as described in the [Version history](../../user/project/settings/import_export.md#version-history).
-- The project import option must be enabled in
- application settings (`/admin/application_settings/general`) under **Import sources**, which is available
- under **Admin Area > Settings > Visibility and access controls**.
+- The project import option must be enabled:
+
+ 1. On the top bar, select **Menu >** **{admin}** **Admin**.
+ 1. On the left sidebar, select **Settings > General**.
+ 1. Expand **Visibility and access controls**.
+ 1. Under **Import sources**, check the "Project export enabled" option.
+ 1. Select **Save changes**.
+
- The exports are stored in a temporary directory and are deleted every
24 hours by a specific worker.
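
For context, a sketch of the export and import invocations this page documents (username, namespace, project name, and archive path are placeholders):

```shell
sudo gitlab-rake "gitlab:import_export:export[root, group/subgroup, project_to_export, /path/to/file.tar.gz]"
sudo gitlab-rake "gitlab:import_export:import[root, group/subgroup, imported_project, /path/to/file.tar.gz]"
```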
diff --git a/doc/administration/raketasks/storage.md b/doc/administration/raketasks/storage.md
index 5b6d4e16d8d..cee63a6cae5 100644
--- a/doc/administration/raketasks/storage.md
+++ b/doc/administration/raketasks/storage.md
@@ -107,12 +107,15 @@ to project IDs 50 to 100 in an Omnibus GitLab installation:
sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
```
-You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
-There is a specific queue you can watch to see how long it will take to finish:
-`hashed_storage:hashed_storage_project_migrate`.
+To monitor the progress in GitLab:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. Watch how long the `hashed_storage:hashed_storage_project_migrate` queue
+ will take to finish. After it reaches zero, you can confirm every project
+ has been migrated by running the commands above.
-After it reaches zero, you can confirm every project has been migrated by running the commands above.
-If you find it necessary, you can run this migration script again to schedule missing projects.
+If you find it necessary, you can run the previous migration script again to schedule missing projects.
Any error or warning is logged in Sidekiq's log file.
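
To confirm completion from the command line instead, a sketch using the legacy-storage listing tasks (task names assumed from the section linked below):

```shell
sudo gitlab-rake gitlab:storage:legacy_projects       # count of projects still on legacy storage
sudo gitlab-rake gitlab:storage:list_legacy_projects  # list any that remain
```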
@@ -120,7 +123,7 @@ If [Geo](../geo/index.md) is enabled, each project that is successfully migrated
generates an event to replicate the changes on any **secondary** nodes.
You only need the `gitlab:storage:migrate_to_hashed` Rake task to migrate your repositories, but there are
-[additional commands(#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
+[additional commands](#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
## Rollback from hashed storage to legacy storage
diff --git a/doc/install/azure/index.md b/doc/install/azure/index.md
index 0d62e4d1215..1351489642e 100644
--- a/doc/install/azure/index.md
+++ b/doc/install/azure/index.md
@@ -238,9 +238,11 @@ in this section whenever you need to update GitLab.
### Check the current version
-To determine the version of GitLab you're currently running,
-go to the **{admin}** **Admin Area**, and find the version
-under the **Components** table.
+To determine the version of GitLab you're currently running:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Dashboard**.
+1. Find the version under the **Components** table.
If there's a newer available version of GitLab that contains one or more
security fixes, GitLab displays an **Update asap** notification message that
diff --git a/doc/subscriptions/index.md b/doc/subscriptions/index.md
index 62681d9a657..575ddd5462e 100644
--- a/doc/subscriptions/index.md
+++ b/doc/subscriptions/index.md
@@ -211,13 +211,13 @@ After you ensure that you are using OSI-approved licenses for your projects, you
###### Screenshot 1: License overview
-On the left sidebar, select **Project Information > Details**. Take a screenshot that includes a view of the license you've chosen for your project.
+On the left sidebar, select **Project information > Details**. Take a screenshot that includes a view of the license you've chosen for your project.
![License overview](img/license-overview.png)
###### Screenshot 2: License file
-Navigate to one of the license files that you uploaded. You can usually find the license file by selecting **Project Information > Details** and scanning the page for the license.
+Navigate to one of the license files that you uploaded. You can usually find the license file by selecting **Project information > Details** and scanning the page for the license.
Make sure the screenshot includes the title of the license.
![License file](img/license-file.png)
diff --git a/doc/topics/autodevops/upgrading_auto_deploy_dependencies.md b/doc/topics/autodevops/upgrading_auto_deploy_dependencies.md
index 62dc061aba6..48d37e5125c 100644
--- a/doc/topics/autodevops/upgrading_auto_deploy_dependencies.md
+++ b/doc/topics/autodevops/upgrading_auto_deploy_dependencies.md
@@ -77,7 +77,7 @@ The v2 auto-deploy-image drops support for Kubernetes 1.15 and lower. If you nee
Kubernetes cluster, follow your cloud provider's instructions. Here's
[an example on GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster).
-#### Helm 3
+#### Helm v3
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/228609) in GitLab 13.4.
@@ -86,47 +86,38 @@ Previously, `auto-deploy-image` used Helm v2, which used Tiller in a cluster.
The v2 `auto-deploy-image` uses Helm v3, which doesn't require Tiller.
If your Auto DevOps project has an active environment that was deployed with the v1
-`auto-deploy-image`, use the following steps to upgrade to v2, which uses Helm 3:
-
-1. Modify your `.gitlab-ci.yml` with:
-
- ```yaml
- include:
- - template: Auto-DevOps.gitlab-ci.yml
- - remote: https://gitlab.com/hfyngvason/ci-templates/-/raw/master/Helm-2to3.gitlab-ci.yml
-
- variables:
- # If this variable is not present, the migration jobs will not show up
- MIGRATE_HELM_2TO3: "true"
-
- .auto-deploy:
- # Optional: If you are on GitLab 13.12 or older, pin the auto-deploy-image
- # image: registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v2.6.0
- variables:
- AUTO_DEVOPS_FORCE_DEPLOY_V2: 1
- # If you have non-public pipelines, you can back up the entire namespace in a job artifact
- # prior to the migration by setting the CI variable BACKUP_NAMESPACE to a non-empty value.
- # WARNING: If you have public pipelines, this artifact will be public and can
- # expose your secrets.
- # BACKUP_HELM2_RELEASES: 1
- ```
-
-1. Run the `<environment-name>:helm-2to3:migrate` job.
-1. Deploy your environment as usual. This deployment uses Helm 3.
-1. If the deployment succeeds, you can safely run `environment:helm-2to3:cleanup`.
- This deletes all Helm 2 release data from the namespace.
-
- If you set `BACKUP_HELM2_RELEASES` to a non-empty value, the `<environment-name>:helm2to3:migrate`
- job saves a backup for 1 week in a job artifact called `helm-2-release-backups`.
- If you accidentally delete the Helm 2 releases before you are ready, then
- this backup is in a Kubernetes manifest file that can be restored using
- `kubectl apply -f $backup`.
-
- **WARNING:**
- This artifact can contain secrets and is visible to any
- user who can see your job.
-
-1. Remove the `MIGRATE_HELM_2TO3` CI/CD variable.
+`auto-deploy-image`, use the following steps to upgrade to v2, which uses Helm v3:
+
+1. Include the [Helm 2to3 migration CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml):
+
+ - If you are on GitLab.com, or GitLab 14.0.1 or later, this template is already included in Auto DevOps.
+ - On other versions of GitLab, you can modify your `.gitlab-ci.yml` to include the templates:
+
+ ```yaml
+ include:
+ - template: Auto-DevOps.gitlab-ci.yml
+ - remote: https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml
+ ```
+
+1. Set the following CI/CD variables (a combined `.gitlab-ci.yml` sketch follows these steps):
+
+ - `MIGRATE_HELM_2TO3` to `true`. If this variable is not present, migration jobs do not run.
+ - `AUTO_DEVOPS_FORCE_DEPLOY_V2` to `1`.
+ - **Optional:** `BACKUP_HELM2_RELEASES` to `1`. If you set this variable, the migration
+ job saves a backup for 1 week in a job artifact called `helm-2-release-backups`.
+ If you accidentally delete the Helm v2 releases before you are ready, you can restore
+ this backup from a Kubernetes manifest file by using `kubectl apply -f $backup`.
+
+ **WARNING:**
+ *Do not use this if you have public pipelines*.
+ This artifact can contain secrets and is visible to any
+ user who can see your job.
+
+1. Run a pipeline and trigger the `<environment-name>:helm-2to3:migrate` job.
+1. Deploy your environment as usual. This deployment uses Helm v3.
+1. If the deployment succeeds, you can safely run `<environment-name>:helm-2to3:cleanup`.
+ This deletes all Helm v2 release data from the namespace.
+1. Remove the `MIGRATE_HELM_2TO3` CI/CD variable or set it to `false`. You can do this one environment at a time using [environment scopes](../../ci/environments/index.md#scoping-environments-with-specs).
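+
+For reference, a sketch of how the include and the variables above can live
+directly in `.gitlab-ci.yml`, based on the template these steps replace
+(project-level CI/CD variables work equally well):
+
+```yaml
+include:
+  - template: Auto-DevOps.gitlab-ci.yml
+  # Not needed on GitLab.com, or on GitLab 14.0.1 and later
+  - remote: https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml
+
+variables:
+  # Without this variable, the migration jobs do not run
+  MIGRATE_HELM_2TO3: "true"
+
+.auto-deploy:
+  variables:
+    AUTO_DEVOPS_FORCE_DEPLOY_V2: 1
+    # Optional: back up Helm v2 release data as a job artifact.
+    # WARNING: the artifact can contain secrets; avoid with public pipelines.
+    # BACKUP_HELM2_RELEASES: 1
+```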
#### In-Cluster PostgreSQL Channel 2
diff --git a/doc/user/admin_area/geo_nodes.md b/doc/user/admin_area/geo_nodes.md
index 32b1555c33d..19a76d0938b 100644
--- a/doc/user/admin_area/geo_nodes.md
+++ b/doc/user/admin_area/geo_nodes.md
@@ -10,7 +10,10 @@ type: howto
You can configure various settings for GitLab Geo nodes. For more information, see
[Geo documentation](../../administration/geo/index.md).
-On the primary node, go to **Admin Area > Geo**. On secondary nodes, go to **Admin Area > Geo > Nodes**.
+On either the primary or secondary node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
## Common settings
@@ -61,8 +64,13 @@ The **primary** node's Internal URL is used by **secondary** nodes to contact it
[External URL](https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab)
which is used by users. Internal URL does not need to be a private address.
-Internal URL defaults to External URL, but you can customize it under
-**Admin Area > Geo > Nodes**.
+Internal URL defaults to External URL, but you can also customize it:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the node you want to customize.
+1. Edit the internal URL.
+1. Select **Save changes**.
WARNING:
We recommend using an HTTPS connection while configuring the Geo nodes. To avoid
diff --git a/doc/user/group/index.md b/doc/user/group/index.md
index 104ea57db4a..15fbb442752 100644
--- a/doc/user/group/index.md
+++ b/doc/user/group/index.md
@@ -79,7 +79,7 @@ You can give a user access to all projects in a group.
1. On the top bar, select **Menu > Groups**.
1. Select **Your Groups**.
1. Find your group and select it.
-1. From the left sidebar, select **Members**.
+1. From the left sidebar, select **Group information > Members**.
1. Fill in the fields.
- The role applies to all projects in the group. [Learn more about permissions](../permissions.md).
- On the **Access expiration date**, the user can no longer access projects in the group.
@@ -118,11 +118,11 @@ You can change the owner of a group. Each group must always have at least one
member with the [Owner role](../permissions.md#group-members-permissions).
- As an administrator:
- 1. Go to the group and from the left menu, select **Members**.
+ 1. Go to the group and from the left menu, select **Group information > Members**.
1. Give a different member the **Owner** role.
1. Refresh the page. You can now remove the **Owner** role from the original owner.
- As the current group's owner:
- 1. Go to the group and from the left menu, select **Members**.
+ 1. Go to the group and from the left menu, select **Group information > Members**.
1. Give a different member the **Owner** role.
1. Have the new owner sign in and remove the **Owner** role from you.
@@ -138,7 +138,7 @@ Prerequisites:
To remove a member from a group:
1. Go to the group.
-1. From the left menu, select **Members**.
+1. From the left menu, select **Group information > Members**.
1. Next to the member you want to remove, select **Delete**.
1. Optional. On the **Remove member** confirmation box, select the
**Also unassign this user from linked issues and merge requests** checkbox.
@@ -156,7 +156,7 @@ To find members in a group, you can sort, filter, or search.
Filter a group to find members. By default, all members in the group and subgroups are displayed.
-1. Go to the group and select **Members**.
+1. Go to the group and select **Group information > Members**.
1. Above the list of members, in the **Filter members** box, enter filter criteria.
- To view members in the group only, select **Membership = Direct**.
- To view members of the group and its subgroups, select **Membership = Inherited**.
@@ -166,7 +166,7 @@ Filter a group to find members. By default, all members in the group and subgrou
You can search for members by name, username, or email.
-1. Go to the group and select **Members**.
+1. Go to the group and select **Group information > Members**.
1. Above the list of members, in the **Filter members** box, enter search criteria.
1. To the right of the **Filter members** box, select the magnifying glass (**{search}**).
@@ -174,7 +174,7 @@ You can search for members by name, username, or email.
You can sort members by **Account**, **Access granted**, **Max role**, or **Last sign-in**.
-1. Go to the group and select **Members**.
+1. Go to the group and select **Group information > Members**.
1. Above the list of members, on the top right, from the **Account** list, select
the criteria to filter by.
1. To switch the sort between ascending and descending, to the right of the **Account** list, select the
@@ -273,7 +273,7 @@ To share a given group, for example, `Frontend` with another group, for example,
`Engineering`:
1. Go to the `Frontend` group.
-1. From the left menu, select **Members**.
+1. From the left menu, select **Group information > Members**.
1. Select the **Invite group** tab.
1. In the **Select a group to invite** list, select `Engineering`.
1. For the **Max role**, select a [role](../permissions.md).
@@ -297,7 +297,7 @@ In GitLab 13.11, you can optionally replace the sharing form with a modal window
To share a group after enabling this feature:
1. Go to your group's page.
-1. In the left sidebar, go to **Members**, and then select **Invite a group**.
+1. In the left sidebar, go to **Group information > Members**, and then select **Invite a group**.
1. Select a group, and select a **Max role**.
1. (Optional) Select an **Access expiration date**.
1. Select **Invite**.
@@ -341,7 +341,7 @@ To create group links via filter:
LDAP user permissions can be manually overridden by an administrator. To override a user's permissions:
-1. Go to your group's **Members** page.
+1. Go to your group's **Group information > Members** page.
1. In the row for the user you are editing, select the pencil (**{pencil}**) icon.
1. Select the brown **Edit permissions** button in the modal.
diff --git a/doc/user/project/members/index.md b/doc/user/project/members/index.md
index ab33ff0f6d8..11d6bfb5d0c 100644
--- a/doc/user/project/members/index.md
+++ b/doc/user/project/members/index.md
@@ -21,7 +21,7 @@ Prerequisite:
To add a user to a project:
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. On the **Invite member** tab, under **GitLab member or Email address**, type the username or email address.
In GitLab 13.11 and later, you can [replace this form with a modal window](#add-a-member-modal-window).
1. Select a [role](../../permissions.md).
@@ -52,7 +52,7 @@ Prerequisite:
To add groups to a project:
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. On the **Invite group** tab, under **Select a group to invite**, choose a group.
1. Select the highest max [role](../../permissions.md) for users in the group.
1. Optional. Choose an expiration date. On that date, the user can no longer access the project.
@@ -75,7 +75,7 @@ Prerequisite:
To import users:
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. On the **Invite member** tab, at the bottom of the panel, select **Import**.
1. Select the project. You can view only the projects for which you're a maintainer.
1. Select **Import project members**.
@@ -113,7 +113,7 @@ Prerequisite:
To remove a member from a project:
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. Next to the project member you want to remove, select **Remove member** **{remove}**.
1. Optional. In the confirmation box, select the **Also unassign this user from related issues and merge requests** checkbox.
1. Select **Remove member**.
@@ -128,7 +128,7 @@ You can filter and sort members in a project.
### Display inherited members
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. In the **Filter members** box, select `Membership` `=` `Inherited`.
1. Press Enter.
@@ -136,7 +136,7 @@ You can filter and sort members in a project.
### Display direct members
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. In the **Filter members** box, select `Membership` `=` `Direct`.
1. Press Enter.
@@ -205,7 +205,7 @@ This feature might not be available to you. Check the **version history** note a
In GitLab 13.11, you can optionally replace the form to add a member with a modal window.
To add a member after enabling this feature:
-1. Go to your project and select **Members**.
+1. Go to your project and select **Project information > Members**.
1. Select **Invite members**.
1. Enter an email address and select a role.
1. Optional. Select an **Access expiration date**.
diff --git a/doc/user/project/members/share_project_with_groups.md b/doc/user/project/members/share_project_with_groups.md
index caef5ef60b7..353ce73329e 100644
--- a/doc/user/project/members/share_project_with_groups.md
+++ b/doc/user/project/members/share_project_with_groups.md
@@ -27,7 +27,7 @@ This is where the group sharing feature can be of use.
To share 'Project Acme' with the 'Engineering' group:
-1. For 'Project Acme' use the left navigation menu to go to **Members**.
+1. For 'Project Acme', use the left navigation menu to go to **Project information > Members**.
1. Select the **Invite group** tab.
1. Add the 'Engineering' group with the maximum access level of your choice.
1. Optionally, select an expiration date.
@@ -59,7 +59,7 @@ In GitLab 13.11, you can optionally replace the sharing form with a modal window
To share a project after enabling this feature:
1. Go to your project's page.
-1. In the left sidebar, go to **Members**, and then select **Invite a group**.
+1. In the left sidebar, go to **Project information > Members**, and then select **Invite a group**.
1. Select a group, and select a **Max role**.
1. (Optional) Select an **Access expiration date**.
1. Select **Invite**.
diff --git a/doc/user/project/merge_requests/allow_collaboration.md b/doc/user/project/merge_requests/allow_collaboration.md
index 63d5119c1b4..5d1a04e1fe0 100644
--- a/doc/user/project/merge_requests/allow_collaboration.md
+++ b/doc/user/project/merge_requests/allow_collaboration.md
@@ -87,7 +87,7 @@ To see the pipeline status from the merge request page of a forked project
going back to the original project:
1. Create a group containing all the upstream members.
-1. Go to the **Members** tab in the forked project and invite the newly-created
+1. Go to the **Project information > Members** page in the forked project and invite the newly-created
group to the forked project.
<!-- ## Troubleshooting
diff --git a/doc/user/project/settings/project_access_tokens.md b/doc/user/project/settings/project_access_tokens.md
index be8a961d6c0..d7121239610 100644
--- a/doc/user/project/settings/project_access_tokens.md
+++ b/doc/user/project/settings/project_access_tokens.md
@@ -49,7 +49,7 @@ For the bot:
API calls made with a project access token are associated with the corresponding bot user.
-These bot users are included in a project's **Members** list but cannot be modified. Also, a bot
+These bot users are included in a project's **Project information > Members** list but cannot be modified. Also, a bot
user cannot be added to any other project.
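+
+As an illustration, an API call authenticated with a project access token
+(host, project ID, and token value are placeholders):
+
+```shell
+# The request below is attributed to the project's bot user
+curl --header "PRIVATE-TOKEN: <project_access_token>" \
+     "https://gitlab.example.com/api/v4/projects/<project_id>"
+```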
- The username is set to `project_{project_id}_bot` for the first access token, such as `project_123_bot`.