author    GitLab Bot <gitlab-bot@gitlab.com> 2020-08-20 21:42:06 +0300
committer GitLab Bot <gitlab-bot@gitlab.com> 2020-08-20 21:42:06 +0300
commit    6e4e1050d9dba2b7b2523fdd1768823ab85feef4 (patch)
tree      78be5963ec075d80116a932011d695dd33910b4e /doc/administration
parent    1ce776de4ae122aba3f349c02c17cebeaa8ecf07 (diff)
Add latest changes from gitlab-org/gitlab@13-3-stable-ee
Diffstat (limited to 'doc/administration')
-rw-r--r-- doc/administration/audit_events.md | 37
-rw-r--r-- doc/administration/audit_reports.md | 31
-rw-r--r-- doc/administration/auth/ldap/index.md | 4
-rw-r--r-- doc/administration/auth/okta.md | 2
-rw-r--r-- doc/administration/auth/smartcard.md | 4
-rw-r--r-- doc/administration/consul.md | 239
-rw-r--r-- doc/administration/environment_variables.md | 1
-rw-r--r-- doc/administration/feature_flags.md | 4
-rw-r--r-- doc/administration/file_hooks.md | 7
-rw-r--r-- doc/administration/geo/disaster_recovery/background_verification.md | 8
-rw-r--r-- doc/administration/geo/disaster_recovery/planned_failover.md | 14
-rw-r--r-- doc/administration/geo/replication/configuration.md | 11
-rw-r--r-- doc/administration/geo/replication/database.md | 64
-rw-r--r-- doc/administration/geo/replication/datatypes.md | 41
-rw-r--r-- doc/administration/geo/replication/disable_geo.md | 2
-rw-r--r-- doc/administration/geo/replication/docker_registry.md | 2
-rw-r--r-- doc/administration/geo/replication/external_database.md | 49
-rw-r--r-- doc/administration/geo/replication/geo_validation_tests.md | 22
-rw-r--r-- doc/administration/geo/replication/img/adding_a_secondary_node.png | bin 23555 -> 0 bytes
-rw-r--r-- doc/administration/geo/replication/img/adding_a_secondary_node_v13_3.png | bin 0 -> 20195 bytes
-rw-r--r-- doc/administration/geo/replication/index.md | 3
-rw-r--r-- doc/administration/geo/replication/multiple_servers.md | 41
-rw-r--r-- doc/administration/geo/replication/object_storage.md | 2
-rw-r--r-- doc/administration/geo/replication/remove_geo_node.md | 2
-rw-r--r-- doc/administration/geo/replication/troubleshooting.md | 347
-rw-r--r-- doc/administration/geo/replication/tuning.md | 17
-rw-r--r-- doc/administration/geo/replication/version_specific_updates.md | 162
-rw-r--r-- doc/administration/git_annex.md | 8
-rw-r--r-- doc/administration/git_protocol.md | 4
-rw-r--r-- doc/administration/gitaly/img/cluster_example_v13_3.png | bin 0 -> 27703 bytes
-rw-r--r-- doc/administration/gitaly/img/shard_example_v13_3.png | bin 0 -> 14869 bytes
-rw-r--r-- doc/administration/gitaly/index.md | 2
-rw-r--r-- doc/administration/gitaly/praefect.md | 308
-rw-r--r-- doc/administration/gitaly/reference.md | 2
-rw-r--r-- doc/administration/high_availability/alpha_database.md | 2
-rw-r--r-- doc/administration/high_availability/consul.md | 197
-rw-r--r-- doc/administration/high_availability/gitaly.md | 59
-rw-r--r-- doc/administration/high_availability/gitlab.md | 214
-rw-r--r-- doc/administration/high_availability/load_balancer.md | 134
-rw-r--r-- doc/administration/high_availability/monitoring_node.md | 103
-rw-r--r-- doc/administration/high_availability/nfs.md | 321
-rw-r--r-- doc/administration/high_availability/nfs_host_client_setup.md | 156
-rw-r--r-- doc/administration/high_availability/pgbouncer.md | 165
-rw-r--r-- doc/administration/high_availability/sidekiq.md | 189
-rw-r--r-- doc/administration/img/repository_storages_admin_ui_v13_1.png | bin 85160 -> 26234 bytes
-rw-r--r-- doc/administration/index.md | 8
-rw-r--r-- doc/administration/instance_limits.md | 76
-rw-r--r-- doc/administration/integration/plantuml.md | 7
-rw-r--r-- doc/administration/integration/terminal.md | 4
-rw-r--r-- doc/administration/invalidate_markdown_cache.md | 7
-rw-r--r-- doc/administration/issue_closing_pattern.md | 11
-rw-r--r-- doc/administration/job_artifacts.md | 45
-rw-r--r-- doc/administration/lfs/index.md | 45
-rw-r--r-- doc/administration/load_balancer.md | 126
-rw-r--r-- doc/administration/logs.md | 28
-rw-r--r-- doc/administration/merge_request_diffs.md | 7
-rw-r--r-- doc/administration/monitoring/github_imports.md | 3
-rw-r--r-- doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_overview_dashboard.png (renamed from doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png) | bin 51508 -> 51508 bytes
-rw-r--r-- doc/administration/monitoring/gitlab_self_monitoring_project/index.md | 8
-rw-r--r-- doc/administration/monitoring/performance/grafana_configuration.md | 4
-rw-r--r-- doc/administration/monitoring/performance/performance_bar.md | 6
-rw-r--r-- doc/administration/monitoring/performance/request_profiling.md | 6
-rw-r--r-- doc/administration/monitoring/prometheus/gitlab_metrics.md | 7
-rw-r--r-- doc/administration/monitoring/prometheus/index.md | 93
-rw-r--r-- doc/administration/nfs.md | 368
-rw-r--r-- doc/administration/object_storage.md | 84
-rw-r--r-- doc/administration/operations/cleaning_up_redis_sessions.md | 3
-rw-r--r-- doc/administration/operations/extra_sidekiq_processes.md | 2
-rw-r--r-- doc/administration/operations/filesystem_benchmarking.md | 2
-rw-r--r-- doc/administration/operations/moving_repositories.md | 47
-rw-r--r-- doc/administration/operations/puma.md | 2
-rw-r--r-- doc/administration/operations/sidekiq_memory_killer.md | 2
-rw-r--r-- doc/administration/packages/container_registry.md | 75
-rw-r--r-- doc/administration/packages/index.md | 2
-rw-r--r-- doc/administration/pages/index.md | 77
-rw-r--r-- doc/administration/pages/source.md | 14
-rw-r--r-- doc/administration/postgresql/external.md | 2
-rw-r--r-- doc/administration/postgresql/index.md | 2
-rw-r--r-- doc/administration/postgresql/pgbouncer.md | 168
-rw-r--r-- doc/administration/postgresql/replication_and_failover.md | 51
-rw-r--r-- doc/administration/raketasks/check.md | 19
-rw-r--r-- doc/administration/raketasks/maintenance.md | 51
-rw-r--r-- doc/administration/raketasks/project_import_export.md | 2
-rw-r--r-- doc/administration/raketasks/storage.md | 4
-rw-r--r-- doc/administration/raketasks/uploads/migrate.md | 10
-rw-r--r-- doc/administration/redis/replication_and_failover.md | 75
-rw-r--r-- doc/administration/redis/replication_and_failover_external.md | 10
-rw-r--r-- doc/administration/redis/standalone.md | 51
-rw-r--r-- doc/administration/reference_architectures/10k_users.md | 2115
-rw-r--r-- doc/administration/reference_architectures/1k_users.md | 109
-rw-r--r-- doc/administration/reference_architectures/25k_users.md | 2115
-rw-r--r-- doc/administration/reference_architectures/2k_users.md | 31
-rw-r--r-- doc/administration/reference_architectures/3k_users.md | 84
-rw-r--r-- doc/administration/reference_architectures/50k_users.md | 2115
-rw-r--r-- doc/administration/reference_architectures/5k_users.md | 81
-rw-r--r-- doc/administration/reference_architectures/img/reference-architectures.png | bin 47459 -> 12585 bytes
-rw-r--r-- doc/administration/reference_architectures/index.md | 151
-rw-r--r-- doc/administration/reference_architectures/troubleshooting.md | 10
-rw-r--r-- doc/administration/reply_by_email.md | 8
-rw-r--r-- doc/administration/reply_by_email_postfix_setup.md | 2
-rw-r--r-- doc/administration/repository_checks.md | 7
-rw-r--r-- doc/administration/repository_storage_paths.md | 4
-rw-r--r-- doc/administration/server_hooks.md | 10
-rw-r--r-- doc/administration/sidekiq.md | 188
-rw-r--r-- doc/administration/smime_signing_email.md | 2
-rw-r--r-- doc/administration/snippets/index.md | 17
-rw-r--r-- doc/administration/static_objects_external_storage.md | 11
-rw-r--r-- doc/administration/terraform_state.md | 2
-rw-r--r-- doc/administration/troubleshooting/debug.md | 8
-rw-r--r-- doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md | 23
-rw-r--r-- doc/administration/troubleshooting/group_saml_scim.md | 2
-rw-r--r-- doc/administration/troubleshooting/img/network_monitor_xid.png | bin 0 -> 131339 bytes
-rw-r--r-- doc/administration/troubleshooting/postgresql.md | 4
-rw-r--r-- doc/administration/troubleshooting/sidekiq.md | 22
-rw-r--r-- doc/administration/troubleshooting/tracing_correlation_id.md | 126
-rw-r--r-- doc/administration/wikis/index.md | 75
116 files changed, 9163 insertions, 2753 deletions
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index c85fb2d2e47..e7eab5a291e 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -1,6 +1,6 @@
---
stage: Manage
-group: Analytics
+group: Compliance
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
@@ -48,24 +48,25 @@ You need Owner [permissions](../user/permissions.md) to view the group Audit Eve
To view a group's audit events, navigate to **Group > Settings > Audit Events**.
From there, you can see the following actions:
-- Group name or path changed
-- Group repository size limit changed
-- Group created or deleted
-- Group changed visibility
-- User was added to group and with which [permissions](../user/permissions.md)
-- User sign-in via [Group SAML](../user/group/saml_sso/index.md)
-- Permissions changes of a user assigned to a group
-- Removed user from group
-- Project repository imported into group
+- Group name or path changed.
+- Group repository size limit changed.
+- Group created or deleted.
+- Group changed visibility.
+- User was added to group and with which [permissions](../user/permissions.md).
+- User sign-in via [Group SAML](../user/group/saml_sso/index.md).
+- Permissions changes of a user assigned to a group.
+- Removed user from group.
+- Project repository imported into group.
- [Project shared with group](../user/project/members/share_project_with_groups.md)
- and with which [permissions](../user/permissions.md)
-- Removal of a previously shared group with a project
-- LFS enabled or disabled
-- Shared runners minutes limit changed
-- Membership lock enabled or disabled
-- Request access enabled or disabled
-- 2FA enforcement or grace period changed
-- Roles allowed to create project changed
+ and with which [permissions](../user/permissions.md).
+- Removal of a previously shared group with a project.
+- LFS enabled or disabled.
+- Shared runners minutes limit changed.
+- Membership lock enabled or disabled.
+- Request access enabled or disabled.
+- 2FA enforcement or grace period changed.
+- Roles allowed to create project changed.
+- Group CI/CD variable added, removed, or protected status changed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30857) in GitLab 13.3.
Group events can also be accessed via the [Group Audit Events API](../api/audit_events.md#group-audit-events-starter)
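
As a sketch, group audit events can be retrieved with `curl` (assumptions: a
hypothetical group ID of `42` and a token with sufficient permissions):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/groups/42/audit_events"
```
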
diff --git a/doc/administration/audit_reports.md b/doc/administration/audit_reports.md
new file mode 100644
index 00000000000..d5a08b711be
--- /dev/null
+++ b/doc/administration/audit_reports.md
@@ -0,0 +1,31 @@
+---
+stage: Manage
+group: Compliance
+description: 'Learn how to create evidence artifacts typically requested by a 3rd party auditor.'
+---
+
+# Audit Reports
+
+GitLab can help owners and administrators respond to auditors by generating
+comprehensive reports. These **Audit Reports** vary in scope, depending on the
+need.
+
+## Use cases
+
+- Generate a report of audit events to provide to an external auditor requesting proof of certain logging capabilities.
+- Provide a report of all users showing their group and project memberships for a quarterly access review so the auditor can verify compliance with an organization's access management policy.
+
+## APIs
+
+- `https://docs.gitlab.com/ee/api/audit_events.html`
+- `https://docs.gitlab.com/ee/api/graphql/reference/#user`
+- `https://docs.gitlab.com/ee/api/graphql/reference/#groupmember`
+- `https://docs.gitlab.com/ee/api/graphql/reference/#projectmember`
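+
+For example, a minimal sketch of pulling instance-level audit events with `curl`
+(assumptions: a token with administrator access and a placeholder instance URL;
+the `created_after` filter is optional):
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" \
+  "https://gitlab.example.com/api/v4/audit_events?created_after=2020-07-01T00:00:00Z"
+```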
+
+## Features
+
+- `https://docs.gitlab.com/ee/administration/audit_events.html`
+- `https://docs.gitlab.com/ee/administration/logs.html`
+
+We plan on making Audit Events [downloadable as a CSV](https://gitlab.com/gitlab-org/gitlab/-/issues/1449)
+in the near future.
diff --git a/doc/administration/auth/ldap/index.md b/doc/administration/auth/ldap/index.md
index aef6c70ff92..548e734c931 100644
--- a/doc/administration/auth/ldap/index.md
+++ b/doc/administration/auth/ldap/index.md
@@ -548,7 +548,7 @@ or more LDAP group links](#adding-group-links-starter-only).
### Adding group links **(STARTER ONLY)**
-For information on adding group links via CNs and filters, refer to [the GitLab groups documentation](../../../user/group/index.md#manage-group-memberships-via-ldap).
+For information on adding group links via CNs and filters, refer to [the GitLab groups documentation](../../../user/group/index.md#manage-group-memberships-via-ldap-starter-only).
### Administrator sync **(STARTER ONLY)**
@@ -596,6 +596,8 @@ group, as opposed to the full DN.
### Global group memberships lock **(STARTER ONLY)**
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/1793) in GitLab 12.0.
+
"Lock memberships to LDAP synchronization" setting allows instance administrators
to lock down user abilities to invite new members to a group.
diff --git a/doc/administration/auth/okta.md b/doc/administration/auth/okta.md
index 78b10aedb77..7a55e127733 100644
--- a/doc/administration/auth/okta.md
+++ b/doc/administration/auth/okta.md
@@ -156,7 +156,7 @@ You might want to try this out on an incognito browser window.
## Configuring groups
->**Note:**
+NOTE: **Note:**
Make sure the groups exist and are assigned to the Okta app.
You can refer to the [SAML documentation](../../integration/saml.md#saml-groups) for guidance on configuring groups.
diff --git a/doc/administration/auth/smartcard.md b/doc/administration/auth/smartcard.md
index 80d2efbad84..0ecf3ca090d 100644
--- a/doc/administration/auth/smartcard.md
+++ b/doc/administration/auth/smartcard.md
@@ -310,6 +310,10 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Save the file and [restart](../restart_gitlab.md#installations-from-source)
GitLab for the changes to take effect.
+## Passwords for users created via smartcard authentication
+
+The [Generated passwords for users created through integrated authentication](../../security/passwords_for_integrated_authentication_methods.md) guide provides an overview of how GitLab generates and sets passwords for users created via smartcard authentication.
+
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
diff --git a/doc/administration/consul.md b/doc/administration/consul.md
new file mode 100644
index 00000000000..ae22d8bd4d0
--- /dev/null
+++ b/doc/administration/consul.md
@@ -0,0 +1,239 @@
+---
+type: reference
+---
+
+# How to set up Consul **(PREMIUM ONLY)**
+
+A Consul cluster consists of both
+[server and client agents](https://www.consul.io/docs/agent).
+The servers run on their own nodes and the clients run on other nodes that in
+turn communicate with the servers.
+
+GitLab Premium includes a bundled version of [Consul](https://www.consul.io/),
+a service networking solution that you can manage by using `/etc/gitlab/gitlab.rb`.
+
+## Configure the Consul nodes
+
+NOTE: **Important:**
+Before proceeding, refer to the
+[available reference architectures](reference_architectures/index.md#available-reference-architectures)
+to find out how many Consul server nodes you should have.
+
+On **each** Consul server node, perform the following:
+
+1. Follow the instructions to [install](https://about.gitlab.com/install/)
+ GitLab by choosing your preferred platform, but do not supply the
+ `EXTERNAL_URL` value when asked.
+1. Edit `/etc/gitlab/gitlab.rb`, and add the following, replacing the values
+   noted in the `retry_join` section. In the example below, there are three
+   nodes: two denoted by their IP addresses and one by its FQDN. You can use
+   either notation:
+
+ ```ruby
+ # Disable all components except Consul
+ roles ['consul_role']
+
+ # Consul nodes: can be FQDN or IP, separated by a whitespace
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.10.10.1 consul1.gitlab.example.com 10.10.10.2)
+ }
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure GitLab](restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes
+ to take effect.
+1. Run the following command to verify that Consul is configured correctly and
+   that all server nodes are communicating:
+
+ ```shell
+ sudo /opt/gitlab/embedded/bin/consul members
+ ```
+
+ The output should be similar to:
+
+ ```plaintext
+ Node Address Status Type Build Protocol DC
+ CONSUL_NODE_ONE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
+ CONSUL_NODE_TWO XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
+ CONSUL_NODE_THREE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
+ ```
+
+ If the results display any nodes with a status that isn't `alive`, or if any
+ of the three nodes are missing, see the [Troubleshooting section](#troubleshooting-consul).
+
+## Upgrade the Consul nodes
+
+To upgrade your Consul nodes, upgrade the GitLab package.
+
+Nodes should be:
+
+- Members of a healthy cluster prior to upgrading the Omnibus GitLab package.
+- Upgraded one node at a time.
+
+Identify any existing health issues in the cluster by running the following command
+on each node. The command returns an empty array if the cluster is healthy:
+
+```shell
+curl http://127.0.0.1:8500/v1/health/state/critical
+```
+
+Consul nodes communicate using the Raft protocol. If the current leader goes
+offline, a new leader must be elected. A leader node must exist to facilitate
+synchronization across the cluster. If too many nodes go offline at the same time,
+the cluster will lose quorum and not elect a leader due to
+[broken consensus](https://www.consul.io/docs/internals/consensus.html).
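+
+You can ask any node which server is the current leader through Consul's HTTP
+API (a quick check, assuming the API listens on the default port 8500):
+
+```shell
+# Prints the address of the current cluster leader, or an empty
+# response if no leader has been elected.
+curl http://127.0.0.1:8500/v1/status/leader
+```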
+
+Consult the [troubleshooting section](#troubleshooting-consul) if the cluster is not
+able to recover after the upgrade. The [outage recovery](#outage-recovery) may
+be of particular interest.
+
+NOTE: **Note:**
+GitLab uses Consul to store only transient data that is easily regenerated. If
+the bundled Consul was not used by any process other than GitLab itself, then
+[rebuilding the cluster from scratch](#recreate-from-scratch) is fine.
+
+## Troubleshooting Consul
+
+Below are some useful operations should you need to debug any issues.
+You can see any error logs by running:
+
+```shell
+sudo gitlab-ctl tail consul
+```
+
+### Check the cluster membership
+
+To determine which nodes are part of the cluster, run the following on any member in the cluster:
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
+
+The output should be similar to:
+
+```plaintext
+Node Address Status Type Build Protocol DC
+consul-a XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
+consul-b XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
+consul-c XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
+db-a XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
+db-b XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
+```
+
+Ideally all nodes will have a `Status` of `alive`.
+
+### Restart Consul
+
+If it is necessary to restart Consul, it is important to do this in
+a controlled manner to maintain quorum. If quorum is lost, you will need to
+follow the Consul [outage recovery](#outage-recovery) process to recover the cluster.
+
+To be safe, it's recommended that you only restart Consul on one node at a time to
+ensure the cluster remains intact. For larger clusters, it is possible to restart
+multiple nodes at a time. See the
+[Consul consensus document](https://www.consul.io/docs/internals/consensus.html#deployment-table)
+for the number of failures it can tolerate; this is the number of simultaneous
+restarts it can sustain.
+
+To restart Consul:
+
+```shell
+sudo gitlab-ctl restart consul
+```
+
+### Consul nodes unable to communicate
+
+By default, Consul will attempt to
+[bind](https://www.consul.io/docs/agent/options.html#_bind) to `0.0.0.0`, but
+it will advertise the first private IP address on the node for other Consul nodes
+to communicate with it. If the other nodes cannot communicate with a node on
+this address, then the cluster will have a failed status.
+
+If you are running into this issue, you will see messages like the following in `gitlab-ctl tail consul` output:
+
+```plaintext
+2017-09-25_19:53:39.90821 2017/09/25 19:53:39 [WARN] raft: no known peers, aborting election
+2017-09-25_19:53:41.74356 2017/09/25 19:53:41 [ERR] agent: failed to sync remote state: No cluster leader
+```
+
+To fix this:
+
+1. On each node, pick an address that all of the other nodes can use to reach it.
+1. Update your `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ consul['configuration'] = {
+ ...
+ bind_addr: 'IP ADDRESS'
+ }
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ gitlab-ctl reconfigure
+ ```
+
+If you still see the errors, you may have to
+[erase the Consul database and reinitialize](#recreate-from-scratch) on the affected node.
+
+### Consul does not start - multiple private IPs
+
+If a node has multiple private IP addresses, Consul will not know which of them
+to advertise, and will immediately exit on start.
+
+You will see messages like the following in `gitlab-ctl tail consul` output:
+
+```plaintext
+2017-11-09_17:41:45.52876 ==> Starting Consul agent...
+2017-11-09_17:41:45.53057 ==> Error creating agent: Failed to get advertise address: Multiple private IPs found. Please configure one.
+```
+
+To fix this:
+
+1. Pick an address on the node that all of the other nodes can use to reach it.
+1. Update your `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ consul['configuration'] = {
+ ...
+ bind_addr: 'IP ADDRESS'
+ }
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ gitlab-ctl reconfigure
+ ```
+
+### Outage recovery
+
+If you lost enough Consul nodes in the cluster to break quorum, then the cluster
+is considered failed, and it will not function without manual intervention.
+In that case, you can either recreate the nodes from scratch or attempt a
+recovery.
+
+#### Recreate from scratch
+
+By default, GitLab does not store anything in the Consul node that cannot be
+recreated. To erase the Consul database and reinitialize:
+
+```shell
+sudo gitlab-ctl stop consul
+sudo rm -rf /var/opt/gitlab/consul/data
+sudo gitlab-ctl start consul
+```
+
+After this, the node should start back up, and the rest of the server agents rejoin.
+Shortly after that, the client agents should rejoin as well.
+
+#### Recover a failed node
+
+If you have taken advantage of Consul to store other data and want to restore
+the failed node, follow the
+[Consul guide](https://learn.hashicorp.com/tutorials/consul/recovery-outage)
+to recover a failed cluster.
diff --git a/doc/administration/environment_variables.md b/doc/administration/environment_variables.md
index ecf9464eb91..ebc3848017f 100644
--- a/doc/administration/environment_variables.md
+++ b/doc/administration/environment_variables.md
@@ -33,6 +33,7 @@ Variable | Type | Description
`GITLAB_UNICORN_MEMORY_MIN` | integer | The minimum memory threshold (in bytes) for the Unicorn worker killer
`GITLAB_UNICORN_MEMORY_MAX` | integer | The maximum memory threshold (in bytes) for the Unicorn worker killer
`GITLAB_SHARED_RUNNERS_REGISTRATION_TOKEN` | string | Sets the initial registration token used for GitLab Runners
+`UNSTRUCTURED_RAILS_LOG` | string | Enables the unstructured log in addition to JSON logs (defaults to `true`)
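For example, a hypothetical sketch for an installation from source, where the
variable can be exported before starting GitLab (on Omnibus installations,
environment variables are typically set in `/etc/gitlab/gitlab.rb` instead):

```shell
# Disable the unstructured production log so only JSON logs are written.
export UNSTRUCTURED_RAILS_LOG=false
```
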
## Complete database variables
diff --git a/doc/administration/feature_flags.md b/doc/administration/feature_flags.md
index 678ab6c5d7b..7cc7692975a 100644
--- a/doc/administration/feature_flags.md
+++ b/doc/administration/feature_flags.md
@@ -103,10 +103,10 @@ Some feature flags can be enabled or disabled on a per project basis:
Feature.enable(:<feature flag>, Project.find(<project id>))
```
-For example, to enable the [`:junit_pipeline_view`](../ci/junit_test_reports.md#enabling-the-junit-test-reports-feature-core-only) feature flag for project `1234`:
+For example, to enable the [`:product_analytics`](../operations/product_analytics.md#enable-or-disable-product-analytics) feature flag for project `1234`:
```ruby
-Feature.enable(:junit_pipeline_view, Project.find(1234))
+Feature.enable(:product_analytics, Project.find(1234))
```
`Feature.enable` and `Feature.disable` always return `nil`; this is not an indication that the command failed:
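One way to confirm that a flag took effect is to read it back (a sketch, assuming
an Omnibus installation and the hypothetical project ID `1234` from above):

```shell
# Prints "true" once the flag has been enabled for the given project.
sudo gitlab-rails runner "puts Feature.enabled?(:product_analytics, Project.find(1234))"
```
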
diff --git a/doc/administration/file_hooks.md b/doc/administration/file_hooks.md
index 7903da675fd..c0b31769e7f 100644
--- a/doc/administration/file_hooks.md
+++ b/doc/administration/file_hooks.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# File hooks
> - Introduced in GitLab 10.6.
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index 88218d24b2f..89a93e45b1d 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -58,14 +58,14 @@ Feature.enable('geo_repository_verification')
## Repository verification
-Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node and expand
+Navigate to the **Admin Area > Geo** dashboard on the **primary** node and expand
the **Verification information** tab for that node to view automatic checksumming
status for repositories and wikis. Successes are shown in green, pending work
in gray, and failures in red.
![Verification status](img/verification-status-primary.png)
-Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node and expand
+Navigate to the **Admin Area > Geo** dashboard on the **secondary** node and expand
the **Verification information** tab for that node to view automatic verification
status for repositories and wikis. As with checksumming, successes are shown in
green, pending work in gray, and failures in red.
@@ -92,7 +92,7 @@ data. The default and recommended re-verification interval is 7 days, though
an interval as short as 1 day can be set. Shorter intervals reduce risk but
increase load and vice versa.
-Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node, and
+Navigate to the **Admin Area > Geo** dashboard on the **primary** node, and
click the **Edit** button for the **primary** node to customize the minimum
re-verification interval:
@@ -141,7 +141,7 @@ sudo gitlab-rake geo:verification:wiki:reset
If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:
-1. Navigate to the **{admin}** **Admin Area >** **{overview}** **Overview > Projects** dashboard on the **primary** node, find the
+1. Navigate to the **Admin Area > Overview > Projects** dashboard on the **primary** node, find the
project that you want to check the checksum differences and click on the
**Edit** button:
![Projects dashboard](img/checksum-differences-admin-projects.png)
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index ef0e67434d0..a0cf263a762 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -110,7 +110,7 @@ The maintenance window won't end until Geo replication and verification is
completely finished. To keep the window as short as possible, you should
ensure these processes are as close to 100% as possible during active use.
-Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node to
+Navigate to the **Admin Area > Geo** dashboard on the **secondary** node to
review status. Replicated objects (shown in green) should be close to 100%,
and there should be no failures (shown in red). If a large proportion of
objects aren't yet replicated (shown in gray), consider giving the node more
@@ -135,8 +135,8 @@ This [content was moved to another location](background_verification.md).
### Notify users of scheduled maintenance
-On the **primary** node, navigate to **{admin}** **Admin Area >** **{bullhorn}** **Messages**, add a broadcast
-message. You can check under **{admin}** **Admin Area >** **{location-dot}** **Geo** to estimate how long it
+On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
+message. You can check under **Admin Area > Geo** to estimate how long it
will take to finish syncing. An example message would be:
> A scheduled maintenance will take place at XX:XX UTC. We expect it to take
@@ -181,7 +181,7 @@ access to the **primary** node during the maintenance window.
connection.
1. Disable non-Geo periodic background jobs on the **primary** node by navigating
- to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Cron**, pressing `Disable All`,
+ to **Admin Area > Monitoring > Background Jobs > Cron**, pressing `Disable All`,
and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
This job will re-enable several other cron jobs that are essential for planned
failover to complete successfully.
@@ -190,11 +190,11 @@ access to the **primary** node during the maintenance window.
1. If you are manually replicating any data not managed by Geo, trigger the
final replication process now.
-1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
+1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
and wait for all queues except those with `geo` in the name to drop to 0.
These queues contain work that has been submitted by your users; failing over
before it is completed will cause the work to be lost.
-1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** and wait for the
+1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
- All replication meters reach 100% replicated, 0% failures.
@@ -202,7 +202,7 @@ access to the **primary** node during the maintenance window.
- Database replication lag is 0ms.
- The Geo log cursor is up to date (0 events behind).
-1. On the **secondary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
+1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
to verify the integrity of CI artifacts, LFS objects, and uploads in file
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 3d08ed81700..74fa8e3b8f2 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -191,17 +191,16 @@ keys must be manually replicated to the **secondary** node.
gitlab-ctl reconfigure
```
-1. Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
+1. Visit the **primary** node's **Admin Area > Geo**
(`/admin/geo/nodes`) in your browser.
1. Click the **New node** button.
- ![Add secondary node](img/adding_a_secondary_node.png)
+ ![Add secondary node](img/adding_a_secondary_node_v13_3.png)
1. Fill in **Name** with the `gitlab_rails['geo_node_name']` in
`/etc/gitlab/gitlab.rb`. These values must always match *exactly*, character
for character.
1. Fill in **URL** with the `external_url` in `/etc/gitlab/gitlab.rb`. These
values must always match, but it doesn't matter if one ends with a `/` and
the other doesn't.
-1. **Do NOT** check the **This is a primary node** checkbox.
1. Optionally, choose which groups or storage shards should be replicated by the
**secondary** node. Leave blank to replicate all. Read more in
[selective synchronization](#selective-synchronization).
@@ -238,7 +237,7 @@ You can login to the **secondary** node with the same credentials as used for th
Using Hashed Storage significantly improves Geo replication. Project and group
renames no longer require synchronization between nodes.
-1. Visit the **primary** node's **{admin}** **Admin Area >** **{settings}** **Settings > Repository**
+1. Visit the **primary** node's **Admin Area > Settings > Repository**
(`/admin/application_settings/repository`) in your browser.
1. In the **Repository storage** section, check **Use hashed storage paths for newly created and renamed projects**.
@@ -255,7 +254,7 @@ on the **secondary** node.
### Step 6. Enable Git access over HTTP/HTTPS
Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
-method to be enabled. Navigate to **{admin}** **Admin Area >** **{settings}** **Settings**
+method to be enabled. Navigate to **Admin Area > Settings**
(`/admin/application_settings/general`) on the **primary** node, and set
`Enabled Git access protocols` to `Both SSH and HTTP(S)` or `Only HTTP(S)`.
@@ -264,7 +263,7 @@ method to be enabled. Navigate to **{admin}** **Admin Area >** **{settings}** **
Your **secondary** node is now configured!
You can log in to the **secondary** node with the same credentials you used for the
-**primary** node. Visit the **secondary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
+**primary** node. Visit the **secondary** node's **Admin Area > Geo**
(`/admin/geo/nodes`) in your browser to check if it's correctly identified as a
**secondary** Geo node and if Geo is enabled.
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 02f51e79907..0bc37ce6438 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -37,8 +37,7 @@ recover. See below for more details.
The following guide assumes that:
- You are using Omnibus and therefore you are using PostgreSQL 11 or later
- which includes the [`pg_basebackup` tool](https://www.postgresql.org/docs/11/app-pgbasebackup.html) and improved
- [Foreign Data Wrapper](https://www.postgresql.org/docs/11/postgres-fdw.html) support.
+ which includes the [`pg_basebackup` tool](https://www.postgresql.org/docs/11/app-pgbasebackup.html).
- You have a **primary** node already set up (the GitLab server you are
replicating from), running Omnibus' PostgreSQL (or equivalent version), and
you have a new **secondary** server set up with the same versions of the OS,
@@ -167,6 +166,11 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
corresponding to the given address. See [the PostgreSQL documentation](https://www.postgresql.org/docs/11/runtime-config-connection.html)
for more details.
+ NOTE: **Note:**
+ If you need to use `0.0.0.0` or `*` as the `listen_address`, you will also need to add
+ `127.0.0.1/32` to the `postgresql['md5_auth_cidr_addresses']` setting, to allow Rails to connect through
+ `127.0.0.1`. For more information, see [omnibus-5258](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5258).
+
Depending on your network configuration, the suggested addresses may not
be correct. If your **primary** node and **secondary** nodes connect over a local
area network, or a virtual network connecting availability zones like
@@ -341,10 +345,10 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
Ensure that the contents of `~gitlab-psql/data/server.crt` on the **primary** node
match the contents of `~gitlab-psql/.postgresql/root.crt` on the **secondary** node.
-1. Configure PostgreSQL to enable FDW support:
+1. Configure PostgreSQL:
This step is similar to how we configured the **primary** instance.
- We need to enable this, to enable FDW support, even if using a single node.
+ We need to enable this, even if using a single node.
Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
addresses with addresses appropriate to your network configuration:
@@ -369,11 +373,6 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
##
postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
gitlab_rails['db_password'] = '<your_password_here>'
-
- ##
- ## Enable FDW support for the Geo Tracking Database (improves performance)
- ##
- geo_secondary['db_fdw'] = true
```
For external PostgreSQL instances, see [additional instructions](external_database.md).
@@ -385,15 +384,12 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
gitlab-ctl reconfigure
```
-1. Restart PostgreSQL for the IP change to take effect and reconfigure again:
+1. Restart PostgreSQL for the IP change to take effect:
```shell
gitlab-ctl restart postgresql
- gitlab-ctl reconfigure
```
- This last reconfigure will provision the FDW configuration and enable it.
-
### Step 3. Initiate the replication process
Below we provide a script that connects the database on the **secondary** node to
@@ -468,48 +464,6 @@ high-availability configuration with a cluster of nodes supporting a Geo
**primary** node and another cluster of nodes supporting a Geo **secondary** node. For more
information, see [High Availability with Omnibus GitLab](../../postgresql/replication_and_failover.md).
-For a Geo **secondary** node to work properly with PgBouncer in front of the database,
-it will need a separate read-only user to make [PostgreSQL FDW queries](https://www.postgresql.org/docs/11/postgres-fdw.html)
-work:
-
-1. On the **primary** Geo database, enter the PostgreSQL on the console as an
- admin user. If you are using an Omnibus-managed database, log onto the **primary**
- node that is running the PostgreSQL database (the default Omnibus database name is `gitlabhq_production`):
-
- ```shell
- sudo \
- -u gitlab-psql /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/postgresql gitlabhq_production
- ```
-
-1. Then create the read-only user:
-
- ```sql
- -- NOTE: Use the password defined earlier
- CREATE USER gitlab_geo_fdw WITH password 'mypassword';
- GRANT CONNECT ON DATABASE gitlabhq_production to gitlab_geo_fdw;
- GRANT USAGE ON SCHEMA public TO gitlab_geo_fdw;
- GRANT SELECT ON ALL TABLES IN SCHEMA public TO gitlab_geo_fdw;
- GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO gitlab_geo_fdw;
-
- -- Tables created by "gitlab" should be made read-only for "gitlab_geo_fdw"
- -- automatically.
- ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON TABLES TO gitlab_geo_fdw;
- ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON SEQUENCES TO gitlab_geo_fdw;
- ```
-
-1. On the **secondary** nodes, change `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- geo_postgresql['fdw_external_user'] = 'gitlab_geo_fdw'
- ```
-
-1. Save the file and reconfigure GitLab for the changes to be applied:
-
- ```shell
- gitlab-ctl reconfigure
- ```
-
## Troubleshooting
Read the [troubleshooting document](troubleshooting.md).
diff --git a/doc/administration/geo/replication/datatypes.md b/doc/administration/geo/replication/datatypes.md
index 5636ff79189..b8d01e80371 100644
--- a/doc/administration/geo/replication/datatypes.md
+++ b/doc/administration/geo/replication/datatypes.md
@@ -5,7 +5,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: howto
---
-# Geo data types support
+# Geo data types support **(PREMIUM ONLY)**
A Geo data type is a specific class of data that is required by one or more GitLab features to
store relevant information.
@@ -126,6 +126,33 @@ these epics/issues:
- [Unreplicated Data Types](https://gitlab.com/groups/gitlab-org/-/epics/893)
- [Verify all replicated data](https://gitlab.com/groups/gitlab-org/-/epics/1430)
+### Replicated data types behind a feature flag
+
+The replication for some data types is behind a corresponding feature flag:
+
+> - They're deployed behind a feature flag, enabled by default.
+> - They're enabled on GitLab.com.
+> - They can't be enabled or disabled per-project.
+> - They are recommended for production use.
+> - For GitLab self-managed instances, GitLab administrators can opt to [disable them](#enable-or-disable-replication-for-some-data-types-core-only). **(CORE ONLY)**
+
+#### Enable or disable replication (for some data types) **(CORE ONLY)**
+
+Replication for some data types is released behind feature flags that are **enabled by default**.
+[GitLab administrators with access to the GitLab Rails console](../../feature_flags.md) can opt to disable it for their instance. You can find the feature flag name for each of these data types in the Notes column of the table below.
+
+For example, to disable package file replication:
+
+```ruby
+Feature.disable(:geo_package_file_replication)
+```
+
+For example, to enable package file replication:
+
+```ruby
+Feature.enable(:geo_package_file_replication)
+```
+
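+The same commands can also be run non-interactively (a sketch, assuming an
+Omnibus installation):
+
+```shell
+sudo gitlab-rails runner 'Feature.disable(:geo_package_file_replication)'
+```
+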
DANGER: **Danger:**
Features not on this list, or with **No** in the **Replicated** column,
are not replicated on the **secondary** node. Failing over without manually
@@ -151,12 +178,12 @@ successfully, you must replicate their data using some other means.
| [Elasticsearch integration](../../../integration/elasticsearch.md) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/1186) | No | |
| [GitLab Pages](../../pages/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/589) | No | |
| [Container Registry](../../packages/container_registry.md) | **Yes** (12.3) | No | |
-| [NPM Registry](../../../user/packages/npm_registry/index.md) | **Yes** (13.2) | No | |
-| [Maven Repository](../../../user/packages/maven_repository/index.md) | **Yes** (13.2) | No | |
-| [Conan Repository](../../../user/packages/conan_repository/index.md) | **Yes** (13.2) | No | |
-| [NuGet Repository](../../../user/packages/nuget_repository/index.md) | **Yes** (13.2) | No | |
-| [PyPi Repository](../../../user/packages/pypi_repository/index.md) | **Yes** (13.2) | No | |
-| [Composer Repository](../../../user/packages/composer_repository/index.md) | **Yes** (13.2) | No | |
+| [NPM Registry](../../../user/packages/npm_registry/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
+| [Maven Repository](../../../user/packages/maven_repository/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
+| [Conan Repository](../../../user/packages/conan_repository/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
+| [NuGet Repository](../../../user/packages/nuget_repository/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
+| [PyPi Repository](../../../user/packages/pypi_repository/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
+| [Composer Repository](../../../user/packages/composer_repository/index.md) | **Yes** (13.2) | No | Behind feature flag `geo_package_file_replication`, enabled by default |
| [External merge request diffs](../../merge_request_diffs.md) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/33817) | No | |
| [Terraform State](../../terraform_state.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3112)(*3*) | No | |
| [Vulnerability Export](../../../user/application_security/security_dashboard/#export-vulnerabilities) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3111)(*3*) | No | |
diff --git a/doc/administration/geo/replication/disable_geo.md b/doc/administration/geo/replication/disable_geo.md
index f53b4c1b796..aed8e5fc3bc 100644
--- a/doc/administration/geo/replication/disable_geo.md
+++ b/doc/administration/geo/replication/disable_geo.md
@@ -33,7 +33,7 @@ in order to do that.
## Remove the primary node from the UI
-1. Go to **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`).
+1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
1. Click the **Remove** button for the **primary** node.
1. Confirm by clicking **Remove** when the prompt appears.
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
index c34732cba67..5403edc69da 100644
--- a/doc/administration/geo/replication/docker_registry.md
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -122,7 +122,7 @@ generate a short-lived JWT that is pull-only-capable to access the
### Verify replication
-To verify Container Registry replication is working, go to **{admin}** **Admin Area >** **{location-dot}** **Geo**
+To verify Container Registry replication is working, go to **Admin Area > Geo**
(`/admin/geo/nodes`) on the **secondary** node.
The initial replication, or "backfill", will probably still be in progress.
You can monitor the synchronization process on each Geo node from the **primary** node's **Geo Nodes** dashboard in your browser.
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
index e85a82fa414..d860a3dd368 100644
--- a/doc/administration/geo/replication/external_database.md
+++ b/doc/administration/geo/replication/external_database.md
@@ -183,9 +183,6 @@ to grant additional roles to your tracking database user (by default, this is
- Amazon RDS requires the [`rds_superuser`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Roles) role.
- Azure Database for PostgreSQL requires the [`azure_pg_admin`](https://docs.microsoft.com/en-us/azure/postgresql/howto-create-users#how-to-create-additional-admin-users-in-azure-database-for-postgresql) role.
-The tracking database requires an [FDW](https://www.postgresql.org/docs/11/postgres-fdw.html)
-connection with the **secondary** replica database for improved performance.
-
If you have an external database ready to be used as the tracking database,
follow the instructions below to use it:
@@ -224,7 +221,6 @@ the tracking database on port 5432.
geo_secondary['db_host'] = '<tracking_database_host>'
geo_secondary['db_port'] = <tracking_database_port> # change to the correct port
- geo_secondary['db_fdw'] = true # enable FDW
geo_postgresql['enable'] = false # don't use internal managed instance
```
@@ -236,48 +232,3 @@ the tracking database on port 5432.
gitlab-rake geo:db:create
gitlab-rake geo:db:migrate
```
-
-1. Configure the [PostgreSQL FDW](https://www.postgresql.org/docs/11/postgres-fdw.html)
- connection and credentials:
-
- Save the script below in a file, ex. `/tmp/geo_fdw.sh` and modify the connection
- parameters to match your environment. Execute it to set up the FDW connection.
-
- ```shell
- #!/bin/bash
-
- # Secondary Database connection params:
- DB_HOST="<public_ip_or_vpc_private_ip>"
- DB_NAME="gitlabhq_production"
- DB_USER="gitlab"
- DB_PASS="<your_password_here>"
- DB_PORT="5432"
-
- # Tracking Database connection params:
- GEO_DB_HOST="<public_ip_or_vpc_private_ip>"
- GEO_DB_NAME="gitlabhq_geo_production"
- GEO_DB_USER="gitlab_geo"
- GEO_DB_PORT="5432"
-
- query_exec () {
- gitlab-psql -h $GEO_DB_HOST -U $GEO_DB_USER -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
- }
-
- query_exec "CREATE EXTENSION postgres_fdw;"
- query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
- query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}', password '${DB_PASS}');"
- query_exec "CREATE SCHEMA gitlab_secondary;"
- query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
- ```
-
- NOTE: **Note:**
- The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
- but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
- `psql` for AWS RDS.
-
-1. Save the file and [restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart)
-1. Populate the FDW tables:
-
- ```shell
- gitlab-rake geo:db:refresh_foreign_tables
- ```
diff --git a/doc/administration/geo/replication/geo_validation_tests.md b/doc/administration/geo/replication/geo_validation_tests.md
index 0255e5c9883..323f6f367b1 100644
--- a/doc/administration/geo/replication/geo_validation_tests.md
+++ b/doc/administration/geo/replication/geo_validation_tests.md
@@ -5,7 +5,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: howto
---
-# Geo validation tests
+# Geo validation tests **(PREMIUM ONLY)**
The Geo team performs manual testing and validation on common deployment configurations to ensure
that Geo works when upgrading between minor GitLab versions and major PostgreSQL database versions.
@@ -27,6 +27,13 @@ The following are GitLab upgrade validation tests we performed.
- [Investigate why `reconfigure` and `hup` cause downtime on multi-node Geo deployments](https://gitlab.com/gitlab-org/gitlab/-/issues/228898)
- [Geo multi-node deployment upgrade: investigate order when upgrading non-deploy nodes](https://gitlab.com/gitlab-org/gitlab/-/issues/228954)
+[Switch from repmgr to Patroni on a Geo primary site](https://gitlab.com/gitlab-org/gitlab/-/issues/224652):
+
+- Description: Tested switching from repmgr to Patroni on a multi-node Geo primary site. Used [the orchestrator tool](https://gitlab.com/gitlab-org/gitlab-orchestrator) to deploy a Geo installation with 3 database nodes managed by repmgr. With this approach, we were also able to address a related issue for [verifying a Geo installation with Patroni and PostgreSQL 11](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5113).
+- Outcome: Partial success. We enabled Patroni on the primary site and set up database replication on the secondary site. However, we found that Patroni would delete the secondary site's replication slot whenever Patroni was restarted. Another issue is that when Patroni elects a new leader in the cluster, the secondary site will fail to automatically follow the new leader. Until these issues are resolved, we cannot officially support and recommend Patroni for Geo installations.
+- Follow up issues/actions:
+ - [Investigate permanent replication slot for Patroni with Geo single node secondary](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5528)
+
### June 2020
[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/223284):
@@ -107,6 +114,19 @@ The following are GitLab upgrade validation tests we performed.
The following are PostgreSQL upgrade validation tests we performed.
+### August 2020
+
+[Verify Geo installation with PostgreSQL 12](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5453):
+
+- Description: Prior to PostgreSQL 12 becoming available as an opt-in version in GitLab 13.3,
+ we tested fresh installations of GitLab 13.3 with PostgreSQL 12 enabled and Geo installed.
+- Outcome: Setting up a Geo secondary required manual intervention because the `recovery.conf` file
+ is no longer supported in PostgreSQL 12. We do not recommend deploying Geo with PostgreSQL 12 until
+ the appropriate changes have been made to Omnibus and verified.
+- Follow up issues:
+ - [Update `replicate-geo-database` to support PostgreSQL 12](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5575)
+ - [Remove PostgreSQL 12 check in `replicate-geo-database` for 14.0](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5576)
+
### April 2020
[PostgreSQL 11 upgrade procedure for Geo installations](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4975):
diff --git a/doc/administration/geo/replication/img/adding_a_secondary_node.png b/doc/administration/geo/replication/img/adding_a_secondary_node.png
deleted file mode 100644
index e33b690da18..00000000000
--- a/doc/administration/geo/replication/img/adding_a_secondary_node.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/geo/replication/img/adding_a_secondary_node_v13_3.png b/doc/administration/geo/replication/img/adding_a_secondary_node_v13_3.png
new file mode 100644
index 00000000000..e43ace99666
--- /dev/null
+++ b/doc/administration/geo/replication/img/adding_a_secondary_node_v13_3.png
Binary files differ
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index 51c831abaf3..f1cc9f0df8b 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -117,7 +117,7 @@ The following are required to run Geo:
The following operating systems are known to ship with a current version of OpenSSH:
- [CentOS](https://www.centos.org) 7.4+
- [Ubuntu](https://ubuntu.com) 16.04+
-- PostgreSQL 11+ with [FDW](https://www.postgresql.org/docs/11/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
+- PostgreSQL 11+ with [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
- Git 2.9+
- All nodes must run the same GitLab version.
@@ -166,7 +166,6 @@ The tracking database instance is used as metadata to control what needs to be u
- Fetch changes from a repository that has recently been updated.
Because the replicated database instance is read-only, we need this additional database instance for each **secondary** node.
-The tracking database requires the `postgres_fdw` extension.
### Geo Log Cursor
diff --git a/doc/administration/geo/replication/multiple_servers.md b/doc/administration/geo/replication/multiple_servers.md
index d8f04e07fb0..31d1ea2cd2b 100644
--- a/doc/administration/geo/replication/multiple_servers.md
+++ b/doc/administration/geo/replication/multiple_servers.md
@@ -137,7 +137,7 @@ documentation:
synchronized from the **primary** node.
NOTE: **Note:**
-[NFS](../../high_availability/nfs.md) can be used in place of Gitaly but is not
+[NFS](../../nfs.md) can be used in place of Gitaly but is not
recommended.
### Step 2: Configure the main read-only replica PostgreSQL database on the **secondary** node
@@ -196,9 +196,27 @@ the **primary** database. Use the following as a guide.
geo_postgresql['enable'] = false
##
- ## Disable `geo_logcursor` service so Rails doesn't get configured here
+ ## Disable all other services that aren't needed. Note that we had to enable
+ ## geo_secondary_role to cause some configuration changes to postgresql, but
+ ## the role enables single-node services by default.
##
+ alertmanager['enable'] = false
+ consul['enable'] = false
geo_logcursor['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = false
+ node_exporter['enable'] = false
+ pgbouncer_exporter['enable'] = false
+ prometheus['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ repmgr['enable'] = false
+ sidekiq['enable'] = false
+ sidekiq_cluster['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
```
After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.
@@ -242,10 +260,8 @@ Configure the tracking database.
geo_postgresql['sql_user_password'] = '<tracking_database_password_md5_hash>'
##
- ## Configure FDW connection to the replica database
+ ## Configure PostgreSQL connection to the replica database
##
- geo_secondary['db_fdw'] = true
- geo_postgresql['fdw_external_password'] = '<replica_database_password_plaintext>'
geo_postgresql['md5_auth_cidr_addresses'] = ['<replica_database_ip>/32']
gitlab_rails['db_host'] = '<replica_database_ip>'
@@ -253,11 +269,11 @@ Configure the tracking database.
gitlab_rails['auto_migrate'] = false
##
- ## Disable all other services that aren't needed, since we don't have a role
- ## that does this.
+ ## Ensure unnecessary services are disabled
##
alertmanager['enable'] = false
consul['enable'] = false
+ geo_logcursor['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_workhorse['enable'] = false
@@ -270,7 +286,9 @@ Configure the tracking database.
redis_exporter['enable'] = false
repmgr['enable'] = false
sidekiq['enable'] = false
+ sidekiq_cluster['enable'] = false
puma['enable'] = false
+ unicorn['enable'] = false
```
After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.
@@ -284,9 +302,9 @@ In the architecture overview, there are two machines running the GitLab
application services. These services are enabled selectively in the
configuration.
-Configure the application servers following
-[Configuring GitLab for multiple nodes](../../high_availability/gitlab.md), then make the
-following modifications:
+Configure the GitLab Rails application servers following the relevant steps
+outlined in the [reference architectures](../../reference_architectures/index.md),
+then make the following modifications:
1. Edit `/etc/gitlab/gitlab.rb` on each application server in the **secondary**
cluster, and add the following:
@@ -373,7 +391,7 @@ application servers.
In this topology, a load balancer is required at each geographic location to
route traffic to the application servers.
-See [Load Balancer for GitLab with multiple nodes](../../high_availability/load_balancer.md) for
+See [Load Balancer for GitLab with multiple nodes](../../load_balancer.md) for
more information.
### Step 6: Configure the backend application servers on the **secondary** node
@@ -417,6 +435,7 @@ application servers above, with some changes to run only the `sidekiq` service:
redis_exporter['enable'] = false
repmgr['enable'] = false
puma['enable'] = false
+ unicorn['enable'] = false
##
## The unique identifier for the Geo node.
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 3cc0ade414e..159e2524198 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -33,7 +33,7 @@ whether they are stored on the local filesystem or in object storage.
To enable GitLab replication, you must:
-1. Go to **{admin}** **Admin Area >** **{location-dot}** **Geo**.
+1. Go to **Admin Area > Geo**.
1. Press **Edit** on the **secondary** node.
1. Enable the **Allow this secondary node to replicate content on Object Storage**
checkbox.
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
index 539132776b3..2b01231241c 100644
--- a/doc/administration/geo/replication/remove_geo_node.md
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -9,7 +9,7 @@ type: howto
**Secondary** nodes can be removed from the Geo cluster using the Geo admin page of the **primary** node. To remove a **secondary** node:
-1. Navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`).
+1. Navigate to **Admin Area > Geo** (`/admin/geo/nodes`).
1. Click the **Remove** button for the **secondary** node you want to remove.
1. Confirm by clicking **Remove** when the prompt appears.
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index c2f8aa35d2d..b8172322c10 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -14,7 +14,6 @@ Here is a list of steps you should take to attempt to fix the problem:
- Perform [basic troubleshooting](#basic-troubleshooting).
- Fix any [replication errors](#fixing-replication-errors).
-- Fix any [Foreign Data Wrapper](#fixing-foreign-data-wrapper-errors) errors.
- Fix any [common](#fixing-common-errors) errors.
## Basic troubleshooting
@@ -26,7 +25,7 @@ Before attempting more advanced troubleshooting:
### Check the health of the **secondary** node
-Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`) in
+Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
your browser. We perform the following health checks on each **secondary** node
to help identify if something is wrong:
@@ -44,6 +43,8 @@ For information on how to resolve common errors reported from the UI, see
If the UI is not working, or you are unable to log in, you can run the Geo
health check manually to get this information as well as a few more details.
+#### Health check Rake task
+
This Rake task can be run on an app node in the **primary** or **secondary**
Geo nodes:
@@ -62,8 +63,6 @@ This machine's Geo node name matches a database record ... yes, found a secondar
GitLab Geo secondary database is correctly configured ... yes
Database replication enabled? ... yes
Database replication working? ... yes
-GitLab Geo tracking database is configured to use Foreign Data Wrapper? ... yes
-GitLab Geo tracking database Foreign Data Wrapper schema is up-to-date? ... yes
GitLab Geo HTTP(S) connectivity ...
* Can connect to the primary node ... yes
HTTP/HTTPS repository cloning is enabled ... yes
@@ -77,6 +76,8 @@ All projects are in hashed storage? ... yes
Checking Geo ... Finished
```
+#### Sync status Rake task
+
Current sync information can be found manually by running this Rake task on any
**secondary** app node:
@@ -128,8 +129,7 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:
- Using the `gitlab_rails['geo_node_name']` setting.
- If that is not defined, using the `external_url` setting.
-This name is used to look up the node with the same **Name** in
-**{admin}** **Admin Area >** **{location-dot}** **Geo**.
+This name is used to look up the node with the same **Name** in **Admin Area > Geo**.
To check if the current machine has a node name that matches a node in the
database, run the check task:
@@ -205,7 +205,7 @@ sudo gitlab-rake gitlab:geo:check
- Verify that the password in `gitlab_rails['db_password']` matches the one used to create the hash in `postgresql['sql_user_password']`, by running `gitlab-ctl pg-password-md5 gitlab` and entering the password.
-1. Check returns not a secondary node
+1. Check returns `not a secondary node`
```plaintext
Checking Geo ...
@@ -252,12 +252,12 @@ sudo gitlab-rake gitlab:geo:check
When performing a PostgreSQL major version (9 to 10) update, this is expected. Follow:
- [initiate-the-replication-process](database.md#step-3-initiate-the-replication-process)
- - [Geo database has an outdated FDW remote schema](troubleshooting.md#geo-database-has-an-outdated-fdw-remote-schema-error)
## Fixing replication errors
The following sections outline troubleshooting steps for fixing replication
-errors.
+errors (indicated by `Database replication working? ... no` in the
+[`geo:check` output](#health-check-rake-task)).
### Message: `ERROR: replication slots can only be used if max_replication_slots > 0`?
@@ -390,13 +390,13 @@ to respect the CIDR format (i.e. `1.2.3.4/32`).
GitLab places a timeout on all repository clones, including project imports
and Geo synchronization operations. If a fresh `git clone` of a repository
-on the **primary** takes more than a few minutes, you may be affected by this.
+on the **primary** takes more than the default three hours, you may be affected by this.
To increase the timeout, add the following line to `/etc/gitlab/gitlab.rb`
on the **secondary** node:
```ruby
-gitlab_rails['gitlab_shell_git_timeout'] = 10800
+gitlab_rails['gitlab_shell_git_timeout'] = 14400
```
Then reconfigure GitLab:
@@ -405,7 +405,7 @@ Then reconfigure GitLab:
sudo gitlab-ctl reconfigure
```
-This will increase the timeout to three hours (10800 seconds). Choose a time
+This will increase the timeout to four hours (14400 seconds). Choose a time
long enough to accommodate a full clone of your largest repositories.
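+
+To estimate a suitable value, you can time a fresh clone of your largest
+repository (the URL below is a placeholder for one of your own projects):
+
+```shell
+# Hypothetical example: measure how long a full clone of a large repository takes
+time git clone https://gitlab.example.com/group/large-project.git
+```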
### New LFS objects are never replicated
@@ -504,11 +504,63 @@ to start again from scratch, there are a few steps that can help you:
gitlab-ctl start
```
-1. Refresh Foreign Data Wrapper tables
+## Fixing errors during a PostgreSQL upgrade or downgrade
- ```shell
- gitlab-rake geo:db:refresh_foreign_tables
- ```
+### Message: `ERROR: psql: FATAL: role "gitlab-consul" does not exist`
+
+When
+[upgrading PostgreSQL on a Geo instance](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance), you might encounter the
+following error:
+
+```plaintext
+$ sudo gitlab-ctl pg-upgrade --target-version=11
+Checking for an omnibus managed postgresql: OK
+Checking if postgresql['version'] is set: OK
+Checking if we already upgraded: NOT OK
+Checking for a newer version of PostgreSQL to install
+Upgrading PostgreSQL to 11.7
+Checking if PostgreSQL bin files are symlinked to the expected location: OK
+Waiting 30 seconds to ensure tasks complete before PostgreSQL upgrade.
+See https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server for details
+If you do not want to upgrade the PostgreSQL server at this time, enter Ctrl-C and see the documentation for details
+
+Please hit Ctrl-C now if you want to cancel the operation.
+..............................Detected an HA cluster.
+Error running command: /opt/gitlab/embedded/bin/psql -qt -d gitlab_repmgr -h /var/opt/gitlab/postgresql -p 5432 -c "SELECT name FROM repmgr_gitlab_cluster.repl_nodes WHERE type='master' AND active != 'f'" -U gitlab-consul
+ERROR: psql: FATAL: role "gitlab-consul" does not exist
+Traceback (most recent call last):
+ 10: from /opt/gitlab/embedded/bin/omnibus-ctl:23:in `<main>'
+ 9: from /opt/gitlab/embedded/bin/omnibus-ctl:23:in `load'
+ 8: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/omnibus-ctl-0.6.0/bin/omnibus-ctl:31:in `<top (required)>'
+ 7: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/omnibus-ctl-0.6.0/lib/omnibus-ctl.rb:746:in `run'
+ 6: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/omnibus-ctl-0.6.0/lib/omnibus-ctl.rb:204:in `block in add_command_under_category'
+ 5: from /opt/gitlab/embedded/service/omnibus-ctl/pg-upgrade.rb:171:in `block in load_file'
+ 4: from /opt/gitlab/embedded/service/omnibus-ctl-ee/lib/repmgr.rb:248:in `is_master?'
+ 3: from /opt/gitlab/embedded/service/omnibus-ctl-ee/lib/repmgr.rb:100:in `execute_psql'
+ 2: from /opt/gitlab/embedded/service/omnibus-ctl-ee/lib/repmgr.rb:113:in `cmd'
+ 1: from /opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/mixlib-shellout-3.0.9/lib/mixlib/shellout.rb:287:in `error!'
+/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/mixlib-shellout-3.0.9/lib/mixlib/shellout.rb:300:in `invalid!': Expected process to exit with [0], but received '2' (Mixlib::ShellOut::ShellCommandFailed)
+---- Begin output of /opt/gitlab/embedded/bin/psql -qt -d gitlab_repmgr -h /var/opt/gitlab/postgresql -p 5432 -c "SELECT name FROM repmgr_gitlab_cluster.repl_nodes WHERE type='master' AND active != 'f'" -U gitlab-consul ----
+STDOUT:
+STDERR: psql: FATAL: role "gitlab-consul" does not exist
+---- End output of /opt/gitlab/embedded/bin/psql -qt -d gitlab_repmgr -h /var/opt/gitlab/postgresql -p 5432 -c "SELECT name FROM repmgr_gitlab_cluster.repl_nodes WHERE type='master' AND active != 'f'" -U gitlab-consul ----
+Ran /opt/gitlab/embedded/bin/psql -qt -d gitlab_repmgr -h /var/opt/gitlab/postgresql -p 5432 -c "SELECT name FROM repmgr_gitlab_cluster.repl_nodes WHERE type='master' AND active != 'f'" -U gitlab-consul returned 2
+```
+
+If you are upgrading the PostgreSQL read-replica of a Geo secondary node, and
+you are not using `consul` or `repmgr`, you may need to disable the `consul` and/or
+`repmgr` services in `gitlab.rb`:
+
+```ruby
+consul['enable'] = false
+repmgr['enable'] = false
+```
+
+Then reconfigure GitLab:
+
+```shell
+sudo gitlab-ctl reconfigure
+```
## Fixing errors during a failover or when promoting a secondary to a primary node
@@ -551,6 +603,35 @@ or `gitlab-ctl promote-to-primary-node`, either:
bug](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/22021) was
fixed.
+If the above does not work, another possible reason is that you have paused replication
+from the original primary node before attempting to promote this node.
+
+To double-check this, you can do the following:
+
+- Get the current secondary node's ID using:
+
+ ```shell
+ sudo gitlab-rails runner 'puts GeoNode.current_node.id'
+ ```
+
+- Double-check that the node is active through the database by running the following
+ on the secondary node, replacing `ID_FROM_ABOVE`:
+
+  ```shell
+  sudo gitlab-rails dbconsole
+  ```
+
+  ```sql
+  SELECT enabled FROM geo_nodes WHERE id = ID_FROM_ABOVE;
+  ```
+
+- If the above returned `f`, it means that replication was paused.
+ You can re-enable it through an `UPDATE` statement in the database:
+
+  ```shell
+  sudo gitlab-rails dbconsole
+  ```
+
+  ```sql
+  UPDATE geo_nodes SET enabled = 't' WHERE id = ID_FROM_ABOVE;
+  ```
+
### Message: ``NoMethodError: undefined method `secondary?' for nil:NilClass``
When [promoting a **secondary** node](../disaster_recovery/index.md#step-3-promoting-a-secondary-node),
@@ -593,231 +674,6 @@ sudo /opt/gitlab/embedded/bin/gitlab-pg-ctl promote
GitLab 12.9 and later are [unaffected by this error](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5147).
-## Fixing Foreign Data Wrapper errors
-
-This section documents ways to fix potential Foreign Data Wrapper errors.
-
-### "Foreign Data Wrapper (FDW) is not configured" error
-
-When setting up Geo, you might see this warning in the `gitlab-rake
-gitlab:geo:check` output:
-
-```plaintext
-GitLab Geo tracking database Foreign Data Wrapper schema is up-to-date? ... foreign data wrapper is not configured
-```
-
-There are a few key points to remember:
-
-1. The FDW settings are configured on the Geo **tracking** database.
-1. The configured foreign server enables a login to the Geo
- **secondary**, read-only database.
-
-By default, the Geo secondary and tracking database are running on the
-same host on different ports. That is, 5432 and 5431 respectively.
-
-#### Checking configuration
-
-NOTE: **Note:**
-The following steps are for Omnibus installs only. Using Geo with source-based installs was **deprecated** in GitLab 11.5.
-
-To check the configuration:
-
-1. SSH into an app node in the **secondary**:
-
- ```shell
- sudo -i
- ```
-
- Note: An app node is any machine running at least one of the following services:
-
- - `puma`
- - `unicorn`
- - `sidekiq`
- - `geo-logcursor`
-
-1. Enter the database console:
-
- If the tracking database is running on the same node:
-
- ```shell
- gitlab-geo-psql
- ```
-
- Or, if the tracking database is running on a different node, you must specify
- the user and host when entering the database console:
-
- ```shell
- gitlab-geo-psql -U gitlab_geo -h <IP of tracking database>
- ```
-
- You will be prompted for the password of the `gitlab_geo` user. You can find
- it in plaintext in `/etc/gitlab/gitlab.rb` at:
-
- ```ruby
- geo_secondary['db_password'] = '<geo_tracking_db_password>'
- ```
-
- This password is normally set on the tracking database during
- [Step 3: Configure the tracking database on the secondary node](multiple_servers.md#step-3-configure-the-tracking-database-on-the-secondary-node),
- and it is set on the app nodes during
- [Step 4: Configure the frontend application servers on the secondary node](multiple_servers.md#step-4-configure-the-frontend-application-servers-on-the-secondary-node).
-
-1. Check whether any tables are present with the following statement:
-
- ```sql
- SELECT * from information_schema.foreign_tables;
- ```
-
- If everything is working, you should see something like this:
-
- ```plaintext
- gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
- foreign_table_catalog | foreign_table_schema | foreign_table_name | foreign_server_catalog | foreign_server_name
- -------------------------+----------------------+-------------------------------------------------+-------------------------+---------------------
- gitlabhq_geo_production | gitlab_secondary | abuse_reports | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | appearances | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | application_setting_terms | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | application_settings | gitlabhq_geo_production | gitlab_secondary
- <snip>
- ```
-
- However, if the query returns with `0 rows`, then continue onto the next steps.
-
-1. Check that the foreign server mapping is correct via `\des+`. The
- results should look something like this:
-
- ```plaintext
- gitlabhq_geo_production=# \des+
- List of foreign servers
- -[ RECORD 1 ]--------+------------------------------------------------------------
- Name | gitlab_secondary
- Owner | gitlab-psql
- Foreign-data wrapper | postgres_fdw
- Access privileges | "gitlab-psql"=U/"gitlab-psql" +
- | gitlab_geo=U/"gitlab-psql"
- Type |
- Version |
- FDW Options | (host '0.0.0.0', port '5432', dbname 'gitlabhq_production')
- Description |
- ```
-
- NOTE: **Note:**
- Pay particular attention to the host and port under
- FDW options. That configuration should point to the Geo secondary
- database.
-
- If you need to experiment with changing the host or password, the
- following queries demonstrate how:
-
- ```sql
- ALTER SERVER gitlab_secondary OPTIONS (SET host '<my_new_host>');
- ALTER SERVER gitlab_secondary OPTIONS (SET port 5432);
- ```
-
- If you change the host and/or port, you will also have to adjust the
- following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
- reconfigure`:
-
- - `gitlab_rails['db_host']`
- - `gitlab_rails['db_port']`
-
-1. Check that the user mapping is configured properly via `\deu+`:
-
- ```plaintext
- gitlabhq_geo_production=# \deu+
- List of user mappings
- Server | User name | FDW Options
- ------------------+------------+--------------------------------------------------------------------------------
- gitlab_secondary | gitlab_geo | ("user" 'gitlab', password 'YOUR-PASSWORD-HERE')
- (1 row)
- ```
-
- Make sure the password is correct. You can test that logins work by running `psql`:
-
- ```shell
- # Connect to the tracking database as the `gitlab_geo` user
- sudo \
- -u git /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/geo-postgresql \
- -p 5431 \
- -U gitlab_geo \
- -W \
- -d gitlabhq_geo_production
- ```
-
- If you need to correct the password, the following query shows how:
-
- ```sql
- ALTER USER MAPPING FOR gitlab_geo SERVER gitlab_secondary OPTIONS (SET password '<my_new_password>');
- ```
-
- If you change the user or password, you will also have to adjust the
- following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
- reconfigure`:
-
- - `gitlab_rails['db_username']`
- - `gitlab_rails['db_password']`
-
- If you are using [PgBouncer in front of the secondary
- database](database.md#pgbouncer-support-optional), be sure to update
- the following settings:
-
- - `geo_postgresql['fdw_external_user']`
- - `geo_postgresql['fdw_external_password']`
-
-#### Manual reload of FDW schema
-
-If you're still unable to get FDW working, you may want to try a manual
-reload of the FDW schema. To manually reload the FDW schema:
-
-1. On the node running the Geo tracking database, enter the PostgreSQL console via
- the `gitlab_geo` user:
-
- ```shell
- sudo \
- -u git /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/geo-postgresql \
- -p 5431 \
- -U gitlab_geo \
- -W \
- -d gitlabhq_geo_production
- ```
-
- Be sure to adjust the port and hostname for your configuration. You
- may be asked to enter a password.
-
-1. Reload the schema via:
-
- ```sql
- DROP SCHEMA IF EXISTS gitlab_secondary CASCADE;
- CREATE SCHEMA gitlab_secondary;
- GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO gitlab_geo;
- IMPORT FOREIGN SCHEMA public FROM SERVER gitlab_secondary INTO gitlab_secondary;
- ```
-
-1. Test that queries work:
-
- ```sql
- SELECT * from information_schema.foreign_tables;
- SELECT * FROM gitlab_secondary.projects limit 1;
- ```
-
-### "Geo database has an outdated FDW remote schema" error
-
-GitLab can error with a `Geo database has an outdated FDW remote schema` message.
-
-For example:
-
-```plaintext
-Geo database has an outdated FDW remote schema. It contains 229 of 236 expected tables. Please refer to Geo Troubleshooting.
-```
-
-To resolve this, run the following command on the **secondary**:
-
-```shell
-sudo gitlab-rake geo:db:refresh_foreign_tables
-```
-
## Expired artifacts
If you notice for some reason there are more artifacts on the Geo
@@ -835,7 +691,7 @@ If you are able to log in to the **primary** node, but you receive this error
when attempting to log into a **secondary**, you should check that the Geo
node's URL matches its external URL.
-1. On the primary, visit **{admin}** **Admin Area >** **{location-dot}** **Geo**.
+1. On the primary, visit **Admin Area > Geo**.
1. Find the affected **secondary** and click **Edit**.
1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
in `external_url "https://gitlab.example.com"` on the frontend server(s) of
@@ -896,13 +752,6 @@ If you are using Omnibus GitLab installation, something might have failed during
- Run `sudo gitlab-ctl reconfigure`.
- Manually trigger the database migration by running: `sudo gitlab-rake geo:db:migrate` as root on the **secondary** node.
-### Geo database is not configured to use Foreign Data Wrapper
-
-This error means the Geo Tracking Database doesn't have the FDW server and credentials
-configured.
-
-See ["Foreign Data Wrapper (FDW) is not configured" error?](#foreign-data-wrapper-fdw-is-not-configured-error).
-
### GitLab indicates that more than 100% of repositories were synced
This can be caused by orphaned records in the project registry. You can clear them
diff --git a/doc/administration/geo/replication/tuning.md b/doc/administration/geo/replication/tuning.md
index 63a8f81eecb..fe0b189863c 100644
--- a/doc/administration/geo/replication/tuning.md
+++ b/doc/administration/geo/replication/tuning.md
@@ -7,18 +7,25 @@ type: howto
# Tuning Geo **(PREMIUM ONLY)**
-## Changing the sync capacity values
+## Changing the sync/verification capacity values
-In the Geo admin page at **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`),
+In the Geo admin page at **Admin Area > Geo** (`/admin/geo/nodes`),
there are several variables that can be tuned to improve performance of Geo:
-- Repository sync capacity.
-- File sync capacity.
+- Repository sync capacity
+- File sync capacity
+- Container repositories sync capacity
+- Verification capacity
-Increasing these values will increase the number of jobs that are scheduled.
+Increasing capacity values will increase the number of jobs that are scheduled.
However, this may not lead to more downloads in parallel unless the number of
available Sidekiq threads is also increased. For example, if repository sync
capacity is increased from 25 to 50, you may also want to increase the number
of Sidekiq threads from 25 to 50. See the
[Sidekiq concurrency documentation](../../operations/extra_sidekiq_processes.md#number-of-threads)
for more details.
+
+## Repository re-verification
+
+See
+[Automatic background verification](../disaster_recovery/background_verification.md).
diff --git a/doc/administration/geo/replication/version_specific_updates.md b/doc/administration/geo/replication/version_specific_updates.md
index 777de715a8c..900d09bdd34 100644
--- a/doc/administration/geo/replication/version_specific_updates.md
+++ b/doc/administration/geo/replication/version_specific_updates.md
@@ -11,6 +11,35 @@ Check this document if it includes instructions for the version you are updating
These steps go together with the [general steps](updating_the_geo_nodes.md#general-update-steps)
for updating Geo nodes.
+## Updating to GitLab 13.3
+
+In GitLab 13.3, Geo removed the PostgreSQL [Foreign Data Wrapper](https://www.postgresql.org/docs/11/postgres-fdw.html) dependency for the tracking database.
+
+The FDW server, user, and extension will be removed during the upgrade process on each **secondary** node. The GitLab settings related to the FDW in `/etc/gitlab/gitlab.rb` (for example, `geo_secondary['db_fdw']` and `geo_postgresql['fdw_external_password']`) have been deprecated and can be safely removed.
+
+In some scenarios, such as when using an external PostgreSQL instance for the tracking database, the FDW settings must be removed manually. Enter the PostgreSQL console of that instance and remove them:
+
+```sql
+DROP SERVER gitlab_secondary CASCADE;
+DROP EXTENSION IF EXISTS postgres_fdw;
+```
+
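+As a sanity check, you can then confirm that no foreign servers remain. For
+example (assuming `psql` access to the tracking database; the `\des` meta-command
+lists foreign servers and should return no rows):
+
+```shell
+# Hypothetical check against the tracking database
+sudo gitlab-geo-psql -c '\des'
+```
+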
+## Updating to GitLab 13.0
+
+Upgrading to GitLab 13.0 requires GitLab 12.10 to already be using PostgreSQL
+version 11. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+## Updating to GitLab 12.10
+
+GitLab 12.10 does not attempt to automatically update the embedded PostgreSQL
+server when using Geo, because the PostgreSQL upgrade requires downtime on
+secondaries while reinitializing streaming replication. It must be upgraded
+manually. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
## Updating to GitLab 12.9
CAUTION: **Warning:**
@@ -18,6 +47,32 @@ GitLab 12.9.0 through GitLab 12.9.3 are affected by [a bug that stops
repository verification](https://gitlab.com/gitlab-org/gitlab/-/issues/213523).
The issue is fixed in GitLab 12.9.4. Please upgrade to GitLab 12.9.4 or later.
+By default, GitLab 12.9 will attempt to automatically update the embedded
+PostgreSQL server to 10.12 from 9.6, which requires downtime on secondaries
+while reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
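+To resume automatic PostgreSQL upgrades later, remove the file again (the
+inverse of the command above):
+
+```shell
+sudo rm /etc/gitlab/disable-postgresql-upgrade
+```
+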
+## Updating to GitLab 12.8
+
+By default, GitLab 12.8 will attempt to automatically update the embedded
+PostgreSQL server to 10.12 from 9.6, which requires downtime on secondaries
+while reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
## Updating to GitLab 12.7
DANGER: **Danger:**
@@ -28,8 +83,92 @@ bug](https://gitlab.com/gitlab-org/gitlab/-/issues/199672) that causes Geo
fix](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/24021) was
shipped in 12.7.5.
+By default, GitLab 12.7 will attempt to automatically update the embedded
+PostgreSQL server to 10.9 from 9.6, which requires downtime on secondaries while
+reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
+## Updating to GitLab 12.6
+
+By default, GitLab 12.6 will attempt to automatically update the embedded
+PostgreSQL server to 10.9 from 9.6, which requires downtime on secondaries while
+reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
+## Updating to GitLab 12.5
+
+By default, GitLab 12.5 will attempt to automatically update the embedded
+PostgreSQL server to 10.9 from 9.6, which requires downtime on secondaries while
+reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
+## Updating to GitLab 12.4
+
+By default, GitLab 12.4 will attempt to automatically update the embedded
+PostgreSQL server to 10.9 from 9.6, which requires downtime on secondaries while
+reinitializing streaming replication. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
+This can be temporarily disabled by running the following before updating:
+
+```shell
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
+## Updating to GitLab 12.3
+
+DANGER: **Danger:**
+If the existing PostgreSQL server version is 9.6.x, it is recommended to
+upgrade to GitLab 12.4 or newer. By default, GitLab 12.3 will attempt to
+automatically update the embedded PostgreSQL server to 10.9 from 9.6. In
+certain circumstances, it will fail. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for more information.
+
+Additionally, if the PostgreSQL upgrade does not fail, a successful upgrade
+requires downtime on secondaries while reinitializing streaming replication.
+Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
## Updating to GitLab 12.2
+DANGER: **Danger:**
+If the existing PostgreSQL server version is 9.6.x, it is recommended to
+upgrade to GitLab 12.4 or newer. By default, GitLab 12.2 will attempt to
+automatically update the embedded PostgreSQL server to 10.9 from 9.6. In
+certain circumstances, it will fail. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for more information.
+
+Additionally, if the PostgreSQL upgrade does not fail, a successful upgrade
+requires downtime on secondaries while reinitializing streaming replication.
+Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for the recommended procedure.
+
GitLab 12.2 includes the following minor PostgreSQL updates:
- To version `9.6.14` if you run PostgreSQL 9.6.
@@ -48,17 +187,20 @@ The restart avoids a version mismatch when PostgreSQL tries to load the FDW exte
## Updating to GitLab 12.1
-By default, GitLab 12.1 will attempt to automatically update the
-embedded PostgreSQL server to 10.7 from 9.6. Please see
-[the omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+DANGER: **Danger:**
+If the existing PostgreSQL server version is 9.6.x, it is recommended to
+upgrade to GitLab 12.4 or newer. By default, GitLab 12.1 will attempt to
+automatically update the embedded PostgreSQL server to 10.9 from 9.6. In
+certain circumstances, it will fail. Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
+for more information.
+
+Additionally, if the PostgreSQL upgrade does not fail, a successful upgrade
+requires downtime on secondaries while reinitializing streaming replication.
+Please see
+[the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance)
for the recommended procedure.
-This can be temporarily disabled by running the following before updating:
-
-```shell
-sudo touch /etc/gitlab/disable-postgresql-upgrade
-```
-
## Updating to GitLab 12.0
CAUTION: **Warning:**
@@ -214,7 +356,7 @@ Replicating over SSH has been deprecated, and support for this option will be
removed in a future release.
To switch to HTTP/HTTPS replication, log into the **primary** node as an admin and visit
-**{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
+**Admin Area > Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
press the "Edit" button, change the "Repository cloning" setting from
"SSH (deprecated)" to "HTTP/HTTPS", and press "Save changes". This should take
effect immediately.
diff --git a/doc/administration/git_annex.md b/doc/administration/git_annex.md
index ed6218ede91..59c1e621e46 100644
--- a/doc/administration/git_annex.md
+++ b/doc/administration/git_annex.md
@@ -1,4 +1,8 @@
---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference, howto
disqus_identifier: 'https://docs.gitlab.com/ee/workflow/git_annex.html'
---
@@ -90,8 +94,8 @@ one is located in `config.yml` of GitLab Shell.
## Using GitLab git-annex
-> **Note:**
-> Your Git remotes must be using the SSH protocol, not HTTP(S).
+NOTE: **Note:**
+Your Git remotes must be using the SSH protocol, not HTTP(S).
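+
+You can verify which protocol your remotes use before proceeding. For example
+(remote names and URLs shown are illustrative):
+
+```shell
+# SSH remotes look like git@gitlab.example.com:group/project.git
+git remote -v
+```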
Here is an example workflow of uploading a very large file and then checking it
into your Git repository:
diff --git a/doc/administration/git_protocol.md b/doc/administration/git_protocol.md
index e1600d972bd..841e636bd0a 100644
--- a/doc/administration/git_protocol.md
+++ b/doc/administration/git_protocol.md
@@ -1,4 +1,8 @@
---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
description: "Set and configure Git protocol v2"
---
diff --git a/doc/administration/gitaly/img/cluster_example_v13_3.png b/doc/administration/gitaly/img/cluster_example_v13_3.png
new file mode 100644
index 00000000000..efca79ecef9
--- /dev/null
+++ b/doc/administration/gitaly/img/cluster_example_v13_3.png
Binary files differ
diff --git a/doc/administration/gitaly/img/shard_example_v13_3.png b/doc/administration/gitaly/img/shard_example_v13_3.png
new file mode 100644
index 00000000000..47f582cfa78
--- /dev/null
+++ b/doc/administration/gitaly/img/shard_example_v13_3.png
Binary files differ
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index 057d0559c14..9558488c89e 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -376,7 +376,7 @@ This can be risky because anything that prevents your Gitaly clients from reachi
servers will cause all Gitaly requests to fail. For example, any sort of network, firewall, or name
resolution problems.
-Additionally, you must [disable Rugged](../high_availability/nfs.md#improving-nfs-performance-with-gitlab)
+Additionally, you must [disable Rugged](../nfs.md#improving-nfs-performance-with-gitlab)
if previously enabled manually.
Gitaly makes the following assumptions:
diff --git a/doc/administration/gitaly/praefect.md b/doc/administration/gitaly/praefect.md
index 1f97cd304f9..2e36a754c79 100644
--- a/doc/administration/gitaly/praefect.md
+++ b/doc/administration/gitaly/praefect.md
@@ -59,6 +59,38 @@ Follow the [HA Gitaly epic](https://gitlab.com/groups/gitlab-org/-/epics/1489)
for improvements including
[horizontally distributing reads](https://gitlab.com/groups/gitlab-org/-/epics/2013).
+## Cluster or shard
+
+Gitaly supports multiple models of scaling:
+
+- Clustering using Gitaly Cluster, where each repository is stored on multiple Gitaly nodes in the
+ cluster. Read requests are distributed between repository replicas and write requests are
+ broadcast to repository replicas.
+- Sharding using [repository storage paths](../repository_storage_paths.md), where each repository
+ is stored on the assigned Gitaly node. All requests are routed to this node.
+
+| Cluster | Shard |
+|---|---|
+| ![Cluster example](img/cluster_example_v13_3.png) | ![Shard example](img/shard_example_v13_3.png) |
+
+Generally, Gitaly Cluster can replace sharded configurations, at the expense of additional storage
+needed to store each repository on multiple Gitaly nodes. The benefits of using Gitaly Cluster over
+sharding are:
+
+- Improved fault tolerance, because each Gitaly node has a copy of every repository.
+- Improved resource utilization, reducing the need for over-provisioning for shard-specific peak
+ loads, because read loads are distributed across replicas.
+- Manual rebalancing for performance is not required, because read loads are distributed across
+ replicas.
+- Simpler management, because all Gitaly nodes are identical.
+
+Under some workloads, CPU and memory requirements may call for a large fleet of Gitaly nodes, and it
+can be uneconomical to have a one-to-one replication factor.
+
+A hybrid approach can be used in these instances, where each shard is configured as a smaller
+cluster. [Variable replication factor](https://gitlab.com/groups/gitlab-org/-/epics/3372) is planned
+to provide greater flexibility for extremely large GitLab instances.
+
## Requirements for configuring a Gitaly Cluster
The minimum recommended configuration for a Gitaly Cluster requires:
@@ -142,6 +174,16 @@ database on the same PostgreSQL server if using
[Geo](../geo/replication/index.md). The replication state is internal to each instance
of GitLab and should not be replicated.
+These instructions help set up a single PostgreSQL database, which creates a single point of
+failure. For greater fault tolerance, the following options are available:
+
+- For non-Geo installations, use one of the fault-tolerant
+ [PostgreSQL setups](../postgresql/index.md).
+- For Geo instances, either:
+ - Set up a separate [PostgreSQL instance](https://www.postgresql.org/docs/11/high-availability.html).
+ - Use a cloud-managed PostgreSQL service. AWS
+    [Relational Database Service](https://aws.amazon.com/rds/) is recommended.
+
To complete this section you will need:
- 1 Praefect node
@@ -188,6 +230,49 @@ node, using `psql` which is installed by Omnibus GitLab.
The database used by Praefect is now configured.
+#### PgBouncer
+
+To reduce PostgreSQL resource consumption, you should set up and configure
+[PgBouncer](https://www.pgbouncer.org/) in front of the PostgreSQL instance. To do
+this, replace the value of `POSTGRESQL_SERVER_ADDRESS` with the corresponding IP or host
+address of the PgBouncer instance.
+
+This documentation doesn't provide PgBouncer installation instructions.
+You can:
+
+- Find instructions on the [official website](https://www.pgbouncer.org/install.html).
+- Use a [Docker image](https://hub.docker.com/r/edoburu/pgbouncer/).
+
+In addition to base PgBouncer configuration options, set the following values:
+
+- The [Praefect PostgreSQL database](#postgresql) in the `[databases]` section:
+
+ ```ini
+ [databases]
+ * = host=POSTGRESQL_SERVER_ADDRESS port=5432 auth_user=praefect
+ ```
+
+- [`pool_mode`](https://www.pgbouncer.org/config.html#pool_mode)
+ and [`ignore_startup_parameters`](https://www.pgbouncer.org/config.html#ignore_startup_parameters)
+ in the `[pgbouncer]` section:
+
+ ```ini
+ [pgbouncer]
+ pool_mode = transaction
+ ignore_startup_parameters = extra_float_digits
+ ```
+
+The `praefect` user and its password should be included in the file (default is
+`userlist.txt`) used by PgBouncer if the [`auth_file`](https://www.pgbouncer.org/config.html#auth_file)
+configuration option is set.
+
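+For illustration, one way to append such an entry is shown below. The file path
+and the plaintext password format are assumptions; PgBouncer also accepts MD5
+hashes, and your installation may keep `userlist.txt` elsewhere:
+
+```shell
+# Hypothetical example: add the praefect user to PgBouncer's auth file
+echo '"praefect" "PRAEFECT_SQL_PASSWORD"' | sudo tee -a /etc/pgbouncer/userlist.txt
+```
+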
+NOTE: **Note:**
+By default, PgBouncer uses port `6432` to accept incoming
+connections. You can change it by setting the [`listen_port`](https://www.pgbouncer.org/config.html#listen_port)
+configuration option. We recommend setting it to the default port value (`5432`) used by
+PostgreSQL instances. Otherwise, you should change the configuration parameter
+`praefect['database_port']` for each Praefect instance to the correct value.
+
### Praefect
To complete this section you will need:
@@ -212,6 +297,7 @@ application server, or a Gitaly node.
postgresql['enable'] = false
redis['enable'] = false
nginx['enable'] = false
+ alertmanager['enable'] = false
prometheus['enable'] = false
grafana['enable'] = false
puma['enable'] = false
@@ -657,6 +743,13 @@ internal traffic from the GitLab application to the Praefect nodes. The
specifics on which load balancer to use or the exact configuration is beyond the
scope of the GitLab documentation.
+NOTE: **Note:**
+The load balancer must be configured to accept traffic from the Gitaly nodes in
+addition to the GitLab nodes. Some requests handled by
+[`gitaly-ruby`](index.md#gitaly-ruby) sidecar processes call into the main Gitaly
+process. `gitaly-ruby` uses the Gitaly address set in the GitLab server's
+`git_data_dirs` setting to make this connection.
+
We hope that if you’re managing HA systems like GitLab, you have a load balancer
of choice already. Some examples include [HAProxy](https://www.haproxy.org/)
(open-source), [Google Internal Load Balancer](https://cloud.google.com/load-balancing/docs/internal/),
@@ -847,19 +940,14 @@ cluster.
## Distributed reads
-> Introduced in GitLab 13.1 in [beta](https://about.gitlab.com/handbook/product/#alpha-beta-ga) with feature flag `gitaly_distributed_reads` set to disabled.
-
Praefect supports distribution of read operations across Gitaly nodes that are
configured for the virtual node.
-To allow for [performance testing](https://gitlab.com/gitlab-org/quality/performance/-/issues/231),
-distributed reads are currently in
-[beta](https://about.gitlab.com/handbook/product/#alpha-beta-ga) and disabled by
-default. To enable distributed reads, the `gitaly_distributed_reads`
-[feature flag](../feature_flags.md) must be enabled in a Ruby console:
+The feature is enabled by default. To disable distributed reads, the `gitaly_distributed_reads`
+[feature flag](../feature_flags.md) must be disabled in a Ruby console:
```ruby
-Feature.enable(:gitaly_distributed_reads)
+Feature.disable(:gitaly_distributed_reads)
```
If enabled, all RPCs marked with `ACCESSOR` option like
@@ -884,32 +972,51 @@ They reflect configuration defined for this instance of Praefect.
## Strong consistency
-> Introduced in GitLab 13.1 in [alpha](https://about.gitlab.com/handbook/product/#alpha-beta-ga), disabled by default.
+> - Introduced in GitLab 13.1 in [alpha](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga), disabled by default.
+> - Entered [beta](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga) in GitLab 13.2, disabled by default.
+> - From GitLab 13.3, disabled unless primary-wins reference transactions strategy is disabled.
Praefect guarantees eventual consistency by replicating all writes to secondary nodes
after the write to the primary Gitaly node has happened.
Praefect can instead provide strong consistency by creating a transaction and writing
changes to all Gitaly nodes at once. Strong consistency is currently in
-[alpha](https://about.gitlab.com/handbook/product/#alpha-beta-ga) and not enabled by
+[alpha](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga) and not enabled by
default. If enabled, transactions are only available for a subset of RPCs. For more
information, see the [strong consistency epic](https://gitlab.com/groups/gitlab-org/-/epics/1189).
To enable strong consistency:
-- In GitLab 13.2 and later, enable the `:gitaly_reference_transactions` feature flag.
+- In GitLab 13.3 and later, reference transactions are enabled by default with
+ a primary-wins strategy. This strategy causes all transactions to succeed for
+ the primary and thus does not ensure strong consistency. To enable strong
+ consistency, disable the `:gitaly_reference_transactions_primary_wins`
+ feature flag.
+- In GitLab 13.2, enable the `:gitaly_reference_transactions` feature flag.
- In GitLab 13.1, enable the `:gitaly_reference_transactions` and `:gitaly_hooks_rpc`
feature flags.
-Enabling feature flags requires [access to the Rails console](../feature_flags.md#start-the-gitlab-rails-console).
+Changing feature flags requires [access to the Rails console](../feature_flags.md#start-the-gitlab-rails-console).
In the Rails console, enable or disable the flags as required. For example:
```ruby
Feature.enable(:gitaly_reference_transactions)
+Feature.disable(:gitaly_reference_transactions_primary_wins)
```
-To monitor strong consistency, use the `gitaly_praefect_transactions_total` and
-`gitaly_praefect_transactions_delay_seconds` Prometheus counter metrics.
+To monitor strong consistency, you can use the following Prometheus metrics:
+
+- `gitaly_praefect_transactions_total`: Number of transactions created and
+ voted on.
+- `gitaly_praefect_subtransactions_per_transaction_total`: Number of times
+ nodes cast a vote for a single transaction. This can happen multiple times if
+ multiple references are getting updated in a single transaction.
+- `gitaly_praefect_voters_per_transaction_total`: Number of Gitaly nodes taking
+ part in a transaction.
+- `gitaly_praefect_transactions_delay_seconds`: Server-side delay introduced by
+ waiting for the transaction to be committed.
+- `gitaly_hook_transaction_voting_delay_seconds`: Client-side delay introduced
+ by waiting for the transaction to be committed.
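+
+For example, if these metrics are scraped by a Prometheus server, you could
+watch the transaction rate with a query along these lines (the server URL is a
+placeholder):
+
+```shell
+# Hypothetical example: query the 5-minute transaction rate via the Prometheus HTTP API
+curl --get 'http://prometheus.example.com:9090/api/v1/query' \
+  --data-urlencode 'query=rate(gitaly_praefect_transactions_total[5m])'
+```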
## Automatic failover and leader election
@@ -940,76 +1047,167 @@ strategy in the future.
## Primary Node Failure
-Praefect recovers from a failing primary Gitaly node by promoting a healthy secondary as the new primary. To minimize data loss, Praefect elects the secondary with the least unreplicated writes from the primary. There can still be some unreplicated writes, leading to data loss.
+Gitaly Cluster recovers from a failing primary Gitaly node by promoting a healthy secondary as the
+new primary.
+
+To minimize data loss, Gitaly Cluster:
+
+- Switches repositories that are outdated on the new primary to [read-only mode](#read-only-mode).
+- Elects the secondary with the least unreplicated writes from the primary to be the new primary.
+ Because there can still be some unreplicated writes, [data loss can occur](#check-for-data-loss).
+
+### Read-only mode
+
+> - Introduced in GitLab 13.0 as [generally available](https://about.gitlab.com/handbook/product/gitlab-the-product/#generally-available-ga).
+> - Between GitLab 13.0 and GitLab 13.2, read-only mode applied to the whole virtual storage and occurred whenever failover occurred.
+> - [In GitLab 13.3 and later](https://gitlab.com/gitlab-org/gitaly/-/issues/2862), read-only mode applies on a per-repository basis and only occurs if a new primary is out of date.
+
+When Gitaly Cluster switches to a new primary, repositories enter read-only mode if they are out of
+date. This can happen after failing over to an outdated secondary. Read-only mode eases data
+recovery efforts by preventing writes that may conflict with the unreplicated writes on other nodes.
+
+To enable writes again, an administrator can:
+
+1. [Check](#check-for-data-loss) for data loss.
+1. Attempt to [recover](#recover-missing-data) missing data.
+1. Either [enable writes](#enable-writes-or-accept-data-loss) in the virtual storage or
+ [accept data loss](#enable-writes-or-accept-data-loss) if necessary, depending on the version of
+ GitLab.
-Praefect switches a virtual storage in to read-only mode after a failover event. This eases data recovery efforts by preventing new, possibly conflicting writes to the newly elected primary. This allows the administrator to attempt recovering the lost data before allowing new writes.
+### Check for data loss
-If you prefer write availability over consistency, this behavior can be turned off by setting `praefect['failover_read_only_after_failover'] = false` in `/etc/gitlab/gitlab.rb` and [reconfiguring Praefect](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+The Praefect `dataloss` sub-command identifies replicas that are likely to be outdated. This is
+useful for identifying potential data loss after a failover. The following parameters are
+available:
-### Checking for data loss
+- `-virtual-storage` that specifies which virtual storage to check. The default behavior is to
+ display outdated replicas of read-only repositories as they generally require administrator
+ action.
+- In GitLab 13.3 and later, `-partially-replicated` that specifies whether to display a list of
+ [outdated replicas of writable repositories](#outdated-replicas-of-writable-repositories).
-The Praefect `dataloss` sub-command helps identify lost writes by checking for uncompleted replication jobs. This is useful for identifying possible data loss cases after a failover. This command must be executed on a Praefect node.
+NOTE: **Note:**
+`dataloss` is still in beta and the output format is subject to change.
+
+To check for outdated replicas of read-only repositories, run:
```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>]
```
-If the virtual storage is not specified, every configured virtual storage is checked for data loss.
+Every configured virtual storage is checked if none is specified:
```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss
```
+The number of potentially unapplied changes to repositories is listed for each replica. Listed
+repositories might have the latest changes, but this is not guaranteed. Only outdated replicas of
+read-only repositories are listed by default. For example:
+
```shell
Virtual storage: default
- Current read-only primary: gitaly-2
- Previous write-enabled primary: gitaly-1
- Nodes with data loss from failing over from gitaly-1:
- @hashed/2c/62/2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3.git: gitaly-0
- @hashed/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a.git: gitaly-0, gitaly-2
+ Primary: gitaly-3
+ Outdated repositories:
+ @hashed/2c/62/2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3.git (read-only):
+ gitaly-2 is behind by 1 change or less
+ gitaly-3 is behind by 2 changes or less
```
-Currently `dataloss` only considers a repository up to date if it has been directly replicated to from the previous write-enabled primary. While reconciling from an up to date secondary can recover the data, this is not visible in the data loss report. This is due for improvement via [Gitaly#2866](https://gitlab.com/gitlab-org/gitaly/-/issues/2866).
+A confirmation is printed out when every repository is writable. For example:
-NOTE: **Note:**
-`dataloss` is still in beta and the output format is subject to change.
+```shell
+Virtual storage: default
+ Primary: gitaly-1
+ All repositories are writable!
+```
-### Checking repository checksums
+#### Outdated replicas of writable repositories
-To check a project's repository checksums across on all Gitaly nodes, run the
-[replicas Rake task](../raketasks/praefect.md#replica-checksums) on the main GitLab node.
+> [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/3019) in GitLab 13.3.
-### Recovering lost writes
+To also list information for outdated replicas of writable repositories, use the
+`-partially-replicated` parameter.
-The Praefect `reconcile` sub-command can be used to recover lost writes from the
-previous primary once it is back online. This is only possible when the virtual storage
-is still in read-only mode.
+A repository is writable if the primary has the latest changes. Secondaries might be temporarily
+outdated while they are waiting to replicate the latest changes.
```shell
-sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml reconcile -virtual <virtual-storage> -reference <previous-primary> -target <current-primary> -f
+sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>] [-partially-replicated]
```
-Refer to [Backend Node Recovery](#backend-node-recovery) section for more details on
-the `reconcile` sub-command.
+Example output:
-### Enabling Writes
+```shell
+Virtual storage: default
+ Primary: gitaly-3
+ Outdated repositories:
+ @hashed/2c/62/2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3.git (read-only):
+ gitaly-2 is behind by 1 change or less
+ gitaly-3 is behind by 2 changes or less
+ @hashed/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a.git (writable):
+ gitaly-2 is behind by 1 change or less
+```
-Any data recovery attempts should have been made before enabling writes to eliminate
-any chance of conflicting writes. Virtual storage can be re-enabled for writes by using
-the Praefect `enable-writes` sub-command.
+With the `-partially-replicated` flag set, a confirmation is printed out if every replica is fully up to date.
+For example:
```shell
-sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml enable-writes -virtual-storage <virtual-storage>
+Virtual storage: default
+ Primary: gitaly-1
+ All repositories are up to date!
```
-## Backend Node Recovery
+### Check repository checksums
+
+To check a project's repository checksums on all Gitaly nodes, run the
+[replicas Rake task](../raketasks/praefect.md#replica-checksums) on the main GitLab node.
+
+### Recover missing data
+
+The Praefect `reconcile` sub-command can be used to recover unreplicated changes from another replica.
+The source storage must have a more recent version of the repository than the target storage.
+
+```shell
+sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml reconcile -virtual <virtual-storage> -reference <up-to-date-storage> -target <outdated-storage> -f
+```
+
+Refer to [Gitaly node recovery](#gitaly-node-recovery) section for more details on the `reconcile` sub-command.
+
+### Enable writes or accept data loss
+
+Praefect provides the following subcommands to re-enable writes:
+
+- In GitLab 13.2 and earlier, `enable-writes` to re-enable virtual storage for writes after data
+ recovery attempts.
+
+ ```shell
+ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml enable-writes -virtual-storage <virtual-storage>
+ ```
+
+- [In GitLab 13.3](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/2415) and later,
+ `accept-dataloss` to accept data loss and re-enable writes for repositories after data recovery
+  attempts have failed. Accepting data loss causes the current version of the repository on the
+  authoritative storage to be considered the latest. Other storages are brought up to date with the
+ authoritative storage by scheduling replication jobs.
+
+ ```shell
+ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml accept-dataloss -virtual-storage <virtual-storage> -repository <relative-path> -authoritative-storage <storage-name>
+ ```
+
+CAUTION: **Caution:**
+`accept-dataloss` causes permanent data loss by overwriting other versions of the repository. Data
+[recovery efforts](#recover-missing-data) must be performed before using it.
+
+## Gitaly node recovery
+
+When a secondary Gitaly node fails and is no longer able to replicate changes, it starts
+to drift from the primary Gitaly node. If the failed Gitaly node eventually recovers,
+it needs to be reconciled with the primary Gitaly node. The primary Gitaly node is considered
+the single source of truth for the state of a shard.
-When a Praefect backend node fails and is no longer able to
-replicate changes, the backend node will start to drift from the primary. If
-that node eventually recovers, it will need to be reconciled with the current
-primary. The primary node is considered the single source of truth for the
-state of a shard. The Praefect `reconcile` sub-command allows for the manual
-reconciliation between a backend node and the current primary.
+The Praefect `reconcile` sub-command allows for the manual reconciliation between a secondary
+Gitaly node and the current primary Gitaly node.
Run the following command on the Praefect server after all placeholders
(`<virtual-storage>` and `<target-storage>`) have been replaced:
@@ -1018,8 +1216,8 @@ Run the following command on the Praefect server after all placeholders
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml reconcile -virtual <virtual-storage> -target <target-storage>
```
-- Replace the placeholder `<virtual-storage>` with the virtual storage containing the backend node storage to be checked.
-- Replace the placeholder `<target-storage>` with the backend storage name.
+- Replace the placeholder `<virtual-storage>` with the virtual storage containing the Gitaly node storage to be checked.
+- Replace the placeholder `<target-storage>` with the Gitaly storage name.
The command will return a list of repositories that were found to be
inconsistent with the current primary. Each of these inconsistencies will
@@ -1030,11 +1228,11 @@ also be logged with an accompanying replication job ID.
If your GitLab instance already has repositories, these won't be migrated
automatically.
-Repositories may be moved from one storage location using the [Repository
-API](../../api/projects.html#edit-project):
+Repositories may be moved from one storage location to another using the [Project repository storage moves API](../../api/project_repository_storage_moves.md):
```shell
-curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "repository_storage=praefect" https://example.gitlab.com/api/v4/projects/123
+curl --request POST --header "Private-Token: <your_access_token>" --header "Content-Type: application/json" \
+--data '{"destination_storage_name":"praefect"}' "https://gitlab.example.com/api/v4/projects/123/repository_storage_moves"
```
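+
+Storage moves are processed asynchronously. To follow progress, you can list the scheduled moves
+and check each move's `state` field. A sketch, assuming the list endpoint from the same
+[Project repository storage moves API](../../api/project_repository_storage_moves.md):
+
+```shell
+curl --header "Private-Token: <your_access_token>" "https://gitlab.example.com/api/v4/project_repository_storage_moves"
+```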
## Debugging Praefect
diff --git a/doc/administration/gitaly/reference.md b/doc/administration/gitaly/reference.md
index 0429149ec2d..0c211c220d7 100644
--- a/doc/administration/gitaly/reference.md
+++ b/doc/administration/gitaly/reference.md
@@ -233,7 +233,7 @@ The following values configure logging in Gitaly under the `[logging]` section.
| `format` | string | no | Log format: `text` or `json`. Default: `text`. |
| `level` | string | no | Log level: `debug`, `info`, `warn`, `error`, `fatal`, or `panic`. Default: `info`. |
| `sentry_dsn` | string | no | Sentry DSN for exception monitoring. |
-| `sentry_environment` | string | no | [Sentry Environment](https://docs.sentry.io/enriching-error-data/environments/) for exception monitoring. |
+| `sentry_environment` | string | no | [Sentry Environment](https://docs.sentry.io/product/sentry-basics/environments/) for exception monitoring. |
| `ruby_sentry_dsn` | string | no | Sentry DSN for `gitaly-ruby` exception monitoring. |
While the main Gitaly application logs go to `stdout`, there are some extra log
diff --git a/doc/administration/high_availability/alpha_database.md b/doc/administration/high_availability/alpha_database.md
index 7afd739f44c..99c28e5c7a6 100644
--- a/doc/administration/high_availability/alpha_database.md
+++ b/doc/administration/high_availability/alpha_database.md
@@ -2,4 +2,4 @@
redirect_to: 'database.md'
---
-This document was moved to [another location](database.md).
+This document was moved to [another location](../postgresql/index.md).
diff --git a/doc/administration/high_availability/consul.md b/doc/administration/high_availability/consul.md
index 978ba08c4fa..362d6ee8ba7 100644
--- a/doc/administration/high_availability/consul.md
+++ b/doc/administration/high_availability/consul.md
@@ -1,198 +1,5 @@
---
-type: reference
+redirect_to: ../consul.md
---
-# Working with the bundled Consul service **(PREMIUM ONLY)**
-
-As part of its High Availability stack, GitLab Premium includes a bundled version of [Consul](https://www.consul.io/) that can be managed through `/etc/gitlab/gitlab.rb`. Consul is a service networking solution. When it comes to [GitLab Architecture](../../development/architecture.md), Consul utilization is supported for configuring:
-
-1. [Monitoring in Scaled and Highly Available environments](monitoring_node.md)
-1. [PostgreSQL High Availability with Omnibus](../postgresql/replication_and_failover.md)
-
-A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the Consul cluster.
-
-## Prerequisites
-
-First, make sure to [download/install](https://about.gitlab.com/install/)
-Omnibus GitLab **on each node**.
-
-Choose an installation method, then make sure you complete steps:
-
-1. Install and configure the necessary dependencies.
-1. Add the GitLab package repository and install the package.
-
-When installing the GitLab package, do not supply `EXTERNAL_URL` value.
-
-## Configuring the Consul nodes
-
-On each Consul node perform the following:
-
-1. Make sure you collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step, before executing the next step.
-
-1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
-
- ```ruby
- # Disable all components except Consul
- roles ['consul_role']
-
- # START user configuration
- # Replace placeholders:
- #
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses gathered for CONSUL_SERVER_NODES
- consul['configuration'] = {
- server: true,
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
- }
-
- # Disable auto migrations
- gitlab_rails['auto_migrate'] = false
- #
- # END user configuration
- ```
-
- > `consul_role` was introduced with GitLab 10.3
-
-1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes
- to take effect.
-
-### Consul checkpoint
-
-Before moving on, make sure Consul is configured correctly. Run the following
-command to verify all server nodes are communicating:
-
-```shell
-/opt/gitlab/embedded/bin/consul members
-```
-
-The output should be similar to:
-
-```plaintext
-Node Address Status Type Build Protocol DC
-CONSUL_NODE_ONE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
-CONSUL_NODE_TWO XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
-CONSUL_NODE_THREE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
-```
-
-If any of the nodes isn't `alive` or if any of the three nodes are missing,
-check the [Troubleshooting section](#troubleshooting) before proceeding.
-
-## Operations
-
-### Checking cluster membership
-
-To see which nodes are part of the cluster, run the following on any member in the cluster
-
-```shell
-$ /opt/gitlab/embedded/bin/consul members
-Node Address Status Type Build Protocol DC
-consul-b XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
-consul-c XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
-consul-c XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
-db-a XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
-db-b XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
-```
-
-Ideally all nodes will have a `Status` of `alive`.
-
-### Restarting the server cluster
-
-NOTE: **Note:**
-This section only applies to server agents. It is safe to restart client agents whenever needed.
-
-If it is necessary to restart the server cluster, it is important to do this in a controlled fashion in order to maintain quorum. If quorum is lost, you will need to follow the Consul [outage recovery](#outage-recovery) process to recover the cluster.
-
-To be safe, we recommend you only restart one server agent at a time to ensure the cluster remains intact.
-
-For larger clusters, it is possible to restart multiple agents at a time. See the [Consul consensus document](https://www.consul.io/docs/internals/consensus.html#deployment-table) for how many failures it can tolerate. This will be the number of simultaneous restarts it can sustain.
-
-## Upgrades for bundled Consul
-
-Nodes running GitLab-bundled Consul should be:
-
-- Members of a healthy cluster prior to upgrading the Omnibus GitLab package.
-- Upgraded one node at a time.
-
-NOTE: **Note:**
-Running `curl http://127.0.0.1:8500/v1/health/state/critical` from any Consul node will identify existing health issues in the cluster. The command will return an empty array if the cluster is healthy.
-
-Consul clusters communicate using the raft protocol. If the current leader goes offline, there needs to be a leader election. A leader node must exist to facilitate synchronization across the cluster. If too many nodes go offline at the same time, the cluster will lose quorum and not elect a leader due to [broken consensus](https://www.consul.io/docs/internals/consensus.html).
-
-Consult the [troubleshooting section](#troubleshooting) if the cluster is not able to recover after the upgrade. The [outage recovery](#outage-recovery) may be of particular interest.
-
-NOTE: **Note:**
-GitLab only uses Consul to store transient data that is easily regenerated. If the bundled Consul was not used by any process other than GitLab itself, then [rebuilding the cluster from scratch](#recreate-from-scratch) is fine.
-
-## Troubleshooting
-
-### Consul server agents unable to communicate
-
-By default, the server agents will attempt to [bind](https://www.consul.io/docs/agent/options.html#_bind) to '0.0.0.0', but they will advertise the first private IP address on the node for other agents to communicate with them. If the other nodes cannot communicate with a node on this address, then the cluster will have a failed status.
-
-You will see messages like the following in `gitlab-ctl tail consul` output if you are running into this issue:
-
-```plaintext
-2017-09-25_19:53:39.90821 2017/09/25 19:53:39 [WARN] raft: no known peers, aborting election
-2017-09-25_19:53:41.74356 2017/09/25 19:53:41 [ERR] agent: failed to sync remote state: No cluster leader
-```
-
-To fix this:
-
-1. Pick an address on each node that all of the other nodes can reach this node through.
-1. Update your `/etc/gitlab/gitlab.rb`
-
- ```ruby
- consul['configuration'] = {
- ...
- bind_addr: 'IP ADDRESS'
- }
- ```
-
-1. Run `gitlab-ctl reconfigure`
-
-If you still see the errors, you may have to [erase the Consul database and reinitialize](#recreate-from-scratch) on the affected node.
-
-### Consul agents do not start - Multiple private IPs
-
-In the case that a node has multiple private IPs the agent be confused as to which of the private addresses to advertise, and then immediately exit on start.
-
-You will see messages like the following in `gitlab-ctl tail consul` output if you are running into this issue:
-
-```plaintext
-2017-11-09_17:41:45.52876 ==> Starting Consul agent...
-2017-11-09_17:41:45.53057 ==> Error creating agent: Failed to get advertise address: Multiple private IPs found. Please configure one.
-```
-
-To fix this:
-
-1. Pick an address on the node that all of the other nodes can reach this node through.
-1. Update your `/etc/gitlab/gitlab.rb`
-
- ```ruby
- consul['configuration'] = {
- ...
- bind_addr: 'IP ADDRESS'
- }
- ```
-
-1. Run `gitlab-ctl reconfigure`
-
-### Outage recovery
-
-If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervention.
-
-#### Recreate from scratch
-
-By default, GitLab does not store anything in the Consul cluster that cannot be recreated. To erase the Consul database and reinitialize
-
-```shell
-gitlab-ctl stop consul
-rm -rf /var/opt/gitlab/consul/data
-gitlab-ctl start consul
-```
-
-After this, the cluster should start back up, and the server agents rejoin. Shortly after that, the client agents should rejoin as well.
-
-#### Recover a failed cluster
-
-If you have taken advantage of Consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://learn.hashicorp.com/consul/day-2-operations/outage) to recover a failed cluster.
+This document was moved to [another location](../consul.md).
diff --git a/doc/administration/high_availability/gitaly.md b/doc/administration/high_availability/gitaly.md
index 230f5baf33a..a1e8fe3b488 100644
--- a/doc/administration/high_availability/gitaly.md
+++ b/doc/administration/high_availability/gitaly.md
@@ -1,60 +1,5 @@
---
-stage: Create
-group: Gitaly
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
-type: reference
+redirect_to: ../gitaly/index.md
---
-# Configuring Gitaly for Scaled and High Availability
-
-A [Gitaly Cluster](../gitaly/praefect.md) can be used to increase the fault
-tolerance of Gitaly in high availability configurations.
-
-This document is relevant for [scalable and highly available setups](../reference_architectures/index.md).
-
-## Running Gitaly on its own server
-
-See [Run Gitaly on its own server](../gitaly/index.md#run-gitaly-on-its-own-server)
-in Gitaly documentation.
-
-Continue configuration of other components by going back to the
-[reference architecture](../reference_architectures/index.md#configure-gitlab-to-scale) page.
-
-## Enable Monitoring
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
-
-1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
-
-1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
-
- ```ruby
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Set the network addresses that the exporters will listen on
- node_exporter['listen_address'] = '0.0.0.0:9100'
- gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
- ```
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+This document was moved to [another location](../gitaly/index.md).
diff --git a/doc/administration/high_availability/gitlab.md b/doc/administration/high_availability/gitlab.md
index dc8c997bab5..748373c8941 100644
--- a/doc/administration/high_availability/gitlab.md
+++ b/doc/administration/high_availability/gitlab.md
@@ -1,215 +1,5 @@
---
-type: reference
+redirect_to: ../reference_architectures/index.md
---
-# Configuring GitLab application (Rails)
-
-This section describes how to configure the GitLab application (Rails) component.
-
-NOTE: **Note:**
-There is some additional configuration near the bottom for
-additional GitLab application servers. It's important to read and understand
-these additional steps before proceeding with GitLab installation.
-
-NOTE: **Note:**
-[Cloud Object Storage service](object_storage.md) with [Gitaly](gitaly.md)
-is recommended over [NFS](nfs.md) wherever possible for improved performance.
-
-1. If necessary, install the NFS client utility packages using the following
- commands:
-
- ```shell
- # Ubuntu/Debian
- apt-get install nfs-common
-
- # CentOS/Red Hat
- yum install nfs-utils nfs-utils-lib
- ```
-
-1. Specify the necessary NFS exports in `/etc/fstab`.
- The exact contents of `/etc/fstab` will depend on how you chose
- to configure your NFS server. See [NFS documentation](nfs.md#nfs-client-mount-options)
- for examples and the various options.
-
-1. Create the shared directories. These may be different depending on your NFS
- mount locations.
-
- ```shell
- mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
- ```
-
-1. Download/install Omnibus GitLab using **steps 1 and 2** from
- [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
- steps on the download page.
-1. Create/edit `/etc/gitlab/gitlab.rb` and use the following configuration.
- Be sure to change the `external_url` to match your eventual GitLab front-end
- URL. Depending your the NFS configuration, you may need to change some GitLab
- data locations. See [NFS documentation](nfs.md) for `/etc/gitlab/gitlab.rb`
- configuration values for various scenarios. The example below assumes you've
- added NFS mounts in the default data locations. Additionally the UID and GIDs
- given are just examples and you should configure with your preferred values.
-
- ```ruby
- external_url 'https://gitlab.example.com'
-
- # Prevent GitLab from starting if NFS data mounts are not available
- high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
-
- # Disable components that will not be on the GitLab application server
- roles ['application_role']
- nginx['enable'] = true
-
- # PostgreSQL connection details
- gitlab_rails['db_adapter'] = 'postgresql'
- gitlab_rails['db_encoding'] = 'unicode'
- gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
- gitlab_rails['db_password'] = 'DB password'
-
- # Redis connection details
- gitlab_rails['redis_port'] = '6379'
- gitlab_rails['redis_host'] = '10.1.0.6' # IP/hostname of Redis server
- gitlab_rails['redis_password'] = 'Redis Password'
-
- # Ensure UIDs and GIDs match between servers for permissions via NFS
- user['uid'] = 9000
- user['gid'] = 9000
- web_server['uid'] = 9001
- web_server['gid'] = 9001
- registry['uid'] = 9002
- registry['gid'] = 9002
- ```
-
-1. [Enable monitoring](#enable-monitoring)
-
- NOTE: **Note:**
- To maintain uniformity of links across HA clusters, the `external_url`
- on the first application server as well as the additional application
- servers should point to the external URL that users will use to access GitLab.
- In a typical HA setup, this will be the URL of the load balancer which will
- route traffic to all GitLab application servers in the HA cluster.
-
- NOTE: **Note:**
- When you specify `https` in the `external_url`, as in the example
- above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
- certificates are not present, NGINX will fail to start. See
- [NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
- for more information.
-
- NOTE: **Note:**
- It is best to set the `uid` and `gid`s prior to the initial reconfigure
- of GitLab. Omnibus will not recursively `chown` directories if set after the initial reconfigure.
-
-## First GitLab application server
-
-On the first application server, run:
-
-```shell
-sudo gitlab-ctl reconfigure
-```
-
-This should compile the configuration and initialize the database. Do
-not run this on additional application servers until the next step.
-
-## Extra configuration for additional GitLab application servers
-
-Additional GitLab servers (servers configured **after** the first GitLab server)
-need some extra configuration.
-
-1. Configure shared secrets. These values can be obtained from the primary
- GitLab server in `/etc/gitlab/gitlab-secrets.json`. Copy this file to the
- secondary servers **prior to** running the first `reconfigure` in the steps
- above.
-
- ```ruby
- gitlab_shell['secret_token'] = 'fbfb19c355066a9afb030992231c4a363357f77345edd0f2e772359e5be59b02538e1fa6cae8f93f7d23355341cea2b93600dab6d6c3edcdced558fc6d739860'
- gitlab_rails['otp_key_base'] = 'b719fe119132c7810908bba18315259ed12888d4f5ee5430c42a776d840a396799b0a5ef0a801348c8a357f07aa72bbd58e25a84b8f247a25c72f539c7a6c5fa'
- gitlab_rails['secret_key_base'] = '6e657410d57c71b4fc3ed0d694e7842b1895a8b401d812c17fe61caf95b48a6d703cb53c112bc01ebd197a85da81b18e29682040e99b4f26594772a4a2c98c6d'
- gitlab_rails['db_key_base'] = 'bf2e47b68d6cafaef1d767e628b619365becf27571e10f196f98dc85e7771042b9203199d39aff91fcb6837c8ed83f2a912b278da50999bb11a2fbc0fba52964'
- ```
-
-1. Run `touch /etc/gitlab/skip-auto-reconfigure` to prevent database migrations
- from running on upgrade. Only the primary GitLab application server should
- handle migrations.
-
-1. **Recommended** Configure host keys. Copy the contents (private and public keys) of `/etc/ssh/` on
- the primary application server to `/etc/ssh` on all secondary servers. This
- prevents false man-in-the-middle-attack alerts when accessing servers in your
- High Availability cluster behind a load balancer.
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-
-NOTE: **Note:**
-You will need to restart the GitLab applications nodes after an update has occurred and database
-migrations performed.
-
-## Enable Monitoring
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
-
-If you enable Monitoring, it must be enabled on **all** GitLab servers.
-
-1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
-
-1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
-
- ```ruby
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Set the network addresses that the exporters will listen on
- node_exporter['listen_address'] = '0.0.0.0:9100'
- gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
- sidekiq['listen_address'] = "0.0.0.0"
- puma['listen'] = '0.0.0.0'
-
- # Add the monitoring node's IP address to the monitoring whitelist and allow it to
- # scrape the NGINX metrics. Replace placeholder `monitoring.gitlab.example.com` with
- # the address and/or subnets gathered from the monitoring node(s).
- gitlab_rails['monitoring_whitelist'] = ['monitoring.gitlab.example.com', '127.0.0.0/8']
- nginx['status']['options']['allow'] = ['monitoring.gitlab.example.com', '127.0.0.0/8']
- ```
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-
- CAUTION: **Warning:**
- If running Unicorn, after changing `unicorn['listen']` in `gitlab.rb`, and
- running `sudo gitlab-ctl reconfigure`, it can take an extended period of time
- for Unicorn to complete reloading after receiving a `HUP`. For more
- information, see the
- [issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4401).
-
-## Troubleshooting
-
-- `mount: wrong fs type, bad option, bad superblock on`
-
-You have not installed the necessary NFS client utilities. See step 1 above.
-
-- `mount: mount point /var/opt/gitlab/... does not exist`
-
-This particular directory does not exist on the NFS server. Ensure
-the share is exported and exists on the NFS server and try to remount.
-
----
-
-## Upgrading GitLab HA
-
-GitLab HA installations can be upgraded with no downtime, but the
-upgrade process must be carefully coordinated to avoid failures. See the
-[Omnibus GitLab multi-node upgrade
-document](https://docs.gitlab.com/omnibus/update/#multi-node--ha-deployment)
-for more details.
-
-Read more on high-availability configuration:
-
-1. [Configure the database](../postgresql/replication_and_failover.md)
-1. [Configure Redis](redis.md)
-1. [Configure NFS](nfs.md)
-1. [Configure the load balancers](load_balancer.md)
+This document was moved to [another location](../reference_architectures/index.md).
diff --git a/doc/administration/high_availability/load_balancer.md b/doc/administration/high_availability/load_balancer.md
index 75703327140..5cedc4e11ae 100644
--- a/doc/administration/high_availability/load_balancer.md
+++ b/doc/administration/high_availability/load_balancer.md
@@ -1,135 +1,5 @@
---
-type: reference
+redirect_to: ../load_balancer.md
---
-# Load Balancer for multi-node GitLab
-
-In an multi-node GitLab configuration, you will need a load balancer to route
-traffic to the application servers. The specifics on which load balancer to use
-or the exact configuration is beyond the scope of GitLab documentation. We hope
-that if you're managing HA systems like GitLab you have a load balancer of
-choice already. Some examples including HAProxy (open-source), F5 Big-IP LTM,
-and Citrix Net Scaler. This documentation will outline what ports and protocols
-you need to use with GitLab.
-
-## SSL
-
-How will you handle SSL in your multi-node environment? There are several different
-options:
-
-- Each application node terminates SSL
-- The load balancer(s) terminate SSL and communication is not secure between
- the load balancer(s) and the application nodes
-- The load balancer(s) terminate SSL and communication is *secure* between the
- load balancer(s) and the application nodes
-
-### Application nodes terminate SSL
-
-Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
-than 'HTTP(S)' protocol. This will pass the connection to the application nodes
-NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
-
-See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring NGINX.
-
-### Load Balancer(s) terminate SSL without backend SSL
-
-Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
-The load balancer(s) will then be responsible for managing SSL certificates and
-terminating SSL.
-
-Since communication between the load balancer(s) and GitLab will not be secure,
-there is some additional configuration needed. See
-[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
-for details.
-
-### Load Balancer(s) terminate SSL with backend SSL
-
-Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
-The load balancer(s) will be responsible for managing SSL certificates that
-end users will see.
-
-Traffic will also be secure between the load balancer(s) and NGINX in this
-scenario. There is no need to add configuration for proxied SSL since the
-connection will be secure all the way. However, configuration will need to be
-added to GitLab to configure SSL certificates. See
-[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring NGINX.
-
-## Ports
-
-### Basic ports
-
-| LB Port | Backend Port | Protocol |
-| ------- | ------------ | ------------------------ |
-| 80 | 80 | HTTP (*1*) |
-| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
-| 22 | 22 | TCP |
-
-- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
- your load balancer to correctly handle WebSocket connections. When using
- HTTP or HTTPS proxying, this means your load balancer must be configured
- to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
- [web terminal](../integration/terminal.md) integration guide for
- more details.
-- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
- certificate to the load balancers. If you wish to terminate SSL at the
- GitLab application server instead, use TCP protocol.
-
-### GitLab Pages Ports
-
-If you're using GitLab Pages with custom domain support you will need some
-additional port configurations.
-GitLab Pages requires a separate virtual IP address. Configure DNS to point the
-`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
-[GitLab Pages documentation](../pages/index.md) for more information.
-
-| LB Port | Backend Port | Protocol |
-| ------- | ------------- | --------- |
-| 80 | Varies (*1*) | HTTP |
-| 443 | Varies (*1*) | TCP (*2*) |
-
-- (*1*): The backend port for GitLab Pages depends on the
- `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
- setting. See [GitLab Pages documentation](../pages/index.md) for more details.
-- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
- configure custom domains with custom SSL, which would not be possible
- if SSL was terminated at the load balancer.
-
-### Alternate SSH Port
-
-Some organizations have policies against opening SSH port 22. In this case,
-it may be helpful to configure an alternate SSH hostname that allows users
-to use SSH on port 443. An alternate SSH hostname will require a new virtual IP address
-compared to the other GitLab HTTP configuration above.
-
-Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
-
-| LB Port | Backend Port | Protocol |
-| ------- | ------------ | -------- |
-| 443 | 22 | TCP |
-
-## Readiness check
-
-It is strongly recommend that multi-node deployments configure load balancers to utilize the [readiness check](../../user/admin_area/monitoring/health_check.md#readiness) to ensure a node is ready to accept traffic, before routing traffic to it. This is especially important when utilizing Puma, as there is a brief period during a restart where Puma will not accept requests.
-
----
-
-Read more on high-availability configuration:
-
-1. [Configure the database](../postgresql/replication_and_failover.md)
-1. [Configure Redis](redis.md)
-1. [Configure NFS](nfs.md)
-1. [Configure the GitLab application servers](gitlab.md)
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+This document was moved to [another location](../load_balancer.md).
diff --git a/doc/administration/high_availability/monitoring_node.md b/doc/administration/high_availability/monitoring_node.md
index 6b6f0ae9ea3..76bcf6d0d40 100644
--- a/doc/administration/high_availability/monitoring_node.md
+++ b/doc/administration/high_availability/monitoring_node.md
@@ -1,104 +1,5 @@
---
-type: reference
+redirect_to: ../monitoring/prometheus/index.md
---
-# Configuring a Monitoring node for Scaling and High Availability
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
-
-You can configure a Prometheus node to monitor GitLab.
-
-## Standalone Monitoring node using Omnibus GitLab
-
-The Omnibus GitLab package can be used to configure a standalone Monitoring node running [Prometheus](../monitoring/prometheus/index.md) and [Grafana](../monitoring/performance/grafana_configuration.md).
-The monitoring node is not highly available. See [Scaling and High Availability](../reference_architectures/index.md)
-for an overview of GitLab scaling and high availability options.
-
-The steps below are the minimum necessary to configure a Monitoring node running Prometheus and Grafana with
-Omnibus:
-
-1. SSH into the Monitoring node.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
- package you want using **steps 1 and 2** from the GitLab downloads page.
- - Do not complete any other steps on the download page.
-
-1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
- ```ruby
- external_url 'http://gitlab.example.com'
-
- # Enable Prometheus
- prometheus['enable'] = true
- prometheus['listen_address'] = '0.0.0.0:9090'
- prometheus['monitor_kubernetes'] = false
-
- # Enable Login form
- grafana['disable_login_form'] = false
-
- # Enable Grafana
- grafana['enable'] = true
- grafana['admin_password'] = 'toomanysecrets'
-
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Disable all other services
- gitlab_rails['auto_migrate'] = false
- alertmanager['enable'] = false
- gitaly['enable'] = false
- gitlab_exporter['enable'] = false
- gitlab_workhorse['enable'] = false
- nginx['enable'] = true
- postgres_exporter['enable'] = false
- postgresql['enable'] = false
- redis['enable'] = false
- redis_exporter['enable'] = false
- sidekiq['enable'] = false
- puma['enable'] = false
- node_exporter['enable'] = false
- gitlab_exporter['enable'] = false
- ```
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-
-The next step is to tell all the other nodes where the monitoring node is:
-
-1. Edit `/etc/gitlab/gitlab.rb`, and add, or find and uncomment the following line:
-
- ```ruby
- gitlab_rails['prometheus_address'] = '10.0.0.1:9090'
- ```
-
- Where `10.0.0.1:9090` is the IP address and port of the Prometheus node.
-
-1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to
- take effect.
-
-## Migrating to Service Discovery
-
-Once monitoring using Service Discovery is enabled with `consul['monitoring_service_discovery'] = true`,
-ensure that `prometheus['scrape_configs']` is not set in `/etc/gitlab/gitlab.rb`. Setting both
-`consul['monitoring_service_discovery'] = true` and `prometheus['scrape_configs']` in `/etc/gitlab/gitlab.rb`
-will result in errors.
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+This document was moved to [another location](../monitoring/prometheus/index.md).
diff --git a/doc/administration/high_availability/nfs.md b/doc/administration/high_availability/nfs.md
index 6e8dc2c6c57..e3342fa0813 100644
--- a/doc/administration/high_availability/nfs.md
+++ b/doc/administration/high_availability/nfs.md
@@ -1,322 +1,5 @@
---
-type: reference
+redirect_to: ../nfs.md
---
-# NFS
-
-You can view information and options set for each of the mounted NFS file
-systems by running `nfsstat -m` and `cat /etc/fstab`.
-
-CAUTION: **Caution:**
-From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0,
-support for NFS for Git repositories is scheduled to be removed. Upgrade to
-[Gitaly Cluster](../gitaly/praefect.md) as soon as possible.
-
-NOTE: **Note:**
-Filesystem performance has a big impact on overall GitLab
-performance, especially for actions that read or write to Git repositories. See
-[Filesystem Performance Benchmarking](../operations/filesystem_benchmarking.md)
-for steps to test filesystem performance.
-
-## Known kernel version incompatibilities
-
-RedHat Enterprise Linux (RHEL) and CentOS v7.7 and v7.8 ship with kernel
-version `3.10.0-1127`, which [contains a
-bug](https://bugzilla.redhat.com/show_bug.cgi?id=1783554) that causes
-[uploads to fail to copy over NFS](https://gitlab.com/gitlab-org/gitlab/-/issues/218999). The
-following GitLab versions include a fix to work properly with that
-kernel version:
-
-1. [12.10.12](https://about.gitlab.com/releases/2020/06/25/gitlab-12-10-12-released/)
-1. [13.0.7](https://about.gitlab.com/releases/2020/06/25/gitlab-13-0-7-released/)
-1. [13.1.1](https://about.gitlab.com/releases/2020/06/24/gitlab-13-1-1-released/)
-1. 13.2 and up
-
-If you are using that kernel version, be sure to upgrade GitLab to avoid
-errors.
-
-## NFS Server features
-
-### Required features
-
-**File locking**: GitLab **requires** advisory file locking, which is only
-supported natively in NFS version 4. NFSv3 also supports locking as long as
-Linux Kernel 2.6.5+ is used. We recommend using version 4 and do not
-specifically test NFSv3.
-
-### Recommended options
-
-When you define your NFS exports, we recommend you also add the following
-options:
-
-- `no_root_squash` - NFS normally changes the `root` user to `nobody`. This is
- a good security measure when NFS shares will be accessed by many different
- users. However, in this case only GitLab will use the NFS share so it
- is safe. GitLab recommends the `no_root_squash` setting because we need to
- manage file permissions automatically. Without the setting you may receive
- errors when the Omnibus package tries to alter permissions. Note that GitLab
- and other bundled components do **not** run as `root` but as non-privileged
- users. The recommendation for `no_root_squash` is to allow the Omnibus package
- to set ownership and permissions on files, as needed. In some cases where the
- `no_root_squash` option is not available, the `root` flag can achieve the same
- result.
-- `sync` - Force synchronous behavior. Default is asynchronous and under certain
- circumstances it could lead to data loss if a failure occurs before data has
- synced.
-
-Due to the complexities of running Omnibus with LDAP and the complexities of
-maintaining ID mapping without LDAP, in most cases you should enable numeric UIDs
-and GIDs (which is off by default in some cases) for simplified permission
-management between systems:
-
-- [NetApp instructions](https://library.netapp.com/ecmdocs/ECMP1401220/html/GUID-24367A9F-E17B-4725-ADC1-02D86F56F78E.html)
-- For non-NetApp devices, disable NFSv4 `idmapping` by performing opposite of [enable NFSv4 idmapper](https://wiki.archlinux.org/index.php/NFS#Enabling_NFSv4_idmapping)
-
-### Disable NFS server delegation
-
-We recommend that all NFS users disable the NFS server delegation feature. This
-is to avoid a [Linux kernel bug](https://bugzilla.redhat.com/show_bug.cgi?id=1552203)
-which causes NFS clients to slow precipitously due to
-[excessive network traffic from numerous `TEST_STATEID` NFS messages](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52017).
-
-To disable NFS server delegation, do the following:
-
-1. On the NFS server, run:
-
- ```shell
- echo 0 > /proc/sys/fs/leases-enable
- sysctl -w fs.leases-enable=0
- ```
-
-1. Restart the NFS server process. For example, on CentOS run `service nfs restart`.
-
-#### Important notes
-
-The kernel bug may be fixed in
-[more recent kernels with this commit](https://github.com/torvalds/linux/commit/95da1b3a5aded124dd1bda1e3cdb876184813140).
-
-Red Hat Enterprise 7 [shipped a kernel update](https://access.redhat.com/errata/RHSA-2019:2029)
-on August 6, 2019 that may also have resolved this problem.
-
-You may not need to disable NFS server delegation if you know you are using a version of
-the Linux kernel that has been fixed. That said, GitLab still encourages instance
-administrators to keep NFS server delegation disabled.
-
-### Improving NFS performance with GitLab
-
-#### Improving NFS performance with Unicorn
-
-NOTE: **Note:**
-From GitLab 12.1, it will automatically be detected if Rugged can and should be used per storage.
-
-If you previously enabled Rugged using the feature flag, you will need to unset the feature flag by using:
-
-```shell
-sudo gitlab-rake gitlab:features:unset_rugged
-```
-
-If the Rugged feature flag is explicitly set to either true or false, GitLab will use the value explicitly set.
-
-#### Improving NFS performance with Puma
-
-NOTE: **Note:**
-From GitLab 12.7, Rugged auto-detection is disabled if Puma thread count is greater than 1.
-
-If you want to use Rugged with Puma, it is recommended to [set Puma thread count to 1](https://docs.gitlab.com/omnibus/settings/puma.html#puma-settings).
-
-If you want to use Rugged with Puma thread count more than 1, Rugged can be enabled using the [feature flag](../../development/gitaly.md#legacy-rugged-code)
-
-If the Rugged feature flag is explicitly set to either true or false, GitLab will use the value explicitly set.
-
-### Known issues
-
-#### Avoid using AWS's Elastic File System (EFS)
-
-GitLab strongly recommends against using AWS Elastic File System (EFS).
-Our support team will not be able to assist on performance issues related to
-file system access.
-
-Customers and users have reported that AWS EFS does not perform well for GitLab's
-use-case. Workloads where many small files are written in a serialized manner, like `git`,
-are not well-suited for EFS. EBS with an NFS server on top will perform much better.
-
-If you do choose to use EFS, avoid storing GitLab log files (e.g. those in `/var/log/gitlab`)
-there because this will also affect performance. We recommend that the log files be
-stored on a local volume.
-
-For more details on another person's experience with EFS, see this [Commit Brooklyn 2019 video](https://youtu.be/K6OS8WodRBQ?t=313).
-
-#### Avoid using CephFS and GlusterFS
-
-GitLab strongly recommends against using CephFS and GlusterFS.
-These distributed file systems are not well-suited for GitLab's input/output access patterns because Git uses many small files and access times and file locking times to propagate will make Git activity very slow.
-
-#### Avoid using PostgreSQL with NFS
-
-GitLab strongly recommends against running your PostgreSQL database
-across NFS. The GitLab support team will not be able to assist on performance issues related to
-this configuration.
-
-Additionally, this configuration is specifically warned against in the
-[PostgreSQL Documentation](https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-NFS):
-
->PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
->locally-connected drives. If the client or server NFS implementation does not provide standard file
->system semantics, this can cause reliability problems. Specifically, delayed (asynchronous) writes
->to the NFS server can cause data corruption problems.
-
-For supported database architecture, please see our documentation on
-[Configuring a Database for GitLab HA](../postgresql/replication_and_failover.md).
-
-## NFS Client mount options
-
-Here is an example snippet to add to `/etc/fstab`:
-
- ```plaintext
- 10.1.0.1:/var/opt/gitlab/.ssh /var/opt/gitlab/.ssh nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/gitlab-ci/builds nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- ```
-
-Note there are several options that you should consider using:
-
-| Setting | Description |
-| ------- | ----------- |
-| `vers=4.1` |NFS v4.1 should be used instead of v4.0 because there is a Linux [NFS client bug in v4.0](https://gitlab.com/gitlab-org/gitaly/-/issues/1339) that can cause significant problems due to stale data.
-| `nofail` | Don't halt boot process waiting for this mount to become available
-| `lookupcache=positive` | Tells the NFS client to honor `positive` cache results but invalidates any `negative` cache results. Negative cache results cause problems with Git. Specifically, a `git push` can fail to register uniformly across all NFS clients. The negative cache causes the clients to 'remember' that the files did not exist previously.
-| `hard` | Instead of `soft`. [Further details](#soft-mount-option).
-
-### soft mount option
-
-We recommend that you use `hard` in your mount options, unless you have a specific
-reason to use `soft`.
-
-On GitLab.com, we use `soft` because there were times when we had NFS servers
-reboot and `soft` improved availability, but everyone's infrastructure is different.
-If your NFS is provided by on-premise storage arrays with redundant controllers,
-for example, you shouldn't need to worry about NFS server availability.
-
-The NFS man page states:
-
-> "soft" timeout can cause silent data corruption in certain cases
-
-Read the [Linux man page](https://linux.die.net/man/5/nfs) to understand the difference,
-and if you do use `soft`, ensure that you've taken steps to mitigate the risks.
-
-If you experience behavior that might have been caused by
-writes to disk on the NFS server not occurring, such as commits going missing,
-use the `hard` option, because (from the man page):
-
-> use the soft option only when client responsiveness is more important than data integrity
-
-Other vendors make similar recommendations, including
-[SAP](http://wiki.scn.sap.com/wiki/x/PARnFQ) and NetApp's
-[knowledge base](https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_are_the_differences_between_hard_mount_and_soft_mount),
-they highlight that if the NFS client driver caches data, `soft` means there is no certainty if
-writes by GitLab are actually on disk.
-
-Mount points set with the option `hard` may not perform as well, and if the
-NFS server goes down, `hard` will cause processes to hang when interacting with
-the mount point. Use `SIGKILL` (`kill -9`) to deal with hung processes.
-The `intr` option
-[stopped working in the 2.6 kernel](https://access.redhat.com/solutions/157873).
-
-## A single NFS mount
-
-It's recommended to nest all GitLab data directories within a mount, that allows automatic
-restore of backups without manually moving existing data.
-
-```plaintext
-mountpoint
-└── gitlab-data
- ├── builds
- ├── git-data
- ├── shared
- └── uploads
-```
-
-To do so, we'll need to configure Omnibus with the paths to each directory nested
-in the mount point as follows:
-
-Mount `/gitlab-nfs` then use the following Omnibus
-configuration to move each data location to a subdirectory:
-
-```ruby
-git_data_dirs({"default" => { "path" => "/gitlab-nfs/gitlab-data/git-data"} })
-gitlab_rails['uploads_directory'] = '/gitlab-nfs/gitlab-data/uploads'
-gitlab_rails['shared_path'] = '/gitlab-nfs/gitlab-data/shared'
-gitlab_ci['builds_directory'] = '/gitlab-nfs/gitlab-data/builds'
-```
-
-Run `sudo gitlab-ctl reconfigure` to start using the central location. Please
-be aware that if you had existing data you will need to manually copy/rsync it
-to these new locations and then restart GitLab.
-
-## Bind mounts
-
-Alternatively to changing the configuration in Omnibus, bind mounts can be used
-to store the data on an NFS mount.
-
-Bind mounts provide a way to specify just one NFS mount and then
-bind the default GitLab data locations to the NFS mount. Start by defining your
-single NFS mount point as you normally would in `/etc/fstab`. Let's assume your
-NFS mount point is `/gitlab-nfs`. Then, add the following bind mounts in
-`/etc/fstab`:
-
-```shell
-/gitlab-nfs/gitlab-data/git-data /var/opt/gitlab/git-data none bind 0 0
-/gitlab-nfs/gitlab-data/.ssh /var/opt/gitlab/.ssh none bind 0 0
-/gitlab-nfs/gitlab-data/uploads /var/opt/gitlab/gitlab-rails/uploads none bind 0 0
-/gitlab-nfs/gitlab-data/shared /var/opt/gitlab/gitlab-rails/shared none bind 0 0
-/gitlab-nfs/gitlab-data/builds /var/opt/gitlab/gitlab-ci/builds none bind 0 0
-```
-
-Using bind mounts will require manually making sure the data directories
-are empty before attempting a restore. Read more about the
-[restore prerequisites](../../raketasks/backup_restore.md).
-
-## Multiple NFS mounts
-
-When using default Omnibus configuration you will need to share 4 data locations
-between all GitLab cluster nodes. No other locations should be shared. The
-following are the 4 locations need to be shared:
-
-| Location | Description | Default configuration |
-| -------- | ----------- | --------------------- |
-| `/var/opt/gitlab/git-data` | Git repository data. This will account for a large portion of your data | `git_data_dirs({"default" => { "path" => "/var/opt/gitlab/git-data"} })`
-| `/var/opt/gitlab/gitlab-rails/uploads` | User uploaded attachments | `gitlab_rails['uploads_directory'] = '/var/opt/gitlab/gitlab-rails/uploads'`
-| `/var/opt/gitlab/gitlab-rails/shared` | Build artifacts, GitLab Pages, LFS objects, temp files, etc. If you're using LFS this may also account for a large portion of your data | `gitlab_rails['shared_path'] = '/var/opt/gitlab/gitlab-rails/shared'`
-| `/var/opt/gitlab/gitlab-ci/builds` | GitLab CI/CD build traces | `gitlab_ci['builds_directory'] = '/var/opt/gitlab/gitlab-ci/builds'`
-
-Other GitLab directories should not be shared between nodes. They contain
-node-specific files and GitLab code that does not need to be shared. To ship
-logs to a central location consider using remote syslog. Omnibus GitLab packages
-provide configuration for [UDP log shipping](https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-shipping-gitlab-enterprise-edition-only).
-
-Having multiple NFS mounts will require manually making sure the data directories
-are empty before attempting a restore. Read more about the
-[restore prerequisites](../../raketasks/backup_restore.md).
-
----
-
-Read more on high-availability configuration:
-
-1. [Configure the database](../postgresql/replication_and_failover.md)
-1. [Configure Redis](redis.md)
-1. [Configure the GitLab application servers](gitlab.md)
-1. [Configure the load balancers](load_balancer.md)
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+This document was moved to [another location](../nfs.md).
diff --git a/doc/administration/high_availability/nfs_host_client_setup.md b/doc/administration/high_availability/nfs_host_client_setup.md
index 213680e2f64..e3342fa0813 100644
--- a/doc/administration/high_availability/nfs_host_client_setup.md
+++ b/doc/administration/high_availability/nfs_host_client_setup.md
@@ -1,157 +1,5 @@
---
-type: reference
+redirect_to: ../nfs.md
---
-# Configuring NFS for GitLab HA
-
-Setting up NFS for a GitLab HA setup allows all applications nodes in a cluster
-to share the same files and maintain data consistency. Application nodes in an HA
-setup act as clients while the NFS server plays host.
-
-> Note: The instructions provided in this documentation allow for setting a quick
-proof of concept but will leave NFS as potential single point of failure and
-therefore not recommended for use in production. Explore options such as [Pacemaker
-and Corosync](https://clusterlabs.org) for highly available NFS in production.
-
-Below are instructions for setting up an application node(client) in an HA cluster
-to read from and write to a central NFS server(host).
-
-NOTE: **Note:**
-Using EFS may negatively impact performance. Please review the [relevant documentation](nfs.md#avoid-using-awss-elastic-file-system-efs) for additional details.
-
-## NFS Server Setup
-
-> Follow the instructions below to set up and configure your NFS server.
-
-### Step 1 - Install NFS Server on Host
-
-Installing the `nfs-kernel-server` package allows you to share directories with the clients running the GitLab application.
-
-```shell
-apt-get update
-apt-get install nfs-kernel-server
-```
-
-### Step 2 - Export Host's Home Directory to Client
-
-In this setup we will share the home directory on the host with the client. Edit the exports file as below to share the host's home directory with the client. If you have multiple clients running GitLab you must enter the client IP addresses in line in the `/etc/exports` file.
-
-```plaintext
-#/etc/exports for one client
-/home <client_ip_address>(rw,sync,no_root_squash,no_subtree_check)
-
-#/etc/exports for three clients
-/home <client_ip_address>(rw,sync,no_root_squash,no_subtree_check) <client_2_ip_address>(rw,sync,no_root_squash,no_subtree_check) <client_3_ip_address>(rw,sync,no_root_squash,no_subtree_check)
-```
-
-Restart the NFS server after making changes to the `exports` file for the changes
-to take effect.
-
-```shell
-systemctl restart nfs-kernel-server
-```
-
-NOTE: **Note:**
-You may need to update your server's firewall. See the [firewall section](#nfs-in-a-firewalled-environment) at the end of this guide.
-
-## Client / GitLab application node Setup
-
-> Follow the instructions below to connect any GitLab Rails application node running
-inside your HA environment to the NFS server configured above.
-
-### Step 1 - Install NFS Common on Client
-
-The `nfs-common` provides NFS functionality without installing server components which
-we don't need running on the application nodes.
-
-```shell
-apt-get update
-apt-get install nfs-common
-```
-
-### Step 2 - Create Mount Points on Client
-
-Create a directory on the client that we can mount the shared directory from the host.
-Please note that if your mount point directory contains any files they will be hidden
-once the remote shares are mounted. An empty/new directory on the client is recommended
-for this purpose.
-
-```shell
-mkdir -p /nfs/home
-```
-
-Confirm that the mount point works by mounting it on the client and checking that
-it is mounted with the command below:
-
-```shell
-mount <host_ip_address>:/home
-df -h
-```
-
-### Step 3 - Set up Automatic Mounts on Boot
-
-Edit `/etc/fstab` on the client as below to mount the remote shares automatically at boot.
-Note that GitLab requires advisory file locking, which is only supported natively in
-NFS version 4. NFSv3 also supports locking as long as Linux Kernel 2.6.5+ is used.
-We recommend using version 4 and do not specifically test NFSv3.
-See [NFS documentation](nfs.md#nfs-client-mount-options) for guidance on mount options.
-
-```plaintext
-#/etc/fstab
-<host_ip_address>:/home /nfs/home nfs4 defaults,hard,vers=4.1,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
-```
-
-Reboot the client and confirm that the mount point is mounted automatically.
-
-NOTE: **Note:**
-If you followed our guide to [GitLab Pages on a separate server](../pages/index.md#running-gitlab-pages-on-a-separate-server)
-here, please continue there with the pages-specific NFS mounts.
-The step below is for broader use-cases than only sharing pages data.
-
-### Step 4 - Set up GitLab to Use NFS mounts
-
-When using the default Omnibus configuration you will need to share 4 data locations
-between all GitLab cluster nodes. No other locations should be shared. Changing the
-default file locations in `gitlab.rb` on the client allows you to have one main mount
-point and have all the required locations as subdirectories to use the NFS mount for
-`git-data`.
-
-```plaintext
-git_data_dirs({"default" => {"path" => "/nfs/home/var/opt/gitlab-data/git-data"}})
-gitlab_rails['uploads_directory'] = '/nfs/home/var/opt/gitlab-data/uploads'
-gitlab_rails['shared_path'] = '/nfs/home/var/opt/gitlab-data/shared'
-gitlab_ci['builds_directory'] = '/nfs/home/var/opt/gitlab-data/builds'
-```
-
-Save the changes in `gitlab.rb` and run `gitlab-ctl reconfigure`.
-
-## NFS in a Firewalled Environment
-
-If the traffic between your NFS server and NFS client(s) is subject to port filtering
-by a firewall, then you will need to reconfigure that firewall to allow NFS communication.
-
-[This guide from TDLP](http://tldp.org/HOWTO/NFS-HOWTO/security.html#FIREWALLS)
-covers the basics of using NFS in a firewalled environment. Additionally, we encourage you to
-search for and review the specific documentation for your operating system or distribution and your firewall software.
-
-Example for Ubuntu:
-
-Check that NFS traffic from the client is allowed by the firewall on the host by running
-the command: `sudo ufw status`. If it's being blocked, then you can allow traffic from a specific
-client with the command below.
-
-```shell
-sudo ufw allow from <client_ip_address> to any port nfs
-```
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+This document was moved to [another location](../nfs.md).
diff --git a/doc/administration/high_availability/pgbouncer.md b/doc/administration/high_availability/pgbouncer.md
index 15e4da5b1f7..44f4aa37651 100644
--- a/doc/administration/high_availability/pgbouncer.md
+++ b/doc/administration/high_availability/pgbouncer.md
@@ -1,166 +1,5 @@
---
-type: reference
+redirect_to: ../postgresql/pgbouncer.md
---
-# Working with the bundled PgBouncer service **(PREMIUM ONLY)**
-
-As part of its High Availability stack, GitLab Premium includes a bundled version of [PgBouncer](http://www.pgbouncer.org/) that can be managed through `/etc/gitlab/gitlab.rb`. PgBouncer is used to seamlessly migrate database connections between servers in a failover scenario. Additionally, it can be used in a non-HA setup to pool connections, speeding up response time while reducing resource usage.
-
-In an HA setup, it's recommended to run a PgBouncer node separately for each database node, with an internal load balancer (TCP) serving each accordingly.
-
-## Operations
-
-### Running PgBouncer as part of an HA GitLab installation
-
-This content has been moved to a [new location](../postgresql/replication_and_failover.md#configuring-the-pgbouncer-node).
-
-### Running PgBouncer as part of a non-HA GitLab installation
-
-1. Generate PGBOUNCER_USER_PASSWORD_HASH with the command `gitlab-ctl pg-password-md5 pgbouncer`.
-
-1. Generate SQL_USER_PASSWORD_HASH with the command `gitlab-ctl pg-password-md5 gitlab`. We'll also need to enter the plaintext SQL_USER_PASSWORD later.
-
-1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
- postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
- postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the IP address PostgreSQL should listen on for this node
- postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the pgbouncer node
- ```
-
-1. Run `gitlab-ctl reconfigure`
-
- **Note:** If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
-
-1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- pgbouncer['enable'] = true
- pgbouncer['databases'] = {
- gitlabhq_production: {
- host: 'DATABASE_HOST',
- user: 'pgbouncer',
- password: 'PGBOUNCER_USER_PASSWORD_HASH'
- }
- }
- ```
-
-1. Run `gitlab-ctl reconfigure`
-
-1. On the node running Puma, make sure the following is set in `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
- gitlab_rails['db_port'] = '6432'
- gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
- ```
-
-1. Run `gitlab-ctl reconfigure`
-
-1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section.
-
-## Enable Monitoring
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
-
-If you enable Monitoring, it must be enabled on **all** PgBouncer servers.
-
-1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
-
- ```ruby
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Set the network addresses that the exporters will listen on
- node_exporter['listen_address'] = '0.0.0.0:9100'
- pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
- ```
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-
-### Interacting with PgBouncer
-
-#### Administrative console
-
-As part of Omnibus GitLab, we provide a command `gitlab-ctl pgb-console` to automatically connect to the PgBouncer administrative console. Please see the [PgBouncer documentation](https://www.pgbouncer.org/usage.html#admin-console) for detailed instructions on how to interact with the console.
-
-To start a session, run:
-
-```shell
-# gitlab-ctl pgb-console
-Password for user pgbouncer:
-psql (11.7, server 1.7.2/bouncer)
-Type "help" for help.
-
-pgbouncer=#
-```
-
-The password you will be prompted for is the PGBOUNCER_USER_PASSWORD.
-
-To get some basic information about the instance, run:
-
-```shell
-pgbouncer=# show databases; show clients; show servers;
- name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
----------------------+-----------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
- gitlabhq_production | 127.0.0.1 | 5432 | gitlabhq_production | | 100 | 5 | | 0 | 1
- pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
-(2 rows)
-
- type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
-------+-----------+---------------------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44590 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12444c0 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44592 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12447c0 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44594 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x1244940 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44706 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:16:31 | 0x1244ac0 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44708 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:15:15 | 0x1244c40 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44794 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:15:15 | 0x1244dc0 | | 0 |
- C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44798 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:16:31 | 0x1244f40 | | 0 |
- C | pgbouncer | pgbouncer | active | 127.0.0.1 | 44660 | 127.0.0.1 | 6432 | 2018-04-24 22:13:51 | 2018-04-24 22:17:12 | 0x1244640 | | 0 |
-(8 rows)
-
- type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
-------+--------+---------------------+-------+-----------+------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
- S | gitlab | gitlabhq_production | idle | 127.0.0.1 | 5432 | 127.0.0.1 | 35646 | 2018-04-24 22:15:15 | 2018-04-24 22:17:10 | 0x124dca0 | | 19980 |
-(1 row)
-```
-
-## Troubleshooting
-
-If you are experiencing issues connecting through PgBouncer, the first place to check is always the logs:
-
-```shell
-# gitlab-ctl tail pgbouncer
-```
-
-Additionally, you can check the output from `show databases` in the [Administrative console](#administrative-console). In the output, you would expect to see values in the `host` field for the `gitlabhq_production` database. Additionally, `current_connections` should be greater than 1.
-
-### Message: `LOG: invalid CIDR mask in address`
-
-See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-cidr-mask-in-address).
-
-### Message: `LOG: invalid IP mask "md5": Name or service not known`
-
-See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-ip-mask-md5-name-or-service-not-known).
+This document was moved to [another location](../postgresql/pgbouncer.md).
diff --git a/doc/administration/high_availability/sidekiq.md b/doc/administration/high_availability/sidekiq.md
index 98a9af64e5e..ac92ae2eaaa 100644
--- a/doc/administration/high_availability/sidekiq.md
+++ b/doc/administration/high_availability/sidekiq.md
@@ -1,190 +1,5 @@
---
-type: reference
+redirect_to: ../sidekiq.md
---
-# Configuring Sidekiq
-
-This section discusses how to configure an external Sidekiq instance.
-
-Sidekiq requires a connection to the Redis, PostgreSQL, and Gitaly instances.
-To configure the Sidekiq node:
-
-1. SSH into the Sidekiq server.
-
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
-you want using steps 1 and 2 from the GitLab downloads page.
-**Do not complete any other steps on the download page.**
-
-1. Open `/etc/gitlab/gitlab.rb` with your editor.
-
-1. Generate the Sidekiq configuration:
-
- ```ruby
- sidekiq['listen_address'] = "10.10.1.48"
-
- ## Optional: Enable extra Sidekiq processes
- sidekiq_cluster['enable'] = true
- sidekiq_cluster['queue_groups'] = [
-   "elastic_indexer"
- ]
- ```
-
-1. Setup Sidekiq's connection to Redis:
-
- ```ruby
- ## Must be the same in every sentinel node
- redis['master_name'] = 'gitlab-redis'
-
- ## The same password for Redis authentication you set up for the master node.
- redis['master_password'] = 'YOUR_PASSWORD'
-
- ## A list of sentinels with `host` and `port`
- gitlab_rails['redis_sentinels'] = [
- {'host' => '10.10.1.34', 'port' => 26379},
- {'host' => '10.10.1.35', 'port' => 26379},
- {'host' => '10.10.1.36', 'port' => 26379},
- ]
- ```
-
-1. Setup Sidekiq's connection to Gitaly:
-
- ```ruby
- git_data_dirs({
- 'default' => { 'gitaly_address' => 'tcp://gitaly:8075' },
- })
- gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
- ```
-
-1. Setup Sidekiq's connection to PostgreSQL:
-
- ```ruby
- gitlab_rails['db_host'] = '10.10.1.30'
- gitlab_rails['db_password'] = 'YOUR_PASSWORD'
- gitlab_rails['db_port'] = '5432'
- gitlab_rails['db_adapter'] = 'postgresql'
- gitlab_rails['db_encoding'] = 'unicode'
- gitlab_rails['auto_migrate'] = false
- ```
-
- Remember to add the Sidekiq nodes to the PostgreSQL whitelist:
-
- ```ruby
- postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.10.1.30/32 10.10.1.31/32 10.10.1.32/32 10.10.1.33/32 10.10.1.38/32)
- ```
-
-1. Disable other services:
-
- ```ruby
- nginx['enable'] = false
- grafana['enable'] = false
- prometheus['enable'] = false
- gitlab_rails['auto_migrate'] = false
- alertmanager['enable'] = false
- gitaly['enable'] = false
- gitlab_monitor['enable'] = false
- gitlab_workhorse['enable'] = false
- postgres_exporter['enable'] = false
- postgresql['enable'] = false
- redis['enable'] = false
- redis_exporter['enable'] = false
- puma['enable'] = false
- gitlab_exporter['enable'] = false
- ```
-
-1. Run `gitlab-ctl reconfigure`.
-
-NOTE: **Note:**
-You will need to restart the Sidekiq nodes after an update has occurred and
-database migrations have been performed.
-
-## Example configuration
-
-Here's what the resulting `/etc/gitlab/gitlab.rb` would look like:
-
-```ruby
-########################################
-##### Services Disabled ###
-########################################
-
-nginx['enable'] = false
-grafana['enable'] = false
-prometheus['enable'] = false
-gitlab_rails['auto_migrate'] = false
-alertmanager['enable'] = false
-gitaly['enable'] = false
-gitlab_monitor['enable'] = false
-gitlab_workhorse['enable'] = false
-postgres_exporter['enable'] = false
-postgresql['enable'] = false
-redis['enable'] = false
-redis_exporter['enable'] = false
-puma['enable'] = false
-gitlab_exporter['enable'] = false
-
-########################################
-#### Redis ###
-########################################
-
-## Must be the same in every sentinel node
-redis['master_name'] = 'gitlab-redis'
-
-## The same password for Redis authentication you set up for the master node.
-redis['master_password'] = 'YOUR_PASSWORD'
-
-## A list of sentinels with `host` and `port`
-gitlab_rails['redis_sentinels'] = [
- {'host' => '10.10.1.34', 'port' => 26379},
- {'host' => '10.10.1.35', 'port' => 26379},
- {'host' => '10.10.1.36', 'port' => 26379},
- ]
-
-#######################################
-### Gitaly ###
-#######################################
-
-git_data_dirs({
- 'default' => { 'gitaly_address' => 'tcp://gitaly:8075' },
-})
-gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
-
-#######################################
-### Postgres ###
-#######################################
-gitlab_rails['db_host'] = '10.10.1.30'
-gitlab_rails['db_password'] = 'YOUR_PASSWORD'
-gitlab_rails['db_port'] = '5432'
-gitlab_rails['db_adapter'] = 'postgresql'
-gitlab_rails['db_encoding'] = 'unicode'
-gitlab_rails['auto_migrate'] = false
-
-#######################################
-### Sidekiq configuration ###
-#######################################
-sidekiq['listen_address'] = "10.10.1.48"
-
-#######################################
-### Monitoring configuration ###
-#######################################
-consul['enable'] = true
-consul['monitoring_service_discovery'] = true
-
-consul['configuration'] = {
- bind_addr: '10.10.1.48',
- retry_join: %w(10.10.1.34 10.10.1.35 10.10.1.36)
-}
-
-# Set the network addresses that the exporters will listen on
-node_exporter['listen_address'] = '10.10.1.48:9100'
-
-# Rails Status for prometheus
-gitlab_rails['monitoring_whitelist'] = ['10.10.1.42', '127.0.0.1']
-```
-
-## Further reading
-
-Related Sidekiq configuration:
-
-1. [Extra Sidekiq processes](../operations/extra_sidekiq_processes.md)
-1. [Using the GitLab-Sidekiq chart](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/)
+This document was moved to [another location](../sidekiq.md).
diff --git a/doc/administration/img/repository_storages_admin_ui_v13_1.png b/doc/administration/img/repository_storages_admin_ui_v13_1.png
index f8c13a35369..a2b88d14a36 100644
--- a/doc/administration/img/repository_storages_admin_ui_v13_1.png
+++ b/doc/administration/img/repository_storages_admin_ui_v13_1.png
Binary files differ
diff --git a/doc/administration/index.md b/doc/administration/index.md
index fa415e5f78d..ed079abf708 100644
--- a/doc/administration/index.md
+++ b/doc/administration/index.md
@@ -138,7 +138,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
## Package Registry administration
- [Container Registry](packages/container_registry.md): Configure Container Registry with GitLab.
-- [Package Registry](packages/index.md): Enable GitLab to act as an NPM Registry and a Maven Repository. **(PREMIUM ONLY)**
+- [Package Registry](packages/index.md): Enable GitLab to act as an NPM Registry and a Maven Repository.
- [Dependency Proxy](packages/dependency_proxy.md): Configure the Dependency Proxy, a local proxy for frequently used upstream images/packages. **(PREMIUM ONLY)**
### Repository settings
@@ -163,7 +163,11 @@ Learn how to install, configure, update, and maintain your GitLab instance.
## Snippet settings
-- [Setting snippet content size limit](snippets/index.md): Set a maximum size limit for snippets' content.
+- [Setting snippet content size limit](snippets/index.md): Set a maximum content size limit for snippets.
+
+## Wiki settings
+
+- [Setting wiki page content size limit](wikis/index.md): Set a maximum content size limit for wiki pages.
## Git configuration options
diff --git a/doc/administration/instance_limits.md b/doc/administration/instance_limits.md
index 6d2fbd95d5e..f30dba331b8 100644
--- a/doc/administration/instance_limits.md
+++ b/doc/administration/instance_limits.md
@@ -160,7 +160,7 @@ There is a limit when embedding metrics in GFM for performance reasons.
## Number of webhooks
-On GitLab.com, the [maximum number of webhooks](../user/gitlab_com/index.md#maximum-number-of-webhooks) per project, and per group, is limited.
+On GitLab.com, the [maximum number of webhooks and their size](../user/gitlab_com/index.md#webhooks) per project, and per group, is limited.
To set this limit on a self-managed installation, run the following in the
[GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session):
@@ -314,6 +314,58 @@ To update this limit to a new value on a self-managed installation, run the foll
Plan.default.actual_limits.update!(ci_instance_level_variables: 30)
```
+### Maximum file size per type of artifact
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37226) in GitLab 13.3.
+
+Job artifacts defined with [`artifacts:reports`](../ci/pipelines/job_artifacts.md#artifactsreports)
+that are uploaded by the Runner are rejected if the file size exceeds the maximum
+file size limit. The limit is determined by comparing the project's
+[maximum artifact size setting](../user/admin_area/settings/continuous_integration.md#maximum-artifacts-size-core-only)
+with the instance limit for the given artifact type, and choosing the smaller value.
+
+Limits are set in megabytes, so the smallest possible value that can be defined is `1 MB`.
+
+Each type of artifact has a size limit that can be set. A default of `0` means there
+is no limit for that specific artifact type, and the project's maximum artifact size
+setting is used:
+
+| Artifact limit name | Default value |
+|---------------------------------------------|---------------|
+| `ci_max_artifact_size_accessibility` | 0 |
+| `ci_max_artifact_size_archive` | 0 |
+| `ci_max_artifact_size_browser_performance` | 0 |
+| `ci_max_artifact_size_cluster_applications` | 0 |
+| `ci_max_artifact_size_cobertura` | 0 |
+| `ci_max_artifact_size_codequality` | 0 |
+| `ci_max_artifact_size_container_scanning` | 0 |
+| `ci_max_artifact_size_coverage_fuzzing` | 0 |
+| `ci_max_artifact_size_dast` | 0 |
+| `ci_max_artifact_size_dependency_scanning` | 0 |
+| `ci_max_artifact_size_dotenv` | 0 |
+| `ci_max_artifact_size_junit` | 0 |
+| `ci_max_artifact_size_license_management` | 0 |
+| `ci_max_artifact_size_license_scanning` | 0 |
+| `ci_max_artifact_size_load_performance` | 0 |
+| `ci_max_artifact_size_lsif` | 20 MB ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37226) in GitLab 13.3) |
+| `ci_max_artifact_size_metadata` | 0 |
+| `ci_max_artifact_size_metrics_referee` | 0 |
+| `ci_max_artifact_size_metrics` | 0 |
+| `ci_max_artifact_size_network_referee` | 0 |
+| `ci_max_artifact_size_performance` | 0 |
+| `ci_max_artifact_size_requirements` | 0 |
+| `ci_max_artifact_size_sast` | 0 |
+| `ci_max_artifact_size_secret_detection` | 0 |
+| `ci_max_artifact_size_terraform` | 5 MB ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37018) in GitLab 13.3) |
+| `ci_max_artifact_size_trace` | 0 |
+
+For example, to set the `ci_max_artifact_size_junit` limit to 10 MB on a self-managed
+installation, run the following in the [GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session):
+
+```ruby
+Plan.default.actual_limits.update!(ci_max_artifact_size_junit: 10)
+```
+
## Instance monitoring and metrics
### Incident Management inbound alert limits
@@ -388,6 +440,24 @@ Reports that go over the 20 MB limit won't be loaded. Affected reports:
## Advanced Global Search limits
+### Maximum file size indexed
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/8638) in GitLab 13.3.
+
+You can set a limit on the content of repository files that are indexed in
+Elasticsearch. Any files larger than this limit will not be indexed, and thus
+will not be searchable.
+
+Setting a limit helps reduce the memory usage of the indexing processes as well
+as the overall index size. This value defaults to `1024 KiB` (1 MiB) as any
+text files larger than this likely aren't meant to be read by humans.
+
+NOTE: **Note:**
+You must set a limit, as an unlimited file size is not supported. Setting this
+value greater than the amount of memory on GitLab's Sidekiq nodes causes them
+to run out of memory, because they pre-allocate this amount of memory during
+indexing.
+
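+For example, to raise the limit to 2 MiB (the value is in KiB) on a self-managed
+installation, you can update the application setting from the
+[GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session).
+This is a sketch that assumes the `elasticsearch_indexed_file_size_limit_kb`
+setting name:
+
+```ruby
+# Raise the indexed file size limit to 2048 KiB (2 MiB)
+ApplicationSetting.current.update!(elasticsearch_indexed_file_size_limit_kb: 2048)
+```
+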
### Maximum field length
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/201826) in GitLab 12.8.
@@ -396,6 +466,9 @@ You can set a limit on the content of text fields indexed for Global Search.
Setting a maximum helps to reduce the load of the indexing processes. If any
text field exceeds this limit then the text will be truncated to this number of
characters and the rest will not be indexed and hence will not be searchable.
+This applies to all indexed data except repository files, which have a
+separate limit (see
+[Maximum file size indexed](#maximum-file-size-indexed)).
- On GitLab.com this is limited to 20000 characters
- For self-managed installations it is unlimited by default
@@ -408,6 +481,7 @@ Set the limit to `0` to disable it.
## Wiki limits
+- [Wiki page content size limit](wikis/index.md#wiki-page-content-size-limit).
- [Length restrictions for file and directory names](../user/project/wiki/index.md#length-restrictions-for-file-and-directory-names).
## Snippets limits
diff --git a/doc/administration/integration/plantuml.md b/doc/administration/integration/plantuml.md
index 2a30eced7c4..49ea59d239c 100644
--- a/doc/administration/integration/plantuml.md
+++ b/doc/administration/integration/plantuml.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference, howto
+---
+
# PlantUML & GitLab
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/8537) in GitLab 8.16.
diff --git a/doc/administration/integration/terminal.md b/doc/administration/integration/terminal.md
index fd9e09dc17a..c363bd30543 100644
--- a/doc/administration/integration/terminal.md
+++ b/doc/administration/integration/terminal.md
@@ -57,7 +57,7 @@ through to the next one in the chain. If you installed GitLab using Omnibus, or
from source, starting with GitLab 8.15, this should be done by the default
configuration, so there's no need for you to do anything.
-However, if you run a [load balancer](../high_availability/load_balancer.md) in
+However, if you run a [load balancer](../load_balancer.md) in
front of GitLab, you may need to make some changes to your configuration. These
guides document the necessary steps for a selection of popular reverse proxies:
@@ -98,4 +98,4 @@ they will receive a `Connection failed` message.
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/8413) in GitLab 8.17.
Terminal sessions, by default, do not expire.
-You can limit terminal session lifetime in your GitLab instance. To do so, navigate to **{admin}** [**Admin Area > Settings > Web terminal**](../../user/admin_area/settings/index.md#general), and set a `max session time`.
+You can limit terminal session lifetime in your GitLab instance. To do so, navigate to [**Admin Area > Settings > Web terminal**](../../user/admin_area/settings/index.md#general), and set a `max session time`.
diff --git a/doc/administration/invalidate_markdown_cache.md b/doc/administration/invalidate_markdown_cache.md
index 5fd804e11dc..ac43d0eae63 100644
--- a/doc/administration/invalidate_markdown_cache.md
+++ b/doc/administration/invalidate_markdown_cache.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# Invalidate Markdown Cache
For performance reasons, GitLab caches the HTML version of Markdown text
diff --git a/doc/administration/issue_closing_pattern.md b/doc/administration/issue_closing_pattern.md
index 579b957eb47..7abe0f725f2 100644
--- a/doc/administration/issue_closing_pattern.md
+++ b/doc/administration/issue_closing_pattern.md
@@ -1,6 +1,13 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# Issue closing pattern **(CORE ONLY)**
->**Note:**
+NOTE: **Note:**
This is the administration documentation. There is a separate [user documentation](../user/project/issues/managing_issues.md#closing-issues-automatically)
on issue closing pattern.
@@ -16,7 +23,7 @@ is installed on.
The default pattern can be located in [`gitlab.yml.example`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/gitlab.yml.example)
under the "Automatic issue closing" section.
-> **Tip:**
+TIP: **Tip:**
You are advised to use <https://rubular.com> to test the issue closing pattern.
Because Rubular doesn't understand `%{issue_ref}`, you can replace this by
`#\d+` when testing your patterns, which matches only local issue references like `#123`.
diff --git a/doc/administration/job_artifacts.md b/doc/administration/job_artifacts.md
index cb31a8934b1..66a7bcb90f6 100644
--- a/doc/administration/job_artifacts.md
+++ b/doc/administration/job_artifacts.md
@@ -164,9 +164,30 @@ _The artifacts are stored by default in
gitlab-rake gitlab:artifacts:migrate
```
+1. Optional: Verify all files migrated properly.
+ From [PostgreSQL console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database)
+ (`sudo gitlab-psql -d gitlabhq_production`) verify that `objectstg` below (where `file_store = 2`) has the count of all artifacts:
+
+ ```shell
+ gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM ci_job_artifacts;
+
+ total | filesystem | objectstg
+ ------+------------+-----------
+ 2409 | 0 | 2409
+ ```
+
+ Verify that there are no files on disk in the `artifacts` folder:
+
+ ```shell
+ sudo find /var/opt/gitlab/gitlab-rails/shared/artifacts -type f | grep -v tmp/cache | wc -l
+ ```
+
+ In some cases, you may need to run the [orphan artifact file cleanup Rake task](../raketasks/cleanup.md#remove-orphan-artifact-files)
+ to clean up orphaned artifacts.
+
CAUTION: **Caution:**
JUnit test report artifact (`junit.xml.gz`) migration
-[is not supported](https://gitlab.com/gitlab-org/gitlab/-/issues/27698)
+[was not supported until GitLab 12.8](https://gitlab.com/gitlab-org/gitlab/-/issues/27698#note_317190991)
by the `gitlab:artifacts:migrate` script.
**In installations from source:**
@@ -197,9 +218,29 @@ _The artifacts are stored by default in
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
+1. Optional: Verify all files migrated properly.
+ From PostgreSQL console (`sudo -u git -H psql -d gitlabhq_production`) verify that `objectstg` below (where `file_store = 2`) has the count of all artifacts:
+
+ ```shell
+ gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM ci_job_artifacts;
+
+ total | filesystem | objectstg
+ ------+------------+-----------
+ 2409 | 0 | 2409
+ ```
+
+ Verify that there are no files on disk in the `artifacts` folder:
+
+ ```shell
+ sudo find /var/opt/gitlab/gitlab-rails/shared/artifacts -type f | grep -v tmp/cache | wc -l
+ ```
+
+ In some cases, you may need to run the [orphan artifact file cleanup Rake task](../raketasks/cleanup.md#remove-orphan-artifact-files)
+ to clean up orphaned artifacts.
+
CAUTION: **Caution:**
JUnit test report artifact (`junit.xml.gz`) migration
-[is not supported](https://gitlab.com/gitlab-org/gitlab/-/issues/27698)
+[was not supported until GitLab 12.8](https://gitlab.com/gitlab-org/gitlab/-/issues/27698#note_317190991)
by the `gitlab:artifacts:migrate` script.
### OpenStack example
diff --git a/doc/administration/lfs/index.md b/doc/administration/lfs/index.md
index 4a8151bd091..5c1a9519a35 100644
--- a/doc/administration/lfs/index.md
+++ b/doc/administration/lfs/index.md
@@ -1,4 +1,8 @@
---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference, howto
disqus_identifier: 'https://docs.gitlab.com/ee/workflow/lfs/lfs_administration.html'
---
@@ -152,7 +156,24 @@ On Omnibus installations, the settings are prefixed by `lfs_object_store_`:
This will migrate existing LFS objects to object storage. New LFS objects
will be forwarded to object storage unless
- `gitlab_rails['lfs_object_store_background_upload']` is set to false.
+ `gitlab_rails['lfs_object_store_background_upload']` and `gitlab_rails['lfs_object_store_direct_upload']` are set to `false`.
+1. Optional: Verify all files migrated properly.
+ From [PostgreSQL console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database)
+ (`sudo gitlab-psql -d gitlabhq_production`) verify that `objectstg` below (where `file_store = 2`) has the count of all LFS objects:
+
+ ```shell
+ gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM lfs_objects;
+
+ total | filesystem | objectstg
+ ------+------------+-----------
+ 2409 | 0 | 2409
+ ```
+
+ Verify that there are no files on disk in the `lfs-objects` folder:
+
+ ```shell
+ sudo find /var/opt/gitlab/gitlab-rails/shared/lfs-objects -type f | grep -v tmp/cache | wc -l
+ ```
### S3 for installations from source
@@ -187,14 +208,30 @@ For source installations the settings are nested under `lfs:` and then
```
This will migrate existing LFS objects to object storage. New LFS objects
- will be forwarded to object storage unless `background_upload` is set to
- false.
+ will be forwarded to object storage unless `background_upload` and `direct_upload` are set to
+ `false`.
+1. Optional: Verify all files migrated properly.
+ From PostgreSQL console (`sudo -u git -H psql -d gitlabhq_production`) verify that `objectstg` below (where `file_store = 2`) has the count of all LFS objects:
+
+ ```shell
+ gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM lfs_objects;
+
+ total | filesystem | objectstg
+ ------+------------+-----------
+ 2409 | 0 | 2409
+ ```
+
+ Verify that there are no files on disk in the `lfs-objects` folder:
+
+ ```shell
+ sudo find /var/opt/gitlab/gitlab-rails/shared/lfs-objects -type f | grep -v tmp/cache | wc -l
+ ```
### Migrating back to local storage
+To migrate back to local storage:
-1. Set both `direct_upload` and `background_upload` to false under the LFS object storage settings. Don't forget to restart GitLab.
+1. Set both `direct_upload` and `background_upload` to `false` under the LFS object storage settings. Don't forget to restart GitLab.
1. Run `rake gitlab:lfs:migrate_to_local` on your console.
1. Disable `object_storage` for LFS objects in `gitlab.rb`. Remember to restart GitLab afterwards.
diff --git a/doc/administration/load_balancer.md b/doc/administration/load_balancer.md
new file mode 100644
index 00000000000..fe534f30f66
--- /dev/null
+++ b/doc/administration/load_balancer.md
@@ -0,0 +1,126 @@
+---
+type: reference
+---
+
+# Load Balancer for multi-node GitLab
+
+In a multi-node GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics of which load balancer to use,
+or its exact configuration, are beyond the scope of GitLab documentation. We hope
+that if you're managing HA systems like GitLab you have a load balancer of
+choice already. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix Netscaler. This documentation will outline what ports and protocols
+you need to use with GitLab.
+
+## SSL
+
+How will you handle SSL in your multi-node environment? There are several different
+options:
+
+- Each application node terminates SSL
+- The load balancer(s) terminate SSL and communication is not secure between
+ the load balancer(s) and the application nodes
+- The load balancer(s) terminate SSL and communication is *secure* between the
+ load balancer(s) and the application nodes
+
+### Application nodes terminate SSL
+
+Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
+than 'HTTP(S)' protocol. This will pass the connection to the application nodes'
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Load Balancer(s) terminate SSL without backend SSL
+
+Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
+The load balancer(s) will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer(s) and GitLab will not be secure,
+there is some additional configuration needed. See
+[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
+
+### Load Balancer(s) terminate SSL with backend SSL
+
+Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
+The load balancer(s) will be responsible for managing SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+## Ports
+
+### Basic ports
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
+
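+For example, with HAProxy (one of the load balancers mentioned above), a
+minimal TCP passthrough for port 443 could look like the sketch below. The
+backend node addresses are assumptions; adapt them to your environment:
+
+```plaintext
+# /etc/haproxy/haproxy.cfg (sketch)
+frontend gitlab_https
+    bind *:443
+    mode tcp                     # TCP passthrough; NGINX on the nodes terminates SSL
+    default_backend gitlab_nodes_https
+
+backend gitlab_nodes_https
+    mode tcp
+    balance roundrobin
+    server gitlab1 10.0.0.11:443 check
+    server gitlab2 10.0.0.12:443 check
+```
+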
+### GitLab Pages Ports
+
+If you're using GitLab Pages with custom domain support, you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
+separate from the other GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
+
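+On the client side, users can then point SSH at the alternate hostname and
+port, for example with an entry in `~/.ssh/config` (a sketch; the hostnames
+are assumptions based on the example above):
+
+```plaintext
+Host gitlab.example.com
+  Hostname altssh.gitlab.example.com
+  Port 443
+```
+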
+## Readiness check
+
+It is strongly recommended that multi-node deployments configure load balancers to use the [readiness check](../user/admin_area/monitoring/health_check.md#readiness) to ensure a node is ready to accept traffic before routing traffic to it. This is especially important when using Puma, as there is a brief period during a restart where Puma will not accept requests.
+
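+For example, a load balancer health probe could call the readiness endpoint
+over HTTP. This is a sketch; adjust the URL to your instance:
+
+```shell
+# Returns HTTP 200 with a JSON status when the node can accept traffic
+curl "https://gitlab.example.com/-/readiness"
+```
+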
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/logs.md b/doc/administration/logs.md
index 3db9d32563e..2e8d0bf7461 100644
--- a/doc/administration/logs.md
+++ b/doc/administration/logs.md
@@ -14,6 +14,11 @@ Find more about them [in Audit Events documentation](audit_events.md).
System log files are typically plain text in a standard log file format.
This guide talks about how to read and use these system log files.
+[Read more about how to customize logging on Omnibus GitLab
+installations](https://docs.gitlab.com/omnibus/settings/logs.html),
+including adjusting log retention, log forwarding,
+switching logs from JSON to plain text logging, and more.
+
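+For example, log retention for the bundled logrotate service can be tuned in
+`/etc/gitlab/gitlab.rb`. This is a sketch, assuming the Omnibus `logging[...]`
+logrotate settings; see the linked documentation for the full list of options:
+
+```ruby
+# Keep 7 rotated copies of each log, rotating daily
+logging['logrotate_frequency'] = "daily"
+logging['logrotate_rotate'] = 7
+```
+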
## `production_json.log`
This file lives in `/var/log/gitlab/gitlab-rails/production_json.log` for
@@ -832,6 +837,29 @@ For example:
This message shows that Geo detected that a repository update was needed for project `1`.
+## `update_mirror_service_json.log`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/commit/7f637e2af7006dc2b1b2649d9affc0b86cfb33c4) in GitLab 11.12.
+
+This file is stored in:
+
+- `/var/log/gitlab/gitlab-rails/update_mirror_service_json.log` for Omnibus GitLab installations.
+- `/home/git/gitlab/log/update_mirror_service_json.log` for installations from source.
+
+This file contains information about any errors that occurred during project mirroring.
+
+```json
+{
+ "severity":"ERROR",
+ "time":"2020-07-28T23:29:29.473Z",
+ "correlation_id":"5HgIkCJsO53",
+ "user_id":"x",
+ "project_id":"x",
+ "import_url":"https://mirror-source/group/project.git",
+ "error_message":"The LFS objects download list couldn't be imported. Error: Unauthorized"
+}
+```
+
## Registry Logs
For Omnibus installations, Container Registry logs reside in `/var/log/gitlab/registry/current`.
diff --git a/doc/administration/merge_request_diffs.md b/doc/administration/merge_request_diffs.md
index 93cdbff6621..3f4cd6e2751 100644
--- a/doc/administration/merge_request_diffs.md
+++ b/doc/administration/merge_request_diffs.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Editor
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# Merge request diffs storage **(CORE ONLY)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52568) in GitLab 11.8.
diff --git a/doc/administration/monitoring/github_imports.md b/doc/administration/monitoring/github_imports.md
index 21cc4c708a8..0d79684c951 100644
--- a/doc/administration/monitoring/github_imports.md
+++ b/doc/administration/monitoring/github_imports.md
@@ -6,8 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Monitoring GitHub imports
->**Note:**
-Available since [GitLab 10.2](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/14731).
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/14731) in GitLab 10.2.
The GitHub importer exposes various Prometheus metrics that you can use to
monitor the health and progress of the importer.
diff --git a/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png b/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_overview_dashboard.png
index 1d61823ce41..1d61823ce41 100644
--- a/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png
+++ b/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_overview_dashboard.png
Binary files differ
diff --git a/doc/administration/monitoring/gitlab_self_monitoring_project/index.md b/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
index 44ba26296b9..e272cccb7ce 100644
--- a/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
+++ b/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
@@ -55,10 +55,10 @@ panels, provide a regular expression in the **Instance label regex** field.
The dashboard uses metrics available in
[Omnibus GitLab](https://docs.gitlab.com/omnibus/) installations.
-![GitLab self monitoring default dashboard](img/self_monitoring_default_dashboard.png)
+![GitLab self monitoring overview dashboard](img/self_monitoring_overview_dashboard.png)
You can also
-[create your own dashboards](../../../operations/metrics/dashboards/index.md#defining-custom-dashboards-per-project).
+[create your own dashboards](../../../operations/metrics/dashboards/index.md).
## Connection to Prometheus
@@ -83,8 +83,8 @@ Once the webhook is setup, you can
You can add custom metrics in the self monitoring project by:
-1. [Duplicating](../../../operations/metrics/dashboards/index.md#duplicating-a-gitlab-defined-dashboard) the default dashboard.
-1. [Editing](../../../operations/metrics/dashboards/index.md#view-and-edit-the-source-file-of-a-custom-dashboard) the newly created dashboard file and configuring it with [dashboard YAML properties](../../../operations/metrics/dashboards/yaml.md).
+1. [Duplicating](../../../operations/metrics/dashboards/index.md#duplicate-a-gitlab-defined-dashboard) the overview dashboard.
+1. [Editing](../../../operations/metrics/index.md) the newly created dashboard file and configuring it with [dashboard YAML properties](../../../operations/metrics/dashboards/yaml.md).
## Troubleshooting
diff --git a/doc/administration/monitoring/performance/grafana_configuration.md b/doc/administration/monitoring/performance/grafana_configuration.md
index 96f1377fb73..136a2749e80 100644
--- a/doc/administration/monitoring/performance/grafana_configuration.md
+++ b/doc/administration/monitoring/performance/grafana_configuration.md
@@ -68,7 +68,7 @@ repository.
After setting up Grafana, you can enable a link to access it easily from the
GitLab sidebar:
-1. Navigate to the **{admin}** **Admin Area > Settings > Metrics and profiling**.
+1. Navigate to the **Admin Area > Settings > Metrics and profiling**.
1. Expand **Metrics - Grafana**.
1. Check the **Enable access to Grafana** checkbox.
1. Configure the **Grafana URL**:
@@ -77,7 +77,7 @@ GitLab sidebar:
- *Otherwise,* enter the full URL of the Grafana instance.
1. Click **Save changes**.
-GitLab displays your link in the **{admin}** **Admin Area > Monitoring > Metrics Dashboard**.
+GitLab displays your link in the **Admin Area > Monitoring > Metrics Dashboard**.
## Security Update
diff --git a/doc/administration/monitoring/performance/performance_bar.md b/doc/administration/monitoring/performance/performance_bar.md
index 8002a9ab296..e247ec3708c 100644
--- a/doc/administration/monitoring/performance/performance_bar.md
+++ b/doc/administration/monitoring/performance/performance_bar.md
@@ -23,7 +23,7 @@ From left to right, it displays:
details:
![Gitaly profiling using the Performance Bar](img/performance_bar_gitaly_calls.png)
- **Rugged calls**: the time taken (in milliseconds) and the total number of
- [Rugged](../../high_availability/nfs.md#improving-nfs-performance-with-gitlab) calls.
+ [Rugged](../../nfs.md#improving-nfs-performance-with-gitlab) calls.
Click to display a modal window with more details:
![Rugged profiling using the Performance Bar](img/performance_bar_rugged_calls.png)
- **Redis calls**: the time taken (in milliseconds) and the total number of
@@ -72,8 +72,8 @@ Requests with warnings display `(!)` after their path in the **Request selector*
The GitLab Performance Bar is disabled by default. To enable it for a given group:
1. Sign in as a user with Administrator [permissions](../../../user/permissions.md).
-1. In the menu bar, click the **{admin}** **Admin Area** icon.
-1. Navigate to **{settings}** **Settings > Metrics and profiling**
+1. In the menu bar, click **Admin Area**.
+1. Navigate to **Settings > Metrics and profiling**
(`admin/application_settings/metrics_and_profiling`), and expand the section
**Profiling - Performance bar**.
1. Click **Enable access to the Performance Bar**.
diff --git a/doc/administration/monitoring/performance/request_profiling.md b/doc/administration/monitoring/performance/request_profiling.md
index a3b29493d84..5746b95eb44 100644
--- a/doc/administration/monitoring/performance/request_profiling.md
+++ b/doc/administration/monitoring/performance/request_profiling.md
@@ -9,8 +9,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
To profile a request:
1. Sign in to GitLab as a user with Administrator or Maintainer [permissions](../../../user/permissions.md).
-1. In the navigation bar, click **{admin}** **Admin area**.
-1. Navigate to **{monitor}** **Monitoring > Requests Profiles**.
+1. In the navigation bar, click **Admin area**.
+1. Navigate to **Monitoring > Requests Profiles**.
1. In the **Requests Profiles** section, copy the token.
1. Pass the headers `X-Profile-Token: <token>` and `X-Profile-Mode: <mode>`(where
`<mode>` can be `execution` or `memory`) to the request you want to profile. When
@@ -29,7 +29,7 @@ To profile a request:
Profiled requests can take longer than usual.
After the request completes, you can view the profiling output from the
-**{monitor}** **Monitoring > Requests Profiles** administration page:
+**Monitoring > Requests Profiles** administration page:
![Profiling output](img/request_profile_result.png)
diff --git a/doc/administration/monitoring/prometheus/gitlab_metrics.md b/doc/administration/monitoring/prometheus/gitlab_metrics.md
index fa5e5da80c3..bff689c0c0c 100644
--- a/doc/administration/monitoring/prometheus/gitlab_metrics.md
+++ b/doc/administration/monitoring/prometheus/gitlab_metrics.md
@@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
To enable the GitLab Prometheus metrics:
1. Log into GitLab as a user with [administrator permissions](../../../user/permissions.md).
-1. Navigate to **{admin}** **Admin Area > Settings > Metrics and profiling**.
+1. Navigate to **Admin Area > Settings > Metrics and profiling**.
1. Find the **Metrics - Prometheus** section, and click **Enable Prometheus Metrics**.
1. [Restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart) for the changes to take effect.
@@ -50,7 +50,8 @@ The following metrics are available:
| `gitlab_page_out_of_bounds` | Counter | 12.8 | Counter for the PageLimiter pagination limit being hit | `controller`, `action`, `bot` |
| `gitlab_rails_queue_duration_seconds` | Histogram | 9.4 | Measures latency between GitLab Workhorse forwarding a request to Rails | |
| `gitlab_sql_duration_seconds` | Histogram | 10.2 | SQL execution time, excluding `SCHEMA` operations and `BEGIN` / `COMMIT` | |
-| `gitlab_transaction_allocated_memory_bytes` | Histogram | 10.2 | Allocated memory for all transactions (`gitlab_transaction_*` metrics) | |
+| `gitlab_ruby_threads_max_expected_threads` | Gauge | 13.3 | Maximum number of threads expected to be running and performing application work |
+| `gitlab_ruby_threads_running_threads` | Gauge | 13.3 | Number of running Ruby threads by name |
| `gitlab_transaction_cache_<key>_count_total` | Counter | 10.2 | Counter for total Rails cache calls (per key) | |
| `gitlab_transaction_cache_<key>_duration_total` | Counter | 10.2 | Counter for total time (seconds) spent in Rails cache calls (per key) | |
| `gitlab_transaction_cache_count_total` | Counter | 10.2 | Counter for total Rails cache calls (aggregate) | |
@@ -95,8 +96,6 @@ The following metrics are available:
| `gitlab_transaction_db_count_total` | Counter | 13.1 | Counter for total number of SQL calls | `controller`, `action` |
| `gitlab_transaction_db_write_count_total` | Counter | 13.1 | Counter for total number of write SQL calls | `controller`, `action` |
| `gitlab_transaction_db_cached_count_total` | Counter | 13.1 | Counter for total number of cached SQL calls | `controller`, `action` |
-| `http_redis_requests_duration_seconds` | Histogram | 13.1 | Redis requests duration during web transactions | `controller`, `action` |
-| `http_redis_requests_total` | Counter | 13.1 | Redis requests count during web transactions | `controller`, `action` |
| `http_elasticsearch_requests_duration_seconds` **(STARTER)** | Histogram | 13.1 | Elasticsearch requests duration during web transactions | `controller`, `action` |
| `http_elasticsearch_requests_total` **(STARTER)** | Counter | 13.1 | Elasticsearch requests count during web transactions | `controller`, `action` |
| `pipelines_created_total` | Counter | 9.4 | Counter of pipelines created | |
diff --git a/doc/administration/monitoring/prometheus/index.md b/doc/administration/monitoring/prometheus/index.md
index f0ad0a1a2e6..7d93e9797be 100644
--- a/doc/administration/monitoring/prometheus/index.md
+++ b/doc/administration/monitoring/prometheus/index.md
@@ -109,6 +109,81 @@ prometheus['scrape_configs'] = [
]
```
+### Standalone Prometheus using Omnibus GitLab
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node running Prometheus and [Grafana](../performance/grafana_configuration.md).
+
+The steps below are the minimum necessary to configure a Monitoring node running Prometheus and Grafana with Omnibus GitLab:
+
+1. SSH into the Monitoring node.
+1. [Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page, but
+ do not follow the remaining steps.
+1. Make sure to collect the IP addresses or DNS records of the Consul server nodes for the next step.
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = 'toomanysecrets'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # The addresses can be IPs or FQDNs
+ consul['configuration'] = {
+ retry_join: %w(10.0.0.1 10.0.0.2 10.0.0.3),
+ }
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ node_exporter['enable'] = false
+ ```
+
+1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+
+The next step is to tell all the other nodes where the monitoring node is:
+
+1. Edit `/etc/gitlab/gitlab.rb`, and add, or find and uncomment the following line:
+
+ ```ruby
+ gitlab_rails['prometheus_address'] = '10.0.0.1:9090'
+ ```
+
+ Where `10.0.0.1:9090` is the IP address and port of the Prometheus node.
+
+1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to
+ take effect.
+
+NOTE: **Note:**
+Once monitoring using Service Discovery is enabled with `consul['monitoring_service_discovery'] = true`,
+ensure that `prometheus['scrape_configs']` is not set in `/etc/gitlab/gitlab.rb`. Setting both
+`consul['monitoring_service_discovery'] = true` and `prometheus['scrape_configs']` in `/etc/gitlab/gitlab.rb`
+will result in errors.
+
### Using an external Prometheus server
NOTE: **Note:**
@@ -128,14 +203,24 @@ To use an external Prometheus server:
1. Set each bundled service's [exporter](#bundled-software-metrics) to listen on a network address, for example:
```ruby
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = "0.0.0.0:9229"
+
+ # Rails nodes
gitlab_exporter['listen_address'] = '0.0.0.0'
- sidekiq['listen_address'] = '0.0.0.0'
gitlab_exporter['listen_port'] = '9168'
- node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Sidekiq nodes
+ sidekiq['listen_address'] = '0.0.0.0'
+
+ # Redis nodes
redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # PostgreSQL nodes
postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ # Gitaly nodes
gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
- gitlab_workhorse['prometheus_listen_addr'] = "0.0.0.0:9229"
```
1. Install and set up a dedicated Prometheus instance, if necessary, using the [official installation instructions](https://prometheus.io/docs/prometheus/latest/installation/).
@@ -227,7 +312,7 @@ To use an external Prometheus server:
You can visit `http://localhost:9090` for the dashboard that Prometheus offers by default.
->**Note:**
+NOTE: **Note:**
If SSL has been enabled on your GitLab instance, you may not be able to access
Prometheus on the same browser as GitLab if using the same FQDN due to [HSTS](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security). We plan to
[provide access via GitLab](https://gitlab.com/gitlab-org/multi-user-prometheus), but in the interim there are
diff --git a/doc/administration/nfs.md b/doc/administration/nfs.md
new file mode 100644
index 00000000000..bae6bd0dd6c
--- /dev/null
+++ b/doc/administration/nfs.md
@@ -0,0 +1,368 @@
+---
+type: reference
+---
+
+# Using NFS with GitLab
+
+NFS can be used as an alternative to object storage, but this isn't typically
+recommended for performance reasons. Note, however, that it is required for
+[GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
+
+For data objects such as LFS, Uploads, Artifacts, etc., an [Object Storage service](object_storage.md)
+is recommended over NFS where possible, due to better performance.
+
+CAUTION: **Caution:**
+From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0,
+support for NFS for Git repositories is scheduled to be removed. Upgrade to
+[Gitaly Cluster](gitaly/praefect.md) as soon as possible.
+
+NOTE: **Note:**
+Filesystem performance has a big impact on overall GitLab
+performance, especially for actions that read or write to Git repositories. See
+[Filesystem Performance Benchmarking](operations/filesystem_benchmarking.md)
+for steps to test filesystem performance.
+
+## Known kernel version incompatibilities
+
+Red Hat Enterprise Linux (RHEL) and CentOS v7.7 and v7.8 ship with kernel
+version `3.10.0-1127`, which [contains a
+bug](https://bugzilla.redhat.com/show_bug.cgi?id=1783554) that causes
+[uploads to fail to copy over NFS](https://gitlab.com/gitlab-org/gitlab/-/issues/218999). The
+following GitLab versions include a fix to work properly with that
+kernel version:
+
+1. [12.10.12](https://about.gitlab.com/releases/2020/06/25/gitlab-12-10-12-released/)
+1. [13.0.7](https://about.gitlab.com/releases/2020/06/25/gitlab-13-0-7-released/)
+1. [13.1.1](https://about.gitlab.com/releases/2020/06/24/gitlab-13-1-1-released/)
+1. 13.2 and up
+
+If you are using that kernel version, be sure to upgrade GitLab to avoid
+errors.
+
+## Fast lookup of authorized SSH keys
+
+The [fast SSH key lookup](operations/fast_ssh_key_lookup.md) feature can improve
+performance of GitLab instances even if they're using block storage.
+
+[Fast SSH key lookup](operations/fast_ssh_key_lookup.md) is a replacement for
+`authorized_keys` (in `/var/opt/gitlab/.ssh`) using the GitLab database.
+
+NFS increases latency, so fast lookup is recommended if `/var/opt/gitlab`
+is moved to NFS.
+
+We are investigating the use of
+[fast lookup as the default](https://gitlab.com/groups/gitlab-org/-/epics/3104).
+
+## NFS server
+
+Installing the `nfs-kernel-server` package allows you to share directories with
+the clients running the GitLab application:
+
+```shell
+sudo apt-get update
+sudo apt-get install nfs-kernel-server
+```
+
+### Required features
+
+**File locking**: GitLab **requires** advisory file locking, which is only
+supported natively in NFS version 4. NFSv3 also supports locking (by way of
+the separate Network Lock Manager protocol) as long as Linux kernel 2.6.5+ is
+used. We recommend using version 4 and do not specifically test NFSv3.
+
+### Recommended options
+
+When you define your NFS exports, we recommend you also add the following
+options:
+
+- `no_root_squash` - NFS normally maps the `root` user to `nobody`. This is
+ a good security measure when NFS shares will be accessed by many different
+ users. However, in this case only GitLab will use the NFS share, so it
+ is safe. GitLab recommends the `no_root_squash` setting so that the Omnibus
+ package can set ownership and permissions on files as needed. Without the
+ setting, you may receive errors when the Omnibus package tries to alter
+ permissions. Note that GitLab and other bundled components do **not** run
+ as `root` but as non-privileged users. In some cases where the
+ `no_root_squash` option is not available, the `root` flag can achieve the
+ same result.
+- `sync` - Force synchronous behavior. Default is asynchronous and under certain
+ circumstances it could lead to data loss if a failure occurs before data has
+ synced.
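+
+A hypothetical `/etc/exports` entry applying these options might look like the
+following (the export path and client subnet are examples; adjust them for
+your environment):
+
+```plaintext
+/var/opt/gitlab/git-data 10.1.0.0/24(rw,sync,no_root_squash,no_subtree_check)
+```
+
+After editing `/etc/exports`, apply the changes with `sudo exportfs -ra`.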
+
+Due to the complexities of running Omnibus with LDAP and the complexities of
+maintaining ID mapping without LDAP, in most cases you should enable numeric
+UIDs and GIDs (which may be off by default in some setups) for simplified
+permission management between systems:
+
+- [NetApp instructions](https://library.netapp.com/ecmdocs/ECMP1401220/html/GUID-24367A9F-E17B-4725-ADC1-02D86F56F78E.html)
+- For non-NetApp devices, disable NFSv4 `idmapping` by performing opposite of [enable NFSv4 idmapper](https://wiki.archlinux.org/index.php/NFS#Enabling_NFSv4_idmapping)
+
+### Disable NFS server delegation
+
+We recommend that all NFS users disable the NFS server delegation feature. This
+is to avoid a [Linux kernel bug](https://bugzilla.redhat.com/show_bug.cgi?id=1552203)
+which causes NFS clients to slow precipitously due to
+[excessive network traffic from numerous `TEST_STATEID` NFS messages](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52017).
+
+To disable NFS server delegation, do the following:
+
+1. On the NFS server, run:
+
+ ```shell
+ echo 0 > /proc/sys/fs/leases-enable
+ sysctl -w fs.leases-enable=0
+ ```
+
+1. Restart the NFS server process. For example, on CentOS run `service nfs restart`.
+
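+The `sysctl` command above only changes the running kernel. To keep delegation
+disabled across reboots, you can also persist the setting in a sysctl
+configuration file (the file name here is only an example):
+
+```shell
+echo 'fs.leases-enable=0' | sudo tee /etc/sysctl.d/90-gitlab-nfs.conf
+sudo sysctl --system
+```
+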
+NOTE: **Note:**
+The kernel bug may be fixed in
+[more recent kernels with this commit](https://github.com/torvalds/linux/commit/95da1b3a5aded124dd1bda1e3cdb876184813140).
+Red Hat Enterprise 7 [shipped a kernel update](https://access.redhat.com/errata/RHSA-2019:2029)
+on August 6, 2019 that may also have resolved this problem.
+You may not need to disable NFS server delegation if you know you are using a version of
+the Linux kernel that has been fixed. That said, GitLab still encourages instance
+administrators to keep NFS server delegation disabled.
+
+### Improving NFS performance with GitLab
+
+#### Improving NFS performance with Unicorn
+
+NOTE: **Note:**
+From GitLab 12.1, GitLab automatically detects whether Rugged can and should be used per storage.
+
+If you previously enabled Rugged using the feature flag, you need to unset it by running:
+
+```shell
+sudo gitlab-rake gitlab:features:unset_rugged
+```
+
+If the Rugged feature flag is explicitly set to either true or false, GitLab will use the value explicitly set.
+
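+Conversely, to pin Rugged explicitly on or off instead of relying on
+auto-detection, the same Rake namespace should provide enable and disable
+tasks (a sketch; verify these tasks exist in your GitLab version):
+
+```shell
+sudo gitlab-rake gitlab:features:enable_rugged
+sudo gitlab-rake gitlab:features:disable_rugged
+```
+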
+#### Improving NFS performance with Puma
+
+NOTE: **Note:**
+From GitLab 12.7, Rugged auto-detection is disabled if Puma thread count is greater than 1.
+
+If you want to use Rugged with Puma, it is recommended to [set Puma thread count to 1](https://docs.gitlab.com/omnibus/settings/puma.html#puma-settings).
+
+If you want to use Rugged with a Puma thread count greater than 1, Rugged can be enabled using the [feature flag](../development/gitaly.md#legacy-rugged-code).
+
+If the Rugged feature flag is explicitly set to either true or false, GitLab will use the value explicitly set.
+
+## NFS client
+
+The `nfs-common` package provides NFS functionality without installing the
+server components, which don't need to run on the application nodes.
+
+```shell
+apt-get update
+apt-get install nfs-common
+```
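+
+Before adding entries to `/etc/fstab`, you can verify that the client can see
+and mount the server's exports (`10.1.0.1` is an example address matching the
+fstab sample below):
+
+```shell
+# List the exports offered by the NFS server
+showmount -e 10.1.0.1
+
+# Manually mount one export to confirm the options work
+sudo mkdir -p /var/opt/gitlab/git-data
+sudo mount -t nfs4 -o vers=4.1,hard 10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data
+```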
+
+### Mount options
+
+Here is an example snippet to add to `/etc/fstab`:
+
+```plaintext
+10.1.0.1:/var/opt/gitlab/.ssh /var/opt/gitlab/.ssh nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+10.1.0.1:/var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+10.1.0.1:/var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+10.1.0.1:/var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/gitlab-ci/builds nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,vers=4.1,hard,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+```
+
+Note there are several options that you should consider using:
+
+| Setting | Description |
+| ------- | ----------- |
+| `vers=4.1` |NFS v4.1 should be used instead of v4.0 because there is a Linux [NFS client bug in v4.0](https://gitlab.com/gitlab-org/gitaly/-/issues/1339) that can cause significant problems due to stale data.
+| `nofail` | Don't halt boot process waiting for this mount to become available
+| `lookupcache=positive` | Tells the NFS client to honor `positive` cache results but invalidates any `negative` cache results. Negative cache results cause problems with Git. Specifically, a `git push` can fail to register uniformly across all NFS clients. The negative cache causes the clients to 'remember' that the files did not exist previously.
+| `hard` | Instead of `soft`. [Further details](#soft-mount-option).
+
+#### `soft` mount option
+
+It's recommended that you use `hard` in your mount options, unless you have a specific
+reason to use `soft`.
+
+On GitLab.com, we use `soft` because there were times when we had NFS servers
+reboot and `soft` improved availability, but everyone's infrastructure is different.
+If your NFS is provided by on-premise storage arrays with redundant controllers,
+for example, you shouldn't need to worry about NFS server availability.
+
+The NFS man page states:
+
+> "soft" timeout can cause silent data corruption in certain cases
+
+Read the [Linux man page](https://linux.die.net/man/5/nfs) to understand the difference,
+and if you do use `soft`, ensure that you've taken steps to mitigate the risks.
+
+If you experience behavior that might have been caused by
+writes to disk on the NFS server not occurring, such as commits going missing,
+use the `hard` option, because (from the man page):
+
+> use the soft option only when client responsiveness is more important than data integrity
+
+Other vendors make similar recommendations, including
+[SAP](http://wiki.scn.sap.com/wiki/x/PARnFQ) and NetApp's
+[knowledge base](https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_are_the_differences_between_hard_mount_and_soft_mount);
+both highlight that if the NFS client driver caches data, `soft` means there
+is no certainty whether writes by GitLab are actually on disk.
+
+Mount points set with the option `hard` may not perform as well, and if the
+NFS server goes down, `hard` will cause processes to hang when interacting with
+the mount point. Use `SIGKILL` (`kill -9`) to deal with hung processes.
+The `intr` option
+[stopped working in the 2.6 kernel](https://access.redhat.com/solutions/157873).
+
+### A single NFS mount
+
+It's recommended to nest all GitLab data directories within a single mount,
+which allows automatic restore of backups without manually moving existing data.
+
+```plaintext
+mountpoint
+└── gitlab-data
+ ├── builds
+ ├── git-data
+ ├── shared
+ └── uploads
+```
+
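+Assuming the NFS export is mounted at `/gitlab-nfs` (as in the configuration
+below), a layout like this could be created with:
+
+```shell
+sudo mkdir -p /gitlab-nfs/gitlab-data/{git-data,builds,shared,uploads}
+```
+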
+Then configure Omnibus with the paths to each directory nested in the mount
+point, using the following configuration to move each data location to a
+subdirectory:
+
+```ruby
+git_data_dirs({"default" => { "path" => "/gitlab-nfs/gitlab-data/git-data"} })
+gitlab_rails['uploads_directory'] = '/gitlab-nfs/gitlab-data/uploads'
+gitlab_rails['shared_path'] = '/gitlab-nfs/gitlab-data/shared'
+gitlab_ci['builds_directory'] = '/gitlab-nfs/gitlab-data/builds'
+```
+
+Run `sudo gitlab-ctl reconfigure` to start using the central location. Be
+aware that if you had existing data, you need to manually copy or `rsync` it
+to these new locations and then restart GitLab.
+
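+For example, a hypothetical copy of the existing Git data into the new
+location (repeat the `rsync` for each data location):
+
+```shell
+sudo gitlab-ctl stop
+sudo rsync -av /var/opt/gitlab/git-data/. /gitlab-nfs/gitlab-data/git-data/
+sudo gitlab-ctl start
+```
+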
+### Bind mounts
+
+As an alternative to changing the Omnibus configuration, bind mounts can be
+used to store the data on an NFS mount.
+
+Bind mounts provide a way to specify just one NFS mount and then
+bind the default GitLab data locations to the NFS mount. Start by defining your
+single NFS mount point as you normally would in `/etc/fstab`. Let's assume your
+NFS mount point is `/gitlab-nfs`. Then, add the following bind mounts in
+`/etc/fstab`:
+
+```plaintext
+/gitlab-nfs/gitlab-data/git-data /var/opt/gitlab/git-data none bind 0 0
+/gitlab-nfs/gitlab-data/.ssh /var/opt/gitlab/.ssh none bind 0 0
+/gitlab-nfs/gitlab-data/uploads /var/opt/gitlab/gitlab-rails/uploads none bind 0 0
+/gitlab-nfs/gitlab-data/shared /var/opt/gitlab/gitlab-rails/shared none bind 0 0
+/gitlab-nfs/gitlab-data/builds /var/opt/gitlab/gitlab-ci/builds none bind 0 0
+```
+
+Using bind mounts will require manually making sure the data directories
+are empty before attempting a restore. Read more about the
+[restore prerequisites](../raketasks/backup_restore.md).
+
+You can view information and options set for each of the mounted NFS file
+systems by running `nfsstat -m` and `cat /etc/fstab`.
+
+### Multiple NFS mounts
+
+When using the default Omnibus configuration, you need to share 4 data
+locations between all GitLab cluster nodes. No other locations should be
+shared. The following 4 locations need to be shared:
+
+| Location | Description | Default configuration |
+| -------- | ----------- | --------------------- |
+| `/var/opt/gitlab/git-data` | Git repository data. This will account for a large portion of your data | `git_data_dirs({"default" => { "path" => "/var/opt/gitlab/git-data"} })`
+| `/var/opt/gitlab/gitlab-rails/uploads` | User uploaded attachments | `gitlab_rails['uploads_directory'] = '/var/opt/gitlab/gitlab-rails/uploads'`
+| `/var/opt/gitlab/gitlab-rails/shared` | Build artifacts, GitLab Pages, LFS objects, temp files, etc. If you're using LFS this may also account for a large portion of your data | `gitlab_rails['shared_path'] = '/var/opt/gitlab/gitlab-rails/shared'`
+| `/var/opt/gitlab/gitlab-ci/builds` | GitLab CI/CD build traces | `gitlab_ci['builds_directory'] = '/var/opt/gitlab/gitlab-ci/builds'`
+
+Other GitLab directories should not be shared between nodes. They contain
+node-specific files and GitLab code that does not need to be shared. To ship
+logs to a central location consider using remote syslog. Omnibus GitLab packages
+provide configuration for [UDP log shipping](https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-shipping-gitlab-enterprise-edition-only).
+
+Having multiple NFS mounts will require manually making sure the data directories
+are empty before attempting a restore. Read more about the
+[restore prerequisites](../raketasks/backup_restore.md).
+
+## NFS in a Firewalled Environment
+
+If the traffic between your NFS server and NFS client(s) is subject to port filtering
+by a firewall, then you will need to reconfigure that firewall to allow NFS communication.
+
+[This guide from the TLDP](http://tldp.org/HOWTO/NFS-HOWTO/security.html#FIREWALLS)
+covers the basics of using NFS in a firewalled environment. Additionally, we encourage you to
+search for and review the specific documentation for your operating system or distribution and your firewall software.
+
+Example for Ubuntu:
+
+Check that NFS traffic from the client is allowed by the firewall on the host by running
+the command: `sudo ufw status`. If it's being blocked, then you can allow traffic from a specific
+client with the command below.
+
+```shell
+sudo ufw allow from <client_ip_address> to any port nfs
+```
+
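+With NFS version 4, only TCP port `2049` needs to be reachable, so the rule
+can alternatively be restricted to that port:
+
+```shell
+sudo ufw allow from <client_ip_address> to any port 2049 proto tcp
+```
+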
+## Known issues
+
+### Avoid using AWS's Elastic File System (EFS)
+
+GitLab strongly recommends against using AWS Elastic File System (EFS).
+Our support team will not be able to assist on performance issues related to
+file system access.
+
+Customers and users have reported that AWS EFS does not perform well for GitLab's
+use-case. Workloads where many small files are written in a serialized manner, like `git`,
+are not well-suited for EFS. EBS with an NFS server on top will perform much better.
+
+If you do choose to use EFS, avoid storing GitLab log files (e.g. those in `/var/log/gitlab`)
+there because this will also affect performance. We recommend that the log files be
+stored on a local volume.
+
+For more details on another person's experience with EFS, see this [Commit Brooklyn 2019 video](https://youtu.be/K6OS8WodRBQ?t=313).
+
+### Avoid using CephFS and GlusterFS
+
+GitLab strongly recommends against using CephFS and GlusterFS.
+These distributed file systems are not well-suited for GitLab's input/output access patterns because Git uses many small files, and the time taken for access and file locks to propagate makes Git activity very slow.
+
+### Avoid using PostgreSQL with NFS
+
+GitLab strongly recommends against running your PostgreSQL database
+across NFS. The GitLab support team will not be able to assist on performance issues related to
+this configuration.
+
+Additionally, this configuration is specifically warned against in the
+[PostgreSQL Documentation](https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-NFS):
+
+>PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
+>locally-connected drives. If the client or server NFS implementation does not provide standard file
+>system semantics, this can cause reliability problems. Specifically, delayed (asynchronous) writes
+>to the NFS server can cause data corruption problems.
+
+For a supported database architecture, see our documentation on
+[Configuring a Database for GitLab HA](postgresql/replication_and_failover.md).
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/object_storage.md b/doc/administration/object_storage.md
index b51b722fbd7..49716883310 100644
--- a/doc/administration/object_storage.md
+++ b/doc/administration/object_storage.md
@@ -15,12 +15,18 @@ GitLab has been tested on a number of object storage providers:
- [Amazon S3](https://aws.amazon.com/s3/)
- [Google Cloud Storage](https://cloud.google.com/storage)
-- [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces)
+- [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces/)
- [Oracle Cloud Infrastructure](https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm)
- [Openstack Swift](https://docs.openstack.org/swift/latest/s3_compat.html)
- On-premises hardware and appliances from various storage vendors.
- MinIO. We have [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+### Known compatibility issues
+
+- Dell EMC ECS: Prior to GitLab 13.3, a [known bug in GitLab Workhorse prevents
+ HTTP Range Requests from working with CI job artifacts](https://gitlab.com/gitlab-org/gitlab/-/issues/223806).
+ Be sure to upgrade to GitLab v13.3.0 or above if you use S3 storage with this hardware.
+
## Configuration guides
There are two ways of specifying object storage configuration in GitLab:
@@ -55,6 +61,10 @@ NOTE: **Note:**
Consolidated object storage configuration cannot be used for
backups or Mattermost. See [the full table for a complete list](#storage-specific-configuration).
+NOTE: **Note:**
+Enabling consolidated object storage will enable object storage for all object types.
+If you wish to use local storage for specific object types, you can [selectively disable object storages](#selectively-disabling-object-storage).
+
Most types of objects, such as CI artifacts, LFS files, upload
attachments, and so on can be saved in object storage by specifying a single
credential for object storage with multiple buckets. A [different bucket
@@ -84,6 +94,11 @@ See the section on [ETag mismatch errors](#etag-mismatch) for more details.
'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}
+ # OPTIONAL: The following lines are only needed if server side encryption is required
+ gitlab_rails['object_store']['storage_options'] = {
+ 'server_side_encryption' => '<AES256 or aws:kms>',
+ 'server_side_encryption_kms_key_id' => '<arn:s3:aws:xxx>'
+ }
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = '<artifacts>'
gitlab_rails['object_store']['objects']['external_diffs']['bucket'] = '<external-diffs>'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = '<lfs-objects>'
@@ -119,6 +134,9 @@ See the section on [ETag mismatch errors](#etag-mismatch) for more details.
aws_access_key_id: <AWS_ACCESS_KEY_ID>
aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
region: <eu-central-1>
+ storage_options:
+ server_side_encryption: <AES256 or aws:kms>
+ server_side_encryption_kms_key_id: <arn:s3:aws:xxx>
objects:
artifacts:
bucket: <artifacts>
@@ -184,7 +202,8 @@ gitlab_rails['object_store']['connection'] = {
|---------|-------------|
| `enabled` | Enable/disable object storage |
| `proxy_download` | Set to `true` to [enable proxying all files served](#proxy-download). Leaving it `false` reduces egress traffic, because clients download directly from remote storage instead of having GitLab proxy all data |
-| `connection` | Various connection options described below |
+| `connection` | Various [connection options](#connection-settings) described below |
+| `storage_options` | Options to use when saving new objects, such as [server side encryption](#server-side-encryption-headers). Introduced in GitLab 13.3 |
| `objects` | [Object-specific configuration](#object-specific-configuration)
### Connection settings
@@ -494,16 +513,18 @@ If you configure GitLab to use object storage for CI logs and artifacts,
### Proxy Download
-A number of the use cases for object storage allow client traffic to be redirected to the
-object storage back end, like when Git clients request large files via LFS or when
-downloading CI artifacts and logs.
+Clients can download files in object storage by receiving a pre-signed, time-limited URL,
+or by GitLab proxying the data from object storage to the client.
+Downloading files from object storage directly
+helps reduce the amount of egress traffic GitLab
+needs to process.
When the files are stored on local block storage or NFS, GitLab has to act as a proxy.
This is not the default behavior with object storage.
The `proxy_download` setting controls this behavior: the default is generally `false`.
-Verify this in the documentation for each use case. Set it to `true` so that GitLab proxies
-the files.
+Verify this in the documentation for each use case. Set it to `true` if you want
+GitLab to proxy the files.
When not proxying files, GitLab returns an
[HTTP 302 redirect with a pre-signed, time-limited object storage URL](https://gitlab.com/gitlab-org/gitlab/-/issues/32117#note_218532298).
@@ -524,7 +545,9 @@ certificate, or may return common TLS errors such as:
x509: certificate signed by unknown authority
```
-- Clients will need network access to the object storage. Errors that might result
+- Clients will need network access to the object storage.
+Network firewalls could block access.
+Errors that might result
if this access is not in place include:
```plaintext
@@ -535,6 +558,10 @@ Getting a `403 Forbidden` response is specifically called out on the
[package repository documentation](packages/index.md#using-object-storage)
as a side effect of how some build tools work.
+Additionally, for a short time period, users could share pre-signed, time-limited
+object storage URLs with others without authentication. Bandwidth charges may
+also be incurred between the object storage provider and the client.
+
### ETag mismatch
Using the default GitLab settings, some object storage back-ends such as
@@ -576,21 +603,46 @@ configuration.
#### Encrypted S3 buckets
-> - Introduced in [GitLab 13.1](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/466) for instance profiles only.
-> - Introduced in [GitLab 13.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34460) for static credentials when [consolidated object storage configuration](#consolidated-object-storage-configuration) is used.
+> - Introduced in [GitLab 13.1](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/466) for instance profiles only and [S3 default encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html).
+> - Introduced in [GitLab 13.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34460) for static credentials when [consolidated object storage configuration](#consolidated-object-storage-configuration) and [S3 default encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html) are used.
When configured either with an instance profile or with the consolidated
-object configuration, GitLab Workhorse properly uploads files to S3 buckets
-that have [SSE-S3 or SSE-KMS encryption enabled by
+object configuration, GitLab Workhorse properly uploads files to S3
+buckets that have [SSE-S3 or SSE-KMS encryption enabled by
default](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html).
-Note that customer master keys (CMKs) and
-SSE-C encryption are [not yet supported since this requires supplying
-keys to the GitLab configuration](https://gitlab.com/gitlab-org/gitlab/-/issues/226006).
+Note that customer master keys (CMKs) and SSE-C encryption are [not
+supported since this requires sending the encryption keys in every request](https://gitlab.com/gitlab-org/gitlab/-/issues/226006).
+
+##### Server-side encryption headers
+
+> Introduced in [GitLab 13.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38240).
+
+Setting a default encryption on an S3 bucket is the easiest way to
+enable encryption, but you may want to [set a bucket policy to ensure
+only encrypted objects are uploaded](https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-store-kms-encrypted-objects/).
+To do this, you must configure GitLab to send the proper encryption headers
+in the `storage_options` configuration section:
+
+| Setting | Description |
+|-------------------------------------|-------------|
+| `server_side_encryption` | Encryption mode (AES256 or aws:kms) |
+| `server_side_encryption_kms_key_id` | Amazon Resource Name. Only needed when `aws:kms` is used in `server_side_encryption`. See the [Amazon documentation on using KMS encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) |
+
+As with the case for default encryption, these options only work when
+the Workhorse S3 client is enabled. One of the following two conditions
+must be fulfilled:
+
+- `use_iam_profile` is `true` in the connection settings.
+- Consolidated object storage settings are in use.
+
+[ETag mismatch errors](#etag-mismatch) will occur if server side
+encryption headers are used without enabling the Workhorse S3 client.
##### Disabling the feature
The Workhorse S3 client is enabled by default when the
-[`use_iam_profile` configuration option](#iam-permissions) is set to `true`.
+[`use_iam_profile` configuration option](#iam-permissions) is set to `true` or consolidated
+object storage settings are configured.
The feature can be disabled using the `:use_workhorse_s3_client` feature flag. To disable the
feature, ask a GitLab administrator with
diff --git a/doc/administration/operations/cleaning_up_redis_sessions.md b/doc/administration/operations/cleaning_up_redis_sessions.md
index 38fac8a0eca..d2aec2f7c47 100644
--- a/doc/administration/operations/cleaning_up_redis_sessions.md
+++ b/doc/administration/operations/cleaning_up_redis_sessions.md
@@ -15,7 +15,8 @@ prefixed with `session:gitlab:`, so they would look like
`session:gitlab:976aa289e2189b17d7ef525a6702ace9`. Below we describe how to
remove the keys in the old format.
-**Note:** the instructions below must be modified in accordance with your
+NOTE: **Note:**
+The instructions below must be modified in accordance with your
configuration settings if you have used the advanced Redis
settings outlined in
[Configuration Files Documentation](https://gitlab.com/gitlab-org/gitlab/blob/master/config/README.md).
diff --git a/doc/administration/operations/extra_sidekiq_processes.md b/doc/administration/operations/extra_sidekiq_processes.md
index 155680354da..e589ecc4216 100644
--- a/doc/administration/operations/extra_sidekiq_processes.md
+++ b/doc/administration/operations/extra_sidekiq_processes.md
@@ -82,7 +82,7 @@ To start multiple processes:
```
After the extra Sidekiq processes are added, navigate to
-**{admin}** **Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
+**Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
![Multiple Sidekiq processes](img/sidekiq-cluster.png)
diff --git a/doc/administration/operations/filesystem_benchmarking.md b/doc/administration/operations/filesystem_benchmarking.md
index 856061348ed..64afd1b44f3 100644
--- a/doc/administration/operations/filesystem_benchmarking.md
+++ b/doc/administration/operations/filesystem_benchmarking.md
@@ -26,7 +26,7 @@ To install:
Then run the following:
```shell
-fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/path/to/git-data/testfile --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
+fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=/path/to/git-data/testfile
```
This will create a 4GB file in `/path/to/git-data/testfile`. It performs
diff --git a/doc/administration/operations/moving_repositories.md b/doc/administration/operations/moving_repositories.md
index 960005fe25d..4763c012538 100644
--- a/doc/administration/operations/moving_repositories.md
+++ b/doc/administration/operations/moving_repositories.md
@@ -9,16 +9,17 @@ We will look at three scenarios: the target directory is empty, the
target directory contains an outdated copy of the repositories, and
how to deal with thousands of repositories.
-**Each of the approaches we list can/will overwrite data in the
+DANGER: **Danger:**
+Each of the approaches we list can/will overwrite data in the
target directory `/mnt/gitlab/repositories`. Do not mix up the
-source and the target.**
+source and the target.
-## Target directory is empty: use a tar pipe
+## Target directory is empty: use a `tar` pipe
If the target directory `/mnt/gitlab/repositories` is empty the
-simplest thing to do is to use a tar pipe. This method has low
-overhead and tar is almost always already installed on your system.
-However, it is not possible to resume an interrupted tar pipe: if
+simplest thing to do is to use a `tar` pipe. This method has low
+overhead and `tar` is almost always already installed on your system.
+However, it is not possible to resume an interrupted `tar` pipe: if
that happens then all data must be copied again.
```shell
@@ -28,9 +29,9 @@ sudo -u git sh -c 'tar -C /var/opt/gitlab/git-data/repositories -cf - -- . |\
If you want to see progress, replace `-xf` with `-xvf`.
-### Tar pipe to another server
+### `tar` pipe to another server
-You can also use a tar pipe to copy data to another server. If your
+You can also use a `tar` pipe to copy data to another server. If your
`git` user has SSH access to the new server as `git@newserver`, you
can pipe the data through SSH.
@@ -42,13 +43,13 @@ sudo -u git sh -c 'tar -C /var/opt/gitlab/git-data/repositories -cf - -- . |\
If you want to compress the data before it goes over the network
(which will cost you CPU cycles) you can replace `ssh` with `ssh -C`.
-## The target directory contains an outdated copy of the repositories: use rsync
+## The target directory contains an outdated copy of the repositories: use `rsync`
If the target directory already contains a partial / outdated copy
of the repositories it may be wasteful to copy all the data again
-with tar. In this scenario it is better to use rsync. This utility
+with `tar`. In this scenario it is better to use `rsync`. This utility
is either already installed on your system or easily installable
-via apt, yum etc.
+via `apt`, `yum`, etc.
```shell
sudo -u git sh -c 'rsync -a --delete /var/opt/gitlab/git-data/repositories/. \
@@ -59,30 +60,30 @@ The `/.` in the command above is very important, without it you can
easily get the wrong directory structure in the target directory.
If you want to see progress, replace `-a` with `-av`.
-### Single rsync to another server
+### Single `rsync` to another server
If the `git` user on your source system has SSH access to the target
-server you can send the repositories over the network with rsync.
+server you can send the repositories over the network with `rsync`.
```shell
sudo -u git sh -c 'rsync -a --delete /var/opt/gitlab/git-data/repositories/. \
git@newserver:/mnt/gitlab/repositories'
```
-## Thousands of Git repositories: use one rsync per repository
+## Thousands of Git repositories: use one `rsync` per repository
-Every time you start an rsync job it has to inspect all files in
+Every time you start an `rsync` job it has to inspect all files in
the source directory, all files in the target directory, and then
decide what files to copy or not. If the source or target directory
-has many contents this startup phase of rsync can become a burden
-for your GitLab server. In cases like this you can make rsync's
+contains many files, this startup phase of `rsync` can become a burden
+for your GitLab server. In cases like this you can make `rsync`'s
life easier by dividing its work into smaller pieces, syncing one
repository at a time.
-In addition to rsync we will use [GNU
+In addition to `rsync` we will use [GNU
Parallel](http://www.gnu.org/software/parallel/). This utility is
-not included in GitLab so you need to install it yourself with apt
-or yum. Also note that the GitLab scripts we used below were added
+not included in GitLab so you need to install it yourself with `apt`
+or `yum`. Also note that the GitLab scripts we used below were added
in GitLab 8.1.
**This process does not clean up repositories at the target location that no
@@ -90,9 +91,9 @@ longer exist at the source.** If you start using your GitLab instance with
`/mnt/gitlab/repositories`, you need to run `gitlab-rake gitlab:cleanup:repos`
after switching to the new repository storage directory.
-### Parallel rsync for all repositories known to GitLab
+### Parallel `rsync` for all repositories known to GitLab
-This will sync repositories with 10 rsync processes at a time. We keep
+This will sync repositories with 10 `rsync` processes at a time. We keep
track of progress so that the transfer can be restarted if necessary.
First we create a new directory, owned by `git`, to hold transfer
@@ -147,7 +148,7 @@ cat /home/git/transfer-logs/* | sort | uniq -u |\
`
```
-### Parallel rsync only for repositories with recent activity
+### Parallel `rsync` only for repositories with recent activity
Suppose you have already done one sync that started after 2015-10-1 12:00 UTC.
Then you might only want to sync repositories that were changed via GitLab
diff --git a/doc/administration/operations/puma.md b/doc/administration/operations/puma.md
index 62b93d40a6b..e7b4bb88faf 100644
--- a/doc/administration/operations/puma.md
+++ b/doc/administration/operations/puma.md
@@ -27,7 +27,7 @@ will _not_ carry over automatically, due to differences between the two applicat
deployments, see [Configuring Puma Settings](https://docs.gitlab.com/omnibus/settings/puma.html#configuring-puma-settings).
For Helm based deployments, see the [Webservice Chart documentation](https://docs.gitlab.com/charts/charts/gitlab/webservice/index.html).
-Additionally we strongly recommend that multi-node deployments [configure their load balancers to utilize the readiness check](../high_availability/load_balancer.md#readiness-check) due to a difference between Unicorn and Puma in how they handle connections during a restart of the service.
+Additionally we strongly recommend that multi-node deployments [configure their load balancers to utilize the readiness check](../load_balancer.md#readiness-check) due to a difference between Unicorn and Puma in how they handle connections during a restart of the service.
## Performance caveat when using Puma with Rugged
diff --git a/doc/administration/operations/sidekiq_memory_killer.md b/doc/administration/operations/sidekiq_memory_killer.md
index fdccfacc8a9..e829d735c4f 100644
--- a/doc/administration/operations/sidekiq_memory_killer.md
+++ b/doc/administration/operations/sidekiq_memory_killer.md
@@ -71,5 +71,5 @@ The MemoryKiller is controlled using environment variables.
If the process hard shutdown/restart is not performed by Sidekiq,
the Sidekiq process will be forcefully terminated after
- `Sidekiq.options[:timeout] * 2` seconds. An external supervision mechanism
+ `Sidekiq.options[:timeout] + 2` seconds. An external supervision mechanism
(e.g. runit) must restart Sidekiq afterwards.
diff --git a/doc/administration/packages/container_registry.md b/doc/administration/packages/container_registry.md
index 44f1d075a5e..1883f6659e6 100644
--- a/doc/administration/packages/container_registry.md
+++ b/doc/administration/packages/container_registry.md
@@ -331,6 +331,9 @@ The different supported drivers are:
| swift | OpenStack Swift Object Storage |
| oss | Aliyun OSS |
+NOTE: **Note:**
+Although most S3 compatible services (like [MinIO](https://min.io/)) should work with the registry, we only guarantee support for AWS S3. Because we cannot assert the correctness of third-party S3 implementations, we can debug issues, but we cannot patch the registry unless an issue is reproducible against an AWS S3 bucket.
+
Read more about the individual driver's configuration options in the
[Docker Registry docs](https://docs.docker.com/registry/configuration/#storage).
@@ -446,27 +449,62 @@ to be in read-only mode for a while. During this time,
you can pull from the Container Registry, but you cannot push.
1. Optional: To reduce the amount of data to be migrated, run the [garbage collection tool without downtime](#performing-garbage-collection-without-downtime).
-1. Copy initial data to your S3 bucket, for example with the AWS CLI [`cp`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html)
+1. This example uses the `aws` CLI. If you haven't configured the
+ CLI before, you have to configure your credentials by running `sudo aws configure`.
+ Because a non-admin user likely can't access the Container Registry folder,
+ ensure you use `sudo`. To check your credential configuration, run
+ [`ls`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/ls.html) to list
+ all buckets.
+
+ ```shell
+ sudo aws --endpoint-url https://your-object-storage-backend.com s3 ls
+ ```
+
+ If you are using AWS as your back end, you do not need the [`--endpoint-url`](https://docs.aws.amazon.com/cli/latest/reference/#options).
+1. Copy initial data to your S3 bucket, for example with the `aws` CLI
+ [`cp`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html)
or [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
command. Make sure to keep the `docker` folder as the top-level folder inside the bucket.
```shell
- aws s3 sync registry s3://mybucket
+ sudo aws --endpoint-url https://your-object-storage-backend.com s3 sync registry s3://mybucket
```
+ TIP: **Tip:**
+ If you have a lot of data, you may be able to improve performance by
+ [running parallel sync operations](https://aws.amazon.com/premiumsupport/knowledge-center/s3-improve-transfer-sync-command/).
+
1. To perform the final data sync,
[put the Container Registry in `read-only` mode](#performing-garbage-collection-without-downtime) and
[reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
1. Sync any changes since the initial data load to your S3 bucket and delete files that exist in the destination bucket but not in the source:
```shell
- aws s3 sync registry s3://mybucket --delete
+ sudo aws --endpoint-url https://your-object-storage-backend.com s3 sync registry s3://mybucket --delete --dryrun
```
+ After verifying the command is going to perform as expected, remove the
+ [`--dryrun`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
+ flag and run the command.
+
DANGER: **Danger:**
- The `--delete` flag will delete files that exist in the destination but not in the source.
+ The [`--delete`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
+ flag will delete files that exist in the destination but not in the source.
Make sure not to swap the source and destination, or you will delete all data in the Registry.
+1. Verify all Container Registry files have been uploaded to object storage
+ by looking at the file count returned by these two commands:
+
+ ```shell
+ sudo find registry -type f | wc -l
+ ```
+
+ ```shell
+ sudo aws --endpoint-url https://your-object-storage-backend.com s3 ls s3://mybucket --recursive | wc -l
+ ```
+
+ The output of these commands should match, except for the content in the
+ `_uploads` directories and sub-directories.
1. Configure your registry to [use the S3 bucket for storage](#use-object-storage).
1. For the changes to take effect, set the Registry back to `read-write` mode and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
@@ -693,6 +731,35 @@ notifications:
backoff: 1000
```
+## Run the Cleanup policy now
+
+To reduce the amount of [Container Registry disk space used by a given project](../troubleshooting/gitlab_rails_cheat_sheet.md#registry-disk-space-usage-by-project),
+administrators can clean up image tags
+and [run garbage collection](#container-registry-garbage-collection).
+
+To remove image tags by running the cleanup policy, run the following commands in the
+[GitLab Rails console](../troubleshooting/navigating_gitlab_via_rails_console.md):
+
+```ruby
+# Numeric ID of the project whose container registry should be cleaned up
+P = <project_id>
+
+# Numeric ID of a developer, maintainer or owner in that project
+U = <user_id>
+
+# Get required details / objects
+user = User.find_by_id(U)
+project = Project.find_by_id(P)
+repo = ContainerRepository.find_by(project_id: P)
+policy = ContainerExpirationPolicy.find_by(project_id: P)
+
+# Start the tag cleanup
+Projects::ContainerRepository::CleanupTagsService.new(project, user, policy.attributes.except("created_at", "updated_at")).execute(repo)
+```
+
+NOTE: **Note:**
+You can also [run cleanup on a schedule](../../user/packages/container_registry/index.md#cleanup-policy).
+
## Container Registry garbage collection
NOTE: **Note:**
diff --git a/doc/administration/packages/index.md b/doc/administration/packages/index.md
index 5d07136ef40..3af1f0c933b 100644
--- a/doc/administration/packages/index.md
+++ b/doc/administration/packages/index.md
@@ -4,7 +4,7 @@ group: Package
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
-# GitLab Package Registry administration **(PREMIUM ONLY)**
+# GitLab Package Registry administration
GitLab Packages allows organizations to utilize GitLab as a private repository
for a variety of common package managers. Users are able to build and publish
diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md
index 4efd92eaa07..9e2aa602767 100644
--- a/doc/administration/pages/index.md
+++ b/doc/administration/pages/index.md
@@ -114,7 +114,7 @@ since that is needed in all configurations.
---
-URL scheme: `http://page.example.io`
+URL scheme: `http://<namespace>.example.io/<project_slug>`
This is the minimum setup that you can use Pages with. It is the base for all
other setups as described below. NGINX will proxy all requests to the daemon.
@@ -139,7 +139,7 @@ Watch the [video tutorial](https://youtu.be/dD8c7WNcc6s) for this configuration.
---
-URL scheme: `https://page.example.io`
+URL scheme: `https://<namespace>.example.io/<project_slug>`
NGINX will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
@@ -204,6 +204,7 @@ control over how the Pages daemon runs and serves content in your environment.
| `external_https` | Configure Pages to bind to one or more secondary IP addresses, serving HTTPS requests. Multiple addresses can be given as an array, along with exact ports, for example `['1.2.3.4', '1.2.3.5:8063']`. Sets value for `listen_https`.
| `gitlab_client_http_timeout` | GitLab API HTTP client connection timeout in seconds (default: 10s).
| `gitlab_client_jwt_expiry` | JWT Token expiry time in seconds (default: 30s).
+| `domain_config_source` | Domain configuration source (default: `disk`)
| `gitlab_id` | The OAuth application public ID. Leave blank to automatically fill when Pages authenticates with GitLab.
| `gitlab_secret` | The OAuth application secret. Leave blank to automatically fill when Pages authenticates with GitLab.
| `gitlab_server` | Server to use for authentication when access control is enabled; defaults to GitLab `external_url`.
@@ -254,7 +255,7 @@ you have IPv6 as well as IPv4 addresses, you can use them both.
---
-URL scheme: `http://page.example.io` and `http://domain.com`
+URL scheme: `http://<namespace>.example.io/<project_slug>` and `http://custom-domain.com`
In that case, the Pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -285,7 +286,7 @@ world. Custom domains are supported, but no TLS.
---
-URL scheme: `https://page.example.io` and `https://domain.com`
+URL scheme: `https://<namespace>.example.io/<project_slug>` and `https://custom-domain.com`
In that case, the Pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -388,9 +389,9 @@ To do that:
1. Click **Save changes**.
CAUTION: **Warning:**
-This action will not make all currently public web-sites private until they redeployed.
-This issue among others will be resolved by
-[changing GitLab Pages configuration mechanism](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/282).
+For self-managed installations, all public websites remain private until they are
+redeployed. This issue will be resolved by
+[sourcing domain configuration from the GitLab API](https://gitlab.com/gitlab-org/gitlab/-/issues/218357).
### Running behind a proxy
@@ -409,7 +410,7 @@ pages:
### Using a custom Certificate Authority (CA)
NOTE: **Note:**
-[Before 13.2](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4289), when using Omnibus, a [workaround was required](https://docs.gitlab.com/13.1/ee/administration/pages/index.html#using-a-custom-certificate-authority-ca).
+[Before 13.3](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4411), when using Omnibus, a [workaround was required](https://docs.gitlab.com/13.1/ee/administration/pages/index.html#using-a-custom-certificate-authority-ca).
When using certificates issued by a custom CA, [Access Control](../../user/project/pages/pages_access_control.md#gitlab-pages-access-control) and
the [online view of HTML job artifacts](../../ci/pipelines/job_artifacts.md#browsing-artifacts)
@@ -536,7 +537,7 @@ database encryption. Proceed with caution.
1. Set up a new server. This will become the **Pages server**.
-1. Create an [NFS share](../high_availability/nfs_host_client_setup.md)
+1. Create an [NFS share](../nfs.md)
on the **Pages server** and configure this share to
allow access from your main **GitLab server**.
Note that the example there is more general and
@@ -601,6 +602,43 @@ configuring a load balancer to work at the IP level, and so on. If you wish to
set up GitLab Pages on multiple servers, perform the above procedure for each
Pages server.
+## Domain source configuration
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/217912) in GitLab 13.3.
+
+GitLab Pages can use different sources to get domain configuration.
+The default value is `nil`; in that case, GitLab Pages falls back to `disk`.
+
+```ruby
+gitlab_pages['domain_config_source'] = nil
+```
+
+You can specify `gitlab` to enable [API-based configuration](#gitlab-api-based-configuration).
+
+For more details see this [blog post](https://about.gitlab.com/blog/2020/08/03/how-gitlab-pages-uses-the-gitlab-api-to-serve-content/).
+
+### GitLab API-based configuration
+
+GitLab Pages can use an API-based configuration. This replaces disk source configuration, which
+was used prior to GitLab 13.0. Follow these steps to enable it:
+
+1. Add the following to your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ gitlab_pages['domain_config_source'] = "gitlab"
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+If you encounter an issue, you can disable it by choosing `disk` or `nil`:
+
+```ruby
+gitlab_pages['domain_config_source'] = nil
+```
+
+For other common issues, see the [troubleshooting section](#failed-to-connect-to-the-internal-gitlab-api)
+or report an issue.
+
## Backup
GitLab Pages are part of the [regular backup](../../raketasks/backup_restore.md), so there is no separate backup to configure.
@@ -696,3 +734,24 @@ date > /var/opt/gitlab/gitlab-rails/shared/pages/.update
```
If you've customized the Pages storage path, adjust the command above to use your custom path.
+
+### Failed to connect to the internal GitLab API
+
+If you have enabled [API-based configuration](#gitlab-api-based-configuration) and see the following error:
+
+```plaintext
+ERRO[0010] Failed to connect to the internal GitLab API after 0.50s error="failed to connect to internal Pages API: HTTP status: 401"
+```
+
+If you are [running GitLab Pages on a separate server](#running-gitlab-pages-on-a-separate-server),
+you must copy the `/etc/gitlab/gitlab-secrets.json` file
+from the **GitLab server** to the **Pages server** after upgrading to GitLab 13.3,
+as described in that section.
+
+Other reasons may include network connectivity issues between your
+**GitLab server** and your **Pages server** such as firewall configurations or closed ports.
+For example, if there is a connection timeout:
+
+```plaintext
+error="failed to connect to internal Pages API: Get \"https://gitlab.example.com:3000/api/v4/internal/pages/status\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
+```
diff --git a/doc/administration/pages/source.md b/doc/administration/pages/source.md
index d5b49bdf839..486bc7a8777 100644
--- a/doc/administration/pages/source.md
+++ b/doc/administration/pages/source.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Pages administration for source installations
->**Note:**
+NOTE: **Note:**
Before attempting to enable GitLab Pages, first make sure you have
[installed GitLab](../../install/installation.md) successfully.
@@ -77,7 +77,7 @@ host that GitLab runs. For example, an entry would look like this:
where `example.io` is the domain under which GitLab Pages will be served
and `192.0.2.1` is the IP address of your GitLab instance.
-> **Note:**
+NOTE: **Note:**
You should not use the GitLab domain to serve user pages. For more information
see the [security section](#security).
@@ -94,7 +94,7 @@ since that is needed in all configurations.
- [Wildcard DNS setup](#dns-configuration)
-URL scheme: `http://page.example.io`
+URL scheme: `http://<namespace>.example.io/<project_slug>`
This is the minimum setup that you can use Pages with. It is the base for all
other setups as described below. NGINX will proxy all requests to the daemon.
@@ -157,7 +157,7 @@ The Pages daemon doesn't listen to the outside world.
- [Wildcard DNS setup](#dns-configuration)
- Wildcard TLS certificate
-URL scheme: `https://page.example.io`
+URL scheme: `https://<namespace>.example.io/<project_slug>`
NGINX will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
@@ -221,7 +221,7 @@ that without TLS certificates.
- [Wildcard DNS setup](#dns-configuration)
- Secondary IP
-URL scheme: `http://page.example.io` and `http://domain.com`
+URL scheme: `http://<namespace>.example.io/<project_slug>` and `http://custom-domain.com`
In that case, the pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -286,7 +286,7 @@ world. Custom domains are supported, but no TLS.
- Wildcard TLS certificate
- Secondary IP
-URL scheme: `https://page.example.io` and `https://domain.com`
+URL scheme: `https://<namespace>.example.io/<project_slug>` and `https://custom-domain.com`
In that case, the pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -349,7 +349,7 @@ world. Custom domains and TLS are supported.
## NGINX caveats
->**Note:**
+NOTE: **Note:**
The following information applies only for installations from source.
Be extra careful when setting up the domain name in the NGINX configuration. You must
diff --git a/doc/administration/postgresql/external.md b/doc/administration/postgresql/external.md
index 6e2bbc0aae1..e2cfb95ec48 100644
--- a/doc/administration/postgresql/external.md
+++ b/doc/administration/postgresql/external.md
@@ -32,7 +32,7 @@ If you use a cloud-managed service, or provide your own PostgreSQL instance:
gitlab_rails['db_password'] = 'DB password'
```
- For more information on GitLab HA setups, refer to [configuring GitLab for HA](../high_availability/gitlab.md).
+ For more information on GitLab multi-node setups, refer to the [reference architectures](../reference_architectures/index.md).
1. Reconfigure for the changes to take effect:
diff --git a/doc/administration/postgresql/index.md b/doc/administration/postgresql/index.md
index 7e0a2f3cae1..2720d8e696b 100644
--- a/doc/administration/postgresql/index.md
+++ b/doc/administration/postgresql/index.md
@@ -13,7 +13,7 @@ There are essentially three setups to choose from.
This setup is for when you have installed GitLab using the
[Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee).
-All the tools that are needed like PostgreSQL, PgBouncer, Repmgr are bundled in
+All the tools that are needed, like PostgreSQL, PgBouncer, Patroni, and repmgr, are bundled in
the package, so you can use it to set up the whole PostgreSQL infrastructure (primary, replica).
[> Read how to set up PostgreSQL replication and failover using Omnibus GitLab](replication_and_failover.md)
diff --git a/doc/administration/postgresql/pgbouncer.md b/doc/administration/postgresql/pgbouncer.md
new file mode 100644
index 00000000000..9db3e017359
--- /dev/null
+++ b/doc/administration/postgresql/pgbouncer.md
@@ -0,0 +1,168 @@
+---
+type: reference
+---
+
+# Working with the bundled PgBouncer service **(PREMIUM ONLY)**
+
+[PgBouncer](http://www.pgbouncer.org/) is used to seamlessly migrate database
+connections between servers in a failover scenario. Additionally, it can be used
+in a non-fault-tolerant setup to pool connections, speeding up response time
+while reducing resource usage.
+
+GitLab Premium includes a bundled version of PgBouncer that can be managed
+through `/etc/gitlab/gitlab.rb`.
+
+## PgBouncer as part of a fault-tolerant GitLab installation
+
+This content has been moved to a [new location](replication_and_failover.md#configuring-the-pgbouncer-node).
+
+## PgBouncer as part of a non-fault-tolerant GitLab installation
+
+1. Generate `PGBOUNCER_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 pgbouncer`.
+
+1. Generate `SQL_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 gitlab`. We'll also need to enter the plaintext `SQL_USER_PASSWORD` later.
+
+1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`
+
+ ```ruby
+ postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
+ postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
+ postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the IP address on the node PostgreSQL should listen on
+ postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the pgbouncer node
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+ NOTE: **Note:**
+ If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
+
+1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
+
+ ```ruby
+ pgbouncer['enable'] = true
+ pgbouncer['databases'] = {
+ gitlabhq_production: {
+ host: 'DATABASE_HOST',
+ user: 'pgbouncer',
+ password: 'PGBOUNCER_USER_PASSWORD_HASH'
+ }
+ }
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+1. On the node running Puma, make sure the following is set in `/etc/gitlab/gitlab.rb`
+
+ ```ruby
+ gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
+ gitlab_rails['db_port'] = '6432'
+ gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
+
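+To quickly confirm the application can reach the database through PgBouncer,
+you can query the schema version through the Rails stack (a sketch using the
+bundled Rake tasks):
+
+```shell
+sudo gitlab-rake db:version
+```
+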
+## Enable Monitoring
+
+> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
+
+If you enable Monitoring, it must be enabled on **all** PgBouncer servers.
+
+1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
+
+ ```ruby
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Replace placeholders
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses of the Consul server nodes
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
+ ```
+
+1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+
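+To verify an exporter is serving metrics after the reconfigure, you can query
+it locally; for example, for the PgBouncer exporter:
+
+```shell
+curl "http://localhost:9188/metrics" | head
+```
+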
+## Administrative console
+
+As part of Omnibus GitLab, a command is provided to automatically connect to the
+PgBouncer administrative console. See the
+[PgBouncer documentation](https://www.pgbouncer.org/usage.html#admin-console)
+for detailed instructions on how to interact with the console.
+
+To start a session, run the following and provide the password for the `pgbouncer`
+user:
+
+```shell
+sudo gitlab-ctl pgb-console
+```
+
+To get some basic information about the instance:
+
+```shell
+pgbouncer=# show databases; show clients; show servers;
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+---------------------+-----------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | 127.0.0.1 | 5432 | gitlabhq_production | | 100 | 5 | | 0 | 1
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+(2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+------+-----------+---------------------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44590 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12444c0 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44592 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12447c0 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44594 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x1244940 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44706 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:16:31 | 0x1244ac0 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44708 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:15:15 | 0x1244c40 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44794 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:15:15 | 0x1244dc0 | | 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44798 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:16:31 | 0x1244f40 | | 0 |
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 44660 | 127.0.0.1 | 6432 | 2018-04-24 22:13:51 | 2018-04-24 22:17:12 | 0x1244640 | | 0 |
+(8 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+------+--------+---------------------+-------+-----------+------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ S | gitlab | gitlabhq_production | idle | 127.0.0.1 | 5432 | 127.0.0.1 | 35646 | 2018-04-24 22:15:15 | 2018-04-24 22:17:10 | 0x124dca0 | | 19980 |
+(1 row)
+```
+
+## Troubleshooting
+
+If you are experiencing issues connecting through PgBouncer, the first
+place to check is always the logs:
+
+```shell
+sudo gitlab-ctl tail pgbouncer
+```
+
+You can also check the output from `show databases` in the
+[administrative console](#administrative-console). In the output, you would expect
+to see values in the `host` field for the `gitlabhq_production` database, and
+`current_connections` should be greater than 1.
+
+### Message: `LOG: invalid CIDR mask in address`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-cidr-mask-in-address).
+
+### Message: `LOG: invalid IP mask "md5": Name or service not known`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-ip-mask-md5-name-or-service-not-known).
diff --git a/doc/administration/postgresql/replication_and_failover.md b/doc/administration/postgresql/replication_and_failover.md
index 5f550f09e5b..bc2af167e6c 100644
--- a/doc/administration/postgresql/replication_and_failover.md
+++ b/doc/administration/postgresql/replication_and_failover.md
@@ -29,6 +29,11 @@ You also need to take into consideration the underlying network topology, making
sure you have redundant connectivity between all Database and GitLab instances
to avoid the network becoming a single point of failure.
+NOTE: **Note:**
+As of GitLab 13.3, PostgreSQL 12 is shipped with Omnibus GitLab. Clustering for PostgreSQL 12 is only supported with
+Patroni. See the [Patroni](#patroni) section for further details. Support for repmgr will not be extended beyond
+PostgreSQL 11.
+
### Database node
Each database node runs three services:
@@ -97,7 +102,7 @@ This is why you will need:
When using default setup, minimum configuration requires:
-- `CONSUL_USERNAME`. Defaults to `gitlab-consul`
+- `CONSUL_USERNAME`. The default user for Omnibus GitLab is `gitlab-consul`.
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of Consul username/password pair.
Can be generated with:
@@ -140,7 +145,7 @@ server nodes.
We will need the following password information for the application's database user:
-- `POSTGRESQL_USERNAME`. Defaults to `gitlab`
+- `POSTGRESQL_USERNAME`. The default user for Omnibus GitLab is `gitlab`.
- `POSTGRESQL_USER_PASSWORD`. The password for the database user
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair.
Can be generated with:
@@ -153,7 +158,7 @@ We will need the following password information for the application's database u
When using default setup, minimum configuration requires:
-- `PGBOUNCER_USERNAME`. Defaults to `pgbouncer`
+- `PGBOUNCER_USERNAME`. The default user for Omnibus GitLab is `pgbouncer`.
- `PGBOUNCER_PASSWORD`. This is a password for PgBouncer service.
- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of PgBouncer username/password pair.
Can be generated with:
@@ -198,7 +203,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
### Configuring the Database nodes
-1. Make sure to [configure the Consul nodes](../high_availability/consul.md).
+1. Make sure to [configure the Consul nodes](../consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
1. On the master database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
@@ -305,6 +310,12 @@ Select one node as a primary node.
CREATE EXTENSION pg_trgm;
```
+1. Enable the `btree_gist` extension:
+
+ ```shell
+ CREATE EXTENSION btree_gist;
+ ```
+
1. Exit the database prompt by typing `\q` and Enter.
1. Verify the cluster is initialized with one node:
@@ -455,7 +466,7 @@ Check the [Troubleshooting section](#troubleshooting) before proceeding.
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
-1. [Enable monitoring](../high_availability/pgbouncer.md#enable-monitoring)
+1. [Enable monitoring](../postgresql/pgbouncer.md#enable-monitoring)
#### PgBouncer Checkpoint
@@ -736,9 +747,9 @@ consul['configuration'] = {
After deploying the configuration follow these steps:
-1. On `10.6.0.31`, our primary database
+1. On `10.6.0.31`, our primary database:
- Enable the `pg_trgm` extension
+ Enable the `pg_trgm` and `btree_gist` extensions:
```shell
gitlab-psql -d gitlabhq_production
@@ -746,33 +757,34 @@ After deploying the configuration follow these steps:
```shell
CREATE EXTENSION pg_trgm;
+ CREATE EXTENSION btree_gist;
```
-1. On `10.6.0.32`, our first standby database
+1. On `10.6.0.32`, our first standby database:
- Make this node a standby of the primary
+ Make this node a standby of the primary:
```shell
gitlab-ctl repmgr standby setup 10.6.0.21
```
-1. On `10.6.0.33`, our second standby database
+1. On `10.6.0.33`, our second standby database:
- Make this node a standby of the primary
+ Make this node a standby of the primary:
```shell
gitlab-ctl repmgr standby setup 10.6.0.21
```
-1. On `10.6.0.41`, our application server
+1. On `10.6.0.41`, our application server:
- Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`
+ Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
- Run database migrations
+ Run database migrations:
```shell
gitlab-rake gitlab:db:configure
@@ -783,7 +795,7 @@ After deploying the configuration follow these steps:
This example uses 3 PostgreSQL servers, and 1 application node (with PgBouncer setup alongside).
It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers into the same servers we use for PostgreSQL.
-The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](../high_availability/consul.md#outage-recovery) on the same set of machines.
+The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](../consul.md#outage-recovery) on the same set of machines.
In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
@@ -1075,7 +1087,7 @@ To restart either service, run `gitlab-ctl restart SERVICE`
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start afterwards with `gitlab-ctl start repmgrd`.
-On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../high_availability/consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+On the Consul server nodes, it is important to [restart the Consul service](../consul.md#restart-consul) in a controlled manner.
### `gitlab-ctl repmgr-check-master` command produces errors
@@ -1122,11 +1134,10 @@ postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)
### Issues with other components
-If you're running into an issue with a component not outlined here, be sure to check the troubleshooting section of their specific documentation page.
+If you're running into an issue with a component not outlined here, be sure to check the troubleshooting section of their specific documentation page:
-- [Consul](../high_availability/consul.md#troubleshooting)
+- [Consul](../consul.md#troubleshooting-consul)
- [PostgreSQL](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting)
-- [GitLab application](../high_availability/gitlab.md#troubleshooting)
## Patroni
@@ -1324,7 +1335,7 @@ You can switch an exiting database cluster to use Patroni instead of repmgr with
NOTE: **Note:**
Ensure that there is no `walsender` process running on the primary node.
- `ps aux | grep walsender` must not show any running process.
+ `ps aux | grep walsender` must not show any running process.
1. On the primary node, [configure Patroni](#configuring-patroni-cluster). Remove `repmgr` and any other
repmgr-specific configuration. Also remove any configuration that is related to PostgreSQL replication.
diff --git a/doc/administration/raketasks/check.md b/doc/administration/raketasks/check.md
index da5caf3966f..15014fffd01 100644
--- a/doc/administration/raketasks/check.md
+++ b/doc/administration/raketasks/check.md
@@ -135,3 +135,22 @@ The LDAP check Rake task will test the bind DN and password credentials
(if configured) and will list a sample of LDAP users. This task is also
executed as part of the `gitlab:check` task, but can run independently.
See [LDAP Rake Tasks - LDAP Check](ldap.md#check) for details.
+
+## Troubleshooting
+
+The following are solutions to problems you might discover using the Rake tasks documented
+above.
+
+### Dangling commits
+
+`gitlab:git:fsck` can find dangling commits. To fix them, try
+[manually triggering housekeeping](../housekeeping.md#manual-housekeeping)
+for the affected project(s).
+
+If the issue persists, try triggering `gc` via the
+[Rails Console](../troubleshooting/navigating_gitlab_via_rails_console.md#starting-a-rails-console-session):
+
+```ruby
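+# Replace "project-name" with the path of the affected project;
+# this triggers a manual `gc` run through the housekeeping service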
+p = Project.find_by_path("project-name")
+Projects::HousekeepingService.new(p, :gc).execute
+```
diff --git a/doc/administration/raketasks/maintenance.md b/doc/administration/raketasks/maintenance.md
index 19781b6a5db..553afba78e3 100644
--- a/doc/administration/raketasks/maintenance.md
+++ b/doc/administration/raketasks/maintenance.md
@@ -23,32 +23,39 @@ Example output:
```plaintext
System information
-System: Debian 7.8
-Current User: git
-Using RVM: no
-Ruby Version: 2.1.5p273
-Gem Version: 2.4.3
-Bundler Version: 1.7.6
-Rake Version: 10.3.2
-Redis Version: 3.2.5
-Sidekiq Version: 2.17.8
+System: Ubuntu 20.04
+Proxy: no
+Current User: git
+Using RVM: no
+Ruby Version: 2.6.6p146
+Gem Version: 2.7.10
+Bundler Version:1.17.3
+Rake Version: 12.3.3
+Redis Version: 5.0.9
+Git Version: 2.27.0
+Sidekiq Version:5.2.9
+Go Version: unknown
GitLab information
-Version: 7.7.1
-Revision: 41ab9e1
-Directory: /home/git/gitlab
-DB Adapter: postgresql
-URL: https://gitlab.example.com
-HTTP Clone URL: https://gitlab.example.com/some-project.git
-SSH Clone URL: git@gitlab.example.com:some-project.git
-Using LDAP: no
-Using Omniauth: no
+Version: 13.2.2-ee
+Revision: 618883a1f9d
+Directory: /opt/gitlab/embedded/service/gitlab-rails
+DB Adapter: PostgreSQL
+DB Version: 11.7
+URL: http://gitlab.example.com
+HTTP Clone URL: http://gitlab.example.com/some-group/some-project.git
+SSH Clone URL: git@gitlab.example.com:some-group/some-project.git
+Elasticsearch: no
+Geo: no
+Using LDAP: no
+Using Omniauth: yes
+Omniauth Providers:
GitLab Shell
-Version: 2.4.1
-Repositories: /home/git/repositories/
-Hooks: /home/git/gitlab-shell/hooks/
-Git: /usr/bin/git
+Version: 13.3.0
+Repository storage paths:
+- default: /var/opt/gitlab/git-data/repositories
+GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
```
## Show GitLab license information **(STARTER ONLY)**
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index b807e03b01f..51e0e2e46b6 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -46,6 +46,6 @@ Note the following:
compatible as described in the [Version history](../../user/project/settings/import_export.md#version-history).
- The project import option must be enabled in
application settings (`/admin/application_settings/general`) under **Import sources**, which is available
- under **{admin}** **Admin Area >** **{settings}** **Settings > Visibility and access controls**.
+ under **Admin Area > Settings > Visibility and access controls**.
- The exports are stored in a temporary [shared directory](../../development/shared_files.md)
and are deleted every 24 hours by a specific worker.
diff --git a/doc/administration/raketasks/storage.md b/doc/administration/raketasks/storage.md
index 74fd2c2ebb6..a97bff83290 100644
--- a/doc/administration/raketasks/storage.md
+++ b/doc/administration/raketasks/storage.md
@@ -100,7 +100,7 @@ to project IDs 50 to 100 in an Omnibus GitLab installation:
sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
```
-You can monitor the progress in the **{admin}** **Admin Area > Monitoring > Background Jobs** page.
+You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
There is a specific queue you can watch to see how long it will take to finish:
`hashed_storage:hashed_storage_project_migrate`.
@@ -150,7 +150,7 @@ to project IDs 50 to 100 in an Omnibus GitLab installation:
sudo gitlab-rake gitlab:storage:rollback_to_legacy ID_FROM=50 ID_TO=100
```
-You can monitor the progress in the **{admin}** **Admin Area > Monitoring > Background Jobs** page.
+You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
On the **Queues** tab, you can watch the `hashed_storage:hashed_storage_project_rollback` queue to see how long the process will take to finish.
After it reaches zero, you can confirm every project has been rolled back by running the commands below.
diff --git a/doc/administration/raketasks/uploads/migrate.md b/doc/administration/raketasks/uploads/migrate.md
index b164c4e744d..8c020e91a15 100644
--- a/doc/administration/raketasks/uploads/migrate.md
+++ b/doc/administration/raketasks/uploads/migrate.md
@@ -141,7 +141,15 @@ keeping in mind the task name in this case is `gitlab:uploads:migrate_to_local`.
To migrate uploads from object storage to local storage:
-1. Disable both `direct_upload` and `background_upload` under `uploads` settings in `gitlab.rb`.
+1. Disable both `direct_upload` and `background_upload` under `uploads` settings in `gitlab.rb`:
+
+ ```ruby
+ gitlab_rails['uploads_object_store_direct_upload'] = false
+ gitlab_rails['uploads_object_store_background_upload'] = false
+ ```
+
+ Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
1. Run the Rake task:
**Omnibus Installation**
diff --git a/doc/administration/redis/replication_and_failover.md b/doc/administration/redis/replication_and_failover.md
index ac31b909c89..ca041adb1d8 100644
--- a/doc/administration/redis/replication_and_failover.md
+++ b/doc/administration/redis/replication_and_failover.md
@@ -466,7 +466,7 @@ While it doesn't require a list of all Sentinel nodes, in case of a failure,
it needs to access at least one of those listed.
NOTE: **Note:**
-The following steps should be performed in the [GitLab application server](../high_availability/gitlab.md)
+The following steps should be performed on the GitLab application server
which ideally should not have Redis or Sentinels on it for an HA setup.
1. SSH into the server where the GitLab application is installed.
@@ -637,47 +637,63 @@ lives easier. If you want to know what happens underneath keep reading.
### Running multiple Redis clusters
-GitLab supports running [separate Redis clusters for different persistent
-classes](https://docs.gitlab.com/omnibus/settings/redis.html#running-with-multiple-redis-instances):
-cache, queues, and shared_state. To make this work with Sentinel:
+Omnibus GitLab supports running separate Redis and Sentinel instances for different
+persistence classes.
-1. Set the appropriate variable in `/etc/gitlab/gitlab.rb` for each instance you are using:
+| Class | Purpose |
+| -------------- | ------------------------------------------------ |
+| `cache` | Store cached data. |
+| `queues` | Store Sidekiq background jobs. |
+| `shared_state` | Store session-related and other persistent data. |
+| `actioncable` | Pub/Sub queue backend for ActionCable. |
+
+To make this work with Sentinel:
+
+1. [Configure the different Redis/Sentinels](#configuring-redis) instances based on your needs.
+1. For each Rails application instance, edit its `/etc/gitlab/gitlab.rb` file:
```ruby
gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL
gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL
gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL
- ```
-
- **Note**: Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_PRIMARY_NAME`
+ gitlab_rails['redis_actioncable_instance'] = REDIS_ACTIONCABLE_URL
- 1. PASSWORD is the plaintext password for the Redis instance
- 1. SENTINEL_PRIMARY_NAME is the Sentinel primary name (e.g. `gitlab-redis-cache`)
-
-1. Include an array of hashes with host/port combinations, such as the following:
-
- ```ruby
+ # Configure the Sentinels
gitlab_rails['redis_cache_sentinels'] = [
- { host: REDIS_CACHE_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_CACHE_SENTINEL_HOST2, port: PORT2 }
+ { host: REDIS_CACHE_SENTINEL_HOST, port: 26379 },
+ { host: REDIS_CACHE_SENTINEL_HOST2, port: 26379 }
]
gitlab_rails['redis_queues_sentinels'] = [
- { host: REDIS_QUEUES_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_QUEUES_SENTINEL_HOST2, port: PORT2 }
+ { host: REDIS_QUEUES_SENTINEL_HOST, port: 26379 },
+ { host: REDIS_QUEUES_SENTINEL_HOST2, port: 26379 }
]
gitlab_rails['redis_shared_state_sentinels'] = [
- { host: SHARED_STATE_SENTINEL_HOST, port: PORT1 },
- { host: SHARED_STATE_SENTINEL_HOST2, port: PORT2 }
+ { host: SHARED_STATE_SENTINEL_HOST, port: 26379 },
+ { host: SHARED_STATE_SENTINEL_HOST2, port: 26379 }
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ { host: ACTIONCABLE_SENTINEL_HOST, port: 26379 },
+ { host: ACTIONCABLE_SENTINEL_HOST2, port: 26379 }
]
```
-1. Note that for each persistence class, GitLab will default to using the
- configuration specified in `gitlab_rails['redis_sentinels']` unless
- overridden by the settings above.
-1. Be sure to include BOTH configuration options for each persistent classes. For example,
- if you choose to configure a cache instance, you must specify both `gitlab_rails['redis_cache_instance']`
- and `gitlab_rails['redis_cache_sentinels']` for GitLab to generate the proper configuration files.
-1. Run `gitlab-ctl reconfigure`
+ Note that:
+
+ - Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_PRIMARY_NAME`, where:
+ - `PASSWORD` is the plaintext password for the Redis instance.
+ - `SENTINEL_PRIMARY_NAME` is the Sentinel primary name set with `redis['master_name']`,
+ for example `gitlab-redis-cache`.
+
+1. Save the file and reconfigure GitLab for the change to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+NOTE: **Note:**
+For each persistence class, GitLab will default to using the
+configuration specified in `gitlab_rails['redis_sentinels']` unless
+overridden by the previously described settings.
### Control running services
@@ -736,6 +752,5 @@ Read more:
1. [Reference architectures](../reference_architectures/index.md)
1. [Configure the database](../postgresql/replication_and_failover.md)
-1. [Configure NFS](../high_availability/nfs.md)
-1. [Configure the GitLab application servers](../high_availability/gitlab.md)
-1. [Configure the load balancers](../high_availability/load_balancer.md)
+1. [Configure NFS](../nfs.md)
+1. [Configure the load balancers](../load_balancer.md)
diff --git a/doc/administration/redis/replication_and_failover_external.md b/doc/administration/redis/replication_and_failover_external.md
index 244b44dd76a..d530e6a8fd7 100644
--- a/doc/administration/redis/replication_and_failover_external.md
+++ b/doc/administration/redis/replication_and_failover_external.md
@@ -19,13 +19,7 @@ The following are the requirements for providing your own Redis instance:
- Redis version 5.0 or higher is recommended, as this is what ships with
Omnibus GitLab packages starting with GitLab 12.7.
-- Support for Redis 3.2 is deprecated with GitLab 12.10 and will be completely
- removed in GitLab 13.0.
-- GitLab 12.0 and later requires Redis version 3.2 or higher. Older Redis
- versions do not support an optional count argument to SPOP which is now
- required for [Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
-- In addition, if Redis 4 or later is available, GitLab makes use of certain
- commands like `UNLINK` and `USAGE` which were introduced only in Redis 4.
+- GitLab 13.0 and later requires Redis version 4.0 or higher.
- Standalone Redis or Redis high availability with Sentinel are supported. Redis
Cluster is not supported.
- Managed Redis from cloud providers such as AWS ElastiCache will work. If these
@@ -221,7 +215,7 @@ the correct credentials for the Sentinel nodes.
While it doesn't require a list of all Sentinel nodes, in case of a failure,
it needs to access at least one of those listed.
-The following steps should be performed in the [GitLab application server](../high_availability/gitlab.md)
+The following steps should be performed on the GitLab application server,
+which ideally should not have Redis or Sentinels on the same machine:
1. Edit `/home/git/gitlab/config/resque.yml` following the example in
diff --git a/doc/administration/redis/standalone.md b/doc/administration/redis/standalone.md
index 12e932dbc5e..ea5a7850244 100644
--- a/doc/administration/redis/standalone.md
+++ b/doc/administration/redis/standalone.md
@@ -15,7 +15,7 @@ is generally stable and can handle many requests, so it is an acceptable
trade off to have only a single instance. See the [reference architectures](../reference_architectures/index.md)
page for an overview of GitLab scaling options.
-## Set up a standalone Redis instance
+## Set up the standalone Redis instance
The steps below are the minimum necessary to configure a Redis server with
Omnibus GitLab:
@@ -28,36 +28,49 @@ Omnibus GitLab:
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
```ruby
- ## Enable Redis
- redis['enable'] = true
-
- ## Disable all other services
- sidekiq['enable'] = false
- gitlab_workhorse['enable'] = false
- puma['enable'] = false
- postgresql['enable'] = false
- nginx['enable'] = false
- prometheus['enable'] = false
- alertmanager['enable'] = false
- pgbouncer_exporter['enable'] = false
- gitlab_exporter['enable'] = false
- gitaly['enable'] = false
+ ## Enable Redis and disable all other services
+ ## https://docs.gitlab.com/omnibus/roles/
+ roles ['redis_master_role']
+ ## Redis configuration
redis['bind'] = '0.0.0.0'
redis['port'] = 6379
- redis['password'] = 'SECRET_PASSWORD_HERE'
+ redis['password'] = '<redis_password>'
- gitlab_rails['enable'] = false
+ ## Disable automatic database migrations
+ ## Only the primary GitLab application server should handle migrations
+ gitlab_rails['auto_migrate'] = false
```
1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
1. Note the Redis node's IP address or hostname, port, and
- Redis password. These will be necessary when configuring the GitLab
- application servers later.
+ Redis password. These will be necessary when [configuring the GitLab
+ application servers](#set-up-the-gitlab-rails-application-instance).
[Advanced configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
are supported and can be added if needed.
+## Set up the GitLab Rails application instance
+
+On the instance where GitLab is installed:
+
+1. Edit the `/etc/gitlab/gitlab.rb` file and add the following contents:
+
+ ```ruby
+ ## Disable Redis
+ redis['enable'] = false
+
+ gitlab_rails['redis_host'] = 'redis.example.com'
+ gitlab_rails['redis_port'] = 6379
+
+ ## Required if Redis authentication is configured on the Redis node
+ gitlab_rails['redis_password'] = '<redis_password>'
+ ```
+
+1. Save your changes to `/etc/gitlab/gitlab.rb`.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
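+To verify the Rails node can reach the Redis server (a quick check using the
+`redis-cli` bundled with Omnibus GitLab; the host and password are the example
+values used above):
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h redis.example.com -a '<redis_password>' ping
+# Expected reply: PONG
+```
+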
## Troubleshooting
See the [Redis troubleshooting guide](troubleshooting.md).
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index 5367021af4e..fe2dad41066 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -1,76 +1,2047 @@
-# Reference architecture: up to 10,000 users
+---
+reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
-This page describes GitLab reference architecture for up to 10,000 users.
-For a full list of reference architectures, see
+# Reference architecture: up to 10,000 users **(PREMIUM ONLY)**
+
+This page describes the GitLab reference architecture for up to 10,000 users. For a
+full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
> - **Supported users (approximate):** 10,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
-
-| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|----------------|
-| GitLab Rails ([1](#footnotes)) | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
-| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge | D16s v3 |
-| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
-| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
-| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
-| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
-| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
-| Object Storage ([4](#footnotes)) | - | - | - | - | - |
-| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
-| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
-| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
-| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+> - **High Availability:** Yes
+> - **Test requests per second (RPS) rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
+
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|--------------------------------------------|-------------|-------------------------|-----------------|-------------|----------|
+| External load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Consul | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Redis - Cache | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis - Queues / Shared State | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis Sentinel - Cache | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Redis Sentinel - Queues / Shared State | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Gitaly | 2 (minimum) | 16 vCPU, 60GB memory | n1-standard-16 | m5.4xlarge | D16s v3 |
+| Sidekiq | 4 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| GitLab Rails | 3 | 32 vCPU, 28.8GB memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
+| Monitoring node | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects (such as LFS, Uploads, or Artifacts), an
+[object storage service](#configure-the-object-storage) is recommended instead
+of NFS where possible, due to better performance and availability. Since this
+doesn't require a node to be set up, *Object Storage* is noted as not
+applicable (n/a) in the previous table.
+
+## Setup components
+
+To set up GitLab and its components to accommodate up to 10,000 users:
+
+1. [Configure the external load balancing node](#configure-the-external-load-balancer)
+   that will handle the load balancing of the three GitLab application nodes.
+1. [Configure Consul](#configure-consul).
+1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
+1. [Configure PgBouncer](#configure-pgbouncer).
+1. [Configure the internal load balancing node](#configure-the-internal-load-balancer)
+1. [Configure Redis](#configure-redis).
+1. [Configure Gitaly](#configure-gitaly),
+ which provides access to the Git repositories.
+1. [Configure Sidekiq](#configure-sidekiq).
+1. [Configure the main GitLab Rails application](#configure-gitlab-rails)
+ to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, Git
+ over HTTP/SSH).
+1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab environment.
+1. [Configure the Object Storage](#configure-the-object-storage)
+ used for shared data objects.
+1. [Configure NFS (Optional)](#configure-nfs-optional)
+   to have a shared disk storage service as an alternative to Gitaly and/or Object Storage (although
+   not recommended). NFS is required for GitLab Pages; you can skip this step if you're not using
+   that feature.
+
+We start with all servers on the same 10.6.0.0/24 private network range; they
+can connect to each other freely on those addresses.
+
+Here is a list and description of each machine and the assigned IP:
+
+- `10.6.0.10`: External Load Balancer
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+- `10.6.0.40`: Internal Load Balancer
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+- `10.6.0.91`: Gitaly 1
+- `10.6.0.92`: Gitaly 2
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+- `10.6.0.121`: Prometheus
+
+## Configure the external load balancer
+
+NOTE: **Note:**
+This architecture has been tested and validated with [HAProxy](https://www.haproxy.org/)
+as the load balancer. Although other load balancers with similar feature sets
+could also be used, those load balancers have not been validated.
+
+In an active/active GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics on which load balancer to use
+or the exact configuration are beyond the scope of GitLab documentation. We hope
+that if you're managing multi-node systems like GitLab, you have a load balancer of
+choice already. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix NetScaler. This documentation will outline what ports and protocols
+you need to use with GitLab.
+
+The next question is how you will handle SSL in your environment.
+There are several different options:
+
+- [The application node terminates SSL](#application-node-terminates-ssl).
+- [The load balancer terminates SSL without backend SSL](#load-balancer-terminates-ssl-without-backend-ssl)
+ and communication is not secure between the load balancer and the application node.
+- [The load balancer terminates SSL with backend SSL](#load-balancer-terminates-ssl-with-backend-ssl)
+ and communication is *secure* between the load balancer and the application node.
+
+### Application node terminates SSL
+
+Configure your load balancer to pass connections on port 443 as `TCP` rather
+than `HTTP(S)` protocol. This will pass the connection to the application node's
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See the [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Load balancer terminates SSL without backend SSL
+
+Configure your load balancer to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer and GitLab will not be secure,
+there is some additional configuration needed. See the
+[NGINX proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
+
+### Load balancer terminates SSL with backend SSL
+
+Configure your load balancer(s) to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer(s) will be responsible for managing the SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Ports
+
+The basic ports to be used are shown in the table below.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
+
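+A quick way to confirm the load balancer is forwarding these basic ports (a
+sketch; `lb.gitlab.example.com` is a placeholder for your load balancer's
+address):
+
+```shell
+for port in 80 443 22; do nc -zv lb.gitlab.example.com "$port"; done
+```
+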
+If you're using GitLab Pages with custom domain support, you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](../pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+#### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
+separate from the other GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Consul
+
+The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+
+NOTE: **Note:**
+The configuration processes for the other servers in your reference architecture will
+use the `/etc/gitlab/gitlab-secrets.json` file from your Consul server to connect
+with the other servers.
+
+To configure Consul:
+
+1. SSH into the server that will host Consul.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+   - Make sure you select the correct Omnibus package, matching the version
+     the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['consul_role']
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul nodes, and
+ make sure you set up the correct IPs.
+
+NOTE: **Note:**
+A Consul leader is elected when the provisioning of the third Consul server is
+complete. Viewing the Consul logs with `sudo gitlab-ctl tail consul` displays
+`...[INFO] consul: New leader elected: ...`.
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PostgreSQL
+
+In this section, you'll be guided through configuring an external PostgreSQL database
+to be used with GitLab.
+
+### Provide your own PostgreSQL instance
+
+If you're hosting GitLab on a cloud provider, you can optionally use a
+managed service for PostgreSQL. For example, AWS offers a managed Relational
+Database Service (RDS) that runs PostgreSQL.
+
+If you use a cloud-managed service, or provide your own PostgreSQL:
+
+1. Set up PostgreSQL according to the
+ [database requirements document](../../install/requirements.md#database).
+1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
+ needs privileges to create the `gitlabhq_production` database.
+1. Configure the GitLab application servers with the appropriate details.
+ This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
+
+### Standalone PostgreSQL using Omnibus GitLab
+
+The following IPs will be used as an example:
+
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+
+First, make sure to [install](https://about.gitlab.com/install/)
+the Linux GitLab package **on each node**. Following the steps on the page,
+install the necessary dependencies in step 1, and add the
+GitLab package repository in step 2. When installing the GitLab package,
+do not supply the `EXTERNAL_URL` value.
+
+#### PostgreSQL primary node
+
+1. SSH into the PostgreSQL primary node.
+1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
+ username of `gitlab` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<postgresql_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
+
+1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
+ username of `pgbouncer` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<pgbouncer_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ ```
+
+1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
+ username of `gitlab-consul` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<consul_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab-consul
+ ```
+
+1. On the primary database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
+
+ ```ruby
+   # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+   # Replace XXX.XXX.XXX.XXX/YY with your network address
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ #
+ # END user configuration
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL secondary nodes
+
+1. On both the secondary nodes, add the same configuration specified above for the primary node,
+   with an additional setting (`repmgr['master_on_initialization'] = false`). This informs `gitlab-ctl`
+   that they are initially standby nodes, so there is no need to attempt to register them as a primary node:
+
+ ```ruby
+   # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # Specify if a node should attempt to be primary on initialization.
+ repmgr['master_on_initialization'] = false
+
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace with your network addresses
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL post-configuration
+
+SSH into the **primary node**:
+
+1. Open a database prompt:
+
+ ```shell
+ gitlab-psql -d gitlabhq_production
+ ```
+
+1. Make sure the `pg_trgm` extension is enabled (it might already be):
+
+ ```shell
+ CREATE EXTENSION pg_trgm;
+ ```
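+
+   Optionally, you can confirm the extension is installed at the same prompt:
+
+   ```shell
+   \dx pg_trgm
+   ```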
+
+1. Exit the database prompt by typing `\q` and Enter.
+
+1. Verify the cluster is initialized with one node:
+
+ ```shell
+ gitlab-ctl repmgr cluster show
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
+
+1. Note down the hostname or IP address in the connection string: `host=HOSTNAME`. We will
+   refer to the hostname in the next section as `<primary_node_name>`. If the value
+   is not an IP address, it will need to be a resolvable name (via DNS or
+   `/etc/hosts`).
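+
+   A hypothetical example of making the primary's hostname resolvable through
+   `/etc/hosts` (only needed if DNS does not already resolve it; run this on
+   each node that must reach the primary):
+
+   ```shell
+   echo '10.6.0.21 <primary_node_name>' | sudo tee -a /etc/hosts
+   ```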
+
+SSH into the **secondary node**:
+
+1. Set up the repmgr standby:
+
+ ```shell
+ gitlab-ctl repmgr standby setup <primary_node_name>
+ ```
+
+   Do note that this will remove the existing data on the node. The command
+   waits 30 seconds before proceeding; rerun it with the `-w` option to skip the wait.
+
+ The output should be similar to the following:
+
+ ```console
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
+
+Before moving on, make sure the databases are configured correctly. Run the
+following command on the **primary** node to verify that replication is working
+properly and the secondary nodes appear in the cluster:
+
+```shell
+gitlab-ctl repmgr cluster show
+```
+
+The output should be similar to the following:
+
+```plaintext
+Role | Name | Upstream | Connection String
+----------+---------|-----------|------------------------------------------------
+* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+```
+
+If the `Role` column for any node says `FAILED`, check the
+[Troubleshooting section](troubleshooting.md) before proceeding.
+
+Also, check that the `repmgr-check-master` command works successfully on each node:
+
+```shell
+su - gitlab-consul
+gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
+```
+
+This command relies on exit codes to tell Consul whether a particular node is a master
+or secondary. The most important thing here is that the command completes without errors;
+if there are errors, it's most likely due to incorrect `gitlab-consul` database user permissions.
+Check the [Troubleshooting section](troubleshooting.md) before proceeding.
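+
+For example, to see the exit code Consul acts on directly (a sketch, run as the
+`gitlab-consul` user as above):
+
+```shell
+gitlab-ctl repmgr-check-master
+echo $?   # expected to be 0 on the master and non-zero on a standby
+```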
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PgBouncer
+
+Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
+The following IPs will be used as an example:
+
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`, and replace
+ `<consul_password_hash>` and `<pgbouncer_password_hash>` with the
+ password hashes you [set up previously](#postgresql-primary-node):
+
+ ```ruby
+ # Disable all components except Pgbouncer and Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure PgBouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: '<consul_password_hash>'
+ },
+ 'pgbouncer': {
+ password: '<pgbouncer_password_hash>'
+ }
+ }
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+ consul['enable'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+ NOTE: **Note:**
+ If an error `execute[generate databases.ini]` occurs, this is due to an existing
+ [known issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4713).
+ It will be resolved when you run a second `reconfigure` after the next step.
+
+1. Create a `.pgpass` file so Consul is able to
+ reload PgBouncer. Enter the PgBouncer password twice when asked:
+
+ ```shell
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) once again
+ to resolve any potential errors from the previous steps.
+1. Ensure each node is talking to the current primary:
+
+ ```shell
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+1. Once the console prompt is available, run the following queries:
+
+   ```sql
+   show databases ; show clients ;
+   ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the internal load balancer
+
+If you're running more than one PgBouncer node as recommended, you'll need to set up
+an internal TCP load balancer to distribute traffic across them.
+
+The following IP will be used as an example:
+
+- `10.6.0.40`: Internal Load Balancer
+
+Here's how you could do it with [HAProxy](https://www.haproxy.org/):
+
+```plaintext
+global
+ log /dev/log local0
+ log localhost local1 notice
+ log stdout format raw local0
+
+defaults
+ log global
+ default-server inter 10s fall 3 rise 2
+ balance leastconn
+
+frontend internal-pgbouncer-tcp-in
+ bind *:6432
+ mode tcp
+ option tcplog
+
+ default_backend pgbouncer
+
+backend pgbouncer
+ mode tcp
+ option tcp-check
+
+    server pgbouncer1 10.6.0.31:6432 check
+    server pgbouncer2 10.6.0.32:6432 check
+    server pgbouncer3 10.6.0.33:6432 check
+```
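+
+Once the load balancer is in place, you can sanity-check connectivity through it
+with the bundled `psql` client. This is a sketch using the example load balancer
+IP; you'll be prompted for the plaintext `pgbouncer` password:
+
+```shell
+/opt/gitlab/embedded/bin/psql -h 10.6.0.40 -p 6432 -U pgbouncer -d pgbouncer -c 'show databases;'
+```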
+
+Refer to your preferred load balancer's documentation for further guidance.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Redis
+
+Redis can be used in a scalable environment using a **Primary** x **Replica**
+topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch for failures and automatically
+start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+The requirements for a Redis setup are the following:
+
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks
+ ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
+ using a firewall.
+
+In this section, you'll be guided through configuring two external Redis clusters
+to be used with GitLab. The following IPs will be used as an example:
+
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+NOTE: **Providing your own Redis instance:**
+Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+services support high availability, be sure it is **not** the Redis Cluster type.
+Redis version 5.0 or higher is required, as this is what ships with
+Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
+do not support an optional count argument to `SPOP`, which is now required for
+[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+Note the Redis node's IP address or hostname, port, and password (if required).
+These will be necessary when configuring the
+[GitLab application servers](#configure-gitlab-rails) later.
+
+### Configure the Redis and Sentinel Cache cluster
+
+This is the section where we install and set up the new Redis Cache instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Cache node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.51'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
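+
+To verify the primary is up and accepting the configured password, you can ping
+it with the bundled `redis-cli`. This is a sketch using the example IP and the
+password placeholder; the same check applies later to the Queues primary at
+`10.6.0.61` with the second cluster's password:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.51 -a 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER' ping
+# Expected reply: PONG
+```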
+
+#### Configure the replica Redis Cache nodes
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.52'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-cache-nodes), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
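+
+To confirm replication is established, you can query the primary's replication
+status. This is a sketch with the example IP; expect `role:master` and one
+`slaveN` line per replica:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.51 -a 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER' info replication
+```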
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Cache nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+
+To configure the Sentinel Cache server:
+
+1. SSH into the server that will host Consul/Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     that the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-cache'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.71'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels can no
+   ##    longer talk with the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree about the primary being down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after x milliseconds.
+ #sentinel['down_after_milliseconds'] = 10000
+
+   ## Specifies the failover timeout in milliseconds. It is used in many ways:
+   ##
+   ## - The time needed to restart a failover after a previous failover was
+   ##   already tried against the same primary by a given Sentinel is two
+   ##   times the failover timeout.
+   ##
+   ## - The time needed for a replica replicating to the wrong primary according
+   ##   to a Sentinel's current configuration to be forced to replicate
+   ##   with the right primary is exactly the failover timeout (counting since
+   ##   the moment a Sentinel detected the misconfiguration).
+   ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE not yet
+   ##   acknowledged by the promoted replica).
+   ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+   ##   the exact parallel-syncs progression as specified.
+ #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul/Sentinel nodes, and
+ make sure you set up the correct IPs.
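+
+To check that the Sentinels agree on the current primary, you can ask any of
+them directly. This is a sketch using the example Sentinel IP; the same check
+works for the Queues cluster configured later, with `gitlab-redis-persistent`
+as the name:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.71 -p 26379 sentinel get-master-addr-by-name gitlab-redis-cache
+# Expected: the primary's IP and port, for example 10.6.0.51 and 6379
+```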
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the Redis and Sentinel Queues cluster
+
+This is the section where we install and set up the new Redis Queues instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Queues node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ ```
+
+1. Only the primary GitLab application server should handle migrations. To
+ prevent database migrations from running on upgrade, add the following
+ configuration to your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+#### Configure the replica Redis Queues nodes
+
+1. SSH into the **replica** Redis Queue server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.62'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-queues-nodes), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Queues nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+To configure the Sentinel Queues server:
+
+1. SSH into the server that will host Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     that the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-persistent'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.81'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels can no
+   ##    longer talk with the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree about the primary being down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after x milliseconds.
+ #sentinel['down_after_milliseconds'] = 10000
+
+   ## Specifies the failover timeout in milliseconds. It is used in many ways:
+   ##
+   ## - The time needed to restart a failover after a previous failover was
+   ##   already tried against the same primary by a given Sentinel is two
+   ##   times the failover timeout.
+   ##
+   ## - The time needed for a replica replicating to the wrong primary according
+   ##   to a Sentinel's current configuration to be forced to replicate
+   ##   with the right primary is exactly the failover timeout (counting since
+   ##   the moment a Sentinel detected the misconfiguration).
+   ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE not yet
+   ##   acknowledged by the promoted replica).
+   ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+   ##   the exact parallel-syncs progression as specified.
+ #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. To prevent database migrations from running on upgrade, run:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+ Only the primary GitLab application server should handle migrations.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Sentinel nodes, and
+ make sure you set up the correct IPs.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Gitaly
+
+Deploying Gitaly on its own server can benefit GitLab installations that are
+larger than a single machine.
+
+The Gitaly node requirements are dependent on customer data, specifically the number of
+projects and their repository sizes. Two nodes are recommended as an absolute minimum.
+Each Gitaly node should store no more than 5TB of data and have the number of
+[`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) set to 20% of available CPUs.
+Additional nodes should be considered in conjunction with a review of expected
+data size and spread based on the recommendations above.
+
+It is also strongly recommended that all Gitaly nodes be set up with SSD disks with
+a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for writes,
+as Gitaly has heavy I/O. These IOPS values are a starting point only; over time you
+may need to adjust them higher or lower depending on the scale of your environment's workload.
+If you're running the environment on a cloud provider, you may need to refer to
+their documentation about configuring IOPS correctly.
+
+Some things to note:
+
+- The GitLab Rails application shards repositories into [repository storages](../repository_storage_paths.md).
+- A Gitaly server can host one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+ traffic is unencrypted by default. The use of a firewall is highly recommended
+ to restrict access to the Gitaly server. Another option is to
+ [use TLS](#gitaly-tls-support).
+
+TIP: **Tip:**
+For more information about Gitaly's history and network architecture see the
+[standalone Gitaly documentation](../gitaly/index.md).
+
+NOTE: **Note:**
+The token referred to throughout the Gitaly documentation is
+just an arbitrary password selected by the administrator. It is unrelated to
+tokens created for the GitLab API or other similar web API tokens.
+
+Below we describe how to configure two Gitaly servers, with IPs and
+domain names:
+
+- `10.6.0.91`: Gitaly 1 (`gitaly1.internal`)
+- `10.6.0.92`: Gitaly 2 (`gitaly2.internal`)
+
+The secret token is assumed to be `gitalysecret`, and it's assumed that
+your GitLab installation has three repository storages:
+
+- `default` on Gitaly 1
+- `storage1` on Gitaly 1
+- `storage2` on Gitaly 2
+
+On each node:
+
+1. [Download/Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **without** providing the `EXTERNAL_URL` value.
+1. Edit `/etc/gitlab/gitlab.rb` to configure storage paths, enable
+ the network listener and configure the token:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the GitLab Rails application setup
+ gitaly['auth_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+
+   # If you run a separate monitoring node you can disable these services
+ alertmanager['enable'] = false
+ prometheus['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+ # Comment out following line if you only want to support TLS connections
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+ ```
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+ 1. On `gitaly1.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => {
+ 'path' => '/var/opt/gitlab/git-data'
+ },
+ 'storage1' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ 1. On `gitaly2.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'storage2' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
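+
+As a quick sanity check that Gitaly came up and is listening on the configured
+address, you can inspect the listener and tail the logs. This is a sketch; `ss`
+ships with most modern distributions:
+
+```shell
+sudo ss -tlnp | grep 8075
+sudo gitlab-ctl tail gitaly   # watch for errors on startup
+```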
+
+### Gitaly TLS support
+
+Gitaly supports TLS encryption. To communicate
+with a Gitaly instance that listens for secure connections, you will need to use the `tls://` URL
+scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
+
+You will need to bring your own certificates as this isn't provided automatically.
+The certificate, or its certificate authority, must be installed on all Gitaly
+nodes (including the Gitaly node using the certificate) and on all client nodes
+that communicate with it following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
+unencrypted listening address `listen_addr` and an encrypted listening
+address `tls_listen_addr` at the same time. This allows you to do a
+gradual transition from unencrypted to encrypted traffic, if necessary.
+
+To configure Gitaly with TLS:
+
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+   ```shell
+   sudo mkdir -p /etc/gitlab/ssl
+   sudo chmod 755 /etc/gitlab/ssl
+   sudo cp key.pem cert.pem /etc/gitlab/ssl/
+   sudo chmod 644 /etc/gitlab/ssl/key.pem /etc/gitlab/ssl/cert.pem
+   ```
+
+1. Copy the cert to `/etc/gitlab/trusted-certs` so Gitaly will trust the cert when
+ calling into itself:
+
+ ```shell
+ sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Delete `gitaly['listen_addr']` to allow only encrypted connections.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
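+
+You can verify the TLS listener from any client node with `openssl`. This is a
+sketch using the example hostname and port:
+
+```shell
+echo | openssl s_client -connect gitaly1.internal:9999 2>/dev/null | openssl x509 -noout -subject
+```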
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Sidekiq
+
+Sidekiq requires connections to the Redis, PostgreSQL and Gitaly instances.
+The following IPs will be used as an example:
+
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+
+To configure the Sidekiq nodes, on each one:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+you want using steps 1 and 2 from the GitLab downloads page.
+**Do not complete any other steps on the download page.**
+1. Open `/etc/gitlab/gitlab.rb` with your editor:
+
+ ```ruby
+ ########################################
+ ##### Services Disabled ###
+ ########################################
+
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_workhorse['enable'] = false
+ puma['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ ########################################
+ #### Redis ###
+ ########################################
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+ ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ #######################################
+ ### Gitaly ###
+ #######################################
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+
+ #######################################
+ ### Postgres ###
+ #######################################
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+
+ #######################################
+ ### Sidekiq configuration ###
+ #######################################
+ sidekiq['listen_address'] = "0.0.0.0"
+ sidekiq['cluster'] = true # no need to set this after GitLab 13.0
+
+ #######################################
+ ### Monitoring configuration ###
+ #######################################
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Rails Status for prometheus
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
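+
+To confirm Sidekiq is running and picking up jobs, you can check the service
+status and tail the logs (a sketch):
+
+```shell
+sudo gitlab-ctl status sidekiq
+sudo gitlab-ctl tail sidekiq
+```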
+
+TIP: **Tip:**
+You can also run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure GitLab Rails
+
+NOTE: **Note:**
+In our architectures we run each GitLab Rails node using the Puma webserver,
+with the number of workers set to 90% of available CPUs and four threads. For
+nodes that run Rails alongside other components, the worker value should be
+reduced accordingly; we've found 50% achieves a good balance, but this depends
+on workload.
+
+This section describes how to configure the GitLab application (Rails) component.
+
+The following IPs will be used as an example:
+
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+
+On each node perform the following:
+
+1. Download/install Omnibus GitLab using **steps 1 and 2** from
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
+ steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration.
+ To maintain uniformity of links across nodes, the `external_url`
+ on the application server should point to the external URL that users will use
+ to access GitLab. This would be the URL of the [external load balancer](#configure-the-external-load-balancer)
+ which will route traffic to the GitLab application server:
+
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the Gitaly setup
+ gitlab_rails['gitaly_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+
+ ## Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ gitaly['enable'] = false
+ nginx['enable'] = true
+
+ ## PostgreSQL connection details
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['auto_migrate'] = false
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+   ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ # Set the network addresses that the exporters used for monitoring will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
+ sidekiq['listen_address'] = "0.0.0.0"
+ puma['listen'] = '0.0.0.0'
+
+ # Add the monitoring node's IP address to the monitoring whitelist and allow it to
+ # scrape the NGINX metrics
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ nginx['status']['options']['allow'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. If you're using [Gitaly with TLS support](#gitaly-tls-support), make sure the
+ `git_data_dirs` entry is configured with `tls` instead of `tcp`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
+ ```
+
+ 1. Copy the cert into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. If you're [using NFS](#configure-nfs-optional):
+ 1. If necessary, install the NFS client utility packages using the following
+ commands:
+
+ ```shell
+ # Ubuntu/Debian
+ apt-get install nfs-common
+
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
+
+ 1. Specify the necessary NFS mounts in `/etc/fstab`.
+ The exact contents of `/etc/fstab` will depend on how you chose
+ to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ for examples and the various options.
+
+ 1. Create the shared directories. These may be different depending on your NFS
+ mount locations.
+
+ ```shell
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
+
+ 1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration:
+
+ ```ruby
+ ## Prevent GitLab from starting if NFS data mounts are not available
+ high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+
+ ## Ensure UIDs and GIDs match between servers for permissions via NFS
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Confirm the node can connect to Gitaly:
+
+ ```shell
+ sudo gitlab-rake gitlab:gitaly:check
+ ```
+
+ Then, tail the logs to see the requests:
+
+ ```shell
+ sudo gitlab-ctl tail gitaly
+ ```
+
+1. Optionally, from the Gitaly servers, confirm that Gitaly can perform callbacks to the internal API:
+
+ ```shell
+ sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
+ ```
+
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
+above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+certificates are not present, NGINX will fail to start. See the
+[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for more information.
+
+### GitLab Rails post-configuration
+
+Initialize the GitLab database by running the following on one of the Rails nodes:
+
+```shell
+sudo gitlab-rake gitlab:db:configure
+```
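+
+Afterwards, you can confirm the schema is fully migrated. This is a sketch, run
+on the same Rails node:
+
+```shell
+sudo gitlab-rake db:migrate:status | tail
+```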
+
+NOTE: **Note:**
+If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+PostgreSQL it may be that your PgBouncer node's IP address is missing from
+PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
+[PgBouncer error `ERROR: pgbouncer cannot connect to server`](troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+in the Troubleshooting section before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Prometheus
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node
+running [Prometheus](../monitoring/prometheus/index.md) and
+[Grafana](../monitoring/performance/grafana_configuration.md).
+
+The following IP will be used as an example:
+
+- `10.6.0.121`: Prometheus
+
+To configure the Monitoring node:
+
+1. SSH into the Monitoring node.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+   # Disable all other services not needed on the monitoring node
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = '<grafana_password>'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. In the GitLab UI, go to **Admin Area > Settings > Metrics and profiling**
+   (`admin/application_settings/metrics_and_profiling`), expand **Metrics - Grafana**,
+   and set the Grafana URL to `http[s]://<MONITOR NODE>/-/grafana`.
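+
+To confirm Prometheus discovered the other nodes through Consul, you can query
+its targets API locally. This is a sketch; the endpoint is part of Prometheus's
+standard HTTP API:
+
+```shell
+curl -s http://localhost:9090/api/v1/targets | head -c 500
+```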
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure the object storage
+
+GitLab supports using an object storage service for holding numerous types of data.
+It's recommended over [NFS](#configure-nfs-optional) and in general it's better
+in larger setups as object storage is typically much more performant, reliable,
+and scalable.
+
+Object storage options that GitLab has tested, or is aware of customers using, include:
+
+- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/), [Google cloud storage](https://cloud.google.com/storage).
+- On-premises hardware and appliances from various storage vendors.
+- MinIO. There is [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+
+For configuring GitLab to use Object Storage refer to the following guides
+based on what features you intend to use:
+
+1. Configure [object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. Configure [object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. Configure [object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. Configure [object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. Configure [object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
+1. Configure [object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
+
+Using separate buckets for each data type is the recommended approach for GitLab.
+
+A limitation of our configuration is that each use of object storage is separately configured.
+[We have an issue for improving this](https://gitlab.com/gitlab-org/gitlab/-/issues/23345)
+and easily using one bucket with separate folders is one improvement that this might bring.
+
+There is at least one specific issue with using the same bucket:
+when GitLab is deployed with the Helm chart restore from backup
+[will not properly function](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer)
+unless separate buckets are used.
+
+One risk of using a single bucket is that if your organization later decides to
+migrate GitLab to the Helm deployment, GitLab would still run, but the backup
+problem might not be discovered until the backups were critically needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly),
+is recommended over NFS wherever possible for improved performance. If you intend
+to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+See how to [configure NFS](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Troubleshooting
+
+See the [troubleshooting documentation](troubleshooting.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
diff --git a/doc/administration/reference_architectures/1k_users.md b/doc/administration/reference_architectures/1k_users.md
index def23619a5c..d3cf5f49413 100644
--- a/doc/administration/reference_architectures/1k_users.md
+++ b/doc/administration/reference_architectures/1k_users.md
@@ -1,86 +1,49 @@
-# Reference architecture: up to 1,000 users
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
-This page describes GitLab reference architecture for up to 1,000 users.
-For a full list of reference architectures, see
-[Available reference architectures](index.md#available-reference-architectures).
-
-> - **Supported users (approximate):** 1,000
-> - **High Availability:** False
+# Reference architecture: up to 1,000 users **(CORE ONLY)**
-| Users | Configuration([8](#footnotes)) | GCP | AWS | Azure |
-|-------|------------------------------------|----------------|---------------------|------------------------|
-| 500 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| 1000 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | F8s v2 |
+This page describes GitLab reference architecture for up to 1,000 users. For a
+full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
-In addition to the above, we recommend having at least
-2GB of swap on your server, even if you currently have
-enough available RAM. Having swap will help reduce the chance of errors occurring
-if your available memory changes. We also recommend
-configuring the kernel's swappiness setting
-to a low value like `10` to make the most of your RAM while still having the swap
-available when needed.
+If you need to serve up to 1,000 users and you don't have strict availability
+requirements, a single-node solution with
+[frequent backups](index.md#automated-backups-core-only) is appropriate for
+many organizations.
-For situations where you need to serve up to 1,000 users, a single-node
-solution with [frequent backups](index.md#automated-backups-core-only) is appropriate
-for many organizations. With automatic backup of the GitLab repositories,
-configuration, and the database, if you don't have strict availability
-requirements, this is the ideal solution.
+> - **Supported users (approximate):** 1,000
+> - **High Availability:** No
+
+| Users | Configuration | GCP | AWS | Azure |
+|--------------|-------------------------|----------------|-----------------|----------------|
+| Up to 500 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+| Up to 1,000 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+In addition to the stated configurations, we recommend having at least 2GB of
+swap on your server, even if you currently have enough available memory. Having
+swap will help reduce the chance of errors occurring if your available memory
+changes. We also recommend configuring the kernel's swappiness setting to a
+lower value (such as `10`) to make the most of your memory, while still having
+the swap available when needed.
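+
+As an example, the swappiness setting can be applied immediately and persisted
+across reboots like so:
+
+```shell
+# Apply immediately
+sudo sysctl vm.swappiness=10
+# Persist across reboots
+echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
+```
+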
## Setup instructions
-- For this default reference architecture, use the standard [installation instructions](../../install/README.md) to install GitLab.
+For this default reference architecture, use the standard
+[installation instructions](../../install/README.md) to install GitLab.
NOTE: **Note:**
You can also optionally configure GitLab to use an
[external PostgreSQL service](../postgresql/external.md) or an
[external object storage service](../high_availability/object_storage.md) for
added performance and reliability at a reduced complexity cost.
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index 17f4300eb03..1cfa2565893 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -1,76 +1,2047 @@
-# Reference architecture: up to 25,000 users
+---
+reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
-This page describes GitLab reference architecture for up to 25,000 users.
-For a full list of reference architectures, see
+# Reference architecture: up to 25,000 users **(PREMIUM ONLY)**
+
+This page describes GitLab reference architecture for up to 25,000 users. For a
+full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
> - **Supported users (approximate):** 25,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
-
-| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|------------------|-----------------------|----------------|
-| GitLab Rails ([1](#footnotes)) | 5 | 32 vCPU, 28.8GB Memory | `n1-highcpu-32` | `c5.9xlarge` | F32s v2 |
-| PostgreSQL | 3 | 8 vCPU, 30GB Memory | `n1-standard-8` | `m5.2xlarge` | D8s v3 |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 32 vCPU, 120GB Memory | `n1-standard-32` | `m5.8xlarge` | D32s v3 |
-| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | `g1-small` | `t2.small` | B1MS |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | `g1-small` | `t2.small` | B1MS |
-| Consul | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Sidekiq | 4 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Object Storage ([4](#footnotes)) | - | - | - | - | - |
-| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| External load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| Internal load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+> - **High Availability:** Yes
+> - **Test requests per second (RPS) rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS
+
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|-----------------------------------------|-------------|-------------------------|-----------------|-------------|----------|
+| External load balancing node | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+| Consul | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 3 | 8 vCPU, 30GB memory | n1-standard-8 | m5.2xlarge | D8s v3 |
+| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Redis - Cache | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis - Queues / Shared State | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis Sentinel - Cache | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Redis Sentinel - Queues / Shared State | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Gitaly | 2 (minimum) | 32 vCPU, 120GB memory | n1-standard-32 | m5.8xlarge | D32s v3 |
+| Sidekiq | 4 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| GitLab Rails | 5 | 32 vCPU, 28.8GB memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
+| Monitoring node | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects (such as LFS, Uploads, or Artifacts), an
+[object storage service](#configure-the-object-storage) is recommended instead
+of NFS where possible, due to better performance and availability. Since this
+doesn't require a node to be set up, *Object Storage* is noted as not
+applicable (n/a) in the previous table.
+
+## Setup components
+
+To set up GitLab and its components to accommodate up to 25,000 users:
+
+1. [Configure the external load balancing node](#configure-the-external-load-balancer)
+ that will handle the load balancing of the GitLab application services nodes.
+1. [Configure Consul](#configure-consul).
+1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
+1. [Configure PgBouncer](#configure-pgbouncer).
+1. [Configure the internal load balancing node](#configure-the-internal-load-balancer).
+1. [Configure Redis](#configure-redis).
+1. [Configure Gitaly](#configure-gitaly),
+ which provides access to the Git repositories.
+1. [Configure Sidekiq](#configure-sidekiq).
+1. [Configure the main GitLab Rails application](#configure-gitlab-rails)
+ to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, Git
+ over HTTP/SSH).
+1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab environment.
+1. [Configure the Object Storage](#configure-the-object-storage)
+ used for shared data objects.
+1. [Configure NFS (optional)](#configure-nfs-optional)
+ to have shared disk storage service as an alternative to Gitaly or object storage
+ (although not recommended). NFS is required for GitLab Pages; you can skip this step
+ if you're not using that feature.
+
+All servers are on the same 10.6.0.0/24 private network range, and can
+connect to each other freely on these addresses.
+
+Here is a list and description of each machine and the assigned IP:
+
+- `10.6.0.10`: External Load Balancer
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+- `10.6.0.40`: Internal Load Balancer
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+- `10.6.0.91`: Gitaly 1
+- `10.6.0.92`: Gitaly 2
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+- `10.6.0.121`: Prometheus
+
+## Configure the external load balancer
+
+NOTE: **Note:**
+This architecture has been tested and validated with [HAProxy](https://www.haproxy.org/)
+as the load balancer. Although other load balancers with similar feature sets
+could also be used, those load balancers have not been validated.
+
+In an active/active GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics of which load balancer to use,
+and its exact configuration, are beyond the scope of GitLab documentation. We hope
+that if you're managing multi-node systems like GitLab, you already have a load
+balancer of choice. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix NetScaler. This documentation outlines which ports and protocols
+you need to use with GitLab.
+
+The next question is how you will handle SSL in your environment.
+There are several different options:
+
+- [The application node terminates SSL](#application-node-terminates-ssl).
+- [The load balancer terminates SSL without backend SSL](#load-balancer-terminates-ssl-without-backend-ssl)
+ and communication is not secure between the load balancer and the application node.
+- [The load balancer terminates SSL with backend SSL](#load-balancer-terminates-ssl-with-backend-ssl)
+ and communication is *secure* between the load balancer and the application node.
+
+### Application node terminates SSL
+
+Configure your load balancer to pass connections on port 443 as `TCP` rather
+than `HTTP(S)` protocol. This will pass the connection to the application node's
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See the [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
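+
+One way to verify that SSL is terminated on the application nodes is to inspect the
+certificate presented on port 443. A minimal check, with a placeholder hostname:
+
+```shell
+# With TCP passthrough, the certificate shown should be the one installed
+# on the application node's NGINX
+openssl s_client -connect gitlab.example.com:443 -servername gitlab.example.com < /dev/null
+```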
+
+### Load balancer terminates SSL without backend SSL
+
+Configure your load balancer to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer and GitLab will not be secure,
+there is some additional configuration needed. See the
+[NGINX proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
+
+### Load balancer terminates SSL with backend SSL
+
+Configure your load balancer(s) to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer(s) will be responsible for managing SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Ports
+
+The basic ports to be used are shown in the table below.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
+
+If you're using GitLab Pages with custom domain support, you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](../pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ settings. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+#### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
+separate from the GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
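+
+Once DNS and the load balancer are in place, the alternate hostname can be tested from a
+client. The hostname below is a placeholder; a successful test prints a GitLab welcome
+message rather than opening a shell:
+
+```shell
+ssh -T -p 443 git@altssh.gitlab.example.com
+```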
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Consul
+
+The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+
+NOTE: **Note:**
+The configuration processes for the other servers in your reference architecture will
+use the `/etc/gitlab/gitlab-secrets.json` file from your Consul server to connect
+with the other servers.
+
+To configure Consul:
+
+1. SSH into the server that will host Consul.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ as the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['consul_role']
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul nodes, and
+ make sure you set up the correct IPs.
+
+NOTE: **Note:**
+A Consul leader is elected when the provisioning of the third Consul server is
+complete. Viewing the Consul logs (`sudo gitlab-ctl tail consul`) will display
+`...[INFO] consul: New leader elected: ...`.
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+```
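+
+To additionally confirm which node currently holds the Consul leader role, the bundled
+Consul binary can list the Raft peers; a quick check, assuming the default setup:
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul operator raft list-peers
+```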
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PostgreSQL
+
+In this section, you'll be guided through configuring an external PostgreSQL database
+to be used with GitLab.
+
+### Provide your own PostgreSQL instance
+
+If you're hosting GitLab on a cloud provider, you can optionally use a
+managed service for PostgreSQL. For example, AWS offers a managed Relational
+Database Service (RDS) that runs PostgreSQL.
+
+If you use a cloud-managed service, or provide your own PostgreSQL:
+
+1. Set up PostgreSQL according to the
+ [database requirements document](../../install/requirements.md#database).
+1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
+ needs privileges to create the `gitlabhq_production` database.
+1. Configure the GitLab application servers with the appropriate details.
+ This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
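+
+As a quick sanity check that a provided instance is reachable with the `gitlab` user,
+something like the following can be run from an application node (the endpoint is a
+placeholder for your service's address):
+
+```shell
+# Should print "1" if the connection and credentials are valid
+psql -h <postgresql-endpoint> -U gitlab -d gitlabhq_production -c 'SELECT 1;'
+```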
+
+### Standalone PostgreSQL using Omnibus GitLab
+
+The following IPs will be used as an example:
+
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+
+First, make sure to [install](https://about.gitlab.com/install/)
+the Linux GitLab package **on each node**. Following the steps,
+install the necessary dependencies from step 1, and add the
+GitLab package repository from step 2. When installing GitLab
+in the second step, do not supply the `EXTERNAL_URL` value.
+
+#### PostgreSQL primary node
+
+1. SSH into the PostgreSQL primary node.
+1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
+ username of `gitlab` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<postgresql_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
+
+1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
+ username of `pgbouncer` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<pgbouncer_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ ```
+
+1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
+ username of `gitlab-consul` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<consul_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab-consul
+ ```
+
+1. On the primary database node, edit `/etc/gitlab/gitlab.rb`, replacing the values noted in the `# START user configuration` section:
+
+ ```ruby
+ # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ #
+ # END user configuration
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL secondary nodes
+
+1. On both the secondary nodes, add the same configuration specified above for the primary node
+ with an additional setting (`repmgr['master_on_initialization'] = false`) that will inform `gitlab-ctl` that they are standby nodes initially
+ and there's no need to attempt to register them as a primary node:
+
+ ```ruby
+ # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # Specify if a node should attempt to be primary on initialization.
+ repmgr['master_on_initialization'] = false
+
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace with your network addresses
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL post-configuration
+
+SSH into the **primary node**:
+
+1. Open a database prompt:
+
+ ```shell
+ gitlab-psql -d gitlabhq_production
+ ```
+
+1. Make sure the `pg_trgm` extension is enabled (it might already be):
+
+ ```sql
+ CREATE EXTENSION pg_trgm;
+ ```
+
+1. Exit the database prompt by typing `\q` and Enter.
+
+1. Verify the cluster is initialized with one node:
+
+ ```shell
+ gitlab-ctl repmgr cluster show
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
+
+1. Note down the hostname or IP address in the connection string: `host=HOSTNAME`. We will
+ refer to the hostname in the next section as `<primary_node_name>`. If the value
+ is not an IP address, it will need to be a resolvable name (via DNS or
+ `/etc/hosts`).
+
+SSH into the **secondary node**:
+
+1. Set up the repmgr standby:
+
+ ```shell
+ gitlab-ctl repmgr standby setup <primary_node_name>
+ ```
+
+ Note that this command removes the existing data on the node, and includes a
+ 30-second wait before it proceeds.
+
+ The output should be similar to the following:
+
+ ```console
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
+
+Before moving on, make sure the databases are configured correctly. Run the
+following command on the **primary** node to verify that replication is working
+properly and the secondary nodes appear in the cluster:
+
+```shell
+gitlab-ctl repmgr cluster show
+```
+
+The output should be similar to the following:
+
+```plaintext
+Role | Name | Upstream | Connection String
+----------+---------|-----------|------------------------------------------------
+* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+```
+
+If the `Role` column for any node says `FAILED`, check the
+[Troubleshooting section](troubleshooting.md) before proceeding.
+
+Also, check that the `repmgr-check-master` command works successfully on each node:
+
+```shell
+su - gitlab-consul
+gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
+```
+
+This command relies on exit codes to tell Consul whether a particular node is a master
+or secondary. The most important thing here is that this command does not produce errors.
+If there are errors, it's most likely due to incorrect `gitlab-consul` database user permissions.
+Check the [Troubleshooting section](troubleshooting.md) before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PgBouncer
+
+Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
+The following IPs will be used as an example:
+
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`, and replace
+ `<consul_password_hash>` and `<pgbouncer_password_hash>` with the
+ password hashes you [set up previously](#postgresql-primary-node):
+
+ ```ruby
+ # Disable all components except Pgbouncer and Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure PgBouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: '<consul_password_hash>'
+ },
+ 'pgbouncer': {
+ password: '<pgbouncer_password_hash>'
+ }
+ }
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+ consul['enable'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+ NOTE: **Note:**
+ If an error `execute[generate databases.ini]` occurs, this is due to an existing
+ [known issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4713).
+ It will be resolved when you run a second `reconfigure` after the next step.
+
+1. Create a `.pgpass` file so Consul is able to
+ reload PgBouncer. Enter the PgBouncer password twice when asked:
+
+ ```shell
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) once again
+ to resolve any potential errors from the previous steps.
+1. Ensure each node is talking to the current primary:
+
+ ```shell
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+1. Once the console prompt is available, run the following queries:
+
+ ```shell
+ show databases ; show clients ;
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the internal load balancer
+
+If you're running more than one PgBouncer node as recommended, you'll need to
+set up a TCP internal load balancer to serve them correctly.
+
+The following IP will be used as an example:
+
+- `10.6.0.40`: Internal Load Balancer
+
+Here's how you could do it with [HAProxy](https://www.haproxy.org/):
+
+```plaintext
+global
+ log /dev/log local0
+ log localhost local1 notice
+ log stdout format raw local0
+
+defaults
+ log global
+ default-server inter 10s fall 3 rise 2
+ balance leastconn
+
+frontend internal-pgbouncer-tcp-in
+ bind *:6432
+ mode tcp
+ option tcplog
+
+ default_backend pgbouncer
+
+backend pgbouncer
+ mode tcp
+ option tcp-check
+
+ server pgbouncer1 10.6.0.31:6432 check
+ server pgbouncer2 10.6.0.32:6432 check
+ server pgbouncer3 10.6.0.33:6432 check
+```
+
+Refer to your preferred load balancer's documentation for further guidance.
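+
+If you do use HAProxy, the configuration file can be syntax-checked before (re)loading
+the service, for example:
+
+```shell
+# Validates the configuration without starting the service
+haproxy -c -f /etc/haproxy/haproxy.cfg
+```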
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Redis
+
+[Redis](https://redis.io/) can be used in a scalable environment with a **Primary** x **Replica**
+topology and a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch for failures
+and automatically start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+The requirements for a Redis setup are the following:
+
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks
+ ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
+ using a firewall.
+
+In this section, you'll be guided through configuring two external Redis clusters
+to be used with GitLab. The following IPs will be used as an example:
+
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+NOTE: **Providing your own Redis instance:**
+Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+services support high availability, be sure it is **not** the Redis Cluster type.
+Redis version 5.0 or higher is required, as this is what ships with
+Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
+do not support an optional count argument to `SPOP`, which is now required for
+[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+Note the Redis node's IP address or hostname, port, and password (if required).
+These will be necessary when configuring the
+[GitLab application servers](#configure-gitlab-rails) later.
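+
+Whichever option you choose, connectivity to a Redis node can later be verified with the
+bundled `redis-cli`. A minimal sketch, using the cache primary IP and password configured
+in the following sections:
+
+```shell
+# Expect "PONG" if the node is reachable and the password is accepted
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.51 -a '<redis-password>' ping
+```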
+
+### Configure the Redis and Sentinel Cache cluster
+
+This is the section where we install and set up the new Redis Cache instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Cache node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community or Enterprise editions) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0', which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.51'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+#### Configure the replica Redis Cache nodes
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community or Enterprise editions) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0', which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.52'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-cache-nodes), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
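+
+Once the primary and replicas are reconfigured, replication state can be confirmed from
+the primary with the bundled `redis-cli`, for example:
+
+```shell
+# "role:master" and "connected_slaves:2" indicate healthy replication
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.51 -a '<redis-password>' info replication
+```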
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Cache nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+
+To configure the Sentinel Cache server:
+
+1. SSH into the server that will host Consul/Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ as the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-cache'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.71'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+ ## Quorum must reflect the number of voting sentinels it takes to start a failover.
+ ## The value must NOT be greater than the number of sentinels.
+ ##
+ ## The quorum can be used to tune Sentinel in two ways:
+ ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+ ## we deploy, we are basically making Sentinel more sensitive to primary failures,
+ ## triggering a failover as soon as even just a minority of Sentinels is no longer
+ ## able to talk with the primary.
+ ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+ ## making Sentinel able to failover only when there are a very large number (larger
+ ## than the majority) of well-connected Sentinels which agree about the primary being down.
+ sentinel['quorum'] = 2
+
+ ## Consider unresponsive server down after x amount of ms.
+ #sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## - The time needed for a replica replicating to a wrong primary according
+ ## to a Sentinel current configuration, to be forced to replicate
+ ## with the right primary, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## - The time needed to cancel a failover that is already in progress but
+ ## did not produce any configuration change (REPLICAOF NO ONE yet not
+ ## acknowledged by the promoted replica).
+ ##
+ ## - The maximum time a failover in progress waits for all the replicas to be
+ ## reconfigured as replicas of the new primary. However, even after this time
+ ## the replicas will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul/Sentinel nodes, and
+ make sure you set up the correct IPs.
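+
+You can optionally verify that each Sentinel can see the cache cluster's
+primary. This is a minimal sketch, assuming the example IPs above and the
+`redis-cli` binary bundled with Omnibus GitLab:
+
+```shell
+# Ask the local Sentinel which node it currently considers the primary
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.71 -p 26379 SENTINEL get-master-addr-by-name gitlab-redis-cache
+
+# Expected output is the primary's IP and port, for example:
+# 1) "10.6.0.51"
+# 2) "6379"
+```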
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the Redis and Sentinel Queues cluster
+
+In this section, we install and set up the new Redis Queues instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Queues node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ ```
+
+1. Only the primary GitLab application server should handle migrations. To
+ prevent database migrations from running on upgrade, add the following
+ configuration to your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
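+You can optionally confirm that this node reports itself as the primary. This
+is a minimal sketch, assuming the example IP and the password you set in
+`redis['password']`:
+
+```shell
+# 'role:master' should appear in the replication info
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.61 -a 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER' INFO replication
+```
+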
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+#### Configure the replica Redis Queues nodes
+
+1. SSH into the **replica** Redis Queue server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.62'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
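+You can optionally confirm that each replica has attached to the primary.
+This is a minimal sketch, assuming the example IPs and the shared password:
+
+```shell
+# The output should include 'role:slave', 'master_host:10.6.0.61',
+# and 'master_link_status:up'
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.62 -a 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER' INFO replication
+```
+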
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-queues-nodes).
+Even after a `gitlab-ctl reconfigure`, their configuration will be restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Queues nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+To configure the Sentinel Queues server:
+
+1. SSH into the server that will host Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     that the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-persistent'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.81'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels required to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels can no
+   ##    longer talk to the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree that the primary is down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after the specified number of milliseconds.
+ #sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+   ## - The time needed for a replica replicating to a wrong primary according
+   ##   to a Sentinel's current configuration, to be forced to replicate
+   ##   with the right primary, is exactly the failover timeout (counting since
+   ##   the moment a Sentinel detected the misconfiguration).
+   ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE not yet
+   ##   acknowledged by the promoted replica).
+   ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+   ##   the exact parallel-syncs progression as specified.
+ #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. To prevent database migrations from running on upgrade, run:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+ Only the primary GitLab application server should handle migrations.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Sentinel nodes, and
+ make sure you set up the correct IPs.
+
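+As with the cache cluster, you can optionally ask any one of the Sentinels
+whether enough Sentinels agree on the current primary to authorize a
+failover. This is a minimal sketch, assuming the example IPs above:
+
+```shell
+# Expected output is similar to:
+# OK 3 usable Sentinels. Quorum and failover authorization can be reached
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.81 -p 26379 SENTINEL ckquorum gitlab-redis-persistent
+```
+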
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Gitaly
+
+Deploying Gitaly on its own server can benefit GitLab installations that are
+larger than a single machine.
+
+The Gitaly node requirements are dependent on customer data, specifically the number of
+projects and their repository sizes. Two nodes are recommended as an absolute minimum.
+Each Gitaly node should store no more than 5TB of data and have the number of
+[`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) set to 20% of available CPUs.
+Additional nodes should be considered in conjunction with a review of expected
+data size and spread based on the recommendations above.
+
+It is also strongly recommended that all Gitaly nodes be set up with SSD disks with
+a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
+operations, as Gitaly has heavy I/O. These IOPS values are a starting point only;
+over time they may need to be adjusted higher or lower depending on the scale of
+your environment's workload. If you're running the environment on a cloud
+provider, refer to its documentation on how to configure IOPS correctly.
+
+Some things to note:
+
+- The GitLab Rails application shards repositories into [repository storages](../repository_storage_paths.md).
+- A Gitaly server can host one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+ traffic is unencrypted by default. The use of a firewall is highly recommended
+ to restrict access to the Gitaly server. Another option is to
+ [use TLS](#gitaly-tls-support).
+
+TIP: **Tip:**
+For more information about Gitaly's history and network architecture see the
+[standalone Gitaly documentation](../gitaly/index.md).
+
+NOTE: **Note:**
+The token referred to throughout the Gitaly documentation is
+just an arbitrary password selected by the administrator. It is unrelated to
+tokens created for the GitLab API or other similar web API tokens.
+
+Below we describe how to configure two Gitaly servers, with IPs and
+domain names:
+
+- `10.6.0.91`: Gitaly 1 (`gitaly1.internal`)
+- `10.6.0.92`: Gitaly 2 (`gitaly2.internal`)
+
+The secret token is assumed to be `gitalysecret`, and your GitLab installation
+is assumed to have three repository storages:
+
+- `default` on Gitaly 1
+- `storage1` on Gitaly 1
+- `storage2` on Gitaly 2
+
+On each node:
+
+1. [Download/Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **without** providing the `EXTERNAL_URL` value.
+1. Edit `/etc/gitlab/gitlab.rb` to configure storage paths, enable
+ the network listener and configure the token:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+   # The following two values must be the same as their respective values
+   # in the GitLab Rails application setup.
+ gitaly['auth_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+
+   # If you run a separate monitoring node, you can disable these services
+ alertmanager['enable'] = false
+ prometheus['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+   # Comment out the following line if you only want to support TLS connections
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+ ```
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+ 1. On `gitaly1.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => {
+ 'path' => '/var/opt/gitlab/git-data'
+ },
+ 'storage1' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ 1. On `gitaly2.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'storage2' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
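+
+You can optionally confirm that Gitaly is running and listening on the
+configured port. This is a minimal sketch, assuming the `0.0.0.0:8075` listen
+address from the example above:
+
+```shell
+# Check the Gitaly service status
+sudo gitlab-ctl status gitaly
+
+# Confirm a process is listening on port 8075
+sudo ss -tlnp | grep 8075
+```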
+
+### Gitaly TLS support
+
+Gitaly supports TLS encryption. To communicate with a Gitaly instance that
+listens for secure connections, you will need to use the `tls://` URL scheme
+in the `gitaly_address` of the corresponding storage entry in the GitLab
+configuration.
+
+You will need to bring your own certificates, as they aren't provided
+automatically.
+The certificate, or its certificate authority, must be installed on all Gitaly
+nodes (including the Gitaly node using the certificate) and on all client nodes
+that communicate with it following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
+unencrypted listening address `listen_addr` and an encrypted listening
+address `tls_listen_addr` at the same time. This allows you to do a
+gradual transition from unencrypted to encrypted traffic, if necessary.
+
+To configure Gitaly with TLS:
+
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+   sudo chmod 644 /etc/gitlab/ssl/key.pem /etc/gitlab/ssl/cert.pem
+ ```
+
+1. Copy the cert to `/etc/gitlab/trusted-certs` so Gitaly will trust the cert when
+ calling into itself:
+
+ ```shell
+ sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Delete `gitaly['listen_addr']` to allow only encrypted connections.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
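+
+You can optionally verify the TLS listener from a client node with the
+OpenSSL binary bundled with Omnibus GitLab. This is a minimal sketch, assuming
+the example `gitaly1.internal:9999` address and that the certificate (or its
+certificate authority) has been installed as described above:
+
+```shell
+# Look for 'Verify return code: 0 (ok)' in the output
+echo | /opt/gitlab/embedded/bin/openssl s_client -connect gitaly1.internal:9999 -CApath /opt/gitlab/embedded/ssl/certs/
+```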
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Sidekiq
+
+Sidekiq requires connections to the Redis, PostgreSQL and Gitaly instances.
+The following IPs will be used as an example:
+
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+
+To configure the Sidekiq nodes, on each one:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+   you want using **steps 1 and 2** from the GitLab downloads page.
+   **Do not complete any other steps on the download page.**
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ ########################################
+   ####    Services Disabled    ###
+ ########################################
+
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_workhorse['enable'] = false
+ puma['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ ########################################
+ #### Redis ###
+ ########################################
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+ ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ #######################################
+ ### Gitaly ###
+ #######################################
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+
+ #######################################
+ ### Postgres ###
+ #######################################
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+
+ #######################################
+ ### Sidekiq configuration ###
+ #######################################
+ sidekiq['listen_address'] = "0.0.0.0"
+ sidekiq['cluster'] = true # no need to set this after GitLab 13.0
+
+ #######################################
+ ### Monitoring configuration ###
+ #######################################
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Rails Status for prometheus
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
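+You can optionally confirm that Sidekiq came up and is processing jobs by
+checking the service and tailing its logs:
+
+```shell
+sudo gitlab-ctl status sidekiq
+sudo gitlab-ctl tail sidekiq
+```
+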
+TIP: **Tip:**
+You can also run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure GitLab Rails
+
+NOTE: **Note:**
+In our architectures, we run each GitLab Rails node using the Puma webserver,
+with its number of workers set to 90% of available CPUs, along with four
+threads. For nodes that run Rails alongside other components, the worker value
+should be reduced accordingly; we've found that 50% achieves a good balance,
+but this is dependent on workload.
+
+This section describes how to configure the GitLab application (Rails) component.
+
+The following IPs will be used as an example:
+
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+
+On each node perform the following:
+
+1. Download/install Omnibus GitLab using **steps 1 and 2** from
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
+ steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration.
+ To maintain uniformity of links across nodes, the `external_url`
+ on the application server should point to the external URL that users will use
+ to access GitLab. This would be the URL of the [external load balancer](#configure-the-external-load-balancer)
+ which will route traffic to the GitLab application server:
+
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+   # The following two values must be the same as their respective values
+   # in the Gitaly setup.
+ gitlab_rails['gitaly_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+
+ ## Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ gitaly['enable'] = false
+ nginx['enable'] = true
+
+ ## PostgreSQL connection details
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['auto_migrate'] = false
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+   ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ # Set the network addresses that the exporters used for monitoring will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
+ sidekiq['listen_address'] = "0.0.0.0"
+ puma['listen'] = '0.0.0.0'
+
+ # Add the monitoring node's IP address to the monitoring whitelist and allow it to
+ # scrape the NGINX metrics
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ nginx['status']['options']['allow'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. If you're using [Gitaly with TLS support](#gitaly-tls-support), make sure the
+ `git_data_dirs` entry is configured with `tls` instead of `tcp`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
+ ```
+
+ 1. Copy the cert into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. If you're [using NFS](#configure-nfs-optional):
+ 1. If necessary, install the NFS client utility packages using the following
+ commands:
+
+ ```shell
+ # Ubuntu/Debian
+ apt-get install nfs-common
+
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
+
+ 1. Specify the necessary NFS mounts in `/etc/fstab`.
+ The exact contents of `/etc/fstab` will depend on how you chose
+ to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+      for examples and the various options; one possible entry is sketched below.
+
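+      Assuming a hypothetical NFS server at `10.6.0.30` exporting
+      `/gitlab-data`, and mount options along the lines of those recommended
+      in the NFS documentation, an entry might look like this:
+
+      ```shell
+      # Append a mount for the Git data directory to /etc/fstab (adjust the
+      # server IP, export path, and options for your environment)
+      echo '10.6.0.30:/gitlab-data /var/opt/gitlab/git-data nfs4 defaults,hard,vers=4.1,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2' | sudo tee -a /etc/fstab
+      ```
+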
+ 1. Create the shared directories. These may be different depending on your NFS
+ mount locations.
+
+ ```shell
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
+
+ 1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration:
+
+ ```ruby
+ ## Prevent GitLab from starting if NFS data mounts are not available
+ high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+
+ ## Ensure UIDs and GIDs match between servers for permissions via NFS
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Confirm the node can connect to Gitaly:
+
+ ```shell
+ sudo gitlab-rake gitlab:gitaly:check
+ ```
+
+ Then, tail the logs to see the requests:
+
+ ```shell
+ sudo gitlab-ctl tail gitaly
+ ```
+
+1. Optionally, from the Gitaly servers, confirm that Gitaly can perform callbacks to the internal API:
+
+ ```shell
+ sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
+ ```
+
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
+above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+certificates are not present, NGINX will fail to start. See the
+[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for more information.
+
+### GitLab Rails post-configuration
+
+Initialize the GitLab database by running the following on one of the Rails nodes:
+
+```shell
+sudo gitlab-rake gitlab:db:configure
+```
+
+NOTE: **Note:**
+If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+PostgreSQL it may be that your PgBouncer node's IP address is missing from
+PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
+[PgBouncer error `ERROR: pgbouncer cannot connect to server`](troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+in the Troubleshooting section before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Prometheus
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node
+running [Prometheus](../monitoring/prometheus/index.md) and
+[Grafana](../monitoring/performance/grafana_configuration.md).
+
+The following IP will be used as an example:
+
+- `10.6.0.121`: Prometheus
+
+To configure the Monitoring node:
+
+1. SSH into the Monitoring node.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = '<grafana_password>'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. In the GitLab UI, go to `admin/application_settings/metrics_and_profiling`,
+   expand **Metrics - Grafana**, and set **Grafana URL** to
+   `http[s]://<MONITOR NODE>/-/grafana`.
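+
+You can optionally confirm that Prometheus is up and has discovered its
+scrape targets through Consul. This is a minimal sketch, assuming the example
+monitoring node IP:
+
+```shell
+# Prometheus exposes a readiness endpoint
+curl http://10.6.0.121:9090/-/ready
+
+# List the targets Prometheus has discovered
+curl http://10.6.0.121:9090/api/v1/targets
+```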
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure the object storage
+
+GitLab supports using an object storage service for holding numerous types of data.
+It's recommended over [NFS](#configure-nfs-optional), and in general it's better
+in larger setups, as object storage is typically much more performant, reliable,
+and scalable.
+
+Object storage options that GitLab has tested, or is aware of customers using,
+include:
+
+- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage).
+- On-premises hardware and appliances from various storage vendors.
+- MinIO. There is [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+
+To configure GitLab to use object storage, refer to the following guides,
+based on what features you intend to use:
+
+1. Configure [object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. Configure [object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. Configure [object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. Configure [object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. Configure [object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
+1. Configure [object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
+
+Using separate buckets for each data type is the recommended approach for GitLab.
+
+A limitation of our configuration is that each use of object storage must be
+configured separately. [We have an issue for improving this](https://gitlab.com/gitlab-org/gitlab/-/issues/23345);
+the ability to easily use one bucket with separate folders is one improvement
+this might bring.
+
+There is at least one specific issue with using the same bucket:
+when GitLab is deployed with the Helm chart, restore from backup
+[will not function properly](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer)
+unless separate buckets are used.
+
+One risk of using a single bucket is that your organization might decide to
+migrate GitLab to the Helm deployment in the future. GitLab would run, but the
+backup limitation might not be noticed until the organization has a critical
+requirement for the backups to work.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly),
+is recommended over NFS wherever possible for improved performance. If you intend
+to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+See how to [configure NFS](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Troubleshooting
+
+See the [troubleshooting documentation](troubleshooting.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index d182daf45b3..a7feb78a365 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -1,27 +1,30 @@
---
reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
-# Reference architecture: up to 2,000 users
+# Reference architecture: up to 2,000 users **(CORE ONLY)**
This page describes GitLab reference architecture for up to 2,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
> - **Supported users (approximate):** 2,000
-> - **High Availability:** False
+> - **High Availability:** No
> - **Test requests per second (RPS) rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|------------------------------------------|--------|-------------------------|-----------------|----------------|-----------|
-| Load balancer | 1 | 2 vCPU, 1.8GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL | 1 | 2 vCPU, 7.5GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Redis | 1 | 1 vCPU, 3.75GB memory | `n1-standard-1` | `m5.large` | `D2s v3` |
-| Gitaly | 1 | 4 vCPU, 15GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| GitLab Rails | 2 | 8 vCPU, 7.2GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
-| Monitoring node | 1 | 2 vCPU, 1.8GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object storage | n/a | n/a | n/a | n/a | n/a |
-| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|------------------------------------------|--------|-------------------------|----------------|--------------|---------|
+| Load balancer | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 1 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| Redis | 1 | 1 vCPU, 3.75GB memory | n1-standard-1 | m5.large | D2s v3 |
+| Gitaly | 1 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| GitLab Rails | 2 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
+| Monitoring node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Object storage | n/a | n/a | n/a | n/a | n/a |
+| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
The Google Cloud Platform (GCP) architectures were built and tested using the
[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
@@ -554,7 +557,7 @@ On each node perform the following:
1. Specify the necessary NFS mounts in `/etc/fstab`.
The exact contents of `/etc/fstab` will depend on how you chose
- to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ to configure your NFS server. See the [NFS documentation](../nfs.md)
for examples and the various options.
1. Create the shared directories. These may be different depending on your NFS
@@ -852,7 +855,7 @@ along with [Gitaly](#configure-gitaly), are recommended over using NFS whenever
possible. However, if you intend to use GitLab Pages,
[you must use NFS](troubleshooting.md#gitlab-pages-requires-nfs).
-For information about configuring NFS, see the [NFS documentation page](../high_availability/nfs.md).
+For information about configuring NFS, see the [NFS documentation page](../nfs.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index 04cb9fa4769..2f88413de6f 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -1,49 +1,54 @@
---
reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
-# Reference architecture: up to 3,000 users
+# Reference architecture: up to 3,000 users **(PREMIUM ONLY)**
-This page describes GitLab reference architecture for up to 3,000 users.
-For a full list of reference architectures, see
+This page describes GitLab reference architecture for up to 3,000 users. For a
+full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
NOTE: **Note:**
-The 3,000-user reference architecture documented below is
-designed to help your organization achieve a highly-available GitLab deployment.
-If you do not have the expertise or need to maintain a highly-available
-environment, you can have a simpler and less costly-to-operate environment by
-following the [2,000-user reference architecture](2k_users.md).
+This reference architecture is designed to help your organization achieve a
+highly-available GitLab deployment. If you do not have the expertise or need to
+maintain a highly-available environment, you can have a simpler and less
+costly-to-operate environment by using the
+[2,000-user reference architecture](2k_users.md).
> - **Supported users (approximate):** 3,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS
-
-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|-----------------|-------------------------|----------------|
-| External load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Redis | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Consul + Sentinel | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Gitaly | 2 minimum | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| GitLab Rails | 3 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
-| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object Storage | n/a | n/a | n/a | n/a | n/a |
-| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-
-The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
-CPU platform on GCP. On different hardware you may find that adjustments, either lower
-or higher, are required for your CPU or Node counts accordingly. For more information, a
-[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
-[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
-
-For data objects such as LFS, Uploads, Artifacts, etc, an [object storage service](#configure-the-object-storage)
-is recommended over NFS where possible, due to better performance and availability.
-Since this doesn't require a node to be set up, it's marked as not applicable (n/a)
-in the table above.
+> - **High Availability:** Yes
+> - **Test requests per second (RPS) rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS
+
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|--------------------------------------------|-------------|-----------------------|----------------|-------------|---------|
+| External load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Redis | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| Consul + Sentinel | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Gitaly | 2 (minimum) | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Sidekiq | 4 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| GitLab Rails | 3 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
+| Monitoring node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects (such as LFS, Uploads, or Artifacts), an
+[object storage service](#configure-the-object-storage) is recommended instead
+of NFS where possible, due to better performance and availability. Since this
+doesn't require a node to be set up, *Object Storage* is noted as not
+applicable (n/a) in the previous table.
## Setup components
@@ -804,10 +809,11 @@ SSH into the **primary node**:
gitlab-psql -d gitlabhq_production
```
-1. Enable the `pg_trgm` extension:
+1. Enable the `pg_trgm` and `btree_gist` extensions:
```shell
CREATE EXTENSION pg_trgm;
+ CREATE EXTENSION btree_gist;
```
1. Exit the database prompt by typing `\q` and Enter.
@@ -1439,7 +1445,7 @@ On each node perform the following:
1. Specify the necessary NFS mounts in `/etc/fstab`.
The exact contents of `/etc/fstab` will depend on how you chose
- to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ to configure your NFS server. See the [NFS documentation](../nfs.md)
for examples and the various options.
1. Create the shared directories. These may be different depending on your NFS
@@ -1748,7 +1754,7 @@ work.
are recommended over NFS wherever possible for improved performance. If you intend
to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
-See how to [configure NFS](../high_availability/nfs.md).
+See how to [configure NFS](../nfs.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index 2540fe68dac..565845b4bf5 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -1,76 +1,2047 @@
-# Reference architecture: up to 50,000 users
+---
+reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
-This page describes GitLab reference architecture for up to 50,000 users.
-For a full list of reference architectures, see
+# Reference architecture: up to 50,000 users **(PREMIUM ONLY)**
+
+This page describes GitLab reference architecture for up to 50,000 users. For a
+full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
> - **Supported users (approximate):** 50,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
-
-| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|----------------|
-| GitLab Rails ([1](#footnotes)) | 12 | 32 vCPU, 28.8GB Memory | `n1-highcpu-32` | `c5.9xlarge` | F32s v2 |
-| PostgreSQL | 3 | 16 vCPU, 60GB Memory | `n1-standard-16` | `m5.4xlarge` | D16s v3 |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 64 vCPU, 240GB Memory | `n1-standard-64` | `m5.16xlarge` | D64s v3 |
-| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | `g1-small` | `t2.small` | B1MS |
-| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | `g1-small` | `t2.small` | B1MS |
-| Consul | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Sidekiq | 4 | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| Object Storage ([4](#footnotes)) | - | - | - | - | - |
-| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| External load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | F8s v2 |
-| Internal load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | F8s v2 |
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+> - **High Availability:** Yes
+> - **Test requests per second (RPS) rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
+
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|-----------------------------------------|-------------|-------------------------|-----------------|--------------|----------|
+| External load balancing node | 1 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
+| Consul | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 3 | 16 vCPU, 60GB memory | n1-standard-16 | m5.4xlarge | D16s v3 |
+| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Internal load balancing node | 1 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
+| Redis - Cache | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis - Queues / Shared State | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| Redis Sentinel - Cache | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Redis Sentinel - Queues / Shared State | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
+| Gitaly | 2 (minimum) | 64 vCPU, 240GB memory | n1-standard-64 | m5.16xlarge | D64s v3 |
+| Sidekiq | 4 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
+| GitLab Rails | 12 | 32 vCPU, 28.8GB memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
+| Monitoring node | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects (such as LFS, Uploads, or Artifacts), an
+[object storage service](#configure-the-object-storage) is recommended instead
+of NFS where possible, due to better performance and availability. Since this
+doesn't require a node to be set up, *Object Storage* is noted as not
+applicable (n/a) in the previous table.
+
+## Setup components
+
+To set up GitLab and its components to accommodate up to 50,000 users:
+
+1. [Configure the external load balancing node](#configure-the-external-load-balancer)
+ that will handle the load balancing of the three GitLab application services nodes.
+1. [Configure Consul](#configure-consul).
+1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
+1. [Configure PgBouncer](#configure-pgbouncer).
+1. [Configure the internal load balancing node](#configure-the-internal-load-balancer).
+1. [Configure Redis](#configure-redis).
+1. [Configure Gitaly](#configure-gitaly),
+ which provides access to the Git repositories.
+1. [Configure Sidekiq](#configure-sidekiq).
+1. [Configure the main GitLab Rails application](#configure-gitlab-rails)
+ to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, Git
+ over HTTP/SSH).
+1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab environment.
+1. [Configure the Object Storage](#configure-the-object-storage)
+ used for shared data objects.
+1. [Configure NFS (Optional)](#configure-nfs-optional)
+   to have a shared disk storage service as an alternative to Gitaly and/or Object Storage (although
+   not recommended). NFS is required for GitLab Pages; you can skip this step if you're not using
+   that feature.
+
+We start with all servers on the same 10.6.0.0/24 private network range, where
+they can connect to each other freely on those addresses.
+
+Here is a list and description of each machine and the assigned IP:
+
+- `10.6.0.10`: External Load Balancer
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+- `10.6.0.40`: Internal Load Balancer
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+- `10.6.0.91`: Gitaly 1
+- `10.6.0.92`: Gitaly 2
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+- `10.6.0.121`: Prometheus
+
+## Configure the external load balancer
+
+NOTE: **Note:**
+This architecture has been tested and validated with [HAProxy](https://www.haproxy.org/)
+as the load balancer. Although other load balancers with similar feature sets
+could also be used, those load balancers have not been validated.
+
+In an active/active GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics on which load balancer to use,
+or its exact configuration, are beyond the scope of GitLab documentation. We hope
+that if you're managing multi-node systems like GitLab, you have a load balancer of
+choice already. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix NetScaler. This documentation outlines what ports and protocols
+you need to use with GitLab.
+
+The next question is how you will handle SSL in your environment.
+There are several different options:
+
+- [The application node terminates SSL](#application-node-terminates-ssl).
+- [The load balancer terminates SSL without backend SSL](#load-balancer-terminates-ssl-without-backend-ssl)
+ and communication is not secure between the load balancer and the application node.
+- [The load balancer terminates SSL with backend SSL](#load-balancer-terminates-ssl-with-backend-ssl)
+ and communication is *secure* between the load balancer and the application node.
+
+### Application node terminates SSL
+
+Configure your load balancer to pass connections on port 443 as `TCP` rather
+than `HTTP(S)` protocol. This will pass the connection to the application node's
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See the [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
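+
+As a minimal sketch, assuming HAProxy (with which this architecture was
+validated) and the example GitLab Rails IPs from this page, a TCP passthrough
+for port 443 could look like the following. The frontend/backend names are
+placeholders to adapt to your environment:
+
+```plaintext
+frontend gitlab-https-in
+    bind *:443
+    mode tcp
+    option tcplog
+    default_backend gitlab-rails-https
+
+backend gitlab-rails-https
+    mode tcp
+    balance leastconn
+    server gitlab1 10.6.0.111:443 check
+    server gitlab2 10.6.0.112:443 check
+    server gitlab3 10.6.0.113:443 check
+```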
+
+### Load balancer terminates SSL without backend SSL
+
+Configure your load balancer to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer and GitLab will not be secure,
+there is some additional configuration needed. See the
+[NGINX proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
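+
+As a sketch of the settings described in the linked documentation, the
+application nodes would keep an `https://` external URL while the bundled
+NGINX listens for plain HTTP from the load balancer:
+
+```ruby
+# /etc/gitlab/gitlab.rb on the application nodes
+external_url 'https://gitlab.example.com'
+
+# Listen for proxied, unencrypted traffic from the load balancer
+nginx['listen_port'] = 80
+nginx['listen_https'] = false
+```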
+
+### Load balancer terminates SSL with backend SSL
+
+Configure your load balancer(s) to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer(s) will be responsible for managing SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
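+
+For example, on the application nodes you would configure the bundled NGINX
+with its own certificate. The paths below are placeholders for your own
+internal certificates:
+
+```ruby
+# /etc/gitlab/gitlab.rb on the application nodes
+external_url 'https://gitlab.example.com'
+
+# Placeholder paths; point these at your own backend certificates
+nginx['ssl_certificate'] = '/etc/gitlab/ssl/gitlab.example.com.crt'
+nginx['ssl_certificate_key'] = '/etc/gitlab/ssl/gitlab.example.com.key'
+```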
+
+### Ports
+
+The basic ports to be used are shown in the table below.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
+
+If you're using GitLab Pages with custom domain support you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](../pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+#### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
+separate from the one used by the GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
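+
+With HAProxy, for instance, a minimal sketch of the alternate SSH frontend
+could look like the following, where `ALTSSH_VIP` is a placeholder for the
+new virtual IP address:
+
+```plaintext
+frontend gitlab-altssh-in
+    bind ALTSSH_VIP:443
+    mode tcp
+    default_backend gitlab-ssh
+
+backend gitlab-ssh
+    mode tcp
+    server gitlab1 10.6.0.111:22 check
+    server gitlab2 10.6.0.112:22 check
+    server gitlab3 10.6.0.113:22 check
+```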
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Consul
+
+The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+
+NOTE: **Note:**
+The configuration processes for the other servers in your reference architecture will
+use the `/etc/gitlab/gitlab-secrets.json` file from your Consul server to connect
+with the other servers.
+
+To configure Consul:
+
+1. SSH into the server that will host Consul.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['consul_role']
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul nodes, and
+ make sure you set up the correct IPs.
+
+NOTE: **Note:**
+A Consul leader is elected when the provisioning of the third Consul server is
+completed. Viewing the Consul logs (`sudo gitlab-ctl tail consul`) will display
+`...[INFO] consul: New leader elected: ...`.
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
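+
+The output should look something like the following once all three servers
+have been provisioned (node names, addresses, and versions will differ in
+your environment):
+
+```plaintext
+Node      Address         Status  Type    Build   Protocol  DC
+consul-1  10.6.0.11:8301  alive   server  1.6.10  2         gitlab_consul
+consul-2  10.6.0.12:8301  alive   server  1.6.10  2         gitlab_consul
+consul-3  10.6.0.13:8301  alive   server  1.6.10  2         gitlab_consul
+```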
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PostgreSQL
+
+In this section, you'll be guided through configuring an external PostgreSQL database
+to be used with GitLab.
+
+### Provide your own PostgreSQL instance
+
+If you're hosting GitLab on a cloud provider, you can optionally use a
+managed service for PostgreSQL. For example, AWS offers a managed Relational
+Database Service (RDS) that runs PostgreSQL.
+
+If you use a cloud-managed service, or provide your own PostgreSQL:
+
+1. Set up PostgreSQL according to the
+ [database requirements document](../../install/requirements.md#database).
+1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
+   needs privileges to create the `gitlabhq_production` database (see the SQL
+   sketch after this list).
+1. Configure the GitLab application servers with the appropriate details.
+ This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
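+
+As a hypothetical sketch of the second step, run from a superuser `psql`
+prompt on your own instance (managed services may expose user management
+differently):
+
+```sql
+-- Create the gitlab user; CREATEDB is needed for the gitlabhq_production database.
+CREATE ROLE gitlab LOGIN PASSWORD 'strong-password-here' CREATEDB;
+```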
+
+### Standalone PostgreSQL using Omnibus GitLab
+
+The following IPs will be used as an example:
+
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+
+First, make sure to [install](https://about.gitlab.com/install/)
+the Linux GitLab package **on each node**. Following the steps on the page,
+install the necessary dependencies from step 1, and add the
+GitLab package repository from step 2. When installing GitLab
+in the second step, do not supply the `EXTERNAL_URL` value.
+
+#### PostgreSQL primary node
+
+1. SSH into the PostgreSQL primary node.
+1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
+ username of `gitlab` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<postgresql_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
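+
+   The hash follows the standard PostgreSQL `md5` scheme, so it should be
+   equivalent to the following (a hypothetical check, assuming the password
+   `mypassword` for the `gitlab` username):
+
+   ```shell
+   # "md5" followed by the MD5 hex digest of the password concatenated with the username
+   echo "md5$(echo -n 'mypasswordgitlab' | md5sum | awk '{print $1}')"
+   ```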
+
+1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
+ username of `pgbouncer` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<pgbouncer_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ ```
+
+1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
+ username of `gitlab-consul` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<consul_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab-consul
+ ```
+
+1. On the primary database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
+
+ ```ruby
+   # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+   # Please set the real values as explained in the Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ #
+ # END user configuration
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL secondary nodes
+
+1. On both the secondary nodes, add the same configuration specified above for the primary node,
+   plus an additional setting (`repmgr['master_on_initialization'] = false`) that informs `gitlab-ctl`
+   that they are initially standby nodes and should not attempt to register themselves as a primary node:
+
+ ```ruby
+   # Disable all components except PostgreSQL, Repmgr, and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # Specify if a node should attempt to be primary on initialization.
+ repmgr['master_on_initialization'] = false
+
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace with your network addresses
+ postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL post-configuration
+
+SSH into the **primary node**:
+
+1. Open a database prompt:
+
+ ```shell
+ gitlab-psql -d gitlabhq_production
+ ```
+
+1. Make sure the `pg_trgm` extension is enabled (it might already be):
+
+   ```sql
+ CREATE EXTENSION pg_trgm;
+ ```
+
+1. Exit the database prompt by typing `\q` and Enter.
+
+1. Verify the cluster is initialized with one node:
+
+ ```shell
+ gitlab-ctl repmgr cluster show
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
+
+1. Note down the hostname or IP address in the connection string: `host=HOSTNAME`. We will
+ refer to the hostname in the next section as `<primary_node_name>`. If the value
+ is not an IP address, it will need to be a resolvable name (via DNS or
+ `/etc/hosts`)
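+
+   For example, if you rely on `/etc/hosts`, a hypothetical entry on the
+   database and application nodes could look like:
+
+   ```plaintext
+   10.6.0.21 <primary_node_name>
+   ```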
+
+SSH into the **secondary node**:
+
+1. Set up the repmgr standby:
+
+ ```shell
+ gitlab-ctl repmgr standby setup <primary_node_name>
+ ```
+
+   Note that this will remove the existing data on the node. The command
+   waits 30 seconds before proceeding; rerun it with the `-w` option to skip the wait.
+
+ The output should be similar to the following:
+
+ ```console
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
+
+Before moving on, make sure the databases are configured correctly. Run the
+following command on the **primary** node to verify that replication is working
+properly and the secondary nodes appear in the cluster:
+
+```shell
+gitlab-ctl repmgr cluster show
+```
+
+The output should be similar to the following:
+
+```plaintext
+Role | Name | Upstream | Connection String
+----------+---------|-----------|------------------------------------------------
+* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+```
+
+If the `Role` column for any node says `FAILED`, check the
+[Troubleshooting section](troubleshooting.md) before proceeding.
+
+Also, check that the `repmgr-check-master` command works successfully on each node:
+
+```shell
+su - gitlab-consul
+gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
+```
+
+This command relies on exit codes to tell Consul whether a particular node is a master
+or secondary. The most important thing here is that this command does not produce errors.
+If there are errors it's most likely due to incorrect `gitlab-consul` database user permissions.
+Check the [Troubleshooting section](troubleshooting.md) before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PgBouncer
+
+Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
+The following IPs will be used as an example:
+
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`, and replace
+ `<consul_password_hash>` and `<pgbouncer_password_hash>` with the
+ password hashes you [set up previously](#postgresql-primary-node):
+
+ ```ruby
+   # Disable all components except PgBouncer and the Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure PgBouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: '<consul_password_hash>'
+ },
+ 'pgbouncer': {
+ password: '<pgbouncer_password_hash>'
+ }
+ }
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+ consul['enable'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+ NOTE: **Note:**
+ If an error `execute[generate databases.ini]` occurs, this is due to an existing
+ [known issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4713).
+ It will be resolved when you run a second `reconfigure` after the next step.
+
+1. Create a `.pgpass` file so Consul is able to
+ reload PgBouncer. Enter the PgBouncer password twice when asked:
+
+ ```shell
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) once again
+ to resolve any potential errors from the previous steps.
+1. Ensure each node is talking to the current primary:
+
+ ```shell
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+1. Once the console prompt is available, run the following queries:
+
+   ```sql
+ show databases ; show clients ;
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the internal load balancer
+
+If you're running more than one PgBouncer node as recommended, you'll need to
+set up an internal TCP load balancer to route traffic to them correctly.
+
+The following IP will be used as an example:
+
+- `10.6.0.40`: Internal Load Balancer
+
+Here's how you could do it with [HAProxy](https://www.haproxy.org/):
+
+```plaintext
+global
+ log /dev/log local0
+ log localhost local1 notice
+ log stdout format raw local0
+
+defaults
+ log global
+ default-server inter 10s fall 3 rise 2
+ balance leastconn
+
+frontend internal-pgbouncer-tcp-in
+ bind *:6432
+ mode tcp
+ option tcplog
+
+ default_backend pgbouncer
+
+backend pgbouncer
+ mode tcp
+ option tcp-check
+
+    server pgbouncer1 10.6.0.31:6432 check
+    server pgbouncer2 10.6.0.32:6432 check
+    server pgbouncer3 10.6.0.33:6432 check
+```
+
+Refer to your preferred load balancer's documentation for further guidance.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Redis
+
+Using [Redis](https://redis.io/) in a scalable environment is possible using a **Primary** x **Replica**
+topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
+start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+The requirements for a Redis setup are the following:
+
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks
+ ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
+ using a firewall.
+
+In this section, you'll be guided through configuring two external Redis clusters
+to be used with GitLab. The following IPs will be used as an example:
+
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.61`: Redis - Queues Primary
+- `10.6.0.62`: Redis - Queues Replica 1
+- `10.6.0.63`: Redis - Queues Replica 2
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+NOTE: **Providing your own Redis instance:**
+Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+services support high availability, be sure it is **not** the Redis Cluster type.
+Redis version 5.0 or higher is required, as this is what ships with
+Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
+do not support an optional count argument to SPOP which is now required for
+[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+Note the Redis node's IP address or hostname, port, and password (if required).
+These will be necessary when configuring the
+[GitLab application servers](#configure-gitlab-rails) later.
+
+### Configure the Redis and Sentinel Cache cluster
+
+This is the section where we install and set up the new Redis Cache instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Cache node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.51'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+#### Configure the replica Redis Cache nodes
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.52'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Prevent database migrations from running on upgrade
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-cache-nodes), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Cache nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+
+To configure the Sentinel Cache server:
+
+1. SSH into the server that will host Consul/Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-cache'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.51'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.71'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
+   ##    able to talk with the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree that the primary is down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after x milliseconds.
+ #sentinel['down_after_milliseconds'] = 10000
+
+   ## Specifies the failover timeout in milliseconds. It is used in many ways:
+   ##
+   ## - The time needed to re-start a failover after a previous failover was
+   ##   already tried against the same primary by a given Sentinel, is two
+   ##   times the failover timeout.
+   ##
+   ## - The time needed for a replica replicating to a wrong primary according
+   ##   to a Sentinel's current configuration, to be forced to replicate
+   ##   with the right primary, is exactly the failover timeout (counting since
+   ##   the moment a Sentinel detected the misconfiguration).
+   ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE yet not
+   ##   acknowledged by the promoted replica).
+   ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+   ##   the exact parallel-syncs progression as specified.
+   #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul/Sentinel nodes, and
+ make sure you set up the correct IPs.
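+
+To sanity-check the Cache cluster, you can query one of the Sentinels with the
+bundled `redis-cli` (a sketch using the example IPs and master name from this
+section):
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.71 -p 26379 SENTINEL get-master-addr-by-name gitlab-redis-cache
+# Expected to return the current primary's address, for example:
+# 1) "10.6.0.51"
+# 2) "6379"
+```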
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the Redis and Sentinel Queues cluster
+
+This is the section where we install and set up the new Redis Queues instances.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configure the primary Redis Queues node
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ ```
+
+1. Only the primary GitLab application server should handle migrations. To
+ prevent database migrations from running on upgrade, add the following
+ configuration to your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+#### Configure the replica Redis Queues nodes
+
+1. SSH into the **replica** Redis Queues server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0', which listens on all interfaces.
+   # If you really need to bind to an externally accessible IP, make
+   # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.62'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-the-sentinel-queues-nodes), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### Configure the Sentinel Queues nodes
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+
+To configure the Sentinel Queues server:
+
+1. SSH into the server that will host Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis-persistent'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER'
+
+ ## The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ ## Define a port so Redis can listen for TCP requests which will allow other
+ ## machines to connect to it.
+ redis['port'] = 6379
+
+ ## Port of primary Redis server, uncomment to change to non default. Defaults
+ ## to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel's IP
+ sentinel['bind'] = '10.6.0.81'
+
+ ## Port that Sentinel listens on, uncomment to change to non default. Defaults
+ ## to `26379`.
+ #sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
+   ##    able to talk with the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree that the primary is down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after x milliseconds.
+ #sentinel['down_after_milliseconds'] = 10000
+
+   ## Specifies the failover timeout in milliseconds. It is used in many ways:
+   ##
+   ## - The time needed to re-start a failover after a previous failover was
+   ##   already tried against the same primary by a given Sentinel, is two
+   ##   times the failover timeout.
+   ##
+   ## - The time needed for a replica replicating to a wrong primary according
+   ##   to a Sentinel's current configuration, to be forced to replicate
+   ##   with the right primary, is exactly the failover timeout (counting since
+   ##   the moment a Sentinel detected the misconfiguration).
+   ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE yet not
+   ##   acknowledged by the promoted replica).
+   ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+   ##   the exact parallel-syncs progression as specified.
+   #sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. To prevent database migrations from running on upgrade, run:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+ Only the primary GitLab application server should handle migrations.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Sentinel nodes, and
+ make sure you set up the correct IPs.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Gitaly
+
+Deploying Gitaly on its own server can benefit GitLab installations that are
+larger than a single machine.
+
+The Gitaly node requirements are dependent on customer data, specifically the number of
+projects and their repository sizes. Two nodes are recommended as an absolute minimum.
+Each Gitaly node should store no more than 5TB of data and have the number of
+[`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) set to 20% of available CPUs.
+Additional nodes should be considered in conjunction with a review of expected
+data size and spread based on the recommendations above.
+
+It is also strongly recommended that all Gitaly nodes be set up with SSD disks with
+a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write,
+as Gitaly has heavy I/O. These IOPS values are recommended only as a starter as with
+time they may be adjusted higher or lower depending on the scale of your environment's workload.
+If you're running the environment on a Cloud provider, you may need to refer to
+their documentation on how to configure IOPS correctly.
+
+Some things to note:
+
+- The GitLab Rails application shards repositories into [repository storages](../repository_storage_paths.md).
+- A Gitaly server can host one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+ traffic is unencrypted by default. The use of a firewall is highly recommended
+ to restrict access to the Gitaly server. Another option is to
+ [use TLS](#gitaly-tls-support).
+
+TIP: **Tip:**
+For more information about Gitaly's history and network architecture see the
+[standalone Gitaly documentation](../gitaly/index.md).
+
+NOTE: **Note:**
+The token referred to throughout the Gitaly documentation is
+just an arbitrary password selected by the administrator. It is unrelated to
+tokens created for the GitLab API or other similar web API tokens.
+
+Below we describe how to configure two Gitaly servers, with IPs and
+domain names:
+
+- `10.6.0.91`: Gitaly 1 (`gitaly1.internal`)
+- `10.6.0.92`: Gitaly 2 (`gitaly2.internal`)
+
+The secret token is assumed to be `gitalysecret`, and your GitLab installation
+is assumed to have three repository storages:
+
+- `default` on Gitaly 1
+- `storage1` on Gitaly 1
+- `storage2` on Gitaly 2
+
+On each node:
+
+1. [Download/Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **without** providing the `EXTERNAL_URL` value.
+1. Edit `/etc/gitlab/gitlab.rb` to configure storage paths, enable
+ the network listener and configure the token:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the GitLab Rails application setup
+ gitaly['auth_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+
+ # If you run a separate monitoring node, you can disable these services
+ alertmanager['enable'] = false
+ prometheus['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+ # Comment out the following line if you only want to support TLS connections
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+ ```
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+ 1. On `gitaly1.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => {
+ 'path' => '/var/opt/gitlab/git-data'
+ },
+ 'storage1' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ 1. On `gitaly2.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'storage2' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+### Gitaly TLS support
+
+Gitaly supports TLS encryption. To communicate with a Gitaly instance that
+listens for secure connections, use the `tls://` URL scheme in the
+`gitaly_address` of the corresponding storage entry in the GitLab configuration.
+
+You will need to bring your own certificates, as they aren't provided
+automatically. The certificate, or its certificate authority, must be installed
+on all Gitaly nodes (including the Gitaly node using the certificate) and on all
+client nodes that communicate with it, following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
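+
+For example, a self-signed certificate with the required Subject Alternative
+Names could be generated with OpenSSL 1.1.1 or later (the hostname and IP below
+are this guide's example values; substitute your own):
+
+```shell
+sudo openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
+ -keyout key.pem -out cert.pem \
+ -subj "/CN=gitaly1.internal" \
+ -addext "subjectAltName=DNS:gitaly1.internal,IP:10.6.0.91"
+```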
+
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
+unencrypted listening address `listen_addr` and an encrypted listening
+address `tls_listen_addr` at the same time. This allows you to do a
+gradual transition from unencrypted to encrypted traffic, if necessary.
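+
+For example, a transitional `/etc/gitlab/gitlab.rb` sketch with both listeners
+active (using this guide's example ports) might look like:
+
+```ruby
+gitaly['listen_addr'] = "0.0.0.0:8075"      # unencrypted, for clients not yet migrated
+gitaly['tls_listen_addr'] = "0.0.0.0:9999"  # encrypted
+```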
+
+To configure Gitaly with TLS:
+
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+ sudo chmod 644 /etc/gitlab/ssl/key.pem /etc/gitlab/ssl/cert.pem
+ ```
+
+1. Copy the cert to `/etc/gitlab/trusted-certs` so Gitaly will trust the cert when
+ calling into itself:
+
+ ```shell
+ sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Delete `gitaly['listen_addr']` to allow only encrypted connections.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
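+
+1. Optionally, verify from a client node that the TLS listener presents the
+ expected certificate (a quick check using OpenSSL, with this guide's example
+ port):
+
+ ```shell
+ openssl s_client -connect gitaly1.internal:9999
+ ```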
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Sidekiq
+
+Sidekiq requires connections to the Redis, PostgreSQL, and Gitaly instances.
+The following IPs will be used as an example:
+
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+
+To configure the Sidekiq nodes, on each one:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+you want using steps 1 and 2 from the GitLab downloads page.
+**Do not complete any other steps on the download page.**
+1. Open `/etc/gitlab/gitlab.rb` with your editor:
+
+ ```ruby
+ ########################################
+ ##### Services Disabled ###
+ ########################################
+
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_workhorse['enable'] = false
+ puma['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ ########################################
+ #### Redis ###
+ ########################################
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+ ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ #######################################
+ ### Gitaly ###
+ #######################################
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'gitalysecret' # must match the Gitaly servers' auth_token
+
+ #######################################
+ ### Postgres ###
+ #######################################
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+
+ #######################################
+ ### Sidekiq configuration ###
+ #######################################
+ sidekiq['listen_address'] = "0.0.0.0"
+ sidekiq['cluster'] = true # no need to set this after GitLab 13.0
+
+ #######################################
+ ### Monitoring configuration ###
+ #######################################
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Rails Status for prometheus
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+TIP: **Tip:**
+You can also run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
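+
+For example, a minimal sketch that starts four Sidekiq processes per node, each
+listening to all queues (this assumes the `sidekiq['queue_groups']` setting
+described in the document linked above; tune the count to your CPU budget):
+
+```ruby
+sidekiq['cluster'] = true
+# One entry per process; '*' means the process works all queues
+sidekiq['queue_groups'] = ['*'] * 4
+```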
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure GitLab Rails
+
+NOTE: **Note:**
+In our architectures we run each GitLab Rails node using the Puma webserver,
+with its number of workers set to 90% of available CPUs and four threads. For
+nodes that are running Rails alongside other components, the worker value
+should be reduced accordingly; we've found 50% achieves a good balance, but
+this is dependent on workload.
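+
+For example, on a dedicated 16 vCPU Rails node, that guidance might translate to
+the following sketch (90% of 16 CPUs rounded down; illustrative values only):
+
+```ruby
+puma['worker_processes'] = 14
+puma['min_threads'] = 4
+puma['max_threads'] = 4
+```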
+
+This section describes how to configure the GitLab application (Rails) component.
+
+The following IPs will be used as an example:
+
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+
+On each node perform the following:
+
+1. Download/install Omnibus GitLab using **steps 1 and 2** from
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
+ steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration.
+ To maintain uniformity of links across nodes, the `external_url`
+ on the application server should point to the external URL that users will use
+ to access GitLab. This would be the URL of the [external load balancer](#configure-the-external-load-balancer)
+ which will route traffic to the GitLab application server:
+
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the Gitaly setup
+ gitlab_rails['gitaly_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+
+ ## Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ gitaly['enable'] = false
+ nginx['enable'] = true
+
+ ## PostgreSQL connection details
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['auto_migrate'] = false
+
+ ## Redis connection details
+ ## First cluster that will host the cache
+ gitlab_rails['redis_cache_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_FIRST_CLUSTER>@gitlab-redis-cache'
+
+ gitlab_rails['redis_cache_sentinels'] = [
+ {host: '10.6.0.71', port: 26379},
+ {host: '10.6.0.72', port: 26379},
+ {host: '10.6.0.73', port: 26379},
+ ]
+
+ ## Second cluster that will host the queues, shared state, and actioncable
+ gitlab_rails['redis_queues_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_shared_state_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+ gitlab_rails['redis_actioncable_instance'] = 'redis://:<REDIS_PRIMARY_PASSWORD_OF_SECOND_CLUSTER>@gitlab-redis-persistent'
+
+ gitlab_rails['redis_queues_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+ gitlab_rails['redis_actioncable_sentinels'] = [
+ {host: '10.6.0.81', port: 26379},
+ {host: '10.6.0.82', port: 26379},
+ {host: '10.6.0.83', port: 26379},
+ ]
+
+ # Set the network addresses that the exporters used for monitoring will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
+ sidekiq['listen_address'] = "0.0.0.0"
+ puma['listen'] = '0.0.0.0'
+
+ # Add the monitoring node's IP address to the monitoring whitelist and allow it to
+ # scrape the NGINX metrics
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.121/32', '127.0.0.0/8']
+ nginx['status']['options']['allow'] = ['10.6.0.121/32', '127.0.0.0/8']
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. If you're using [Gitaly with TLS support](#gitaly-tls-support), make sure the
+ `git_data_dirs` entry is configured with `tls` instead of `tcp`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
+ ```
+
+ 1. Copy the cert into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. If you're [using NFS](#configure-nfs-optional):
+ 1. If necessary, install the NFS client utility packages using the following
+ commands:
+
+ ```shell
+ # Ubuntu/Debian
+ apt-get install nfs-common
+
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
+
+ 1. Specify the necessary NFS mounts in `/etc/fstab`.
+ The exact contents of `/etc/fstab` will depend on how you chose
+ to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ for examples and the various options.
+
+ 1. Create the shared directories. These may be different depending on your NFS
+ mount locations.
+
+ ```shell
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
+
+ 1. Edit `/etc/gitlab/gitlab.rb` and use the following configuration:
+
+ ```ruby
+ ## Prevent GitLab from starting if NFS data mounts are not available
+ high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+
+ ## Ensure UIDs and GIDs match between servers for permissions via NFS
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Confirm the node can connect to Gitaly:
+
+ ```shell
+ sudo gitlab-rake gitlab:gitaly:check
+ ```
+
+ Then, tail the logs to see the requests:
+
+ ```shell
+ sudo gitlab-ctl tail gitaly
+ ```
+
+1. Optionally, from the Gitaly servers, confirm that Gitaly can perform callbacks to the internal API:
+
+ ```shell
+ sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
+ ```
+
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
+above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+certificates are not present, NGINX will fail to start. See the
+[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for more information.
+
+### GitLab Rails post-configuration
+
+Initialize the GitLab database by running the following on one of the Rails nodes:
+
+```shell
+sudo gitlab-rake gitlab:db:configure
+```
+
+NOTE: **Note:**
+If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+PostgreSQL it may be that your PgBouncer node's IP address is missing from
+PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
+[PgBouncer error `ERROR: pgbouncer cannot connect to server`](troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+in the Troubleshooting section before proceeding.
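+
+For reference, that setting on the database nodes is a list of CIDR ranges; a
+sketch with hypothetical node addresses looks like:
+
+```ruby
+# /etc/gitlab/gitlab.rb on each PostgreSQL node: include the PgBouncer
+# node IPs (here 10.6.0.31-10.6.0.33) among the trusted addresses
+postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.31/32 10.6.0.32/32 10.6.0.33/32)
+```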
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Prometheus
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node
+running [Prometheus](../monitoring/prometheus/index.md) and
+[Grafana](../monitoring/performance/grafana_configuration.md).
+
+The following IP will be used as an example:
+
+- `10.6.0.121`: Prometheus
+
+To configure the Monitoring node:
+
+1. SSH into the Monitoring node.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+
+1. Copy the `/etc/gitlab/gitlab-secrets.json` file from your Consul server, and replace
+ the file of the same name on this server. If that file is not on this server,
+ add the file from your Consul server to this server.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true # keep NGINX enabled, as it serves Grafana
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = '<grafana_password>'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. In the GitLab UI, under `admin/application_settings/metrics_and_profiling`, expand
+ **Metrics - Grafana** and set the Grafana URL to `http[s]://<MONITOR NODE>/-/grafana`.
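+
+1. Optionally, confirm that Prometheus has discovered the GitLab services
+ registered in Consul by querying its targets API (a quick sanity check,
+ using this guide's example monitoring IP):
+
+ ```shell
+ curl 'http://10.6.0.121:9090/api/v1/targets'
+ ```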
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure the object storage
+
+GitLab supports using an object storage service for holding numerous types of data.
+It's recommended over [NFS](#configure-nfs-optional) and, in general, it works
+better in larger setups, as object storage is typically much more performant,
+reliable, and scalable.
+
+Object storage options that GitLab has tested, or is aware of customers using, include:
+
+- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage).
+- On-premises hardware and appliances from various storage vendors.
+- MinIO. There is [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+
+To configure GitLab to use object storage, refer to the following guides,
+based on which features you intend to use:
+
+1. Configure [object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. Configure [object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. Configure [object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. Configure [object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. Configure [object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
+1. Configure [object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
+
+Using separate buckets for each data type is the recommended approach for GitLab.
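+
+As an illustration, a storage-specific sketch for job artifacts in their own
+bucket (hypothetical bucket name and credentials; see the artifacts guide above
+for the full set of options):
+
+```ruby
+gitlab_rails['artifacts_object_store_enabled'] = true
+gitlab_rails['artifacts_object_store_remote_directory'] = 'gitlab-artifacts'
+gitlab_rails['artifacts_object_store_connection'] = {
+ 'provider' => 'AWS',
+ 'region' => 'us-east-1',
+ 'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
+ 'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
+}
+```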
+
+A limitation of our configuration is that each use of object storage must be
+configured separately. [We have an issue for improving this](https://gitlab.com/gitlab-org/gitlab/-/issues/23345);
+one improvement it might bring is the ability to easily use one bucket with separate folders.
+
+There is at least one specific issue with using the same bucket: when GitLab is
+deployed with the Helm chart, restore from backup
+[will not function properly](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer)
+unless separate buckets are used.
+
+One risk of using a single bucket is that your organization might decide to
+migrate GitLab to the Helm deployment in the future. GitLab would run, but the
+backup problem might not be noticed until the organization had a critical need
+for the backups to work.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly),
+is recommended over NFS wherever possible for improved performance. If you intend
+to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+See how to [configure NFS](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Troubleshooting
+
+See the [troubleshooting documentation](troubleshooting.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index 0b4114bca6e..14685ffa53d 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -1,49 +1,54 @@
---
reading_time: true
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
-# Reference architecture: up to 5,000 users
+# Reference architecture: up to 5,000 users **(PREMIUM ONLY)**
-This page describes GitLab reference architecture for up to 5,000 users.
-For a full list of reference architectures, see
+This page describes GitLab reference architecture for up to 5,000 users. For a
+full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
NOTE: **Note:**
-The 5,000-user reference architecture documented below is
-designed to help your organization achieve a highly-available GitLab deployment.
-If you do not have the expertise or need to maintain a highly-available
-environment, you can have a simpler and less costly-to-operate environment by
-following the [2,000-user reference architecture](2k_users.md).
+This reference architecture is designed to help your organization achieve a
+highly-available GitLab deployment. If you do not have the expertise or need to
+maintain a highly-available environment, you can have a simpler and less
+costly-to-operate environment by using the
+[2,000-user reference architecture](2k_users.md).
> - **Supported users (approximate):** 5,000
-> - **High Availability:** True
-> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
-
-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|-----------------|-----------------------|----------------|
-| External load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Redis | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Consul + Sentinel | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Gitaly | 2 minimum | 8 vCPU, 30GB Memory | `n1-standard-8` | `m5.2xlarge` | `D8s v3` |
-| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| GitLab Rails | 3 | 16 vCPU, 14.4GB Memory | `n1-highcpu-16` | `c5.4xlarge` | `F16s v2` |
-| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object Storage | n/a | n/a | n/a | n/a | n/a |
-| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-
-The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
-CPU platform on GCP. On different hardware you may find that adjustments, either lower
-or higher, are required for your CPU or Node counts accordingly. For more information, a
-[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
-[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
-
-For data objects such as LFS, Uploads, Artifacts, etc, an [object storage service](#configure-the-object-storage)
-is recommended over NFS where possible, due to better performance and availability.
-Since this doesn't require a node to be set up, it's marked as not applicable (n/a)
-in the table above.
+> - **High Availability:** Yes
+> - **Test requests per second (RPS) rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
+
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|--------------------------------------------|-------------|-------------------------|----------------|-------------|----------|
+| External load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Redis | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| Consul + Sentinel | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| PostgreSQL | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Gitaly | 2 (minimum) | 8 vCPU, 30GB memory | n1-standard-8 | m5.2xlarge | D8s v3 |
+| Sidekiq | 4 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
+| GitLab Rails | 3 | 16 vCPU, 14.4GB memory | n1-highcpu-16 | c5.4xlarge | F16s v2 |
+| Monitoring node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
+
+The Google Cloud Platform (GCP) architectures were built and tested using the
+[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or node counts. For more information, see
+our [Sysbench](https://github.com/akopytov/sysbench)-based
+[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects (such as LFS, Uploads, or Artifacts), an
+[object storage service](#configure-the-object-storage) is recommended instead
+of NFS where possible, due to better performance and availability. Since this
+doesn't require a node to be set up, *Object Storage* is noted as not
+applicable (n/a) in the previous table.
## Setup components
@@ -1439,7 +1444,7 @@ On each node perform the following:
1. Specify the necessary NFS mounts in `/etc/fstab`.
The exact contents of `/etc/fstab` will depend on how you chose
- to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ to configure your NFS server. See the [NFS documentation](../nfs.md)
for examples and the various options.
1. Create the shared directories. These may be different depending on your NFS
@@ -1748,7 +1753,7 @@ work.
are recommended over NFS wherever possible for improved performance. If you intend
to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
-See how to [configure NFS](../high_availability/nfs.md).
+See how to [configure NFS](../nfs.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
diff --git a/doc/administration/reference_architectures/img/reference-architectures.png b/doc/administration/reference_architectures/img/reference-architectures.png
index e15609e78e1..0f8e663b57b 100644
--- a/doc/administration/reference_architectures/img/reference-architectures.png
+++ b/doc/administration/reference_architectures/img/reference-architectures.png
Binary files differ
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 8fde71a66bf..4f7be2413dd 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -1,36 +1,45 @@
---
type: reference, concepts
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
-# Reference architectures
-<!-- TBD to be reviewed by Eric -->
+# Reference architectures
You can set up GitLab on a single server or scale it up to serve many users.
-This page details the recommended Reference Architectures that were built and verified by GitLab's Quality and Support teams.
+This page details the recommended Reference Architectures that were built and
+verified by GitLab's Quality and Support teams.
-Below is a chart representing each architecture tier and the number of users they can handle. As your number of users grow with time, it’s recommended that you scale GitLab accordingly.
+Below is a chart representing each architecture tier and the number of users
+they can handle. As your number of users grows with time, it’s recommended that
+you scale GitLab accordingly.
![Reference Architectures](img/reference-architectures.png)
<!-- Internal link: https://docs.google.com/spreadsheets/d/1obYP4fLKkVVDOljaI3-ozhmCiPtEeMblbBKkf2OADKs/edit#gid=1403207183 -->
-Testing on these reference architectures were performed with [GitLab's Performance Tool](https://gitlab.com/gitlab-org/quality/performance)
-at specific coded workloads, and the throughputs used for testing were calculated based on sample customer data.
-After selecting the reference architecture that matches your scale, refer to
-[Configure GitLab to Scale](#configure-gitlab-to-scale) to see the components
-involved, and how to configure them.
+Testing on these reference architectures was performed with
+[GitLab's Performance Tool](https://gitlab.com/gitlab-org/quality/performance)
+at specific coded workloads, and the throughputs used for testing were
+calculated based on sample customer data. Select the
+[reference architecture](#available-reference-architectures) that matches your scale.
-Each endpoint type is tested with the following number of requests per second (RPS) per 1000 users:
+Each endpoint type is tested with the following number of requests per second (RPS)
+per 1,000 users:
- API: 20 RPS
- Web: 2 RPS
- Git: 2 RPS
-For GitLab instances with less than 2,000 users, it's recommended that you use the [default setup](#automated-backups-core-only)
-by [installing GitLab](../../install/README.md) on a single machine to minimize maintenance and resource costs.
+For GitLab instances with fewer than 2,000 users, it's recommended that you use
+the [default setup](#automated-backups-core-only) by
+[installing GitLab](../../install/README.md) on a single machine to minimize
+maintenance and resource costs.
-If your organization has more than 2,000 users, the recommendation is to scale GitLab's components to multiple
-machine nodes. The machine nodes are grouped by component(s). The addition of these
-nodes increases the performance and scalability of to your GitLab instance.
+If your organization has more than 2,000 users, the recommendation is to scale
+GitLab's components to multiple machine nodes. The machine nodes are grouped by
+components. The addition of these nodes increases the performance and
+scalability of your GitLab instance.
When scaling GitLab, there are several factors to consider:
@@ -39,12 +48,13 @@ When scaling GitLab, there are several factors to consider:
- The application nodes connects to a shared file server and PostgreSQL and Redis services on the backend.
NOTE: **Note:**
-Depending on your workflow, the following recommended
-reference architectures may need to be adapted accordingly. Your workload
-is influenced by factors including how active your users are,
-how much automation you use, mirroring, and repository/change size. Additionally the
-displayed memory values are provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types).
-For different cloud vendors, attempt to select options that best match the provided architecture.
+Depending on your workflow, the following recommended reference architectures
+may need to be adapted accordingly. Your workload is influenced by factors
+including how active your users are, how much automation you use, mirroring,
+and repository/change size. Additionally, the displayed memory values are
+provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types).
+For different cloud vendors, attempt to select options that best match the
+provided architecture.
## Available reference architectures
@@ -60,14 +70,14 @@ The following reference architectures are available:
## Availability Components
-GitLab comes with the following components for your use, listed from
-least to most complex:
+GitLab comes with the following components for your use, listed from least to
+most complex:
-1. [Automated backups](#automated-backups-core-only)
-1. [Traffic load balancer](#traffic-load-balancer-starter-only)
-1. [Zero downtime updates](#zero-downtime-updates-starter-only)
-1. [Automated database failover](#automated-database-failover-premium-only)
-1. [Instance level replication with GitLab Geo](#instance-level-replication-with-gitlab-geo-premium-only)
+- [Automated backups](#automated-backups-core-only)
+- [Traffic load balancer](#traffic-load-balancer-starter-only)
+- [Zero downtime updates](#zero-downtime-updates-starter-only)
+- [Automated database failover](#automated-database-failover-premium-only)
+- [Instance level replication with GitLab Geo](#instance-level-replication-with-gitlab-geo-premium-only)
As you implement these components, begin with a single server and then do
backups. Only after completing the first server should you proceed to the next.
@@ -115,7 +125,8 @@ to the default installation:
> - Supported tiers: [GitLab Starter, Premium, and Ultimate](https://about.gitlab.com/pricing/)
GitLab supports [zero-downtime updates](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates).
-Although you can perform zero-downtime updates with a single GitLab node, the recommendation is to separate GitLab into several application nodes.
+Single GitLab nodes can be updated with only a [few minutes of downtime](https://docs.gitlab.com/omnibus/update/README.html#single-node-deployment).
+To avoid this, we recommend separating GitLab into several application nodes.
As long as at least one of each component is online and capable of handling the instance's usage load, your team's productivity will not be interrupted during the update.
### Automated database failover **(PREMIUM ONLY)**
@@ -140,40 +151,7 @@ is recommended.
instance to other geographical locations as a read-only fully operational instance
that can also be promoted in case of disaster.
-## Configure GitLab to scale
-
-NOTE: **Note:**
-From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0, support for NFS for Git repositories is scheduled to be removed. Upgrade to [Gitaly Cluster](../gitaly/praefect.md) as soon as possible.
-
-The following components are the ones you need to configure in order to scale
-GitLab. They are listed in the order you'll typically configure them if they are
-required by your [reference architecture](#reference-architectures) of choice.
-
-Most of them are bundled in the GitLab deb/rpm package (called Omnibus GitLab),
-but depending on your system architecture, you may require some components which are
-not included in it. If required, those should be configured before
-setting up components provided by GitLab. Advice on how to select the right
-solution for your organization is provided in the configuration instructions
-column.
-
-| Component | Description | Configuration instructions | Bundled with Omnibus GitLab |
-|-----------|-------------|----------------------------|
-| Load balancer(s) ([6](#footnotes)) | Handles load balancing, typically when you have multiple GitLab application services nodes | [Load balancer configuration](../high_availability/load_balancer.md) ([6](#footnotes)) | No |
-| Object storage service ([4](#footnotes)) | Recommended store for shared data objects | [Object Storage configuration](../object_storage.md) | No |
-| NFS ([5](#footnotes)) ([7](#footnotes)) | Shared disk storage service. Can be used as an alternative Object Storage. Required for GitLab Pages | [NFS configuration](../high_availability/nfs.md) | No |
-| [Consul](../../development/architecture.md#consul) ([3](#footnotes)) | Service discovery and health checks/failover | [Consul configuration](../high_availability/consul.md) **(PREMIUM ONLY)** | Yes |
-| [PostgreSQL](../../development/architecture.md#postgresql) | Database | [PostgreSQL configuration](https://docs.gitlab.com/omnibus/settings/database.html) | Yes |
-| [PgBouncer](../../development/architecture.md#pgbouncer) | Database connection pooler | [PgBouncer configuration](../high_availability/pgbouncer.md#running-pgbouncer-as-part-of-a-non-ha-gitlab-installation) **(PREMIUM ONLY)** | Yes |
-| Repmgr | PostgreSQL cluster management and failover | [PostgreSQL and Repmgr configuration](../postgresql/replication_and_failover.md) | Yes |
-| Patroni | An alternative PostgreSQL cluster management and failover | [PostgreSQL and Patroni configuration](../postgresql/replication_and_failover.md#patroni) | Yes |
-| [Redis](../../development/architecture.md#redis) ([3](#footnotes)) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) | Yes |
-| Redis Sentinel | Redis | [Redis Sentinel configuration](../high_availability/redis.md) | Yes |
-| [Gitaly](../../development/architecture.md#gitaly) ([2](#footnotes)) ([7](#footnotes)) | Provides access to Git repositories | [Gitaly configuration](../gitaly/index.md#run-gitaly-on-its-own-server) | Yes |
-| [Sidekiq](../../development/architecture.md#sidekiq) | Asynchronous/background jobs | [Sidekiq configuration](../high_availability/sidekiq.md) | Yes |
-| [GitLab application services](../../development/architecture.md#unicorn)([1](#footnotes)) | Puma/Unicorn, Workhorse, GitLab Shell - serves front-end requests (UI, API, Git over HTTP/SSH) | [GitLab app scaling configuration](../high_availability/gitlab.md) | Yes |
-| [Prometheus](../../development/architecture.md#prometheus) and [Grafana](../../development/architecture.md#grafana) | GitLab environment monitoring | [Monitoring node for scaling](../high_availability/monitoring_node.md) | Yes |
-
-### Configuring select components with Cloud Native Helm
+## Configuring select components with Cloud Native Helm
We also provide [Helm charts](https://docs.gitlab.com/charts/) as a Cloud Native installation
method for GitLab. For the reference architectures, select components can be set up in this
@@ -191,50 +169,3 @@ specs, only translated into Kubernetes resources.
For example, if you were to set up a 50k installation with the Rails nodes being run in Helm,
then the same amount of resources as given for Omnibus should be given to the Kubernetes
cluster with the Rails nodes broken down into a number of smaller Pods across that cluster.
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance.
-
-1. NFS can be used as an alternative for object storage but this isn't typically
- recommended for performance reasons. Note however it is required for [GitLab
- Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
diff --git a/doc/administration/reference_architectures/troubleshooting.md b/doc/administration/reference_architectures/troubleshooting.md
index 35bdb65c810..db2e9b89ba2 100644
--- a/doc/administration/reference_architectures/troubleshooting.md
+++ b/doc/administration/reference_architectures/troubleshooting.md
@@ -1,3 +1,9 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Troubleshooting a reference architecture setup
This page serves as the troubleshooting documentation if you followed one of
@@ -17,7 +23,7 @@ with the Fog library that GitLab uses. Symptoms include:
### GitLab Pages requires NFS
If you intend to use [GitLab Pages](../../user/project/pages/index.md), this currently requires
-[NFS](../high_availability/nfs.md). There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196)
+[NFS](../nfs.md). There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196)
to remove this dependency. In the future, GitLab Pages may use
[object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/208135).
@@ -524,7 +530,7 @@ To restart either service, run `gitlab-ctl restart SERVICE`
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start afterwards with `gitlab-ctl start repmgrd`.
-On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../high_availability/consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../consul.md#restart-consul) for instructions on how to restart the service.
### `gitlab-ctl repmgr-check-master` command produces errors
diff --git a/doc/administration/reply_by_email.md b/doc/administration/reply_by_email.md
index 6d7069dd461..62645ad17a1 100644
--- a/doc/administration/reply_by_email.md
+++ b/doc/administration/reply_by_email.md
@@ -1,5 +1,13 @@
+---
+stage: Plan
+group: Certify
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Reply by email
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/1173) in GitLab 8.0.
+
GitLab can be set up to allow users to comment on issues and merge requests by
replying to notification emails.
diff --git a/doc/administration/reply_by_email_postfix_setup.md b/doc/administration/reply_by_email_postfix_setup.md
index 86854a2a7b6..f950134889d 100644
--- a/doc/administration/reply_by_email_postfix_setup.md
+++ b/doc/administration/reply_by_email_postfix_setup.md
@@ -306,7 +306,7 @@ Courier, which we will install later to add IMAP authentication, requires mailbo
```shell
Trying 123.123.123.123...
- Connected to mail.example.gitlab.com.
+ Connected to mail.gitlab.example.com.
Escape character is '^]'.
- OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
```
diff --git a/doc/administration/repository_checks.md b/doc/administration/repository_checks.md
index 6d9ab723d2f..7d79840b56d 100644
--- a/doc/administration/repository_checks.md
+++ b/doc/administration/repository_checks.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Editor
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# Repository checks
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/3232) in GitLab 8.7.
diff --git a/doc/administration/repository_storage_paths.md b/doc/administration/repository_storage_paths.md
index 5520eeace0a..16e17458e10 100644
--- a/doc/administration/repository_storage_paths.md
+++ b/doc/administration/repository_storage_paths.md
@@ -68,7 +68,7 @@ the example below, we add two more mount points that are named `nfs_1` and `nfs_
respectively.
NOTE: **Note:**
-This example uses NFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
+This example uses NFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
**For installations from source**
@@ -89,7 +89,7 @@ This example uses NFS. We do not recommend using EFS for storage as it may impac
1. [Restart GitLab](restart_gitlab.md#installations-from-source) for the changes to take effect.
->**Note:**
+NOTE: **Note:**
The [`gitlab_shell: repos_path` entry](https://gitlab.com/gitlab-org/gitlab-foss/-/blob/8-9-stable/config/gitlab.yml.example#L457) in `gitlab.yml` will be
deprecated and replaced by `repositories: storages` in the future, so if you
are upgrading from a version prior to 8.10, make sure to add the configuration
diff --git a/doc/administration/server_hooks.md b/doc/administration/server_hooks.md
index 2fea000b799..ab808fc28d8 100644
--- a/doc/administration/server_hooks.md
+++ b/doc/administration/server_hooks.md
@@ -163,13 +163,13 @@ them as they can change.
The following server hooks have been re-implemented in Go:
- `pre-receive`, with the Go implementation used by default. To use the Ruby implementation instead,
- [disable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
- `:gitaly_go_preceive_hook` feature flag.
+ [disable](feature_flags.md#enable-or-disable-the-feature) the `:gitaly_go_preceive_hook` feature
+ flag.
- `update`, with the Go implementation used by default. To use the Ruby implementation instead,
- [disable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
- `:gitaly_go_update_hook` feature flag.
+ [disable](feature_flags.md#enable-or-disable-the-feature) the `:gitaly_go_update_hook` feature
+ flag.
- `post-receive`, however the Ruby implementation is used by default. To use the Go implementation
- instead, [enable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
+ instead, [enable](feature_flags.md#enable-or-disable-the-feature) the
`:gitaly_go_postreceive_hook` feature flag.
## Custom error messages
diff --git a/doc/administration/sidekiq.md b/doc/administration/sidekiq.md
new file mode 100644
index 00000000000..23e870dbb82
--- /dev/null
+++ b/doc/administration/sidekiq.md
@@ -0,0 +1,188 @@
+---
+type: reference
+---
+
+# Configuring Sidekiq
+
+This section discusses how to configure an external Sidekiq instance using the
+bundled Sidekiq in the GitLab package.
+
+Sidekiq requires connections to the Redis, PostgreSQL, and Gitaly instances.
+To configure the Sidekiq node:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+you want using steps 1 and 2 from the GitLab downloads page.
+**Do not complete any other steps on the download page.**
+1. Open `/etc/gitlab/gitlab.rb` with your editor.
+1. Generate the Sidekiq configuration:
+
+ ```ruby
+ sidekiq['listen_address'] = "10.10.1.48"
+
+ ## Optional: Enable extra Sidekiq processes
+ sidekiq_cluster['enable'] = true
+ sidekiq_cluster['queue_groups'] = [
+ "elastic_indexer"
+ ]
+ ```
+
+1. Set up Sidekiq's connection to Redis:
+
+ ```ruby
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the master node.
+ redis['master_password'] = 'YOUR_PASSWORD'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.10.1.34', 'port' => 26379},
+ {'host' => '10.10.1.35', 'port' => 26379},
+ {'host' => '10.10.1.36', 'port' => 26379},
+ ]
+ ```
+
+1. Set up Sidekiq's connection to Gitaly:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+ ```
+
+1. Set up Sidekiq's connection to PostgreSQL:
+
+ ```ruby
+ gitlab_rails['db_host'] = '10.10.1.30'
+ gitlab_rails['db_password'] = 'YOUR_PASSWORD'
+ gitlab_rails['db_port'] = '5432'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+ Remember to add the Sidekiq nodes to PostgreSQL's trusted addresses:
+
+ ```ruby
+ postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.10.1.30/32 10.10.1.31/32 10.10.1.32/32 10.10.1.33/32 10.10.1.38/32)
+ ```
+
+1. Disable other services:
+
+ ```ruby
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_monitor['enable'] = false
+ gitlab_workhorse['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ puma['enable'] = false
+ gitlab_exporter['enable'] = false
+ ```
+
+1. Run `gitlab-ctl reconfigure`.
+
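+After reconfiguring, you can confirm that only the expected services are
+running and watch Sidekiq pick up jobs. For example:
+
+```shell
+# List the services managed on this node and their state
+sudo gitlab-ctl status
+
+# Follow the Sidekiq log for job activity or errors
+sudo gitlab-ctl tail sidekiq
+```
+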
+NOTE: **Note:**
+You will need to restart the Sidekiq nodes after an update has occurred and
+database migrations have been performed.
+
+## Example configuration
+
+Here's what the resulting `/etc/gitlab/gitlab.rb` should look like:
+
+```ruby
+########################################
+##### Services Disabled ###
+########################################
+
+nginx['enable'] = false
+grafana['enable'] = false
+prometheus['enable'] = false
+gitlab_rails['auto_migrate'] = false
+alertmanager['enable'] = false
+gitaly['enable'] = false
+gitlab_monitor['enable'] = false
+gitlab_workhorse['enable'] = false
+postgres_exporter['enable'] = false
+postgresql['enable'] = false
+redis['enable'] = false
+redis_exporter['enable'] = false
+puma['enable'] = false
+gitlab_exporter['enable'] = false
+
+########################################
+#### Redis ###
+########################################
+
+## Must be the same in every sentinel node
+redis['master_name'] = 'gitlab-redis'
+
+## The same password for Redis authentication you set up for the master node.
+redis['master_password'] = 'YOUR_PASSWORD'
+
+## A list of sentinels with `host` and `port`
+gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.10.1.34', 'port' => 26379},
+ {'host' => '10.10.1.35', 'port' => 26379},
+ {'host' => '10.10.1.36', 'port' => 26379},
+ ]
+
+#######################################
+### Gitaly ###
+#######################################
+
+git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly:8075' },
+})
+gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+
+#######################################
+### Postgres ###
+#######################################
+gitlab_rails['db_host'] = '10.10.1.30'
+gitlab_rails['db_password'] = 'YOUR_PASSWORD'
+gitlab_rails['db_port'] = '5432'
+gitlab_rails['db_adapter'] = 'postgresql'
+gitlab_rails['db_encoding'] = 'unicode'
+gitlab_rails['auto_migrate'] = false
+
+#######################################
+### Sidekiq configuration ###
+#######################################
+sidekiq['listen_address'] = "10.10.1.48"
+
+#######################################
+### Monitoring configuration ###
+#######################################
+consul['enable'] = true
+consul['monitoring_service_discovery'] = true
+
+consul['configuration'] = {
+ bind_addr: '10.10.1.48',
+ retry_join: %w(10.10.1.34 10.10.1.35 10.10.1.36)
+}
+
+# Set the network addresses that the exporters will listen on
+node_exporter['listen_address'] = '10.10.1.48:9100'
+
+# Rails status for Prometheus
+gitlab_rails['monitoring_whitelist'] = ['10.10.1.42', '127.0.0.1']
+```
+
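+To confirm that the exporter endpoints are reachable from your monitoring
+node, you can query them directly. A quick check, assuming `curl` is
+available on the monitoring node:
+
+```shell
+# The node exporter configured above should answer on port 9100
+curl --silent http://10.10.1.48:9100/metrics | head
+```
+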
+## Further reading
+
+Related Sidekiq configuration:
+
+1. [Extra Sidekiq processes](operations/extra_sidekiq_processes.md)
+1. [Using the GitLab-Sidekiq chart](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/)
diff --git a/doc/administration/smime_signing_email.md b/doc/administration/smime_signing_email.md
index 304b65917c1..b1e7e349978 100644
--- a/doc/administration/smime_signing_email.md
+++ b/doc/administration/smime_signing_email.md
@@ -3,7 +3,7 @@
Notification emails sent by GitLab can be signed with S/MIME for improved
security.
-> **Note:**
+NOTE: **Note:**
Please be aware that S/MIME certificates and TLS/SSL certificates are not the
same and are used for different purposes: TLS creates a secure channel, whereas
S/MIME signs and/or encrypts the message itself
diff --git a/doc/administration/snippets/index.md b/doc/administration/snippets/index.md
index cf3d8bec1a6..95de3b8c183 100644
--- a/doc/administration/snippets/index.md
+++ b/doc/administration/snippets/index.md
@@ -1,5 +1,8 @@
---
type: reference, howto
+stage: Create
+group: Editor
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
---
# Snippets settings **(CORE ONLY)**
@@ -10,15 +13,15 @@ Adjust the snippets' settings of your GitLab instance.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/31133) in GitLab 12.6.
-You can set a content size max limit in snippets. This limit can prevent
-abuses of the feature. The default content size limit is **52428800 Bytes** (50MB).
+You can set a maximum content size limit for snippets. This limit can prevent
+abuse of the feature. The default value is **52428800 bytes** (50 MB).
### How does it work?
-The content size limit will be applied when a snippet is created or
-updated. Nevertheless, in order not to break any existing snippet,
-the limit will only be enforced in stored snippets when the content
-is updated.
+The content size limit will be applied when a snippet is created or updated.
+
+In order not to break any existing snippets, the limit doesn't have any
+effect on them until a snippet is edited again and the content changes.
### Snippets size limit configuration
@@ -27,7 +30,7 @@ In order to configure this setting, use either the Rails console
or the [Application settings API](../../api/settings.md).
NOTE: **IMPORTANT:**
-The value of the limit **must** be in Bytes.
+The value of the limit **must** be in bytes.
#### Through the Rails console
diff --git a/doc/administration/static_objects_external_storage.md b/doc/administration/static_objects_external_storage.md
index 973f4304115..34c7c8947fc 100644
--- a/doc/administration/static_objects_external_storage.md
+++ b/doc/administration/static_objects_external_storage.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Editor
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference
+---
+
# Static objects external storage
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/31025) in GitLab 12.3.
@@ -52,10 +59,10 @@ sequenceDiagram
## Set up external storage
-While this procedure uses [CloudFlare Workers](https://workers.cloudflare.com) for external storage,
+While this procedure uses [Cloudflare Workers](https://workers.cloudflare.com) for external storage,
other CDNs or Function as a Service (FaaS) systems should work using the same principles.
-1. Choose a CloudFlare Worker domain if you haven't done so already.
+1. Choose a Cloudflare Worker domain if you haven't done so already.
1. In the following script, set the following values for the first two constants:
- `ORIGIN_HOSTNAME`: the hostname of your GitLab installation.
diff --git a/doc/administration/terraform_state.md b/doc/administration/terraform_state.md
index 8d3ddc4e306..76e54acce16 100644
--- a/doc/administration/terraform_state.md
+++ b/doc/administration/terraform_state.md
@@ -26,7 +26,6 @@ below.
`/etc/gitlab/gitlab.rb` and add the following line:
```ruby
- gitlab_rails['terraform_state_enabled'] = true
gitlab_rails['terraform_state_storage_path'] = "/mnt/storage/terraform_state"
```
@@ -76,7 +75,6 @@ See [the available connection settings for different providers](object_storage.m
the values you want:
```ruby
- gitlab_rails['terraform_state_enabled'] = true
gitlab_rails['terraform_state_object_store_enabled'] = true
gitlab_rails['terraform_state_object_store_remote_directory'] = "terraform"
gitlab_rails['terraform_state_object_store_connection'] = {
diff --git a/doc/administration/troubleshooting/debug.md b/doc/administration/troubleshooting/debug.md
index 55fd6462bc3..5daf34c1011 100644
--- a/doc/administration/troubleshooting/debug.md
+++ b/doc/administration/troubleshooting/debug.md
@@ -7,6 +7,10 @@ in production.
Troubleshooting and debugging your GitLab instance often requires a
[Rails console](https://guides.rubyonrails.org/command_line.html#rails-console).
+See also:
+
+- [GitLab Rails Console Cheat Sheet](gitlab_rails_cheat_sheet.md).
+- [Navigating GitLab via Rails console](navigating_gitlab_via_rails_console.md).
**For Omnibus installations**
@@ -73,7 +77,7 @@ sudo gitlab-rails runner "RAILS_COMMAND"
# Example with a two-line Ruby script
sudo gitlab-rails runner "user = User.first; puts user.username"
-# Example with a ruby script file
+# Example with a Ruby script file (make sure to use the full path)
sudo gitlab-rails runner /path/to/script.rb
```
@@ -85,7 +89,7 @@ sudo -u git -H bundle exec rails runner -e production "RAILS_COMMAND"
# Example with a two-line Ruby script
sudo -u git -H bundle exec rails runner -e production "user = User.first; puts user.username"
-# Example with a ruby script file
+# Example with a Ruby script file (make sure to use the full path)
sudo -u git -H bundle exec rails runner -e production /path/to/script.rb
```
diff --git a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
index 8e9749fb239..22d699b424b 100644
--- a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
+++ b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
@@ -167,7 +167,7 @@ user = User.find_by_username('root')
# Find the project, update the xxx-changeme values from above
project = Project.find_by_full_path('group-changeme/project-changeme')
-# Delete the project
+# Immediately delete the project
::Projects::DestroyService.new(project, user, {}).execute
```
@@ -248,7 +248,8 @@ end
A Projects Wiki can be recreated by
-**Note:** This is a destructive operation, the Wiki will be empty
+NOTE: **Note:**
+This is a destructive operation; the wiki will be empty afterwards.
```ruby
p = Project.find_by_full_path('<username-or-group>/<project-name>') ### enter your projects path
@@ -689,10 +690,10 @@ end
As a GitLab administrator, you may need to reduce disk space consumption.
A common culprit is Docker Registry images that are no longer in use. To find
the storage broken down by each project, run the following in the
-GitLab Rails console:
+[GitLab Rails console](../troubleshooting/navigating_gitlab_via_rails_console.md):
```ruby
-projects_and_size = []
+projects_and_size = [["project_id", "creator_id", "registry_size_bytes", "project_path"]]
# You need to specify the projects that you want to look through. You can get these in any manner.
projects = Project.last(100)
@@ -707,17 +708,21 @@ projects.each do |p|
end
if project_total_size > 0
- projects_and_size << [p.full_path,project_total_size]
+ projects_and_size << [p.id, p.creator.id, project_total_size, p.full_path]
end
end
# projects_and_size is filled out now
# maybe print it as comma separated output?
projects_and_size.each do |ps|
- puts "%s,%s" % ps
+ puts "%s,%s,%s,%s" % ps
end
```
+### Run the Cleanup policy now
+
+Find this content in the [Container Registry troubleshooting docs](../packages/container_registry.md#run-the-cleanup-policy-now).
+
## Sidekiq
This content has been moved to the [Troubleshooting Sidekiq docs](./sidekiq.md).
@@ -831,19 +836,19 @@ Geo::JobArtifactRegistry.synced.missing_on_primary.pluck(:artifact_id)
#### Get the number of verification failed repositories
```ruby
-Geo::ProjectRegistryFinder.new.count_verification_failed_repositories
+Geo::ProjectRegistry.verification_failed('repository').count
```
#### Find the verification failed repositories
```ruby
-Geo::ProjectRegistry.verification_failed_repos
+Geo::ProjectRegistry.verification_failed('repository')
```
### Find repositories that failed to sync
```ruby
-Geo::ProjectRegistryFinder.new.find_failed_project_registries('repository')
+Geo::ProjectRegistry.sync_failed('repository')
```
### Resync repositories
diff --git a/doc/administration/troubleshooting/group_saml_scim.md b/doc/administration/troubleshooting/group_saml_scim.md
index e2ce72d5a16..7492688ded0 100644
--- a/doc/administration/troubleshooting/group_saml_scim.md
+++ b/doc/administration/troubleshooting/group_saml_scim.md
@@ -2,7 +2,7 @@
type: reference
---
-# Group SAML and SCIM troubleshooting **(SILVER ONLY)**
+# Troubleshooting Group SAML and SCIM **(SILVER ONLY)**
These are notes and screenshots regarding Group SAML and SCIM that the GitLab Support Team sometimes uses while troubleshooting, but which do not fit into the official documentation. GitLab is making this public, so that anyone can make use of the Support team’s collected knowledge.
diff --git a/doc/administration/troubleshooting/img/network_monitor_xid.png b/doc/administration/troubleshooting/img/network_monitor_xid.png
new file mode 100644
index 00000000000..5392f77327f
--- /dev/null
+++ b/doc/administration/troubleshooting/img/network_monitor_xid.png
Binary files differ
diff --git a/doc/administration/troubleshooting/postgresql.md b/doc/administration/troubleshooting/postgresql.md
index 36c1f20818a..b7e33e4501d 100644
--- a/doc/administration/troubleshooting/postgresql.md
+++ b/doc/administration/troubleshooting/postgresql.md
@@ -57,7 +57,7 @@ This section is for links to information elsewhere in the GitLab documentation.
- [GitLab database requirements](../../install/requirements.md#database) including
- Support for MySQL was removed in GitLab 12.1; [migrate to PostgreSQL](../../update/mysql_to_postgresql.md)
- required extension `pg_trgm`
- - required extension `postgres_fdw` for Geo
+ - required extension `btree_gist`
- Errors like this in the `production/sidekiq` log; see: [Set default_transaction_isolation into read committed](https://docs.gitlab.com/omnibus/settings/database.html#set-default_transaction_isolation-into-read-committed):
@@ -84,7 +84,7 @@ This section is for links to information elsewhere in the GitLab documentation.
PANIC: could not write to file ‘pg_xlog/xlogtemp.123’: No space left on device
```
-- [Checking Geo configuration](../geo/replication/troubleshooting.md#checking-configuration) including
+- [Checking Geo configuration](../geo/replication/troubleshooting.md) including
- reconfiguring hosts/ports
- checking and fixing user/password mappings
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md
index 566e2686735..9125ddf545f 100644
--- a/doc/administration/troubleshooting/sidekiq.md
+++ b/doc/administration/troubleshooting/sidekiq.md
@@ -7,16 +7,18 @@ may be filling up. Users will notice when this happens because new branches
may not show up and merge requests may not be updated. The following are some
troubleshooting steps that will help you diagnose the bottleneck.
-> **Note:** GitLab administrators/users should consider working through these
-> debug steps with GitLab Support so the backtraces can be analyzed by our team.
-> It may reveal a bug or necessary improvement in GitLab.
->
-> **Note:** In any of the backtraces, be wary of suspecting cases where every
-> thread appears to be waiting in the database, Redis, or waiting to acquire
-> a mutex. This **may** mean there's contention in the database, for example,
-> but look for one thread that is different than the rest. This other thread
-> may be using all available CPU, or have a Ruby Global Interpreter Lock,
-> preventing other threads from continuing.
+NOTE: **Note:**
+GitLab administrators/users should consider working through these
+debug steps with GitLab Support so the backtraces can be analyzed by our team.
+It may reveal a bug or necessary improvement in GitLab.
+
+NOTE: **Note:**
+In any of the backtraces, be wary of suspecting cases where every
+thread appears to be waiting in the database, Redis, or waiting to acquire
+a mutex. This **may** mean there's contention in the database, for example,
+but look for one thread that is different from the rest. This other thread
+may be using all available CPU, or be holding the Ruby Global Interpreter Lock,
+preventing other threads from continuing.
## Log arguments to Sidekiq jobs
diff --git a/doc/administration/troubleshooting/tracing_correlation_id.md b/doc/administration/troubleshooting/tracing_correlation_id.md
new file mode 100644
index 00000000000..31f537beae5
--- /dev/null
+++ b/doc/administration/troubleshooting/tracing_correlation_id.md
@@ -0,0 +1,126 @@
+---
+type: reference
+---
+
+# Finding relevant log entries with a correlation ID
+
+Since GitLab 11.6, a unique request tracking ID, known as the "correlation ID", has been
+logged by the GitLab instance for most requests. Each individual request to GitLab gets
+its own correlation ID, which then gets logged in each GitLab component's logs for that
+request. This makes it easier to trace behavior in a
+distributed system. Without this ID it can be difficult or
+impossible to match correlating log entries.
+
+## Identify the correlation ID for a request
+
+The correlation ID is logged in structured logs under the key `correlation_id`,
+and is returned in the `x-request-id` header of every GitLab response.
+You can find your correlation ID by searching in either place.
+
+### Getting the correlation ID in your browser
+
+You can use your browser's developer tools to monitor and inspect network
+activity with the site that you're visiting. See the links below for network monitoring
+documentation for some popular browsers.
+
+- [Network Monitor - Firefox Developer Tools](https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor)
+- [Inspect Network Activity In Chrome DevTools](https://developers.google.com/web/tools/chrome-devtools/network/)
+- [Safari Web Development Tools](https://developer.apple.com/safari/tools/)
+- [Microsoft Edge Network panel](https://docs.microsoft.com/en-us/microsoft-edge/devtools-guide/network#request-details)
+
+To locate a relevant request and view its correlation ID:
+
+1. Enable persistent logging in your network monitor. Some actions in GitLab will redirect you quickly after you submit a form, so this will help capture all relevant activity.
+1. To help isolate the requests you are looking for, you can filter for `document` requests.
+1. Click the request of interest to view further detail.
+1. Go to the **Headers** section and look for **Response Headers**. There you should find an `x-request-id` header with a
+value that was randomly generated by GitLab for the request.
+
+See the following example:
+
+![Firefox's network monitor showing a request ID header](img/network_monitor_xid.png)
+
+### Getting the correlation ID from your logs
+
+Another approach to finding the correct correlation ID is to search or watch
+your logs and find the `correlation_id` value for the log entry that you're
+watching for.
+
+For example, let's say that you want to learn what's happening or breaking when
+you reproduce an action in GitLab. You could tail the GitLab logs, filtering
+to requests by your user, and then watch the requests until you see what you're
+interested in.
+
+### Getting the correlation ID from curl
+
+If you're using `curl` then you can use the verbose option to show request and response headers, as well as other debug info.
+
+```shell
+curl --verbose https://gitlab.example.com/api/v4/projects
+# look for a line that looks like this
+< x-request-id: 4rAMkV3gof4
+```
+
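+Because the verbose output is written to stderr, you can redirect it and
+filter for just the header, for example:
+
+```shell
+curl --verbose https://gitlab.example.com/api/v4/projects 2>&1 | grep -i 'x-request-id'
+```
+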
+#### Using jq
+
+This example uses [jq](https://stedolan.github.io/jq/) to filter results and
+display values we most likely care about.
+
+```shell
+sudo gitlab-ctl tail gitlab-rails/production_json.log | jq 'select(.username == "bob") | "User: \(.username), \(.method) \(.path), \(.controller)#\(.action), ID: \(.correlation_id)"'
+```
+
+```plaintext
+"User: bob, GET /root/linux, ProjectsController#show, ID: U7k7fh6NpW3"
+"User: bob, GET /root/linux/commits/master/signatures, Projects::CommitsController#signatures, ID: XPIHpctzEg1"
+"User: bob, GET /root/linux/blob/master/README, Projects::BlobController#show, ID: LOt9hgi1TV4"
+```
+
+#### Using grep
+
+This example uses only `grep` and `tr`, which are more likely to be installed than `jq`.
+
+```shell
+sudo gitlab-ctl tail gitlab-rails/production_json.log | grep '"username":"bob"' | tr ',' '\n' | egrep 'method|path|correlation_id'
+```
+
+```plaintext
+{"method":"GET"
+"path":"/root/linux"
+"username":"bob"
+"correlation_id":"U7k7fh6NpW3"}
+{"method":"GET"
+"path":"/root/linux/commits/master/signatures"
+"username":"bob"
+"correlation_id":"XPIHpctzEg1"}
+{"method":"GET"
+"path":"/root/linux/blob/master/README"
+"username":"bob"
+"correlation_id":"LOt9hgi1TV4"}
+```
+
+## Searching your logs for the correlation ID
+
+Once you have the correlation ID you can start searching for relevant log
+entries. You can filter the lines by the correlation ID itself.
+Combining `find` and `grep` should be sufficient to find the entries you are looking for.
+
+```shell
+# find <GitLab log directory> -type f -mtime 0 -exec grep '<correlation ID>' '{}' '+'
+find /var/log/gitlab -type f -mtime 0 -exec grep 'LOt9hgi1TV4' '{}' '+'
+```
+
+```plaintext
+/var/log/gitlab/gitlab-workhorse/current:{"correlation_id":"LOt9hgi1TV4","duration_ms":2478,"host":"gitlab.domain.tld","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.domain.tld/root/linux","remote_addr":"68.0.116.160:0","remote_ip":"[filtered]","status":200,"system":"http","time":"2019-09-17T22:17:19Z","uri":"/root/linux/blob/master/README?format=json\u0026viewer=rich","user_agent":"Mozilla/5.0 (Mac) Gecko Firefox/69.0","written_bytes":1743}
+/var/log/gitlab/gitaly/current:{"correlation_id":"LOt9hgi1TV4","grpc.code":"OK","grpc.meta.auth_version":"v2","grpc.meta.client_name":"gitlab-web","grpc.method":"FindCommits","grpc.request.deadline":"2019-09-17T22:17:47Z","grpc.request.fullMethod":"/gitaly.CommitService/FindCommits","grpc.request.glProjectPath":"root/linux","grpc.request.glRepository":"project-1","grpc.request.repoPath":"@hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git","grpc.request.repoStorage":"default","grpc.request.topLevelGroup":"@hashed","grpc.service":"gitaly.CommitService","grpc.start_time":"2019-09-17T22:17:17Z","grpc.time_ms":2319.161,"level":"info","msg":"finished streaming call with code OK","peer.address":"@","span.kind":"server","system":"grpc","time":"2019-09-17T22:17:19Z"}
+/var/log/gitlab/gitlab-rails/production_json.log:{"method":"GET","path":"/root/linux/blob/master/README","format":"json","controller":"Projects::BlobController","action":"show","status":200,"duration":2448.77,"view":0.49,"db":21.63,"time":"2019-09-17T22:17:19.800Z","params":[{"key":"viewer","value":"rich"},{"key":"namespace_id","value":"root"},{"key":"project_id","value":"linux"},{"key":"id","value":"master/README"}],"remote_ip":"[filtered]","user_id":2,"username":"bob","ua":"Mozilla/5.0 (Mac) Gecko Firefox/69.0","queue_duration":3.38,"gitaly_calls":1,"gitaly_duration":0.77,"rugged_calls":4,"rugged_duration_ms":28.74,"correlation_id":"LOt9hgi1TV4"}
+```
+
+### Searching in distributed architectures
+
+If you have scaled your GitLab infrastructure horizontally, you will need to
+search across _all_ of your GitLab nodes. You can do this with log
+aggregation software such as Loki, ELK, or Splunk.
+
+You can also use a tool like Ansible or PSSH (parallel SSH) to execute identical
+search commands across your servers in parallel, or craft your own solution.
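+
+For example, a minimal sketch using PSSH, assuming a `hosts.txt` file listing
+one GitLab node per line:
+
+```shell
+# Run the same search on every node and print the output inline
+pssh -h hosts.txt -i "grep -r 'LOt9hgi1TV4' /var/log/gitlab"
+```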
diff --git a/doc/administration/wikis/index.md b/doc/administration/wikis/index.md
new file mode 100644
index 00000000000..077b4f064dc
--- /dev/null
+++ b/doc/administration/wikis/index.md
@@ -0,0 +1,75 @@
+---
+type: reference, howto
+stage: Create
+group: Knowledge
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Wiki settings **(CORE ONLY)**
+
+Adjust the wiki settings of your GitLab instance.
+
+## Wiki page content size limit
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/31176) in GitLab 13.2.
+
+You can set a maximum content size limit for wiki pages. This limit can prevent
+abuse of the feature. The default value is **52428800 bytes** (50 MB).
+
+### How does it work?
+
+The content size limit will be applied when a wiki page is created or updated
+through the GitLab UI or API. Local changes pushed via Git will not be validated.
+
+In order not to break any existing wiki pages, the limit doesn't have any
+effect on them until a wiki page is edited again and the content changes.
+
+### Wiki page content size limit configuration
+
+This setting is not available through the [Admin Area settings](../../user/admin_area/settings/index.md).
+In order to configure this setting, use either the Rails console
+or the [Application settings API](../../api/settings.md).
+
+NOTE: **Note:**
+The value of the limit **must** be in bytes. The minimum value is 1024 bytes.
+
+#### Through the Rails console
+
+The steps to configure this setting through the Rails console are:
+
+1. Start the Rails console:
+
+ ```shell
+ # For Omnibus installations
+ sudo gitlab-rails console
+
+ # For installations from source
+ sudo -u git -H bundle exec rails console -e production
+ ```
+
+1. Update the wiki page maximum content size:
+
+ ```ruby
+ ApplicationSetting.first.update!(wiki_page_max_content_bytes: 50.megabytes)
+ ```
+
+To retrieve the current value, start the Rails console and run:
+
+```ruby
+Gitlab::CurrentSettings.wiki_page_max_content_bytes
+```
+
+#### Through the API
+
+The process to set the wiki page size limit through the Application Settings API is
+exactly the same as you would do to [update any other setting](../../api/settings.md#change-application-settings).
+
+```shell
+curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings?wiki_page_max_content_bytes=52428800"
+```
+
+You can also use the API to [retrieve the current value](../../api/settings.md#get-current-application-settings).
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/application/settings
+```
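+
+If you have [jq](https://stedolan.github.io/jq/) installed, you can extract
+just this setting from the response, for example:
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings" | jq .wiki_page_max_content_bytes
+```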