gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2020-07-20 15:26:25 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-07-20 15:26:25 +0300
commit    a09983ae35713f5a2bbb100981116d31ce99826e (patch)
tree      2ee2af7bd104d57086db360a7e6d8c9d5d43667a /doc/administration
parent    18c5ab32b738c0b6ecb4d0df3994000482f34bd8 (diff)
Add latest changes from gitlab-org/gitlab@13-2-stable-ee
Diffstat (limited to 'doc/administration')
-rw-r--r--  doc/administration/audit_events.md | 7
-rw-r--r--  doc/administration/auth/jwt.md | 3
-rw-r--r--  doc/administration/auth/ldap/google_secure_ldap.md | 3
-rw-r--r--  doc/administration/auth/ldap/index.md | 22
-rw-r--r--  doc/administration/auth/ldap/ldap-troubleshooting.md | 38
-rw-r--r--  doc/administration/auth/okta.md | 3
-rw-r--r--  doc/administration/auth/smartcard.md | 2
-rw-r--r--  doc/administration/feature_flags.md | 22
-rw-r--r--  doc/administration/geo/disaster_recovery/bring_primary_back.md | 6
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md | 19
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md | 15
-rw-r--r--  doc/administration/geo/replication/database.md | 6
-rw-r--r--  doc/administration/geo/replication/datatypes.md | 48
-rw-r--r--  doc/administration/geo/replication/disable_geo.md | 93
-rw-r--r--  doc/administration/geo/replication/docker_registry.md | 2
-rw-r--r--  doc/administration/geo/replication/external_database.md | 3
-rw-r--r--  doc/administration/geo/replication/geo_validation_tests.md | 65
-rw-r--r--  doc/administration/geo/replication/index.md | 30
-rw-r--r--  doc/administration/geo/replication/location_aware_git_url.md | 2
-rw-r--r--  doc/administration/geo/replication/multiple_servers.md | 59
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md | 9
-rw-r--r--  doc/administration/geo/replication/updating_the_geo_nodes.md | 5
-rw-r--r--  doc/administration/git_annex.md | 8
-rw-r--r--  doc/administration/gitaly/img/praefect_storage_v12_10.png | bin 59531 -> 0 bytes
-rw-r--r--  doc/administration/gitaly/index.md | 564
-rw-r--r--  doc/administration/gitaly/praefect.md | 335
-rw-r--r--  doc/administration/gitaly/reference.md | 2
-rw-r--r--  doc/administration/high_availability/consul.md | 4
-rw-r--r--  doc/administration/high_availability/database.md | 19
-rw-r--r--  doc/administration/high_availability/gitlab.md | 20
-rw-r--r--  doc/administration/high_availability/monitoring_node.md | 13
-rw-r--r--  doc/administration/high_availability/nfs.md | 26
-rw-r--r--  doc/administration/high_availability/nfs_host_client_setup.md | 19
-rw-r--r--  doc/administration/high_availability/redis.md | 1007
-rw-r--r--  doc/administration/high_availability/redis_source.md | 370
-rw-r--r--  doc/administration/high_availability/sidekiq.md | 3
-rw-r--r--  doc/administration/img/repository_storages_admin_ui_v12_10.png | bin 23718 -> 0 bytes
-rw-r--r--  doc/administration/img/repository_storages_admin_ui_v13_1.png | bin 0 -> 85160 bytes
-rw-r--r--  doc/administration/incoming_email.md | 22
-rw-r--r--  doc/administration/instance_limits.md | 158
-rw-r--r--  doc/administration/integration/plantuml.md | 3
-rw-r--r--  doc/administration/integration/terminal.md | 3
-rw-r--r--  doc/administration/job_artifacts.md | 46
-rw-r--r--  doc/administration/job_logs.md | 2
-rw-r--r--  doc/administration/lfs/index.md | 51
-rw-r--r--  doc/administration/logs.md | 98
-rw-r--r--  doc/administration/merge_request_diffs.md | 68
-rw-r--r--  doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png | bin 0 -> 51508 bytes
-rw-r--r--  doc/administration/monitoring/gitlab_self_monitoring_project/index.md | 23
-rw-r--r--  doc/administration/monitoring/performance/grafana_configuration.md | 111
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_configuration_settings.png | bin 16455 -> 0 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_frontend.png | bin 112089 -> 34521 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png | bin 83212 -> 81321 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png | bin 19076 -> 10316 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_redis_calls.png | bin 70859 -> 17273 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png | bin 17259 -> 10175 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png | bin 105305 -> 28784 bytes
-rw-r--r--  doc/administration/monitoring/performance/performance_bar.md | 98
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_exporter.md | 20
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_metrics.md | 55
-rw-r--r--  doc/administration/monitoring/prometheus/index.md | 102
-rw-r--r--  doc/administration/object_storage.md | 556
-rw-r--r--  doc/administration/operations/fast_ssh_key_lookup.md | 31
-rw-r--r--  doc/administration/operations/filesystem_benchmarking.md | 3
-rw-r--r--  doc/administration/operations/puma.md | 18
-rw-r--r--  doc/administration/operations/unicorn.md | 2
-rw-r--r--  doc/administration/packages/container_registry.md | 103
-rw-r--r--  doc/administration/packages/dependency_proxy.md | 5
-rw-r--r--  doc/administration/packages/index.md | 3
-rw-r--r--  doc/administration/pages/index.md | 119
-rw-r--r--  doc/administration/postgresql/index.md | 36
-rw-r--r--  doc/administration/postgresql/replication_and_failover.md | 234
-rw-r--r--  doc/administration/postgresql/standalone.md | 3
-rw-r--r--  doc/administration/raketasks/ldap.md | 2
-rw-r--r--  doc/administration/raketasks/maintenance.md | 4
-rw-r--r--  doc/administration/raketasks/praefect.md | 18
-rw-r--r--  doc/administration/raketasks/project_import_export.md | 10
-rw-r--r--  doc/administration/redis/index.md | 42
-rw-r--r--  doc/administration/redis/replication_and_failover.md | 741
-rw-r--r--  doc/administration/redis/replication_and_failover_external.md | 376
-rw-r--r--  doc/administration/redis/standalone.md | 63
-rw-r--r--  doc/administration/redis/troubleshooting.md | 158
-rw-r--r--  doc/administration/reference_architectures/10k_users.md | 2
-rw-r--r--  doc/administration/reference_architectures/1k_users.md | 2
-rw-r--r--  doc/administration/reference_architectures/25k_users.md | 2
-rw-r--r--  doc/administration/reference_architectures/2k_users.md | 174
-rw-r--r--  doc/administration/reference_architectures/3k_users.md | 1816
-rw-r--r--  doc/administration/reference_architectures/50k_users.md | 2
-rw-r--r--  doc/administration/reference_architectures/5k_users.md | 1820
-rw-r--r--  doc/administration/reference_architectures/index.md | 10
-rw-r--r--  doc/administration/reference_architectures/troubleshooting.md | 251
-rw-r--r--  doc/administration/repository_storage_paths.md | 15
-rw-r--r--  doc/administration/repository_storage_types.md | 7
-rw-r--r--  doc/administration/server_hooks.md | 237
-rw-r--r--  doc/administration/smime_signing_email.md | 9
-rw-r--r--  doc/administration/terraform_state.md | 19
-rw-r--r--  doc/administration/troubleshooting/elasticsearch.md | 3
-rw-r--r--  doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md | 20
-rw-r--r--  doc/administration/troubleshooting/linux_cheat_sheet.md | 2
-rw-r--r--  doc/administration/troubleshooting/navigating_gitlab_via_rails_console.md | 4
-rw-r--r--  doc/administration/troubleshooting/postgresql.md | 8
-rw-r--r--  doc/administration/troubleshooting/sidekiq.md | 5
-rw-r--r--  doc/administration/uploads.md | 50
103 files changed, 7957 insertions, 2740 deletions
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index cc94d756f99..c85fb2d2e47 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -56,9 +56,7 @@ From there, you can see the following actions:
- User sign-in via [Group SAML](../user/group/saml_sso/index.md)
- Permissions changes of a user assigned to a group
- Removed user from group
-- Project imported in to group
-- Project added to group and with which visibility level
-- Project removed from group
+- Project repository imported into group
- [Project shared with group](../user/project/members/share_project_with_groups.md)
and with which [permissions](../user/permissions.md)
- Removal of a previously shared group with a project
@@ -80,7 +78,7 @@ To view a project's audit events, navigate to **Project > Settings > Audit Event
From there, you can see the following actions:
- Added or removed deploy keys
-- Project created, deleted, renamed, moved(transferred), changed path
+- Project created, deleted, renamed, moved (transferred), changed path
- Project changed visibility level
- User was added to project and with which [permissions](../user/permissions.md)
- Permission changes of a user assigned to a project
@@ -96,6 +94,7 @@ From there, you can see the following actions:
- Permission to approve merge requests by committers was updated ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/7531) in GitLab 12.9)
- Permission to approve merge requests by authors was updated ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/7531) in GitLab 12.9)
- Number of required approvals was updated ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/7531) in GitLab 12.9)
+- Added or removed users and groups from project approval groups ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/213603) in GitLab 13.2)
Project events can also be accessed via the [Project Audit Events API](../api/audit_events.md#project-audit-events-starter)
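As a quick illustration of the API mentioned above, project audit events can be retrieved with a single request; this is a sketch only, and the token, hostname, and project ID are placeholders:

```shell
# List audit events for one project via the Project Audit Events API
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     "https://gitlab.example.com/api/v4/projects/<project_id>/audit_events"
```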
diff --git a/doc/administration/auth/jwt.md b/doc/administration/auth/jwt.md
index 29b192a4845..d3a77fdaca5 100644
--- a/doc/administration/auth/jwt.md
+++ b/doc/administration/auth/jwt.md
@@ -62,7 +62,8 @@ JWT will provide you with a secret key for you to use.
}
```
- NOTE: **Note:** For more information on each configuration option refer to
+ NOTE: **Note:**
+ For more information on each configuration option refer to
the [OmniAuth JWT usage documentation](https://github.com/mbleigh/omniauth-jwt#usage).
1. Change `YOUR_APP_SECRET` to the client secret and set `auth_url` to your redirect URL.
diff --git a/doc/administration/auth/ldap/google_secure_ldap.md b/doc/administration/auth/ldap/google_secure_ldap.md
index 2271ce93b6f..1f8fca33811 100644
--- a/doc/administration/auth/ldap/google_secure_ldap.md
+++ b/doc/administration/auth/ldap/google_secure_ldap.md
@@ -34,7 +34,8 @@ The steps below cover:
'Entire domain (GitLab)' or 'Selected organizational units' for both 'Verify user
credentials' and 'Read user information'. Select 'Add LDAP Client'
- TIP: **Tip:** If you plan to use GitLab [LDAP Group Sync](index.md#group-sync-starter-only)
+ TIP: **Tip:**
+ If you plan to use GitLab [LDAP Group Sync](index.md#group-sync-starter-only)
, turn on 'Read group information'.
![Add LDAP Client Step 2](img/google_secure_ldap_add_step_2.png)
diff --git a/doc/administration/auth/ldap/index.md b/doc/administration/auth/ldap/index.md
index 4a7a972596f..aef6c70ff92 100644
--- a/doc/administration/auth/ldap/index.md
+++ b/doc/administration/auth/ldap/index.md
@@ -53,7 +53,7 @@ are already logged in or are using Git over SSH will still be able to access
GitLab for up to one hour. Manually block the user in the GitLab Admin Area to
immediately block all access.
-NOTE: **Note**:
+NOTE: **Note:**
GitLab Enterprise Edition Starter supports a
[configurable sync time](#adjusting-ldap-user-sync-schedule-starter-only).
@@ -99,7 +99,7 @@ Normally, if you specify `simple_tls` it will be on port 636, while `start_tls`
would be on port 389. `plain` also operates on port 389. Removed values: `tls` was replaced with `start_tls` and `ssl` was replaced with `simple_tls`.
NOTE: **Note:**
-LDAP users must have an email address set, regardless of whether it is used to log in.
+LDAP users must have an email address set, regardless of whether it is used to sign in.
### Example Configurations **(CORE ONLY)**
@@ -169,7 +169,7 @@ production:
| Setting | Description | Required | Examples |
| ------- | ----------- | -------- | -------- |
-| `label` | A human-friendly name for your LDAP server. It will be displayed on your login page. | yes | `'Paris'` or `'Acme, Ltd.'` |
+| `label` | A human-friendly name for your LDAP server. It will be displayed on your sign-in page. | yes | `'Paris'` or `'Acme, Ltd.'` |
| `host` | IP address or domain name of your LDAP server. | yes | `'ldap.mydomain.com'` |
| `port` | The port to connect with on your LDAP server. Always an integer, not a string. | yes | `389` or `636` (for SSL) |
| `uid` | LDAP attribute for username. Should be the attribute, not the value that maps to the `uid`. | yes | `'sAMAccountName'`, `'uid'`, `'userPrincipalName'` |
@@ -179,7 +179,7 @@ production:
| `verify_certificates` | Enables SSL certificate verification if encryption method is `start_tls` or `simple_tls`. Defaults to true. | no | boolean |
| `timeout` | Set a timeout, in seconds, for LDAP queries. This helps avoid blocking a request if the LDAP server becomes unresponsive. A value of 0 means there is no timeout. | no | `10` or `30` |
| `active_directory` | This setting specifies if LDAP server is Active Directory LDAP server. For non-AD servers it skips the AD specific queries. If your LDAP server is not AD, set this to false. | no | boolean |
-| `allow_username_or_email_login` | If enabled, GitLab will ignore everything after the first `@` in the LDAP username submitted by the user on login. If you are using `uid: 'userPrincipalName'` on ActiveDirectory you need to disable this setting, because the userPrincipalName contains an `@`. | no | boolean |
+| `allow_username_or_email_login` | If enabled, GitLab will ignore everything after the first `@` in the LDAP username submitted by the user on sign-in. If you are using `uid: 'userPrincipalName'` on ActiveDirectory you need to disable this setting, because the userPrincipalName contains an `@`. | no | boolean |
| `block_auto_created_users` | To maintain tight control over the number of active users on your GitLab installation, enable this setting to keep new users blocked until they have been cleared by the admin (default: false). | no | boolean |
| `base` | Base where we can search for users. | yes | `'ou=people,dc=gitlab,dc=example'` or `'DC=mydomain,DC=com'` |
| `user_filter` | Filter LDAP users. Format: [RFC 4515](https://tools.ietf.org/search/rfc4515) Note: GitLab does not support `omniauth-ldap`'s custom filter syntax. | no | `'(employeeType=developer)'` or `'(&(objectclass=user)(|(samaccountname=momo)(samaccountname=toto)))'` |
@@ -197,7 +197,7 @@ production:
### Attribute Configuration Settings **(CORE ONLY)**
-LDAP attributes that GitLab will use to create an account for the LDAP user. The specified attribute can either be the attribute name as a string (e.g. `'mail'`), or an array of attribute names to try in order (e.g. `['mail', 'email']`). Note that the user's LDAP login will always be the attribute specified as `uid` above.
+LDAP attributes that GitLab will use to create an account for the LDAP user. The specified attribute can either be the attribute name as a string (e.g. `'mail'`), or an array of attribute names to try in order (e.g. `['mail', 'email']`). Note that the user's LDAP sign-in will always be the attribute specified as `uid` above.
| Setting | Description | Required | Examples |
| ------- | ----------- | -------- | -------- |
@@ -396,7 +396,7 @@ Be sure to choose a different provider ID made of letters a-z and numbers 0-9.
This ID will be stored in the database so that GitLab can remember which LDAP
server a user belongs to.
-![Multiple LDAP Servers Login](img/multi_login.gif)
+![Multiple LDAP Servers Sign in](img/multi_login.gif)
Based on the example illustrated on the image above,
our `gitlab.rb` configuration would look like:
@@ -424,7 +424,7 @@ gitlab_rails['ldap_servers'] = {
'port' => 636,
...
}
-
+
}
```
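Before committing values such as `host`, `base`, and `user_filter` to `gitlab.rb`, they can be sanity-checked with an `ldapsearch` query; a sketch using the example values from the settings table above (the bind DN is a placeholder, and `-W` prompts for the bind password):

```shell
# Verify that the base and filter return the expected users and attributes
ldapsearch -H ldaps://ldap.mydomain.com:636 \
  -D "CN=GitLab,OU=Users,DC=mydomain,DC=com" -W \
  -b "ou=people,dc=gitlab,dc=example" \
  "(employeeType=developer)" sAMAccountName mail
```

Each returned entry should include the attribute configured as `uid` and an email attribute.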
@@ -450,7 +450,7 @@ has bit 2 set. See <https://ctovswild.com/2009/09/03/bitmask-searches-in-ldap/>
for more information.
The user will be set to `ldap_blocked` state in GitLab if the above conditions
-fail. This means the user will not be able to login or push/pull code.
+fail. This means the user will not be able to sign in or push/pull code.
The process will also update the following user information:
@@ -605,6 +605,12 @@ When enabled, the following applies:
- Users are not allowed to share project with other groups or invite members to
a project created in a group.
+To enable it, you need to:
+
+1. [Enable LDAP](#configuration-core-only).
+1. Navigate to **{admin}** **Admin Area > Settings > Visibility and access controls**.
+1. Make sure the "Lock memberships to LDAP synchronization" checkbox is enabled.
+
### Adjusting LDAP group sync schedule **(STARTER ONLY)**
NOTE: **Note:**
diff --git a/doc/administration/auth/ldap/ldap-troubleshooting.md b/doc/administration/auth/ldap/ldap-troubleshooting.md
index 909802b5dec..75183f54990 100644
--- a/doc/administration/auth/ldap/ldap-troubleshooting.md
+++ b/doc/administration/auth/ldap/ldap-troubleshooting.md
@@ -27,7 +27,7 @@ Could not authenticate you from Ldapmain because "Connection timed out - user sp
```
If your configured LDAP provider and/or endpoint is offline or otherwise
-unreachable by GitLab, no LDAP user will be able to authenticate and log in.
+unreachable by GitLab, no LDAP user will be able to authenticate and sign in.
GitLab does not cache or store credentials for LDAP users to provide authentication
during an LDAP outage.
@@ -81,7 +81,7 @@ adapter.ldap_search(options)
For examples of how this is run,
[review the `Adapter` module](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/ee/gitlab/auth/ldap/adapter.rb).
-### User logins
+### User sign-ins
#### No users are found
In this case, you can confirm which of the above is true using
[ldapsearch](#ldapsearch) with the existing LDAP configuration in your
`/etc/gitlab/gitlab.rb`.
-#### User(s) cannot login
+#### User(s) cannot sign in
-A user can have trouble logging in for any number of reasons. To get started,
+A user can have trouble signing in for any number of reasons. To get started,
here are some questions to ask yourself:
- Does the user fall under the [configured `base`](index.md#configuration-core-only) in
- LDAP? The user must fall under this `base` to login.
+  LDAP? The user must fall under this `base` to sign in.
- Does the user pass through the [configured `user_filter`](index.md#set-up-ldap-user-filter-core-only)?
If one is not configured, this question can be ignored. If it is, then the
- user must also pass through this filter to be allowed to login.
+  user must also pass through this filter to be allowed to sign in.
- Refer to our docs on [debugging the `user_filter`](#debug-ldap-user-filter).
If the above are both okay, the next place to look for the problem is
the logs themselves while reproducing the issue.
-- Ask the user to login and let it fail.
+- Ask the user to sign in and let it fail.
- [Look through the output](#gitlab-logs) for any errors or other
- messages about the login. You may see one of the other error messages on
+ messages about the sign-in. You may see one of the other error messages on
this page, in which case that section can help resolve the issue.
If the logs don't lead to the root of the problem, use the
@@ -125,9 +125,9 @@ It can also be helpful to
[debug a user sync](#sync-all-users-starter-only) to
investigate further.
-#### Invalid credentials on login
+#### Invalid credentials on sign-in
-If that the login credentials used are accurate on LDAP, ensure the following
+If the sign-in credentials used are accurate on LDAP, ensure the following
are true for the user in question:
- Make sure the user you are binding with has enough permissions to read the user's
@@ -138,7 +138,7 @@ are true for the user in question:
#### Email has already been taken
-A user tries to login with the correct LDAP credentials, is denied access,
+A user tries to sign in with the correct LDAP credentials, is denied access,
and the [production.log](../../logs.md#productionlog) shows an error that looks like this:
```plaintext
@@ -163,7 +163,7 @@ user.username
This will show you which user has this email address. One of two steps will
have to be taken here:
-- To create a new GitLab user/username for this user when logging in with LDAP,
+- To create a new GitLab user/username for this user when signing in with LDAP,
remove the secondary email to remove the conflict.
- To use an existing GitLab user/username for this user to use with LDAP,
remove this email as a secondary email and make it a primary one so GitLab
@@ -235,7 +235,7 @@ uid: John
There's a lot here, so let's go over what could be helpful when debugging.
First, GitLab will look for all users that have previously
-logged in with LDAP and iterate on them. Each user's sync will start with
+signed in with LDAP and iterate on them. Each user's sync will start with
the following line that contains the user's username and email, as they
exist in GitLab now:
@@ -244,7 +244,7 @@ Syncing user John, email@example.com
```
If you don't find a particular user's GitLab email in the output, then that
-user hasn't logged in with LDAP yet.
+user hasn't signed in with LDAP yet.
Next, GitLab searches its `identities` table for the existing
link between this user and the configured LDAP provider(s):
@@ -313,7 +313,7 @@ things to check to debug the situation.
1. Search for the user
1. Open the user, by clicking on their name. Do not click 'Edit'.
1. Navigate to the **Identities** tab. There should be an LDAP identity with
- an LDAP DN as the 'Identifier'. If not, this user hasn't logged in with
+ an LDAP DN as the 'Identifier'. If not, this user hasn't signed in with
LDAP yet and must do so first.
- You've waited an hour or [the configured
interval](index.md#adjusting-ldap-group-sync-schedule-starter-only) for the group to
@@ -346,7 +346,7 @@ the following are true:
- A [`group_base` is also configured](index.md#group-sync-starter-only).
- The configured `admin_group` in the `gitlab.rb` is a CN, rather than a DN or an array.
- This CN falls under the scope of the configured `group_base`.
-- The members of the `admin_group` have already logged into GitLab with their LDAP
+- The members of the `admin_group` have already signed into GitLab with their LDAP
credentials. GitLab will only grant this admin access to the users whose
accounts are already connected to LDAP.
@@ -357,7 +357,7 @@ GitLab syncs the `admin_group`.
#### Sync all groups **(STARTER ONLY)**
-NOTE: **NOTE:**
+NOTE: **Note:**
To sync all groups manually when debugging is unnecessary, [use the Rake
task](../../raketasks/ldap.md#run-a-group-sync-starter-only) instead.
@@ -571,7 +571,7 @@ If a user account is blocked or unblocked due to the LDAP configuration, a
message will be [logged to `application.log`](../../logs.md#applicationlog).
If there is an unexpected error during an LDAP lookup (configuration error,
-timeout), the login is rejected and a message will be [logged to
+timeout), the sign-in is rejected and a message will be [logged to
`production.log`](../../logs.md#productionlog).
### ldapsearch
@@ -653,7 +653,7 @@ adfind -h ad.example.org:636 -ssl -u "CN=GitLabSRV,CN=Users,DC=GitLab,DC=org" -u
### Rails console
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
Please note that it is very easy to create, read, modify, and destroy data on the
rails console, so please be sure to run commands exactly as listed.
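As an illustration of the "Email has already been taken" lookup described earlier, the conflicting account can also be found non-interactively; a sketch assuming an Omnibus installation, with a placeholder email address:

```shell
# Find the GitLab user that already holds this address (including secondary emails)
sudo gitlab-rails runner 'user = User.find_by_any_email("email@example.com"); puts user&.username'
```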
diff --git a/doc/administration/auth/okta.md b/doc/administration/auth/okta.md
index f7ab60ab56b..78b10aedb77 100644
--- a/doc/administration/auth/okta.md
+++ b/doc/administration/auth/okta.md
@@ -159,8 +159,7 @@ You might want to try this out on an incognito browser window.
>**Note:**
Make sure the groups exist and are assigned to the Okta app.
-You can take a look of the [SAML documentation](../../integration/saml.md#marking-users-as-external-based-on-saml-groups) on external groups since
-it works the same.
+You can take a look at the [SAML documentation](../../integration/saml.md#saml-groups) on configuring groups.
<!-- ## Troubleshooting
diff --git a/doc/administration/auth/smartcard.md b/doc/administration/auth/smartcard.md
index 9ad1e0641f6..80d2efbad84 100644
--- a/doc/administration/auth/smartcard.md
+++ b/doc/administration/auth/smartcard.md
@@ -208,7 +208,7 @@ attribute. As a prerequisite, you must use an LDAP server that:
client_certificate_required_port: 3443
```
- NOTE: **Note**
+ NOTE: **Note:**
Assign a value to at least one of the following variables:
`client_certificate_required_host` or `client_certificate_required_port`.
diff --git a/doc/administration/feature_flags.md b/doc/administration/feature_flags.md
index 9e87eed4508..678ab6c5d7b 100644
--- a/doc/administration/feature_flags.md
+++ b/doc/administration/feature_flags.md
@@ -103,10 +103,28 @@ Some feature flags can be enabled or disabled on a per project basis:
Feature.enable(:<feature flag>, Project.find(<project id>))
```
-For example, to enable the [`:release_evidence_collection`](../ci/junit_test_reports.md#enabling-the-feature) feature flag for project `1234`:
+For example, to enable the [`:junit_pipeline_view`](../ci/junit_test_reports.md#enabling-the-junit-test-reports-feature-core-only) feature flag for project `1234`:
```ruby
-Feature.enable(:release_evidence_collection, Project.find(1234))
+Feature.enable(:junit_pipeline_view, Project.find(1234))
+```
+
+`Feature.enable` and `Feature.disable` always return `nil`; this is not an indication that the command failed:
+
+```ruby
+irb(main):001:0> Feature.enable(:release_evidence_collection)
+=> nil
+```
+
+To check if a flag is enabled or disabled, you can use `Feature.enabled?` or `Feature.disabled?`:
+
+```ruby
+Feature.enable(:release_evidence_collection)
+=> nil
+Feature.enabled?(:release_evidence_collection)
+=> true
+Feature.disabled?(:release_evidence_collection)
+=> false
```
When the feature is ready, GitLab will remove the feature flag, the option for
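The same per-project check can be run non-interactively from the command line; a sketch assuming an Omnibus installation and the example project ID `1234` used above:

```shell
# Print whether the flag is enabled for the given project actor
sudo gitlab-rails runner 'puts Feature.enabled?(:junit_pipeline_view, Project.find(1234))'
```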
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index b19e55595e7..3b7c7fd549c 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -32,13 +32,15 @@ To bring the former **primary** node up to date:
sudo gitlab-ctl start
```
- NOTE: **Note:** If you [disabled the **primary** node permanently](index.md#step-2-permanently-disable-the-primary-node),
+ NOTE: **Note:**
+ If you [disabled the **primary** node permanently](index.md#step-2-permanently-disable-the-primary-node),
you need to undo those steps now. For Debian/Ubuntu you just need to run
`sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
the GitLab instance from scratch and set it up as a **secondary** node by
following [Setup instructions](../replication/index.md#setup-instructions). In this case, you don't need to follow the next step.
- NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
+ NOTE: **Note:**
+ If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
for this node during disaster recovery procedure you may need to [block
all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node)
during this procedure.
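Taken together, the steps above on a former **primary** node amount to roughly the following sequence; this is a sketch assuming an Omnibus installation on Debian/Ubuntu with the `gitlab-runsvdir` unit, so adjust it to your init system:

```shell
# Re-enable and start the GitLab services that were permanently disabled
sudo systemctl enable gitlab-runsvdir
sudo systemctl start gitlab-runsvdir
sudo gitlab-ctl start
```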
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index f690ec63cf9..2d837ebb369 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -122,18 +122,31 @@ Note the following when promoting a secondary:
roles ['geo_secondary_role']
```
-1. Promote the **secondary** node to the **primary** node. Execute:
+1. Promote the **secondary** node to the **primary** node.
+
+ Before promoting a secondary node to primary, preflight checks should be run. They can be run separately or along with the promotion script.
+
+ To promote the secondary node to primary along with preflight checks:
```shell
gitlab-ctl promote-to-primary-node
```
- If you have already run the [preflight checks](planned_failover.md#preflight-checks), you can skip them with:
+ CAUTION: **Warning:**
+ Skipping preflight checks will promote the secondary to a primary without any further confirmation!
+
+ If you have already run the [preflight checks](planned_failover.md#preflight-checks) or don't want to run them, you can skip preflight checks with:
```shell
gitlab-ctl promote-to-primary-node --skip-preflight-check
```
+ You can also run preflight checks separately:
+
+ ```shell
+ gitlab-ctl promotion-preflight-checks
+ ```
+
1. Verify you can connect to the newly promoted **primary** node using the URL used
previously for the **secondary** node.
1. If successful, the **secondary** node has now been promoted to the **primary** node.
@@ -261,7 +274,7 @@ secondary domain, like changing Git remotes and API URLs.
external_url 'https://<new_external_url>'
```
- NOTE: **Note**
+ NOTE: **Note:**
Changing `external_url` won't prevent access via the old secondary URL, as
long as the secondary DNS records are still intact.
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 0ce1325a537..ef0e67434d0 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -45,8 +45,19 @@ be found in `/var/opt/gitlab/gitlab-rails/shared/pages` if using Omnibus).
## Preflight checks
-Follow these steps before scheduling a planned failover to ensure the process
-will go smoothly.
+Before scheduling a planned failover, run this command to list all preflight checks and automatically confirm that replication and verification are complete, so the process goes smoothly:
+
+```shell
+gitlab-ctl promotion-preflight-checks
+```
+
+You can run this command in `force` mode to promote to primary even if preflight checks fail:
+
+```shell
+sudo gitlab-ctl promotion-preflight-checks --force
+```
+
+Each step is described in more detail below.
### Object storage
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 3f2d46ba457..02f51e79907 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -130,7 +130,8 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
connect to the **primary** node's database. For this reason, we need the address of
each node.
- NOTE: **Note:** For external PostgreSQL instances, see [additional instructions](external_database.md).
+ NOTE: **Note:**
+ For external PostgreSQL instances, see [additional instructions](external_database.md).
If you are using a cloud provider, you can look up the addresses for each
Geo node through your cloud provider's management console.
@@ -419,7 +420,8 @@ data before running `pg_basebackup`.
1. Execute the command below to start a backup/restore and begin the replication
- CAUTION: **Warning:** Each Geo **secondary** node must have its own unique replication slot name.
+ CAUTION: **Warning:**
+ Each Geo **secondary** node must have its own unique replication slot name.
Using the same slot name between two secondaries will break PostgreSQL replication.
```shell
diff --git a/doc/administration/geo/replication/datatypes.md b/doc/administration/geo/replication/datatypes.md
index f50da27d82f..5636ff79189 100644
--- a/doc/administration/geo/replication/datatypes.md
+++ b/doc/administration/geo/replication/datatypes.md
@@ -45,6 +45,8 @@ verification methods:
| Blobs | Archived CI build traces _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
| Blobs | Container registry _(filesystem)_ | Geo with API/Docker API | _Not implemented_ |
| Blobs | Container registry _(object storage)_ | Geo with API/Managed/Docker API (*2*) | _Not implemented_ |
+| Blobs | Package registry _(filesystem)_ | Geo with API | _Not implemented_ |
+| Blobs | Package registry _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
- (*1*): Redis replication can be used as part of HA with Redis sentinel. It's not used between Geo nodes.
- (*2*): Object storage replication can be performed by Geo or by your object storage provider/appliance
@@ -124,41 +126,41 @@ these epics/issues:
- [Unreplicated Data Types](https://gitlab.com/groups/gitlab-org/-/epics/893)
- [Verify all replicated data](https://gitlab.com/groups/gitlab-org/-/epics/1430)
-DANGER: **DANGER**
+DANGER: **Danger:**
Features not on this list, or with **No** in the **Replicated** column,
are not replicated on the **secondary** node. Failing over without manually
replicating data from those features will cause the data to be **lost**.
If you wish to use those features on a **secondary** node, or to execute a failover
successfully, you must replicate their data using some other means.
-| Feature | Replicated | Verified | Notes |
+| Feature | Replicated (added in GitLab version) | Verified (added in GitLab version) | Notes |
|:---------------------------------------------------------------------|:---------------------------------------------------------|:--------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------|
-| Application data in PostgreSQL | **Yes** | **Yes** | |
-| Project repository | **Yes** | **Yes** | |
-| Project wiki repository | **Yes** | **Yes** | |
-| Project designs repository | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/32467) | |
-| Uploads | **Yes** | [No](https://gitlab.com/groups/gitlab-org/-/epics/1817) | Verified only on transfer, or manually (*1*) |
-| LFS objects | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8922) | Verified only on transfer, or manually (*1*). Unavailable for new LFS objects in 11.11.x and 12.0.x (*2*). |
-| CI job artifacts (other than traces) | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8923) | Verified only manually (*1*) |
-| Archived traces | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8923) | Verified only on transfer, or manually (*1*) |
-| Personal snippets | **Yes** | **Yes** | |
+| Application data in PostgreSQL | **Yes** (10.2) | **Yes** (10.2) | |
+| Project repository | **Yes** (10.2) | **Yes** (10.7) | |
+| Project wiki repository | **Yes** (10.2) | **Yes** (10.7) | |
+| Project designs repository | **Yes** (12.7) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/32467) | |
+| Uploads | **Yes** (10.2) | [No](https://gitlab.com/groups/gitlab-org/-/epics/1817) | Verified only on transfer, or manually (*1*) |
+| LFS objects | **Yes** (10.2) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8922) | Verified only on transfer, or manually (*1*). Unavailable for new LFS objects in 11.11.x and 12.0.x (*2*). |
+| CI job artifacts (other than traces) | **Yes** (10.4) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8923) | Verified only manually (*1*) |
+| Archived traces | **Yes** (10.4) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/8923) | Verified only on transfer, or manually (*1*) |
+| Personal snippets | **Yes** (10.2) | **Yes** (10.2) | |
| [Versioned snippets](../../../user/snippets.md#versioned-snippets) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2809) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2810) | |
-| Project snippets | **Yes** | **Yes** | |
-| Object pools for forked project deduplication | **Yes** | No | |
-| [Server-side Git Hooks](../../custom_hooks.md) | No | No | |
+| Project snippets | **Yes** (10.2) | **Yes** (10.2) | |
+| Object pools for forked project deduplication | **Yes** | No | |
+| [Server-side Git hooks](../../server_hooks.md) | No | No | |
| [Elasticsearch integration](../../../integration/elasticsearch.md) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/1186) | No | |
| [GitLab Pages](../../pages/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/589) | No | |
-| [Container Registry](../../packages/container_registry.md) | **Yes** | No | |
-| [NPM Registry](../../../user/packages/npm_registry/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
-| [Maven Repository](../../../user/packages/maven_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
-| [Conan Repository](../../../user/packages/conan_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
-| [NuGet Repository](../../../user/packages/nuget_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
-| [PyPi Repository](../../../user/packages/pypi_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2554) | No | |
-| [Composer Repository](../../../user/packages/composer_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3096) | No | |
+| [Container Registry](../../packages/container_registry.md) | **Yes** (12.3) | No | |
+| [NPM Registry](../../../user/packages/npm_registry/index.md) | **Yes** (13.2) | No | |
+| [Maven Repository](../../../user/packages/maven_repository/index.md) | **Yes** (13.2) | No | |
+| [Conan Repository](../../../user/packages/conan_repository/index.md) | **Yes** (13.2) | No | |
+| [NuGet Repository](../../../user/packages/nuget_repository/index.md) | **Yes** (13.2) | No | |
+| [PyPi Repository](../../../user/packages/pypi_repository/index.md) | **Yes** (13.2) | No | |
+| [Composer Repository](../../../user/packages/composer_repository/index.md) | **Yes** (13.2) | No | |
| [External merge request diffs](../../merge_request_diffs.md) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/33817) | No | |
| [Terraform State](../../terraform_state.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3112)(*3*) | No | |
-| [Vulnerability Export](../../../user/application_security/security_dashboard/#export-vulnerabilities-1) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3111)(*3*) | No | | |
-| Content in object storage | **Yes** | No | |
+| [Vulnerability Export](../../../user/application_security/security_dashboard/#export-vulnerabilities) | [No](https://gitlab.com/groups/gitlab-org/-/epics/3111)(*3*) | No | |
+| Content in object storage | **Yes** (12.4) | No | |
- (*1*): The integrity can be verified manually using
[Integrity Check Rake Task](../../raketasks/check.md) on both nodes and comparing
diff --git a/doc/administration/geo/replication/disable_geo.md b/doc/administration/geo/replication/disable_geo.md
new file mode 100644
index 00000000000..f53b4c1b796
--- /dev/null
+++ b/doc/administration/geo/replication/disable_geo.md
@@ -0,0 +1,93 @@
+---
+stage: Enablement
+group: Geo
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: howto
+---
+
+# Disabling Geo **(PREMIUM ONLY)**
+
+If you want to revert to a regular Omnibus setup after a test, or you have encountered a Disaster Recovery
+situation and you want to disable Geo momentarily, you can use these instructions to disable your
+Geo setup.
+
+There should be no functional difference between disabling Geo and having an active Geo setup with
+no secondary Geo nodes if you remove them correctly.
+
+To disable Geo, follow these steps:
+
+1. [Remove all secondary Geo nodes](#remove-all-secondary-geo-nodes).
+1. [Remove the primary node from the UI](#remove-the-primary-node-from-the-ui).
+1. [Remove secondary replication slots](#remove-secondary-replication-slots).
+1. [Remove Geo-related configuration](#remove-geo-related-configuration).
+1. [(Optional) Revert PostgreSQL settings to use a password and listen on an IP](#optional-revert-postgresql-settings-to-use-a-password-and-listen-on-an-ip).
+
+## Remove all secondary Geo nodes
+
+To disable Geo, you first need to remove all your secondary Geo nodes, which means replication will no longer
+happen on these nodes. You can follow our docs to [remove your secondary Geo nodes](./remove_geo_node.md).
+
+If the current node that you want to keep using is a secondary node, you need to first promote it to primary.
+You can use our steps on [how to promote a secondary node](../disaster_recovery/#step-3-promoting-a-secondary-node)
+in order to do that.
+
+## Remove the primary node from the UI
+
+1. Go to **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`).
+1. Click the **Remove** button for the **primary** node.
+1. Confirm by clicking **Remove** when the prompt appears.
+
+## Remove secondary replication slots
+
+To remove secondary replication slots, run one of the following queries on your primary
+Geo node in a PostgreSQL console (`sudo gitlab-psql`):
+
+- If you already have a PostgreSQL cluster, drop individual replication slots by name to prevent
+ removing your secondary databases from the same cluster. You can use the following to get
+ all names and then drop each individual slot:
+
+ ```sql
+ SELECT slot_name, slot_type, active FROM pg_replication_slots; -- view present replication slots
+ SELECT pg_drop_replication_slot('slot_name'); -- where slot_name is the one expected from above
+ ```
+
+- To remove all secondary replication slots:
+
+ ```sql
+ SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots;
+ ```
+
+## Remove Geo-related configuration
+
+1. SSH into your primary Geo node and log in as root:
+
+ ```shell
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and remove the Geo related configuration by
+ removing any lines that enabled `geo_primary_role`:
+
+ ```ruby
+ ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
+ geo_primary_role['enable'] = true
+
+ ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
+ roles ['geo_primary_role']
+ ```
+
+1. After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
+ for the changes to take effect.
+
+## (Optional) Revert PostgreSQL settings to use a password and listen on an IP
+
+If you want to remove the PostgreSQL-specific settings and revert
+to the defaults (using a socket instead), you can safely remove the following
+lines from the `/etc/gitlab/gitlab.rb` file:
+
+```ruby
+postgresql['sql_user_password'] = '...'
+gitlab_rails['db_password'] = '...'
+postgresql['listen_address'] = '...'
+postgresql['md5_auth_cidr_addresses'] = ['...', '...']
+```
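The replication slot queries shown earlier in this file can also be run without opening an interactive `psql` prompt; a minimal sketch assuming an Omnibus installation:

```shell
# List current replication slots directly from the shell on the primary Geo node
sudo gitlab-psql -c 'SELECT slot_name, slot_type, active FROM pg_replication_slots;'
```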
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
index bea6528dc9b..c34732cba67 100644
--- a/doc/administration/geo/replication/docker_registry.md
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -18,7 +18,7 @@ Registry on the **primary** node, you can use the same storage for a **secondary
Docker Registry as well. For more information, read the
[Load balancing considerations](https://docs.docker.com/registry/deploying/#load-balancing-considerations)
when deploying the Registry, and how to set up the storage driver for GitLab's
-integrated [Container Registry](../../packages/container_registry.md#container-registry-storage-driver).
+integrated [Container Registry](../../packages/container_registry.md#use-object-storage).
## Replicating Docker Registry
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
index 491b3278ead..e85a82fa414 100644
--- a/doc/administration/geo/replication/external_database.md
+++ b/doc/administration/geo/replication/external_database.md
@@ -270,7 +270,8 @@ the tracking database on port 5432.
query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
```
- NOTE: **Note:** The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
+ NOTE: **Note:**
+ The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
`psql` for AWS RDS.
diff --git a/doc/administration/geo/replication/geo_validation_tests.md b/doc/administration/geo/replication/geo_validation_tests.md
index 7b186d15fae..0255e5c9883 100644
--- a/doc/administration/geo/replication/geo_validation_tests.md
+++ b/doc/administration/geo/replication/geo_validation_tests.md
@@ -16,20 +16,53 @@ This section contains a journal of recent validation tests and links to the rele
The following are GitLab upgrade validation tests we performed.
+### July 2020
+
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/225359):
+
+- Description: Tested upgrading from GitLab 12.10.12 to 13.0.10 package in a multi-node
+ configuration. As part of the issue to [Fix zero-downtime upgrade process/instructions for multi-node Geo deployments](https://gitlab.com/gitlab-org/gitlab/-/issues/22568), we monitored for downtime using the looping pipeline, HAProxy stats dashboards, and a script to log readiness status on both nodes.
+- Outcome: Partial success because we observed downtime during the upgrade of the primary and secondary sites.
+- Follow up issues/actions:
+ - [Investigate why `reconfigure` and `hup` cause downtime on multi-node Geo deployments](https://gitlab.com/gitlab-org/gitlab/-/issues/228898)
+ - [Geo multi-node deployment upgrade: investigate order when upgrading non-deploy nodes](https://gitlab.com/gitlab-org/gitlab/-/issues/228954)
+
+### June 2020
+
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/223284):
+
+- Description: Tested upgrading from GitLab 12.9.10 to 12.10.12 package in a multi-node
+ configuration. Monitored for downtime using the looping pipeline and HAProxy stats dashboards.
+- Outcome: Partial success because we observed downtime during the upgrade of the primary and secondary sites.
+- Follow up issues/actions:
+ - [Fix zero-downtime upgrade process/instructions for multi-node Geo deployments](https://gitlab.com/gitlab-org/gitlab/-/issues/225684)
+ - [Geo:check Rake task: Exclude AuthorizedKeysCommand check if node not running Puma/Unicorn](https://gitlab.com/gitlab-org/gitlab/-/issues/225454)
+ - [Update instructions in the next upgrade issue to include monitoring HAProxy dashboards](https://gitlab.com/gitlab-org/gitlab/-/issues/225359)
+
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/208104):
+
+- Description: Tested upgrading from GitLab 12.8.1 to 12.9.10 package in a multi-node
+ configuration.
+- Outcome: Partial success because we did not run the looping pipeline during the demo to validate
+ zero-downtime.
+- Follow up issues:
+ - [Clarify hup Puma/Unicorn should include deploy node](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5460)
+ - [Investigate MR creation failure after upgrade to 12.9.10](https://gitlab.com/gitlab-org/gitlab/-/issues/223282) Closed as false positive.
+
### February 2020
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/201837):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/201837):
-- Description: Tested upgrading from GitLab 12.7.5 to the latest GitLab 12.8 package in a multi-server
+- Description: Tested upgrading from GitLab 12.7.5 to the latest GitLab 12.8 package in a multi-node
configuration.
- Outcome: Partial success because we did not run the looping pipeline during the demo to monitor
downtime.
### January 2020
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/200085):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/200085):
-- Description: Tested upgrading from GitLab 12.6.x to the latest GitLab 12.7 package in a multi-server
+- Description: Tested upgrading from GitLab 12.6.x to the latest GitLab 12.7 package in a multi-node
configuration.
- Outcome: Upgrade test was successful.
- Follow up issues:
@@ -37,16 +70,16 @@ The following are GitLab upgrade validation tests we performed.
- [Add more logging to Geo end-to-end tests](https://gitlab.com/gitlab-org/gitlab/-/issues/201830).
- [Excess service restarts during zero-downtime upgrade](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5047).
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/199836):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/199836):
-- Description: Tested upgrading from GitLab 12.5.7 to GitLab 12.6.6 in a multi-server configuration.
+- Description: Tested upgrading from GitLab 12.5.7 to GitLab 12.6.6 in a multi-node configuration.
- Outcome: Upgrade test was successful.
- Follow up issue:
[Update documentation for zero-downtime upgrades to ensure deploy node it not in use](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5046).
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/37044):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/37044):
-- Description: Tested upgrading from GitLab 12.4.x to the latest GitLab 12.5 package in a multi-server
+- Description: Tested upgrading from GitLab 12.4.x to the latest GitLab 12.5 package in a multi-node
configuration.
- Outcome: Upgrade test was successful.
- Follow up issues:
@@ -55,17 +88,17 @@ The following are GitLab upgrade validation tests we performed.
### October 2019
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/35262):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/35262):
-- Description: Tested upgrading from GitLab 12.3.5 to GitLab 12.4.1 in a multi-server configuration.
+- Description: Tested upgrading from GitLab 12.3.5 to GitLab 12.4.1 in a multi-node configuration.
- Outcome: Upgrade test was successful.
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/32437):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/32437):
- Description: Tested upgrading from GitLab 12.2.8 to GitLab 12.3.5.
- Outcome: Upgrade test was successful.
-[Upgrade Geo multi-server installation](https://gitlab.com/gitlab-org/gitlab/-/issues/32435):
+[Upgrade Geo multi-node installation](https://gitlab.com/gitlab-org/gitlab/-/issues/32435):
- Description: Tested upgrading from GitLab 12.1.9 to GitLab 12.2.8.
- Outcome: Partial success due to possible misconfiguration issues.
@@ -80,7 +113,7 @@ The following are PostgreSQL upgrade validation tests we performed.
- Description: Prior to making PostgreSQL 11 the default version of PostgreSQL in GitLab 12.10, we
tested upgrading to PostgreSQL 11 in Geo deployments on GitLab 12.9.
-- Outcome: Partially successful. Issues were discovered in multi-server configurations with a separate
+- Outcome: Partially successful. Issues were discovered in multi-node configurations with a separate
tracking database and concerns were raised about allowing automatic upgrades when Geo enabled.
- Follow up issues:
- [`replicate-geo-database` incorrectly tries to back up repositories](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5241).
@@ -102,6 +135,6 @@ The following are PostgreSQL upgrade validation tests we performed.
various upgrade scenarios from GitLab 11.11.5 through to GitLab 12.1.8.
- Outcome: Multiple issues were found when upgrading and addressed in follow-up issues.
- Follow up issues:
- - [`gitlab-ctl` reconfigure fails on Redis node in multi-server Geo setup](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4706).
- - [Geo multi-server upgrade from 12.0.9 to 12.1.9 does not upgrade PostgreSQL](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4705).
- - [Refresh foreign tables fails on app server in multi-server setup after upgrade to 12.1.9](https://gitlab.com/gitlab-org/gitlab/-/issues/32119).
+ - [`gitlab-ctl` reconfigure fails on Redis node in multi-node Geo setup](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4706).
+ - [Geo multi-node upgrade from 12.0.9 to 12.1.9 does not upgrade PostgreSQL](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4705).
+ - [Refresh foreign tables fails on app server in multi-node setup after upgrade to 12.1.9](https://gitlab.com/gitlab-org/gitlab/-/issues/32119).
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index 5b4b476bfa8..51c831abaf3 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -9,7 +9,7 @@ type: howto
> - Introduced in GitLab Enterprise Edition 8.9.
> - Using Geo in combination with
-> [multi-server architectures](../../reference_architectures/index.md)
+> [multi-node architectures](../../reference_architectures/index.md)
> is considered **Generally Available** (GA) in
> [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.
@@ -213,9 +213,29 @@ For information on configuring Geo, see [Geo configuration](configuration.md).
For information on how to update your Geo nodes to the latest GitLab version, see [Updating the Geo nodes](updating_the_geo_nodes.md).
-### Configuring Geo for multiple servers
+### Pausing and resuming replication
-For information on configuring Geo for multiple servers, see [Geo for multiple servers](multiple_servers.md).
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/35913) in [GitLab Premium](https://about.gitlab.com/pricing/) 13.2.
+
+In some circumstances, like during [upgrades](updating_the_geo_nodes.md) or a [planned failover](../disaster_recovery/planned_failover.md), it is desirable to pause replication between the primary and secondary.
+
+Pausing and resuming replication is done via a command line tool from the secondary node.
+
+**To pause (from the secondary node):**
+
+```shell
+gitlab-ctl geo-replication-pause
+```
+
+**To resume (from the secondary node):**
+
+```shell
+gitlab-ctl geo-replication-resume
+```
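After pausing or resuming, the secondary's sync state can be reviewed from the same node; a sketch assuming an Omnibus installation (the `geo:status` Rake task summarizes replication and verification progress):

```shell
# Show the Geo sync status on the secondary node
sudo gitlab-rake geo:status
```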
+
+### Configuring Geo for multiple nodes
+
+For information on configuring Geo for multiple nodes, see [Geo for multiple nodes](multiple_servers.md).
### Configuring Geo with Object Storage
@@ -245,6 +265,10 @@ For an example of how to set up a location-aware Git remote URL with AWS Route53
For more information on removing a Geo node, see [Removing **secondary** Geo nodes](remove_geo_node.md).
+## Disable Geo
+
+To find out how to disable Geo, see [Disabling Geo](disable_geo.md).
+
## Current limitations
CAUTION: **Caution:**
diff --git a/doc/administration/geo/replication/location_aware_git_url.md b/doc/administration/geo/replication/location_aware_git_url.md
index 49c83ee1718..8b086e3ff5f 100644
--- a/doc/administration/geo/replication/location_aware_git_url.md
+++ b/doc/administration/geo/replication/location_aware_git_url.md
@@ -18,7 +18,7 @@ Though these instructions use [AWS Route53](https://aws.amazon.com/route53/),
other services such as [Cloudflare](https://www.cloudflare.com/) could be used
as well.
-NOTE: **Note**
+NOTE: **Note:**
You can also use a load balancer to distribute web UI or API traffic to
[multiple Geo **secondary** nodes](../../../user/admin_area/geo_nodes.md#multiple-secondary-nodes-behind-a-load-balancer).
Importantly, the **primary** node cannot yet be included. See the feature request
diff --git a/doc/administration/geo/replication/multiple_servers.md b/doc/administration/geo/replication/multiple_servers.md
index 2747aaeb105..d8f04e07fb0 100644
--- a/doc/administration/geo/replication/multiple_servers.md
+++ b/doc/administration/geo/replication/multiple_servers.md
@@ -5,15 +5,15 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: howto
---
-# Geo for multiple servers **(PREMIUM ONLY)**
+# Geo for multiple nodes **(PREMIUM ONLY)**
This document describes a minimal reference architecture for running Geo
-in a multi-server configuration. If your multi-server setup differs from the one
+in a multi-node configuration. If your multi-node setup differs from the one
described, it is possible to adapt these instructions to your needs.
## Architecture overview
-![Geo multi-server diagram](../../high_availability/img/geo-ha-diagram.png)
+![Geo multi-node diagram](../../high_availability/img/geo-ha-diagram.png)
_[diagram source - GitLab employees only](https://docs.google.com/drawings/d/1z0VlizKiLNXVVVaERFwgsIOuEgjcUqDTWPdQYsE7Z4c/edit)_
@@ -30,36 +30,36 @@ The only external way to access the two Geo deployments is by HTTPS at
NOTE: **Note:**
The **primary** and **secondary** Geo deployments must be able to communicate to each other over HTTPS.
-## Redis and PostgreSQL for multiple servers
+## Redis and PostgreSQL for multiple nodes
Geo supports:
-- Redis and PostgreSQL on the **primary** node configured for multiple servers.
-- Redis on **secondary** nodes configured for multiple servers.
+- Redis and PostgreSQL on the **primary** node configured for multiple nodes.
+- Redis on **secondary** nodes configured for multiple nodes.
NOTE: **Note:**
-Support for PostgreSQL on **secondary** nodes in multi-server configuration
+Support for PostgreSQL on **secondary** nodes in multi-node configuration
[is planned](https://gitlab.com/groups/gitlab-org/-/epics/2536).
Because of the additional complexity involved in setting up this configuration
-for PostgreSQL and Redis, it is not covered by this Geo multi-server documentation.
+for PostgreSQL and Redis, it is not covered by this Geo multi-node documentation.
-For more information about setting up a multi-server PostgreSQL cluster and Redis cluster using the omnibus package see the multi-server documentation for
+For more information about setting up a multi-node PostgreSQL cluster and Redis cluster using the omnibus package see the multi-node documentation for
[PostgreSQL](../../postgresql/replication_and_failover.md) and
-[Redis](../../high_availability/redis.md), respectively.
+[Redis](../../redis/replication_and_failover.md), respectively.
NOTE: **Note:**
It is possible to use cloud hosted services for PostgreSQL and Redis, but this is beyond the scope of this document.
-## Prerequisites: Two working GitLab multi-server clusters
+## Prerequisites: Two working GitLab multi-node clusters
One cluster will serve as the **primary** node. Use the
-[GitLab multi-server documentation](../../reference_architectures/index.md) to set this up. If
+[GitLab multi-node documentation](../../reference_architectures/index.md) to set this up. If
you already have a working GitLab instance that is in-use, it can be used as a
**primary**.
The second cluster will serve as the **secondary** node. Again, use the
-[GitLab multi-server documentation](../../reference_architectures/index.md) to set this up.
+[GitLab multi-node documentation](../../reference_architectures/index.md) to set this up.
It's a good idea to log in and test it, however, note that its data will be
wiped out as part of the process of replicating from the **primary**.
@@ -90,12 +90,13 @@ The following steps enable a GitLab cluster to serve as the **primary** node.
After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.
-NOTE: **Note:** PostgreSQL and Redis should have already been disabled on the
+NOTE: **Note:**
+PostgreSQL and Redis should have already been disabled on the
application servers, and connections from the application servers to those
-services on the backend servers configured, during normal GitLab multi-server set up. See
-multi-server configuration documentation for
+services on the backend servers configured, during normal GitLab multi-node set up. See
+multi-node configuration documentation for
[PostgreSQL](../../postgresql/replication_and_failover.md#configuring-the-application-nodes)
-and [Redis](../../high_availability/redis.md#example-configuration-for-the-gitlab-application).
+and [Redis](../../redis/replication_and_failover.md#example-configuration-for-the-gitlab-application).
### Step 2: Configure the **primary** database
@@ -110,7 +111,7 @@ and [Redis](../../high_availability/redis.md#example-configuration-for-the-gitla
## Configure a **secondary** node
-A **secondary** cluster is similar to any other GitLab multi-server cluster, with two
+A **secondary** cluster is similar to any other GitLab multi-node cluster, with two
major differences:
- The main PostgreSQL database is a read-only replica of the **primary** node's
@@ -119,8 +120,8 @@ major differences:
called the "tracking database", which tracks the synchronization state of
various resources.
-Therefore, we will set up the multi-server components one-by-one, and include deviations
-from the normal multi-server setup. However, we highly recommend first configuring a
+Therefore, we will set up the multi-node components one-by-one, and include deviations
+from the normal multi-node setup. However, we highly recommend first configuring a
brand-new cluster as if it were not part of a Geo setup so that it can be
tested and verified as a working cluster. And only then should it be modified
for use as a Geo **secondary**. This helps to separate problems that are related
@@ -128,10 +129,10 @@ and are not related to Geo setup.
### Step 1: Configure the Redis and Gitaly services on the **secondary** node
-Configure the following services, again using the non-Geo multi-server
+Configure the following services, again using the non-Geo multi-node
documentation:
-- [Configuring Redis for GitLab](../../high_availability/redis.md) for multiple servers.
+- [Configuring Redis for GitLab](../../redis/replication_and_failover.md#example-configuration-for-the-gitlab-application) for multiple nodes.
- [Gitaly](../../high_availability/gitaly.md), which will store data that is
synchronized from the **primary** node.
@@ -141,8 +142,9 @@ recommended.
### Step 2: Configure the main read-only replica PostgreSQL database on the **secondary** node
-NOTE: **Note:** The following documentation assumes the database will be run on
-a single node only. Multi-server PostgreSQL on **secondary** nodes is
+NOTE: **Note:**
+The following documentation assumes the database will be run on
+a single node only. Multi-node PostgreSQL on **secondary** nodes is
[not currently supported](https://gitlab.com/groups/gitlab-org/-/epics/2536).
Configure the [**secondary** database](database.md) as a read-only replica of
@@ -206,7 +208,8 @@ If using an external PostgreSQL instance, refer also to
### Step 3: Configure the tracking database on the **secondary** node
-NOTE: **Note:** This documentation assumes the tracking database will be run on
+NOTE: **Note:**
+This documentation assumes the tracking database will be run on
only a single machine, rather than as a PostgreSQL cluster.
Configure the tracking database.
@@ -282,7 +285,7 @@ application services. These services are enabled selectively in the
configuration.
Configure the application servers following
-[Configuring GitLab for multiple servers](../../high_availability/gitlab.md), then make the
+[Configuring GitLab for multiple nodes](../../high_availability/gitlab.md), then make the
following modifications:
1. Edit `/etc/gitlab/gitlab.rb` on each application server in the **secondary**
@@ -370,13 +373,13 @@ application servers.
In this topology, a load balancer is required at each geographic location to
route traffic to the application servers.
-See [Load Balancer for GitLab with multiple servers](../../high_availability/load_balancer.md) for
+See [Load Balancer for GitLab with multiple nodes](../../high_availability/load_balancer.md) for
more information.
### Step 6: Configure the backend application servers on the **secondary** node
The minimal reference architecture diagram above shows all application services
-running together on the same machines. However, for multiple servers we
+running together on the same machines. However, for multiple nodes we
[strongly recommend running all services separately](../../reference_architectures/index.md).
For example, a Sidekiq server could be configured similarly to the frontend
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index b03a2dae971..c2f8aa35d2d 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -5,7 +5,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: howto
---
-# Geo Troubleshooting **(PREMIUM ONLY)**
+# Troubleshooting Geo **(PREMIUM ONLY)**
Setting up Geo requires careful attention to details and sometimes it's easy to
miss a step.
@@ -452,13 +452,13 @@ to start again from scratch, there are a few steps that can help you:
chown git:git /var/opt/gitlab/git-data/repositories
```
- TIP: **Tip**
+ TIP: **Tip:**
You may want to remove the `/var/opt/gitlab/git-data/repositories.old` in the future
as soon as you confirmed that you don't need it anymore, to save disk space.
1. _(Optional)_ Rename other data folders and create new ones
- CAUTION: **Caution**:
+ CAUTION: **Caution:**
You may still have files on the **secondary** node that have been removed from **primary** node but
removal have not been reflected. If you skip this step, they will never be removed
from this Geo node.
@@ -701,7 +701,8 @@ To check the configuration:
Description |
```
- NOTE: **Note:** Pay particular attention to the host and port under
+ NOTE: **Note:**
+ Pay particular attention to the host and port under
FDW options. That configuration should point to the Geo secondary
database.
diff --git a/doc/administration/geo/replication/updating_the_geo_nodes.md b/doc/administration/geo/replication/updating_the_geo_nodes.md
index 6c2778ad0fe..c0b1bc6688c 100644
--- a/doc/administration/geo/replication/updating_the_geo_nodes.md
+++ b/doc/administration/geo/replication/updating_the_geo_nodes.md
@@ -36,15 +36,18 @@ different steps.
## General update steps
-NOTE: **Note:** These general update steps are not intended for [high-availability deployments](https://docs.gitlab.com/omnibus/update/README.html#multi-node--ha-deployment), and will cause downtime. If you want to avoid downtime, consider using [zero downtime updates](https://docs.gitlab.com/omnibus/update/README.html#zero-downtime-updates).
+NOTE: **Note:**
+These general update steps are not intended for [high-availability deployments](https://docs.gitlab.com/omnibus/update/README.html#multi-node--ha-deployment), and will cause downtime. If you want to avoid downtime, consider using [zero downtime updates](https://docs.gitlab.com/omnibus/update/README.html#zero-downtime-updates).
To update the Geo nodes when a new GitLab version is released, update **primary**
and all **secondary** nodes:
+1. **Optional:** [Pause replication on each **secondary** node.](./index.md#pausing-and-resuming-replication)
1. Log into the **primary** node.
1. [Update GitLab on the **primary** node using Omnibus](https://docs.gitlab.com/omnibus/update/README.html).
1. Log into each **secondary** node.
1. [Update GitLab on each **secondary** node using Omnibus](https://docs.gitlab.com/omnibus/update/README.html).
+1. If you paused replication in step 1, [resume replication on each **secondary** node](./index.md#pausing-and-resuming-replication).
1. [Test](#check-status-after-updating) **primary** and **secondary** nodes, and check version in each.
### Check status after updating
diff --git a/doc/administration/git_annex.md b/doc/administration/git_annex.md
index c4c38f8e683..ed6218ede91 100644
--- a/doc/administration/git_annex.md
+++ b/doc/administration/git_annex.md
@@ -4,9 +4,9 @@ disqus_identifier: 'https://docs.gitlab.com/ee/workflow/git_annex.html'
# Git annex
-> **Warning:** GitLab has [completely
-removed](https://gitlab.com/gitlab-org/gitlab/-/issues/1648) in GitLab 9.0 (2017/03/22).
-Read through the [migration guide from git-annex to Git LFS](../topics/git/lfs/migrate_from_git_annex_to_git_lfs.md).
+CAUTION: **Warning:**
+[Git Annex support was removed](https://gitlab.com/gitlab-org/gitlab/-/issues/1648)
+in GitLab 9.0. Read through the [migration guide from git-annex to Git LFS](../topics/git/lfs/migrate_from_git_annex_to_git_lfs.md).
The biggest limitation of Git, compared to some older centralized version
control systems has been the maximum size of the repositories.
@@ -198,7 +198,7 @@ can cause `git-annex` to raise unpredicted warnings and errors.
Consult the [Annex upgrade page](https://git-annex.branchable.com/upgrades/) for more information about
the differences between versions. You can find out which version is installed
-on your server by navigating to <https://pkgs.org/download/git-annex> and
+on your server by navigating to `https://pkgs.org/download/git-annex` and
searching for your distribution.
Although there is no general guide for `git-annex` errors, there are a few tips
diff --git a/doc/administration/gitaly/img/praefect_storage_v12_10.png b/doc/administration/gitaly/img/praefect_storage_v12_10.png
deleted file mode 100644
index f60a56fa1fb..00000000000
--- a/doc/administration/gitaly/img/praefect_storage_v12_10.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index 1469ed64004..057d0559c14 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -19,12 +19,13 @@ In the Gitaly documentation:
- [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell).
- [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse).
-GitLab end users do not have direct access to Gitaly.
+GitLab end users do not have direct access to Gitaly. Gitaly only manages Git
+repository access for GitLab. Other types of GitLab data aren't accessed using Gitaly.
CAUTION: **Caution:**
-From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0,
-support for NFS for Git repositories is scheduled to be removed. Upgrade to
-[Gitaly Cluster](praefect.md) as soon as possible.
+From GitLab 13.0, Gitaly support for NFS is deprecated. In GitLab 14.0, Gitaly support
+for NFS is scheduled to be removed. Upgrade to [Gitaly Cluster](praefect.md) as soon as
+possible.
## Architecture
@@ -49,6 +50,12 @@ To change Gitaly settings:
1. Edit `/home/git/gitaly/config.toml` and add or change the [Gitaly settings](https://gitlab.com/gitlab-org/gitaly/blob/master/config.toml.example).
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+The following configuration options are also available:
+
+- Enabling [TLS support](#enable-tls-support).
+- Configuring the [number of `gitaly-ruby` workers](#configure-number-of-gitaly-ruby-workers).
+- Limiting [RPC concurrency](#limit-rpc-concurrency).
+
## Run Gitaly on its own server
By default, Gitaly is run on the same server as Gitaly clients and is
@@ -72,6 +79,7 @@ The process for setting up Gitaly on its own server is:
1. [Configure authentication](#configure-authentication).
1. [Configure Gitaly servers](#configure-gitaly-servers).
1. [Configure Gitaly clients](#configure-gitaly-clients).
+1. [Disable Gitaly where not required](#disable-gitaly-where-not-required-optional) (optional).
When running Gitaly on its own server, note the following regarding GitLab versions:
@@ -116,7 +124,7 @@ The following list depicts the network architecture of Gitaly:
DANGER: **Danger:**
Gitaly servers must not be exposed to the public internet as Gitaly's network traffic is unencrypted
by default. The use of firewall is highly recommended to restrict access to the Gitaly server.
-Another option is to [use TLS](#tls-support).
+Another option is to [use TLS](#enable-tls-support).
In the following sections, we describe how to configure two Gitaly servers with secret token
`abc123secret`:
@@ -380,20 +388,10 @@ Gitaly makes the following assumptions:
clients, and that Gitaly server can read and write to `/mnt/gitlab/storage2`.
- Your `gitaly1.internal` and `gitaly2.internal` Gitaly servers can reach each other.
-Note you can't a use mixed setup, with at least one of your Gitaly servers configured as a local
-server with the `path` setting provided. This is because other Gitaly instances can't communicate
-with it. The following setup is _incorrect_, because:
-
-- You must replace `path` with `gitaly_address` containing a proper value.
-- The address must be reachable from the other two addresses provided.
-
-```ruby
-git_data_dirs({
- 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
- 'storage1' => { 'path' => '/var/opt/gitlab/git-data' },
- 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
-})
-```
+You can't define some Gitaly servers as local (without `gitaly_address`) and others as remote
+(with `gitaly_address`) unless you use the special
+[mixed configuration](#mixed-configuration).
**For Omnibus GitLab**
@@ -457,15 +455,47 @@ If you have [server hooks](../server_hooks.md) configured, either per repository
must move these to the Gitaly servers. If you have multiple Gitaly servers, copy your server hooks
to all Gitaly servers.
-### Disabling the Gitaly service in a cluster environment
+#### Mixed configuration
+
+GitLab can reside on the same server as one of many Gitaly servers, but doesn't support a
+configuration that mixes local and remote settings. The following setup is incorrect, because:
+
+- All addresses must be reachable from the other Gitaly servers.
+- `storage1` will be assigned a Unix socket for `gitaly_address` which is
+ invalid for some of the Gitaly servers.
+
+```ruby
+git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'path' => '/mnt/gitlab/git-data' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+})
+```
+
+To combine local and remote Gitaly servers, use an external address for the local Gitaly server. For
+example:
+
+```ruby
+git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ # Address of the GitLab server that has Gitaly running on it
+ 'storage1' => { 'gitaly_address' => 'tcp://gitlab.internal:8075', 'path' => '/mnt/gitlab/git-data' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+})
+```
+
+`path` can only be included for storage shards on the local Gitaly server.
+If it's excluded, the default Git storage directory is used for that storage shard.
-If you are running Gitaly [as a remote
-service](#run-gitaly-on-its-own-server) you may want to disable
-the local Gitaly service that runs on your GitLab server by default.
-Disabling Gitaly only makes sense when you run GitLab in a custom
-cluster configuration, where different services run on different
-machines. Disabling Gitaly on all machines in the cluster is not a
-valid configuration.
+### Disable Gitaly where not required (optional)
+
+If you are running Gitaly [as a remote service](#run-gitaly-on-its-own-server) you may want to
+disable the local Gitaly service that runs on your GitLab server by default, leaving it only running
+where required.
+
+Disabling Gitaly on the GitLab instance only makes sense when you run GitLab in a custom cluster configuration, where
+Gitaly runs on a separate machine from the GitLab instance. Disabling Gitaly on all machines in the cluster is not
+a valid configuration (some machines must act as Gitaly servers).
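+
+On Omnibus GitLab, disabling the local Gitaly service boils down to a single setting, shown in this
+minimal sketch (the full steps, including the reconfigure step, follow below):
+
+```ruby
+# /etc/gitlab/gitlab.rb on a GitLab application server that should not run Gitaly
+gitaly['enable'] = false
+```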
To disable Gitaly on a GitLab server:
@@ -489,43 +519,44 @@ To disable Gitaly on a GitLab server:
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-## TLS support
+## Enable TLS support
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22602) in GitLab 11.8.
-Gitaly supports TLS encryption. To be able to communicate
-with a Gitaly instance that listens for secure connections you will need to use `tls://` URL
-scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
+Gitaly supports TLS encryption. To communicate with a Gitaly instance that listens for secure
+connections, you must use the `tls://` URL scheme in the `gitaly_address` of the corresponding
+storage entry in the GitLab configuration.
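+
+For example, a minimal sketch of a TLS-enabled storage entry might look like the following (the
+hostname and port are placeholders; the detailed configuration steps are below):
+
+```ruby
+git_data_dirs({
+  # 'tls://' tells Gitaly clients to open an encrypted connection to this Gitaly server
+  'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+})
+```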
-You will need to bring your own certificates as this isn't provided automatically.
-The certificate corresponding to each Gitaly server will need to be installed
-on that Gitaly server.
+You must supply your own certificates as this isn't provided automatically. The certificate
+corresponding to each Gitaly server must be installed on that Gitaly server.
-Additionally the certificate, or its certificate authority, must be installed on all Gitaly servers
-(including the Gitaly server using the certificate) and on all Gitaly clients
-that communicate with it following the procedure described in
-[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates) (and repeated below).
+Additionally, the certificate (or its certificate authority) must be installed on all:
-NOTE: **Note**
-The certificate must specify the address you use to access the
-Gitaly server. If you are addressing the Gitaly server by a hostname, you can
-either use the Common Name field for this, or add it as a Subject Alternative
-Name. If you are addressing the Gitaly server by its IP address, you must add it
-as a Subject Alternative Name to the certificate.
-[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+- Gitaly servers, including the Gitaly server using the certificate.
+- Gitaly clients that communicate with it.
-NOTE: **Note:**
-It is possible to configure Gitaly servers with both an
-unencrypted listening address `listen_addr` and an encrypted listening
-address `tls_listen_addr` at the same time. This allows you to do a
-gradual transition from unencrypted to encrypted traffic, if necessary.
+The process is documented in the
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates)
+and repeated below.
+
+Note the following:
+
+- The certificate must specify the address you use to access the Gitaly server. If you are:
+ - Addressing the Gitaly server by a hostname, you can either use the Common Name field for this,
+ or add it as a Subject Alternative Name.
+ - Addressing the Gitaly server by its IP address, you must add it as a Subject Alternative Name to
+ the certificate. [gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+- You can configure Gitaly servers with both an unencrypted listening address `listen_addr` and an
+ encrypted listening address `tls_listen_addr` at the same time. This allows you to gradually
+ transition from unencrypted to encrypted traffic if necessary.
To configure Gitaly with TLS:
**For Omnibus GitLab**
1. Create certificates for Gitaly servers.
-1. On the Gitaly clients, copy the certificates, or their certificate authority, into the `/etc/gitlab/trusted-certs`:
+1. On the Gitaly clients, copy the certificates (or their certificate authority) into
+ `/etc/gitlab/trusted-certs`:
```shell
sudo cp cert.pem /etc/gitlab/trusted-certs/
@@ -542,7 +573,8 @@ To configure Gitaly with TLS:
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-1. On the Gitaly servers, create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+1. On the Gitaly servers, create the `/etc/gitlab/ssl` directory and copy your key and certificate
+ there:
```shell
sudo mkdir -p /etc/gitlab/ssl
@@ -551,8 +583,9 @@ To configure Gitaly with TLS:
sudo chmod 644 key.pem cert.pem
```
-1. Copy all Gitaly server certificates, or their certificate authority, to `/etc/gitlab/trusted-certs` so Gitaly server will trust the certificate when
-calling into itself or other Gitaly servers:
+1. Copy all Gitaly server certificates (or their certificate authority) to
+ `/etc/gitlab/trusted-certs` so that Gitaly servers will trust the certificate when calling into themselves
+ or other Gitaly servers:
```shell
sudo cp cert1.pem cert2.pem /etc/gitlab/trusted-certs/
@@ -572,10 +605,13 @@ calling into itself or other Gitaly servers:
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-1. (Optional) After [verifying that all Gitaly traffic is being served over TLS](#observe-type-of-gitaly-connections),
- you can improve security by disabling non-TLS connections by commenting out
- or deleting `gitaly['listen_addr']` in `/etc/gitlab/gitlab.rb`, saving the file,
- and [reconfiguring GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Verify Gitaly traffic is being served over TLS by
+ [observing the types of Gitaly connections](#observe-type-of-gitaly-connections).
+1. (Optional) Improve security by:
+ 1. Disabling non-TLS connections by commenting out or deleting `gitaly['listen_addr']` in
+ `/etc/gitlab/gitlab.rb`.
+ 1. Saving the file.
+ 1. [Reconfiguring GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
**For installations from source**
@@ -605,9 +641,9 @@ calling into itself or other Gitaly servers:
```
NOTE: **Note:**
- `/some/dummy/path` should be set to a local folder that exists, however no
- data will be stored in this folder. This will no longer be necessary after
- [this issue](https://gitlab.com/gitlab-org/gitaly/-/issues/1282) is resolved.
+ `/some/dummy/path` should be set to a local folder that exists, however no data will be stored
+ in this folder. This will no longer be necessary after
+ [Gitaly issue #1282](https://gitlab.com/gitlab-org/gitaly/-/issues/1282) is resolved.
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
1. On the Gitaly servers, create or edit `/etc/default/gitlab` and add:
@@ -625,7 +661,9 @@ calling into itself or other Gitaly servers:
sudo chmod 644 key.pem cert.pem
```
-1. Copy all Gitaly server certificates, or their certificate authority, to the system trusted certificates so Gitaly server will trust the certificate when calling into itself or other Gitaly servers.
+1. Copy all Gitaly server certificates (or their certificate authority) to the system trusted
+ certificates folder so Gitaly server will trust the certificate when calling into itself or other Gitaly
+ servers.
```shell
sudo cp cert.pem /usr/local/share/ca-certificates/gitaly.crt
@@ -643,15 +681,18 @@ calling into itself or other Gitaly servers:
```
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-1. (Optional) After [verifying that all Gitaly traffic is being served over TLS](#observe-type-of-gitaly-connections),
- you can improve security by disabling non-TLS connections by commenting out
- or deleting `listen_addr` in `/home/git/gitaly/config.toml`, saving the file,
- and [restarting GitLab](../restart_gitlab.md#installations-from-source).
+1. Verify Gitaly traffic is being served over TLS by
+ [observing the types of Gitaly connections](#observe-type-of-gitaly-connections).
+1. (Optional) Improve security by:
+ 1. Disabling non-TLS connections by commenting out or deleting `listen_addr` in
+ `/home/git/gitaly/config.toml`.
+ 1. Saving the file.
+ 1. [Restarting GitLab](../restart_gitlab.md#installations-from-source).
### Observe type of Gitaly connections
-To observe what type of connections are actually being used in a
-production environment you can use the following Prometheus query:
+[Prometheus](../monitoring/prometheus/index.md) can be used to observe what type of connections Gitaly
+is serving in a production environment. Use the following Prometheus query:
```prometheus
sum(rate(gitaly_connections_total[5m])) by (type)
@@ -660,24 +701,26 @@ sum(rate(gitaly_connections_total[5m])) by (type)
## `gitaly-ruby`
Gitaly was developed to replace the Ruby application code in GitLab.
-In order to save time and/or avoid the risk of rewriting existing
-application logic, in some cases we chose to copy some application code
-from GitLab into Gitaly almost as-is. To be able to run that code,
-`gitaly-ruby` was created, which is a "sidecar" process for the main Gitaly Go
-process. Some examples of things that are implemented in `gitaly-ruby` are
-RPCs that deal with wikis, and RPCs that create commits on behalf of
-a user, such as merge commits.
-### Number of `gitaly-ruby` workers
+To save time and avoid the risk of rewriting existing application logic, we chose to copy some
+application code from GitLab into Gitaly.
+
+To be able to run that code, `gitaly-ruby` was created, which is a "sidecar" process for the main
+Gitaly Go process. Some examples of things that are implemented in `gitaly-ruby` are:
+
+- RPCs that deal with wikis.
+- RPCs that create commits on behalf of a user, such as merge commits.
+
+### Configure number of `gitaly-ruby` workers
-`gitaly-ruby` has much less capacity than Gitaly itself. If your Gitaly
-server has to handle a lot of requests, the default setting of having
-just one active `gitaly-ruby` sidecar might not be enough. If you see
-`ResourceExhausted` errors from Gitaly, it's very likely that you have not
-enough `gitaly-ruby` capacity.
+`gitaly-ruby` has much less capacity than Gitaly implemented in Go. If your Gitaly server has to handle lots of
+requests, the default setting of having just one active `gitaly-ruby` sidecar might not be enough.
-You can increase the number of `gitaly-ruby` processes on your Gitaly
-server with the following settings.
+If you see `ResourceExhausted` errors from Gitaly, it's very likely that you don't have enough
+`gitaly-ruby` capacity.
+
+You can increase the number of `gitaly-ruby` processes on your Gitaly server with the following
+settings:
**For Omnibus GitLab**
@@ -702,13 +745,16 @@ server with the following settings.
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-## Limiting RPC concurrency
+## Limit RPC concurrency
+
+Clone traffic can put a large strain on your Gitaly service. The bulk of the work gets done in
+either of the following RPCs:
+
+- `SSHUploadPack` (for Git SSH).
+- `PostUploadPack` (for Git HTTP).
-It can happen that CI clone traffic puts a large strain on your Gitaly
-service. The bulk of the work gets done in the SSHUploadPack (for Git
-SSH) and PostUploadPack (for Git HTTP) RPC's. To prevent such workloads
-from overcrowding your Gitaly server you can set concurrency limits in
-Gitaly's configuration file.
+To prevent such workloads from overwhelming your Gitaly server, you can set concurrency limits in
+Gitaly's configuration file. For example:
```ruby
# in /etc/gitlab/gitlab.rb
@@ -725,226 +771,250 @@ gitaly['concurrency'] = [
]
```
-This will limit the number of in-flight RPC calls for the given RPC's.
-The limit is applied per repository. In the example above, each on the
-Gitaly server can have at most 20 simultaneous `PostUploadPack` calls in
-flight, and the same for `SSHUploadPack`. If another request comes in for
-a repository that has used up its 20 slots, that request will get
-queued.
+This limits the number of in-flight RPC calls for the given RPCs. The limit is applied per
+repository. In the example above:
+
+- Each repository served by the Gitaly server can have at most 20 simultaneous `PostUploadPack` RPC
+ calls in flight, and the same for `SSHUploadPack`.
+- If another request comes in for a repository that has used up its 20 slots, that request gets
+ queued.
+
+You can observe the behavior of this queue using the Gitaly logs and Prometheus:
-You can observe the behavior of this queue via the Gitaly logs and via
-Prometheus. In the Gitaly logs, you can look for the string (or
-structured log field) `acquire_ms`. Messages that have this field are
-reporting about the concurrency limiter. In Prometheus, look for the
-`gitaly_rate_limiting_in_progress`, `gitaly_rate_limiting_queued` and
-`gitaly_rate_limiting_seconds` metrics.
+- In the Gitaly logs, look for the string (or structured log field) `acquire_ms`. Messages that have
+ this field are reporting about the concurrency limiter.
+- In Prometheus, look for the following metrics:
-The name of the Prometheus metric is not quite right because this is a
-concurrency limiter, not a rate limiter. If a Gitaly client makes 1000 requests
-in a row in a very short timespan, the concurrency will not exceed 1,
-and this mechanism (the concurrency limiter) will do nothing.
+ - `gitaly_rate_limiting_in_progress`.
+ - `gitaly_rate_limiting_queued`.
+ - `gitaly_rate_limiting_seconds`.
-## Rotating a Gitaly authentication token
+NOTE: **Note:**
+Though the name of the Prometheus metric contains `rate_limiting`, it is a concurrency limiter, not
+a rate limiter. If a Gitaly client makes 1000 requests in a row very quickly, concurrency will not
+exceed 1 and the concurrency limiter has no effect.
+
+## Rotate Gitaly authentication token
+
+Rotating credentials in a production environment often requires downtime, causes outages, or both.
-Rotating credentials in a production environment often either requires
-downtime, or causes outages, or both. If you are careful, though, you
-*can* rotate Gitaly credentials without a service interruption.
+However, you can rotate Gitaly credentials without a service interruption. Rotating a Gitaly
+authentication token involves:
-This procedure also works if you are running GitLab on a single server.
-In that case, "Gitaly server" and "Gitaly client" refers to the same
-machine.
+- [Verifying authentication monitoring](#verify-authentication-monitoring).
+- [Enabling "auth transitioning" mode](#enable-auth-transitioning-mode).
+- [Updating Gitaly authentication tokens](#update-gitaly-authentication-token).
+- [Ensuring there are no authentication failures](#ensure-there-are-no-authentication-failures).
+- [Disabling "auth transitioning" mode](#disable-auth-transitioning-mode).
+- [Verifying authentication is enforced](#verify-authentication-is-enforced).
-### 1. Monitor current authentication behavior
+This procedure also works if you are running GitLab on a single server. In that case, "Gitaly
+server" and "Gitaly client" refer to the same machine.
-Use Prometheus to see what the current authentication behavior of your
-GitLab installation is.
+### Verify authentication monitoring
+
+Before rotating a Gitaly authentication token, verify that you can monitor the authentication
+behavior of your GitLab installation using Prometheus. Use the following Prometheus query:
```prometheus
sum(rate(gitaly_authentications_total[5m])) by (enforced, status)
```
-In a system where authentication is configured correctly, and where you
-have live traffic, you will see something like this:
+In a system where authentication is configured correctly and where you have live traffic, you will
+see something like this:
```prometheus
{enforced="true",status="ok"} 4424.985419441742
```
-There may also be other numbers with rate 0. We only care about the
-non-zero numbers.
+There may also be other numbers with rate 0. We only care about the non-zero numbers.
-The only non-zero number should have `enforced="true",status="ok"`. If
-you have other non-zero numbers, something is wrong in your
-configuration.
+The only non-zero number should have `enforced="true",status="ok"`. If you have other non-zero
+numbers, something is wrong in your configuration.
-The `status="ok"` number reflects your current request rate. In the example
-above, Gitaly is handling about 4000 requests per second.
+The `status="ok"` number reflects your current request rate. In the example above, Gitaly is
+handling about 4000 requests per second.
-Now you have established that you can monitor the Gitaly authentication
-behavior of your GitLab installation.
+Now that you have established that you can monitor the Gitaly authentication behavior of your GitLab
+installation, you can begin the rest of the procedure.
-### 2. Reconfigure all Gitaly servers to be in "auth transitioning" mode
+### Enable "auth transitioning" mode
-The second step is to temporarily disable authentication on the Gitaly servers.
+Temporarily disable Gitaly authentication on the Gitaly servers by putting them into "auth
+transitioning" mode as follows:
```ruby
# in /etc/gitlab/gitlab.rb
gitaly['auth_transitioning'] = true
```
-After you have applied this, your Prometheus query should return
-something like this:
+After you have made this change, your [Prometheus query](#verify-authentication-monitoring)
+should return something like:
```prometheus
{enforced="false",status="would be ok"} 4424.985419441742
```
-Because `enforced="false"`, it will be safe to start rolling out the new
-token.
+Because `enforced="false"`, it is safe to start rolling out the new token.
-### 3. Update Gitaly token on all clients and servers
+### Update Gitaly authentication token
-```ruby
-# in /etc/gitlab/gitlab.rb
+To update to a new Gitaly authentication token, on each Gitaly client **and** Gitaly server:
-gitaly['auth_token'] = 'my new secret token'
-```
+1. Update the configuration:
+
+ ```ruby
+ # in /etc/gitlab/gitlab.rb
+
+ gitaly['auth_token'] = '<new secret token>'
+ ```
-Remember to apply this on both your Gitaly clients *and* servers. If you
-check your Prometheus query while this change is being rolled out, you
-will see non-zero values for the `enforced="false",status="denied"` counter.
+1. Restart Gitaly:
-### 4. Use Prometheus to ensure there are no authentication failures
+ ```shell
+ gitlab-ctl restart gitaly
+ ```
+
+If you run your [Prometheus query](#verify-authentication-monitoring) while this change is
+being rolled out, you will see non-zero values for the `enforced="false",status="denied"` counter.
-After you applied the Gitaly token change everywhere, and all services
-involved have been restarted, you should will temporarily see a mix of
-`status="would be ok"` and `status="denied"`.
+### Ensure there are no authentication failures
-After the new token has been picked up by all Gitaly clients and
-servers, the **only non-zero rate** should be
-`enforced="false",status="would be ok"`.
+After the new token is set, and all services involved have been restarted, you will
+[temporarily see](#verify-authentication-monitoring) a mix of:
-### 5. Disable "auth transitioning" Mode
+- `status="would be ok"`.
+- `status="denied"`.
-Now we turn off the 'auth transitioning' mode. These final steps are
-important: without them, you have **no authentication**.
+After the new token has been picked up by all Gitaly clients and Gitaly servers, the
+**only non-zero rate** should be `enforced="false",status="would be ok"`.
-Update the configuration on your Gitaly servers:
+### Disable "auth transitioning" mode
+
+To re-enable Gitaly authentication, disable "auth transitioning" mode. Update the configuration on
+your Gitaly servers as follows:
```ruby
# in /etc/gitlab/gitlab.rb
gitaly['auth_transitioning'] = false
```
-### 6. Verify that authentication is enforced again
+CAUTION: **Caution:**
+Without completing this step, you have **no Gitaly authentication**.
+
+### Verify authentication is enforced
-Refresh your Prometheus query. You should now see the same kind of
-result as you did in the beginning:
+Refresh your [Prometheus query](#verify-authentication-monitoring). You should now see the same
+kind of result as you saw at the start. For example:
```prometheus
{enforced="true",status="ok"} 4424.985419441742
```
-Note that `enforced="true"`, meaning that authentication is being enforced.
+Note that `enforced="true"` means that authentication is being enforced.
-## Direct Git access in GitLab Rails
+## Direct Git access bypassing Gitaly
-Also known as "the Rugged patches".
+While it is possible to access Gitaly repositories stored on disk directly with a Git client,
+it is not advisable because Gitaly is being continuously improved and changed. These improvements may invalidate assumptions, resulting in performance degradation, instability, and even data loss.
+
+Gitaly has optimizations, such as the
+[`info/refs` advertisement cache](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/design_diskcache.md),
+that rely on Gitaly controlling and monitoring access to repositories via the
+official gRPC interface. Likewise, Praefect has optimizations, such as fault
+tolerance and distributed reads, that depend on the gRPC interface and
+database to determine repository state.
+
+For these reasons, **accessing repositories directly is done at your own risk
+and is not supported**.
+
+## Direct access to Git in GitLab
+
+Direct access to Git uses code in GitLab known as the "Rugged patches".
### History
-Before Gitaly existed, the things that are now Gitaly clients used to
-access Git repositories directly. Either on a local disk in the case of
-e.g. a single-machine Omnibus GitLab installation, or via NFS in the
-case of a horizontally scaled GitLab installation.
+Before Gitaly existed, what are now Gitaly clients used to access Git repositories directly, either:
+
+- On a local disk in the case of a single-machine Omnibus GitLab installation.
+- Using NFS in the case of a horizontally-scaled GitLab installation.
-Besides running plain `git` commands, in GitLab Rails we also used to
-use a Ruby gem (library) called
+Besides running plain `git` commands, GitLab used to use a Ruby library called
[Rugged](https://github.com/libgit2/rugged). Rugged is a wrapper around
-[libgit2](https://libgit2.org/), a stand-alone implementation of Git in
-the form of a C library.
-
-Over time it has become clear to use that Rugged, and particularly
-Rugged in combination with the [Unicorn](https://yhbt.net/unicorn/)
-web server, is extremely efficient. Because libgit2 is a *library* and
-not an external process, there was very little overhead between GitLab
-application code that tried to look up data in Git repositories, and the
-Git implementation itself.
-
-Because Rugged+Unicorn was so efficient, GitLab's application code ended
-up with lots of duplicate Git object lookups (like looking up the
-`master` commit a dozen times in one request). We could write
-inefficient code without being punished for it.
-
-When we migrated these Git lookups to Gitaly calls, we were suddenly
-getting a much higher fixed cost per Git lookup. Even when Gitaly is
-able to re-use an already-running `git` process to look up e.g. a commit
-you still have the cost of a network roundtrip to Gitaly, and within
-Gitaly a write/read roundtrip on the Unix pipes that connect Gitaly to
-the `git` process.
-
-Using GitLab.com performance as our yardstick, we pushed down the number
-of Gitaly calls per request until the loss of Rugged's efficiency was no
-longer felt. It also helped that we run Gitaly itself directly on the
-Git file severs, rather than via NFS mounts: this gave us a speed boost
-that counteracted the negative effect of not using Rugged anymore.
-
-Unfortunately, some *other* deployments of GitLab could not ditch NFS
-like we did on GitLab.com and they got the worst of both worlds: the
-slowness of NFS and the increased inherent overhead of Gitaly.
-
-As a performance band-aid for these stuck-on-NFS deployments, we
-re-introduced some of the old Rugged code that got deleted from
-GitLab Rails during the Gitaly migration project. These pieces of
-re-introduced code are informally referred to as "the Rugged patches".
-
-### Activation of direct Git access in GitLab Rails
-
-The Ruby methods that perform direct Git access are hidden behind [feature
-flags](../../development/gitaly.md#legacy-rugged-code). These feature
-flags are off by default. It is not good if you need to know about
-feature flags to get the best performance so in a second iteration, we
-added an automatic mechanism that will enable direct Git access.
-
-When GitLab Rails calls a function that has a Rugged patch it performs
-two checks. The result of both of these checks is cached.
-
-1. Is the feature flag for this patch set in the database? If so, do
- what the feature flag says.
-1. If the feature flag is not set (i.e. neither true nor false), try to
- see if we can access filesystem underneath the Gitaly server
- directly. If so, use the Rugged patch.
-
-To see if GitLab Rails can access the repository filesystem directly, we use
-the following heuristic:
-
-- Gitaly ensures that the filesystem has a metadata file in its root
- with a UUID in it.
-- Gitaly reports this UUID to GitLab Rails via the `ServerInfo` RPC.
-- GitLab Rails tries to read the metadata file directly. If it exists,
- and if the UUID's match, assume we have direct access.
-
-Because of the way the UUID check works, and because Omnibus GitLab will
-fill in the correct repository paths in the GitLab Rails config file
-`config/gitlab.yml`, **direct Git access in GitLab Rails is on by default in
-Omnibus**.
-
-### Plans to remove direct Git access in GitLab Rails
-
-For the sake of removing complexity it is desirable that we get rid of
-direct Git access in GitLab Rails. For as long as some GitLab installations are stuck
-with Git repositories on slow NFS, however, we cannot just remove them.
-
-There are two prongs to our efforts to remove direct Git access in GitLab Rails:
-
-1. Reduce the number of (inefficient) Gitaly queries made by
- GitLab Rails.
-1. Persuade everybody who runs a Highly Available / horizontally scaled
- GitLab installation to move off of NFS.
-
-The second prong is the only real solution. For this we need [Gitaly
-HA](https://gitlab.com/groups/gitlab-org/-/epics?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=Gitaly%20HA),
-which is still under development as of December 2019.
+[libgit2](https://libgit2.org/), a stand-alone implementation of Git in the form of a C library.
+
+Over time it became clear that Rugged, particularly in combination with
+[Unicorn](https://yhbt.net/unicorn/), is extremely efficient. Because `libgit2` is a library and
+not an external process, there was very little overhead between:
+
+- GitLab application code that tried to look up data in Git repositories.
+- The Git implementation itself.
+
+Because the combination of Rugged and Unicorn was so efficient, GitLab's application code ended up with lots of
+duplicate Git object lookups. For example, looking up the `master` commit a dozen times in one
+request. We could write inefficient code without being penalized by poor performance.
+
+When we migrated these Git lookups to Gitaly calls, we suddenly had a much higher fixed cost per Git
+lookup. Even when Gitaly is able to re-use an already-running `git` process (for example, to look up
+a commit), you still have:
+
+- The cost of a network roundtrip to Gitaly.
+- Within Gitaly, a write/read roundtrip on the Unix pipes that connect Gitaly to the `git` process.
+
+Using GitLab.com to measure, we reduced the number of Gitaly calls per request until the loss of
+Rugged's efficiency was no longer felt. It also helped that we run Gitaly itself directly on the Git
+file servers, rather than via NFS mounts. This gave us a speed boost that counteracted the negative
+effect of not using Rugged anymore.
+
+Unfortunately, other deployments of GitLab could not remove NFS like we did on GitLab.com, and they
+got the worst of both worlds:
+
+- The slowness of NFS.
+- The increased inherent overhead of Gitaly.
+
+The code removed from GitLab during the Gitaly migration project affected these deployments. As a
+performance workaround for these NFS-based deployments, we re-introduced some of the old Rugged
+code. This re-introduced code is informally referred to as the "Rugged patches".
+
+### How it works
+
+The Ruby methods that perform direct Git access are behind
+[feature flags](../../development/gitaly.md#legacy-rugged-code), disabled by default. It wasn't
+convenient to set feature flags to get the best performance, so we added an automatic mechanism that
+enables direct Git access.
+
+When GitLab calls a function that has a "Rugged patch", it performs two checks:
+
+- Is the feature flag for this patch set in the database? If so, the feature flag setting controls
+ GitLab's use of "Rugged patch" code.
+- If the feature flag is not set, GitLab tries accessing the filesystem underneath the
+ Gitaly server directly. If it can, it will use the "Rugged patch".
+
+The result of both of these checks is cached.
+
+To see if GitLab can access the repository filesystem directly, we use the following heuristic:
+
+- Gitaly ensures that the filesystem has a metadata file in its root with a UUID in it.
+- Gitaly reports this UUID to GitLab via the `ServerInfo` RPC.
+- GitLab Rails tries to read the metadata file directly. If it exists, and if the UUIDs match,
+  GitLab assumes it has direct access.
+
+Direct Git access is enabled by default in Omnibus GitLab because it fills in the correct repository
+paths in the GitLab configuration file `config/gitlab.yml`. This satisfies the UUID check.
+
+### Transition to Gitaly Cluster
+
+For the sake of removing complexity, we must remove direct Git access in GitLab. However, we can't
+remove it as long as some GitLab installations require Git repositories on NFS.
+
+There are two facets to our efforts to remove direct Git access in GitLab:
+
+- Reduce the number of inefficient Gitaly queries made by GitLab.
+- Persuade administrators of fault-tolerant or horizontally-scaled GitLab instances to migrate off
+ NFS.
+
+The second facet presents the only real solution. For this, we developed
+[Gitaly Cluster](praefect.md).
## Troubleshooting Gitaly
diff --git a/doc/administration/gitaly/praefect.md b/doc/administration/gitaly/praefect.md
index 3d4e606205e..1f97cd304f9 100644
--- a/doc/administration/gitaly/praefect.md
+++ b/doc/administration/gitaly/praefect.md
@@ -35,8 +35,8 @@ The availability objectives for Gitaly clusters are:
Writes are replicated asynchronously. Any writes that have not been replicated
to the newly promoted primary are lost.
- [Strong Consistency](https://gitlab.com/groups/gitlab-org/-/epics/1189) is
- planned to improve this to "no loss".
+ [Strong consistency](#strong-consistency) can be used to avoid loss in some
+ circumstances.
- **Recovery Time Objective (RTO):** Less than 10 seconds.
@@ -103,7 +103,7 @@ GitLab](https://about.gitlab.com/install/).
You will need the IP/host address for each node.
-1. `LOAD_BALANCER_SERVER_ADDRESS`: the IP/hots address of the load balancer
+1. `LOAD_BALANCER_SERVER_ADDRESS`: the IP/host address of the load balancer
1. `POSTGRESQL_SERVER_ADDRESS`: the IP/host address of the PostgreSQL server
1. `PRAEFECT_HOST`: the IP/host address of the Praefect server
1. `GITALY_HOST`: the IP/host address of each Gitaly server
@@ -131,14 +131,13 @@ with secure tokens as you complete the setup process.
Praefect cluster directly; that could lead to data loss.
1. `PRAEFECT_SQL_PASSWORD`: this password is used by Praefect to connect to
PostgreSQL.
-1. `GRAFANA_PASSWORD`: this password is used to access the `admin`
- account in the Grafana dashboards.
We will note in the instructions below where these secrets are required.
### PostgreSQL
-NOTE: **Note:** do not store the GitLab application database and the Praefect
+NOTE: **Note:**
+Do not store the GitLab application database and the Praefect
database on the same PostgreSQL server if using
[Geo](../geo/replication/index.md). The replication state is internal to each instance
of GitLab and should not be replicated.
@@ -283,9 +282,16 @@ application server, or a Gitaly node.
1. Configure the **Praefect** cluster to connect to each Gitaly node in the
cluster by editing `/etc/gitlab/gitlab.rb`.
- In the example below we have configured one virtual storage (or shard) named
- `storage-1`. This cluster has three Gitaly nodes `gitaly-1`, `gitaly-2`, and
- `gitaly-3`, which will be replicas of each other.
+ The virtual storage's name must match the configured storage name in GitLab
+ configuration. In a later step, we configure the storage name as `default`
+ so we use `default` here as well. This cluster has three Gitaly nodes `gitaly-1`,
+ `gitaly-2`, and `gitaly-3`, which will be replicas of each other.
+
+ CAUTION: **Caution:**
+ If you have data on an already existing storage called
+ `default`, you should configure the virtual storage with another name and
+ [migrate the data to the Praefect storage](#migrating-existing-repositories-to-praefect)
+ afterwards.
Replace `PRAEFECT_INTERNAL_TOKEN` with a strong secret, which will be used by
Praefect when communicating with Gitaly nodes in the cluster. This token is
@@ -296,7 +302,8 @@ application server, or a Gitaly node.
More Gitaly nodes can be added to the cluster to increase the number of
replicas. More clusters can also be added for very large GitLab instances.
- NOTE: **Note:** The `gitaly-1` node is currently denoted the primary. This
+ NOTE: **Note:**
+ The `gitaly-1` node is currently denoted the primary. This
can be used to manually fail from one node to another. This will be removed
in the [future](https://gitlab.com/gitlab-org/gitaly/-/issues/2634).
@@ -304,7 +311,7 @@ application server, or a Gitaly node.
# Name of storage hash must match storage name in git_data_dirs on GitLab
# server ('praefect') and in git_data_dirs on Gitaly nodes ('gitaly-1')
praefect['virtual_storages'] = {
- 'storage-1' => {
+ 'default' => {
'gitaly-1' => {
'address' => 'tcp://GITALY_HOST:8075',
'token' => 'PRAEFECT_INTERNAL_TOKEN',
@@ -322,6 +329,8 @@ application server, or a Gitaly node.
}
```
+1. Enable [distribution of reads](#distributed-reads) ([introduced](https://gitlab.com/groups/gitlab-org/-/epics/2013) in GitLab 13.1).
+
1. Save the changes to `/etc/gitlab/gitlab.rb` and [reconfigure
Praefect](../restart_gitlab.md#omnibus-gitlab-reconfigure):
@@ -349,9 +358,146 @@ application server, or a Gitaly node.
**The steps above must be completed for each Praefect node!**
+## Enabling TLS support
+
+> [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/1698) in GitLab 13.2.
+
+Praefect supports TLS encryption. To communicate with a Praefect instance that listens
+for secure connections, you must:
+
+- Use a `tls://` URL scheme in the `gitaly_address` of the corresponding storage entry
+ in the GitLab configuration.
+- Bring your own certificates because this isn't provided automatically. The certificate
+ corresponding to each Praefect server must be installed on that Praefect server.
+
+Additionally, the certificate (or its certificate authority) must be installed on all Gitaly servers
+and on all Praefect clients that communicate with it, following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates) (and repeated below).
+
+Note the following:
+
+- The certificate must specify the address you use to access the Praefect server. If
+ addressing the Praefect server by:
+
+ - Hostname, you can either use the Common Name field for this, or add it as a Subject
+ Alternative Name.
+ - IP address, you must add it as a Subject Alternative Name to the certificate.
+
+- You can configure Praefect servers with both an unencrypted listening address
+ `listen_addr` and an encrypted listening address `tls_listen_addr` at the same time.
+ This allows you to do a gradual transition from unencrypted to encrypted traffic, if
+ necessary.
+
+To configure Praefect with TLS:
+
+**For Omnibus GitLab**
+
+1. Create certificates for Praefect servers.
+1. On the Praefect servers, create the `/etc/gitlab/ssl` directory and copy your key
+ and certificate there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+ sudo chmod 644 key.pem cert.pem
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ ```ruby
+ praefect['tls_listen_addr'] = "0.0.0.0:3305"
+ praefect['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ praefect['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. On the Praefect clients (including each Gitaly server), copy the certificates,
+ or their certificate authority, into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. On the Praefect clients (except Gitaly servers), edit `git_data_dirs` in
+ `/etc/gitlab/gitlab.rb` as follows:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://praefect1.internal:3305' },
+ 'storage1' => { 'gitaly_address' => 'tls://praefect2.internal:3305' },
+ })
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+**For installations from source**
+
+1. Create certificates for Praefect servers.
+1. On the Praefect servers, create the `/etc/gitlab/ssl` directory and copy your key and certificate
+ there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+ sudo chmod 644 key.pem cert.pem
+ ```
+
+1. On the Praefect clients (including each Gitaly server), copy the certificates,
+ or their certificate authority, into the system trusted certificates:
+
+ ```shell
+ sudo cp cert.pem /usr/local/share/ca-certificates/praefect.crt
+ sudo update-ca-certificates
+ ```
+
+1. On the Praefect clients (except Gitaly servers), edit `storages` in
+ `/home/git/gitlab/config/gitlab.yml` as follows:
+
+ ```yaml
+ gitlab:
+ repositories:
+ storages:
+ default:
+ gitaly_address: tls://praefect1.internal:3305
+ path: /some/dummy/path
+ storage1:
+ gitaly_address: tls://praefect2.internal:3305
+ path: /some/dummy/path
+ ```
+
+ NOTE: **Note:**
+ `/some/dummy/path` should be set to a local folder that exists; however, no
+ data will be stored in this folder. This will no longer be necessary after
+ [this issue](https://gitlab.com/gitlab-org/gitaly/-/issues/1282) is resolved.
+
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+1. Copy all Praefect server certificates, or their certificate authority, to the system
+ trusted certificates on each Gitaly server, so that the Praefect certificate is trusted
+ when Praefect is called by Gitaly servers:
+
+ ```shell
+ sudo cp cert.pem /usr/local/share/ca-certificates/praefect.crt
+ sudo update-ca-certificates
+ ```
+
+1. Edit `/home/git/praefect/config.toml` and add:
+
+ ```toml
+ tls_listen_addr = '0.0.0.0:3305'
+
+ [tls]
+ certificate_path = '/etc/gitlab/ssl/cert.pem'
+ key_path = '/etc/gitlab/ssl/key.pem'
+ ```
+
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+
### Gitaly
-NOTE: **Note:** Complete these steps for **each** Gitaly node.
+NOTE: **Note:**
+Complete these steps for **each** Gitaly node.
To complete this section you will need:
@@ -555,6 +701,17 @@ Particular attention should be shown to:
external_url 'GITLAB_SERVER_URL'
```
+1. Disable the default Gitaly service running on the GitLab host. It isn't needed
+ because GitLab connects to the configured Praefect cluster.
+
+ CAUTION: **Caution:**
+ If you have existing data stored on the default Gitaly storage,
+ you should [migrate the data to your Praefect storage first](#migrating-existing-repositories-to-praefect).
+
+ ```ruby
+ gitaly['enable'] = false
+ ```
+
1. Add the Praefect cluster as a storage location by editing
`/etc/gitlab/gitlab.rb`.
@@ -562,28 +719,17 @@ Particular attention should be shown to:
- `LOAD_BALANCER_SERVER_ADDRESS` with the IP address or hostname of the load
balancer.
- - `GITLAB_HOST` with the IP address or hostname of the GitLab server
- `PRAEFECT_EXTERNAL_TOKEN` with the real secret
```ruby
git_data_dirs({
"default" => {
- "gitaly_address" => "tcp://GITLAB_HOST:8075"
- },
- "storage-1" => {
"gitaly_address" => "tcp://LOAD_BALANCER_SERVER_ADDRESS:2305",
"gitaly_token" => 'PRAEFECT_EXTERNAL_TOKEN'
}
})
```
-1. Allow Gitaly to listen on a TCP port by editing
- `/etc/gitlab/gitlab.rb`
-
- ```ruby
- gitaly['listen_addr'] = '0.0.0.0:8075'
- ```
-
1. Configure the `gitlab_shell['secret_token']` so that callbacks from Gitaly
nodes during a `git push` are properly authenticated by editing
`/etc/gitlab/gitlab.rb`:
@@ -632,14 +778,6 @@ Particular attention should be shown to:
gitlab-ctl reconfigure
```
-1. To ensure that Gitaly [has updated its Prometheus listen
- address](https://gitlab.com/gitlab-org/gitaly/-/issues/2734), [restart
- Gitaly](../restart_gitlab.md#omnibus-gitlab-restart):
-
- ```shell
- gitlab-ctl restart gitaly
- ```
-
1. Verify each `gitlab-shell` on each Gitaly instance can reach GitLab. On each Gitaly instance run:
```shell
@@ -652,16 +790,11 @@ Particular attention should be shown to:
gitlab-rake gitlab:gitaly:check
```
-1. Update the **Repository storage** settings from **Admin Area > Settings >
- Repository > Repository storage** to make the newly configured Praefect
- cluster the storage location for new Git repositories.
-
- - The default option is unchecked.
- - The Praefect option is checked.
+1. Check in **Admin Area > Settings > Repository > Repository storage** that the Praefect storage
+ is configured to store new repositories. If you have followed this guide, the `default` storage
+ should have a weight of 100 so that all new repositories are stored on it.
- ![Update repository storage](img/praefect_storage_v12_10.png)
-
-1. Verify everything is still working by creating a new project. Check the
+1. Verify everything is working by creating a new project. Check the
"Initialize repository with a README" box so that there is content in the
repository that can be viewed. If the project is created, and you can see the
README file, it works!
@@ -712,6 +845,72 @@ To get started quickly:
Congratulations! You've configured an observable highly available Praefect
cluster.
+## Distributed reads
+
+> Introduced in GitLab 13.1 in [beta](https://about.gitlab.com/handbook/product/#alpha-beta-ga) with feature flag `gitaly_distributed_reads` set to disabled.
+
+Praefect supports distribution of read operations across the Gitaly nodes that are
+configured for the virtual storage.
+
+To allow for [performance testing](https://gitlab.com/gitlab-org/quality/performance/-/issues/231),
+distributed reads are currently in
+[beta](https://about.gitlab.com/handbook/product/#alpha-beta-ga) and disabled by
+default. To enable distributed reads, the `gitaly_distributed_reads`
+[feature flag](../feature_flags.md) must be enabled in a Ruby console:
+
+```ruby
+Feature.enable(:gitaly_distributed_reads)
+```
+
+If enabled, all RPCs marked with the `ACCESSOR` option, such as
+[GetBlob](https://gitlab.com/gitlab-org/gitaly/-/blob/v12.10.6/proto/blob.proto#L16),
+are redirected to an up-to-date and healthy Gitaly node.
+
+_Up to date_ in this context means that:
+
+- There are no replication operations scheduled for this node.
+- The last replication operation is in the _completed_ state.
+
+If there are no such nodes, or if any other error occurs during node selection, the primary
+node is chosen to serve the request.
+
+To track distribution of read operations, you can use the `gitaly_praefect_read_distribution`
+Prometheus counter metric. It has two labels:
+
+- `virtual_storage`.
+- `storage`.
+
+They reflect the configuration defined for this instance of Praefect.
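+
+For a quick check outside of a dashboard, you can query this counter through the Prometheus
+HTTP API. The following is only a sketch: it assumes Prometheus already scrapes Praefect and
+is reachable at `PROMETHEUS_SERVER:9090` (replace with your Prometheus address; `9090` is the
+default Prometheus port):
+
+```shell
+# Sketch: per-second read rate, summed by virtual storage and Gitaly storage.
+# Replace PROMETHEUS_SERVER with the address of your Prometheus server.
+curl --silent --get "http://PROMETHEUS_SERVER:9090/api/v1/query" \
+  --data-urlencode 'query=sum by (virtual_storage, storage) (rate(gitaly_praefect_read_distribution[5m]))'
+```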
+
+## Strong consistency
+
+> Introduced in GitLab 13.1 in [alpha](https://about.gitlab.com/handbook/product/#alpha-beta-ga), disabled by default.
+
+Praefect guarantees eventual consistency by replicating all writes to secondary nodes
+after the write to the primary Gitaly node has happened.
+
+Praefect can instead provide strong consistency by creating a transaction and writing
+changes to all Gitaly nodes at once. Strong consistency is currently in
+[alpha](https://about.gitlab.com/handbook/product/#alpha-beta-ga) and not enabled by
+default. If enabled, transactions are only available for a subset of RPCs. For more
+information, see the [strong consistency epic](https://gitlab.com/groups/gitlab-org/-/epics/1189).
+
+To enable strong consistency:
+
+- In GitLab 13.2 and later, enable the `:gitaly_reference_transactions` feature flag.
+- In GitLab 13.1, enable the `:gitaly_reference_transactions` and `:gitaly_hooks_rpc`
+ feature flags.
+
+Enabling feature flags requires [access to the Rails console](../feature_flags.md#start-the-gitlab-rails-console).
+In the Rails console, enable or disable the flags as required. For example:
+
+```ruby
+Feature.enable(:gitaly_reference_transactions)
+```
+
+To monitor strong consistency, use the `gitaly_praefect_transactions_total` and
+`gitaly_praefect_transactions_delay_seconds` Prometheus counter metrics.
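+
+As with read distribution above, a quick sketch for checking transaction throughput through
+the Prometheus HTTP API (again assuming Prometheus is reachable at `PROMETHEUS_SERVER:9090`):
+
+```shell
+# Sketch: per-second rate of Praefect transactions over the last five minutes.
+curl --silent --get "http://PROMETHEUS_SERVER:9090/api/v1/query" \
+  --data-urlencode 'query=sum(rate(gitaly_praefect_transactions_total[5m]))'
+```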
+
## Automatic failover and leader election
Praefect regularly checks the health of each backend Gitaly node. This
@@ -739,38 +938,68 @@ current primary node is found to be unhealthy.
It is likely that we will implement support for Consul, and a cloud native
strategy in the future.
-## Identifying Impact of a Primary Node Failure
+## Primary Node Failure
-When a primary Gitaly node fails, there is a chance of data loss. Data loss can occur if there were outstanding replication jobs the secondaries did not manage to process before the failure. The `dataloss` Praefect sub-command helps identify these cases by counting the number of dead replication jobs for each repository. This command must be executed on a Praefect node.
+Praefect recovers from a failing primary Gitaly node by promoting a healthy secondary as the new primary. To minimize data loss, Praefect elects the secondary with the least unreplicated writes from the primary. There can still be some unreplicated writes, leading to data loss.
-A time frame to search can be specified with `-from` and `-to`:
+Praefect switches a virtual storage into read-only mode after a failover event. This eases data recovery efforts by preventing new, possibly conflicting writes to the newly elected primary. This allows the administrator to attempt to recover the lost data before allowing new writes.
+
+If you prefer write availability over consistency, this behavior can be turned off by setting `praefect['failover_read_only_after_failover'] = false` in `/etc/gitlab/gitlab.rb` and [reconfiguring Praefect](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+### Checking for data loss
+
+The Praefect `dataloss` sub-command helps identify lost writes by checking for uncompleted replication jobs. This is useful for identifying possible data loss cases after a failover. This command must be executed on a Praefect node.
```shell
-sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss -from <rfc3339-time> -to <rfc3339-time>
+sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss [-virtual-storage <virtual-storage>]
```
-If the time frame is not specified, dead replication jobs from the last six hours are counted:
+If the virtual storage is not specified, every configured virtual storage is checked for data loss.
```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss
+```
+
-Failed replication jobs between [2020-01-02 00:00:00 +0000 UTC, 2020-01-02 06:00:00 +0000 UTC):
-@hashed/fa/53/fa539965395b8382145f8370b34eab249cf610d2d6f2943c95b9b9d08a63d4a3.git: 2 jobs
+```shell
+Virtual storage: default
+ Current read-only primary: gitaly-2
+ Previous write-enabled primary: gitaly-1
+ Nodes with data loss from failing over from gitaly-1:
+ @hashed/2c/62/2c624232cdd221771294dfbb310aca000a0df6ac8b66b696d90ef06fdefb64a3.git: gitaly-0
+ @hashed/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a.git: gitaly-0, gitaly-2
```
-To specify a time frame in UTC, run:
+Currently, `dataloss` only considers a repository up to date if it has received a direct replication from the previous write-enabled primary. While reconciling from an up-to-date secondary can recover the data, this is not visible in the data loss report. This is due to be improved via [Gitaly#2866](https://gitlab.com/gitlab-org/gitaly/-/issues/2866).
+
+NOTE: **Note:**
+`dataloss` is still in beta and the output format is subject to change.
+
+### Checking repository checksums
+
+To check a project's repository checksums across all Gitaly nodes, run the
+[replicas Rake task](../raketasks/praefect.md#replica-checksums) on the main GitLab node.
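+
+For example, replacing `project_id` with the project's numeric ID:
+
+```shell
+# Compares the repository checksum reported by each Gitaly node for the project.
+sudo gitlab-rake "gitlab:praefect:replicas[project_id]"
+```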
+
+### Recovering lost writes
+
+The Praefect `reconcile` sub-command can be used to recover lost writes from the
+previous primary once it is back online. This is only possible when the virtual storage
+is still in read-only mode.
```shell
-sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dataloss -from 2020-01-02T00:00:00+00:00 -to 2020-01-02T00:02:00+00:00
+sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml reconcile -virtual <virtual-storage> -reference <previous-primary> -target <current-primary> -f
```
-### Checking repository checksums
+Refer to the [Backend Node Recovery](#backend-node-recovery) section for more details on
+the `reconcile` sub-command.
+
+### Enabling writes
-To check a project's repository checksums across on all Gitaly nodes, the
-replicas Rake task can be run on the main GitLab node:
+Any data recovery attempts should be made before enabling writes, to eliminate
+any chance of conflicting writes. The virtual storage can be re-enabled for writes by using
+the Praefect `enable-writes` sub-command.
```shell
-sudo gitlab-rake "gitlab:praefect:replicas[project_id]"
+sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml enable-writes -virtual-storage <virtual-storage>
```
## Backend Node Recovery
diff --git a/doc/administration/gitaly/reference.md b/doc/administration/gitaly/reference.md
index 52fd6fa6900..0429149ec2d 100644
--- a/doc/administration/gitaly/reference.md
+++ b/doc/administration/gitaly/reference.md
@@ -91,7 +91,7 @@ certificate_path = '/home/git/cert.cert'
key_path = '/home/git/key.pem'
```
-[Read more](index.md#tls-support) about TLS in Gitaly.
+[Read more](index.md#enable-tls-support) about TLS in Gitaly.
### Storage
diff --git a/doc/administration/high_availability/consul.md b/doc/administration/high_availability/consul.md
index a87c1f1027f..978ba08c4fa 100644
--- a/doc/administration/high_availability/consul.md
+++ b/doc/administration/high_availability/consul.md
@@ -113,14 +113,14 @@ Nodes running GitLab-bundled Consul should be:
- Members of a healthy cluster prior to upgrading the Omnibus GitLab package.
- Upgraded one node at a time.
-NOTE: **NOTE:**
+NOTE: **Note:**
Running `curl http://127.0.0.1:8500/v1/health/state/critical` from any Consul node will identify existing health issues in the cluster. The command will return an empty array if the cluster is healthy.
Consul clusters communicate using the raft protocol. If the current leader goes offline, there needs to be a leader election. A leader node must exist to facilitate synchronization across the cluster. If too many nodes go offline at the same time, the cluster will lose quorum and not elect a leader due to [broken consensus](https://www.consul.io/docs/internals/consensus.html).
Consult the [troubleshooting section](#troubleshooting) if the cluster is not able to recover after the upgrade. The [outage recovery](#outage-recovery) may be of particular interest.
-NOTE: **NOTE:**
+NOTE: **Note:**
GitLab only uses Consul to store transient data that is easily regenerated. If the bundled Consul was not used by any process other than GitLab itself, then [rebuilding the cluster from scratch](#recreate-from-scratch) is fine.
## Troubleshooting
diff --git a/doc/administration/high_availability/database.md b/doc/administration/high_availability/database.md
index 75183436046..784e496d10e 100644
--- a/doc/administration/high_availability/database.md
+++ b/doc/administration/high_availability/database.md
@@ -1,20 +1,5 @@
---
-type: reference
+redirect_to: '../postgresql/index.md'
---
-# Configuring PostgreSQL for Scaling and High Availability
-
-In this section, you'll be guided through configuring a PostgreSQL database to
-be used with GitLab in one of our [Scalable and Highly Available Setups](../reference_architectures/index.md).
-
-## Provide your own PostgreSQL instance **(CORE ONLY)**
-
-This content has been moved to a [new location](../postgresql/external.md).
-
-## Standalone PostgreSQL using Omnibus GitLab **(CORE ONLY)**
-
-This content has been moved to a [new location](../postgresql/standalone.md).
-
-## PostgreSQL replication and failover with Omnibus GitLab **(PREMIUM ONLY)**
-
-This content has been moved to a [new location](../postgresql/replication_and_failover.md).
+This document was moved to [another location](../postgresql/index.md).
diff --git a/doc/administration/high_availability/gitlab.md b/doc/administration/high_availability/gitlab.md
index 67a84f99bea..dc8c997bab5 100644
--- a/doc/administration/high_availability/gitlab.md
+++ b/doc/administration/high_availability/gitlab.md
@@ -6,11 +6,13 @@ type: reference
This section describes how to configure the GitLab application (Rails) component.
-NOTE: **Note:** There is some additional configuration near the bottom for
+NOTE: **Note:**
+There is some additional configuration near the bottom for
additional GitLab application servers. It's important to read and understand
these additional steps before proceeding with GitLab installation.
-NOTE: **Note:** [Cloud Object Storage service](object_storage.md) with [Gitaly](gitaly.md)
+NOTE: **Note:**
+[Cloud Object Storage service](object_storage.md) with [Gitaly](gitaly.md)
is recommended over [NFS](nfs.md) wherever possible for improved performance.
1. If necessary, install the NFS client utility packages using the following
@@ -79,19 +81,22 @@ is recommended over [NFS](nfs.md) wherever possible for improved performance.
1. [Enable monitoring](#enable-monitoring)
- NOTE: **Note:** To maintain uniformity of links across HA clusters, the `external_url`
+ NOTE: **Note:**
+ To maintain uniformity of links across HA clusters, the `external_url`
on the first application server as well as the additional application
servers should point to the external URL that users will use to access GitLab.
In a typical HA setup, this will be the URL of the load balancer which will
route traffic to all GitLab application servers in the HA cluster.
- NOTE: **Note:** When you specify `https` in the `external_url`, as in the example
+ NOTE: **Note:**
+ When you specify `https` in the `external_url`, as in the example
above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
certificates are not present, NGINX will fail to start. See
[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for more information.
- NOTE: **Note:** It is best to set the `uid` and `gid`s prior to the initial reconfigure
+ NOTE: **Note:**
+ It is best to set the `uid` and `gid`s prior to the initial reconfigure
of GitLab. Omnibus will not recursively `chown` directories if set after the initial reconfigure.
## First GitLab application server
@@ -126,14 +131,15 @@ need some extra configuration.
from running on upgrade. Only the primary GitLab application server should
handle migrations.
-1. **Recommended** Configure host keys. Copy the contents (primary and public keys) of `/etc/ssh/` on
+1. **Recommended** Configure host keys. Copy the contents (private and public keys) of `/etc/ssh/` on
the primary application server to `/etc/ssh` on all secondary servers. This
prevents false man-in-the-middle-attack alerts when accessing servers in your
High Availability cluster behind a load balancer.
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-NOTE: **Note:** You will need to restart the GitLab applications nodes after an update has occurred and database
+NOTE: **Note:**
+You will need to restart the GitLab application nodes after an update has occurred and database
migrations have been performed.
## Enable Monitoring
diff --git a/doc/administration/high_availability/monitoring_node.md b/doc/administration/high_availability/monitoring_node.md
index 653a0b32ad7..6b6f0ae9ea3 100644
--- a/doc/administration/high_availability/monitoring_node.md
+++ b/doc/administration/high_availability/monitoring_node.md
@@ -71,6 +71,19 @@ Omnibus:
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+The next step is to tell all the other nodes where the monitoring node is:
+
+1. Edit `/etc/gitlab/gitlab.rb`, and add, or find and uncomment the following line:
+
+ ```ruby
+ gitlab_rails['prometheus_address'] = '10.0.0.1:9090'
+ ```
+
+ Where `10.0.0.1:9090` is the IP address and port of the Prometheus node.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to
+ take effect.
+
## Migrating to Service Discovery
Once monitoring using Service Discovery is enabled with `consul['monitoring_service_discovery'] = true`,
diff --git a/doc/administration/high_availability/nfs.md b/doc/administration/high_availability/nfs.md
index 6511f9bd85d..6e8dc2c6c57 100644
--- a/doc/administration/high_availability/nfs.md
+++ b/doc/administration/high_availability/nfs.md
@@ -12,11 +12,29 @@ From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0,
support for NFS for Git repositories is scheduled to be removed. Upgrade to
[Gitaly Cluster](../gitaly/praefect.md) as soon as possible.
-NOTE: **Note:** Filesystem performance has a big impact on overall GitLab
+NOTE: **Note:**
+Filesystem performance has a big impact on overall GitLab
performance, especially for actions that read or write to Git repositories. See
[Filesystem Performance Benchmarking](../operations/filesystem_benchmarking.md)
for steps to test filesystem performance.
+## Known kernel version incompatibilities
+
+Red Hat Enterprise Linux (RHEL) and CentOS v7.7 and v7.8 ship with kernel
+version `3.10.0-1127`, which [contains a
+bug](https://bugzilla.redhat.com/show_bug.cgi?id=1783554) that causes
+[uploads to fail to copy over NFS](https://gitlab.com/gitlab-org/gitlab/-/issues/218999). The
+following GitLab versions include a fix to work properly with that
+kernel version:
+
+1. [12.10.12](https://about.gitlab.com/releases/2020/06/25/gitlab-12-10-12-released/)
+1. [13.0.7](https://about.gitlab.com/releases/2020/06/25/gitlab-13-0-7-released/)
+1. [13.1.1](https://about.gitlab.com/releases/2020/06/24/gitlab-13-1-1-released/)
+1. 13.2 and up
+
+If you are using that kernel version, be sure to upgrade GitLab to avoid
+errors.
+
## NFS Server features
### Required features
@@ -88,7 +106,8 @@ administrators to keep NFS server delegation disabled.
#### Improving NFS performance with Unicorn
-NOTE: **Note:** From GitLab 12.1, it will automatically be detected if Rugged can and should be used per storage.
+NOTE: **Note:**
+From GitLab 12.1, GitLab automatically detects whether Rugged can and should be used per storage.
If you previously enabled Rugged using the feature flag, you will need to unset the feature flag by using:
@@ -100,7 +119,8 @@ If the Rugged feature flag is explicitly set to either true or false, GitLab wil
#### Improving NFS performance with Puma
-NOTE: **Note:** From GitLab 12.7, Rugged auto-detection is disabled if Puma thread count is greater than 1.
+NOTE: **Note:**
+From GitLab 12.7, Rugged auto-detection is disabled if Puma thread count is greater than 1.
If you want to use Rugged with Puma, it is recommended to [set Puma thread count to 1](https://docs.gitlab.com/omnibus/settings/puma.html#puma-settings).
diff --git a/doc/administration/high_availability/nfs_host_client_setup.md b/doc/administration/high_availability/nfs_host_client_setup.md
index 7519ebf028d..213680e2f64 100644
--- a/doc/administration/high_availability/nfs_host_client_setup.md
+++ b/doc/administration/high_availability/nfs_host_client_setup.md
@@ -38,10 +38,10 @@ In this setup we will share the home directory on the host with the client. Edit
```plaintext
#/etc/exports for one client
-/home <client-ip-address>(rw,sync,no_root_squash,no_subtree_check)
+/home <client_ip_address>(rw,sync,no_root_squash,no_subtree_check)
#/etc/exports for three clients
-/home <client-ip-address>(rw,sync,no_root_squash,no_subtree_check) <client-2-ip-address>(rw,sync,no_root_squash,no_subtree_check) <client-3-ip-address>(rw,sync,no_root_squash,no_subtree_check)
+/home <client_ip_address>(rw,sync,no_root_squash,no_subtree_check) <client_2_ip_address>(rw,sync,no_root_squash,no_subtree_check) <client_3_ip_address>(rw,sync,no_root_squash,no_subtree_check)
```
Restart the NFS server after making changes to the `exports` file for the changes
@@ -54,7 +54,7 @@ systemctl restart nfs-kernel-server
NOTE: **Note:**
You may need to update your server's firewall. See the [firewall section](#nfs-in-a-firewalled-environment) at the end of this guide.
-## Client/ GitLab application node Setup
+## Client / GitLab application node Setup
> Follow the instructions below to connect any GitLab Rails application node running
inside your HA environment to the NFS server configured above.
@@ -90,7 +90,7 @@ df -h
### Step 3 - Set up Automatic Mounts on Boot
-Edit `/etc/fstab` on client as below to mount the remote shares automatically at boot.
+Edit `/etc/fstab` on the client as below to mount the remote shares automatically at boot.
Note that GitLab requires advisory file locking, which is only supported natively in
NFS version 4. NFSv3 also supports locking as long as Linux Kernel 2.6.5+ is used.
We recommend using version 4 and do not specifically test NFSv3.
@@ -98,14 +98,19 @@ See [NFS documentation](nfs.md#nfs-client-mount-options) for guidance on mount o
```plaintext
#/etc/fstab
-10.0.0.1:/nfs/home /nfs/home nfs4 defaults,hard,vers=4.1,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+<host_ip_address>:/home /nfs/home nfs4 defaults,hard,vers=4.1,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
```
Reboot the client and confirm that the mount point is mounted automatically.
+NOTE: **Note:**
+If you followed our guide to [GitLab Pages on a separate server](../pages/index.md#running-gitlab-pages-on-a-separate-server),
+please continue there with the Pages-specific NFS mounts.
+The step below is for broader use cases than only sharing Pages data.
+
### Step 4 - Set up GitLab to Use NFS mounts
-When using the default Omnibus configuration you will need to share 5 data locations
+When using the default Omnibus configuration, you will need to share 4 data locations
between all GitLab cluster nodes. No other locations should be shared. Changing the
default file locations in `gitlab.rb` on the client allows you to have one main mount
point and have all the required locations as subdirectories to use the NFS mount for
@@ -136,7 +141,7 @@ the command: `sudo ufw status`. If it's being blocked, then you can allow traffi
client with the command below.
```shell
-sudo ufw allow from <client-ip-address> to any port nfs
+sudo ufw allow from <client_ip_address> to any port nfs
```
<!-- ## Troubleshooting
diff --git a/doc/administration/high_availability/redis.md b/doc/administration/high_availability/redis.md
index bad50f7ca74..2b5771f49f2 100644
--- a/doc/administration/high_availability/redis.md
+++ b/doc/administration/high_availability/redis.md
@@ -1,1008 +1,5 @@
---
-type: reference
+redirect_to: ../redis/index.md
---
-# Configuring Redis for Scaling and High Availability
-
-## Provide your own Redis instance **(CORE ONLY)**
-
-The following are the requirements for providing your own Redis instance:
-
-- Redis version 5.0 or higher is recommended, as this is what ships with
- Omnibus GitLab packages starting with GitLab 12.7.
-- Support for Redis 3.2 is deprecated with GitLab 12.10 and will be completely
- removed in GitLab 13.0.
-- GitLab 12.0 and later requires Redis version 3.2 or higher. Older Redis
- versions do not support an optional count argument to SPOP which is now
- required for [Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
-- In addition, if Redis 4 or later is available, GitLab makes use of certain
- commands like `UNLINK` and `USAGE` which were introduced only in Redis 4.
-- Standalone Redis or Redis high availability with Sentinel are supported. Redis
- Cluster is not supported.
-- Managed Redis from cloud providers such as AWS ElastiCache will work. If these
- services support high availability, be sure it is not the Redis Cluster type.
-
-Note the Redis node's IP address or hostname, port, and password (if required).
-These will be necessary when configuring the GitLab application servers later.
-
-## Redis in a Scaled and Highly Available Environment
-
-This section is relevant for [scalable and highly available setups](../reference_architectures/index.md).
-
-### Provide your own Redis instance **(CORE ONLY)**
-
-If you want to use your own deployed Redis instance(s),
-see [Provide your own Redis instance](#provide-your-own-redis-instance-core-only)
-for more details. However, you can use the Omnibus GitLab package to easily
-deploy the bundled Redis.
-
-### Standalone Redis using Omnibus GitLab **(CORE ONLY)**
-
-The Omnibus GitLab package can be used to configure a standalone Redis server.
-In this configuration Redis is not highly available, and represents a single
-point of failure. However, in a scaled environment the objective is to allow
-the environment to handle more users or to increase throughput. Redis itself
-is generally stable and can handle many requests so it is an acceptable
-trade off to have only a single instance. See the [reference architectures](../reference_architectures/index.md)
-page for an overview of GitLab scaling and high availability options.
-
-The steps below are the minimum necessary to configure a Redis server with
-Omnibus:
-
-1. SSH into the Redis server.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
- package you want using **steps 1 and 2** from the GitLab downloads page.
- - Do not complete any other steps on the download page.
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
- ```ruby
- ## Enable Redis
- redis['enable'] = true
-
- ## Disable all other services
- sidekiq['enable'] = false
- gitlab_workhorse['enable'] = false
- puma['enable'] = false
- postgresql['enable'] = false
- nginx['enable'] = false
- prometheus['enable'] = false
- alertmanager['enable'] = false
- pgbouncer_exporter['enable'] = false
- gitlab_exporter['enable'] = false
- gitaly['enable'] = false
-
- redis['bind'] = '0.0.0.0'
- redis['port'] = 6379
- redis['password'] = 'SECRET_PASSWORD_HERE'
-
- gitlab_rails['enable'] = false
- ```
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-1. Note the Redis node's IP address or hostname, port, and
- Redis password. These will be necessary when configuring the GitLab
- application servers later.
-1. [Enable Monitoring](#enable-monitoring)
-
-Advanced configuration options are supported and can be added if
-needed.
-
-Continue configuration of other components by going back to the
-[reference architectures](../reference_architectures/index.md#configure-gitlab-to-scale) page.
-
-### High Availability with Omnibus GitLab **(PREMIUM ONLY)**
-
-> Experimental Redis Sentinel support was [introduced in GitLab 8.11](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/1877).
-Starting with 8.14, Redis Sentinel is no longer experimental.
-If you've used it with versions `< 8.14` before, please check the updated
-documentation here.
-
-High Availability with [Redis](https://redis.io/) is possible using a **Master** x **Replica**
-topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
-start the failover procedure.
-
-You can choose to install and manage Redis and Sentinel yourself, use
-a hosted cloud solution or you can use the one that comes bundled with
-Omnibus GitLab packages.
-
-> **Notes:**
->
-> - Redis requires authentication for High Availability. See
-> [Redis Security](https://redis.io/topics/security) documentation for more
-> information. We recommend using a combination of a Redis password and tight
-> firewall rules to secure your Redis service.
-> - You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
-> before configuring Redis HA with GitLab to fully understand the topology and
-> architecture.
-> - This is the documentation for the Omnibus GitLab packages. For installations
-> from source, follow the [Redis HA source installation](redis_source.md) guide.
-> - Redis Sentinel daemon is bundled with Omnibus GitLab Enterprise Edition only.
-> For configuring Sentinel with the Omnibus GitLab Community Edition and
-> installations from source, read the
-> [Available configuration setups](#available-configuration-setups) section
-> below.
-
-## Overview
-
-Before diving into the details of setting up Redis and Redis Sentinel for HA,
-make sure you read this Overview section to better understand how the components
-are tied together.
-
-You need at least `3` independent machines: physical, or VMs running into
-distinct physical machines. It is essential that all master and replica Redis
-instances run in different machines. If you fail to provision the machines in
-that specific way, any issue with the shared environment can bring your entire
-setup down.
-
-It is OK to run a Sentinel alongside of a master or replica Redis instance.
-There should be no more than one Sentinel on the same machine though.
-
-You also need to take into consideration the underlying network topology,
-making sure you have redundant connectivity between Redis / Sentinel and
-GitLab instances, otherwise the networks will become a single point of
-failure.
-
-Make sure that you read this document once as a whole before configuring the
-components below.
-
-> **Notes:**
->
-> - Starting with GitLab `8.11`, you can configure a list of Redis Sentinel
-> servers that will monitor a group of Redis servers to provide failover support.
-> - Starting with GitLab `8.14`, the Omnibus GitLab Enterprise Edition package
-> comes with Redis Sentinel daemon built-in.
-
-High Availability with Redis requires a few things:
-
-- Multiple Redis instances
-- Run Redis in a **Master** x **Replica** topology
-- Multiple Sentinel instances
-- Application support and visibility to all Sentinel and Redis instances
-
-Redis Sentinel can handle the most important tasks in an HA environment and that's
-to help keep servers online with minimal to no downtime. Redis Sentinel:
-
-- Monitors **Master** and **Replicas** instances to see if they are available
-- Promotes a **Replica** to **Master** when the **Master** fails
-- Demotes a **Master** to **Replica** when the failed **Master** comes back online
- (to prevent data-partitioning)
-- Can be queried by the application to always connect to the current **Master**
- server
-
-When a **Master** fails to respond, it's the application's responsibility
-(in our case GitLab) to handle timeout and reconnect (querying a **Sentinel**
-for a new **Master**).
-
-To get a better understanding on how to correctly set up Sentinel, please read
-the [Redis Sentinel documentation](https://redis.io/topics/sentinel) first, as
-failing to configure it correctly can lead to data loss or can bring your
-whole cluster down, invalidating the failover effort.
-
-### Recommended setup
-
-For a minimal setup, you will install the Omnibus GitLab package in `3`
-**independent** machines, both with **Redis** and **Sentinel**:
-
-- Redis Master + Sentinel
-- Redis Replica + Sentinel
-- Redis Replica + Sentinel
-
-If you are not sure or don't understand why and where the amount of nodes come
-from, read [Redis setup overview](#redis-setup-overview) and
-[Sentinel setup overview](#sentinel-setup-overview).
-
-For a recommended setup that can resist more failures, you will install
-the Omnibus GitLab package in `5` **independent** machines, both with
-**Redis** and **Sentinel**:
-
-- Redis Master + Sentinel
-- Redis Replica + Sentinel
-- Redis Replica + Sentinel
-- Redis Replica + Sentinel
-- Redis Replica + Sentinel
-
-### Redis setup overview
-
-You must have at least `3` Redis servers: `1` Master, `2` Replicas, and they
-need to each be on independent machines (see explanation above).
-
-You can have additional Redis nodes, that will help survive a situation
-where more nodes goes down. Whenever there is only `2` nodes online, a failover
-will not be initiated.
-
-As an example, if you have `6` Redis nodes, a maximum of `3` can be
-simultaneously down.
-
-Please note that there are different requirements for Sentinel nodes.
-If you host them in the same Redis machines, you may need to take
-that restrictions into consideration when calculating the amount of
-nodes to be provisioned. See [Sentinel setup overview](#sentinel-setup-overview)
-documentation for more information.
-
-All Redis nodes should be configured the same way and with similar server specs, as
-in a failover situation, any **Replica** can be promoted as the new **Master** by
-the Sentinel servers.
-
-The replication requires authentication, so you need to define a password to
-protect all Redis nodes and the Sentinels. They will all share the same
-password, and all instances must be able to talk to
-each other over the network.
-
-### Sentinel setup overview
-
-Sentinels watch both other Sentinels and Redis nodes. Whenever a Sentinel
-detects that a Redis node is not responding, it will announce that to the
-other Sentinels. They have to reach the **quorum**, that is the minimum amount
-of Sentinels that agrees a node is down, in order to be able to start a failover.
-
-Whenever the **quorum** is met, the **majority** of all known Sentinel nodes
-need to be available and reachable, so that they can elect the Sentinel **leader**
-who will take all the decisions to restore the service availability by:
-
-- Promoting a new **Master**
-- Reconfiguring the other **Replicas** and make them point to the new **Master**
-- Announce the new **Master** to every other Sentinel peer
-- Reconfigure the old **Master** and demote to **Replica** when it comes back online
-
-You must have at least `3` Redis Sentinel servers, and they need to
-be each in an independent machine (that are believed to fail independently),
-ideally in different geographical areas.
-
-You can configure them in the same machines where you've configured the other
-Redis servers, but understand that if a whole node goes down, you loose both
-a Sentinel and a Redis instance.
-
-The number of sentinels should ideally always be an **odd** number, for the
-consensus algorithm to be effective in the case of a failure.
-
-In a `3` nodes topology, you can only afford `1` Sentinel node going down.
-Whenever the **majority** of the Sentinels goes down, the network partition
-protection prevents destructive actions and a failover **will not be started**.
-
-Here are some examples:
-
-- With `5` or `6` sentinels, a maximum of `2` can go down for a failover begin.
-- With `7` sentinels, a maximum of `3` nodes can go down.
-
-The **Leader** election can sometimes fail the voting round when **consensus**
-is not achieved (see the odd number of nodes requirement above). In that case,
-a new attempt will be made after the amount of time defined in
-`sentinel['failover_timeout']` (in milliseconds).
-
->**Note:**
-We will see where `sentinel['failover_timeout']` is defined later.
-
-The `failover_timeout` variable has a lot of different use cases. According to
-the official documentation:
-
-- The time needed to re-start a failover after a previous failover was
- already tried against the same master by a given Sentinel, is two
- times the failover timeout.
-
-- The time needed for a replica replicating to a wrong master according
- to a Sentinel current configuration, to be forced to replicate
- with the right master, is exactly the failover timeout (counting since
- the moment a Sentinel detected the misconfiguration).
-
-- The time needed to cancel a failover that is already in progress but
- did not produced any configuration change (REPLICAOF NO ONE yet not
- acknowledged by the promoted replica).
-
-- The maximum time a failover in progress waits for all the replicas to be
- reconfigured as replicas of the new master. However even after this time
- the replicas will be reconfigured by the Sentinels anyway, but not with
- the exact parallel-syncs progression as specified.
-
-### Available configuration setups
-
-Based on your infrastructure setup and how you have installed GitLab, there are
-multiple ways to configure Redis HA. Omnibus GitLab packages have Redis and/or
-Redis Sentinel bundled with them so you only need to focus on configuration.
-Pick the one that suits your needs.
-
-- [Installations from source](../../install/installation.md): You need to install Redis and Sentinel
- yourself. Use the [Redis HA installation from source](redis_source.md)
- documentation.
-- [Omnibus GitLab **Community Edition** (CE) package](https://about.gitlab.com/install/?version=ce): Redis is bundled, so you
- can use the package with only the Redis service enabled as described in steps
- 1 and 2 of this document (works for both master and replica setups). To install
- and configure Sentinel, jump directly to the Sentinel section in the
- [Redis HA installation from source](redis_source.md#step-3-configuring-the-redis-sentinel-instances) documentation.
-- [Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee): Both Redis and Sentinel
- are bundled in the package, so you can use the EE package to set up the whole
- Redis HA infrastructure (master, replica and Sentinel) which is described in
- this document.
-- If you have installed GitLab using the Omnibus GitLab packages (CE or EE),
- but you want to use your own external Redis server, follow steps 1-3 in the
- [Redis HA installation from source](redis_source.md) documentation, then go
- straight to step 4 in this guide to
- [set up the GitLab application](#step-4-configuring-the-gitlab-application).
-
-## Configuring Redis HA
-
-This is the section where we install and set up the new Redis instances.
-
-> **Notes:**
->
-> - We assume that you have installed GitLab and all HA components from scratch. If you
-> already have it installed and running, read how to
-> [switch from a single-machine installation to Redis HA](#switching-from-an-existing-single-machine-installation-to-redis-ha).
-> - Redis nodes (both master and replica) will need the same password defined in
-> `redis['password']`. At any time during a failover the Sentinels can
-> reconfigure a node and change its status from master to replica and vice versa.
-
-### Prerequisites
-
-The prerequisites for a HA Redis setup are the following:
-
-1. Provision the minimum required number of instances as specified in the
- [recommended setup](#recommended-setup) section.
-1. We **Do not** recommend installing Redis or Redis Sentinel in the same machines your
- GitLab application is running on as this weakens your HA configuration. You can however opt in to install Redis
- and Sentinel in the same machine.
-1. All Redis nodes must be able to talk to each other and accept incoming
- connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
- change the default ones).
-1. The server that hosts the GitLab application must be able to access the
- Redis nodes.
-1. Protect the nodes from access from external networks ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)), using
- firewall.
-
-### Step 1. Configuring the master Redis instance
-
-1. SSH into the **master** Redis server.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
- package you want using **steps 1 and 2** from the GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- and type (Community, Enterprise editions) of your current install.
- - Do not complete any other steps on the download page.
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
- ```ruby
- # Specify server role as 'redis_master_role'
- roles ['redis_master_role']
-
- # IP address pointing to a local IP that the other machines can reach to.
- # You can also set bind to '0.0.0.0' which listen in all interfaces.
- # If you really need to bind to an external accessible IP, make
- # sure you add extra firewall rules to prevent unauthorized access.
- redis['bind'] = '10.0.0.1'
-
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
-
- # Set up password authentication for Redis (use the same password in all nodes).
- redis['password'] = 'redis-password-goes-here'
- ```
-
-1. Only the primary GitLab application server should handle migrations. To
- prevent database migrations from running on upgrade, add the following
- configuration to your `/etc/gitlab/gitlab.rb` file:
-
- ```ruby
- gitlab_rails['auto_migrate'] = false
- ```
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-> Note: You can specify multiple roles like sentinel and Redis as:
-> `roles ['redis_sentinel_role', 'redis_master_role']`. Read more about high
-> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
-
-### Step 2. Configuring the replica Redis instances
-
-1. SSH into the **replica** Redis server.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
- package you want using **steps 1 and 2** from the GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- and type (Community, Enterprise editions) of your current install.
- - Do not complete any other steps on the download page.
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
- ```ruby
- # Specify server role as 'redis_replica_role'
- roles ['redis_replica_role']
-
- # IP address pointing to a local IP that the other machines can reach to.
- # You can also set bind to '0.0.0.0' which listen in all interfaces.
- # If you really need to bind to an external accessible IP, make
- # sure you add extra firewall rules to prevent unauthorized access.
- redis['bind'] = '10.0.0.2'
-
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
-
- # The same password for Redis authentication you set up for the master node.
- redis['password'] = 'redis-password-goes-here'
-
- # The IP of the master Redis node.
- redis['master_ip'] = '10.0.0.1'
-
- # Port of master Redis server, uncomment to change to non default. Defaults
- # to `6379`.
- #redis['master_port'] = 6379
- ```
-
-1. To prevent reconfigure from running automatically on upgrade, run:
-
- ```shell
- sudo touch /etc/gitlab/skip-auto-reconfigure
- ```
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-1. Go through the steps again for all the other replica nodes.
-
-> Note: You can specify multiple roles like sentinel and Redis as:
-> `roles ['redis_sentinel_role', 'redis_replica_role']`. Read more about high
-> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
-
----
-
-These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
-a failover, as the nodes will be managed by the Sentinels, and even after a
-`gitlab-ctl reconfigure`, they will get their configuration restored by
-the same Sentinels.
-
-### Step 3. Configuring the Redis Sentinel instances
-
->**Note:**
-Redis Sentinel is bundled with Omnibus GitLab Enterprise Edition only. The
-following section assumes you are using Omnibus GitLab Enterprise Edition.
-For the Omnibus Community Edition and installations from source, follow the
-[Redis HA source install](redis_source.md) guide.
-
-NOTE: **Note:** If you are using an external Redis Sentinel instance, be sure
-to exclude the `requirepass` parameter from the Sentinel
-configuration. This parameter will cause clients to report `NOAUTH
-Authentication required.`. [Redis Sentinel 3.2.x does not support
-password authentication](https://github.com/antirez/redis/issues/3279).
-
-Now that the Redis servers are all set up, let's configure the Sentinel
-servers.
-
-If you are not sure if your Redis servers are working and replicating
-correctly, please read the [Troubleshooting Replication](#troubleshooting-redis-replication)
-and fix it before proceeding with Sentinel setup.
-
-You must have at least `3` Redis Sentinel servers, and they need to
-be each in an independent machine. You can configure them in the same
-machines where you've configured the other Redis servers.
-
-With GitLab Enterprise Edition, you can use the Omnibus package to set up
-multiple machines with the Sentinel daemon.
-
----
-
-1. SSH into the server that will host Redis Sentinel.
-1. **You can omit this step if the Sentinels will be hosted in the same node as
- the other Redis instances.**
-
- [Download/install](https://about.gitlab.com/install/) the
- Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
- GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- the GitLab application is running.
- - Do not complete any other steps on the download page.
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents (if you are installing the
- Sentinels in the same node as the other Redis instances, some values might
- be duplicate below):
-
- ```ruby
- roles ['redis_sentinel_role']
-
- # Must be the same in every sentinel node
- redis['master_name'] = 'gitlab-redis'
-
- # The same password for Redis authentication you set up for the master node.
- redis['master_password'] = 'redis-password-goes-here'
-
- # The IP of the master Redis node.
- redis['master_ip'] = '10.0.0.1'
-
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
-
- # Port of master Redis server, uncomment to change to non default. Defaults
- # to `6379`.
- #redis['master_port'] = 6379
-
- ## Configure Sentinel
- sentinel['bind'] = '10.0.0.1'
-
- # Port that Sentinel listens on, uncomment to change to non default. Defaults
- # to `26379`.
- # sentinel['port'] = 26379
-
- ## Quorum must reflect the amount of voting sentinels it take to start a failover.
- ## Value must NOT be greater then the amount of sentinels.
- ##
- ## The quorum can be used to tune Sentinel in two ways:
- ## 1. If a the quorum is set to a value smaller than the majority of Sentinels
- ## we deploy, we are basically making Sentinel more sensible to master failures,
- ## triggering a failover as soon as even just a minority of Sentinels is no longer
- ## able to talk with the master.
- ## 1. If a quorum is set to a value greater than the majority of Sentinels, we are
- ## making Sentinel able to failover only when there are a very large number (larger
- ## than majority) of well connected Sentinels which agree about the master being down.s
- sentinel['quorum'] = 2
-
- ## Consider unresponsive server down after x amount of ms.
- # sentinel['down_after_milliseconds'] = 10000
-
- ## Specifies the failover timeout in milliseconds. It is used in many ways:
- ##
- ## - The time needed to re-start a failover after a previous failover was
- ## already tried against the same master by a given Sentinel, is two
- ## times the failover timeout.
- ##
- ## - The time needed for a replica replicating to a wrong master according
- ## to a Sentinel current configuration, to be forced to replicate
- ## with the right master, is exactly the failover timeout (counting since
- ## the moment a Sentinel detected the misconfiguration).
- ##
- ## - The time needed to cancel a failover that is already in progress but
- ## did not produced any configuration change (REPLICAOF NO ONE yet not
- ## acknowledged by the promoted replica).
- ##
- ## - The maximum time a failover in progress waits for all the replica to be
- ## reconfigured as replicas of the new master. However even after this time
- ## the replicas will be reconfigured by the Sentinels anyway, but not with
- ## the exact parallel-syncs progression as specified.
- # sentinel['failover_timeout'] = 60000
- ```
-
-1. To prevent database migrations from running on upgrade, run:
-
- ```shell
- sudo touch /etc/gitlab/skip-auto-reconfigure
- ```
-
- Only the primary GitLab application server should handle migrations.
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-1. Go through the steps again for all the other Sentinel nodes.
-
-### Step 4. Configuring the GitLab application
-
-The final part is to inform the main GitLab application server of the Redis
-Sentinels servers and authentication credentials.
-
-You can enable or disable Sentinel support at any time in new or existing
-installations. From the GitLab application perspective, all it requires is
-the correct credentials for the Sentinel nodes.
-
-While it doesn't require a list of all Sentinel nodes, in case of a failure,
-it needs to access at least one of the listed.
-
->**Note:**
-The following steps should be performed in the [GitLab application server](gitlab.md)
-which ideally should not have Redis or Sentinels on it for a HA setup.
-
-1. SSH into the server where the GitLab application is installed.
-1. Edit `/etc/gitlab/gitlab.rb` and add/change the following lines:
-
- ```ruby
- ## Must be the same in every sentinel node
- redis['master_name'] = 'gitlab-redis'
-
- ## The same password for Redis authentication you set up for the master node.
- redis['master_password'] = 'redis-password-goes-here'
-
- ## A list of sentinels with `host` and `port`
- gitlab_rails['redis_sentinels'] = [
- {'host' => '10.0.0.1', 'port' => 26379},
- {'host' => '10.0.0.2', 'port' => 26379},
- {'host' => '10.0.0.3', 'port' => 26379}
- ]
- ```
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-## Switching from an existing single-machine installation to Redis HA
-
-If you already have a single-machine GitLab install running, you will need to
-replicate from this machine first, before de-activating the Redis instance
-inside it.
-
-Your single-machine install will be the initial **Master**, and the `3` others
-should be configured as **Replica** pointing to this machine.
-
-After replication catches up, you will need to stop services in the
-single-machine install, to rotate the **Master** to one of the new nodes.
-
-Make the required changes in configuration and restart the new nodes again.
-
-To disable Redis in the single install, edit `/etc/gitlab/gitlab.rb`:
-
-```ruby
-redis['enable'] = false
-```
-
-If you fail to replicate first, you may loose data (unprocessed background jobs).
-
-## Example of a minimal configuration with 1 master, 2 replicas and 3 Sentinels
-
->**Note:**
-Redis Sentinel is bundled with Omnibus GitLab Enterprise Edition only. For
-different setups, read the
-[available configuration setups](#available-configuration-setups) section.
-
-In this example we consider that all servers have an internal network
-interface with IPs in the `10.0.0.x` range, and that they can connect
-to each other using these IPs.
-
-In a real world usage, you would also set up firewall rules to prevent
-unauthorized access from other machines and block traffic from the
-outside (Internet).
-
-We will use the same `3` nodes with **Redis** + **Sentinel** topology
-discussed in [Redis setup overview](#redis-setup-overview) and
-[Sentinel setup overview](#sentinel-setup-overview) documentation.
-
-Here is a list and description of each **machine** and the assigned **IP**:
-
-- `10.0.0.1`: Redis Master + Sentinel 1
-- `10.0.0.2`: Redis Replica 1 + Sentinel 2
-- `10.0.0.3`: Redis Replica 2 + Sentinel 3
-- `10.0.0.4`: GitLab application
-
-Please note that after the initial configuration, if a failover is initiated
-by the Sentinel nodes, the Redis nodes will be reconfigured and the **Master**
-will change permanently (including in `redis.conf`) from one node to the other,
-until a new failover is initiated again.
-
-The same thing will happen with `sentinel.conf` that will be overridden after the
-initial execution, after any new sentinel node starts watching the **Master**,
-or a failover promotes a different **Master** node.
-
-### Example configuration for Redis master and Sentinel 1
-
-In `/etc/gitlab/gitlab.rb`:
-
-```ruby
-roles ['redis_sentinel_role', 'redis_master_role']
-redis['bind'] = '10.0.0.1'
-redis['port'] = 6379
-redis['password'] = 'redis-password-goes-here'
-redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
-redis['master_password'] = 'redis-password-goes-here' # the same value defined in redis['password'] in the master instance
-redis['master_ip'] = '10.0.0.1' # ip of the initial master redis instance
-#redis['master_port'] = 6379 # port of the initial master redis instance, uncomment to change to non default
-sentinel['bind'] = '10.0.0.1'
-# sentinel['port'] = 26379 # uncomment to change default port
-sentinel['quorum'] = 2
-# sentinel['down_after_milliseconds'] = 10000
-# sentinel['failover_timeout'] = 60000
-```
-
-[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-### Example configuration for Redis replica 1 and Sentinel 2
-
-In `/etc/gitlab/gitlab.rb`:
-
-```ruby
-roles ['redis_sentinel_role', 'redis_replica_role']
-redis['bind'] = '10.0.0.2'
-redis['port'] = 6379
-redis['password'] = 'redis-password-goes-here'
-redis['master_password'] = 'redis-password-goes-here'
-redis['master_ip'] = '10.0.0.1' # IP of master Redis server
-#redis['master_port'] = 6379 # Port of master Redis server, uncomment to change to non default
-redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
-sentinel['bind'] = '10.0.0.2'
-# sentinel['port'] = 26379 # uncomment to change default port
-sentinel['quorum'] = 2
-# sentinel['down_after_milliseconds'] = 10000
-# sentinel['failover_timeout'] = 60000
-```
-
-[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-### Example configuration for Redis replica 2 and Sentinel 3
-
-In `/etc/gitlab/gitlab.rb`:
-
-```ruby
-roles ['redis_sentinel_role', 'redis_replica_role']
-redis['bind'] = '10.0.0.3'
-redis['port'] = 6379
-redis['password'] = 'redis-password-goes-here'
-redis['master_password'] = 'redis-password-goes-here'
-redis['master_ip'] = '10.0.0.1' # IP of master Redis server
-#redis['master_port'] = 6379 # Port of master Redis server, uncomment to change to non default
-redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
-sentinel['bind'] = '10.0.0.3'
-# sentinel['port'] = 26379 # uncomment to change default port
-sentinel['quorum'] = 2
-# sentinel['down_after_milliseconds'] = 10000
-# sentinel['failover_timeout'] = 60000
-```
-
-[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-### Example configuration for the GitLab application
-
-In `/etc/gitlab/gitlab.rb`:
-
-```ruby
-redis['master_name'] = 'gitlab-redis'
-redis['master_password'] = 'redis-password-goes-here'
-gitlab_rails['redis_sentinels'] = [
- {'host' => '10.0.0.1', 'port' => 26379},
- {'host' => '10.0.0.2', 'port' => 26379},
- {'host' => '10.0.0.3', 'port' => 26379}
-]
-```
-
-[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
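-To quickly verify that the application node can reach Redis through the
-Sentinels, you can run a check like the following from a Rails console on that
-node (this mirrors the more detailed troubleshooting steps later in this document):
-
-```ruby
-# Should return "PONG" once the Sentinels can resolve the current master.
-redis = Redis.new(Gitlab::Redis::SharedState.params)
-redis.ping
-```
-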
-## Enable Monitoring
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
-
-If you enable Monitoring, it must be enabled on **all** Redis servers.
-
-1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
-
-1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
-
- ```ruby
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Set the network addresses that the exporters will listen on
- node_exporter['listen_address'] = '0.0.0.0:9100'
- redis_exporter['listen_address'] = '0.0.0.0:9121'
- ```
-
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration. A snippet for spot-checking the exporters follows this list.
-
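-As a spot check, and assuming the example Redis node `10.0.0.1` used elsewhere in
-this document, you can confirm the exporters are serving metrics from any Ruby
-shell (for example, `gitlab-rails console`). The addresses below are illustrative only:
-
-```ruby
-require 'net/http'
-require 'uri'
-
-# node_exporter listens on 9100 and redis_exporter on 9121, per the settings above.
-%w[10.0.0.1:9100 10.0.0.1:9121].each do |address|
-  body = Net::HTTP.get(URI("http://#{address}/metrics"))
-  puts "#{address}: #{body.lines.first&.strip}"
-end
-```
-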
-## Advanced configuration
-
-Omnibus GitLab configures some things behind the scenes to make sysadmins'
-lives easier. If you want to know what happens underneath, keep reading.
-
-### Running multiple Redis clusters
-
-GitLab supports running [separate Redis clusters for different persistent
-classes](https://docs.gitlab.com/omnibus/settings/redis.html#running-with-multiple-redis-instances):
-cache, queues, and shared_state. To make this work with Sentinel:
-
-1. Set the appropriate variable in `/etc/gitlab/gitlab.rb` for each instance you are using:
-
- ```ruby
- gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL
- gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL
- gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL
- ```
-
- **Note**: Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_MASTER_NAME`
-
- 1. PASSWORD is the plaintext password for the Redis instance
- 1. SENTINEL_MASTER_NAME is the Sentinel master name (e.g. `gitlab-redis-cache`)
-
-1. Include an array of hashes with host/port combinations, such as the following:
-
- ```ruby
- gitlab_rails['redis_cache_sentinels'] = [
- { host: REDIS_CACHE_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_CACHE_SENTINEL_HOST2, port: PORT2 }
- ]
- gitlab_rails['redis_queues_sentinels'] = [
- { host: REDIS_QUEUES_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_QUEUES_SENTINEL_HOST2, port: PORT2 }
- ]
- gitlab_rails['redis_shared_state_sentinels'] = [
- { host: SHARED_STATE_SENTINEL_HOST, port: PORT1 },
- { host: SHARED_STATE_SENTINEL_HOST2, port: PORT2 }
- ]
- ```
-
-1. Note that for each persistence class, GitLab will default to using the
- configuration specified in `gitlab_rails['redis_sentinels']` unless
- overridden by the settings above.
-1. Be sure to include BOTH configuration options for each persistence class. For example,
- if you choose to configure a cache instance, you must specify both `gitlab_rails['redis_cache_instance']`
- and `gitlab_rails['redis_cache_sentinels']` for GitLab to generate the proper configuration files (a combined example follows this list).
-1. Run `gitlab-ctl reconfigure`
-
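-Putting these options together, a combined sketch for a dedicated cache cluster
-might look like the following in `/etc/gitlab/gitlab.rb`. The hosts, ports,
-master name, and password below are placeholders for illustration only;
-substitute the values of your own Sentinel deployment:
-
-```ruby
-# Hypothetical example: a separate Redis cluster used only for the cache class.
-# 'gitlab-redis-cache' must match the master name monitored by these Sentinels.
-gitlab_rails['redis_cache_instance'] = 'redis://:cache-password-goes-here@gitlab-redis-cache'
-
-gitlab_rails['redis_cache_sentinels'] = [
-  { host: '10.0.0.10', port: 26379 },
-  { host: '10.0.0.11', port: 26379 }
-]
-
-# The queues and shared_state classes keep using gitlab_rails['redis_sentinels']
-# unless you also define the corresponding *_instance and *_sentinels settings.
-```
-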
-### Control running services
-
-In the previous example, we've used `redis_sentinel_role` and
-`redis_master_role`, which simplify the number of configuration changes.
-
-If you want more control, here is what each one sets for you automatically
-when enabled:
-
-```ruby
-## Redis Sentinel Role
-redis_sentinel_role['enable'] = true
-
-# When Sentinel Role is enabled, the following services are also enabled
-sentinel['enable'] = true
-
-# The following services are disabled
-redis['enable'] = false
-bootstrap['enable'] = false
-nginx['enable'] = false
-postgresql['enable'] = false
-gitlab_rails['enable'] = false
-mailroom['enable'] = false
-
-# -------
-
-## Redis master/replica Role
-redis_master_role['enable'] = true # enable only one of them
-redis_replica_role['enable'] = true # enable only one of them
-
-# When Redis Master or Replica role are enabled, the following services are
-# enabled/disabled. Note that if Redis and Sentinel roles are combined, both
-# services will be enabled.
-
-# The following services are disabled
-sentinel['enable'] = false
-bootstrap['enable'] = false
-nginx['enable'] = false
-postgresql['enable'] = false
-gitlab_rails['enable'] = false
-mailroom['enable'] = false
-
-# For Redis Replica role, also change this setting from default 'true' to 'false':
-redis['master'] = false
-```
-
-You can find the relevant attributes defined in [`gitlab_rails.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/libraries/gitlab_rails.rb).
-
-## Troubleshooting
-
-There are a lot of moving parts that need to be handled carefully
-for the HA setup to work as expected.
-
-Before proceeding with the troubleshooting below, check your firewall rules:
-
-- Redis machines
-  - Accept TCP connections on port `6379`
-  - Connect to the other Redis machines via TCP on port `6379`
-- Sentinel machines
-  - Accept TCP connections on port `26379`
-  - Connect to other Sentinel machines via TCP on port `26379`
-  - Connect to the Redis machines via TCP on port `6379`
-
-### Troubleshooting Redis replication
-
-You can check if everything is correct by connecting to each server using
-the `redis-cli` application, and sending the `info replication` command as shown below.
-
-```shell
-/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
-```
-
-When connected to a `master` Redis, you will see the number of connected
-`replicas`, and a list of each with connection details:
-
-```plaintext
-# Replication
-role:master
-connected_replicas:1
-replica0:ip=10.133.5.21,port=6379,state=online,offset=208037514,lag=1
-master_repl_offset:208037658
-repl_backlog_active:1
-repl_backlog_size:1048576
-repl_backlog_first_byte_offset:206989083
-repl_backlog_histlen:1048576
-```
-
-When it's a `replica`, you will see details of the master connection and whether
-it's `up` or `down`:
-
-```plaintext
-# Replication
-role:replica
-master_host:10.133.1.58
-master_port:6379
-master_link_status:up
-master_last_io_seconds_ago:1
-master_sync_in_progress:0
-replica_repl_offset:208096498
-replica_priority:100
-replica_read_only:1
-connected_replicas:0
-master_repl_offset:0
-repl_backlog_active:0
-repl_backlog_size:1048576
-repl_backlog_first_byte_offset:0
-repl_backlog_histlen:0
-```
-
-### Troubleshooting Sentinel
-
-If you get an error like: `Redis::CannotConnectError: No sentinels available.`,
-there may be something wrong with your configuration files or it can be related
-to [this issue](https://github.com/redis/redis-rb/issues/531).
-
-You must make sure you are defining the same value in `redis['master_name']`
-and `redis['master_password']` as you defined for your sentinel node.
-
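-For instance, with the example values used earlier in this document, the
-Sentinel nodes and the GitLab application node would both carry the same pair
-of settings in `/etc/gitlab/gitlab.rb`:
-
-```ruby
-# These two values must be identical on the Sentinel nodes and on the
-# GitLab application node (the values shown are the examples used above).
-redis['master_name'] = 'gitlab-redis'
-redis['master_password'] = 'redis-password-goes-here'
-```
-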
-The way the Redis connector `redis-rb` works with Sentinel is a bit
-non-intuitive. We try to hide the complexity in Omnibus GitLab, but it still requires
-a few extra configuration steps.
-
----
-
-To make sure your configuration is correct:
-
-1. SSH into your GitLab application server
-1. Enter the Rails console:
-
- ```shell
- # For Omnibus installations
- sudo gitlab-rails console
-
- # For source installations
- sudo -u git rails console -e production
- ```
-
-1. Run in the console:
-
- ```ruby
- redis = Redis.new(Gitlab::Redis::SharedState.params)
- redis.info
- ```
-
- Keep this screen open and try to simulate a failover below.
-
-1. To simulate a failover on master Redis, SSH into the Redis server and run:
-
- ```shell
- # The port must match your master Redis port, and the sleep time must be a few seconds longer than the configured `down-after-milliseconds` value
- redis-cli -h localhost -p 6379 DEBUG sleep 20
- ```
-
-1. Then back in the Rails console from the first step, run:
-
- ```ruby
- redis.info
- ```
-
- You should see a different port after a few seconds' delay
- (the failover/reconnect time). A snippet for inspecting the resolved connection follows this list.
-
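-If you want to see explicitly which node the client resolved after the
-failover, and assuming the bundled `redis-rb` client exposes the `connection`
-helper, you can also run the following in the same Rails console:
-
-```ruby
-# Shows the host/port the client is currently talking to, which should point
-# at the newly promoted master once the failover completes.
-redis = Redis.new(Gitlab::Redis::SharedState.params)
-redis.connection
-```
-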
-## Changelog
-
-Changes to Redis HA over time.
-
-**8.14**
-
-- Redis Sentinel support is production-ready and bundled in the Omnibus GitLab
- Enterprise Edition package
-- Documentation restructure for better readability
-
-**8.11**
-
-- Experimental Redis Sentinel support was added
-
-## Further reading
-
-Read more on High Availability:
-
-1. [High Availability Overview](README.md)
-1. [Configure the database](../postgresql/replication_and_failover.md)
-1. [Configure NFS](nfs.md)
-1. [Configure the GitLab application servers](gitlab.md)
-1. [Configure the load balancers](load_balancer.md)
+This document was moved to [another location](../redis/index.md).
diff --git a/doc/administration/high_availability/redis_source.md b/doc/administration/high_availability/redis_source.md
index 97be480bc3b..75496638979 100644
--- a/doc/administration/high_availability/redis_source.md
+++ b/doc/administration/high_availability/redis_source.md
@@ -1,371 +1,5 @@
---
-type: reference
+redirect_to: ../redis/replication_and_failover_external.md
---
-# Configuring non-Omnibus Redis for GitLab HA
-
-This is the documentation for configuring a Highly Available Redis setup when
-you have installed Redis yourself, rather than using the bundled version that
-comes with the Omnibus packages.
-
-Note also that you may elect to override all references to
-`/home/git/gitlab/config/resque.yml` in accordance with the advanced Redis
-settings outlined in
-[Configuration Files Documentation](https://gitlab.com/gitlab-org/gitlab/blob/master/config/README.md).
-
-We cannot stress enough the importance of reading the
-[Overview section](redis.md#overview) of the Omnibus Redis HA documentation, as it provides
-some invaluable information about the configuration of Redis. Please proceed to
-read it before going forward with this guide.
-
-We also highly recommend that you use the Omnibus GitLab packages, as we
-optimize them specifically for GitLab, and we will take care of upgrading Redis
-to the latest supported version.
-
-If you're not sure whether this guide is for you, please refer to
-[Available configuration setups](redis.md#available-configuration-setups) in
-the Omnibus Redis HA documentation.
-
-## Configuring your own Redis server
-
-This is the section where we install and set up the new Redis instances.
-
-### Prerequisites
-
-- All Redis servers in this guide must be configured to use a TCP connection
- instead of a socket. To configure Redis to use TCP connections you need to
- define both `bind` and `port` in the Redis config file. You can bind to all
- interfaces (`0.0.0.0`) or specify the IP of the desired interface
- (e.g., one from an internal network).
-- Since Redis 3.2, you must define a password to receive external connections
- (`requirepass`).
-- If you are using Redis with Sentinel, you will also need to define the same
- password for the replica password definition (`masterauth`) in the same instance.
-
-In addition, read the prerequisites as described in the
-[Omnibus Redis HA document](redis.md#prerequisites) since they provide some
-valuable information for the general setup.
-
-### Step 1. Configuring the master Redis instance
-
-Assuming that the Redis master instance IP is `10.0.0.1`:
-
-1. [Install Redis](../../install/installation.md#7-redis).
-1. Edit `/etc/redis/redis.conf`:
-
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach. If you really need to bind to an externally accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.1
-
- ## Define a `port` to force redis to listen on TCP so other machines can
- ## connect to it (default port is `6379`).
- port 6379
-
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be the same for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- ```
-
-1. Restart the Redis service for the changes to take effect.
-
-### Step 2. Configuring the replica Redis instances
-
-Assuming that the Redis replica instance IP is `10.0.0.2`:
-
-1. [Install Redis](../../install/installation.md#7-redis).
-1. Edit `/etc/redis/redis.conf`:
-
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach. If you really need to bind to an externally accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.2
-
- ## Define a `port` to force redis to listen on TCP so other machines can
- ## connect to it (default port is `6379`).
- port 6379
-
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be the same for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
-
- ## Define `replicaof` pointing to the Redis master instance with IP and port.
- replicaof 10.0.0.1 6379
- ```
-
-1. Restart the Redis service for the changes to take effect.
-1. Go through the steps again for all the other replica nodes.
-
-### Step 3. Configuring the Redis Sentinel instances
-
-Sentinel is a special type of Redis server. It inherits most of the basic
-configuration options you can define in `redis.conf`, with specific ones
-starting with the `sentinel` prefix.
-
-Assuming that the Redis Sentinel is installed on the same instance as Redis
-master with IP `10.0.0.1` (some settings might overlap with the master):
-
-1. [Install Redis Sentinel](https://redis.io/topics/sentinel)
-1. Edit `/etc/redis/sentinel.conf`:
-
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach. If you really need to bind to an externally accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.1
-
- ## Define a `port` to force Sentinel to listen on TCP so other machines can
- ## connect to it (default Sentinel port is `26379`).
- port 26379
-
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be the same for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
-
- ## Define with `sentinel auth-pass` the same shared password you have
- ## defined for both Redis master and replica instances.
- sentinel auth-pass gitlab-redis redis-password-goes-here
-
- ## Define with `sentinel monitor` the IP and port of the Redis
- ## master node, and the quorum required to start a failover.
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
-
- ## Define with `sentinel down-after-milliseconds` the time in `ms`
- ## that an unresponsive server will be considered down.
- sentinel down-after-milliseconds gitlab-redis 10000
-
- ## Define a value for `sentinel failover_timeout` in `ms`. This has multiple
- ## meanings:
- ##
- ## * The time needed to re-start a failover after a previous failover was
- ## already tried against the same master by a given Sentinel, is two
- ## times the failover timeout.
- ##
- ## * The time needed for a replica replicating to a wrong master according
- ## to a Sentinel current configuration, to be forced to replicate
- ## with the right master, is exactly the failover timeout (counting since
- ## the moment a Sentinel detected the misconfiguration).
- ##
- ## * The time needed to cancel a failover that is already in progress but
- ## did not produce any configuration change (REPLICAOF NO ONE yet not
- ## acknowledged by the promoted replica).
- ##
- ## * The maximum time a failover in progress waits for all the replicas to be
- ## reconfigured as replicas of the new master. However even after this time
- ## the replicas will be reconfigured by the Sentinels anyway, but not with
- ## the exact parallel-syncs progression as specified.
- sentinel failover_timeout 30000
- ```
-
-1. Restart the Redis service for the changes to take effect.
-1. Go through the steps again for all the other Sentinel nodes.
-
-### Step 4. Configuring the GitLab application
-
-You can enable or disable Sentinel support at any time in new or existing
-installations. From the GitLab application perspective, all it requires is
-the correct credentials for the Sentinel nodes.
-
-While it doesn't require a list of all Sentinel nodes, in case of a failure,
-it needs to access at least one of the listed ones.
-
-The following steps should be performed on the [GitLab application server](gitlab.md),
-which ideally should not have Redis or Sentinel running on the same machine in an HA
-setup:
-
-1. Edit `/home/git/gitlab/config/resque.yml` following the example in
- [resque.yml.example](https://gitlab.com/gitlab-org/gitlab/blob/master/config/resque.yml.example), and uncomment the Sentinel lines, pointing to
- the correct server credentials:
-
- ```yaml
- # resque.yml
- production:
- url: redis://:redis-password-goes-here@gitlab-redis/
- sentinels:
- -
- host: 10.0.0.1
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.2
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.3
- port: 26379 # point to sentinel, not to redis port
- ```
-
-1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
-
-## Example of minimal configuration with 1 master, 2 replicas and 3 Sentinels
-
-In this example we assume that all servers have an internal network
-interface with IPs in the `10.0.0.x` range, and that they can connect
-to each other using these IPs.
-
-In real-world usage, you would also set up firewall rules to prevent
-unauthorized access from other machines, and block traffic from the
-outside ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)).
-
-For this example, **Sentinel 1** will be configured on the same machine as the
-**Redis Master**, and **Sentinel 2** and **Sentinel 3** on the same machines as
-**Replica 1** and **Replica 2**, respectively.
-
-Here is a list and description of each **machine** and the assigned **IP**:
-
-- `10.0.0.1`: Redis Master + Sentinel 1
-- `10.0.0.2`: Redis Replica 1 + Sentinel 2
-- `10.0.0.3`: Redis Replica 2 + Sentinel 3
-- `10.0.0.4`: GitLab application
-
-Please note that after the initial configuration, if a failover is initiated
-by the Sentinel nodes, the Redis nodes will be reconfigured and the **Master**
-will change permanently (including in `redis.conf`) from one node to the other,
-until a new failover is initiated again.
-
-The same thing happens with `sentinel.conf`, which is overwritten after the
-initial execution, whenever a new Sentinel node starts watching the **Master**,
-or whenever a failover promotes a different **Master** node.
-
-### Example configuration for Redis master and Sentinel 1
-
-1. In `/etc/redis/redis.conf`:
-
- ```conf
- bind 10.0.0.1
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- ```
-
-1. In `/etc/redis/sentinel.conf`:
-
- ```conf
- bind 10.0.0.1
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
-
-1. Restart the Redis service for the changes to take effect.
-
-### Example configuration for Redis replica 1 and Sentinel 2
-
-1. In `/etc/redis/redis.conf`:
-
- ```conf
- bind 10.0.0.2
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- replicaof 10.0.0.1 6379
- ```
-
-1. In `/etc/redis/sentinel.conf`:
-
- ```conf
- bind 10.0.0.2
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
-
-1. Restart the Redis service for the changes to take effect.
-
-### Example configuration for Redis replica 2 and Sentinel 3
-
-1. In `/etc/redis/redis.conf`:
-
- ```conf
- bind 10.0.0.3
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- replicaof 10.0.0.1 6379
- ```
-
-1. In `/etc/redis/sentinel.conf`:
-
- ```conf
- bind 10.0.0.3
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
-
-1. Restart the Redis service for the changes to take effect.
-
-### Example configuration of the GitLab application
-
-1. Edit `/home/git/gitlab/config/resque.yml`:
-
- ```yaml
- production:
- url: redis://:redis-password-goes-here@gitlab-redis/
- sentinels:
- -
- host: 10.0.0.1
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.2
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.3
- port: 26379 # point to sentinel, not to redis port
- ```
-
-1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
-
-## Troubleshooting
-
-There is a more detailed [Troubleshooting](redis.md#troubleshooting) section
-in the documentation for Omnibus GitLab installations. Here we list only
-the things that are specific to a source installation.
-
-If you get an error in GitLab like `Redis::CannotConnectError: No sentinels available.`,
-there may be something wrong with your configuration files or it can be related
-to [this upstream issue](https://github.com/redis/redis-rb/issues/531).
-
-You must make sure that `resque.yml` and `sentinel.conf` are configured correctly,
-otherwise `redis-rb` will not work properly.
-
-The `master-group-name` (`gitlab-redis`) defined in `sentinel.conf`
-**must** be used as the hostname in GitLab (`resque.yml`):
-
-```conf
-# sentinel.conf:
-sentinel monitor gitlab-redis 10.0.0.1 6379 2
-sentinel down-after-milliseconds gitlab-redis 10000
-sentinel config-epoch gitlab-redis 0
-sentinel leader-epoch gitlab-redis 0
-```
-
-```yaml
-# resque.yml
-production:
- url: redis://:myredispassword@gitlab-redis/
- sentinels:
- -
- host: 10.0.0.1
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.2
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.3
- port: 26379 # point to sentinel, not to redis port
-```
-
-When in doubt, please read [Redis Sentinel documentation](https://redis.io/topics/sentinel).
+This document was moved to [another location](../redis/replication_and_failover_external.md).
diff --git a/doc/administration/high_availability/sidekiq.md b/doc/administration/high_availability/sidekiq.md
index 493929dcf3b..98a9af64e5e 100644
--- a/doc/administration/high_availability/sidekiq.md
+++ b/doc/administration/high_availability/sidekiq.md
@@ -94,7 +94,8 @@ you want using steps 1 and 2 from the GitLab downloads page.
1. Run `gitlab-ctl reconfigure`.
-NOTE: **Note:** You will need to restart the Sidekiq nodes after an update has occurred and database
+NOTE: **Note:**
+You will need to restart the Sidekiq nodes after an update has occurred and database
migrations performed.
## Example configuration
diff --git a/doc/administration/img/repository_storages_admin_ui_v12_10.png b/doc/administration/img/repository_storages_admin_ui_v12_10.png
deleted file mode 100644
index e5ac09524b8..00000000000
--- a/doc/administration/img/repository_storages_admin_ui_v12_10.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/img/repository_storages_admin_ui_v13_1.png b/doc/administration/img/repository_storages_admin_ui_v13_1.png
new file mode 100644
index 00000000000..f8c13a35369
--- /dev/null
+++ b/doc/administration/img/repository_storages_admin_ui_v13_1.png
Binary files differ
diff --git a/doc/administration/incoming_email.md b/doc/administration/incoming_email.md
index e078a7c1098..36156c4a580 100644
--- a/doc/administration/incoming_email.md
+++ b/doc/administration/incoming_email.md
@@ -17,10 +17,15 @@ GitLab has several features based on receiving incoming emails:
allow GitLab users to create a new merge request by sending an email to a
user-specific email address.
- [Service Desk](../user/project/service_desk.md): provide e-mail support to
- your customers through GitLab. **(PREMIUM)**
+ your customers through GitLab.
## Requirements
+NOTE: **Note:**
+It is **not** recommended to use an email address that receives or will receive any
+messages not intended for the GitLab instance. Any incoming emails not intended
+for GitLab will receive a reject notice.
+
Handling incoming emails requires an [IMAP](https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol)-enabled
email account. GitLab requires one of the following three strategies:
@@ -69,6 +74,11 @@ and [allowed less secure apps to access the account](https://support.google.com/
or [turn-on 2-step validation](https://support.google.com/accounts/answer/185839)
and use [an application password](https://support.google.com/mail/answer/185833).
+If you want to use Office 365, and two-factor authentication is enabled, make sure
+you're using an
+[app password](https://docs.microsoft.com/en-us/azure/active-directory/user-help/multi-factor-authentication-end-user-app-passwords)
+instead of the regular password for the mailbox.
+
To set up a basic Postfix mail server with IMAP access on Ubuntu, follow the
[Postfix setup documentation](reply_by_email_postfix_setup.md).
@@ -101,6 +111,16 @@ Alternatively, use a dedicated domain for GitLab email communications such as
See GitLab issue [#30366](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/30366)
for a real-world example of this exploit.
+CAUTION: **Caution:**
+Be sure to use a mail server that has been configured to reduce
+spam.
+A Postfix mail server that is running on a default configuration, for example,
+can result in abuse. All messages received on the configured mailbox will be processed
+and messages that are not intended for the GitLab instance will receive a reject notice.
+If the sender's address is spoofed, the reject notice will be delivered to the spoofed
+`FROM` address, which can cause the mail server's IP or domain to appear on a block
+list.
+
### Omnibus package installations
1. Find the `incoming_email` section in `/etc/gitlab/gitlab.rb`, enable the feature
diff --git a/doc/administration/instance_limits.md b/doc/administration/instance_limits.md
index 3d2f7380494..6d2fbd95d5e 100644
--- a/doc/administration/instance_limits.md
+++ b/doc/administration/instance_limits.md
@@ -8,6 +8,99 @@ GitLab, like most large applications, enforces limits within certain features to
minimum quality of performance. Allowing some features to be limitless could affect security,
performance, data, or could even exhaust the allocated resources for the application.
+## Rate limits
+
+Rate limits can be used to improve the security and durability of GitLab.
+
+For example, a simple script can make thousands of web requests per second. Whether malicious, apathetic, or just a bug, your application and infrastructure may not be able to cope with the load. Rate limits can help mitigate these types of attacks.
+
+Read more about [configuring rate limits](../security/rate_limits.md) in the Security documentation.
+
+### Issue creation
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28129) in GitLab 12.10.
+
+This setting limits the request rate to the issue creation endpoint.
+
+Read more on [issue creation rate limits](../user/admin_area/settings/rate_limit_on_issues_creation.md).
+
+- **Default rate limit** - Disabled by default
+
+### By User or IP
+
+This setting limits the request rate per user or IP.
+
+Read more on [User and IP rate limits](../user/admin_area/settings/user_and_ip_rate_limits.md).
+
+- **Default rate limit** - Disabled by default
+
+### By raw endpoint
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/30829) in GitLab 12.2.
+
+This setting limits the request rate per endpoint.
+
+Read more on [raw endpoint rate limits](../user/admin_area/settings/rate_limits_on_raw_endpoints.md).
+
+- **Default rate limit** - 300 requests per project, per commit and per file path
+
+### By protected path
+
+This setting limits the request rate on specific paths.
+
+GitLab rate limits the following paths by default:
+
+```plaintext
+'/users/password',
+'/users/sign_in',
+'/api/#{API::API.version}/session.json',
+'/api/#{API::API.version}/session',
+'/users',
+'/users/confirmation',
+'/unsubscribes/',
+'/import/github/personal_access_token',
+'/admin/session'
+```
+
+Read more on [protected path rate limits](../user/admin_area/settings/protected_paths.md).
+
+- **Default rate limit** - After 10 requests, the client must wait 60 seconds before trying again
+
+### Import/Export
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/35728) in GitLab 13.2.
+
+This setting limits the import/export actions for groups and projects.
+
+| Limit | Default (per minute per user) |
+| ----- | ----------------------------- |
+| Project Import | 6 |
+| Project Export | 6 |
+| Project Export Download | 1 |
+| Group Import | 6 |
+| Group Export | 6 |
+| Group Export Download | 1 |
+
+Read more on [import/export rate limits](../user/admin_area/settings/import_export_rate_limits.md).
+
+### Rack attack
+
+This method of rate limiting is cumbersome, but has some advantages. It allows
+throttling of specific paths, and is also integrated into Git and container
+registry requests.
+
+Read more on the [Rack Attack initializer](../security/rack_attack.md) method of setting rate limits.
+
+- **Default rate limit** - Disabled
+
+## Gitaly concurrency limit
+
+Clone traffic can put a large strain on your Gitaly service. To prevent such workloads from overwhelming your Gitaly server, you can set concurrency limits in Gitaly’s configuration file.
+
+Read more on [Gitaly concurrency limits](gitaly/index.md#limit-rpc-concurrency).
+
+- **Default rate limit** - Disabled
+
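+For Omnibus GitLab installations, such a limit is configured in `/etc/gitlab/gitlab.rb`.
+The sketch below assumes the `gitaly['concurrency']` setting described in the linked
+Gitaly documentation; the RPC names and values are illustrative only:
+
+```ruby
+# Limit how many concurrent clone/fetch RPCs a single repository can serve.
+gitaly['concurrency'] = [
+  {
+    'rpc' => "/gitaly.SmartHTTPService/PostUploadPack",
+    'max_per_repo' => 20
+  },
+  {
+    'rpc' => "/gitaly.SSHService/SSHUploadPack",
+    'max_per_repo' => 20
+  }
+]
+```
+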
## Number of comments per issue, merge request or commit
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/22388) in GitLab 12.4.
@@ -77,13 +170,14 @@ To set this limit on a self-managed installation, run the following in the
# Plan.default.create_limits!
# For project webhooks
-Plan.default.limits.update!(project_hooks: 100)
+Plan.default.actual_limits.update!(project_hooks: 100)
# For group webhooks
-Plan.default.limits.update!(group_hooks: 100)
+Plan.default.actual_limits.update!(group_hooks: 100)
```
-NOTE: **Note:** Set the limit to `0` to disable it.
+NOTE: **Note:**
+Set the limit to `0` to disable it.
## Incoming emails from auto-responders
@@ -115,12 +209,13 @@ To set this limit on a self-managed installation, run the following in the
# If limits don't exist for the default plan, you can create one with:
# Plan.default.create_limits!
-Plan.default.limits.update!(offset_pagination_limit: 10000)
+Plan.default.actual_limits.update!(offset_pagination_limit: 10000)
```
- **Default offset pagination limit:** 50000
-NOTE: **Note:** Set the limit to `0` to disable it.
+NOTE: **Note:**
+Set the limit to `0` to disable it.
## CI/CD limits
@@ -149,10 +244,11 @@ To set this limit on a self-managed installation, run the following in the
# If limits don't exist for the default plan, you can create one with:
# Plan.default.create_limits!
-Plan.default.limits.update!(ci_active_jobs: 500)
+Plan.default.actual_limits.update!(ci_active_jobs: 500)
```
-NOTE: **Note:** Set the limit to `0` to disable it.
+NOTE: **Note:**
+Set the limit to `0` to disable it.
### Number of CI/CD subscriptions to a project
@@ -171,10 +267,11 @@ To set this limit on a self-managed installation, run the following in the
[GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session):
```ruby
-Plan.default.limits.update!(ci_project_subscriptions: 500)
+Plan.default.actual_limits.update!(ci_project_subscriptions: 500)
```
-NOTE: **Note:** Set the limit to `0` to disable it.
+NOTE: **Note:**
+Set the limit to `0` to disable it.
### Number of pipeline schedules
@@ -196,7 +293,7 @@ To set this limit on a self-managed installation, run the following in the
[GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session):
```ruby
-Plan.default.limits.update!(ci_pipeline_schedules: 100)
+Plan.default.actual_limits.update!(ci_pipeline_schedules: 100)
```
### Number of instance level variables
@@ -214,7 +311,7 @@ To update this limit to a new value on a self-managed installation, run the foll
[GitLab Rails console](troubleshooting/debug.md#starting-a-rails-console-session):
```ruby
-Plan.default.limits.update!(ci_instance_level_variables: 30)
+Plan.default.actual_limits.update!(ci_instance_level_variables: 30)
```
## Instance monitoring and metrics
@@ -243,6 +340,38 @@ Prometheus alert payloads sent to the `notify.json` endpoint are limited to 1 MB
Alert payloads sent to the `notify.json` endpoint are limited to 1 MB in size.
+### Metrics dashboard YAML files
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34834) in GitLab 13.2.
+
+The memory occupied by a parsed metrics dashboard YAML file cannot exceed 1 MB.
+
+The maximum depth of each YAML file is limited to 100. The maximum depth of a YAML
+file is the amount of nesting of its most nested key. Each hash and array on the
+path of the most nested key counts towards its depth. For example, the depth of the
+most nested key in the following YAML is 7:
+
+```yaml
+dashboard: 'Test dashboard'
+links:
+- title: Link 1
+ url: https://gitlab.com
+panel_groups:
+- group: Group A
+ priority: 1
+ panels:
+ - title: "Super Chart A1"
+ type: "area-chart"
+ y_label: "y_label"
+ weight: 1
+ max_value: 1
+ metrics:
+ - id: metric_a1
+ query_range: 'query'
+ unit: unit
+ label: Legend Label
+```
+
## Environment data on Deploy Boards
[Deploy Boards](../user/project/deploy_boards.md) load information from Kubernetes about
@@ -274,7 +403,8 @@ characters and the rest will not be indexed and hence will not be searchable.
This limit can be configured for self-managed installations when [enabling
Elasticsearch](../integration/elasticsearch.md#enabling-elasticsearch).
-NOTE: **Note:** Set the limit to `0` to disable it.
+NOTE: **Note:**
+Set the limit to `0` to disable it.
## Wiki limits
@@ -284,6 +414,10 @@ NOTE: **Note:** Set the limit to `0` to disable it.
See the [documentation on Snippets settings](snippets/index.md).
+## Design Management limits
+
+See the [Design Management Limitations](../user/project/issues/design_management.md#limitations) section.
+
## Push Event Limits
### Webhooks and Project Services
diff --git a/doc/administration/integration/plantuml.md b/doc/administration/integration/plantuml.md
index cd25fbf351b..2a30eced7c4 100644
--- a/doc/administration/integration/plantuml.md
+++ b/doc/administration/integration/plantuml.md
@@ -130,7 +130,8 @@ that, login with an Admin account and do following:
- Check **Enable PlantUML** checkbox.
- Set the PlantUML instance as `https://gitlab.example.com/-/plantuml/`.
-NOTE: **Note:** If you are using a PlantUML server running v1.2020.9 and
+NOTE: **Note:**
+If you are using a PlantUML server running v1.2020.9 and
above (for example, [plantuml.com](https://plantuml.com)), set the `PLANTUML_ENCODING`
environment variable to enable the `deflate` compression. On Omnibus,
this can be done by setting it in `/etc/gitlab/gitlab.rb`:
diff --git a/doc/administration/integration/terminal.md b/doc/administration/integration/terminal.md
index bc5816ce120..fd9e09dc17a 100644
--- a/doc/administration/integration/terminal.md
+++ b/doc/administration/integration/terminal.md
@@ -45,7 +45,8 @@ detail below.
## Enabling and disabling terminal support
-NOTE: **Note:** AWS Elastic Load Balancers (ELBs) do not support web sockets.
+NOTE: **Note:**
+AWS Elastic Load Balancers (ELBs) do not support web sockets.
AWS Application Load Balancers (ALBs) must be used if you want web terminals
to work. See [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.
diff --git a/doc/administration/job_artifacts.md b/doc/administration/job_artifacts.md
index 0dbda67d39b..cb31a8934b1 100644
--- a/doc/administration/job_artifacts.md
+++ b/doc/administration/job_artifacts.md
@@ -106,6 +106,11 @@ If you configure GitLab to store CI logs and artifacts on object storage, you mu
#### Object Storage Settings
+NOTE: **Note:**
+In GitLab 13.2 and later, we recommend using the
+[consolidated object storage settings](object_storage.md#consolidated-object-storage-configuration).
+This section describes the earlier configuration format.
+
For source installations the following settings are nested under `artifacts:` and then `object_store:`. On Omnibus GitLab installs they are prefixed by `artifacts_object_store_`.
| Setting | Description | Default |
@@ -117,22 +122,9 @@ For source installations the following settings are nested under `artifacts:` an
| `proxy_download` | Set to true to enable proxying all files served. Option allows to reduce egress traffic as this allows clients to download directly from remote storage instead of proxying all data | `false` |
| `connection` | Various connection options described below | |
-##### S3 compatible connection settings
-
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
+#### Connection settings
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `AWS` for compatible hosts | AWS |
-| `aws_access_key_id` | AWS credentials, or compatible | |
-| `aws_secret_access_key` | AWS credentials, or compatible | |
-| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
-| `region` | AWS region | us-east-1 |
-| `host` | S3 compatible host for when not using AWS, for example `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
-| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
+See [the available connection settings for different providers](object_storage.md#connection-settings).
**In Omnibus installations:**
@@ -172,7 +164,7 @@ _The artifacts are stored by default in
gitlab-rake gitlab:artifacts:migrate
```
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
JUnit test report artifact (`junit.xml.gz`) migration
[is not supported](https://gitlab.com/gitlab-org/gitlab/-/issues/27698)
by the `gitlab:artifacts:migrate` script.
@@ -205,24 +197,14 @@ _The artifacts are stored by default in
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
JUnit test report artifact (`junit.xml.gz`) migration
[is not supported](https://gitlab.com/gitlab-org/gitlab/-/issues/27698)
by the `gitlab:artifacts:migrate` script.
-### OpenStack compatible connection settings
+### OpenStack example
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `OpenStack` for compatible hosts | OpenStack |
-| `openstack_username` | OpenStack username | |
-| `openstack_api_key` | OpenStack API key | |
-| `openstack_temp_url_key` | OpenStack key for generating temporary URLs | |
-| `openstack_auth_url` | OpenStack authentication endpoint | |
-| `openstack_region` | OpenStack region | |
-| `openstack_tenant_id` | OpenStack tenant ID |
+See [the available connection settings for OpenStack](object_storage.md#openstack-compatible-connection-settings).
**In Omnibus installations:**
@@ -453,7 +435,7 @@ the number you want.
#### Delete job artifacts from jobs completed before a specific date
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
These commands remove data permanently from the database and from disk. We
highly recommend running them only under the guidance of a Support Engineer, or
running them in a test environment with a backup of the instance ready to be
@@ -479,7 +461,7 @@ If you need to manually remove job artifacts associated with multiple jobs while
1. Delete job artifacts older than a specific date:
- NOTE: **NOTE:**
+ NOTE: **Note:**
This step will also erase artifacts that users have chosen to
["keep"](../ci/pipelines/job_artifacts.md#browsing-artifacts).
@@ -500,7 +482,7 @@ If you need to manually remove job artifacts associated with multiple jobs while
#### Delete job artifacts and logs from jobs completed before a specific date
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
These commands remove data permanently from the database and from disk. We
highly recommend running them only under the guidance of a Support Engineer, or
running them in a test environment with a backup of the instance ready to be
diff --git a/doc/administration/job_logs.md b/doc/administration/job_logs.md
index d3484536a76..4dba33b796a 100644
--- a/doc/administration/job_logs.md
+++ b/doc/administration/job_logs.md
@@ -73,7 +73,7 @@ job output in the UI will be empty.
For example, to delete all job logs older than 60 days, run the following from a shell in your GitLab instance:
-DANGER: **Warning:**
+DANGER: **Danger:**
This command will permanently delete the log files and is irreversible.
```shell
diff --git a/doc/administration/lfs/index.md b/doc/administration/lfs/index.md
index dd0e25b05f1..4a8151bd091 100644
--- a/doc/administration/lfs/index.md
+++ b/doc/administration/lfs/index.md
@@ -63,6 +63,11 @@ GitLab provides two different options for the uploading mechanism: "Direct uploa
[Read more about using object storage with GitLab](../object_storage.md).
+NOTE: **Note:**
+In GitLab 13.2 and later, we recommend using the
+[consolidated object storage settings](../object_storage.md#consolidated-object-storage-configuration).
+This section describes the earlier configuration format.
+
**Option 1. Direct upload**
1. User pushes an `lfs` file to the GitLab instance
@@ -86,54 +91,10 @@ The following general settings are supported.
| `proxy_download` | Set to true to enable proxying all files served. Option allows to reduce egress traffic as this allows clients to download directly from remote storage instead of proxying all data | `false` |
| `connection` | Various connection options described below | |
-The `connection` settings match those provided by [Fog](https://github.com/fog).
+See [the available connection settings for different providers](../object_storage.md#connection-settings).
Here is a configuration example with S3.
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | AWS |
-| `aws_access_key_id` | AWS credentials, or compatible | `ABC123DEF456` |
-| `aws_secret_access_key` | AWS credentials, or compatible | `ABC123DEF456ABC123DEF456ABC123DEF456` |
-| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
-| `region` | AWS region | us-east-1 |
-| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
-| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
-
-Here is a configuration example with GCS.
-
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | `Google` |
-| `google_project` | GCP project name | `gcp-project-12345` |
-| `google_client_email` | The email address of the service account | `foo@gcp-project-12345.iam.gserviceaccount.com` |
-| `google_json_key_location` | The JSON key path | `/path/to/gcp-project-12345-abcde.json` |
-
-NOTE: **Note:**
-The service account must have permission to access the bucket.
-[See more](https://cloud.google.com/storage/docs/authentication)
-
-Here is a configuration example with Rackspace Cloud Files.
-
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | `Rackspace` |
-| `rackspace_username` | The username of the Rackspace account with access to the container | `joe.smith` |
-| `rackspace_api_key` | The API key of the Rackspace account with access to the container | `ABC123DEF456ABC123DEF456ABC123DE` |
-| `rackspace_region` | The Rackspace storage region to use, a three letter code from the [list of service access endpoints](https://developer.rackspace.com/docs/cloud-files/v1/general-api-info/service-access/) | `iad` |
-| `rackspace_temp_url_key` | The private key you have set in the Rackspace API for temporary URLs. Read more [here](https://developer.rackspace.com/docs/cloud-files/v1/use-cases/public-access-to-your-cloud-files-account/#tempurl) | `ABC123DEF456ABC123DEF456ABC123DE` |
-
-NOTE: **Note:**
-Regardless of whether the container has public access enabled or disabled, Fog will
-use the TempURL method to grant access to LFS objects. If you see errors in logs referencing
-instantiating storage with a `temp-url-key`, ensure that you have set the key properly
-on the Rackspace API and in `gitlab.rb`. You can verify the value of the key Rackspace
-has set by sending a GET request with token header to the service access endpoint URL
-and comparing the output of the returned headers.
-
### Manual uploading to an object storage
There are two ways to manually do the same thing as automatic uploading (described above).
diff --git a/doc/administration/logs.md b/doc/administration/logs.md
index 7d7053a26db..3db9d32563e 100644
--- a/doc/administration/logs.md
+++ b/doc/administration/logs.md
@@ -26,7 +26,7 @@ GitLab, thanks to [Lograge](https://github.com/roidrage/lograge/). Note that
requests from the API are logged to a separate file in `api_json.log`.
Each line contains a JSON line that can be ingested by services like Elasticsearch and Splunk.
-Line breaks have been added to this example for legibility:
+Line breaks were added to examples for legibility:
```json
{
@@ -79,7 +79,7 @@ seconds:
User clone and fetch activity using HTTP transport appears in this log as `action: git_upload_pack`.
In addition, the log contains the originating IP address,
-(`remote_ip`),the user's ID (`user_id`), and username (`username`).
+(`remote_ip`), the user's ID (`user_id`), and username (`username`).
Some endpoints such as `/search` may make requests to Elasticsearch if using
[Advanced Global Search](../user/search/advanced_global_search.md). These will
@@ -112,7 +112,8 @@ The ActionCable connection or channel class is used as the `controller`.
}
```
-NOTE: **Note:** Starting with GitLab 12.5, if an error occurs, an
+NOTE: **Note:**
+Starting with GitLab 12.5, if an error occurs, an
`exception` field is included with `class`, `message`, and
`backtrace`. Previous versions included an `error` field instead of
`exception.class` and `exception.message`. For example:
@@ -227,7 +228,7 @@ It helps you see requests made directly to the API. For example:
}
```
-This entry shows an access to an internal endpoint to check whether an
+This entry shows an internal endpoint accessed to check whether an
associated SSH key can download the project in question via a `git fetch` or
`git clone`. In this example, we see:
@@ -320,7 +321,7 @@ packages or in `/home/git/gitlab/log/kubernetes.log` for
installations from source.
It logs information related to the Kubernetes Integration including errors
-during installing cluster applications on your GitLab managed Kubernetes
+during installing cluster applications on your managed Kubernetes
clusters.
Each line contains a JSON line that can be ingested by services like Elasticsearch and Splunk.
@@ -362,7 +363,7 @@ After 12.2, this file was renamed from `githost.log` to
`git_json.log` and stored in JSON format.
GitLab has to interact with Git repositories, but in some rare cases
-something can go wrong, and in this case you will know what exactly
+something can go wrong, and in this case you may need to know what exactly
happened. This log file contains all failed requests from GitLab to Git
repositories. In the majority of cases this file will be useful for developers
only. For example:
@@ -473,8 +474,8 @@ This file lives in `/var/log/gitlab/gitlab-rails/sidekiq_client.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/sidekiq_client.log` for
installations from source.
-This file contains logging information about jobs before they are start
-being processed by Sidekiq, for example before being enqueued.
+This file contains logging information about jobs before Sidekiq starts
+processing them, such as before being enqueued.
This log file follows the same structure as
[`sidekiq.log`](#sidekiqlog), so it will be structured as JSON if
@@ -571,32 +572,45 @@ User clone/fetch activity using SSH transport appears in this log as `executing
## Gitaly Logs
-This file lives in `/var/log/gitlab/gitaly/current` and is produced by [runit](http://smarden.org/runit/). `runit` is packaged with Omnibus and a brief explanation of its purpose is available [in the omnibus documentation](https://docs.gitlab.com/omnibus/architecture/#runit). [Log files are rotated](http://smarden.org/runit/svlogd.8.html), renamed in Unix timestamp format and `gzip`-compressed (e.g. `@1584057562.s`).
+This file lives in `/var/log/gitlab/gitaly/current` and is produced by [runit](http://smarden.org/runit/). `runit` is packaged with Omnibus GitLab and a brief explanation of its purpose is available [in the Omnibus GitLab documentation](https://docs.gitlab.com/omnibus/architecture/#runit). [Log files are rotated](http://smarden.org/runit/svlogd.8.html), renamed in Unix timestamp format, and `gzip`-compressed (like `@1584057562.s`).
### `grpc.log`
This file lives in `/var/log/gitlab/gitlab-rails/grpc.log` for Omnibus GitLab packages. Native [gRPC](https://grpc.io/) logging used by Gitaly.
-## `puma_stderr.log` & `puma_stdout.log`
+## Puma Logs
-This file lives in `/var/log/gitlab/puma/puma_stderr.log` and `/var/log/gitlab/puma/puma_stdout.log` for
-Omnibus GitLab packages or in `/home/git/gitlab/log/puma_stderr.log` and `/home/git/gitlab/log/puma_stdout.log`
-for installations from source.
+### `puma_stdout.log`
+
+This file lives in `/var/log/gitlab/puma/puma_stdout.log` for
+Omnibus GitLab packages, and `/home/git/gitlab/log/puma_stdout.log` for
+installations from source.
+
+### `puma_stderr.log`
+
+This file lives in `/var/log/gitlab/puma/puma_stderr.log` for
+Omnibus GitLab packages, or in `/home/git/gitlab/log/puma_stderr.log` for
+installations from source.
-## `unicorn_stderr.log` & `unicorn_stdout.log`
+## Unicorn Logs
NOTE: **Note:**
Starting with GitLab 13.0, Puma is the default web server used in GitLab
all-in-one package based installations as well as GitLab Helm chart deployments.
-This file lives in `/var/log/gitlab/unicorn/unicorn_stderr.log` and `/var/log/gitlab/unicorn/unicorn_stdout.log` for
-Omnibus GitLab packages or in `/home/git/gitlab/log/unicorn_stderr.log` and `/home/git/gitlab/log/unicorn_stdout.log`
+### `unicorn_stdout.log`
+
+This file lives in `/var/log/gitlab/unicorn/unicorn_stdout.log` for
+Omnibus GitLab packages or in `/home/git/gitlab/log/unicorn_stdout.log` for
+installations from source.
+
+### `unicorn_stderr.log`
+
+This file lives in `/var/log/gitlab/unicorn/unicorn_stderr.log` for
+Omnibus GitLab packages or in `/home/git/gitlab/log/unicorn_stderr.log`
for installations from source.
-Unicorn is a high-performance forking Web server which is used for
-serving the GitLab application. You can look at this log if, for
-example, your application does not respond. This log contains all
-information about the state of Unicorn processes at any given time.
+These logs contain all information about the state of Unicorn processes at any given time.
```plaintext
I, [2015-02-13T06:14:46.680381 #9047] INFO -- : Refreshing Gem list
@@ -657,7 +671,7 @@ This log records:
- [Protected paths](../user/admin_area/settings/protected_paths.md) abusive requests.
NOTE: **Note:**
-From [%12.3](https://gitlab.com/gitlab-org/gitlab/-/issues/29239), user ID and username are also
+In GitLab versions [12.3](https://gitlab.com/gitlab-org/gitlab/-/issues/29239) and greater, user ID and username are also
recorded on this log, if available.
## `graphql_json.log`
@@ -686,7 +700,7 @@ installations from source.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/19186) in GitLab 12.6.
-This file lives in `/var/log/gitlab/mailroom/mail_room_json.log` for
+This file lives in `/var/log/gitlab/mailroom/current` for
Omnibus GitLab packages or in `/home/git/gitlab/log/mail_room_json.log` for
installations from source.
@@ -793,8 +807,8 @@ This file lives in `/var/log/gitlab/gitlab-rails/service_measurement.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/service_measurement.log` for
installations from source.
-It contain only a single structured log with measurements for each service execution.
-It will contain measurement such as: number of SQL calls, `execution_time`, `gc_stats`, memory usage, etc...
+It contains only a single structured log with measurements for each service execution.
+It will contain measurements such as the number of SQL calls, `execution_time`, `gc_stats`, and `memory usage`.
For example:
@@ -870,22 +884,46 @@ For example:
}
```
+## Mattermost Logs
+
+For Omnibus GitLab installations, Mattermost logs reside in `/var/log/gitlab/mattermost/mattermost.log`.
+
## Workhorse Logs
-For Omnibus installations, Workhorse logs reside in `/var/log/gitlab/gitlab-workhorse/current`.
+For Omnibus GitLab installations, Workhorse logs reside in `/var/log/gitlab/gitlab-workhorse/`.
## PostgreSQL Logs
-For Omnibus installations, PostgreSQL logs reside in `/var/log/gitlab/postgresql/current`.
+For Omnibus GitLab installations, PostgreSQL logs reside in `/var/log/gitlab/postgresql/`.
## Prometheus Logs
-For Omnibus installations, Prometheus logs reside in `/var/log/gitlab/prometheus/current`.
+For Omnibus GitLab installations, Prometheus logs reside in `/var/log/gitlab/prometheus/`.
## Redis Logs
-For Omnibus installations, Redis logs reside in `/var/log/gitlab/redis/current`.
+For Omnibus GitLab installations, Redis logs reside in `/var/log/gitlab/redis/`.
-## Mattermost Logs
+## Alertmanager Logs
+
+For Omnibus GitLab installations, Alertmanager logs reside in `/var/log/gitlab/alertmanager/`.
+
+## Crond Logs
+
+For Omnibus GitLab installations, crond logs reside in `/var/log/gitlab/crond/`.
+
+## Grafana Logs
+
+For Omnibus GitLab installations, Grafana logs reside in `/var/log/gitlab/grafana/`.
+
+## LogRotate Logs
+
+For Omnibus GitLab installations, logrotate logs reside in `/var/log/gitlab/logrotate/`.
+
+## GitLab Monitor Logs
+
+For Omnibus GitLab installations, GitLab Monitor logs reside in `/var/log/gitlab/gitlab-monitor/`.
+
+## GitLab Exporter
-For Omnibus installations, Mattermost logs reside in `/var/log/gitlab/mattermost/mattermost.log`.
+For Omnibus GitLab installations, GitLab Exporter logs reside in `/var/log/gitlab/gitlab-exporter/`.
diff --git a/doc/administration/merge_request_diffs.md b/doc/administration/merge_request_diffs.md
index da9ca42ed9e..93cdbff6621 100644
--- a/doc/administration/merge_request_diffs.md
+++ b/doc/administration/merge_request_diffs.md
@@ -72,6 +72,11 @@ be configured already.
## Object Storage Settings
+NOTE: **Note:**
+In GitLab 13.2 and later, we recommend using the
+[consolidated object storage settings](object_storage.md#consolidated-object-storage-configuration).
+This section describes the earlier configuration format.
+
For source installations, these settings are nested under `external_diffs:` and
then `object_store:`. On Omnibus installations, they are prefixed by
`external_diffs_object_store_`.
@@ -87,20 +92,7 @@ then `object_store:`. On Omnibus installations, they are prefixed by
### S3 compatible connection settings
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `AWS` for compatible hosts | AWS |
-| `aws_access_key_id` | AWS credentials, or compatible | |
-| `aws_secret_access_key` | AWS credentials, or compatible | |
-| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
-| `region` | AWS region | us-east-1 |
-| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
-| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
+See [the available connection settings for different providers](object_storage.md#connection-settings).
**In Omnibus installations:**
@@ -195,3 +187,51 @@ conditions become true:
These rules strike a balance between space and performance by only storing
frequently-accessed diffs in the database. Diffs that are less likely to be
accessed are moved to external storage instead.
+
+## Correcting incorrectly-migrated diffs
+
+Versions of GitLab earlier than `v13.0.0` would incorrectly record the location
+of some merge request diffs when [external diffs in object storage](#object-storage-settings)
+were enabled. This mainly affected imported merge requests, and was resolved
+with [this merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/31005).
+
+If you are using object storage, have never used on-disk storage for external
+diffs, the **Changes** tab for some merge requests fails to load with a 500 error,
+and the exception for that error is of this form:
+
+```plain
+Errno::ENOENT (No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/external-diffs/merge_request_diffs/mr-6167082/diff-8199789)
+```
+
+Then you are affected by this issue. Since it's not possible to safely determine
+all these conditions automatically, we've provided a Rake task in GitLab v13.2.0
+that you can run manually to correct the data:
+
+**In Omnibus installations:**
+
+```shell
+sudo gitlab-rake gitlab:external_diffs:force_object_storage
+```
+
+**In installations from source:**
+
+```shell
+sudo -u git -H bundle exec rake gitlab:external_diffs:force_object_storage RAILS_ENV=production
+```
+
+Environment variables can be provided to modify the behavior of the task. The
+available variables are:
+
+| Name | Default value | Purpose |
+| ---- | ------------- | ------- |
+| `ANSI` | `true` | Use ANSI escape codes to make output more understandable |
+| `BATCH_SIZE` | `1000` | Iterate through the table in batches of this size |
+| `START_ID` | `nil` | If set, begin scanning at this ID |
+| `END_ID` | `nil` | If set, stop scanning at this ID |
+| `UPDATE_DELAY` | `1` | Number of seconds to sleep between updates |
+
+The `START_ID` and `END_ID` variables may be used to run the update in parallel,
+by assigning different processes to different parts of the table. The `BATCH_SIZE`
+and `UPDATE_DELAY` variables allow the speed of the migration to be traded off
+against concurrent access to the table. The `ANSI` variable should be set to
+`false` if your terminal does not support ANSI escape codes.
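+
+For example, a run that processes only IDs 1 to 100,000 in batches of 500, with no
+delay between updates and plain output, might look like this on Omnibus installations
+(the values shown are illustrative):
+
+```shell
+sudo gitlab-rake gitlab:external_diffs:force_object_storage START_ID=1 END_ID=100000 BATCH_SIZE=500 UPDATE_DELAY=0 ANSI=false
+```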
diff --git a/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png b/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png
new file mode 100644
index 00000000000..1d61823ce41
--- /dev/null
+++ b/doc/administration/monitoring/gitlab_self_monitoring_project/img/self_monitoring_default_dashboard.png
Binary files differ
diff --git a/doc/administration/monitoring/gitlab_self_monitoring_project/index.md b/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
index d05fb803c5c..44ba26296b9 100644
--- a/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
+++ b/doc/administration/monitoring/gitlab_self_monitoring_project/index.md
@@ -26,7 +26,7 @@ of the project shows some basic resource usage charts, such as CPU and memory us
of each server in [Omnibus GitLab](https://docs.gitlab.com/omnibus/) installations.
You can also use the project to configure your own
-[custom metrics](../../../user/project/integrations/prometheus.md#adding-custom-metrics) using
+[custom metrics](../../../operations/metrics/index.md#adding-custom-metrics) using
metrics exposed by the [GitLab exporter](../prometheus/gitlab_metrics.md#metrics-available).
## Creating the self monitoring project
@@ -47,6 +47,19 @@ project. If you create the project again, it will be created in its default stat
It can take a few seconds for it to be deleted.
1. After the project is deleted, GitLab displays a message confirming your action.
+## Dashboards available in Omnibus GitLab
+
+Omnibus GitLab provides a dashboard that displays CPU and memory usage
+of each GitLab server. To select the servers to be displayed in the
+panels, provide a regular expression in the **Instance label regex** field.
+The dashboard uses metrics available in
+[Omnibus GitLab](https://docs.gitlab.com/omnibus/) installations.
+
+![GitLab self monitoring default dashboard](img/self_monitoring_default_dashboard.png)
+
+You can also
+[create your own dashboards](../../../operations/metrics/dashboards/index.md#defining-custom-dashboards-per-project).
+
## Connection to Prometheus
The project will be automatically configured to connect to the
@@ -60,18 +73,18 @@ you should
## Taking action on Prometheus alerts **(ULTIMATE)**
-You can [add a webhook](../../../user/project/integrations/prometheus.md#external-prometheus-instances)
+You can [add a webhook](../../../operations/metrics/alerts.md#external-prometheus-instances)
to the Prometheus configuration in order for GitLab to receive notifications of any alerts.
Once the webhook is setup, you can
-[take action on incoming alerts](../../../user/project/integrations/prometheus.md#taking-action-on-incidents-ultimate).
+[take action on incoming alerts](../../../operations/metrics/alerts.md#trigger-actions-from-alerts-ultimate).
## Adding custom metrics to the self monitoring project
You can add custom metrics in the self monitoring project by:
-1. [Duplicating](../../../user/project/integrations/prometheus.md#duplicating-a-gitlab-defined-dashboard) the default dashboard.
-1. [Editing](../../../user/project/integrations/prometheus.md#view-and-edit-the-source-file-of-a-custom-dashboard) the newly created dashboard file and configuring it with [dashboard YAML properties](../../../user/project/integrations/prometheus.md#dashboard-yaml-properties).
+1. [Duplicating](../../../operations/metrics/dashboards/index.md#duplicating-a-gitlab-defined-dashboard) the default dashboard.
+1. [Editing](../../../operations/metrics/dashboards/index.md#view-and-edit-the-source-file-of-a-custom-dashboard) the newly created dashboard file and configuring it with [dashboard YAML properties](../../../operations/metrics/dashboards/yaml.md).
## Troubleshooting
diff --git a/doc/administration/monitoring/performance/grafana_configuration.md b/doc/administration/monitoring/performance/grafana_configuration.md
index 4dbe3aed84e..96f1377fb73 100644
--- a/doc/administration/monitoring/performance/grafana_configuration.md
+++ b/doc/administration/monitoring/performance/grafana_configuration.md
@@ -6,85 +6,90 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Grafana Configuration
-[Grafana](https://grafana.com/) is a tool that allows you to visualize time
-series metrics through graphs and dashboards. GitLab writes performance data to Prometheus
-and Grafana will allow you to query to display useful graphs.
+[Grafana](https://grafana.com/) is a tool that enables you to visualize time
+series metrics through graphs and dashboards. GitLab writes performance data to Prometheus,
+and Grafana allows you to query the data to display useful graphs.
## Installation
-[Omnibus GitLab can help you install Grafana (recommended)](https://docs.gitlab.com/omnibus/settings/grafana.html)
+Omnibus GitLab can [help you install Grafana (recommended)](https://docs.gitlab.com/omnibus/settings/grafana.html),
or you can install it from the package repositories (Yum/Apt) that Grafana supplies.
See [Grafana installation documentation](https://grafana.com/docs/grafana/latest/installation/)
for detailed steps.
NOTE: **Note:**
Before starting Grafana for the first time, set the admin user
-and password in `/etc/grafana/grafana.ini`. Otherwise, the default password
-will be `admin`.
+and password in `/etc/grafana/grafana.ini`. If you don't, the default password
+is `admin`.
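+
+A minimal sketch of the relevant `grafana.ini` keys (the password shown is a placeholder):
+
+```ini
+[security]
+admin_user = admin
+admin_password = <use-a-strong-password>
+```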
## Configuration
-Login as the admin user. Expand the menu by clicking the Grafana logo in the
-top left corner. Choose 'Data Sources' from the menu. Then, click 'Add new'
-in the top bar.
-
-![Grafana empty data source page](img/grafana_data_source_empty.png)
-
-![Grafana data source configurations](img/grafana_data_source_configuration.png)
+1. Log in to Grafana as the admin user.
+1. Expand the menu by clicking the Grafana logo in the top left corner.
+1. Choose **Data Sources** from the menu.
+1. Click **Add new** in the top bar:
+ ![Grafana empty data source page](img/grafana_data_source_empty.png)
+1. Edit the data source to fit your needs:
+ ![Grafana data source configurations](img/grafana_data_source_configuration.png)
+1. Click **Save**.
## Import Dashboards
-You can now import a set of default dashboards that will give you a good
-start on displaying useful information. GitLab has published a set of default
-[Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards) to get you started. Clone the
-repository or download a zip/tarball, then follow these steps to import each
-JSON file.
-
-Open the dashboard dropdown menu and click 'Import'
-
-![Grafana dashboard dropdown](img/grafana_dashboard_dropdown.png)
-
-Click 'Choose file' and browse to the location where you downloaded or cloned
-the dashboard repository. Pick one of the JSON files to import.
-
-![Grafana dashboard import](img/grafana_dashboard_import.png)
-
-Once the dashboard is imported, be sure to click save icon in the top bar. If
-you do not save the dashboard after importing it will be removed when you
-navigate away.
-
-![Grafana save icon](img/grafana_save_icon.png)
+You can now import a set of default dashboards to start displaying useful information.
+GitLab has published a set of default
+[Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards) to get you started.
+Clone the repository, or download a ZIP file or tarball, then follow these steps to import each
+JSON file individually:
+
+1. Log in to Grafana as the admin user.
+1. Open the dashboard dropdown menu and click **Import**:
+ ![Grafana dashboard dropdown](img/grafana_dashboard_dropdown.png)
+1. Click **Choose file**, and browse to the location where you downloaded or
+ cloned the dashboard repository. Select a JSON file to import:
+ ![Grafana dashboard import](img/grafana_dashboard_import.png)
+1. After the dashboard is imported, click the **Save dashboard** icon in the top bar:
+ ![Grafana save icon](img/grafana_save_icon.png)
+
+ NOTE: **Note:**
+ If you don't save the dashboard after importing it, the dashboard is removed
+ when you navigate away from the page.
Repeat this process for each dashboard you wish to import.
-Alternatively you can automatically import all the dashboards into your Grafana
-instance. See the README of the [Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards)
-repository for more information on this process.
+Alternatively, you can import all the dashboards into your Grafana
+instance. For more information about this process, see the
+[README of the Grafana dashboards](https://gitlab.com/gitlab-org/grafana-dashboards)
+repository.
## Integration with GitLab UI
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/61005) in GitLab 12.1.
-If you have set up Grafana, you can enable a link to access it easily from the sidebar:
+After setting up Grafana, you can enable a link to access it easily from the
+GitLab sidebar:
-1. Go to the **Admin Area > Settings > Metrics and profiling**.
+1. Navigate to the **{admin}** **Admin Area > Settings > Metrics and profiling**.
1. Expand **Metrics - Grafana**.
-1. Check the "Enable access to Grafana" checkbox.
-1. If Grafana is enabled through Omnibus GitLab and on the same server,
- leave **Grafana URL** unchanged. It should be `/-/grafana`.
-
- In any other case, enter the full URL of the Grafana instance.
+1. Check the **Enable access to Grafana** checkbox.
+1. Configure the **Grafana URL**:
+ - *If Grafana is enabled through Omnibus GitLab and on the same server,*
+ leave **Grafana URL** unchanged. It should be `/-/grafana`.
+ - *Otherwise,* enter the full URL of the Grafana instance.
1. Click **Save changes**.
-1. The new link will be available in the **Admin Area > Monitoring > Metrics Dashboard**.
+
+GitLab displays your link in the **{admin}** **Admin Area > Monitoring > Metrics Dashboard**.
## Security Update
-Users running GitLab version 12.0 or later should immediately upgrade to one of the following security releases due to a known vulnerability with the embedded Grafana dashboard:
+Users running GitLab version 12.0 or later should immediately upgrade to one of the
+following security releases due to a known vulnerability with the embedded Grafana dashboard:
- 12.0.6
- 12.1.6
-After upgrading, the Grafana dashboard will be disabled and the location of your existing Grafana data will be changed from `/var/opt/gitlab/grafana/data/` to `/var/opt/gitlab/grafana/data.bak.#{Date.today}/`.
+After upgrading, the Grafana dashboard is disabled, and the location of your
+existing Grafana data is changed from `/var/opt/gitlab/grafana/data/` to
+`/var/opt/gitlab/grafana/data.bak.#{Date.today}/`.
To prevent the data from being relocated, you can run the following command prior to upgrading:
@@ -100,19 +105,23 @@ sudo mv /var/opt/gitlab/grafana/data.bak.xxxx/ /var/opt/gitlab/grafana/data/
However, you should **not** reinstate your old data _except_ under one of the following conditions:
-1. If you are certain that you changed your default admin password when you enabled Grafana
-1. If you run GitLab in a private network, accessed only by trusted users, and your Grafana login page has not been exposed to the internet
+1. If you're certain that you changed your default admin password when you enabled Grafana.
+1. If you run GitLab in a private network, accessed only by trusted users, and your
+ Grafana login page has not been exposed to the internet.
-If you require access to your old Grafana data but do not meet one of these criteria, you may consider:
+If you require access to your old Grafana data but don't meet one of these criteria, you may consider:
1. Reinstating it temporarily.
1. [Exporting the dashboards](https://grafana.com/docs/grafana/latest/reference/export_import/#exporting-a-dashboard) you need.
1. Refreshing the data and [re-importing your dashboards](https://grafana.com/docs/grafana/latest/reference/export_import/#importing-a-dashboard).
DANGER: **Danger:**
-This poses a temporary vulnerability while your old Grafana data is in use and the decision to do so should be weighed carefully with your need to access existing data and dashboards.
+These actions pose a temporary vulnerability while your old Grafana data is in use.
+Deciding to take any of these actions should be weighed carefully with your need to access
+existing data and dashboards.
-For more information and further mitigation details, please refer to our [blog post on the security release](https://about.gitlab.com/releases/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
+For more information and further mitigation details, please refer to our
+[blog post on the security release](https://about.gitlab.com/releases/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
---
diff --git a/doc/administration/monitoring/performance/img/performance_bar_configuration_settings.png b/doc/administration/monitoring/performance/img/performance_bar_configuration_settings.png
deleted file mode 100644
index fafc50cd000..00000000000
--- a/doc/administration/monitoring/performance/img/performance_bar_configuration_settings.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_frontend.png b/doc/administration/monitoring/performance/img/performance_bar_frontend.png
index 32a241e032b..9cc874c883a 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_frontend.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_frontend.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png b/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
index 2c43201cbd0..6caa2869d9b 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png b/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png
index 4e42d904cdf..386558da813 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png b/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png
index ecb2dffbf8d..f2df8c794db 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png
index 74711387ffe..3872c8b41b2 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png b/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png
index 210f80a713d..0340ca1b2f7 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/performance_bar.md b/doc/administration/monitoring/performance/performance_bar.md
index c681dedbca8..8002a9ab296 100644
--- a/doc/administration/monitoring/performance/performance_bar.md
+++ b/doc/administration/monitoring/performance/performance_bar.md
@@ -6,71 +6,83 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Performance Bar
-A Performance Bar can be displayed, to dig into the performance of a page. When
-activated, it looks as follows:
+You can display the GitLab Performance Bar to see statistics for the performance
+of a page. When activated, it looks as follows:
![Performance Bar](img/performance_bar.png)
-It allows you to see (from left to right):
+From left to right, it displays:
-- the current host serving the page
-- time taken and number of DB queries; click through for details of these queries
+- **Current Host**: the current host serving the page.
+- **Database queries**: the time taken (in milliseconds) and the total number
+ of database queries, displayed in the format `00ms / 00pg`. Click to display
+ a modal window with more details:
![SQL profiling using the Performance Bar](img/performance_bar_sql_queries.png)
-- time taken and number of [Gitaly](../../gitaly/index.md) calls; click through for details of these calls
+- **Gitaly calls**: the time taken (in milliseconds) and the total number of
+ [Gitaly](../../gitaly/index.md) calls. Click to display a modal window with more
+ details:
![Gitaly profiling using the Performance Bar](img/performance_bar_gitaly_calls.png)
-- time taken and number of [Rugged](../../high_availability/nfs.md#improving-nfs-performance-with-gitlab) calls; click through for details of these calls
+- **Rugged calls**: the time taken (in milliseconds) and the total number of
+ [Rugged](../../high_availability/nfs.md#improving-nfs-performance-with-gitlab) calls.
+ Click to display a modal window with more details:
![Rugged profiling using the Performance Bar](img/performance_bar_rugged_calls.png)
-- time taken and number of Redis calls; click through for details of these calls
+- **Redis calls**: the time taken (in milliseconds) and the total number of
+ Redis calls. Click to display a modal window with more details:
![Redis profiling using the Performance Bar](img/performance_bar_redis_calls.png)
-- total load timings of the page; click through for details of these calls. Values in the following order:
- - Backend - Time that the actual base page took to load
- - [First Contentful Paint](hhttps://web.dev/first-contentful-paint/) - Time until something was visible to the user
- - [DomContentLoaded](https://developers.google.com/web/fundamentals/performance/critical-rendering-path/measure-crp) Event
- - Number of Requests that the page loaded
- ![Frontend requests using the Performance Bar](img/performance_bar_frontend.png)
-- a link to add a request's details to the performance bar; the request can be
- added by its full URL (authenticated as the current user), or by the value of
- its `X-Request-Id` header
-- a link to download the raw JSON used to generate the Performance Bar reports
-
-On the far right is a request selector that allows you to view the same metrics
-(excluding the page timing and line profiler) for any requests made while the
-page was open. Only the first two requests per unique URL are captured.
+- **Elasticsearch calls**: the time taken (in milliseconds) and the total number of
+ Elasticsearch calls. Click to display a modal window with more details.
+- **Load timings** of the page: if your browser supports load timings (Chromium
+  and Chrome), several values are shown in milliseconds, separated by slashes.
+  Click to display a modal window with more details. The values, from left to right:
+ - **Backend**: time needed for the base page to load.
+ - [**First Contentful Paint**](https://web.dev/first-contentful-paint/):
+ Time until something was visible to the user.
+ - [**DomContentLoaded**](https://developers.google.com/web/fundamentals/performance/critical-rendering-path/measure-crp) Event.
+ - **Total number of requests** the page loaded:
+ ![Frontend requests using the Performance Bar](img/performance_bar_frontend.png)
+- **Trace**: If Jaeger is integrated, **Trace** links to a Jaeger tracing page
+ with the current request's `correlation_id` included.
+- **+**: A link to add a request's details to the performance bar. The request
+ can be added by its full URL (authenticated as the current user), or by the value of
+ its `X-Request-Id` header.
+- **Download**: a link to download the raw JSON used to generate the Performance Bar reports.
+- **Request Selector**: a select box displayed on the right-hand side of the
+ Performance Bar which enables you to view these metrics for any requests made while
+ the current page was open. Only the first two requests per unique URL are captured.
## Request warnings
-For requests exceeding predefined limits, a warning icon will be shown
-next to the failing metric, along with an explanation. In this example,
-the Gitaly call duration exceeded the threshold:
+Requests exceeding predefined limits display a warning **{warning}** icon and
+explanation next to the failing metric. In this example, the Gitaly call duration
+exceeded the threshold:
![Gitaly call duration exceeded threshold](img/performance_bar_gitaly_threshold.png)
-If any requests on the current page generated warnings, the icon will
-appear next to the request selector:
+If any requests on the current page generated warnings, the warning icon displays
+next to the **Request selector**:
![Request selector showing two requests with warnings](img/performance_bar_request_selector_warning.png)
-And requests with warnings are indicated in the request selector with a
-`(!)` after their path:
+Requests with warnings display `(!)` after their path in the **Request selector**:
![Request selector showing dropdown](img/performance_bar_request_selector_warning_expanded.png)
## Enable the Performance Bar via the Admin panel
-GitLab Performance Bar is disabled by default. To enable it for a given group,
-navigate to **Admin Area > Settings > Metrics and profiling**
-(`admin/application_settings/metrics_and_profiling`), and expand the section
-**Profiling - Performance bar**.
+The GitLab Performance Bar is disabled by default. To enable it for a given group:
-The only required setting you need to set is the full path of the group that
-will be allowed to display the Performance Bar.
-Make sure _Enable the Performance Bar_ is checked and hit
-**Save** to save the changes.
+1. Sign in as a user with Administrator [permissions](../../../user/permissions.md).
+1. In the menu bar, click the **{admin}** **Admin Area** icon.
+1. Navigate to **{settings}** **Settings > Metrics and profiling**
+ (`admin/application_settings/metrics_and_profiling`), and expand the section
+ **Profiling - Performance bar**.
+1. Click **Enable access to the Performance Bar**.
+1. In the **Allowed group** field, provide the full path of the group allowed
+ to access the GitLab Performance Bar.
+1. Click **Save changes**.
-Once the Performance Bar is enabled, you will need to press the [<kbd>p</kbd> +
-<kbd>b</kbd> keyboard shortcut](../../../user/shortcuts.md) to actually
-display it.
+## Keyboard shortcut for the Performance Bar
-You can toggle the Bar using the same shortcut.
-
-![GitLab Performance Bar Admin Settings](img/performance_bar_configuration_settings.png)
+After enabling the GitLab Performance Bar, press the [<kbd>p</kbd> +
+<kbd>b</kbd> keyboard shortcut](../../../user/shortcuts.md) to display it, and
+again to hide it.
diff --git a/doc/administration/monitoring/prometheus/gitlab_exporter.md b/doc/administration/monitoring/prometheus/gitlab_exporter.md
index 3effca4a2bf..686ed14ba42 100644
--- a/doc/administration/monitoring/prometheus/gitlab_exporter.md
+++ b/doc/administration/monitoring/prometheus/gitlab_exporter.md
@@ -9,27 +9,25 @@ info: To determine the technical writer assigned to the Stage/Group associated w
>- Available since [Omnibus GitLab 8.17](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/1132).
>- Renamed from `GitLab monitor exporter` to `GitLab exporter` in [GitLab 12.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/16511).
-The [GitLab exporter](https://gitlab.com/gitlab-org/gitlab-exporter) allows you to
-measure various GitLab metrics, pulled from Redis and the database, in Omnibus GitLab
+The [GitLab exporter](https://gitlab.com/gitlab-org/gitlab-exporter) enables you to
+measure various GitLab metrics pulled from Redis and the database in Omnibus GitLab
instances.
NOTE: **Note:**
-For installations from source you'll have to install and configure it yourself.
+For installations from source you must install and configure it yourself.
To enable the GitLab exporter in an Omnibus GitLab instance:
-1. [Enable Prometheus](index.md#configuring-prometheus)
-1. Edit `/etc/gitlab/gitlab.rb`
-1. Add or find and uncomment the following line, making sure it's set to `true`:
+1. [Enable Prometheus](index.md#configuring-prometheus).
+1. Edit `/etc/gitlab/gitlab.rb`.
+1. Add, or find and uncomment, the following line, making sure it's set to `true`:
```ruby
gitlab_exporter['enable'] = true
```
1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
- for the changes to take effect
+ for the changes to take effect.
-Prometheus will now automatically begin collecting performance data from
-the GitLab exporter exposed under `localhost:9168`.
-
-[← Back to the main Prometheus page](index.md)
+Prometheus automatically begins collecting performance data from
+the GitLab exporter exposed at `localhost:9168`.
diff --git a/doc/administration/monitoring/prometheus/gitlab_metrics.md b/doc/administration/monitoring/prometheus/gitlab_metrics.md
index f3084b1732e..fa5e5da80c3 100644
--- a/doc/administration/monitoring/prometheus/gitlab_metrics.md
+++ b/doc/administration/monitoring/prometheus/gitlab_metrics.md
@@ -14,18 +14,18 @@ To enable the GitLab Prometheus metrics:
1. [Restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart) for the changes to take effect.
NOTE: **Note:**
-For installations from source you'll have to configure it yourself.
+For installations from source you must configure it yourself.
## Collecting the metrics
GitLab monitors its own internal service metrics, and makes them available at the
`/-/metrics` endpoint. Unlike other [Prometheus](https://prometheus.io) exporters, to access
-it, the client IP address needs to be [explicitly allowed](../ip_whitelist.md).
+the metrics, the client IP address must be [explicitly allowed](../ip_whitelist.md).
For [Omnibus GitLab](https://docs.gitlab.com/omnibus/) and Chart installations,
these metrics are enabled and collected as of
[GitLab 9.4](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/1702).
-For source installations or earlier versions, these metrics must be enabled
+For source installations, these metrics must be enabled
manually and collected by a Prometheus server.
For enabling and viewing metrics from Sidekiq nodes, see [Sidekiq metrics](#sidekiq-metrics).
@@ -40,7 +40,7 @@ The following metrics are available:
| `gitlab_banzai_cacheless_render_real_duration_seconds` | Histogram | 9.4 | Duration of rendering Markdown into HTML when cached output does not exist | `controller`, `action` |
| `gitlab_cache_misses_total` | Counter | 10.2 | Cache read miss | `controller`, `action` |
| `gitlab_cache_operation_duration_seconds` | Histogram | 10.2 | Cache access time | |
-| `gitlab_cache_operations_total` | Counter | 12.2 | Cache operations by controller/action | `controller`, `action`, `operation` |
+| `gitlab_cache_operations_total` | Counter | 12.2 | Cache operations by controller or action | `controller`, `action`, `operation` |
| `gitlab_ci_pipeline_creation_duration_seconds` | Histogram | 13.0 | Time in seconds it takes to create a CI/CD pipeline | |
| `gitlab_ci_pipeline_size_builds` | Histogram | 13.1 | Total number of builds within a pipeline grouped by a pipeline source | `source` |
| `job_waiter_started_total` | Counter | 12.9 | Number of batches of jobs started where a web request is waiting for the jobs to complete | `worker` |
@@ -92,16 +92,16 @@ The following metrics are available:
| `gitlab_view_rendering_duration_seconds` | Histogram | 10.2 | Duration for views (histogram) | `controller`, `action`, `view` |
| `http_requests_total` | Counter | 9.4 | Rack request count | `method` |
| `http_request_duration_seconds` | Histogram | 9.4 | HTTP response time from rack middleware | `method`, `status` |
-| `gitlab_transaction_db_count_total` | Counter | 13.1 | Counter for total number of sql calls | `controller`, `action` |
-| `gitlab_transaction_db_write_count_total` | Counter | 13.1 | Counter for total number of write sql calls | `controller`, `action` |
-| `gitlab_transaction_db_cached_count_total` | Counter | 13.1 | Counter for total number of cached sql calls | `controller`, `action` |
+| `gitlab_transaction_db_count_total` | Counter | 13.1 | Counter for total number of SQL calls | `controller`, `action` |
+| `gitlab_transaction_db_write_count_total` | Counter | 13.1 | Counter for total number of write SQL calls | `controller`, `action` |
+| `gitlab_transaction_db_cached_count_total` | Counter | 13.1 | Counter for total number of cached SQL calls | `controller`, `action` |
| `http_redis_requests_duration_seconds` | Histogram | 13.1 | Redis requests duration during web transactions | `controller`, `action` |
| `http_redis_requests_total` | Counter | 13.1 | Redis requests count during web transactions | `controller`, `action` |
| `http_elasticsearch_requests_duration_seconds` **(STARTER)** | Histogram | 13.1 | Elasticsearch requests duration during web transactions | `controller`, `action` |
| `http_elasticsearch_requests_total` **(STARTER)** | Counter | 13.1 | Elasticsearch requests count during web transactions | `controller`, `action` |
| `pipelines_created_total` | Counter | 9.4 | Counter of pipelines created | |
| `rack_uncaught_errors_total` | Counter | 9.4 | Rack connections handling uncaught errors count | |
-| `user_session_logins_total` | Counter | 9.4 | Counter of how many users have logged in | |
+| `user_session_logins_total` | Counter | 9.4 | Counter of how many users have logged in since GitLab was started or restarted | |
| `upload_file_does_not_exist` | Counter | 10.7 in EE, 11.5 in CE | Number of times an upload record could not find its file | |
| `failed_login_captcha_total` | Gauge | 11.0 | Counter of failed CAPTCHA attempts during login | |
| `successful_login_captcha_total` | Gauge | 11.0 | Counter of successful CAPTCHA attempts during login | |
@@ -120,7 +120,7 @@ The following metrics can be controlled by feature flags:
## Sidekiq metrics
Sidekiq jobs may also gather metrics, and these metrics can be accessed if the
-Sidekiq exporter is enabled (for example, using the `monitoring.sidekiq_exporter`
+Sidekiq exporter is enabled: for example, using the `monitoring.sidekiq_exporter`
configuration option in `gitlab.yml`. These metrics are served from the
`/metrics` path on the configured port.
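+
+A minimal sketch of the relevant `gitlab.yml` keys for installations from source,
+assuming the standard layout (the address and port shown are illustrative):
+
+```yaml
+monitoring:
+  sidekiq_exporter:
+    enabled: true
+    address: localhost
+    port: 8082
+```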
@@ -173,6 +173,7 @@ configuration option in `gitlab.yml`. These metrics are served from the
| `geo_repositories_retrying_verification_count` | Gauge | 11.2 | Number of repositories verification failures that Geo is actively trying to correct on secondary | `url` |
| `geo_wikis_retrying_verification_count` | Gauge | 11.2 | Number of wikis verification failures that Geo is actively trying to correct on secondary | `url` |
| `global_search_bulk_cron_queue_size` | Gauge | 12.10 | Number of database records waiting to be synchronized to Elasticsearch | |
+| `global_search_awaiting_indexing_queue_size` | Gauge | 13.2 | Number of database updates waiting to be synchronized to Elasticsearch while indexing is paused | |
| `package_files_count` | Gauge | 13.0 | Number of package files on primary | `url` |
| `package_files_checksummed_count` | Gauge | 13.0 | Number of package files checksummed on primary | `url` |
| `package_files_checksum_failed_count` | Gauge | 13.0 | Number of package files failed to calculate the checksum on primary
@@ -187,16 +188,16 @@ The following metrics are available:
## Connection pool metrics
-These metrics record the status of the database [connection pools](https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html).
+These metrics record the status of the database
+[connection pools](https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html),
+and the metrics all have these labels:
-They all have these labels:
-
-1. `class` - the Ruby class being recorded.
- 1. `ActiveRecord::Base` is the main database connection.
- 1. `Geo::TrackingBase` is the connection to the Geo tracking database, if
- enabled.
-1. `host` - the host name used to connect to the database.
-1. `port` - the port used to connect to the database.
+- `class` - the Ruby class being recorded.
+ - `ActiveRecord::Base` is the main database connection.
+ - `Geo::TrackingBase` is the connection to the Geo tracking database, if
+ enabled.
+- `host` - the host name used to connect to the database.
+- `port` - the port used to connect to the database.
| Metric | Type | Since | Description |
|:----------------------------------------------|:------|:------|:--------------------------------------------------|
@@ -251,15 +252,29 @@ When Puma is used instead of Unicorn, the following metrics are available:
| `puma_idle_threads` | Gauge | 12.0 | Number of spawned threads which are not processing a request |
| `puma_killer_terminations_total` | Gauge | 12.0 | Number of workers terminated by PumaWorkerKiller |
+## Redis metrics
+
+These client metrics are meant to complement Redis server metrics.
+They are broken down per [Redis
+instance](https://docs.gitlab.com/omnibus/settings/redis.html#running-with-multiple-redis-instances),
+and they all have a `storage` label which indicates the Redis
+instance (`cache`, `shared_state`, and so on).
+
+| Metric | Type | Since | Description |
+|:--------------------------------- |:------- |:----- |:----------- |
+| `gitlab_redis_client_exceptions_total` | Counter | 13.2 | Number of Redis client exceptions, broken down by exception class |
+| `gitlab_redis_client_requests_total` | Counter | 13.2 | Number of Redis client requests |
+| `gitlab_redis_client_requests_duration_seconds` | Histogram | 13.2 | Redis request latency, excluding blocking commands |
+
## Metrics shared directory
GitLab's Prometheus client requires a directory to store metrics data shared between multi-process services.
Those files are shared among all instances running under the Unicorn server.
The directory must be accessible to all running Unicorn processes, or
-metrics won't function correctly.
+metrics can't function correctly.
This directory's location is configured using the environment variable `prometheus_multiproc_dir`.
For best performance, create this directory in `tmpfs`.
If GitLab is installed using [Omnibus GitLab](https://docs.gitlab.com/omnibus/)
-and `tmpfs` is available, then the metrics directory will be configured for you.
+and `tmpfs` is available, then GitLab configures the metrics directory for you.
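+
+For installations from source, a sketch of pointing the metrics directory at a
+`tmpfs`-backed location before starting GitLab (the path is illustrative):
+
+```shell
+# Create a directory on tmpfs and make it writable by the GitLab user,
+# then export the variable in the environment that starts GitLab.
+sudo mkdir -p /run/gitlab/prometheus-multiproc
+sudo chown git:git /run/gitlab/prometheus-multiproc
+export prometheus_multiproc_dir="/run/gitlab/prometheus-multiproc"
+```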
diff --git a/doc/administration/monitoring/prometheus/index.md b/doc/administration/monitoring/prometheus/index.md
index 1e233b890a2..f0ad0a1a2e6 100644
--- a/doc/administration/monitoring/prometheus/index.md
+++ b/doc/administration/monitoring/prometheus/index.md
@@ -145,6 +145,12 @@ To use an external Prometheus server:
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1']
```
+1. On **all** GitLab Rails (Puma/Unicorn, Sidekiq) servers, set the Prometheus server IP address and listen port. For example:
+
+ ```ruby
+ gitlab_rails['prometheus_address'] = '192.168.0.1:9090'
+ ```
+
1. To scrape NGINX metrics, you'll also need to configure NGINX to allow the Prometheus server
IP. For example:
@@ -165,54 +171,54 @@ To use an external Prometheus server:
```yaml
scrape_configs:
- - job_name: nginx
- static_configs:
- - targets:
- - 1.1.1.1:8060
- - job_name: redis
- static_configs:
- - targets:
- - 1.1.1.1:9121
- - job_name: postgres
- static_configs:
- - targets:
- - 1.1.1.1:9187
- - job_name: node
- static_configs:
- - targets:
- - 1.1.1.1:9100
- - job_name: gitlab-workhorse
- static_configs:
- - targets:
- - 1.1.1.1:9229
- - job_name: gitlab-rails
- metrics_path: "/-/metrics"
- static_configs:
- - targets:
- - 1.1.1.1:8080
- - job_name: gitlab-sidekiq
- static_configs:
- - targets:
- - 1.1.1.1:8082
- - job_name: gitlab_exporter_database
- metrics_path: "/database"
- static_configs:
- - targets:
- - 1.1.1.1:9168
- - job_name: gitlab_exporter_sidekiq
- metrics_path: "/sidekiq"
- static_configs:
- - targets:
- - 1.1.1.1:9168
- - job_name: gitlab_exporter_process
- metrics_path: "/process"
- static_configs:
- - targets:
- - 1.1.1.1:9168
- - job_name: gitaly
- static_configs:
- - targets:
- - 1.1.1.1:9236
+ - job_name: nginx
+ static_configs:
+ - targets:
+ - 1.1.1.1:8060
+ - job_name: redis
+ static_configs:
+ - targets:
+ - 1.1.1.1:9121
+ - job_name: postgres
+ static_configs:
+ - targets:
+ - 1.1.1.1:9187
+ - job_name: node
+ static_configs:
+ - targets:
+ - 1.1.1.1:9100
+ - job_name: gitlab-workhorse
+ static_configs:
+ - targets:
+ - 1.1.1.1:9229
+ - job_name: gitlab-rails
+ metrics_path: "/-/metrics"
+ static_configs:
+ - targets:
+ - 1.1.1.1:8080
+ - job_name: gitlab-sidekiq
+ static_configs:
+ - targets:
+ - 1.1.1.1:8082
+ - job_name: gitlab_exporter_database
+ metrics_path: "/database"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitlab_exporter_sidekiq
+ metrics_path: "/sidekiq"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitlab_exporter_process
+ metrics_path: "/process"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitaly
+ static_configs:
+ - targets:
+ - 1.1.1.1:9236
```
1. Reload the Prometheus server.
diff --git a/doc/administration/object_storage.md b/doc/administration/object_storage.md
index 1dea2de73f6..b51b722fbd7 100644
--- a/doc/administration/object_storage.md
+++ b/doc/administration/object_storage.md
@@ -11,29 +11,430 @@ typically much more performant, reliable, and scalable.
## Options
-Object storage options that GitLab has tested, or is aware of customers using include:
+GitLab has been tested on a number of object storage providers:
-- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/), [Google cloud storage](https://cloud.google.com/storage).
+- [Amazon S3](https://aws.amazon.com/s3/)
+- [Google Cloud Storage](https://cloud.google.com/storage)
+- [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces)
+- [Oracle Cloud Infrastructure](https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm)
+- [OpenStack Swift](https://docs.openstack.org/swift/latest/s3_compat.html)
- On-premises hardware and appliances from various storage vendors.
- MinIO. We have [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
## Configuration guides
-For configuring GitLab to use Object Storage refer to the following guides:
-
-1. Configure [object storage for backups](../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
-1. Configure [object storage for job artifacts](job_artifacts.md#using-object-storage)
- including [incremental logging](job_logs.md#new-incremental-logging-architecture).
-1. Configure [object storage for LFS objects](lfs/index.md#storing-lfs-objects-in-remote-object-storage).
-1. Configure [object storage for uploads](uploads.md#using-object-storage-core-only).
-1. Configure [object storage for merge request diffs](merge_request_diffs.md#using-object-storage).
-1. Configure [object storage for Container Registry](packages/container_registry.md#container-registry-storage-driver) (optional feature).
-1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
-1. Configure [object storage for packages](packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
-1. Configure [object storage for Dependency Proxy](packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
-1. Configure [object storage for Pseudonymizer](pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
-1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
-1. Configure [object storage for Terraform state files](terraform_state.md#using-object-storage-core-only)
+There are two ways of specifying object storage configuration in GitLab:
+
+- [Consolidated form](#consolidated-object-storage-configuration): A single credential is
+ shared by all supported object types.
+- [Storage-specific form](#storage-specific-configuration): Every object defines its
+ own object storage [connection and configuration](#connection-settings).
+
+For more information on the differences and to transition from one form to another, see
+[Transition to consolidated form](#transition-to-consolidated-form).
+
+### Consolidated object storage configuration
+
+> Introduced in [GitLab 13.2](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4368).
+
+Using the consolidated object storage configuration has a number of advantages:
+
+- It can simplify your GitLab configuration since the connection details are shared
+ across object types.
+- It enables the use of [encrypted S3 buckets](#encrypted-s3-buckets).
+- It [uploads files to S3 with proper `Content-MD5` headers](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/222).
+
+NOTE: **Note:**
+Only AWS S3-compatible providers and Google are
+supported at the moment since [direct upload
+mode](../development/uploads.md#direct-upload) must be used. Background
+upload is not supported in this mode. We recommend direct upload mode because
+it does not require a shared folder, and [this setting may become the default](https://gitlab.com/gitlab-org/gitlab/-/issues/27331).
+
+NOTE: **Note:**
+Consolidated object storage configuration cannot be used for
+backups or Mattermost. See [the full table for a complete list](#storage-specific-configuration).
+
+Most types of objects, such as CI artifacts, LFS files, upload
+attachments, and so on can be saved in object storage by specifying a single
+credential for object storage with multiple buckets. A [different bucket
+for each type must be used](#use-separate-buckets).
+
+When the consolidated form is:
+
+- Used with an S3-compatible object storage, Workhorse uses its internal S3 client to
+ upload files.
+- Not used with an S3-compatible object storage, Workhorse falls back to using
+ pre-signed URLs.
+
+See the section on [ETag mismatch errors](#etag-mismatch) for more details.
+
+**In Omnibus installations:**
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following lines, substituting
+ the values you want:
+
+ ```ruby
+ # Consolidated object storage configuration
+ gitlab_rails['object_store']['enabled'] = true
+ gitlab_rails['object_store']['proxy_download'] = true
+ gitlab_rails['object_store']['connection'] = {
+ 'provider' => 'AWS',
+ 'region' => '<eu-central-1>',
+ 'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
+ 'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
+ }
+ gitlab_rails['object_store']['objects']['artifacts']['bucket'] = '<artifacts>'
+ gitlab_rails['object_store']['objects']['external_diffs']['bucket'] = '<external-diffs>'
+ gitlab_rails['object_store']['objects']['lfs']['bucket'] = '<lfs-objects>'
+ gitlab_rails['object_store']['objects']['uploads']['bucket'] = '<uploads>'
+ gitlab_rails['object_store']['objects']['packages']['bucket'] = '<packages>'
+ gitlab_rails['object_store']['objects']['dependency_proxy']['bucket'] = '<dependency-proxy>'
+ gitlab_rails['object_store']['objects']['terraform_state']['bucket'] = '<terraform-state>'
+ ```
+
+ NOTE: For GitLab 9.4 or later, if you're using AWS IAM profiles, be sure to omit the
+ AWS access key and secret access key/value pairs. For example:
+
+ ```ruby
+ gitlab_rails['object_store']['connection'] = {
+ 'provider' => 'AWS',
+ 'region' => '<eu-central-1>',
+ 'use_iam_profile' => true
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+**In installations from source:**
+
+1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
+
+ ```yaml
+ object_store:
+ enabled: true
+ proxy_download: true
+ connection:
+ provider: AWS
+ aws_access_key_id: <AWS_ACCESS_KEY_ID>
+ aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
+ region: <eu-central-1>
+ objects:
+ artifacts:
+ bucket: <artifacts>
+ external_diffs:
+ bucket: <external-diffs>
+ lfs:
+ bucket: <lfs-objects>
+ uploads:
+ bucket: <uploads>
+ packages:
+ bucket: <packages>
+ dependency_proxy:
+ bucket: <dependency_proxy>
+ terraform_state:
+ bucket: <terraform>
+ ```
+
+1. Edit `/home/git/gitlab-workhorse/config.toml` and add or amend the following lines:
+
+ ```toml
+ [object_storage]
+ enabled = true
+ provider = "AWS"
+
+ [object_storage.s3]
+ aws_access_key_id = "<AWS_ACCESS_KEY_ID>"
+ aws_secret_access_key = "<AWS_SECRET_ACCESS_KEY>"
+ ```
+
+1. Save the file and [restart GitLab](restart_gitlab.md#installations-from-source) for the changes to take effect.
+
+#### Common parameters
+
+In the consolidated configuration, the `object_store` section defines a
+common set of parameters. Here we use the YAML from the source
+installation because it's easier to see the inheritance:
+
+```yaml
+ object_store:
+ enabled: true
+ proxy_download: true
+ connection:
+ provider: AWS
+ aws_access_key_id: <AWS_ACCESS_KEY_ID>
+ aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
+ objects:
+ ...
+```
+
+The Omnibus configuration maps directly to this:
+
+```ruby
+gitlab_rails['object_store']['enabled'] = true
+gitlab_rails['object_store']['proxy_download'] = true
+gitlab_rails['object_store']['connection'] = {
+ 'provider' => 'AWS',
+ 'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
+ 'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
+}
+```
+
+| Setting | Description |
+|---------|-------------|
+| `enabled` | Enable/disable object storage |
+| `proxy_download` | Set to `true` to [proxy all files served](#proxy-download) through GitLab. Leaving this as `false` reduces egress traffic, because clients then download directly from remote storage instead of having GitLab proxy the data |
+| `connection` | Various connection options described below |
+| `objects` | [Object-specific configuration](#object-specific-configuration) |
+
+### Connection settings
+
+Both the consolidated configuration form and the storage-specific configuration form must configure a connection. The following sections describe parameters that can be used
+in the `connection` setting.
+
+#### S3-compatible connection settings
+
+The connection settings match those provided by [fog-aws](https://github.com/fog/fog-aws):
+
+| Setting | Description | Default |
+|---------|-------------|---------|
+| `provider` | Always `AWS` for compatible hosts | `AWS` |
+| `aws_access_key_id` | AWS credentials, or compatible | |
+| `aws_secret_access_key` | AWS credentials, or compatible | |
+| `aws_signature_version` | AWS signature version to use. `2` or `4` are valid options. Digital Ocean Spaces and other providers may need `2`. | `4` |
+| `enable_signature_v4_streaming` | Set to `true` to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be `false`. | `true` |
+| `region` | AWS region | us-east-1 |
+| `host` | S3 compatible host for when not using AWS, for example `localhost` or `storage.example.com`. HTTPS and port 443 are assumed. | `s3.amazonaws.com` |
+| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000`. This takes precedence over `host`. | (optional) |
+| `path_style` | Set to `true` to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as `false` for AWS S3. | `false` |
+| `use_iam_profile` | Set to `true` to use IAM profile instead of access keys | `false` |
+
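+##### S3-compatible example (consolidated form)
+
+For Omnibus installations, a `connection` for an S3-compatible service such as
+[MinIO](https://min.io) might look like the following sketch; the endpoint and
+credentials are illustrative:
+
+```ruby
+gitlab_rails['object_store']['connection'] = {
+  'provider' => 'AWS',
+  'region' => 'us-east-1',
+  'aws_access_key_id' => '<MINIO_ACCESS_KEY>',
+  'aws_secret_access_key' => '<MINIO_SECRET_KEY>',
+  'endpoint' => 'http://127.0.0.1:9000',
+  'path_style' => true
+}
+```
+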
+#### Oracle Cloud S3 connection settings
+
+Oracle Cloud S3 must use the following settings:
+
+| Setting | Value |
+|---------|-------|
+| `enable_signature_v4_streaming` | `false` |
+| `path_style` | `true` |
+
+If `enable_signature_v4_streaming` is set to `true`, you may see the
+following error in `production.log`:
+
+```plaintext
+STREAMING-AWS4-HMAC-SHA256-PAYLOAD is not supported
+```
+
+#### Google Cloud Storage (GCS)
+
+Here are the valid connection parameters for GCS:
+
+| Setting | Description | Example |
+|---------|-------------|---------|
+| `provider` | The provider name | `Google` |
+| `google_project` | GCP project name | `gcp-project-12345` |
+| `google_client_email` | The email address of the service account | `foo@gcp-project-12345.iam.gserviceaccount.com` |
+| `google_json_key_location` | The JSON key path | `/path/to/gcp-project-12345-abcde.json` |
+
+NOTE: **Note:**
+The service account must have permission to access the bucket. For more information,
+see the [Cloud Storage authentication documentation](https://cloud.google.com/storage/docs/authentication).
+
+##### Google example (consolidated form)
+
+For Omnibus installations, this is an example of the `connection` setting:
+
+```ruby
+gitlab_rails['object_store']['connection'] = {
+ 'provider' => 'Google',
+ 'google_project' => '<GOOGLE PROJECT>',
+ 'google_client_email' => '<GOOGLE CLIENT EMAIL>',
+ 'google_json_key_location' => '<FILENAME>'
+}
+```
+
+#### OpenStack-compatible connection settings
+
+NOTE: **Note:**
+This is not compatible with the consolidated object storage form.
+OpenStack Swift is only supported with the storage-specific form. See the
+[S3 settings](#s3-compatible-connection-settings) if you want to use the consolidated form.
+
+While OpenStack Swift provides S3 compatibility, some users may want to use the
+[Swift API](https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html).
+Here are the valid connection settings for the Swift API, provided by
+[fog-openstack](https://github.com/fog/fog-openstack):
+
+| Setting | Description | Default |
+|---------|-------------|---------|
+| `provider` | Always `OpenStack` for compatible hosts | `OpenStack` |
+| `openstack_username` | OpenStack username | |
+| `openstack_api_key` | OpenStack API key | |
+| `openstack_temp_url_key` | OpenStack key for generating temporary URLs | |
+| `openstack_auth_url` | OpenStack authentication endpoint | |
+| `openstack_region` | OpenStack region | |
+| `openstack_tenant` | OpenStack tenant ID | |
+
+#### Rackspace Cloud Files
+
+NOTE: **Note:**
+This is not compatible with the consolidated object
+storage form. Rackspace Cloud is only supported with the storage-specific form.
+
+Here are the valid connection parameters for Rackspace Cloud, provided by
+[fog-rackspace](https://github.com/fog/fog-rackspace/):
+
+| Setting | Description | Example |
+|---------|-------------|---------|
+| `provider` | The provider name | `Rackspace` |
+| `rackspace_username` | The username of the Rackspace account with access to the container | `joe.smith` |
+| `rackspace_api_key` | The API key of the Rackspace account with access to the container | `ABC123DEF456ABC123DEF456ABC123DE` |
+| `rackspace_region` | The Rackspace storage region to use, a three-letter code from the [list of service access endpoints](https://developer.rackspace.com/docs/cloud-files/v1/general-api-info/service-access/) | `iad` |
+| `rackspace_temp_url_key` | The private key you have set in the Rackspace API for temporary URLs. Read more in the [Rackspace TempURL documentation](https://developer.rackspace.com/docs/cloud-files/v1/use-cases/public-access-to-your-cloud-files-account/#tempurl) | `ABC123DEF456ABC123DEF456ABC123DE` |
+
+NOTE: **Note:**
+Regardless of whether the container has public access enabled or disabled, Fog will
+use the TempURL method to grant access to LFS objects. If you see errors in logs referencing
+instantiating storage with a `temp-url-key`, ensure that you have set the key properly
+on the Rackspace API and in `gitlab.rb`. You can verify the value of the key Rackspace
+has set by sending a GET request with a token header to the service access endpoint URL
+and comparing the returned headers.
+
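+For example, a sketch of a Rackspace Cloud Files connection using the
+storage-specific form, with LFS as the example type and credentials as placeholders:
+
+```ruby
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'Rackspace',
+  'rackspace_username' => '<RACKSPACE_USERNAME>',
+  'rackspace_api_key' => '<RACKSPACE_API_KEY>',
+  'rackspace_region' => 'iad',
+  'rackspace_temp_url_key' => '<RACKSPACE_TEMP_URL_KEY>'
+}
+```
+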
+### Object-specific configuration
+
+The following YAML shows how the `object_store` section defines
+object-specific configuration blocks and how the `enabled` and
+`proxy_download` flags can be overridden. The `bucket` is the only
+required parameter within each type:
+
+```yaml
+ object_store:
+ connection:
+ ...
+ objects:
+ artifacts:
+ bucket: artifacts
+ proxy_download: false
+ external_diffs:
+ bucket: external-diffs
+ lfs:
+ bucket: lfs-objects
+ uploads:
+ bucket: uploads
+ packages:
+ bucket: packages
+ dependency_proxy:
+ enabled: false
+ bucket: dependency_proxy
+ terraform_state:
+ bucket: terraform
+```
+
+This maps to this Omnibus GitLab configuration:
+
+```ruby
+gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'artifacts'
+gitlab_rails['object_store']['objects']['artifacts']['proxy_download'] = false
+gitlab_rails['object_store']['objects']['external_diffs']['bucket'] = 'external-diffs'
+gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'lfs-objects'
+gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'uploads'
+gitlab_rails['object_store']['objects']['packages']['bucket'] = 'packages'
+gitlab_rails['object_store']['objects']['dependency_proxy']['enabled'] = false
+gitlab_rails['object_store']['objects']['dependency_proxy']['bucket'] = 'dependency-proxy'
+gitlab_rails['object_store']['objects']['terraform_state']['bucket'] = 'terraform-state'
+```
+
+This is the list of valid `objects` that can be used:
+
+| Type | Description |
+|--------------------|---------------|
+| `artifacts` | [CI artifacts](job_artifacts.md) |
+| `external_diffs` | [Merge request diffs](merge_request_diffs.md) |
+| `uploads` | [User uploads](uploads.md) |
+| `lfs` | [Git Large File Storage objects](lfs/index.md) |
+| `packages` | [Project packages (for example, PyPI, Maven, or NuGet)](packages/index.md) |
+| `dependency_proxy` | [GitLab Dependency Proxy](packages/dependency_proxy.md) |
+| `terraform_state` | [Terraform state files](terraform_state.md) |
+
+Within each object type, three parameters can be defined:
+
+| Setting | Required? | Description |
+|------------------|-----------|-------------|
+| `bucket` | Yes | The bucket name for the object storage. |
+| `enabled` | No | Overrides the common parameter |
+| `proxy_download` | No | Overrides the common parameter |
+
+#### Selectively disabling object storage
+
+As seen above, object storage can be disabled for specific types by
+setting the `enabled` flag to `false`. For example, to disable object
+storage for CI artifacts:
+
+```ruby
+gitlab_rails['object_store']['objects']['artifacts']['enabled'] = false
+```
+
+A bucket is not needed if the feature is disabled entirely. For example,
+no bucket is needed if CI artifacts are disabled with this setting:
+
+```ruby
+gitlab_rails['artifacts_enabled'] = false
+```
+
+### Transition to consolidated form
+
+Prior to GitLab 13.2:
+
+- Object storage configuration for all types of objects such as CI/CD artifacts, LFS
+ files, upload attachments, and so on had to be configured independently.
+- Object store connection parameters such as passwords and endpoint URLs had to be
+ duplicated for each type.
+
+For example, an Omnibus GitLab install might have the following configuration:
+
+```ruby
+# Original object storage configuration
+gitlab_rails['artifacts_object_store_enabled'] = true
+gitlab_rails['artifacts_object_store_direct_upload'] = true
+gitlab_rails['artifacts_object_store_proxy_download'] = true
+gitlab_rails['artifacts_object_store_remote_directory'] = 'artifacts'
+gitlab_rails['artifacts_object_store_connection'] = { 'provider' => 'AWS', 'aws_access_key_id' => 'access_key', 'aws_secret_access_key' => 'secret' }
+gitlab_rails['uploads_object_store_enabled'] = true
+gitlab_rails['uploads_object_store_direct_upload'] = true
+gitlab_rails['uploads_object_store_proxy_download'] = true
+gitlab_rails['uploads_object_store_remote_directory'] = 'uploads'
+gitlab_rails['uploads_object_store_connection'] = { 'provider' => 'AWS', 'aws_access_key_id' => 'access_key', 'aws_secret_access_key' => 'secret' }
+```
+
+While this provides flexibility in that it makes it possible for GitLab
+to store objects across different cloud providers, it also creates
+additional complexity and unnecessary redundancy. Since both GitLab
+Rails and Workhorse components need access to object storage, the
+consolidated form avoids excessive duplication of credentials.
+
+NOTE: **Note:**
+The consolidated object storage configuration is **only** used if all
+lines from the original form are omitted. To move to the consolidated form, remove the original configuration (for example, `artifacts_object_store_enabled`, `uploads_object_store_connection`, and so on).
+
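+As a rough sketch (not an exact migration recipe), the original example above
+might translate to the following consolidated configuration, assuming the same
+AWS credentials and bucket names:
+
+```ruby
+# Consolidated object storage configuration
+gitlab_rails['object_store']['enabled'] = true
+gitlab_rails['object_store']['proxy_download'] = true
+gitlab_rails['object_store']['connection'] = {
+  'provider' => 'AWS',
+  'aws_access_key_id' => 'access_key',
+  'aws_secret_access_key' => 'secret'
+}
+gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'artifacts'
+gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'uploads'
+```
+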
+## Storage-specific configuration
+
+For configuring object storage in GitLab 13.1 and earlier, or for storage types not
+supported by the consolidated configuration form, refer to the following guides:
+
+| Object storage type | Supported by consolidated configuration? |
+|---------------------|-------------------------------------------|
+| [Backups](../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage) | No |
+| [Job artifacts](job_artifacts.md#using-object-storage) and [incremental logging](job_logs.md#new-incremental-logging-architecture) | Yes |
+| [LFS objects](lfs/index.md#storing-lfs-objects-in-remote-object-storage) | Yes |
+| [Uploads](uploads.md#using-object-storage-core-only) | Yes |
+| [Container Registry](packages/container_registry.md#use-object-storage) (optional feature) | No |
+| [Merge request diffs](merge_request_diffs.md#using-object-storage) | Yes |
+| [Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage)| No |
+| [Packages](packages/index.md#using-object-storage) (optional feature) **(PREMIUM ONLY)** | Yes |
+| [Dependency Proxy](packages/dependency_proxy.md#using-object-storage) (optional feature) **(PREMIUM ONLY)** | Yes |
+| [Pseudonymizer](pseudonymizer.md#configuration) (optional feature) **(ULTIMATE ONLY)** | No |
+| [Autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional for improved performance) | No |
+| [Terraform state files](terraform_state.md#using-object-storage-core-only) | Yes |
### Other alternatives to filesystem storage
@@ -69,7 +470,7 @@ backups might not be realised until the organisation had a critical requirement
### S3 API compatibility issues
Not all S3 providers [are fully compatible](../raketasks/backup_restore.md#other-s3-providers)
-with the Fog library that GitLab uses. Symptoms include:
+with the Fog library that GitLab uses. Symptoms include an error in `production.log`:
```plaintext
411 Length Required
@@ -143,14 +544,26 @@ might generate `ETag mismatch` errors.
If you are seeing this ETag mismatch error with Amazon Web Services S3,
it's likely this is due to [encryption settings on your bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html).
-See the section on [using Amazon instance profiles](#using-amazon-instance-profiles) on how to fix this issue.
+To fix this issue, you have two options:
-When using GitLab direct upload, the
+- [Use the consolidated object configuration](#consolidated-object-storage-configuration).
+- [Use Amazon instance profiles](#using-amazon-instance-profiles).
+
+The first option is recommended for MinIO. Otherwise, the
[workaround for MinIO](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1564#note_244497658)
is to use the `--compat` parameter on the server.
-We are working on a fix to the [GitLab Workhorse
-component](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/222).
+Without consolidated object store configuration or instance profiles enabled,
+GitLab Workhorse will upload files to S3 using pre-signed URLs that do
+not have a `Content-MD5` HTTP header computed for them. To ensure data
+is not corrupted, Workhorse checks that the MD5 hash of the data sent
+equals the ETag header returned from the S3 server. When encryption is
+enabled, this is not the case, which causes Workhorse to report an `ETag
+mismatch` error during an upload.
+
+With the consolidated object configuration and instance profile, Workhorse has
+S3 credentials so that it can compute the `Content-MD5` header. This
+eliminates the need to compare ETag headers returned from the S3 server.
### Using Amazon instance profiles
@@ -163,66 +576,57 @@ configuration.
#### Encrypted S3 buckets
-> Introduced in [GitLab 13.1](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/466) only for instance profiles.
+> - Introduced in [GitLab 13.1](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/466) for instance profiles only.
+> - Introduced in [GitLab 13.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34460) for static credentials when [consolidated object storage configuration](#consolidated-object-storage-configuration) is used.
-When configured to use an instance profile, GitLab Workhorse
-will properly upload files to S3 buckets that have [SSE-S3 or SSE-KMS
-encryption enabled by default](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html).
-Note that customer master keys (CMKs) and SSE-C encryption are not yet
-supported since this requires supplying keys to the GitLab
-configuration.
+When configured either with an instance profile or with the consolidated
+object configuration, GitLab Workhorse properly uploads files to S3 buckets
+that have [SSE-S3 or SSE-KMS encryption enabled by
+default](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html).
+Note that customer master keys (CMKs) and
+SSE-C encryption are [not yet supported since this requires supplying
+keys to the GitLab configuration](https://gitlab.com/gitlab-org/gitlab/-/issues/226006).
-Without instance profiles enabled (or prior to GitLab 13.1), GitLab
-Workhorse will upload files to S3 using pre-signed URLs that do not have
-a `Content-MD5` HTTP header computed for them. To ensure data is not
-corrupted, Workhorse checks that the MD5 hash of the data sent equals
-the ETag header returned from the S3 server. When encryption is enabled,
-this is not the case, which causes Workhorse to report an `ETag
-mismatch` error during an upload.
+##### Disabling the feature
-With instance profiles enabled, GitLab Workhorse uses an AWS S3 client
-that properly computes and sends the `Content-MD5` header to the server,
-which eliminates the need for comparing ETag headers. If the data is
-corrupted in transit, the S3 server will reject the file.
+The Workhorse S3 client is enabled by default when the
+[`use_iam_profile` configuration option](#iam-permissions) is set to `true`.
-#### IAM Permissions
-
-To set up an instance profile, create an Amazon Identity Access and
-Management (IAM) role with the necessary permissions. The following
-example is a role for an S3 bucket named `test-bucket`:
-
-```json
-{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "VisualEditor0",
- "Effect": "Allow",
- "Action": [
- "s3:PutObject",
- "s3:GetObject",
- "s3:AbortMultipartUpload",
- "s3:DeleteObject"
- ],
- "Resource": "arn:aws:s3:::test-bucket/*"
- }
- ]
-}
-```
-
-Associate this role with your GitLab instance, and then configure GitLab
-to use it via the `use_iam_profile` configuration option. For example,
-when configuring uploads to use object storage, see the `AWS IAM profiles`
-section in [S3 compatible connection settings](uploads.md#s3-compatible-connection-settings).
-
-#### Disabling the feature
-
-The Workhorse S3 client is only enabled when the `use_iam_profile`
-configuration flag is `true`.
-
-To disable this feature, ask a GitLab administrator with [Rails console access](feature_flags.md#how-to-enable-and-disable-features-behind-flags) to run the
+The feature can be disabled using the `:use_workhorse_s3_client` feature flag. To disable the
+feature, ask a GitLab administrator with
+[Rails console access](feature_flags.md#how-to-enable-and-disable-features-behind-flags) to run the
following command:
```ruby
Feature.disable(:use_workhorse_s3_client)
```
+
+#### IAM Permissions
+
+To set up an instance profile:
+
+1. Create an Amazon Identity Access and Management (IAM) role with the necessary permissions. The
+ following example is a role for an S3 bucket named `test-bucket`:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": [
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:AbortMultipartUpload",
+ "s3:DeleteObject"
+ ],
+ "Resource": "arn:aws:s3:::test-bucket/*"
+ }
+ ]
+ }
+ ```
+
+1. [Attach this role](https://aws.amazon.com/premiumsupport/knowledge-center/attach-replace-ec2-instance-profile/)
+ to the EC2 instance hosting your GitLab instance.
+1. Configure GitLab to use it via the `use_iam_profile` configuration option.
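+
+   For example, with the consolidated form, a sketch of enabling the instance
+   profile might look like the following (the region is a placeholder):
+
+   ```ruby
+   gitlab_rails['object_store']['connection'] = {
+     'provider' => 'AWS',
+     'region' => '<AWS_REGION>',
+     # Use the EC2 instance profile instead of static access keys
+     'use_iam_profile' => true
+   }
+   ```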
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 9f67c927128..b874a4257f0 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -3,7 +3,8 @@
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/1631) in [GitLab Starter](https://about.gitlab.com/pricing/) 9.3.
> - [Available in](https://gitlab.com/gitlab-org/gitlab/-/issues/3953) GitLab Community Edition 10.4.
-NOTE: **Note:** This document describes a drop-in replacement for the
+NOTE: **Note:**
+This document describes a drop-in replacement for the
`authorized_keys` file. For normal (non-deploy key) users, consider using
[SSH certificates](ssh_certificates.md). They are even faster, but are not a
drop-in replacement.
@@ -67,19 +68,25 @@ sudo service ssh reload
sudo service sshd reload
```
-Confirm that SSH is working by removing your user's SSH key in the UI, adding a
-new one, and attempting to pull a repository.
+Confirm that SSH is working by commenting out your user's key in the `authorized_keys`
+file (start the line with a `#` to comment it), and attempting to pull a repository.
-NOTE: **Note:** For Omnibus Docker, `AuthorizedKeysCommand` is setup by default in
+A successful pull would mean that GitLab was able to find the key in the database,
+since it is not present in the file anymore.
+
+NOTE: **Note:**
+For Omnibus Docker, `AuthorizedKeysCommand` is set up by default in
GitLab 11.11 and later.
-NOTE: **Note:** For Installations from source, the command would be located at
+NOTE: **Note:**
+For installations from source, the command would be located at
`/home/git/gitlab-shell/bin/gitlab-shell-authorized-keys-check` if [the install from source](../../install/installation.md#install-gitlab-shell) instructions were followed.
You might want to consider creating a wrapper script somewhere else since this command needs to be
owned by `root` and not be writable by group or others. You could also consider changing the ownership of this command
as required, but that might require temporary ownership changes during `gitlab-shell` upgrades.
-CAUTION: **Caution:** Do not disable writes until SSH is confirmed to be working
+CAUTION: **Caution:**
+Do not disable writes until SSH is confirmed to be working
perfectly, because the file will quickly become out-of-date.
In the case of lookup failures (which are common), the `authorized_keys`
@@ -96,6 +103,8 @@ Again, confirm that SSH is working by removing your user's SSH key in the UI,
adding a new one, and attempting to pull a repository.
Then you can backup and delete your `authorized_keys` file for best performance.
+The current users' keys are already present in the database, so there is no need for migration
+or for asking users to re-add their keys.
## How to go back to using the `authorized_keys` file
@@ -200,3 +209,13 @@ the database. The following instructions can be used to build OpenSSH 7.5:
# Only run this if you run into a problem logging in
yum downgrade openssh-server openssh openssh-clients
```
+
+## SELinux support and limitations
+
+> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2855) in GitLab 10.5.
+
+GitLab supports `authorized_keys` database lookups with [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux).
+
+Because the SELinux policy is static, GitLab doesn't support the ability to change
+internal Unicorn ports at the moment. Admins would have to create a special `.te`
+file for the environment, since it isn't generated dynamically.
diff --git a/doc/administration/operations/filesystem_benchmarking.md b/doc/administration/operations/filesystem_benchmarking.md
index c5c5a8b4313..856061348ed 100644
--- a/doc/administration/operations/filesystem_benchmarking.md
+++ b/doc/administration/operations/filesystem_benchmarking.md
@@ -65,7 +65,8 @@ operations per second.
### Simple benchmarking
-NOTE: **Note:** This test is naive but may be useful if `fio` is not
+NOTE: **Note:**
+This test is naive but may be useful if `fio` is not
available on the system. It's possible to receive good results on this
test but still have poor performance due to read speed and various other
factors.
diff --git a/doc/administration/operations/puma.md b/doc/administration/operations/puma.md
index af28335ef91..62b93d40a6b 100644
--- a/doc/administration/operations/puma.md
+++ b/doc/administration/operations/puma.md
@@ -1,11 +1,11 @@
# Switching to Puma
-## Puma
-
As of GitLab 12.9, [Puma](https://github.com/puma/puma) has replaced [Unicorn](https://yhbt.net/unicorn/)
-as the default web server. Starting with 13.0, both all-in-one package based
-installations as well as Helm chart based installations will run Puma instead of
-Unicorn unless explicitly specified not to.
+as the default web server. From GitLab 13.0, the following run Puma instead of Unicorn unless
+explicitly configured not to:
+
+- All-in-one package-based installations.
+- Helm chart-based installations.
## Why switch to Puma?
@@ -32,10 +32,12 @@ Additionally we strongly recommend that multi-node deployments [configure their
## Performance caveat when using Puma with Rugged
For deployments where NFS is used to store Git repositories, we allow GitLab to use
-[Direct Git Access](../gitaly/#direct-git-access-in-gitlab-rails) to improve performance via usage of [Rugged](https://github.com/libgit2/rugged).
+[direct Git access](../gitaly/index.md#direct-access-to-git-in-gitlab) to improve performance using
+[Rugged](https://github.com/libgit2/rugged).
-Rugged usage is automatically enabled if Direct Git Access is present, unless it
-is disabled by [feature flags](../../development/gitaly.md#legacy-rugged-code).
+Rugged usage is automatically enabled if direct Git access
+[is available](../gitaly/index.md#how-it-works), unless it is disabled by
+[feature flags](../../development/gitaly.md#legacy-rugged-code).
MRI Ruby uses a GVL. This allows MRI Ruby to be multi-threaded, but running at
most on a single core. Since Rugged can use a thread for long periods of
diff --git a/doc/administration/operations/unicorn.md b/doc/administration/operations/unicorn.md
index eabf99eb08c..a1593c3a6c3 100644
--- a/doc/administration/operations/unicorn.md
+++ b/doc/administration/operations/unicorn.md
@@ -45,7 +45,7 @@ master process has PID 56227 below.
The main tunable options for Unicorn are the number of worker processes and the
request timeout after which the Unicorn master terminates a worker process.
See the [Omnibus GitLab Unicorn settings
-documentation](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/unicorn.md)
+documentation](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/unicorn.html)
if you want to adjust these settings.
## unicorn-worker-killer
diff --git a/doc/administration/packages/container_registry.md b/doc/administration/packages/container_registry.md
index 8f55345a9a8..44f1d075a5e 100644
--- a/doc/administration/packages/container_registry.md
+++ b/doc/administration/packages/container_registry.md
@@ -76,7 +76,7 @@ where:
| `port` | The port under which the external Registry domain will listen on. |
| `api_url` | The internal API URL under which the Registry is exposed to. It defaults to `http://localhost:5000`. |
| `key` | The private key location that is a pair of Registry's `rootcertbundle`. Read the [token auth configuration documentation](https://docs.docker.com/registry/configuration/#token). |
-| `path` | This should be the same directory like specified in Registry's `rootdirectory`. Read the [storage configuration documentation](https://docs.docker.com/registry/configuration/#storage). This path needs to be readable by the GitLab user, the web-server user and the Registry user. Read more in [#container-registry-storage-path](#container-registry-storage-path). |
+| `path` | This should be the same directory as specified in Registry's `rootdirectory`. Read the [storage configuration documentation](https://docs.docker.com/registry/configuration/#storage). This path needs to be readable by the GitLab user, the web-server user and the Registry user. Read more in [Configure storage for the Container Registry](#configure-storage-for-the-container-registry). |
| `issuer` | This should be the same value as configured in Registry's `issuer`. Read the [token auth configuration documentation](https://docs.docker.com/registry/configuration/#token). |
NOTE: **Note:**
@@ -313,11 +313,28 @@ the Container Registry by themselves, follow the steps below.
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
-## Container Registry storage path
+## Configure storage for the Container Registry
-NOTE: **Note:**
-For configuring storage in the cloud instead of the filesystem, see the
-[storage driver configuration](#container-registry-storage-driver).
+You can configure the Container Registry to use various storage backends by
+configuring a storage driver. By default, the GitLab Container Registry
+is configured to use the [filesystem driver](#use-filesystem).
+
+The different supported drivers are:
+
+| Driver | Description |
+|------------|-------------------------------------|
+| filesystem | Uses a path on the local filesystem |
+| Azure | Microsoft Azure Blob Storage |
+| gcs | Google Cloud Storage |
+| s3 | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://docs.docker.com/registry/storage-drivers/s3/#s3-permission-scopes). |
+| swift | OpenStack Swift Object Storage |
+| oss | Aliyun OSS |
+
+Read more about the individual driver's configuration options in the
+[Docker Registry docs](https://docs.docker.com/registry/configuration/#storage).
+
+### Use filesystem
If you want to store your images on the filesystem, you can change the storage
path for the Container Registry by following the steps below.
@@ -327,7 +344,8 @@ This path is accessible to:
- The user running the Container Registry daemon.
- The user running GitLab.
-CAUTION: **Warning** You should confirm that all GitLab, Registry and web server users
+CAUTION: **Warning:**
+You should confirm that all GitLab, Registry and web server users
have access to this directory.
**Omnibus GitLab installations**
@@ -358,30 +376,15 @@ The default location where images are stored in source installations, is
1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
-### Container Registry storage driver
+### Use object storage
-You can configure the Container Registry to use a different storage backend by
-configuring a different storage driver. By default the GitLab Container Registry
-is configured to use the filesystem driver, which makes use of [storage path](#container-registry-storage-path)
-configuration.
-
-The different supported drivers are:
-
-| Driver | Description |
-|------------|-------------------------------------|
-| filesystem | Uses a path on the local filesystem |
-| Azure | Microsoft Azure Blob Storage |
-| gcs | Google Cloud Storage |
-| s3 | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://docs.docker.com/registry/storage-drivers/s3/#s3-permission-scopes). |
-| swift | OpenStack Swift Object Storage |
-| oss | Aliyun OSS |
-
-Read more about the individual driver's configuration options in the
-[Docker Registry docs](https://docs.docker.com/registry/configuration/#storage).
+If you want to store your images on object storage, you can change the storage
+driver for the Container Registry.
[Read more about using object storage with GitLab](../object_storage.md).
-CAUTION: **Warning:** GitLab will not backup Docker images that are not stored on the
+CAUTION: **Warning:**
+GitLab will not back up Docker images that are not stored on the
filesystem. Remember to enable backups with your object storage provider if
desired.
@@ -435,21 +438,43 @@ storage:
NOTE: **Note:**
`your-s3-bucket` should only be the name of a bucket that exists, and can't include subdirectories.
-**Migrate without downtime**
+#### Migrate to object storage without downtime
+
+To migrate storage without stopping the Container Registry, set the Container Registry
+to read-only mode. On large instances, this may require the Container Registry
+to be in read-only mode for a while. During this time,
+you can pull from the Container Registry, but you cannot push.
+
+1. Optional: To reduce the amount of data to be migrated, run the [garbage collection tool without downtime](#performing-garbage-collection-without-downtime).
+1. Copy initial data to your S3 bucket, for example with the AWS CLI [`cp`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html)
+ or [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
+ command. Make sure to keep the `docker` folder as the top-level folder inside the bucket.
+
+ ```shell
+ aws s3 sync registry s3://mybucket
+ ```
+
+1. To perform the final data sync,
+ [put the Container Registry in `read-only` mode](#performing-garbage-collection-without-downtime) and
+ [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Sync any changes since the initial data load to your S3 bucket and delete files that exist in the destination bucket but not in the source:
+
+ ```shell
+ aws s3 sync registry s3://mybucket --delete
+ ```
-To migrate the data to AWS S3 without downtime:
+ DANGER: **Danger:**
+ The `--delete` flag will delete files that exist in the destination but not in the source.
+ Make sure not to swap the source and destination, or you will delete all data in the Registry.
-1. To reduce the amount of data to be migrated, run the [garbage collection tool without downtime](#performing-garbage-collection-without-downtime). Part of this process sets the registry to `read-only`.
-1. Copy the data to your AWS S3 bucket, for example with [AWS CLI's `cp`](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command.
-1. Configure your registry to use the S3 bucket for storage.
-1. Put the registry back to `read-write`.
-1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Configure your registry to [use the S3 bucket for storage](#use-object-storage).
+1. For the changes to take effect, set the Registry back to `read-write` mode and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
### Disable redirect for storage driver
By default, users accessing a registry configured with a remote backend are redirected to the default backend for the storage driver. For example, registries can be configured using the `s3` storage driver, which redirects requests to a remote S3 bucket to alleviate load on the GitLab server.
-However, this behavior is undesirable for registries used by internal hosts that usually can't access public servers. To disable redirects, set the `disable` flag to true as follows. This makes all traffic to always go through the Registry service. This results in improved security (less surface attack as the storage backend is not publicly accessible), but worse performance (all traffic is redirected via the service).
+However, this behavior is undesirable for registries used by internal hosts that usually can't access public servers. To disable redirects and [proxy download](../object_storage.md#proxy-download), set the `disable` flag to true as follows. This makes all traffic always go through the Registry service. This results in improved security (smaller attack surface because the storage backend is not publicly accessible), but worse performance (all traffic is redirected via the service).
**Omnibus GitLab installations**
@@ -779,13 +804,15 @@ that you have backed up all registry data.
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/764) in GitLab 8.8.
-You can perform a garbage collection without stopping the Container Registry by setting
-it into a read-only mode and by not using the built-in command. During this time,
+You can perform garbage collection without stopping the Container Registry by putting
+it in read-only mode and by not using the built-in command. On large instances,
+this could require the Container Registry to be in read-only mode for a while.
+During this time,
you will be able to pull from the Container Registry, but you will not be able to
push.
NOTE: **Note:**
-By default, the [registry storage path](#container-registry-storage-path)
+By default, the [registry storage path](#configure-storage-for-the-container-registry)
is `/var/opt/gitlab/gitlab-rails/shared/registry`.
To enable the read-only mode:
@@ -927,7 +954,7 @@ larger images, or images that take longer than 5 minutes to push, users may
encounter this error.
Administrators can increase the token duration in **Admin area > Settings >
-Container Registry > Authorization token duration (minutes)**.
+CI/CD > Container Registry > Authorization token duration (minutes)**.
### AWS S3 with the GitLab registry error when pushing large images
diff --git a/doc/administration/packages/dependency_proxy.md b/doc/administration/packages/dependency_proxy.md
index 565a4521c2a..2f9cfecc9d4 100644
--- a/doc/administration/packages/dependency_proxy.md
+++ b/doc/administration/packages/dependency_proxy.md
@@ -87,6 +87,11 @@ store the blobs of the dependency proxy.
[Read more about using object storage with GitLab](../object_storage.md).
+NOTE: **Note:**
+In GitLab 13.2 and later, we recommend using the
+[consolidated object storage settings](../object_storage.md#consolidated-object-storage-configuration).
+This section describes the earlier configuration format.
+
**Omnibus GitLab installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines (uncomment where
diff --git a/doc/administration/packages/index.md b/doc/administration/packages/index.md
index 5088dd86a86..5d07136ef40 100644
--- a/doc/administration/packages/index.md
+++ b/doc/administration/packages/index.md
@@ -99,6 +99,9 @@ store packages.
[Read more about using object storage with GitLab](../object_storage.md).
+NOTE: **Note:**
+We recommend using the [consolidated object storage settings](../object_storage.md#consolidated-object-storage-configuration). The following instructions apply to the original config format.
+
**Omnibus GitLab installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines (uncomment where
diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md
index a7a3a86de8e..4efd92eaa07 100644
--- a/doc/administration/pages/index.md
+++ b/doc/administration/pages/index.md
@@ -190,7 +190,7 @@ control over how the Pages daemon runs and serves content in your environment.
| Setting | Description |
| ------- | ----------- |
| `pages_external_url` | The URL where GitLab Pages is accessible, including protocol (HTTP / HTTPS). If `https://` is used, you must also set `gitlab_pages['ssl_certificate']` and `gitlab_pages['ssl_certificate_key']`.
-| **gitlab_pages[]** | |
+| `gitlab_pages[]` | |
| `access_control` | Whether to enable [access control](index.md#access-control).
| `api_secret_key` | Full path to file with secret key used to authenticate with the GitLab API. Auto-generated when left unset.
| `artifacts_server` | Enable viewing [artifacts](../job_artifacts.md) in GitLab Pages.
@@ -213,7 +213,7 @@ control over how the Pages daemon runs and serves content in your environment.
| `internal_gitlab_server` | Internal GitLab server address used exclusively for API requests. Useful if you want to send that traffic over an internal load balancer. Defaults to GitLab `external_url`.
| `listen_proxy` | The addresses to listen on for reverse-proxy requests. Pages will bind to these addresses' network socket and receives incoming requests from it. Sets the value of `proxy_pass` in `$nginx-dir/conf/gitlab-pages.conf`.
| `log_directory` | Absolute path to a log directory.
-| `log_format` | The log output format: 'text' or 'json'.
+| `log_format` | The log output format: `text` or `json`.
| `log_verbose` | Verbose logging, true/false.
| `max_connections` | Limit on the number of concurrent connections to the HTTP, HTTPS or proxy listeners.
| `metrics_address` | The address to listen on for metrics requests.
@@ -225,14 +225,14 @@ control over how the Pages daemon runs and serves content in your environment.
| `tls_max_version` | Specifies the maximum SSL/TLS version ("ssl3", "tls1.0", "tls1.1" or "tls1.2").
| `tls_min_version` | Specifies the minimum SSL/TLS version ("ssl3", "tls1.0", "tls1.1" or "tls1.2").
| `use_http2` | Enable HTTP2 support.
-| **gitlab_pages['env'][]** | |
+| `gitlab_pages['env'][]` | |
| `http_proxy` | Configure GitLab Pages to use an HTTP Proxy to mediate traffic between Pages and GitLab. Sets an environment variable `http_proxy` when starting Pages daemon.
-| **gitlab_rails[]** | |
+| `gitlab_rails[]` | |
| `pages_domain_verification_cron_worker` | Schedule for verifying custom GitLab Pages domains.
| `pages_domain_ssl_renewal_cron_worker` | Schedule for obtaining and renewing SSL certificates through Let's Encrypt for GitLab Pages domains.
| `pages_domain_removal_cron_worker` | Schedule for removing unverified custom GitLab Pages domains.
| `pages_path` | The directory on disk where pages are stored, defaults to `GITLAB-RAILS/shared/pages`.
-| **pages_nginx[]** | |
+| `pages_nginx[]` | |
| `enable` | Include a virtual host `server{}` block for Pages inside NGINX. Needed for NGINX to proxy traffic back to the Pages daemon. Set to `false` if the Pages daemon should directly receive all requests, for example, when using [custom domains](index.md#custom-domains).
---
@@ -408,6 +408,9 @@ pages:
### Using a custom Certificate Authority (CA)
+NOTE: **Note:**
+[Before 13.2](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4289), when using Omnibus, a [workaround was required](https://docs.gitlab.com/13.1/ee/administration/pages/index.html#using-a-custom-certificate-authority-ca).
+
When using certificates issued by a custom CA, [Access Control](../../user/project/pages/pages_access_control.md#gitlab-pages-access-control) and
the [online view of HTML job artifacts](../../ci/pipelines/job_artifacts.md#browsing-artifacts)
will fail to work if the custom CA is not recognized.
@@ -415,28 +418,10 @@ will fail to work if the custom CA is not recognized.
This usually results in this error:
`Post /oauth/token: x509: certificate signed by unknown authority`.
-For installation from source this can be fixed by installing the custom Certificate
+For installation from source, this can be fixed by installing the custom Certificate
Authority (CA) in the system certificate store.
-For Omnibus, normally this would be fixed by [installing a custom CA in Omnibus GitLab](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates)
-but a [bug](https://gitlab.com/gitlab-org/gitlab/-/issues/25411) is currently preventing
-that method from working. Use the following workaround:
-
-1. Append your GitLab server TLS/SSL certificate to `/opt/gitlab/embedded/ssl/certs/cacert.pem` where `gitlab-domain-example.com` is your GitLab application URL
-
- ```shell
- printf "\ngitlab-domain-example.com\n===========================\n" | sudo tee --append /opt/gitlab/embedded/ssl/certs/cacert.pem
- echo -n | openssl s_client -connect gitlab-domain-example.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | sudo tee --append /opt/gitlab/embedded/ssl/certs/cacert.pem
- ```
-
-1. [Restart](../restart_gitlab.md) the GitLab Pages Daemon. For Omnibus GitLab instances:
-
- ```shell
- sudo gitlab-ctl restart gitlab-pages
- ```
-
-CAUTION: **Caution:**
-Some Omnibus GitLab upgrades will revert this workaround and you'll need to apply it again.
+For Omnibus, this is fixed by [installing a custom CA in Omnibus GitLab](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
## Activate verbose logging for daemon
@@ -526,10 +511,17 @@ The following procedure includes steps to back up and edit the
`gitlab-secrets.json` file. This file contains secrets that control
database encryption. Proceed with caution.
+1. Create a backup of the secrets file on the **GitLab server**:
+
+ ```shell
+ cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
+ ```
+
1. On the **GitLab server**, to enable Pages, add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_pages['enable'] = true
+ pages_external_url "http://<pages_server_URL>"
```
1. Optionally, to enable [access control](#access-control), add the following to `/etc/gitlab/gitlab.rb`:
@@ -542,26 +534,25 @@ database encryption. Proceed with caution.
changes to take effect. The `gitlab-secrets.json` file is now updated with the
new configuration.
-1. Create a backup of the secrets file on the **GitLab server**:
-
- ```shell
- cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
- ```
-
1. Set up a new server. This will become the **Pages server**.
-1. Create an [NFS share](../high_availability/nfs_host_client_setup.md) on the new server and configure this share to
- allow access from your main **GitLab server**. For this example, we use the
+1. Create an [NFS share](../high_availability/nfs_host_client_setup.md)
+ on the **Pages server** and configure this share to
+ allow access from your main **GitLab server**.
+ Note that the example there is more general and
+ shares several sub-directories from `/home` to several `/nfs/home` mountpoints.
+ For our Pages-specific example here, we instead share only the
default GitLab Pages folder `/var/opt/gitlab/gitlab-rails/shared/pages`
- as the shared folder on the new server and we will mount it to `/mnt/pages`
+ from the **Pages server** and we mount it to `/mnt/pages`
on the **GitLab server**.
+ Therefore, omit "Step 4" there.
1. On the **Pages server**, install Omnibus GitLab and modify `/etc/gitlab/gitlab.rb`
to include:
```ruby
- external_url 'http://<ip-address-of-the-server>'
- pages_external_url "http://<your-pages-server-URL>"
+ external_url 'http://<gitlab_server_IP_or_URL>'
+ pages_external_url "http://<pages_server_URL>"
postgresql['enable'] = false
redis['enable'] = false
prometheus['enable'] = false
@@ -581,7 +572,15 @@ database encryption. Proceed with caution.
```
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the **GitLab server**
- to the **Pages server**.
+ to the **Pages server**, for example via the NFS share.
+
+ ```shell
+ # On the GitLab server
+ cp /etc/gitlab/gitlab-secrets.json /mnt/pages/gitlab-secrets.json
+
+ # On the Pages server
+ mv /var/opt/gitlab/gitlab-rails/shared/pages/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json
+ ```
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
@@ -589,13 +588,13 @@ database encryption. Proceed with caution.
```ruby
gitlab_pages['enable'] = false
- pages_external_url "http://<your-pages-server-URL>"
+ pages_external_url "http://<pages_server_URL>"
gitlab_rails['pages_path'] = "/mnt/pages"
```
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-It is possible to run GitLab Pages on multiple servers if you wish to distribute
+It's possible to run GitLab Pages on multiple servers if you wish to distribute
the load. You can do this through standard load balancing practices such as
configuring your DNS server to return multiple IPs for your Pages server,
configuring a load balancer to work at the IP level, and so on. If you wish to
@@ -655,3 +654,45 @@ The fix is to correct the source file permissions and restart Pages:
sudo chmod 644 /opt/gitlab/embedded/ssl/certs/cacert.pem
sudo gitlab-ctl restart gitlab-pages
```
+
+### `dial tcp: lookup gitlab.example.com` and `x509: certificate signed by unknown authority`
+
+When setting both `inplace_chroot` and `access_control` to `true`, you might encounter errors like:
+
+```plaintext
+dial tcp: lookup gitlab.example.com on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address
+```
+
+Or:
+
+```plaintext
+open /opt/gitlab/embedded/ssl/certs/cacert.pem: no such file or directory
+x509: certificate signed by unknown authority
+```
+
+The reason for those errors is that the files `resolv.conf` and `ca-bundle.pem` are missing inside the chroot.
+The fix is to copy the host's `/etc/resolv.conf` and GitLab's certificate bundle inside the chroot:
+
+```shell
+sudo mkdir -p /var/opt/gitlab/gitlab-rails/shared/pages/etc/ssl
+sudo mkdir -p /var/opt/gitlab/gitlab-rails/shared/pages/opt/gitlab/embedded/ssl/certs/
+
+sudo cp /etc/resolv.conf /var/opt/gitlab/gitlab-rails/shared/pages/etc
+sudo cp /opt/gitlab/embedded/ssl/certs/cacert.pem /var/opt/gitlab/gitlab-rails/shared/pages/opt/gitlab/embedded/ssl/certs/
+sudo cp /opt/gitlab/embedded/ssl/certs/cacert.pem /var/opt/gitlab/gitlab-rails/shared/pages/etc/ssl/ca-bundle.pem
+```
+
+### 404 error after transferring project to a different group or user
+
+If you encounter a `404 Not Found` error for a Pages site after transferring a project to
+another group or user, you must trigger a domain configuration update for Pages. To do
+so, write something in the `.update` file. The Pages daemon monitors for changes to this
+file, and reloads the configuration when changes occur.
+
+Use this example to fix a `404 Not Found` error after transferring a project with Pages:
+
+```shell
+date > /var/opt/gitlab/gitlab-rails/shared/pages/.update
+```
+
+If you've customized the Pages storage path, adjust the command above to use your custom path.
diff --git a/doc/administration/postgresql/index.md b/doc/administration/postgresql/index.md
new file mode 100644
index 00000000000..7e0a2f3cae1
--- /dev/null
+++ b/doc/administration/postgresql/index.md
@@ -0,0 +1,36 @@
+---
+type: reference
+---
+
+# Configuring PostgreSQL for scaling
+
+In this section, you'll be guided through configuring a PostgreSQL database to
+be used with GitLab in one of our [Scalable and Highly Available Setups](../reference_architectures/index.md).
+There are essentially three setups to choose from.
+
+## PostgreSQL replication and failover with Omnibus GitLab **(PREMIUM ONLY)**
+
+This setup is for when you have installed GitLab using the
+[Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee).
+
+All the required tools, such as PostgreSQL, PgBouncer, and Repmgr, are bundled in
+the package, so you can use it to set up the whole PostgreSQL infrastructure (primary, replica).
+
+[> Read how to set up PostgreSQL replication and failover using Omnibus GitLab](replication_and_failover.md)
+
+## Standalone PostgreSQL using Omnibus GitLab **(CORE ONLY)**
+
+This setup is for when you have installed the
+[Omnibus GitLab packages](https://about.gitlab.com/install/) (CE or EE),
+and want to use the bundled PostgreSQL with only its service enabled.
+
+[> Read how to set up a standalone PostgreSQL instance using Omnibus GitLab](standalone.md)
+
+## Provide your own PostgreSQL instance **(CORE ONLY)**
+
+This setup is for when you have installed GitLab using the
+[Omnibus GitLab packages](https://about.gitlab.com/install/) (CE or EE),
+or installed it [from source](../../install/installation.md), but you want to use
+your own external PostgreSQL server.
+
+[> Read how to set up an external PostgreSQL instance](external.md)
diff --git a/doc/administration/postgresql/replication_and_failover.md b/doc/administration/postgresql/replication_and_failover.md
index aa95b983d20..5f550f09e5b 100644
--- a/doc/administration/postgresql/replication_and_failover.md
+++ b/doc/administration/postgresql/replication_and_failover.md
@@ -1,16 +1,15 @@
# PostgreSQL replication and failover with Omnibus GitLab **(PREMIUM ONLY)**
-> Important notes:
->
-> - This document will focus only on configuration supported with [GitLab Premium](https://about.gitlab.com/pricing/), using the Omnibus GitLab package.
-> - If you are a Community Edition or Starter user, consider using a cloud hosted solution.
-> - This document will not cover installations from source.
->
-> - If a setup with replication and failover is not what you were looking for, see the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
-> for the Omnibus GitLab packages.
->
-> Please read this document fully before attempting to configure PostgreSQL with
-> replication and failover for GitLab.
+This document will focus only on configuration supported with [GitLab Premium](https://about.gitlab.com/pricing/), using the Omnibus GitLab package.
+If you are a Community Edition or Starter user, consider using a cloud hosted solution.
+This document will not cover installations from source.
+
+If a setup with replication and failover is not what you were looking for, see
+the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
+for the Omnibus GitLab packages.
+
+It's recommended to read this document fully before attempting to configure PostgreSQL with
+replication and failover for GitLab.
## Architecture
@@ -967,7 +966,8 @@ after it has been restored to service.
gitlab-ctl restart repmgrd
```
- CAUTION: **Warning:** When the server is brought back online, and before
+ CAUTION: **Warning:**
+ When the server is brought back online, and before
you switch it to a standby node, repmgr will report that there are two masters.
If there are any clients that are still attempting to write to the old master,
this will cause a split, and the old master will need to be resynced from
@@ -1127,3 +1127,213 @@ If you're running into an issue with a component not outlined here, be sure to c
- [Consul](../high_availability/consul.md#troubleshooting)
- [PostgreSQL](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting)
- [GitLab application](../high_availability/gitlab.md#troubleshooting)
+
+## Patroni
+
+NOTE: **Note:**
+Starting from GitLab 13.1, Patroni is available for **experimental** use to replace repmgr. Due to its
+experimental nature, Patroni support is **subject to change without notice.**
+
+Patroni is an opinionated solution for PostgreSQL high-availability. It takes control of PostgreSQL, overrides its
+configuration, and manages its lifecycle (start, stop, restart). This is a more active approach when compared to repmgr.
+Both repmgr and Patroni are supported and available, but Patroni will be the default (and perhaps the only) option
+for PostgreSQL 12 clustering and cascading replication for Geo deployments.
+
+The [architecture](#example-recommended-setup-manual-steps) mentioned above does not change for Patroni.
+You do not need any special consideration for Patroni while provisioning your database nodes. Patroni heavily relies on
+Consul to store the state of the cluster and elect a leader. Any failure in the Consul cluster or its leader election
+propagates to the Patroni cluster as well.
+
+Similar to repmgr, Patroni monitors the cluster and handles failover. When the primary node fails, it works with Consul
+to notify PgBouncer. However, as opposed to repmgr, on failure Patroni handles the transitioning of the old primary to
+a replica and rejoins it to the cluster automatically, so you do not need any manual operation to recover the
+cluster as you do with repmgr.
+
+With Patroni the connection flow is slightly different. Patroni on each node connects to the Consul agent to join the
+cluster. Only after this point does it decide whether the node is the primary or a replica. Based on this decision, it
+configures and starts PostgreSQL, which it communicates with directly over a Unix socket. This implies that if the
+Consul cluster is not functional or does not have a leader, Patroni, and by extension PostgreSQL, will not start.
+Patroni also exposes a REST API which can be accessed via its
+[default port](https://docs.gitlab.com/omnibus/package-information/defaults.html#patroni) on each node.
+
+### Configuring Patroni cluster
+
+You must enable Patroni explicitly to be able to use it (with `patroni['enable'] = true`). When Patroni is enabled,
+repmgr is disabled automatically.
+
+Any PostgreSQL configuration item that controls replication, for example `wal_level` or `max_wal_senders`, is strictly
+controlled by Patroni and overrides the original settings that you make with the `postgresql[...]` configuration key.
+Hence, these items are all separated and placed under `patroni['postgresql'][...]`. This behavior is limited to replication.
+Patroni honors any other PostgreSQL configuration that was made with the `postgresql[...]` configuration key. For example,
+`max_wal_senders` by default is set to `5`. If you wish to change this, you must set it with the `patroni['postgresql']['max_wal_senders']`
+configuration key.
+
+The configuration of a Patroni node is very similar to that of a repmgr node, but shorter. When Patroni is enabled, first you can ignore
+any replication setting of PostgreSQL (it will be overwritten anyway). Then you can remove any `repmgr[...]` or
+repmgr-specific configuration as well. In particular, make sure that you remove `postgresql['shared_preload_libraries'] = 'repmgr_funcs'`.
+
+Here is an example similar to [the one that was done with repmgr](#configuring-the-database-nodes):
+
+```ruby
+# Disable all components except PostgreSQL, Repmgr, and Consul
+roles['postgres_role']
+
+# Enable Patroni
+patroni['enable'] = true
+
+# PostgreSQL configuration
+postgresql['listen_address'] = '0.0.0.0'
+
+# Disable automatic database migrations
+gitlab_rails['auto_migrate'] = false
+
+# Configure the Consul agent
+consul['services'] = %w(postgresql)
+
+# START user configuration
+# Please set the real values as explained in Required Information section
+#
+# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
+# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
+
+# Replace X with value of number of db nodes + 1 (OPTIONAL the default value is 5)
+patroni['postgresql']['max_wal_senders'] = X
+patroni['postgresql']['max_replication_slots'] = X
+
+# Replace XXX.XXX.XXX.XXX/YY with Network Address
+postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY)
+
+# Replace placeholders:
+#
+# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+# with the addresses gathered for CONSUL_SERVER_NODES
+consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
+}
+#
+# END user configuration
+```
+
+You do not need an additional or different configuration for replica nodes. As a matter of fact, you don't have to have
+a predetermined primary node. Therefore, all database nodes use the same configuration.
+
+Once the configuration of a node is done, you must [reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
+on each node for the changes to take effect.
+
+Generally, when the Consul cluster is ready, the first node that [reconfigures](../restart_gitlab.md#omnibus-gitlab-reconfigure)
+becomes the leader. You do not need to sequence the nodes' reconfiguration. You can run them in parallel or in any order.
+If you choose an arbitrary order, you do not have a predetermined master.
+
+As opposed to repmgr, once the nodes are reconfigured you do not need any further action or additional command to join
+the replicas.
+
+#### Database authorization for Patroni
+
+Patroni uses a Unix socket to manage the PostgreSQL instance. Therefore, the connection from the `local` socket must be trusted.
+
+Also, replicas use the replication user (`gitlab_replicator` by default) to communicate with the leader. For this user,
+you can choose between `trust` and `md5` authentication. If you set `postgresql['sql_replication_password']`,
+Patroni uses `md5` authentication; otherwise, it falls back to `trust`. You must specify the cluster CIDR in
+`postgresql['md5_auth_cidr_addresses']` or `postgresql['trust_auth_cidr_addresses']` respectively.
+
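+For example, a minimal sketch in `/etc/gitlab/gitlab.rb` that opts into `md5`
+authentication (the password hash and CIDR are placeholders):
+
+```ruby
+# Setting a replication password makes Patroni use md5 authentication
+postgresql['sql_replication_password'] = 'GITLAB_REPLICATOR_PASSWORD_HASH'
+# Allow md5-authenticated replication connections from the cluster network
+postgresql['md5_auth_cidr_addresses'] = %w(10.6.0.0/24)
+```
+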
+### Interacting with Patroni cluster
+
+You can use `gitlab-ctl patroni members` to check the status of the cluster members. To check the status of each node,
+`gitlab-ctl patroni` provides two additional sub-commands, `check-leader` and `check-replica`, which indicate whether a
+node is the primary or a replica.
+
+When Patroni is enabled, you don't have direct control over the `postgresql` service. Patroni signals PostgreSQL's
+startup, shutdown, and restart. For example, to shut down PostgreSQL on a node, you must shut down Patroni on the same
+node with:
+
+```shell
+sudo gitlab-ctl stop patroni
+```
+
+### Manual failover procedure for Patroni
+
+While Patroni supports automatic failover, you also have the ability to perform
+a manual one, where you have two slightly different options:
+
+- **Failover**: allows you to perform a manual failover when there are no healthy nodes.
+ You can perform this action in any PostgreSQL node:
+
+ ```shell
+ sudo gitlab-ctl patroni failover
+ ```
+
+- **Switchover**: only works when the cluster is healthy and allows you to schedule a switchover (it can happen immediately).
+ You can perform this action in any PostgreSQL node:
+
+ ```shell
+ sudo gitlab-ctl patroni switchover
+ ```
+
+For further details on this subject, see the
+[Patroni documentation](https://patroni.readthedocs.io/en/latest/rest_api.html#switchover-and-failover-endpoints).
+
+### Recovering the Patroni cluster
+
+To recover the old primary and rejoin it to the cluster as a replica, you can simply start Patroni with:
+
+```shell
+sudo gitlab-ctl start patroni
+```
+
+No further configuration or intervention is needed.
+
+### Maintenance procedure for Patroni
+
+With Patroni enabled, you can run planned maintenance. If you want to do some maintenance work on one node and you
+don't want Patroni to manage it, you can put it into maintenance mode:
+
+```shell
+sudo gitlab-ctl patroni pause
+```
+
+When Patroni runs in a paused mode, it does not change the state of PostgreSQL. Once you are done, you can resume Patroni:
+
+```shell
+sudo gitlab-ctl patroni resume
+```
+
+For further details, see [Patroni documentation on this subject](https://patroni.readthedocs.io/en/latest/pause.html).
+
+### Switching from repmgr to Patroni
+
+CAUTION: **Warning:**
+Although switching from repmgr to Patroni is fairly straightforward, the other way around is not. Rolling back from
+Patroni to repmgr can be complicated and may involve deletion of the data directory. If you need to do that, please contact
+GitLab support.
+
+You can switch an existing database cluster to use Patroni instead of repmgr with the following steps:
+
+1. Stop repmgr on all replica nodes and, lastly, on the primary node:
+
+ ```shell
+ sudo gitlab-ctl stop repmgrd
+ ```
+
+1. Stop PostgreSQL on all replica nodes:
+
+ ```shell
+ sudo gitlab-ctl stop postgresql
+ ```
+
+ NOTE: **Note:**
+ Ensure that there is no `walsender` process running on the primary node.
+ `ps aux | grep walsender` must not show any running process.
+
+1. On the primary node, [configure Patroni](#configuring-patroni-cluster). Remove `repmgr` and any other
+ repmgr-specific configuration. Also remove any configuration that is related to PostgreSQL replication.
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) on the primary node. It will become
+ the leader. You can check this with:
+
+ ```shell
+ sudo gitlab-ctl tail patroni
+ ```
+
+1. Repeat the last two steps for all replica nodes. `gitlab.rb` should look the same on all nodes.
+1. Optional: You can remove the `gitlab_repmgr` database and role on the primary (see the sketch below).
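+
+If you choose to do this clean-up, the following is a minimal sketch of the commands, assuming the default
+`gitlab_repmgr` names (verify them in your environment before running):
+
+```shell
+# Run on the primary node only; drops the leftover repmgr database and role
+sudo gitlab-psql -d template1 -c 'DROP DATABASE IF EXISTS gitlab_repmgr;'
+sudo gitlab-psql -d template1 -c 'DROP ROLE IF EXISTS gitlab_repmgr;'
+```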
diff --git a/doc/administration/postgresql/standalone.md b/doc/administration/postgresql/standalone.md
index 3e7826ce009..2747749066e 100644
--- a/doc/administration/postgresql/standalone.md
+++ b/doc/administration/postgresql/standalone.md
@@ -53,7 +53,8 @@ together with Omnibus GitLab. This is recommended as part of our
gitlab_rails['auto_migrate'] = false
```
- NOTE: **Note:** The role `postgres_role` was introduced with GitLab 10.3
+ NOTE: **Note:**
+ The role `postgres_role` was introduced with GitLab 10.3
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
1. Note the PostgreSQL node's IP address or hostname, port, and
diff --git a/doc/administration/raketasks/ldap.md b/doc/administration/raketasks/ldap.md
index d168e3d568c..30bb9828aa0 100644
--- a/doc/administration/raketasks/ldap.md
+++ b/doc/administration/raketasks/ldap.md
@@ -36,7 +36,7 @@ The following task will run a [group sync](../auth/ldap/index.md#group-sync-star
when you'd like to update all configured group memberships against LDAP without
waiting for the next scheduled group sync to be run.
-NOTE: **NOTE:**
+NOTE: **Note:**
If you'd like to change the frequency at which a group sync is performed,
[adjust the cron schedule](../auth/ldap/index.md#adjusting-ldap-group-sync-schedule-starter-only)
instead.
diff --git a/doc/administration/raketasks/maintenance.md b/doc/administration/raketasks/maintenance.md
index 78094f00a43..19781b6a5db 100644
--- a/doc/administration/raketasks/maintenance.md
+++ b/doc/administration/raketasks/maintenance.md
@@ -260,7 +260,7 @@ clear it.
To clear all exclusive leases:
-DANGER: **DANGER**:
+DANGER: **Danger:**
Don't run it while GitLab or Sidekiq is running
```shell
@@ -317,7 +317,7 @@ migrations are completed (have an `up` status).
Sometimes you may need to re-import the common metrics that power the Metrics dashboards.
-This could be as a result of [updating existing metrics](../../development/prometheus_metrics.md#update-existing-metrics), or as a [troubleshooting measure](../../user/project/integrations/prometheus.md#troubleshooting).
+This could be as a result of [updating existing metrics](../../development/prometheus_metrics.md#update-existing-metrics), or as a [troubleshooting measure](../../operations/metrics/dashboards/index.md#troubleshooting).
To re-import the metrics you can run:
diff --git a/doc/administration/raketasks/praefect.md b/doc/administration/raketasks/praefect.md
index c48e23df77a..ca9c2065904 100644
--- a/doc/administration/raketasks/praefect.md
+++ b/doc/administration/raketasks/praefect.md
@@ -1,13 +1,23 @@
-# Praefect Rake Tasks **(CORE ONLY)**
+---
+stage: Create
+group: Gitaly
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: reference
+---
+
+# Praefect Rake tasks **(CORE ONLY)**
> [Introduced]( https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28369) in GitLab 12.10.
+Rake tasks are available for projects that have been created on Praefect storage. See the
+[Praefect documentation](../gitaly/praefect.md) for information on configuring Praefect.
+
## Replica checksums
-Prints out checksums of the repository of a given project_id on the primary as well as secondary internal Gitaly nodes.
+`gitlab:praefect:replicas` prints out checksums of the repository of a given `project_id` on:
-NOTE: **Note:**
-This only is relevant and works for projects that have been created on a Praefect storage. See the [Praefect Documentation](../gitaly/praefect.md) for configuring Praefect.
+- The primary Gitaly node.
+- Secondary internal Gitaly nodes.
**Omnibus Installation**
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index 2e65e889c90..b807e03b01f 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -7,6 +7,16 @@ GitLab provides Rake tasks relating to project import and export. For more infor
- [Project import/export documentation](../../user/project/settings/import_export.md).
- [Project import/export API](../../api/project_import_export.md).
+- [Developer documentation: project import/export](../../development/import_export.md)
+
+## Project import status
+
+You can query an import through the [Project import/export API](../../api/project_import_export.md#import-status).
+As described in the API documentation, the query may return an import error or exceptions.
+
+## Import large projects
+
+If you have a larger project, consider using a Rake task, as described in our [developer documentation](../../development/import_project.md#importing-via-a-rake-task).
## Import/export tasks
diff --git a/doc/administration/redis/index.md b/doc/administration/redis/index.md
new file mode 100644
index 00000000000..0bd56666ab8
--- /dev/null
+++ b/doc/administration/redis/index.md
@@ -0,0 +1,42 @@
+---
+type: index
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Configuring Redis for scaling
+
+Based on your infrastructure setup and how you have installed GitLab, there are
+multiple ways to configure Redis.
+
+You can choose to install and manage Redis and Sentinel yourself, use a hosted
+cloud solution, or you can use the ones that come bundled with the Omnibus GitLab
+packages so you only need to focus on configuration. Pick the one that suits your needs.
+
+## Redis replication and failover using Omnibus GitLab
+
+This setup is for when you have installed GitLab using the
+[Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee).
+
+Both Redis and Sentinel are bundled in the package, so you can use it to set up the whole
+Redis infrastructure (primary, replica, and Sentinel).
+
+[> Read how to set up Redis replication and failover using Omnibus GitLab](replication_and_failover.md)
+
+## Redis replication and failover using the non-bundled Redis
+
+This setup is for when you have installed GitLab using the
+[Omnibus GitLab packages](https://about.gitlab.com/install/) (CE or EE),
+or installed it [from source](../../install/installation.md), but you want to use
+your own external Redis and sentinel servers.
+
+[> Read how to set up Redis replication and failover using the non-bundled Redis](replication_and_failover_external.md)
+
+## Standalone Redis using Omnibus GitLab
+
+This setup is for when you have installed the
+[Omnibus GitLab **Community Edition** (CE) package](https://about.gitlab.com/install/?version=ce)
+to use the bundled Redis, so you can use the package with only the Redis service enabled.
+
+[> Read how to set up a standalone Redis instance using Omnibus GitLab](standalone.md)
diff --git a/doc/administration/redis/replication_and_failover.md b/doc/administration/redis/replication_and_failover.md
new file mode 100644
index 00000000000..ac31b909c89
--- /dev/null
+++ b/doc/administration/redis/replication_and_failover.md
@@ -0,0 +1,741 @@
+---
+type: howto
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Redis replication and failover with Omnibus GitLab **(PREMIUM ONLY)**
+
+NOTE: **Note:**
+This is the documentation for the Omnibus GitLab packages. For using your own
+non-bundled Redis, follow the [relevant documentation](replication_and_failover_external.md).
+
+NOTE: **Note:**
+In Redis lingo, primary is called master. In this document, primary is used
+instead of master, except the settings where `master` is required.
+
+Using [Redis](https://redis.io/) in a scalable environment is possible using a **Primary** x **Replica**
+topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
+start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+Before diving into the details of setting up Redis and Redis Sentinel for a
+replicated topology, make sure you read this document once as a whole to better
+understand how the components are tied together.
+
+You need at least `3` independent machines: physical machines, or VMs running on
+distinct physical machines. It is essential that all primary and replica Redis
+instances run on different machines. If you fail to provision the machines in
+that specific way, any issue with the shared environment can bring your entire
+setup down.
+
+It is OK to run a Sentinel alongside a primary or replica Redis instance.
+There should be no more than one Sentinel on the same machine though.
+
+You also need to take into consideration the underlying network topology,
+making sure you have redundant connectivity between Redis / Sentinel and
+GitLab instances. Otherwise, the network becomes a single point of
+failure.
+
+Running Redis in a scaled environment requires a few things:
+
+- Multiple Redis instances
+- Run Redis in a **Primary** x **Replica** topology
+- Multiple Sentinel instances
+- Application support and visibility to all Sentinel and Redis instances
+
+Redis Sentinel handles the most important tasks in an HA environment: helping to
+keep servers online with minimal to no downtime. Redis Sentinel:
+
+- Monitors **Primary** and **Replica** instances to see if they are available
+- Promotes a **Replica** to **Primary** when the **Primary** fails
+- Demotes a **Primary** to **Replica** when the failed **Primary** comes back online
+ (to prevent data-partitioning)
+- Can be queried by the application to always connect to the current **Primary**
+ server
+
+When a **Primary** fails to respond, it's the application's responsibility
+(in our case GitLab) to handle timeout and reconnect (querying a **Sentinel**
+for a new **Primary**).
+
+To get a better understanding on how to correctly set up Sentinel, please read
+the [Redis Sentinel documentation](https://redis.io/topics/sentinel) first, as
+failing to configure it correctly can lead to data loss or can bring your
+whole cluster down, invalidating the failover effort.
+
+## Recommended setup
+
+For a minimal setup, you will install the Omnibus GitLab package on `3`
+**independent** machines, each with **Redis** and **Sentinel**:
+
+- Redis Primary + Sentinel
+- Redis Replica + Sentinel
+- Redis Replica + Sentinel
+
+If you are not sure why or where this number of nodes comes
+from, read [Redis setup overview](#redis-setup-overview) and
+[Sentinel setup overview](#sentinel-setup-overview).
+
+For a recommended setup that can resist more failures, you will install
+the Omnibus GitLab package on `5` **independent** machines, each with
+**Redis** and **Sentinel**:
+
+- Redis Primary + Sentinel
+- Redis Replica + Sentinel
+- Redis Replica + Sentinel
+- Redis Replica + Sentinel
+- Redis Replica + Sentinel
+
+### Redis setup overview
+
+You must have at least `3` Redis servers: `1` primary and `2` replicas, and they
+each need to be on an independent machine (see the explanation above).
+
+You can have additional Redis nodes that help survive a situation
+where more nodes go down. Whenever there are only `2` nodes online, a failover
+is not initiated.
+
+As an example, if you have `6` Redis nodes, a maximum of `3` can be
+simultaneously down.
+
+Please note that there are different requirements for Sentinel nodes.
+If you host them on the same Redis machines, you may need to take
+those restrictions into consideration when calculating the number of
+nodes to be provisioned. See the [Sentinel setup overview](#sentinel-setup-overview)
+documentation for more information.
+
+All Redis nodes should be configured the same way and with similar server specs, as
+in a failover situation, any **Replica** can be promoted as the new **Primary** by
+the Sentinel servers.
+
+The replication requires authentication, so you need to define a password to
+protect all Redis nodes and the Sentinels. They will all share the same
+password, and all instances must be able to talk to
+each other over the network.
+
+### Sentinel setup overview
+
+Sentinels watch both other Sentinels and Redis nodes. Whenever a Sentinel
+detects that a Redis node is not responding, it announces that to the
+other Sentinels. They have to reach the **quorum**, that is, the minimum number
+of Sentinels that agree a node is down, to be able to start a failover.
+
+Whenever the **quorum** is met, the **majority** of all known Sentinel nodes
+need to be available and reachable, so that they can elect the Sentinel **leader**
+which takes all the decisions to restore service availability by:
+
+- Promoting a new **Primary**
+- Reconfiguring the other **Replicas** and making them point to the new **Primary**
+- Announcing the new **Primary** to every other Sentinel peer
+- Reconfiguring the old **Primary** and demoting it to **Replica** when it comes back online
+
+You must have at least `3` Redis Sentinel servers, and they each need to be
+on an independent machine (one that is believed to fail independently),
+ideally in different geographical areas.
+
+You can configure them on the same machines where you've configured the other
+Redis servers, but understand that if a whole node goes down, you lose both
+a Sentinel and a Redis instance.
+
+The number of Sentinels should ideally always be an **odd** number, for the
+consensus algorithm to be effective in the case of a failure.
+
+In a `3`-node topology, you can only afford `1` Sentinel node going down.
+Whenever the **majority** of the Sentinels goes down, the network partition
+protection prevents destructive actions and a failover **is not started**.
+
+Here are some examples:
+
+- With `5` or `6` Sentinels, a maximum of `2` can go down for a failover to begin.
+- With `7` sentinels, a maximum of `3` nodes can go down.
+
+The **Leader** election can sometimes fail the voting round when **consensus**
+is not achieved (see the odd number of nodes requirement above). In that case,
+a new attempt will be made after the amount of time defined in
+`sentinel['failover_timeout']` (in milliseconds).
+
+NOTE: **Note:**
+We will see where `sentinel['failover_timeout']` is defined later.
+
+The `failover_timeout` variable has a lot of different use cases. According to
+the official documentation:
+
+- The time needed to re-start a failover after a previous failover was
+ already tried against the same primary by a given Sentinel, is two
+ times the failover timeout.
+
+- The time needed for a replica replicating to a wrong primary according
+ to a Sentinel current configuration, to be forced to replicate
+ with the right primary, is exactly the failover timeout (counting since
+ the moment a Sentinel detected the misconfiguration).
+
+- The time needed to cancel a failover that is already in progress but
+ did not produce any configuration change (REPLICAOF NO ONE yet not
+ acknowledged by the promoted replica).
+
+- The maximum time a failover in progress waits for all the replicas to be
+ reconfigured as replicas of the new primary. However even after this time
+ the replicas will be reconfigured by the Sentinels anyway, but not with
+ the exact parallel-syncs progression as specified.
+
+## Configuring Redis
+
+This is the section where we install and set up the new Redis instances.
+
+It is assumed that you have installed GitLab and all its components from scratch.
+If you already have Redis installed and running, read how to
+[switch from a single-machine installation](#switching-from-an-existing-single-machine-installation).
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+### Requirements
+
+The requirements for a Redis setup are the following:
+
+1. Provision the minimum required number of instances as specified in the
+ [recommended setup](#recommended-setup) section.
+1. We **do not** recommend installing Redis or Redis Sentinel on the same machines your
+ GitLab application is running on, as this weakens your HA configuration. You can, however, opt to install Redis
+ and Sentinel on the same machine.
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)), using
+ a firewall.
+
+### Switching from an existing single-machine installation
+
+If you already have a single-machine GitLab install running, you will need to
+replicate from this machine first, before deactivating the Redis instance
+inside it.
+
+Your single-machine install will be the initial **Primary**, and the `3` others
+should be configured as **Replicas** pointing to this machine.
+
+After replication catches up, you will need to stop services in the
+single-machine install, to rotate the **Primary** to one of the new nodes.
+
+Make the required changes in configuration and restart the new nodes again.
+
+To disable Redis in the single install, edit `/etc/gitlab/gitlab.rb`:
+
+```ruby
+redis['enable'] = false
+```
+
+If you fail to replicate first, you may lose data (unprocessed background jobs).
+
+### Step 1. Configuring the primary Redis instance
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.0.0.1'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'redis-password-goes-here'
+ ```
+
+1. Only the primary GitLab application server should handle migrations. To
+ prevent database migrations from running on upgrade, add the following
+ configuration to your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+### Step 2. Configuring the replica Redis instances
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.0.0.2'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.0.0.1'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+ ```
+
+1. To prevent reconfigure from running automatically on upgrade, run:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_replica_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the Sentinels, and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+### Step 3. Configuring the Redis Sentinel instances
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers.
+
+If you are not sure if your Redis servers are working and replicating
+correctly, please read the [Troubleshooting Replication](troubleshooting.md#troubleshooting-redis-replication)
+and fix it before proceeding with Sentinel setup.
+
+You must have at least `3` Redis Sentinel servers, and they need to
+be each in an independent machine. You can configure them in the same
+machines where you've configured the other Redis servers.
+
+With GitLab Enterprise Edition, you can use the Omnibus package to set up
+multiple machines with the Sentinel daemon.
+
+---
+
+1. SSH into the server that will host Redis Sentinel.
+1. **You can omit this step if the Sentinels will be hosted in the same node as
+ the other Redis instances.**
+
+ [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents (if you are installing the
+ Sentinels in the same node as the other Redis instances, some values might
+ be duplicated below):
+
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ # Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.0.0.1'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel
+ sentinel['bind'] = '10.0.0.1'
+
+ # Port that Sentinel listens on, uncomment to change to non default. Defaults
+ # to `26379`.
+ # sentinel['port'] = 26379
+
+ ## Quorum must reflect the amount of voting sentinels it takes to start a failover.
+ ## Value must NOT be greater than the amount of sentinels.
+ ##
+ ## The quorum can be used to tune Sentinel in two ways:
+ ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+ ## we deploy, we are basically making Sentinel more sensitive to primary failures,
+ ## triggering a failover as soon as even just a minority of Sentinels is no longer
+ ## able to talk with the primary.
+ ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+ ## making Sentinel able to failover only when there are a very large number (larger
+ ## than the majority) of well-connected Sentinels which agree about the primary being down.
+ sentinel['quorum'] = 2
+
+ ## Consider unresponsive server down after x amount of ms.
+ # sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## - The time needed for a replica replicating to a wrong primary according
+ ## to a Sentinel current configuration, to be forced to replicate
+ ## with the right primary, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## - The time needed to cancel a failover that is already in progress but
+ ## did not produce any configuration change (REPLICAOF NO ONE yet not
+ ## acknowledged by the promoted replica).
+ ##
+ ## - The maximum time a failover in progress waits for all the replicas to be
+ ## reconfigured as replicas of the new primary. However even after this time
+ ## the replicas will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ # sentinel['failover_timeout'] = 60000
+ ```
+
+1. To prevent database migrations from running on upgrade, run:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+ Only the primary GitLab application server should handle migrations.
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Sentinel nodes.
+
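+After completing these steps on all Sentinel nodes, you can confirm that the Sentinels agree on the current primary by
+querying any one of them directly. A quick check, assuming the example primary IP `10.0.0.1` and the default Sentinel
+port used above:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.0.0.1 -p 26379 SENTINEL get-master-addr-by-name gitlab-redis
+```
+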
+### Step 4. Configuring the GitLab application
+
+The final part is to inform the main GitLab application server of the Redis
+Sentinels servers and authentication credentials.
+
+You can enable or disable Sentinel support at any time in new or existing
+installations. From the GitLab application perspective, all it requires is
+the correct credentials for the Sentinel nodes.
+
+While it doesn't require a list of all Sentinel nodes, in case of a failure,
+it needs to be able to access at least one of the listed ones.
+
+NOTE: **Note:**
+The following steps should be performed in the [GitLab application server](../high_availability/gitlab.md)
+which ideally should not have Redis or Sentinels on it for an HA setup.
+
+1. SSH into the server where the GitLab application is installed.
+1. Edit `/etc/gitlab/gitlab.rb` and add/change the following lines:
+
+ ```ruby
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.0.0.1', 'port' => 26379},
+ {'host' => '10.0.0.2', 'port' => 26379},
+ {'host' => '10.0.0.3', 'port' => 26379}
+ ]
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+### Step 5. Enable Monitoring
+
+> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
+
+If you enable Monitoring, it must be enabled on **all** Redis servers.
+
+1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. Note they are presented as `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`
+
+1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
+
+ ```ruby
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Replace placeholders
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses of the Consul server nodes
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ ```
+
+1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+
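+To verify that the Redis exporter is reachable after the reconfigure, you can query its metrics endpoint from another
+node on the internal network; a quick check assuming the listen address configured above (replace the IP with your
+Redis node's address):
+
+```shell
+curl --silent http://10.0.0.1:9121/metrics | head
+```
+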
+## Example of a minimal configuration with 1 primary, 2 replicas and 3 Sentinels
+
+In this example we consider that all servers have an internal network
+interface with IPs in the `10.0.0.x` range, and that they can connect
+to each other using these IPs.
+
+In real-world usage, you would also set up firewall rules to prevent
+unauthorized access from other machines and block traffic from the
+outside (Internet).
+
+We will use the same `3` nodes with **Redis** + **Sentinel** topology
+discussed in [Redis setup overview](#redis-setup-overview) and
+[Sentinel setup overview](#sentinel-setup-overview) documentation.
+
+Here is a list and description of each **machine** and the assigned **IP**:
+
+- `10.0.0.1`: Redis primary + Sentinel 1
+- `10.0.0.2`: Redis Replica 1 + Sentinel 2
+- `10.0.0.3`: Redis Replica 2 + Sentinel 3
+- `10.0.0.4`: GitLab application
+
+Please note that after the initial configuration, if a failover is initiated
+by the Sentinel nodes, the Redis nodes will be reconfigured and the **Primary**
+will change permanently (including in `redis.conf`) from one node to the other,
+until a new failover is initiated again.
+
+The same thing will happen with `sentinel.conf` that will be overridden after the
+initial execution, after any new sentinel node starts watching the **Primary**,
+or a failover promotes a different **Primary** node.
+
+### Example configuration for Redis primary and Sentinel 1
+
+In `/etc/gitlab/gitlab.rb`:
+
+```ruby
+roles ['redis_sentinel_role', 'redis_master_role']
+redis['bind'] = '10.0.0.1'
+redis['port'] = 6379
+redis['password'] = 'redis-password-goes-here'
+redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
+redis['master_password'] = 'redis-password-goes-here' # the same value defined in redis['password'] in the primary instance
+redis['master_ip'] = '10.0.0.1' # ip of the initial primary redis instance
+#redis['master_port'] = 6379 # port of the initial primary redis instance, uncomment to change to non default
+sentinel['bind'] = '10.0.0.1'
+# sentinel['port'] = 26379 # uncomment to change default port
+sentinel['quorum'] = 2
+# sentinel['down_after_milliseconds'] = 10000
+# sentinel['failover_timeout'] = 60000
+```
+
+[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+### Example configuration for Redis replica 1 and Sentinel 2
+
+In `/etc/gitlab/gitlab.rb`:
+
+```ruby
+roles ['redis_sentinel_role', 'redis_replica_role']
+redis['bind'] = '10.0.0.2'
+redis['port'] = 6379
+redis['password'] = 'redis-password-goes-here'
+redis['master_password'] = 'redis-password-goes-here'
+redis['master_ip'] = '10.0.0.1' # IP of primary Redis server
+#redis['master_port'] = 6379 # Port of primary Redis server, uncomment to change to non default
+redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
+sentinel['bind'] = '10.0.0.2'
+# sentinel['port'] = 26379 # uncomment to change default port
+sentinel['quorum'] = 2
+# sentinel['down_after_milliseconds'] = 10000
+# sentinel['failover_timeout'] = 60000
+```
+
+[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+### Example configuration for Redis replica 2 and Sentinel 3
+
+In `/etc/gitlab/gitlab.rb`:
+
+```ruby
+roles ['redis_sentinel_role', 'redis_replica_role']
+redis['bind'] = '10.0.0.3'
+redis['port'] = 6379
+redis['password'] = 'redis-password-goes-here'
+redis['master_password'] = 'redis-password-goes-here'
+redis['master_ip'] = '10.0.0.1' # IP of primary Redis server
+#redis['master_port'] = 6379 # Port of primary Redis server, uncomment to change to non default
+redis['master_name'] = 'gitlab-redis' # must be the same in every sentinel node
+sentinel['bind'] = '10.0.0.3'
+# sentinel['port'] = 26379 # uncomment to change default port
+sentinel['quorum'] = 2
+# sentinel['down_after_milliseconds'] = 10000
+# sentinel['failover_timeout'] = 60000
+```
+
+[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+### Example configuration for the GitLab application
+
+In `/etc/gitlab/gitlab.rb`:
+
+```ruby
+redis['master_name'] = 'gitlab-redis'
+redis['master_password'] = 'redis-password-goes-here'
+gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.0.0.1', 'port' => 26379},
+ {'host' => '10.0.0.2', 'port' => 26379},
+ {'host' => '10.0.0.3', 'port' => 26379}
+]
+```
+
+[Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+## Advanced configuration
+
+Omnibus GitLab configures some things behind the scenes to make the sysadmins'
+lives easier. If you want to know what happens underneath, keep reading.
+
+### Running multiple Redis clusters
+
+GitLab supports running [separate Redis clusters for different persistence
+classes](https://docs.gitlab.com/omnibus/settings/redis.html#running-with-multiple-redis-instances):
+cache, queues, and shared_state. To make this work with Sentinel:
+
+1. Set the appropriate variable in `/etc/gitlab/gitlab.rb` for each instance you are using:
+
+ ```ruby
+ gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL
+ gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL
+ gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL
+ ```
+
+ **Note**: Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_PRIMARY_NAME`, where:
+
+ 1. `PASSWORD` is the plaintext password for the Redis instance.
+ 1. `SENTINEL_PRIMARY_NAME` is the Sentinel primary name (for example, `gitlab-redis-cache`).
+
+1. Include an array of hashes with host/port combinations, such as the following:
+
+ ```ruby
+ gitlab_rails['redis_cache_sentinels'] = [
+ { host: REDIS_CACHE_SENTINEL_HOST, port: PORT1 },
+ { host: REDIS_CACHE_SENTINEL_HOST2, port: PORT2 }
+ ]
+ gitlab_rails['redis_queues_sentinels'] = [
+ { host: REDIS_QUEUES_SENTINEL_HOST, port: PORT1 },
+ { host: REDIS_QUEUES_SENTINEL_HOST2, port: PORT2 }
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ { host: SHARED_STATE_SENTINEL_HOST, port: PORT1 },
+ { host: SHARED_STATE_SENTINEL_HOST2, port: PORT2 }
+ ]
+ ```
+
+1. Note that for each persistence class, GitLab will default to using the
+ configuration specified in `gitlab_rails['redis_sentinels']` unless
+ overridden by the settings above.
+1. Be sure to include BOTH configuration options for each persistence class. For example,
+ if you choose to configure a cache instance, you must specify both `gitlab_rails['redis_cache_instance']`
+ and `gitlab_rails['redis_cache_sentinels']` for GitLab to generate the proper configuration files.
+1. Run `gitlab-ctl reconfigure`. A combined example of these settings follows below.
+
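+For illustration, a sketch of the cache class with the placeholders above filled in; the host addresses and the
+Sentinel primary name below are assumptions, not defaults:
+
+```ruby
+# Hypothetical dedicated cache cluster monitored by Sentinel as 'gitlab-redis-cache'
+gitlab_rails['redis_cache_instance'] = 'redis://:redis-password-goes-here@gitlab-redis-cache'
+gitlab_rails['redis_cache_sentinels'] = [
+  { host: '10.0.0.10', port: 26379 },
+  { host: '10.0.0.11', port: 26379 }
+]
+```
+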
+### Control running services
+
+In the previous example, we've used `redis_sentinel_role` and
+`redis_master_role`, which simplify the amount of configuration required.
+
+If you want more control, here is what each one sets for you automatically
+when enabled:
+
+```ruby
+## Redis Sentinel Role
+redis_sentinel_role['enable'] = true
+
+# When Sentinel Role is enabled, the following services are also enabled
+sentinel['enable'] = true
+
+# The following services are disabled
+redis['enable'] = false
+bootstrap['enable'] = false
+nginx['enable'] = false
+postgresql['enable'] = false
+gitlab_rails['enable'] = false
+mailroom['enable'] = false
+
+# -------
+
+## Redis primary/replica Role
+redis_master_role['enable'] = true # enable only one of them
+redis_replica_role['enable'] = true # enable only one of them
+
+# When Redis primary or Replica role are enabled, the following services are
+# enabled/disabled. Note that if Redis and Sentinel roles are combined, both
+# services will be enabled.
+
+# The following services are disabled
+sentinel['enable'] = false
+bootstrap['enable'] = false
+nginx['enable'] = false
+postgresql['enable'] = false
+gitlab_rails['enable'] = false
+mailroom['enable'] = false
+
+# For Redis Replica role, also change this setting from default 'true' to 'false':
+redis['master'] = false
+```
+
+You can find the relevant attributes defined in [`gitlab_rails.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/libraries/gitlab_rails.rb).
+
+## Troubleshooting
+
+See the [Redis troubleshooting guide](troubleshooting.md).
+
+## Further reading
+
+Read more:
+
+1. [Reference architectures](../reference_architectures/index.md)
+1. [Configure the database](../postgresql/replication_and_failover.md)
+1. [Configure NFS](../high_availability/nfs.md)
+1. [Configure the GitLab application servers](../high_availability/gitlab.md)
+1. [Configure the load balancers](../high_availability/load_balancer.md)
diff --git a/doc/administration/redis/replication_and_failover_external.md b/doc/administration/redis/replication_and_failover_external.md
new file mode 100644
index 00000000000..244b44dd76a
--- /dev/null
+++ b/doc/administration/redis/replication_and_failover_external.md
@@ -0,0 +1,376 @@
+---
+type: howto
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Redis replication and failover providing your own instance **(CORE ONLY)**
+
+If you're hosting GitLab on a cloud provider, you can optionally use a managed
+service for Redis. For example, AWS offers ElastiCache that runs Redis.
+
+Alternatively, you may opt to manage your own Redis instance separate from the
+Omnibus GitLab package.
+
+## Requirements
+
+The following are the requirements for providing your own Redis instance:
+
+- Redis version 5.0 or higher is recommended, as this is what ships with
+ Omnibus GitLab packages starting with GitLab 12.7.
+- Support for Redis 3.2 is deprecated with GitLab 12.10 and will be completely
+ removed in GitLab 13.0.
+- GitLab 12.0 and later requires Redis version 3.2 or higher. Older Redis
+ versions do not support an optional count argument to SPOP which is now
+ required for [Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+- In addition, if Redis 4 or later is available, GitLab makes use of certain
+ commands like `UNLINK` and `USAGE` which were introduced only in Redis 4.
+- Standalone Redis or Redis high availability with Sentinel are supported. Redis
+ Cluster is not supported.
+- Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+ services support high availability, be sure it is **not** the Redis Cluster type.
+
+Note the Redis node's IP address or hostname, port, and password (if required).
+
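+To confirm which Redis version an existing node runs, you can query it directly; a quick check assuming `redis-cli` is
+available and the node listens on the default port (add `-a '<password>'` if authentication is required):
+
+```shell
+redis-cli -h redis.example.com -p 6379 INFO server | grep redis_version
+```
+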
+## Redis as a managed service in a cloud provider
+
+1. Set up Redis according to the [requirements](#requirements).
+1. Configure the GitLab application servers with the appropriate connection details
+ for your external Redis service in your `/etc/gitlab/gitlab.rb` file:
+
+ ```ruby
+ redis['enable'] = false
+
+ gitlab_rails['redis_host'] = 'redis.example.com'
+ gitlab_rails['redis_port'] = 6379
+
+ # Required if Redis authentication is configured on the Redis node
+ gitlab_rails['redis_password'] = 'Redis Password'
+ ```
+
+1. Reconfigure for the changes to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+## Redis replication and failover with your own Redis servers
+
+This is the documentation for configuring a scalable Redis setup when
+you have installed Redis yourself and are not using the bundled one that
+comes with the Omnibus packages. Using the Omnibus GitLab packages is
+highly recommended, as we optimize them specifically for GitLab and take
+care of upgrading Redis to the latest supported version.
+
+Note also that you may elect to override all references to
+`/home/git/gitlab/config/resque.yml` in accordance with the advanced Redis
+settings outlined in
+[Configuration Files Documentation](https://gitlab.com/gitlab-org/gitlab/blob/master/config/README.md).
+
+We cannot stress enough the importance of reading the
+[replication and failover](replication_and_failover.md) documentation of the
+Omnibus Redis HA as it provides some invaluable information to the configuration
+of Redis. Please proceed to read it before going forward with this guide.
+
+Before proceeding on setting up the new Redis instances, here are some
+requirements:
+
+- All Redis servers in this guide must be configured to use a TCP connection
+ instead of a socket. To configure Redis to use TCP connections you need to
+ define both `bind` and `port` in the Redis config file. You can bind to all
+ interfaces (`0.0.0.0`) or specify the IP of the desired interface
+ (e.g., one from an internal network).
+- Since Redis 3.2, you must define a password to receive external connections
+ (`requirepass`).
+- If you are using Redis with Sentinel, you will also need to define the same
+ password for the replica password definition (`masterauth`) in the same instance.
+
+In addition, read the prerequisites as described in the
+[Omnibus Redis document](replication_and_failover.md#requirements) since they provide some
+valuable information for the general setup.
+
+### Step 1. Configuring the primary Redis instance
+
+Assuming that the Redis primary instance IP is `10.0.0.1`:
+
+1. [Install Redis](../../install/installation.md#7-redis).
+1. Edit `/etc/redis/redis.conf`:
+
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+ ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.1
+
+ ## Define a `port` to force redis to listen on TCP so other machines can
+ ## connect to it (default port is `6379`).
+ port 6379
+
+ ## Set up password authentication (use the same password in all nodes).
+ ## The password should be defined equal for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+
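+How you restart Redis depends on how it was installed; for example, on a distribution that manages Redis with systemd
+(an assumption, adjust the unit name to your setup):
+
+```shell
+sudo systemctl restart redis-server
+```
+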
+### Step 2. Configuring the replica Redis instances
+
+Assuming that the Redis replica instance IP is `10.0.0.2`:
+
+1. [Install Redis](../../install/installation.md#7-redis).
+1. Edit `/etc/redis/redis.conf`:
+
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+ ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.2
+
+ ## Define a `port` to force redis to listen on TCP so other machines can
+ ## connect to it (default port is `6379`).
+ port 6379
+
+ ## Set up password authentication (use the same password in all nodes).
+ ## The password should be defined equal for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+
+ ## Define `replicaof` pointing to the Redis primary instance with IP and port.
+ replicaof 10.0.0.1 6379
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+1. Go through the steps again for all the other replica nodes.
+
+### Step 3. Configuring the Redis Sentinel instances
+
+Sentinel is a special type of Redis server. It inherits most of the basic
+configuration options you can define in `redis.conf`, with specific ones
+starting with `sentinel` prefix.
+
+Assuming that the Redis Sentinel is installed on the same instance as Redis
+primary with IP `10.0.0.1` (some settings might overlap with the primary):
+
+1. [Install Redis Sentinel](https://redis.io/topics/sentinel).
+1. Edit `/etc/redis/sentinel.conf`:
+
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+ ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.1
+
+ ## Define a `port` to force Sentinel to listen on TCP so other machines can
+ ## connect to it (default Sentinel port is `26379`).
+ port 26379
+
+ ## Set up password authentication (use the same password in all nodes).
+ ## The password should be defined equal for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+
+ ## Define with `sentinel auth-pass` the same shared password you have
+ ## defined for both Redis primary and replicas instances.
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+
+ ## Define with `sentinel monitor` the IP and port of the Redis
+ ## primary node, and the quorum required to start a failover.
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+
+ ## Define with `sentinel down-after-milliseconds` the time in `ms`
+ ## that an unresponsive server will be considered down.
+ sentinel down-after-milliseconds gitlab-redis 10000
+
+ ## Define a value for `sentinel failover_timeout` in `ms`. This has multiple
+ ## meanings:
+ ##
+ ## * The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## * The time needed for a replica replicating to a wrong primary according
+ ## to a Sentinel current configuration, to be forced to replicate
+ ## with the right primary, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## * The time needed to cancel a failover that is already in progress but
+ ## did not produce any configuration change (REPLICAOF NO ONE yet not
+ ## acknowledged by the promoted replica).
+ ##
+ ## * The maximum time a failover in progress waits for all the replicas to be
+ ## reconfigured as replicas of the new primary. However even after this time
+ ## the replicas will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ sentinel failover_timeout 30000
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+1. Go through the steps again for all the other Sentinel nodes.
+
+### Step 4. Configuring the GitLab application
+
+You can enable or disable Sentinel support at any time in new or existing
+installations. From the GitLab application perspective, all it requires is
+the correct credentials for the Sentinel nodes.
+
+While it doesn't require a list of all Sentinel nodes, in case of a failure,
+it needs to be able to access at least one of the listed ones.
+
+The following steps should be performed in the [GitLab application server](../high_availability/gitlab.md)
+which ideally should not have Redis or Sentinels in the same machine:
+
+1. Edit `/home/git/gitlab/config/resque.yml` following the example in
+ [resque.yml.example](https://gitlab.com/gitlab-org/gitlab/blob/master/config/resque.yml.example), and uncomment the Sentinel lines, pointing to
+ the correct server credentials:
+
+ ```yaml
+ # resque.yaml
+ production:
+ url: redis://:redis-password-goes-here@gitlab-redis/
+ sentinels:
+ -
+ host: 10.0.0.1
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.2
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.3
+ port: 26379 # point to sentinel, not to redis port
+ ```
+
+1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
+
+## Example of minimal configuration with 1 primary, 2 replicas and 3 sentinels
+
+In this example we consider that all servers have an internal network
+interface with IPs in the `10.0.0.x` range, and that they can connect
+to each other using these IPs.
+
+In real-world usage, you would also set up firewall rules to prevent
+unauthorized access from other machines, and block traffic from the
+outside ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)).
+
+For this example, **Sentinel 1** will be configured in the same machine as the
+**Redis Primary**, **Sentinel 2** and **Sentinel 3** in the same machines as the
+**Replica 1** and **Replica 2** respectively.
+
+Here is a list and description of each **machine** and the assigned **IP**:
+
+- `10.0.0.1`: Redis Primary + Sentinel 1
+- `10.0.0.2`: Redis Replica 1 + Sentinel 2
+- `10.0.0.3`: Redis Replica 2 + Sentinel 3
+- `10.0.0.4`: GitLab application
+
+Please note that after the initial configuration, if a failover is initiated
+by the Sentinel nodes, the Redis nodes will be reconfigured and the **Primary**
+will change permanently (including in `redis.conf`) from one node to the other,
+until a new failover is initiated again.
+
+The same thing will happen with `sentinel.conf` that will be overridden after the
+initial execution, after any new sentinel node starts watching the **Primary**,
+or a failover promotes a different **Primary** node.
+
+### Example configuration for Redis primary and Sentinel 1
+
+1. In `/etc/redis/redis.conf`:
+
+ ```conf
+ bind 10.0.0.1
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ ```
+
+1. In `/etc/redis/sentinel.conf`:
+
+ ```conf
+ bind 10.0.0.1
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+
+### Example configuration for Redis replica 1 and Sentinel 2
+
+1. In `/etc/redis/redis.conf`:
+
+ ```conf
+ bind 10.0.0.2
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ replicaof 10.0.0.1 6379
+ ```
+
+1. In `/etc/redis/sentinel.conf`:
+
+ ```conf
+ bind 10.0.0.2
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+
+### Example configuration for Redis replica 2 and Sentinel 3
+
+1. In `/etc/redis/redis.conf`:
+
+ ```conf
+ bind 10.0.0.3
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ replicaof 10.0.0.1 6379
+ ```
+
+1. In `/etc/redis/sentinel.conf`:
+
+ ```conf
+ bind 10.0.0.3
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
+
+1. Restart the Redis service for the changes to take effect.
+
+### Example configuration of the GitLab application
+
+1. Edit `/home/git/gitlab/config/resque.yml`:
+
+ ```yaml
+ production:
+ url: redis://:redis-password-goes-here@gitlab-redis/
+ sentinels:
+ -
+ host: 10.0.0.1
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.2
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.3
+ port: 26379 # point to sentinel, not to redis port
+ ```
+
+1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
+
+## Troubleshooting
+
+See the [Redis troubleshooting guide](troubleshooting.md).
diff --git a/doc/administration/redis/standalone.md b/doc/administration/redis/standalone.md
new file mode 100644
index 00000000000..12e932dbc5e
--- /dev/null
+++ b/doc/administration/redis/standalone.md
@@ -0,0 +1,63 @@
+---
+type: howto
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Standalone Redis using Omnibus GitLab **(CORE ONLY)**
+
+The Omnibus GitLab package can be used to configure a standalone Redis server.
+In this configuration, Redis is not scaled, and represents a single
+point of failure. However, in a scaled environment the objective is to allow
+the environment to handle more users or to increase throughput. Redis itself
+is generally stable and can handle many requests, so it is an acceptable
+trade off to have only a single instance. See the [reference architectures](../reference_architectures/index.md)
+page for an overview of GitLab scaling options.
+
+## Set up a standalone Redis instance
+
+The steps below are the minimum necessary to configure a Redis server with
+Omnibus GitLab:
+
+1. SSH into the Redis server.
+1. [Download and install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want by using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ ## Enable Redis
+ redis['enable'] = true
+
+ ## Disable all other services
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ puma['enable'] = false
+ postgresql['enable'] = false
+ nginx['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ pgbouncer_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitaly['enable'] = false
+
+ redis['bind'] = '0.0.0.0'
+ redis['port'] = 6379
+ redis['password'] = 'SECRET_PASSWORD_HERE'
+
+ gitlab_rails['enable'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Note the Redis node's IP address or hostname, port, and
+ Redis password. These will be necessary when configuring the GitLab
+ application servers later.
+
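+To verify the instance is up and accepting authenticated connections, you can ping it from another machine on the
+internal network; a quick check assuming the password configured above:
+
+```shell
+# Expected reply: PONG
+/opt/gitlab/embedded/bin/redis-cli -h <redis-node-ip> -a 'SECRET_PASSWORD_HERE' ping
+```
+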
+[Advanced configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+## Troubleshooting
+
+See the [Redis troubleshooting guide](troubleshooting.md).
diff --git a/doc/administration/redis/troubleshooting.md b/doc/administration/redis/troubleshooting.md
new file mode 100644
index 00000000000..402b60e5b7b
--- /dev/null
+++ b/doc/administration/redis/troubleshooting.md
@@ -0,0 +1,158 @@
+---
+type: reference
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Troubleshooting Redis
+
+There are a lot of moving parts that need to be handled carefully
+for the HA setup to work as expected.
+
+Before proceeding with the troubleshooting below, check your firewall rules:
+
+- Redis machines
+ - Accept TCP connections on port `6379`
+ - Connect to the other Redis machines via TCP on port `6379`
+- Sentinel machines
+ - Accept TCP connections on port `26379`
+ - Connect to other Sentinel machines via TCP on port `26379`
+ - Connect to the Redis machines via TCP on port `6379`
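+
+One way to confirm these rules from a GitLab or Sentinel node is a quick port
+check (a sketch assuming the `nc` utility is installed and the default ports are in use):
+
+```shell
+# Check that the Redis and Sentinel ports are reachable from this machine.
+nc -zv <redis-host-or-ip> 6379
+nc -zv <sentinel-host-or-ip> 26379
+```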
+
+## Troubleshooting Redis replication
+
+You can check if everything is correct by connecting to each server using
+the `redis-cli` application, and sending the `info replication` command as below.
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
+```
+
+When connected to a `Primary` Redis, you will see the number of connected
+`replicas`, and a list of each with connection details:
+
+```plaintext
+# Replication
+role:master
+connected_replicas:1
+replica0:ip=10.133.5.21,port=6379,state=online,offset=208037514,lag=1
+master_repl_offset:208037658
+repl_backlog_active:1
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:206989083
+repl_backlog_histlen:1048576
+```
+
+When it's a `replica`, you will see details of the primary connection and whether
+it's `up` or `down`:
+
+```plaintext
+# Replication
+role:replica
+master_host:10.133.1.58
+master_port:6379
+master_link_status:up
+master_last_io_seconds_ago:1
+master_sync_in_progress:0
+replica_repl_offset:208096498
+replica_priority:100
+replica_read_only:1
+connected_replicas:0
+master_repl_offset:0
+repl_backlog_active:0
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:0
+repl_backlog_histlen:0
+```
+
+## Troubleshooting Sentinel
+
+If you get an error like: `Redis::CannotConnectError: No sentinels available.`,
+there may be something wrong with your configuration files or it can be related
+to [this issue](https://github.com/redis/redis-rb/issues/531).
+
+You must make sure you are defining the same value in `redis['master_name']`
+and `redis['master_password']` as you defined for your Sentinel node.
+
+The way the Redis connector `redis-rb` works with Sentinel is a bit
+non-intuitive. We try to hide the complexity in Omnibus GitLab, but it still requires
+a few extra configuration steps.
+
+---
+
+To make sure your configuration is correct:
+
+1. SSH into your GitLab application server.
+1. Enter the Rails console:
+
+ ```shell
+ # For Omnibus installations
+ sudo gitlab-rails console
+
+ # For source installations
+ sudo -u git rails console -e production
+ ```
+
+1. Run in the console:
+
+ ```ruby
+ redis = Redis.new(Gitlab::Redis::SharedState.params)
+ redis.info
+ ```
+
+ Keep this screen open and try to simulate a failover below.
+
+1. To simulate a failover on primary Redis, SSH into the Redis server and run:
+
+ ```shell
+ # The port must match your primary Redis port, and the sleep time must be a few seconds longer than the configured down-after-milliseconds value
+ redis-cli -h localhost -p 6379 DEBUG sleep 20
+ ```
+
+1. Then back in the Rails console from the first step, run:
+
+ ```ruby
+ redis.info
+ ```
+
+ You should see a different port after a delay of a few seconds
+ (the failover/reconnect time).
+
+## Troubleshooting a non-bundled Redis with an installation from source
+
+If you get an error in GitLab like `Redis::CannotConnectError: No sentinels available.`,
+there may be something wrong with your configuration files or it can be related
+to [this upstream issue](https://github.com/redis/redis-rb/issues/531).
+
+You must make sure that `resque.yml` and `sentinel.conf` are configured correctly,
+otherwise `redis-rb` will not work properly.
+
+The `master-group-name` (`gitlab-redis`) defined in `sentinel.conf`
+**must** be used as the hostname in GitLab (`resque.yml`):
+
+```conf
+# sentinel.conf:
+sentinel monitor gitlab-redis 10.0.0.1 6379 2
+sentinel down-after-milliseconds gitlab-redis 10000
+sentinel config-epoch gitlab-redis 0
+sentinel leader-epoch gitlab-redis 0
+```
+
+```yaml
+# resque.yml
+production:
+ url: redis://:myredispassword@gitlab-redis/
+ sentinels:
+ -
+ host: 10.0.0.1
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.2
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.3
+ port: 26379 # point to sentinel, not to redis port
+```
+
+When in doubt, read the [Redis Sentinel documentation](https://redis.io/topics/sentinel).
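+
+You can also query a Sentinel node directly to confirm which primary it is
+tracking (a sketch assuming the example Sentinel IP and master group name above):
+
+```shell
+# Ask Sentinel which address it currently reports for the gitlab-redis group.
+redis-cli -h 10.0.0.1 -p 26379 SENTINEL get-master-addr-by-name gitlab-redis
+```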
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index a15fcf722a5..5367021af4e 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -47,7 +47,7 @@ For a full list of reference architectures, see
For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+ [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
diff --git a/doc/administration/reference_architectures/1k_users.md b/doc/administration/reference_architectures/1k_users.md
index 34805a8ac68..def23619a5c 100644
--- a/doc/administration/reference_architectures/1k_users.md
+++ b/doc/administration/reference_architectures/1k_users.md
@@ -57,7 +57,7 @@ added performance and reliability at a reduced complexity cost.
For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+ [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index d851fa124c6..17f4300eb03 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -47,7 +47,7 @@ For a full list of reference architectures, see
For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+ [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index 0a3ade1acf1..d182daf45b3 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -12,16 +12,16 @@ For a full list of reference architectures, see
> - **High Availability:** False
> - **Test requests per second (RPS) rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS
-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|--------------------------------------------------------------|-----------|---------------------------------|---------------|-----------------------|----------------|
-| Load balancer | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
-| Object storage | n/a | n/a | n/a | n/a | n/a |
-| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
-| PostgreSQL | 1 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
-| Redis | 1 | 1 vCPU, 3.75GB memory | n1-standard-1 | m5.large | D2s v3 |
-| Gitaly | 1 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
-| GitLab Rails | 2 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
-| Monitoring node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|------------------------------------------|--------|-------------------------|-----------------|----------------|-----------|
+| Load balancer | 1 | 2 vCPU, 1.8GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL | 1 | 2 vCPU, 7.5GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Redis | 1 | 1 vCPU, 3.75GB memory | `n1-standard-1` | `m5.large` | `D2s v3` |
+| Gitaly | 1 | 4 vCPU, 15GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| GitLab Rails | 2 | 8 vCPU, 7.2GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
+| Monitoring node | 1 | 2 vCPU, 1.8GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Object storage | n/a | n/a | n/a | n/a | n/a |
+| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
The Google Cloud Platform (GCP) architectures were built and tested using the
[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
@@ -30,9 +30,6 @@ or higher, are required for your CPU or node counts. For more information, see
our [Sysbench](https://github.com/akopytov/sysbench)-based
[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
-AWS-equivalent and Azure-equivalent configurations are rough suggestions that
-may change in the future, and haven't been tested or validated.
-
Due to better performance and availability, for data objects (such as LFS,
uploads, or artifacts), using an [object storage service](#configure-the-object-storage)
is recommended instead of using NFS. Using an object storage service also
@@ -44,12 +41,6 @@ To set up GitLab and its components to accommodate up to 2,000 users:
1. [Configure the external load balancing node](#configure-the-load-balancer)
to handle the load balancing of the two GitLab application services nodes.
-1. [Configure the object storage](#configure-the-object-storage) used for
- shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage. You can skip this step if you're not using GitLab Pages (which
- requires NFS).
1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
1. [Configure Redis](#configure-redis).
1. [Configure Gitaly](#configure-gitaly), which provides access to the Git
@@ -59,6 +50,12 @@ To set up GitLab and its components to accommodate up to 2,000 users:
requests (which include UI, API, and Git over HTTP/SSH).
1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab
environment.
+1. [Configure the object storage](#configure-the-object-storage) used for
+ shared data objects.
+1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
+ to have shared disk storage service as an alternative to Gitaly or object
+ storage. You can skip this step if you're not using GitLab Pages (which
+ requires NFS).
## Configure the load balancer
@@ -173,73 +170,6 @@ Configure DNS for an alternate SSH hostname, such as `altssh.gitlab.example.com`
</a>
</div>
-## Configure the object storage
-
-GitLab supports using an object storage service for holding several types of
-data, and is recommended over [NFS](#configure-nfs-optional). In general,
-object storage services are better for larger environments, as object storage
-is typically much more performant, reliable, and scalable.
-
-Object storage options that GitLab has either tested or is aware of customers
-using, includes:
-
-- SaaS/Cloud solutions (such as [Amazon S3](https://aws.amazon.com/s3/) or
- [Google Cloud Storage](https://cloud.google.com/storage)).
-- On-premises hardware and appliances, from various storage vendors.
-- MinIO ([Deployment guide](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html)).
-
-To configure GitLab to use object storage, refer to the following guides based
-on the features you intend to use:
-
-1. [Object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
-1. [Object storage for job artifacts](../job_artifacts.md#using-object-storage)
- including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
-1. [Object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
-1. [Object storage for uploads](../uploads.md#using-object-storage-core-only).
-1. [Object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
-1. [Object storage for Container Registry](../packages/container_registry.md#container-registry-storage-driver) (optional feature).
-1. [Object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
-1. [Object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
-1. [Object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
-1. [Object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
-1. [Object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional, for improved performance).
-1. [Object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
-
-Using separate buckets for each data type is the recommended approach for GitLab.
-
-A limitation of our configuration is that each use of object storage is
-separately configured. We have an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/23345)
-for improving this, which would allow for one bucket with separate folders.
-
-Using a single bucket when GitLab is deployed with the Helm chart causes
-restoring from a backup to
-[not function properly](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer).
-Although you may not be using a Helm deployment right now, if you migrate
-GitLab to a Helm deployment later, GitLab would still work, but you may not
-realize backups aren't working correctly until a critical requirement for
-functioning backups is encountered.
-
-<div align="right">
- <a type="button" class="btn btn-default" href="#setup-components">
- Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
- </a>
-</div>
-
-## Configure NFS (optional)
-
-For improved performance, [object storage](#configure-the-object-storage),
-along with [Gitaly](#configure-gitaly), are recommended over using NFS whenever
-possible. However, if you intend to use GitLab Pages,
-[you must use NFS](troubleshooting.md#gitlab-pages-requires-nfs).
-
-For information about configuring NFS, see the [NFS documentation page](../high_availability/nfs.md).
-
-<div align="right">
- <a type="button" class="btn btn-default" href="#setup-components">
- Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
- </a>
-</div>
-
## Configure PostgreSQL
In this section, you'll be guided through configuring an external PostgreSQL database
@@ -543,7 +473,7 @@ nodes (including the Gitaly node using the certificate) and on all client nodes
that communicate with it following the procedure described in
[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
-NOTE: **Note**
+NOTE: **Note:**
The self-signed certificate must specify the address you use to access the
Gitaly server. If you are addressing the Gitaly server by a hostname, you can
either use the Common Name field for this, or add it as a Subject Alternative
@@ -728,7 +658,8 @@ On each node perform the following:
sudo gitlab-ctl tail gitaly
```
-NOTE: **Note:** When you specify `https` in the `external_url`, as in the example
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
certificates are not present, NGINX will fail to start. See the
[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
@@ -862,6 +793,73 @@ running [Prometheus](../monitoring/prometheus/index.md) and
</a>
</div>
+## Configure the object storage
+
+GitLab supports using an object storage service for holding several types of
+data, and is recommended over [NFS](#configure-nfs-optional). In general,
+object storage services are better for larger environments, as object storage
+is typically much more performant, reliable, and scalable.
+
+Object storage options that GitLab has either tested or is aware of customers
+using, include:
+
+- SaaS/Cloud solutions (such as [Amazon S3](https://aws.amazon.com/s3/) or
+ [Google Cloud Storage](https://cloud.google.com/storage)).
+- On-premises hardware and appliances, from various storage vendors.
+- MinIO ([Deployment guide](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html)).
+
+To configure GitLab to use object storage, refer to the following guides based
+on the features you intend to use:
+
+1. [Object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. [Object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. [Object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. [Object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. [Object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. [Object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. [Object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. [Object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. [Object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. [Object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. [Object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional, for improved performance).
+1. [Object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
+
+Using separate buckets for each data type is the recommended approach for GitLab.
+
+A limitation of our configuration is that each use of object storage is
+separately configured. We have an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/23345)
+for improving this, which would allow for one bucket with separate folders.
+
+Using a single bucket when GitLab is deployed with the Helm chart causes
+restoring from a backup to
+[not function properly](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer).
+Although you may not be using a Helm deployment right now, if you migrate
+GitLab to a Helm deployment later, GitLab would still work, but you may not
+realize backups aren't working correctly until a critical requirement for
+functioning backups is encountered.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+For improved performance, [object storage](#configure-the-object-storage),
+along with [Gitaly](#configure-gitaly), are recommended over using NFS whenever
+possible. However, if you intend to use GitLab Pages,
+[you must use NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+For information about configuring NFS, see the [NFS documentation page](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
## Troubleshooting
See the [troubleshooting documentation](troubleshooting.md).
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index efeed3e9ffd..04cb9fa4769 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -1,10 +1,15 @@
+---
+reading_time: true
+---
+
# Reference architecture: up to 3,000 users
This page describes GitLab reference architecture for up to 3,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-NOTE: **Note:** The 3,000-user reference architecture documented below is
+NOTE: **Note:**
+The 3,000-user reference architecture documented below is
designed to help your organization achieve a highly-available GitLab deployment.
If you do not have the expertise or need to maintain a highly-available
environment, you can have a simpler and less costly-to-operate environment by
@@ -14,66 +19,1749 @@ following the [2,000-user reference architecture](2k_users.md).
> - **High Availability:** True
> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS
-| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|----------------|
-| GitLab Rails ([1](#footnotes)) | 3 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | F8s v2 |
-| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | D4s v3 |
-| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| Object Storage ([4](#footnotes)) | - | - | - | - | - |
-| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|--------------------------------------------------------------|-------|---------------------------------|-----------------|-------------------------|----------------|
+| External load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Redis | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Consul + Sentinel | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Gitaly | 2 minimum | 4 vCPU, 15GB Memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| GitLab Rails | 3 | 8 vCPU, 7.2GB Memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
+| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+
+The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform on GCP. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or Node counts accordingly. For more information, a
+[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects such as LFS, Uploads, Artifacts, etc, an [object storage service](#configure-the-object-storage)
+is recommended over NFS where possible, due to better performance and availability.
+Since this doesn't require a node to be set up, it's marked as not applicable (n/a)
+in the table above.
+
+## Setup components
+
+To set up GitLab and its components to accommodate up to 3,000 users:
+
+1. [Configure the external load balancing node](#configure-the-external-load-balancer)
+ that will handle the load balancing of the two GitLab application services nodes.
+1. [Configure Redis](#configure-redis).
+1. [Configure Consul and Sentinel](#configure-consul-and-sentinel).
+1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
+1. [Configure PgBouncer](#configure-pgbouncer).
+1. [Configure the internal load balancing node](#configure-the-internal-load-balancer)
+1. [Configure Gitaly](#configure-gitaly),
+ which provides access to the Git repositories.
+1. [Configure Sidekiq](#configure-sidekiq).
+1. [Configure the main GitLab Rails application](#configure-gitlab-rails)
+ to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, Git
+ over HTTP/SSH).
+1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab environment.
+1. [Configure the Object Storage](#configure-the-object-storage)
+ used for shared data objects.
+1. [Configure NFS (Optional)](#configure-nfs-optional)
+ to have a shared disk storage service as an alternative to Gitaly and/or Object Storage (although
+ not recommended). NFS is required for GitLab Pages; you can skip this step if you're not using
+ that feature.
+
+We start with all servers on the same 10.6.0.0/16 private network range; they
+can connect to each other freely on those addresses.
+
+Here is a list and description of each machine and the assigned IP:
+
+- `10.6.0.10`: External Load Balancer
+- `10.6.0.61`: Redis Primary
+- `10.6.0.62`: Redis Replica 1
+- `10.6.0.63`: Redis Replica 2
+- `10.6.0.11`: Consul/Sentinel 1
+- `10.6.0.12`: Consul/Sentinel 2
+- `10.6.0.13`: Consul/Sentinel 3
+- `10.6.0.31`: PostgreSQL primary
+- `10.6.0.32`: PostgreSQL secondary 1
+- `10.6.0.33`: PostgreSQL secondary 2
+- `10.6.0.21`: PgBouncer 1
+- `10.6.0.22`: PgBouncer 2
+- `10.6.0.23`: PgBouncer 3
+- `10.6.0.20`: Internal Load Balancer
+- `10.6.0.51`: Gitaly 1
+- `10.6.0.52`: Gitaly 2
+- `10.6.0.71`: Sidekiq 1
+- `10.6.0.72`: Sidekiq 2
+- `10.6.0.73`: Sidekiq 3
+- `10.6.0.74`: Sidekiq 4
+- `10.6.0.41`: GitLab application 1
+- `10.6.0.42`: GitLab application 2
+- `10.6.0.43`: GitLab application 3
+- `10.6.0.81`: Prometheus
+
+## Configure the external load balancer
+
+NOTE: **Note:**
+This architecture has been tested and validated with [HAProxy](https://www.haproxy.org/)
+as the load balancer. Although other load balancers with similar feature sets
+could also be used, those load balancers have not been validated.
+
+In an active/active GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics on which load balancer to use
+or the exact configuration is beyond the scope of GitLab documentation. We hope
+that if you're managing multi-node systems like GitLab you have a load balancer of
+choice already. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix Net Scaler. This documentation will outline what ports and protocols
+you need to use with GitLab.
+
+The next question is how you will handle SSL in your environment.
+There are several different options:
+
+- [The application node terminates SSL](#application-node-terminates-ssl).
+- [The load balancer terminates SSL without backend SSL](#load-balancer-terminates-ssl-without-backend-ssl)
+ and communication is not secure between the load balancer and the application node.
+- [The load balancer terminates SSL with backend SSL](#load-balancer-terminates-ssl-with-backend-ssl)
+ and communication is *secure* between the load balancer and the application node.
+
+### Application node terminates SSL
+
+Configure your load balancer to pass connections on port 443 as `TCP` rather
+than `HTTP(S)` protocol. This will pass the connection to the application node's
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See the [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Load balancer terminates SSL without backend SSL
+
+Configure your load balancer to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer and GitLab will not be secure,
+there is some additional configuration needed. See the
+[NGINX proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
+
+### Load balancer terminates SSL with backend SSL
+
+Configure your load balancer(s) to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer(s) will be responsible for managing SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Ports
+
+The basic ports to be used are shown in the table below.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
+
+If you're using GitLab Pages with custom domain support you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](../pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+#### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname will require a new virtual IP address
+compared to the other GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Redis
+
+Using [Redis](https://redis.io/) in a scalable environment is possible using a **Primary** x **Replica**
+topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
+start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+In this section, you'll be guided through configuring an external Redis instance
+to be used with GitLab. The following IPs will be used as an example:
+
+- `10.6.0.61`: Redis Primary
+- `10.6.0.62`: Redis Replica 1
+- `10.6.0.63`: Redis Replica 2
+
+### Provide your own Redis instance
+
+Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+services support high availability, be sure it is **not** the Redis Cluster type.
+
+Redis version 5.0 or higher is required, as this is what ships with
+Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
+do not support an optional count argument to `SPOP`, which is now required for
+[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+
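+If you are providing your own instance, you can confirm the running version before
+pointing GitLab at it (a sketch assuming `redis-cli` is available on that node):
+
+```shell
+# redis_version must report 5.0 or higher.
+# Add -a '<password>' if the instance requires authentication.
+redis-cli -h <redis-host-or-ip> INFO server | grep redis_version
+```
+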
+Note the Redis node's IP address or hostname, port, and password (if required).
+These will be necessary when configuring the
+[GitLab application servers](#configure-gitlab-rails) later.
+
+### Standalone Redis using Omnibus GitLab
+
+This is the section where we install and set up the new Redis instances.
+
+The requirements for a Redis setup are the following:
+
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks
+ ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
+ using a firewall.
+
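+To help with the last requirement, a minimal host-firewall sketch (assuming
+`ufw` is in use and `10.6.0.0/24` is your internal network range) could be:
+
+```shell
+# Allow Redis and Sentinel traffic from the internal network only.
+sudo ufw allow from 10.6.0.0/24 to any port 6379 proto tcp
+sudo ufw allow from 10.6.0.0/24 to any port 26379 proto tcp
+```
+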
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configuring the primary Redis instance
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community or Enterprise editions) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'redis-password-goes-here'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ redis_exporter['flags'] = {
+ 'redis.addr' => 'redis://10.6.0.61:6379',
+ 'redis.password' => 'redis-password-goes-here',
+ }
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+You can list the current Redis primary and replica status via:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h <host> -a 'redis-password-goes-here' info replication
+```
+
+Show running GitLab services via:
+
+```shell
+gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30043) 76863s; run: log: (pid 29691) 76892s
+run: logrotate: (pid 31152) 3070s; run: log: (pid 29595) 76908s
+run: node-exporter: (pid 30064) 76862s; run: log: (pid 29624) 76904s
+run: redis: (pid 30070) 76861s; run: log: (pid 29573) 76914s
+run: redis-exporter: (pid 30075) 76861s; run: log: (pid 29674) 76896s
+```
+
+#### Configuring the replica Redis instances
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community or Enterprise editions) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.62'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ redis_exporter['flags'] = {
+ 'redis.addr' => 'redis://10.6.0.62:6379',
+ 'redis.password' => 'redis-password-goes-here',
+ }
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles, like Sentinel and Redis, as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-consul-and-sentinel), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
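+To confirm a replica has attached to the primary, you can run the same replication
+check from a replica node (a sketch using the example replica IP and password above):
+
+```shell
+# The role and master_link_status fields should show replication from the primary is up.
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.62 -a 'redis-password-goes-here' info replication
+```
+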
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Consul and Sentinel
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul/Sentinel 1
+- `10.6.0.12`: Consul/Sentinel 2
+- `10.6.0.13`: Consul/Sentinel 3
+
+To configure the Sentinel:
+
+1. SSH into the server that will host Consul/Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+ - Make sure you select the correct Omnibus package, with the same version
+ as the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role', 'consul_role']
+
+ # Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel
+ sentinel['bind'] = '10.6.0.11'
+
+ # Port that Sentinel listens on, uncomment to change to non default. Defaults
+ # to `26379`.
+ # sentinel['port'] = 26379
+
+ ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+ ## Value must NOT be greater than the number of Sentinels.
+ ##
+ ## The quorum can be used to tune Sentinel in two ways:
+ ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+ ## we deploy, we are basically making Sentinel more sensitive to primary failures,
+ ## triggering a failover as soon as even just a minority of Sentinels is no longer
+ ## able to talk with the primary.
+ ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+ ## making Sentinel able to fail over only when there is a very large number (larger
+ ## than the majority) of well-connected Sentinels which agree about the primary being down.
+ sentinel['quorum'] = 2
+
+ ## Consider an unresponsive server down after x milliseconds.
+ # sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## - The time needed for a replica replicating to a wrong primary according
+ ## to a Sentinel current configuration, to be forced to replicate
+ ## with the right primary, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## - The time needed to cancel a failover that is already in progress but
+ ## did not produce any configuration change (REPLICAOF NO ONE yet not
+ ## acknowledged by the promoted replica).
+ ##
+ ## - The maximum time a failover in progress waits for all the replicas to be
+ ## reconfigured as replicas of the new primary. However, even after this time
+ ## the replicas will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ # sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul/Sentinel nodes, and
+ make sure you set up the correct IPs.
+
+NOTE: **Note:**
+A Consul leader will be elected when the provisioning of the third Consul server is completed.
+Viewing the Consul logs with `sudo gitlab-ctl tail consul` will display
+`...[INFO] consul: New leader elected: ...`
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+run: sentinel: (pid 30098) 76832s; run: log: (pid 29704) 76850s
+```
+
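+You can also ask a Sentinel node directly for the state of the monitored primary
+(a sketch assuming the example Sentinel IP and master name configured above):
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.11 -p 26379 SENTINEL master gitlab-redis
+```
+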
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PostgreSQL
+
+In this section, you'll be guided through configuring an external PostgreSQL database
+to be used with GitLab.
+
+### Provide your own PostgreSQL instance
+
+If you're hosting GitLab on a cloud provider, you can optionally use a
+managed service for PostgreSQL. For example, AWS offers a managed Relational
+Database Service (RDS) that runs PostgreSQL.
+
+If you use a cloud-managed service, or provide your own PostgreSQL:
+
+1. Set up PostgreSQL according to the
+ [database requirements document](../../install/requirements.md#database).
+1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
+ needs privileges to create the `gitlabhq_production` database.
+1. Configure the GitLab application servers with the appropriate details.
+ This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
+
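+For the second step above, a minimal sketch of creating that user on a self-managed
+PostgreSQL server (assuming `psql` access as the `postgres` superuser; the password
+is a placeholder) could be:
+
+```shell
+# Create a gitlab role that can log in and create the gitlabhq_production database.
+sudo -u postgres psql -c "CREATE ROLE gitlab LOGIN PASSWORD 'your-password-here' CREATEDB;"
+```
+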
+### Standalone PostgreSQL using Omnibus GitLab
+
+The following IPs will be used as an example:
+
+- `10.6.0.31`: PostgreSQL primary
+- `10.6.0.32`: PostgreSQL secondary 1
+- `10.6.0.33`: PostgreSQL secondary 2
+
+First, make sure to [install](https://about.gitlab.com/install/)
+the Linux GitLab package **on each node**. Following the steps,
+install the necessary dependencies from step 1, and add the
+GitLab package repository from step 2. When installing GitLab
+in the second step, do not supply the `EXTERNAL_URL` value.
+
+#### PostgreSQL primary node
+
+1. SSH into the PostgreSQL primary node.
+1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
+ username of `gitlab` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<postgresql_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
+
+1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
+ username of `pgbouncer` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<pgbouncer_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ ```
+
+1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
+ username of `gitlab-consul` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<consul_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab-consul
+ ```
+
+1. On the primary database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
+
+ ```ruby
+ # Disable all components except PostgreSQL and Repmgr and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+ postgres_exporter['dbname'] = 'gitlabhq_production'
+ postgres_exporter['password'] = '<postgresql_password_hash>'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ #
+ # END user configuration
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. You can list the current status of the PostgreSQL primary and secondary nodes via:
+
+ ```shell
+ sudo /opt/gitlab/bin/gitlab-ctl repmgr cluster show
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30593) 77133s; run: log: (pid 29912) 77156s
+ run: logrotate: (pid 23449) 3341s; run: log: (pid 29794) 77175s
+ run: node-exporter: (pid 30613) 77133s; run: log: (pid 29824) 77170s
+ run: postgres-exporter: (pid 30620) 77132s; run: log: (pid 29894) 77163s
+ run: postgresql: (pid 30630) 77132s; run: log: (pid 29618) 77181s
+ run: repmgrd: (pid 30639) 77132s; run: log: (pid 29985) 77150s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL secondary nodes
+
+1. On both the secondary nodes, add the same configuration specified above for the primary node
+ with an additional setting that will inform `gitlab-ctl` that they are standby nodes initially
+ and there's no need to attempt to register them as a primary node:
+
+ ```ruby
+ # Disable all components except PostgreSQL and Repmgr and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # Specify if a node should attempt to be primary on initialization.
+ repmgr['master_on_initialization'] = false
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+ postgres_exporter['dbname'] = 'gitlabhq_production'
+ postgres_exporter['password'] = '<postgresql_password_hash>'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ # END user configuration
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL post-configuration
+
+SSH into the **primary node**:
+
+1. Open a database prompt:
+
+ ```shell
+ gitlab-psql -d gitlabhq_production
+ ```
+
+1. Enable the `pg_trgm` extension:
+
+ ```shell
+ CREATE EXTENSION pg_trgm;
+ ```
+
+1. Exit the database prompt by typing `\q` and Enter.
+
+1. Verify the cluster is initialized with one node:
+
+ ```shell
+ gitlab-ctl repmgr cluster show
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
+
+1. Note down the hostname or IP address in the connection string: `host=HOSTNAME`. We will
+   refer to the hostname in the next section as `<primary_node_name>`. If the value
+   is not an IP address, it must be a name that resolves on the other nodes (via DNS or
+   `/etc/hosts`).
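+
+   For example, if `<primary_node_name>` doesn't already resolve via DNS, a
+   hypothetical `/etc/hosts` entry on the other database nodes could map it to
+   the primary's IP (using the example PostgreSQL primary IP from this guide):
+
+   ```plaintext
+   10.6.0.31 <primary_node_name>
+   ```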
+
+SSH into the **secondary node**:
+
+1. Set up the repmgr standby:
+
+ ```shell
+ gitlab-ctl repmgr standby setup <primary_node_name>
+ ```
+
+   Note that this removes any existing PostgreSQL data on the node. The command
+   pauses before proceeding so that you can cancel it; rerun it with the `-w`
+   option to skip the wait.
+
+ The output should be similar to the following:
+
+ ```console
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
+
+Before moving on, make sure the databases are configured correctly. Run the
+following command on the **primary** node to verify that replication is working
+properly and the secondary nodes appear in the cluster:
+
+```shell
+gitlab-ctl repmgr cluster show
+```
+
+The output should be similar to the following:
+
+```plaintext
+Role | Name | Upstream | Connection String
+----------+---------|-----------|------------------------------------------------
+* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+```
+
+If the `Role` column for any node reads `FAILED`, check the
+[Troubleshooting section](troubleshooting.md) before proceeding.
+
+Also, check that the `repmgr-check-master` command works successfully on each node:
+
+```shell
+su - gitlab-consul
+gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
+```
+
+This command relies on exit codes to tell Consul whether a particular node is a master
+or secondary. The most important thing here is that this command does not produce errors.
+If there are errors it's most likely due to incorrect `gitlab-consul` database user permissions.
+Check the [Troubleshooting section](troubleshooting.md) before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PgBouncer
+
+Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
+The following IPs will be used as an example:
+
+- `10.6.0.21`: PgBouncer 1
+- `10.6.0.22`: PgBouncer 2
+- `10.6.0.23`: PgBouncer 3
+
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`, and replace
+ `<consul_password_hash>` and `<pgbouncer_password_hash>` with the
+ password hashes you [set up previously](#postgresql-primary-node):
+
+ ```ruby
+ # Disable all components except Pgbouncer and Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure PgBouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: '<consul_password_hash>'
+ },
+ 'pgbouncer': {
+ password: '<pgbouncer_password_hash>'
+ }
+ }
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+ consul['enable'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+1. Create a `.pgpass` file so Consul is able to
+ reload PgBouncer. Enter the PgBouncer password twice when asked:
+
+ ```shell
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+1. Ensure each node is talking to the current master:
+
+ ```shell
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+ If there is an error `psql: ERROR: Auth failed` after typing in the
+ password, ensure you previously generated the MD5 password hashes with the correct
+ format. The correct format is to concatenate the password and the username:
+ `PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
+ needed to generate an MD5 password hash for the `pgbouncer` user.
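+
+   If in doubt, you can sanity-check the concatenation yourself. The following is
+   an illustration only, assuming the example password above and a system with
+   `md5sum` available; the digest should match the hash produced by
+   `sudo gitlab-ctl pg-password-md5 pgbouncer` for the same password:
+
+   ```shell
+   echo -n 'Sup3rS3cr3tpgbouncer' | md5sum
+   ```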
+
+1. Once the console prompt is available, run the following queries:
+
+ ```shell
+ show databases ; show clients ;
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 31530) 77150s; run: log: (pid 31106) 77182s
+ run: logrotate: (pid 32613) 3357s; run: log: (pid 30107) 77500s
+ run: node-exporter: (pid 31550) 77149s; run: log: (pid 30138) 77493s
+ run: pgbouncer: (pid 32033) 75593s; run: log: (pid 31117) 77175s
+ run: pgbouncer-exporter: (pid 31558) 77148s; run: log: (pid 31498) 77156s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the internal load balancer
+
+If you're running more than one PgBouncer node as recommended, you'll need to set
+up a TCP internal load balancer to route connections to them correctly.
+
+The following IP will be used as an example:
+
+- `10.6.0.20`: Internal Load Balancer
+
+Here's how you could do it with [HAProxy](https://www.haproxy.org/):
+
+```plaintext
+global
+ log /dev/log local0
+ log localhost local1 notice
+ log stdout format raw local0
+
+defaults
+ log global
+ default-server inter 10s fall 3 rise 2
+ balance leastconn
+
+frontend internal-pgbouncer-tcp-in
+ bind *:6432
+ mode tcp
+ option tcplog
+
+ default_backend pgbouncer
+
+backend pgbouncer
+ mode tcp
+ option tcp-check
+
+ server pgbouncer1 10.6.0.21:6432 check
+ server pgbouncer2 10.6.0.22:6432 check
+ server pgbouncer3 10.6.0.23:6432 check
+```
+
+Refer to your preferred Load Balancer's documentation for further guidance.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Gitaly
+
+Deploying Gitaly on its own server can benefit GitLab installations that are
+larger than a single machine.
+
+The Gitaly node requirements are dependent on customer data, specifically the number of
+projects and their repository sizes. Two nodes are recommended as an absolute minimum.
+Each Gitaly node should store no more than 5TB of data and have the number of
+[`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) set to 20% of available CPUs.
+Additional nodes should be considered in conjunction with a review of expected
+data size and spread based on the recommendations above.
+
+It is also strongly recommended that all Gitaly nodes be set up with SSD disks with
+a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for writes,
+as Gitaly has heavy I/O. These IOPS values are only a recommended starting point;
+over time they may need to be adjusted higher or lower depending on the scale of your
+environment's workload. If you're running the environment on a Cloud provider, you may
+need to refer to their documentation on how to configure IOPS correctly.
+
+Some things to note:
+
+- The GitLab Rails application shards repositories into [repository storages](../repository_storage_paths.md).
+- A Gitaly server can host one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+ traffic is unencrypted by default. The use of a firewall is highly recommended
+ to restrict access to the Gitaly server. Another option is to
+ [use TLS](#gitaly-tls-support).
+
+TIP: **Tip:**
+For more information about Gitaly's history and network architecture see the
+[standalone Gitaly documentation](../gitaly/index.md).
+
+NOTE: **Note:**
+The token referred to throughout the Gitaly documentation is just an arbitrary
+password selected by the administrator. It is unrelated to tokens created for
+the GitLab API or other similar web API tokens.
+
+Below we describe how to configure two Gitaly servers, with IPs and
+domain names:
+
+- `10.6.0.51`: Gitaly 1 (`gitaly1.internal`)
+- `10.6.0.52`: Gitaly 2 (`gitaly2.internal`)
+
+We assume the secret token is `gitalysecret` and that
+your GitLab installation has three repository storages:
+
+- `default` on Gitaly 1
+- `storage1` on Gitaly 1
+- `storage2` on Gitaly 2
+
+On each node:
+
+1. [Download/Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **without** providing the `EXTERNAL_URL` value.
+1. Edit `/etc/gitlab/gitlab.rb` to configure storage paths, enable
+ the network listener and configure the token:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the GitLab Rails application setup
+   gitaly['auth_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+ gitlab_exporter['enable'] = false
+
+   # If you run a separate monitoring node you can disable these services
+ alertmanager['enable'] = false
+ prometheus['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+ # Comment out following line if you only want to support TLS connections
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ ```
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+ 1. On `gitaly1.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => {
+ 'path' => '/var/opt/gitlab/git-data'
+ },
+ 'storage1' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ 1. On `gitaly2.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'storage2' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Confirm that Gitaly can perform callbacks to the internal API:
+
+ ```shell
+ sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30339) 77006s; run: log: (pid 29878) 77020s
+ run: gitaly: (pid 30351) 77005s; run: log: (pid 29660) 77040s
+ run: logrotate: (pid 7760) 3213s; run: log: (pid 29782) 77032s
+ run: node-exporter: (pid 30378) 77004s; run: log: (pid 29812) 77026s
+ ```
+
+### Gitaly TLS support
+
+Gitaly supports TLS encryption. To communicate
+with a Gitaly instance that listens for secure connections, use the `tls://` URL
+scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
+
+You will need to bring your own certificates as this isn't provided automatically.
+The certificate, or its certificate authority, must be installed on all Gitaly
+nodes (including the Gitaly node using the certificate) and on all client nodes
+that communicate with it following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
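+
+As an illustration only, the following sketch generates such a self-signed
+certificate with OpenSSL (assuming OpenSSL 1.1.1 or later for the `-addext`
+flag, and the example `gitaly1.internal` hostname and `10.6.0.51` IP from this
+guide):
+
+```shell
+# Illustrative only: the SAN covers both the hostname and the IP address
+# used to reach the Gitaly server.
+openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
+  -keyout key.pem -out cert.pem \
+  -subj "/CN=gitaly1.internal" \
+  -addext "subjectAltName=DNS:gitaly1.internal,IP:10.6.0.51"
+```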
+
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
+unencrypted listening address `listen_addr` and an encrypted listening
+address `tls_listen_addr` at the same time. This allows you to do a
+gradual transition from unencrypted to encrypted traffic, if necessary.
+
+To configure Gitaly with TLS:
+
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+   sudo chmod 644 /etc/gitlab/ssl/key.pem /etc/gitlab/ssl/cert.pem
+ ```
+
+1. Copy the cert to `/etc/gitlab/trusted-certs` so Gitaly will trust the cert when
+ calling into itself:
+
+ ```shell
+ sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Delete `gitaly['listen_addr']` to allow only encrypted connections.
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Sidekiq
+
+Sidekiq requires a connection to the Redis, PostgreSQL, and Gitaly instances.
+The following IPs will be used as an example:
+
+- `10.6.0.71`: Sidekiq 1
+- `10.6.0.72`: Sidekiq 2
+- `10.6.0.73`: Sidekiq 3
+- `10.6.0.74`: Sidekiq 4
+
+To configure the Sidekiq nodes, on each one:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+you want using steps 1 and 2 from the GitLab downloads page.
+**Do not complete any other steps on the download page.**
+1. Open `/etc/gitlab/gitlab.rb` with your editor and add the following:
+
+ ```ruby
+ ########################################
+ ##### Services Disabled ###
+ ########################################
+
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ ########################################
+ #### Redis ###
+ ########################################
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the master node.
+ redis['master_password'] = '<redis_primary_password>'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.6.0.11', 'port' => 26379},
+ {'host' => '10.6.0.12', 'port' => 26379},
+ {'host' => '10.6.0.13', 'port' => 26379},
+ ]
+
+ #######################################
+ ### Gitaly ###
+ #######################################
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+
+ #######################################
+ ### Postgres ###
+ #######################################
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+
+ #######################################
+ ### Sidekiq configuration ###
+ #######################################
+ sidekiq['listen_address'] = "0.0.0.0"
+
+ #######################################
+ ### Monitoring configuration ###
+ #######################################
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Rails Status for prometheus
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30114) 77353s; run: log: (pid 29756) 77367s
+ run: logrotate: (pid 9898) 3561s; run: log: (pid 29653) 77380s
+ run: node-exporter: (pid 30134) 77353s; run: log: (pid 29706) 77372s
+ run: sidekiq: (pid 30142) 77351s; run: log: (pid 29638) 77386s
+ ```
+
+TIP: **Tip:**
+You can also run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure GitLab Rails
+
+NOTE: **Note:**
+In our architectures we run each GitLab Rails node using the Puma webserver,
+with its number of workers set to 90% of available CPUs and four threads per worker. For
+nodes that run Rails alongside other components, the worker value should be reduced
+accordingly; we've found that 50% achieves a good balance, but this is dependent
+on workload.
+
+This section describes how to configure the GitLab application (Rails) component.
+On each node perform the following:
+
+1. If you're [using NFS](#configure-nfs-optional):
+
+ 1. If necessary, install the NFS client utility packages using the following
+ commands:
+
+ ```shell
+ # Ubuntu/Debian
+ apt-get install nfs-common
+
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
+
+ 1. Specify the necessary NFS mounts in `/etc/fstab`.
+ The exact contents of `/etc/fstab` will depend on how you chose
+ to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ for examples and the various options.
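+
+      As an illustration only, a hypothetical `/etc/fstab` entry (assuming an
+      NFS server at `10.6.0.30` exporting `/gitlab-data`, which is not part of
+      this reference architecture) might look like the following; use the mount
+      options recommended in the NFS documentation linked above:
+
+      ```plaintext
+      10.6.0.30:/gitlab-data /var/opt/gitlab/git-data nfs4 defaults,hard,vers=4.1,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+      ```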
+
+ 1. Create the shared directories. These may be different depending on your NFS
+ mount locations.
+
+ ```shell
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
+
+1. Download/install Omnibus GitLab using **steps 1 and 2** from
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
+ steps on the download page.
+1. Create/edit `/etc/gitlab/gitlab.rb` and use the following configuration.
+ To maintain uniformity of links across nodes, the `external_url`
+ on the application server should point to the external URL that users will use
+ to access GitLab. This would be the URL of the [external load balancer](#configure-the-external-load-balancer)
+ which will route traffic to the GitLab application server:
+
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the Gitaly setup
+   gitlab_rails['gitaly_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+
+ ## Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ gitaly['enable'] = false
+ nginx['enable'] = true
+ sidekiq['enable'] = false
+
+ ## PostgreSQL connection details
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['auto_migrate'] = false
+
+ ## Redis connection details
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the Redis primary node.
+ redis['master_password'] = '<redis_primary_password>'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.6.0.11', 'port' => 26379},
+ {'host' => '10.6.0.12', 'port' => 26379},
+ {'host' => '10.6.0.13', 'port' => 26379}
+ ]
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters used for monitoring will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
+ sidekiq['listen_address'] = "0.0.0.0"
+ puma['listen'] = '0.0.0.0'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Add the monitoring node's IP address to the monitoring whitelist and allow it to
+ # scrape the NGINX metrics
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
+ nginx['status']['options']['allow'] = ['10.6.0.81/32', '127.0.0.0/8']
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+
+ ## Uncomment and edit the following options if you have set up NFS
+ ##
+ ## Prevent GitLab from starting if NFS data mounts are not available
+ ##
+ #high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+ ##
+ ## Ensure UIDs and GIDs match between servers for permissions via NFS
+ ##
+ #user['uid'] = 9000
+ #user['gid'] = 9000
+ #web_server['uid'] = 9001
+ #web_server['gid'] = 9001
+ #registry['uid'] = 9002
+ #registry['gid'] = 9002
+ ```
+
+1. If you're using [Gitaly with TLS support](#gitaly-tls-support), make sure the
+ `git_data_dirs` entry is configured with `tls` instead of `tcp`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
+ ```
+
+ 1. Copy the cert into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Run `sudo gitlab-rake gitlab:gitaly:check` to confirm the node can connect to Gitaly.
+1. Tail the logs to see the requests:
+
+ ```shell
+ sudo gitlab-ctl tail gitaly
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 4890) 8647s; run: log: (pid 29962) 79128s
+ run: gitlab-exporter: (pid 4902) 8647s; run: log: (pid 29913) 79134s
+ run: gitlab-workhorse: (pid 4904) 8646s; run: log: (pid 29713) 79155s
+ run: logrotate: (pid 12425) 1446s; run: log: (pid 29798) 79146s
+ run: nginx: (pid 4925) 8646s; run: log: (pid 29726) 79152s
+ run: node-exporter: (pid 4931) 8645s; run: log: (pid 29855) 79140s
+ run: puma: (pid 4936) 8645s; run: log: (pid 29656) 79161s
+ ```
+
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
+above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+certificates are not present, NGINX will fail to start. See the
+[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for more information.
+
+### GitLab Rails post-configuration
+
+1. Ensure that all migrations ran:
+
+ ```shell
+ gitlab-rake gitlab:db:configure
+ ```
+
+ NOTE: **Note:**
+ If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+ PostgreSQL it may be that your PgBouncer node's IP address is missing from
+ PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
+ [PgBouncer error `ERROR: pgbouncer cannot connect to server`](troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+ in the Troubleshooting section before proceeding.
+
+1. [Configure fast lookup of authorized SSH keys in the database](../operations/fast_ssh_key_lookup.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Prometheus
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node
+running [Prometheus](../monitoring/prometheus/index.md) and
+[Grafana](../monitoring/performance/grafana_configuration.md):
+
+1. SSH into the Monitoring node.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = '<grafana_password>'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. In the GitLab UI, under `admin/application_settings/metrics_and_profiling` > **Metrics - Grafana**, set the Grafana URL to
+   `http[s]://<MONITOR NODE>/-/grafana`.
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 31637) 17337s; run: log: (pid 29748) 78432s
+ run: grafana: (pid 31644) 17337s; run: log: (pid 29719) 78438s
+ run: logrotate: (pid 31809) 2936s; run: log: (pid 29581) 78462s
+ run: nginx: (pid 31665) 17335s; run: log: (pid 29556) 78468s
+ run: prometheus: (pid 31672) 17335s; run: log: (pid 29633) 78456s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure the object storage
+
+GitLab supports using an object storage service for holding numerous types of data.
+It's recommended over [NFS](#configure-nfs-optional) and in general it's better
+in larger setups as object storage is typically much more performant, reliable,
+and scalable.
+
+Object storage options that GitLab has tested, or is aware of customers using, include:
+
+- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/), [Google cloud storage](https://cloud.google.com/storage).
+- On-premises hardware and appliances from various storage vendors.
+- MinIO. There is [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+
+For configuring GitLab to use Object Storage refer to the following guides
+based on what features you intend to use:
+
+1. Configure [object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. Configure [object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. Configure [object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. Configure [object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. Configure [object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
+1. Configure [object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
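+
+As a flavor of what these guides involve, here is an illustrative sketch of the
+Omnibus settings for storing LFS objects in AWS S3 (the bucket name, region, and
+credentials are placeholders; refer to the LFS guide above for the authoritative
+and complete set of options):
+
+```ruby
+# In /etc/gitlab/gitlab.rb -- illustrative placeholders only
+gitlab_rails['lfs_object_store_enabled'] = true
+gitlab_rails['lfs_object_store_remote_directory'] = '<lfs-objects-bucket>'
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'AWS',
+  'region' => '<region>',
+  'aws_access_key_id' => '<access_key>',
+  'aws_secret_access_key' => '<secret_key>'
+}
+```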
+
+Using separate buckets for each data type is the recommended approach for GitLab.
+
+A limitation of our configuration is that each use of object storage is separately configured.
+[We have an issue for improving this](https://gitlab.com/gitlab-org/gitlab/-/issues/23345),
+and one improvement it might bring is the ability to easily use one bucket with separate folders.
+
+There is at least one specific issue with using the same bucket:
+when GitLab is deployed with the Helm chart restore from backup
+[will not properly function](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer)
+unless separate buckets are used.
+
+One risk of using a single bucket is that if your organization later decides to
+migrate GitLab to the Helm deployment, GitLab would still run, but the problem with
+backups might not be noticed until the organization has a critical need for the backups to
+work.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
+are recommended over NFS wherever possible for improved performance. If you intend
+to use GitLab Pages, this currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+See how to [configure NFS](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Troubleshooting
+
+See the [troubleshooting documentation](troubleshooting.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index dd94f5470b4..2540fe68dac 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -47,7 +47,7 @@ For a full list of reference architectures, see
For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+ [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index 604572b083e..0b4114bca6e 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -1,73 +1,1767 @@
+---
+reading_time: true
+---
+
# Reference architecture: up to 5,000 users
This page describes GitLab reference architecture for up to 5,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
+NOTE: **Note:**
+The 5,000-user reference architecture documented below is
+designed to help your organization achieve a highly-available GitLab deployment.
+If you do not have the expertise or need to maintain a highly-available
+environment, you can have a simpler and less costly-to-operate environment by
+following the [2,000-user reference architecture](2k_users.md).
+
> - **Supported users (approximate):** 5,000
> - **High Availability:** True
> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS
-| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS | Azure |
-|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|----------------|
-| GitLab Rails ([1](#footnotes)) | 3 | 16 vCPU, 14.4GB Memory | `n1-highcpu-16` | `c5.4xlarge` | F16s v2 |
-| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 8 vCPU, 30GB Memory | `n1-standard-8` | `m5.2xlarge` | D8s v3 |
-| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | D2s v3 |
-| Object Storage ([4](#footnotes)) | - | - | - | - | - |
-| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | F4s v2 |
-| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | F2s v2 |
-
-## Footnotes
-
-1. In our architectures we run each GitLab Rails node using the Puma webserver
- and have its number of workers set to 90% of available CPUs along with four threads. For
- nodes that are running Rails with other components the worker value should be reduced
- accordingly where we've found 50% achieves a good balance but this is dependent
- on workload.
-
-1. Gitaly node requirements are dependent on customer data, specifically the number of
- projects and their sizes. We recommend two nodes as an absolute minimum for HA environments
- and at least four nodes should be used when supporting 50,000 or more users.
- We also recommend that each Gitaly node should store no more than 5TB of data
- and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
- set to 20% of available CPUs. Additional nodes should be considered in conjunction
- with a review of expected data size and spread based on the recommendations above.
-
-1. Recommended Redis setup differs depending on the size of the architecture.
- For smaller architectures (less than 3,000 users) a single instance should suffice.
- For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
- classes and that Redis Sentinel is hosted alongside Consul.
- For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
- and another for the Queues and Shared State classes respectively. We also recommend
- that you run the Redis Sentinel clusters separately for each Redis Cluster.
-
-1. For data objects such as LFS, Uploads, Artifacts, etc. We recommend an [Object Storage service](../object_storage.md)
- over NFS where possible, due to better performance and availability.
-
-1. NFS can be used as an alternative for both repository data (replacing Gitaly) and
- object storage but this isn't typically recommended for performance reasons. Note however it is required for
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196).
-
-1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/)
- as the load balancer. Although other load balancers with similar feature sets
- could also be used, those load balancers have not been validated.
-
-1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over
- HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write
- as these components have heavy I/O. These IOPS values are recommended only as a starter
- as with time they may be adjusted higher or lower depending on the scale of your
- environment's workload. If you're running the environment on a Cloud provider
- you may need to refer to their documentation on how configure IOPS correctly.
-
-1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
- CPU platform on GCP. On different hardware you may find that adjustments, either lower
- or higher, are required for your CPU or Node counts accordingly. For more information, a
- [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
- [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|--------------------------------------------------------------|-------|---------------------------------|-----------------|-----------------------|----------------|
+| External load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Redis | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Consul + Sentinel | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Gitaly | 2 minimum | 8 vCPU, 30GB Memory | `n1-standard-8` | `m5.2xlarge` | `D8s v3` |
+| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| GitLab Rails | 3 | 16 vCPU, 14.4GB Memory | `n1-highcpu-16` | `c5.4xlarge` | `F16s v2` |
+| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Object Storage | n/a | n/a | n/a | n/a | n/a |
+| NFS Server (optional, not recommended) | 1 | 4 vCPU, 3.6GB Memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+
+The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
+CPU platform on GCP. On different hardware you may find that adjustments, either lower
+or higher, are required for your CPU or Node counts accordingly. For more information, a
+[Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found
+[here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
+
+For data objects such as LFS, Uploads, Artifacts, etc, an [object storage service](#configure-the-object-storage)
+is recommended over NFS where possible, due to better performance and availability.
+Since this doesn't require a node to be set up, it's marked as not applicable (n/a)
+in the table above.
+
+## Setup components
+
+To set up GitLab and its components to accommodate up to 5,000 users:
+
+1. [Configure the external load balancing node](#configure-the-external-load-balancer)
+   that will handle the load balancing of the three GitLab application services nodes.
+1. [Configure Redis](#configure-redis).
+1. [Configure Consul and Sentinel](#configure-consul-and-sentinel).
+1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
+1. [Configure PgBouncer](#configure-pgbouncer).
+1. [Configure the internal load balancing node](#configure-the-internal-load-balancer)
+1. [Configure Gitaly](#configure-gitaly),
+ which provides access to the Git repositories.
+1. [Configure Sidekiq](#configure-sidekiq).
+1. [Configure the main GitLab Rails application](#configure-gitlab-rails)
+ to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all frontend requests (UI, API, Git
+ over HTTP/SSH).
+1. [Configure Prometheus](#configure-prometheus) to monitor your GitLab environment.
+1. [Configure the Object Storage](#configure-the-object-storage)
+ used for shared data objects.
+1. [Configure NFS (Optional)](#configure-nfs-optional)
+   to have shared disk storage service as an alternative to Gitaly and/or Object Storage (although
+   not recommended). NFS is required for GitLab Pages; you can skip this step if you're not using
+   that feature.
+
+We start with all servers on the same 10.6.0.0/16 private network range, where they
+can connect to each other freely on those addresses.
+
+Here is a list and description of each machine and the assigned IP:
+
+- `10.6.0.10`: External Load Balancer
+- `10.6.0.61`: Redis Primary
+- `10.6.0.62`: Redis Replica 1
+- `10.6.0.63`: Redis Replica 2
+- `10.6.0.11`: Consul/Sentinel 1
+- `10.6.0.12`: Consul/Sentinel 2
+- `10.6.0.13`: Consul/Sentinel 3
+- `10.6.0.31`: PostgreSQL primary
+- `10.6.0.32`: PostgreSQL secondary 1
+- `10.6.0.33`: PostgreSQL secondary 2
+- `10.6.0.21`: PgBouncer 1
+- `10.6.0.22`: PgBouncer 2
+- `10.6.0.23`: PgBouncer 3
+- `10.6.0.20`: Internal Load Balancer
+- `10.6.0.51`: Gitaly 1
+- `10.6.0.52`: Gitaly 2
+- `10.6.0.71`: Sidekiq 1
+- `10.6.0.72`: Sidekiq 2
+- `10.6.0.73`: Sidekiq 3
+- `10.6.0.74`: Sidekiq 4
+- `10.6.0.41`: GitLab application 1
+- `10.6.0.42`: GitLab application 2
+- `10.6.0.43`: GitLab application 3
+- `10.6.0.81`: Prometheus
+
+## Configure the external load balancer
+
+NOTE: **Note:**
+This architecture has been tested and validated with [HAProxy](https://www.haproxy.org/)
+as the load balancer. Although other load balancers with similar feature sets
+could also be used, those load balancers have not been validated.
+
+In an active/active GitLab configuration, you will need a load balancer to route
+traffic to the application servers. The specifics on which load balancer to use
+or the exact configuration is beyond the scope of GitLab documentation. We hope
+that if you're managing multi-node systems like GitLab, you have a load balancer of
+choice already. Some examples include HAProxy (open-source), F5 Big-IP LTM,
+and Citrix Net Scaler. This documentation will outline what ports and protocols
+you need to use with GitLab.
+
+The next question is how you will handle SSL in your environment.
+There are several different options:
+
+- [The application node terminates SSL](#application-node-terminates-ssl).
+- [The load balancer terminates SSL without backend SSL](#load-balancer-terminates-ssl-without-backend-ssl)
+ and communication is not secure between the load balancer and the application node.
+- [The load balancer terminates SSL with backend SSL](#load-balancer-terminates-ssl-with-backend-ssl)
+ and communication is *secure* between the load balancer and the application node.
+
+### Application node terminates SSL
+
+Configure your load balancer to pass connections on port 443 as `TCP` rather
+than `HTTP(S)` protocol. This will pass the connection to the application node's
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
+
+See the [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Load balancer terminates SSL without backend SSL
+
+Configure your load balancer to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer will then be responsible for managing SSL certificates and
+terminating SSL.
+
+Since communication between the load balancer and GitLab will not be secure,
+there is some additional configuration needed. See the
+[NGINX proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+for details.
+
+### Load balancer terminates SSL with backend SSL
+
+Configure your load balancer(s) to use the `HTTP(S)` protocol rather than `TCP`.
+The load balancer(s) will be responsible for managing SSL certificates that
+end users will see.
+
+Traffic will also be secure between the load balancer(s) and NGINX in this
+scenario. There is no need to add configuration for proxied SSL since the
+connection will be secure all the way. However, configuration will need to be
+added to GitLab to configure SSL certificates. See
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
+
+### Ports
+
+The basic ports to be used are shown in the table below.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
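+
+For illustration only, a minimal HAProxy sketch of the plain TCP pass-through
+option above (SSL terminated on the application nodes), using the three GitLab
+Rails IPs from this guide, could look like the following. It is not a complete
+or validated configuration; port 80 would follow the same pattern, and the load
+balancer's own SSH daemon must listen on a port other than 22:
+
+```plaintext
+frontend gitlab-https
+    bind *:443
+    mode tcp
+    option tcplog
+    default_backend gitlab-rails-https
+
+frontend gitlab-ssh
+    bind *:22
+    mode tcp
+    option tcplog
+    default_backend gitlab-rails-ssh
+
+backend gitlab-rails-https
+    mode tcp
+    option tcp-check
+    server gitlab1 10.6.0.41:443 check
+    server gitlab2 10.6.0.42:443 check
+    server gitlab3 10.6.0.43:443 check
+
+backend gitlab-rails-ssh
+    mode tcp
+    option tcp-check
+    server gitlab1 10.6.0.41:22 check
+    server gitlab2 10.6.0.42:22 check
+    server gitlab3 10.6.0.43:22 check
+```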
+
+If you're using GitLab Pages with custom domain support, you will need some
+additional port configurations.
+GitLab Pages requires a separate virtual IP address. Configure DNS to point the
+`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
+[GitLab Pages documentation](../pages/index.md) for more information.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
+
+#### Alternate SSH Port
+
+Some organizations have policies against opening SSH port 22. In this case,
+it may be helpful to configure an alternate SSH hostname that allows users
+to use SSH on port 443. An alternate SSH hostname will require a new virtual IP address
+compared to the other GitLab HTTP configuration above.
+
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
+
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | -------- |
+| 443 | 22 | TCP |
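+
+Building on the HAProxy sketch shown earlier, a hypothetical frontend for the
+alternate SSH hostname (where `<altssh_virtual_ip>` is a placeholder for the
+dedicated virtual IP, not a value from this guide) could forward port 443 to the
+existing SSH backend:
+
+```plaintext
+frontend gitlab-altssh
+    # Bind only to the alternate-SSH virtual IP so normal HTTPS traffic is unaffected
+    bind <altssh_virtual_ip>:443
+    mode tcp
+    option tcplog
+    default_backend gitlab-rails-ssh
+```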
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Redis
+
+Using [Redis](https://redis.io/) in a scalable environment is possible using a **Primary** x **Replica**
+topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch it and automatically
+start the failover procedure.
+
+Redis requires authentication if used with Sentinel. See
+[Redis Security](https://redis.io/topics/security) documentation for more
+information. We recommend using a combination of a Redis password and tight
+firewall rules to secure your Redis service.
+You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
+before configuring Redis with GitLab to fully understand the topology and
+architecture.
+
+In this section, you'll be guided through configuring an external Redis instance
+to be used with GitLab. The following IPs will be used as an example:
+
+- `10.6.0.61`: Redis Primary
+- `10.6.0.62`: Redis Replica 1
+- `10.6.0.63`: Redis Replica 2
+
+### Provide your own Redis instance
+
+Managed Redis from cloud providers such as AWS ElastiCache will work. If these
+services support high availability, be sure it is **not** the Redis Cluster type.
+
+Redis version 5.0 or higher is required, as this is what ships with
+Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
+do not support an optional count argument to `SPOP`, which is now required for
+[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
+
+Note the Redis node's IP address or hostname, port, and password (if required).
+These will be necessary when configuring the
+[GitLab application servers](#configure-gitlab-rails) later.
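+
+As a minimal sketch, for a single non-clustered external Redis instance these details
+are later added to `/etc/gitlab/gitlab.rb` on the GitLab application servers, roughly
+as follows (the hostname and password below are placeholders):
+
+```ruby
+gitlab_rails['redis_host'] = 'redis.example.com'
+gitlab_rails['redis_port'] = 6379
+gitlab_rails['redis_password'] = 'redis-password-goes-here'
+```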
+
+### Standalone Redis using Omnibus GitLab
+
+This is the section where we install and set up the new Redis instances.
+
+The requirements for a Redis setup are the following:
+
+1. All Redis nodes must be able to talk to each other and accept incoming
+ connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
+ change the default ones).
+1. The server that hosts the GitLab application must be able to access the
+ Redis nodes.
+1. Protect the nodes from access from external networks
+ ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
+ using a firewall.
+
+NOTE: **Note:**
+Redis nodes (both primary and replica) will need the same password defined in
+`redis['password']`. At any time during a failover the Sentinels can
+reconfigure a node and change its status from primary to replica and vice versa.
+
+#### Configuring the primary Redis instance
+
+1. SSH into the **Primary** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise Edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'redis-password-goes-here'
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ redis_exporter['flags'] = {
+ 'redis.addr' => 'redis://10.6.0.61:6379',
+ 'redis.password' => 'redis-password-goes-here',
+ }
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+You can list the current Redis primary and replica status via:
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h <host> -a 'redis-password-goes-here' info replication
+```
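+
+On the primary, the replication section of the output should look broadly like the
+following (the values shown are illustrative only):
+
+```plaintext
+# Replication
+role:master
+connected_slaves:2
+slave0:ip=10.6.0.62,port=6379,state=online,offset=201,lag=0
+slave1:ip=10.6.0.63,port=6379,state=online,offset=201,lag=0
+```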
+
+Show running GitLab services via:
+
+```shell
+gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30043) 76863s; run: log: (pid 29691) 76892s
+run: logrotate: (pid 31152) 3070s; run: log: (pid 29595) 76908s
+run: node-exporter: (pid 30064) 76862s; run: log: (pid 29624) 76904s
+run: redis: (pid 30070) 76861s; run: log: (pid 29573) 76914s
+run: redis-exporter: (pid 30075) 76861s; run: log: (pid 29674) 76896s
+```
+
+#### Configuring the replica Redis instances
+
+1. SSH into the **replica** Redis server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     and type (Community or Enterprise Edition) as your current install.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ # Specify server role as 'redis_replica_role'
+ roles ['redis_replica_role']
+
+   # IP address pointing to a local IP that the other machines can reach.
+   # You can also set bind to '0.0.0.0' which listens on all interfaces.
+ # If you really need to bind to an external accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.6.0.62'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ redis_exporter['flags'] = {
+ 'redis.addr' => 'redis://10.6.0.62:6379',
+ 'redis.password' => 'redis-password-goes-here',
+ }
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other replica nodes, and
+ make sure to set up the IPs correctly.
+
+NOTE: **Note:**
+You can specify multiple roles like sentinel and Redis as:
+`roles ['redis_sentinel_role', 'redis_master_role']`.
+Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+
+These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
+a failover, as the nodes will be managed by the [Sentinels](#configure-consul-and-sentinel), and even after a
+`gitlab-ctl reconfigure`, they will get their configuration restored by
+the same Sentinels.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Consul and Sentinel
+
+NOTE: **Note:**
+If you are using an external Redis Sentinel instance, be sure
+to exclude the `requirepass` parameter from the Sentinel
+configuration. This parameter will cause clients to report `NOAUTH
+Authentication required.`. [Redis Sentinel 3.2.x does not support
+password authentication](https://github.com/antirez/redis/issues/3279).
+
+Now that the Redis servers are all set up, let's configure the Sentinel
+servers. The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul/Sentinel 1
+- `10.6.0.12`: Consul/Sentinel 2
+- `10.6.0.13`: Consul/Sentinel 3
+
+To configure the Sentinel:
+
+1. SSH into the server that will host Consul/Sentinel.
+1. [Download/install](https://about.gitlab.com/install/) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+   - Make sure you select the correct Omnibus package, with the same version
+     as the GitLab application is running.
+ - Do not complete any other steps on the download page.
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ roles ['redis_sentinel_role', 'consul_role']
+
+ # Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ # The same password for Redis authentication you set up for the primary node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ # The IP of the primary Redis node.
+ redis['master_ip'] = '10.6.0.61'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Port of primary Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel
+ sentinel['bind'] = '10.6.0.11'
+
+ # Port that Sentinel listens on, uncomment to change to non default. Defaults
+ # to `26379`.
+ # sentinel['port'] = 26379
+
+   ## Quorum must reflect the amount of voting sentinels it takes to start a failover.
+   ## Value must NOT be greater than the amount of sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
+   ##    able to talk with the primary.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to failover only when there are a very large number (larger
+   ##    than majority) of well connected Sentinels which agree about the primary being down.
+ sentinel['quorum'] = 2
+
+ ## Consider unresponsive server down after x amount of ms.
+ # sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same primary by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## - The time needed for a replica replicating to a wrong primary according
+ ## to a Sentinel current configuration, to be forced to replicate
+ ## with the right primary, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+   ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (REPLICAOF NO ONE yet not
+ ## acknowledged by the promoted replica).
+ ##
+   ## - The maximum time a failover in progress waits for all the replicas to be
+   ##   reconfigured as replicas of the new primary. However, even after this time
+   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ # sentinel['failover_timeout'] = 60000
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul/Sentinel nodes, and
+ make sure you set up the correct IPs.
+
+NOTE: **Note:**
+A Consul leader is elected when the provisioning of the third Consul server is
+complete. Viewing the Consul logs with `sudo gitlab-ctl tail consul` will display
+`...[INFO] consul: New leader elected: ...`.
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
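+
+The output should look something like the following, with one `server` entry per
+Consul/Sentinel node (node names and build numbers are illustrative, and the exact
+columns can vary by Consul version):
+
+```plaintext
+Node               Address         Status  Type    Build  Protocol  DC
+consul-sentinel-1  10.6.0.11:8301  alive   server  1.x.x  2         gitlab_consul
+consul-sentinel-2  10.6.0.12:8301  alive   server  1.x.x  2         gitlab_consul
+consul-sentinel-3  10.6.0.13:8301  alive   server  1.x.x  2         gitlab_consul
+```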
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+run: sentinel: (pid 30098) 76832s; run: log: (pid 29704) 76850s
+```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PostgreSQL
+
+In this section, you'll be guided through configuring an external PostgreSQL database
+to be used with GitLab.
+
+### Provide your own PostgreSQL instance
+
+If you're hosting GitLab on a cloud provider, you can optionally use a
+managed service for PostgreSQL. For example, AWS offers a managed Relational
+Database Service (RDS) that runs PostgreSQL.
+
+If you use a cloud-managed service, or provide your own PostgreSQL:
+
+1. Set up PostgreSQL according to the
+ [database requirements document](../../install/requirements.md#database).
+1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
+   needs privileges to create the `gitlabhq_production` database (see the sketch
+   after this list).
+1. Configure the GitLab application servers with the appropriate details.
+ This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
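+
+As a minimal sketch of step 2 above, run by a PostgreSQL superuser (the password is a
+placeholder; your PostgreSQL service may have its own mechanism for creating users):
+
+```sql
+CREATE ROLE gitlab LOGIN PASSWORD '<your_password>' CREATEDB;
+```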
+
+### Standalone PostgreSQL using Omnibus GitLab
+
+The following IPs will be used as an example:
+
+- `10.6.0.31`: PostgreSQL primary
+- `10.6.0.32`: PostgreSQL secondary 1
+- `10.6.0.33`: PostgreSQL secondary 2
+
+First, make sure to [install](https://about.gitlab.com/install/)
+the Linux GitLab package **on each node**. Following the steps,
+install the necessary dependencies from step 1, and add the
+GitLab package repository from step 2. When installing GitLab
+in the second step, do not supply the `EXTERNAL_URL` value.
+
+#### PostgreSQL primary node
+
+1. SSH into the PostgreSQL primary node.
+1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
+ username of `gitlab` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<postgresql_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
+
+1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
+ username of `pgbouncer` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<pgbouncer_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ ```
+
+1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
+ username of `gitlab-consul` (recommended). The command will request a password
+ and confirmation. Use the value that is output by this command in the next
+ step as the value of `<consul_password_hash>`:
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 gitlab-consul
+ ```
+
+1. On the primary database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
+
+ ```ruby
+ # Disable all components except PostgreSQL and Repmgr and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+ postgres_exporter['dbname'] = 'gitlabhq_production'
+ postgres_exporter['password'] = '<postgresql_password_hash>'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ #
+ # END user configuration
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. You can list the status of the current PostgreSQL primary and secondary nodes via:
+
+ ```shell
+ sudo /opt/gitlab/bin/gitlab-ctl repmgr cluster show
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30593) 77133s; run: log: (pid 29912) 77156s
+ run: logrotate: (pid 23449) 3341s; run: log: (pid 29794) 77175s
+ run: node-exporter: (pid 30613) 77133s; run: log: (pid 29824) 77170s
+ run: postgres-exporter: (pid 30620) 77132s; run: log: (pid 29894) 77163s
+ run: postgresql: (pid 30630) 77132s; run: log: (pid 29618) 77181s
+ run: repmgrd: (pid 30639) 77132s; run: log: (pid 29985) 77150s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL secondary nodes
+
+1. On both the secondary nodes, add the same configuration specified above for the primary node,
+   with an additional setting (`repmgr['master_on_initialization'] = false`) that informs
+   `gitlab-ctl` that they are initially standby nodes and should not attempt to register
+   themselves as a primary node:
+
+ ```ruby
+ # Disable all components except PostgreSQL and Repmgr and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the Consul agent
+ consul['services'] = %w(postgresql)
+
+ # Specify if a node should attempt to be primary on initialization.
+ repmgr['master_on_initialization'] = false
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = '<postgresql_password_hash>'
+ # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
+ # This is used to prevent replication from using up all of the
+ # available database connections.
+ postgresql['max_wal_senders'] = 4
+ postgresql['max_replication_slots'] = 4
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+ postgres_exporter['dbname'] = 'gitlabhq_production'
+ postgres_exporter['password'] = '<postgresql_password_hash>'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ # END user configuration
+ ```
+
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+are supported and can be added if needed.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+#### PostgreSQL post-configuration
+
+SSH into the **primary node**:
+
+1. Open a database prompt:
+
+ ```shell
+ gitlab-psql -d gitlabhq_production
+ ```
+
+1. Enable the `pg_trgm` extension:
+
+ ```shell
+ CREATE EXTENSION pg_trgm;
+ ```
+
+1. Exit the database prompt by typing `\q` and Enter.
+
+1. Verify the cluster is initialized with one node:
+
+ ```shell
+ gitlab-ctl repmgr cluster show
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
+
+1. Note down the hostname or IP address in the connection string: `host=HOSTNAME`. We will
+ refer to the hostname in the next section as `<primary_node_name>`. If the value
+ is not an IP address, it will need to be a resolvable name (via DNS or
+   `/etc/hosts`).
+
+SSH into the **secondary node**:
+
+1. Set up the repmgr standby:
+
+ ```shell
+ gitlab-ctl repmgr standby setup <primary_node_name>
+ ```
+
+   Note that this removes the existing data on the node. The command waits for
+   30 seconds before proceeding; rerun it with the `-w` option to skip the wait.
+
+ The output should be similar to the following:
+
+ ```console
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
+
+Before moving on, make sure the databases are configured correctly. Run the
+following command on the **primary** node to verify that replication is working
+properly and the secondary nodes appear in the cluster:
+
+```shell
+gitlab-ctl repmgr cluster show
+```
+
+The output should be similar to the following:
+
+```plaintext
+Role | Name | Upstream | Connection String
+----------+---------|-----------|------------------------------------------------
+* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
+```
+
+If the 'Role' column for any node says "FAILED", check the
+[Troubleshooting section](troubleshooting.md) before proceeding.
+
+Also, check that the `repmgr-check-master` command works successfully on each node:
+
+```shell
+su - gitlab-consul
+gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
+```
+
+This command relies on exit codes to tell Consul whether a particular node is a master
+or secondary. The most important thing here is that this command does not produce errors.
+If there are errors it's most likely due to incorrect `gitlab-consul` database user permissions.
+Check the [Troubleshooting section](troubleshooting.md) before proceeding.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure PgBouncer
+
+Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
+The following IPs will be used as an example:
+
+- `10.6.0.21`: PgBouncer 1
+- `10.6.0.22`: PgBouncer 2
+- `10.6.0.23`: PgBouncer 3
+
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`, and replace
+ `<consul_password_hash>` and `<pgbouncer_password_hash>` with the
+ password hashes you [set up previously](#postgresql-primary-node):
+
+ ```ruby
+ # Disable all components except Pgbouncer and Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure PgBouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: '<consul_password_hash>'
+ },
+ 'pgbouncer': {
+ password: '<pgbouncer_password_hash>'
+ }
+ }
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+ consul['enable'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Enable service discovery for Prometheus
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
+ ```
+
+1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+
+1. Create a `.pgpass` file so Consul is able to
+ reload PgBouncer. Enter the PgBouncer password twice when asked:
+
+ ```shell
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+1. Ensure each node is talking to the current master:
+
+ ```shell
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+ If there is an error `psql: ERROR: Auth failed` after typing in the
+ password, ensure you previously generated the MD5 password hashes with the correct
+ format. The correct format is to concatenate the password and the username:
+ `PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
+ needed to generate an MD5 password hash for the `pgbouncer` user.
+
+1. Once the console prompt is available, run the following queries:
+
+ ```shell
+ show databases ; show clients ;
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 31530) 77150s; run: log: (pid 31106) 77182s
+ run: logrotate: (pid 32613) 3357s; run: log: (pid 30107) 77500s
+ run: node-exporter: (pid 31550) 77149s; run: log: (pid 30138) 77493s
+ run: pgbouncer: (pid 32033) 75593s; run: log: (pid 31117) 77175s
+ run: pgbouncer-exporter: (pid 31558) 77148s; run: log: (pid 31498) 77156s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+### Configure the internal load balancer
+
+If you're running more than one PgBouncer node as recommended, you'll need to set up
+a TCP internal load balancer to distribute traffic across them.
+
+The following IP will be used as an example:
+
+- `10.6.0.20`: Internal Load Balancer
+
+Here's how you could do it with [HAProxy](https://www.haproxy.org/):
+
+```plaintext
+global
+ log /dev/log local0
+ log localhost local1 notice
+ log stdout format raw local0
+
+defaults
+ log global
+ default-server inter 10s fall 3 rise 2
+ balance leastconn
+
+frontend internal-pgbouncer-tcp-in
+ bind *:6432
+ mode tcp
+ option tcplog
+
+ default_backend pgbouncer
+
+backend pgbouncer
+ mode tcp
+ option tcp-check
+
+ server pgbouncer1 10.6.0.21:6432 check
+ server pgbouncer2 10.6.0.22:6432 check
+ server pgbouncer3 10.6.0.23:6432 check
+```
+
+Refer to your preferred Load Balancer's documentation for further guidance.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Gitaly
+
+Deploying Gitaly on its own server can benefit GitLab installations that are
+larger than a single machine.
+
+The Gitaly node requirements are dependent on customer data, specifically the number of
+projects and their repository sizes. Two nodes are recommended as an absolute minimum.
+Each Gitaly node should store no more than 5TB of data and have the number of
+[`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) set to 20% of available CPUs.
+Additional nodes should be considered in conjunction with a review of expected
+data size and spread based on the recommendations above.
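+
+For illustration, the `gitaly-ruby` worker count is set in `/etc/gitlab/gitlab.rb` on
+each Gitaly node. The value below assumes a node with 16 available CPUs (20% of 16,
+rounded down) and is an example only:
+
+```ruby
+gitaly['ruby_num_workers'] = 3
+```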
+
+It is also strongly recommended that all Gitaly nodes be set up with SSD disks with
+a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for writes,
+as Gitaly has heavy I/O. These IOPS values are only a starting point; over time they
+may need to be adjusted higher or lower depending on the scale of your environment's workload.
+If you're running the environment on a Cloud provider, refer to
+their documentation on how to configure IOPS correctly.
+
+Some things to note:
+
+- The GitLab Rails application shards repositories into [repository storages](../repository_storage_paths.md).
+- A Gitaly server can host one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+ traffic is unencrypted by default. The use of a firewall is highly recommended
+ to restrict access to the Gitaly server. Another option is to
+ [use TLS](#gitaly-tls-support).
+
+TIP: **Tip:**
+For more information about Gitaly's history and network architecture see the
+[standalone Gitaly documentation](../gitaly/index.md).
+
+NOTE: **Note:**
+The token referred to throughout the Gitaly documentation is
+just an arbitrary password selected by the administrator. It is unrelated to
+tokens created for the GitLab API or other similar web API tokens.
+
+Below we describe how to configure two Gitaly servers, with IPs and
+domain names:
+
+- `10.6.0.51`: Gitaly 1 (`gitaly1.internal`)
+- `10.6.0.52`: Gitaly 2 (`gitaly2.internal`)
+
+We assume that the secret token is `gitalysecret` and that
+your GitLab installation has three repository storages:
+
+- `default` on Gitaly 1
+- `storage1` on Gitaly 1
+- `storage2` on Gitaly 2
+
+On each node:
+
+1. [Download/Install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **without** providing the `EXTERNAL_URL` value.
+1. Edit `/etc/gitlab/gitlab.rb` to configure storage paths, enable
+ the network listener and configure the token:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the GitLab Rails application setup
+   gitaly['auth_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+ gitlab_exporter['enable'] = false
+
+   # If you run a separate monitoring node you can disable these services
+ alertmanager['enable'] = false
+ prometheus['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+ # Comment out following line if you only want to support TLS connections
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters will listen on for monitoring
+ gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+ ```
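+
+   As noted in the comments above, `/etc/gitlab/gitlab-secrets.json` must be identical
+   to the one on the GitLab Rails (web) nodes. One way to do that, sketched here under
+   the assumption of root SSH access, is to copy it over from a Rails node:
+
+   ```shell
+   # Run as root on a GitLab Rails node; <gitaly_node> is a placeholder for this server
+   scp /etc/gitlab/gitlab-secrets.json <gitaly_node>:/etc/gitlab/gitlab-secrets.json
+   ```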
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+ 1. On `gitaly1.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => {
+ 'path' => '/var/opt/gitlab/git-data'
+ },
+ 'storage1' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ 1. On `gitaly2.internal`:
+
+ ```ruby
+ git_data_dirs({
+ 'storage2' => {
+ 'path' => '/mnt/gitlab/git-data'
+ },
+ })
+ ```
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Confirm that Gitaly can perform callbacks to the internal API:
+
+ ```shell
+ sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30339) 77006s; run: log: (pid 29878) 77020s
+ run: gitaly: (pid 30351) 77005s; run: log: (pid 29660) 77040s
+ run: logrotate: (pid 7760) 3213s; run: log: (pid 29782) 77032s
+ run: node-exporter: (pid 30378) 77004s; run: log: (pid 29812) 77026s
+ ```
+
+### Gitaly TLS support
+
+Gitaly supports TLS encryption. To be able to communicate
+with a Gitaly instance that listens for secure connections, you will need to use the `tls://` URL
+scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
+
+You will need to bring your own certificates as this isn't provided automatically.
+The certificate, or its certificate authority, must be installed on all Gitaly
+nodes (including the Gitaly node using the certificate) and on all client nodes
+that communicate with it following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
+unencrypted listening address `listen_addr` and an encrypted listening
+address `tls_listen_addr` at the same time. This allows you to do a
+gradual transition from unencrypted to encrypted traffic, if necessary.
+
+To configure Gitaly with TLS:
+
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```shell
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 755 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+   sudo chmod 644 /etc/gitlab/ssl/key.pem /etc/gitlab/ssl/cert.pem
+ ```
+
+1. Copy the cert to `/etc/gitlab/trusted-certs` so Gitaly will trust the cert when
+ calling into itself:
+
+ ```shell
+ sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
+ ```
+
+1. Delete `gitaly['listen_addr']` to allow only encrypted connections.
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Sidekiq
+
+Sidekiq requires connections to the Redis, PostgreSQL, and Gitaly instances.
+The following IPs will be used as an example:
+
+- `10.6.0.71`: Sidekiq 1
+- `10.6.0.72`: Sidekiq 2
+- `10.6.0.73`: Sidekiq 3
+- `10.6.0.74`: Sidekiq 4
+
+To configure the Sidekiq nodes, on each one:
+
+1. SSH into the Sidekiq server.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab package
+you want using steps 1 and 2 from the GitLab downloads page.
+**Do not complete any other steps on the download page.**
+1. Open `/etc/gitlab/gitlab.rb` with your editor:
+
+ ```ruby
+ ########################################
+ ##### Services Disabled ###
+ ########################################
+
+ nginx['enable'] = false
+ grafana['enable'] = false
+ prometheus['enable'] = false
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_workhorse['enable'] = false
+ puma['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
+
+ ########################################
+ #### Redis ###
+ ########################################
+
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the master node.
+ redis['master_password'] = '<redis_primary_password>'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.6.0.11', 'port' => 26379},
+ {'host' => '10.6.0.12', 'port' => 26379},
+ {'host' => '10.6.0.13', 'port' => 26379},
+ ]
+
+ #######################################
+ ### Gitaly ###
+ #######################################
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+ gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
+
+ #######################################
+ ### Postgres ###
+ #######################################
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['auto_migrate'] = false
+
+ #######################################
+ ### Sidekiq configuration ###
+ #######################################
+ sidekiq['listen_address'] = "0.0.0.0"
+
+ #######################################
+ ### Monitoring configuration ###
+ #######################################
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+
+ # Rails Status for prometheus
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 30114) 77353s; run: log: (pid 29756) 77367s
+ run: logrotate: (pid 9898) 3561s; run: log: (pid 29653) 77380s
+ run: node-exporter: (pid 30134) 77353s; run: log: (pid 29706) 77372s
+ run: sidekiq: (pid 30142) 77351s; run: log: (pid 29638) 77386s
+ ```
+
+TIP: **Tip:**
+You can also run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
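+
+For example, to run two Sidekiq processes on a node, each listening to all queues, you
+can set the queue groups in `/etc/gitlab/gitlab.rb` (a sketch only; see the linked page
+for details and trade-offs):
+
+```ruby
+sidekiq['queue_groups'] = ['*'] * 2
+```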
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure GitLab Rails
+
+NOTE: **Note:**
+In our architectures we run each GitLab Rails node using the Puma webserver, with
+the number of workers set to 90% of available CPUs and four threads per worker. For
+nodes that run Rails alongside other components, the worker value should be reduced
+accordingly; we've found 50% to achieve a good balance, but this is dependent
+on workload.
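+
+As an illustration only, on a dedicated Rails node with 16 available CPUs this would
+translate to roughly the following settings in `/etc/gitlab/gitlab.rb` (the CPU count
+and the resulting values are assumptions, not prescriptions):
+
+```ruby
+puma['worker_processes'] = 14  # ~90% of 16 CPUs
+puma['min_threads'] = 4
+puma['max_threads'] = 4
+```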
+
+This section describes how to configure the GitLab application (Rails) component.
+On each node perform the following:
+
+1. If you're [using NFS](#configure-nfs-optional):
+
+ 1. If necessary, install the NFS client utility packages using the following
+ commands:
+
+ ```shell
+ # Ubuntu/Debian
+ apt-get install nfs-common
+
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
+
+ 1. Specify the necessary NFS mounts in `/etc/fstab`.
+ The exact contents of `/etc/fstab` will depend on how you chose
+ to configure your NFS server. See the [NFS documentation](../high_availability/nfs.md)
+ for examples and the various options.
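+
+      A single entry might look broadly like the following, assuming an NFS server at
+      `10.6.0.90` exporting `/var/opt/gitlab-data` (both are placeholders; use the
+      mount options recommended in the NFS documentation):
+
+      ```plaintext
+      10.6.0.90:/var/opt/gitlab-data/git-data /var/opt/gitlab/git-data nfs4 defaults,vers=4.1,hard,nofail 0 2
+      ```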
+
+ 1. Create the shared directories. These may be different depending on your NFS
+ mount locations.
+
+ ```shell
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
+
+1. Download/install Omnibus GitLab using **steps 1 and 2** from
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
+ steps on the download page.
+1. Create/edit `/etc/gitlab/gitlab.rb` and use the following configuration.
+ To maintain uniformity of links across nodes, the `external_url`
+ on the application server should point to the external URL that users will use
+ to access GitLab. This would be the URL of the [external load balancer](#configure-the-external-load-balancer)
+ which will route traffic to the GitLab application server:
+
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
+ # to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
+ # The following two values must be the same as their respective values
+ # of the Gitaly setup
+   gitlab_rails['gitaly_token'] = 'gitalysecret'
+ gitlab_shell['secret_token'] = 'shellsecret'
+
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
+
+ ## Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ gitaly['enable'] = false
+ nginx['enable'] = true
+ sidekiq['enable'] = false
+
+ ## PostgreSQL connection details
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
+ gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = '<postgresql_user_password>'
+ gitlab_rails['auto_migrate'] = false
+
+ ## Redis connection details
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the Redis primary node.
+ redis['master_password'] = '<redis_primary_password>'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.6.0.11', 'port' => 26379},
+ {'host' => '10.6.0.12', 'port' => 26379},
+ {'host' => '10.6.0.13', 'port' => 26379}
+ ]
+
+ ## Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Set the network addresses that the exporters used for monitoring will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
+ sidekiq['listen_address'] = "0.0.0.0"
+ puma['listen'] = '0.0.0.0'
+
+ ## The IPs of the Consul server nodes
+ ## You can also use FQDNs and intermix them with IPs
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
+ }
+
+ # Add the monitoring node's IP address to the monitoring whitelist and allow it to
+ # scrape the NGINX metrics
+ gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
+ nginx['status']['options']['allow'] = ['10.6.0.81/32', '127.0.0.0/8']
+ gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
+
+ ## Uncomment and edit the following options if you have set up NFS
+ ##
+ ## Prevent GitLab from starting if NFS data mounts are not available
+ ##
+ #high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+ ##
+ ## Ensure UIDs and GIDs match between servers for permissions via NFS
+ ##
+ #user['uid'] = 9000
+ #user['gid'] = 9000
+ #web_server['uid'] = 9001
+ #web_server['gid'] = 9001
+ #registry['uid'] = 9002
+ #registry['gid'] = 9002
+ ```
+
+1. If you're using [Gitaly with TLS support](#gitaly-tls-support), make sure the
+ `git_data_dirs` entry is configured with `tls` instead of `tcp`:
+
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
+ ```
+
+ 1. Copy the cert into `/etc/gitlab/trusted-certs`:
+
+ ```shell
+ sudo cp cert.pem /etc/gitlab/trusted-certs/
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Run `sudo gitlab-rake gitlab:gitaly:check` to confirm the node can connect to Gitaly.
+1. Tail the logs to see the requests:
+
+ ```shell
+ sudo gitlab-ctl tail gitaly
+ ```
+
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 4890) 8647s; run: log: (pid 29962) 79128s
+ run: gitlab-exporter: (pid 4902) 8647s; run: log: (pid 29913) 79134s
+ run: gitlab-workhorse: (pid 4904) 8646s; run: log: (pid 29713) 79155s
+ run: logrotate: (pid 12425) 1446s; run: log: (pid 29798) 79146s
+ run: nginx: (pid 4925) 8646s; run: log: (pid 29726) 79152s
+ run: node-exporter: (pid 4931) 8645s; run: log: (pid 29855) 79140s
+ run: puma: (pid 4936) 8645s; run: log: (pid 29656) 79161s
+ ```
+
+NOTE: **Note:**
+When you specify `https` in the `external_url`, as in the example
+above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+certificates are not present, NGINX will fail to start. See the
+[NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for more information.
+
+### GitLab Rails post-configuration
+
+1. Ensure that all migrations ran:
+
+ ```shell
+ gitlab-rake gitlab:db:configure
+ ```
+
+ NOTE: **Note:**
+ If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+ PostgreSQL it may be that your PgBouncer node's IP address is missing from
+ PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
+ [PgBouncer error `ERROR: pgbouncer cannot connect to server`](troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+ in the Troubleshooting section before proceeding.
+
+1. [Configure fast lookup of authorized SSH keys in the database](../operations/fast_ssh_key_lookup.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure Prometheus
+
+The Omnibus GitLab package can be used to configure a standalone Monitoring node
+running [Prometheus](../monitoring/prometheus/index.md) and
+[Grafana](../monitoring/performance/grafana_configuration.md):
+
+1. SSH into the Monitoring node.
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page.
+ Do not complete any other steps on the download page.
+1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
+
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ puma['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = '<grafana_password>'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+ consul['configuration'] = {
+ retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. In the GitLab UI, under `admin/application_settings/metrics_and_profiling`, expand
+   **Metrics - Grafana** and set the Grafana URL to `http[s]://<MONITOR NODE>/-/grafana`.
+1. Verify the GitLab services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+ The output should be similar to the following:
+
+ ```plaintext
+ run: consul: (pid 31637) 17337s; run: log: (pid 29748) 78432s
+ run: grafana: (pid 31644) 17337s; run: log: (pid 29719) 78438s
+ run: logrotate: (pid 31809) 2936s; run: log: (pid 29581) 78462s
+ run: nginx: (pid 31665) 17335s; run: log: (pid 29556) 78468s
+ run: prometheus: (pid 31672) 17335s; run: log: (pid 29633) 78456s
+ ```
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure the object storage
+
+GitLab supports using an object storage service for holding numerous types of data.
+It's recommended over [NFS](#configure-nfs-optional) and in general it's better
+in larger setups as object storage is typically much more performant, reliable,
+and scalable.
+
+Object storage options that GitLab has tested, or is aware of customers using, include:
+
+- SaaS/Cloud solutions such as [Amazon S3](https://aws.amazon.com/s3/), [Google cloud storage](https://cloud.google.com/storage).
+- On-premises hardware and appliances from various storage vendors.
+- MinIO. There is [a guide to deploying this](https://docs.gitlab.com/charts/advanced/external-object-storage/minio.html) within our Helm Chart documentation.
+
+For configuring GitLab to use Object Storage refer to the following guides
+based on what features you intend to use:
+
+1. Configure [object storage for backups](../../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
+1. Configure [object storage for LFS objects](../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+1. Configure [object storage for merge request diffs](../merge_request_diffs.md#using-object-storage).
+1. Configure [object storage for Container Registry](../packages/container_registry.md#use-object-storage) (optional feature).
+1. Configure [object storage for Mattermost](https://docs.mattermost.com/administration/config-settings.html#file-storage) (optional feature).
+1. Configure [object storage for packages](../packages/index.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Dependency Proxy](../packages/dependency_proxy.md#using-object-storage) (optional feature). **(PREMIUM ONLY)**
+1. Configure [object storage for Pseudonymizer](../pseudonymizer.md#configuration) (optional feature). **(ULTIMATE ONLY)**
+1. Configure [object storage for autoscale Runner caching](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching) (optional - for improved performance).
+1. Configure [object storage for Terraform state files](../terraform_state.md#using-object-storage-core-only).
+
+Using separate buckets for each data type is the recommended approach for GitLab.
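+
+For reference, here is a sketch of what one such configuration can look like in
+`/etc/gitlab/gitlab.rb`, in this case for CI job artifacts on an S3-compatible store (the
+bucket name, region, and credentials are placeholders; see the guide linked above for the
+full set of options):
+
+```ruby
+gitlab_rails['artifacts_object_store_enabled'] = true
+gitlab_rails['artifacts_object_store_remote_directory'] = "gitlab-artifacts"
+gitlab_rails['artifacts_object_store_connection'] = {
+  'provider' => 'AWS',
+  'region' => 'us-east-1',
+  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
+  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
+}
+```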
+
+A limitation of our configuration is that each use of object storage must be configured separately.
+[We have an issue for improving this](https://gitlab.com/gitlab-org/gitlab/-/issues/23345);
+one improvement it might bring is the ability to easily use one bucket with separate folders.
+
+There is at least one specific issue with using the same bucket:
+when GitLab is deployed with the Helm chart, restore from backup
+[will not function properly](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-pseudonymizer)
+unless separate buckets are used.
+
+One risk of using a single bucket arises if your organization decides to
+migrate GitLab to the Helm deployment in the future. GitLab would run, but the backup
+limitation might not be noticed until the organization has a critical requirement for the
+backups to work.
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Configure NFS (optional)
+
+[Object storage](#configure-the-object-storage) and [Gitaly](#configure-gitaly)
+are recommended over NFS wherever possible for improved performance. If you intend
+to use GitLab Pages, be aware that it currently [requires NFS](troubleshooting.md#gitlab-pages-requires-nfs).
+
+See how to [configure NFS](../high_availability/nfs.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
+
+## Troubleshooting
+
+See the [troubleshooting documentation](troubleshooting.md).
+
+<div align="right">
+ <a type="button" class="btn btn-default" href="#setup-components">
+ Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
+ </a>
+</div>
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 623d7f3f776..8fde71a66bf 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -38,7 +38,8 @@ When scaling GitLab, there are several factors to consider:
- A load balancer is added in front to distribute traffic across the application nodes.
- The application nodes connects to a shared file server and PostgreSQL and Redis services on the backend.
-NOTE: **Note:** Depending on your workflow, the following recommended
+NOTE: **Note:**
+Depending on your workflow, the following recommended
reference architectures may need to be adapted accordingly. Your workload
is influenced by factors including how active your users are,
how much automation you use, mirroring, and repository/change size. Additionally the
@@ -120,13 +121,13 @@ As long as at least one of each component is online and capable of handling the
### Automated database failover **(PREMIUM ONLY)**
> - Level of complexity: **High**
-> - Required domain knowledge: PgBouncer, Repmgr, shared storage, distributed systems
+> - Required domain knowledge: PgBouncer, Repmgr or Patroni, shared storage, distributed systems
> - Supported tiers: [GitLab Premium and Ultimate](https://about.gitlab.com/pricing/)
By adding automatic failover for database systems, you can enable higher uptime
with additional database nodes. This extends the default database with
cluster management and failover policies.
-[PgBouncer in conjunction with Repmgr](../postgresql/replication_and_failover.md)
+[PgBouncer in conjunction with Repmgr or Patroni](../postgresql/replication_and_failover.md)
is recommended.
### Instance level replication with GitLab Geo **(PREMIUM ONLY)**
@@ -164,6 +165,7 @@ column.
| [PostgreSQL](../../development/architecture.md#postgresql) | Database | [PostgreSQL configuration](https://docs.gitlab.com/omnibus/settings/database.html) | Yes |
| [PgBouncer](../../development/architecture.md#pgbouncer) | Database connection pooler | [PgBouncer configuration](../high_availability/pgbouncer.md#running-pgbouncer-as-part-of-a-non-ha-gitlab-installation) **(PREMIUM ONLY)** | Yes |
| Repmgr | PostgreSQL cluster management and failover | [PostgreSQL and Repmgr configuration](../postgresql/replication_and_failover.md) | Yes |
+| Patroni | An alternative PostgreSQL cluster management and failover solution | [PostgreSQL and Patroni configuration](../postgresql/replication_and_failover.md#patroni) | Yes |
| [Redis](../../development/architecture.md#redis) ([3](#footnotes)) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) | Yes |
| Redis Sentinel | Redis | [Redis Sentinel configuration](../high_availability/redis.md) | Yes |
| [Gitaly](../../development/architecture.md#gitaly) ([2](#footnotes)) ([7](#footnotes)) | Provides access to Git repositories | [Gitaly configuration](../gitaly/index.md#run-gitaly-on-its-own-server) | Yes |
@@ -209,7 +211,7 @@ cluster with the Rails nodes broken down into a number of smaller Pods across th
For medium sized installs (3,000 - 5,000) we suggest one Redis cluster for all
classes and that Redis Sentinel is hosted alongside Consul.
For larger architectures (10,000 users or more) we suggest running a separate
- [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class
+ [Redis Cluster](../redis/replication_and_failover.md#running-multiple-redis-clusters) for the Cache class
and another for the Queues and Shared State classes respectively. We also recommend
that you run the Redis Sentinel clusters separately for each Redis Cluster.
diff --git a/doc/administration/reference_architectures/troubleshooting.md b/doc/administration/reference_architectures/troubleshooting.md
index 15e377fe183..35bdb65c810 100644
--- a/doc/administration/reference_architectures/troubleshooting.md
+++ b/doc/administration/reference_architectures/troubleshooting.md
@@ -1,4 +1,4 @@
-# Troubleshooting a reference architecture set up
+# Troubleshooting a reference architecture setup
This page serves as the troubleshooting documentation if you followed one of
the [reference architectures](index.md#reference-architectures).
@@ -17,7 +17,7 @@ with the Fog library that GitLab uses. Symptoms include:
### GitLab Pages requires NFS
If you intend to use [GitLab Pages](../../user/project/pages/index.md), this currently requires
-[NFS](../high_availability/nfs.md). There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/issues/196)
+[NFS](../high_availability/nfs.md). There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/196)
to remove this dependency. In the future, GitLab Pages may use
[object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/208135).
@@ -86,8 +86,117 @@ a workaround, in the mean time, to
## Troubleshooting Redis
-If the application node cannot connect to the Redis node, check your firewall rules and
-make sure Redis can accept TCP connections under port `6379`.
+There are a lot of moving parts that need to be handled carefully
+for the HA setup to work as expected.
+
+Before proceeding with the troubleshooting below, check your firewall rules:
+
+- Redis machines:
+  - Accept TCP connections on port `6379`.
+  - Connect to the other Redis machines via TCP on port `6379`.
+- Sentinel machines:
+  - Accept TCP connections on port `26379`.
+  - Connect to other Sentinel machines via TCP on port `26379`.
+  - Connect to the Redis machines via TCP on port `6379`.
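+
+A quick way to verify these rules from each machine is a simple TCP probe, for example with
+`nc` if it is installed (the IP addresses below are illustrative):
+
+```shell
+nc -zv 10.133.1.58 6379    # can this machine reach Redis?
+nc -zv 10.133.5.26 26379   # can this machine reach Sentinel?
+```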
+
+### Troubleshooting Redis replication
+
+You can check if everything is correct by connecting to each server with
+the `redis-cli` application and sending the `info replication` command, as below.
+
+```shell
+/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
+```
+
+When connected to a `Primary` Redis, you will see the number of connected
+`replicas`, and a list of each with connection details:
+
+```plaintext
+# Replication
+role:master
+connected_replicas:1
+replica0:ip=10.133.5.21,port=6379,state=online,offset=208037514,lag=1
+master_repl_offset:208037658
+repl_backlog_active:1
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:206989083
+repl_backlog_histlen:1048576
+```
+
+When connected to a `replica`, you will see details of the primary connection and whether
+it's `up` or `down`:
+
+```plaintext
+# Replication
+role:replica
+master_host:10.133.1.58
+master_port:6379
+master_link_status:up
+master_last_io_seconds_ago:1
+master_sync_in_progress:0
+replica_repl_offset:208096498
+replica_priority:100
+replica_read_only:1
+connected_replicas:0
+master_repl_offset:0
+repl_backlog_active:0
+repl_backlog_size:1048576
+repl_backlog_first_byte_offset:0
+repl_backlog_histlen:0
+```
+
+### Troubleshooting Sentinel
+
+If you get an error like `Redis::CannotConnectError: No sentinels available.`,
+there may be something wrong with your configuration files, or it may be related
+to [this issue](https://github.com/redis/redis-rb/issues/531).
+
+You must make sure that you define the same values for `redis['master_name']`
+and `redis['master_password']` as you defined for your Sentinel node.
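+
+On Omnibus installations, this means the application and Sentinel nodes should share settings
+along these lines in `/etc/gitlab/gitlab.rb` (the values are examples only):
+
+```ruby
+redis['master_name'] = 'gitlab-redis'
+redis['master_password'] = 'redis-password-goes-here'
+```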
+
+The way the Redis connector `redis-rb` works with Sentinel is a bit
+non-intuitive. We try to hide the complexity in Omnibus GitLab, but it still requires
+a few extra configuration steps.
+
+---
+
+To make sure your configuration is correct:
+
+1. SSH into your GitLab application server
+1. Enter the Rails console:
+
+ ```shell
+ # For Omnibus installations
+ sudo gitlab-rails console
+
+ # For source installations
+ sudo -u git rails console -e production
+ ```
+
+1. Run in the console:
+
+ ```ruby
+ redis = Redis.new(Gitlab::Redis::SharedState.params)
+ redis.info
+ ```
+
+ Keep this screen open and try to simulate a failover below.
+
+1. To simulate a failover on primary Redis, SSH into the Redis server and run:
+
+ ```shell
+  # The port must match your primary Redis port, and the sleep time must be a few seconds longer than the timeout you defined
+ redis-cli -h localhost -p 6379 DEBUG sleep 20
+ ```
+
+1. Then back in the Rails console from the first step, run:
+
+ ```ruby
+ redis.info
+ ```
+
+   You should see a different port after a delay of a few seconds
+   (the failover/reconnect time).
## Troubleshooting Gitaly
@@ -291,7 +400,7 @@ unset https_proxy
When updating the `gitaly['listen_addr']` or `gitaly['prometheus_listen_addr']` values, Gitaly may continue to listen on the old address after a `sudo gitlab-ctl reconfigure`.
-When this occurs, performing a `sudo gitlab-ctl restart` will resolve the issue. This will no longer be necessary after [this issue](https://gitlab.com/gitlab-org/gitaly/issues/2521) is resolved.
+When this occurs, performing a `sudo gitlab-ctl restart` will resolve the issue. This will no longer be necessary after [this issue](https://gitlab.com/gitlab-org/gitaly/-/issues/2521) is resolved.
### Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node
@@ -327,3 +436,135 @@ or
```shell
curl http[s]://localhost:<EXPORTER LISTENING PORT>/-/metric
```
+
+## Troubleshooting PgBouncer
+
+If you are experiencing issues connecting through PgBouncer, the first place to check is always the logs:
+
+```shell
+sudo gitlab-ctl tail pgbouncer
+```
+
+You can also check the output from `show databases` in the [administrative console](#pgbouncer-administrative-console). In the output, you would expect to see values in the `host` field for the `gitlabhq_production` database, and `current_connections` should be greater than 1.
+
+### PgBouncer administrative console
+
+As part of Omnibus GitLab, the `gitlab-ctl pgb-console` command is provided to automatically connect to the PgBouncer administrative console. See the [PgBouncer documentation](https://www.pgbouncer.org/usage.html#admin-console) for detailed instructions on how to interact with the console.
+
+To start a session:
+
+```shell
+sudo gitlab-ctl pgb-console
+```
+
+The password you will be prompted for is the `pgbouncer_user_password`.
+
+To get some basic information about the instance, run:
+
+```shell
+pgbouncer=# show databases; show clients; show servers;
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+---------------------+-----------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | 127.0.0.1 | 5432 | gitlabhq_production | | 100 | 5 | | 0 | 1
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+(2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link
+| remote_pid | tls
+------+-----------+---------------------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------
++------------+-----
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44590 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12444c0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44592 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12447c0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44594 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x1244940 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44706 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:16:31 | 0x1244ac0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44708 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:15:15 | 0x1244c40 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44794 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:15:15 | 0x1244dc0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44798 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:16:31 | 0x1244f40 |
+| 0 |
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 44660 | 127.0.0.1 | 6432 | 2018-04-24 22:13:51 | 2018-04-24 22:17:12 | 0x1244640 |
+| 0 |
+(8 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | rem
+ote_pid | tls
+------+--------+---------------------+-------+-----------+------+------------+------------+---------------------+---------------------+-----------+------+----
+--------+-----
+ S | gitlab | gitlabhq_production | idle | 127.0.0.1 | 5432 | 127.0.0.1 | 35646 | 2018-04-24 22:15:15 | 2018-04-24 22:17:10 | 0x124dca0 | |
+ 19980 |
+(1 row)
+```
+
+### Message: `LOG: invalid CIDR mask in address`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-cidr-mask-in-address).
+
+### Message: `LOG: invalid IP mask "md5": Name or service not known`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-ip-mask-md5-name-or-service-not-known).
+
+## Troubleshooting PostgreSQL
+
+If you are experiencing issues connecting to PostgreSQL, the first place to check is always the logs:
+
+```shell
+sudo gitlab-ctl tail postgresql
+```
+
+### Consul and PostgreSQL changes not taking effect
+
+Due to the potential impacts, `gitlab-ctl reconfigure` only reloads Consul and PostgreSQL; it does not restart the services. However, not all changes can be activated by reloading.
+
+To restart either service, run `gitlab-ctl restart SERVICE`.
+
+For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a one-minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start it again afterwards with `gitlab-ctl start repmgrd`, as in the example below.
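+
+For example, a cautious restart of the master using the commands above might look like this:
+
+```shell
+# On each standby node, stop repmgrd first
+gitlab-ctl stop repmgrd
+
+# On the master node, restart PostgreSQL
+gitlab-ctl restart postgresql
+
+# Back on each standby node, start repmgrd again
+gitlab-ctl start repmgrd
+```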
+
+On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../high_availability/consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+
+### `gitlab-ctl repmgr-check-master` command produces errors
+
+If this command displays errors about database permissions, it is likely that something failed during
+the installation, resulting in the `gitlab-consul` database user having incorrect permissions. Follow these
+steps to fix the problem:
+
+1. On the master database node, connect to the database prompt - `gitlab-psql -d template1`
+1. Delete the `gitlab-consul` user - `DROP USER "gitlab-consul";`
+1. Exit the database prompt - `\q`
+1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) and the user will be re-added with the proper permissions.
+1. Change to the `gitlab-consul` user - `su - gitlab-consul`
+1. Try the check command again - `gitlab-ctl repmgr-check-master`.
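+
+For reference, the same fix as a single terminal session on the master database node
+(the commands are taken from the steps above):
+
+```shell
+gitlab-psql -d template1
+# In the psql prompt:
+#   DROP USER "gitlab-consul";
+#   \q
+gitlab-ctl reconfigure
+su - gitlab-consul
+gitlab-ctl repmgr-check-master
+```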
+
+There should now be no errors. If errors still occur, then there is another problem.
+
+### PgBouncer error `ERROR: pgbouncer cannot connect to server`
+
+You may get this error when running `gitlab-rake gitlab:db:configure` or you
+may see the error in the PgBouncer log file.
+
+```plaintext
+PG::ConnectionBad: ERROR: pgbouncer cannot connect to server
+```
+
+The problem may be that your PgBouncer node's IP address is not included in the
+`trust_auth_cidr_addresses` setting in `/etc/gitlab/gitlab.rb` on the database nodes.
+
+You can confirm that this is the issue by checking the PostgreSQL log on the master
+database node. If you see the following error, then `trust_auth_cidr_addresses`
+is the problem:
+
+```plaintext
+2018-03-29_13:59:12.11776 FATAL: no pg_hba.conf entry for host "123.123.123.123", user "pgbouncer", database "gitlabhq_production", SSL off
+```
+
+To fix the problem, add the PgBouncer node's IP address to the `trust_auth_cidr_addresses` setting in `/etc/gitlab/gitlab.rb` on the database nodes:
+
+```ruby
+postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)
+```
+
+[Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
diff --git a/doc/administration/repository_storage_paths.md b/doc/administration/repository_storage_paths.md
index 87f901becf5..5520eeace0a 100644
--- a/doc/administration/repository_storage_paths.md
+++ b/doc/administration/repository_storage_paths.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Gitaly
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: reference, howto
+---
+
# Repository storage paths
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/4578) in GitLab 8.10.
@@ -60,7 +67,8 @@ files and add the full paths of the alternative repository storage paths. In
the example below, we add two more mount points that are named `nfs_1` and `nfs_2`
respectively.
-NOTE: **Note:** This example uses NFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
+NOTE: **Note:**
+This example uses NFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
**For installations from source**
@@ -110,7 +118,10 @@ Once you set the multiple storage paths, you can choose where new repositories
will be stored under **Admin Area > Settings > Repository >
Repository storage > Storage nodes for new repositories**.
-![Choose repository storage path in Admin Area](img/repository_storages_admin_ui_v12_10.png)
+Each storage can be assigned a weight from 0-100. When a new project is created, these
+weights are used to determine the storage location the repository will be created on.
+
+![Choose repository storage path in Admin Area](img/repository_storages_admin_ui_v13_1.png)
Beginning with GitLab 8.13.4, multiple paths can be chosen. New repositories
will be randomly placed on one of the selected paths.
diff --git a/doc/administration/repository_storage_types.md b/doc/administration/repository_storage_types.md
index 825da08b66e..1c036cfe2ac 100644
--- a/doc/administration/repository_storage_types.md
+++ b/doc/administration/repository_storage_types.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Gitaly
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: reference, howto
+---
+
# Repository storage types **(CORE ONLY)**
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/28283) in GitLab 10.0.
diff --git a/doc/administration/server_hooks.md b/doc/administration/server_hooks.md
index 6df0f187a42..2fea000b799 100644
--- a/doc/administration/server_hooks.md
+++ b/doc/administration/server_hooks.md
@@ -8,130 +8,183 @@ disqus_identifier: 'https://docs.gitlab.com/ee/administration/custom_hooks.html'
# Server hooks **(CORE ONLY)**
-> **Notes:**
->
-> - Server hooks were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/196051) in GitLab 12.8 replacing Custom Hooks.
-> - Server hooks must be configured on the filesystem of the GitLab server. Only GitLab server administrators will be able to complete these tasks. Please explore [webhooks](../user/project/integrations/webhooks.md) and [GitLab CI/CD](../ci/README.md) as an option if you do not have filesystem access. For a user-configurable Git hook interface, see [Push Rules](../push_rules/push_rules.md), available in GitLab Starter **(STARTER)**.
-> - Server hooks won't be replicated to secondary nodes if you use [GitLab Geo](geo/replication/index.md).
-
-Git natively supports hooks that are executed on different actions. These hooks run
-on the server and can be used to enforce specific commit policies or perform other
-tasks based on the state of the repository.
-
-Examples of server-side Git hooks include `pre-receive`, `post-receive`, and `update`.
-See [Git SCM Server-Side Hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks#Server-Side-Hooks)
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/196051) in GitLab 12.8 replacing Custom Hooks.
+
+Git supports hooks that are executed on different actions. These hooks run on the server and can be
+used to enforce specific commit policies or perform other tasks based on the state of the
+repository.
+
+Git supports the following hooks:
+
+- `pre-receive`
+- `post-receive`
+- `update`
+
+See [the Git documentation](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks#_server_side_hooks)
for more information about each hook type.
-## Create a server hook for a repository
+Server-side Git hooks can be configured for:
-Server-side Git hooks are typically placed in the repository's `hooks`
-subdirectory. In GitLab, hook directories are symlinked to the GitLab Shell
-`hooks` directory for ease of maintenance between GitLab Shell upgrades.
-Server hooks are implemented differently, but the behavior is exactly the same
-once the hook is created.
+- [A single repository](#create-a-server-hook-for-a-repository).
+- [All repositories](#create-a-global-server-hook-for-all-repositories).
+
+Note the following about server hooks:
+
+- Server hooks must be configured on the file system of the GitLab server. Only GitLab server
+ administrators are able to complete these tasks. If you don't have file system access, see
+ possible alternatives such as:
+ - [Webhooks](../user/project/integrations/webhooks.md).
+ - [GitLab CI/CD](../ci/README.md).
+ - [Push Rules](../push_rules/push_rules.md), for a user-configurable Git hook
+ interface. **(STARTER)**
+- Server hooks aren't replicated to [Geo](geo/replication/index.md) secondary nodes.
+
+## Create a server hook for a repository
-NOTE: **Note:**
If you are not using [hashed storage](repository_storage_types.md#hashed-storage), the project's
-repository directory might not exactly match the instructions below. In that case,
-for an installation from source the path is usually `/home/git/repositories/<group>/<project>.git`.
-For Omnibus installs the path is usually `/var/opt/gitlab/git-data/repositories/<group>/<project>.git`.
-
-Follow the steps below to set up a server hook for a
-repository:
-
-1. Find that project's path on the GitLab server, by navigating to the
- **Admin area > Projects**. From there, select the project for which you
- would like to add a hook. You can find the path to the project's repository
- under **Gitaly relative path** on that page.
-1. Create a new directory in this location called `custom_hooks`.
-1. Inside the new `custom_hooks` directory, create a file with a name matching
- the hook type. For a pre-receive hook the file name should be `pre-receive`
- with no extension.
-1. Make the hook file executable and make sure it's owned by Git.
-1. Write the code to make the server hook function as expected. Hooks can be
- in any language. Ensure the 'shebang' at the top properly reflects the language
- type. For example, if the script is in Ruby the shebang will probably be
+repository directory might not exactly match the instructions below. In that case:
+
+- For an installation from source, the path is usually
+ `/home/git/repositories/<group>/<project>.git`.
+- For Omnibus GitLab installs, the path is usually
+ `/var/opt/gitlab/git-data/repositories/<group>/<project>.git`.
+
+Follow the steps below to set up a server-side hook for a repository:
+
+1. Navigate to **Admin area > Projects** and click on the project you want to add a server hook to.
+1. Locate the **Gitaly relative path** on the page that appears. This is where the server hook
+ must be implemented. For information on interpreting the relative path, see
+ [Translating hashed storage paths](repository_storage_types.md#translating-hashed-storage-paths).
+1. On the file system, create a new directory in this location called `custom_hooks`.
+1. Inside the new `custom_hooks` directory, create a file with a name matching the hook type. For
+ example, for a pre-receive hook the filename should be `pre-receive` with no extension.
+1. Make the hook file executable and ensure that it's owned by the Git user.
+1. Write the code to make the server hook function as expected. Hooks can be in any language. Ensure
+ the ["shebang"](https://en.wikipedia.org/wiki/Shebang_(Unix)) at the top properly reflects the
+ language type. For example, if the script is in Ruby the shebang is probably
`#!/usr/bin/env ruby`.
-Assuming the hook code is properly implemented the hook will run as appropriate.
-
-## Set a global server hook for all repositories
-
-To create a Git hook that applies to all of your repositories in
-your instance, set a global server hook. Since GitLab will look inside the GitLab Shell
-`hooks` directory for global hooks, adding any hook there will apply it to all repositories.
-Follow the steps below to properly set up a server hook for all repositories:
-
-1. On the GitLab server, navigate to the configured custom hook directory. The
- default is in the GitLab Shell directory. The GitLab Shell `hook` directory
- for an installation from source the path is usually
- `/home/git/gitlab-shell/hooks`. For Omnibus installs the path is usually
- `/opt/gitlab/embedded/service/gitlab-shell/hooks`.
- To look in a different directory for the global custom hooks,
- set `custom_hooks_dir` in the Gitaly config. For Omnibus installations, this is set
- in `gitlab.rb`. For source installations, the configuration location depends on the
- GitLab version. For:
-
- - GitLab 13.0 and earlier, this is set in `gitlab-shell/config.yml`.
- - GitLab 13.1 and later, this is set in `gitaly/config.toml` under the `[hooks]`
- section.
-
- NOTE: **Note:**
- The `custom_hooks_dir` value in `gitlab-shell/config.yml` is still honored in GitLab
- 13.1 and later if the value in `gitaly/config.toml` is blank or non-existent.
-
-1. Create a new directory in this location. Depending on your hook, it will be
- either a `pre-receive.d`, `post-receive.d`, or `update.d` directory.
-1. Inside this new directory, add your hook. Hooks can be
- in any language. Ensure the 'shebang' at the top properly reflects the language
- type. For example, if the script is in Ruby the shebang will probably be
+Assuming the hook code is properly implemented, the hook code is executed as appropriate.
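+
+As an illustration only (not an official GitLab hook), a minimal `pre-receive` hook that
+rejects pushes to a branch named `forbidden` could look like this:
+
+```shell
+#!/bin/sh
+# Git passes one line per updated ref on stdin: <old-sha> <new-sha> <ref-name>
+while read -r old_sha new_sha ref_name; do
+  if [ "$ref_name" = "refs/heads/forbidden" ]; then
+    echo "GL-HOOK-ERR: Pushing to the 'forbidden' branch is not allowed."
+    exit 1
+  fi
+done
+exit 0
+```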
+
+## Create a global server hook for all repositories
+
+To create a Git hook that applies to all of the repositories in your instance, set a global server
+hook. The default global server hook directory is in the GitLab Shell directory. Any
+hook added there applies to all repositories.
+
+The default directory:
+
+- For an installation from source is usually `/home/git/gitlab-shell/hooks`.
+- For Omnibus GitLab installs is usually `/opt/gitlab/embedded/service/gitlab-shell/hooks`.
+
+To use a different directory for global server hooks, set `custom_hooks_dir` in Gitaly
+configuration:
+
+- For Omnibus installations, this is set in `gitlab.rb`.
+- For source installations, the configuration location depends on the GitLab version. For:
+ - GitLab 13.0 and earlier, this is set in `gitlab-shell/config.yml`.
+ - GitLab 13.1 and later, this is set in `gitaly/config.toml` under the `[hooks]` section.
+
+NOTE: **Note:**
+The `custom_hooks_dir` value in `gitlab-shell/config.yml` is still honored in GitLab 13.1 and later
+if the value in `gitaly/config.toml` is blank or non-existent.
+
+Follow the steps below to set up a global server hook for all repositories:
+
+1. On the GitLab server, navigate to the configured global server hook directory.
+1. Create a new directory in this location. Depending on the type of hook, it can be either a
+ `pre-receive.d`, `post-receive.d`, or `update.d` directory.
+1. Inside this new directory, add your hook. Hooks can be in any language. Ensure the
+ ["shebang"](https://en.wikipedia.org/wiki/Shebang_(Unix)) at the top properly reflects the
+ language type. For example, if the script is in Ruby the shebang is probably
`#!/usr/bin/env ruby`.
-1. Make the hook file executable and make sure it's owned by Git.
+1. Make the hook file executable and ensure that it's owned by the Git user.
Now test the hook to check whether it is functioning properly.
-## Chained hooks support
+## Chained hooks
> [Introduced](https://gitlab.com/gitlab-org/gitlab-shell/-/merge_requests/93) in GitLab Shell 4.1.0 and GitLab 8.15.
-Hooks can be also global or be set per project directories and support a chained
-execution of the hooks.
+Server hooks set [per project](#create-a-server-hook-for-a-repository) or
+[globally](#create-a-global-server-hook-for-all-repositories) can be executed in a chain.
-NOTE: **Note:**
-`<hook_name>.d` would need to be either `pre-receive.d`,
-`post-receive.d`, or `update.d` to work properly. Any other names will be ignored.
+Server hooks are searched for and executed in the following order of priority:
+
+- Built-in GitLab server hooks. These are not user-customizable.
+- `<project>.git/custom_hooks/<hook_name>`: Per-project hooks. This was kept for backwards
+ compatibility.
+- `<project>.git/custom_hooks/<hook_name>.d/*`: Location for per-project hooks.
+- `<custom_hooks_dir>/<hook_name>.d/*`: Location for all executable global hook files
+ except editor backup files.
+
+Within a directory, server hooks:
+
+- Are executed in alphabetical order.
+- Stop executing when a hook exits with a non-zero value.
+
+Note:
+
+- `<hook_name>.d` must be either `pre-receive.d`, `post-receive.d`, or `update.d` to work properly.
+ Any other names are ignored.
+- Files in `.d` directories must be executable and not match the backup file pattern (`*~`).
+- For `<project>.git` you need to [translate](repository_storage_types.md#translating-hashed-storage-paths)
+ your project name into the hashed storage format that GitLab uses.
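+
+For example, two chained per-project `pre-receive` hooks could be installed like this
+(the repository path and file names are illustrative):
+
+```shell
+cd /var/opt/gitlab/git-data/repositories/@hashed/<hashed-path>.git
+mkdir -p custom_hooks/pre-receive.d
+cp /tmp/01-check-author /tmp/02-check-message custom_hooks/pre-receive.d/
+chmod +x custom_hooks/pre-receive.d/*
+chown -R git:git custom_hooks/
+```
+
+Because hooks in a directory are executed in alphabetical order, `01-check-author` runs
+before `02-check-message`.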
+
+## Environment variables
+
+The following environment variables are available to server hooks.
+
+| Environment variable | Description |
+|:---------------------|:----------------------------------------------------------------------------|
+| `GL_ID` | GitLab identifier of the user that initiated the push. For example, `user-2234` |
+| `GL_PROJECT_PATH` | (GitLab 13.2 and later) GitLab project path |
+| `GL_PROTOCOL` | (GitLab 13.2 and later) Protocol used with push |
+| `GL_REPOSITORY` | `project-<id>` where `id` is the ID of the project |
+| `GL_USERNAME` | GitLab username of the user that initiated the push |
+
+Pre-receive and post-receive server hooks can also access the following Git environment variables.
+
+| Environment variable | Description |
+|:-----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `GIT_ALTERNATE_OBJECT_DIRECTORIES` | Alternate object directories in the quarantine environment. See [Git `receive-pack` documentation](https://git-scm.com/docs/git-receive-pack#_quarantine_environment). |
+| `GIT_OBJECT_DIRECTORY` | GitLab project path in the quarantine environment. See [Git `receive-pack` documentation](https://git-scm.com/docs/git-receive-pack#_quarantine_environment). |
+| `GIT_PUSH_OPTION_COUNT` | Number of push options. See [Git `pre-receive` documentation](https://git-scm.com/docs/githooks#pre-receive). |
+| `GIT_PUSH_OPTION_<i>` | Value of push options where `i` is from `0` to `GIT_PUSH_OPTION_COUNT - 1`. See [Git `pre-receive` documentation](https://git-scm.com/docs/githooks#pre-receive). |
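+
+For example, a `post-receive` hook might use these variables to record who pushed and over
+which protocol (this is a sketch only; the log file path is illustrative):
+
+```shell
+#!/bin/sh
+echo "$(date -u) ${GL_USERNAME} (${GL_ID}) pushed to ${GL_PROJECT_PATH} via ${GL_PROTOCOL}" >> /var/log/gitlab/server_hooks.log
+```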
NOTE: **Note:**
-Files in `.d` directories need to be executable and not match the backup file
-pattern (`*~`).
+While other environment variables can be passed to server hooks, your application should not rely on
+them as they can change.
-The hooks are searched and executed in this order:
+## Transition to Go
-1. Built-in GitLab server hooks (not user-customizable).
-1. `<project>.git/custom_hooks/<hook_name>` - per-project hook (this was kept as the already existing behavior).
-1. `<project>.git/custom_hooks/<hook_name>.d/*` - per-project hooks.
-1. `<custom_hooks_dir>/<hook_name>.d/*` - global hooks: all executable files (except editor backup files).
+> Introduced in GitLab 13.2 using feature flags.
-The hooks of the same type are executed in order and execution stops on the
-first script exiting with a non-zero value.
+The following server hooks have been re-implemented in Go:
-For `<project>.git` you'll need to
-[translate your project name into the hashed storage format](repository_storage_types.md#translating-hashed-storage-paths)
-that GitLab uses.
+- `pre-receive`, with the Go implementation used by default. To use the Ruby implementation instead,
+ [disable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
+ `:gitaly_go_preceive_hook` feature flag.
+- `update`, with the Go implementation used by default. To use the Ruby implementation instead,
+ [disable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
+ `:gitaly_go_update_hook` feature flag.
+- `post-receive`, with the Ruby implementation used by default. To use the Go implementation
+ instead, [enable](../operations/feature_flags.md#enable-or-disable-feature-flag-strategies) the
+ `:gitaly_go_postreceive_hook` feature flag.
## Custom error messages
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/5073) in GitLab 8.10.
-To have custom error messages appear in GitLab's UI when the commit is
-declined or an error occurs during the Git hook, your script should:
+To have custom error messages appear in GitLab's UI when a commit is declined or an error occurs
+during the Git hook, your script should:
- Send the custom error messages to either the script's `stdout` or `stderr`.
- Prefix each message with `GL-HOOK-ERR:` with no characters appearing before the prefix.
### Example custom error message
-This hook script written in bash will generate the following message in GitLab's UI:
+This hook script written in Bash generates the following message in GitLab's UI:
```shell
#!/bin/sh
diff --git a/doc/administration/smime_signing_email.md b/doc/administration/smime_signing_email.md
index bab7c5c260d..304b65917c1 100644
--- a/doc/administration/smime_signing_email.md
+++ b/doc/administration/smime_signing_email.md
@@ -21,7 +21,8 @@ files must be provided:
Optionally, you can also provide a bundle of CA certs (PEM-encoded) to be
included on each signature. This will typically be an intermediate CA.
-NOTE: **Note:** Be mindful of the access levels for your private keys and visibility to
+NOTE: **Note:**
+Be mindful of the access levels for your private keys and visibility to
third parties.
**For Omnibus installations:**
@@ -38,7 +39,8 @@ third parties.
1. Save the file and [reconfigure GitLab](restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-NOTE: **Note:** The key needs to be readable by the GitLab system user (`git` by default).
+NOTE: **Note:**
+The key needs to be readable by the GitLab system user (`git` by default).
**For installations from source:**
@@ -61,7 +63,8 @@ NOTE: **Note:** The key needs to be readable by the GitLab system user (`git` by
1. Save the file and [restart GitLab](restart_gitlab.md#installations-from-source) for the changes to take effect.
-NOTE: **Note:** The key needs to be readable by the GitLab system user (`git` by default).
+NOTE: **Note:**
+The key needs to be readable by the GitLab system user (`git` by default).
### How to convert S/MIME PKCS#12 / PFX format to PEM encoding
diff --git a/doc/administration/terraform_state.md b/doc/administration/terraform_state.md
index 77d5495418e..8d3ddc4e306 100644
--- a/doc/administration/terraform_state.md
+++ b/doc/administration/terraform_state.md
@@ -68,20 +68,7 @@ The following settings are:
### S3-compatible connection settings
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `AWS` for compatible hosts | `AWS` |
-| `aws_access_key_id` | Credentials for AWS or compatible provider | |
-| `aws_secret_access_key` | Credentials for AWS or compatible provider | |
-| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | `true` |
-| `region` | AWS region | us-east-1 |
-| `host` | S3-compatible host when not using AWS. For example, `localhost` or `storage.example.com` | `s3.amazonaws.com` |
-| `endpoint` | Can be used when configuring an S3-compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | `false` |
-| `use_iam_profile` | For AWS S3, set to true to use an IAM profile instead of access keys | `false` |
+See [the available connection settings for different providers](object_storage.md#connection-settings).
**In Omnibus installations:**
@@ -91,7 +78,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
```ruby
gitlab_rails['terraform_state_enabled'] = true
gitlab_rails['terraform_state_object_store_enabled'] = true
- gitlab_rails['terraform_state_object_store_remote_directory'] = "terraform_state"
+ gitlab_rails['terraform_state_object_store_remote_directory'] = "terraform"
gitlab_rails['terraform_state_object_store_connection'] = {
'provider' => 'AWS',
'region' => 'eu-central-1',
@@ -123,7 +110,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
enabled: true
object_store:
enabled: true
- remote_directory: "terraform_state" # The bucket name
+ remote_directory: "terraform" # The bucket name
connection:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACESS_KEY_ID
diff --git a/doc/administration/troubleshooting/elasticsearch.md b/doc/administration/troubleshooting/elasticsearch.md
index 12b82e4bc48..e13261e3074 100644
--- a/doc/administration/troubleshooting/elasticsearch.md
+++ b/doc/administration/troubleshooting/elasticsearch.md
@@ -261,6 +261,9 @@ Beyond that, you will want to review the error. If it is:
- Specifically from the indexer, this could be a bug/issue and should be escalated to
GitLab support.
- An OS issue, you will want to reach out to your systems administrator.
+- A `Faraday::TimeoutError (execution expired)` error **and** you're using a proxy,
+ [set a custom `gitlab_rails['env']` environment variable, called `no_proxy`](https://docs.gitlab.com/omnibus/settings/environment-variables.html)
+ with the IP address of your Elasticsearch host.
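+
+  On Omnibus installations this could look like the following in `/etc/gitlab/gitlab.rb`
+  (the IP address is an example only):
+
+  ```ruby
+  gitlab_rails['env'] = { "no_proxy" => "10.0.2.15" }
+  ```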
### Troubleshooting performance
diff --git a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
index c911c617210..8e9749fb239 100644
--- a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
+++ b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
@@ -11,13 +11,13 @@ having an issue with GitLab, it is highly recommended that you check your
[support options](https://about.gitlab.com/support/) first, before attempting to use
this information.
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
Please note that some of these scripts could be damaging if not run correctly,
or under the right conditions. We highly recommend running them under the
guidance of a Support Engineer, or running them in a test environment with a
backup of the instance ready to be restored, just in case.
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
Please also note that as GitLab changes, changes to the code are inevitable,
and so some scripts may not work as they once used to. These are not kept
up-to-date as these scripts/commands were added as they were found/needed. As
@@ -201,6 +201,20 @@ project.repository_read_only = true; project.save
project.update!(repository_read_only: true)
```
+### Transfer project from one namespace to another
+
+```ruby
+# Find the project by its full path (fill in the path)
+p = Project.find_by_full_path('')
+
+# The user who performs the transfer (here, the project's creator)
+current_user = p.creator
+
+# Namespace where you want this to be moved (fill in the full path)
+namespace = Namespace.find_by_full_path("")
+
+::Projects::TransferService.new(p, current_user).execute(namespace)
+```
+
### Bulk update service integration password for _all_ projects
For example, change the Jira user's password for all projects that have the Jira
@@ -447,7 +461,7 @@ group = Group.find_by_path_or_name("groupname")
# Count users from subgroup and up (inherited)
group.members_with_parents.count
-# Count users from parent group and down (specific grants)
+# Count users from the parent group and down (specific grants)
parent.members_with_descendants.count
```
diff --git a/doc/administration/troubleshooting/linux_cheat_sheet.md b/doc/administration/troubleshooting/linux_cheat_sheet.md
index 7af4219caa3..06c49d67f40 100644
--- a/doc/administration/troubleshooting/linux_cheat_sheet.md
+++ b/doc/administration/troubleshooting/linux_cheat_sheet.md
@@ -10,7 +10,7 @@ and it may be useful for users with experience with Linux. If you are currently
having an issue with GitLab, you may want to check your [support options](https://about.gitlab.com/support/)
first, before attempting to use this information.
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
If you are administering GitLab you are expected to know these commands for your distribution
of choice. If you are a GitLab Support Engineer, consider this a cross-reference to
translate `yum` -> `apt-get` and the like.
diff --git a/doc/administration/troubleshooting/navigating_gitlab_via_rails_console.md b/doc/administration/troubleshooting/navigating_gitlab_via_rails_console.md
index 69af7ea6801..571973c12d9 100644
--- a/doc/administration/troubleshooting/navigating_gitlab_via_rails_console.md
+++ b/doc/administration/troubleshooting/navigating_gitlab_via_rails_console.md
@@ -6,7 +6,7 @@ Thanks to this, we also get access to the amazing tools built right into Rails.
In this guide, we'll introduce the [Rails console](debug.md#starting-a-rails-console-session)
and the basics of interacting with your GitLab instance from the command line.
-CAUTION: **CAUTION:**
+CAUTION: **Caution:**
The Rails console interacts directly with your GitLab instance. In many cases,
there are no handrails to prevent you from permanently modifying, corrupting
or destroying production data. If you would like to explore the Rails console
@@ -186,7 +186,7 @@ user.save
Which would return:
```ruby
-Enqueued ActionMailer::DeliveryJob (Job ID: 05915c4e-c849-4e14-80bb-696d5ae22065) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x00007f42d8ccebe8 @uri=#<URI::GID gid://gitlab/User/1>>
+Enqueued ActionMailer::MailDeliveryJob (Job ID: 05915c4e-c849-4e14-80bb-696d5ae22065) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", #<GlobalID:0x00007f42d8ccebe8 @uri=#<URI::GID gid://gitlab/User/1>>
=> true
```
diff --git a/doc/administration/troubleshooting/postgresql.md b/doc/administration/troubleshooting/postgresql.md
index e5a4dffb3cc..36c1f20818a 100644
--- a/doc/administration/troubleshooting/postgresql.md
+++ b/doc/administration/troubleshooting/postgresql.md
@@ -8,7 +8,8 @@ This page is useful information about PostgreSQL that the GitLab Support
Team sometimes uses while troubleshooting. GitLab is making this public, so that anyone
can make use of the Support team's collected knowledge.
-CAUTION: **Caution:** Some procedures documented here may break your GitLab instance. Use at your own risk.
+CAUTION: **Caution:**
+Some procedures documented here may break your GitLab instance. Use at your own risk.
If you are on a [paid tier](https://about.gitlab.com/pricing/) and are not sure how
to use these commands, it is best to [contact Support](https://about.gitlab.com/support/)
@@ -46,7 +47,7 @@ This section is for links to information elsewhere in the GitLab documentation.
- Managing Omnibus PostgreSQL versions [from the development docs](https://docs.gitlab.com/omnibus/development/managing-postgresql-versions.html)
- [PostgreSQL scaling](../postgresql/replication_and_failover.md)
- - including [troubleshooting](../postgresql/replication_and_failover.md#troubleshooting) `gitlab-ctl repmgr-check-master` and PgBouncer errors
+ - including [troubleshooting](../postgresql/replication_and_failover.md#troubleshooting) `gitlab-ctl repmgr-check-master` (or `gitlab-ctl patroni check-leader` if you are using Patroni) and PgBouncer errors
- [Developer database documentation](../../development/README.md#database-guides) - some of which is absolutely not for production use. Including:
- understanding EXPLAIN plans
@@ -115,7 +116,8 @@ Quoting from issue [#1](https://gitlab.com/gitlab-org/gitlab/-/issues/30528):
> "If a deadlock is hit, and we resolve it through aborting the transaction after a short period, then the retry mechanisms we already have will make the deadlocked piece of work try again, and it's unlikely we'll deadlock multiple times in a row."
-TIP: **Tip:** In support, our general approach to reconfiguring timeouts (applies also to the HTTP stack as well) is that it's acceptable to do it temporarily as a workaround. If it makes GitLab usable for the customer, then it buys time to understand the problem more completely, implement a hot fix, or make some other change that addresses the root cause. Generally, the timeouts should be put back to reasonable defaults once the root cause is resolved.
+TIP: **Tip:**
+In support, our general approach to reconfiguring timeouts (this also applies to the HTTP stack) is that it's acceptable to do it temporarily as a workaround. If it makes GitLab usable for the customer, then it buys time to understand the problem more completely, implement a hot fix, or make some other change that addresses the root cause. Generally, the timeouts should be put back to reasonable defaults once the root cause is resolved.
In this case, the guidance we had from development was to drop deadlock_timeout and/or statement_timeout but to leave the third setting at 60s. Setting idle_in_transaction protects the database from sessions potentially hanging for days. There's more discussion in [the issue relating to introducing this timeout on GitLab.com](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1053).
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md
index 5109a3baff2..566e2686735 100644
--- a/doc/administration/troubleshooting/sidekiq.md
+++ b/doc/administration/troubleshooting/sidekiq.md
@@ -38,7 +38,7 @@ Example log output:
```json
{"severity":"INFO","time":"2020-06-08T14:37:37.892Z","class":"AdminEmailsWorker","args":["[FILTERED]","[FILTERED]","[FILTERED]"],"retry":3,"queue":"admin_emails","backtrace":true,"jid":"9e35e2674ac7b12d123e13cc","created_at":"2020-06-08T14:37:37.373Z","meta.user":"root","meta.caller_id":"Admin::EmailsController#create","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:6dc94409cfdd4d77:9fbe19bdee865293:1","enqueued_at":"2020-06-08T14:37:37.410Z","pid":65011,"message":"AdminEmailsWorker JID-9e35e2674ac7b12d123e13cc: done: 0.48085 sec","job_status":"done","scheduling_latency_s":0.001012,"redis_calls":9,"redis_duration_s":0.004608,"redis_read_bytes":696,"redis_write_bytes":6141,"duration_s":0.48085,"cpu_s":0.308849,"completed_at":"2020-06-08T14:37:37.892Z","db_duration_s":0.010742}
-{"severity":"INFO","time":"2020-06-08T14:37:37.894Z","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ActionMailer::DeliveryJob","queue":"mailers","args":["[FILTERED]"],"retry":3,"backtrace":true,"jid":"e47a4f6793d475378432e3c8","created_at":"2020-06-08T14:37:37.884Z","meta.user":"root","meta.caller_id":"AdminEmailsWorker","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:29344de0f966446d:5c3b0e0e1bef987b:1","enqueued_at":"2020-06-08T14:37:37.885Z","pid":65011,"message":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-e47a4f6793d475378432e3c8: start","job_status":"start","scheduling_latency_s":0.009473}
+{"severity":"INFO","time":"2020-06-08T14:37:37.894Z","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ActionMailer::MailDeliveryJob","queue":"mailers","args":["[FILTERED]"],"retry":3,"backtrace":true,"jid":"e47a4f6793d475378432e3c8","created_at":"2020-06-08T14:37:37.884Z","meta.user":"root","meta.caller_id":"AdminEmailsWorker","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:29344de0f966446d:5c3b0e0e1bef987b:1","enqueued_at":"2020-06-08T14:37:37.885Z","pid":65011,"message":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-e47a4f6793d475378432e3c8: start","job_status":"start","scheduling_latency_s":0.009473}
{"severity":"INFO","time":"2020-06-08T14:39:50.648Z","class":"NewIssueWorker","args":["455","1"],"retry":3,"queue":"new_issue","backtrace":true,"jid":"a24af71f96fd129ec47f5d1e","created_at":"2020-06-08T14:39:50.643Z","meta.user":"root","meta.project":"h5bp/html5-boilerplate","meta.root_namespace":"h5bp","meta.caller_id":"Projects::IssuesController#create","correlation_id":"f9UCZHqhuP7","uber-trace-id":"28f65730f99f55a3:a5d2b62dec38dffc:48ddd092707fa1b7:1","enqueued_at":"2020-06-08T14:39:50.646Z","pid":65011,"message":"NewIssueWorker JID-a24af71f96fd129ec47f5d1e: start","job_status":"start","scheduling_latency_s":0.001144}
```
@@ -283,7 +283,8 @@ queue = Sidekiq::Queue.new('<queue name>')
queue.each { |job| job.delete if <condition>}
```
-NOTE: **Note:** This will remove jobs that are queued but not started, running jobs will not be killed. Have a look at the section below for cancelling running jobs.
+NOTE: **Note:**
+This will remove jobs that are queued but not started; running jobs will not be killed. Have a look at the section below for cancelling running jobs.
In the method above, `<queue-name>` is the name of the queue that contains the job(s) you want to delete and `<condition>` will decide which jobs get deleted.
diff --git a/doc/administration/uploads.md b/doc/administration/uploads.md
index 620f349912c..d9902208e93 100644
--- a/doc/administration/uploads.md
+++ b/doc/administration/uploads.md
@@ -57,6 +57,9 @@ This configuration relies on valid AWS credentials to be configured already.
[Read more about using object storage with GitLab](object_storage.md).
+NOTE: **Note:**
+We recommend using the [consolidated object storage settings](object_storage.md#consolidated-object-storage-configuration). The following instructions apply to the original config format.
+
## Object Storage Settings
For source installations the following settings are nested under `uploads:` and then `object_store:`. On Omnibus GitLab installs they are prefixed by `uploads_object_store_`.
@@ -70,22 +73,9 @@ For source installations the following settings are nested under `uploads:` and
| `proxy_download` | Set to true to enable proxying all files served. Option allows to reduce egress traffic as this allows clients to download directly from remote storage instead of proxying all data | `false` |
| `connection` | Various connection options described below | |
-### S3 compatible connection settings
+### Connection settings
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `AWS` for compatible hosts | `AWS` |
-| `aws_access_key_id` | AWS credentials, or compatible | |
-| `aws_secret_access_key` | AWS credentials, or compatible | |
-| `aws_signature_version` | AWS signature version to use. `2` or `4` are valid options. Digital Ocean Spaces and other providers may need `2`. | `4` |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be `false`. | `true` |
-| `region` | AWS region | us-east-1 |
-| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | `s3.amazonaws.com` |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to `true` to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as `false` for AWS S3. | `false` |
-| `use_iam_profile` | Set to `true` to use IAM profile instead of access keys | false
+See [the available connection settings for different providers](object_storage.md#connection-settings).
**In Omnibus installations:**
@@ -143,35 +133,7 @@ _The uploads are stored by default in
1. Save the file and [restart GitLab](restart_gitlab.md#installations-from-source) for the changes to take effect.
1. Migrate any existing local uploads to the object storage using [`gitlab:uploads:migrate` Rake task](raketasks/uploads/migrate.md).
-### Oracle Cloud S3 connection settings
-
-Note that Oracle Cloud S3 must be sure to use the following settings:
-
-| Setting | Value |
-|---------|-------|
-| `enable_signature_v4_streaming` | `false` |
-| `path_style` | `true` |
-
-If `enable_signature_v4_streaming` is set to `true`, you may see the
-following error:
-
-```plaintext
-STREAMING-AWS4-HMAC-SHA256-PAYLOAD is not supported
-```
-
-### OpenStack compatible connection settings
-
-The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `provider` | Always `OpenStack` for compatible hosts | `OpenStack` |
-| `openstack_username` | OpenStack username | |
-| `openstack_api_key` | OpenStack API key | |
-| `openstack_temp_url_key` | OpenStack key for generating temporary URLs | |
-| `openstack_auth_url` | OpenStack authentication endpoint | |
-| `openstack_region` | OpenStack region | |
-| `openstack_tenant` | OpenStack tenant ID |
+### OpenStack example
**In Omnibus installations:**