gitlab.com/gitlab-org/gitlab-foss.git

author    GitLab Bot <gitlab-bot@gitlab.com> 2019-10-22 14:31:16 +0300
committer GitLab Bot <gitlab-bot@gitlab.com> 2019-10-22 14:31:16 +0300
commit    905c1110b08f93a19661cf42a276c7ea90d0a0ff (patch)
tree      756d138db422392c00471ab06acdff92c5a9b69c /doc/administration
parent    50d93f8d1686950fc58dda4823c4835fd0d8c14b (diff)
Add latest changes from gitlab-org/gitlab@12-4-stable-ee
Diffstat (limited to 'doc/administration')
-rw-r--r--  doc/administration/audit_events.md | 29
-rw-r--r--  doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md | 14
-rw-r--r--  doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md | 2
-rw-r--r--  doc/administration/auth/ldap-ee.md | 4
-rw-r--r--  doc/administration/auth/ldap.md | 18
-rw-r--r--  doc/administration/auth/oidc.md | 2
-rw-r--r--  doc/administration/auth/smartcard.md | 2
-rw-r--r--  doc/administration/custom_hooks.md | 7
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md | 3
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md | 18
-rw-r--r--  doc/administration/geo/replication/configuration.md | 29
-rw-r--r--  doc/administration/geo/replication/database.md | 48
-rw-r--r--  doc/administration/geo/replication/external_database.md | 4
-rw-r--r--  doc/administration/geo/replication/faq.md | 4
-rw-r--r--  doc/administration/geo/replication/high_availability.md | 46
-rw-r--r--  doc/administration/geo/replication/img/adding_a_secondary_node.png | bin 0 -> 87593 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_add_geolocation_rule.png | bin 0 -> 76035 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_add_traffic_policy_endpoints.png | bin 0 -> 88896 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_clone_panel.png | bin 0 -> 20007 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_create_policy_records_with_traffic_policy.png | bin 0 -> 102350 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_created_policy_record.png | bin 0 -> 141505 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_name_policy.png | bin 0 -> 37964 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_policy_diagram.png | bin 0 -> 56194 bytes
-rw-r--r--  doc/administration/geo/replication/img/single_git_traffic_policies.png | bin 0 -> 214666 bytes
-rw-r--r--  doc/administration/geo/replication/index.md | 92
-rw-r--r--  doc/administration/geo/replication/location_aware_git_url.md | 119
-rw-r--r--  doc/administration/geo/replication/object_storage.md | 52
-rw-r--r--  doc/administration/geo/replication/security_review.md | 16
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md | 6
-rw-r--r--  doc/administration/geo/replication/using_a_geo_server.md | 2
-rw-r--r--  doc/administration/gitaly/index.md | 219
-rw-r--r--  doc/administration/gitaly/praefect.md | 114
-rw-r--r--  doc/administration/gitaly/reference.md | 2
-rw-r--r--  doc/administration/high_availability/README.md | 162
-rw-r--r--  doc/administration/high_availability/consul.md | 12
-rw-r--r--  doc/administration/high_availability/database.md | 70
-rw-r--r--  doc/administration/high_availability/gitlab.md | 20
-rw-r--r--  doc/administration/high_availability/load_balancer.md | 14
-rw-r--r--  doc/administration/high_availability/monitoring_node.md | 4
-rw-r--r--  doc/administration/high_availability/nfs.md | 9
-rw-r--r--  doc/administration/high_availability/nfs_host_client_setup.md | 6
-rw-r--r--  doc/administration/high_availability/pgbouncer.md | 34
-rw-r--r--  doc/administration/high_availability/redis.md | 18
-rw-r--r--  doc/administration/high_availability/redis_source.md | 4
-rw-r--r--  doc/administration/housekeeping.md | 10
-rw-r--r--  doc/administration/img/integration/plantuml-example.png | bin 12559 -> 0 bytes
-rw-r--r--  doc/administration/incoming_email.md | 2
-rw-r--r--  doc/administration/index.md | 9
-rw-r--r--  doc/administration/integration/plantuml.md | 51
-rw-r--r--  doc/administration/integration/terminal.md | 4
-rw-r--r--  doc/administration/issue_closing_pattern.md | 4
-rw-r--r--  doc/administration/job_artifacts.md | 16
-rw-r--r--  doc/administration/job_logs.md | 169
-rw-r--r--  doc/administration/job_traces.md | 208
-rw-r--r--  doc/administration/libravatar.md | 101
-rw-r--r--  doc/administration/logs.md | 29
-rw-r--r--  doc/administration/merge_request_diffs.md | 5
-rw-r--r--  doc/administration/monitoring/github_imports.md | 10
-rw-r--r--  doc/administration/monitoring/index.md | 2
-rw-r--r--  doc/administration/monitoring/performance/grafana_configuration.md | 12
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png | bin 0 -> 65296 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png | bin 0 -> 58287 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_request_selector_warning_expanded.png | bin 0 -> 62514 bytes
-rw-r--r--  doc/administration/monitoring/performance/index.md | 6
-rw-r--r--  doc/administration/monitoring/performance/influxdb_configuration.md | 7
-rw-r--r--  doc/administration/monitoring/performance/performance_bar.md | 18
-rw-r--r--  doc/administration/monitoring/performance/prometheus.md | 2
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_exporter.md | 23
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_metrics.md | 17
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md | 5
-rw-r--r--  doc/administration/monitoring/prometheus/index.md | 15
-rw-r--r--  doc/administration/operations/cleaning_up_redis_sessions.md | 4
-rw-r--r--  doc/administration/operations/fast_ssh_key_lookup.md | 4
-rw-r--r--  doc/administration/operations/moving_repositories.md | 6
-rw-r--r--  doc/administration/operations/sidekiq_memory_killer.md | 66
-rw-r--r--  doc/administration/operations/ssh_certificates.md | 2
-rw-r--r--  doc/administration/operations/unicorn.md | 2
-rw-r--r--  doc/administration/packages/container_registry.md | 14
-rw-r--r--  doc/administration/packages/index.md | 1
-rw-r--r--  doc/administration/pages/index.md | 46
-rw-r--r--  doc/administration/pages/source.md | 26
-rw-r--r--  doc/administration/raketasks/check.md | 2
-rw-r--r--  doc/administration/raketasks/geo.md | 2
-rw-r--r--  doc/administration/raketasks/maintenance.md | 8
-rw-r--r--  doc/administration/raketasks/uploads/migrate.md | 36
-rw-r--r--  doc/administration/repository_storage_paths.md | 4
-rw-r--r--  doc/administration/repository_storage_types.md | 2
-rw-r--r--  doc/administration/restart_gitlab.md | 2
-rw-r--r--  doc/administration/smime_signing_email.md | 2
-rw-r--r--  doc/administration/troubleshooting/debug.md | 16
-rw-r--r--  doc/administration/troubleshooting/elasticsearch.md | 126
-rw-r--r--  doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md | 55
-rw-r--r--  doc/administration/troubleshooting/kubernetes_cheat_sheet.md | 20
-rw-r--r--  doc/administration/troubleshooting/postgresql.md | 146
-rw-r--r--  doc/administration/troubleshooting/sidekiq.md | 2
-rw-r--r--  doc/administration/troubleshooting/test_environments.md | 6
-rw-r--r--  doc/administration/uploads.md | 4
97 files changed, 1740 insertions, 794 deletions
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index bd51a3e18d7..61ea673071e 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -117,6 +117,35 @@ on adding these events into GitLab:
- [Group settings and activity](https://gitlab.com/groups/gitlab-org/-/epics/475)
- [Instance-level settings and activity](https://gitlab.com/groups/gitlab-org/-/epics/476)
+### Disabled events
+
+#### Repository push
+
+The current audit events architecture is not designed to handle a very high volume of records.
+Enabling Git push audit events can make your project/admin audit log UI very busy, and the disk space
+consumed by the `audit_events` PostgreSQL table will increase considerably. For this reason, the feature
+is disabled by default to prevent performance degradation on GitLab instances with very high Git write traffic.
+
+In an upcoming release, Audit Logs for Git push events will be enabled
+by default. Follow [#7865](https://gitlab.com/gitlab-org/gitlab/issues/7865) for updates.
+
+If you still wish to enable **Repository push** events in your instance, follow
+the steps below.
+
+**In Omnibus installations:**
+
+1. Enter the Rails console:
+
+ ```sh
+ sudo gitlab-rails console
+ ```
+
+1. Flip the switch and enable the feature flag:
+
+ ```ruby
+ Feature.enable(:repository_push_audit_event)
+ ```
+
[ee-2336]: https://gitlab.com/gitlab-org/gitlab/issues/2336
[ee]: https://about.gitlab.com/pricing/
[permissions]: ../user/permissions.md
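
The flag added above can also be checked and reverted from the same Rails console. A minimal sketch, assuming the `repository_push_audit_event` flag name shown in the hunk:

```ruby
# Inside `sudo gitlab-rails console`:
Feature.enabled?(:repository_push_audit_event)  # => false until enabled
Feature.enable(:repository_push_audit_event)    # turn Repository push audit events on
Feature.disable(:repository_push_audit_event)   # revert if the audit_events table grows too quickly
```
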
diff --git a/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md b/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
index ef35a2d5266..743893d984a 100644
--- a/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
+++ b/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
@@ -8,7 +8,7 @@ Managing a large number of users in GitLab can become a burden for system admini
In this guide we will focus on configuring GitLab with Active Directory. [Active Directory](https://en.wikipedia.org/wiki/Active_Directory) is a popular LDAP compatible directory service provided by Microsoft, included in all modern Windows Server operating systems.
-GitLab has supported LDAP integration since [version 2.2](https://about.gitlab.com/2012/02/22/gitlab-version-2-2/). With GitLab LDAP [group syncing](../how_to_configure_ldap_gitlab_ee/index.html#group-sync) being added to GitLab Enterprise Edition in [version 6.0](https://about.gitlab.com/2013/08/20/gitlab-6-dot-0-released/). LDAP integration has become one of the most popular features in GitLab.
+GitLab has supported LDAP integration since [version 2.2](https://about.gitlab.com/blog/2012/02/22/gitlab-version-2-2/). With GitLab LDAP [group syncing](../how_to_configure_ldap_gitlab_ee/index.html#group-sync) being added to GitLab Enterprise Edition in [version 6.0](https://about.gitlab.com/blog/2013/08/20/gitlab-6-dot-0-released/). LDAP integration has become one of the most popular features in GitLab.
## Getting started
@@ -18,12 +18,12 @@ The main reason organizations choose to utilize a LDAP server is to keep the ent
There are many commercial and open source [directory servers](https://en.wikipedia.org/wiki/Directory_service#LDAP_implementations) that support the LDAP protocol. Deciding on the right directory server highly depends on the existing IT environment in which the server will be integrated with.
-For example, [Active Directory](https://technet.microsoft.com/en-us/library/hh831484(v=ws.11).aspx) is generally favored in a primarily Windows environment, as this allows quick integration with existing services. Other popular directory services include:
+For example, [Active Directory](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831484(v=ws.11)) is generally favored in a primarily Windows environment, as this allows quick integration with existing services. Other popular directory services include:
-- [Oracle Internet Directory](http://www.oracle.com/technetwork/middleware/id-mgmt/overview/index-082035.html)
+- [Oracle Internet Directory](https://www.oracle.com/middleware/technologies/internet-directory.html)
- [OpenLDAP](http://www.openldap.org/)
- [389 Directory](http://directory.fedoraproject.org/)
-- [OpenDJ](https://forgerock.org/opendj/)
+- [OpenDJ (renamed to ForgeRock Directory Services)](https://www.forgerock.com/platform/directory-services)
- [ApacheDS](https://directory.apache.org/)
> GitLab uses the [Net::LDAP](https://rubygems.org/gems/net-ldap) library under the hood. This means it supports all [IETF](https://tools.ietf.org/html/rfc2251) compliant LDAPv3 servers.
@@ -32,9 +32,9 @@ For example, [Active Directory](https://technet.microsoft.com/en-us/library/hh83
We won't cover the installation and configuration of Windows Server or Active Directory Domain Services in this tutorial. There are a number of resources online to guide you through this process:
-- Install Windows Server 2012 - (_technet.microsoft.com_) - [Installing Windows Server 2012](https://technet.microsoft.com/en-us/library/jj134246(v=ws.11).aspx)
+- Install Windows Server 2012 - (`technet.microsoft.com`) - [Installing Windows Server 2012](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj134246(v=ws.11))
-- Install Active Directory Domain Services (AD DS) (_technet.microsoft.com_)- [Install Active Directory Domain Services](https://technet.microsoft.com/windows-server-docs/identity/ad-ds/deploy/install-active-directory-domain-services--level-100-#BKMK_PS)
+- Install Active Directory Domain Services (AD DS) (`technet.microsoft.com`) - [Install Active Directory Domain Services](https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/deploy/install-active-directory-domain-services--level-100-#BKMK_PS)
> **Shortcut:** You can quickly install AD DS via PowerShell using
`Install-WindowsFeature AD-Domain-Services -IncludeManagementTools`
@@ -97,7 +97,7 @@ People Ops US GitLab.org/GitLab INT/Global Groups/People Ops US
Global Admins GitLab.org/GitLab INT/Global Groups/Global Admins
```
-> See [more information](https://technet.microsoft.com/en-us/library/ff730967.aspx) on searching Active Directory with Windows PowerShell from [The Scripting Guys](https://technet.microsoft.com/en-us/scriptcenter/dd901334.aspx)
+> See [more information](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-powershell-1.0/ff730967(v=technet.10)) on searching Active Directory with Windows PowerShell from [The Scripting Guys](https://devblogs.microsoft.com/scripting/)
## GitLab LDAP configuration
diff --git a/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md b/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
index 9977f9aee14..f0eb6c5180b 100644
--- a/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
+++ b/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
@@ -110,7 +110,7 @@ gitlab_rails['ldap_servers'] = {
Integration of GitLab with Active Directory (LDAP) reduces the complexity of user management.
It has the advantage of improving user permission controls, whilst easing the deployment of GitLab into an existing [IT environment](https://www.techopedia.com/definition/29199/it-infrastructure). GitLab EE offers advanced group management and multiple LDAP servers.
-With the assistance of the [GitLab Support](https://about.gitlab.com/support) team, setting up GitLab with an existing AD/LDAP solution will be a smooth and painless process.
+With the assistance of the [GitLab Support](https://about.gitlab.com/support/) team, setting up GitLab with an existing AD/LDAP solution will be a smooth and painless process.
<!-- ## Troubleshooting
diff --git a/doc/administration/auth/ldap-ee.md b/doc/administration/auth/ldap-ee.md
index d9b7d8b4382..e2894318fe5 100644
--- a/doc/administration/auth/ldap-ee.md
+++ b/doc/administration/auth/ldap-ee.md
@@ -281,7 +281,7 @@ sync to run once every 2 hours at the top of the hour.
> Introduced in GitLab Enterprise Edition Starter 8.9.
Using the `external_groups` setting will allow you to mark all users belonging
-to these groups as [external users](../../user/permissions.md#external-users-permissions).
+to these groups as [external users](../../user/permissions.md#external-users-core-only).
Group membership is checked periodically through the `LdapGroupSync` background
task.
@@ -415,7 +415,7 @@ main: # 'main' is the GitLab 'provider ID' of this LDAP server
[^1]: In Active Directory, a user is marked as disabled/blocked if the user
account control attribute (`userAccountControl:1.2.840.113556.1.4.803`)
- has bit 2 set. See <https://ctogonewild.com/2009/09/03/bitmask-searches-in-ldap/>
+ has bit 2 set. See <https://ctovswild.com/2009/09/03/bitmask-searches-in-ldap/>
for more information.
### User DN has changed
diff --git a/doc/administration/auth/ldap.md b/doc/administration/auth/ldap.md
index 186bf4c4825..e02ce1c0a21 100644
--- a/doc/administration/auth/ldap.md
+++ b/doc/administration/auth/ldap.md
@@ -408,12 +408,12 @@ group, you can use the following syntax:
```
Find more information about this "LDAP_MATCHING_RULE_IN_CHAIN" filter at
-<https://docs.microsoft.com/en-us/windows/desktop/ADSI/search-filter-syntax>. Support for
+<https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax>. Support for
nested members in the user filter should not be confused with
[group sync nested groups support](ldap-ee.md#supported-ldap-group-typesattributes). **(STARTER ONLY)**
Please note that GitLab does not support the custom filter syntax used by
-omniauth-ldap.
+OmniAuth LDAP.
### Escaping special characters
@@ -536,7 +536,7 @@ ldapsearch -H ldaps://$host:$port -D "$bind_dn" -y bind_dn_password.txt -b "$ba
- Variables beginning with a `$` refer to a variable from the LDAP section of
your configuration file.
-- Replace ldaps:// with ldap:// if you are using the plain authentication method.
+- Replace `ldaps://` with `ldap://` if you are using the plain authentication method.
Port `389` is the default `ldap://` port and `636` is the default `ldaps://`
port.
- We are assuming the password for the bind_dn user is in bind_dn_password.txt.
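
For instance, with the placeholders above substituted with hypothetical values for a `simple_tls` configuration on port 636, the test command might look like the following (use the host, DNs, and password from your own LDAP section):

```sh
# Hypothetical values; adjust to match the ldap section of your configuration file.
echo -n 'your_bind_dn_password' > bind_dn_password.txt
ldapsearch -H ldaps://ldap.example.com:636 \
  -D "CN=GitLab LDAP,CN=Users,DC=example,DC=com" \
  -y bind_dn_password.txt \
  -b "DC=example,DC=com"
```
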
@@ -563,3 +563,15 @@ If you are getting 'Connection Refused' errors when trying to connect to the
LDAP server please double-check the LDAP `port` and `encryption` settings used by
GitLab. Common combinations are `encryption: 'plain'` and `port: 389`, OR
`encryption: 'simple_tls'` and `port: 636`.
+
+### Connection times out
+
+If GitLab cannot reach your LDAP endpoint, you will see a message like this:
+
+```
+Could not authenticate you from Ldapmain because "Connection timed out - user specified timeout".
+```
+
+If your configured LDAP provider and/or endpoint is offline or otherwise unreachable by GitLab, no LDAP user will be able to authenticate and log in. GitLab does not cache or store credentials for LDAP users to provide authentication during an LDAP outage.
+
+Contact your LDAP provider or administrator if you are seeing this error.
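
When diagnosing the timeout above, it can help to confirm basic reachability from the GitLab host before revisiting the configuration. A sketch, assuming an Omnibus install and a hypothetical `ldap.example.com` endpoint:

```sh
# Check that the LDAP host and port are reachable from the GitLab server.
nc -vz ldap.example.com 636

# Then let GitLab test its own LDAP settings (bind, base, filters).
sudo gitlab-rake gitlab:ldap:check
```
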
diff --git a/doc/administration/auth/oidc.md b/doc/administration/auth/oidc.md
index 3445f916ef4..698a5506b83 100644
--- a/doc/administration/auth/oidc.md
+++ b/doc/administration/auth/oidc.md
@@ -35,7 +35,7 @@ The OpenID Connect will provide you with a client details and secret for you to
{ 'name' => 'openid_connect',
'label' => '<your_oidc_label>',
'args' => {
- "name' => 'openid_connect',
+ 'name' => 'openid_connect',
'scope' => ['openid','profile'],
'response_type' => 'code',
'issuer' => '<your_oidc_url>',
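
For context, the corrected quote sits inside the `args` hash of the Omnibus provider definition. A minimal sketch of the surrounding block, assuming placeholder values (see the rest of this page for the remaining options your provider may need):

```ruby
gitlab_rails['omniauth_providers'] = [
  { 'name' => 'openid_connect',
    'label' => '<your_oidc_label>',
    'args' => {
      'name' => 'openid_connect',   # the key fixed above; it must be quoted consistently
      'scope' => ['openid', 'profile'],
      'response_type' => 'code',
      'issuer' => '<your_oidc_url>',
      'client_options' => {
        'identifier' => '<your_oidc_client_id>',
        'secret' => '<your_oidc_client_secret>',
        'redirect_uri' => 'https://gitlab.example.com/users/auth/openid_connect/callback'
      }
    }
  }
]
```
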
diff --git a/doc/administration/auth/smartcard.md b/doc/administration/auth/smartcard.md
index 2d2734096ed..eb63df6b482 100644
--- a/doc/administration/auth/smartcard.md
+++ b/doc/administration/auth/smartcard.md
@@ -206,7 +206,7 @@ attribute. As a prerequisite, you must use an LDAP server that:
**For installations from source**
-1. Add the `san_extensions` line to config/gitlab.yml` within the smartcard section:
+1. Add the `san_extensions` line to `config/gitlab.yml` within the smartcard section:
```yaml
smartcard:
diff --git a/doc/administration/custom_hooks.md b/doc/administration/custom_hooks.md
index 3e714b446af..0702e0aa141 100644
--- a/doc/administration/custom_hooks.md
+++ b/doc/administration/custom_hooks.md
@@ -48,10 +48,9 @@ as appropriate.
## Set a global Git hook for all repositories
To create a Git hook that applies to all of your repositories in
-your instance, set a global Git hook. Since all the repositories' `hooks`
-directories are symlinked to GitLab Shell's `hooks` directory, adding any hook
-to the GitLab Shell `hooks` directory will also apply it to all repositories. Follow
-the steps below to properly set up a custom hook for all repositories:
+your instance, set a global Git hook. Since GitLab will look inside the GitLab Shell
+`hooks` directory for global hooks, adding any hook there will apply it to all repositories.
+Follow the steps below to properly set up a custom hook for all repositories:
1. On the GitLab server, navigate to the configured custom hook directory. The
default is in the GitLab Shell directory. The GitLab Shell `hook` directory
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index 5eb23422374..ad5284938fa 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -51,7 +51,7 @@ must disable the **primary** node.
NOTE: **Note:**
(**CentOS only**) In CentOS 6 or older, there is no easy way to prevent GitLab from being
- started if the machine reboots isn't available (see [gitlab-org/omnibus-gitlab#3058]).
+ started if the machine reboots (see [Omnibus GitLab issue #3058](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3058)).
It may be safest to uninstall the GitLab package completely:
```sh
@@ -317,6 +317,5 @@ section to resolve the error. Otherwise, the secret is lost and you'll need to
[setup-geo]: ../replication/index.md#setup-instructions
[updating-geo]: ../replication/version_specific_updates.md#updating-to-gitlab-105
[sec-tfa]: ../../../security/two_factor_authentication.md#disabling-2fa-for-everyone
-[gitlab-org/omnibus-gitlab#3058]: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3058
[initiate-the-replication-process]: ../replication/database.html#step-3-initiate-the-replication-process
[configure-the-primary-server]: ../replication/database.html#step-1-configure-the-primary-server
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 75e07bcf863..8fee172ec64 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -43,23 +43,14 @@ will go smoothly.
### Object storage
-Some classes of non-repository data can use object storage in preference to
-file storage. Geo [does not replicate data in object storage](../replication/object_storage.md),
-leaving that task up to the object store itself. For a planned failover, this
-means you can decouple the replication of this data from the failover of the
-GitLab service.
-
-If you're already using object storage, simply verify that your **secondary**
-node has access to the same data as the **primary** node - they must either they share the
-same object storage configuration, or the **secondary** node should be configured to
-access a [geographically-replicated][os-repl] copy provided by the object store
-itself.
-
If you have a large GitLab installation or cannot tolerate downtime, consider
[migrating to Object Storage][os-conf] **before** scheduling a planned failover.
Doing so reduces both the length of the maintenance window, and the risk of data
loss as a result of a poorly executed planned failover.
+In GitLab 12.4, you can optionally allow GitLab to manage replication of Object Storage for
+**secondary** nodes. For more information, see [Object Storage replication][os-conf].
+
### Review the configuration of each **secondary** node
Database settings are automatically replicated to the **secondary** node, but the
@@ -224,5 +215,4 @@ Don't forget to remove the broadcast message after failover is complete.
[background-verification]: background_verification.md
[limitations]: ../replication/index.md#current-limitations
[moving-repositories]: ../../operations/moving_repositories.md
-[os-conf]: ../replication/object_storage.md#configuration
-[os-repl]: ../replication/object_storage.md#replication
+[os-conf]: ../replication/object_storage.md
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index ddb5f22fd05..f09d9f20dab 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -25,7 +25,7 @@ Any change that requires access to the **Admin Area** needs to be done in the
GitLab stores a number of secret values in the `/etc/gitlab/gitlab-secrets.json`
file which *must* be the same on all nodes. Until there is
-a means of automatically replicating these between nodes (see issue [gitlab-org/gitlab-ee#3789]),
+a means of automatically replicating these between nodes (see [issue #3789](https://gitlab.com/gitlab-org/gitlab/issues/3789)),
they must be manually replicated to the **secondary** node.
1. SSH into the **primary** node, and execute the command below:
@@ -75,7 +75,7 @@ they must be manually replicated to the **secondary** node.
### Step 2. Manually replicate the **primary** node's SSH host keys
GitLab integrates with the system-installed SSH daemon, designating a user
-(typically named git) through which all access requests are handled.
+(typically named `git`) through which all access requests are handled.
In a [Disaster Recovery] situation, GitLab system
administrators will promote a **secondary** node to the **primary** node. DNS records for the
@@ -165,10 +165,32 @@ keys must be manually replicated to the **secondary** node.
### Step 3. Add the **secondary** node
+1. SSH into your GitLab **secondary** server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add a **unique** name for your node. You will need this in the next steps:
+
+ ```ruby
+ # The unique identifier for the Geo node.
+ gitlab_rails['geo_node_name'] = '<node_name_here>'
+ ```
+
+1. Reconfigure the **secondary** node for the change to take effect:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
1. Visit the **primary** node's **Admin Area > Geo**
(`/admin/geo/nodes`) in your browser.
-1. Add the **secondary** node by providing its full URL. **Do NOT** check the
+1. Click the **New node** button.
+1. Add the **secondary** node. Use the **exact** name you entered for `gitlab_rails['geo_node_name']` as the Name and the full URL as the URL. **Do NOT** check the
**This is a primary node** checkbox.
+
+ ![Add secondary node](img/adding_a_secondary_node.png)
1. Optionally, choose which groups or storage shards should be replicated by the
**secondary** node. Leave blank to replicate all. Read more in
[selective synchronization](#selective-synchronization).
@@ -299,7 +321,6 @@ See the [troubleshooting document](troubleshooting.md).
[setup-geo-omnibus]: index.md#using-omnibus-gitlab
[Hashed Storage]: ../../repository_storage_types.md
[Disaster Recovery]: ../disaster_recovery/index.md
-[gitlab-org/gitlab-ee#3789]: https://gitlab.com/gitlab-org/gitlab/issues/3789
[gitlab-com/infrastructure#2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
[omnibus-ssl]: https://docs.gitlab.com/omnibus/settings/ssl.html
[using-geo]: using_a_geo_server.md
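
Once the node has been named, reconfigured, and added in the Admin Area as described above, a quick sanity check is available (a sketch, assuming an Omnibus install):

```sh
# Run on the secondary node; reports database replication status and common Geo misconfigurations.
sudo gitlab-rake gitlab:geo:check
```
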
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 33f240ed11f..fa1b0f0e1d7 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -1,9 +1,6 @@
# Geo database replication **(PREMIUM ONLY)**
NOTE: **Note:**
-The following steps are for Omnibus installs only. Using Geo with source-based installs was **deprecated** in GitLab 11.5.
-
-NOTE: **Note:**
If your GitLab installation uses external (not managed by Omnibus) PostgreSQL
instances, the Omnibus roles will not be able to perform all necessary
configuration steps. In this case,
@@ -37,8 +34,8 @@ recover. See below for more details.
The following guide assumes that:
- You are using Omnibus and therefore you are using PostgreSQL 9.6 or later
- which includes the [`pg_basebackup` tool][pgback] and improved
- [Foreign Data Wrapper][FDW] support.
+ which includes the [`pg_basebackup` tool](https://www.postgresql.org/docs/9.6/app-pgbasebackup.html) and improved
+  [Foreign Data Wrapper](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support.
- You have a **primary** node already set up (the GitLab server you are
replicating from), running Omnibus' PostgreSQL (or equivalent version), and
you have a new **secondary** server set up with the same versions of the OS,
@@ -56,6 +53,19 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
sudo -i
```
+1. Edit `/etc/gitlab/gitlab.rb` and add a **unique** name for your node:
+
+ ```ruby
+ # The unique identifier for the Geo node.
+ gitlab_rails['geo_node_name'] = '<node_name_here>'
+ ```
+
+1. Reconfigure the **primary** node for the change to take effect:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
1. Execute the command below to define the node as **primary** node:
```sh
@@ -149,9 +159,9 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
address (corresponds to "internal address" for Google Cloud Platform) for
`postgresql['md5_auth_cidr_addresses']` and `postgresql['listen_address']`.
- The `listen_address` option opens PostgreSQL up to network connections
- with the interface corresponding to the given address. See [the PostgreSQL
- documentation][pg-docs-runtime-conn] for more details.
+ The `listen_address` option opens PostgreSQL up to network connections with the interface
+ corresponding to the given address. See [the PostgreSQL documentation](https://www.postgresql.org/docs/9.6/runtime-config-connection.html)
+ for more details.
Depending on your network configuration, the suggested addresses may not
be correct. If your **primary** node and **secondary** nodes connect over a local
@@ -202,9 +212,8 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32', '<another_secondary_node_ip>/32']
```
- You may also want to edit the `wal_keep_segments` and `max_wal_senders` to
- match your database replication requirements. Consult the [PostgreSQL -
- Replication documentation][pg-docs-runtime-replication]
+ You may also want to edit the `wal_keep_segments` and `max_wal_senders` to match your
+ database replication requirements. Consult the [PostgreSQL - Replication documentation](https://www.postgresql.org/docs/9.6/runtime-config-replication.html)
for more information.
1. Save the file and reconfigure GitLab for the database listen changes and
@@ -430,7 +439,7 @@ data before running `pg_basebackup`.
(e.g., you know the network path is secure, or you are using a site-to-site
VPN). This is **not** safe over the public Internet!
- You can read more details about each `sslmode` in the
- [PostgreSQL documentation][pg-docs-ssl];
+ [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/libpq-ssl.html#LIBPQ-SSL-PROTECTION);
the instructions above are carefully written to ensure protection against
both passive eavesdroppers and active "man-in-the-middle" attackers.
- Change the `--slot-name` to the name of the replication slot
@@ -443,16 +452,16 @@ data before running `pg_basebackup`.
The replication process is now complete.
-## PGBouncer support (optional)
+## PgBouncer support (optional)
-[PGBouncer](http://pgbouncer.github.io/) may be used with GitLab Geo to pool
-PostgreSQL connections. We recommend using PGBouncer if you use GitLab in a
+[PgBouncer](http://pgbouncer.github.io/) may be used with GitLab Geo to pool
+PostgreSQL connections. We recommend using PgBouncer if you use GitLab in a
high-availability configuration with a cluster of nodes supporting a Geo
**primary** node and another cluster of nodes supporting a Geo **secondary** node. For more
information, see [High Availability with GitLab Omnibus](../../high_availability/database.md#high-availability-with-gitlab-omnibus-premium-only).
-For a Geo **secondary** node to work properly with PGBouncer in front of the database,
-it will need a separate read-only user to make [PostgreSQL FDW queries][FDW]
+For a Geo **secondary** node to work properly with PgBouncer in front of the database,
+it will need a separate read-only user to make [PostgreSQL FDW queries](https://www.postgresql.org/docs/9.6/postgres-fdw.html)
work:
1. On the **primary** Geo database, enter the PostgreSQL on the console as an
@@ -498,11 +507,6 @@ work:
Read the [troubleshooting document](troubleshooting.md).
[replication-slots-article]: https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75
-[pgback]: http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html
[replication user]:https://wiki.postgresql.org/wiki/Streaming_Replication
-[FDW]: https://www.postgresql.org/docs/9.6/static/postgres-fdw.html
[toc]: index.md#using-omnibus-gitlab
[rake-maintenance]: ../../raketasks/maintenance.md
-[pg-docs-ssl]: https://www.postgresql.org/docs/9.6/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION
-[pg-docs-runtime-conn]: https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html
-[pg-docs-runtime-replication]: https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html
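
After `pg_basebackup` completes, replication can be confirmed from the primary's database. A sketch, assuming the Omnibus-bundled PostgreSQL and the slot name chosen earlier:

```sh
# On the primary: the secondary's replication slot should exist and be active,
# and a streaming client should appear in pg_stat_replication.
sudo gitlab-psql -c "SELECT slot_name, active FROM pg_replication_slots;"
sudo gitlab-psql -c "SELECT client_addr, state FROM pg_stat_replication;"
```
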
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
index 256195998a7..4451d3c6c08 100644
--- a/doc/administration/geo/replication/external_database.md
+++ b/doc/administration/geo/replication/external_database.md
@@ -132,7 +132,7 @@ when `roles ['geo_secondary_role']` is set. For high availability,
refer to [Geo High Availability](../../high_availability/README.md).
If you want to run this database external to Omnibus, please follow the instructions below.
-The tracking database requires an [FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
+The tracking database requires an [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html)
connection with the **secondary** replica database for improved performance.
If you have an external database ready to be used as the tracking database,
@@ -173,7 +173,7 @@ the tracking database on port 5432.
gitlab-rake geo:db:migrate
```
-1. Configure the [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
+1. Configure the [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html)
connection and credentials:
Save the script below in a file, ex. `/tmp/geo_fdw.sh` and modify the connection
diff --git a/doc/administration/geo/replication/faq.md b/doc/administration/geo/replication/faq.md
index b3580a706c3..b07b518d3b1 100644
--- a/doc/administration/geo/replication/faq.md
+++ b/doc/administration/geo/replication/faq.md
@@ -43,9 +43,9 @@ attachments / avatars and the whole database. This means user accounts,
issues, merge requests, groups, project data, etc., will be available for
query.
-## Can I git push to a **secondary** node?
+## Can I `git push` to a **secondary** node?
-Yes! Pushing directly to a **secondary** node (for both HTTP and SSH, including git-lfs) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+Yes! Pushing directly to a **secondary** node (for both HTTP and SSH, including Git LFS) was [introduced](https://about.gitlab.com/blog/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
## How long does it take to have a commit replicated to a **secondary** node?
diff --git a/doc/administration/geo/replication/high_availability.md b/doc/administration/geo/replication/high_availability.md
index 9d84e10d496..faa9d051107 100644
--- a/doc/administration/geo/replication/high_availability.md
+++ b/doc/administration/geo/replication/high_availability.md
@@ -8,7 +8,7 @@ described, it is possible to adapt these instructions to your needs.
![Geo HA Diagram](../../high_availability/img/geo-ha-diagram.png)
-_[diagram source - gitlab employees only][diagram-source]_
+_[diagram source - GitLab employees only][diagram-source]_
The topology above assumes that the **primary** and **secondary** Geo clusters
are located in two separate locations, on their own virtual network
@@ -57,6 +57,11 @@ The following steps enable a GitLab cluster to serve as the **primary** node.
roles ['geo_primary_role']
##
+ ## The unique identifier for the Geo node.
+ ##
+ gitlab_rails['geo_node_name'] = '<node_name_here>'
+
+ ##
## Disable automatic migrations
##
gitlab_rails['auto_migrate'] = false
@@ -71,8 +76,16 @@ high availability configuration documentation for
[PostgreSQL](../../high_availability/database.md#configuring-the-application-nodes)
and [Redis](../../high_availability/redis.md#example-configuration-for-the-gitlab-application).
-The **primary** database will require modification later, as part of
-[step 2](#step-2-configure-the-main-read-only-replica-postgresql-database-on-the-secondary-node).
+### Step 2: Configure the **primary** database
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following:
+
+ ```ruby
+ ##
+ ## Configure the Geo primary role and the PostgreSQL role
+ ##
+ roles ['geo_primary_role', 'postgres_role']
+ ```
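
A change to `/etc/gitlab/gitlab.rb` only takes effect after a reconfigure, so this step would presumably be followed by:

```sh
sudo gitlab-ctl reconfigure
```
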
## Configure a **secondary** node
@@ -115,9 +128,9 @@ the **primary** database. Use the following as a guide.
```ruby
##
- ## Configure the PostgreSQL role
+ ## Configure the Geo secondary role and the PostgreSQL role
##
- roles ['postgres_role']
+ roles ['geo_secondary_role', 'postgres_role']
##
## Secondary address
@@ -222,6 +235,11 @@ following modifications:
roles ['geo_secondary_role', 'application_role']
##
+ ## The unique identifier for the Geo node.
+ ##
+ gitlab_rails['geo_node_name'] = '<node_name_here>'
+
+ ##
## Disable automatic migrations
##
gitlab_rails['auto_migrate'] = false
@@ -274,15 +292,15 @@ After making these changes [Reconfigure GitLab][gitlab-reconfigure] so the chang
On the secondary the following GitLab frontend services will be enabled:
-- geo-logcursor
-- gitlab-pages
-- gitlab-workhorse
-- logrotate
-- nginx
-- registry
-- remote-syslog
-- sidekiq
-- unicorn
+- `geo-logcursor`
+- `gitlab-pages`
+- `gitlab-workhorse`
+- `logrotate`
+- `nginx`
+- `registry`
+- `remote-syslog`
+- `sidekiq`
+- `unicorn`
Verify these services by running `sudo gitlab-ctl status` on the frontend
application servers.
diff --git a/doc/administration/geo/replication/img/adding_a_secondary_node.png b/doc/administration/geo/replication/img/adding_a_secondary_node.png
new file mode 100644
index 00000000000..5421b578672
--- /dev/null
+++ b/doc/administration/geo/replication/img/adding_a_secondary_node.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_add_geolocation_rule.png b/doc/administration/geo/replication/img/single_git_add_geolocation_rule.png
new file mode 100644
index 00000000000..4b04ba8d1f1
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_add_geolocation_rule.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_add_traffic_policy_endpoints.png b/doc/administration/geo/replication/img/single_git_add_traffic_policy_endpoints.png
new file mode 100644
index 00000000000..c19ad57c953
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_add_traffic_policy_endpoints.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_clone_panel.png b/doc/administration/geo/replication/img/single_git_clone_panel.png
new file mode 100644
index 00000000000..8aa0bd2f7d8
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_clone_panel.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_create_policy_records_with_traffic_policy.png b/doc/administration/geo/replication/img/single_git_create_policy_records_with_traffic_policy.png
new file mode 100644
index 00000000000..a554532f3b8
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_create_policy_records_with_traffic_policy.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_created_policy_record.png b/doc/administration/geo/replication/img/single_git_created_policy_record.png
new file mode 100644
index 00000000000..74c42395e15
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_created_policy_record.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_name_policy.png b/doc/administration/geo/replication/img/single_git_name_policy.png
new file mode 100644
index 00000000000..1a976539e94
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_name_policy.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_policy_diagram.png b/doc/administration/geo/replication/img/single_git_policy_diagram.png
new file mode 100644
index 00000000000..d62952dbbb3
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_policy_diagram.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/single_git_traffic_policies.png b/doc/administration/geo/replication/img/single_git_traffic_policies.png
new file mode 100644
index 00000000000..b3193c23d99
--- /dev/null
+++ b/doc/administration/geo/replication/img/single_git_traffic_policies.png
Binary files differ
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index f9f56b96e22..1fef2e85ce6 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -63,7 +63,7 @@ Keep in mind that:
- Get user data for logins (API).
- Replicate repositories, LFS Objects, and Attachments (HTTPS + JWT).
- Since GitLab Premium 10.0, the **primary** node no longer talks to **secondary** nodes to notify for changes (API).
-- Pushing directly to a **secondary** node (for both HTTP and SSH, including git-lfs) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+- Pushing directly to a **secondary** node (for both HTTP and SSH, including Git LFS) was [introduced](https://about.gitlab.com/blog/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
- There are [limitations](#current-limitations) in the current implementation.
### Architecture
@@ -108,7 +108,7 @@ The following are required to run Geo:
[fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md))
The following operating systems are known to ship with a current version of OpenSSH:
- [CentOS](https://www.centos.org) 7.4+
- - [Ubuntu](https://www.ubuntu.com) 16.04+
+ - [Ubuntu](https://ubuntu.com) 16.04+
- PostgreSQL 9.6+ with [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
- Git 2.9+
- All nodes must run the same GitLab version.
@@ -229,6 +229,10 @@ For more information on Geo security, see [Geo security review](security_review.
For more information on tuning Geo, see [Tuning Geo](tuning.md).
+### Set up a location-aware Git URL
+
+For an example of how to set up a location-aware Git remote URL with AWS Route53, see [Location-aware Git remote URL with AWS Route53](location_aware_git_url.md).
+
## Remove Geo node
For more information on removing a Geo node, see [Removing **secondary** Geo nodes](remove_geo_node.md).
@@ -240,7 +244,7 @@ This list of limitations only reflects the latest version of GitLab. If you are
- Pushing directly to a **secondary** node redirects (for HTTP) or proxies (for SSH) the request to the **primary** node instead of [handling it directly](https://gitlab.com/gitlab-org/gitlab/issues/1381), except when using Git over HTTP with credentials embedded within the URI. For example, `https://user:password@secondary.tld`.
- The **primary** node has to be online for OAuth login to happen. Existing sessions and Git are not affected.
-- The installation takes multiple manual steps that together can take about an hour depending on circumstances. We are working on improving this experience. See [gitlab-org/omnibus-gitlab#2978](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/2978) for details.
+- The installation takes multiple manual steps that together can take about an hour depending on circumstances. We are working on improving this experience. See [Omnibus GitLab issue #2978](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/2978) for details.
- Real-time updates of issues/merge requests (for example, via long polling) doesn't work on the **secondary** node.
- [Selective synchronization](configuration.md#selective-synchronization) applies only to files and repositories. Other datasets are replicated to the **secondary** node in full, making it inappropriate for use as an access control mechanism.
- Object pools for forked project deduplication work only on the **primary** node, and are duplicated on the **secondary** node.
@@ -251,36 +255,58 @@ This list of limitations only reflects the latest version of GitLab. If you are
The following table lists the GitLab features along with their replication
and verification status on a **secondary** node.
-You can keep track of the progress to include the missing items in:
-
-- [ee-893](https://gitlab.com/groups/gitlab-org/-/epics/893).
-- [ee-1430](https://gitlab.com/groups/gitlab-org/-/epics/1430).
-
-| Feature | Replicated | Verified |
-|-----------|------------|----------|
-| All database content (e.g. snippets, epics, issues, merge requests, groups, and project metadata) | Yes | Yes |
-| Project repository | Yes | Yes |
-| Project wiki repository | Yes | Yes |
-| Project designs repository | No | No |
-| Uploads (e.g. attachments to issues, merge requests, epics, and avatars) | Yes | Yes, only on transfer, or manually (1) |
-| LFS Objects | Yes | Yes, only on transfer, or manually (1) |
-| CI job artifacts (other than traces) | Yes | No, only manually (1) |
-| Archived traces | Yes | Yes, only on transfer, or manually (1) |
-| Personal snippets | Yes | Yes |
-| Version-controlled personal snippets ([unsupported](https://gitlab.com/gitlab-org/gitlab-foss/issues/13426)) | No | No |
-| Project snippets | Yes | Yes |
-| Version-controlled project snippets ([unsupported](https://gitlab.com/gitlab-org/gitlab-foss/issues/13426)) | No | No |
-| Object pools for forked project deduplication | No | No |
-| [Server-side Git Hooks](../../custom_hooks.md) | No | No |
-| [Elasticsearch integration](../../../integration/elasticsearch.md) | No | No |
-| [GitLab Pages](../../pages/index.md) | No | No |
-| [Container Registry](../../packages/container_registry.md) | Yes | No |
-| [NPM Registry](../../../user/packages/npm_registry/index.md) | No | No |
-| [Maven Packages](../../../user/packages/maven_repository/index.md) | No | No |
-| [External merge request diffs](../../merge_request_diffs.md) | No, if they are on-disk | No |
-| Content in object storage ([track progress](https://gitlab.com/groups/gitlab-org/-/epics/1526)) | No | No |
-
-1. The integrity can be verified manually using [Integrity Check Rake Task](../../raketasks/check.md) on both nodes and comparing the output between them.
+You can keep track of the progress to implement the missing items in
+these epics/issues:
+
+- [Unreplicated Data Types](https://gitlab.com/groups/gitlab-org/-/epics/893)
+- [Verify all replicated data](https://gitlab.com/groups/gitlab-org/-/epics/1430)
+
+| Feature | Replicated | Verified | Notes |
+|-----------------------------------------------------|--------------------------|-----------------------------|--------------------------------------------|
+| All database content | **Yes** | **Yes** | |
+| Project repository | **Yes** | **Yes** | |
+| Project wiki repository | **Yes** | **Yes** | |
+| Project designs repository | [No][design-replication] | [No][design-verification] | |
+| Uploads | **Yes** | [No][upload-verification] | Verified only on transfer, or manually (1) |
+| LFS Objects | **Yes** | [No][lfs-verification] | Verified only on transfer, or manually (1) |
+| CI job artifacts (other than traces) | **Yes** | [No][artifact-verification] | Verified only manually (1) |
+| Archived traces | **Yes** | [No][artifact-verification] | Verified only on transfer, or manually (1) |
+| Personal snippets | **Yes** | **Yes** | |
+| Version-controlled personal snippets | No | No | [Not yet supported][unsupported-snippets] |
+| Project snippets | **Yes** | **Yes** | |
+| Version-controlled project snippets | No | No | [Not yet supported][unsupported-snippets] |
+| Object pools for forked project deduplication | **Yes** | No | |
+| [Server-side Git Hooks][custom-hooks] | No | No | |
+| [Elasticsearch integration][elasticsearch] | No | No | |
+| [GitLab Pages][gitlab-pages] | [No][pages-replication] | No | |
+| [Container Registry][container-registry] | **Yes** | No | |
+| [NPM Registry][npm-registry] | No | No | |
+| [Maven Repository][maven-repository] | No | No | |
+| [Conan Repository][conan-repository] | No | No | |
+| [External merge request diffs][merge-request-diffs] | [No][diffs-replication] | No | |
+| Content in object storage | **Yes** | No | |
+
+[design-replication]: https://gitlab.com/groups/gitlab-org/-/epics/1633
+[design-verification]: https://gitlab.com/gitlab-org/gitlab/issues/32467
+[upload-verification]: https://gitlab.com/groups/gitlab-org/-/epics/1817
+[lfs-verification]: https://gitlab.com/gitlab-org/gitlab/issues/8922
+[artifact-verification]: https://gitlab.com/gitlab-org/gitlab/issues/8923
+[diffs-replication]: https://gitlab.com/gitlab-org/gitlab/issues/33817
+[pages-replication]: https://gitlab.com/groups/gitlab-org/-/epics/589
+
+[unsupported-snippets]: https://gitlab.com/gitlab-org/gitlab/issues/14228
+[custom-hooks]: ../../custom_hooks.md
+[elasticsearch]: ../../../integration/elasticsearch.md
+[gitlab-pages]: ../../pages/index.md
+[container-registry]: ../../packages/container_registry.md
+[npm-registry]: ../../../user/packages/npm_registry/index.md
+[maven-repository]: ../../../user/packages/maven_repository/index.md
+[conan-repository]: ../../../user/packages/conan_repository/index.md
+[merge-request-diffs]: ../../merge_request_diffs.md
+
+1. The integrity can be verified manually using
+[Integrity Check Rake Task](../../raketasks/check.md)
+on both nodes and comparing the output between them.
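
The manual verification referenced in footnote (1) amounts to running the integrity Rake tasks on each node and comparing the results. A sketch, assuming Omnibus installs on both nodes:

```sh
# Run on the primary and on the secondary, then diff the two outputs.
sudo gitlab-rake gitlab:git:fsck
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
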
DANGER: **DANGER**
Features not on this list, or with **No** in the **Replicated** column,
diff --git a/doc/administration/geo/replication/location_aware_git_url.md b/doc/administration/geo/replication/location_aware_git_url.md
new file mode 100644
index 00000000000..6183a0ad119
--- /dev/null
+++ b/doc/administration/geo/replication/location_aware_git_url.md
@@ -0,0 +1,119 @@
+# Location-aware Git remote URL with AWS Route53 **(PREMIUM ONLY)**
+
+You can provide GitLab users with a single remote URL that automatically uses
+the Geo node closest to them. This means users don't need to update their Git
+configuration to take advantage of closer Geo nodes as they move.
+
+This is possible because, Git push requests can be automatically redirected
+(HTTP) or proxied (SSH) from **secondary** nodes to the **primary** node.
+
+Though these instructions use [AWS Route53](https://aws.amazon.com/route53/),
+other services such as [Cloudflare](https://www.cloudflare.com/) could be used
+as well.
+
+NOTE: **Note:**
+You can also use a load balancer to distribute web UI or API traffic to
+[multiple Geo **secondary** nodes](../../../user/admin_area/geo_nodes.md#multiple-secondary-nodes-behind-a-load-balancer).
+Importantly, the **primary** node cannot yet be included. See the feature request
+[Support putting the **primary** behind a Geo node load balancer](https://gitlab.com/gitlab-org/gitlab/issues/10888).
+
+## Prerequisites
+
+In this example, we have already set up:
+
+- `primary.example.com` as a Geo **primary** node.
+- `secondary.example.com` as a Geo **secondary** node.
+
+We will create a `git.example.com` subdomain that will automatically direct
+requests:
+
+- From Europe to the **secondary** node.
+- From all other locations to the **primary** node.
+
+In any case, you require:
+
+- A working GitLab **primary** node that is accessible at its own address.
+- A working GitLab **secondary** node.
+- A Route53 Hosted Zone managing your domain.
+
+If you have not yet set up a Geo **primary** node and **secondary** node, please consult
+[the Geo setup instructions](https://docs.gitlab.com/ee/administration/geo/replication/#setup-instructions).
+
+## Create a traffic policy
+
+In a Route53 Hosted Zone, traffic policies can be used to set up a variety of
+routing configurations.
+
+1. Navigate to the
+[Route53 dashboard](https://console.aws.amazon.com/route53/home) and click
+**Traffic policies**.
+
+ ![Traffic policies](img/single_git_traffic_policies.png)
+
+1. Click the **Create traffic policy** button.
+
+ ![Name policy](img/single_git_name_policy.png)
+
+1. Fill in the **Policy Name** field with `Single Git Host` and click **Next**.
+
+ ![Policy diagram](img/single_git_policy_diagram.png)
+
+1. Leave **DNS type** as `A: IP Address in IPv4 format`.
+1. Click **Connect to...** and select **Geolocation rule**.
+
+ ![Add geolocation rule](img/single_git_add_geolocation_rule.png)
+
+1. For the first **Location**, leave it as `Default`.
+1. Click **Connect to...** and select **New endpoint**.
+1. Choose **Type** `value` and fill it in with `<your **primary** IP address>`.
+1. For the second **Location**, choose `Europe`.
+1. Click **Connect to...** and select **New endpoint**.
+1. Choose **Type** `value` and fill it in with `<your **secondary** IP address>`.
+
+ ![Add traffic policy endpoints](img/single_git_add_traffic_policy_endpoints.png)
+
+1. Click **Create traffic policy**.
+
+ ![Create policy records with traffic policy](img/single_git_create_policy_records_with_traffic_policy.png)
+
+1. Fill in **Policy record DNS name** with `git`.
+1. Click **Create policy records**.
+
+ ![Created policy record](img/single_git_created_policy_record.png)
+
+You have successfully set up a single host, e.g. `git.example.com`, which
+distributes traffic to your Geo nodes by geolocation!
+
+## Configure Git clone URLs to use the special Git URL
+
+When a user clones a repository for the first time, they typically copy the Git
+remote URL from the project page. By default, these SSH and HTTP URLs are based
+on the external URL of the current host. For example:
+
+- `git@secondary.example.com:group1/project1.git`
+- `https://secondary.example.com/group1/project1.git`
+
+![Clone panel](img/single_git_clone_panel.png)
+
+You can customize the:
+
+- SSH remote URL to use the location-aware `git.example.com`. To do so, change the SSH remote URL's
+ host by setting `gitlab_rails['gitlab_ssh_host']` in `gitlab.rb` of web nodes.
+- HTTP remote URL as shown in
+ [Custom Git clone URL for HTTP(S)](../../../user/admin_area/settings/visibility_and_access_controls.md#custom-git-clone-url-for-https).
+
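
For the SSH case, a minimal sketch of the Omnibus setting on each web node, using the `git.example.com` host from this guide (followed by `sudo gitlab-ctl reconfigure`):

```ruby
# /etc/gitlab/gitlab.rb on every node that serves the web UI
gitlab_rails['gitlab_ssh_host'] = 'git.example.com'
```
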
+## Example Git request handling behavior
+
+After following the configuration steps above, handling for Git requests is now location aware.
+For requests:
+
+- Outside Europe, all requests are directed to the **primary** node.
+- Within Europe, over:
+ - HTTP:
+ - `git clone http://git.example.com/foo/bar.git` is directed to the **secondary** node.
+ - `git push` is initially directed to the **secondary**, which automatically
+ redirects to `primary.example.com`.
+ - SSH:
+ - `git clone git@git.example.com:foo/bar.git` is directed to the **secondary**.
+ - `git push` is initially directed to the **secondary**, which automatically
+ proxies the request to `primary.example.com`.
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 878b67a8f8e..a9087abcbd9 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -1,16 +1,33 @@
# Geo with Object storage **(PREMIUM ONLY)**
-Geo can be used in combination with Object Storage (AWS S3, or
-other compatible object storage).
+Geo can be used in combination with Object Storage (AWS S3, or other compatible object storage).
-## Configuration
+Currently, **secondary** nodes can use either:
-At this time it is required that if object storage is enabled on the
-**primary** node, it must also be enabled on each **secondary** node.
+- The same storage bucket as the **primary** node.
+- A replicated storage bucket.
-**Secondary** nodes can use the same storage bucket as the **primary** node, or
-they can use a replicated storage bucket. At this time GitLab does not
-take care of content replication in object storage.
+To have:
+
+- GitLab manage replication, follow [Enabling GitLab replication](#enabling-gitlab-managed-object-storage-replication).
+- Third-party services manage replication, follow [Third-party replication services](#third-party-replication-services).
+
+## Enabling GitLab managed object storage replication
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/10586) in GitLab 12.4.
+
+CAUTION: **Caution:**
+This is a [**beta** feature](https://about.gitlab.com/handbook/product/#beta) and is not ready yet for production use at any scale.
+
+**Secondary** nodes can replicate files stored on the **primary** node regardless of
+whether they are stored on the local filesystem or in object storage.
+
+To enable GitLab replication, you must:
+
+1. Go to **Admin Area > Geo**.
+1. Click **Edit** on the **secondary** node.
+1. Enable the **Allow this secondary node to replicate content on Object Storage**
+ checkbox.
For LFS, follow the documentation to
[set up LFS object storage](../../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
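+
+For illustration only, a minimal Omnibus snippet for keeping LFS objects in an
+S3-compatible bucket might look like the following (bucket name, region, and
+credentials are placeholders):
+
+```ruby
+# /etc/gitlab/gitlab.rb
+gitlab_rails['lfs_object_store_enabled'] = true
+gitlab_rails['lfs_object_store_remote_directory'] = 'lfs-objects'
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'AWS',
+  'region' => 'eu-central-1',
+  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
+  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
+}
+```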
@@ -20,12 +37,21 @@ For CI job artifacts, there is similar documentation to configure
For user uploads, there is similar documentation to configure [upload object storage](../../uploads.md#using-object-storage-core-only)
-You should enable and configure object storage on both **primary** and **secondary**
-nodes. Migrating existing data to object storage should be performed on the
-**primary** node only. **Secondary** nodes will automatically notice that the migrated
-files are now in object storage.
+If you want to migrate the **primary** node's files to object storage, you can
+configure the **secondary** in a few ways:
+
+- Use the exact same object storage.
+- Use a separate object store but leverage your object storage solution's built-in
+ replication.
+- Use a separate object store and enable the **Allow this secondary node to replicate
+ content on Object Storage** setting.
+
+GitLab does not currently support the case where both:
+
+- The **primary** node uses local storage.
+- A **secondary** node uses object storage.
-## Replication
+## Third-party replication services
When using Amazon S3, you can use
[CRR](https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html) to
diff --git a/doc/administration/geo/replication/security_review.md b/doc/administration/geo/replication/security_review.md
index 832d02be9a5..68bf5b5d23a 100644
--- a/doc/administration/geo/replication/security_review.md
+++ b/doc/administration/geo/replication/security_review.md
@@ -1,9 +1,9 @@
# Geo security review (Q&A) **(PREMIUM ONLY)**
-The following security review of the Geo feature set focuses on security
-aspects of the feature as they apply to customers running their own GitLab
-instances. The review questions are based in part on the [application security architecture](https://www.owasp.org/index.php/Application_Security_Architecture_Cheat_Sheet)
-questions from [owasp.org](https://www.owasp.org).
+The following security review of the Geo feature set focuses on security aspects of
+the feature as they apply to customers running their own GitLab instances. The review
+questions are based in part on the [OWASP Application Security Verification Standard Project](https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project)
+from [owasp.org](https://www.owasp.org/index.php/Main_Page).
## Business Model
@@ -30,7 +30,7 @@ questions from [owasp.org](https://www.owasp.org).
private projects. Geo replicates them all indiscriminately. “Selective sync”
exists for files and repositories (but not database content), which would permit
only less-sensitive projects to be replicated to a **secondary** node if desired.
-- See also: [developing a data classification policy](https://gitlab.com/gitlab-com/security/issues/4).
+- See also: [GitLab data classification policy](https://about.gitlab.com/handbook/engineering/security/data-classification-policy.html).
### What data backup and retention requirements have been defined for the application?
@@ -49,9 +49,9 @@ questions from [owasp.org](https://www.owasp.org).
### How do the end‐users interact with the application?
- **Secondary** nodes provide all the interfaces a **primary** node does
- (notably a HTTP/HTTPS web application, and HTTP/HTTPS or SSH git repository
+ (notably a HTTP/HTTPS web application, and HTTP/HTTPS or SSH Git repository
access), but is constrained to read-only activities. The principal use case is
- envisioned to be cloning git repositories from the **secondary** node in favor of the
+ envisioned to be cloning Git repositories from the **secondary** node in favor of the
**primary** node, but end-users may use the GitLab web interface to view projects,
issues, merge requests, snippets, etc.
@@ -229,7 +229,7 @@ questions from [owasp.org](https://www.owasp.org).
- A static secret shared across all hosts in a GitLab deployment.
- In transit, data should be encrypted, although the application does permit
communication to proceed unencrypted. The two main transits are the **secondary** node’s
- replication process for PostgreSQL, and for git repositories/files. Both should
+ replication process for PostgreSQL, and for Git repositories/files. Both should
be protected using TLS, with the keys for that managed via Omnibus per existing
configuration for end-user access to GitLab.
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 263fc05dce9..4d64941411a 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -252,7 +252,7 @@ to start again from scratch, there are a few steps that can help you:
gitlab-ctl stop geo-logcursor
```
- You can watch sidekiq logs to know when sidekiq jobs processing have finished:
+ You can watch Sidekiq logs to know when Sidekiq jobs processing have finished:
```sh
gitlab-ctl tail sidekiq
@@ -280,8 +280,8 @@ to start again from scratch, there are a few steps that can help you:
Any uploaded content like file attachments, avatars or LFS objects are stored in a
subfolder in one of the two paths below:
- - /var/opt/gitlab/gitlab-rails/shared
- - /var/opt/gitlab/gitlab-rails/uploads
+ - `/var/opt/gitlab/gitlab-rails/shared`
+ - `/var/opt/gitlab/gitlab-rails/uploads`
To rename all of them:
diff --git a/doc/administration/geo/replication/using_a_geo_server.md b/doc/administration/geo/replication/using_a_geo_server.md
index 55b5d486676..55c7e78da92 100644
--- a/doc/administration/geo/replication/using_a_geo_server.md
+++ b/doc/administration/geo/replication/using_a_geo_server.md
@@ -4,7 +4,7 @@
After you set up the [database replication and configure the Geo nodes][req], use your closest GitLab node as you would a normal standalone GitLab instance.
-Pushing directly to a **secondary** node (for both HTTP, SSH including git-lfs) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+Pushing directly to a **secondary** node (for both HTTP, SSH including Git LFS) was [introduced](https://about.gitlab.com/blog/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
Example of the output you will see when pushing to a **secondary** node:
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index 3b32baf28b9..d5749427f6e 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -3,7 +3,7 @@
[Gitaly](https://gitlab.com/gitlab-org/gitaly) is the service that
provides high-level RPC access to Git repositories. Without it, no other
components can read or write Git data. GitLab components that access Git
-repositories (gitlab-rails, gitlab-shell, gitlab-workhorse, etc.) act as clients
+repositories (GitLab Rails, GitLab Shell, GitLab Workhorse, etc.) act as clients
to Gitaly. End users do not have direct access to Gitaly.
In the rest of this page, Gitaly server is referred to the standalone node that
@@ -47,8 +47,8 @@ But since 11.8 the indexer uses Gitaly for data access as well. NFS can still
be leveraged for redundancy on block level of the Git data. But only has to
be mounted on the Gitaly server.
-Starting with GitLab 11.8, it is possible to use ElasticSearch in conjunction with
-a Gitaly setup that isn't utilising NFS. In order to use ElasticSearch in this
+Starting with GitLab 11.8, it is possible to use Elasticsearch in conjunction with
+a Gitaly setup that isn't utilising NFS. In order to use Elasticsearch in this
scenario, the [new repository indexer](../../integration/elasticsearch.md#elasticsearch-repository-indexer-beta)
needs to be enabled in your GitLab configuration.
@@ -71,8 +71,8 @@ The following list depicts what the network architecture of Gitaly is:
- A GitLab server can use one or more Gitaly servers.
- Gitaly addresses must be specified in such a way that they resolve
correctly for ALL Gitaly clients.
-- Gitaly clients are: Unicorn, Sidekiq, gitlab-workhorse,
- gitlab-shell, Elasticsearch Indexer, and Gitaly itself.
+- Gitaly clients are: Unicorn, Sidekiq, GitLab Workhorse,
+ GitLab Shell, Elasticsearch Indexer, and Gitaly itself.
- A Gitaly server must be able to make RPC calls **to itself** via its own
`(Gitaly address, Gitaly token)` pair as specified in `/config/gitlab.yml`.
- Gitaly servers must not be exposed to the public internet as Gitaly's network
@@ -86,7 +86,8 @@ Below we describe how to configure two Gitaly servers one at
`gitaly1.internal` and the other at `gitaly2.internal`
with secret token `abc123secret`. We assume
your GitLab installation has three repository storages: `default`,
-`storage1` and `storage2`.
+`storage1` and `storage2`. You can use as few as one server with one
+repository storage if desired.
### 1. Installation
@@ -129,7 +130,7 @@ Configure a token on the instance that runs the GitLab Rails application.
Next, on the Gitaly servers, you need to configure storage paths, enable
the network listener and configure the token.
-NOTE: **Note:** if you want to reduce the risk of downtime when you enable
+NOTE: **Note:** If you want to reduce the risk of downtime when you enable
authentication you can temporarily disable enforcement, see [the
documentation on configuring Gitaly
authentication](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/configuration/README.md#authentication)
@@ -177,20 +178,19 @@ Check the directory layout on your Gitaly server to be sure.
# Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+ # Authentication token to ensure only authorized servers can communicate with
+ # Gitaly server
+ gitaly['auth_token'] = 'abc123secret'
+
# Make Gitaly accept connections on all network interfaces. You must use
# firewalls to restrict access to this address/port.
+  # Comment out the following line if you only want to support TLS connections
gitaly['listen_addr'] = "0.0.0.0:8075"
- gitaly['auth_token'] = 'abc123secret'
-
- # To use TLS for Gitaly you need to add
- gitaly['tls_listen_addr'] = "0.0.0.0:9999"
- gitaly['certificate_path'] = "path/to/cert.pem"
- gitaly['key_path'] = "path/to/key.pem"
```
1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
- For `gitaly1.internal`:
+ On `gitaly1.internal`:
```
gitaly['storage'] = [
@@ -199,7 +199,7 @@ Check the directory layout on your Gitaly server to be sure.
]
```
- For `gitaly2.internal`:
+ On `gitaly2.internal`:
```
gitaly['storage'] = [
@@ -219,11 +219,6 @@ Check the directory layout on your Gitaly server to be sure.
```toml
listen_addr = '0.0.0.0:8075'
- tls_listen_addr = '0.0.0.0:9999'
-
- [tls]
- certificate_path = /path/to/cert.pem
- key_path = /path/to/key.pem
[auth]
token = 'abc123secret'
@@ -231,7 +226,7 @@ Check the directory layout on your Gitaly server to be sure.
1. Append the following to `/home/git/gitaly/config.toml` for each respective server:
- For `gitaly1.internal`:
+ On `gitaly1.internal`:
```toml
[[storage]]
@@ -241,7 +236,7 @@ Check the directory layout on your Gitaly server to be sure.
name = 'storage1'
```
- For `gitaly2.internal`:
+ On `gitaly2.internal`:
```toml
[[storage]]
@@ -369,14 +364,23 @@ To disable Gitaly on a client node:
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/22602) in GitLab 11.8.
Gitaly supports TLS encryption. To be able to communicate
-with a Gitaly instance that listens for secure connections you will need to use `tls://` url
+with a Gitaly instance that listens for secure connections you will need to use `tls://` URL
scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
You will need to bring your own certificates as this isn't provided automatically.
-The certificate to be used needs to be installed on all Gitaly nodes and on all
+The certificate to be used needs to be installed on all Gitaly nodes, and the
+certificate (or its CA certificate) on all
client nodes that communicate with it following the procedure described in
[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+NOTE: **Note:**
+The self-signed certificate must specify the address you use to access the
+Gitaly server. If you are addressing the Gitaly server by a hostname, you can
+either use the Common Name field for this, or add it as a Subject Alternative
+Name. If you are addressing the Gitaly server by its IP address, you must add it
+as a Subject Alternative Name to the certificate.
+[gRPC does not support using an IP address as Common Name in a certificate](https://github.com/grpc/grpc/issues/2691).
+
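+For illustration, a self-signed certificate carrying both the hostname and the IP
+address as Subject Alternative Names could be generated as follows (hostname and
+IP are placeholders; `-addext` requires OpenSSL 1.1.1 or later):
+
+```sh
+openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
+  -keyout key.pem -out cert.pem \
+  -subj "/CN=gitaly1.internal" \
+  -addext "subjectAltName=DNS:gitaly1.internal,IP:10.0.0.5"
+```
+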
NOTE: **Note:**
It is possible to configure Gitaly servers with both an
unencrypted listening address `listen_addr` and an encrypted listening
@@ -387,7 +391,7 @@ To configure Gitaly with TLS:
**For Omnibus GitLab**
-1. On the client nodes, edit `/etc/gitlab/gitlab.rb`:
+1. On the client node(s), edit `/etc/gitlab/gitlab.rb` as follows:
```ruby
git_data_dirs({
@@ -399,20 +403,38 @@ To configure Gitaly with TLS:
gitlab_rails['gitaly_token'] = 'abc123secret'
```
-1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-1. On the Gitaly server nodes, edit `/etc/gitlab/gitlab.rb`:
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) on client node(s).
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```sh
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 700 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+ ```
+
+1. On the Gitaly server node(s), edit `/etc/gitlab/gitlab.rb` and add:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
```ruby
gitaly['tls_listen_addr'] = "0.0.0.0:9999"
- gitaly['certificate_path'] = "path/to/cert.pem"
- gitaly['key_path'] = "path/to/key.pem"
+ gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
+ gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
```
-1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) on Gitaly server node(s).
+1. (Optional) After [verifying that all Gitaly traffic is being served over TLS](#observe-type-of-gitaly-connections),
+ you can improve security by disabling non-TLS connections by commenting out
+ or deleting `gitaly['listen_addr']` in `/etc/gitlab/gitlab.rb`, saving the file,
+ and [reconfiguring GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
+ on Gitaly server node(s).
**For installations from source**
-1. On the client nodes, edit `/home/git/gitlab/config/gitlab.yml`:
+1. On the client node(s), edit `/home/git/gitlab/config/gitlab.yml` as follows:
```yaml
gitlab:
@@ -437,18 +459,33 @@ To configure Gitaly with TLS:
data will be stored in this folder. This will no longer be necessary after
[this issue](https://gitlab.com/gitlab-org/gitaly/issues/1282) is resolved.
-1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-1. On the Gitaly server nodes, edit `/home/git/gitaly/config.toml`:
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source) on client node(s).
+1. Create the `/etc/gitlab/ssl` directory and copy your key and certificate there:
+
+ ```sh
+ sudo mkdir -p /etc/gitlab/ssl
+ sudo chmod 700 /etc/gitlab/ssl
+ sudo cp key.pem cert.pem /etc/gitlab/ssl/
+ ```
+
+1. On the Gitaly server node(s), edit `/home/git/gitaly/config.toml` and add:
```toml
tls_listen_addr = '0.0.0.0:9999'
[tls]
- certificate_path = '/path/to/cert.pem'
- key_path = '/path/to/key.pem'
+ certificate_path = '/etc/gitlab/ssl/cert.pem'
+ key_path = '/etc/gitlab/ssl/key.pem'
```
-1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source) on Gitaly server node(s).
+1. (Optional) After [verifying that all Gitaly traffic is being served over TLS](#observe-type-of-gitaly-connections),
+ you can improve security by disabling non-TLS connections by commenting out
+ or deleting `listen_addr` in `/home/git/gitaly/config.toml`, saving the file,
+ and [restarting GitLab](../restart_gitlab.md#installations-from-source)
+ on Gitaly server node(s).
+
+### Observe type of Gitaly connections
To observe what type of connections are actually being used in a
production environment you can use the following Prometheus query:
@@ -512,7 +549,7 @@ a few things that you need to do:
1. Configure [database lookup of SSH keys](../operations/fast_ssh_key_lookup.md)
to eliminate the need for a shared authorized_keys file.
1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
- including [live tracing](../job_traces.md#new-live-trace-architecture).
+ including [incremental logging](../job_logs.md#new-incremental-logging-architecture).
1. Configure [object storage for LFS objects](../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
@@ -564,6 +601,109 @@ concurrency limiter, not a rate limiter. If a client makes 1000 requests
in a row in a very short timespan, the concurrency will not exceed 1,
and this mechanism (the concurrency limiter) will do nothing.
+## Rotating a Gitaly authentication token
+
+Rotating credentials in a production environment often either requires
+downtime, or causes outages, or both. If you are careful, though, you
+*can* rotate Gitaly credentials without a service interruption.
+
+This procedure also works if you are running GitLab on a single server.
+In that case, "Gitaly servers" and "Gitaly clients" refer to the same
+machine.
+
+### 1. Monitor current authentication behavior
+
+Use Prometheus to see what the current authentication behavior of your
+GitLab installation is.
+
+```
+sum(rate(gitaly_authentications_total[5m])) by (enforced, status)
+```
+
+In a system where authentication is configured correctly, and where you
+have live traffic, you will see something like this:
+
+```
+{enforced="true",status="ok"} 4424.985419441742
+```
+
+There may also be other numbers with rate 0. We only care about the
+non-zero numbers.
+
+The only non-zero number should have `enforced="true",status="ok"`. If
+you have other non-zero numbers, something is wrong in your
+configuration.
+
+The `status="ok"` number reflects your current request rate. In the example
+above, Gitaly is handling about 4000 requests per second.
+
+Now you have established that you can monitor the Gitaly authentication
+behavior of your GitLab installation.
+
+### 2. Reconfigure all Gitaly servers to be in "auth transitioning" mode
+
+The second step is to temporarily disable authentication on the Gitaly servers.
+
+```ruby
+# in /etc/gitlab/gitlab.rb
+gitaly['auth_transitioning'] = true
+```
+
+After you have applied this, your Prometheus query should return
+something like this:
+
+```
+{enforced="false",status="would be ok"} 4424.985419441742
+```
+
+Because `enforced="false"`, it will be safe to start rolling out the new
+token.
+
+### 3. Update Gitaly token on all clients and servers
+
+```ruby
+# in /etc/gitlab/gitlab.rb
+
+gitaly['auth_token'] = 'my new secret token'
+```
+
+Remember to apply this on both your Gitaly clients *and* servers. If you
+check your Prometheus query while this change is being rolled out, you
+will see non-zero values for the `enforced="false",status="denied"` counter.
+
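+On Omnibus installations, the updated token takes effect once each node has been
+reconfigured after editing `gitlab.rb`, for example:
+
+```sh
+# Run on every Gitaly client and server after updating the token
+sudo gitlab-ctl reconfigure
+```
+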
+### 4. Use Prometheus to ensure there are no authentication failures
+
+After you have applied the Gitaly token change everywhere, and all services
+involved have been restarted, you will temporarily see a mix of
+`status="would be ok"` and `status="denied"`.
+
+After the new token has been picked up by all Gitaly clients and
+servers, the **only non-zero rate** should be
+`enforced="false",status="would be ok"`.
+
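+If it helps, the same metric can be narrowed down to failures only with a label
+matcher (standard PromQL; not part of the original procedure):
+
+```
+sum(rate(gitaly_authentications_total{status="denied"}[5m])) by (enforced, status)
+```
+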
+### 5. Disable "auth transitioning" mode
+
+Now we turn off the 'auth transitioning' mode. These final steps are
+important: without them, you have **no authentication**.
+
+Update the configuration on your Gitaly servers:
+
+```ruby
+# in /etc/gitlab/gitlab.rb
+gitaly['auth_transitioning'] = false
+```
+
+### 6. Verify that authentication is enforced again
+
+Refresh your Prometheus query. You should now see the same kind of
+result as you did in the beginning:
+
+```
+{enforced="true",status="ok"} 4424.985419441742
+```
+
+Note that `enforced="true"`, meaning that authentication is being enforced.
+
## Troubleshooting Gitaly
### `gitaly-debug`
@@ -747,3 +887,8 @@ To remove the proxy setting, run the following commands (depending on which vari
unset http_proxy
unset https_proxy
```
+
+### Praefect
+
+Praefect is an experimental daemon that allows for replication of the Git data.
+It can be set up with Omnibus GitLab, [as explained here](./praefect.md).
diff --git a/doc/administration/gitaly/praefect.md b/doc/administration/gitaly/praefect.md
new file mode 100644
index 00000000000..9038675a28f
--- /dev/null
+++ b/doc/administration/gitaly/praefect.md
@@ -0,0 +1,114 @@
+# Praefect
+
+NOTE: **Note:** Praefect is an experimental service, and for testing purposes only at
+this time.
+
+Praefect is an optional reverse-proxy for [Gitaly](index.md) to manage a
+cluster of Gitaly nodes for high availability through replication.
+If a Gitaly node becomes unavailable, it will be possible to fail over to a
+warm Gitaly replica.
+
+The first minimal version will support:
+
+- Eventual consistency of the secondary replicas.
+- Manual failover from the primary to the secondary.
+
+Follow the [HA Gitaly epic](https://gitlab.com/groups/gitlab-org/-/epics/1489)
+for updates and roadmap.
+
+## Omnibus
+
+### Architecture
+
+For this document, the following network topology is assumed:
+
+```mermaid
+graph TB
+ GitLab --> Gitaly;
+ GitLab --> Praefect;
+    Praefect --> Praefect-Git-1;
+    Praefect --> Praefect-Git-2;
+    Praefect --> Praefect-Git-3;
+```
+
+Where `GitLab` is the collection of clients that can request Git operations.
+`Gitaly` is a Gitaly server before using Praefect. The Praefect node has three
+storage nodes attached. Praefect itself doesn't store data, but connects to
+three Gitaly nodes, `Praefect-Git-1`, `Praefect-Git-2`, and `Praefect-Git-3`.
+Only Praefect should know about the existence of the `Praefect-Git-X` nodes.
+
+### Setup
+
+In this setup guide, the Gitaly node will be added first, then Praefect, and
+lastly we update the GitLab configuration.
+
+#### Gitaly
+
+On its own machine, configure the Gitaly server as described in the
+[Gitaly documentation](index.md#3-gitaly-server-configuration).
+
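+As a rough sketch, the storage name and token on the Gitaly node must match the
+corresponding `storage_nodes` entry in the Praefect configuration below
+(illustrative values):
+
+```ruby
+# /etc/gitlab/gitlab.rb on praefect-git-1
+gitaly['enable'] = true
+# The listening address/port must match the address Praefect dials
+gitaly['listen_addr'] = '0.0.0.0:8075'
+gitaly['auth_token'] = 'token1'
+gitaly['storage'] = [
+  {
+    'name' => 'praefect-git-1',
+    'path' => '/var/opt/gitlab/git-data/repositories',
+  },
+]
+```
+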
+#### Praefect
+
+Next, Praefect has to be enabled on its own node. Disable all other services,
+and add each Gitaly node that will be connected to Praefect. In the example below,
+the Gitaly nodes are named `praefect-git-X`. Note that one node is designated as
+primary by setting `primary` to `true`:
+
+```ruby
+# /etc/gitlab/gitlab.rb
+
+# Avoid running unnecessary services on the Praefect node
+postgresql['enable'] = false
+redis['enable'] = false
+nginx['enable'] = false
+prometheus['enable'] = false
+unicorn['enable'] = false
+sidekiq['enable'] = false
+gitlab_workhorse['enable'] = false
+gitaly['enable'] = false
+
+# virtual_storage_name must match the storage name given to Praefect in git_data_dirs
+praefect['virtual_storage_name'] = 'praefect'
+praefect['auth_token'] = 'super_secret_abc'
+praefect['enable'] = true
+praefect['storage_nodes'] = [
+ {
+ 'storage' => 'praefect-git-1',
+ 'address' => 'tcp://praefect-git-1.internal',
+ 'token' => 'token1',
+ 'primary' => true
+ },
+ {
+ 'storage' => 'praefect-git-2',
+ 'address' => 'tcp://praefect-git-2.internal',
+ 'token' => 'token2'
+ },
+ {
+ 'storage' => 'praefect-git-3',
+ 'address' => 'tcp://praefect-git-3.internal',
+ 'token' => 'token3'
+ }
+]
+```
+
+Save the file and [reconfigure Praefect](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+#### GitLab
+
+When Praefect is running, it should be exposed as a storage to GitLab. This
+is done by setting `git_data_dirs`. Assuming the default storage
+configuration is used, there would be two storages available to GitLab:
+
+```ruby
+git_data_dirs({
+ "default" => {
+ "gitaly_address" => "tcp://gitaly.internal"
+ },
+ "praefect" => {
+ "gitaly_address" => "tcp://praefect.internal:2305"
+ }
+})
+```
+
+Restart GitLab using `gitlab-ctl restart` on the GitLab node.
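+
+To sanity-check connectivity afterwards, you could run the Gitaly check Rake task
+on the GitLab node; it exercises every configured storage, including the new
+`praefect` entry:
+
+```sh
+sudo gitlab-rake gitlab:gitaly:check
+```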
diff --git a/doc/administration/gitaly/reference.md b/doc/administration/gitaly/reference.md
index a3bb4f8a509..fe88ef13958 100644
--- a/doc/administration/gitaly/reference.md
+++ b/doc/administration/gitaly/reference.md
@@ -134,7 +134,7 @@ A lot of Gitaly RPCs need to look up Git objects from repositories.
Most of the time we use `git cat-file --batch` processes for that. For
better performance, Gitaly can re-use these `git cat-file` processes
across RPC calls. Previously used processes are kept around in a
-["git cat-file cache"](https://about.gitlab.com/2019/07/08/git-performance-on-nfs/#enter-cat-file-cache).
+["git cat-file cache"](https://about.gitlab.com/blog/2019/07/08/git-performance-on-nfs/#enter-cat-file-cache).
In order to control how much system resources this uses, we have a maximum number
of cat-file processes that can go into the cache.
diff --git a/doc/administration/high_availability/README.md b/doc/administration/high_availability/README.md
index 0aaf956f169..199944a160c 100644
--- a/doc/administration/high_availability/README.md
+++ b/doc/administration/high_availability/README.md
@@ -25,24 +25,24 @@ solution should balance the costs against the benefits.
There are many options when choosing a highly-available GitLab architecture. We
recommend engaging with GitLab Support to choose the best architecture for your
-use-case. This page contains some various options and guidelines based on
+use case. This page contains various options and guidelines based on
experience with GitLab.com and Enterprise Edition on-premises customers.
-For a detailed insight into how GitLab scales and configures GitLab.com, you can
+For detailed insight into how GitLab scales and configures GitLab.com, you can
watch [this 1 hour Q&A](https://www.youtube.com/watch?v=uCU8jdYzpac)
with [John Northrup](https://gitlab.com/northrup), and live questions coming in from some of our customers.
## GitLab Components
The following components need to be considered for a scaled or highly-available
-environment. In many cases components can be combined on the same nodes to reduce
+environment. In many cases, components can be combined on the same nodes to reduce
complexity.
- Unicorn/Workhorse - Web-requests (UI, API, Git over HTTP)
- Sidekiq - Asynchronous/Background jobs
- PostgreSQL - Database
- Consul - Database service discovery and health checks/failover
- - PGBouncer - Database pool manager
+ - PgBouncer - Database pool manager
- Redis - Key/Value store (User sessions, cache, queue for Sidekiq)
- Sentinel - Redis health check/failover manager
- Gitaly - Provides high-level RPC access to Git repositories
@@ -57,12 +57,12 @@ infrastructure and maintenance costs of full high availability.
### Basic Scaling
This is the simplest form of scaling and will work for the majority of
-cases. Backend components such as PostgreSQL, Redis and storage are offloaded
+cases. Backend components such as PostgreSQL, Redis, and storage are offloaded
to their own nodes while the remaining GitLab components all run on 2 or more
application nodes.
This form of scaling also works well in a cloud environment when it is more
-cost-effective to deploy several small nodes rather than a single
+cost effective to deploy several small nodes rather than a single
larger one.
- 1 PostgreSQL node
@@ -85,11 +85,11 @@ you can continue with the next step.
### Full Scaling
-For very large installations it may be necessary to further split components
-for maximum scalability. In a fully-scaled architecture the application node
+For very large installations, it might be necessary to further split components
+for maximum scalability. In a fully-scaled architecture, the application node
is split into separate Sidekiq and Unicorn/Workhorse nodes. One indication that
this architecture is required is if Sidekiq queues begin to periodically increase
-in size, indicating that there is contention or not enough resources.
+in size, indicating that there is contention or there are not enough resources.
- 1 PostgreSQL node
- 1 Redis node
@@ -100,7 +100,7 @@ in size, indicating that there is contention or not enough resources.
## High Availability Architecture Examples
-When organizations require scaling *and* high availability the following
+When organizations require scaling *and* high availability, the following
architectures can be utilized. As the introduction section at the top of this
page mentions, there is a tradeoff between cost/complexity and uptime. Be sure
this complexity is absolutely required before taking the step into full
@@ -108,11 +108,11 @@ high availability.
For all examples below, we recommend running Consul and Redis Sentinel on
dedicated nodes. If Consul is running on PostgreSQL nodes or Sentinel on
-Redis nodes there is a potential that high resource usage by PostgreSQL or
+Redis nodes, there is a potential that high resource usage by PostgreSQL or
Redis could prevent communication between the other Consul and Sentinel nodes.
-This may lead to the other nodes believing a failure has occurred and automated
-failover is necessary. Isolating them from the services they monitor reduces
-the chances of split-brain.
+This may lead to the other nodes believing a failure has occurred and initiating
+automated failover. Isolating Redis and Consul from the services they monitor
+reduces the chances of a false positive that a failure has occurred.
The examples below do not really address high availability of NFS. Some enterprises
have access to NFS appliances that manage availability. This is the best case
@@ -131,14 +131,14 @@ trade-offs and limits.
This architecture will work well for many GitLab customers. Larger customers
may begin to notice certain events cause contention/high load - for example,
cloning many large repositories with binary files, high API usage, a large
-number of enqueued Sidekiq jobs, etc. If this happens you should consider
+number of enqueued Sidekiq jobs, and so on. If this happens, you should consider
moving to a hybrid or fully distributed architecture depending on what is causing
the contention.
- 3 PostgreSQL nodes
- 2 Redis nodes
- 3 Consul/Sentinel nodes
-- 2 or more GitLab application nodes (Unicorn, Workhorse, Sidekiq, PGBouncer)
+- 2 or more GitLab application nodes (Unicorn, Workhorse, Sidekiq, PgBouncer)
- 1 NFS/Gitaly server
- 1 Monitoring node (Prometheus, Grafana)
@@ -162,32 +162,11 @@ contention due to certain workloads.
![Hybrid architecture diagram](img/hybrid.png)
-#### Reference Architecture
-
-- **Supported Users (approximate):** 10,000
-- **Known Issues:** While validating the reference architecture, slow endpoints were discovered and are being investigated. [gitlab-org/gitlab-ce/issues/64335](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335)
-
-The Support and Quality teams built, performance tested, and validated an
-environment that supports about 10,000 users. The specifications below are a
-representation of the work so far. The specifications may be adjusted in the
-future based on additional testing and iteration.
-
-NOTE: **Note:** The specifications here were performance tested against a specific coded workload. Your exact needs may be more, depending on your workload. Your workload is influenced by factors such as - but not limited to - how active your users are, how much automation you use, mirroring, and repo/change size.
-
-- 3 PostgreSQL - 4 CPU, 16GiB memory per node
-- 1 PgBouncer - 2 CPU, 4GiB memory
-- 2 Redis - 2 CPU, 8GiB memory per node
-- 3 Consul/Sentinel - 2 CPU, 2GiB memory per node
-- 4 Sidekiq - 4 CPU, 16GiB memory per node
-- 5 GitLab application nodes - 16 CPU, 64GiB memory per node
-- 1 Gitaly - 16 CPU, 64GiB memory
-- 1 Monitoring node - 2 CPU, 8GiB memory, 100GiB local storage
-
### Fully Distributed
This architecture scales to hundreds of thousands of users and projects and is
the basis of the GitLab.com architecture. While this scales well it also comes
-with the added complexity of many more nodes to configure, manage and monitor.
+with the added complexity of many more nodes to configure, manage, and monitor.
- 3 PostgreSQL nodes
- 4 or more Redis nodes (2 separate clusters for persistent and cache data)
@@ -214,3 +193,110 @@ separately:
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. [Monitoring node (Prometheus and Grafana)](monitoring_node.md)
+
+## Reference Architecture Examples
+
+These reference architecture examples rely on the general rule that approximately 2 requests per second (RPS) of load is generated for every 100 users.
+
+The specifications here were performance tested against a specific coded
+workload. Your exact needs may be more, depending on your workload. Your
+workload is influenced by factors such as - but not limited to - how active your
+users are, how much automation you use, mirroring, and repo/change size.
+
+### 10,000 User Configuration
+
+- **Supported Users (approximate):** 10,000
+- **RPS:** 200 requests per second
+- **Known Issues:** While validating the reference architecture, slow API endpoints
+ were discovered. For details, see the related issues list in
+ [this issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335).
+
+The Support and Quality teams built, performance tested, and validated an
+environment that supports about 10,000 users. The specifications below are a
+representation of the work so far. The specifications may be adjusted in the
+future based on additional testing and iteration.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 3 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 16 vCPU, 60GB Memory | n1-standard-16 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node             | 4 vCPU, 3.6GB Memory    | n1-highcpu-4   |
+| 1 Load Balancing node[^2]     | 2 vCPU, 1.8GB Memory    | n1-highcpu-2   |
+
+### 25,000 User Configuration
+
+- **Supported Users (approximate):** 25,000
+- **RPS:** 500 requests per second
+- **Known Issues:** The slow API endpoints that were discovered while testing
+ the 10,000 user architecture also affect the 25,000 user architecture. For
+ details, see the related issues list in
+ [this issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335).
+
+The GitLab Support and Quality teams built, performance tested, and validated an
+environment that supports around 25,000 users. The specifications below are a
+representation of the work so far. The specifications may be adjusted in the
+future based on additional testing and iteration.
+
+NOTE: **Note:** The specifications here were performance tested against a
+specific coded workload. Your exact needs may be more, depending on your
+workload. Your workload is influenced by factors such as - but not limited to -
+how active your users are, how much automation you use, mirroring, and
+repo/change size.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 7 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 8 vCPU, 30GB Memory | n1-standard-8 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 120GB Memory | n1-standard-32 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node             | 4 vCPU, 3.6GB Memory    | n1-highcpu-4   |
+| 1 Load Balancing node[^2]     | 2 vCPU, 1.8GB Memory    | n1-highcpu-2   |
+
+### 50,000 User Configuration
+
+- **Supported Users (approximate):** 50,000
+- **RPS:** 1,000 requests per second
+- **Status:** Work-in-progress
+- **Related Issue:** See the [related issue](https://gitlab.com/gitlab-org/quality/performance/issues/66) for more information.
+
+The Support and Quality teams are in the process of building and performance
+testing an environment that will support around 50,000 users. The specifications
+below are a very rough work-in-progress representation of the work so far. The
+Quality team will be certifying this environment in late 2019. The
+specifications may be adjusted prior to certification based on performance
+testing.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 15 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 8 vCPU, 30GB Memory | n1-standard-8 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 64 vCPU, 240GB Memory | n1-standard-64 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node             | 4 vCPU, 3.6GB Memory    | n1-highcpu-4   |
+| 1 Load Balancing node[^2]     | 2 vCPU, 1.8GB Memory    | n1-highcpu-2   |
+
+[^1]: Gitaly node requirements are dependent on customer data. We recommend 2
+ nodes as an absolute minimum for performance at the 10,000 and 25,000 user
+ scale and 4 nodes as an absolute minimum at the 50,000 user scale, but
+ additional nodes should be considered in conjunction with a review of
+ project counts and sizes.
+
+[^2]: HAProxy is the only tested and recommended load balancer. Additional
+ options may be supported in the future.
diff --git a/doc/administration/high_availability/consul.md b/doc/administration/high_availability/consul.md
index b02a61b9256..b01419200cc 100644
--- a/doc/administration/high_availability/consul.md
+++ b/doc/administration/high_availability/consul.md
@@ -6,7 +6,7 @@ type: reference
As part of its High Availability stack, GitLab Premium includes a bundled version of [Consul](https://www.consul.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
-A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the consul cluster.
+A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the Consul cluster.
## Prerequisites
@@ -96,7 +96,7 @@ Ideally all nodes will have a `Status` of `alive`.
**Note**: This section only applies to server agents. It is safe to restart client agents whenever needed.
-If it is necessary to restart the server cluster, it is important to do this in a controlled fashion in order to maintain quorum. If quorum is lost, you will need to follow the consul [outage recovery](#outage-recovery) process to recover the cluster.
+If it is necessary to restart the server cluster, it is important to do this in a controlled fashion in order to maintain quorum. If quorum is lost, you will need to follow the Consul [outage recovery](#outage-recovery) process to recover the cluster.
To be safe, we recommend you only restart one server agent at a time to ensure the cluster remains intact.
@@ -129,7 +129,7 @@ To fix this:
1. Run `gitlab-ctl reconfigure`
-If you still see the errors, you may have to [erase the consul database and reinitialize](#recreate-from-scratch) on the affected node.
+If you still see the errors, you may have to [erase the Consul database and reinitialize](#recreate-from-scratch) on the affected node.
### Consul agents do not start - Multiple private IPs
@@ -158,11 +158,11 @@ To fix this:
### Outage recovery
-If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervenetion.
+If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervention.
#### Recreate from scratch
-By default, GitLab does not store anything in the consul cluster that cannot be recreated. To erase the consul database and reinitialize
+By default, GitLab does not store anything in the Consul cluster that cannot be recreated. To erase the Consul database and reinitialize:
```
# gitlab-ctl stop consul
@@ -174,4 +174,4 @@ After this, the cluster should start back up, and the server agents rejoin. Shor
#### Recover a failed cluster
-If you have taken advantage of consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://www.consul.io/docs/guides/outage.html) to recover a failed cluster.
+If you have taken advantage of Consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://learn.hashicorp.com/consul/day-2-operations/outage) to recover a failed cluster.
diff --git a/doc/administration/high_availability/database.md b/doc/administration/high_availability/database.md
index 99582dae57a..a50cc0cbd03 100644
--- a/doc/administration/high_availability/database.md
+++ b/doc/administration/high_availability/database.md
@@ -153,9 +153,9 @@ Database nodes run two services with PostgreSQL:
- Instructing remaining servers to follow the new master node.
On failure, the old master node is automatically evicted from the cluster, and should be rejoined manually once recovered.
-- Consul. Monitors the status of each node in the database cluster and tracks its health in a service definition on the consul cluster.
+- Consul. Monitors the status of each node in the database cluster and tracks its health in a service definition on the Consul cluster.
-Alongside pgbouncer, there is a consul agent that watches the status of the PostgreSQL service. If that status changes, consul runs a script which updates the configuration and reloads pgbouncer
+Alongside PgBouncer, there is a Consul agent that watches the status of the PostgreSQL service. If that status changes, Consul runs a script which updates the configuration and reloads PgBouncer.
##### Connection flow
@@ -198,7 +198,7 @@ When using default setup, minimum configuration requires:
- `CONSUL_USERNAME`. Defaults to `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
-- `CONSUL_PASSWORD_HASH`. This is a hash generated out of consul username/password pair.
+- `CONSUL_PASSWORD_HASH`. This is a hash generated from the Consul username/password pair.
Can be generated with:
```sh
@@ -248,26 +248,26 @@ We will need the following password information for the application's database u
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```
-##### Pgbouncer information
+##### PgBouncer information
When using default setup, minimum configuration requires:
- `PGBOUNCER_USERNAME`. Defaults to `pgbouncer`
-- `PGBOUNCER_PASSWORD`. This is a password for pgbouncer service.
-- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of pgbouncer username/password pair.
+- `PGBOUNCER_PASSWORD`. This is the password for the PgBouncer service.
+- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated from the PgBouncer username/password pair.
Can be generated with:
```sh
sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
```
-- `PGBOUNCER_NODE`, is the IP address or a FQDN of the node running Pgbouncer.
+- `PGBOUNCER_NODE`. This is the IP address or an FQDN of the node running PgBouncer.
Few notes on the service itself:
- The service runs as the same system account as the database
- In the package, this is by default `gitlab-psql`
-- If you use a non-default user account for Pgbouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement with `PGBOUNCER_USERNAME`.
+- If you use a non-default user account for the PgBouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement as `PGBOUNCER_USERNAME`.
- The service will have a regular database user account generated for it
- This defaults to `repmgr`
- Passwords will be stored in the following locations:
@@ -315,7 +315,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
- # Configure the consul agent
+ # Configure the Consul agent
consul['services'] = %w(postgresql)
# START user configuration
@@ -348,7 +348,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
1. On secondary nodes, add all the configuration specified above for primary node
to `/etc/gitlab/gitlab.rb`. In addition, append the following configuration
- to inform gitlab-ctl that they are standby nodes initially and it need not
+ to inform `gitlab-ctl` that they are standby nodes initially and it need not
attempt to register them as primary node
```
@@ -363,7 +363,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
>
> - If you want your database to listen on a specific interface, change the config:
> `postgresql['listen_address'] = '0.0.0.0'`.
-> - If your Pgbouncer service runs under a different user account,
+> - If your PgBouncer service runs under a different user account,
> you also need to specify: `postgresql['pgbouncer_user'] = PGBOUNCER_USERNAME` in
> your configuration.
@@ -484,9 +484,9 @@ or secondary. The most important thing here is that this command does not produc
If there are errors it's most likely due to incorrect `gitlab-consul` database user permissions.
Check the [Troubleshooting section](#troubleshooting) before proceeding.
-#### Configuring the Pgbouncer node
+#### Configuring the PgBouncer node
-See our [documentation for Pgbouncer](pgbouncer.md) for information on running Pgbouncer as part of an HA setup.
+See our [documentation for PgBouncer](pgbouncer.md) for information on running PgBouncer as part of an HA setup.
#### Configuring the Application nodes
@@ -515,10 +515,10 @@ Ensure that all migrations ran:
gitlab-rake gitlab:db:configure
```
-> **Note**: If you encounter a `rake aborted!` error stating that PGBouncer is failing to connect to
-PostgreSQL it may be that your PGBouncer node's IP address is missing from
+> **Note**: If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
-[PGBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+[PgBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
in the Troubleshooting section before proceeding.
##### Ensure GitLab is running
@@ -533,7 +533,7 @@ Here we'll show you some fully expanded example configurations.
##### Example recommended setup
-This example uses 3 consul servers, 3 postgresql servers, and 1 application node.
+This example uses 3 Consul servers, 3 PostgreSQL servers, and 1 application node.
We start with all servers on the same 10.6.0.0/16 private network range; they
can connect to each other freely on those addresses.
@@ -589,7 +589,7 @@ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
-# Configure the consul agent
+# Configure the Consul agent
consul['services'] = %w(postgresql)
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
@@ -635,7 +635,7 @@ postgresql['enable'] = false
pgbouncer['enable'] = true
consul['enable'] = true
-# Configure Pgbouncer
+# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -691,7 +691,7 @@ After deploying the configuration follow these steps:
1. On `10.6.0.31`, our application server
- Set gitlab-consul's pgbouncer password to `toomanysecrets`
+ Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`
```sh
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
@@ -705,10 +705,10 @@ After deploying the configuration follow these steps:
#### Example minimal setup
-This example uses 3 postgresql servers, and 1 application node.
+This example uses 3 PostgreSQL servers, and 1 application node.
-It differs from the [recommended setup](#example-recommended-setup) by moving the consul servers into the same servers we use for PostgreSQL.
-The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with postgres [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [consul outage recovery](consul.md#outage-recovery) on the same set of machines.
+It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers into the same servers we use for PostgreSQL.
+The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](consul.md#outage-recovery) on the same set of machines.
In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
@@ -744,7 +744,7 @@ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
-# Configure the consul agent
+# Configure the Consul agent
consul['services'] = %w(postgresql)
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
@@ -788,7 +788,7 @@ postgresql['enable'] = false
pgbouncer['enable'] = true
consul['enable'] = true
-# Configure Pgbouncer
+# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -817,7 +817,7 @@ The manual steps for this configuration are the same as for the [example recomme
#### Failover procedure
By default, if the master database fails, `repmgrd` should promote one of the
-standby nodes to master automatically, and consul will update pgbouncer with
+standby nodes to master automatically, and Consul will update PgBouncer with
the new master.
If you need to failover manually, you have two options:
@@ -907,7 +907,7 @@ can require md5 authentication.
##### Trust specific addresses
-If you know the IP address, or FQDN of all database and pgbouncer nodes in the
+If you know the IP address, or FQDN of all database and PgBouncer nodes in the
cluster, you can trust only those nodes.
In `/etc/gitlab/gitlab.rb` on all of the database nodes, set
@@ -926,7 +926,7 @@ repmgr['trust_auth_cidr_addresses'] = %w(192.168.1.44/32 db2.example.com)
##### MD5 Authentication
If you are running on an untrusted network, repmgr can use md5 authentication
-with a [.pgpass file](https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html)
+with a [.pgpass file](https://www.postgresql.org/docs/9.6/libpq-pgpass.html)
to authenticate.
You can specify by IP address, FQDN, or by subnet, using the same format as in
@@ -950,7 +950,7 @@ the previous section:
1. Set `postgresql['md5_auth_cidr_addresses']` to the desired value
1. Set `postgresql['sql_replication_user'] = 'gitlab_repmgr'`
1. Reconfigure with `gitlab-ctl reconfigure`
- 1. Restart postgresql with `gitlab-ctl restart postgresql`
+ 1. Restart PostgreSQL with `gitlab-ctl restart postgresql`
 1. Create a `.pgpass` file. Enter the `gitlab_repmgr` password twice
when asked:
@@ -959,7 +959,7 @@ the previous section:
gitlab-ctl write-pgpass --user gitlab_repmgr --hostuser gitlab-psql --database '*'
```
-1. On each pgbouncer node, edit `/etc/gitlab/gitlab.rb`:
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`:
1. Ensure `gitlab_rails['db_password']` is set to the plaintext password for
the `gitlab` database user
1. [Reconfigure GitLab] for the changes to take effect
@@ -993,7 +993,7 @@ To restart either service, run `gitlab-ctl restart SERVICE`
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start afterwards with `gitlab-ctl start repmgrd`.
-On the consul server nodes, it is important to restart the consul service in a controlled fashion. Read our [consul documentation](consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
### `gitlab-ctl repmgr-check-master` command produces errors
@@ -1010,16 +1010,16 @@ steps to fix the problem:
Now there should not be errors. If errors still occur then there is another problem.
-### PGBouncer error `ERROR: pgbouncer cannot connect to server`
+### PgBouncer error `ERROR: pgbouncer cannot connect to server`
You may get this error when running `gitlab-rake gitlab:db:configure` or you
-may see the error in the PGBouncer log file.
+may see the error in the PgBouncer log file.
```
PG::ConnectionBad: ERROR: pgbouncer cannot connect to server
```
-The problem may be that your PGBouncer node's IP address is not included in the
+The problem may be that your PgBouncer node's IP address is not included in the
`trust_auth_cidr_addresses` setting in `/etc/gitlab/gitlab.rb` on the database nodes.
You can confirm that this is the issue by checking the PostgreSQL log on the master
@@ -1049,7 +1049,7 @@ If you're running into an issue with a component not outlined here, be sure to c
## Configure using Omnibus
**Note**: We recommend that you follow the instructions here for a full [PostgreSQL cluster](#high-availability-with-gitlab-omnibus-premium-only).
-If you are reading this section due to an old bookmark, you can find that old documentation [in the repository](https://gitlab.com/gitlab-org/gitlab-foss/blob/v10.1.4/doc/administration/high_availability/database.md#configure-using-omnibus).
+If you are reading this section due to an old bookmark, you can find that old documentation [in the repository](https://gitlab.com/gitlab-org/gitlab/blob/v10.1.4/doc/administration/high_availability/database.md#configure-using-omnibus).
Read more on high-availability configuration:
diff --git a/doc/administration/high_availability/gitlab.md b/doc/administration/high_availability/gitlab.md
index 0d1dd06871a..71ab169a801 100644
--- a/doc/administration/high_availability/gitlab.md
+++ b/doc/administration/high_availability/gitlab.md
@@ -40,7 +40,7 @@ these additional steps before proceeding with GitLab installation.
```
1. Download/install GitLab Omnibus using **steps 1 and 2** from
- [GitLab downloads](https://about.gitlab.com/downloads). Do not complete other
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
steps on the download page.
1. Create/edit `/etc/gitlab/gitlab.rb` and use the following configuration.
Be sure to change the `external_url` to match your eventual GitLab front-end
@@ -90,8 +90,8 @@ these additional steps before proceeding with GitLab installation.
NOTE: **Note:** When you specify `https` in the `external_url`, as in the example
above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
- certificates are not present, Nginx will fail to start. See
- [Nginx documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+ certificates are not present, NGINX will fail to start. See
+ [NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for more information.
NOTE: **Note:** It is best to set the `uid` and `gid`s prior to the initial reconfigure
@@ -99,14 +99,14 @@ these additional steps before proceeding with GitLab installation.
## First GitLab application server
-As a final step, run the setup rake task **only on** the first GitLab application server.
-Do not run this on additional application servers.
+On the first application server, run:
-1. Initialize the database by running `sudo gitlab-rake gitlab:setup`.
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+```sh
+sudo gitlab-ctl reconfigure
+```
- CAUTION: **WARNING:** Only run this setup task on **NEW** GitLab instances because it
- will wipe any existing data.
+This should compile the configuration and initialize the database. Do not
+run this on additional application servers; they are covered in the next section.
## Extra configuration for additional GitLab application servers
@@ -175,7 +175,7 @@ If you enable Monitoring, it must be enabled on **all** GitLab servers.
CAUTION: **Warning:**
After changing `unicorn['listen']` in `gitlab.rb`, and running `sudo gitlab-ctl reconfigure`,
- it can take an extended period of time for unicorn to complete reloading after receiving a `HUP`.
+ it can take an extended period of time for Unicorn to complete reloading after receiving a `HUP`.
For more information, see the [issue](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4401).
## Troubleshooting
diff --git a/doc/administration/high_availability/load_balancer.md b/doc/administration/high_availability/load_balancer.md
index f11d27487d1..43a7c442d8c 100644
--- a/doc/administration/high_availability/load_balancer.md
+++ b/doc/administration/high_availability/load_balancer.md
@@ -27,10 +27,10 @@ options:
Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
than 'HTTP(S)' protocol. This will pass the connection to the application nodes'
-Nginx service untouched. Nginx will have the SSL certificate and listen on port 443.
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
-See [Nginx HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring Nginx.
+See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
### Load Balancer(s) terminate SSL without backend SSL
@@ -40,7 +40,7 @@ terminating SSL.
Since communication between the load balancer(s) and GitLab will not be secure,
there is some additional configuration needed. See
-[Nginx Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
for details.
### Load Balancer(s) terminate SSL with backend SSL
@@ -49,12 +49,12 @@ Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will be responsible for managing SSL certificates that
end users will see.
-Traffic will also be secure between the load balancer(s) and Nginx in this
+Traffic will also be secure between the load balancer(s) and NGINX in this
scenario. There is no need to add configuration for proxied SSL since the
connection will be secure all the way. However, configuration will need to be
added to GitLab to configure SSL certificates. See
-[Nginx HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring Nginx.
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
## Ports
diff --git a/doc/administration/high_availability/monitoring_node.md b/doc/administration/high_availability/monitoring_node.md
index 0b04e48f74a..d293fc350fa 100644
--- a/doc/administration/high_availability/monitoring_node.md
+++ b/doc/administration/high_availability/monitoring_node.md
@@ -34,6 +34,9 @@ Omnibus:
prometheus['listen_address'] = '0.0.0.0:9090'
prometheus['monitor_kubernetes'] = false
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
# Enable Grafana
grafana['enable'] = true
grafana['admin_password'] = 'toomanysecrets'
@@ -63,6 +66,7 @@ Omnibus:
sidekiq['enable'] = false
unicorn['enable'] = false
node_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
diff --git a/doc/administration/high_availability/nfs.md b/doc/administration/high_availability/nfs.md
index 987db3c89a5..f7c5593e211 100644
--- a/doc/administration/high_availability/nfs.md
+++ b/doc/administration/high_availability/nfs.md
@@ -103,13 +103,12 @@ If you do choose to use EFS, avoid storing GitLab log files (e.g. those in `/var
there because this will also affect performance. We recommend that the log files be
stored on a local volume.
-For more details on another person's experience with EFS, see
-[Amazon's Elastic File System: Burst Credits](https://rawkode.com/2017/04/16/amazons-elastic-file-system-burst-credits/)
+For more details on another person's experience with EFS, see this [Commit Brooklyn 2019 video](https://youtu.be/K6OS8WodRBQ?t=313).
## Avoid using CephFS and GlusterFS
GitLab strongly recommends against using CephFS and GlusterFS.
-These distributed file systems are not well-suited for GitLab's input/output access patterns because git uses many small files and access times and file locking times to propagate will make git activity very slow.
+These distributed file systems are not well-suited for GitLab's input/output access patterns because Git uses many small files, and the time it takes for access and file locking changes to propagate makes Git activity very slow.
## Avoid using PostgreSQL with NFS
@@ -118,7 +117,7 @@ across NFS. The GitLab support team will not be able to assist on performance is
this configuration.
Additionally, this configuration is specifically warned against in the
-[Postgres Documentation](https://www.postgresql.org/docs/current/static/creating-cluster.html#CREATING-CLUSTER-NFS):
+[Postgres Documentation](https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-NFS):
>PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
>locally-connected drives. If the client or server NFS implementation does not provide standard file
@@ -147,7 +146,7 @@ Note there are several options that you should consider using:
## A single NFS mount
-It's recommended to nest all gitlab data dirs within a mount, that allows automatic
+It's recommended to nest all GitLab data directories within a mount, which allows automatic
restore of backups without manually moving existing data.
```
diff --git a/doc/administration/high_availability/nfs_host_client_setup.md b/doc/administration/high_availability/nfs_host_client_setup.md
index 9b0e085fe25..5b6b28bf633 100644
--- a/doc/administration/high_availability/nfs_host_client_setup.md
+++ b/doc/administration/high_availability/nfs_host_client_setup.md
@@ -11,7 +11,7 @@ setup act as clients while the NFS server plays host.
> Note: The instructions provided in this documentation allow for setting up a quick
proof of concept, but they will leave NFS as a potential single point of failure and
are therefore not recommended for use in production. Explore options such as [Pacemaker
-and Corosync](http://clusterlabs.org/) for highly available NFS in production.
+and Corosync](https://clusterlabs.org) for highly available NFS in production.
Below are instructions for setting up an application node (client) in an HA cluster
to read from and write to a central NFS server (host).
@@ -56,7 +56,7 @@ You may need to update your server's firewall. See the [firewall section](#nfs-i
## Client/ GitLab application node Setup
-> Follow the instructions below to connect any GitLab rails application node running
+> Follow the instructions below to connect any GitLab Rails application node running
inside your HA environment to the NFS server configured above.
### Step 1 - Install NFS Common on Client
@@ -108,7 +108,7 @@ When using the default Omnibus configuration you will need to share 5 data locat
between all GitLab cluster nodes. No other locations should be shared. Changing the
default file locations in `gitlab.rb` on the client allows you to have one main mount
point and have all the required locations as subdirectories to use the NFS mount for
-git-data.
+`git-data`.
```text
git_data_dirs({"default" => {"path" => "/nfs/home/var/opt/gitlab-data/git-data"}})
diff --git a/doc/administration/high_availability/pgbouncer.md b/doc/administration/high_availability/pgbouncer.md
index b99724d12a2..e7479ad1ecb 100644
--- a/doc/administration/high_availability/pgbouncer.md
+++ b/doc/administration/high_availability/pgbouncer.md
@@ -2,29 +2,29 @@
type: reference
---
-# Working with the bundle Pgbouncer service
+# Working with the bundle PgBouncer service
-As part of its High Availability stack, GitLab Premium includes a bundled version of [Pgbouncer](https://pgbouncer.github.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
+As part of its High Availability stack, GitLab Premium includes a bundled version of [PgBouncer](https://pgbouncer.github.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
-In a High Availability setup, Pgbouncer is used to seamlessly migrate database connections between servers in a failover scenario.
+In a High Availability setup, PgBouncer is used to seamlessly migrate database connections between servers in a failover scenario.
Additionally, it can be used in a non-HA setup to pool connections, speeding up response time while reducing resource usage.
-It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on its own dedicated node in a cluster.
+It is recommended to run PgBouncer alongside the `gitlab-rails` service, or on its own dedicated node in a cluster.
## Operations
-### Running Pgbouncer as part of an HA GitLab installation
+### Running PgBouncer as part of an HA GitLab installation
1. Make sure you collect [`CONSUL_SERVER_NODES`](database.md#consul-information), [`CONSUL_PASSWORD_HASH`](database.md#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](database.md#pgbouncer-information) before executing the next step.
1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
```ruby
- # Disable all components except Pgbouncer and Consul agent
+ # Disable all components except PgBouncer and Consul agent
roles ['pgbouncer_role']
- # Configure Pgbouncer
+ # Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -59,13 +59,13 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
1. Create a `.pgpass` file so Consul is able to
- reload pgbouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
+ reload PgBouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
```sh
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
-#### PGBouncer Checkpoint
+#### PgBouncer Checkpoint
1. Ensure the node is talking to the current master:
@@ -100,7 +100,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
(2 rows)
```
-### Running Pgbouncer as part of a non-HA GitLab installation
+### Running PgBouncer as part of a non-HA GitLab installation
1. Generate PGBOUNCER_USER_PASSWORD_HASH with the command `gitlab-ctl pg-password-md5 pgbouncer`
@@ -119,7 +119,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
**Note:** If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
-1. On the node you are running pgbouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
+1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
pgbouncer['enable'] = true
@@ -134,7 +134,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
-1. On the node running unicorn, make sure the following is set in `/etc/gitlab/gitlab.rb`
+1. On the node running Unicorn, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
@@ -144,13 +144,13 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
-1. At this point, your instance should connect to the database through pgbouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
+1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
## Enable Monitoring
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3786) in GitLab 12.0.
-If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
+If you enable Monitoring, it must be enabled on **all** PgBouncer servers.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
@@ -173,11 +173,11 @@ If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-### Interacting with pgbouncer
+### Interacting with PgBouncer
#### Administrative console
-As part of omnibus-gitlab, we provide a command `gitlab-ctl pgb-console` to automatically connect to the pgbouncer administrative console. Please see the [pgbouncer documentation](https://pgbouncer.github.io/usage.html#admin-console) for detailed instructions on how to interact with the console.
+As part of Omnibus GitLab, we provide a command `gitlab-ctl pgb-console` to automatically connect to the PgBouncer administrative console. Please see the [PgBouncer documentation](https://pgbouncer.github.io/usage.html#admin-console) for detailed instructions on how to interact with the console.
To start a session, run
@@ -235,7 +235,7 @@ ote_pid | tls
## Troubleshooting
-In case you are experiencing any issues connecting through pgbouncer, the first place to check is always the logs:
+In case you are experiencing any issues connecting through PgBouncer, the first place to check is always the logs:
```shell
# gitlab-ctl tail pgbouncer
diff --git a/doc/administration/high_availability/redis.md b/doc/administration/high_availability/redis.md
index aa616ec91d8..ba4599e5bcd 100644
--- a/doc/administration/high_availability/redis.md
+++ b/doc/administration/high_availability/redis.md
@@ -68,7 +68,7 @@ Omnibus:
gitaly['enable'] = false
redis['bind'] = '0.0.0.0'
- redis['port'] = '6379'
+ redis['port'] = 6379
redis['password'] = 'SECRET_PASSWORD_HERE'
gitlab_rails['auto_migrate'] = false
@@ -313,12 +313,12 @@ Pick the one that suits your needs.
- [Installations from source][source]: You need to install Redis and Sentinel
yourself. Use the [Redis HA installation from source](redis_source.md)
documentation.
-- [Omnibus GitLab **Community Edition** (CE) package][ce]: Redis is bundled, so you
+- [Omnibus GitLab **Community Edition** (CE) package](https://about.gitlab.com/install/?version=ce): Redis is bundled, so you
can use the package with only the Redis service enabled as described in steps
1 and 2 of this document (works for both master and slave setups). To install
and configure Sentinel, jump directly to the Sentinel section in the
[Redis HA installation from source](redis_source.md#step-3-configuring-the-redis-sentinel-instances) documentation.
-- [Omnibus GitLab **Enterprise Edition** (EE) package][ee]: Both Redis and Sentinel
+- [Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee): Both Redis and Sentinel
are bundled in the package, so you can use the EE package to set up the whole
Redis HA infrastructure (master, slave and Sentinel) which is described in
this document.
@@ -397,7 +397,7 @@ The prerequisites for a HA Redis setup are the following:
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
-> Note: You can specify multiple roles like sentinel and redis as:
+> Note: You can specify multiple roles like sentinel and Redis as:
> `roles ['redis_sentinel_role', 'redis_master_role']`. Read more about high
> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
@@ -446,7 +446,7 @@ The prerequisites for a HA Redis setup are the following:
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
1. Go through the steps again for all the other slave nodes.
-> Note: You can specify multiple roles like sentinel and redis as:
+> Note: You can specify multiple roles like sentinel and Redis as:
> `roles ['redis_sentinel_role', 'redis_slave_role']`. Read more about high
> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
@@ -628,7 +628,7 @@ single-machine install, to rotate the **Master** to one of the new nodes.
Make the required changes in configuration and restart the new nodes again.
-To disable redis in the single install, edit `/etc/gitlab/gitlab.rb`:
+To disable Redis in the single install, edit `/etc/gitlab/gitlab.rb`:
```ruby
redis['enable'] = false
@@ -902,7 +902,7 @@ You can check if everything is correct by connecting to each server using
/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
```
-When connected to a `master` redis, you will see the number of connected
+When connected to a `master` Redis, you will see the number of connected
`slaves`, and a list of each with connection details:
```
@@ -948,7 +948,7 @@ to [this issue][gh-531].
You must make sure you are defining the same value in `redis['master_name']`
and `redis['master_password']` as you defined for your sentinel node.
-The way the redis connector `redis-rb` works with sentinel is a bit
+The way the Redis connector `redis-rb` works with sentinel is a bit
non-intuitive. We try to hide the complexity in Omnibus, but it still requires
a few extra configuration steps.
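A minimal sketch of the values that must match on the application and Sentinel nodes (these use the example master name from this document; the password is a placeholder):
```ruby
# /etc/gitlab/gitlab.rb
redis['master_name'] = 'gitlab-redis'
redis['master_password'] = 'redis-password-goes-here'
```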
@@ -1025,6 +1025,4 @@ Read more on High Availability:
[sentinel]: https://redis.io/topics/sentinel
[omnifile]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/libraries/gitlab_rails.rb
[source]: ../../install/installation.md
-[ce]: https://about.gitlab.com/downloads
-[ee]: https://about.gitlab.com/downloads-ee
[it]: https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png
diff --git a/doc/administration/high_availability/redis_source.md b/doc/administration/high_availability/redis_source.md
index 0758b240a25..9ab9d9a206d 100644
--- a/doc/administration/high_availability/redis_source.md
+++ b/doc/administration/high_availability/redis_source.md
@@ -341,7 +341,7 @@ to [this upstream issue][gh-531].
You must make sure that `resque.yml` and `sentinel.conf` are configured correctly,
otherwise `redis-rb` will not work properly.
-The `master-group-name` ('gitlab-redis') defined in (`sentinel.conf`)
+The `master-group-name` (`gitlab-redis`) defined in `sentinel.conf`
**must** be used as the hostname in GitLab (`resque.yml`):
```conf
@@ -374,4 +374,4 @@ When in doubt, please read [Redis Sentinel documentation](https://redis.io/topic
[downloads]: https://about.gitlab.com/downloads
[restart]: ../restart_gitlab.md#installations-from-source
[it]: https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png
-[resque]: https://gitlab.com/gitlab-org/gitlab-foss/blob/master/config/resque.yml.example
+[resque]: https://gitlab.com/gitlab-org/gitlab/blob/master/config/resque.yml.example
diff --git a/doc/administration/housekeeping.md b/doc/administration/housekeeping.md
index 43c9679be65..9083619841e 100644
--- a/doc/administration/housekeeping.md
+++ b/doc/administration/housekeeping.md
@@ -1,6 +1,6 @@
# Housekeeping
-> [Introduced][ce-2371] in GitLab 8.4.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/3041) in GitLab 8.4.
## Automatic housekeeping
@@ -23,16 +23,12 @@ For example in the following scenario a `git repack -d` will be executed:
When the `pushes_since_gc` value is 50, a `repack -A -d --pack-kept-objects` will run. Similarly, when
the `pushes_since_gc` value is 200, a `git gc` will be run.
-- `git gc` ([man page][man-gc]) runs a number of housekeeping tasks,
+- `git gc` ([man page](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-gc.html)) runs a number of housekeeping tasks,
such as compressing file revisions (to reduce disk space and increase performance)
and removing unreachable objects which may have been created from prior invocations of
`git add`.
-- `git repack` ([man page][man-repack]) re-organize existing packs into a single, more efficient pack.
+- `git repack` ([man page](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-repack.html)) re-organizes existing packs into a single, more efficient pack.
You can find this option under your project's **Settings > General > Advanced**.
![Housekeeping settings](img/housekeeping_settings.png)
-
-[ce-2371]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/2371 "Housekeeping merge request"
-[man-gc]: https://www.kernel.org/pub/software/scm/git/docs/git-gc.html "git gc man page"
-[man-repack]: https://www.kernel.org/pub/software/scm/git/docs/git-repack.html
diff --git a/doc/administration/img/integration/plantuml-example.png b/doc/administration/img/integration/plantuml-example.png
deleted file mode 100644
index 3e0d6389cbd..00000000000
--- a/doc/administration/img/integration/plantuml-example.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/incoming_email.md b/doc/administration/incoming_email.md
index 45634d50b91..88cf702cf0e 100644
--- a/doc/administration/incoming_email.md
+++ b/doc/administration/incoming_email.md
@@ -58,7 +58,7 @@ this method only supports replies, and not the other features of [incoming email
## Set it up
If you want to use Gmail / Google Apps for incoming emails, make sure you have
-[IMAP access enabled](https://support.google.com/mail/troubleshooter/1668960?hl=en#ts=1665018)
+[IMAP access enabled](https://support.google.com/mail/answer/7126229)
and [allowed less secure apps to access the account](https://support.google.com/accounts/answer/6010255)
or [turn on 2-step validation](https://support.google.com/accounts/answer/185839)
and use [an application password](https://support.google.com/mail/answer/185833).
diff --git a/doc/administration/index.md b/doc/administration/index.md
index 6d40039026d..f90b9b2c7d5 100644
--- a/doc/administration/index.md
+++ b/doc/administration/index.md
@@ -97,7 +97,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
### GitLab platform integrations
-- [Mattermost](https://docs.gitlab.com/omnibus/gitlab-mattermost/): Integrate with [Mattermost](https://about.mattermost.com/), an open source, private cloud workplace for web messaging.
+- [Mattermost](https://docs.gitlab.com/omnibus/gitlab-mattermost/): Integrate with [Mattermost](https://mattermost.com), an open source, private cloud workplace for web messaging.
- [PlantUML](integration/plantuml.md): Create simple diagrams in AsciiDoc and Markdown documents
created in snippets, wikis, and repos.
- [Web terminals](integration/terminal.md): Provide terminal access to your applications deployed to Kubernetes from within GitLab's CI/CD [environments](../ci/environments.md#web-terminals).
@@ -154,7 +154,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
- [Enable/disable GitLab CI/CD](../ci/enable_or_disable_ci.md#site-wide-admin-setting): Enable or disable GitLab CI/CD for your instance.
- [GitLab CI/CD admin settings](../user/admin_area/settings/continuous_integration.md): Enable or disable Auto DevOps site-wide and define the artifacts' max size and expiration time.
- [Job artifacts](job_artifacts.md): Enable, disable, and configure job artifacts (a set of files and directories which are outputted by a job when it completes successfully).
-- [Job traces](job_traces.md): Information about the job traces (logs).
+- [Job logs](job_logs.md): Information about the job logs.
- [Register Shared and specific Runners](../ci/runners/README.md#registering-a-shared-runner): Learn how to register and configure Shared and specific Runners to your own instance.
- [Shared Runners pipelines quota](../user/admin_area/settings/continuous_integration.md#shared-runners-pipeline-minutes-quota-starter-only): Limit the usage of pipeline minutes for Shared Runners. **(STARTER ONLY)**
- [Enable/disable Auto DevOps](../topics/autodevops/index.md#enablingdisabling-auto-devops): Enable or disable Auto DevOps for your instance.
@@ -193,7 +193,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
- [Debugging tips](troubleshooting/debug.md): Tips to debug problems when things go wrong
- [Log system](logs.md): Where to look for logs.
- [Sidekiq Troubleshooting](troubleshooting/sidekiq.md): Debug when Sidekiq appears hung and is not processing jobs.
-- [Troubleshooting ElasticSearch](troubleshooting/elasticsearch.md)
+- [Troubleshooting Elasticsearch](troubleshooting/elasticsearch.md)
### Support Team Docs
@@ -212,8 +212,9 @@ who are aware of the risks.
- [Useful diagnostics tools](troubleshooting/diagnostics_tools.md)
- [Useful Linux commands](troubleshooting/linux_cheat_sheet.md)
- [Troubleshooting Kubernetes](troubleshooting/kubernetes_cheat_sheet.md)
+- [Troubleshooting PostgreSQL](troubleshooting/postgresql.md)
- [Guide to test environments](troubleshooting/test_environments.md) (for Support Engineers)
-- [GitLab rails console commands](troubleshooting/gitlab_rails_cheat_sheet.md) (for Support Engineers)
+- [GitLab Rails console commands](troubleshooting/gitlab_rails_cheat_sheet.md) (for Support Engineers)
- Useful links:
- [GitLab Developer Docs](../development/README.md)
- [Repairing and recovering broken Git repositories](https://git.seveas.net/repairing-and-recovering-broken-git-repositories.html)
diff --git a/doc/administration/integration/plantuml.md b/doc/administration/integration/plantuml.md
index 67e1729e7fd..e595c640aac 100644
--- a/doc/administration/integration/plantuml.md
+++ b/doc/administration/integration/plantuml.md
@@ -21,6 +21,28 @@ docker run -d --name plantuml -p 8080:8080 plantuml/plantuml-server:tomcat
The **PlantUML URL** will be the hostname of the server running the container.
+When GitLab is running in Docker, it will need access to the PlantUML container.
+The easiest way to achieve that is by using [Docker Compose](https://docs.docker.com/compose/).
+
+A simple `docker-compose.yml` file would be:
+
+```yaml
+version: "3"
+services:
+ gitlab:
+ image: 'gitlab/gitlab-ce:12.2.5-ce.0'
+ environment:
+ GITLAB_OMNIBUS_CONFIG: |
+ nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
+
+ plantuml:
+ image: 'plantuml/plantuml-server:tomcat'
+ container_name: plantuml
+```
+
+In this scenario, PlantUML will be accessible for GitLab at the URL
+`http://plantuml:8080/`.
+
### Debian/Ubuntu
Installing and configuring your
@@ -54,6 +76,10 @@ http://localhost:8080/plantuml
you can change these defaults by editing the `/etc/tomcat7/server.xml` file.
+Note that the default URL is different from the Docker-based image, where the
+service is available at the root of the URL with no relative path. Adjust the
+configuration below accordingly.
+
### Making local PlantUML accessible using custom GitLab setup
The PlantUML server runs locally on your server, so it is not accessible
@@ -61,12 +87,22 @@ externally. As such, it is necessary to catch external PlantUML calls and
redirect them to the local server.
The idea is to redirect each call to `https://gitlab.example.com/-/plantuml/`
-to the local PlantUML server `http://localhost:8080/plantuml`.
+to the local PlantUML server `http://plantuml:8080/` or `http://localhost:8080/plantuml/`, depending on your setup.
To enable the redirection, add the following line in `/etc/gitlab/gitlab.rb`:
```ruby
-nginx['custom_gitlab_server_config'] = "location /-/plantuml { \n proxy_cache off; \n proxy_pass http://127.0.0.1:8080; \n}\n"
+# Docker deployment
+nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
+
+# Built from source
+nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://127.0.0.1:8080/plantuml/; \n}\n"
+```
+
+To activate the changes, run the following command:
+
+```sh
+sudo gitlab-ctl reconfigure
```
## GitLab
@@ -89,7 +125,7 @@ our AsciiDoc snippets, wikis and repos using delimited blocks:
~~~markdown
```plantuml
Bob -> Alice : hello
- Alice -> Bob : Go Away
+ Alice -> Bob : hi
```
~~~
@@ -99,7 +135,7 @@ our AsciiDoc snippets, wikis and repos using delimited blocks:
[plantuml, format="png", id="myDiagram", width="200px"]
----
Bob->Alice : hello
- Alice -> Bob : Go Away
+ Alice -> Bob : hi
----
```
@@ -110,7 +146,7 @@ our AsciiDoc snippets, wikis and repos using delimited blocks:
:caption: Caption with **bold** and *italic*
Bob -> Alice: hello
- Alice -> Bob: Go Away
+ Alice -> Bob: hi
```
You can also use the `uml::` directive for compatibility with [sphinxcontrib-plantuml](https://pypi.org/project/sphinxcontrib-plantuml/), but please note that we currently only support the `caption` option.
@@ -119,7 +155,10 @@ The above blocks will be converted to an HTML img tag with source pointing to th
PlantUML instance. If the PlantUML server is correctly configured, this should
render a nice diagram instead of the block:
-![PlantUML Integration](../img/integration/plantuml-example.png)
+```plantuml
+Bob -> Alice : hello
+Alice -> Bob : hi
+```
Inside the block you can add any of the supported diagrams by PlantUML such as
[Sequence](http://plantuml.com/sequence-diagram), [Use Case](http://plantuml.com/use-case-diagram),
diff --git a/doc/administration/integration/terminal.md b/doc/administration/integration/terminal.md
index dbc61c82061..1af15648b97 100644
--- a/doc/administration/integration/terminal.md
+++ b/doc/administration/integration/terminal.md
@@ -60,8 +60,8 @@ guides document the necessary steps for a selection of popular reverse proxies:
- [Apache](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html)
- [NGINX](https://www.nginx.com/blog/websocket-nginx/)
-- [HAProxy](http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/)
-- [Varnish](https://www.varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html)
+- [HAProxy](https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/)
+- [Varnish](https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html)
Workhorse won't let WebSocket requests through to non-WebSocket endpoints, so
it's safe to enable support for these headers globally. If you'd rather have a
diff --git a/doc/administration/issue_closing_pattern.md b/doc/administration/issue_closing_pattern.md
index 0e34505c2b0..7b815143597 100644
--- a/doc/administration/issue_closing_pattern.md
+++ b/doc/administration/issue_closing_pattern.md
@@ -13,11 +13,11 @@ in the project's default branch.
In order to change the pattern you need to have access to the server that GitLab
is installed on.
-The default pattern can be located in [`gitlab.yml.example`](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/config/gitlab.yml.example)
+The default pattern can be located in [`gitlab.yml.example`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/gitlab.yml.example)
under the "Automatic issue closing" section.
> **Tip:**
-You are advised to use <http://rubular.com> to test the issue closing pattern.
+You are advised to use <https://rubular.com> to test the issue closing pattern.
Because Rubular doesn't understand `%{issue_ref}`, you can replace this by
`#\d+` when testing your patterns, which matches only local issue references like `#123`.
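For a quick local sanity check you can also test a pattern in a Ruby console (a sketch; the regex below is a simplified stand-in, not the actual default closing pattern):
```ruby
pattern = /\b[Ff]ix(?:e[sd])?\b.*#\d+/
pattern.match?("Fixes #123") # => true
```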
diff --git a/doc/administration/job_artifacts.md b/doc/administration/job_artifacts.md
index 913321012e4..ec2f40700f5 100644
--- a/doc/administration/job_artifacts.md
+++ b/doc/administration/job_artifacts.md
@@ -90,9 +90,9 @@ This configuration relies on valid AWS credentials to be configured already.
Use an object storage option like AWS S3 to store job artifacts.
DANGER: **Danger:**
-If you're enabling S3 in [GitLab HA](high_availability/README.md), you will need to have an [NFS mount set up for CI traces and artifacts](high_availability/nfs.md#a-single-nfs-mount) or enable [live tracing](job_traces.md#new-live-trace-architecture). If these settings are not set, you will risk job traces disappearing or not being saved.
+If you're enabling S3 in [GitLab HA](high_availability/README.md), you will need to have an [NFS mount set up for CI logs and artifacts](high_availability/nfs.md#a-single-nfs-mount) or enable [incremental logging](job_logs.md#new-incremental-logging-architecture). If these settings are not set, you will risk job logs disappearing or not being saved.
-### Object Storage Settings
+#### Object Storage Settings
For source installations the following settings are nested under `artifacts:` and then `object_store:`. On Omnibus GitLab installs they are prefixed by `artifacts_object_store_`.
@@ -105,7 +105,7 @@ For source installations the following settings are nested under `artifacts:` an
| `proxy_download` | Set to true to proxy all files served. When left as `false`, clients download directly from remote storage instead of having all data proxied, which reduces egress traffic | `false` |
| `connection` | Various connection options described below | |
-#### S3 compatible connection settings
+##### S3 compatible connection settings
The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
@@ -118,7 +118,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
+| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
@@ -188,6 +188,14 @@ _The artifacts are stored by default in
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
+### Migrating from object storage to local storage
+
+In order to migrate back to local storage:
+
+1. Set both `direct_upload` and `background_upload` to false under the artifacts object storage settings. Don't forget to restart GitLab.
+1. Run `rake gitlab:artifacts:migrate_to_local` on your console.
+1. Disable `object_storage` for artifacts in `gitlab.rb`. Remember to restart GitLab afterwards.
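+
+For an Omnibus installation, the steps above might look like the following
+sketch (the object storage settings live under `gitlab_rails['artifacts_object_store_*']`
+in `/etc/gitlab/gitlab.rb`):
+
+```sh
+# After setting direct_upload and background_upload to false and reconfiguring:
+sudo gitlab-rake gitlab:artifacts:migrate_to_local
+# Then disable object storage for artifacts in gitlab.rb and reconfigure again.
+```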
+
## Expiring artifacts
If an expiry date is used for the artifacts, they are marked for deletion
diff --git a/doc/administration/job_logs.md b/doc/administration/job_logs.md
new file mode 100644
index 00000000000..d6d56515ac6
--- /dev/null
+++ b/doc/administration/job_logs.md
@@ -0,0 +1,169 @@
+# Job logs
+
+> [Renamed from Job Traces to Job logs](https://gitlab.com/gitlab-org/gitlab/issues/29121) in 12.4.
+
+Job logs (traces) are sent by GitLab Runner while it's processing a job. You can see
+logs in job pages, pipelines, email notifications, etc.
+
+## Data flow
+
+In general, there are two states for job logs: `log` and `archived log`.
+In the following table you can see the phases a log goes through:
+
+| Phase | State | Condition | Data flow | Stored path |
+| -------------- | ------------ | ----------------------- | -----------------------------------------| ----------- |
+| 1: patching | log | When a job is running | GitLab Runner => Unicorn => file storage | `#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log` |
+| 2: overwriting | log | When a job is finished | GitLab Runner => Unicorn => file storage | `#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log` |
+| 3: archiving | archived log | After a job is finished | Sidekiq moves log to artifacts folder | `#{ROOT_PATH}/gitlab-rails/shared/artifacts/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
+| 4: uploading | archived log | After a log is archived | Sidekiq moves archived log to [object storage](#uploading-logs-to-object-storage) (if configured) | `#{bucket_name}/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
+
+The `ROOT_PATH` varies per environment. For Omnibus GitLab it
+would be `/var/opt/gitlab`, and for installations from source
+it would be `/home/git/gitlab`.
+
+## Changing the job logs local location
+
+To change the location where the job logs will be stored, follow the steps below.
+
+**In Omnibus installations:**
+
+1. Edit `/etc/gitlab/gitlab.rb` and add or amend the following line:
+
+ ```ruby
+ gitlab_ci['builds_directory'] = '/mnt/to/gitlab-ci/builds'
+ ```
+
+1. Save the file and [reconfigure GitLab][] for the changes to take effect.
+
+---
+
+**In installations from source:**
+
+1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
+
+ ```yaml
+ gitlab_ci:
+ # The location where build logs are stored (default: builds/).
+ # Relative paths are relative to Rails.root.
+ builds_path: path/to/builds/
+ ```
+
+1. Save the file and [restart GitLab][] for the changes to take effect.
+
+[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
+[restart gitlab]: restart_gitlab.md#installations-from-source "How to restart GitLab"
+
+## Uploading logs to object storage
+
+Archived logs are considered as [job artifacts](job_artifacts.md).
+Therefore, when you [set up the object storage integration](job_artifacts.md#object-storage-settings),
+job logs are automatically migrated to it along with the other job artifacts.
+
+See "Phase 4: uploading" in [Data flow](#data-flow) to learn about the process.
+
+## How to remove job logs
+
+There isn't a way to automatically expire old job logs, but it's safe to remove
+them if they're taking up too much space. If you remove the logs manually, the
+job output in the UI will be empty.
+
+## New incremental logging architecture
+
+> [Introduced][ce-18169] in GitLab 10.4.
+> [Announced as General availability][ce-46097] in GitLab 11.0.
+
+NOTE: **Note:**
+This feature is off by default. See below for how to [enable or disable](#enabling-incremental-logging) it.
+
+By combining the process with object storage settings, we can completely bypass
+the local file storage. This is a useful option if GitLab is installed as
+cloud-native, for example on Kubernetes.
+
+The data flow is the same as described in the [data flow section](#data-flow)
+with one change: _the stored path of the first two phases is different_. This incremental
+log architecture stores chunks of logs in Redis and a persistent store (object storage or database) instead of
+file storage. Redis is used as first-class storage, and it stores up to 128KB
+of data. Once the full chunk is sent, it is flushed to a persistent store, either object storage (in a temporary directory) or the database.
+After a while, the data in Redis and the persistent store will be archived to [object storage](#uploading-logs-to-object-storage).
+
+The data are stored in the following Redis namespace: `Gitlab::Redis::SharedState`.
+
+Here is the detailed data flow:
+
+1. GitLab Runner picks a job from GitLab
+1. GitLab Runner sends a piece of log to GitLab
+1. GitLab appends the data to Redis
+1. Once the data in Redis reaches 128KB, it is flushed to a persistent store (object storage or the database).
+1. The above steps are repeated until the job is finished.
+1. Once the job is finished, GitLab schedules a Sidekiq worker to archive the log.
+1. The Sidekiq worker archives the log to object storage and cleans up the log
+ in Redis and a persistent store (object storage or the database).
+
+### Enabling incremental logging
+
+The following commands are to be issued in a Rails console:
+
+```sh
+# Omnibus GitLab
+gitlab-rails console
+
+# Installation from source
+cd /home/git/gitlab
+sudo -u git -H bin/rails console RAILS_ENV=production
+```
+
+**To check if incremental logging (trace) is enabled:**
+
+```ruby
+Feature.enabled?('ci_enable_live_trace')
+```
+
+**To enable incremental logging (trace):**
+
+```ruby
+Feature.enable('ci_enable_live_trace')
+```
+
+NOTE: **Note:**
+The transition period will be handled gracefully. Upcoming logs will be
+generated with the incremental architecture, and on-going logs will stay with the
+legacy architecture, which means that on-going logs won't be forcibly
+re-generated with the incremental architecture.
+
+**To disable incremental logging (trace):**
+
+```ruby
+Feature.disable('ci_enable_live_trace')
+```
+
+NOTE: **Note:**
+The transition period will be handled gracefully. Upcoming logs will be generated
+with the legacy architecture, and on-going incremental logs will stay with the incremental
+architecture, which means that on-going incremental logs won't be forcibly re-generated
+with the legacy architecture.
+
+### Potential implications
+
+In some cases, having data stored on Redis could incur data loss:
+
+1. **Case 1: When all data in Redis are accidentally flushed**
+ - Ongoing incremental logs could be recovered by re-sending logs (this is
+ supported by all versions of the GitLab Runner).
+ - Finished jobs which have not archived incremental logs will lose the last part
+ (~128KB) of log data.
+
+1. **Case 2: When Sidekiq workers fail to archive (e.g., there was a bug that
+ prevents the archiving process, Sidekiq inconsistency, etc.)**
+ - Currently, all log data in Redis will be deleted after one week. If the
+ Sidekiq workers can't finish by the expiry date, that part of the log data will be lost.
+
+Another issue that might arise is that it could consume all memory on the Redis
+instance. If the number of jobs is 1000, 128MB (128KB * 1000) is consumed.
+
+Also, it could increase database replication lag. `INSERT`s are generated to
+indicate that we have a log chunk. `UPDATE`s with 128KB of data are issued once we
+receive multiple chunks.
+
+[ce-18169]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/18169
+[ce-21193]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/21193
+[ce-46097]: https://gitlab.com/gitlab-org/gitlab-foss/issues/46097
diff --git a/doc/administration/job_traces.md b/doc/administration/job_traces.md
index 8a68f82d2fc..d0b346a931e 100644
--- a/doc/administration/job_traces.md
+++ b/doc/administration/job_traces.md
@@ -1,207 +1,5 @@
-# Job traces (logs)
-
-Job traces are sent by GitLab Runner while it's processing a job. You can see
-traces in job pages, pipelines, email notifications, etc.
-
-## Data flow
-
-In general, there are two states in job traces: "live trace" and "archived trace".
-In the following table you can see the phases a trace goes through.
-
-| Phase | State | Condition | Data flow | Stored path |
-| ----- | ----- | --------- | --------- | ----------- |
-| 1: patching | Live trace | When a job is running | GitLab Runner => Unicorn => file storage |`#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log`|
-| 2: overwriting | Live trace | When a job is finished | GitLab Runner => Unicorn => file storage |`#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log`|
-| 3: archiving | Archived trace | After a job is finished | Sidekiq moves live trace to artifacts folder |`#{ROOT_PATH}/gitlab-rails/shared/artifacts/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log`|
-| 4: uploading | Archived trace | After a trace is archived | Sidekiq moves archived trace to [object storage](#uploading-traces-to-object-storage) (if configured) |`#{bucket_name}/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log`|
-
-The `ROOT_PATH` varies per your environment. For Omnibus GitLab it
-would be `/var/opt/gitlab`, whereas for installations from source
-it would be `/home/git/gitlab`.
-
-## Changing the job traces local location
-
-To change the location where the job logs will be stored, follow the steps below.
-
-**In Omnibus installations:**
-
-1. Edit `/etc/gitlab/gitlab.rb` and add or amend the following line:
-
- ```ruby
- gitlab_ci['builds_directory'] = '/mnt/to/gitlab-ci/builds'
- ```
-
-1. Save the file and [reconfigure GitLab][] for the changes to take effect.
-
+---
+redirect_to: 'job_logs.md'
---
-**In installations from source:**
-
-1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
-
- ```yaml
- gitlab_ci:
- # The location where build traces are stored (default: builds/).
- # Relative paths are relative to Rails.root.
- builds_path: path/to/builds/
- ```
-
-1. Save the file and [restart GitLab][] for the changes to take effect.
-
-[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
-[restart gitlab]: restart_gitlab.md#installations-from-source "How to restart GitLab"
-
-## Uploading traces to object storage
-
-Archived traces are considered as [job artifacts](job_artifacts.md).
-Therefore, when you [set up the object storage integration](job_artifacts.md#object-storage-settings),
-job traces are automatically migrated to it along with the other job artifacts.
-
-See "Phase 4: uploading" in [Data flow](#data-flow) to learn about the process.
-
-## How to archive legacy job trace files
-
-Legacy job traces, which were created before GitLab 10.5, were not archived regularly.
-It's the same state with the "2: overwriting" in the above [Data flow](#data-flow).
-To archive those legacy job traces, please follow the instruction below.
-
-1. Execute the following command
-
- ```bash
- gitlab-rake gitlab:traces:archive
- ```
-
- After you executed this task, GitLab instance queues up Sidekiq jobs (asynchronous processes)
- for migrating job trace files from local storage to object storage.
- It could take time to complete the all migration jobs. You can check the progress by the following command
-
- ```bash
- sudo gitlab-rails console
- ```
-
- ```bash
- [1] pry(main)> Sidekiq::Stats.new.queues['pipeline_background:archive_trace']
- => 100
- ```
-
- If the count becomes zero, the archiving processes are done
-
-## How to migrate archived job traces to object storage
-
-> [Introduced][ce-21193] in GitLab 11.3.
-
-If job traces have already been archived into local storage, and you want to migrate those traces to object storage, please follow the instruction below.
-
-1. Ensure [Object storage integration for Job Artifacts](job_artifacts.md#object-storage-settings) is enabled
-1. Execute the following command
-
- ```bash
- gitlab-rake gitlab:traces:migrate
- ```
-
-## How to remove job traces
-
-There isn't a way to automatically expire old job logs, but it's safe to remove
-them if they're taking up too much space. If you remove the logs manually, the
-job output in the UI will be empty.
-
-## New live trace architecture
-
-> [Introduced][ce-18169] in GitLab 10.4.
-> [Announced as General availability][ce-46097] in GitLab 11.0.
-
-NOTE: **Note:**
-This feature is off by default. Check below how to [enable/disable](#enabling-live-trace) it.
-
-By combining the process with object storage settings, we can completely bypass
-the local file storage. This is a useful option if GitLab is installed as
-cloud-native, for example on Kubernetes.
-
-The data flow is the same as described in the [data flow section](#data-flow)
-with one change: _the stored path of the first two phases is different_. This new live
-trace architecture stores chunks of traces in Redis and a persistent store (object storage or database) instead of
-file storage. Redis is used as first-class storage, and it stores up-to 128KB
-of data. Once the full chunk is sent, it is flushed a persistent store, either object storage(temporary directory) or database.
-After a while, the data in Redis and a persitent store will be archived to [object storage](#uploading-traces-to-object-storage).
-
-The data are stored in the following Redis namespace: `Gitlab::Redis::SharedState`.
-
-Here is the detailed data flow:
-
-1. GitLab Runner picks a job from GitLab
-1. GitLab Runner sends a piece of trace to GitLab
-1. GitLab appends the data to Redis
-1. Once the data in Redis reach 128KB, the data is flushed to a persistent store (object storage or the database).
-1. The above steps are repeated until the job is finished.
-1. Once the job is finished, GitLab schedules a Sidekiq worker to archive the trace.
-1. The Sidekiq worker archives the trace to object storage and cleans up the trace
- in Redis and a persistent store (object storage or the database).
-
-### Enabling live trace
-
-The following commands are to be issues in a Rails console:
-
-```sh
-# Omnibus GitLab
-gitlab-rails console
-
-# Installation from source
-cd /home/git/gitlab
-sudo -u git -H bin/rails console RAILS_ENV=production
-```
-
-**To check if live trace is enabled:**
-
-```ruby
-Feature.enabled?('ci_enable_live_trace')
-```
-
-**To enable live trace:**
-
-```ruby
-Feature.enable('ci_enable_live_trace')
-```
-
-NOTE: **Note:**
-The transition period will be handled gracefully. Upcoming traces will be
-generated with the new architecture, and on-going live traces will stay with the
-legacy architecture, which means that on-going live traces won't be forcibly
-re-generated with the new architecture.
-
-**To disable live trace:**
-
-```ruby
-Feature.disable('ci_enable_live_trace')
-```
-
-NOTE: **Note:**
-The transition period will be handled gracefully. Upcoming traces will be generated
-with the legacy architecture, and on-going live traces will stay with the new
-architecture, which means that on-going live traces won't be forcibly re-generated
-with the legacy architecture.
-
-### Potential implications
-
-In some cases, having data stored on Redis could incur data loss:
-
-1. **Case 1: When all data in Redis are accidentally flushed**
- - On going live traces could be recovered by re-sending traces (this is
- supported by all versions of the GitLab Runner).
- - Finished jobs which have not archived live traces will lose the last part
- (~128KB) of trace data.
-
-1. **Case 2: When Sidekiq workers fail to archive (e.g., there was a bug that
- prevents archiving process, Sidekiq inconsistency, etc.)**
- - Currently all trace data in Redis will be deleted after one week. If the
- Sidekiq workers can't finish by the expiry date, the part of trace data will be lost.
-
-Another issue that might arise is that it could consume all memory on the Redis
-instance. If the number of jobs is 1000, 128MB (128KB * 1000) is consumed.
-
-Also, it could pressure the database replication lag. `INSERT`s are generated to
-indicate that we have trace chunk. `UPDATE`s with 128KB of data is issued once we
-receive multiple chunks.
-
-[ce-18169]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/18169
-[ce-21193]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/21193
-[ce-46097]: https://gitlab.com/gitlab-org/gitlab-foss/issues/46097
+This document was moved to [another location](job_logs.md).
diff --git a/doc/administration/libravatar.md b/doc/administration/libravatar.md
new file mode 100644
index 00000000000..43a6b8f0d34
--- /dev/null
+++ b/doc/administration/libravatar.md
@@ -0,0 +1,101 @@
+---
+type: howto
+---
+
+# Using the Libravatar service with GitLab
+
+GitLab by default supports the [Gravatar](https://gravatar.com) avatar service.
+
+Libravatar is another service that delivers your avatar (profile picture) to
+other websites. The Libravatar API is
+[heavily based on Gravatar](https://wiki.libravatar.org/api/), so you can
+easily switch to the Libravatar avatar service or even a self-hosted Libravatar
+server.
+
+## Configuration
+
+In the [`gitlab.yml` gravatar section](https://gitlab.com/gitlab-org/gitlab/blob/672bd3902d86b78d730cea809fce312ec49d39d7/config/gitlab.yml.example#L122), set
+the configuration options as follows:
+
+### For HTTP
+
+```yml
+ gravatar:
+ enabled: true
+ # gravatar URLs: possible placeholders: %{hash} %{size} %{email} %{username}
+ plain_url: "http://cdn.libravatar.org/avatar/%{hash}?s=%{size}&d=identicon"
+```
+
+### For HTTPS
+
+```yml
+ gravatar:
+ enabled: true
+ # gravatar URLs: possible placeholders: %{hash} %{size} %{email} %{username}
+ ssl_url: "https://seccdn.libravatar.org/avatar/%{hash}?s=%{size}&d=identicon"
+```
+
+### Self-hosted Libravatar server
+
+If you are [running your own libravatar service](https://wiki.libravatar.org/running_your_own/),
+the URL will be different in the configuration, but you must provide the same
+placeholders so GitLab can parse the URL correctly.
+
+For example, if you host a service at `http://libravatar.example.com`, the
+`plain_url` you need to supply in `gitlab.yml` is:
+
+`http://libravatar.example.com/avatar/%{hash}?s=%{size}&d=identicon`
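+
+In `gitlab.yml` that would look like the following sketch (mirroring the HTTP
+example above, with the self-hosted domain substituted):
+
+```yml
+  gravatar:
+    enabled: true
+    plain_url: "http://libravatar.example.com/avatar/%{hash}?s=%{size}&d=identicon"
+```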
+
+### Omnibus GitLab example
+
+In `/etc/gitlab/gitlab.rb`:
+
+#### For HTTP
+
+```ruby
+gitlab_rails['gravatar_enabled'] = true
+gitlab_rails['gravatar_plain_url'] = "http://cdn.libravatar.org/avatar/%{hash}?s=%{size}&d=identicon"
+```
+
+#### For HTTPS
+
+```ruby
+gitlab_rails['gravatar_enabled'] = true
+gitlab_rails['gravatar_ssl_url'] = "https://seccdn.libravatar.org/avatar/%{hash}?s=%{size}&d=identicon"
+```
+
+Then run `sudo gitlab-ctl reconfigure` for the changes to take effect.
+
+## Default URL for missing images
+
+[Libravatar supports different sets](https://wiki.libravatar.org/api/) of
+missing images for user email addresses that are not found on the Libravatar
+service.
+
+In order to use a set other than `identicon`, replace the `&d=identicon`
+portion of the URL with another supported set.
+For example, you can use the `retro` set, in which case the URL would look like:
+`plain_url: "http://cdn.libravatar.org/avatar/%{hash}?s=%{size}&d=retro"`
+
+## Usage examples for Microsoft Office 365
+
+If your users are Office 365 users, the `GetPersonaPhoto` service can be used.
+Note that this service requires a login, so this use case is most useful in a
+corporate installation where all users have access to Office 365.
+
+```ruby
+gitlab_rails['gravatar_plain_url'] = 'http://outlook.office365.com/owa/service.svc/s/GetPersonaPhoto?email=%{email}&size=HR120x120'
+gitlab_rails['gravatar_ssl_url'] = 'https://outlook.office365.com/owa/service.svc/s/GetPersonaPhoto?email=%{email}&size=HR120x120'
+```
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/logs.md b/doc/administration/logs.md
index 7857dcc1f08..dae0dae8395 100644
--- a/doc/administration/logs.md
+++ b/doc/administration/logs.md
@@ -23,7 +23,7 @@ requests from the API are logged to a separate file in `api_json.log`.
Each line contains a JSON line that can be ingested by Elasticsearch, Splunk, etc. For example:
```json
-{"method":"GET","path":"/gitlab/gitlab-ce/issues/1234","format":"html","controller":"Projects::IssuesController","action":"show","status":200,"duration":229.03,"view":174.07,"db":13.24,"time":"2017-08-08T20:15:54.821Z","params":[{"key":"param_key","value":"param_value"}],"remote_ip":"18.245.0.1","user_id":1,"username":"admin","gitaly_calls":76,"gitaly_duration":7.41,"queue_duration": 112.47}
+{"method":"GET","path":"/gitlab/gitlab-foss/issues/1234","format":"html","controller":"Projects::IssuesController","action":"show","status":200,"duration":229.03,"view":174.07,"db":13.24,"time":"2017-08-08T20:15:54.821Z","params":[{"key":"param_key","value":"param_value"}],"remote_ip":"18.245.0.1","user_id":1,"username":"admin","gitaly_calls":76,"gitaly_duration":7.41,"queue_duration": 112.47}
```
In this example, you can see this was a GET request for a specific
@@ -233,7 +233,7 @@ This file lives in `/var/log/gitlab/gitlab-shell/gitlab-shell.log` for
Omnibus GitLab packages or in `/home/git/gitlab-shell/gitlab-shell.log` for
installations from source.
-GitLab shell is used by GitLab for executing Git commands and provide
+GitLab Shell is used by GitLab for executing Git commands and providing
SSH access to Git repositories. For example:
```
@@ -241,7 +241,7 @@ I, [2015-02-13T06:17:00.671315 #9291] INFO -- : Adding project root/example.git
I, [2015-02-13T06:17:00.679433 #9291] INFO -- : Moving existing hooks directory and symlinking global hooks directory for /var/opt/gitlab/git-data/repositories/root/example.git.
```
-User clone/fetch activity using ssh transport appears in this log as `executing git command <gitaly-upload-pack...`.
+User clone/fetch activity using SSH transport appears in this log as `executing git command <gitaly-upload-pack...`.
## `unicorn_stderr.log`
@@ -252,7 +252,7 @@ installations from source.
Unicorn is a high-performance forking Web server which is used for
serving the GitLab application. You can look at this log if, for
example, your application does not respond. This log contains all
-information about the state of unicorn processes at any given time.
+information about the state of Unicorn processes at any given time.
```
I, [2015-02-13T06:14:46.680381 #9047] INFO -- : Refreshing Gem list
@@ -294,9 +294,11 @@ This log records:
- Information whenever [Rack Attack] registers an abusive request.
- Requests over the [Rate Limit] on raw endpoints.
+- [Protected paths] abusive requests.
NOTE: **Note:**
-From [%12.1](https://gitlab.com/gitlab-org/gitlab-foss/issues/62756), user id and username are available on this log.
+From [%12.1](https://gitlab.com/gitlab-org/gitlab-foss/issues/62756), user ID and username are also
+recorded in this log, if available.
## `graphql_json.log`
@@ -327,17 +329,19 @@ is populated whenever `gitlab-ctl reconfigure` is run manually or as part of an
Reconfigure logs files are named according to the UNIX timestamp of when the reconfigure
was initiated, such as `1509705644.log`
-## `sidekiq_exporter.log`
+## `sidekiq_exporter.log` and `web_exporter.log`
If Prometheus metrics and the Sidekiq Exporter are both enabled, Sidekiq will
-start a Web server and listen to the defined port (default: 3807). Access logs
+start a Web server and listen to the defined port (default: 8082). Access logs
will be generated in `/var/log/gitlab/gitlab-rails/sidekiq_exporter.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/sidekiq_exporter.log` for
installations from source.
-[repocheck]: repository_checks.md
-[Rack Attack]: ../security/rack_attack.md
-[Rate Limit]: ../user/admin_area/settings/rate_limits_on_raw_endpoints.md
+If Prometheus metrics and the Web Exporter are both enabled, Unicorn/Puma will
+start a Web server and listen to the defined port (default: 8083). Access logs
+will be generated in `/var/log/gitlab/gitlab-rails/web_exporter.log` for
+Omnibus GitLab packages or in `/home/git/gitlab/log/web_exporter.log` for
+installations from source.
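+
+To confirm the exporter is writing access logs, you can watch the file directly
+(Omnibus path shown; adjust the path for installations from source):
+
+```shell
+sudo tail -f /var/log/gitlab/gitlab-rails/web_exporter.log
+```
+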
## `database_load_balancing.log` **(PREMIUM ONLY)**
@@ -348,3 +352,8 @@ It is stored at:
- `/var/log/gitlab/gitlab-rails/database_load_balancing.log` for Omnibus GitLab packages.
- `/home/git/gitlab/log/database_load_balancing.log` for installations from source.
+
+[repocheck]: repository_checks.md
+[Rack Attack]: ../security/rack_attack.md
+[Rate Limit]: ../user/admin_area/settings/rate_limits_on_raw_endpoints.md
+[Protected paths]: ../user/admin_area/settings/protected_paths.md
diff --git a/doc/administration/merge_request_diffs.md b/doc/administration/merge_request_diffs.md
index f24a3f94ceb..ca09171e5ff 100644
--- a/doc/administration/merge_request_diffs.md
+++ b/doc/administration/merge_request_diffs.md
@@ -61,6 +61,9 @@ To enable external storage of merge request diffs, follow the instructions below
## Using object storage
+CAUTION: **Warning:**
+ Currently, migrating to object storage is **non-reversible**.
+
Instead of storing the external diffs on disk, we recommend the use of an object
store like AWS S3. This configuration relies on valid AWS credentials already
being configured.
@@ -93,7 +96,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
+| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
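
For reference, a connection sketch for an S3-compatible service such as MinIO
might look like the following in `/etc/gitlab/gitlab.rb`. Treat this as an
illustration only: the setting name `external_diffs_object_store_connection`
and the credential values are assumptions, not copied from this page.

```ruby
gitlab_rails['external_diffs_object_store_connection'] = {
  'provider' => 'AWS',
  'aws_access_key_id' => 'MINIO_ACCESS_KEY',      # placeholder credentials
  'aws_secret_access_key' => 'MINIO_SECRET_KEY',  # placeholder credentials
  'endpoint' => 'http://127.0.0.1:9000',          # S3-compatible endpoint
  'path_style' => true                            # host/bucket_name/object paths
}
```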
diff --git a/doc/administration/monitoring/github_imports.md b/doc/administration/monitoring/github_imports.md
index 2b1b7a230f7..4d60cf0d491 100644
--- a/doc/administration/monitoring/github_imports.md
+++ b/doc/administration/monitoring/github_imports.md
@@ -16,7 +16,7 @@ This metric tracks the total time spent (in seconds) importing a project (from
project creation until the import process finishes), for every imported project.
The name of the project is stored in the `project` label in the format
-`namespace/name` (e.g. `gitlab-org/gitlab-ce`).
+`namespace/name` (e.g. `gitlab-org/gitlab`).
## Number of imported projects
@@ -54,7 +54,7 @@ projects. This metric does not expose any labels.
This metric tracks the number of imported issues across all projects.
The name of the project is stored in the `project` label in the format
-`namespace/name` (e.g. `gitlab-org/gitlab-ce`).
+`namespace/name` (e.g. `gitlab-org/gitlab`).
## Number of imported pull requests
@@ -65,7 +65,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported pull requests across all projects.
The name of the project is stored in the `project` label in the format
-`namespace/name` (e.g. `gitlab-org/gitlab-ce`).
+`namespace/name` (e.g. `gitlab-org/gitlab`).
## Number of imported comments
@@ -76,7 +76,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported comments across all projects.
The name of the project is stored in the `project` label in the format
-`namespace/name` (e.g. `gitlab-org/gitlab-ce`).
+`namespace/name` (e.g. `gitlab-org/gitlab`).
## Number of imported pull request review comments
@@ -87,7 +87,7 @@ The name of the project is stored in the `project` label in the format
This metric tracks the number of imported comments across all projects.
The name of the project is stored in the `project` label in the format
-`namespace/name` (e.g. `gitlab-org/gitlab-ce`).
+`namespace/name` (e.g. `gitlab-org/gitlab`).
## Number of imported repositories
diff --git a/doc/administration/monitoring/index.md b/doc/administration/monitoring/index.md
index 2b3daec42bd..80e727f6a5c 100644
--- a/doc/administration/monitoring/index.md
+++ b/doc/administration/monitoring/index.md
@@ -10,4 +10,4 @@ Explore our features to monitor your GitLab instance:
- [GitHub imports](github_imports.md): Monitor the health and progress of GitLab's GitHub importer with various Prometheus metrics.
- [Monitoring uptime](../../user/admin_area/monitoring/health_check.md): Check the server status using the health check endpoint.
- [IP whitelists](ip_whitelist.md): Configure GitLab for monitoring endpoints that provide health check information when probed.
-- [nginx_status](https://docs.gitlab.com/omnibus/settings/nginx.html#enablingdisabling-nginx_status): Monitor your Nginx server status
+- [`nginx_status`](https://docs.gitlab.com/omnibus/settings/nginx.html#enablingdisabling-nginx_status): Monitor your NGINX server status
diff --git a/doc/administration/monitoring/performance/grafana_configuration.md b/doc/administration/monitoring/performance/grafana_configuration.md
index d389c7c5003..ccba0a55479 100644
--- a/doc/administration/monitoring/performance/grafana_configuration.md
+++ b/doc/administration/monitoring/performance/grafana_configuration.md
@@ -32,14 +32,14 @@ in the top bar.
Fill in the configuration details for the InfluxDB data source. Save and
Test Connection to ensure the configuration is correct.
-- **Name**: InfluxDB
+- **Name**: `InfluxDB`
- **Default**: Checked
-- **Type**: InfluxDB 0.9.x (Even if you're using InfluxDB 0.10.x)
+- **Type**: `InfluxDB 0.9.x` (Even if you're using InfluxDB 0.10.x)
- **Url**: `https://localhost:8086` (Or the remote URL if you've installed InfluxDB
on a separate server)
-- **Access**: proxy
-- **Database**: gitlab
-- **User**: admin (Or the username configured when setting up InfluxDB)
+- **Access**: `proxy`
+- **Database**: `gitlab`
+- **User**: `admin` (Or the username configured when setting up InfluxDB)
- **Password**: The password configured when you set up InfluxDB
![Grafana data source configurations](img/grafana_data_source_configuration.png)
@@ -146,7 +146,7 @@ However, you should **not** reinstate your old data _except_ under one of the fo
If you require access to your old Grafana data but do not meet one of these criteria, you may consider reinstating it temporarily, [exporting the dashboards](https://grafana.com/docs/reference/export_import/#exporting-a-dashboard) you need, then refreshing the data and [re-importing your dashboards](https://grafana.com/docs/reference/export_import/#importing-a-dashboard). Note that this poses a temporary vulnerability while your old Grafana data is in use, and the decision to do so should be weighed carefully with your need to access existing data and dashboards.
-For more information and further mitigation details, please refer to our [blog post on the security release](https://about.gitlab.com/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
+For more information and further mitigation details, please refer to our [blog post on the security release](https://about.gitlab.com/blog/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
---
diff --git a/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png b/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png
new file mode 100644
index 00000000000..d4bf5c69ffa
--- /dev/null
+++ b/doc/administration/monitoring/performance/img/performance_bar_gitaly_threshold.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png
new file mode 100644
index 00000000000..966549554a4
--- /dev/null
+++ b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning_expanded.png b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning_expanded.png
new file mode 100644
index 00000000000..3362186bb48
--- /dev/null
+++ b/doc/administration/monitoring/performance/img/performance_bar_request_selector_warning_expanded.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/index.md b/doc/administration/monitoring/performance/index.md
index ef71ca1d6c3..5204ab40dc9 100644
--- a/doc/administration/monitoring/performance/index.md
+++ b/doc/administration/monitoring/performance/index.md
@@ -31,8 +31,8 @@ including (but not limited to):
- System statistics such as the process' memory usage and open file descriptors.
- Ruby garbage collection statistics.
-Metrics data is written to [InfluxDB][influxdb] over [UDP][influxdb-udp]. Stored
-data can be visualized using [Grafana][grafana] or any other application that
+Metrics data is written to [InfluxDB](https://www.influxdata.com/products/influxdb-overview/)
+over [UDP][influxdb-udp]. Stored data can be visualized using [Grafana](https://grafana.com) or any other application that
supports reading data from InfluxDB. Alternatively data can be queried using the
InfluxDB CLI.
@@ -67,6 +67,4 @@ the actual interval can be anywhere between 7.5 and 22.5. The interval is
re-generated for every sampling run instead of being generated once and re-used
for the duration of the process' lifetime.
-[influxdb]: https://influxdata.com/time-series-platform/influxdb/
[influxdb-udp]: https://docs.influxdata.com/influxdb/v0.9/write_protocols/udp/
-[grafana]: http://grafana.org/
diff --git a/doc/administration/monitoring/performance/influxdb_configuration.md b/doc/administration/monitoring/performance/influxdb_configuration.md
index cf6728510fe..f1f588a924d 100644
--- a/doc/administration/monitoring/performance/influxdb_configuration.md
+++ b/doc/administration/monitoring/performance/influxdb_configuration.md
@@ -38,8 +38,8 @@ InfluxDB needs to be restarted.
### Storage Engine
InfluxDB comes with different storage engines and as of InfluxDB 0.9.5 a new
-storage engine is available, called [TSM Tree]. All users **must** use the new
-`tsm1` storage engine as this [will be the default engine][tsm1-commit] in
+storage engine is available, called [TSM Tree](https://www.influxdata.com/blog/new-storage-engine-time-structured-merge-tree/).
+All users **must** use the new `tsm1` storage engine as this [will be the default engine][tsm1-commit] in
upcoming InfluxDB releases.
Make sure you have the following in your configuration file:
@@ -95,7 +95,7 @@ UDP can be done using the following settings:
This does the following:
1. Enable UDP and bind it to port 8089 for all addresses.
-1. Store any data received in the "gitlab" database.
+1. Store any data received in the `gitlab` database.
1. Define a batch of points to be 1000 points in size and allow a maximum of
5 batches _or_ flush them automatically after 1 second.
1. Define a UDP read buffer size of 200 MB.
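
For reference, a `[[udp]]` block matching the list above might look like the
following. The key names are assumed from the InfluxDB 0.9 configuration file
format; treat this as a sketch rather than a drop-in configuration.

```toml
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "gitlab"
  batch-size = 1000          # 1000 points per batch
  batch-pending = 5          # at most 5 pending batches
  batch-timeout = "1s"       # flush automatically after 1 second
  read-buffer = 209715200    # 200 MB UDP read buffer
```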
@@ -188,6 +188,5 @@ Read more on:
[influxdb cli]: https://docs.influxdata.com/influxdb/v0.9/tools/shell/
[udp]: https://docs.influxdata.com/influxdb/v0.9/write_protocols/udp/
[influxdb]: https://www.influxdata.com/products/influxdb-overview/
-[tsm tree]: https://influxdata.com/blog/new-storage-engine-time-structured-merge-tree/
[tsm1-commit]: https://github.com/influxdata/influxdb/commit/15d723dc77651bac83e09e2b1c94be480966cb0d
[influx-admin]: https://docs.influxdata.com/influxdb/v0.9/administration/authentication_and_authorization/#create-a-new-admin-user
diff --git a/doc/administration/monitoring/performance/performance_bar.md b/doc/administration/monitoring/performance/performance_bar.md
index 02f4b78bd60..53c08e32cb2 100644
--- a/doc/administration/monitoring/performance/performance_bar.md
+++ b/doc/administration/monitoring/performance/performance_bar.md
@@ -21,6 +21,24 @@ On the far right is a request selector that allows you to view the same metrics
(excluding the page timing and line profiler) for any requests made while the
page was open. Only the first two requests per unique URL are captured.
+## Request warnings
+
+For requests exceeding pre-defined limits, a warning icon will be shown
+next to the failing metric, along with an explanation. In this example,
+the Gitaly call duration exceeded the threshold:
+
+![Gitaly call duration exceeded threshold](img/performance_bar_gitaly_threshold.png)
+
+If any requests on the current page generated warnings, the icon will
+appear next to the request selector:
+
+![Request selector showing two requests with warnings](img/performance_bar_request_selector_warning.png)
+
+And requests with warnings are indicated in the request selector with a
+`(!)` after their path:
+
+![Request selector showing dropdown](img/performance_bar_request_selector_warning_expanded.png)
+
## Enable the Performance Bar via the Admin panel
GitLab Performance Bar is disabled by default. To enable it for a given group,
diff --git a/doc/administration/monitoring/performance/prometheus.md b/doc/administration/monitoring/performance/prometheus.md
index 2c5bab46dd9..f05a420fc19 100644
--- a/doc/administration/monitoring/performance/prometheus.md
+++ b/doc/administration/monitoring/performance/prometheus.md
@@ -2,4 +2,4 @@
redirect_to: '../prometheus/index.md'
---
-This document was moved to [monitoring/prometheus](../prometheus/index.md).
+This document was moved to [another location](../prometheus/index.md).
diff --git a/doc/administration/monitoring/prometheus/gitlab_exporter.md b/doc/administration/monitoring/prometheus/gitlab_exporter.md
index cfd9f55acc3..f6178799e0a 100644
--- a/doc/administration/monitoring/prometheus/gitlab_exporter.md
+++ b/doc/administration/monitoring/prometheus/gitlab_exporter.md
@@ -1,12 +1,16 @@
# GitLab exporter
->**Note:**
-Available since [Omnibus GitLab 8.17][1132]. For installations from source
-you'll have to install and configure it yourself.
+>- Available since [Omnibus GitLab 8.17](https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1132).
+>- Renamed from `GitLab monitor exporter` to `GitLab exporter` in [GitLab 12.3](https://gitlab.com/gitlab-org/gitlab/merge_requests/16511).
-The [GitLab exporter] allows you to measure various GitLab metrics, pulled from Redis and the database.
+The [GitLab exporter](https://gitlab.com/gitlab-org/gitlab-exporter) allows you to
+measure various GitLab metrics, pulled from Redis and the database, in Omnibus GitLab
+instances.
-To enable the GitLab exporter:
+NOTE: **Note:**
+For installations from source you'll have to install and configure it yourself.
+
+To enable the GitLab exporter in an Omnibus GitLab instance:
1. [Enable Prometheus](index.md#configuring-prometheus)
1. Edit `/etc/gitlab/gitlab.rb`
@@ -16,15 +20,10 @@ To enable the GitLab exporter:
gitlab_exporter['enable'] = true
```
-1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
- take effect
+1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
+ for the changes to take effect
Prometheus will now automatically begin collecting performance data from
the GitLab exporter exposed under `localhost:9168`.
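
Assuming default settings, you can verify the exporter is responding by querying
its metrics endpoint directly (the `/metrics` path is the usual Prometheus
exporter convention, not a setting documented here):

```shell
curl "http://localhost:9168/metrics"
```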
[← Back to the main Prometheus page](index.md)
-
-[1132]: https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1132
-[GitLab exporter]: https://gitlab.com/gitlab-org/gitlab-exporter
-[prometheus]: https://prometheus.io
-[reconfigure]: ../../restart_gitlab.md#omnibus-gitlab-reconfigure
diff --git a/doc/administration/monitoring/prometheus/gitlab_metrics.md b/doc/administration/monitoring/prometheus/gitlab_metrics.md
index 302d74dd96a..02920293daa 100644
--- a/doc/administration/monitoring/prometheus/gitlab_metrics.md
+++ b/doc/administration/monitoring/prometheus/gitlab_metrics.md
@@ -42,10 +42,10 @@ The following metrics are available:
| `gitlab_transaction_cache_read_hit_count_total` | Counter | 10.2 | Counter for cache hits for Rails cache calls | controller, action |
| `gitlab_transaction_cache_read_miss_count_total` | Counter | 10.2 | Counter for cache misses for Rails cache calls | controller, action |
| `gitlab_transaction_duration_seconds` | Histogram | 10.2 | Duration for all transactions (gitlab_transaction_* metrics) | controller, action |
-| `gitlab_transaction_event_build_found_total` | Counter | 9.4 | Counter for build found for api /jobs/request | |
-| `gitlab_transaction_event_build_invalid_total` | Counter | 9.4 | Counter for build invalid due to concurrency conflict for api /jobs/request | |
-| `gitlab_transaction_event_build_not_found_cached_total` | Counter | 9.4 | Counter for cached response of build not found for api /jobs/request | |
-| `gitlab_transaction_event_build_not_found_total` | Counter | 9.4 | Counter for build not found for api /jobs/request | |
+| `gitlab_transaction_event_build_found_total` | Counter | 9.4 | Counter for build found for API /jobs/request | |
+| `gitlab_transaction_event_build_invalid_total` | Counter | 9.4 | Counter for build invalid due to concurrency conflict for API /jobs/request | |
+| `gitlab_transaction_event_build_not_found_cached_total` | Counter | 9.4 | Counter for cached response of build not found for API /jobs/request | |
+| `gitlab_transaction_event_build_not_found_total` | Counter | 9.4 | Counter for build not found for API /jobs/request | |
| `gitlab_transaction_event_change_default_branch_total` | Counter | 9.4 | Counter when default branch is changed for any repository | |
| `gitlab_transaction_event_create_repository_total` | Counter | 9.4 | Counter when any repository is created | |
| `gitlab_transaction_event_etag_caching_cache_hit_total` | Counter | 9.4 | Counter for etag cache hit. | endpoint |
@@ -66,10 +66,10 @@ The following metrics are available:
| `gitlab_transaction_event_remove_branch_total` | Counter | 9.4 | Counter when a branch is removed for any repository | |
| `gitlab_transaction_event_remove_repository_total` | Counter | 9.4 | Counter when a repository is removed | |
| `gitlab_transaction_event_remove_tag_total` | Counter | 9.4 | Counter when a tag is remove for any repository | |
-| `gitlab_transaction_event_sidekiq_exception_total` | Counter | 9.4 | Counter of sidekiq exceptions | |
+| `gitlab_transaction_event_sidekiq_exception_total` | Counter | 9.4 | Counter of Sidekiq exceptions | |
| `gitlab_transaction_event_stuck_import_jobs_total` | Counter | 9.4 | Count of stuck import jobs | projects_without_jid_count, projects_with_jid_count |
-| `gitlab_transaction_event_update_build_total` | Counter | 9.4 | Counter for update build for api /jobs/request/:id | |
-| `gitlab_transaction_new_redis_connections_total` | Counter | 9.4 | Counter for new redis connections | |
+| `gitlab_transaction_event_update_build_total` | Counter | 9.4 | Counter for update build for API /jobs/request/:id | |
+| `gitlab_transaction_new_redis_connections_total` | Counter | 9.4 | Counter for new Redis connections | |
| `gitlab_transaction_queue_duration_total` | Counter | 9.4 | Duration jobs were enqueued before processing | |
| `gitlab_transaction_rails_queue_duration_total` | Counter | 9.4 | Measures latency between GitLab Workhorse forwarding a request to Rails | controller, action |
| `gitlab_transaction_view_duration_total` | Counter | 9.4 | Duration for views | controller, action, view |
@@ -140,8 +140,7 @@ The following metrics are available:
| Metric | Type | Since | Description |
|:--------------------------------- |:--------- |:------------------------------------------------------------- |:-------------------------------------- |
-| `db_load_balancing_hosts` | Gauge | [12.3](https://gitlab.com/gitlab-org/gitlab/issues/13630) | Current number of load balancing hosts |
-| `db_load_balancing_index` | Gauge | [12.3](https://gitlab.com/gitlab-org/gitlab/issues/13630) | Current load balancing host index |
+| `db_load_balancing_hosts` | Gauge | [12.3](https://gitlab.com/gitlab-org/gitlab/issues/13630) | Current number of load balancing hosts |
## Ruby metrics
diff --git a/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md b/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md
new file mode 100644
index 00000000000..ae3a3d739b5
--- /dev/null
+++ b/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md
@@ -0,0 +1,5 @@
+---
+redirect_to: 'gitlab_exporter.md'
+---
+
+This document was moved to [another location](gitlab_exporter.md).
diff --git a/doc/administration/monitoring/prometheus/index.md b/doc/administration/monitoring/prometheus/index.md
index 9228ebf4fed..c35d6f505be 100644
--- a/doc/administration/monitoring/prometheus/index.md
+++ b/doc/administration/monitoring/prometheus/index.md
@@ -21,7 +21,7 @@ Prometheus works by periodically connecting to data sources and collecting their
performance metrics via the [various exporters](#bundled-software-metrics). To view
and work with the monitoring data, you can either
[connect directly to Prometheus](#viewing-performance-metrics) or utilize a
-dashboard tool like [Grafana].
+dashboard tool like [Grafana](https://grafana.com).
## Configuring Prometheus
@@ -114,7 +114,7 @@ To use an external Prometheus server:
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1']
```
-1. To scrape nginx metrics, you'll also need to configure nginx to allow the Prometheus server
+1. To scrape NGINX metrics, you'll also need to configure NGINX to allow the Prometheus server
IP. For example:
```ruby
@@ -199,8 +199,8 @@ having [NGINX proxy it][nginx-custom-config].
The performance data collected by Prometheus can be viewed directly in the
Prometheus console or through a compatible dashboard tool.
-The Prometheus interface provides a [flexible query language][prom-query] to work
-with the collected data where you can visualize their output.
+The Prometheus interface provides a [flexible query language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
+to work with the collected data where you can visualize their output.
For a more fully featured dashboard, Grafana can be used and has
[official support for Prometheus][prom-grafana].
@@ -274,7 +274,7 @@ The GitLab exporter allows you to measure various GitLab metrics, pulled from Re
> Introduced in GitLab 9.0.
> Pod monitoring introduced in GitLab 9.4.
-If your GitLab server is running within Kubernetes, Prometheus will collect metrics from the Nodes and [annotated Pods](https://prometheus.io/docs/operating/configuration/#kubernetes_sd_config) in the cluster, including performance data on each container. This is particularly helpful if your CI/CD environments run in the same cluster, as you can use the [Prometheus project integration][prometheus integration] to monitor them.
+If your GitLab server is running within Kubernetes, Prometheus will collect metrics from the Nodes and [annotated Pods](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) in the cluster, including performance data on each container. This is particularly helpful if your CI/CD environments run in the same cluster, as you can use the [Prometheus project integration][prometheus integration] to monitor them.
To disable the monitoring of Kubernetes:
@@ -288,16 +288,11 @@ To disable the monitoring of Kubernetes:
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect.
-[grafana]: https://grafana.net
[hsts]: https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
[multi-user-prometheus]: https://gitlab.com/gitlab-org/multi-user-prometheus
[nginx-custom-config]: https://docs.gitlab.com/omnibus/settings/nginx.html#inserting-custom-nginx-settings-into-the-gitlab-server-block
[prometheus]: https://prometheus.io
-[prom-exporters]: https://prometheus.io/docs/instrumenting/exporters/
-[prom-query]: https://prometheus.io/docs/querying/basics
[prom-grafana]: https://prometheus.io/docs/visualization/grafana/
-[scrape-config]: https://prometheus.io/docs/operating/configuration/#%3Cscrape_config%3E
[reconfigure]: ../../restart_gitlab.md#omnibus-gitlab-reconfigure
[1261]: https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1261
[prometheus integration]: ../../../user/project/integrations/prometheus.md
-[prometheus-cadvisor-metrics]: https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md
diff --git a/doc/administration/operations/cleaning_up_redis_sessions.md b/doc/administration/operations/cleaning_up_redis_sessions.md
index c9b5ab9d290..fd469ae23e3 100644
--- a/doc/administration/operations/cleaning_up_redis_sessions.md
+++ b/doc/administration/operations/cleaning_up_redis_sessions.md
@@ -11,8 +11,8 @@ start building up again after you clean up.
In GitLab versions prior to 7.3.0, the session keys in Redis are 16-byte
hexadecimal values such as '976aa289e2189b17d7ef525a6702ace9'. Starting with
GitLab 7.3.0, the keys are
-prefixed with 'session:gitlab:', so they would look like
-'session:gitlab:976aa289e2189b17d7ef525a6702ace9'. Below we describe how to
+prefixed with `session:gitlab:`, so they would look like
+`session:gitlab:976aa289e2189b17d7ef525a6702ace9`. Below we describe how to
remove the keys in the old format.
**Note:** the instructions below must be modified in accordance with your
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 16424c25a98..9a38e8ddd23 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -2,7 +2,7 @@
NOTE: **Note:** This document describes a drop-in replacement for the
`authorized_keys` file for normal (non-deploy key) users. Consider
-using [ssh certificates](ssh_certificates.md), they are even faster,
+using [SSH certificates](ssh_certificates.md), they are even faster,
but are not a drop-in replacement.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/1631) in
@@ -78,7 +78,7 @@ CAUTION: **Caution:** Do not disable writes until SSH is confirmed to be working
perfectly, because the file will quickly become out-of-date.
In the case of lookup failures (which are common), the `authorized_keys`
-file will still be scanned. So git SSH performance will still be slow for many
+file will still be scanned. So Git SSH performance will still be slow for many
users as long as a large file exists.
You can disable any more writes to the `authorized_keys` file by unchecking
diff --git a/doc/administration/operations/moving_repositories.md b/doc/administration/operations/moving_repositories.md
index ec11a92db1b..d54ffacd281 100644
--- a/doc/administration/operations/moving_repositories.md
+++ b/doc/administration/operations/moving_repositories.md
@@ -31,7 +31,7 @@ If you want to see progress, replace `-xf` with `-xvf`.
### Tar pipe to another server
You can also use a tar pipe to copy data to another server. If your
-'git' user has SSH access to the newserver as 'git@newserver', you
+`git` user has SSH access to the new server as `git@newserver`, you
can pipe the data through SSH.
```
@@ -61,7 +61,7 @@ If you want to see progress, replace `-a` with `-av`.
### Single rsync to another server
-If the 'git' user on your source system has SSH access to the target
+If the `git` user on your source system has SSH access to the target
server you can send the repositories over the network with rsync.
```
@@ -95,7 +95,7 @@ after switching to the new repository storage directory.
This will sync repositories with 10 rsync processes at a time. We keep
track of progress so that the transfer can be restarted if necessary.
-First we create a new directory, owned by 'git', to hold transfer
+First we create a new directory, owned by `git`, to hold transfer
logs. We assume the directory is empty before we start the transfer
procedure, and that we are the only ones writing files in it.
diff --git a/doc/administration/operations/sidekiq_memory_killer.md b/doc/administration/operations/sidekiq_memory_killer.md
index 8eac42f2fe2..79e9fb778b6 100644
--- a/doc/administration/operations/sidekiq_memory_killer.md
+++ b/doc/administration/operations/sidekiq_memory_killer.md
@@ -2,7 +2,7 @@
The GitLab Rails application code suffers from memory leaks. For web requests
this problem is made manageable using
-[unicorn-worker-killer](https://github.com/kzk/unicorn-worker-killer) which
+[`unicorn-worker-killer`](https://github.com/kzk/unicorn-worker-killer) which
restarts Unicorn worker processes in between requests when needed. The Sidekiq
MemoryKiller applies the same approach to the Sidekiq processes used by GitLab
to process background jobs.
@@ -10,8 +10,8 @@ to process background jobs.
Unlike unicorn-worker-killer, which is enabled by default for all GitLab
installations since GitLab 6.4, the Sidekiq MemoryKiller is enabled by default
_only_ for Omnibus packages. The reason for this is that the MemoryKiller
-relies on Runit to restart Sidekiq after a memory-induced shutdown and GitLab
-installations from source do not all use Runit or an equivalent.
+relies on runit to restart Sidekiq after a memory-induced shutdown and GitLab
+installations from source do not all use runit or an equivalent.
With the default settings, the MemoryKiller will cause a Sidekiq restart no
more often than once every 15 minutes, with the restart causing about one
@@ -26,18 +26,50 @@ run as a process group leader (e.g., using `chpst -P`). If using Omnibus or the
The MemoryKiller is controlled using environment variables.
-- `SIDEKIQ_MEMORY_KILLER_MAX_RSS`: if this variable is set, and its value is
- greater than 0, then after each Sidekiq job, the MemoryKiller will check the
- RSS of the Sidekiq process that executed the job. If the RSS of the Sidekiq
- process (expressed in kilobytes) exceeds SIDEKIQ_MEMORY_KILLER_MAX_RSS, a
- delayed shutdown is triggered. The default value for Omnibus packages is set
- [in the omnibus-gitlab
+- `SIDEKIQ_DAEMON_MEMORY_KILLER`: defaults to 0. When set to 1, the MemoryKiller
+ works in _daemon_ mode. Otherwise, the MemoryKiller works in _legacy_ mode.
+
+ In _legacy_ mode, the MemoryKiller checks the Sidekiq process RSS after each job.
+
+ In _daemon_ mode, the MemoryKiller checks the Sidekiq process RSS every 3 seconds
+ (defined by `SIDEKIQ_MEMORY_KILLER_CHECK_INTERVAL`).
+
+- `SIDEKIQ_MEMORY_KILLER_MAX_RSS`: if this variable is set, and its value is greater
+ than 0, the MemoryKiller is enabled. Otherwise the MemoryKiller is disabled.
+
+ `SIDEKIQ_MEMORY_KILLER_MAX_RSS` defines the Sidekiq process allowed RSS.
+
+ In _legacy_ mode, if the Sidekiq process exceeds the allowed RSS then an irreversible
+ delayed graceful restart will be triggered. The restart of Sidekiq will happen
+ after `SIDEKIQ_MEMORY_KILLER_GRACE_TIME` seconds.
+
+ In _daemon_ mode, if the Sidekiq process exceeds the allowed RSS for longer than
+ `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`, the graceful restart will be triggered. If the
+ Sidekiq process goes below the allowed RSS within `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`,
+ the restart will be aborted.
+
+ The default value for Omnibus packages is set
+ [in the Omnibus GitLab
repository](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/attributes/default.rb).
-- `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`: defaults to 900 seconds (15 minutes). When
- a shutdown is triggered, the Sidekiq process will keep working normally for
- another 15 minutes.
-- `SIDEKIQ_MEMORY_KILLER_SHUTDOWN_WAIT`: defaults to 30 seconds. When the grace
- time has expired, the MemoryKiller tells Sidekiq to stop accepting new jobs.
- Existing jobs get 30 seconds to finish. After that, the MemoryKiller tells
- Sidekiq to shut down, and an external supervision mechanism (e.g. Runit) must
- restart Sidekiq.
+
+- `SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS`: used in _daemon_ mode. If the Sidekiq
+ process RSS (expressed in kilobytes) exceeds `SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS`,
+ an immediate graceful restart of Sidekiq is triggered.
+
+- `SIDEKIQ_MEMORY_KILLER_CHECK_INTERVAL`: used in _daemon_ mode to define how
+ often to check process RSS; defaults to 3 seconds.
+
+- `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`: defaults to 900 seconds (15 minutes).
+ The usage of this variable is described as part of `SIDEKIQ_MEMORY_KILLER_MAX_RSS`.
+
+- `SIDEKIQ_MEMORY_KILLER_SHUTDOWN_WAIT`: defaults to 30 seconds. This defines the
+ maximum time allowed for all Sidekiq jobs to finish. No new jobs will be accepted
+ during that time, and the process will exit as soon as all jobs finish.
+
+ If jobs do not finish during that time, the MemoryKiller will interrupt all currently
+ running jobs by sending `SIGTERM` to the Sidekiq process.
+
+ If Sidekiq does not perform the hard shutdown/restart itself,
+ the Sidekiq process will be forcefully terminated after
+ `Sidekiq.options[:timeout] * 2` seconds. An external supervision mechanism
+ (e.g. runit) must restart Sidekiq afterwards.
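+
+As an illustration only (the values here are arbitrary), a source installation
+could export these variables in the environment Sidekiq is started from:
+
+```shell
+# Restart Sidekiq once its RSS exceeds ~2 GB, after a 15 minute grace period
+export SIDEKIQ_MEMORY_KILLER_MAX_RSS=2000000
+export SIDEKIQ_MEMORY_KILLER_GRACE_TIME=900
+export SIDEKIQ_MEMORY_KILLER_SHUTDOWN_WAIT=30
+```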
diff --git a/doc/administration/operations/ssh_certificates.md b/doc/administration/operations/ssh_certificates.md
index 3792bcd3bca..2a9a4cff34e 100644
--- a/doc/administration/operations/ssh_certificates.md
+++ b/doc/administration/operations/ssh_certificates.md
@@ -3,7 +3,7 @@
> [Available in](https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/19911) GitLab
> Community Edition 11.2.
-GitLab's default SSH authentication requires users to upload their ssh
+GitLab's default SSH authentication requires users to upload their SSH
public keys before they can use the SSH transport.
In centralized (e.g. corporate) environments this can be a hassle
diff --git a/doc/administration/operations/unicorn.md b/doc/administration/operations/unicorn.md
index 8178cb243f3..969f1211643 100644
--- a/doc/administration/operations/unicorn.md
+++ b/doc/administration/operations/unicorn.md
@@ -40,7 +40,7 @@ master process has PID 56227 below.
The main tunables for Unicorn are the number of worker processes and the
request timeout after which the Unicorn master terminates a worker process.
-See the [omnibus-gitlab Unicorn settings
+See the [Omnibus GitLab Unicorn settings
documentation](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/unicorn.md)
if you want to adjust these settings.
diff --git a/doc/administration/packages/container_registry.md b/doc/administration/packages/container_registry.md
index b5320d39d92..bf86a549fda 100644
--- a/doc/administration/packages/container_registry.md
+++ b/doc/administration/packages/container_registry.md
@@ -5,8 +5,8 @@
> Docker versions earlier than 1.10.
NOTE: **Note:**
-This document is about the admin guide. To learn how to use GitLab Container
-Registry [user documentation](../../user/packages/container_registry/index.md).
+This document is the administrator's guide. To learn how to use GitLab Container
+Registry, see the [user documentation](../../user/packages/container_registry/index.md).
With the Container Registry integrated into GitLab, every project can have its
own space to store its Docker images.
@@ -37,7 +37,7 @@ If you have installed GitLab from source:
1. After the installation is complete, you will have to configure the Registry's
settings in `gitlab.yml` in order to enable it.
1. Use the sample NGINX configuration file that is found under
- [`lib/support/nginx/registry-ssl`](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/support/nginx/registry-ssl) and edit it to match the
+ [`lib/support/nginx/registry-ssl`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/support/nginx/registry-ssl) and edit it to match the
`host`, `port` and TLS certs paths.
The contents of `gitlab.yml` are:
@@ -360,9 +360,9 @@ The different supported drivers are:
| Driver | Description |
|------------|-------------------------------------|
| filesystem | Uses a path on the local filesystem |
-| azure | Microsoft Azure Blob Storage |
+| Azure | Microsoft Azure Blob Storage |
| gcs | Google Cloud Storage |
-| s3 | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://docs.docker.com/registry/storage-drivers/s3/#s3-permission-scopes). |
+| s3 | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://docs.docker.com/registry/storage-drivers/s3/#s3-permission-scopes). |
| swift | OpenStack Swift Object Storage |
| oss | Aliyun OSS |
@@ -374,7 +374,7 @@ filesystem. Remember to enable backups with your object storage provider if
desired.
NOTE: **Note:**
-`regionendpoint` is only required when configuring an S3 compatible service such as Minio. It takes a URL such as `http://127.0.0.1:9000`.
+`regionendpoint` is only required when configuring an S3 compatible service such as MinIO. It takes a URL such as `http://127.0.0.1:9000`.
**Omnibus GitLab installations**
@@ -877,6 +877,6 @@ The above image shows:
- The HEAD request to the AWS bucket reported a 403 Unauthorized.
What does this mean? This strongly suggests that the S3 user does not have the right
-[permissions to perform a HEAD request](http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html).
+[permissions to perform a HEAD request](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html).
The solution: check the [IAM permissions again](https://docs.docker.com/registry/storage-drivers/s3/).
Once the right permissions are set, the error will go away.
diff --git a/doc/administration/packages/index.md b/doc/administration/packages/index.md
index 99ec5811681..d4afc65577e 100644
--- a/doc/administration/packages/index.md
+++ b/doc/administration/packages/index.md
@@ -8,6 +8,7 @@ The Packages feature allows GitLab to act as a repository for the following:
| Software repository | Description | Available in GitLab version |
| ------------------- | ----------- | --------------------------- |
+| [Conan Repository](../../user/packages/conan_repository/index.md) | The GitLab Conan Repository enables every project in GitLab to have its own space to store [Conan](https://conan.io/) packages. | 12.4+ |
| [Maven Repository](../../user/packages/maven_repository/index.md) | The GitLab Maven Repository enables every project in GitLab to have its own space to store [Maven](https://maven.apache.org/) packages. | 11.3+ |
| [NPM Registry](../../user/packages/npm_registry/index.md) | The GitLab NPM Registry enables every project in GitLab to have its own space to store [NPM](https://www.npmjs.com/) packages. | 11.7+ |
diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md
index 41a372c4aeb..cacfb73451c 100644
--- a/doc/administration/pages/index.md
+++ b/doc/administration/pages/index.md
@@ -13,10 +13,6 @@ description: 'Learn how to administer GitLab Pages.'
GitLab Pages allows for hosting of static sites. It must be configured by an
administrator. Separate [user documentation][pages-userguide] is available.
-Read the [changelog](#changelog) if you are upgrading to a new GitLab
-version as it may include new features and changes needed to be made in your
-configuration.
-
NOTE: **Note:**
This guide is for Omnibus GitLab installations. If you have installed
GitLab from source, see
@@ -119,7 +115,7 @@ since that is needed in all configurations.
URL scheme: `http://page.example.io`
This is the minimum setup that you can use Pages with. It is the base for all
-other setups as described below. Nginx will proxy all requests to the daemon.
+other setups as described below. NGINX will proxy all requests to the daemon.
The Pages daemon doesn't listen to the outside world.
1. Set the external URL for GitLab Pages in `/etc/gitlab/gitlab.rb`:
@@ -143,7 +139,7 @@ Watch the [video tutorial][video-admin] for this configuration.
URL scheme: `https://page.example.io`
-Nginx will proxy all requests to the daemon. Pages daemon doesn't listen to the
+NGINX will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
1. Place the certificate and key inside `/etc/gitlab/ssl`
@@ -200,7 +196,7 @@ you have IPv6 as well as IPv4 addresses, you can use them both.
URL scheme: `http://page.example.io` and `http://domain.com`
-In that case, the Pages daemon is running, Nginx still proxies requests to
+In that case, the Pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
world. Custom domains are supported, but no TLS.
@@ -231,7 +227,7 @@ world. Custom domains are supported, but no TLS.
URL scheme: `https://page.example.io` and `https://domain.com`
-In that case, the Pages daemon is running, Nginx still proxies requests to
+In that case, the Pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
world. Custom domains and TLS are supported.
@@ -309,7 +305,7 @@ Pages access control is disabled by default. To enable it:
```
1. [Reconfigure GitLab][reconfigure].
-1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core-only).
+1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core).
### Running behind a proxy
@@ -323,7 +319,7 @@ pages:
gitlab_pages['http_proxy'] = 'http://example:8080'
```
-1. [Reconfigure Gitlab][reconfigure] for the changes to take effect.
+1. [Reconfigure GitLab][reconfigure] for the changes to take effect.
## Activate verbose logging for daemon
@@ -430,37 +426,9 @@ Pages are part of the [regular backup][backup] so there is nothing to configure.
## Security
-You should strongly consider running GitLab pages under a different hostname
+You should strongly consider running GitLab Pages under a different hostname
than GitLab to prevent XSS attacks.
-## Changelog
-
-GitLab Pages were first introduced in GitLab EE 8.3. Since then, many features
-where added, like custom CNAME and TLS support, and many more are likely to
-come. Below is a brief changelog. If no changes were introduced or a version is
-missing from the changelog, assume that the documentation is the same as the
-latest previous version.
-
----
-
-**GitLab 8.17 ([documentation](https://gitlab.com/gitlab-org/gitlab-foss/blob/8-17-stable/doc/administration/pages/index.md))**
-
-- GitLab Pages were ported to Community Edition in GitLab 8.17.
-- Documentation was refactored to be more modular and easy to follow.
-
-**GitLab 8.5 ([documentation](https://gitlab.com/gitlab-org/gitlab/blob/8-5-stable-ee/doc/pages/administration.md))**
-
-- In GitLab 8.5 we introduced the [gitlab-pages][] daemon which is now the
- recommended way to set up GitLab Pages.
-- The [NGINX configs][] have changed to reflect this change. So make sure to
- update them.
-- Custom CNAME and TLS certificates support.
-- Documentation was moved to one place.
-
-**GitLab 8.3 ([documentation](https://gitlab.com/gitlab-org/gitlab/blob/8-3-stable-ee/doc/pages/administration.md))**
-
-- GitLab Pages feature was introduced.
-
[backup]: ../../raketasks/backup_restore.md
[ce-14605]: https://gitlab.com/gitlab-org/gitlab-foss/issues/14605
[ee-80]: https://gitlab.com/gitlab-org/gitlab/merge_requests/80
diff --git a/doc/administration/pages/source.md b/doc/administration/pages/source.md
index bacfa0117bb..be8bba3c95b 100644
--- a/doc/administration/pages/source.md
+++ b/doc/administration/pages/source.md
@@ -93,7 +93,7 @@ since that is needed in all configurations.
URL scheme: `http://page.example.io`
This is the minimum setup that you can use Pages with. It is the base for all
-other setups as described below. Nginx will proxy all requests to the daemon.
+other setups as described below. NGINX will proxy all requests to the daemon.
The Pages daemon doesn't listen to the outside world.
1. Install the Pages daemon:
@@ -136,7 +136,7 @@ The Pages daemon doesn't listen to the outside world.
gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090"
```
-1. Copy the `gitlab-pages` Nginx configuration file:
+1. Copy the `gitlab-pages` NGINX configuration file:
```bash
sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
@@ -155,7 +155,7 @@ The Pages daemon doesn't listen to the outside world.
URL scheme: `https://page.example.io`
-Nginx will proxy all requests to the daemon. Pages daemon doesn't listen to the
+NGINX will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
1. Install the Pages daemon:
@@ -193,7 +193,7 @@ outside world.
gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key
```
-1. Copy the `gitlab-pages-ssl` Nginx configuration file:
+1. Copy the `gitlab-pages-ssl` NGINX configuration file:
```bash
sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
@@ -219,7 +219,7 @@ that without TLS certificates.
URL scheme: `http://page.example.io` and `http://domain.com`
-In that case, the pages daemon is running, Nginx still proxies requests to
+In that case, the pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
world. Custom domains are supported, but no TLS.
@@ -261,7 +261,7 @@ world. Custom domains are supported, but no TLS.
gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80"
```
-1. Copy the `gitlab-pages-ssl` Nginx configuration file:
+1. Copy the `gitlab-pages` NGINX configuration file:
```bash
sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
@@ -284,7 +284,7 @@ world. Custom domains are supported, but no TLS.
URL scheme: `https://page.example.io` and `https://domain.com`
-In that case, the pages daemon is running, Nginx still proxies requests to
+In that case, the pages daemon is running, NGINX still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
world. Custom domains and TLS are supported.
@@ -330,7 +330,7 @@ world. Custom domains and TLS are supported.
gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80 -listen-https 192.0.2.2:443 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key
```
-1. Copy the `gitlab-pages-ssl` Nginx configuration file:
+1. Copy the `gitlab-pages-ssl` NGINX configuration file:
```bash
sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
@@ -351,7 +351,7 @@ The following information applies only for installations from source.
Be extra careful when setting up the domain name in the NGINX config. You must
not remove the backslashes.
-If your GitLab pages domain is `example.io`, replace:
+If your GitLab Pages domain is `example.io`, replace:
```bash
server_name ~^.*\.YOUR_GITLAB_PAGES\.DOMAIN$;
@@ -401,7 +401,7 @@ Pages access control is disabled by default. To enable it:
1. Create a new [system OAuth application](../../integration/oauth_provider.md#adding-an-application-through-the-profile).
This should be called `GitLab Pages` and have a `Redirect URL` of
`https://projects.example.io/auth`. It does not need to be a "trusted"
- application, but it does need the "api" scope.
+ application, but it does need the `api` scope.
1. Start the Pages daemon with the following additional arguments:
```shell
@@ -411,7 +411,7 @@ Pages access control is disabled by default. To enable it:
-auth-server <URL of the GitLab instance>
```
-1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core-only).
+1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core).
## Change storage path
@@ -443,7 +443,7 @@ Pages are part of the [regular backup][backup] so there is nothing to configure.
## Security
-You should strongly consider running GitLab pages under a different hostname
+You should strongly consider running GitLab Pages under a different hostname
than GitLab to prevent XSS attacks.
[backup]: ../../raketasks/backup_restore.md
@@ -455,5 +455,5 @@ than GitLab to prevent XSS attacks.
[pages-userguide]: ../../user/project/pages/index.md
[restart]: ../restart_gitlab.md#installations-from-source
[gitlab-pages]: https://gitlab.com/gitlab-org/gitlab-pages/tree/v0.4.0
-[gl-example]: https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/support/init.d/gitlab.default.example
+[gl-example]: https://gitlab.com/gitlab-org/gitlab/blob/master/lib/support/init.d/gitlab.default.example
[shared runners]: ../../ci/runners/README.md
diff --git a/doc/administration/raketasks/check.md b/doc/administration/raketasks/check.md
index d8f80965c21..eb230f02c0d 100644
--- a/doc/administration/raketasks/check.md
+++ b/doc/administration/raketasks/check.md
@@ -8,7 +8,7 @@ help GitLab administrators diagnose problem repositories so they can be fixed.
There are 3 things that are checked to determine integrity.
-1. Git repository file system check ([git fsck](https://git-scm.com/docs/git-fsck)).
+1. Git repository file system check ([`git fsck`](https://git-scm.com/docs/git-fsck)).
This step verifies the connectivity and validity of objects in the repository.
1. Check for `config.lock` in the repository directory.
1. Check for any branch/references lock files in `refs/heads`.
diff --git a/doc/administration/raketasks/geo.md b/doc/administration/raketasks/geo.md
index 387bc71965b..09f72c3411d 100644
--- a/doc/administration/raketasks/geo.md
+++ b/doc/administration/raketasks/geo.md
@@ -2,7 +2,7 @@
## Git housekeeping
-There are few tasks you can run to schedule a git housekeeping to start at the
+There are a few tasks you can run to schedule Git housekeeping to start at the
next repository sync in a **Secondary node**:
### Incremental Repack
diff --git a/doc/administration/raketasks/maintenance.md b/doc/administration/raketasks/maintenance.md
index 89335fcd2a8..e63e0c40393 100644
--- a/doc/administration/raketasks/maintenance.md
+++ b/doc/administration/raketasks/maintenance.md
@@ -62,7 +62,7 @@ It will check that each component was set up according to the installation guide
You may also have a look at our Troubleshooting Guides:
- [Troubleshooting Guide (GitLab)](../index.md#troubleshooting)
-- [Troubleshooting Guide (Omnibus Gitlab)](https://docs.gitlab.com/omnibus/README.html#troubleshooting)
+- [Troubleshooting Guide (Omnibus GitLab)](https://docs.gitlab.com/omnibus/README.html#troubleshooting)
**Omnibus Installation**
@@ -76,7 +76,7 @@ sudo gitlab-rake gitlab:check
bundle exec rake gitlab:check RAILS_ENV=production
```
-NOTE: Use SANITIZE=true for gitlab:check if you want to omit project names from the output.
+NOTE: Use `SANITIZE=true` for `gitlab:check` if you want to omit project names from the output.
Example output:
@@ -146,7 +146,7 @@ You will lose any data stored in authorized_keys file.
Do you want to continue (yes/no)? yes
```
-## Clear redis cache
+## Clear Redis cache
If for some reason the dashboard shows wrong information you might want to
clear Redis' cache.
@@ -183,7 +183,7 @@ For omnibus versions, the unoptimized assets (JavaScript, CSS) are frozen at
the release of upstream GitLab. The omnibus version includes optimized versions
of those assets. Unless you are modifying the JavaScript / CSS code on your
production machine after installing the package, there should be no reason to redo
-rake gitlab:assets:compile on the production machine. If you suspect that assets
+`rake gitlab:assets:compile` on the production machine. If you suspect that assets
have been corrupted, you should reinstall the omnibus package.
## Tracking Deployments
diff --git a/doc/administration/raketasks/uploads/migrate.md b/doc/administration/raketasks/uploads/migrate.md
index d9b4c9b3369..517d6b01438 100644
--- a/doc/administration/raketasks/uploads/migrate.md
+++ b/doc/administration/raketasks/uploads/migrate.md
@@ -113,3 +113,39 @@ To migrate all uploads created by legacy uploaders, run:
```shell
bundle exec rake gitlab:uploads:legacy:migrate
```
+
+## Migrate from object storage to local storage
+
+If you need to disable Object Storage for any reason, first you need to migrate
+your data out of Object Storage and back into your local storage.
+
+**Before proceeding, it is important to disable both `direct_upload` and `background_upload` under the `uploads` settings in `gitlab.rb`**, as sketched below.
+
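+A minimal sketch of what that looks like in `/etc/gitlab/gitlab.rb` for an Omnibus
+installation is shown below. The key names are assumptions based on the usual `uploads`
+object storage settings, so verify them against your existing configuration before
+running `sudo gitlab-ctl reconfigure`:
+
+```ruby
+# /etc/gitlab/gitlab.rb - assumed key names; verify against your existing uploads settings
+gitlab_rails['uploads_object_store_direct_upload'] = false
+gitlab_rails['uploads_object_store_background_upload'] = false
+```
+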
+CAUTION: **Warning:**
+ **Extended downtime is required** so no new files are created in object storage during
+ the migration. A configuration setting will be added soon to allow migrating
+ from object storage to local files with only a brief moment of downtime for configuration changes.
+ See issue [gitlab-org/gitlab#30979](https://gitlab.com/gitlab-org/gitlab/issues/30979).
+
+### All-in-one rake task
+
+GitLab provides a wrapper rake task that migrates all uploaded files - avatars,
+logos, attachments, favicon, etc. - to local storage in one go. Under the hood,
+it invokes individual rake tasks to migrate each of these categories of files
+one by one. For details on these rake tasks, please [refer to the section above](#individual-rake-tasks),
+keeping in mind the task name in this case is `gitlab:uploads:migrate_to_local`.
+
+**Omnibus Installation**
+
+```bash
+gitlab-rake "gitlab:uploads:migrate_to_local:all"
+```
+
+**Source Installation**
+
+```bash
+sudo -u git -H bundle exec rake gitlab:uploads:migrate_to_local:all RAILS_ENV=production
+```
+
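+Before undoing the configuration, you can optionally spot-check from the Rails console
+(`sudo gitlab-rails console`) that nothing is left in object storage. This is only a
+sketch and assumes the `Upload` model tracks its backend in the `store` column, with
+`1` meaning local storage and `2` meaning object storage:
+
+```ruby
+# Sketch: count uploads per storage backend (assumes store == 1 is local, store == 2 is object storage).
+# A non-zero count for store == 2 would mean some uploads still live in object storage.
+Upload.group(:store).count
+```
+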
+After this is done, you may disable Object Storage by undoing the changes described
+in the instructions to [configure object storage](../../uploads.md#using-object-storage-core-only).
diff --git a/doc/administration/repository_storage_paths.md b/doc/administration/repository_storage_paths.md
index 376eb90deea..7d3e36e9796 100644
--- a/doc/administration/repository_storage_paths.md
+++ b/doc/administration/repository_storage_paths.md
@@ -118,6 +118,6 @@ randomly placed on one of the selected paths.
[restart-gitlab]: restart_gitlab.md#installations-from-source
[reconfigure-gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure
[backups]: ../raketasks/backup_restore.md
-[raketask]: https://gitlab.com/gitlab-org/gitlab-foss/blob/033e5423a2594e08a7ebcd2379bd2331f4c39032/lib/backup/repository.rb#L54-56
-[repospath]: https://gitlab.com/gitlab-org/gitlab-foss/blob/8-9-stable/config/gitlab.yml.example#L457
+[raketask]: https://gitlab.com/gitlab-org/gitlab/blob/033e5423a2594e08a7ebcd2379bd2331f4c39032/lib/backup/repository.rb#L54-56
+[repospath]: https://gitlab.com/gitlab-org/gitlab/blob/8-9-stable/config/gitlab.yml.example#L457
[ce-11449]: https://gitlab.com/gitlab-org/gitlab-foss/merge_requests/11449
diff --git a/doc/administration/repository_storage_types.md b/doc/administration/repository_storage_types.md
index 5f6738dc190..227d6928baf 100644
--- a/doc/administration/repository_storage_types.md
+++ b/doc/administration/repository_storage_types.md
@@ -10,7 +10,7 @@ that can be:
- Mounted to the local disk
- Exposed as an NFS shared volume
-- Accessed via [gitaly] on its own machine.
+- Accessed via [Gitaly] on its own machine.
In GitLab, this is configured in `/etc/gitlab/gitlab.rb` by the `git_data_dirs({})`
configuration hash. The storage layouts discussed here will apply to any shard
diff --git a/doc/administration/restart_gitlab.md b/doc/administration/restart_gitlab.md
index 169a220b9a9..9f95080654f 100644
--- a/doc/administration/restart_gitlab.md
+++ b/doc/administration/restart_gitlab.md
@@ -141,5 +141,5 @@ If you are using other init systems, like systemd, you can check the
[install]: ../install/installation.md "Documentation to install GitLab from source"
[mailroom]: reply_by_email.md "Used for replying by email in GitLab issues and merge requests"
[chef]: https://www.chef.io/products/chef-infra/ "Chef official website"
-[src-service]: https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/support/init.d/gitlab "GitLab init service file"
+[src-service]: https://gitlab.com/gitlab-org/gitlab/blob/master/lib/support/init.d/gitlab "GitLab init service file"
[gl-recipes]: https://gitlab.com/gitlab-org/gitlab-recipes/tree/master/init "GitLab Recipes repository"
diff --git a/doc/administration/smime_signing_email.md b/doc/administration/smime_signing_email.md
index 530553ec1c4..60cab22d1f4 100644
--- a/doc/administration/smime_signing_email.md
+++ b/doc/administration/smime_signing_email.md
@@ -1,6 +1,6 @@
# Signing outgoing email with S/MIME
-Notification emails sent by Gitlab can be signed with S/MIME for improved
+Notification emails sent by GitLab can be signed with S/MIME for improved
security.
> **Note:**
diff --git a/doc/administration/troubleshooting/debug.md b/doc/administration/troubleshooting/debug.md
index 562624fc9dc..3007b711405 100644
--- a/doc/administration/troubleshooting/debug.md
+++ b/doc/administration/troubleshooting/debug.md
@@ -89,10 +89,10 @@ in Omnibus, run as root:
Many of the tips to diagnose issues below apply to many different situations. We'll use one
concrete example to illustrate what you can do to learn what is going wrong.
-### 502 Gateway Timeout after unicorn spins at 100% CPU
+### 502 Gateway Timeout after Unicorn spins at 100% CPU
This error occurs when the Web server times out (default: 60 s) after not
-hearing back from the unicorn worker. If the CPU spins to 100% while this in
+hearing back from the Unicorn worker. If the CPU spins to 100% while this is in
progress, there may be something taking longer than it should.
To fix this issue, we first need to figure out what is happening. The
@@ -100,7 +100,7 @@ following tips are only recommended if you do NOT mind users being affected by
downtime. Otherwise skip to the next section.
1. Load the problematic URL
-1. Run `sudo gdb -p <PID>` to attach to the unicorn process.
+1. Run `sudo gdb -p <PID>` to attach to the Unicorn process.
1. In the gdb window, type:
```
@@ -135,7 +135,7 @@ downtime. Otherwise skip to the next section.
exit
```
-Note that if the unicorn process terminates before you are able to run these
+Note that if the Unicorn process terminates before you are able to run these
commands, gdb will report an error. To buy more time, you can always raise the
Unicorn timeout. For omnibus users, you can edit `/etc/gitlab/gitlab.rb` and
increase it from 60 seconds to 300:
@@ -152,7 +152,7 @@ For source installations, edit `config/unicorn.rb`.
#### Troubleshooting without affecting other users
-The previous section attached to a running unicorn process, and this may have
+The previous section attached to a running Unicorn process, and this may have
undesirable effects for users trying to access GitLab during this time. If you
are concerned about affecting others on a production system, you can run a
separate Rails process to debug the issue:
@@ -183,7 +183,7 @@ separate Rails process to debug the issue:
### GitLab: API is not accessible
-This often occurs when gitlab-shell attempts to request authorization via the
+This often occurs when GitLab Shell attempts to request authorization via the
internal API (e.g., `http://localhost:8080/api/v4/internal/allowed`), and
something in the check fails. There are many reasons why this may happen:
@@ -192,7 +192,7 @@ something in the check fails. There are many reasons why this may happen:
1. Error accessing the repository (e.g., stale NFS handles)
To diagnose this problem, try to reproduce the problem and then see if there
-is a unicorn worker that is spinning via `top`. Try to use the `gdb`
+is a Unicorn worker that is spinning via `top`. Try to use the `gdb`
techniques above. In addition, using `strace` may help isolate issues:
```shell
@@ -211,5 +211,5 @@ The output in `/tmp/unicorn.txt` may help diagnose the root cause.
## More information
-- [Debugging Stuck Ruby Processes](https://blog.newrelic.com/2013/04/29/debugging-stuck-ruby-processes-what-to-do-before-you-kill-9/)
+- [Debugging Stuck Ruby Processes](https://blog.newrelic.com/engineering/debugging-stuck-ruby-processes-what-to-do-before-you-kill-9/)
- [Cheatsheet of using gdb and ruby processes](gdb-stuck-ruby.txt)
diff --git a/doc/administration/troubleshooting/elasticsearch.md b/doc/administration/troubleshooting/elasticsearch.md
index 13b9c30b29d..37ec32413f8 100644
--- a/doc/administration/troubleshooting/elasticsearch.md
+++ b/doc/administration/troubleshooting/elasticsearch.md
@@ -1,6 +1,6 @@
-# Troubleshooting ElasticSearch
+# Troubleshooting Elasticsearch
-Troubleshooting ElasticSearch requires:
+Troubleshooting Elasticsearch requires:
- Knowledge of common terms.
- Establishing within which category the problem fits.
@@ -30,7 +30,7 @@ The type of problem will determine what steps to take. The possible troubleshoot
### Search Results workflow
-The following workflow is for ElasticSearch search results issues:
+The following workflow is for Elasticsearch search results issues:
```mermaid
graph TD;
@@ -42,11 +42,11 @@ graph TD;
B5 --> |Yes| B6
B5 --> |No| B7
B7 --> B8
- B{Is GitLab using<br>ElasticSearch for<br>searching?}
+ B{Is GitLab using<br>Elasticsearch for<br>searching?}
B1[Check Admin Area > Integrations<br>to ensure the settings are correct]
B2[Perform a search via<br>the rails console]
- B3[If all settings are correct<br>and it still doesn't show ElasticSearch<br>doing the searches, escalate<br>to GitLab support.]
- B4[Perform<br>the same search via the<br>ElasticSearch API]
+ B3[If all settings are correct<br>and it still doesn't show Elasticsearch<br>doing the searches, escalate<br>to GitLab support.]
+ B4[Perform<br>the same search via the<br>Elasticsearch API]
B5{Are the results<br>the same?}
B6[This means it is working as intended.<br>Speak with GitLab support<br>to confirm if the issue lies with<br>the filters.]
B7[Check the index status of the project<br>containing the missing search<br>results.]
@@ -55,7 +55,7 @@ graph TD;
### Indexing workflow
-The following workflow is for ElasticSearch indexing issues:
+The following workflow is for Elasticsearch indexing issues:
```mermaid
graph TD;
@@ -67,7 +67,7 @@ graph TD;
C --> |No| C6
C6 --> |No| C10
C7 --> |GitLab| C8
- C7 --> |ElasticSearch| C9
+ C7 --> |Elasticsearch| C9
C6 --> |Yes| C7
C10 --> |No| C12
C10 --> |Yes| C11
@@ -76,27 +76,27 @@ graph TD;
C14 --> |Yes| C15
C14 --> |No| C16
C{Is the problem with<br>creating an empty<br>index?}
- C1{Does the gitlab-production<br>index exist on the<br>ElasticSearch instance?}
- C2(Try to manually<br>delete the index on the<br>ElasticSearch instance and<br>retry creating an empty index.)
- C3{Can indices be made<br>manually on the ElasticSearch<br>instance?}
+ C1{Does the gitlab-production<br>index exist on the<br>Elasticsearch instance?}
+ C2(Try to manually<br>delete the index on the<br>Elasticsearch instance and<br>retry creating an empty index.)
+ C3{Can indices be made<br>manually on the Elasticsearch<br>instance?}
C4(Retry the creation of an empty index)
- C5(It is best to speak with an<br>ElasticSearch admin concerning the<br>instance's inability to create indices.)
+ C5(It is best to speak with an<br>Elasticsearch admin concerning the<br>instance's inability to create indices.)
C6{Is the indexer presenting<br>errors during indexing?}
- C7{Is the error a GitLab<br>error or an ElasticSearch<br>error?}
+ C7{Is the error a GitLab<br>error or an Elasticsearch<br>error?}
C8[Escalate to<br>GitLab support]
- C9[You will want<br>to speak with an<br>ElasticSearch admin.]
+ C9[You will want<br>to speak with an<br>Elasticsearch admin.]
C10{Does the index status<br>show 100%?}
C11[Escalate to<br>GitLab support]
C12{Does re-indexing the project<br> present any GitLab errors?}
C13[Rectify the GitLab errors and<br>restart troubleshooting, or<br>escalate to GitLab support.]
- C14{Does re-indexing the project<br>present errors on the <br>ElasticSearch instance?}
- C15[It would be best<br>to speak with an<br>ElasticSearch admin.]
+ C14{Does re-indexing the project<br>present errors on the <br>Elasticsearch instance?}
+ C15[It would be best<br>to speak with an<br>Elasticsearch admin.]
C16[This is likely a bug/issue<br>in GitLab and will require<br>deeper investigation. Escalate<br>to GitLab support.]
```
### Integration workflow
-The following workflow is for ElasticSearch integration issues:
+The following workflow is for Elasticsearch integration issues:
```mermaid
graph TD;
@@ -107,7 +107,7 @@ graph TD;
D4 --> |No| D5
D4 --> |Yes| D6
D{Is the error concerning<br>the beta indexer?}
- D1[It would be best<br>to speak with an<br>ElasticSearch admin.]
+ D1[It would be best<br>to speak with an<br>Elasticsearch admin.]
D2{Is the ICU development<br>package installed?}
D3>This package is required.<br>Install the package<br>and retry.]
D4{Is the error stemming<br>from the indexer?}
@@ -117,7 +117,7 @@ graph TD;
### Performance workflow
-The following workflow is for ElasticSearch performance issues:
+The following workflow is for Elasticsearch performance issues:
```mermaid
graph TD;
@@ -128,19 +128,19 @@ graph TD;
F4 --> F5
F5 --> |No| F6
F5 --> |Yes| F7
- F{Is the ElasticSearch instance<br>running on the same server<br>as the GitLab instance?}
- F1(This is not advised and will cause issues.<br>We recommend moving the ElasticSearch<br>instance to a different server.)
- F2{Does the ElasticSearch<br>server have at least 8<br>GB of RAM and 2 CPU<br>cores?}
- F3(According to ElasticSearch, a non-prod<br>server needs these as a base requirement.<br>Production often requires more. We recommend<br>you increase the server specifications.)
+ F{Is the Elasticsearch instance<br>running on the same server<br>as the GitLab instance?}
+ F1(This is not advised and will cause issues.<br>We recommend moving the Elasticsearch<br>instance to a different server.)
+ F2{Does the Elasticsearch<br>server have at least 8<br>GB of RAM and 2 CPU<br>cores?}
+ F3(According to Elasticsearch, a non-prod<br>server needs these as a base requirement.<br>Production often requires more. We recommend<br>you increase the server specifications.)
F4(Obtain the <br>cluster health information)
F5(Does it show the<br>status as green?)
- F6(We recommend you speak with<br>an ElasticSearch admin<br>about implementing sharding.)
+ F6(We recommend you speak with<br>an Elasticsearch admin<br>about implementing sharding.)
F7(Escalate to<br>GitLab support.)
```
## Troubleshooting walkthrough
-Most ElasticSearch troubleshooting can be broken down into 4 categories:
+Most Elasticsearch troubleshooting can be broken down into 4 categories:
- [Troubleshooting search results](#troubleshooting-search-results)
- [Troubleshooting indexing](#troubleshooting-indexing)
@@ -150,19 +150,19 @@ Most ElasticSearch troubleshooting can be broken down into 4 categories:
Generally speaking, if it does not fall into those four categories, it is either:
- Something GitLab support needs to look into.
-- Not a true ElasticSearch issue.
+- Not a true Elasticsearch issue.
-Exercise caution. Issues that appear to be ElasticSearch problems can be OS-level issues.
+Exercise caution. Issues that appear to be Elasticsearch problems can be OS-level issues.
### Troubleshooting search results
-Troubleshooting search result issues is rather straight forward on ElasticSearch.
+Troubleshooting search result issues is rather straightforward on Elasticsearch.
-The first step is to confirm GitLab is using ElasticSearch for the search function.
+The first step is to confirm GitLab is using Elasticsearch for the search function.
To do this:
1. Confirm the integration is enabled in **Admin Area > Settings > Integrations**.
-1. Confirm searches utilize ElasticSearch by accessing the rails console
+1. Confirm searches utilize Elasticsearch by accessing the rails console
(`sudo gitlab-rails console`) and running the following commands:
```rails
@@ -173,21 +173,21 @@ To do this:
The output from the last command is the key here. If it shows:
-- `ActiveRecord::Relation`, **it is not** using ElasticSearch.
-- `Kaminari::PaginatableArray`, **it is** using ElasticSearch.
+- `ActiveRecord::Relation`, **it is not** using Elasticsearch.
+- `Kaminari::PaginatableArray`, **it is** using Elasticsearch.
-| Not using ElasticSearch | Using ElasticSearch |
+| Not using Elasticsearch | Using Elasticsearch |
|--------------------------|------------------------------|
| `ActiveRecord::Relation` | `Kaminari::PaginatableArray` |
-If all the settings look correct and it is still not using ElasticSearch for the search function, it is best to escalate to GitLab support. This could be a bug/issue.
+If all the settings look correct and it is still not using Elasticsearch for the search function, it is best to escalate to GitLab support. This could be a bug/issue.
-Moving past that, it is best to attempt the same search using the [ElasticSearch Search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) and compare the results from what you see in GitLab.
+Moving past that, it is best to attempt the same search using the [Elasticsearch Search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) and compare the results from what you see in GitLab.
If the results:
- Sync up, then there is not a technical "issue" per se. Instead, it might be a problem
- with the ElasticSearch filters we are using. This can be complicated, so it is best to
+ with the Elasticsearch filters we are using. This can be complicated, so it is best to
escalate to GitLab support to check these and to guide you on whether or not
a feature request is needed.
- Do not match up, this indicates a problem with the documents generated from the
@@ -197,20 +197,20 @@ If the results:
### Troubleshooting indexing
Troubleshooting indexing issues can be tricky. It can pretty quickly go to either GitLab
-support or your ElasticSearch admin.
+support or your Elasticsearch admin.
The best place to start is to determine if the issue is with creating an empty index.
-If it is, check on the ElasticSearch side to determine if the `gitlab-production` (the
-name for the GitLab index) exists. If it exists, manually delete it on the ElasticSearch
+If it is, check on the Elasticsearch side to determine if the `gitlab-production` index (the
+name for the GitLab index) exists. If it exists, manually delete it on the Elasticsearch
side and attempt to recreate it from the
[`create_empty_index`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks)
rake task.
-If you still encounter issues, try creating an index manually on the ElasticSearch
+If you still encounter issues, try creating an index manually on the Elasticsearch
instance. The details of the index aren't important here, as we want to test if indices
can be made. If the indices:
-- Cannot be made, speak with your ElasticSearch admin.
+- Cannot be made, speak with your Elasticsearch admin.
- Can be made, escalate this to GitLab support.
If the issue is not with creating an empty index, the next step is to check for errors
@@ -218,7 +218,7 @@ during the indexing of projects. If errors do occur, they will either stem from
- On the GitLab side. You need to rectify those. If they are not
something you are familiar with, contact GitLab support for guidance.
-- Within the ElasticSearch instance itself. See if the error is [documented and has a fix](../../integration/elasticsearch.md#troubleshooting). If not, speak with your ElasticSearch admin.
+- Within the Elasticsearch instance itself. See if the error is [documented and has a fix](../../integration/elasticsearch.md#troubleshooting). If not, speak with your Elasticsearch admin.
If the indexing process does not present errors, you will want to check the status of the indexed projects. You can do this via the following rake tasks:
@@ -235,8 +235,8 @@ If:
If reindexing the project shows:
- Errors on the GitLab side, escalate those to GitLab support.
-- ElasticSearch errors or doesn't present any errors at all, reach out to your
- ElasticSearch admin to check the instance.
+- Elasticsearch errors or doesn't present any errors at all, reach out to your
+ Elasticsearch admin to check the instance.
### Troubleshooting integration
@@ -246,7 +246,7 @@ much to "integrate" here.
If the issue is:
- Not concerning the beta indexer, it is almost always an
- ElasticSearch-side issue. This means you should reach out to your ElasticSearch admin
+ Elasticsearch-side issue. This means you should reach out to your Elasticsearch admin
regarding the error(s) you are seeing. If you are unsure here, it never hurts to reach
out to GitLab support.
- With the beta indexer, check if the ICU development package is installed.
@@ -260,48 +260,48 @@ Beyond that, you will want to review the error. If it is:
### Troubleshooting performance
-Troubleshooting performance can be difficult on ElasticSearch. There is a ton of tuning
+Troubleshooting performance can be difficult on Elasticsearch. There is a ton of tuning
that *can* be done, but the majority of this falls on the shoulders of a skilled
-ElasticSearch administrator.
+Elasticsearch administrator.
Generally speaking, ensure:
-- The ElasticSearch server **is not** running on the same node as GitLab.
-- The ElasticSearch server have enough RAM and CPU cores.
+- The Elasticsearch server **is not** running on the same node as GitLab.
+- The Elasticsearch server has enough RAM and CPU cores.
- That sharding **is** being used.
-Going into some more detail here, if ElasticSearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, ElasticSearch, which requires ample resources, should be running on its own server (maybe coupled with logstash and kibana).
+Going into some more detail here, if Elasticsearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, Elasticsearch, which requires ample resources, should be running on its own server (maybe coupled with Logstash and Kibana).
-When it comes to ElasticSearch, RAM is the key resource. ElasticSearch themselves recommend:
+When it comes to Elasticsearch, RAM is the key resource. Elasticsearch themselves recommend:
- **At least** 8 GB of RAM for a non-production instance.
- **At least** 16 GB of RAM for a production instance.
- Ideally, 64 GB of RAM.
-For CPU, ElasticSearch recommends at least 2 CPU cores, but ElasticSearch states common
+For CPU, Elasticsearch recommends at least 2 CPU cores, but Elasticsearch states common
setups use up to 8 cores. For more details on server specs, check out
-[ElasticSearch's hardware guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html).
+[Elasticsearch's hardware guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html).
-Beyond the obvious, sharding comes into play. Sharding is a core part of ElasticSearch.
+Beyond the obvious, sharding comes into play. Sharding is a core part of Elasticsearch.
It allows for horizontal scaling of indices, which is helpful when you are dealing with
a large amount of data.
With the way GitLab does indexing, there is a **huge** amount of documents being
-indexed. By utilizing sharding, you can speed up ElasticSearch's ability to locate
+indexed. By utilizing sharding, you can speed up Elasticsearch's ability to locate
data, since each shard is a Lucene index.
If you are not using sharding, you are likely to hit issues when you start using
-ElasticSearch in a production environment.
+Elasticsearch in a production environment.
Keep in mind that an index with only one shard has **no scale factor** and will
likely encounter issues when called upon with some frequency.
If you need to know how many shards to use, read
-[ElasticSearch's documentation on capacity planning](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/capacity-planning.html),
+[Elasticsearch's documentation on capacity planning](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/capacity-planning.html),
as the answer is not straightforward.
The easiest way to determine if sharding is in use is to check the output of the
-[ElasticSearch Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html):
+[Elasticsearch Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html):
- Red means the cluster is down.
- Yellow means it is up with no sharding/replication.
@@ -311,11 +311,11 @@ For production use, it should always be green.
Beyond these steps, you get into some of the more complicated things to check,
such as merges and caching. These can get complicated and it takes some time to
-learn them, so it is best to escalate/pair with an ElasticSearch expert if you need to
+learn them, so it is best to escalate/pair with an Elasticsearch expert if you need to
dig further into these.
Feel free to reach out to GitLab support, but this is likely to be something a skilled
-ElasticSearch admin has more experience with.
+Elasticsearch admin has more experience with.
## Common issues
@@ -324,12 +324,12 @@ feel free to update that page with issues you encounter and solutions.
## Replication
-Setting up ElasticSearch isn't too bad, but it can be a bit finnicky and time consuming.
+Setting up Elasticsearch isn't too bad, but it can be a bit finicky and time-consuming.
The easiest method is to spin up a Docker container with the required version and
bind ports 9200/9300 so it can be used.
-The following is an example of running a docker container of ElasticSearch v7.2.0:
+The following is an example of running a Docker container of Elasticsearch v7.2.0:
```bash
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.2.0
@@ -341,5 +341,5 @@ From here, you can:
- Grab the IP of the docker container (use `docker inspect <container_id>`)
- Use `<IP.add.re.ss:9200>` to communicate with it.
-This is a quick method to test out ElasticSearch, but by no means is this a
+This is a quick method to test out Elasticsearch, but by no means is this a
production solution.
diff --git a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
index a064dfbfbe2..34a5acbe7b7 100644
--- a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
+++ b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
@@ -4,7 +4,7 @@ type: reference
# GitLab Rails Console Cheat Sheet
-This is the GitLab Support Team's collection of information regarding the GitLab rails
+This is the GitLab Support Team's collection of information regarding the GitLab Rails
console, for use while troubleshooting. It is listed here for transparency,
and it may be useful for users with experience with these tools. If you are currently
having an issue with GitLab, it is highly recommended that you check your
@@ -556,6 +556,14 @@ parent.members_with_descendants.count
GroupDestroyWorker.perform_async(group_id, user_id)
```
+### Modify group project creation
+
+```ruby
+# Project creation levels: 0 - No one, 1 - Maintainers, 2 - Developers + Maintainers
+group = Group.find_by_path_or_name('group-name')
+group.project_creation_level = 0
+group.save
+```
+
## LDAP
### LDAP commands in the rails console
@@ -680,6 +688,15 @@ u = User.find_by_username('')
MergeRequests::PostMergeService.new(p, u).execute(m)
```
+### Delete a merge request
+
+```ruby
+u = User.find_by_username('<username>')
+p = Project.find_by_full_path('<group>/<project>')
+m = p.merge_requests.find_by(iid: <IID>)
+Issuable::DestroyService.new(m.project, u).execute(m)
+```
+
### Rebase manually
```ruby
@@ -693,7 +710,8 @@ MergeRequests::RebaseService.new(m.target_project, u).execute(m)
### Cancel stuck pending pipelines
-See <https://gitlab.com/gitlab-com/support-forum/issues/2449#note_41929707>.
+For more information, see the [confidential issue](../../user/project/issues/confidential_issues.md)
+`https://gitlab.com/gitlab-com/support-forum/issues/2449#note_41929707`.
```ruby
Ci::Pipeline.where(project_id: p.id).where(status: 'pending').count
@@ -715,13 +733,15 @@ Namespace.find_by_full_path("user/proj").namespace_statistics.update(shared_runn
project = Project.find_by_full_path('')
builds_with_artifacts = project.builds.with_artifacts_archive
-# Prior to 10.6 the above line would be:
-# builds_with_artifacts = project.builds.with_artifacts
-
# Instance-wide:
-builds_with_artifacts = Ci::Build.with_artifacts
+builds_with_artifacts = Ci::Build.with_artifacts_archive
+
+# Prior to 10.6 the above lines would be:
+# builds_with_artifacts = project.builds.with_artifacts
+# builds_with_artifacts = Ci::Build.with_artifacts
### CLEAR THEM OUT
+# Note that this will also erase artifacts that developers marked to "Keep"
builds_to_clear = builds_with_artifacts.where("finished_at < ?", 1.week.ago)
builds_to_clear.each do |build|
build.artifacts_expire_at = Time.now
@@ -812,7 +832,7 @@ License.current # check to make sure it applied
From [Zendesk ticket #91083](https://gitlab.zendesk.com/agent/tickets/91083) (internal)
-### Poll unicorn requests by seconds
+### Poll Unicorn requests by seconds
```ruby
require 'rubygems'
@@ -872,6 +892,23 @@ queue = Sidekiq::Queue.new('repository_import')
queue.each { |job| job.delete if <condition>}
```
+`<condition>` probably includes references to job arguments, which depend on the type of job in question.
+
+| Queue | Worker | Job arguments |
+| ----- | ------ | ------------- |
+| `repository_import` | `RepositoryImportWorker` | `project_id` |
+| `update_merge_requests` | `UpdateMergeRequestsWorker` | `project_id`, `user_id`, `oldrev`, `newrev`, `ref` |
+
+**Example:** Delete all `UpdateMergeRequestsWorker` jobs associated with a merge request on `project_id` 125,
+merging branch `refs/heads/my_branch`.
+
+```ruby
+queue = Sidekiq::Queue.new('update_merge_requests')
+queue.each { |job| job.delete if job.args[0] == 125 && job.args[4] == 'refs/heads/my_branch' }
+```
+
+**Note:** Running jobs will not be killed. Stop Sidekiq before doing this, to get all matching jobs.
+
### Enable debug logging of Sidekiq
```ruby
@@ -888,13 +925,13 @@ See <https://github.com/mperham/sidekiq/wiki/Signals#ttin>.
## Redis
-### Connect to redis (omnibus)
+### Connect to Redis (omnibus)
```sh
/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket
```
-### Connect to redis (HA)
+### Connect to Redis (HA)
```sh
/opt/gitlab/embedded/bin/redis-cli -h <host ip> -a <password>
diff --git a/doc/administration/troubleshooting/kubernetes_cheat_sheet.md b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
index 1247060058b..7c2c2050b12 100644
--- a/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
+++ b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
@@ -15,7 +15,7 @@ If you are on a [paid tier](https://about.gitlab.com/pricing/) and are not sure
to use these commands, it is best to [contact Support](https://about.gitlab.com/support/)
and they will assist you with any issues you are having.
-## Generic kubernetes commands
+## Generic Kubernetes commands
- How to authorize to your GCP project (can be especially useful if you have projects
under different GCP accounts):
@@ -33,7 +33,7 @@ and they will assist you with any issues you are having.
kubectl proxy
```
-- How to ssh to a Kubernetes node and enter the container as root
+- How to SSH to a Kubernetes node and enter the container as root
<https://github.com/kubernetes/kubernetes/issues/30656>:
- For GCP, you may find the node name and run `gcloud compute ssh node-name`.
@@ -72,12 +72,12 @@ and they will assist you with any issues you are having.
This is the principle of Kubernetes, read [Twelve-factor app](https://12factor.net/)
for details.
-## GitLab-specific kubernetes information
+## GitLab-specific Kubernetes information
-- Minimal config that can be used to test a Kubernetes helm chart can be found
+- Minimal config that can be used to test a Kubernetes Helm chart can be found
[here](https://gitlab.com/gitlab-org/charts/gitlab/issues/620).
-- Tailing logs of a separate pod. An example for a unicorn pod:
+- Tailing logs of a separate pod. An example for a Unicorn pod:
```bash
kubectl logs gitlab-unicorn-7656fdd6bf-jqzfs -c unicorn
@@ -101,7 +101,7 @@ and they will assist you with any issues you are having.
```
- Check all events in the `gitlab` namespace (the namespace name can be different if you
- specified a different one when deploying the helm chart):
+ specified a different one when deploying the Helm chart):
```bash
kubectl get events -w --namespace=gitlab
@@ -140,8 +140,8 @@ and they will assist you with any issues you are having.
- Check the output of `kubectl get events -w --all-namespaces`.
- Check the logs of pods within `gitlab-managed-apps` namespace.
- - On the side of GitLab check sidekiq log and kubernetes log. When GitLab is installed
- via Helm Chart, `kubernetes.log` can be found inside the sidekiq pod.
+ - On the side of GitLab check Sidekiq log and Kubernetes log. When GitLab is installed
+ via Helm Chart, `kubernetes.log` can be found inside the Sidekiq pod.
- How to get your initial admin password <https://docs.gitlab.com/charts/installation/deployment.html#initial-login>:
@@ -191,8 +191,8 @@ and they will assist you with any issues you are having.
## Installation of minimal GitLab config via Minikube on macOS
-This section is based on [Developing for Kubernetes with Minikube](https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/minikube/index.md)
-and [Helm](https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/helm/index.md). Refer
+This section is based on [Developing for Kubernetes with Minikube](https://docs.gitlab.com/charts/development/minikube/index.html)
+and [Helm](https://docs.gitlab.com/charts/installation/tools.html#helm). Refer
to those documents for details.
- Install Kubectl via Homebrew:
diff --git a/doc/administration/troubleshooting/postgresql.md b/doc/administration/troubleshooting/postgresql.md
new file mode 100644
index 00000000000..f427cd88ce0
--- /dev/null
+++ b/doc/administration/troubleshooting/postgresql.md
@@ -0,0 +1,146 @@
+---
+type: reference
+---
+
+# PostgreSQL
+
+This page contains useful information about PostgreSQL that the GitLab Support
+Team sometimes uses while troubleshooting. GitLab is making this public, so that anyone
+can make use of the Support team's collected knowledge.
+
+CAUTION: **Caution:** Some procedures documented here may break your GitLab instance. Use at your own risk.
+
+If you are on a [paid tier](https://about.gitlab.com/pricing/) and are not sure how
+to use these commands, it is best to [contact Support](https://about.gitlab.com/support/)
+and they will assist you with any issues you are having.
+
+## Other GitLab PostgreSQL documentation
+
+This section is for links to information elsewhere in the GitLab documentation.
+
+### Procedures
+
+- [Connect to the PostgreSQL console.](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database)
+
+- [Omnibus database procedures](https://docs.gitlab.com/omnibus/settings/database.html) including
+ - SSL: enabling, disabling, and verifying.
+ - Enabling Write Ahead Log (WAL) archiving.
+ - Using an external (non-Omnibus) PostgreSQL installation; and backing it up.
+ - Listening on TCP/IP as well as or instead of sockets.
+ - Storing data in another location.
+ - Destructively reseeding the GitLab database.
+ - Guidance around updating packaged PostgreSQL, including how to stop it happening automatically.
+
+- [More about external PostgreSQL](../external_database.md)
+
+- [Running Geo with external PostgreSQL](../geo/replication/external_database.md)
+
+- [Upgrades when running PostgreSQL configured for HA.](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-gitlab-ha-cluster)
+
+- Consuming PostgreSQL from [within CI runners](../../ci/services/postgres.md)
+
+- [Using Slony to update PostgreSQL](../../update/upgrading_postgresql_using_slony.md)
+ - Uses replication to handle PostgreSQL upgrades - providing the schemas are the same.
+  - Reduces downtime to a short window for swinging over to the newer version.
+
+- Managing Omnibus PostgreSQL versions [from the development docs](https://docs.gitlab.com/omnibus/development/managing-postgresql-versions.html)
+
+- [PostgreSQL scaling and HA](../high_availability/database.md)
+  - including [troubleshooting](../high_availability/database.md#troubleshooting) `gitlab-ctl repmgr-check-master` and PgBouncer errors
+
+- [Developer database documentation](../../development/README.md#database-guides) - some of which is absolutely not for production use. Including:
+  - understanding `EXPLAIN` plans
+
+### Troubleshooting/Fixes
+
+- [GitLab database requirements](../../install/requirements.md#database) including
+ - Support for MySQL was removed in GitLab 12.1; [migrate to PostgreSQL](../../update/mysql_to_postgresql.md)
+  - required extension `pg_trgm`
+  - required extension `postgres_fdw` for Geo
+
+- Errors like this in the production or Sidekiq log; see: [Set default_transaction_isolation into read committed](https://docs.gitlab.com/omnibus/settings/database.html#set-default_transaction_isolation-into-read-committed)
+
+```
+ActiveRecord::StatementInvalid PG::TRSerializationFailure: ERROR: could not serialize access due to concurrent update
+```
+
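+The fix from the linked page is usually applied in `/etc/gitlab/gitlab.rb` on Omnibus
+installations, followed by `sudo gitlab-ctl reconfigure`. This is only a sketch; the key
+name below is an assumption, so verify it against the linked Omnibus documentation:
+
+```ruby
+# /etc/gitlab/gitlab.rb - assumed key name; check the linked Omnibus docs before applying
+postgresql['default_transaction_isolation'] = 'read committed'
+```
+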
+- PostgreSQL HA - [replication slot errors](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting-upgrades-in-an-ha-cluster)
+
+```
+pg_basebackup: could not create temporary replication slot "pg_basebackup_12345": ERROR: all replication slots are in use
+HINT: Free one or increase max_replication_slots.
+```
+
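+The following is a minimal sketch of raising the slot limit in `/etc/gitlab/gitlab.rb`
+on the primary, followed by `sudo gitlab-ctl reconfigure`. The exact value depends on how
+many replicas connect to the primary, so treat the number below as a placeholder:
+
+```ruby
+# /etc/gitlab/gitlab.rb - placeholder value; size this to the number of replicas plus headroom
+postgresql['max_replication_slots'] = 4
+```
+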
+- Geo [replication errors](../geo/replication/troubleshooting.md#fixing-replication-errors) including:
+
+```
+ERROR: replication slots can only be used if max_replication_slots > 0
+
+FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist
+
+Command exceeded allowed execution time
+
+PANIC: could not write to file 'pg_xlog/xlogtemp.123': No space left on device
+```
+
+- [Checking Geo configuration](../geo/replication/troubleshooting.md#checking-configuration) including
+ - reconfiguring hosts/ports
+ - checking and fixing user/password mappings
+
+- [Common Geo errors](../geo/replication/troubleshooting.md#fixing-common-errors)
+
+## Support topics
+
+### Database deadlocks
+
+References:
+
+- [Issue #1 Deadlocks with GitLab 12.1, PostgreSQL 10.7](https://gitlab.com/gitlab-org/gitlab/issues/30528)
+- [Customer ticket (internal) GitLab 12.1.6](https://gitlab.zendesk.com/agent/tickets/134307) and [google doc (internal)](https://docs.google.com/document/d/19xw2d_D1ChLiU-MO1QzWab-4-QXgsIUcN5e_04WTKy4)
+- [Issue #2 deadlocks can occur if an instance is flooded with pushes](https://gitlab.com/gitlab-org/gitlab/issues/33650). Provided for context about how GitLab code can have this sort of unanticipated effect in unusual situations.
+
+```
+ERROR: deadlock detected
+```
+
+Three applicable timeouts are identified in the issue [#1](https://gitlab.com/gitlab-org/gitlab/issues/30528); our recommended settings are as follows:
+
+```
+deadlock_timeout = 5s
+statement_timeout = 15s
+idle_in_transaction_session_timeout = 60s
+```
+
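+For Omnibus installations, the following is a minimal sketch of applying these values in
+`/etc/gitlab/gitlab.rb`, followed by `sudo gitlab-ctl reconfigure`. The `postgresql[...]`
+key names are assumptions, so verify them against your Omnibus version before relying on
+them:
+
+```ruby
+# /etc/gitlab/gitlab.rb - assumed key names; verify against your Omnibus version
+postgresql['deadlock_timeout'] = '5s'
+postgresql['statement_timeout'] = '15s'
+postgresql['idle_in_transaction_session_timeout'] = '60s'
+```
+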
+Quoting from issue [#1](https://gitlab.com/gitlab-org/gitlab/issues/30528):
+
+> "If a deadlock is hit, and we resolve it through aborting the transaction after a short period, then the retry mechanisms we already have will make the deadlocked piece of work try again, and it's unlikely we'll deadlock multiple times in a row."
+
+TIP: **Tip:** In support, our general approach to reconfiguring timeouts (this also applies to the HTTP stack) is that it's acceptable to do so temporarily as a workaround. If it makes GitLab usable for the customer, it buys time to understand the problem more completely, implement a hot fix, or make some other change that addresses the root cause. Generally, the timeouts should be put back to reasonable defaults once the root cause is resolved.
+
+In this case, the guidance we had from development was to drop `deadlock_timeout` and/or `statement_timeout`, but to leave the third setting at 60s. Setting `idle_in_transaction_session_timeout` protects the database from sessions potentially hanging for days. There's more discussion in [the issue relating to introducing this timeout on GitLab.com](https://gitlab.com/gitlab-com/gl-infra/production/issues/1053).
+
+PostgreSQL defaults:
+
+- `statement_timeout = 0` (never)
+- `idle_in_transaction_session_timeout = 0` (never)
+
+Comments in issue [#1](https://gitlab.com/gitlab-org/gitlab/issues/30528) indicate that these should both be set to at least a number of minutes for all Omnibus installations (so they don't hang indefinitely). However, 15s for `statement_timeout` is very short, and will only be effective if the underlying infrastructure is very performant.
+
+See current settings with:
+
+```
+sudo gitlab-rails runner "c = ApplicationRecord.connection ; puts c.execute('SHOW statement_timeout').to_a ;
+puts c.execute('SHOW lock_timeout').to_a ;
+puts c.execute('SHOW idle_in_transaction_session_timeout').to_a ;"
+```
+
+It may take a little while to respond.
+
+```
+{"statement_timeout"=>"1min"}
+{"lock_timeout"=>"0"}
+{"idle_in_transaction_session_timeout"=>"1min"}
+```
+
+NOTE: **Note:**
+These are Omnibus settings. If an external database, such as a customer's PostgreSQL installation or Amazon RDS, is being used, these values don't get set and would have to be set externally.
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md
index fdafac8420e..41657368ea4 100644
--- a/doc/administration/troubleshooting/sidekiq.md
+++ b/doc/administration/troubleshooting/sidekiq.md
@@ -238,7 +238,7 @@ workers.each do |process_id, thread_id, work|
end
```
-### Remove sidekiq jobs for given parameters (destructive)
+### Remove Sidekiq jobs for given parameters (destructive)
```ruby
# for jobs like this:
diff --git a/doc/administration/troubleshooting/test_environments.md b/doc/administration/troubleshooting/test_environments.md
index f1cdaf580a3..d0f670a5663 100644
--- a/doc/administration/troubleshooting/test_environments.md
+++ b/doc/administration/troubleshooting/test_environments.md
@@ -49,7 +49,7 @@ gitlab/gitlab-ee:11.5.3-ee.0
#### SAML for Authentication
-We can use the [test-saml-idp Docker image](https://hub.docker.com/r/jamedjo/test-saml-idp)
+We can use the [`test-saml-idp` Docker image](https://hub.docker.com/r/jamedjo/test-saml-idp)
to do the work for us:
```sh
@@ -91,7 +91,7 @@ gitlab_rails['omniauth_providers'] = [
See [the GDK SAML documentation](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/saml.md).
-### ElasticSearch
+### Elasticsearch
```sh
docker run -d --name elasticsearch \
@@ -101,7 +101,7 @@ docker.elastic.co/elasticsearch/elasticsearch:5.5.1
```
Then confirm it works by running `curl http://<IP_ADDRESS>:9200/_cat/health`.
-ElasticSearch's default username is `elastic` and password is `changeme`.
+Elasticsearch's default username is `elastic` and password is `changeme`.
### PlantUML
diff --git a/doc/administration/uploads.md b/doc/administration/uploads.md
index c6dadbb500b..04cebe568f3 100644
--- a/doc/administration/uploads.md
+++ b/doc/administration/uploads.md
@@ -81,7 +81,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
+| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
@@ -165,7 +165,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
|---------|-------------|---------|
| `provider` | Always `OpenStack` for compatible hosts | OpenStack |
| `openstack_username` | OpenStack username | |
-| `openstack_api_key` | OpenStack api key | |
+| `openstack_api_key` | OpenStack API key | |
| `openstack_temp_url_key` | OpenStack key for generating temporary URLs | |
| `openstack_auth_url` | OpenStack authentication endpoint | |
| `openstack_region` | OpenStack region | |