Diffstat (limited to 'doc/administration/reference_architectures')

 doc/administration/reference_architectures/10k_users.md | 66
 doc/administration/reference_architectures/25k_users.md | 68
 doc/administration/reference_architectures/2k_users.md  | 52
 doc/administration/reference_architectures/3k_users.md  | 67
 doc/administration/reference_architectures/50k_users.md | 70
 doc/administration/reference_architectures/5k_users.md  | 73
 doc/administration/reference_architectures/index.md     |  9
 7 files changed, 119 insertions(+), 286 deletions(-)
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index a2463c6ff88..88913eb1f7f 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -35,13 +35,13 @@ full list of reference architectures, see
| GitLab Rails | 3 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` |
| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
| Object storage<sup>4</sup> | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -189,7 +189,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and typically will require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -227,9 +227,6 @@ To set up GitLab and its components to accommodate up to 10,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage)
used for shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -1275,7 +1272,7 @@ in the second step, do not supply the `EXTERNAL_URL` value.
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
- postgresql['max_connections'] = 200
+ postgresql['max_connections'] = 500
# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
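
For reference, here is how the resulting PostgreSQL block in `/etc/gitlab/gitlab.rb` would read after this change — a minimal sketch of just this hunk, with the surrounding HA settings from the full page omitted:

```ruby
# /etc/gitlab/gitlab.rb on each PostgreSQL node (sketch of this hunk only)
postgresql['listen_address'] = '0.0.0.0'
postgresql['max_connections'] = 500   # raised from 200 in this change

# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
```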
@@ -1759,9 +1756,8 @@ To configure Praefect with TLS:
Sidekiq requires connection to the [Redis](#configure-redis),
[PostgreSQL](#configure-postgresql) and [Gitaly](#configure-gitaly) instances.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
- `10.6.0.101`: Sidekiq 1
- `10.6.0.102`: Sidekiq 2
@@ -1859,8 +1855,8 @@ Updates to example must be made at:
# Set number of Sidekiq queue processes to the same number as available CPUs
sidekiq['queue_groups'] = ['*'] * 4
- # Set number of Sidekiq threads per queue process to the recommend number of 10
- sidekiq['max_concurrency'] = 10
+ # Set number of Sidekiq threads per queue process to the recommended number of 20
+ sidekiq['max_concurrency'] = 20
# Monitoring
consul['enable'] = true
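
Combined with the queue setting above it, the updated Sidekiq tuning amounts to the following `/etc/gitlab/gitlab.rb` fragment — a sketch only, where the `4` matches this architecture's Sidekiq vCPU count:

```ruby
# /etc/gitlab/gitlab.rb on each Sidekiq node (sketch)
sidekiq['queue_groups'] = ['*'] * 4   # one queue process per available vCPU
sidekiq['max_concurrency'] = 20       # threads per queue process, up from 10
```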
@@ -1918,7 +1914,7 @@ Updates to example must be made at:
NOTE:
If you find that the environment's Sidekiq job processing is slow with long queues,
more nodes can be added as required. You can also tune your Sidekiq nodes to
-run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+run [multiple Sidekiq processes](../sidekiq/extra_sidekiq_processes.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
@@ -1929,9 +1925,8 @@ run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
## Configure GitLab Rails
This section describes how to configure the GitLab application (Rails) component.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
The following IPs will be used as an example:
@@ -2070,7 +2065,6 @@ On each node perform the following:
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the first Omnibus node you configured and add or replace
the file of the same name on this server. If this is the first Omnibus node you are configuring then you can skip this step.
-
1. To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
```shell
@@ -2081,9 +2075,7 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Confirm the node can connect to Gitaly:
```shell
@@ -2209,9 +2201,6 @@ To configure the Monitoring node:
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -2235,7 +2224,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
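
To illustrate the separate-buckets recommendation, a consolidated object storage block in `/etc/gitlab/gitlab.rb` might look like the sketch below. The provider, region, and bucket names are placeholders, and only three of the supported object types are shown:

```ruby
# Consolidated object storage configuration (sketch; all names are placeholders)
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['proxy_download'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'use_iam_profile' => true
}
# One bucket per data type so stored objects never collide
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'example-gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'example-gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'example-gitlab-uploads'
```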
@@ -2254,22 +2243,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, it's recommended to avoid the need to use NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
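
If incremental logging has to be turned on by hand, it is driven by a feature flag from the Rails console. A sketch, assuming the `ci_enable_live_trace` flag described on the linked incremental logging page:

```ruby
# In `sudo gitlab-rails console` on a Rails node (sketch)
Feature.enable(:ci_enable_live_trace)    # cache job logs in Redis instead of on disk
Feature.enabled?(:ci_enable_live_trace)  # verify; returns true once enabled
```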
-## Configure NFS (optional)
-
-[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
-are recommended over NFS wherever possible for improved performance.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -2319,12 +2292,10 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
|---------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | 127.5 vCPU, 118 GB memory |
| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
@@ -2333,7 +2304,7 @@ future with further specific cloud provider details.
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
- Nodes configuration is shown as it is forced to ensure pod vCPU / memory ratios and avoid scaling during **performance testing**.
- - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
+ - In production deployments, there is no need to assign pods to specific nodes. A minimum of three nodes per node group in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
@@ -2355,7 +2326,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index 84eba01fe11..02739904f5e 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -35,13 +35,13 @@ full list of reference architectures, see
| GitLab Rails | 5 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` |
| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
| Object storage<sup>4</sup> | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -189,7 +189,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and typically will require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -227,9 +227,6 @@ To set up GitLab and its components to accommodate up to 25,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage)
used for shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -1295,7 +1292,7 @@ in the second step, do not supply the `EXTERNAL_URL` value.
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
- postgresql['max_connections'] = 200
+ postgresql['max_connections'] = 500
# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
@@ -1777,9 +1774,8 @@ To configure Praefect with TLS:
Sidekiq requires connection to the [Redis](#configure-redis),
[PostgreSQL](#configure-postgresql) and [Gitaly](#configure-gitaly) instances.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
- `10.6.0.101`: Sidekiq 1
- `10.6.0.102`: Sidekiq 2
@@ -1877,8 +1873,8 @@ Updates to example must be made at:
# Set number of Sidekiq queue processes to the same number as available CPUs
sidekiq['queue_groups'] = ['*'] * 4
- # Set number of Sidekiq threads per queue process to the recommend number of 10
- sidekiq['max_concurrency'] = 10
+ # Set number of Sidekiq threads per queue process to the recommended number of 20
+ sidekiq['max_concurrency'] = 20
# Monitoring
consul['enable'] = true
@@ -1936,7 +1932,7 @@ Updates to example must be made at:
NOTE:
If you find that the environment's Sidekiq job processing is slow with long queues,
more nodes can be added as required. You can also tune your Sidekiq nodes to
-run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+run [multiple Sidekiq processes](../sidekiq/extra_sidekiq_processes.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
@@ -1947,9 +1943,8 @@ run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
## Configure GitLab Rails
This section describes how to configure the GitLab application (Rails) component.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
The following IPs will be used as an example:
@@ -2090,7 +2085,6 @@ On each node perform the following:
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the first Omnibus node you configured and add or replace
the file of the same name on this server. If this is the first Omnibus node you are configuring then you can skip this step.
-
1. To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
```shell
@@ -2101,9 +2095,7 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Confirm the node can connect to Gitaly:
```shell
@@ -2228,9 +2220,6 @@ To configure the Monitoring node:
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -2254,7 +2243,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
@@ -2273,22 +2262,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, it's recommended to avoid the need to use NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
-## Configure NFS (optional)
-
-[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
-are recommended over NFS wherever possible for improved performance.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -2338,12 +2311,10 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
|---------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | 223 vCPU, 206.5 GB memory |
| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
@@ -2352,7 +2323,7 @@ future with further specific cloud provider details.
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
- Nodes configuration is shown as it is forced to ensure pod vCPU / memory ratios and avoid scaling during **performance testing**.
- - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
+ - In production deployments, there is no need to assign pods to specific nodes. A minimum of three nodes per node group in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
@@ -2362,7 +2333,7 @@ services where applicable):
| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
| PostgreSQL<sup>1</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` |
| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
-| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` | `c5.xlarge` |
+| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
| Gitaly<sup>5 6</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` |
@@ -2374,7 +2345,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index 1acae93f764..f41c8e9cb24 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -29,12 +29,12 @@ For a full list of reference architectures, see
| GitLab Rails | 2 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
| Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
| Object storage<sup>4</sup> | - | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440), and [Azure Database for PostgreSQL](https://azure.microsoft.com/en-gb/products/postgresql/#overview) is **not recommended** due to [performance issues](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -125,7 +125,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and typically will require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -151,9 +151,6 @@ To set up GitLab and its components to accommodate up to 2,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage) used for
shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -700,8 +697,8 @@ On each node perform the following:
puma['listen'] = '0.0.0.0'
sidekiq['listen_address'] = "0.0.0.0"
- # Configure Sidekiq with 2 workers and 10 max concurrency
- sidekiq['max_concurrency'] = 10
+ # Configure Sidekiq with 2 workers and 20 max concurrency
+ sidekiq['max_concurrency'] = 20
sidekiq['queue_groups'] = ['*'] * 2
# Add the monitoring node's IP address to the monitoring whitelist and allow it to
@@ -780,9 +777,7 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Run `sudo gitlab-rake gitlab:gitaly:check` to confirm the node can connect to Gitaly.
1. Tail the logs to see the requests:
@@ -930,9 +925,6 @@ running [Prometheus](../monitoring/prometheus/index.md) and
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -957,7 +949,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
@@ -976,23 +968,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, it's recommended to avoid the need to use NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
-## Configure NFS (optional)
-
-For improved performance, [object storage](#configure-the-object-storage),
-along with [Gitaly](#configure-gitaly), are recommended over using NFS whenever
-possible.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search **(PREMIUM SELF)**
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -1046,12 +1021,10 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
|---------------------|-------|------------------------|-----------------|--------------|---------------------------------|
| Webservice | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | 23.7 vCPU, 16.9 GB memory |
| Sidekiq | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 7.8 vCPU, 25.9 GB memory |
@@ -1060,7 +1033,7 @@ future with further specific cloud provider details.
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
- Nodes configuration is shown as it is forced to ensure pod vCPU / memory ratios and avoid scaling during **performance testing**.
- - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
+ - In production deployments, there is no need to assign pods to specific nodes. A minimum of three nodes per node group in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
@@ -1076,7 +1049,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440), and [Azure Database for PostgreSQL](https://azure.microsoft.com/en-gb/products/postgresql/#overview) is **not recommended** due to [performance issues](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index 4fc6af3f72e..008b5ffcc0e 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -44,13 +44,13 @@ For a full list of reference architectures, see
| GitLab Rails | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` |
| Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
| Object storage<sup>4</sup> | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -195,7 +195,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and typically will require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -233,9 +233,6 @@ To set up GitLab and its components to accommodate up to 3,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage)
used for shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -1230,7 +1227,7 @@ in the second step, do not supply the `EXTERNAL_URL` value.
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
- postgresql['max_connections'] = 200
+ postgresql['max_connections'] = 500
# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
@@ -1711,9 +1708,8 @@ To configure Praefect with TLS:
Sidekiq requires connection to the [Redis](#configure-redis),
[PostgreSQL](#configure-postgresql) and [Gitaly](#configure-gitaly) instances.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
The following IPs will be used as an example:
@@ -1794,8 +1790,8 @@ Updates to example must be made at:
## Set number of Sidekiq queue processes to the same number as available CPUs
sidekiq['queue_groups'] = ['*'] * 2
- ## Set number of Sidekiq threads per queue process to the recommend number of 10
- sidekiq['max_concurrency'] = 10
+ ## Set number of Sidekiq threads per queue process to the recommended number of 20
+ sidekiq['max_concurrency'] = 20
# Monitoring
consul['enable'] = true
@@ -1869,7 +1865,7 @@ Updates to example must be made at:
NOTE:
If you find that the environment's Sidekiq job processing is slow with long queues,
more nodes can be added as required. You can also tune your Sidekiq nodes to
-run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+run [multiple Sidekiq processes](../sidekiq/extra_sidekiq_processes.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
@@ -1880,9 +1876,8 @@ run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
## Configure GitLab Rails
This section describes how to configure the GitLab application (Rails) component.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
On each node perform the following:
@@ -2021,7 +2016,6 @@ On each node perform the following:
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the first Omnibus node you configured and add or replace
the file of the same name on this server. If this is the first Omnibus node you are configuring then you can skip this step.
-
1. To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
```shell
@@ -2032,11 +2026,8 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Run `sudo gitlab-rake gitlab:gitaly:check` to confirm the node can connect to Gitaly.
-
1. Tail the logs to see the requests:
```shell
@@ -2175,9 +2166,6 @@ running [Prometheus](../monitoring/prometheus/index.md) and
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -2201,7 +2189,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
@@ -2220,22 +2208,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, it's recommended to avoid the need to use NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
-## Configure NFS (optional)
-
-[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
-are recommended over NFS wherever possible for improved performance.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -2309,12 +2281,10 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
|---------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | 31.8 vCPU, 24.8 GB memory |
| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 11.8 vCPU, 38.9 GB memory |
@@ -2323,7 +2293,7 @@ future with further specific cloud provider details.
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
- Nodes configuration is shown as it is forced to ensure pod vCPU / memory ratios and avoid scaling during **performance testing**.
- - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
+ - In production deployments, there is no need to assign pods to specific nodes. A minimum of three nodes per node group in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
@@ -2344,7 +2314,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index ca159d62f1f..87d1408b568 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -35,13 +35,13 @@ full list of reference architectures, see
| GitLab Rails | 12 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` |
| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
| Object storage<sup>4</sup> | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -189,7 +189,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and typically will require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -227,9 +227,6 @@ To set up GitLab and its components to accommodate up to 50,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage)
used for shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -296,8 +293,8 @@ could also be used, those load balancers have not been validated.
### Balancing algorithm
-We recommend that a least-connection load balancing algorithm or equivalent
-is used wherever possible to ensure equal spread of calls to the nodes and good performance.
+You should use a least-connection load balancing algorithm or equivalent
+wherever possible to ensure equal spread of calls to the nodes and good performance.
We don't recommend the use of round-robin algorithms as they are known not to
spread connections equally in practice.
@@ -1288,7 +1285,7 @@ in the second step, do not supply the `EXTERNAL_URL` value.
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
- postgresql['max_connections'] = 200
+ postgresql['max_connections'] = 500
# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
@@ -1772,9 +1769,8 @@ To configure Praefect with TLS:
Sidekiq requires connection to the [Redis](#configure-redis),
[PostgreSQL](#configure-postgresql) and [Gitaly](#configure-gitaly) instances.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
- `10.6.0.101`: Sidekiq 1
- `10.6.0.102`: Sidekiq 2
@@ -1872,8 +1868,8 @@ Updates to example must be made at:
## Set number of Sidekiq queue processes to the same number as available CPUs
sidekiq['queue_groups'] = ['*'] * 4
- ## Set number of Sidekiq threads per queue process to the recommend number of 10
- sidekiq['max_concurrency'] = 10
+ ## Set number of Sidekiq threads per queue process to the recommended number of 20
+ sidekiq['max_concurrency'] = 20
# Monitoring
consul['enable'] = true
@@ -1931,7 +1927,7 @@ Updates to example must be made at:
NOTE:
If you find that the environment's Sidekiq job processing is slow with long queues,
more nodes can be added as required. You can also tune your Sidekiq nodes to
-run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+run [multiple Sidekiq processes](../sidekiq/extra_sidekiq_processes.md).
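For example, a node with spare CPU capacity can run one queue process per vCPU. A minimal `/etc/gitlab/gitlab.rb` sketch with hypothetical values, assuming an 8 vCPU Sidekiq node:
```ruby
## Hypothetical scale-up: one Sidekiq queue process per vCPU on an 8 vCPU node,
## each running the recommended 20 threads (8 x 20 = 160 job threads in total).
sidekiq['queue_groups'] = ['*'] * 8
sidekiq['max_concurrency'] = 20
```
Run `sudo gitlab-ctl reconfigure` afterwards for the change to take effect.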
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
@@ -1942,9 +1938,8 @@ run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
## Configure GitLab Rails
This section describes how to configure the GitLab application (Rails) component.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
The following IPs will be used as an example:
@@ -2092,7 +2087,6 @@ On each node perform the following:
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the first Omnibus node you configured and add or replace
the file of the same name on this server. If this is the first Omnibus node you are configuring, you can skip this step.
-
1. To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
```shell
@@ -2103,9 +2097,7 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Confirm the node can connect to Gitaly:
```shell
@@ -2230,9 +2222,6 @@ To configure the Monitoring node:
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -2256,7 +2245,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
@@ -2275,22 +2264,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, you should avoid the need for NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
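As a rough sketch, incremental logging can be enabled from the Rails console (`sudo gitlab-rails console`), assuming the `ci_enable_live_trace` feature flag described in the job logs documentation:
```ruby
# Cache job log chunks in Redis instead of on shared disk.
# Flag name assumed from the incremental logging documentation.
Feature.enable(:ci_enable_live_trace)
```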
-## Configure NFS (optional)
-
-[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
-are recommended over NFS wherever possible for improved performance.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -2340,12 +2313,10 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
|---------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `m5.8xlarge` | 510 vCPU, 472 GB memory |
| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
@@ -2354,7 +2325,7 @@ future with further specific cloud provider details.
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
- The node configuration shown is enforced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
- - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
+ - In production deployments, there is no need to assign pods to specific nodes. A minimum of three nodes per node group in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
@@ -2376,7 +2347,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index a2b92f9c300..182edb82b5f 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -41,13 +41,13 @@ costly-to-operate environment by using the
| GitLab Rails | 3 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` |
| Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
| Object storage<sup>4</sup> | - | - | - | - |
-| NFS server (non-Gitaly) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
@@ -192,7 +192,7 @@ CI pipelines alike.
As such, large repositories come with notable cost and will typically require more resources to handle,
significantly so in some cases. It's therefore **strongly** recommended to review large repositories
-to ensure they maintain good repo health and reduce their size wherever possible.
+to ensure they maintain good health and reduce their size wherever possible.
NOTE:
If best practices aren't followed and large repositories are present on the environment,
@@ -230,9 +230,6 @@ To set up GitLab and its components to accommodate up to 5,000 users:
environment.
1. [Configure the object storage](#configure-the-object-storage)
used for shared data objects.
-1. [Configure NFS](#configure-nfs-optional) (optional, and not recommended)
- to have shared disk storage service as an alternative to Gitaly or object
- storage.
1. [Configure Advanced Search](#configure-advanced-search) (optional) for faster,
more advanced code search across your entire GitLab instance.
@@ -1226,7 +1223,7 @@ in the second step, do not supply the `EXTERNAL_URL` value.
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
- postgresql['max_connections'] = 200
+ postgresql['max_connections'] = 500
# Prevent database migrations from running on upgrade automatically
gitlab_rails['auto_migrate'] = false
@@ -1708,9 +1705,8 @@ To configure Praefect with TLS:
Sidekiq requires connection to the [Redis](#configure-redis),
[PostgreSQL](#configure-postgresql) and [Gitaly](#configure-gitaly) instances.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
- `10.6.0.71`: Sidekiq 1
- `10.6.0.72`: Sidekiq 2
@@ -1790,8 +1786,8 @@ Updates to example must be made at:
## Set number of Sidekiq queue processes to the same number as available CPUs
sidekiq['queue_groups'] = ['*'] * 4
- ## Set number of Sidekiq threads per queue process to the recommend number of 10
- sidekiq['max_concurrency'] = 10
+ ## Set number of Sidekiq threads per queue process to the recommended number of 20
+ sidekiq['max_concurrency'] = 20
# Monitoring
consul['enable'] = true
@@ -1865,7 +1861,7 @@ Updates to example must be made at:
NOTE:
If you find that the environment's Sidekiq job processing is slow with long queues,
more nodes can be added as required. You can also tune your Sidekiq nodes to
-run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
+run [multiple Sidekiq processes](../sidekiq/extra_sidekiq_processes.md).
<div align="right">
<a type="button" class="btn btn-default" href="#setup-components">
@@ -1876,9 +1872,8 @@ run [multiple Sidekiq processes](../operations/extra_sidekiq_processes.md).
## Configure GitLab Rails
This section describes how to configure the GitLab application (Rails) component.
-Since it's recommended to use [Object storage](#configure-the-object-storage)
-over [NFS](#configure-nfs-optional) for data objects, the following examples
-include the Object storage configuration.
+Because you must use [Object storage](#configure-the-object-storage) instead of NFS for data objects, the following
+examples include the Object storage configuration.
On each node perform the following:
@@ -2020,7 +2015,6 @@ On each node perform the following:
1. Copy the `/etc/gitlab/gitlab-secrets.json` file from the first Omnibus node you configured and add or replace
the file of the same name on this server. If this is the first Omnibus node you are configuring, you can skip this step.
-
1. To ensure database migrations are only run during reconfigure and not automatically on upgrade, run:
```shell
@@ -2031,11 +2025,8 @@ On each node perform the following:
[GitLab Rails post-configuration](#gitlab-rails-post-configuration) section.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-
-1. [Enable incremental logging](#enable-incremental-logging), unless you are using [NFS](#configure-nfs-optional).
-
+1. [Enable incremental logging](#enable-incremental-logging).
1. Run `sudo gitlab-rake gitlab:gitaly:check` to confirm the node can connect to Gitaly.
-
1. Tail the logs to see the requests:
```shell
@@ -2174,9 +2165,6 @@ running [Prometheus](../monitoring/prometheus/index.md) and
## Configure the object storage
GitLab supports using an object storage service for holding numerous types of data.
-It's recommended over [NFS](#configure-nfs-optional) and in general it's better
-in larger setups as object storage is typically much more performant, reliable,
-and scalable.
GitLab has been tested on a number of object storage providers:
@@ -2200,7 +2188,7 @@ NOTE:
When using the [storage-specific form](../object_storage.md#storage-specific-configuration)
in GitLab 14.x and earlier, you should enable [direct upload mode](../../development/uploads/index.md#direct-upload).
The previous [background upload](../../development/uploads/index.md#direct-upload) mode,
-which was deprecated in 14.9, requires shared storage such as [NFS](#configure-nfs-optional).
+which was deprecated in 14.9, requires shared storage such as NFS.
Using separate buckets for each data type is the recommended approach for GitLab.
This ensures there are no collisions across the various types of data GitLab stores.
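As a rough sketch, a consolidated object storage configuration in `/etc/gitlab/gitlab.rb` with separate buckets might look as follows, assuming AWS S3 and hypothetical bucket names:
```ruby
# Consolidated object storage form. Provider, region, and bucket
# names are hypothetical examples; substitute your own values.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'use_iam_profile' => true
}
# One bucket per data type so stored objects cannot collide:
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'
```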
@@ -2219,22 +2207,6 @@ GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily
While sharing the job logs through NFS is supported, you should avoid the need for NFS by enabling [incremental logging](../job_logs.md#incremental-logging-architecture) (required when no NFS node has been deployed). Incremental logging uses Redis instead of disk space for temporary caching of job logs.
-## Configure NFS (optional)
-
-[Object storage](#configure-the-object-storage), along with [Gitaly](#configure-gitaly)
-are recommended over NFS wherever possible for improved performance.
-
-See how to [configure NFS](../nfs.md).
-
-WARNING:
-Engineering support for NFS for Git repositories is deprecated, and [technical support is scheduled to be unavailable](../nfs.md#gitaly-and-nfs-deprecation)
-after the release of GitLab 15.6. No further enhancements are planned for this feature.
-
-Read:
-
-- [Gitaly and NFS Deprecation](../nfs.md#gitaly-and-nfs-deprecation).
-- About the [correct mount options to use](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
-
## Configure Advanced Search
You can leverage Elasticsearch and [enable Advanced Search](../../integration/advanced_search/elasticsearch.md)
@@ -2284,16 +2256,14 @@ Refer to [epic 6127](https://gitlab.com/groups/gitlab-org/-/epics/6127) for more
The following tables and diagram detail the hybrid environment using the same formats
as the normal environment above.
-First are the components that run in Kubernetes. The recommendation at this time is to
-use Google Cloud's Kubernetes Engine (GKE) or AWS Elastic Kubernetes Service (EKS) and associated machine types, but the memory
-and CPU requirements should translate to most other providers. We hope to update this in the
-future with further specific cloud provider details.
+First are the components that run in Kubernetes. These run across several node groups, although you can change
+the overall makeup as desired, as long as the minimum CPU and memory requirements are observed.
-| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
-|-----------------------------------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
-| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | 79.5 vCPU, 62 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | 3.9 vCPU, 11.8 GB memory |
+| Service Node Group | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|---------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | 79.5 vCPU, 62 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | 3.9 vCPU, 11.8 GB memory |
- For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results)
[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -2319,7 +2289,8 @@ services where applicable):
<!-- markdownlint-disable MD029 -->
1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
- [Google Cloud SQL](https://cloud.google.com/sql/docs/postgres/high-availability#normal) and [Amazon RDS](https://aws.amazon.com/rds/) are known to work.
- - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is **incompatible** with load balancing enabled by default in [14.4.0](../../update/index.md#1440).
- Consul is primarily used for Omnibus PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However, Consul is also used optionally by Prometheus for Omnibus auto host discovery.
2. Can be optionally run on reputable third-party external PaaS Redis solutions. See [Recommended cloud providers and services](index.md#recommended-cloud-providers-and-services) for more information.
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 467cc332e25..60258fb5a09 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -207,7 +207,8 @@ Several cloud provider services are known not to support the above or have been
- [Amazon Aurora](https://aws.amazon.com/rds/aurora/) is incompatible and not supported. See [14.4.0](../../update/index.md#1440) for more details.
- [Azure Database for PostgreSQL Single Server](https://azure.microsoft.com/en-gb/products/postgresql/#overview) (Single / Flexible) is **strongly not recommended** for use due to notable performance / stability issues or missing functionality. See [Recommendation Notes for Azure](#recommendation-notes-for-azure) for more details.
-- [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+- [Google AlloyDB](https://cloud.google.com/alloydb) and [Amazon RDS Multi-AZ DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html) have not been tested and are not recommended. Both solutions are specifically not expected to work with GitLab Geo.
+ - Note that [Amazon RDS Multi-AZ DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html) is a separate product and is supported.
### Recommendation notes for Azure
@@ -216,7 +217,7 @@ Due to performance issues that we found with several key Azure services, we only
In addition to the above, you should be aware of the additional specific guidance for Azure:
- **We strongly advise against [Azure Database for PostgreSQL Single Server](https://learn.microsoft.com/en-us/azure/postgresql/single-server/overview-single-server)** due to the significant performance and stability issues found. **For GitLab 14.0 and higher the service is not supported** because it supports only up to PostgreSQL 11.
- - A new service, [Azure Database for Postgres Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/) has been released but due to it missing some functionality we don't recommend it at this time.
+ - A new service, [Azure Database for PostgreSQL Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/), has been released, but we don't recommend it at this time because it's missing some functionality.
- [Azure Blob Storage](https://azure.microsoft.com/en-gb/products/storage/blobs/) has been found to have performance limits that can impact production use at certain times. However, this has only been seen in larger architectures.
## Validation and test results
@@ -240,11 +241,11 @@ Testing occurs against all reference architectures and cloud providers in an aut
- The [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit) for building the environments.
- The [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) for performance testing.
-Network latency on the test environments between components on all Cloud Providers were measured at <5ms. Note that this is shared as an observation and not as an implicit recommendation.
+Network latency on the test environments between components on all Cloud Providers was measured at <5 ms. Note that this is shared as an observation and not as an implicit recommendation.
We aim for a "test smart" approach: the architectures tested cover a good range that can also apply to others. Testing focuses on the 10k Omnibus environment on GCP, as testing has shown it is a good bellwether for the other architectures and cloud providers, as well as for Cloud Native Hybrids.
-The Standard Reference Architectures are designed to be platform-agnostic, with everything being run on VMs via [Omnibus GitLab](https://docs.gitlab.com/omnibus/). While testing occurs primarily on GCP, ad-hoc testing has shown that they perform similarly on equivalently specced hardware on other Cloud Providers or if run on premises (bare-metal).
+The Standard Reference Architectures are designed to be platform-agnostic, with everything being run on VMs via [Omnibus GitLab](https://docs.gitlab.com/omnibus/). While testing occurs primarily on GCP, ad-hoc testing has shown that they perform similarly on hardware with equivalent specs on other Cloud Providers or if run on premises (bare-metal).
Testing on these reference architectures is performed with the
[GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance)