author    GitLab Bot <gitlab-bot@gitlab.com>  2020-05-08 21:09:55 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-05-08 21:09:55 +0300
commit    c0c1433fa5a9f31c8eb4292d13de744aa74e9e83 (patch)
tree      d3d0092f22ceca3d97bf5d882081b1fa70524911 /doc
parent    2e4dcef627009fa11836f6f624e7313843cb4a38 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
22 files changed, 870 insertions, 242 deletions
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
new file mode 100644
index 00000000000..7f31f336251

# Reference architecture: up to 10,000 users

This page describes the GitLab reference architecture for up to 10,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).

> - **Supported users (approximate):** 10,000
> - **High Availability:** True
> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS

| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|-------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge | D16s v3 |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Object Storage ([4](#footnotes)) | - | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |

## Footnotes

1. In our architectures we run each GitLab Rails node using the Puma web server,
   with the number of workers set to 90% of available CPUs and four threads per
   worker. For nodes that run Rails alongside other components, the worker value
   should be reduced accordingly; we've found 50% achieves a good balance, but
   this depends on workload.

1. Gitaly node requirements depend on customer data, specifically the number of
   projects and their sizes. We recommend two nodes as an absolute minimum for
   HA environments, and at least four nodes when supporting 50,000 or more
   users. We also recommend that each Gitaly node store no more than 5TB of data
   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
   set to 20% of available CPUs. Consider additional nodes in conjunction with a
   review of expected data size and spread, based on the recommendations above.

1. The recommended Redis setup differs depending on the size of the
   architecture. For smaller architectures (less than 3,000 users), a single
   instance should suffice. For medium-sized installs (3,000 - 5,000 users), we
   suggest one Redis cluster for all classes, with Redis Sentinel hosted
   alongside Consul. For larger architectures (10,000 users or more), we suggest
   running a separate
   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters)
   for the Cache class and another for the Queues and Shared State classes. We
   also recommend running a separate Redis Sentinel cluster for each Redis
   Cluster.

1. For data objects such as LFS, uploads, and artifacts, we recommend an
   [Object Storage service](../object_storage.md) over NFS where possible, due
   to better performance and availability.

1. NFS can be used as an alternative for both repository data (replacing Gitaly)
   and object storage, but this isn't typically recommended for performance
   reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).

1. Our architectures have been tested and validated with
   [HAProxy](https://www.haproxy.org/) as the load balancer. Although other load
   balancers with similar feature sets could also be used, they have not been
   validated.

1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks
   rather than HDD, with a throughput of at least 8,000 IOPS for read operations
   and 2,000 IOPS for write operations, as these components have heavy I/O.
   These IOPS values are a starting point only; over time they may need to be
   adjusted higher or lower depending on the scale of your environment's
   workload. If you're running the environment on a cloud provider, refer to
   their documentation on how to configure IOPS correctly.

1. The architectures were built and tested with the
   [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
   CPU platform on GCP. On different hardware you may find that adjustments,
   either lower or higher, are required for your CPU or node counts accordingly.
   For more information, a [Sysbench](https://github.com/akopytov/sysbench)
   benchmark of the CPU can be found
   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).

1. AWS-equivalent and Azure-equivalent configurations are rough suggestions and
   may change in the future. They have not yet been tested and validated.
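As a concrete illustration of footnote 1's sizing rule, a dedicated 32-vCPU
GitLab Rails node would get roughly 0.9 × 32 ≈ 28 Puma workers with four
threads each. A minimal sketch of the corresponding `/etc/gitlab/gitlab.rb`
settings follows; the values are illustrative (assuming a Rails-only node),
and the exact setting names should be verified against your Omnibus GitLab
version's documentation:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative sizing for a dedicated 32-vCPU
# GitLab Rails node, per footnote 1 (90% of CPUs, four threads per worker).
puma['worker_processes'] = 28   # ~90% of 32 vCPUs
puma['min_threads'] = 4
puma['max_threads'] = 4
```

On a node that also runs other components, the 50% guideline from the same
footnote would instead suggest around 16 workers.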
diff --git a/doc/administration/reference_architectures/1k_users.md b/doc/administration/reference_architectures/1k_users.md
new file mode 100644
index 00000000000..615da2b14c9

# Reference architecture: up to 1,000 users

This page describes the GitLab reference architecture for up to 1,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).

> - **Supported users (approximate):** 1,000
> - **High Availability:** False

| Users | Configuration ([8](#footnotes)) | GCP type | AWS type ([9](#footnotes)) |
|-------|---------------------------------|---------------|----------------------------|
| 100 | 2 vCPU, 7.2GB Memory | n1-standard-2 | c5.2xlarge |
| 500 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge |
| 1000 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge |

For situations where you need to serve up to 1,000 users, a single-node
solution with [frequent backups](index.md#automated-backups-core-only) is
appropriate for many organizations. With automatic backup of the GitLab
repositories, configuration, and the database, this is the ideal solution if
you don't have strict availability requirements.

## Setup instructions

- For this default reference architecture, use the standard
  [installation instructions](../../install/README.md) to install GitLab.

NOTE: **Note:**
You can also optionally configure GitLab to use an
[external PostgreSQL service](../external_database.md) or an
[external object storage service](../high_availability/object_storage.md) for
added performance and reliability at a reduced complexity cost.

## Footnotes

1. In our architectures we run each GitLab Rails node using the Puma web server,
   with the number of workers set to 90% of available CPUs and four threads per
   worker. For nodes that run Rails alongside other components, the worker value
   should be reduced accordingly; we've found 50% achieves a good balance, but
   this depends on workload.

1. Gitaly node requirements depend on customer data, specifically the number of
   projects and their sizes. We recommend two nodes as an absolute minimum for
   HA environments, and at least four nodes when supporting 50,000 or more
   users. We also recommend that each Gitaly node store no more than 5TB of data
   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
   set to 20% of available CPUs. Consider additional nodes in conjunction with a
   review of expected data size and spread, based on the recommendations above.

1. The recommended Redis setup differs depending on the size of the
   architecture. For smaller architectures (less than 3,000 users), a single
   instance should suffice. For medium-sized installs (3,000 - 5,000 users), we
   suggest one Redis cluster for all classes, with Redis Sentinel hosted
   alongside Consul. For larger architectures (10,000 users or more), we suggest
   running a separate
   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters)
   for the Cache class and another for the Queues and Shared State classes. We
   also recommend running a separate Redis Sentinel cluster for each Redis
   Cluster.

1. For data objects such as LFS, uploads, and artifacts, we recommend an
   [Object Storage service](../object_storage.md) over NFS where possible, due
   to better performance and availability.

1. NFS can be used as an alternative for both repository data (replacing Gitaly)
   and object storage, but this isn't typically recommended for performance
   reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).

1. Our architectures have been tested and validated with
   [HAProxy](https://www.haproxy.org/) as the load balancer. Although other load
   balancers with similar feature sets could also be used, they have not been
   validated.

1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks
   rather than HDD, with a throughput of at least 8,000 IOPS for read operations
   and 2,000 IOPS for write operations, as these components have heavy I/O.
   These IOPS values are a starting point only; over time they may need to be
   adjusted higher or lower depending on the scale of your environment's
   workload. If you're running the environment on a cloud provider, refer to
   their documentation on how to configure IOPS correctly.

1. The architectures were built and tested with the
   [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
   CPU platform on GCP. On different hardware you may find that adjustments,
   either lower or higher, are required for your CPU or node counts accordingly.
   For more information, a [Sysbench](https://github.com/akopytov/sysbench)
   benchmark of the CPU can be found
   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).

1. AWS-equivalent and Azure-equivalent configurations are rough suggestions and
   may change in the future. They have not yet been tested and validated.

diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
new file mode 100644
index 00000000000..2ee692d635c

# Reference architecture: up to 25,000 users

This page describes the GitLab reference architecture for up to 25,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).

> - **Supported users (approximate):** 25,000
> - **High Availability:** True
> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS

| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|-------------------------|
| GitLab Rails ([1](#footnotes)) | 5 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
| PostgreSQL | 3 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge | D8s v3 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 32 vCPU, 120GB Memory | n1-standard-32 | m5.8xlarge | D32s v3 |
| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS |
| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Object Storage ([4](#footnotes)) | - | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Internal load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |

## Footnotes

1. In our architectures we run each GitLab Rails node using the Puma web server,
   with the number of workers set to 90% of available CPUs and four threads per
   worker. For nodes that run Rails alongside other components, the worker value
   should be reduced accordingly; we've found 50% achieves a good balance, but
   this depends on workload.

1. Gitaly node requirements depend on customer data, specifically the number of
   projects and their sizes. We recommend two nodes as an absolute minimum for
   HA environments, and at least four nodes when supporting 50,000 or more
   users. We also recommend that each Gitaly node store no more than 5TB of data
   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
   set to 20% of available CPUs. Consider additional nodes in conjunction with a
   review of expected data size and spread, based on the recommendations above.

1. The recommended Redis setup differs depending on the size of the
   architecture. For smaller architectures (less than 3,000 users), a single
   instance should suffice. For medium-sized installs (3,000 - 5,000 users), we
   suggest one Redis cluster for all classes, with Redis Sentinel hosted
   alongside Consul. For larger architectures (10,000 users or more), we suggest
   running a separate
   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters)
   for the Cache class and another for the Queues and Shared State classes. We
   also recommend running a separate Redis Sentinel cluster for each Redis
   Cluster.

1. For data objects such as LFS, uploads, and artifacts, we recommend an
   [Object Storage service](../object_storage.md) over NFS where possible, due
   to better performance and availability.

1. NFS can be used as an alternative for both repository data (replacing Gitaly)
   and object storage, but this isn't typically recommended for performance
   reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).

1. Our architectures have been tested and validated with
   [HAProxy](https://www.haproxy.org/) as the load balancer. Although other load
   balancers with similar feature sets could also be used, they have not been
   validated.

1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks
   rather than HDD, with a throughput of at least 8,000 IOPS for read operations
   and 2,000 IOPS for write operations, as these components have heavy I/O.
   These IOPS values are a starting point only; over time they may need to be
   adjusted higher or lower depending on the scale of your environment's
   workload. If you're running the environment on a cloud provider, refer to
   their documentation on how to configure IOPS correctly.

1. The architectures were built and tested with the
   [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
   CPU platform on GCP. On different hardware you may find that adjustments,
   either lower or higher, are required for your CPU or node counts accordingly.
   For more information, a [Sysbench](https://github.com/akopytov/sysbench)
   benchmark of the CPU can be found
   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).

1. AWS-equivalent and Azure-equivalent configurations are rough suggestions and
   may change in the future. They have not yet been tested and validated.

diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
new file mode 100644
index 00000000000..874e00e6722

# Reference architecture: up to 2,000 users

This page describes the GitLab reference architecture for up to 2,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).

> - **Supported users (approximate):** 2,000
> - **High Availability:** False
> - **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS

| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
|--------------------------------------------------------------|---------------------|---------------------------------|---------------|-----------------------|-------------------------|
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Object Storage ([4](#footnotes)) | - | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| PostgreSQL | 1 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 |
| Redis ([3](#footnotes)) | 1 | 1 vCPU, 3.75GB Memory | n1-standard-1 | m5.large | D2s v3 |
| Gitaly ([5](#footnotes)) ([7](#footnotes)) | X ([2](#footnotes)) | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| GitLab Rails ([1](#footnotes)) | 2 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |

## Setup instructions

1. [Configure the external load balancing node](../high_availability/load_balancer.md)
   that will handle the load balancing of the two GitLab application services
   nodes.
1. [Configure the Object Storage](../object_storage.md) ([4](#footnotes)) used
   for shared data objects.
1. (Optional) [Configure NFS](../high_availability/nfs.md) to have a shared
   disk storage service as an alternative to Gitaly and/or
   [Object Storage](../object_storage.md) (although not recommended).
   NFS is required for GitLab Pages; you can skip this step if you're not using
   that feature.
1. [Configure PostgreSQL](../high_availability/database.md), the database for
   GitLab.
1. [Configure Redis](../high_availability/redis.md).
1. [Configure Gitaly](../gitaly/index.md#running-gitaly-on-its-own-server),
   which provides access to the Git repositories.
1. [Configure the main GitLab Rails application](../high_availability/gitlab.md)
   to run Puma/Unicorn, Workhorse, and GitLab Shell, and to serve all frontend
   requests (UI, API, and Git over HTTP/SSH).
1. [Configure Prometheus](../high_availability/monitoring_node.md) to monitor
   your GitLab environment.

## Footnotes

1. In our architectures we run each GitLab Rails node using the Puma web server,
   with the number of workers set to 90% of available CPUs and four threads per
   worker. For nodes that run Rails alongside other components, the worker value
   should be reduced accordingly; we've found 50% achieves a good balance, but
   this depends on workload.

1. Gitaly node requirements depend on customer data, specifically the number of
   projects and their sizes. We recommend two nodes as an absolute minimum for
   HA environments, and at least four nodes when supporting 50,000 or more
   users. We also recommend that each Gitaly node store no more than 5TB of data
   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
   set to 20% of available CPUs. Consider additional nodes in conjunction with a
   review of expected data size and spread, based on the recommendations above.

1. The recommended Redis setup differs depending on the size of the
   architecture. For smaller architectures (less than 3,000 users), a single
   instance should suffice. For medium-sized installs (3,000 - 5,000 users), we
   suggest one Redis cluster for all classes, with Redis Sentinel hosted
   alongside Consul. For larger architectures (10,000 users or more), we suggest
   running a separate
   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters)
   for the Cache class and another for the Queues and Shared State classes. We
   also recommend running a separate Redis Sentinel cluster for each Redis
   Cluster.

1. For data objects such as LFS, uploads, and artifacts, we recommend an
   [Object Storage service](../object_storage.md) over NFS where possible, due
   to better performance and availability.

1. NFS can be used as an alternative for both repository data (replacing Gitaly)
   and object storage, but this isn't typically recommended for performance
   reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).

1. Our architectures have been tested and validated with
   [HAProxy](https://www.haproxy.org/) as the load balancer. Although other load
   balancers with similar feature sets could also be used, they have not been
   validated.

1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks
   rather than HDD, with a throughput of at least 8,000 IOPS for read operations
   and 2,000 IOPS for write operations, as these components have heavy I/O.
   These IOPS values are a starting point only; over time they may need to be
   adjusted higher or lower depending on the scale of your environment's
   workload. If you're running the environment on a cloud provider, refer to
   their documentation on how to configure IOPS correctly.

1. The architectures were built and tested with the
   [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
   CPU platform on GCP. On different hardware you may find that adjustments,
   either lower or higher, are required for your CPU or node counts accordingly.
   For more information, a [Sysbench](https://github.com/akopytov/sysbench)
   benchmark of the CPU can be found
   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).

1. AWS-equivalent and Azure-equivalent configurations are rough suggestions and
   may change in the future. They have not yet been tested and validated.

diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
new file mode 100644
index 00000000000..bd429fbc4b4

# Reference architecture: up to 3,000 users

This page describes the GitLab reference architecture for up to 3,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).

NOTE: **Note:** The 3,000-user reference architecture documented below is
designed to help your organization achieve a highly-available GitLab
deployment. If you do not have the expertise or need to maintain a
highly-available environment, you can have a simpler and less
costly-to-operate environment by following the
[2,000-user reference architecture](2k_users.md).

> - **Supported users (approximate):** 3,000
> - **High Availability:** True
> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS

| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure ([9](#footnotes)) |
|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|-------------------------|
| GitLab Rails ([1](#footnotes)) | 3 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 |
| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 |
| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 |
| Object Storage ([4](#footnotes)) | - | - | - | - | - |
| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |
| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 |

## Footnotes

1. In our architectures we run each GitLab Rails node using the Puma web server,
   with the number of workers set to 90% of available CPUs and four threads per
   worker. For nodes that run Rails alongside other components, the worker value
   should be reduced accordingly; we've found 50% achieves a good balance, but
   this depends on workload.

1. Gitaly node requirements depend on customer data, specifically the number of
   projects and their sizes. We recommend two nodes as an absolute minimum for
   HA environments, and at least four nodes when supporting 50,000 or more
   users. We also recommend that each Gitaly node store no more than 5TB of data
   and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby)
   set to 20% of available CPUs. Consider additional nodes in conjunction with a
   review of expected data size and spread, based on the recommendations above.

1. The recommended Redis setup differs depending on the size of the
   architecture. For smaller architectures (less than 3,000 users), a single
   instance should suffice. For medium-sized installs (3,000 - 5,000 users), we
   suggest one Redis cluster for all classes, with Redis Sentinel hosted
   alongside Consul. For larger architectures (10,000 users or more), we suggest
   running a separate
   [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters)
   for the Cache class and another for the Queues and Shared State classes. We
   also recommend running a separate Redis Sentinel cluster for each Redis
   Cluster.

1. For data objects such as LFS, uploads, and artifacts, we recommend an
   [Object Storage service](../object_storage.md) over NFS where possible, due
   to better performance and availability.

1. NFS can be used as an alternative for both repository data (replacing Gitaly)
   and object storage, but this isn't typically recommended for performance
   reasons. Note, however, that it is required for
   [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196).

1. Our architectures have been tested and validated with
   [HAProxy](https://www.haproxy.org/) as the load balancer. Although other load
   balancers with similar feature sets could also be used, they have not been
   validated.

1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks
   rather than HDD, with a throughput of at least 8,000 IOPS for read operations
   and 2,000 IOPS for write operations, as these components have heavy I/O.
   These IOPS values are a starting point only; over time they may need to be
   adjusted higher or lower depending on the scale of your environment's
   workload. If you're running the environment on a cloud provider, refer to
   their documentation on how to configure IOPS correctly.

1. The architectures were built and tested with the
   [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
   CPU platform on GCP. On different hardware you may find that adjustments,
   either lower or higher, are required for your CPU or node counts accordingly.
   For more information, a [Sysbench](https://github.com/akopytov/sysbench)
   benchmark of the CPU can be found
   [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).

1. AWS-equivalent and Azure-equivalent configurations are rough suggestions and
   may change in the future. They have not yet been tested and validated.

diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
new file mode 100644
index 00000000000..67f773a021f

# Reference architecture: up to 50,000 users

This page describes the GitLab reference architecture for up to 50,000 users.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
+ +> - **Supported users (approximate):** 50,000 +> - **High Availability:** True +> - **Test RPS rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS + +| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | +|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------| +| GitLab Rails ([1](#footnotes)) | 12 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 | +| PostgreSQL | 3 | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge | D16s v3 | +| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 64 vCPU, 240GB Memory | n1-standard-64 | m5.16xlarge | D64s v3 | +| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | +| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | +| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | +| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | +| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | +| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | +| Object Storage ([4](#footnotes)) | - | - | - | - | - | +| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | +| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Internal load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 | + +## Footnotes + +1. 
In our architectures we run each GitLab Rails node using the Puma webserver + and have its number of workers set to 90% of available CPUs, along with four threads. For + nodes that are running Rails with other components, the worker value should be reduced + accordingly: we've found 50% achieves a good balance, but this is dependent + on workload. + +1. Gitaly node requirements are dependent on customer data, specifically the number of + projects and their sizes. We recommend two nodes as an absolute minimum for HA environments, + and at least four nodes when supporting 50,000 or more users. + We also recommend that each Gitaly node should store no more than 5TB of data + and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) + set to 20% of available CPUs. Additional nodes should be considered in conjunction + with a review of expected data size and spread based on the recommendations above. + +1. Recommended Redis setup differs depending on the size of the architecture. + For smaller architectures (less than 3,000 users) a single instance should suffice. + For medium-sized installs (3,000 to 5,000 users) we suggest one Redis cluster for all + classes, with Redis Sentinel hosted alongside Consul. + For larger architectures (10,000 users or more) we suggest running a separate + [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class + and another for the Queues and Shared State classes. We also recommend + that you run the Redis Sentinel clusters separately for each Redis Cluster. + +1. For data objects such as LFS, Uploads, and Artifacts, we recommend an [Object Storage service](../object_storage.md) + over NFS where possible, due to better performance and availability. + +1. NFS can be used as an alternative for both repository data (replacing Gitaly) and + object storage, but this isn't typically recommended for performance reasons. 
Note, however, that it is required for + [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196). + +1. Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/) + as the load balancer. Although other load balancers with similar feature sets + could also be used, those load balancers have not been validated. + +1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over + HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write operations, + as these components have heavy I/O. These IOPS values are recommended only as a starting point, + as with time they may need to be adjusted higher or lower depending on the scale of your + environment's workload. If you're running the environment on a Cloud provider, + you may need to refer to their documentation on how to configure IOPS correctly. + +1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms) + CPU platform on GCP. On different hardware you may find that adjustments, either lower + or higher, are required for your CPU or node counts. For more information, a + [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found + [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks). + +1. AWS-equivalent and Azure-equivalent configurations are rough suggestions + and may change in the future. They have not yet been tested and validated. diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md new file mode 100644 index 00000000000..41ef6f369c2 --- /dev/null +++ b/doc/administration/reference_architectures/5k_users.md @@ -0,0 +1,76 @@ +# Reference architecture: up to 5,000 users + +This page describes the GitLab reference architecture for up to 5,000 users. 
+For a full list of reference architectures, see +[Available reference architectures](index.md#available-reference-architectures). + +> - **Supported users (approximate):** 5,000 +> - **High Availability:** True +> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS + +| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | +|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------| +| GitLab Rails ([1](#footnotes)) | 3 | 16 vCPU, 14.4GB Memory | n1-highcpu-16 | c5.4xlarge | F16s v2 | +| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | +| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge | D8s v3 | +| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | +| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | +| Object Storage ([4](#footnotes)) | - | - | - | - | - | +| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | +| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | +| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | + +## Footnotes + +1. In our architectures we run each GitLab Rails node using the Puma webserver + and have its number of workers set to 90% of available CPUs along with four threads. 
For + nodes that are running Rails with other components, the worker value should be reduced + accordingly: we've found 50% achieves a good balance, but this is dependent + on workload. + +1. Gitaly node requirements are dependent on customer data, specifically the number of + projects and their sizes. We recommend two nodes as an absolute minimum for HA environments, + and at least four nodes when supporting 50,000 or more users. + We also recommend that each Gitaly node should store no more than 5TB of data + and have the number of [`gitaly-ruby` workers](../gitaly/index.md#gitaly-ruby) + set to 20% of available CPUs. Additional nodes should be considered in conjunction + with a review of expected data size and spread based on the recommendations above. + +1. Recommended Redis setup differs depending on the size of the architecture. + For smaller architectures (less than 3,000 users) a single instance should suffice. + For medium-sized installs (3,000 to 5,000 users) we suggest one Redis cluster for all + classes, with Redis Sentinel hosted alongside Consul. + For larger architectures (10,000 users or more) we suggest running a separate + [Redis Cluster](../high_availability/redis.md#running-multiple-redis-clusters) for the Cache class + and another for the Queues and Shared State classes. We also recommend + that you run the Redis Sentinel clusters separately for each Redis Cluster. + +1. For data objects such as LFS, Uploads, and Artifacts, we recommend an [Object Storage service](../object_storage.md) + over NFS where possible, due to better performance and availability. + +1. NFS can be used as an alternative for both repository data (replacing Gitaly) and + object storage, but this isn't typically recommended for performance reasons. Note, however, that it is required for + [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages/issues/196). + +1. 
Our architectures have been tested and validated with [HAProxy](https://www.haproxy.org/) + as the load balancer. Although other load balancers with similar feature sets + could also be used, those load balancers have not been validated. + +1. We strongly recommend that any Gitaly or NFS nodes be set up with SSD disks over + HDD with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write operations, + as these components have heavy I/O. These IOPS values are recommended only as a starting point, + as with time they may need to be adjusted higher or lower depending on the scale of your + environment's workload. If you're running the environment on a Cloud provider, + you may need to refer to their documentation on how to configure IOPS correctly. + +1. The architectures were built and tested with the [Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms) + CPU platform on GCP. On different hardware you may find that adjustments, either lower + or higher, are required for your CPU or node counts. For more information, a + [Sysbench](https://github.com/akopytov/sysbench) benchmark of the CPU can be found + [here](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks). + +1. AWS-equivalent and Azure-equivalent configurations are rough suggestions + and may change in the future. They have not yet been tested and validated. diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md index 1a547844217..fe64d39a362 100644 --- a/doc/administration/reference_architectures/index.md +++ b/doc/administration/reference_architectures/index.md @@ -48,187 +48,17 @@ how much automation you use, mirroring, and repository/change size. Additionally displayed memory values are provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-types). 
For different cloud vendors, attempt to select options that best match the provided architecture. -## Up to 1,000 users - -> - **Supported users (approximate):** 1,000 -> - **High Availability:** False - -| Users | Configuration([8](#footnotes)) | GCP type | AWS type([9](#footnotes)) | -|-------|--------------------------------|---------------|---------------------------| -| 100 | 2 vCPU, 7.2GB Memory | n1-standard-2 | c5.2xlarge | -| 500 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | -| 1000 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge | - -For situations where you need to serve up to 1,000 users, a single-node -solution with [frequent backups](#automated-backups-core-only) is appropriate -for many organizations. With automatic backup of the GitLab repositories, -configuration, and the database, if you don't have strict availability -requirements, this is the ideal solution. - -### Setup instructions - -- For this default reference architecture, use the standard [installation instructions](../../install/README.md) to install GitLab. - -NOTE: **Note:** -You can also optionally configure GitLab to use an -[external PostgreSQL service](../external_database.md) or an -[external object storage service](../high_availability/object_storage.md) for -added performance and reliability at a reduced complexity cost. 
- -## Up to 2,000 users - -> - **Supported users (approximate):** 2,000 -> - **High Availability:** False -> - **Test RPS rates:** API: 40 RPS, Web: 4 RPS, Git: 4 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|----------------| -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| PostgreSQL | 1 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| Redis ([3](#footnotes)) | 1 | 1 vCPU, 3.75GB Memory | n1-standard-1 | m5.large | D2s v3 | -| Gitaly ([5](#footnotes)) ([7](#footnotes)) | X ([2](#footnotes)) | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| GitLab Rails ([1](#footnotes)) | 2 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 | -| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | - -### Setup instructions - -1. [Configure the external load balancing node](../high_availability/load_balancer.md) - that will handle the load balancing of the two GitLab application services nodes. -1. [Configure the Object Storage](../object_storage.md) ([4](#footnotes)) used for shared data objects. -1. (Optional) [Configure NFS](../high_availability/nfs.md) to have - shared disk storage service as an alternative to Gitaly and/or - [Object Storage](../object_storage.md) (although not recommended). - NFS is required for GitLab Pages, you can skip this step if you're not using that feature. -1. [Configure PostgreSQL](../high_availability/load_balancer.md), the database for GitLab. -1. [Configure Redis](../high_availability/redis.md). -1. 
[Configure Gitaly](../gitaly/index.md#running-gitaly-on-its-own-server), - which is used to provide access to the Git repositories. -1. [Configure the main GitLab Rails application](../high_availability/gitlab.md) - to run Puma/Unicorn, Workhorse, GitLab Shell, and to serve all - frontend requests (UI, API, Git over HTTP/SSH). -1. [Configure Prometheus](../high_availability/monitoring_node.md) to monitor your GitLab environment. - -## Up to 3,000 users - -NOTE: **Note:** The 3,000-user reference architecture documented below is -designed to help your organization achieve a highly-available GitLab deployment. -If you do not have the expertise or need to maintain a highly-available -environment, you can have a simpler and less costly-to-operate environment by -following the [2,000-user reference architecture](#up-to-2000-users). - -> - **Supported users (approximate):** 3,000 -> - **High Availability:** True -> - **Test RPS rates:** API: 60 RPS, Web: 6 RPS, Git: 6 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------| -| GitLab Rails ([1](#footnotes)) | 3 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 | -| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| NFS Server 
([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | - -## Up to 5,000 users - -> - **Supported users (approximate):** 5,000 -> - **High Availability:** True -> - **Test RPS rates:** API: 100 RPS, Web: 10 RPS, Git: 10 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|---------------|-----------------------|------------------------| -| GitLab Rails ([1](#footnotes)) | 3 | 16 vCPU, 14.4GB Memory | n1-highcpu-16 | c5.4xlarge | F16s v2 | -| PostgreSQL | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge | D8s v3 | -| Redis ([3](#footnotes)) | 3 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| Consul + Sentinel ([3](#footnotes)) | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Sidekiq | 4 | 2 vCPU, 7.5GB Memory | n1-standard-2 | m5.large | D2s v3 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| Monitoring node | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large 
| F2s v2 | - -## Up to 10,000 users - -> - **Supported users (approximate):** 10,000 -> - **High Availability:** True -> - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------| -| GitLab Rails ([1](#footnotes)) | 3 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 | -| PostgreSQL | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge | D16s v3 | -| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Internal load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | - -## Up to 25,000 users - -> - **Supported users (approximate):** 
25,000 -> - **High Availability:** True -> - **Test RPS rates:** API: 500 RPS, Web: 50 RPS, Git: 50 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------| -| GitLab Rails ([1](#footnotes)) | 5 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 | -| PostgreSQL | 3 | 8 vCPU, 30GB Memory | n1-standard-8 | m5.2xlarge | D8s v3 | -| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 32 vCPU, 120GB Memory | n1-standard-32 | m5.8xlarge | D32s v3 | -| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Internal load balancing node ([6](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | - -## Up to 50,000 users - -> - **Supported users (approximate):** 50,000 -> - **High Availability:** True -> - **Test RPS rates:** API: 1000 
RPS, Web: 100 RPS, Git: 100 RPS - -| Service | Nodes | Configuration ([8](#footnotes)) | GCP | AWS ([9](#footnotes)) | Azure([9](#footnotes)) | -|--------------------------------------------------------------|-------|---------------------------------|----------------|-----------------------|------------------------| -| GitLab Rails ([1](#footnotes)) | 12 | 32 vCPU, 28.8GB Memory | n1-highcpu-32 | c5.9xlarge | F32s v2 | -| PostgreSQL | 3 | 16 vCPU, 60GB Memory | n1-standard-16 | m5.4xlarge | D16s v3 | -| PgBouncer | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Gitaly ([2](#footnotes)) ([5](#footnotes)) ([7](#footnotes)) | X | 64 vCPU, 240GB Memory | n1-standard-64 | m5.16xlarge | D64s v3 | -| Redis ([3](#footnotes)) - Cache | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis ([3](#footnotes)) - Queues / Shared State | 3 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| Redis Sentinel ([3](#footnotes)) - Cache | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Redis Sentinel ([3](#footnotes)) - Queues / Shared State | 3 | 1 vCPU, 1.7GB Memory | g1-small | t2.small | B1MS | -| Consul | 3 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Sidekiq | 4 | 4 vCPU, 15GB Memory | n1-standard-4 | m5.xlarge | D4s v3 | -| NFS Server ([5](#footnotes)) ([7](#footnotes)) | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| Object Storage ([4](#footnotes)) | - | - | - | - | - | -| Monitoring node | 1 | 4 vCPU, 3.6GB Memory | n1-highcpu-4 | c5.xlarge | F4s v2 | -| External load balancing node ([6](#footnotes)) | 1 | 2 vCPU, 1.8GB Memory | n1-highcpu-2 | c5.large | F2s v2 | -| Internal load balancing node ([6](#footnotes)) | 1 | 8 vCPU, 7.2GB Memory | n1-highcpu-8 | c5.2xlarge | F8s v2 | +## Available reference architectures + +The following reference architectures are available: + +- [Up to 1,000 users](1k_users.md) +- [Up to 2,000 users](2k_users.md) +- [Up to 3,000 users](3k_users.md) +- 
[Up to 5,000 users](5k_users.md) +- [Up to 10,000 users](10k_users.md) +- [Up to 25,000 users](25k_users.md) +- [Up to 50,000 users](50k_users.md) ## Availability complexity diff --git a/doc/api/groups.md b/doc/api/groups.md index b987be58091..bc7bff2964b 100644 --- a/doc/api/groups.md +++ b/doc/api/groups.md @@ -392,7 +392,11 @@ Parameters: | ------------------------ | -------------- | -------- | ----------- | | `id` | integer/string | yes | The ID or [URL-encoded path of the group](README.md#namespaced-path-encoding) owned by the authenticated user. | | `with_custom_attributes` | boolean | no | Include [custom attributes](custom_attributes.md) in response (admins only). | -| `with_projects` | boolean | no | Include details from projects that belong to the specified group (defaults to `true`). (Deprecated, [will be removed in 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/213797). To get the details of all projects within a group, use the [list a group's projects endpoint](#list-a-groups-projects).) | +| `with_projects` | boolean | no | Include details from projects that belong to the specified group (defaults to `true`). (Deprecated, [will be removed in API v5](https://gitlab.com/gitlab-org/gitlab/-/issues/213797). To get the details of all projects within a group, use the [list a group's projects endpoint](#list-a-groups-projects).) | + +NOTE: **Note:** +The `projects` and `shared_projects` attributes in the response are deprecated and will be [removed in API v5](https://gitlab.com/gitlab-org/gitlab/-/issues/213797). +To get the details of all projects within a group, use either the [list a group's projects](#list-a-groups-projects) or the [list a group's shared projects](#list-a-groups-shared-projects) endpoint. 
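As an illustrative sketch of the note above, these are the recommended replacement calls; the group ID `4` and the placeholder token mirror the existing example on this page, so adjust both for your instance:

```shell
# List the group's own projects (instead of reading the deprecated `projects` attribute):
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/4/projects"

# List projects shared with the group (instead of the deprecated `shared_projects` attribute):
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/4/projects/shared"
```

Both endpoints return paginated project lists, so the usual `page` and `per_page` parameters apply.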
```shell curl --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/groups/4 @@ -423,7 +427,7 @@ Example response: "file_template_project_id": 1, "parent_id": null, "created_at": "2020-01-15T12:36:29.590Z", - "projects": [ + "projects": [ // Deprecated and will be removed in API v5 { "id": 7, "description": "Voluptas veniam qui et beatae voluptas doloremque explicabo facilis.", @@ -501,7 +505,7 @@ Example response: "request_access_enabled": false } ], - "shared_projects": [ + "shared_projects": [ // Deprecated and will be removed in API v5 { "id": 8, "description": "Velit eveniet provident fugiat saepe eligendi autem.", @@ -704,6 +708,10 @@ PUT /groups/:id | `shared_runners_minutes_limit` | integer | no | **(STARTER ONLY)** Pipeline minutes quota for this group. | | `extra_shared_runners_minutes_limit` | integer | no | **(STARTER ONLY)** Extra pipeline minutes quota for this group. | +NOTE: **Note:** +The `projects` and `shared_projects` attributes in the response are deprecated and will be [removed in API v5](https://gitlab.com/gitlab-org/gitlab/-/issues/213797). +To get the details of all projects within a group, use either the [list a group's projects](#list-a-groups-projects) or the [list a group's shared projects](#list-a-groups-shared-projects) endpoint. + ```shell curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/5?name=Experimental" ``` @@ -715,9 +723,6 @@ This endpoint returns: and later. To get the details of all projects within a group, use the [list a group's projects endpoint](#list-a-groups-projects) instead. -NOTE: **Note:** -The `projects` and `shared_projects` attributes [will be deprecated in GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/213797). To get the details of all projects within a group, use the [list a group's projects endpoint](#list-a-groups-projects) instead. 
- Example response: ```json @@ -735,7 +740,7 @@ Example response: "file_template_project_id": 1, "parent_id": null, "created_at": "2020-01-15T12:36:29.590Z", - "projects": [ + "projects": [ // Deprecated and will be removed in API v5 { "id": 9, "description": "foo", diff --git a/doc/api/projects.md b/doc/api/projects.md index beb69e1aeee..48df539ed88 100644 --- a/doc/api/projects.md +++ b/doc/api/projects.md @@ -162,7 +162,7 @@ When the user is authenticated and `simple` is not set this returns something li "merge_method": "merge", "autoclose_referenced_issues": true, "suggestion_commit_message": null, - "marked_for_deletion_at": "2020-04-03", // to be deprecated in GitLab 13.0 in favor of marked_for_deletion_on + "marked_for_deletion_at": "2020-04-03", // Deprecated and will be removed in API v5 in favor of marked_for_deletion_on "marked_for_deletion_on": "2020-04-03", "statistics": { "commit_count": 37, @@ -288,7 +288,7 @@ When the user is authenticated and `simple` is not set this returns something li ``` NOTE: **Note:** -For users on GitLab [Silver, Premium, or higher](https://about.gitlab.com/pricing/) the `marked_for_deletion_at` attribute will be deprecated in GitLab 13.0 in favor of the `marked_for_deletion_on` attribute. +For users on GitLab [Silver, Premium, or higher](https://about.gitlab.com/pricing/) the `marked_for_deletion_at` attribute has been deprecated and will be removed in API v5 in favor of the `marked_for_deletion_on` attribute. 
Users on GitLab [Starter, Bronze, or higher](https://about.gitlab.com/pricing/) will also see the `approvals_before_merge` parameter: @@ -411,7 +411,7 @@ This endpoint supports [keyset pagination](README.md#keyset-based-pagination) fo "merge_method": "merge", "autoclose_referenced_issues": true, "suggestion_commit_message": null, - "marked_for_deletion_at": "2020-04-03", // to be deprecated in GitLab 13.0 in favor of marked_for_deletion_on + "marked_for_deletion_at": "2020-04-03", // Deprecated and will be removed in API v5 in favor of marked_for_deletion_on "marked_for_deletion_on": "2020-04-03", "statistics": { "commit_count": 37, @@ -879,7 +879,7 @@ GET /projects/:id "service_desk_address": null, "autoclose_referenced_issues": true, "suggestion_commit_message": null, - "marked_for_deletion_at": "2020-04-03", // to be deprecated in GitLab 13.0 in favor of marked_for_deletion_on + "marked_for_deletion_at": "2020-04-03", // Deprecated and will be removed in API v5 in favor of marked_for_deletion_on "marked_for_deletion_on": "2020-04-03", "statistics": { "commit_count": 37, diff --git a/doc/ci/examples/test-and-deploy-ruby-application-to-heroku.md b/doc/ci/examples/test-and-deploy-ruby-application-to-heroku.md index f772f7bbfcd..5df407f19fe 100644 --- a/doc/ci/examples/test-and-deploy-ruby-application-to-heroku.md +++ b/doc/ci/examples/test-and-deploy-ruby-application-to-heroku.md @@ -72,7 +72,7 @@ cat > /tmp/test-config.template.toml << EOF [[runners]] [runners.docker] [[runners.docker.services]] -name = "mysql:latest" +name = "postgres:latest" EOF ``` diff --git a/doc/ci/yaml/README.md b/doc/ci/yaml/README.md index 8e36bc1c7e4..507e548b8d8 100644 --- a/doc/ci/yaml/README.md +++ b/doc/ci/yaml/README.md @@ -1101,9 +1101,9 @@ useful is not available, please ### `only`/`except` (basic) NOTE: **Note:** -The [`rules`](#rules) syntax is now the preferred method of setting job policies. 
-`only` and `except` are [candidates for deprecation](https://gitlab.com/gitlab-org/gitlab/issues/27449), -and may be removed in the future. +The [`rules`](#rules) syntax is an improved, more powerful solution for defining +when jobs should run or not. Consider using `rules` instead of `only/except` to get +the most out of your pipelines. `only` and `except` are two parameters that set a job policy to limit when jobs are created: diff --git a/doc/user/clusters/applications.md b/doc/user/clusters/applications.md index 40f75008219..be01540a293 100644 --- a/doc/user/clusters/applications.md +++ b/doc/user/clusters/applications.md @@ -837,7 +837,8 @@ management project. Refer to the for the available configuration options. CAUTION: **Caution:** -Installation and removal of the Cilium [requires restart](https://cilium.readthedocs.io/en/stable/gettingstarted/k8s-install-gke/#restart-remaining-pods) +Installation and removal of Cilium requires a **manual** +[restart](https://cilium.readthedocs.io/en/stable/gettingstarted/k8s-install-gke/#restart-remaining-pods) of all affected pods in all namespaces to ensure that they are [managed](https://cilium.readthedocs.io/en/stable/troubleshooting/#ensure-pod-is-managed-by-cilium) by the correct networking plugin. diff --git a/doc/user/group/clusters/index.md b/doc/user/group/clusters/index.md index 11dfaaf9655..f15ad2165de 100644 --- a/doc/user/group/clusters/index.md +++ b/doc/user/group/clusters/index.md @@ -6,8 +6,6 @@ type: reference > [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/34758) in GitLab 11.6. 
-## Overview - Similar to [project-level](../../project/clusters/index.md) and [instance-level](../../instance/clusters/index.md) Kubernetes clusters, group-level Kubernetes clusters allow you to connect a Kubernetes cluster to @@ -22,47 +20,43 @@ and troubleshooting applications for your group cluster, see ## RBAC compatibility -For each project under a group with a Kubernetes cluster, GitLab will -create a restricted service account with [`edit` -privileges](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) -in the project namespace. +> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/29398) in GitLab 11.4. +> - [Project namespace restriction](https://gitlab.com/gitlab-org/gitlab-foss/issues/51716) was introduced in GitLab 11.5. -NOTE: **Note:** -RBAC support was introduced in -[GitLab 11.4](https://gitlab.com/gitlab-org/gitlab-foss/issues/29398), and -Project namespace restriction was introduced in -[GitLab 11.5](https://gitlab.com/gitlab-org/gitlab-foss/issues/51716). +For each project under a group with a Kubernetes cluster, GitLab creates a restricted +service account with [`edit` privileges](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) +in the project namespace. ## Cluster precedence -GitLab will use the project's cluster before using any cluster belonging -to the group containing the project if the project's cluster is available and not disabled. - -In the case of sub-groups, GitLab will use the cluster of the closest ancestor group +If the project's cluster is available and not disabled, GitLab uses the +project's cluster before using any cluster belonging to the group containing +the project. +In the case of sub-groups, GitLab uses the cluster of the closest ancestor group to the project, provided the cluster is not disabled. ## Multiple Kubernetes clusters **(PREMIUM)** -With GitLab Premium, you can associate more than one Kubernetes clusters to your -group. 
That way you can have different clusters for different environments, -like dev, staging, production, etc. +With [GitLab Premium](https://about.gitlab.com/pricing/premium/), you can associate +more than one Kubernetes cluster to your group, and maintain different clusters +for different environments, such as development, staging, and production. -Add another cluster similar to the first one and make sure to -[set an environment scope](#environment-scopes-premium) that will -differentiate the new cluster from the rest. +When adding another cluster, +[set an environment scope](#environment-scopes-premium) to help +differentiate the new cluster from your other clusters. ## GitLab-managed clusters > - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22011) in GitLab 11.5. > - Became [optional](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/26565) in GitLab 11.11. -You can choose to allow GitLab to manage your cluster for you. If your cluster is -managed by GitLab, resources for your projects will be automatically created. See the -[Access controls](../../project/clusters/add_remove_clusters.md#access-controls) section for details on which resources will -be created. +You can choose to allow GitLab to manage your cluster for you. If GitLab manages +your cluster, resources for your projects are created automatically. See the +[Access controls](../../project/clusters/add_remove_clusters.md#access-controls) +section for details on which resources GitLab creates for you. -For clusters not managed by GitLab, project-specific resources will not be created -automatically. If you are using [Auto DevOps](../../../topics/autodevops/index.md) +For clusters not managed by GitLab, project-specific resources won't be created +automatically. 
If you're using [Auto DevOps](../../../topics/autodevops/index.md) for deployments with a cluster not managed by GitLab, you must ensure: - The project's deployment service account has permissions to deploy to @@ -72,8 +66,8 @@ for deployments with a cluster not managed by GitLab, you must ensure: `KUBE_NAMESPACE` directly is discouraged. NOTE: **Note:** -If you [install applications](#installing-applications) on your cluster, GitLab will create -the resources required to run these even if you have chosen to manage your own cluster. +If you [install applications](#installing-applications) on your cluster, GitLab creates +the resources required to run them even if you choose to manage your own cluster. ### Clearing the cluster cache @@ -86,7 +80,8 @@ your cluster, which can cause deployment jobs to fail. To clear the cache: -1. Navigate to your group’s **Kubernetes** page, and select your cluster. +1. Navigate to your group’s **{cloud-gear}** **Kubernetes** page, + and select your cluster. 1. Expand the **Advanced settings** section. 1. Click **Clear cluster cache**. @@ -110,12 +105,12 @@ them with an environment scope. The environment scope associates clusters with work. While evaluating which environment matches the environment scope of a -cluster, [cluster precedence](#cluster-precedence) will take -effect. The cluster at the project level will take precedence, followed +cluster, [cluster precedence](#cluster-precedence) takes +effect. The cluster at the project level takes precedence, followed by the closest ancestor group, followed by that group's parent and so on. -For example, let's say we have the following Kubernetes clusters: +For example, if your project has the following Kubernetes clusters: | Cluster | Environment scope | Where | | ---------- | ------------------- | ----------| @@ -151,11 +146,11 @@ deploy to production: url: https://example.com/ ``` -The result will then be: +The result is: -- The Project cluster will be used for the `test` job. 
-- The Staging cluster will be used for the `deploy to staging` job. -- The Production cluster will be used for the `deploy to production` job. +- The Project cluster is used for the `test` job. +- The Staging cluster is used for the `deploy to staging` job. +- The Production cluster is used for the `deploy to production` job. ## Cluster environments **(PREMIUM)** @@ -166,8 +161,7 @@ are deployed to the Kubernetes cluster, see the documentation for ## Security of Runners For important information about securely configuring GitLab Runners, see -[Security of -Runners](../../project/clusters/add_remove_clusters.md#security-of-gitlab-runners) +[Security of Runners](../../project/clusters/add_remove_clusters.md#security-of-gitlab-runners) documentation for project-level clusters. ## More information diff --git a/doc/user/group/saml_sso/index.md b/doc/user/group/saml_sso/index.md index 8dffb5c9df4..a3d9a14df10 100644 --- a/doc/user/group/saml_sso/index.md +++ b/doc/user/group/saml_sso/index.md @@ -257,6 +257,9 @@ Set other user attributes and claims according to the [assertions table](#assert ### Okta setup notes +<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> +For a demo of the Okta SAML setup including SCIM, see [Demo: Okta Group SAML & SCIM setup](https://youtu.be/0ES9HsZq0AQ). + | GitLab Setting | Okta Field | |--------------|----------------| | Identifier | Audience URI | diff --git a/doc/user/infrastructure/index.md b/doc/user/infrastructure/index.md index a50cdf1cf0e..a1d09373e2c 100644 --- a/doc/user/infrastructure/index.md +++ b/doc/user/infrastructure/index.md @@ -1,6 +1,152 @@ -# Infrastructure as Code +# Infrastructure as code with GitLab managed Terraform State -GitLab can be used to manage infrastructure as code. The following are some examples: +[Terraform remote backends](https://www.terraform.io/docs/backends/index.html) +enable you to store the state file in a remote, shared store. 
GitLab uses the +[Terraform HTTP backend](https://www.terraform.io/docs/backends/types/http.html) +to securely store the state files in local storage (the default) or +[the remote store of your choice](../../administration/terraform_state.md). -- [A generic tutorial for Terraform with GitLab](https://medium.com/@timhberry/terraform-pipelines-in-gitlab-415b9d842596). -- [Terraform at GitLab](https://about.gitlab.com/blog/2019/11/12/gitops-part-2/). +The GitLab managed Terraform state backend can store your Terraform state easily and +securely, and spares you from setting up additional remote resources like +Amazon S3 or Google Cloud Storage. Its features include: + +- Encrypting the state file both in transit and at rest. +- Locking and unlocking state. +- Remote Terraform plan and apply execution. + +There are two ways to get started with the GitLab managed Terraform state: + +- Use a local machine +- Use GitLab CI + +## Get started using local development + +If you plan to run `terraform plan` and `terraform apply` commands only from your local machine, this is a simple way to get started. + +First, create your project on your GitLab instance. + +Next, define the Terraform backend in your Terraform project to be: + +```hcl +terraform { + backend "http" { + } +} +``` + +Finally, run `terraform init` on your local machine, passing in the following options. 
The example below uses GitLab.com: + +```bash +terraform init \ + -backend-config="address=https://gitlab.com/api/v4/projects/<YOUR-PROJECT-ID>/terraform/state/<YOUR-PROJECT-NAME>" \ + -backend-config="lock_address=https://gitlab.com/api/v4/projects/<YOUR-PROJECT-ID>/terraform/state/<YOUR-PROJECT-NAME>/lock" \ + -backend-config="unlock_address=https://gitlab.com/api/v4/projects/<YOUR-PROJECT-ID>/terraform/state/<YOUR-PROJECT-NAME>/lock" \ + -backend-config="username=<YOUR-USERNAME>" \ + -backend-config="password=<YOUR-ACCESS-TOKEN>" \ + -backend-config="lock_method=POST" \ + -backend-config="unlock_method=DELETE" \ + -backend-config="retry_wait_min=5" +``` + +This initializes your Terraform state and stores it within your GitLab project. + +NOTE: `<YOUR-PROJECT-ID>` and `<YOUR-PROJECT-NAME>` can be found on the project's main page. + +## Get started using GitLab CI + +Another route is to leverage GitLab CI to run your `terraform plan` and `terraform apply` commands. + +### Configure the CI variables + +To use the Terraform backend, [first create a Personal Access Token](../profile/personal_access_tokens.md) with the `api` scope. Keep in mind that the Terraform backend is restricted to tokens with [Maintainer access](../permissions.md) to the repository. + +To keep the Personal Access Token secure, add it as a [CI/CD environment variable](../../ci/variables/README.md). In this example, we store it in the variable `GITLAB_TF_PASSWORD`. + +If you plan to use the variable on a branch which is not protected, make sure to configure the variable's protection settings accordingly. + +### Configure the Terraform backend + +Next we need to define the [http backend](https://www.terraform.io/docs/backends/types/http.html). 
In your Terraform project, add the following code block in a `.tf` file such as `backend.tf`, or wherever you prefer to define the remote backend: + +```hcl +terraform { + backend "http" { + } +} +``` + +### Configure the CI YAML file + +Finally, configure a `.gitlab-ci.yml` file, which lives in the root of your project repository. + +In our case we are using a pre-built image: + +```yaml +image: + name: hashicorp/terraform:light + entrypoint: + - '/usr/bin/env' + - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' +``` + +We then define some environment variables to make life easier. `GITLAB_TF_ADDRESS` is the URL of the GitLab instance where this pipeline runs, and `TF_ROOT` is the directory where the Terraform commands must be executed. + +```yaml +variables: + GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME} + TF_ROOT: ${CI_PROJECT_DIR}/environments/cloudflare/production + +cache: + paths: + - .terraform +``` + +In a `before_script`, pass a `terraform init` call containing configuration parameters. 
+These parameters correspond to variables required by the +[http backend](https://www.terraform.io/docs/backends/types/http.html): + +```yaml +before_script: + - cd ${TF_ROOT} + - terraform --version + - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${GITLAB_USER_LOGIN}" -backend-config="password=${GITLAB_TF_PASSWORD}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5" + +stages: + - validate + - build + - test + - deploy + +validate: + stage: validate + script: + - terraform validate + +plan: + stage: build + script: + - terraform plan + - terraform show + +apply: + stage: deploy + environment: + name: production + script: + - terraform apply + dependencies: + - plan + when: manual + only: + - master +``` + +### Push to GitLab + +Pushing your project to GitLab triggers a CI pipeline, which runs the `terraform init`, `terraform validate`, and `terraform plan` commands automatically. + +The output from the above `terraform` commands should be viewable in the job logs. + +## Example project + +See [this reference project](https://gitlab.com/nicholasklick/gitlab-terraform-aws) using GitLab and Terraform to deploy a basic AWS EC2 instance within a custom VPC. diff --git a/doc/user/project/import/jira.md b/doc/user/project/import/jira.md index 49224001fe6..db48282a8f3 100644 --- a/doc/user/project/import/jira.md +++ b/doc/user/project/import/jira.md @@ -9,6 +9,15 @@ Jira issues import is an MVC, project-level feature, meaning that issues from mu Jira projects can be imported into a GitLab project. MVC version imports issue title and description as well as some other issue metadata as a section in the issue description. +## Future iterations + +As of GitLab 12.10, the Jira issue importer only brings across the title and description of +an issue. 
+ +There is an [epic](https://gitlab.com/groups/gitlab-org/-/epics/2738) tracking the +addition of items such as issue assignees, labels, comments, user mapping, and much more. +These will be included in future iterations of the GitLab Jira importer. + ## Prerequisites ### Permissions diff --git a/doc/user/project/operations/alert_management.md b/doc/user/project/operations/alert_management.md new file mode 100644 index 00000000000..d9b0501af0a --- /dev/null +++ b/doc/user/project/operations/alert_management.md @@ -0,0 +1,60 @@ +# Alert Management + +> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2877) in GitLab 13.0. + +Alert Management enables developers to easily discover and view the alerts +generated by their application. Surfacing alert information where the code is +being developed increases efficiency and awareness. + +## Enable Alert Management + +NOTE: **Note:** +You will need at least Maintainer [permissions](../../permissions.md) to enable the Alert Management feature. + +1. Follow the [instructions for toggling generic alerts](../integrations/generic_alerts.md#setting-up-generic-alerts). +1. You can now visit **{cloud-gear}** **Operations > Alert Management** in your project's sidebar to [view a list](#alert-management-list) of alerts. + +![Alert Management Toggle](img/alert_management_1_v13_1.png) + +## Alert Management severity + +Each alert level contains a uniquely shaped and color-coded icon to help +you identify the severity of a particular alert. 
These severity icons help you +immediately identify which alerts you should prioritize investigating: + +![Alert Management Severity System](img/alert_management_severity_v13_0.png) + +Alerts contain one of the following icons: + +- **Critical**: **{severity-critical}** and hexadecimal color `#8b2615` +- **High**: **{severity-high}** and hexadecimal color `#c0341d` +- **Medium**: **{severity-medium}** and hexadecimal color `#fca429` +- **Low**: **{severity-low}** and hexadecimal color `#fdbc60` +- **Info**: **{severity-info}** and hexadecimal color `#418cd8` +- **Unknown**: **{severity-unknown}** and hexadecimal color `#bababa` + +## Alert Management list + +NOTE: **Note:** +You will need at least Developer [permissions](../../permissions.md) to view the Alert Management list. + +You can find the Alert Management list at **{cloud-gear}** **Operations > Alerts** in your project's sidebar. +Each alert contains the following metrics: + +![Alert Management List](img/alert_management_1_v13_0.png) + +- **Severity** - The current importance of an alert and how much attention it should receive. +- **Start time** - How long ago the alert fired. This field uses the standard GitLab pattern of `X time ago`, but is supported by a granular date/time tooltip depending on the user's locale. +- **End time** - How long ago the alert was resolved. This field uses the standard GitLab pattern of `X time ago`, but is supported by a granular date/time tooltip depending on the user's locale. +- **Alert description** - The description of the alert, which attempts to capture the most meaningful data. +- **Event count** - The number of times that an alert has fired. +- **Status** - The [current status](#alert-management-statuses) of the alert. + +### Alert Management statuses + +Each alert contains a status dropdown to indicate which alerts need investigation. 
+Standard alert statuses include `triggered`, `acknowledged`, and `resolved`: + +- **Triggered**: No one has begun investigation. +- **Acknowledged**: Someone is actively investigating the problem. +- **Resolved**: No further work is required. diff --git a/doc/user/project/operations/img/alert_management_1_v13_0.png b/doc/user/project/operations/img/alert_management_1_v13_0.png Binary files differ new file mode 100644 index 00000000000..dbc1e795b16 --- /dev/null +++ b/doc/user/project/operations/img/alert_management_1_v13_0.png diff --git a/doc/user/project/operations/img/alert_management_1_v13_1.png b/doc/user/project/operations/img/alert_management_1_v13_1.png Binary files differ new file mode 100644 index 00000000000..c01b4749eda --- /dev/null +++ b/doc/user/project/operations/img/alert_management_1_v13_1.png diff --git a/doc/user/project/operations/img/alert_management_severity_v13_0.png b/doc/user/project/operations/img/alert_management_severity_v13_0.png Binary files differ new file mode 100644 index 00000000000..f996d6e88f4 --- /dev/null +++ b/doc/user/project/operations/img/alert_management_severity_v13_0.png diff --git a/doc/user/project/settings/import_export.md b/doc/user/project/settings/import_export.md index f1c340daa68..d9f3ae776da 100644 --- a/doc/user/project/settings/import_export.md +++ b/doc/user/project/settings/import_export.md @@ -44,11 +44,24 @@ Note the following: ## Version history -The following table lists updates to Import/Export: +Starting with GitLab 13.0, GitLab can import bundles that were exported from a different GitLab deployment. +This ability is limited to the two previous GitLab [minor](../../../policy/maintenance.md#versioning) +releases, which is similar to our process for [Security Releases](../../../policy/maintenance.md#security-releases). 
+ +For example: + +| Current version | Can import bundles exported from | +|-----------------|----------------------------------| +| 13.0 | 13.0, 12.10, 12.9 | +| 13.1 | 13.1, 13.0, 12.10 | + +### 12.x + +Prior to GitLab 13.0, the following compatibility table applied:

| Exporting GitLab version | Importing GitLab version |
| -------------------------- | -------------------------- |
-| 11.7 to current | 11.7 to current |
+| 11.7 to 13.0 | 11.7 to 13.0 |
| 11.1 to 11.6 | 11.1 to 11.6 |
| 10.8 to 11.0 | 10.8 to 11.0 |
| 10.4 to 10.7 | 10.4 to 10.7 |
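
The "current release plus two previous minor releases" rule behind the table above can be sketched as a small shell helper. This is an illustrative sketch, not an official GitLab tool: the release list is a hypothetical excerpt and the function names are invented for the example.

```shell
# Known releases in order; GitLab 12.x ended at 12.10, so the list (not
# numeric comparison) handles the 12.10 -> 13.0 boundary.
releases=(12.8 12.9 12.10 13.0 13.1)

# Print the position of a version in the release list, or -1 if unknown.
index_of() {
  local i
  for i in "${!releases[@]}"; do
    if [ "${releases[$i]}" = "$1" ]; then
      echo "$i"
      return
    fi
  done
  echo -1
}

# can_import CURRENT EXPORTED: "yes" if CURRENT can import a bundle
# exported by EXPORTED, i.e. EXPORTED is CURRENT or one of the two
# releases before it.
can_import() {
  local cur exp
  cur=$(index_of "$1")
  exp=$(index_of "$2")
  if [ "$cur" -ge "$exp" ] && [ $((cur - exp)) -le 2 ]; then
    echo "yes"
  else
    echo "no"
  fi
}

can_import 13.1 12.10   # prints "yes": two minor releases back
can_import 13.1 12.9    # prints "no": three minor releases back
```

This mirrors the first table row for row: 13.0 accepts 13.0, 12.10, and 12.9; 13.1 accepts 13.1, 13.0, and 12.10.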