gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2021-04-20 17:36:54 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2021-04-20 17:36:54 +0300
commit    f61bb2a16a514b71bf33aabbbb999d6732016a24
tree      9548caa89e60b4f40b99bbd1dac030420b812aa8  /doc/administration/gitaly
parent    35fc54e5d261f8898e390aea7c2f5ec5fdf0539d
Add latest changes from gitlab-org/gitlab@13-11-stable-ee (tag: v13.11.0-rc42)
Diffstat (limited to 'doc/administration/gitaly')
-rw-r--r--  doc/administration/gitaly/configure_gitaly.md           225
-rw-r--r--  doc/administration/gitaly/img/architecture_v12_4.png    bin 42885 -> 0 bytes
-rw-r--r--  doc/administration/gitaly/index.md                      390
-rw-r--r--  doc/administration/gitaly/praefect.md                   341
-rw-r--r--  doc/administration/gitaly/reference.md                    2
5 files changed, 707 insertions, 251 deletions
diff --git a/doc/administration/gitaly/configure_gitaly.md b/doc/administration/gitaly/configure_gitaly.md
index 7e3647d1e34..51ad376a625 100644
--- a/doc/administration/gitaly/configure_gitaly.md
+++ b/doc/administration/gitaly/configure_gitaly.md
@@ -5,7 +5,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: reference
---
-# Configure Gitaly
+# Configure Gitaly **(FREE SELF)**
The Gitaly service itself is configured by using a [TOML configuration file](reference.md).
@@ -941,3 +941,226 @@ result as you did at the start. For example:
```
Note that `enforced="true"` means that authentication is being enforced.
+
+## Pack-objects cache **(FREE SELF)**
+
+> - [Introduced](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/372) in GitLab 13.11.
+> - It's enabled on GitLab.com.
+> - It's recommended for production use.
+
+[Gitaly](index.md), the service that provides storage for Git
+repositories, can be configured to cache a short rolling window of Git
+fetch responses. This can reduce server load when your server receives
+lots of CI fetch traffic.
+
+### Overview
+
+The pack-objects cache wraps `git pack-objects`, an internal part of
+Git that gets invoked indirectly via the PostUploadPack and
+SSHUploadPack Gitaly RPCs. These are the RPCs that Gitaly runs when a
+user does a Git fetch via HTTP or SSH, respectively. When the cache is
+enabled, anything that uses PostUploadPack or SSHUploadPack can
+benefit from it. It is orthogonal to:
+
+- The transport (HTTP or SSH).
+- Git protocol version (v0 or v2).
+- The type of fetch (full clones, incremental fetches, shallow clones,
+ partial clones, and so on).
+
+The strength of this cache is its ability to deduplicate concurrent
+identical fetches. It:
+
+- Can benefit GitLab instances where your users run CI/CD pipelines with many concurrent jobs.
+ There should be a noticeable reduction in server CPU utilization.
+- Does not benefit unique fetches at all. For example, if you run a spot check by cloning a
+ repository to your local computer, you are unlikely to see a benefit from this cache because
+ your fetch is probably unique.
+
+The pack-objects cache is a local cache. It:
+
+- Stores its metadata in the memory of the Gitaly process it is enabled in.
+- Stores the actual Git data it is caching in files on local storage.
+
+Using local files has the benefit that the operating system may
+automatically keep parts of the pack-objects cache files in RAM,
+making it faster.
+
+Because the pack-objects cache can lead to a significant increase in
+disk write IO, it is off by default.
+
+### Configure the cache
+
+These are the configuration settings for the pack-objects cache. Each
+setting is discussed in greater detail below.
+
+|Setting|Default|Description|
+|:---|:---|:---|
+|`enabled`|`false`|Turns on the cache. When off, Gitaly runs a dedicated `git pack-objects` process for each request. |
+|`dir`|`<PATH TO FIRST STORAGE>/+gitaly/PackObjectsCache`|Local directory where cache files get stored.|
+|`max_age`|`5m` (5 minutes)|Cache entries older than this get evicted and removed from disk.|
+
+In `/etc/gitlab/gitlab.rb`, set:
+
+```ruby
+gitaly['pack_objects_cache_enabled'] = true
+## gitaly['pack_objects_cache_dir'] = '/var/opt/gitlab/git-data/repositories/+gitaly/PackObjectsCache'
+## gitaly['pack_objects_cache_max_age'] = '5m'
+```
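+
+As with any `/etc/gitlab/gitlab.rb` change on Omnibus GitLab, the new settings take effect after a
+reconfigure. For example:
+
+```shell
+# Apply the updated /etc/gitlab/gitlab.rb settings.
+sudo gitlab-ctl reconfigure
+```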
+
+#### `enabled` defaults to `false`
+
+The cache is disabled by default. This is because in some cases, it
+can create an [extreme
+increase](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4010#note_534564684)
+in the number of bytes written to disk. On GitLab.com, we have verified
+that our repository storage disks can handle this extra workload, but
+we cannot assume this is true everywhere.
+
+#### Cache storage directory `dir`
+
+The cache needs a directory to store its files in. This directory
+should be:
+
+- In a filesystem with enough space. If the cache filesystem runs out of space, all
+ fetches start failing.
+- On a disk with enough IO bandwidth. If the cache disk runs out of IO bandwidth, all
+  fetches, and probably the entire server, slow down.
+
+By default, the cache storage directory is set to a subdirectory of the first Gitaly storage
+defined in the configuration file.
+
+Multiple Gitaly processes can use the same directory for cache storage. Each Gitaly process
+uses a unique random string as part of the cache filenames it creates. This means:
+
+- They do not collide.
+- They do not reuse another process's files.
+
+While the default directory puts the cache files in the same
+filesystem as your repository data, this is not a requirement. You can
+put the cache files on a different filesystem if that works better for
+your infrastructure.
+
+The amount of IO bandwidth required from the disk depends on:
+
+- The size and shape of the repositories on your Gitaly server.
+- The kind of traffic your users generate.
+
+You can use the `gitaly_pack_objects_generated_bytes_total` metric as a pessimistic estimate of
+the required disk write bandwidth, assuming a cache hit ratio of 0%.
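+
+For example, you can sample this counter from the Gitaly Prometheus endpoint. This is a sketch
+that assumes the default Omnibus listener address of `localhost:9236` on the Gitaly node:
+
+```shell
+# Print the raw counter. Sample it twice over an interval (or use a Prometheus
+# rate() query) to estimate bytes written per second.
+curl --silent http://localhost:9236/metrics | grep gitaly_pack_objects_generated_bytes_total
+```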
+
+The amount of space required depends on:
+
+- The bytes per second that your users pull from the cache.
+- The size of the `max_age` cache eviction window.
+
+If your users pull 100 MB/s and you use a 5-minute window, then on average you have
+`5*60*100MB = 30GB` of data in your cache directory. This is an expected average, not
+a guarantee. Peak size may exceed this average.
+
+#### Cache eviction window `max_age`
+
+The `max_age` configuration setting lets you control the chance of a
+cache hit and the average amount of storage used by cache files.
+Entries older than `max_age` get evicted from the in-memory metadata
+store, and deleted from disk.
+
+Note that eviction does not interfere with ongoing requests, so it is OK
+for `max_age` to be less than the time it takes to do a fetch over a
+slow connection. This is because Unix filesystems do not truly delete
+a file until all processes that are reading the deleted file have
+closed it.
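+
+For example, to widen the eviction window on an Omnibus installation (the `10m` value here is
+only an illustration, not a recommendation):
+
+```ruby
+# /etc/gitlab/gitlab.rb: keep cache entries for 10 minutes instead of the 5-minute default.
+gitaly['pack_objects_cache_max_age'] = '10m'
+```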
+
+### Observe the cache
+
+The cache can be observed in logs and using metrics.
+
+#### Logs
+
+|Message|Fields|Description|
+|:---|:---|:---|
+|`generated bytes`|`bytes`, `cache_key`|Logged when an entry was added to the cache|
+|`served bytes`|`bytes`, `cache_key`|Logged when an entry was read from the cache|
+
+In the case of a:
+
+- Cache miss, Gitaly logs both a `generated bytes` and a `served bytes` message.
+- Cache hit, Gitaly logs only a `served bytes` message.
+
+Example:
+
+```json
+{
+ "bytes":26186490,
+ "cache_key":"1b586a2698ca93c2529962e85cda5eea8f0f2b0036592615718898368b462e19",
+ "correlation_id":"01F1MY8JXC3FZN14JBG1H42G9F",
+ "grpc.meta.deadline_type":"none",
+ "grpc.method":"PackObjectsHook",
+ "grpc.request.fullMethod":"/gitaly.HookService/PackObjectsHook",
+ "grpc.request.glProjectPath":"root/gitlab-workhorse",
+ "grpc.request.glRepository":"project-2",
+ "grpc.request.repoPath":"@hashed/d4/73/d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35.git",
+ "grpc.request.repoStorage":"default",
+ "grpc.request.topLevelGroup":"@hashed",
+ "grpc.service":"gitaly.HookService",
+ "grpc.start_time":"2021-03-25T14:57:52.747Z",
+ "level":"info",
+ "msg":"generated bytes",
+ "peer.address":"@",
+ "pid":20961,
+ "span.kind":"server",
+ "system":"grpc",
+ "time":"2021-03-25T14:57:53.543Z"
+}
+{
+ "bytes":26186490,
+ "cache_key":"1b586a2698ca93c2529962e85cda5eea8f0f2b0036592615718898368b462e19",
+ "correlation_id":"01F1MY8JXC3FZN14JBG1H42G9F",
+ "grpc.meta.deadline_type":"none",
+ "grpc.method":"PackObjectsHook",
+ "grpc.request.fullMethod":"/gitaly.HookService/PackObjectsHook",
+ "grpc.request.glProjectPath":"root/gitlab-workhorse",
+ "grpc.request.glRepository":"project-2",
+ "grpc.request.repoPath":"@hashed/d4/73/d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35.git",
+ "grpc.request.repoStorage":"default",
+ "grpc.request.topLevelGroup":"@hashed",
+ "grpc.service":"gitaly.HookService",
+ "grpc.start_time":"2021-03-25T14:57:52.747Z",
+ "level":"info",
+ "msg":"served bytes",
+ "peer.address":"@",
+ "pid":20961,
+ "span.kind":"server",
+ "system":"grpc",
+ "time":"2021-03-25T14:57:53.543Z"
+}
+```
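+
+On Omnibus GitLab installations, these entries are written to the Gitaly log. Assuming the
+default log location, you can filter for them with, for example:
+
+```shell
+# Show recent pack-objects cache activity in the Gitaly JSON log.
+sudo grep 'cache_key' /var/log/gitlab/gitaly/current
+```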
+
+#### Metrics
+
+The following cache metrics are available.
+
+|Metric|Type|Labels|Description|
+|:---|:---|:---|:---|
+|`gitaly_pack_objects_cache_enabled`|gauge|`dir`,`max_age`|Set to `1` when the cache is enabled via the Gitaly config file|
+|`gitaly_pack_objects_cache_lookups_total`|counter|`result`|Hit/miss counter for cache lookups|
+|`gitaly_pack_objects_generated_bytes_total`|counter||Number of bytes written into the cache|
+|`gitaly_pack_objects_served_bytes_total`|counter||Number of bytes read from the cache|
+|`gitaly_streamcache_filestore_disk_usage_bytes`|gauge|`dir`|Total size of cache files|
+|`gitaly_streamcache_index_entries`|gauge|`dir`|Number of entries in the cache|
+
+Some of these metrics start with `gitaly_streamcache`
+because they are generated by the "streamcache" internal library
+package in Gitaly.
+
+Example:
+
+```plaintext
+gitaly_pack_objects_cache_enabled{dir="/var/opt/gitlab/git-data/repositories/+gitaly/PackObjectsCache",max_age="300"} 1
+gitaly_pack_objects_cache_lookups_total{result="hit"} 2
+gitaly_pack_objects_cache_lookups_total{result="miss"} 1
+gitaly_pack_objects_generated_bytes_total 2.618649e+07
+gitaly_pack_objects_served_bytes_total 7.855947e+07
+gitaly_streamcache_filestore_disk_usage_bytes{dir="/var/opt/gitlab/git-data/repositories/+gitaly/PackObjectsCache"} 2.6200152e+07
+gitaly_streamcache_filestore_removed_total{dir="/var/opt/gitlab/git-data/repositories/+gitaly/PackObjectsCache"} 1
+gitaly_streamcache_index_entries{dir="/var/opt/gitlab/git-data/repositories/+gitaly/PackObjectsCache"} 1
+```
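+
+To watch cache effectiveness over time, a Prometheus query along these lines (a sketch, not an
+official dashboard definition) gives the cache hit ratio:
+
+```plaintext
+sum(rate(gitaly_pack_objects_cache_lookups_total{result="hit"}[5m]))
+/
+sum(rate(gitaly_pack_objects_cache_lookups_total[5m]))
+```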
diff --git a/doc/administration/gitaly/img/architecture_v12_4.png b/doc/administration/gitaly/img/architecture_v12_4.png
deleted file mode 100644
index 6a3955a483b..00000000000
--- a/doc/administration/gitaly/img/architecture_v12_4.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index b314fa85af7..1a8e18ca2b2 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -5,72 +5,296 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: reference
---
-# Gitaly
+# Gitaly and Gitaly Cluster **(FREE SELF)**
-[Gitaly](https://gitlab.com/gitlab-org/gitaly) is the service that provides high-level RPC access to
-Git repositories. Without it, no GitLab components can read or write Git data.
+[Gitaly](https://gitlab.com/gitlab-org/gitaly) provides high-level RPC access to Git repositories.
+It is used by GitLab to read and write Git data.
-In the Gitaly documentation:
+Gitaly implements a client-server architecture:
-- **Gitaly server** refers to any node that runs Gitaly itself.
-- **Gitaly client** refers to any node that runs a process that makes requests of the
- Gitaly server. Processes include, but are not limited to:
+- A Gitaly server is any node that runs Gitaly itself.
+- A Gitaly client is any node that runs a process that makes requests of the Gitaly server. These
+ include, but are not limited to:
- [GitLab Rails application](https://gitlab.com/gitlab-org/gitlab).
- [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell).
- [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse).
-GitLab end users do not have direct access to Gitaly. Gitaly manages only Git
-repository access for GitLab. Other types of GitLab data aren't accessed using Gitaly.
+The following illustrates the Gitaly client-server architecture:
+
+```mermaid
+flowchart TD
+ subgraph Gitaly clients
+ A[GitLab Rails]
+ B[GitLab Workhorse]
+ C[GitLab Shell]
+ D[...]
+ end
+
+ subgraph Gitaly
+ E[Git integration]
+ end
+
+F[Local filesystem]
+
+A -- gRPC --> Gitaly
+B -- gRPC --> Gitaly
+C -- gRPC --> Gitaly
+D -- gRPC --> Gitaly
+
+E --> F
+```
+
+End users do not have direct access to Gitaly. Gitaly manages only Git repository access for GitLab.
+Other types of GitLab data aren't accessed using Gitaly.
<!-- vale gitlab.FutureTense = NO -->
WARNING:
-From GitLab 13.0, Gitaly support for NFS is deprecated. As of GitLab 14.0, NFS-related issues
-with Gitaly will no longer be addressed. Upgrade to [Gitaly Cluster](praefect.md) as soon as
-possible. Tools to [enable bulk moves](https://gitlab.com/groups/gitlab-org/-/epics/4916)
-of projects to Gitaly Cluster are planned.
+From GitLab 14.0, enhancements and bug fixes for NFS for Git repositories will no longer be
+considered and customer technical support will be considered out of scope.
+[Read more about Gitaly and NFS](#nfs-deprecation-notice).
<!-- vale gitlab.FutureTense = YES -->
-## Architecture
+## Configure Gitaly
-The following is a high-level architecture overview of how Gitaly is used.
+Gitaly comes pre-configured with Omnibus GitLab, which is a configuration
+[suitable for up to 1000 users](../reference_architectures/1k_users.md). For:
-![Gitaly architecture diagram](img/architecture_v12_4.png)
+- Omnibus GitLab installations for up to 2000 users, see [specific Gitaly configuration instructions](../reference_architectures/2k_users.md#configure-gitaly).
+- Source installations or custom Gitaly installations, see [Configure Gitaly](configure_gitaly.md).
-## Configure Gitaly
+GitLab installations for more than 2000 users should use Gitaly Cluster.
+
+NOTE:
+If not set in GitLab, feature flags are read as false from the console and Gitaly uses their
+default value. The default value depends on the GitLab version.
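+
+For example, checking an unset flag from the Rails console returns `false`, even if Gitaly itself
+applies a different default (the flag name below is a placeholder, not a real flag):
+
+```ruby
+# GitLab Rails console sketch; `:gitaly_example_flag` is a placeholder name.
+Feature.enabled?(:gitaly_example_flag)
+```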
+
+## Gitaly Cluster
+
+Gitaly, the service that provides storage for Git repositories, can
+be run in a clustered configuration to scale the Gitaly service and increase
+fault tolerance. In this configuration, every Git repository is stored on every
+Gitaly node in the cluster.
+
+Using a Gitaly Cluster increases fault tolerance by:
+
+- Replicating write operations to warm standby Gitaly nodes.
+- Detecting Gitaly node failures.
+- Automatically routing Git requests to an available Gitaly node.
+
+NOTE:
+Technical support for Gitaly clusters is limited to GitLab Premium and Ultimate
+customers.
+
+The availability objectives for Gitaly clusters are:
+
+- **Recovery Point Objective (RPO):** Less than 1 minute.
+
+ Writes are replicated asynchronously. Any writes that have not been replicated
+ to the newly promoted primary are lost.
+
+ [Strong consistency](praefect.md#strong-consistency) can be used to avoid loss in some
+ circumstances.
+
+- **Recovery Time Objective (RTO):** Less than 10 seconds.
+
+ Outages are detected by a health check run by each Praefect node every
+ second. Failover requires ten consecutive failed health checks on each
+ Praefect node.
+
+ [Faster outage detection](https://gitlab.com/gitlab-org/gitaly/-/issues/2608)
+ is planned to improve this to less than 1 second.
+
+Gitaly Cluster supports:
+
+- [Strong consistency](praefect.md#strong-consistency) of the secondary replicas.
+- [Automatic failover](praefect.md#automatic-failover-and-leader-election) from the primary to the secondary.
+- Reporting of possible data loss if the replication queue is non-empty.
+- Marking repositories as [read only](praefect.md#read-only-mode) if data loss is detected to prevent data inconsistencies.
+
+Follow the [Gitaly Cluster epic](https://gitlab.com/groups/gitlab-org/-/epics/1489)
+for improvements including
+[horizontally distributing reads](https://gitlab.com/groups/gitlab-org/-/epics/2013).
+
+### Overview
+
+Git storage is provided through the Gitaly service in GitLab, and is essential
+to the operation of the GitLab application. When the number of
+users, repositories, and activity grows, it is important to scale Gitaly
+appropriately by:
+
+- Increasing the CPU and memory resources available to Git before
+  resource exhaustion degrades Git, Gitaly, and GitLab application performance.
+- Increasing available storage before storage limits are reached, causing write
+  operations to fail.
+- Improving fault tolerance by removing single points of failure. Git should be
+  considered mission critical if a service degradation would prevent you from
+  deploying changes to production.
+
+### Moving beyond NFS
+
+WARNING:
+From GitLab 13.0, using NFS for Git repositories is deprecated. In GitLab 14.0,
+support for NFS for Git repositories is scheduled to be removed. Upgrade to
+Gitaly Cluster as soon as possible.
+
+[Network File System (NFS)](https://en.wikipedia.org/wiki/Network_File_System)
+is not well suited to Git workloads, which are CPU and IOPS sensitive.
+Specifically:
+
+- Git is sensitive to file system latency. Even simple operations require many
+ read operations. Operations that are fast on block storage can become an order of
+ magnitude slower. This significantly impacts GitLab application performance.
+- The NFS performance optimizations that prevent the performance gap between
+  block storage and NFS from widening further are vulnerable to race conditions. We have observed
+ [data inconsistencies](https://gitlab.com/gitlab-org/gitaly/-/issues/2589)
+ in production environments caused by simultaneous writes to different NFS
+ clients. Data corruption is not an acceptable risk.
+
+Gitaly Cluster is purpose-built to provide reliable, high-performance, fault-tolerant
+Git storage.
+
+Further reading:
+
+- Blog post: [The road to Gitaly v1.0 (aka, why GitLab doesn't require NFS for storing Git data anymore)](https://about.gitlab.com/blog/2018/09/12/the-road-to-gitaly-1-0/)
+- Blog post: [How we spent two weeks hunting an NFS bug in the Linux kernel](https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/)
+
+### Where Gitaly Cluster fits
+
+GitLab accesses [repositories](../../user/project/repository/index.md) through the configured
+[repository storages](../repository_storage_paths.md). Each new repository is stored on one of the
+repository storages based on their configured weights. Each repository storage is either:
+
+- A Gitaly storage served directly by Gitaly. These map to a directory on the file system of a
+ Gitaly node.
+- A [virtual storage](#virtual-storage-or-direct-gitaly-storage) served by Praefect. A virtual
+ storage is a cluster of Gitaly storages that appear as a single repository storage.
+
+Virtual storages are a feature of Gitaly Cluster. They support replicating the repositories to
+multiple storages for fault tolerance. Virtual storages can improve performance by distributing
+requests across Gitaly nodes. Their distributed nature makes it viable to have a single repository
+storage in GitLab to simplify repository management.
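+
+For example, on a GitLab (Gitaly client) node, a virtual storage is addressed through Praefect
+rather than an individual Gitaly node. The following is only a sketch with placeholder host and
+token values; see [Configure Gitaly Cluster](praefect.md) for the full procedure:
+
+```ruby
+# /etc/gitlab/gitlab.rb on the GitLab application node (placeholder values shown).
+git_data_dirs({
+  "default" => {
+    "gitaly_address" => "tcp://PRAEFECT_LOADBALANCER_HOST:2305",
+    "gitaly_token" => 'PRAEFECT_EXTERNAL_TOKEN'
+  }
+})
+```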
+
+### Components of Gitaly Cluster
+
+Gitaly Cluster consists of multiple components:
+
+- [Load balancer](praefect.md#load-balancer) for distributing requests and providing fault-tolerant access to
+ Praefect nodes.
+- [Praefect](praefect.md#praefect) nodes for managing the cluster and routing requests to Gitaly nodes.
+- [PostgreSQL database](praefect.md#postgresql) for persisting cluster metadata and [PgBouncer](praefect.md#pgbouncer),
+ recommended for pooling Praefect's database connections.
+- Gitaly nodes to provide repository storage and Git access.
+
+![Cluster example](img/cluster_example_v13_3.png)
+
+In this example:
+
+- Repositories are stored on a virtual storage called `storage-1`.
+- Three Gitaly nodes provide `storage-1` access: `gitaly-1`, `gitaly-2`, and `gitaly-3`.
+- The three Gitaly nodes store data on their file systems.
+
+### Virtual storage or direct Gitaly storage
+
+Gitaly supports multiple models of scaling:
+
+- Clustering using Gitaly Cluster, where each repository is stored on multiple Gitaly nodes in the
+ cluster. Read requests are distributed between repository replicas and write requests are
+ broadcast to repository replicas. GitLab accesses virtual storage.
+- Direct access to Gitaly storage using [repository storage paths](../repository_storage_paths.md),
+ where each repository is stored on the assigned Gitaly node. All requests are routed to this node.
-Gitaly comes pre-configured with Omnibus GitLab. For more information on customizing your
-Gitaly installation, see [Configure Gitaly](configure_gitaly.md).
+The following illustrates Gitaly set up for direct access to Gitaly storage instead of Gitaly Cluster:
-## Direct Git access bypassing Gitaly
+![Shard example](img/shard_example_v13_3.png)
-GitLab doesn't advise directly accessing Gitaly repositories stored on disk with
-a Git client, because Gitaly is being continuously improved and changed. These
-improvements may invalidate assumptions, resulting in performance degradation, instability, and even data loss.
+In this example:
-Gitaly has optimizations, such as the
-[`info/refs` advertisement cache](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/design_diskcache.md),
-that rely on Gitaly controlling and monitoring access to repositories by using the
-official gRPC interface. Likewise, Praefect has optimizations, such as fault
-tolerance and distributed reads, that depend on the gRPC interface and
-database to determine repository state.
+- Each repository is stored on one of three Gitaly storages: `storage-1`, `storage-2`,
+ or `storage-3`.
+- Each storage is serviced by a Gitaly node.
+- The three Gitaly nodes store data in three separate hashed storage locations.
-For these reasons, **accessing repositories directly is done at your own risk
-and is not supported**.
+Generally, virtual storage with Gitaly Cluster can replace direct Gitaly storage configurations, at
+the expense of additional storage needed to store each repository on multiple Gitaly nodes. The
+benefits of using Gitaly Cluster over direct Gitaly storage are:
+
+- Improved fault tolerance, because each Gitaly node has a copy of every repository.
+- Improved resource utilization, reducing the need for over-provisioning for shard-specific peak
+ loads, because read loads are distributed across replicas.
+- Manual rebalancing for performance is not required, because read loads are distributed across
+ replicas.
+- Simpler management, because all Gitaly nodes are identical.
+
+Under some workloads, CPU and memory requirements may require a large fleet of Gitaly nodes. It
+can be uneconomical to have a one-to-one replication factor.
+
+A hybrid approach can be used in these instances, where each shard is configured as a smaller
+cluster. [Variable replication factor](https://gitlab.com/groups/gitlab-org/-/epics/3372) is planned
+to provide greater flexibility for extremely large GitLab instances.
+
+### Gitaly Cluster compared to Geo
+
+Gitaly Cluster and [Geo](../geo/index.md) both provide redundancy. However, the redundancy of:
+
+- Gitaly Cluster provides fault tolerance for data storage and is invisible to the user. Users are
+ not aware when Gitaly Cluster is used.
+- Geo provides [replication](../geo/index.md) and [disaster recovery](../geo/disaster_recovery/index.md) for
+ an entire instance of GitLab. Users know when they are using Geo for
+ [replication](../geo/index.md). Geo [replicates multiple data types](../geo/replication/datatypes.md#limitations-on-replicationverification),
+ including Git data.
+
+The following table outlines the major differences between Gitaly Cluster and Geo:
+
+| Tool | Nodes | Locations | Latency tolerance | Failover | Consistency | Provides redundancy for |
+|:---------------|:---------|:----------|:-------------------|:----------------------------------------------------------------|:-----------------------------------------|:------------------------|
+| Gitaly Cluster | Multiple | Single | Approximately 1 ms | [Automatic](praefect.md#automatic-failover-and-leader-election) | [Strong](praefect.md#strong-consistency) | Data storage in Git |
+| Geo | Multiple | Multiple | Up to one minute | [Manual](../geo/disaster_recovery/index.md) | Eventual | Entire GitLab instance |
+
+For more information, see:
+
+- Geo [use cases](../geo/index.md#use-cases).
+- Geo [architecture](../geo/index.md#architecture).
+
+### Architecture
+
+Praefect is a router and transaction manager for Gitaly, and a required
+component for running a Gitaly Cluster.
+
+![Architecture diagram](img/praefect_architecture_v12_10.png)
+
+For more information, see [Gitaly High Availability (HA) Design](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/design_ha.md).
+
+### Configure Gitaly Cluster
+
+For more information on configuring Gitaly Cluster, see [Configure Gitaly Cluster](praefect.md).
+
+## Do not bypass Gitaly
+
+GitLab doesn't advise directly accessing Gitaly repositories stored on disk with a Git client,
+because Gitaly is being continuously improved and changed. These improvements may invalidate
+your assumptions, resulting in performance degradation, instability, and even data loss. For example:
+
+- Gitaly has optimizations, such as the [`info/refs` advertisement cache](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/design_diskcache.md),
+ that rely on Gitaly controlling and monitoring access to repositories by using the official gRPC
+ interface.
+- [Gitaly Cluster](praefect.md) has optimizations, such as fault tolerance and
+ [distributed reads](praefect.md#distributed-reads), that depend on the gRPC interface and database
+ to determine repository state.
+
+WARNING:
+Accessing Git repositories directly is done at your own risk and is not supported.
## Direct access to Git in GitLab
Direct access to Git uses code in GitLab known as the "Rugged patches".
-### History
-
-Before Gitaly existed, what are now Gitaly clients used to access Git repositories directly, either:
+Before Gitaly existed, what are now Gitaly clients accessed Git repositories directly, either:
-- On a local disk in the case of a single-machine Omnibus GitLab installation
+- On a local disk in the case of a single-machine Omnibus GitLab installation.
- Using NFS in the case of a horizontally-scaled GitLab installation.
-Besides running plain `git` commands, GitLab used to use a Ruby library called
+In addition to running plain `git` commands, GitLab used a Ruby library called
[Rugged](https://github.com/libgit2/rugged). Rugged is a wrapper around
[libgit2](https://libgit2.org/), a stand-alone implementation of Git in the form of a C library.
@@ -81,9 +305,9 @@ not an external process, there was very little overhead between:
- GitLab application code that tried to look up data in Git repositories.
- The Git implementation itself.
-Because the combination of Rugged and Unicorn was so efficient, the GitLab application code ended up with lots of
-duplicate Git object lookups. For example, looking up the `master` commit a dozen times in one
-request. We could write inefficient code without poor performance.
+Because the combination of Rugged and Unicorn was so efficient, the GitLab application code ended up
+with lots of duplicate Git object lookups. For example, looking up the default branch commit a dozen
+times in one request. We could write inefficient code without suffering poor performance.
When we migrated these Git lookups to Gitaly calls, we suddenly had a much higher fixed cost per Git
lookup. Even when Gitaly is able to re-use an already-running `git` process (for example, to look up
@@ -94,8 +318,8 @@ a commit), you still have:
Using GitLab.com to measure, we reduced the number of Gitaly calls per request until the loss of
Rugged's efficiency was no longer felt. It also helped that we run Gitaly itself directly on the Git
-file severs, rather than by using NFS mounts. This gave us a speed boost that counteracted the negative
-effect of not using Rugged anymore.
+file servers, rather than by using NFS mounts. This gave us a speed boost that counteracted the
+negative effect of not using Rugged anymore.
Unfortunately, other deployments of GitLab could not remove NFS like we did on GitLab.com, and they
got the worst of both worlds:
@@ -154,7 +378,29 @@ There are two facets to our efforts to remove direct Git access in GitLab:
NFS.
The second facet presents the only real solution. For this, we developed
-[Gitaly Cluster](praefect.md).
+[Gitaly Cluster](#gitaly-cluster).
+
+## NFS deprecation notice
+
+<!-- vale gitlab.FutureTense = NO -->
+
+From GitLab 14.0, enhancements and bug fixes for NFS for Git repositories will no longer be
+considered and customer technical support will be considered out of scope.
+
+Additional information:
+
+- [Recommended NFS mount options and known issues with Gitaly and NFS](../nfs.md#upgrade-to-gitaly-cluster-or-disable-caching-if-experiencing-data-loss).
+- [GitLab statement of support](https://about.gitlab.com/support/statement-of-support.html#gitaly-and-nfs).
+
+<!-- vale gitlab.FutureTense = YES -->
+
+GitLab recommends:
+
+- Creating a [Gitaly Cluster](#gitaly-cluster) as soon as possible.
+- [Moving your repositories](praefect.md#migrate-to-gitaly-cluster) from NFS-based storage to Gitaly
+ Cluster.
+
+We welcome your feedback on this process: raise a support ticket, or [comment on the epic](https://gitlab.com/groups/gitlab-org/-/epics/4916).
## Troubleshooting Gitaly
@@ -213,6 +459,21 @@ You can run a gRPC trace with:
sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check
```
+### Server side gRPC logs
+
+gRPC tracing can also be enabled in Gitaly itself with the `GODEBUG=http2debug`
+environment variable. To set this in an Omnibus GitLab install:
+
+1. Add the following to your `gitlab.rb` file:
+
+ ```ruby
+ gitaly['env'] = {
+ "GODEBUG=http2debug" => "2"
+ }
+ ```
+
+1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) GitLab.
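+
+The extra HTTP/2 debug output is then written to the Gitaly service logs, which you can follow
+with, for example:
+
+```shell
+# Follow the Gitaly service logs on an Omnibus GitLab node.
+sudo gitlab-ctl tail gitaly
+```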
+
### Correlating Git processes with RPCs
Sometimes you need to find out which Gitaly RPC created a particular Git process.
@@ -240,9 +501,9 @@ so, there's not that much visibility into what goes on inside
If you have Prometheus set up to scrape your Gitaly process, you can see
request rates and error codes for individual RPCs in `gitaly-ruby` by
querying `grpc_client_handled_total`. Strictly speaking, this metric does
-not differentiate between `gitaly-ruby` and other RPCs, but in practice
-(as of GitLab 11.9), all gRPC calls made by Gitaly itself are internal
-calls from the main Gitaly process to one of its `gitaly-ruby` sidecars.
+not differentiate between `gitaly-ruby` and other RPCs. However, from GitLab 11.9,
+all gRPC calls made by Gitaly itself are internal calls from the main Gitaly process to one of its
+`gitaly-ruby` sidecars.
Assuming your `grpc_client_handled_total` counter observes only Gitaly,
the following query shows you RPCs are (most likely) internally
@@ -349,9 +610,10 @@ update the secrets file on the Gitaly server to match the Gitaly client, then
### Command line tools cannot connect to Gitaly
-If you can't connect to a Gitaly server with command line (CLI) tools,
-and certain actions result in a `14: Connect Failed` error message,
-gRPC cannot reach your Gitaly server.
+gRPC cannot reach your Gitaly server if:
+
+- You can't connect to a Gitaly server with command-line tools.
+- Certain actions result in a `14: Connect Failed` error message.
Verify you can reach Gitaly by using TCP:
@@ -383,16 +645,30 @@ unset http_proxy
unset https_proxy
```
-### Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly server
+### Permission denied errors appearing in Gitaly or Praefect logs when accessing repositories
-If this error occurs even though file permissions are correct, it's likely that
-the Gitaly server is experiencing
-[clock drift](https://en.wikipedia.org/wiki/Clock_drift).
+You might see the following in Gitaly and Praefect logs:
-Ensure the Gitaly clients and servers are synchronized, and use an NTP time
-server to keep them synchronized, if possible.
+```json
+{
+ ...
+ "error":"rpc error: code = PermissionDenied desc = permission denied",
+ "grpc.code":"PermissionDenied",
+ "grpc.meta.client_name":"gitlab-web",
+ "grpc.request.fullMethod":"/gitaly.ServerService/ServerInfo",
+ "level":"warning",
+ "msg":"finished unary call with code PermissionDenied",
+ ...
+}
+```
-### Praefect
+This is a gRPC call
+[error response code](https://grpc.github.io/grpc/core/md_doc_statuscodes.html).
-Praefect is a router and transaction manager for Gitaly, and a required
-component for running a Gitaly Cluster. For more information see [Gitaly Cluster](praefect.md).
+If this error occurs, even though
+[the Gitaly auth tokens are correctly set up](../gitaly/praefect.md#debugging-praefect),
+it's likely that the Gitaly servers are experiencing
+[clock drift](https://en.wikipedia.org/wiki/Clock_drift).
+
+Ensure the Gitaly clients and servers are synchronized, and use an NTP time
+server to keep them synchronized.
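+
+As a quick spot check, you can compare the clock status on each Gitaly and Praefect node. For
+example, on systems with systemd:
+
+```shell
+# Shows the current time and whether the system clock is NTP-synchronized.
+timedatectl status
+```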
diff --git a/doc/administration/gitaly/praefect.md b/doc/administration/gitaly/praefect.md
index 80b5c1bb799..1a8df1cea92 100644
--- a/doc/administration/gitaly/praefect.md
+++ b/doc/administration/gitaly/praefect.md
@@ -5,154 +5,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: reference
---
-# Gitaly Cluster **(FREE SELF)**
+# Configure Gitaly Cluster **(FREE SELF)**
-[Gitaly](index.md), the service that provides storage for Git repositories, can
-be run in a clustered configuration to increase fault tolerance. In this
-configuration, every Git repository is stored on every Gitaly node in the
-cluster. Multiple clusters (or storage shards) can be configured.
-
-NOTE:
-Technical support for Gitaly clusters is limited to GitLab Premium and Ultimate
-customers.
-
-Praefect is a router and transaction manager for Gitaly, and a required
-component for running a Gitaly Cluster.
-
-![Architecture diagram](img/praefect_architecture_v12_10.png)
-
-Using a Gitaly Cluster increases fault tolerance by:
-
-- Replicating write operations to warm standby Gitaly nodes.
-- Detecting Gitaly node failures.
-- Automatically routing Git requests to an available Gitaly node.
-
-The availability objectives for Gitaly clusters are:
-
-- **Recovery Point Objective (RPO):** Less than 1 minute.
-
- Writes are replicated asynchronously. Any writes that have not been replicated
- to the newly promoted primary are lost.
-
- [Strong consistency](#strong-consistency) can be used to avoid loss in some
- circumstances.
-
-- **Recovery Time Objective (RTO):** Less than 10 seconds.
-
- Outages are detected by a health checks run by each Praefect node every
- second. Failover requires ten consecutive failed health checks on each
- Praefect node.
-
- [Faster outage detection](https://gitlab.com/gitlab-org/gitaly/-/issues/2608)
- is planned to improve this to less than 1 second.
-
-Gitaly Cluster supports:
-
-- [Strong consistency](#strong-consistency) of the secondary replicas.
-- [Automatic failover](#automatic-failover-and-leader-election) from the primary to the secondary.
-- Reporting of possible data loss if replication queue is non-empty.
-- Marking repositories as [read only](#read-only-mode) if data loss is detected to prevent data inconsistencies.
-
-Follow the [Gitaly Cluster epic](https://gitlab.com/groups/gitlab-org/-/epics/1489)
-for improvements including
-[horizontally distributing reads](https://gitlab.com/groups/gitlab-org/-/epics/2013).
-
-## Gitaly Cluster compared to Geo
-
-Gitaly Cluster and [Geo](../geo/index.md) both provide redundancy. However the redundancy of:
-
-- Gitaly Cluster provides fault tolerance for data storage and is invisible to the user. Users are
- not aware when Gitaly Cluster is used.
-- Geo provides [replication](../geo/index.md) and [disaster recovery](../geo/disaster_recovery/index.md) for
- an entire instance of GitLab. Users know when they are using Geo for
- [replication](../geo/index.md). Geo [replicates multiple data types](../geo/replication/datatypes.md#limitations-on-replicationverification),
- including Git data.
-
-The following table outlines the major differences between Gitaly Cluster and Geo:
-
-| Tool | Nodes | Locations | Latency tolerance | Failover | Consistency | Provides redundancy for |
-|:---------------|:---------|:----------|:-------------------|:-----------------------------------------------------|:------------------------------|:------------------------|
-| Gitaly Cluster | Multiple | Single | Approximately 1 ms | [Automatic](#automatic-failover-and-leader-election) | [Strong](#strong-consistency) | Data storage in Git |
-| Geo | Multiple | Multiple | Up to one minute | [Manual](../geo/disaster_recovery/index.md) | Eventual | Entire GitLab instance |
-
-For more information, see:
-
-- [Gitaly architecture](index.md#architecture).
-- Geo [use cases](../geo/index.md#use-cases) and [architecture](../geo/index.md#architecture).
-
-## Where Gitaly Cluster fits
-
-GitLab accesses [repositories](../../user/project/repository/index.md) through the configured
-[repository storages](../repository_storage_paths.md). Each new repository is stored on one of the
-repository storages based on their configured weights. Each repository storage is either:
-
-- A Gitaly storage served directly by Gitaly. These map to a directory on the file system of a
- Gitaly node.
-- A [virtual storage](#virtual-storage-or-direct-gitaly-storage) served by Praefect. A virtual
- storage is a cluster of Gitaly storages that appear as a single repository storage.
-
-Virtual storages are a feature of Gitaly Cluster. They support replicating the repositories to
-multiple storages for fault tolerance. Virtual storages can improve performance by distributing
-requests across Gitaly nodes. Their distributed nature makes it viable to have a single repository
-storage in GitLab to simplify repository management.
-
-## Components of Gitaly Cluster
-
-Gitaly Cluster consists of multiple components:
-
-- [Load balancer](#load-balancer) for distributing requests and providing fault-tolerant access to
- Praefect nodes.
-- [Praefect](#praefect) nodes for managing the cluster and routing requests to Gitaly nodes.
-- [PostgreSQL database](#postgresql) for persisting cluster metadata and [PgBouncer](#pgbouncer),
- recommended for pooling Praefect's database connections.
-- [Gitaly](index.md) nodes to provide repository storage and Git access.
-
-![Cluster example](img/cluster_example_v13_3.png)
-
-In this example:
-
-- Repositories are stored on a virtual storage called `storage-1`.
-- Three Gitaly nodes provide `storage-1` access: `gitaly-1`, `gitaly-2`, and `gitaly-3`.
-- The three Gitaly nodes store data on their file systems.
-
-### Virtual storage or direct Gitaly storage
-
-Gitaly supports multiple models of scaling:
-
-- Clustering using Gitaly Cluster, where each repository is stored on multiple Gitaly nodes in the
- cluster. Read requests are distributed between repository replicas and write requests are
- broadcast to repository replicas. GitLab accesses virtual storage.
-- Direct access to Gitaly storage using [repository storage paths](../repository_storage_paths.md),
- where each repository is stored on the assigned Gitaly node. All requests are routed to this node.
-
-The following is Gitaly set up to use direct access to Gitaly instead of Gitaly Cluster:
-
-![Shard example](img/shard_example_v13_3.png)
-
-In this example:
-
-- Each repository is stored on one of three Gitaly storages: `storage-1`, `storage-2`,
- or `storage-3`.
-- Each storage is serviced by a Gitaly node.
-- The three Gitaly nodes store data in three separate hashed storage locations.
-
-Generally, virtual storage with Gitaly Cluster can replace direct Gitaly storage configurations, at
-the expense of additional storage needed to store each repository on multiple Gitaly nodes. The
-benefit of using Gitaly Cluster over direct Gitaly storage is:
-
-- Improved fault tolerance, because each Gitaly node has a copy of every repository.
-- Improved resource utilization, reducing the need for over-provisioning for shard-specific peak
- loads, because read loads are distributed across replicas.
-- Manual rebalancing for performance is not required, because read loads are distributed across
- replicas.
-- Simpler management, because all Gitaly nodes are identical.
-
-Under some workloads, CPU and memory requirements may require a large fleet of Gitaly nodes. It
-can be uneconomical to have one to one replication factor.
-
-A hybrid approach can be used in these instances, where each shard is configured as a smaller
-cluster. [Variable replication factor](https://gitlab.com/groups/gitlab-org/-/epics/3372) is planned
-to provide greater flexibility for extremely large GitLab instances.
+In addition to Gitaly Cluster configuration instructions available as part of
+[reference architectures](../reference_architectures/index.md) for installations of more than
+2000 users, advanced configuration instructions are available below.
## Requirements for configuring a Gitaly Cluster
@@ -167,6 +24,10 @@ See the [design
document](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/design_ha.md)
for implementation details.
+NOTE:
+If not set in GitLab, feature flags are read as false from the console and Praefect uses their
+default value. The default value depends on the GitLab version.
+
## Setup Instructions
If you [installed](https://about.gitlab.com/install/) GitLab using the Omnibus
@@ -204,7 +65,7 @@ You need the IP/host address for each node.
If you are using a cloud provider, you can look up the addresses for each server through your cloud provider's management console.
-If you are using Google Cloud Platform, SoftLayer, or any other vendor that provides a virtual private cloud (VPC) you can use the private addresses for each cloud instance (corresponds to “internal address” for Google Cloud Platform) for `PRAEFECT_HOST`, `GITALY_HOST_*`, and `GITLAB_HOST`.
+If you are using Google Cloud Platform, SoftLayer, or any other vendor that provides a virtual private cloud (VPC) you can use the private addresses for each cloud instance (corresponds to "internal address" for Google Cloud Platform) for `PRAEFECT_HOST`, `GITALY_HOST_*`, and `GITLAB_HOST`.
#### Secrets
@@ -227,6 +88,9 @@ with secure tokens as you complete the setup process.
We note in the instructions below where these secrets are required.
+NOTE:
+Omnibus GitLab installations can use `gitlab-secrets.json` for `GITLAB_SHELL_SECRET_TOKEN`.
+
### PostgreSQL
NOTE:
@@ -236,10 +100,11 @@ database on the same PostgreSQL server if using
of GitLab and should not be replicated.
These instructions help set up a single PostgreSQL database, which creates a single point of
-failure. For greater fault tolerance, the following options are available:
+failure. The following options are available:
-- For non-Geo installations, use one of the fault-tolerant
- [PostgreSQL setups](../postgresql/index.md).
+- For non-Geo installations, either:
+ - Use one of the documented [PostgreSQL setups](../postgresql/index.md).
+ - Use your own third-party database setup, if fault tolerance is required.
- For Geo instances, either:
- Set up a separate [PostgreSQL instance](https://www.postgresql.org/docs/11/high-availability.html).
- Use a cloud-managed PostgreSQL service. AWS
@@ -458,7 +323,7 @@ application server, or a Gitaly node.
WARNING:
If you have data on an already existing storage called
`default`, you should configure the virtual storage with another name and
- [migrate the data to the Gitaly Cluster storage](#migrate-existing-repositories-to-gitaly-cluster)
+ [migrate the data to the Gitaly Cluster storage](#migrate-to-gitaly-cluster)
afterwards.
Replace `PRAEFECT_INTERNAL_TOKEN` with a strong secret, which is used by
@@ -755,14 +620,26 @@ documentation](configure_gitaly.md#configure-gitaly-servers).
gitaly['auth_token'] = 'PRAEFECT_INTERNAL_TOKEN'
```
-1. Configure the GitLab Shell `secret_token`, and `internal_api_url` which are
- needed for `git push` operations.
+1. Configure the GitLab Shell secret token, which is needed for `git push` operations. Either:
- If you have already configured [Gitaly on its own server](index.md)
+ - Method 1:
- ```ruby
- gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
+     1. Copy `/etc/gitlab/gitlab-secrets.json` from the Gitaly client to the same path on the Gitaly
+ servers and any other Gitaly clients.
+ 1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) on Gitaly servers.
+ - Method 2:
+
+ 1. Edit `/etc/gitlab/gitlab.rb`.
+ 1. Replace `GITLAB_SHELL_SECRET_TOKEN` with the real secret.
+
+ ```ruby
+ gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
+ ```
+
+1. Configure `internal_api_url`, which is also needed for `git push` operations:
+
+ ```ruby
# Configure the gitlab-shell API callback URL. Without this, `git push` will
# fail. This can be your front door GitLab URL or an internal load balancer.
# Examples: 'https://gitlab.example.com', 'http://1.2.3.4'
@@ -838,7 +715,7 @@ addition to the GitLab nodes. Some requests handled by
process. `gitaly-ruby` uses the Gitaly address set in the GitLab server's
`git_data_dirs` setting to make this connection.
-We hope that if you’re managing fault-tolerant systems like GitLab, you have a load balancer
+We hope that if you're managing fault-tolerant systems like GitLab, you have a load balancer
of choice already. Some examples include [HAProxy](https://www.haproxy.org/)
(open-source), [Google Internal Load Balancer](https://cloud.google.com/load-balancing/docs/internal/),
[AWS Elastic Load Balancer](https://aws.amazon.com/elasticloadbalancing/), F5
@@ -887,7 +764,7 @@ Particular attention should be shown to:
WARNING:
If you have existing data stored on the default Gitaly storage,
- you should [migrate the data your Gitaly Cluster storage](#migrate-existing-repositories-to-gitaly-cluster)
+  you should [migrate the data to your Gitaly Cluster storage](#migrate-to-gitaly-cluster)
first.
```ruby
@@ -914,15 +791,23 @@ Particular attention should be shown to:
})
```
-1. Configure the `gitlab_shell['secret_token']` so that callbacks from Gitaly
- nodes during a `git push` are properly authenticated by editing
- `/etc/gitlab/gitlab.rb`:
+1. Configure the GitLab Shell secret token so that callbacks from Gitaly nodes during a `git push`
+ are properly authenticated. Either:
- You need to replace `GITLAB_SHELL_SECRET_TOKEN` with the real secret.
+ - Method 1:
- ```ruby
- gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
- ```
+     1. Copy `/etc/gitlab/gitlab-secrets.json` from the Gitaly client to the same path on the Gitaly
+ servers and any other Gitaly clients.
+ 1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) on Gitaly servers.
+
+ - Method 2:
+
+ 1. Edit `/etc/gitlab/gitlab.rb`.
+ 1. Replace `GITLAB_SHELL_SECRET_TOKEN` with the real secret.
+
+ ```ruby
+ gitlab_shell['secret_token'] = 'GITLAB_SHELL_SECRET_TOKEN'
+ ```
1. Add Prometheus monitoring settings by editing `/etc/gitlab/gitlab.rb`. If Prometheus
is enabled on a different node, make edits on that node instead.
@@ -1036,6 +921,7 @@ cluster.
> - [Made generally available and enabled by default](https://gitlab.com/gitlab-org/gitaly/-/issues/2951) in GitLab 13.3.
> - [Disabled by default](https://gitlab.com/gitlab-org/gitaly/-/issues/3178) in GitLab 13.5.
> - [Enabled by default](https://gitlab.com/gitlab-org/gitaly/-/issues/3334) in GitLab 13.8.
+> - [Feature flag removed](https://gitlab.com/gitlab-org/gitaly/-/issues/3383) in GitLab 13.11.
Praefect supports distribution of read operations across Gitaly nodes that are
configured for the virtual node.
@@ -1064,8 +950,10 @@ They reflect configuration defined for this instance of Praefect.
> - Introduced in GitLab 13.1 in [alpha](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga), disabled by default.
> - Entered [beta](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga) in GitLab 13.2, disabled by default.
-> - From GitLab 13.3, disabled unless primary-wins reference transactions strategy is disabled.
+> - In GitLab 13.3, disabled unless primary-wins voting strategy is disabled.
> - From GitLab 13.4, enabled by default.
+> - From GitLab 13.5, you must use Git v2.28.0 or higher on Gitaly nodes to enable strong consistency.
+> - From GitLab 13.6, primary-wins voting strategy and `gitaly_reference_transactions_primary_wins` feature flag were removed from the source code.
Praefect guarantees eventual consistency by replicating all writes to secondary nodes
after the write to the primary Gitaly node has happened.
@@ -1077,18 +965,12 @@ information, see the [strong consistency epic](https://gitlab.com/groups/gitlab-
To enable strong consistency:
-- In GitLab 13.5, you must use Git v2.28.0 or higher on Gitaly nodes to enable
- strong consistency.
-- In GitLab 13.4 and later, the strong consistency voting strategy has been
- improved. Instead of requiring all nodes to agree, only the primary and half
- of the secondaries need to agree. This strategy is enabled by default. To
- disable it and continue using the primary-wins strategy, enable the
- `:gitaly_reference_transactions_primary_wins` feature flag.
-- In GitLab 13.3, reference transactions are enabled by default with a
- primary-wins strategy. This strategy causes all transactions to succeed for
- the primary and thus does not ensure strong consistency. To enable strong
- consistency, disable the `:gitaly_reference_transactions_primary_wins`
- feature flag.
+- In GitLab 13.5, you must use Git v2.28.0 or higher on Gitaly nodes to enable strong consistency.
+- In GitLab 13.4 and later, the strong consistency voting strategy has been improved and enabled by default.
+ Instead of requiring all nodes to agree, only the primary and half of the secondaries need to agree.
+- In GitLab 13.3, reference transactions are enabled by default with a primary-wins strategy.
+ This strategy causes all transactions to succeed for the primary and thus does not ensure strong consistency.
+ To enable strong consistency, disable the `:gitaly_reference_transactions_primary_wins` feature flag.
- In GitLab 13.2, enable the `:gitaly_reference_transactions` feature flag.
- In GitLab 13.1, enable the `:gitaly_reference_transactions` and `:gitaly_hooks_rpc`
feature flags.
@@ -1368,8 +1250,9 @@ affected repositories. Praefect provides tools for:
- [Automatic](#automatic-reconciliation) reconciliation, for GitLab 13.4 and later.
- [Manual](#manual-reconciliation) reconciliation, for:
- GitLab 13.3 and earlier.
- - Repositories upgraded to GitLab 13.4 and later without entries in the `repositories` table.
- A migration tool [is planned](https://gitlab.com/gitlab-org/gitaly/-/issues/3033).
+ - Repositories upgraded to GitLab 13.4 and later without entries in the `repositories` table. In
+ GitLab 13.6 and later, [a migration is run](https://gitlab.com/gitlab-org/gitaly/-/issues/3033)
+ when Praefect starts for these repositories.
These tools reconcile the outdated repositories to bring them fully up to date again.
@@ -1413,23 +1296,37 @@ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.t
- Replace the placeholder `<up-to-date-storage>` with the Gitaly storage name containing up to date repositories.
- Replace the placeholder `<outdated-storage>` with the Gitaly storage name containing outdated repositories.
-## Migrate existing repositories to Gitaly Cluster
+## Migrate to Gitaly Cluster
+
+To migrate to Gitaly Cluster, existing repositories stored outside Gitaly Cluster must be
+moved. There is no automatic migration, but the moves can be scheduled with the GitLab API.
-If your GitLab instance already has repositories on single Gitaly nodes, these aren't migrated to
-Gitaly Cluster automatically.
+GitLab repositories can be associated with projects, groups, and snippets. Each of these types
+has a separate API to schedule the respective repositories to move. To move all repositories
+on a GitLab instance, each of these types must be scheduled to move for each storage.
-Project repositories may be moved from one storage location using the [Project repository storage moves API](../../api/project_repository_storage_moves.md). Note that this API cannot move all repository types. For moving other repositories types, see:
+Each repository is made read only when the move is scheduled. The repository is not writable
+until the move has completed.
-- [Snippet repository storage moves API](../../api/snippet_repository_storage_moves.md).
-- [Group repository storage moves API](../../api/group_repository_storage_moves.md).
+After creating and configuring Gitaly Cluster:
-To move repositories to Gitaly Cluster:
+1. Ensure all storages are accessible to the GitLab instance. In this example, these are
+ `<original_storage_name>` and `<cluster_storage_name>`.
+1. [Configure repository storage weights](../repository_storage_paths.md#configure-where-new-repositories-are-stored)
+   so that the Gitaly Cluster receives all new projects. This stops new projects from being created
+ on existing Gitaly nodes while the migration is in progress.
+1. Schedule repository moves for:
+ - [Projects](#bulk-schedule-projects).
+ - [Snippets](#bulk-schedule-snippets).
+ - [Groups](#bulk-schedule-groups). **(PREMIUM SELF)**
+
+### Bulk schedule projects
1. [Schedule repository storage moves for all projects on a storage shard](../../api/project_repository_storage_moves.md#schedule-repository-storage-moves-for-all-projects-on-a-storage-shard) using the API. For example:
```shell
curl --request POST --header "Private-Token: <your_access_token>" --header "Content-Type: application/json" \
- --data '{"source_storage_name":"gitaly","destination_storage_name":"praefect"}' "https://gitlab.example.com/api/v4/project_repository_storage_moves"
+ --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' "https://gitlab.example.com/api/v4/project_repository_storage_moves"
```
1. [Query the most recent repository moves](../../api/project_repository_storage_moves.md#retrieve-all-project-repository-storage-moves)
@@ -1442,9 +1339,69 @@ To move repositories to Gitaly Cluster:
using the API to confirm that all projects have moved. No projects should be returned
with `repository_storage` field set to the old storage.
-In a similar way, you can move other repository types by using the
-[Snippet repository storage moves API](../../api/snippet_repository_storage_moves.md) **(FREE SELF)**
-or the [Groups repository storage moves API](../../api/group_repository_storage_moves.md) **(PREMIUM SELF)**.
+ ```shell
+ curl --header "Private-Token: <your_access_token>" --header "Content-Type: application/json" \
+ "https://gitlab.example.com/api/v4/projects?repository_storage=<original_storage_name>"
+ ```
+
+   Alternatively, use [the rails console](../operations/rails_console.md) to
+ confirm that all projects have moved. Run the following in the rails console:
+
+ ```ruby
+ ProjectRepository.for_repository_storage('<original_storage_name>')
+ ```
+
+1. Repeat for each storage as required.
+
+### Bulk schedule snippets
+
+1. [Schedule repository storage moves for all snippets on a storage shard](../../api/snippet_repository_storage_moves.md#schedule-repository-storage-moves-for-all-snippets-on-a-storage-shard) using the API. For example:
+
+ ```shell
+ curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Content-Type: application/json" \
+ --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' "https://gitlab.example.com/api/v4/snippet_repository_storage_moves"
+ ```
+
+1. [Query the most recent repository moves](../../api/snippet_repository_storage_moves.md#retrieve-all-snippet-repository-storage-moves)
+ using the API. The query indicates either:
+ - The moves have completed successfully. The `state` field is `finished`.
+ - The moves are in progress. Re-query the repository move until it completes successfully.
+ - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
+
+1. After the moves are complete, use [the rails console](../operations/rails_console.md) to
+ confirm that all snippets have moved. No snippets should be returned for the original
+ storage. Run the following in the rails console:
+
+ ```ruby
+ SnippetRepository.for_repository_storage('<original_storage_name>')
+ ```
+
+1. Repeat for each storage as required.
+
+### Bulk schedule groups **(PREMIUM SELF)**
+
+1. [Schedule repository storage moves for all groups on a storage shard](../../api/group_repository_storage_moves.md#schedule-repository-storage-moves-for-all-groups-on-a-storage-shard) using the API.
+
+ ```shell
+ curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Content-Type: application/json" \
+ --data '{"source_storage_name":"<original_storage_name>","destination_storage_name":"<cluster_storage_name>"}' "https://gitlab.example.com/api/v4/group_repository_storage_moves"
+ ```
+
+1. [Query the most recent repository moves](../../api/group_repository_storage_moves.md#retrieve-all-group-repository-storage-moves)
+ using the API. The query indicates either:
+ - The moves have completed successfully. The `state` field is `finished`.
+ - The moves are in progress. Re-query the repository move until it completes successfully.
+ - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
+
+1. After the moves are complete, use [the rails console](../operations/rails_console.md) to
+ confirm that all groups have moved. No groups should be returned for the original
+ storage. Run the following in the rails console:
+
+ ```ruby
+ GroupWikiRepository.for_repository_storage('<original_storage_name>')
+ ```
+
+1. Repeat for each storage as required.
## Debugging Praefect
diff --git a/doc/administration/gitaly/reference.md b/doc/administration/gitaly/reference.md
index f08b03017e4..ec5a8d47ae2 100644
--- a/doc/administration/gitaly/reference.md
+++ b/doc/administration/gitaly/reference.md
@@ -5,7 +5,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: reference
---
-# Gitaly reference
+# Gitaly reference **(FREE SELF)**
Gitaly is configured via a [TOML](https://github.com/toml-lang/toml)
configuration file. Unlike installations from source, in Omnibus GitLab, you