gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2024-01-05 12:11:29 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2024-01-05 12:11:29 +0300
commit    fc23bd54a1a49003eda83bc2331d9b8b8417a91b (patch)
tree      9c90d537ac9477541488d974b8bc9493baa91daf /doc
parent    309dbdc49533a76243003a3f662736ae0b0ec14a (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/architecture/blueprints/cloud_connector/index.md                    |  18
-rw-r--r--  doc/architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md  |  65
-rw-r--r--  doc/architecture/blueprints/runway/index.md                             |   2
-rw-r--r--  doc/development/activitypub/actors/index.md                             |   2
-rw-r--r--  doc/development/database_review.md                                      |   2
-rw-r--r--  doc/development/lfs.md                                                  |  40
-rw-r--r--  doc/development/multi_version_compatibility.md                          |  20
-rw-r--r--  doc/development/reactive_caching.md                                     |   8
-rw-r--r--  doc/development/ruby_upgrade.md                                         |  68
-rw-r--r--  doc/policy/maintenance.md                                               |  10
10 files changed, 117 insertions, 118 deletions
diff --git a/doc/architecture/blueprints/cloud_connector/index.md b/doc/architecture/blueprints/cloud_connector/index.md
index 9aef8bc7a98..50e233a6089 100644
--- a/doc/architecture/blueprints/cloud_connector/index.md
+++ b/doc/architecture/blueprints/cloud_connector/index.md
@@ -170,16 +170,16 @@ It will have the following responsibilities:
We suggest using one of the following language stacks:
1. **Go.** There is substantial organizational knowledge in writing and running
-Go systems at GitLab, and it is a great systems language that gives us efficient ways to handle requests where
-they merely need to be forwarded (request proxying) and a powerful concurrency mechanism through goroutines. This makes the
-service easier to scale and cheaper to run than Ruby or Python, which scale largely at the process level due to their use
-of Global Interpreter Locks, and use inefficient memory models especially as regards byte stream handling and manipulation.
-A drawback of Go is that resource requirements such as memory use are less predictable because Go is a garbage collected language.
+ Go systems at GitLab, and it is a great systems language that gives us efficient ways to handle requests where
+ they merely need to be forwarded (request proxying) and a powerful concurrency mechanism through goroutines. This makes the
+ service easier to scale and cheaper to run than Ruby or Python, which scale largely at the process level due to their use
+ of Global Interpreter Locks, and use inefficient memory models especially as regards byte stream handling and manipulation.
+ A drawback of Go is that resource requirements such as memory use are less predictable because Go is a garbage collected language.
1. **Rust.** We are starting to build up knowledge in Rust at GitLab. Like Go, it is a great systems language that is
-also starting to see wider adoption in the Ruby ecosystem to write CRuby extensions. A major benefit is more predictable
-resource consumption because it is not garbage collected and allows for finer control of memory use.
-It is also very fast; we found that the Rust implementation for `prometheus-client-mmap` outperformed the original
-extension written in C.
+ also starting to see wider adoption in the Ruby ecosystem to write CRuby extensions. A major benefit is more predictable
+ resource consumption because it is not garbage collected and allows for finer control of memory use.
+ It is also very fast; we found that the Rust implementation for `prometheus-client-mmap` outperformed the original
+ extension written in C.
## Alternative solutions
diff --git a/doc/architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md b/doc/architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md
index f3335a0935e..7f451b4f92b 100644
--- a/doc/architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md
+++ b/doc/architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md
@@ -43,14 +43,14 @@ configurations, especially the value of the concurrency limit, are static. There
are some drawbacks to this:
- It's tedious to maintain a sane value for the concurrency limit. Looking at
-this [production configuration](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/blob/db11ef95859e42d656bb116c817402635e946a32/roles/gprd-base-stor-gitaly-common.json),
-each limit is heavily calibrated based on clues from different sources. When the
-overall scene changes, we need to tweak them again.
+ this [production configuration](https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/blob/db11ef95859e42d656bb116c817402635e946a32/roles/gprd-base-stor-gitaly-common.json),
+ each limit is heavily calibrated based on clues from different sources. When the
+ overall situation changes, we need to tweak them again.
- Static limits are not good for all usage patterns. It's not feasible to pick a
-fit-them-all value. If the limit is too low, big users will be affected. If the
-value is too loose, the protection effect is lost.
+ fit-them-all value. If the limit is too low, big users will be affected. If the
+ value is too loose, the protection effect is lost.
- A request may be rejected even though the server is idle as the rate is not
-necessarily an indicator of the load induced on the server.
+ necessarily an indicator of the load induced on the server.
To overcome all of those drawbacks while keeping the benefits of concurrency
limiting, one promising solution is to make the concurrency limit adaptive to
@@ -78,18 +78,18 @@ occurs. There are various criteria for determining whether Gitaly is in trouble.
In this proposal, we focus on two things:
- Lack of resources, particularly memory and CPU, which are essential for
-handling Git processes.
+ handling Git processes.
- Serious latency degradation.
The proposed solution is heavily inspired by many materials about this subject
shared by folks from other companies in the industry, especially the following:
- TCP Congestion Control ([RFC-2581](https://www.rfc-editor.org/rfc/rfc2581), [RFC-5681](https://www.rfc-editor.org/rfc/rfc5681),
-[RFC-9293](https://www.rfc-editor.org/rfc/rfc9293.html#name-tcp-congestion-control), [Computer Networks: A Systems Approach](https://book.systemsapproach.org/congestion/tcpcc.html)).
+ [RFC-9293](https://www.rfc-editor.org/rfc/rfc9293.html#name-tcp-congestion-control), [Computer Networks: A Systems Approach](https://book.systemsapproach.org/congestion/tcpcc.html)).
- Netflix adaptive concurrency limit ([blog post](https://tech.olx.com/load-shedding-with-nginx-using-adaptive-concurrency-control-part-1-e59c7da6a6df)
-and [implementation](https://github.com/Netflix/concurrency-limits))
+ and [implementation](https://github.com/Netflix/concurrency-limits))
- Envoy Adaptive Concurrency
-([doc](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/adaptive_concurrency_filter#config-http-filters-adaptive-concurrency))
+ ([doc](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/adaptive_concurrency_filter#config-http-filters-adaptive-concurrency))
We cannot blindly apply a solution without careful consideration and expect it
to function flawlessly. The suggested approach considers Gitaly's specific
@@ -116,12 +116,11 @@ process functioning but quickly reducing it when an issue occurs.
During initialization, we configure the following parameters:
- `initialLimit`: Concurrency limit to start with. This value is essentially
-equal to the current static concurrency limit.
+ equal to the current static concurrency limit.
- `maxLimit`: Maximum concurrency limit.
- `minLimit`: Minimum concurrency limit so that the process is considered as
-functioning. If it's equal to 0, it rejects all upcoming requests.
-- `backoffFactor`: how fast the limit decreases when a backoff event occurs (`0
-< backoff < 1`, default to `0.75`)
+ functioning. If it's equal to 0, it rejects all upcoming requests.
+- `backoffFactor`: how fast the limit decreases when a backoff event occurs (`0 < backoff < 1`, defaults to `0.75`)
When the Gitaly process starts, it sets `limit = initialLimit`, where `limit`
is the maximum number of in-flight requests allowed at a time.
@@ -130,9 +129,9 @@ Periodically, maybe once per 15 seconds, the value of the `limit` is
re-calibrated:
- `limit = limit + 1` if there is no backoff event since the last
-calibration. The new limit cannot exceed `maxLimit`.
+ calibration. The new limit cannot exceed `maxLimit`.
- `limit = limit * backoffFactor` otherwise. The new limit cannot be lower than
-`minLimit`.
+ `minLimit`.
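
The recalibration rules above amount to an additive-increase/multiplicative-decrease (AIMD) loop. A minimal Ruby sketch, purely illustrative (Gitaly itself is written in Go, and the class and method names here are hypothetical):

```ruby
# Illustrative sketch of the periodic recalibration described above; not Gitaly's actual code.
class AdaptiveLimiter
  attr_reader :limit

  def initialize(initial_limit:, max_limit:, min_limit:, backoff_factor: 0.75)
    @limit = initial_limit
    @max_limit = max_limit
    @min_limit = min_limit
    @backoff_factor = backoff_factor
  end

  # Called periodically, for example once every 15 seconds.
  def recalibrate(backoff_event_since_last_calibration)
    @limit =
      if backoff_event_since_last_calibration
        # Multiplicative decrease, bounded below by minLimit.
        [(@limit * @backoff_factor).floor, @min_limit].max
      else
        # Additive increase, bounded above by maxLimit.
        [@limit + 1, @max_limit].min
      end
  end
end
```

Starting from `initialLimit`, the limit climbs slowly while the node stays healthy and drops quickly when a backoff event is observed.
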
When a process can no longer handle requests, or soon will not be able to, this is
referred to as a back-off event. Ideally, we would love to see the
@@ -151,16 +150,16 @@ The concurrency limit restricts the total number of in-flight requests (IFR) at
a time.
- When `IFR < limit`, Gitaly handles new requests without waiting. After an
-increment, Gitaly immediately handles the subsequent request in the queue, if
-any.
+ increment, Gitaly immediately handles the subsequent request in the queue, if
+ any.
- When `IFR = limit`, it means the limit is reached. Subsequent requests are
-queued, waiting for their turn. If the queue length reaches a configured limit,
-Gitaly rejects new requests immediately. When a request stays in the queue long
-enough, it is also automatically dropped by Gitaly.
+ queued, waiting for their turn. If the queue length reaches a configured limit,
+ Gitaly rejects new requests immediately. When a request stays in the queue long
+ enough, it is also automatically dropped by Gitaly.
- When `IFR > limit`, it is most likely a consequence of backoff events. It
-means Gitaly handles more requests than the newly appointed limits. In addition
-to queueing upcoming requests similarly to the above case, Gitaly may start
-load-shedding in-flight requests if this situation is not resolved long enough.
+ means Gitaly handles more requests than the newly appointed limits. In addition
+ to queueing upcoming requests similarly to the above case, Gitaly may start
+ load-shedding in-flight requests if this situation is not resolved long enough.
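
A rough Ruby sketch of the admission decision described in the list above, assuming a hypothetical `AdmissionQueue` (the real Gitaly implementation is in Go and differs in detail):

```ruby
# Hypothetical admission logic following the three cases above; not Gitaly's actual code.
class AdmissionQueue
  def initialize(limit:, max_queue_length:)
    @limit = limit
    @max_queue_length = max_queue_length
    @in_flight = 0
    @queue = []
  end

  def admit(request)
    if @in_flight < @limit
      handle(request)                 # IFR < limit: handle immediately
    elsif @queue.size >= @max_queue_length
      reject(request)                 # queue length limit reached: reject immediately
    else
      @queue << [request, Time.now]   # IFR >= limit: wait; dropped if queued too long
    end
  end

  private

  def handle(request)
    @in_flight += 1
    # ... process, then decrement @in_flight and admit the next queued request, if any.
  end

  def reject(request)
    # ... return a resource-exhausted error to the caller.
  end
end
```
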
At several points in time we have discussed whether we want to change queueing
semantics. Right now we admit queued processes from the head of the queue
@@ -181,16 +180,16 @@ Each system has its own set of signals, and in the case of Gitaly, there are two
aspects to consider:
- Lack of resources, particularly memory and CPU, which are essential for
-handling Git processes like `git-pack-objects(1)`. When these resources are limited
-or depleted, it doesn't make sense for Gitaly to accept more requests. Doing so
-would worsen the saturation, and Gitaly addresses this issue by applying cgroups
-extensively. The following section outlines how accounting can be carried out
-using cgroup.
+ handling Git processes like `git-pack-objects(1)`. When these resources are limited
+ or depleted, it doesn't make sense for Gitaly to accept more requests. Doing so
+ would worsen the saturation, and Gitaly addresses this issue by applying cgroups
+ extensively. The following section outlines how accounting can be carried out
+ using cgroup.
- Serious latency degradation. Gitaly offers various RPCs for different purposes
-besides serving Git data that is hard to reason about latencies. A significant
-overall latency decline is an indication that Gitaly should not accept more
-requests. Another section below describes how to assert latency degradation
-reasonably.
+ besides serving Git data, whose latencies are hard to reason about. A significant
+ overall latency decline is an indication that Gitaly should not accept more
+ requests. Another section below describes how to assert latency degradation
+ reasonably.
Apart from the above signals, we can consider adding more signals in the future
to make the system smarter. Some examples are Go garbage collector statistics,
diff --git a/doc/architecture/blueprints/runway/index.md b/doc/architecture/blueprints/runway/index.md
index becb7914feb..af7f466cdc9 100644
--- a/doc/architecture/blueprints/runway/index.md
+++ b/doc/architecture/blueprints/runway/index.md
@@ -169,7 +169,7 @@ In order for runway to function, there are two JSON/YAML documents in use. They
1. The Runway Inventory Model. This covers what service projects are currently onboarded into Runway. It's located [here](https://gitlab.com/gitlab-com/gl-infra/platform/runway/provisioner/-/blob/main/inventory.json?ref_type=heads). The schema used to validate the document is located [here](https://gitlab.com/gitlab-com/gl-infra/platform/runway/runwayctl/-/blob/main/schemas/service-inventory/v1.0.0-beta/inventory.schema.json?ref_type=heads). There is no backwards compatibility guaranteed for changes to this document schema. This is because it's only used internally by the Runway team, and there is only a single document actually being used by Runway to provision/deprovision Runway services.
1. The Runway Service Model. This is used by Runway users to pass the configuration Runway needs in order to deploy their service. It's located inside their Service project, at `.runway/runway.yml`. [An example is here](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/.runway/runway.yml?ref_type=heads). The schema used to validate the document is located [here](https://gitlab.com/gitlab-com/gl-infra/platform/runway/runwayctl/-/blob/main/schemas/service-manifest/v1.0.0-beta/manifest.schema.json?ref_type=heads). We aim to continue to make improvements and changes to the model, but all changes to the model within the same `kind/apiVersion` must be backwards compatible. In order to
-make breaking changes, a new `apiVersion` of the schema will be released. The overall goal is to copy the [Kubernetes model for making API changes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).
+ make breaking changes, a new `apiVersion` of the schema will be released. The overall goal is to copy the [Kubernetes model for making API changes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).
There are also [GitLab CI templates](https://gitlab.com/gitlab-com/gl-infra/platform/runway/ci-tasks) used by Runway users in order to automate deployments via Runway through GitLab CI. Users will be encouraged to use tools such as [Renovate bot](https://gitlab.com/gitlab-com/gl-infra/common-ci-tasks/-/blob/main/renovate-bot.md) in order to make sure the CI templates and
version of Runway they are using are up to date. The Runway team will support all released versions of Runway, except when a security issue is identified. When this happens, Runway users will be expected to update to a version of Runway that contains a fix for the issue as soon as possible (once notification is received).
diff --git a/doc/development/activitypub/actors/index.md b/doc/development/activitypub/actors/index.md
index 6d82e79b9a0..e0367d71871 100644
--- a/doc/development/activitypub/actors/index.md
+++ b/doc/development/activitypub/actors/index.md
@@ -72,7 +72,7 @@ render json: ActivityPub::ReleasesActorSerializer.new.represent(project, opts)
```
- `outbox` is the endpoint where to find the activities feed for this
-actor.
+ actor.
- `inbox` is where to POST to subscribe to the feed. Not yet implemented, so pass `nil`.
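
As a hypothetical illustration of how those two fields might be passed to the serializer above (`project_releases_outbox_url` is an assumed route helper, not necessarily the real one):

```ruby
# Only the outbox/inbox options are described in this document; everything else is assumed.
opts = {
  outbox: project_releases_outbox_url(project), # endpoint serving the activities feed
  inbox: nil                                    # subscriptions not yet implemented, so pass nil
}

render json: ActivityPub::ReleasesActorSerializer.new.represent(project, opts)
```
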
## Outbox page
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index ec4c4f9f2c5..f4fd81f85ec 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -113,7 +113,7 @@ the following preparations into account.
#### Preparation when adding migrations
- Ensure `db/structure.sql` is updated as [documented](migration_style_guide.md#schema-changes), and additionally ensure that the relevant version files under
-`db/schema_migrations` were added or removed.
+ `db/schema_migrations` were added or removed.
- Ensure that the Database Dictionary is updated as [documented](database/database_dictionary.md).
- Make migrations reversible by using the `change` method or include a `down` method when using `up`.
- Include either a rollback procedure or describe how to roll back the changes.
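
As a hedged sketch of the reversibility guideline above (table, column, and migration version are made up):

```ruby
# Reversible automatically: Rails knows how to reverse add_column.
class AddStatusToExamples < Gitlab::Database::Migration[2.2]
  def change
    add_column :examples, :status, :integer, default: 0, null: false
  end
end

# When `change` cannot infer the rollback, pair `up` with an explicit `down`.
class RemoveLegacyFlagFromExamples < Gitlab::Database::Migration[2.2]
  def up
    remove_column :examples, :legacy_flag
  end

  def down
    add_column :examples, :legacy_flag, :boolean, default: false, null: false
  end
end
```
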
diff --git a/doc/development/lfs.md b/doc/development/lfs.md
index 289c258fafe..2fb7d20cabc 100644
--- a/doc/development/lfs.md
+++ b/doc/development/lfs.md
@@ -180,32 +180,32 @@ sequenceDiagram
1. The user requests the project archive from the UI.
1. Workhorse forwards this request to Rails.
1. If the user is authorized to download the archive, Rails replies with
-an HTTP header of `Gitlab-Workhorse-Send-Data` with a base64-encoded
-JSON payload prefaced with `git-archive`. This payload includes the
-`SendArchiveRequest` binary message, which is encoded again in base64.
+ an HTTP header of `Gitlab-Workhorse-Send-Data` with a base64-encoded
+ JSON payload prefaced with `git-archive`. This payload includes the
+ `SendArchiveRequest` binary message, which is encoded again in base64.
1. Workhorse decodes the `Gitlab-Workhorse-Send-Data` payload. If the
-archive already exists in the archive cache, Workhorse sends that
-file. Otherwise, Workhorse sends the `SendArchiveRequest` to the
-appropriate Gitaly server.
+ archive already exists in the archive cache, Workhorse sends that
+ file. Otherwise, Workhorse sends the `SendArchiveRequest` to the
+ appropriate Gitaly server.
1. The Gitaly server calls `git archive <ref>` to begin generating
-the Git archive on-the-fly. If the `include_lfs_blobs` flag is enabled,
-Gitaly enables a custom LFS smudge filter via the `-c
-filter.lfs.smudge=/path/to/gitaly-lfs-smudge` Git option.
+ the Git archive on-the-fly. If the `include_lfs_blobs` flag is enabled,
+ Gitaly enables a custom LFS smudge filter via the `-c
+ filter.lfs.smudge=/path/to/gitaly-lfs-smudge` Git option.
1. When `git` identifies a possible LFS pointer using the
-`.gitattributes` file, `git` calls `gitaly-lfs-smudge` and provides the
-LFS pointer via the standard input. Gitaly provides `GL_PROJECT_PATH`
-and `GL_INTERNAL_CONFIG` as environment variables to enable lookup of
-the LFS object.
+ `.gitattributes` file, `git` calls `gitaly-lfs-smudge` and provides the
+ LFS pointer via the standard input. Gitaly provides `GL_PROJECT_PATH`
+ and `GL_INTERNAL_CONFIG` as environment variables to enable lookup of
+ the LFS object.
1. If a valid LFS pointer is decoded, `gitaly-lfs-smudge` makes an
-internal API call to Workhorse to download the LFS object from GitLab.
+ internal API call to Workhorse to download the LFS object from GitLab.
1. Workhorse forwards this request to Rails. If the LFS object exists
-and is associated with the project, Rails sends `ArchivePath` either
-with a path where the LFS object resides (for local disk) or a
-pre-signed URL (when object storage is enabled) via the
-`Gitlab-Workhorse-Send-Data` HTTP header with a payload prefaced with
-`send-url`.
+ and is associated with the project, Rails sends `ArchivePath` either
+ with a path where the LFS object resides (for local disk) or a
+ pre-signed URL (when object storage is enabled) via the
+ `Gitlab-Workhorse-Send-Data` HTTP header with a payload prefaced with
+ `send-url`.
1. Workhorse retrieves the file and sends it to the `gitaly-lfs-smudge`
-process, which writes the contents to the standard output.
+ process, which writes the contents to the standard output.
1. `git` reads this output and sends it back to the Gitaly process.
1. Gitaly sends the data back to Rails.
1. The archive data is sent back to the client.
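
To make the header format in steps 3 and 4 more concrete, here is a simplified Ruby sketch of how a `Gitlab-Workhorse-Send-Data` value could be assembled. The JSON field names and the Gitaly address are illustrative, not the exact GitLab implementation:

```ruby
require 'base64'
require 'json'

# archive_request_bytes: the serialized SendArchiveRequest message (binary).
def workhorse_send_data(archive_request_bytes)
  payload = {
    'GitalyServer'       => { 'address' => 'unix:/path/to/gitaly.socket' }, # illustrative
    'SendArchiveRequest' => Base64.encode64(archive_request_bytes)          # encoded again in base64
  }

  # Keyword prefix, then the base64-encoded JSON payload.
  "git-archive:#{Base64.urlsafe_encode64(payload.to_json)}"
end

# Rails would then set something like:
#   response.headers['Gitlab-Workhorse-Send-Data'] = workhorse_send_data(request_bytes)
```
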
diff --git a/doc/development/multi_version_compatibility.md b/doc/development/multi_version_compatibility.md
index 40a30aa4926..716e42655d0 100644
--- a/doc/development/multi_version_compatibility.md
+++ b/doc/development/multi_version_compatibility.md
@@ -234,7 +234,7 @@ And these deployments align perfectly with application changes.
1. At the beginning we have `Version N` on `Schema A`.
1. Then we have a _long_ transition period with both `Version N` and `Version N+1` on `Schema B`.
1. When we only have `Version N+1` on `Schema B` the schema changes again.
-1. Finally we have `Version N+1` on `Schema C`.
+1. Finally we have `Version N+1` on `Schema C`.
With all those details in mind, let's imagine we need to replace a query, and this query has an index to support it.
@@ -310,10 +310,10 @@ variable `CI_NODE_TOTAL` being an integer failed. This was caused because after
1. New code: Sidekiq created a new pipeline and new build. `build.options[:parallel]` is a `Hash`.
1. Old code: Runners requested a job from an API node that is running the previous version.
1. As a result, the [new code](https://gitlab.com/gitlab-org/gitlab/-/blob/42b82a9a3ac5a96f9152aad6cbc583c42b9fb082/app/models/concerns/ci/contextable.rb#L104)
-was not run on the API server. The runner's request failed because the
-older API server tried return the `CI_NODE_TOTAL` CI/CD variable, but
-instead of sending an integer value (for example, 9), it sent a serialized
-`Hash` value (`{:number=>9, :total=>9}`).
+ was not run on the API server. The runner's request failed because the
+ older API server tried to return the `CI_NODE_TOTAL` CI/CD variable, but
+ instead of sending an integer value (for example, 9), it sent a serialized
+ `Hash` value (`{:number=>9, :total=>9}`).
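
A schematic Ruby illustration of that mismatch (simplified; the real logic lives in the `Ci::Contextable` code linked above):

```ruby
# New code (Sidekiq): the parallel setting is now stored as a Hash.
build_options = { parallel: { number: 9, total: 9 } }

# Old code (API node): still assumes an Integer and exposes it directly as CI_NODE_TOTAL.
ci_node_total = build_options[:parallel]
ci_node_total.to_s # => "{:number=>9, :total=>9}" -- the runner expected "9" and failed
```
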
If you look at the [deployment pipeline](https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/202212),
you see all nodes were updated in parallel:
@@ -322,11 +322,11 @@ you see all nodes were updated in parallel:
However, even though the updates started around the same time, the completion time varied significantly:
-|Node type|Duration (min)|
-|---------|--------------|
-|API |54 |
-|Sidekiq |21 |
-|K8S |8 |
+| Node type | Duration (min) |
+|-----------|----------------|
+| API | 54 |
+| Sidekiq | 21 |
+| K8S | 8 |
Builds that used the `parallel` keyword and depended on `CI_NODE_TOTAL`
and `CI_NODE_INDEX` would fail during the time after Sidekiq was
diff --git a/doc/development/reactive_caching.md b/doc/development/reactive_caching.md
index 00110d21dc0..dfe1e2d1b05 100644
--- a/doc/development/reactive_caching.md
+++ b/doc/development/reactive_caching.md
@@ -255,14 +255,14 @@ self.reactive_cache_hard_limit = 5.megabytes
#### `self.reactive_cache_work_type`
- This is the type of work performed by the `calculate_reactive_cache` method. Based on this attribute,
-it's able to pick the right worker to process the caching job. Make sure to
-set it as `:external_dependency` if the work performs any external request
-(for example, Kubernetes, Sentry); otherwise set it to `:no_dependency`.
+ it's able to pick the right worker to process the caching job. Make sure to
+ set it as `:external_dependency` if the work performs any external request
+ (for example, Kubernetes, Sentry); otherwise set it to `:no_dependency`.
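
For instance, a model whose cache refresh calls out to an external system would declare the external-dependency work type. The class and the Kubernetes call below are illustrative, not the exact GitLab code:

```ruby
class Environment < ApplicationRecord
  include ReactiveCaching

  # The cache refresh performs an external request (Kubernetes), so pick the
  # worker pool for external dependencies.
  self.reactive_cache_work_type = :external_dependency

  def calculate_reactive_cache
    # `kubernetes_client` is a placeholder for the external call.
    { pods: kubernetes_client.get_pods }
  end
end
```
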
#### `self.reactive_cache_worker_finder`
- This is the method used by the background worker to find or generate the object on
-which `calculate_reactive_cache` can be called.
+ which `calculate_reactive_cache` can be called.
- By default it uses the model primary key to find the object:
```ruby
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
index 110bc6076b0..d1dd6d793d4 100644
--- a/doc/development/ruby_upgrade.md
+++ b/doc/development/ruby_upgrade.md
@@ -50,31 +50,31 @@ To help you estimate the scope of future upgrades, see the efforts required for
Before any upgrade, consider all audiences and targets, ordered by how immediately they are affected by Ruby upgrades:
1. **Developers.** We have many contributors to GitLab and related projects both inside and outside the company. Changing files such as `.ruby-version` affects everyone using tooling that interprets these files.
-The developers are affected as soon as they pull from the repository containing the merged changes.
+ The developers are affected as soon as they pull from the repository containing the merged changes.
1. **GitLab CI/CD.** We heavily lean on CI/CD for code integration and testing. CI/CD jobs do not interpret files such as `.ruby-version`.
-Instead, they use the Ruby installed in the Docker container they execute in, which is defined in `.gitlab-ci.yml`.
-The container images used in these jobs are maintained in the [`gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images) repository.
-When we merge an update to an image, CI/CD jobs are affected as soon as the [image is built](https://gitlab.com/gitlab-org/gitlab-build-images/#pushing-a-rebuild-image).
+ Instead, they use the Ruby installed in the Docker container they execute in, which is defined in `.gitlab-ci.yml`.
+ The container images used in these jobs are maintained in the [`gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images) repository.
+ When we merge an update to an image, CI/CD jobs are affected as soon as the [image is built](https://gitlab.com/gitlab-org/gitlab-build-images/#pushing-a-rebuild-image).
1. **GitLab SaaS**. GitLab.com is deployed from customized Helm charts that use Docker images from [Cloud Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG).
-Just like CI/CD, `.ruby-version` is meaningless in this environment. Instead, those Docker images must be patched to upgrade Ruby.
-GitLab SaaS is affected with the next deployment.
+ Just like CI/CD, `.ruby-version` is meaningless in this environment. Instead, those Docker images must be patched to upgrade Ruby.
+ GitLab SaaS is affected with the next deployment.
1. **Self-managed GitLab.** Customers installing GitLab via [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab) use none of the above.
-Instead, their Ruby version is defined by the [Ruby software bundle](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/software/ruby.rb) in Omnibus.
-Self-managed customers are affected as soon as they upgrade to the release containing this change.
+ Instead, their Ruby version is defined by the [Ruby software bundle](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/software/ruby.rb) in Omnibus.
+ Self-managed customers are affected as soon as they upgrade to the release containing this change.
## Ruby upgrade approach
Timing all steps in a Ruby upgrade correctly is critical. As a general guideline, consider the following:
- For smaller upgrades where production behavior is unlikely to change, aim to keep the version gap between
-repositories and production minimal. Coordinate with stakeholders to merge all changes closely together
-(within a day or two) to avoid drift. In this scenario the likely order is to upgrade developer tooling and
-environments first, production second.
+ repositories and production minimal. Coordinate with stakeholders to merge all changes closely together
+ (within a day or two) to avoid drift. In this scenario the likely order is to upgrade developer tooling and
+ environments first, production second.
- For larger changes, the risk of going to production with a new Ruby is significant. In this case, try to get into a
-position where all known incompatibilities with the new Ruby version are already fixed, then work
-with production engineers to deploy the new Ruby to a subset of the GitLab production fleet. In this scenario
-the likely order is to update production first, developer tooling and environments second. This makes rollbacks
-easier in case of critical regressions in production.
+ position where all known incompatibilities with the new Ruby version are already fixed, then work
+ with production engineers to deploy the new Ruby to a subset of the GitLab production fleet. In this scenario
+ the likely order is to update production first, developer tooling and environments second. This makes rollbacks
+ easier in case of critical regressions in production.
Either way, we have found from past experience that the following approach works well, with some steps likely only
necessary for minor and major upgrades. Note that some of these steps can happen in parallel or may have their
@@ -110,17 +110,17 @@ for a smoother transition by supporting both old and new Ruby versions for a per
There are two places that require changes:
1. **[GitLab Build Images](https://gitlab.com/gitlab-org/gitlab-build-images).** These are Docker images
-we use for runners and other Docker-based pre-production environments. The kind of change necessary
-depends on the scope.
+ we use for runners and other Docker-based pre-production environments. The kind of change necessary
+ depends on the scope.
- For [patch level updates](https://gitlab.com/gitlab-org/gitlab-build-images/-/merge_requests/418), it should suffice to increment the patch level of `RUBY_VERSION`.
-All projects building against the same minor release automatically download the new patch release.
+ All projects building against the same minor release automatically download the new patch release.
- For [major and minor updates](https://gitlab.com/gitlab-org/gitlab-build-images/-/merge_requests/320), create a new set of Docker images that can be used side-by-side with existing images during the upgrade process. **Important:** Make sure to copy over all Ruby patch files
-in the `/patches` directory to a new folder matching the Ruby version you upgrade to, or they aren't applied.
+ in the `/patches` directory to a new folder matching the Ruby version you upgrade to, or they aren't applied.
1. **[GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit).**
-Update GDK to add the new Ruby as an additional option for
-developers to choose from. This typically only requires it to be appended to `.tool-versions` so `asdf`
-users will benefit from this. Other users will have to install it manually
-([example](https://gitlab.com/gitlab-org/gitlab-development-kit/-/merge_requests/2136).)
+ Update GDK to add the new Ruby as an additional option for
+ developers to choose from. This typically only requires it to be appended to `.tool-versions` so `asdf`
+ users will benefit from this. Other users will have to install it manually
+ ([example](https://gitlab.com/gitlab-org/gitlab-development-kit/-/merge_requests/2136).)
For larger version upgrades, consider working with [Quality Engineering](https://about.gitlab.com/handbook/engineering/quality/)
to identify and set up a test plan.
@@ -265,16 +265,16 @@ For GitLab SaaS, GitLab team members can inspect these log events in Kibana
During the upgrade process, consider the following recommendations:
- **Front-load as many changes as possible.** Especially for minor and major releases, it is likely that application
-code will break or change. Any changes that are backward compatible should be merged into the main branch and
-released independently ahead of the Ruby version upgrade. This ensures that we move in small increments and
-get feedback from production environments early.
+ code will break or change. Any changes that are backward compatible should be merged into the main branch and
+ released independently ahead of the Ruby version upgrade. This ensures that we move in small increments and
+ get feedback from production environments early.
- **Create an experimental branch for larger updates.** We generally try to avoid long-running topic branches,
-but for purposes of feedback and experimentation, it can be useful to have such a branch to get regular
-feedback from CI/CD when running a newer Ruby. This can be helpful when first assessing what problems
-we might run into, as [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50640) demonstrates.
-These experimental branches are not intended to be merged; they can be closed once all required changes have been broken out
-and merged back independently.
+ but for purposes of feedback and experimentation, it can be useful to have such a branch to get regular
+ feedback from CI/CD when running a newer Ruby. This can be helpful when first assessing what problems
+ we might run into, as [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50640) demonstrates.
+ These experimental branches are not intended to be merged; they can be closed once all required changes have been broken out
+ and merged back independently.
- **Give yourself enough time to fix problems ahead of a milestone release.** GitLab moves fast.
-As a Ruby upgrade requires many MRs to be sent and reviewed, make sure all changes are merged at least a week
-before release day. This gives us extra time to act if something breaks. If in doubt, it is better to
-postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/development/principles/#prioritizing-technical-decisions).
+ As a Ruby upgrade requires many MRs to be sent and reviewed, make sure all changes are merged at least a week
+ before release day. This gives us extra time to act if something breaks. If in doubt, it is better to
+ postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/development/principles/#prioritizing-technical-decisions).
diff --git a/doc/policy/maintenance.md b/doc/policy/maintenance.md
index 8e4541a58a7..0790cf07ed1 100644
--- a/doc/policy/maintenance.md
+++ b/doc/policy/maintenance.md
@@ -72,14 +72,14 @@ GitLab.
These two policies are in place because:
1. GitLab has Community and Enterprise distributions, doubling the amount of work
-necessary to test/release the software.
+ necessary to test/release the software.
1. Backporting to more than one release creates a high development, quality assurance,
-and support cost.
+ and support cost.
1. Supporting parallel versions discourages incremental upgrades which over time accumulate in
-complexity and create upgrade challenges for all users. GitLab has a dedicated team ensuring that
-incremental upgrades (and installations) are as simple as possible.
+ complexity and create upgrade challenges for all users. GitLab has a dedicated team ensuring that
+ incremental upgrades (and installations) are as simple as possible.
1. The number of changes created in the GitLab application is high, which contributes to backporting complexity to older releases. In several cases, backporting has to go through the same
-review process a new change goes through.
+ review process a new change goes through.
1. Ensuring that tests pass on the older release is a considerable challenge in some cases, and as such is very time-consuming.
Including new features in a patch release is not possible as that would break [Semantic Versioning](https://semver.org/).