Diffstat (limited to 'doc/development/merge_request_concepts')
-rw-r--r--  doc/development/merge_request_concepts/diffs/development.md   188
-rw-r--r--  doc/development/merge_request_concepts/diffs/index.md         199
-rw-r--r--  doc/development/merge_request_concepts/img/merge_ref_head_options_v13_6.png   bin 0 -> 21605 bytes
-rw-r--r--  doc/development/merge_request_concepts/performance.md         565
4 files changed, 952 insertions, 0 deletions
diff --git a/doc/development/merge_request_concepts/diffs/development.md b/doc/development/merge_request_concepts/diffs/development.md
new file mode 100644
index 00000000000..1c22eff34db
--- /dev/null
+++ b/doc/development/merge_request_concepts/diffs/development.md
@@ -0,0 +1,188 @@
+---
+stage: Create
+group: Code Review
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Merge request diffs development guide
+
+This document explains the backend design and flow of merge request diffs.
+It should help contributors:
+
+- Understand the code design.
+- Identify areas for improvement through contribution.
+
+This document intentionally avoids deep implementation details, as they
+can change often; the code explains those details best. The components
+mentioned here are the major parts of the application involved in how merge
+request diffs are generated, stored, and returned to users.
+
+NOTE:
+This page is a living document. Update it accordingly when the parts
+of the codebase touched in this document are changed or removed, or when new components
+are added.
+
+## Data model
+
+Four main ActiveRecord models represent what we collectively refer to
+as _diffs._ These database-backed records replicate data contained in the
+project's Git repository, and are in part a cache against excessive access requests
+to [Gitaly](../../gitaly.md). Additionally, they provide a logical place for:
+
+- Calculated and retrieved metadata about the pieces of the diff.
+- General class- and instance-based logic.
+
+```mermaid
+erDiagram
+ MergeRequest ||--|{ MergeRequestDiff: ""
+ MergeRequestDiff |{--|{ MergeRequestDiffCommit: ""
+ MergeRequestDiff |{--|| MergeRequestDiffDetail: ""
+ MergeRequestDiff |{--|{ MergeRequestDiffFile: ""
+ MergeRequestDiffCommit |{--|| MergeRequestDiffCommitUser: ""
+```
+
+### `MergeRequestDiff`
+
+`MergeRequestDiff` is defined in `app/models/merge_request_diff.rb`. This
+class holds metadata and context related to the diff resulting from a set of
+commits. It defines methods that are the primary means for interacting with diff
+contents, individual commits, and the files containing changes.
+
+```ruby
+#<MergeRequestDiff:0x00007fd1ed63b4d0
+ id: 28,
+ state: "collected",
+ merge_request_id: 28,
+ created_at: Tue, 06 Sep 2022 18:56:02.509469000 UTC +00:00,
+ updated_at: Tue, 06 Sep 2022 18:56:02.754201000 UTC +00:00,
+ base_commit_sha: "ae73cb07c9eeaf35924a10f713b364d32b2dd34f",
+ real_size: "9",
+ head_commit_sha: "bb5206fee213d983da88c47f9cf4cc6caf9c66dc",
+ start_commit_sha: "0b4bc9a49b562e85de7cc9e834518ea6828729b9",
+ commits_count: 6,
+ external_diff: "diff-28",
+ external_diff_store: 1,
+ stored_externally: nil,
+ files_count: 9,
+ sorted: true,
+ diff_type: "regular",
+ verification_checksum: nil>
+```
+
+Diff content is usually accessed through this class. Logic is often applied
+to diff, file, and commit content before it is returned to a user.
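+
+As a minimal sketch of console exploration (the record ID matches the example
+above; `commits` and `diffs` reflect this model's interface as of this writing):
+
+```ruby
+diff = MergeRequestDiff.find(28)
+
+diff.commits.count    # => 6, matching commits_count above
+diff.diffs.diff_files # => the changed files as diff file objects
+```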
+
+### `MergeRequestDiffCommit`
+
+`MergeRequestDiffCommit` is defined in `app/models/merge_request_diff_commit.rb`.
+This class represents a single commit within its corresponding `MergeRequestDiff`,
+and holds header information about the commit.
+
+```ruby
+#<MergeRequestDiffCommit:0x00007fd1dfc6c4c0
+ authored_date: Wed, 06 Aug 2022 06:35:52.000000000 UTC +00:00,
+ committed_date: Wed, 06 Aug 2022 06:35:52.000000000 UTC +00:00,
+ merge_request_diff_id: 28,
+ relative_order: 0,
+ sha: "bb5206fee213d983da88c47f9cf4cc6caf9c66dc",
+ message: "Feature conflcit added\n\nSigned-off-by: Sample User <sample.user@example.com>\n",
+ trailers: {},
+ commit_author_id: 19,
+ committer_id: 19>
+```
+
+Every `MergeRequestDiffCommit` `:belongs_to` (in ActiveRecord parlance) two
+`MergeRequest::DiffCommitUser` records: the `:commit_author` and the
+`:committer`, which can be distinct individuals.
+
+### `MergeRequest::DiffCommitUser`
+
+`MergeRequest::DiffCommitUser` is defined in `app/models/merge_request/diff_commit_user.rb`.
+It captures the `name` and `email` of a given commit, but contains no connection
+itself to any `User` records.
+
+```ruby
+#<MergeRequest::DiffCommitUser:0x00007fd1dff7c930
+ id: 19,
+ name: "Sample User",
+ email: "sample.user@example.com">
+```
+
+### `MergeRequestDiffFile`
+
+`MergeRequestDiffFile` is defined in `app/models/merge_request_diff_file.rb`.
+A record of this class represents the diff of a single file contained in the
+`MergeRequestDiff`. It holds both meta and specific information about the file's
+relationship to the change, such as:
+
+- Whether it is added or renamed.
+- Its ordering in the diff.
+- The raw diff output itself.
+
+```ruby
+#<MergeRequestDiffFile:0x00007fd1ef7c9048
+ merge_request_diff_id: 28,
+ relative_order: 0,
+ new_file: true,
+ renamed_file: false,
+ deleted_file: false,
+ too_large: false,
+ a_mode: "0",
+ b_mode: "100644",
+ new_path: "files/ruby/feature.rb",
+ old_path: "files/ruby/feature.rb",
+ diff:
+  "@@ -0,0 +1,4 @@\n+# This file was changed in feature branch\n+# We put different code here to make merge conflict\n+class Conflict\n+end\n",
+ binary: false,
+ external_diff_offset: nil,
+ external_diff_size: nil>
+```
+
+### `MergeRequestDiffDetail`
+
+`MergeRequestDiffDetail` is defined in `app/models/merge_request_diff_detail.rb`.
+This class provides verification information for Geo replication, but otherwise
+is not used for user-facing diffs.
+
+## Flow
+
+These flowcharts should help explain the flow from the controllers down to the
+models for different features. This page does not document every option for
+accessing and working with diffs; it focuses solely on the most common.
+
+### `batch_diffs.json`
+
+The most common avenue for viewing diffs is the **Changes**
+tab in the top navigation bar of merge request pages in the GitLab UI. When selected, the
+diffs themselves are loaded via a paginated request to `/-/merge_requests/:id/batch_diffs.json`,
+which is served by [`Projects::MergeRequests::DiffsController#diffs_batch`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/merge_requests/diffs_controller.rb):
+
+<!-- Don't delete the &nbsp; characters below. Mermaid returns a syntax error if they aren't included.-->
+
+```mermaid
+sequenceDiagram
+ Note over .#diffs_batch: Preload diffs and ivars
+ .#diffs_batch->>+.#define_diff_vars: &nbsp;
+ .#define_diff_vars ->>+ @merge_request: @merge_request_diffs =
+ Note right of @merge_request: An ordered collection of all diffs in MR
+ @merge_request-->>-.#define_diff_vars: &nbsp;
+ .#define_diff_vars ->>+ @merge_request: @merge_request_diff =
+ Note right of @merge_request: Most recent merge_request_diff (or commit)
+ @merge_request-->>-.#define_diff_vars: &nbsp;
+ .#define_diff_vars ->>+ .#define_diff_vars: @compare =
+ Note right of .#define_diff_vars: param-filtered merge_request_diff(s)
+ .#define_diff_vars -->>- .#diffs_batch: &nbsp;
+ Note over .#diffs_batch: Preloading complete
+ .#diffs_batch->>+@merge_request: Calculate unfoldable diff lines
+ Note right of @merge_request: note_positions_for_paths.unfoldable
+ @merge_request-->>-.#diffs_batch: &nbsp;
+ Note over .#diffs_batch: Build options hash
+ Note over .#diffs_batch: Build cache_context
+ Note over .#diffs_batch: Unfold files in diff
+ .#diffs_batch->>+Gitlab_Diff_FileCollection_MergeRequestDiffBase: diffs.write_diff
+ Gitlab_Diff_FileCollection_MergeRequestDiffBase->>+Gitlab_Diff_HighlightCache: Highlight diff
+ Gitlab_Diff_HighlightCache -->>-Gitlab_Diff_FileCollection_MergeRequestDiffBase: Return highlighted diff
+ Note over Gitlab_Diff_FileCollection_MergeRequestDiffBase: Cache diff
+ Gitlab_Diff_FileCollection_MergeRequestDiffBase-->>-.#diffs_batch: &nbsp;
+ Note over .#diffs_batch: render JSON
+```
diff --git a/doc/development/merge_request_concepts/diffs/index.md b/doc/development/merge_request_concepts/diffs/index.md
new file mode 100644
index 00000000000..8ef3f01aba9
--- /dev/null
+++ b/doc/development/merge_request_concepts/diffs/index.md
@@ -0,0 +1,199 @@
+---
+stage: Create
+group: Code Review
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Working with diffs
+
+We rely on different sources to present diffs. These include:
+
+- Gitaly service
+- Database (through `merge_request_diff_files`)
+- Redis (cached highlighted diffs)
+
+## Deep Dive
+
+<!-- vale gitlab.Spelling = NO -->
+
+In January 2019, Oswaldo Ferreira hosted a Deep Dive (GitLab team members only:
+`https://gitlab.com/gitlab-org/create-stage/issues/1`) on GitLab Diffs and Commenting on Diffs
+functionality to share domain-specific knowledge with anyone who may work in this part of the
+codebase in the future:
+
+<!-- vale gitlab.Spelling = YES -->
+
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+ [Recording on YouTube](https://www.youtube.com/watch?v=K6G3gMcFyek)
+- Slides on [Google Slides](https://docs.google.com/presentation/d/1bGutFH2AT3bxOPZuLMGl1ANWHqFnrxwQwjiwAZkF-TU/edit)
+- [PDF slides](https://gitlab.com/gitlab-org/create-stage/uploads/b5ad2f336e0afcfe0f99db0af0ccc71a/)
+
+Everything covered in this deep dive was accurate as of GitLab 11.7, and while specific details may
+have changed since then, it should still serve as a good introduction.
+
+## Architecture overview
+
+### Merge request diffs
+
+When refreshing a merge request (pushing to the source branch, force-pushing to the target branch, or when the target branch now contains any commits from the MR),
+we fetch the comparison information using `Gitlab::Git::Compare`, which fetches `base` and `head` data using Gitaly, and the diff between them through
+`Gitlab::Git::Diff.between`.
+The diff fetching process _limits_ single file diff sizes and the overall size of the whole diff through a series of constant values. Raw diff files are
+then persisted in the `merge_request_diff_files` table.
+
+Even though diffs larger than 10% of the value of `ApplicationSettings#diff_max_patch_bytes` are collapsed,
+we still keep them in PostgreSQL. However, diff files larger than defined _safety limits_
+(see the [Diff limits section](#diff-limits)) are _not_ persisted in the database.
+
+To present diff information on the merge request diffs page, we:
+
+1. Fetch all diff files from the `merge_request_diff_files` database table.
+1. Fetch the _old_ and _new_ file blobs in batch to:
+   - Highlight old and new file content.
+   - Know which viewer it should use for each file (text, image, deleted, and so on).
+   - Know if the file content changed.
+   - Know if it was stored externally.
+   - Know if it had storage errors.
+1. If the diff file is cacheable (text-based), cache it in Redis
+   using `Gitlab::Diff::FileCollection::MergeRequestDiff`.
+
+### Note diffs
+
+When commenting on a diff (any comparison), we persist a truncated diff version
+in `NoteDiffFile` (which is associated with the actual `DiffNote`). So instead
+of hitting the repository every time we need the diff of the file, we follow the
+fallback order sketched below:
+
+1. Check whether we have the `NoteDiffFile#diff` persisted and use it.
+1. Otherwise, if it's a current MR revision, use the persisted
+   `MergeRequestDiffFile#diff`.
+1. As a last resort, go to the repository and fetch the diff.
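+
+A minimal sketch of that fallback chain (the method and helper names here are
+hypothetical, for illustration only; the real lookup lives in the note and diff
+position code):
+
+```ruby
+# Hypothetical illustration of the fallback order described above.
+def diff_for(note)
+  note.note_diff_file&.diff ||                 # 1. persisted truncated diff
+    current_revision_diff_file(note)&.diff ||  # 2. current MergeRequestDiffFile
+    fetch_diff_from_repository(note)           # 3. fall back to the repository
+end
+```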
+
+## Diff limits
+
+As explained above, we limit single diff files and the size of the whole diff. There are scenarios where we collapse the diff file,
+and cases where the diff file is not presented at all, with the user guided to the Blob view instead.
+
+### Diff collection limits
+
+Limits that act on the entire collection of diff files. The number of files, number of lines, and file sizes are considered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_files] = 100
+```
+
+File diffs are collapsed (but are expandable) if 100 files have already been rendered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:safe_max_lines] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
+```
+
+File diffs are collapsed (but are expandable) if 5,000 lines have already been rendered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:safe_max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] * 5.kilobytes = 500.kilobytes
+```
+
+File diffs are collapsed (but are expandable) if 500 kilobytes have already been rendered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:max_files] = Commit::DIFF_HARD_LIMIT_FILES = 1000
+```
+
+No more files are rendered at all if 1000 files have already been rendered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:max_lines] = Commit::DIFF_HARD_LIMIT_LINES = 50000
+```
+
+No more files are rendered at all if 50,000 lines have already been rendered.
+
+```ruby
+Gitlab::Git::DiffCollection.collection_limits[:max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:max_files] * 5.kilobytes = 5000.kilobytes
+```
+
+No more files are rendered at all if 5 megabytes have already been rendered.
+
+All collection limit parameters are sent to and applied by Gitaly. That is, after a limit is surpassed,
+Gitaly only returns the safe amount of data to be persisted in `merge_request_diff_files`.
+
+### Individual diff file limits
+
+Limits that act on each diff file of a collection. The number of files, number of lines, and file sizes are considered.
+
+#### Expandable patches (collapsed)
+
+Diff patches are collapsed when they surpass 10% of the value set in `ApplicationSettings#diff_max_patch_bytes`.
+That is, the threshold is equivalent to 10 KB if the maximum allowed value is 100 KB.
+The diff is persisted and expandable if the patch size doesn't
+surpass `ApplicationSettings#diff_max_patch_bytes`.
+
+Although this nomenclature (collapsing) is also used in Gitaly, this limit is only used in GitLab (hardcoded, not sent to Gitaly).
+Gitaly only returns `Diff.Collapsed` (RPC) when surpassing collection limits.
+
+#### Not expandable patches (too large)
+
+The patch is not rendered if it's larger than `ApplicationSettings#diff_max_patch_bytes`.
+Users see a `Changes are too large to be shown.` message and a button to view only that file in that commit.
+
+```ruby
+Commit::DIFF_SAFE_LINES = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
+```
+
+The file diff is suppressed (technically different from collapsed, but it behaves the same and is expandable) if it has more than 5,000 lines.
+
+This limit is hardcoded and only applied in GitLab.
+
+## Viewers
+
+Diff viewers, which can be found in `models/diff_viewer/*`, are classes used to map metadata about each type of diff file. They hold information
+such as whether the file is binary, which partial should be used to render it, and which file extensions the class accounts for.
+
+`DiffViewer::Base` validates the _blobs_ (old and new versions) content, extension, and file type to check whether it can be rendered.
+
+## Merge request diffs against the `HEAD` of the target branch
+
+Historically, merge request diffs have been calculated by `git diff target...source`, which compares the
+`HEAD` of the source branch with the merge base (or a common ancestor) of the target and source branches.
+This solution works well until the target branch starts containing some of the
+changes introduced by the source branch. Consider the following case, in which the source branch
+is `feature_a` and the target is `main`:
+
+1. Checkout a new branch `feature_a` from `main` and remove `file_a` and `file_b` in it.
+1. Add a commit that removes `file_a` to `main`.
+
+The merge request diff still contains the `file_a` removal, while the actual diff compared to
+`main`'s `HEAD` contains only the `file_b` removal. A diff with such redundant
+changes is harder to review.
+
+To display an up-to-date diff, in GitLab 12.9 we
+[introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27008) merge request
+diffs compared against `HEAD` of the target branch: the
+target branch is artificially merged into the source branch, then the resulting
+merge ref is compared to the source branch to calculate an accurate
+diff.
+
+Until we complete the epics ["use merge refs for diffs"](https://gitlab.com/groups/gitlab-org/-/epics/854)
+and ["merge conflicts in diffs"](https://gitlab.com/groups/gitlab-org/-/epics/4893),
+both options `main (base)` and `main (HEAD)` are available to be displayed in merge requests:
+
+![Merge ref head options](../img/merge_ref_head_options_v13_6.png)
+
+The `main (HEAD)` option is meant to replace `main (base)` in the future.
+
+To support comments for both options, diff note positions are stored for
+both `main (base)` and `main (HEAD)` versions ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/198457) in GitLab 12.10).
+The position for the `main (base)` version is stored in the `Note#position` and
+`Note#original_position` columns; for the `main (HEAD)` version, `DiffNotePosition`
+has been introduced.
+
+One of the key challenges when working on merge ref diffs is merge
+conflicts. If the target and source branches contain a merge conflict, the branches
+cannot be automatically merged. The
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [recording on YouTube](https://www.youtube.com/watch?v=GFXIFA4ZuZw&feature=youtu.be&ab_channel=GitLabUnfiltered)
+is a quick introduction to the problem and the motivation behind the [epic](https://gitlab.com/groups/gitlab-org/-/epics/854).
+
+In GitLab 13.5, a solution for the both-modified merge
+conflict was
+[introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/232484). However,
+there are more classes of merge conflicts that are to be
+[addressed](https://gitlab.com/groups/gitlab-org/-/epics/4893) in the future.
diff --git a/doc/development/merge_request_concepts/img/merge_ref_head_options_v13_6.png b/doc/development/merge_request_concepts/img/merge_ref_head_options_v13_6.png
new file mode 100644
index 00000000000..3134092cc92
--- /dev/null
+++ b/doc/development/merge_request_concepts/img/merge_ref_head_options_v13_6.png
Binary files differ
diff --git a/doc/development/merge_request_concepts/performance.md b/doc/development/merge_request_concepts/performance.md
new file mode 100644
index 00000000000..c1bdd45891d
--- /dev/null
+++ b/doc/development/merge_request_concepts/performance.md
@@ -0,0 +1,565 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Merge Request Performance Guidelines
+
+Each newly introduced merge request **should be performant by default**.
+
+To ensure a merge request does not negatively impact the performance of GitLab,
+_every_ merge request **should** adhere to the guidelines outlined in this
+document. There are no exceptions to this rule unless specifically discussed
+with and agreed upon by backend maintainers and performance specialists.
+
+It's also highly recommended that you read the following guides:
+
+- [Performance Guidelines](../performance.md)
+- [Avoiding downtime in migrations](../database/avoiding_downtime_in_migrations.md)
+
+## Definition
+
+The term `SHOULD` per the [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) means:
+
+> This word, or the adjective "RECOMMENDED", mean that there
+> may exist valid reasons in particular circumstances to ignore a
+> particular item, but the full implications must be understood and
+> carefully weighed before choosing a different course.
+
+Ideally, each of these tradeoffs should be documented
+in separate issues, labeled accordingly, and linked
+to the original issue and epic.
+
+## Impact Analysis
+
+**Summary:** think about the impact your merge request may have on performance
+and those maintaining a GitLab setup.
+
+Any change submitted can have an impact not only on the application itself but
+also on those maintaining it and those keeping it up and running (for example, production
+engineers). As a result, you should think carefully about the impact of your
+merge request not only on the application but also on the people keeping it up
+and running.
+
+Can the queries used potentially take down any critical services and result in
+engineers being woken up in the night? Can a malicious user abuse the code to
+take down a GitLab instance? Do your changes make loading a certain page
+slower? Does execution time grow exponentially given enough load or data in the
+database?
+
+These are all questions one should ask themselves before submitting a merge
+request. It may sometimes be difficult to assess the impact, in which case you
+should ask a performance specialist to review your code. See the "Reviewing"
+section below for more information.
+
+## Performance Review
+
+**Summary:** ask performance specialists to review your code if you're not sure
+about the impact.
+
+Sometimes it's hard to assess the impact of a merge request. In this case you
+should ask one of the merge request reviewers to review your changes. You can
+find a list of these reviewers at <https://about.gitlab.com/company/team/>. A reviewer
+in turn can request a performance specialist to review the changes.
+
+## Think outside of the box
+
+Everyone has their own perception of how a new feature is used.
+Always consider how users might actually use the feature instead. Users often
+exercise our features in very unconventional ways, such as by brute forcing or
+abusing the edge conditions that we have.
+
+## Data set
+
+The data set that the merge request processes should be known
+and documented. The feature should clearly document what the expected
+data set is, and what problems processing it might cause.
+
+Consider the following example, which puts a strong emphasis on the data set
+being processed.
+The problem is simple: you want to filter a list of files from
+a Git repository. Your feature requests a list of all files
+from the repository and performs a search over that set of files.
+As the author, you should consider the following in the context of that problem:
+
+1. What repositories are planned to be supported?
+1. How long do big repositories like the Linux kernel take?
+1. Is there something that we can do differently to not process such a
+ big data set?
+1. Should we build some fail-safe mechanism to contain
+ computational complexity? Usually it's better to degrade
+ the service for a single user instead of all users.
+
+## Query plans and database structure
+
+The query plan can tell us if we need additional
+indexes, or whether expensive filtering (such as a sequential scan) occurs.
+
+Each query plan should be run against a data set of substantial size.
+For example, if you look for issues with specific conditions,
+you should consider validating the query against
+a small number (a few hundred) and a big number (100_000) of issues.
+See how the query behaves when the result is a few rows
+and when it is a few thousand.
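+
+A minimal sketch of inspecting a plan from the Rails console (the model and
+conditions are illustrative):
+
+```ruby
+# ActiveRecord's #explain prints the PostgreSQL query plan for the relation.
+puts Issue.where(project_id: 1).order(created_at: :desc).limit(20).explain
+```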
+
+This is needed as we have users using GitLab for very big projects and
+in very unconventional ways. Even if it seems unlikely
+that such a big data set is used, it's still plausible that one
+of our customers could encounter a problem with the feature.
+
+Understanding ahead of time how the feature behaves at scale, even if we accept
+the current behavior, is the desired outcome. We should always have a plan or
+understanding of what is needed to optimize the feature for higher usage patterns.
+
+Every database structure should be optimized and sometimes even over-described
+in preparation for easy extension. The hardest part after some point is
+data migration. Migrating millions of rows is always troublesome and
+can have a negative impact on the application.
+
+To better understand how to get help with query plan reviews,
+read the section on [how to prepare the merge request for a database review](../database_review.md#how-to-prepare-the-merge-request-for-a-database-review).
+
+## Query Counts
+
+**Summary:** a merge request **should not** increase the total number of executed SQL
+queries unless absolutely necessary.
+
+The total number of queries executed by the code modified or added by a merge request
+must not increase unless absolutely necessary. When building features it's
+entirely possible you need some extra queries, but you should try to keep
+this at a minimum.
+
+As an example, say you introduce a feature that updates a number of database
+rows with the same value. It may be very tempting (and easy) to write this using
+the following pseudo code:
+
+```ruby
+# Executes one UPDATE query per object: the classic N+1 write pattern.
+objects_to_update.each do |object|
+ object.some_field = some_value
+ object.save
+end
+```
+
+This means running one query for every object to update. This code can
+easily overload a database given enough rows to update or many instances of this
+code running in parallel. This particular problem is known as the
+["N+1 query problem"](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations). You can write a test with [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
+
+In this particular case the workaround is fairly easy:
+
+```ruby
+objects_to_update.update_all(some_field: some_value)
+```
+
+This uses ActiveRecord's `update_all` method to update all rows in a single
+query. This in turn makes it much harder for this code to overload a database.
+
+## Use read replicas when possible
+
+In a DB cluster we have many read replicas and one primary. A classic use of scaling the DB is to have read-only actions be performed by the replicas. We use [load balancing](../../administration/postgresql/database_load_balancing.md) to distribute this load. This allows for the replicas to grow as the pressure on the DB grows.
+
+By default, queries use read-only replicas, but due to
+[primary sticking](../../administration/postgresql/database_load_balancing.md#primary-sticking), GitLab uses the
+primary for some time and reverts to the secondaries after they have caught up, or after 30 seconds.
+Doing this can lead to a considerable amount of unnecessary load on the primary.
+To prevent switching to the primary, [merge request 56849](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/56849) introduced the
+`without_sticky_writes` block. Typically, this method can be applied to prevent primary stickiness
+after a trivial or insignificant write which doesn't affect the following queries in the same session.
+
+To learn when a usage timestamp update can lead the session to stick to the primary and how to
+prevent it by using `without_sticky_writes`, see [merge request 57328](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57328).
+
+As a counterpart of the `without_sticky_writes` utility,
+[merge request 59167](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59167) introduced
+`use_replicas_for_read_queries`. This method forces all read-only queries inside its block to read
+replicas regardless of the current primary stickiness.
+This utility is reserved for cases where queries can tolerate replication lag.
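+
+A sketch of how these helpers wrap code blocks. The exact receivers are an
+assumption based on the merge requests above, so check the current
+`Gitlab::Database::LoadBalancing::Session` code for the real entry points:
+
+```ruby
+# Assumed shape: a trivial write that shouldn't stick the session to the primary.
+::Gitlab::Database::LoadBalancing::Session.without_sticky_writes do
+  user.update_column(:last_activity_on, Date.today)
+end
+
+# Assumed shape: force read-only queries in this block onto replicas,
+# accepting possible replication lag.
+::Gitlab::Database::LoadBalancing::Session.current.use_replicas_for_read_queries do
+  generate_usage_report # hypothetical read-only work
+end
+```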
+
+Internally, our database load balancer classifies the queries based on their main statement (`select`, `update`, `delete`, and so on). When in doubt, it redirects the queries to the primary database. Hence, there are some common cases in which the load balancer sends queries to the primary unnecessarily:
+
+- Custom queries (via `exec_query`, `execute_statement`, `execute`, and so on)
+- Read-only transactions
+- In-flight connection configuration set
+- Sidekiq background jobs
+
+After the above queries are executed, GitLab
+[sticks to the primary](../../administration/postgresql/database_load_balancing.md#primary-sticking).
+To make queries inside such blocks prefer using the replicas,
+[merge request 59086](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59086) introduced
+`fallback_to_replicas_for_ambiguous_queries`. This MR is also an example of how we redirected a
+costly, time-consuming query to the replicas.
+
+## Use CTEs wisely
+
+Read about [complex queries on the relation object](../database/iterating_tables_in_batches.md#complex-queries-on-the-relation-object) for considerations on how to use CTEs. We have found in some situations that CTEs can become problematic in use (similar to the N+1 problem above). In particular, hierarchical recursive CTE queries such as the CTE in [AuthorizedProjectsWorker](https://gitlab.com/gitlab-org/gitlab/-/issues/325688) are very difficult to optimize and don't scale. We should avoid them when implementing new features that require any kind of hierarchical structure.
+
+CTEs have been effectively used as an optimization fence in many simpler cases,
+such as this [example](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/43242#note_61416277).
+Beginning in PostgreSQL 12, CTEs are inlined and then [optimized by default](https://paquier.xyz/postgresql-2/postgres-12-with-materialize/).
+Keeping the old behavior requires marking CTEs with the keyword `MATERIALIZED`.
+
+When building CTE statements, use the `Gitlab::SQL::CTE` class [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/56976) in GitLab 13.11.
+By default, this `Gitlab::SQL::CTE` class forces materialization through adding the `MATERIALIZED` keyword for PostgreSQL 12 and higher.
+`Gitlab::SQL::CTE` automatically omits materialization when PostgreSQL 11 is running
+(this behavior is implemented under the hood using a custom Arel node, `Gitlab::Database::AsWithMaterialized`).
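+
+A sketch following the usage pattern shown in GitLab's SQL development
+guidelines (the CTE name and relation are illustrative):
+
+```ruby
+# Wrap a relation in a named CTE (materialized by default on PostgreSQL 12+)...
+cte = Gitlab::SQL::CTE.new(:admin_users, User.where(admin: true))
+
+# ...then select from it as if it were a table.
+User.with(cte.to_arel).from(cte.alias_to(User.arel_table))
+```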
+
+WARNING:
+Upgrading to GitLab 14.0 requires PostgreSQL 12 or higher.
+
+## Cached Queries
+
+**Summary:** a merge request **should not** execute duplicated cached queries.
+
+Rails provides an [SQL Query Cache](../cached_queries.md#cached-queries-guidelines),
+used to cache the results of database queries for the duration of the request.
+
+See [why cached queries are considered bad](../cached_queries.md#why-cached-queries-are-considered-bad) and
+[how to detect them](../cached_queries.md#how-to-detect-cached-queries).
+
+The code introduced by a merge request should not execute multiple duplicated cached queries.
+
+The total number of the queries (including cached ones) executed by the code modified or added by a merge request
+should not increase unless absolutely necessary.
+The number of executed queries (including cached queries) should not depend on
+collection size.
+You can write a test by passing the `skip_cached` variable to [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
+
+As an example, say you have a CI pipeline. All pipeline builds belong to the same pipeline,
+thus they also belong to the same project (`pipeline.project`):
+
+```ruby
+pipeline_project = pipeline.project
+# Project Load (0.6ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
+build = pipeline.builds.first
+
+build.project == pipeline_project
+# CACHE Project Load (0.0ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
+# => true
+```
+
+When we call `build.project`, it doesn't hit the database; it uses the cached result, but it re-instantiates
+the project object. It turns out that associated objects do not point to the same in-memory object.
+
+If we try to serialize each build:
+
+```ruby
+pipeline.builds.each do |build|
+ build.to_json(only: [:name], include: [project: { only: [:name]}])
+end
+```
+
+It re-instantiates the project object for each build, instead of using the same in-memory object.
+
+In this particular case the workaround is fairly easy:
+
+```ruby
+ActiveRecord::Associations::Preloader.new.preload(pipeline, [builds: :project])
+
+pipeline.builds.each do |build|
+ build.to_json(only: [:name], include: [project: { only: [:name]}])
+end
+```
+
+`ActiveRecord::Associations::Preloader` uses the same in-memory object for the same project.
+This avoids the cached SQL query and also avoids re-instantiation of the project object for each build.
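+
+A sketch of a spec that would catch the duplicated cached queries above, using
+`ActiveRecord::QueryRecorder` with `skip_cached: false` and the
+`exceed_all_query_limit` matcher from GitLab's spec suite:
+
+```ruby
+it 'does not repeat the cached project query per build' do
+  control = ActiveRecord::QueryRecorder.new(skip_cached: false) do
+    pipeline.builds.map { |build| build.to_json(only: [:name], include: [project: { only: [:name] }]) }
+  end
+
+  create_list(:ci_build, 3, pipeline: pipeline)
+
+  # Including cached queries, the count must not grow with the number of builds.
+  expect do
+    pipeline.reload.builds.map { |build| build.to_json(only: [:name], include: [project: { only: [:name] }]) }
+  end.not_to exceed_all_query_limit(control)
+end
+```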
+
+## Executing Queries in Loops
+
+**Summary:** SQL queries **must not** be executed in a loop unless absolutely
+necessary.
+
+Executing SQL queries in a loop can result in many queries being executed
+depending on the number of iterations in a loop. This may work fine for a
+development environment with little data, but in a production environment this
+can quickly spiral out of control.
+
+There are some cases where this may be needed. If so, this should
+be clearly mentioned in the merge request description.
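+
+As a hypothetical illustration, instead of one query per iteration:
+
+```ruby
+# One SELECT per name: N queries for N names.
+names.each { |name| puts Project.find_by(name: name)&.id }
+```
+
+fetch all rows with a single query and iterate in memory:
+
+```ruby
+# A single SELECT with an IN clause covers every name.
+Project.where(name: names).each { |project| puts project.id }
+```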
+
+## Batch process
+
+**Summary:** Repeated calls from a single process to external services (for example, PostgreSQL, Redis, Object Storage)
+should be executed in a **batch-style** to reduce connection overhead.
+
+For fetching rows from various tables in a batch-style, see the [Eager Loading](#eager-loading) section.
+
+### Example: Delete multiple files from Object Storage
+
+When you delete multiple files from object storage, like GCS,
+executing a single REST API call per file is quite expensive.
+Ideally, this should be done in a batch style; for example, S3 provides a
+[batch deletion API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html),
+so it's a good idea to consider such an approach.
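+
+A sketch using the AWS SDK for Ruby (assuming the `aws-sdk-s3` gem; the bucket
+and keys are illustrative):
+
+```ruby
+require "aws-sdk-s3"
+
+client = Aws::S3::Client.new(region: "us-east-1")
+
+# One DeleteObjects request removes up to 1,000 keys at once,
+# instead of one DELETE request per key.
+client.delete_objects(
+  bucket: "my-artifacts-bucket",
+  delete: { objects: [{ key: "artifacts/1.zip" }, { key: "artifacts/2.zip" }] }
+)
+```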
+
+The `FastDestroyAll` module might help in this situation. It's a
+small framework for removing a bunch of database rows and their associated data
+in a batch style.
+
+## Timeout
+
+**Summary:** You should set a reasonable timeout when the system invokes HTTP calls
+to external services (such as Kubernetes), and such calls should be executed in Sidekiq, not
+in Puma threads.
+
+Often, GitLab needs to communicate with an external service such as a Kubernetes
+cluster. In this case, it's hard to estimate when the external service finishes
+the requested process. For example, if it's a user-owned cluster that's inactive for some reason,
+GitLab might wait for the response forever ([example](https://gitlab.com/gitlab-org/gitlab/-/issues/31475)).
+This could result in a Puma timeout, and should be avoided at all costs.
+
+You should set a reasonable timeout, gracefully handle exceptions, and surface the
+errors in the UI or log them internally.
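+
+A sketch using `Gitlab::HTTP`, GitLab's HTTParty-based wrapper (the URL, the
+timeout value, and the rescued error list are illustrative):
+
+```ruby
+begin
+  # Give up after 10 seconds instead of blocking a Puma thread indefinitely.
+  response = Gitlab::HTTP.get("https://cluster.example.com/healthz", timeout: 10)
+rescue *Gitlab::HTTP::HTTP_ERRORS => e
+  # Surface the failure instead of waiting forever.
+  Gitlab::ErrorTracking.track_exception(e)
+  nil
+end
+```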
+
+Using [`ReactiveCaching`](../utilities.md#reactivecaching) is one of the best solutions to fetch external data.
+
+## Keep database transaction minimal
+
+**Summary:** You should avoid accessing external services like Gitaly during database
+transactions. Otherwise it leads to severe contention problems,
+as an open transaction basically blocks the release of a PostgreSQL backend connection.
+
+To keep transactions as minimal as possible, consider using the `AfterCommitQueue`
+module or the `after_commit` ActiveRecord hook.
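+
+A sketch of the `after_commit` approach (the model callback and service names
+are hypothetical):
+
+```ruby
+class MergeRequest < ApplicationRecord
+  # Runs only after the surrounding transaction commits, so the slow
+  # external call never holds a PostgreSQL connection open.
+  after_commit :notify_external_service, on: :update
+
+  private
+
+  def notify_external_service
+    ExternalService.notify(self) # hypothetical service call
+  end
+end
+```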
+
+Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/36154#note_247228859)
+where one request to a Gitaly instance during a transaction triggered a ~"priority::1" issue.
+
+## Eager Loading
+
+**Summary:** always eager load associations when retrieving more than one row.
+
+When retrieving multiple database records for which you need to use any
+associations you **must** eager load these associations. For example, if you're
+retrieving a list of blog posts and you want to display their authors you
+**must** eager load the author associations.
+
+In other words, instead of this:
+
+```ruby
+Post.all.each do |post|
+ puts post.author.name
+end
+```
+
+You should use this:
+
+```ruby
+Post.all.includes(:author).each do |post|
+ puts post.author.name
+end
+```
+
+Also consider using [QueryRecorder tests](../database/query_recorder.md) to prevent a regression when eager loading.
+
+## Memory Usage
+
+**Summary:** merge requests **must not** increase memory usage unless absolutely
+necessary.
+
+A merge request must not increase the memory usage of GitLab by more than the
+absolute bare minimum required by the code. This means that if you have to parse
+some large document (for example, an HTML document) it's best to parse it as a stream
+whenever possible, instead of loading the entire input into memory. Sometimes
+this isn't possible; in that case, this should be stated explicitly in the merge
+request.
+
+## Lazy Rendering of UI Elements
+
+**Summary:** only render UI elements when they are actually needed.
+
+Certain UI elements may not always be needed. For example, when hovering over a
+diff line there's a small icon displayed that can be used to create a new
+comment. Instead of always rendering these kinds of elements, they should only be
+rendered when actually needed. This ensures we don't spend time generating
+Haml/HTML when it's not used.
+
+## Use of Caching
+
+**Summary:** cache data in memory or in Redis when it's needed multiple times in
+a transaction or has to be kept around for a certain time period.
+
+Sometimes certain bits of data have to be re-used in different places during a
+transaction. In these cases this data should be cached in memory to remove the
+need for running complex operations to fetch the data. You should use Redis if
+data should be cached for a certain time period instead of the duration of the
+transaction.
+
+For example, say you process multiple snippets of text containing username
+mentions (for example, `Hello @alice` and `How are you doing @alice?`). By caching the
+user objects for every username we can remove the need for running the same
+query for every mention of `@alice`.
+
+Caching data per transaction can be done using
+[RequestStore](https://github.com/steveklabnik/request_store) (use
+`Gitlab::SafeRequestStore` to avoid having to remember to check
+`RequestStore.active?`). Caching data in Redis can be done using
+[Rails' caching system](https://guides.rubyonrails.org/caching_with_rails.html).
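+
+A sketch of both layers (the cache keys, expiry, and lookup are illustrative):
+
+```ruby
+# Per-request memoization: repeated mentions of the same user in one
+# request reuse the first lookup.
+def user_for_mention(username)
+  cache = (Gitlab::SafeRequestStore[:users_for_mention] ||= {})
+  cache[username] ||= User.find_by_username(username)
+end
+
+# Cross-request caching in Redis with a bounded lifetime.
+Rails.cache.fetch("user-for-mention:#{username}", expires_in: 10.minutes) do
+  User.find_by_username(username)
+end
+```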
+
+## Pagination
+
+Each feature that renders a list of items as a table needs to include pagination.
+
+The main styles of pagination are:
+
+1. Offset-based pagination: the user goes to a specific page, like 1. The user sees the next page number
+   and the total number of pages. This style is well supported by all components of GitLab.
+1. Offset-based pagination, but without the count: the user goes to a specific page, like 1.
+   The user sees only the next page number, but does not see the total number of pages.
+1. Next page using keyset-based pagination: the user can only go to the next page, as we don't know how many pages
+   are available.
+1. Infinite scrolling pagination: the user scrolls the page and the next items are loaded asynchronously. This is ideal,
+   as it has the exact same benefits as the previous one.
+
+The ultimate scalable solution for pagination is to use keyset-based pagination.
+However, we don't have full support for that in GitLab at the moment. You
+can follow the progress in [API: Keyset Pagination](https://gitlab.com/groups/gitlab-org/-/epics/2039).
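+
+As a minimal sketch of the difference (plain ActiveRecord, with illustrative
+values):
+
+```ruby
+# Offset-based: the database must sort and skip 50_000 rows first.
+Project.order(:id).offset(50_000).limit(20)
+
+# Keyset-based: the database seeks directly past the last-seen id.
+Project.order(:id).where("id > ?", last_seen_id).limit(20)
+```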
+
+Take into consideration the following when choosing a pagination strategy:
+
+1. It's very inefficient to calculate the number of objects that pass the filtering.
+   This operation can take seconds, and can time out.
+1. It's very inefficient to get entries for pages at higher ordinals, like 1000.
+   The database has to sort and iterate over all previous items, and this operation
+   can put substantial load on the database.
+
+You can find useful tips related to pagination in the [pagination guidelines](../database/pagination_guidelines.md).
+
+## Badge counters
+
+Counters should always be truncated: we don't want to present
+the exact number over some threshold. The reason is that calculating the
+exact number of items requires effectively filtering each of them, just to
+know the exact number of items matching.
+
+From a UX perspective, it's often acceptable to see that you have 1000+ pipelines
+instead of exactly 40,000 pipelines, if the exact count would make the page load two seconds longer.
+
+An example of this pattern is the list of pipelines and jobs. We truncate numbers to `1000+`,
+but we show an accurate number of running pipelines, which is the most interesting information.
+
+There's a helper method for that purpose, `NumbersHelper.limited_counter_with_delimiter`,
+which accepts an upper limit for counting rows.
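+
+A sketch of how a view might call it; the relation and option name are
+assumptions, so check the helper in `app/helpers/numbers_helper.rb` for the
+exact signature:
+
+```ruby
+# Renders a truncated count like "1,000+" once more than 1,000 matching
+# rows exist, instead of counting every row.
+limited_counter_with_delimiter(@project.all_pipelines, limit: 1000)
+```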
+
+In some cases it's desired that badge counters are loaded asynchronously.
+This can speed up the initial page load and give a better user experience overall.
+
+## Usage of feature flags
+
+Each feature that has performance-critical elements or a known performance deficiency
+needs to come with a feature flag to disable it.
+
+The feature flag makes our team happier, because they can monitor the system and
+react quickly without our users noticing the problem.
+
+Performance deficiencies should be addressed right after we merge the initial
+changes.
+
+Read more about when and how feature flags should be used in
+[Feature flags in GitLab development](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#feature-flags-in-gitlab-development).
+
+## Storage
+
+We can consider the following types of storages:
+
+- **Local temporary storage** (very short-term storage). This type of storage is system-provided storage, like the `/tmp` folder.
+  This is the type of storage that you should ideally use for all your temporary tasks.
+  The fact that each node has its own temporary storage makes scaling significantly easier.
+  This storage is also very often SSD-based, and thus significantly faster.
+  Local storage can easily be configured for the application with
+  the `TMPDIR` variable.
+
+- **Shared temporary storage** (short-term storage). This type of storage is network-based temporary storage,
+  usually run with a common NFS server. As of Feb 2020, we still use this type of storage
+  for most of our implementations. Even though this allows the above limit to be significantly larger,
+  it does not really mean that you can use more. The shared temporary storage is shared by
+  all nodes. Thus, a job that uses a significant amount of that space or performs a lot
+  of operations creates contention on the execution of all other jobs and requests
+  across the whole application. This can easily impact the stability of the whole of GitLab.
+  Be respectful of that.
+
+- **Shared persistent storage** (long-term storage). This type of storage uses
+  shared network-based storage (for example, NFS). This solution is mostly used by customers running small
+  installations consisting of a few nodes. The files on shared storage are easily accessible,
+  but any job that is uploading or downloading data can create serious contention for all other jobs.
+  This is also the default approach used by Omnibus.
+
+- **Object-based persistent storage** (long-term storage). This type of storage uses external
+  services like [AWS S3](https://en.wikipedia.org/wiki/Amazon_S3). Object storage
+  can be treated as infinitely scalable and redundant. Accessing this storage usually requires
+  downloading the file to manipulate it. Object storage can be considered the ultimate
+  solution, as by definition it can be assumed to handle unlimited concurrent uploads
+  and downloads of files. This is also the solution required to ensure that the application can
+  run in containerized deployments (Kubernetes) with ease.
+
+### Temporary storage
+
+The storage on production nodes is really scarce. The application should be built
+in a way that accommodates running under very limited temporary storage.
+You can expect the system on which your code runs to have a total of `1G-10G`
+of temporary storage. However, this storage is really shared across all
+jobs being run. If your job requires more than `100MB` of that space,
+you should reconsider the approach you have taken.
+
+Whatever your needs are, you should clearly document if you need to process files.
+If you require more than `100MB`, consider asking for help from a maintainer
+to work with you to possibly discover a better solution.
+
+#### Local temporary storage
+
+Local storage is the desired solution to use,
+especially since we work on deploying applications to Kubernetes clusters.
+When would you use `Dir.mktmpdir`? For example, when you want
+to extract or create archives, perform extensive manipulation of existing data, and so on.
+
+```ruby
+Dir.mktmpdir('designs') do |path|
+  # Manipulate files under `path` here. The temporary directory and its
+  # contents are removed when the block exits.
+end
+```
+
+#### Shared temporary storage
+
+Using shared temporary storage is required if your intent
+is to persist the file to disk-based storage, and not object storage.
+[Workhorse direct upload](../uploads/index.md#direct-upload) can write an accepted file
+to shared storage, and later GitLab Rails can perform a move operation.
+A move operation on the same destination is instantaneous:
+instead of performing a `copy` operation, the system just re-attaches the file to the new place.
+
+Since this introduces extra complexity into the application, you should
+re-use well-established patterns (for example, the `ObjectStorage` concern) instead of re-implementing them.
+
+Using shared temporary storage is otherwise deprecated for all other usages.
+
+### Persistent storage
+
+#### Object Storage
+
+It is required that all features holding persistent files support saving data
+to Object Storage. Having persistent storage in the form of a volume shared across nodes
+is not scalable, as it creates contention on data access from all nodes.
+
+GitLab offers the [ObjectStorage concern](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/uploaders/object_storage.rb)
+that implements a seamless support for Shared and Object Storage-based persistent storage.
+
+#### Data access
+
+Each feature that accepts data uploads or allows downloading them needs to use
+[Workhorse direct upload](../uploads/index.md#direct-upload). This means that uploads need to be
+saved directly to Object Storage by Workhorse, and all downloads need to be served
+by Workhorse.
+
+Performing uploads/downloads via Puma is an expensive operation,
+as it blocks the whole processing slot (thread) for the duration of the upload.
+
+Performing uploads/downloads via Puma also has a problem where the operation
+can time out, which is especially problematic for slow clients. If clients take a long time
+to upload or download, the processing slot might be killed due to a request processing
+timeout (usually between 30s and 60s).
+
+For the above reasons, it is required that [Workhorse direct upload](../uploads/index.md#direct-upload) is implemented
+for all file uploads and downloads.