Diffstat (limited to 'doc/development/database')
-rw-r--r--  doc/development/database/add_foreign_key_to_existing_column.md |  34
-rw-r--r--  doc/development/database/constraint_naming_convention.md       |   5
-rw-r--r--  doc/development/database/database_reviewer_guidelines.md       |   4
-rw-r--r--  doc/development/database/efficient_in_operator_queries.md      | 949
-rw-r--r--  doc/development/database/index.md                              |   1
-rw-r--r--  doc/development/database/keyset_pagination.md                  |  29
-rw-r--r--  doc/development/database/multiple_databases.md                 | 236
-rw-r--r--  doc/development/database/not_null_constraints.md               |  16
-rw-r--r--  doc/development/database/pagination_guidelines.md              |   2
-rw-r--r--  doc/development/database/rename_database_tables.md             |   4
-rw-r--r--  doc/development/database/strings_and_the_text_data_type.md     |  53
-rw-r--r--  doc/development/database/table_partitioning.md                 |   6
-rw-r--r--  doc/development/database/transaction_guidelines.md             |   2
13 files changed, 1224 insertions(+), 117 deletions(-)
diff --git a/doc/development/database/add_foreign_key_to_existing_column.md b/doc/development/database/add_foreign_key_to_existing_column.md
index f83dc35b4a6..d74f826cc14 100644
--- a/doc/development/database/add_foreign_key_to_existing_column.md
+++ b/doc/development/database/add_foreign_key_to_existing_column.md
@@ -4,11 +4,17 @@ group: Database
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Adding foreign key constraint to an existing column
+# Add a foreign key constraint to an existing column
-Foreign keys help ensure consistency between related database tables. The current database review process **always** encourages you to add [foreign keys](../foreign_keys.md) when creating tables that reference records from other tables.
+Foreign keys ensure consistency between related database tables. The current database review process **always** encourages you to add [foreign keys](../foreign_keys.md) when creating tables that reference records from other tables.
-Starting with Rails version 4, Rails includes migration helpers to add foreign key constraints to database tables. Before Rails 4, the only way for ensuring some level of consistency was the [`dependent`](https://guides.rubyonrails.org/association_basics.html#options-for-belongs-to-dependent) option within the association definition. Ensuring data consistency on the application level could fail in some unfortunate cases, so we might end up with inconsistent data in the table. This is mostly affecting older tables, where we simply didn't have the framework support to ensure consistency on the database level. These data inconsistencies can easily cause unexpected application behavior or bugs.
+Starting with Rails version 4, Rails includes migration helpers to add foreign key constraints
+to database tables. Before Rails 4, the only way to ensure some level of consistency was the
+[`dependent`](https://guides.rubyonrails.org/association_basics.html#options-for-belongs-to-dependent)
+option in the association definition. Ensuring data consistency on the application level could fail
+in some unfortunate cases, so we might end up with inconsistent data in the table. This mostly affects
+older tables, where we didn't have the framework support to ensure consistency on the database level.
+These data inconsistencies can cause unexpected application behavior or bugs.
Adding a foreign key to an existing database column requires database structure changes and potential data changes. If the table is in use, we should always assume that there is inconsistent data.
@@ -45,7 +51,7 @@ class Email < ActiveRecord::Base
end
```
-Problem: when the user is removed, the email records related to the removed user will stay in the `emails` table:
+Problem: when the user is removed, the email records related to the removed user stay in the `emails` table:
```ruby
user = User.find(1)
@@ -66,9 +72,7 @@ In the example above, you'd be still able to update records in the `emails` tabl
Migration file for adding `NOT VALID` foreign key:
```ruby
-class AddNotValidForeignKeyToEmailsUser < ActiveRecord::Migration[5.2]
- include Gitlab::Database::MigrationHelpers
-
+class AddNotValidForeignKeyToEmailsUser < Gitlab::Database::Migration[1.0]
def up
    # safe to use: it requires a short lock on the table because we don't validate the foreign key
add_foreign_key :emails, :users, on_delete: :cascade, validate: false
@@ -85,16 +89,16 @@ Avoid using the `add_foreign_key` constraint more than once per migration file,
#### Data migration to fix existing records
-The approach here depends on the data volume and the cleanup strategy. If we can easily find "invalid" records by doing a simple database query and the record count is not that high, then the data migration can be executed within a Rails migration.
+The approach here depends on the data volume and the cleanup strategy. If we can find "invalid"
+records by doing a database query and the record count is not high, then the data migration can
+be executed in a Rails migration.
If the data volume is higher (>1000 records), it's better to create a background migration. If unsure, contact the database team for advice.
-Example for cleaning up records in the `emails` table within a database migration:
+Example for cleaning up records in the `emails` table in a database migration:
```ruby
-class RemoveRecordsWithoutUserFromEmailsTable < ActiveRecord::Migration[5.2]
- include Gitlab::Database::MigrationHelpers
-
+class RemoveRecordsWithoutUserFromEmailsTable < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
class Email < ActiveRecord::Base
@@ -116,7 +120,7 @@ end
### Validate the foreign key
-Validating the foreign key will scan the whole table and make sure that each relation is correct.
+Validating the foreign key scans the whole table and makes sure that each relation is correct.
NOTE:
When using [background migrations](../background_migrations.md), foreign key validation should happen in the next GitLab release.
@@ -126,9 +130,7 @@ Migration file for validating the foreign key:
```ruby
# frozen_string_literal: true
-class ValidateForeignKeyOnEmailUsers < ActiveRecord::Migration[5.2]
- include Gitlab::Database::MigrationHelpers
-
+class ValidateForeignKeyOnEmailUsers < Gitlab::Database::Migration[1.0]
def up
validate_foreign_key :emails, :user_id
end
diff --git a/doc/development/database/constraint_naming_convention.md b/doc/development/database/constraint_naming_convention.md
index 3faef8aee09..a22ddc1551c 100644
--- a/doc/development/database/constraint_naming_convention.md
+++ b/doc/development/database/constraint_naming_convention.md
@@ -6,7 +6,10 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Constraints naming conventions
-The most common option is to let Rails pick the name for database constraints and indexes or let PostgreSQL use the defaults (when applicable). However, when needing to define custom names in Rails or working in Go applications where no ORM is used, it is important to follow strict naming conventions to improve consistency and discoverability.
+The most common option is to let Rails pick the name for database constraints and indexes or let
+PostgreSQL use the defaults (when applicable). However, when defining custom names in Rails, or
+working in Go applications where no ORM is used, it is important to follow strict naming conventions
+to improve consistency and discoverability.
The table below describes the naming conventions for custom PostgreSQL constraints.
The intent is not to retroactively change names in existing databases but rather to ensure consistency of future changes.
diff --git a/doc/development/database/database_reviewer_guidelines.md b/doc/development/database/database_reviewer_guidelines.md
index 7a9c08d9d49..59653c6dde3 100644
--- a/doc/development/database/database_reviewer_guidelines.md
+++ b/doc/development/database/database_reviewer_guidelines.md
@@ -19,7 +19,7 @@ Database reviewers are domain experts who have substantial experience with datab
A database review is required whenever an application update [touches the database](../database_review.md#general-process).
The database reviewer is tasked with reviewing the database specific updates and
-making sure that any queries or modifications will perform without issues
+making sure that any queries or modifications perform without issues
at the scale of GitLab.com.
For more information on the database review process, check the [database review guidelines](../database_review.md).
@@ -72,7 +72,7 @@ topics and use cases. The most frequently required during database reviewing are
- [Avoiding downtime in migrations](../avoiding_downtime_in_migrations.md).
- [SQL guidelines](../sql.md) for working with SQL queries.
-## How to apply for becoming a database maintainer
+## How to apply to become a database maintainer
Once a database reviewer feels confident on switching to a database maintainer,
they can update their [team profile](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/team.yml)
diff --git a/doc/development/database/efficient_in_operator_queries.md b/doc/development/database/efficient_in_operator_queries.md
new file mode 100644
index 00000000000..bc72bce30bf
--- /dev/null
+++ b/doc/development/database/efficient_in_operator_queries.md
@@ -0,0 +1,949 @@
+---
+stage: Enablement
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Efficient `IN` operator queries
+
+This document describes a technique for building efficient ordered database queries with the `IN`
+SQL operator and the usage of a GitLab utility module to help apply the technique.
+
+NOTE:
+The described technique makes heavy use of
+[keyset pagination](pagination_guidelines.md#keyset-pagination).
+It's advised to get familiar with the topic first.
+
+## Motivation
+
+In GitLab, many domain objects like `Issue` live under nested hierarchies of projects and groups.
+To fetch nested database records for domain objects at the group level,
+we often perform queries with the `IN` SQL operator.
+We are usually interested in ordering the records by some attributes
+and limiting the number of records using `ORDER BY` and `LIMIT` clauses for performance.
+Pagination may be used to fetch subsequent records.
+
+Example tasks requiring querying nested domain objects from the group level:
+
+- Show the first 20 issues by creation date or due date from the group `gitlab-org`.
+- Show the first 20 merge requests by their merged-at date from the group `gitlab-com`.
+
+Unfortunately, ordered group-level queries typically perform badly
+because their execution requires heavy I/O, memory, and computation.
+Let's do an in-depth examination of executing one such query.
+
+### Performance problems with `IN` queries
+
+Consider the task of fetching the twenty oldest created issues
+from the group `gitlab-org` with the following query:
+
+```sql
+SELECT "issues".*
+FROM "issues"
+WHERE "issues"."project_id" IN
+ (SELECT "projects"."id"
+ FROM "projects"
+ WHERE "projects"."namespace_id" IN
+ (SELECT traversal_ids[array_length(traversal_ids, 1)] AS id
+ FROM "namespaces"
+ WHERE (traversal_ids @> ('{9970}'))))
+ORDER BY "issues"."created_at" ASC,
+ "issues"."id" ASC
+LIMIT 20
+```
+
+NOTE:
+For pagination, ordering by the `created_at` column is not enough;
+we must add the `id` column as a
+[tie-breaker](pagination_performance_guidelines.md#tie-breaker-column).
+
+The execution of the query can be largely broken down into three steps:
+
+1. The database accesses both `namespaces` and `projects` tables
+ to find all projects from all groups in the group hierarchy.
+1. The database retrieves `issues` records for each project, causing heavy disk I/O.
+ Ideally, an appropriate index configuration should optimize this process.
+1. The database sorts the `issues` rows in memory by `created_at` and returns `LIMIT 20` rows to
+ the end-user. For large groups, this final step requires both large memory and CPU resources.
+
+<details>
+<summary>Expand this sentence to see the execution plan for this DB query.</summary>
+<pre><code>
+ Limit (cost=90170.07..90170.12 rows=20 width=1329) (actual time=967.597..967.607 rows=20 loops=1)
+ Buffers: shared hit=239127 read=3060
+ I/O Timings: read=336.879
+ -> Sort (cost=90170.07..90224.02 rows=21578 width=1329) (actual time=967.596..967.603 rows=20 loops=1)
+ Sort Key: issues.created_at, issues.id
+ Sort Method: top-N heapsort Memory: 74kB
+ Buffers: shared hit=239127 read=3060
+ I/O Timings: read=336.879
+ -> Nested Loop (cost=1305.66..89595.89 rows=21578 width=1329) (actual time=4.709..797.659 rows=241534 loops=1)
+ Buffers: shared hit=239121 read=3060
+ I/O Timings: read=336.879
+ -> HashAggregate (cost=1305.10..1360.22 rows=5512 width=4) (actual time=4.657..5.370 rows=1528 loops=1)
+ Group Key: projects.id
+ Buffers: shared hit=2597
+ -> Nested Loop (cost=576.76..1291.32 rows=5512 width=4) (actual time=2.427..4.244 rows=1528 loops=1)
+ Buffers: shared hit=2597
+ -> HashAggregate (cost=576.32..579.06 rows=274 width=25) (actual time=2.406..2.447 rows=265 loops=1)
+ Group Key: namespaces.traversal_ids[array_length(namespaces.traversal_ids, 1)]
+ Buffers: shared hit=334
+ -> Bitmap Heap Scan on namespaces (cost=141.62..575.63 rows=274 width=25) (actual time=1.933..2.330 rows=265 loops=1)
+ Recheck Cond: (traversal_ids @> '{9970}'::integer[])
+ Heap Blocks: exact=243
+ Buffers: shared hit=334
+ -> Bitmap Index Scan on index_namespaces_on_traversal_ids (cost=0.00..141.55 rows=274 width=0) (actual time=1.897..1.898 rows=265 loops=1)
+ Index Cond: (traversal_ids @> '{9970}'::integer[])
+ Buffers: shared hit=91
+ -> Index Only Scan using index_projects_on_namespace_id_and_id on projects (cost=0.44..2.40 rows=20 width=8) (actual time=0.004..0.006 rows=6 loops=265)
+ Index Cond: (namespace_id = (namespaces.traversal_ids)[array_length(namespaces.traversal_ids, 1)])
+ Heap Fetches: 51
+ Buffers: shared hit=2263
+ -> Index Scan using index_issues_on_project_id_and_iid on issues (cost=0.57..10.57 rows=544 width=1329) (actual time=0.114..0.484 rows=158 loops=1528)
+ Index Cond: (project_id = projects.id)
+ Buffers: shared hit=236524 read=3060
+ I/O Timings: read=336.879
+ Planning Time: 7.750 ms
+ Execution Time: 967.973 ms
+(36 rows)
+</code></pre>
+</details>
+
+The performance of the query depends on the number of rows in the database.
+On average, we can say the following:
+
+- Number of groups in the group hierarchy: less than 1 000
+- Number of projects: less than 5 000
+- Number of issues: less than 100 000
+
+From the list, it's apparent that the number of `issues` records has
+the largest impact on the performance.
+In typical usage, the number of issue records grows faster than the
+number of `namespaces` and `projects` records.
+
+This problem affects most of our group-level features where records are listed
+in a specific order, such as group-level issues, merge requests pages, and APIs.
+For very large groups, the database queries can easily time out, causing HTTP 500 errors.
+
+## Optimizing ordered `IN` queries
+
+In the talk
+["How to teach an elephant to dance rock'n'roll"](https://www.youtube.com/watch?v=Ha38lcjVyhQ),
+Maxim Boguk demonstrated a technique to optimize a special class of ordered `IN` queries,
+such as our ordered group-level queries.
+
+A typical ordered `IN` query may look like this:
+
+```sql
+SELECT t.* FROM t
+WHERE t.fkey IN (value_set)
+ORDER BY t.pkey
+LIMIT N;
+```
+
+Here's the key insight used in the technique: we need at most `|value_set| + N` record lookups,
+rather than retrieving all records satisfying the condition `t.fkey IN value_set` (`|value_set|`
+is the number of values in `value_set`).
+
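+The first half of this bound is easy to sketch, assuming a table `t(fkey, pkey)` with an
+index on `(fkey, pkey)`: one indexed lookup per value fetches each value's first row in
+`pkey` order, and the recursion described later advances only the lowest ("winning")
+cursor at most `N` times.
+
+```sql
+-- One indexed lookup per value in the IN list (|value_set| lookups in total);
+-- each returns the first row for that value in pkey order.
+SELECT v.fkey, t.pkey
+FROM (VALUES (1), (2), (3)) AS v(fkey)
+LEFT JOIN LATERAL
+  (SELECT pkey FROM t WHERE t.fkey = v.fkey ORDER BY t.pkey LIMIT 1) t ON TRUE;
+```
+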
+We adopted and generalized the technique for use in GitLab by implementing utilities in the
+`Gitlab::Pagination::Keyset::InOperatorOptimization` class to facilitate building efficient `IN`
+queries.
+
+### Requirements
+
+The technique is not a drop-in replacement for existing group-level queries that use the `IN` operator.
+The technique can only optimize `IN` queries that satisfy the following requirements:
+
+- `LIMIT` is present, which usually means that the query is paginated
+ (offset or keyset pagination).
+- The column used with the `IN` query and the columns in the `ORDER BY`
+ clause are covered with a database index. The columns in the index must be
+ in the following order: `column_for_the_in_query`, `order by column 1`, and
+ `order by column 2`.
+- The columns in the `ORDER BY` clause are distinct
+  (the combination of the columns uniquely identifies one particular row in the table).
+
+WARNING:
+This technique does not improve the performance of `COUNT(*)` queries.
+
+## The `InOperatorOptimization` module
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67352) in GitLab 14.3.
+
+The `Gitlab::Pagination::Keyset::InOperatorOptimization` module implements utilities for applying a generalized version of
+the efficient `IN` query technique described in the previous section.
+
+To build optimized, ordered `IN` queries that meet [the requirements](#requirements),
+use the utility class `QueryBuilder` from the module.
+
+NOTE:
+The generic keyset pagination module introduced in the merge request
+[51481](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/51481)
+plays a fundamental role in the generalized implementation of the technique
+in `Gitlab::Pagination::Keyset::InOperatorOptimization`.
+
+### Basic usage of `QueryBuilder`
+
+To illustrate basic usage, we build a query that
+fetches 20 issues with the oldest `created_at` from the group `gitlab-org`.
+
+The following ActiveRecord query would produce a query similar to
+[the unoptimized query](#performance-problems-with-in-queries) that we examined earlier:
+
+```ruby
+scope = Issue
+ .where(project_id: Group.find(9970).all_projects.select(:id)) # `gitlab-org` group and its subgroups
+ .order(:created_at, :id)
+ .limit(20)
+```
+
+Instead, use the query builder `InOperatorOptimization::QueryBuilder` to produce an optimized
+version:
+
+```ruby
+scope = Issue.order(:created_at, :id)
+array_scope = Group.find(9970).all_projects.select(:id)
+array_mapping_scope = -> (id_expression) { Issue.where(Issue.arel_table[:project_id].eq(id_expression)) }
+finder_query = -> (created_at_expression, id_expression) { Issue.where(Issue.arel_table[:id].eq(id_expression)) }
+
+Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
+ scope: scope,
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope,
+ finder_query: finder_query
+).execute.limit(20)
+```
+
+- `scope` represents the original `ActiveRecord::Relation` object without the `IN` query. The
+ relation should define an order which must be supported by the
+ [keyset pagination library](keyset_pagination.md#usage).
+- `array_scope` contains the `ActiveRecord::Relation` object, which represents the original
+ `IN (subquery)`. The select values must contain the columns by which the subquery is "connected"
+ to the main query: the `id` of the project record.
+- `array_mapping_scope` defines a lambda returning an `ActiveRecord::Relation` object. The lambda
+ matches (`=`) single select values from the `array_scope`. The lambda yields as many
+ arguments as the select values defined in the `array_scope`. The arguments are Arel SQL expressions.
+- `finder_query` loads the actual record row from the database. It must also be a lambda, where
+  the `ORDER BY` column expressions are available for locating the record. In this example, the
+ yielded values are `created_at` and `id` SQL expressions. Finding a record is very fast via the
+ primary key, so we don't use the `created_at` value.
+
+The following database index on the `issues` table must be present
+to make the query execute efficiently:
+
+```sql
+"idx_issues_on_project_id_and_created_at_and_id" btree (project_id, created_at, id)
+```
+
+<details>
+<summary>Expand this sentence to see the SQL query.</summary>
+<pre><code>
+SELECT "issues".*
+FROM
+ (WITH RECURSIVE "array_cte" AS MATERIALIZED
+ (SELECT "projects"."id"
+ FROM "projects"
+ WHERE "projects"."namespace_id" IN
+ (SELECT traversal_ids[array_length(traversal_ids, 1)] AS id
+ FROM "namespaces"
+ WHERE (traversal_ids @> ('{9970}')))),
+ "recursive_keyset_cte" AS ( -- initializer row start
+ (SELECT NULL::issues AS records,
+ array_cte_id_array,
+ issues_created_at_array,
+ issues_id_array,
+ 0::bigint AS COUNT
+ FROM
+ (SELECT ARRAY_AGG("array_cte"."id") AS array_cte_id_array,
+ ARRAY_AGG("issues"."created_at") AS issues_created_at_array,
+ ARRAY_AGG("issues"."id") AS issues_id_array
+ FROM
+ (SELECT "array_cte"."id"
+ FROM array_cte) array_cte
+ LEFT JOIN LATERAL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = "array_cte"."id"
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+ LIMIT 1) issues ON TRUE
+ WHERE "issues"."created_at" IS NOT NULL
+ AND "issues"."id" IS NOT NULL) array_scope_lateral_query
+ LIMIT 1)
+ -- initializer row finished
+ UNION ALL
+ (SELECT
+ -- result row start
+ (SELECT issues -- record finder query as the first column
+ FROM "issues"
+ WHERE "issues"."id" = recursive_keyset_cte.issues_id_array[position]
+ LIMIT 1),
+ array_cte_id_array,
+ recursive_keyset_cte.issues_created_at_array[:position_query.position-1]||next_cursor_values.created_at||recursive_keyset_cte.issues_created_at_array[position_query.position+1:],
+ recursive_keyset_cte.issues_id_array[:position_query.position-1]||next_cursor_values.id||recursive_keyset_cte.issues_id_array[position_query.position+1:],
+ recursive_keyset_cte.count + 1
+ -- result row finished
+ FROM recursive_keyset_cte,
+ LATERAL
+ -- finding the cursor values of the next record start
+ (SELECT created_at,
+ id,
+ position
+ FROM UNNEST(issues_created_at_array, issues_id_array) WITH
+ ORDINALITY AS u(created_at, id, position)
+ WHERE created_at IS NOT NULL
+ AND id IS NOT NULL
+ ORDER BY "created_at" ASC, "id" ASC
+ LIMIT 1) AS position_query,
+ -- finding the cursor values of the next record end
+ -- finding the next cursor values (next_cursor_values_query) start
+ LATERAL
+ (SELECT "record"."created_at",
+ "record"."id"
+ FROM (
+ VALUES (NULL,
+ NULL)) AS nulls
+ LEFT JOIN
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM (
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[position]
+ AND recursive_keyset_cte.issues_created_at_array[position] IS NULL
+ AND "issues"."created_at" IS NULL
+ AND "issues"."id" > recursive_keyset_cte.issues_id_array[position]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[position]
+ AND recursive_keyset_cte.issues_created_at_array[position] IS NOT NULL
+ AND "issues"."created_at" IS NULL
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[position]
+ AND recursive_keyset_cte.issues_created_at_array[position] IS NOT NULL
+ AND "issues"."created_at" > recursive_keyset_cte.issues_created_at_array[position]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[position]
+ AND recursive_keyset_cte.issues_created_at_array[position] IS NOT NULL
+ AND "issues"."created_at" = recursive_keyset_cte.issues_created_at_array[position]
+ AND "issues"."id" > recursive_keyset_cte.issues_id_array[position]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)) issues
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+ LIMIT 1) record ON TRUE
+ LIMIT 1) AS next_cursor_values))
+ -- finding the next cursor values (next_cursor_values_query) END
+SELECT (records).*
+ FROM "recursive_keyset_cte" AS "issues"
+ WHERE (COUNT <> 0)) issues -- filtering out the initializer row
+LIMIT 20
+</code></pre>
+</details>
+
+### Using the `IN` query optimization
+
+#### Adding more filters
+
+In this example, let's add an extra filter by `milestone_id`.
+
+Be careful when adding extra filters to the query. If the column is not covered by the same index,
+then the query might perform worse than the non-optimized query. The `milestone_id` column in the
+`issues` table is currently covered by a different index:
+
+```sql
+"index_issues_on_milestone_id" btree (milestone_id)
+```
+
+Adding the `milestone_id = X` filter to the `scope` argument or to the optimized scope causes bad performance.
+
+Example (bad):
+
+```ruby
+Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
+ scope: scope,
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope,
+ finder_query: finder_query
+).execute
+ .where(milestone_id: 5)
+ .limit(20)
+```
+
+To address this concern, we could define another index:
+
+```sql
+"idx_issues_on_project_id_and_milestone_id_and_created_at_and_id" btree (project_id, milestone_id, created_at, id)
+```
+
+Adding more indexes to the `issues` table could significantly affect the performance of
+the `UPDATE` queries. In this case, it's better to rely on the original query. This means that if we
+want to use the optimization for the unfiltered page, we need extra logic in the application code:
+
+```ruby
+if optimization_possible? # no extra params or params covered with the same index as the ORDER BY clause
+ run_optimized_query
+else
+ run_normal_in_query
+end
+```
+
+#### Multiple `IN` queries
+
+Let's assume that we want to extend the group-level queries to include only incident and test case
+issue types.
+
+The original ActiveRecord query would look like this:
+
+```ruby
+scope = Issue
+ .where(project_id: Group.find(9970).all_projects.select(:id)) # `gitlab-org` group and its subgroups
+ .where(issue_type: [:incident, :test_case]) # 1, 2
+ .order(:created_at, :id)
+ .limit(20)
+```
+
+To construct the array scope, we'll need to take the Cartesian product of the `project_id IN` and
+the `issue_type IN` queries. `issue_type` is an ActiveRecord enum, so we need to
+construct the following table:
+
+| `project_id` | `issue_type_value` |
+| ------------ | ------------------ |
+| 2 | 1 |
+| 2 | 2 |
+| 5 | 1 |
+| 5 | 2 |
+| 10 | 1 |
+| 10 | 2 |
+| 9 | 1 |
+| 9 | 2 |
+
+For the `issue_types` query we can construct a value list without querying a table:
+
+```ruby
+value_list = Arel::Nodes::ValuesList.new([[Issue.issue_types[:incident]],[Issue.issue_types[:test_case]]])
+issue_type_values = Arel::Nodes::Grouping.new(value_list).as('issue_type_values (value)').to_sql
+
+array_scope = Group
+ .find(9970)
+ .all_projects
+ .from("#{Project.table_name}, #{issue_type_values}")
+ .select(:id, :value)
+```
+
+Building the `array_mapping_scope` query requires two arguments: `id` and `issue_type_value`:
+
+```ruby
+array_mapping_scope = -> (id_expression, issue_type_value_expression) { Issue.where(Issue.arel_table[:project_id].eq(id_expression)).where(Issue.arel_table[:issue_type].eq(issue_type_value_expression)) }
+```
+
+The `scope` and the `finder` queries don't change:
+
+```ruby
+scope = Issue.order(:created_at, :id)
+finder_query = -> (created_at_expression, id_expression) { Issue.where(Issue.arel_table[:id].eq(id_expression)) }
+
+Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
+ scope: scope,
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope,
+ finder_query: finder_query
+).execute.limit(20)
+```
+
+<details>
+<summary>Expand this sentence to see the SQL query.</summary>
+<pre><code>
+SELECT "issues".*
+FROM
+ (WITH RECURSIVE "array_cte" AS MATERIALIZED
+ (SELECT "projects"."id", "value"
+ FROM projects, (
+ VALUES (1), (2)) AS issue_type_values (value)
+ WHERE "projects"."namespace_id" IN
+ (WITH RECURSIVE "base_and_descendants" AS (
+ (SELECT "namespaces".*
+ FROM "namespaces"
+ WHERE "namespaces"."type" = 'Group'
+ AND "namespaces"."id" = 9970)
+ UNION
+ (SELECT "namespaces".*
+ FROM "namespaces", "base_and_descendants"
+ WHERE "namespaces"."type" = 'Group'
+ AND "namespaces"."parent_id" = "base_and_descendants"."id")) SELECT "id"
+ FROM "base_and_descendants" AS "namespaces")),
+ "recursive_keyset_cte" AS (
+ (SELECT NULL::issues AS records,
+ array_cte_id_array,
+ array_cte_value_array,
+ issues_created_at_array,
+ issues_id_array,
+ 0::bigint AS COUNT
+ FROM
+ (SELECT ARRAY_AGG("array_cte"."id") AS array_cte_id_array,
+ ARRAY_AGG("array_cte"."value") AS array_cte_value_array,
+ ARRAY_AGG("issues"."created_at") AS issues_created_at_array,
+ ARRAY_AGG("issues"."id") AS issues_id_array
+ FROM
+ (SELECT "array_cte"."id",
+ "array_cte"."value"
+ FROM array_cte) array_cte
+ LEFT JOIN LATERAL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = "array_cte"."id"
+ AND "issues"."issue_type" = "array_cte"."value"
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+ LIMIT 1) issues ON TRUE
+ WHERE "issues"."created_at" IS NOT NULL
+ AND "issues"."id" IS NOT NULL) array_scope_lateral_query
+ LIMIT 1)
+ UNION ALL
+ (SELECT
+ (SELECT issues
+ FROM "issues"
+ WHERE "issues"."id" = recursive_keyset_cte.issues_id_array[POSITION]
+ LIMIT 1), array_cte_id_array,
+ array_cte_value_array,
+ recursive_keyset_cte.issues_created_at_array[:position_query.position-1]||next_cursor_values.created_at||recursive_keyset_cte.issues_created_at_array[position_query.position+1:], recursive_keyset_cte.issues_id_array[:position_query.position-1]||next_cursor_values.id||recursive_keyset_cte.issues_id_array[position_query.position+1:], recursive_keyset_cte.count + 1
+ FROM recursive_keyset_cte,
+ LATERAL
+ (SELECT created_at,
+ id,
+ POSITION
+ FROM UNNEST(issues_created_at_array, issues_id_array) WITH
+ ORDINALITY AS u(created_at, id, POSITION)
+ WHERE created_at IS NOT NULL
+ AND id IS NOT NULL
+ ORDER BY "created_at" ASC, "id" ASC
+ LIMIT 1) AS position_query,
+ LATERAL
+ (SELECT "record"."created_at",
+ "record"."id"
+ FROM (
+ VALUES (NULL,
+ NULL)) AS nulls
+ LEFT JOIN
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM (
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[POSITION]
+ AND "issues"."issue_type" = recursive_keyset_cte.array_cte_value_array[POSITION]
+ AND recursive_keyset_cte.issues_created_at_array[POSITION] IS NULL
+ AND "issues"."created_at" IS NULL
+ AND "issues"."id" > recursive_keyset_cte.issues_id_array[POSITION]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[POSITION]
+ AND "issues"."issue_type" = recursive_keyset_cte.array_cte_value_array[POSITION]
+ AND recursive_keyset_cte.issues_created_at_array[POSITION] IS NOT NULL
+ AND "issues"."created_at" IS NULL
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[POSITION]
+ AND "issues"."issue_type" = recursive_keyset_cte.array_cte_value_array[POSITION]
+ AND recursive_keyset_cte.issues_created_at_array[POSITION] IS NOT NULL
+ AND "issues"."created_at" > recursive_keyset_cte.issues_created_at_array[POSITION]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)
+ UNION ALL
+ (SELECT "issues"."created_at",
+ "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = recursive_keyset_cte.array_cte_id_array[POSITION]
+ AND "issues"."issue_type" = recursive_keyset_cte.array_cte_value_array[POSITION]
+ AND recursive_keyset_cte.issues_created_at_array[POSITION] IS NOT NULL
+ AND "issues"."created_at" = recursive_keyset_cte.issues_created_at_array[POSITION]
+ AND "issues"."id" > recursive_keyset_cte.issues_id_array[POSITION]
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC)) issues
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+ LIMIT 1) record ON TRUE
+ LIMIT 1) AS next_cursor_values)) SELECT (records).*
+ FROM "recursive_keyset_cte" AS "issues"
+ WHERE (COUNT <> 0)) issues
+LIMIT 20
+</code>
+</pre>
+</details>
+
+NOTE:
+To make the query efficient, the following columns need to be covered with an index: `project_id`, `issue_type`, `created_at`, and `id`.
+
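+For example, an index similar to the following sketch (the name is hypothetical) would
+cover the query:
+
+```sql
+-- Hypothetical covering index: the IN column, the extra filter column,
+-- and the ORDER BY columns, in this order.
+CREATE INDEX idx_issues_on_project_id_issue_type_created_at_id
+  ON issues (project_id, issue_type, created_at, id);
+```
+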
+#### Batch iteration
+
+Batch iteration over the records is possible via the keyset `Iterator` class.
+
+```ruby
+scope = Issue.order(:created_at, :id)
+array_scope = Group.find(9970).all_projects.select(:id)
+array_mapping_scope = -> (id_expression) { Issue.where(Issue.arel_table[:project_id].eq(id_expression)) }
+finder_query = -> (created_at_expression, id_expression) { Issue.where(Issue.arel_table[:id].eq(id_expression)) }
+
+opts = {
+ in_operator_optimization_options: {
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope,
+ finder_query: finder_query
+ }
+}
+
+Gitlab::Pagination::Keyset::Iterator.new(scope: scope, **opts).each_batch(of: 100) do |records|
+ puts records.select(:id).map { |r| [r.id] }
+end
+```
+
+#### Keyset pagination
+
+The optimization works out of the box with GraphQL and the `keyset_paginate` helper method.
+Read more about [keyset pagination](keyset_pagination.md).
+
+```ruby
+array_scope = Group.find(9970).all_projects.select(:id)
+array_mapping_scope = -> (id_expression) { Issue.where(Issue.arel_table[:project_id].eq(id_expression)) }
+finder_query = -> (created_at_expression, id_expression) { Issue.where(Issue.arel_table[:id].eq(id_expression)) }
+
+opts = {
+ in_operator_optimization_options: {
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope,
+ finder_query: finder_query
+ }
+}
+
+issues = Issue
+ .order(:created_at, :id)
+ .keyset_paginate(per_page: 20, keyset_order_options: opts)
+ .records
+```
+
+#### Offset pagination with Kaminari
+
+The `ActiveRecord` scope produced by the `InOperatorOptimization` class can be used in
+[offset-paginated](pagination_guidelines.md#offset-pagination)
+queries.
+
+```ruby
+Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder
+ .new(...)
+ .execute
+ .page(1)
+ .per(20)
+ .without_count
+```
+
+## Generalized `IN` optimization technique
+
+Let's dive into how `QueryBuilder` builds the optimized query
+to fetch the twenty oldest created issues from the group `gitlab-org`
+using the generalized `IN` optimization technique.
+
+### Array CTE
+
+As the first step, we use a common table expression (CTE) for collecting the `projects.id` values.
+This is done by wrapping the incoming `array_scope` ActiveRecord relation parameter with a CTE.
+
+```sql
+WITH array_cte AS MATERIALIZED (
+ SELECT "projects"."id"
+ FROM "projects"
+ WHERE "projects"."namespace_id" IN
+ (SELECT traversal_ids[array_length(traversal_ids, 1)] AS id
+ FROM "namespaces"
+ WHERE (traversal_ids @> ('{9970}')))
+)
+```
+
+This query produces the following result set with only one column (`projects.id`):
+
+| ID |
+| --- |
+| 9 |
+| 2 |
+| 5 |
+| 10 |
+
+### Array mapping
+
+For each project (that is, each record storing a project ID in `array_cte`),
+we will fetch the cursor value identifying the first issue respecting the `ORDER BY` clause.
+
+As an example, let's pick the first record `ID=9` from `array_cte`.
+The following query should fetch the cursor value `(created_at, id)` identifying
+the first issue record respecting the `ORDER BY` clause for the project with `ID=9`:
+
+```sql
+SELECT "issues"."created_at", "issues"."id"
+FROM "issues"."project_id"=9
+ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+LIMIT 1;
+```
+
+We will use `LATERAL JOIN` to loop over the records in the `array_cte` and find the
+cursor value for each project. The query would be built using the `array_mapping_scope` lambda
+function.
+
+```sql
+SELECT ARRAY_AGG("array_cte"."id") AS array_cte_id_array,
+ ARRAY_AGG("issues"."created_at") AS issues_created_at_array,
+ ARRAY_AGG("issues"."id") AS issues_id_array
+FROM (
+ SELECT "array_cte"."id" FROM array_cte
+) array_cte
+LEFT JOIN LATERAL
+(
+ SELECT "issues"."created_at", "issues"."id"
+ FROM "issues"
+ WHERE "issues"."project_id" = "array_cte"."id"
+ ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+ LIMIT 1
+) issues ON TRUE
+```
+
+Since we have an index on `project_id`, `created_at`, and `id`,
+index-only scans should quickly locate all the cursor values.
+
+This is how the query could be translated to Ruby:
+
+```ruby
+created_at_values = []
+id_values = []
+project_ids.each do |project_id|
+  created_at, id = Issue.where(project_id: project_id).order(:created_at, :id).limit(1).pluck(:created_at, :id).first # N+1, but each lookup is a fast indexed query
+ created_at_values << created_at
+ id_values << id
+end
+```
+
+This is what the result set would look like:
+
+| `project_ids` | `created_at_values` | `id_values` |
+| ------------- | ------------------- | ----------- |
+| 2 | 2020-01-10 | 5 |
+| 5 | 2020-01-05 | 4 |
+| 10 | 2020-01-15 | 7 |
+| 9 | 2020-01-05 | 3 |
+
+The table shows the cursor values (`created_at, id`) of the first record for each project
+respecting the `ORDER BY` clause.
+
+At this point, we have the initial data. To start collecting the actual records from the database,
+we'll use a recursive CTE query where each recursion locates one row until
+the `LIMIT` is reached or no more data can be found.
+
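+To see the general shape first, here is a toy recursive CTE, unrelated to our data, that
+emits one row per iteration from an initializer row which is filtered out at the end:
+
+```sql
+-- Toy example of the shape only: an initializer row, one new row per
+-- iteration, and the initializer filtered out at the end.
+WITH RECURSIVE walk(n) AS (
+  SELECT 0
+  UNION ALL
+  SELECT n + 1 FROM walk WHERE n < 20
+)
+SELECT n FROM walk WHERE n <> 0 LIMIT 20;
+```
+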
+Here's an outline of the steps we will take in the recursive CTE query
+(expressing the steps in SQL is non-trivial, but they are explained next):
+
+1. Sort the initial resultset according to the `ORDER BY` clause.
+1. Pick the top cursor to fetch the record. This is our first record. In the example,
+this cursor would be (`2020-01-05`, `3`) for `project_id=9`.
+1. We can use (`2020-01-05`, `3`) to fetch the next issue respecting the `ORDER BY` clause
+and the `project_id=9` filter. This produces an updated resultset.
+
+ | `project_ids` | `created_at_values` | `id_values` |
+ | ------------- | ------------------- | ----------- |
+ | 2 | 2020-01-10 | 5 |
+ | 5 | 2020-01-05 | 4 |
+ | 10 | 2020-01-15 | 7 |
+ | **9** | **2020-01-06** | **6** |
+
+1. Repeat steps 1 to 3 with the updated resultset until we have fetched `N=20` records.
+
+### Initializing the recursive CTE query
+
+For the initial recursive query, we need to produce exactly one row. We call this the
+initializer query (`initializer_query`).
+
+Use the `ARRAY_AGG` function to compact the initial result set into a single row
+and use the row as the initial value for the recursive CTE query:
+
+Example initializer row:
+
+| `records` | `project_ids` | `created_at_values` | `id_values` | `Count` | `Position` |
+| -------------- | --------------- | ------------------- | ----------- | ------- | ---------- |
+| `NULL::issues` | `[9, 2, 5, 10]` | `[...]` | `[...]` | `0` | `NULL` |
+
+- The `records` column contains our sorted database records, and the initializer query sets the
+first value to `NULL`, which is filtered out later.
+- The `count` column tracks the number of records found. We use this column to filter out the
+initializer row from the result set.
+
+### Recursive portion of the CTE query
+
+The result row is produced with the following steps:
+
+1. [Order the keyset arrays.](#order-the-keyset-arrays)
+1. [Find the next cursor.](#find-the-next-cursor)
+1. [Produce a new row.](#produce-a-new-row)
+
+#### Order the keyset arrays
+
+Order the keyset arrays according to the original `ORDER BY` clause with `LIMIT 1` using the
+`UNNEST [] WITH ORDINALITY` table function. The function locates the "lowest" keyset cursor
+values and gives us the array position. These cursor values are used to locate the record.
+
+NOTE:
+At this point, we haven't read anything from the database tables, because we relied on
+fast index-only scans.
+
+| `project_ids` | `created_at_values` | `id_values` |
+| ------------- | ------------------- | ----------- |
+| 2 | 2020-01-10 | 5 |
+| 5 | 2020-01-05 | 4 |
+| 10 | 2020-01-15 | 7 |
+| 9 | 2020-01-05 | 3 |
+
+The first row is the 4th one (`position = 4`), because it has the lowest `created_at` and
+`id` values. The `UNNEST` function also exposes the position using an extra column (note:
+PostgreSQL arrays use 1-based indexing).
+
+Demonstration of the `UNNEST [] WITH ORDINALITY` table function:
+
+```sql
+SELECT position FROM unnest('{2020-01-10, 2020-01-05, 2020-01-15, 2020-01-05}'::timestamp[], '{5, 4, 7, 3}'::int[])
+ WITH ORDINALITY AS t(created_at, id, position) ORDER BY created_at ASC, id ASC LIMIT 1;
+```
+
+Result:
+
+```sql
+position
+----------
+ 4
+(1 row)
+```
+
+#### Find the next cursor
+
+Now, let's find the next cursor values (`next_cursor_values_query`) for the project with `id = 9`.
+To do that, we build a keyset pagination SQL query. Find the next row after
+`created_at = 2020-01-05` and `id = 3`. Because we order by two database columns, there can be two
+cases:
+
+- There are rows with `created_at = 2020-01-05` and `id > 3`.
+- There are rows with `created_at > 2020-01-05`.
+
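+In the simple case where neither column is `NULL`, the two cases collapse into a single
+PostgreSQL row-value comparison. A minimal sketch with the walkthrough's values (the
+library itself generates the expanded `UNION ALL` form shown in the earlier query):
+
+```sql
+-- Next row after the cursor (created_at = '2020-01-05', id = 3) in project 9.
+SELECT "issues"."created_at", "issues"."id"
+FROM "issues"
+WHERE "issues"."project_id" = 9
+  AND ("issues"."created_at", "issues"."id") > ('2020-01-05', 3)
+ORDER BY "issues"."created_at" ASC, "issues"."id" ASC
+LIMIT 1;
+```
+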
+Generating this query is done by the generic keyset pagination library. After the query is done,
+we have a temporary table with the next cursor values:
+
+| `created_at` | ID |
+| ------------ | --- |
+| 2020-01-06 | 6 |
+
+#### Produce a new row
+
+As the final step, we need to produce a new row by manipulating the initializer row
+(`data_collector_query` method). Two things happen here:
+
+- Read the full row from the DB and return it in the `records` column. (`result_collector_columns`
+method)
+- Replace the cursor values at the current position with the results from the keyset query.
+
+Reading the full row from the database is only one index scan by the primary key. We use the
+ActiveRecord query passed as the `finder_query`:
+
+```sql
+(SELECT "issues".* FROM issues WHERE id = id_values[position] LIMIT 1)
+```
+
+By adding parentheses, the result row can be put into the `records` column.
+
+Replacing the cursor values at `position` can be done via standard PostgreSQL array operators:
+
+```sql
+-- created_at_values column value
+created_at_values[:position-1]||next_cursor_values.created_at||created_at_values[position+1:]
+
+-- id_values column value
+id_values[:position-1]||next_cursor_values.id||id_values[position+1:]
+```
+
+The Ruby equivalent would be the following:
+
+```ruby
+id_values[0..(position - 1)] + [next_cursor_values.id] + id_values[(position + 1)..-1]
+```
+
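+With the walkthrough's data, project 9 sits in the last slot (`position = 3`, 0-based in
+Ruby) and its next issue has `id = 6`, so the replacement produces:
+
+```ruby
+id_values = [5, 4, 7, 3]
+position  = 3 # project 9's slot (0-based)
+next_id   = 6 # from the next cursor values
+
+id_values[0..(position - 1)] + [next_id] + id_values[(position + 1)..-1]
+# => [5, 4, 7, 6]
+```
+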
+After this, the recursion starts again by finding the next lowest cursor value.
+
+### Finalizing the query
+
+To produce the final `issues` rows, we wrap the query in another `SELECT` statement:
+
+```sql
+SELECT "issues".*
+FROM (
+ SELECT (records).* -- similar to ruby splat operator
+ FROM recursive_keyset_cte
+ WHERE recursive_keyset_cte.count <> 0 -- filter out the initializer row
+) AS issues
+```
+
+### Performance comparison
+
+Assuming that we have the correct database index in place, we can compare the query performance by
+looking at the number of database rows accessed by the query.
+
+- Number of groups: 100
+- Number of projects: 500
+- Number of issues (in the group hierarchy): 50 000
+
+Standard `IN` query:
+
+| Query | Entries read from index | Rows read from the table | Rows sorted in memory |
+| ------------------------ | ----------------------- | ------------------------ | --------------------- |
+| group hierarchy subquery | 100 | 0 | 0 |
+| project lookup query | 500 | 0 | 0 |
+| issue lookup query | 50 000 | 20 | 50 000 |
+
+Optimized `IN` query:
+
+| Query | Entries read from index | Rows read from the table | Rows sorted in memory |
+| ------------------------ | ----------------------- | ------------------------ | --------------------- |
+| group hierarchy subquery | 100 | 0 | 0 |
+| project lookup query | 500 | 0 | 0 |
+| issue lookup query | 519 | 20 | 10 000 |
+
+The group and project queries do not use sorting; the necessary columns are read from database
+indexes. These values are accessed frequently, so it's very likely that most of the data is
+in PostgreSQL's buffer cache.
+
+The optimized `IN` query reads at most 519 entries (cursor values) from the index:
+
+- 500 index-only scans to populate the arrays, one per project. The cursor values of the first
+record for each project are collected here.
+- At most 19 additional index-only scans for the consecutive records.
+
+The optimized `IN` query sorts the array (cursor values per project array) 20 times, which
+means we sort 20 x 500 rows. However, this might be a less memory-intensive task than
+sorting 10 000 rows at once.
+
+Performance comparison for the `gitlab-org` group:
+
+| Query | Number of 8K Buffers involved | Uncached execution time | Cached execution time |
+| -------------------- | ----------------------------- | ----------------------- | --------------------- |
+| `IN` query | 240833 | 1.2s | 660ms |
+| Optimized `IN` query | 9783 | 450ms | 22ms |
+
+NOTE:
+Before taking measurements, the group lookup query was executed separately to make
+the group data available in the buffer cache. Because it's a frequently called query, it
+hits many shared buffers during query execution in the production environment.
diff --git a/doc/development/database/index.md b/doc/development/database/index.md
index b61a71ffb8e..a7b752e14ef 100644
--- a/doc/development/database/index.md
+++ b/doc/development/database/index.md
@@ -62,6 +62,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- [Query performance guidelines](../query_performance.md)
- [Pagination guidelines](pagination_guidelines.md)
- [Pagination performance guidelines](pagination_performance_guidelines.md)
+- [Efficient `IN` operator queries](efficient_in_operator_queries.md)
## Case studies
diff --git a/doc/development/database/keyset_pagination.md b/doc/development/database/keyset_pagination.md
index e30c3cc8832..fd62c36b753 100644
--- a/doc/development/database/keyset_pagination.md
+++ b/doc/development/database/keyset_pagination.md
@@ -36,7 +36,8 @@ Keyset pagination works without any configuration for simple ActiveRecord querie
- Order by one column.
- Order by two columns, where the last column is the primary key.
-The library can detect nullable and non-distinct columns and based on these, it will add extra ordering using the primary key. This is necessary because keyset pagination expects distinct order by values:
+The library detects nullable and non-distinct columns and, based on these, adds extra ordering
+using the primary key. This is necessary because keyset pagination expects distinct `ORDER BY` values:
```ruby
Project.order(:created_at).keyset_paginate.records # ORDER BY created_at, id
@@ -79,7 +80,7 @@ cursor = paginator.cursor_for_next_page # encoded column attributes for the next
paginator = Project.order(:name).keyset_paginate(cursor: cursor).records # loading the next page
```
-Since keyset pagination does not support page numbers, we are restricted to go to the following pages:
+Because keyset pagination does not support page numbers, we can only navigate to the following pages:
- Next page
- Previous page
@@ -111,7 +112,8 @@ In the HAML file, we can render the records:
The performance of the keyset pagination depends on the database index configuration and the number of columns we use in the `ORDER BY` clause.
-In case we order by the primary key (`id`), then the generated queries will be efficient since the primary key is covered by a database index.
+If we order by the primary key (`id`), the generated queries are efficient because
+the primary key is covered by a database index.
When two or more columns are used in the `ORDER BY` clause, it's advised to check the generated database query and make sure that the correct index configuration is used. More information can be found on the [pagination guideline page](pagination_guidelines.md#index-coverage).
@@ -149,7 +151,9 @@ puts paginator2.records.to_a # UNION query
## Complex order configuration
-Common `ORDER BY` configurations will be handled by the `keyset_paginate` method automatically so no manual configuration is needed. There are a few edge cases where order object configuration is necessary:
+Common `ORDER BY` configurations are handled by the `keyset_paginate` method automatically
+so no manual configuration is needed. There are a few edge cases where order object
+configuration is necessary:
- `NULLS LAST` ordering.
- Function-based ordering.
@@ -170,12 +174,13 @@ scope.keyset_paginate # raises: Gitlab::Pagination::Keyset::Paginator::Unsupport
The `keyset_paginate` method raises an error because the order value on the query is a custom SQL string and not an [`Arel`](https://www.rubydoc.info/gems/arel) AST node. The keyset library cannot automatically infer configuration values from these kinds of queries.
-To make keyset pagination work, we need to configure custom order objects, to do so, we need to collect information about the order columns:
+To make keyset pagination work, we must configure custom order objects. To do so, we must
+collect information about the order columns:
-- `relative_position` can have duplicated values since no unique index is present.
-- `relative_position` can have null values because we don't have a not null constraint on the column. For this, we need to determine where will we see NULL values, at the beginning of the resultset or the end (`NULLS LAST`).
-- Keyset pagination requires distinct order columns, so we'll need to add the primary key (`id`) to make the order distinct.
-- Jumping to the last page and paginating backwards actually reverses the `ORDER BY` clause. For this, we'll need to provide the reversed `ORDER BY` clause.
+- `relative_position` can have duplicated values because no unique index is present.
+- `relative_position` can have null values because we don't have a not null constraint on the column. For this, we must determine where NULL values appear: at the beginning of the result set or at the end (`NULLS LAST`).
+- Keyset pagination requires distinct order columns, so we must add the primary key (`id`) to make the order distinct.
+- Jumping to the last page and paginating backwards actually reverses the `ORDER BY` clause. For this, we must provide the reversed `ORDER BY` clause.
Example:
@@ -206,7 +211,8 @@ scope.keyset_paginate.records # works
### Function-based ordering
-In the following example, we multiply the `id` by 10 and ordering by that value. Since the `id` column is unique, we need to define only one column:
+In the following example, we multiply the `id` by 10 and order by that value. Because the `id`
+column is unique, we define only one column:
```ruby
order = Gitlab::Pagination::Keyset::Order.build([
@@ -233,7 +239,8 @@ The `add_to_projections` flag tells the paginator to expose the column expressio
### `iid` based ordering
-When ordering issues, the database ensures that we'll have distinct `iid` values within a project. Ordering by one column is enough to make the pagination work if the `project_id` filter is present:
+When ordering issues, the database ensures that we have distinct `iid` values in a project.
+Ordering by one column is enough to make the pagination work if the `project_id` filter is present:
```ruby
order = Gitlab::Pagination::Keyset::Order.build([
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 71dcc5bb866..0fd9f821fab 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -24,24 +24,26 @@ configurations. For example, given a `config/database.yml` like below:
```yaml
development:
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_development
- host: /path/to/gdk/postgresql
- pool: 10
- prepared_statements: false
- variables:
- statement_timeout: 120s
+ main:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_development
+ host: /path/to/gdk/postgresql
+ pool: 10
+ prepared_statements: false
+ variables:
+ statement_timeout: 120s
test: &test
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_test
- host: /path/to/gdk/postgresql
- pool: 10
- prepared_statements: false
- variables:
- statement_timeout: 120s
+ main:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_test
+ host: /path/to/gdk/postgresql
+ pool: 10
+ prepared_statements: false
+ variables:
+ statement_timeout: 120s
```
Edit the `config/database.yml` to look like this:
@@ -98,18 +100,45 @@ and their tables must be placed in two directories for now:
We aim to keep the schema for both tables the same across both databases.
+<!--
+NOTE: The `validate_cross_joins!` method in `spec/support/database/prevent_cross_joins.rb` references
+ the following heading in the code, so if you make a change to this heading, make sure to update
+ the corresponding documentation URL used in `spec/support/database/prevent_cross_joins.rb`.
+-->
+
### Removing joins between `ci_*` and non `ci_*` tables
-We are planning on moving all the `ci_*` tables to a separate database so
+Queries that join across databases raise an error. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/68620)
+in GitLab 14.3, for new queries only. Pre-existing queries do not raise an error.
+
+We are planning on moving all the `ci_*` tables to a separate database, so
referencing `ci_*` tables with other tables will not be possible. This means
that using any kind of `JOIN` in SQL queries will not work. We have already
identified many such examples that need to be fixed in
<https://gitlab.com/groups/gitlab-org/-/epics/6289>.
-The following are some real examples that have resulted from this and these
-patterns may apply to future cases.
+#### Path to removing cross-database joins
+
+The following steps describe the process to remove cross-database joins between
+`ci_*` and non `ci_*` tables:
+
+1. **{check-circle}** Add all failing specs to the [`cross-join-allowlist.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/f5de89daeb468fc45e1e95a76d1b5297aa53da11/spec/support/database/cross-join-allowlist.yml)
+ file.
+1. **{dotted-circle}** Find the code that caused the spec failure and wrap the isolated code
+ in [`allow_cross_joins_across_databases`](#allowlist-for-existing-cross-joins).
+ Link to a new issue assigned to the correct team to remove the specs from the
+ `cross-join-allowlist.yml` file.
+1. **{dotted-circle}** Remove the `cross-join-allowlist.yml` file and stop allowing
+ whole test files.
+1. **{dotted-circle}** Fix the problem and remove the `allow_cross_joins_across_databases` call.
+1. **{dotted-circle}** Fix all the cross-joins and remove the `allow_cross_joins_across_databases` method.
+
+#### Suggestions for removing cross-database joins
-#### Remove the code
+The following sections are some real examples that were identified as joining across databases,
+along with possible suggestions on how to fix them.
+
+##### Remove the code
The simplest solution we've seen several times now has been an existing scope
that is unused. This is the easiest example to fix. So the first step is to
@@ -131,7 +160,7 @@ to evaluate, because `UsageData` is not critical to users and it may be possible
to get a similarly useful metric with a simpler approach. Alternatively we may
find that nobody is using these metrics, so we can remove them.
-#### Use `preload` instead of `includes`
+##### Use `preload` instead of `includes`
The `includes` and `preload` methods in Rails are both ways to avoid an N+1
query. The `includes` method in Rails uses a heuristic approach to determine
if it needs to join to the table, or if it can load the records in a separate
query. The `preload` method always loads the records in a separate query, and so
allows you to avoid the join, while still avoiding the N+1 query.
You can see a real example of this solution being used in
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67655>.
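+
+A minimal sketch of the difference, assuming a `Project` model with a `builds`
+association whose table lives in the separate `ci` database:
+
+```ruby
+# `includes` lets Rails choose: with conditions referencing the association's
+# table, it can switch to a single JOIN query, which would join across databases.
+projects = Project.includes(:builds)
+
+# `preload` always loads the association with a separate query per table,
+# so no cross-database join can be generated.
+Project.preload(:builds).limit(10).each do |project|
+  puts project.builds.size
+end
+```
+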
-#### De-normalize some foreign key to the table
+##### De-normalize some foreign key to the table
De-normalization refers to adding redundant precomputed (duplicated) data to
a table to simplify certain queries or to improve performance. In this
@@ -198,7 +227,7 @@ You can see this approach implemented in
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66963>. This MR also
de-normalizes `pipeline_id` to fix a similar query.
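+
+As a hedged sketch of this pattern (class, table, and column names are
+hypothetical, not taken from the MRs above), the de-normalization starts with
+a migration that adds the redundant column:
+
+```ruby
+# Hypothetical: copy `project_id` onto a `ci_*` table so that later queries
+# can filter on it directly instead of joining to a non ci_* table.
+class AddProjectIdToCiResources < Gitlab::Database::Migration[1.0]
+  enable_lock_retries!
+
+  def change
+    add_column :ci_resources, :project_id, :bigint
+  end
+end
+```
+
+A backfill of the existing rows, plus any index the new query needs, would
+follow in separate migrations.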
-#### De-normalize into an extra table
+##### De-normalize into an extra table
Sometimes the previous de-normalization (adding an extra column) doesn't work for
your specific case. This may be due to the fact that your data is not 1:1, or
@@ -245,18 +274,88 @@ logic to delete these rows if or whenever necessary in your domain.
Finally, this de-normalization and new query also improve performance because
they require fewer joins and less filtering.
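+
+As a hedged illustration (table and column names are hypothetical), the extra
+table replicates only the minimum data needed to satisfy the join:
+
+```ruby
+# Hypothetical: a ci_* side table mirroring just enough project data that
+# ci_* queries never need to join to the projects table.
+class CreateCiProjectMirrors < Gitlab::Database::Migration[1.0]
+  def change
+    create_table :ci_project_mirrors do |t|
+      t.bigint :project_id, null: false, index: { unique: true }
+      t.bigint :namespace_id, null: false, index: true
+    end
+  end
+end
+```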
-#### Summary of cross-join removal patterns
+##### Use `disable_joins` for `has_one` or `has_many` `through:` relations
+
+Sometimes a join query is caused by using `has_one ... through:` or `has_many
+... through:` across tables that span the different databases. These joins
+can sometimes be removed by adding
+[`disable_joins: true`](https://edgeguides.rubyonrails.org/active_record_multiple_databases.html#handling-associations-with-joins-across-databases).
+This is a Rails feature that we
+[backported](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66400). We
+also extended the feature to allow a lambda syntax for enabling `disable_joins`
+with a feature flag. If you use this feature, we encourage you to guard it with
+a feature flag, as this mitigates the risk of a serious performance regression.
+
+You can see an example where this was used in
+<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66709/diffs>.
+
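+The lambda syntax mentioned above might be used as in the following sketch
+(the feature flag name is hypothetical):
+
+```ruby
+class Project < ApplicationRecord
+  has_many :pipelines
+  # Skip the join only while the (hypothetical) feature flag is enabled, so
+  # the change can be rolled back quickly if performance regresses.
+  has_many :builds, through: :pipelines,
+    disable_joins: -> { ::Feature.enabled?(:project_builds_disable_joins) }
+end
+```
+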
+With any change to database queries, it is important to analyze and compare the
+SQL before and after the change. `disable_joins` can introduce very poorly
+performing code depending on the actual logic of the `has_many` or `has_one`
+relationship. The key thing to look for is whether any of the intermediate
+result sets used to construct the final result set load an unbounded amount of
+data. The best way to tell is by looking at the generated SQL and confirming
+that each query is limited in some way, either by a `LIMIT 1` clause or by a
+`WHERE` clause that filters on a unique column. Any unbounded
+intermediate dataset could lead to loading too many IDs into memory.
+
+An example where you may see very poor performance is the following
+hypothetical code:
+
+```ruby
+class Project < ApplicationRecord
+ has_many :pipelines
+ has_many :builds, through: :pipelines
+end
+
+class Pipeline < ApplicationRecord
+ has_many :builds
+end
+
+class Build < ApplicationRecord
+ belongs_to :pipeline
+end
+
+def some_action
+ @builds = Project.find(5).builds.order(created_at: :desc).limit(10)
+end
+```
+
+In the above case, `some_action` generates a query like:
+
+```sql
+select * from builds
+inner join pipelines on builds.pipeline_id = pipelines.id
+where pipelines.project_id = 5
+order by builds.created_at desc
+limit 10
+```
+
+However, if you changed the relation to be:
+
+```ruby
+class Project < ApplicationRecord
+ has_many :pipelines
+ has_many :builds, through: :pipelines, disable_joins: true
+end
+```
-A quick checklist for fixing a specific join query would be:
+Then you would get the following two queries:
-1. Is the code even used? If not just remove it
-1. If the code is used, then is this feature even used or can we implement the
- feature in a simpler way and still meet the requirements. Always prefer the
- simplest option.
-1. Can we remove the join if we de-normalize the data you are joining to by
- adding a new column
-1. Can we remove the join by adding a new table in the correct database that
- replicates the minimum data needed to do the join
+```sql
+select id from pipelines where project_id = 5;
+
+select * from builds where pipeline_id in (...)
+order by created_at desc
+limit 10;
+```
+
+Because the first query is not limited by a unique column or
+a `LIMIT` clause, it can load an unlimited number of
+pipeline IDs into memory, which are then sent in the following query.
+This can lead to very poor performance in the Rails application and the
+database. In cases like this, you might need to rewrite the
+query or use one of the other patterns described above for removing cross-joins.
#### How to validate you have correctly removed a cross-join
@@ -291,3 +390,74 @@ end
You can see a real example of using this method for fixing a cross-join in
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67655>.
+
+#### Allowlist for existing cross-joins
+
+A cross-join across databases can be explicitly allowed by wrapping the code in the
+`::Gitlab::Database.allow_cross_joins_across_databases` helper method.
+
+This method should only be used:
+
+- For existing code.
+- If the code is required to help migrate away from a cross-join. For example,
+ in a migration that backfills data for future use to remove a cross-join.
+
+The `allow_cross_joins_across_databases` helper method can be used as follows:
+
+```ruby
+::Gitlab::Database.allow_cross_joins_across_databases(url: 'https://gitlab.com/gitlab-org/gitlab/-/issues/336590') do
+ subject.perform(1, 4)
+end
+```
+
+The `url` parameter should point to an issue with a milestone for when we intend
+to fix the cross-join. If the cross-join is being used in a migration, we do not
+need to fix the code. See <https://gitlab.com/gitlab-org/gitlab/-/issues/340017>
+for more details.
+
+## `config/database.yml`
+
+GitLab will support running multiple databases in the future, for example to [separate tables for the continuous integration features](https://gitlab.com/groups/gitlab-org/-/epics/6167) from the main database. To prepare for this change, we [validate the structure of the configuration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67877) in `database.yml` to ensure that only known databases are used.
+
+Previously, `config/database.yml` would look like this:
+
+```yaml
+production:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_production
+ ...
+```
+
+With the introduction of multiple database support, this
+syntax is deprecated and will be removed in [15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/338182).
+
+The new `config/database.yml` needs to include a database name
+to define a database configuration. Only the `main:` and `ci:` database
+names are supported today. The `main:` database must always be the first
+entry in the hash. This applies to both decomposed and non-decomposed
+configurations. If an invalid or deprecated syntax is used, an error
+or warning is printed during application start.
+
+```yaml
+# Non-decomposed database
+production:
+ main:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_production
+ ...
+
+# Decomposed database
+production:
+ main:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_production
+ ...
+ ci:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_production_ci
+ ...
+```
diff --git a/doc/development/database/not_null_constraints.md b/doc/development/database/not_null_constraints.md
index 178a207dab5..de070f7e434 100644
--- a/doc/development/database/not_null_constraints.md
+++ b/doc/development/database/not_null_constraints.md
@@ -25,7 +25,7 @@ For example, consider a migration that creates a table with two `NOT NULL` colum
`db/migrate/20200401000001_create_db_guides.rb`:
```ruby
-class CreateDbGuides < ActiveRecord::Migration[6.0]
+class CreateDbGuides < Gitlab::Database::Migration[1.0]
def change
create_table :db_guides do |t|
t.bigint :stars, default: 0, null: false
@@ -44,7 +44,7 @@ For example, consider a migration that adds a new `NOT NULL` column `active` to
`db/migrate/20200501000001_add_active_to_db_guides.rb`:
```ruby
-class AddExtendedTitleToSprints < ActiveRecord::Migration[6.0]
+class AddActiveToDbGuides < Gitlab::Database::Migration[1.0]
def change
add_column :db_guides, :active, :boolean, default: true, null: false
end
@@ -111,9 +111,7 @@ with `validate: false` in a post-deployment migration,
`db/post_migrate/20200501000001_add_not_null_constraint_to_epics_description.rb`:
```ruby
-class AddNotNullConstraintToEpicsDescription < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class AddNotNullConstraintToEpicsDescription < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
@@ -144,9 +142,7 @@ so we are going to add a post-deployment migration for the 13.0 milestone (curre
`db/post_migrate/20200501000002_cleanup_epics_with_null_description.rb`:
```ruby
-class CleanupEpicsWithNullDescription < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class CleanupEpicsWithNullDescription < Gitlab::Database::Migration[1.0]
# With BATCH_SIZE=1000 and epics.count=29500 on GitLab.com
# - 30 iterations will be run
# - each requires on average ~150ms
@@ -184,9 +180,7 @@ migration helper in a final post-deployment migration,
`db/post_migrate/20200601000001_validate_not_null_constraint_on_epics_description.rb`:
```ruby
-class ValidateNotNullConstraintOnEpicsDescription < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class ValidateNotNullConstraintOnEpicsDescription < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
diff --git a/doc/development/database/pagination_guidelines.md b/doc/development/database/pagination_guidelines.md
index b7209b4ca30..3a772b10a6d 100644
--- a/doc/development/database/pagination_guidelines.md
+++ b/doc/development/database/pagination_guidelines.md
@@ -302,7 +302,7 @@ LIMIT 20
##### Tooling
-A generic keyset pagination library is available within the GitLab project which can most of the cases easly replace the existing, kaminari based pagination with significant performance improvements when dealing with large datasets.
+A generic keyset pagination library is available within the GitLab project, which can in most cases easily replace the existing Kaminari-based pagination, with significant performance improvements when dealing with large datasets.
Example:
diff --git a/doc/development/database/rename_database_tables.md b/doc/development/database/rename_database_tables.md
index 8ac50d2c0a0..881adf00ad0 100644
--- a/doc/development/database/rename_database_tables.md
+++ b/doc/development/database/rename_database_tables.md
@@ -60,7 +60,7 @@ Consider the next release as "Release N.M".
Execute a standard migration (not a post-migration):
```ruby
- include Gitlab::Database::MigrationHelpers
+ enable_lock_retries!
def up
rename_table_safely(:issues, :tickets)
@@ -96,8 +96,6 @@ At this point, we don't have applications using the old database table name in t
1. Remove the database view through a post-migration:
```ruby
- include Gitlab::Database::MigrationHelpers
-
def up
finalize_table_rename(:issues, :tickets)
end
diff --git a/doc/development/database/strings_and_the_text_data_type.md b/doc/development/database/strings_and_the_text_data_type.md
index 688d811b897..a0dda42fdc7 100644
--- a/doc/development/database/strings_and_the_text_data_type.md
+++ b/doc/development/database/strings_and_the_text_data_type.md
@@ -11,11 +11,13 @@ info: To determine the technical writer assigned to the Stage/Group associated w
When adding new columns that will be used to store strings or other textual information:
1. We always use the `text` data type instead of the `string` data type.
-1. `text` columns should always have a limit set, either by using the `create_table_with_constraints` helper
-when creating a table, or by using the `add_text_limit` when altering an existing table.
+1. `text` columns should always have a limit set, either by using `create_table` with
+the `#text ... limit: 100` helper (see below) when creating a table, or by using `add_text_limit`
+when altering an existing table.
-The `text` data type can not be defined with a limit, so `create_table_with_constraints` and `add_text_limit` enforce
-that by adding a [check constraint](https://www.postgresql.org/docs/11/ddl-constraints.html) on the column.
+The standard Rails `text` column type cannot be defined with a limit, but we extend `create_table` to
+add a `limit: 255` option. Outside of `create_table`, `add_text_limit` can be used to add a [check constraint](https://www.postgresql.org/docs/11/ddl-constraints.html)
+to an already existing column.
## Background information
@@ -41,36 +43,24 @@ Don't use text columns for `attr_encrypted` attributes. Use a
## Create a new table with text columns
When adding a new table, the limits for all text columns should be added in the same migration as
-the table creation.
+the table creation. We add a `limit:` attribute to Rails' `#text` method, which allows adding a limit
+for this column.
For example, consider a migration that creates a table with two text columns,
`db/migrate/20200401000001_create_db_guides.rb`:
```ruby
-class CreateDbGuides < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
- def up
- create_table_with_constraints :db_guides do |t|
+class CreateDbGuides < Gitlab::Database::Migration[1.0]
+ def change
+ create_table :db_guides do |t|
t.bigint :stars, default: 0, null: false
- t.text :title
- t.text :notes
-
- t.text_limit :title, 128
- t.text_limit :notes, 1024
+ t.text :title, limit: 128
+ t.text :notes, limit: 1024
end
end
-
- def down
- # No need to drop the constraints, drop_table takes care of everything
- drop_table :db_guides
- end
end
```
-Note that the `create_table_with_constraints` helper uses the `with_lock_retries` helper
-internally, so we don't need to manually wrap the method call in the migration.
-
## Add a text column to an existing table
Adding a column to an existing table requires an exclusive lock for that table. Even though that lock
@@ -84,7 +74,7 @@ For example, consider a migration that adds a new text column `extended_title` t
`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
```ruby
-class AddExtendedTitleToSprints < ActiveRecord::Migration[6.0]
+class AddExtendedTitleToSprints < Gitlab::Database::Migration[1.0]
# rubocop:disable Migration/AddLimitToTextColumns
# limit is added in 20200501000002_add_text_limit_to_sprints_extended_title
@@ -99,8 +89,7 @@ A second migration should follow the first one with a limit added to `extended_t
`db/migrate/20200501000002_add_text_limit_to_sprints_extended_title.rb`:
```ruby
-class AddTextLimitToSprintsExtendedTitle < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
+class AddTextLimitToSprintsExtendedTitle < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
@@ -175,9 +164,7 @@ in a post-deployment migration,
`db/post_migrate/20200501000001_add_text_limit_migration.rb`:
```ruby
-class AddTextLimitMigration < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class AddTextLimitMigration < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
@@ -208,9 +195,7 @@ to add a background migration for the 13.0 milestone (current),
`db/post_migrate/20200501000002_schedule_cap_title_length_on_issues.rb`:
```ruby
-class ScheduleCapTitleLengthOnIssues < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class ScheduleCapTitleLengthOnIssues < Gitlab::Database::Migration[1.0]
# Info on how many records will be affected on GitLab.com
# time each batch needs to run on average, etc ...
BATCH_SIZE = 5000
@@ -255,9 +240,7 @@ helper in a final post-deployment migration,
`db/post_migrate/20200601000001_validate_text_limit_migration.rb`:
```ruby
-class ValidateTextLimitMigration < ActiveRecord::Migration[6.0]
- include Gitlab::Database::MigrationHelpers
-
+class ValidateTextLimitMigration < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
diff --git a/doc/development/database/table_partitioning.md b/doc/development/database/table_partitioning.md
index 207d5a733ce..5319c73aad0 100644
--- a/doc/development/database/table_partitioning.md
+++ b/doc/development/database/table_partitioning.md
@@ -173,7 +173,7 @@ An example migration of partitioning the `audit_events` table by its
`created_at` column would look like:
```ruby
-class PartitionAuditEvents < ActiveRecord::Migration[6.0]
+class PartitionAuditEvents < Gitlab::Database::Migration[1.0]
include Gitlab::Database::PartitioningMigrationHelpers
def up
@@ -200,7 +200,7 @@ into the partitioned copy.
Continuing the above example, the migration would look like:
```ruby
-class BackfillPartitionAuditEvents < ActiveRecord::Migration[6.0]
+class BackfillPartitionAuditEvents < Gitlab::Database::Migration[1.0]
include Gitlab::Database::PartitioningMigrationHelpers
def up
@@ -233,7 +233,7 @@ failed jobs.
Once again, continuing the example, this migration would look like:
```ruby
-class CleanupPartitionedAuditEventsBackfill < ActiveRecord::Migration[6.0]
+class CleanupPartitionedAuditEventsBackfill < Gitlab::Database::Migration[1.0]
include Gitlab::Database::PartitioningMigrationHelpers
def up
diff --git a/doc/development/database/transaction_guidelines.md b/doc/development/database/transaction_guidelines.md
index 1c25496b153..4c586135015 100644
--- a/doc/development/database/transaction_guidelines.md
+++ b/doc/development/database/transaction_guidelines.md
@@ -12,7 +12,7 @@ For further reference please check PostgreSQL documentation about [transactions]
## Database decomposition and sharding
-The [sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding) plans to split the main GitLab database and move some of the database tables to other database servers.
+The [sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding/) plans to split the main GitLab database and move some of the database tables to other database servers.
The group will start decomposing the `ci_*` related database tables first. To maintain the current application development experience, tooling and static analyzers will be added to the codebase to ensure correct data access and data modification methods. By using the correct form for defining database transactions, we can save significant refactoring work in the future.
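+
+As a hedged sketch of what the correct form can look like in practice (an
+assumption based on the decomposition work, not a rule quoted from the
+tooling), prefer scoping transactions to a model class instead of the global
+`ActiveRecord::Base`:
+
+```ruby
+issue = Issue.find(1)
+
+# Scoped to the connection of the models being written, which keeps the
+# transaction meaningful after tables move to separate database servers.
+ApplicationRecord.transaction do
+  issue.update!(title: 'Updated title')
+  issue.notes.create!(note: 'Title updated')
+end
+```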