gitlab.com/gitlab-org/gitlab-foss.git

author    GitLab Bot <gitlab-bot@gitlab.com> 2022-07-20 18:40:28 +0300
committer GitLab Bot <gitlab-bot@gitlab.com> 2022-07-20 18:40:28 +0300
commit    b595cb0c1dec83de5bdee18284abe86614bed33b (patch)
tree      8c3d4540f193c5ff98019352f554e921b3a41a72 /doc/development
parent    2f9104a328fc8a4bddeaa4627b595166d24671d0 (diff)
Add latest changes from gitlab-org/gitlab@15-2-stable-ee (v15.2.0-rc42)
Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/adding_database_indexes.md | 124
-rw-r--r--  doc/development/api_graphql_styleguide.md | 62
-rw-r--r--  doc/development/application_limits.md | 5
-rw-r--r--  doc/development/application_slis/rails_request_apdex.md | 2
-rw-r--r--  doc/development/appsec/index.md | 35
-rw-r--r--  doc/development/backend/ruby_style_guide.md | 2
-rw-r--r--  doc/development/cicd/pipeline_wizard.md | 18
-rw-r--r--  doc/development/cicd/templates.md | 33
-rw-r--r--  doc/development/code_review.md | 69
-rw-r--r--  doc/development/contributing/index.md | 2
-rw-r--r--  doc/development/contributing/issue_workflow.md | 2
-rw-r--r--  doc/development/dangerbot.md | 12
-rw-r--r--  doc/development/database/add_foreign_key_to_existing_column.md | 21
-rw-r--r--  doc/development/database/batched_background_migrations.md | 135
-rw-r--r--  doc/development/database/constraint_naming_convention.md | 2
-rw-r--r--  doc/development/database/loose_foreign_keys.md | 4
-rw-r--r--  doc/development/database/multiple_databases.md | 61
-rw-r--r--  doc/development/database/transaction_guidelines.md | 4
-rw-r--r--  doc/development/deprecation_guidelines/img/deprecation_removal_process.png | bin 53890 -> 27632 bytes
-rw-r--r--  doc/development/deprecation_guidelines/index.md | 4
-rw-r--r--  doc/development/development_processes.md | 124
-rw-r--r--  doc/development/documentation/restful_api_styleguide.md | 20
-rw-r--r--  doc/development/documentation/site_architecture/folder_structure.md | 9
-rw-r--r--  doc/development/documentation/site_architecture/index.md | 13
-rw-r--r--  doc/development/documentation/styleguide/index.md | 15
-rw-r--r--  doc/development/documentation/testing.md | 1
-rw-r--r--  doc/development/event_store.md | 21
-rw-r--r--  doc/development/experiment_guide/experiment_code_reviews.md | 2
-rw-r--r--  doc/development/experiment_guide/experiment_rollout.md | 2
-rw-r--r--  doc/development/experiment_guide/implementing_experiments.md | 2
-rw-r--r--  doc/development/experiment_guide/index.md | 2
-rw-r--r--  doc/development/experiment_guide/testing_experiments.md | 2
-rw-r--r--  doc/development/fe_guide/graphql.md | 8
-rw-r--r--  doc/development/fe_guide/haml.md | 19
-rw-r--r--  doc/development/fe_guide/index.md | 4
-rw-r--r--  doc/development/fe_guide/style/scss.md | 2
-rw-r--r--  doc/development/fe_guide/view_component.md | 174
-rw-r--r--  doc/development/fe_guide/vue.md | 209
-rw-r--r--  doc/development/feature_development.md | 197
-rw-r--r--  doc/development/feature_flags/index.md | 7
-rw-r--r--  doc/development/fips_compliance.md | 227
-rw-r--r--  doc/development/foreign_keys.md | 73
-rw-r--r--  doc/development/geo.md | 255
-rw-r--r--  doc/development/geo/proxying.md | 356
-rw-r--r--  doc/development/git_object_deduplication.md | 3
-rw-r--r--  doc/development/gitaly.md | 28
-rw-r--r--  doc/development/gitlab_flavored_markdown/specification_guide/index.md | 79
-rw-r--r--  doc/development/go_guide/index.md | 37
-rw-r--r--  doc/development/i18n/externalization.md | 50
-rw-r--r--  doc/development/i18n/proofreader.md | 4
-rw-r--r--  doc/development/import_project.md | 67
-rw-r--r--  doc/development/index.md | 304
-rw-r--r--  doc/development/integrations/index.md | 1
-rw-r--r--  doc/development/integrations/secure.md | 46
-rw-r--r--  doc/development/internal_api/index.md | 7
-rw-r--r--  doc/development/iterating_tables_in_batches.md | 35
-rw-r--r--  doc/development/licensed_feature_availability.md | 9
-rw-r--r--  doc/development/migration_style_guide.md | 14
-rw-r--r--  doc/development/omnibus.md | 2
-rw-r--r--  doc/development/packages/debian_repository.md | 151
-rw-r--r--  doc/development/packages/structure.md | 1
-rw-r--r--  doc/development/performance.md | 6
-rw-r--r--  doc/development/pipelines.md | 41
-rw-r--r--  doc/development/product_qualified_lead_guide/index.md | 2
-rw-r--r--  doc/development/rails_initializers.md | 18
-rw-r--r--  doc/development/rake_tasks.md | 30
-rw-r--r--  doc/development/reusing_abstractions.md | 4
-rw-r--r--  doc/development/secure_coding_guidelines.md | 28
-rw-r--r--  doc/development/service_ping/implement.md | 4
-rw-r--r--  doc/development/service_ping/index.md | 40
-rw-r--r--  doc/development/service_ping/metrics_dictionary.md | 40
-rw-r--r--  doc/development/service_ping/metrics_instrumentation.md | 2
-rw-r--r--  doc/development/service_ping/metrics_lifecycle.md | 2
-rw-r--r--  doc/development/service_ping/performance_indicator_metrics.md | 2
-rw-r--r--  doc/development/service_ping/review_guidelines.md | 2
-rw-r--r--  doc/development/service_ping/troubleshooting.md | 12
-rw-r--r--  doc/development/service_ping/usage_data.md | 2
-rw-r--r--  doc/development/sidekiq/compatibility_across_updates.md | 5
-rw-r--r--  doc/development/snowplow/event_dictionary_guide.md | 2
-rw-r--r--  doc/development/snowplow/implementation.md | 18
-rw-r--r--  doc/development/snowplow/index.md | 2
-rw-r--r--  doc/development/snowplow/infrastructure.md | 2
-rw-r--r--  doc/development/snowplow/review_guidelines.md | 2
-rw-r--r--  doc/development/snowplow/schemas.md | 2
-rw-r--r--  doc/development/snowplow/troubleshooting.md | 2
-rw-r--r--  doc/development/stage_group_dashboards.md | 11
-rw-r--r--  doc/development/testing_guide/best_practices.md | 60
-rw-r--r--  doc/development/testing_guide/contract/consumer_tests.md | 44
-rw-r--r--  doc/development/testing_guide/contract/index.md | 39
-rw-r--r--  doc/development/testing_guide/contract/provider_tests.md | 77
-rw-r--r--  doc/development/testing_guide/end_to_end/best_practices.md | 2
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md | 2
-rw-r--r--  doc/development/testing_guide/end_to_end/resources.md | 10
-rw-r--r--  doc/development/testing_guide/review_apps.md | 2
-rw-r--r--  doc/development/uploads.md | 9
-rw-r--r--  doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md | 2
-rw-r--r--  doc/development/workhorse/new_features.md | 2
97 files changed, 2777 insertions(+), 932 deletions(-)
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index f524b04c6eb..e80bffe7c18 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -20,27 +20,24 @@ WHERE user_id = 2;
Here we are filtering by the `user_id` column, so a developer may decide
to index this column.
-While in certain cases indexing columns using the above approach may make sense
-it can actually have a negative impact. Whenever you write data to a table any
-existing indexes need to be updated. The more indexes there are the slower this
-can potentially become. Indexes can also take up quite some disk space depending
+While in certain cases indexing columns using the above approach may make sense,
+it can actually have a negative impact. Whenever you write data to a table, any
+existing indexes must also be updated. The more indexes there are, the slower this
+can potentially become. Indexes can also take up significant disk space, depending
on the amount of data indexed and the index type. For example, PostgreSQL offers
-"GIN" indexes which can be used to index certain data types that can not be
-indexed by regular B-tree indexes. These indexes however generally take up more
+`GIN` indexes which can be used to index certain data types that cannot be
+indexed by regular B-tree indexes. These indexes, however, generally take up more
data and are slower to update compared to B-tree indexes.
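+
+For example, a `GIN` index on an array column might be created like this (an
+illustrative statement, not taken from the GitLab schema):
+
+```sql
+CREATE INDEX index_projects_on_topics ON projects USING gin (topics);
+```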
-Because of all this one should not blindly add a new index for every column used
-to filter data by. Instead one should ask themselves the following questions:
+Because of all this, it's important to consider the following questions
+when adding a new index:
-1. Can you write your query in such a way that it re-uses as many existing indexes
- as possible?
-1. Is the data large enough that using an index is actually
- faster than just iterating over the rows in the table?
+1. Do the new queries re-use as many existing indexes as possible?
+1. Is there enough data that using an index is faster than iterating over
+ rows in the table?
1. Is the overhead of maintaining the index worth the reduction in query
timings?
-We explore every question in detail below.
-
## Re-using Queries
The first step is to make sure your query re-uses as many existing indexes as
@@ -59,10 +56,8 @@ unindexed. In reality the query may perform just fine given the index on
`user_id` can filter out enough rows.
The best way to determine if indexes are re-used is to run your query using
-`EXPLAIN ANALYZE`. Depending on any extra tables that may be joined and
-other columns being used for filtering you may find an extra index is not going
-to make much (if any) difference. On the other hand you may determine that the
-index _may_ make a difference.
+`EXPLAIN ANALYZE`. Depending on the joined tables and the columns being used for filtering,
+you may find an extra index doesn't make much, if any, difference.
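+
+For example, to check whether the planner uses the existing `user_id` index for
+the query at the top of this page (a sketch; the plan depends on your data):
+
+```sql
+EXPLAIN ANALYZE
+SELECT *
+FROM projects
+WHERE user_id = 2;
+```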
In short:
@@ -73,28 +68,24 @@ In short:
## Data Size
-A database may decide not to use an index despite it existing in case a regular
-sequence scan (= simply iterating over all existing rows) is faster. This is
-especially the case for small tables.
+A database may decide not to use an index when a regular sequence scan
+(iterating over all rows) is faster, especially for small tables.
-If a table is expected to grow in size and you expect your query has to filter
-out a lot of rows you may want to consider adding an index. If the table size is
-very small (for example, fewer than `1,000` records) or any existing indexes filter out
-enough rows you may _not_ want to add a new index.
+Consider adding an index if a table is expected to grow, and your query has to filter a lot of rows.
+You may _not_ want to add an index if the table size is small (<`1,000` records),
+or if existing indexes already filter out enough rows.
## Maintenance Overhead
-Indexes have to be updated on every table write. In case of PostgreSQL _all_
+Indexes have to be updated on every table write. In the case of PostgreSQL, _all_
existing indexes are updated whenever data is written to a table. As a
-result of this having many indexes on the same table slows down writes.
-
-Because of this one should ask themselves: is the reduction in query performance
-worth the overhead of maintaining an extra index?
+result, having many indexes on the same table slows down writes. It's therefore important
+to balance query performance with the overhead of maintaining an extra index.
-If adding an index reduces SELECT timings by 5 milliseconds but increases
-INSERT/UPDATE/DELETE timings by 10 milliseconds then the index may not be worth
-it. On the other hand, if SELECT timings are reduced but INSERT/UPDATE/DELETE
-timings are not affected you may want to add the index after all.
+Let's say that adding an index reduces SELECT timings by 5 milliseconds but increases
+INSERT/UPDATE/DELETE timings by 10 milliseconds. In this case, the new index may not be worth
+it. A new index is more valuable when SELECT timings are reduced and INSERT/UPDATE/DELETE
+timings are unaffected.
## Finding Unused Indexes
@@ -111,26 +102,32 @@ ORDER BY pg_relation_size(indexrelname::regclass) desc;
```
This query outputs a list containing all indexes that are never used and sorts
-them by indexes sizes in descending order. This query can be useful to
-determine if any previously indexes are useful after all. More information on
+them by index size in descending order. This query helps in
+determining whether existing indexes are still required. More information on
the meaning of the various columns can be found at
<https://www.postgresql.org/docs/current/monitoring-stats.html>.
-Because the output of this query relies on the actual usage of your database it
-may be affected by factors such as (but not limited to):
+To determine if an index is still being used on production, use the following
+Thanos query with your index name:
+
+```promql
+sum(rate(pg_stat_user_indexes_idx_tup_read{env="gprd", indexrelname="index_ci_name", type="patroni-ci"}[5m]))
+```
+
+Because the query output relies on the actual usage of your database, it
+may be affected by factors such as:
- Certain queries never being executed, thus not being able to use certain
indexes.
- Certain tables having little data, resulting in PostgreSQL using sequence
scans instead of index scans.
-In other words, this data is only reliable for a frequently used database with
-plenty of data and with as many GitLab features enabled (and being used) as
-possible.
+This data is only reliable for a frequently used database with
+plenty of data, and with as many GitLab features in use as possible.
## Requirements for naming indexes
-Indexes with complex definitions need to be explicitly named rather than
+Indexes with complex definitions must be explicitly named rather than
relying on the implicit naming behavior of migration methods. In short,
that means you **must** provide an explicit name argument for an index
created with one or more of the following options:
@@ -144,16 +141,7 @@ created with one or more of the following options:
### Considerations for index names
-Index names don't have any significance in the database, so they should
-attempt to communicate intent to others. The most important rule to
-remember is that generic names are more likely to conflict or be duplicated,
-and should not be used. Some other points to consider:
-
-- For general indexes, use a template, like: `index_{table}_{column}_{options}`.
-- For indexes added to solve a very specific problem, it may make sense
- for the name to reflect their use.
-- Identifiers in PostgreSQL have a maximum length of 63 bytes.
-- Check `db/structure.sql` for conflicts and ideas.
+Check our [Constraints naming conventions](database/constraint_naming_convention.md) page.
### Why explicit names are required
@@ -172,7 +160,7 @@ end
Creation of the second index would fail, because Rails would generate
the same name for both indexes.
-This is further complicated by the behavior of the `index_exists?` method.
+This naming issue is further complicated by the behavior of the `index_exists?` method.
It considers only the table name, column names, and uniqueness specification
of the index when making a comparison. Consider:
@@ -188,7 +176,7 @@ The call to `index_exists?` returns true if **any** index exists on
`:my_table` and `:my_column`, and index creation is bypassed.
The `add_concurrent_index` helper is a requirement for creating indexes
-on populated tables. Since it cannot be used inside a transactional
+on populated tables. Because it cannot be used inside a transactional
migration, it has a built-in check that detects if the index already
exists. In the event a match is found, index creation is skipped.
Without an explicit name argument, Rails can return a false positive
@@ -201,14 +189,14 @@ chance of error is greatly reduced.
There may be times when an index is only needed temporarily.
For example, in a migration, a column of a table might be conditionally
-updated. To query which columns need to be updated within the
-[query performance guidelines](query_performance.md), an index is needed that would otherwise
-not be used.
+updated. To query which columns must be updated in the
+[query performance guidelines](query_performance.md), an index is needed
+that would otherwise not be used.
-In these cases, a temporary index should be considered. To specify a
+In these cases, consider a temporary index. To specify a
temporary index:
-1. Prefix the index name with `tmp_` and follow the [naming conventions](database/constraint_naming_convention.md) and [requirements for naming indexes](#requirements-for-naming-indexes) for the rest of the name.
+1. Prefix the index name with `tmp_` and follow the [naming conventions](database/constraint_naming_convention.md).
1. Create a follow-up issue to remove the index in the next (or future) milestone.
1. Add a comment in the migration mentioning the removal issue.
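+
+A minimal sketch of such a migration (the table, condition, and index name are
+illustrative):
+
+```ruby
+class AddTmpIndexToEmailsOnUnconfirmed < Gitlab::Database::Migration[2.0]
+  # Temporary index; removal is tracked in a follow-up issue.
+  TMP_INDEX_NAME = 'tmp_index_emails_on_user_id_where_unconfirmed'
+
+  disable_ddl_transaction!
+
+  def up
+    add_concurrent_index :emails, :user_id, where: 'confirmed_at IS NULL', name: TMP_INDEX_NAME
+  end
+
+  def down
+    remove_concurrent_index_by_name :emails, TMP_INDEX_NAME
+  end
+end
+```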
@@ -237,10 +225,10 @@ on GitLab.com, the deployment process is blocked waiting for index
creation to finish.
To limit impact on GitLab.com, a process exists to create indexes
-asynchronously during weekend hours. Due to generally lower levels of
-traffic and lack of regular deployments, this process allows the
-creation of indexes to proceed with a lower level of risk. The below
-sections describe the steps required to use these features:
+asynchronously during weekend hours. Due to generally lower traffic and fewer deployments,
+index creation can proceed at a lower level of risk.
+
+### Schedule index creation for a low-impact time
1. [Schedule the index to be created](#schedule-the-index-to-be-created).
1. [Verify the MR was deployed and the index exists in production](#verify-the-mr-was-deployed-and-the-index-exists-in-production).
@@ -291,12 +279,10 @@ migration as expected for other installations. The below block
demonstrates how to create the second migration for the previous
asynchronous example.
-WARNING:
-The responsibility lies on the individual writing the migrations to verify
-the index exists in production before merging a second migration that
-adds the index using `add_concurrent_index`. If the second migration is
-deployed and the index has not yet been created, the index is created
-synchronously when the second migration executes.
+WARNING:
+Verify that the index exists in production before merging a second migration with `add_concurrent_index`.
+If the second migration is deployed before the index has been created,
+the index is created synchronously when the second migration executes.
```ruby
# in db/post_migrate/
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index de6840b2c6c..37de7044765 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -475,17 +475,18 @@ end
Developers can add [feature flags](../development/feature_flags/index.md) to GraphQL
fields in the following ways:
-- Add the `feature_flag` property to a field. This allows the field to be _hidden_
+- Add the [`feature_flag` property](#feature_flag-property) to a field. This allows the field to be _hidden_
from the GraphQL schema when the flag is disabled.
-- Toggle the return value when resolving the field.
+- [Toggle the return value](#toggle-the-value-of-a-field) when resolving the field.
You can refer to these guidelines to decide which approach to use:
- If your field is experimental, and its name or type is subject to
- change, use the `feature_flag` property.
+ change, use the [`feature_flag` property](#feature_flag-property).
- If your field is stable and its definition doesn't change, even after the flag is
- removed, toggle the return value of the field instead. Note that
+ removed, [toggle the return value](#toggle-the-value-of-a-field) of the field instead. Note that
[all fields should be nullable](#nullable-fields) anyway.
+- If your field is accessed from the frontend using the `@include` or `@skip` directive, [do not use the `feature_flag` property](#frontend-and-backend-feature-flag-strategies).
### `feature_flag` property
@@ -517,6 +518,20 @@ field :test_field, type: GraphQL::Types::String,
feature_flag: :my_feature_flag
```
+### Frontend and backend feature flag strategies
+
+#### Directives
+
+When feature flags are used in the frontend to control the `@include` and `@skip` directives, do not use the `feature_flag`
+property on the server-side. For the accepted backend workaround, see [Toggle the value of a field](#toggle-the-value-of-a-field). It is recommended that the feature flag used in this approach be the same for frontend and backend.
+
+Even if the frontend directives evaluate to `@include:false` / `@skip:true`, the guarded entity is sent to the backend and matched
+against the GraphQL schema. We would then get an exception due to a schema mismatch. See the [frontend GraphQL guide](../development/fe_guide/graphql.md#the-include-directive) for more guidance.
+
+#### Different versions of a query
+
+See the frontend GraphQL guide for [different versions of a query](../development/fe_guide/graphql.md#different-versions-of-a-query), and [why it is not the preferred approach](../development/fe_guide/graphql.md#avoiding-multiple-query-versions).
+
### Toggle the value of a field
This method of using feature flags for fields is to toggle the
@@ -524,6 +539,12 @@ return value of the field. This can be done in the resolver, in the
type, or even in a model method, depending on your preference and
situation.
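+
+For example, a minimal resolver-level toggle might look like this (a sketch;
+the field and flag names are illustrative):
+
+```ruby
+def foo
+  # Return `null` whenever the flag is disabled.
+  return unless Feature.enabled?(:my_feature_flag)
+
+  'foo'
+end
+```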
+Consider also [marking the field as Alpha](#marking-schema-items-as-alpha)
+while the value of the field can be toggled. You can
+[change or remove Alpha fields at any time](#breaking-change-exemptions) without needing to deprecate them.
+This also signals to consumers of the public GraphQL API that the field is not
+meant to be used yet.
+
When applying a feature flag to toggle the value of a field, the
`description` of the field must:
@@ -537,6 +558,7 @@ Example:
```ruby
field :foo, GraphQL::Types::String,
null: true,
+ deprecated: { reason: :alpha, milestone: '10.0' },
description: 'Some test field. Returns `null`' \
'if `my_feature_flag` feature flag is disabled.'
@@ -2007,13 +2029,13 @@ end
.to contain_exactly(a_graphql_entity_for(issue, :iid, :title, created_at: some_time))
```
-- Use `GraphqlHelpers#empty_schema` to create an empty schema, rather than creating
+- Use `GraphqlHelpers#empty_schema` to create an empty schema, rather than creating
one by hand. For example:
-
+
```ruby
# good
let(:schema) { empty_schema }
-
+
# bad
let(:query_type) { GraphQL::ObjectType.new }
let(:schema) { GraphQL::Schema.define(query: query_type, mutation: nil)}
@@ -2024,7 +2046,7 @@ end
```ruby
# good
let(:query) { query_double(schema: GitlabSchema) }
-
+
# bad
let(:query) { double('Query', schema: GitlabSchema) }
```
@@ -2092,9 +2114,9 @@ end
```ruby
type Types::IssueType.connection_type, null: true
```
-
+
However this might cause a cyclic definition, which can result in errors like:
-
+
```ruby
NameError: uninitialized constant Resolvers::GroupIssuesResolver
```
@@ -2109,7 +2131,7 @@ end
class IssueConnectionType < CountableConnectionType
end
end
-
+
Types::IssueConnectionType.prepend_mod_with('Types::IssueConnectionType')
```
@@ -2120,22 +2142,22 @@ end
```ruby
type "Types::IssueConnection", null: true
```
-
+
Only use this style if you are having spec failures. This is not intended to be a new
pattern that we use. This issue may disappear after we've upgraded to `2.x`.
-- There can be instances where a spec fails because the class is not loaded correctly.
- It relates to the
+- There can be instances where a spec fails because the class is not loaded correctly.
+ It relates to the
[circular dependencies problem](https://github.com/rmosolgo/graphql-ruby/issues/1929) and
[Adding field with resolver on a Type causes "Can't determine the return type " error on a different Type](https://github.com/rmosolgo/graphql-ruby/issues/3974).
Unfortunately, the errors generated don't really indicate what the problem is. For example,
- remove the quotes from the `Rspec.descrbe` in
+ remove the quotes from the `RSpec.describe` in
[ee/spec/graphql/resolvers/compliance_management/merge_requests/compliance_violation_resolver_spec.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/spec/graphql/resolvers/compliance_management/merge_requests/compliance_violation_resolver_spec.rb).
Then run `rspec ee/spec/graphql/resolvers/compliance_management/merge_requests/compliance_violation_resolver_spec.rb`.
-
+
This generates errors with the expectations. For example:
-
+
```ruby
1) Resolvers::ComplianceManagement::MergeRequests::ComplianceViolationResolver#resolve user is authorized filtering the results when given an array of project IDs finds the filtered compliance violations
Failure/Error: expect(subject).to contain_exactly(compliance_violation)
@@ -2145,10 +2167,10 @@ end
the extra elements were: [#<MergeRequests::ComplianceViolation id: 5, violating_user_id: 27, merge_request_id: 5, reason: "approved_by_merge_request_author", severity_level: "high">]
# ./ee/spec/graphql/resolvers/compliance_management/merge_requests/compliance_violation_resolver_spec.rb:55:in `block (6 levels) in <top (required)>'
```
-
+
However, this is not a case of the wrong result being generated, it's because of the loading order
of the `ComplianceViolationResolver` class.
-
+
The only way we've found to fix this is by quoting the class name in the spec. For example, changing
```ruby
@@ -2198,7 +2220,7 @@ end
[removed eventually](https://gitlab.com/gitlab-org/gitlab/-/issues/363121),
and writing unit tests for resolvers/mutations is
[already deprecated](#writing-unit-tests-deprecated)
-
+
## Notes about Query flow and GraphQL infrastructure
The GitLab GraphQL infrastructure can be found in `lib/gitlab/graphql`.
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index 6c7213ab235..2826b8a3bc4 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -12,8 +12,7 @@ limits to GitLab.
## Documentation
First of all, you have to gather information and decide which are the different
-limits that are set for the different GitLab tiers. You also need to
-coordinate with others to [document](../administration/instance_limits.md)
+limits that are set for the different GitLab tiers. Coordinate with others to [document](../administration/instance_limits.md)
and communicate those limits.
There is a guide about [introducing application
@@ -206,6 +205,6 @@ rate limiting architecture:
1. Making it possible to define and override limits per namespace / per plan.
1. Automatically generating documentation about what limits are implemented and
what the defaults are.
-1. Defining limits in a single place that is easy to find an explore.
+1. Defining limits in a single place that can be found and explored.
1. Soft and hard limits, with support for notifying users when a limit is
approaching.
diff --git a/doc/development/application_slis/rails_request_apdex.md b/doc/development/application_slis/rails_request_apdex.md
index f9613a14dd1..3e3cd100183 100644
--- a/doc/development/application_slis/rails_request_apdex.md
+++ b/doc/development/application_slis/rails_request_apdex.md
@@ -254,6 +254,6 @@ In the **Budget Attribution** row, the **Puma Apdex** log link shows you
how many requests are not meeting a 1s or 5s target.
Learn more about the content of the dashboard in the documentation for
-[Dashboards for stage groups](../stage_group_dashboards.md). For more information
+[Dashboards for stage groups](../stage_group_observability/index.md). For more information
on our exploration of the error budget itself, read the infrastructure issue
[Stage group error budget exploration dashboard](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1365).
diff --git a/doc/development/appsec/index.md b/doc/development/appsec/index.md
index 2ece3fdf4bf..8361170c3d2 100644
--- a/doc/development/appsec/index.md
+++ b/doc/development/appsec/index.md
@@ -1,32 +1,11 @@
---
-stage: Secure, Protect
-group: all
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
-type: index, dev, reference
+redirect_to: '../index.md'
+remove_date: '2022-09-23'
---
-# Application Security development documentation
+This document was moved to [another location](../index.md).
-Development guides that are specific to the stages that work on Application Security features are listed here.
-
-Please go to [Application Security](../../user/application_security/index.md) if you are looking for documentation on how to use those features.
-
-## Namespaces
-
-Application Security code in the Rails monolith is organized into the following namespaces, which generally follows
-the feature categories in the [Secure](https://about.gitlab.com/stages-devops-lifecycle/secure/) and [Protect](https://about.gitlab.com/stages-devops-lifecycle/protect/) stages.
-
-- `AppSec`: shared code.
- - `AppSec::ContainerScanning`: Container Scanning code.
- - `AppSec::Dast`: DAST code.
- - `AppSec::DependencyScanning`: Dependency Scanning code.
- - `AppSec::Fuzzing::API`: API Fuzzing code.
- - `AppSec::Fuzzing::Coverage`: Coverage Fuzzing code.
- - `AppSec::Fuzzing`: Shared fuzzing code.
- - `AppSec::LicenseCompliance`: License Compliance code.
- - `AppSec::Sast`: SAST code.
- - `AppSec::SecretDetection`: Secret Detection code.
- - `AppSec::VulnMgmt`: Vulnerability Management code.
-
-Most AppSec code does not conform to these namespace guidelines. When developing, make an effort
-to move existing code into the appropriate namespace whenever possible.
+<!-- This redirect file can be deleted after <2022-09-23>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/backend/ruby_style_guide.md b/doc/development/backend/ruby_style_guide.md
index eff6ae7f217..c86f21d4bac 100644
--- a/doc/development/backend/ruby_style_guide.md
+++ b/doc/development/backend/ruby_style_guide.md
@@ -16,7 +16,7 @@ document the new rule. For every new guideline, add it in a new section and link
[version history note](../documentation/versions.md#add-a-version-history-item)
to provide context and serve as a reference.
-Just because something is listed here does not mean it cannot be reopened for discussion.
+Everything listed here can be [reopened for discussion](https://about.gitlab.com/handbook/values/#disagree-commit-and-disagree).
## Instance variable access using `attr_reader`
diff --git a/doc/development/cicd/pipeline_wizard.md b/doc/development/cicd/pipeline_wizard.md
index 608c21778c0..7a0b70bd8e8 100644
--- a/doc/development/cicd/pipeline_wizard.md
+++ b/doc/development/cicd/pipeline_wizard.md
@@ -227,3 +227,21 @@ Use as `widget: list`. This inserts a `list` in the YAML file.
| `invalidFeedback` | **{dotted-circle}** No | string | Help text displayed when the pattern validation fails. |
| `default` | **{dotted-circle}** No | list | The default value for the list |
| `id` | **{dotted-circle}** No | string | The input field ID is usually autogenerated but can be overridden by providing this property. |
+
+#### Checklist
+
+Use as `widget: checklist`. This inserts a list of checkboxes that need to
+be checked before proceeding to the next step.
+
+| Name | Required | Type | Description |
+|---------|------------------------|--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `title` | **{dotted-circle}** No | string | A title above the checklist items. |
+| `items` | **{dotted-circle}** No | list | A list of items that need to be checked. Each item corresponds to one checkbox, and can be a string or [checklist item](#checklist-item). |
+
+##### Checklist Item
+
+| Name | Required | Type | Description |
+|--------|------------------------|---------|-----------------------------------------|
| `text` | **{check-circle}** Yes | string  | The text displayed next to the checkbox. |
+| `help` | **{dotted-circle}** No | string | Help text explaining the item. |
+| `id` | **{dotted-circle}** No | string | The input field ID is usually autogenerated but can be overridden by providing this property. |
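+
+A hypothetical step snippet using this widget, based on the fields above (all
+values are illustrative):
+
+```yaml
+- widget: checklist
+  title: 'Before you begin'
+  items:
+    - text: 'A runner is available for this project'
+      help: 'Check your runner configuration if unsure.'
+    - 'The repository contains a Dockerfile'
+```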
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index 8d88e7155a2..4ea7a9d960c 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -289,9 +289,32 @@ Please read [versioning](#versioning) section for introducing breaking change sa
## Versioning
-Versioning allows you to introduce a new template without modifying the existing
-one. This process is useful when we need to introduce a breaking change,
-but don't want to affect the existing projects that depends on the current template.
+To introduce a breaking change without affecting the existing projects that depend on
+the current template, use [stable](#stable-version) and [latest](#latest-version) versioning.
+
+Stable templates usually only receive breaking changes in major version releases, while
+latest templates can receive breaking changes in any release. In major release milestones,
+the latest template is made the new stable template (and the latest template might be deleted).
+
+Adding a latest template is safe, but comes with a maintenance burden:
+
+- GitLab has to choose a DRI to overwrite the stable template with the contents of the
+ latest template at the next major release of GitLab. The DRI is responsible for
+ supporting users who have trouble with the change.
+- When we make a new non-breaking change, both the stable and latest templates must be updated
+ to match, as much as possible.
+- A latest template could remain for longer than planned because many users could
+ directly depend on it continuing to exist.
+
+Before adding a new latest template, see if the change can be made to the stable
+template instead, even if it's a breaking change. If the template is intended for copy-paste
+usage only, it might be possible to directly change the stable version. Before changing
+the stable template with a breaking change in a minor milestone, make sure:
+
+- It's a [pipeline template](#template-types) and it has a [code comment](#explain-requirements-and-expectations)
+ explaining that it's not designed to be used with the `include` keyword.
+- The [CI/CD template usage metrics](#add-metrics) don't show any usage. If the metrics
+ show zero usage for the template, the template is not actively being used with `include`.
### Stable version
@@ -393,7 +416,9 @@ is updated in a major version GitLab release.
### Add metrics
-Every CI/CD template must also have metrics defined to track their use.
+Every CI/CD template must also have metrics defined to track its use. The CI/CD template monthly usage report
+can be found in [Sisense (GitLab team members only)](https://app.periscopedata.com/app/gitlab/785953/Pipeline-Authoring-Dashboard?widget=13440051&udv=0).
+Double-click a template to see the graph for that single template.
To add a metric definition for a new template:
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index a6976271ddf..1225260e600 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -71,26 +71,34 @@ It picks reviewers and maintainers from the list at the
[engineering projects](https://about.gitlab.com/handbook/engineering/projects/)
page, with these behaviors:
-1. It doesn't pick people whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status):
- - Contains the string `OOO`, `PTO`, `Parental Leave`, or `Friends and Family`.
- - GitLab user **Busy** indicator is set to `True`.
- - Emoji is from one of these categories:
- - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`
- - **Out sick** - 🌡️ `:thermometer:`, 🤒 `:face_with_thermometer:`
- - **At capacity** - 🔴 `:red_circle:`
- - **Focus mode** - 💡 `:bulb:` (focusing on their team's work)
-1. [Trainee maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#trainee-maintainer)
- are three times as likely to be picked as other reviewers.
-1. Team members whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status) emoji
- is 🔵 `:large_blue_circle:` are more likely to be picked. This applies to both reviewers and trainee maintainers.
- - Reviewers with 🔵 `:large_blue_circle:` are two times as likely to be picked as other reviewers.
- - Trainee maintainers with 🔵 `:large_blue_circle:` are four times as likely to be picked as other reviewers.
-1. People whose [GitLab status](../user/profile/index.md#set-your-current-status) emoji
- is 🔶 `:large_orange_diamond:` or 🔸 `:small_orange_diamond:` are half as likely to be picked.
-1. It always picks the same reviewers and maintainers for the same
- branch name (unless their out-of-office (`OOO`) status changes, as in point 1). It
- removes leading `ce-` and `ee-`, and trailing `-ce` and `-ee`, so
- that it can be stable for backport branches.
+- It doesn't pick people whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status):
+ - Contains the string `OOO`, `PTO`, `Parental Leave`, or `Friends and Family`.
+ - GitLab user **Busy** indicator is set to `True`.
+ - Emoji is from one of these categories:
+ - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`
+ - **Out sick** - 🌡️ `:thermometer:`, 🤒 `:face_with_thermometer:`
+ - **At capacity** - 🔴 `:red_circle:`
+ - **Focus mode** - 💡 `:bulb:` (focusing on their team's work)
+- It doesn't pick people who are already assigned a number of reviews that is equal to
+ or greater than their chosen "review limit". The review limit is the maximum number of
+ reviews people are ready to handle at a time. Set a review limit by using one of the following
+ as a Slack or [GitLab status](../user/profile/index.md#set-your-current-status):
+ - 0️⃣ - `:zero:` (similar to `:red_circle:`)
+ - 1️⃣ - `:one:`
+ - 2️⃣ - `:two:`
+ - 3️⃣ - `:three:`
+ - 4️⃣ - `:four:`
+ - 5️⃣ - `:five:`
+- Team members whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status) emoji
+ is 🔵 `:large_blue_circle:` are more likely to be picked. This applies to both reviewers and trainee maintainers.
+ - Reviewers with 🔵 `:large_blue_circle:` are two times as likely to be picked as other reviewers.
+ - [Trainee maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#trainee-maintainer) with 🔵 `:large_blue_circle:` are three times as likely to be picked as other reviewers.
+- People whose [GitLab status](../user/profile/index.md#set-your-current-status) emoji
+ is 🔶 `:large_orange_diamond:` or 🔸 `:small_orange_diamond:` are half as likely to be picked.
+- It always picks the same reviewers and maintainers for the same
+ branch name (unless their out-of-office (`OOO`) status changes, as in the first item). It
+ removes leading `ce-` and `ee-`, and trailing `-ce` and `-ee`, so
+ that it can be stable for backport branches.
The [Roulette dashboard](https://gitlab-org.gitlab.io/gitlab-roulette) contains:
@@ -131,7 +139,7 @@ with [domain expertise](#domain-experts).
1. If your merge request includes documentation changes, it must be **approved
by a [Technical writer](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)**,
based on assignments in the appropriate [DevOps stage group](https://about.gitlab.com/handbook/product/categories/#devops-stages).
-1. If your merge request includes changes to development guidelines, follow the [review process](index.md#development-guidelines-review) and get the approvals accordingly.
+1. If your merge request includes changes to development guidelines, follow the [review process](development_processes.md#development-guidelines-review) and get the approvals accordingly.
1. If your merge request includes end-to-end **and** non-end-to-end changes (*4*), it must be **approved
by a [Software Engineer in Test](https://about.gitlab.com/handbook/engineering/quality/#individual-contributors)**.
1. If your merge request only includes end-to-end changes (*4*) **or** if the MR author is a [Software Engineer in Test](https://about.gitlab.com/handbook/engineering/quality/#individual-contributors), it must be **approved by a [Quality maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_maintainers_qa)**
@@ -275,7 +283,7 @@ This saves reviewers time and helps authors catch mistakes earlier.
Verify that the merge request meets all [contribution acceptance criteria](contributing/merge_request_workflow.md#contribution-acceptance-criteria).
If a merge request is too large, fixes more than one issue, or implements more
-than one feature, you should guide the author towards spltting the merge request
+than one feature, you should guide the author towards splitting the merge request
into smaller merge requests.
When you are confident
@@ -300,11 +308,18 @@ Because a maintainer's job only depends on their knowledge of the overall GitLab
codebase, and not that of any specific domain, they can review, approve, and merge
merge requests from any team and in any product area.
-If a merge request is too large, fixes more than one issue, or implements more
-than one feature, the maintainer can ask the author to make the merge request
-smaller. Request the previous reviewer, or a merge request coach to help guide
-the author on how to split the merge request, and to review the resulting
-changes.
+A maintainer should ask the author to make a merge request smaller if it:
+
+- Is too large.
+- Fixes more than one issue.
+- Implements more than one feature.
+- Has high complexity, resulting in additional risk.
+
+The maintainer, any of the
+reviewers, or a merge request coach can step up to help the author divide the work
+into smaller iterations, and guide the author on how to split the merge request.
+The author may choose to request that the current maintainers and reviewers review the split MRs
+or request a new group of maintainers and reviewers.
Maintainers do their best to also review the specifics of the chosen solution
before merging, but as they are not necessarily [domain experts](#domain-experts), they may be poorly
diff --git a/doc/development/contributing/index.md b/doc/development/contributing/index.md
index 182d00d52ab..12fd7c3dc12 100644
--- a/doc/development/contributing/index.md
+++ b/doc/development/contributing/index.md
@@ -240,4 +240,4 @@ For information on how to contribute documentation, see GitLab
## Getting an Enterprise Edition License
If you need a license for contributing to an EE-feature, see
-[relevant information](https://about.gitlab.com/handbook/marketing/community-relations/code-contributor-program/#for-contributors-to-the-gitlab-enterprise-edition-ee).
+[relevant information](https://about.gitlab.com/handbook/marketing/community-relations/code-contributor-program/#contributing-to-the-gitlab-enterprise-edition-ee).
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index 97c8c179e09..c6d977cf5ad 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -68,7 +68,7 @@ labels, you can _always_ add the type, stage, group, and often the category/feat
Type labels are very important. They define what kind of issue this is. Every
issue should have one and only one.
-The current type labels are [available in the handbook](https://about.gitlab.com/handbook/engineering/metrics/#work-type-classification)
+The SSOT for type and subtype labels is [available in the handbook](https://about.gitlab.com/handbook/engineering/metrics/#work-type-classification).
A number of type labels have a priority assigned to them, which automatically
makes them float to the top, depending on their importance.
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index 003df4fe078..d2b231ebc7c 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -58,7 +58,7 @@ itself, increasing visibility.
## Development guidelines
-Danger code is Ruby code, so all our [usual backend guidelines](index.md#backend-guides)
+Danger code is Ruby code, so all our [usual backend guidelines](feature_development.md#backend-guides)
continue to apply. However, there are a few things that deserve special emphasis.
### When to use Danger
@@ -175,15 +175,7 @@ at GitLab so far:
- Database review
- Documentation review
- Merge request metrics
-- Reviewer roulette. Reviewers and maintainers are chosen based on:
- - Their roles (backend, frontend, database, etc).
- - Their availability:
- - No "OOO"/"PTO"/"Parental Leave" in their GitLab or Slack status.
- - No `:red_circle:`/`:palm_tree:`/`:beach:`/`:beach_umbrella:`/`:beach_with_umbrella:` emojis in GitLab or Slack status.
- - (Experimental) Their time zone: people for which the local hour is between
- 6 AM and 2 PM are eligible to be picked. This is to ensure they have a good
- chance to get to perform a review during their current work day. The experimentation is tracked in
- [this issue](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/563)
+- [Reviewer roulette](code_review.md#reviewer-roulette)
- Single codebase effort
## Limitations
diff --git a/doc/development/database/add_foreign_key_to_existing_column.md b/doc/development/database/add_foreign_key_to_existing_column.md
index 9842814816f..7a18da2223f 100644
--- a/doc/development/database/add_foreign_key_to_existing_column.md
+++ b/doc/development/database/add_foreign_key_to_existing_column.md
@@ -64,18 +64,14 @@ emails = Email.where(user_id: 1) # returns emails for the deleted user
Add a `NOT VALID` foreign key constraint to the table, which enforces consistency on the record changes.
-[Using the `with_lock_retries` helper method is advised when performing operations on high-traffic tables](../migration_style_guide.md#when-to-use-the-helper-method),
-in this case, if the table or the foreign table is a high-traffic table, we should use the helper method.
-
In the example above, you'd still be able to update records in the `emails` table. However, when you try to update the `user_id` with a non-existent value, the constraint causes a database error.
Migration file for adding a `NOT VALID` foreign key:
```ruby
-class AddNotValidForeignKeyToEmailsUser < Gitlab::Database::Migration[1.0]
+class AddNotValidForeignKeyToEmailsUser < Gitlab::Database::Migration[2.0]
def up
- # safe to use: it requires short lock on the table since we don't validate the foreign key
- add_foreign_key :emails, :users, on_delete: :cascade, validate: false
+ add_concurrent_foreign_key :emails, :users, on_delete: :cascade, validate: false
end
def down
@@ -84,8 +80,14 @@ class AddNotValidForeignKeyToEmailsUser < Gitlab::Database::Migration[1.0]
end
```
+Adding a foreign key without validating it is a fast operation. It only requires a
+short lock on the table, and the constraint is then enforced on new data.
+We still want to enable lock retries for high-traffic and large tables.
+`add_concurrent_foreign_key` does this for us, and also checks if the foreign key already exists.
+
WARNING:
-Avoid using the `add_foreign_key` constraint more than once per migration file, unless the source and target tables are identical.
+Avoid using `add_foreign_key` or `add_concurrent_foreign_key` constraints more than
+once per migration file, unless the source and target tables are identical.
#### Data migration to fix existing records
@@ -98,7 +100,7 @@ In case the data volume is higher (>1000 records), it's better to create a backg
Example for cleaning up records in the `emails` table in a database migration:
```ruby
-class RemoveRecordsWithoutUserFromEmailsTable < Gitlab::Database::Migration[1.0]
+class RemoveRecordsWithoutUserFromEmailsTable < Gitlab::Database::Migration[2.0]
disable_ddl_transaction!
class Email < ActiveRecord::Base
@@ -121,6 +123,7 @@ end
### Validate the foreign key
Validating the foreign key scans the whole table and makes sure that each relation is correct.
+Fortunately, this does not lock the source table (`users`) while running.
NOTE:
When using [background migrations](background_migrations.md), foreign key validation should happen in the next GitLab release.
@@ -130,7 +133,7 @@ Migration file for validating the foreign key:
```ruby
# frozen_string_literal: true
-class ValidateForeignKeyOnEmailUsers < Gitlab::Database::Migration[1.0]
+class ValidateForeignKeyOnEmailUsers < Gitlab::Database::Migration[2.0]
def up
validate_foreign_key :emails, :user_id
end
diff --git a/doc/development/database/batched_background_migrations.md b/doc/development/database/batched_background_migrations.md
index 6d3d5fa7f92..f3ea82b5c61 100644
--- a/doc/development/database/batched_background_migrations.md
+++ b/doc/development/database/batched_background_migrations.md
@@ -244,8 +244,6 @@ background migration.
```ruby
class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[2.0]
- disable_ddl_transaction!
-
MIGRATION = 'BackfillRouteNamespaceId'
DELAY_INTERVAL = 2.minutes
@@ -316,6 +314,137 @@ background migration.
After the batched migration is completed, you can safely depend on the
data in `routes.namespace_id` being populated.
+### Batching over non-distinct columns
+
+The default batching strategy provides an efficient way to iterate over primary key columns.
+However, if you need to iterate over columns where values are not unique, you must use a
+different batching strategy.
+
+The `LooseIndexScanBatchingStrategy` batching strategy uses a special version of [`EachBatch`](../iterating_tables_in_batches.md#loose-index-scan-with-distinct_each_batch)
+to provide efficient and stable iteration over the distinct column values.
+
+This example shows a batched background migration where the `issues.project_id` column is used as
+the batching column.
+
+Database post-migration:
+
+```ruby
+class ProjectsWithIssuesMigration < Gitlab::Database::Migration[2.0]
+ MIGRATION = 'BatchProjectsWithIssues'
+ INTERVAL = 2.minutes
+ BATCH_SIZE = 5000
+ SUB_BATCH_SIZE = 500
+ restrict_gitlab_migration gitlab_schema: :gitlab_main
+
+ disable_ddl_transaction!
+ def up
+ queue_batched_background_migration(
+ MIGRATION,
+ :issues,
+ :project_id,
+ job_interval: INTERVAL,
+ batch_size: BATCH_SIZE,
+ batch_class_name: 'LooseIndexScanBatchingStrategy', # Override the default batching strategy
+ sub_batch_size: SUB_BATCH_SIZE
+ )
+ end
+
+ def down
+ delete_batched_background_migration(MIGRATION, :issues, :project_id, [])
+ end
+end
+```
+
+Implementing the background migration class:
+
+```ruby
+module Gitlab
+ module BackgroundMigration
+ class BatchProjectsWithIssues < Gitlab::BackgroundMigration::BatchedMigrationJob
+ include Gitlab::Database::DynamicModelHelpers
+
+ def perform
+ distinct_each_batch(operation_name: :backfill_issues) do |batch|
+ project_ids = batch.pluck(batch_column)
+ # do something with the distinct project_ids
+ end
+ end
+ end
+ end
+end
+```
+
+### Adding filters to the initial batching
+
+By default, when creating background jobs to perform the migration, batched background migrations iterate over the full specified table. This is done using the [`PrimaryKeyBatchingStrategy`](https://gitlab.com/gitlab-org/gitlab/-/blob/c9dabd1f4b8058eece6d8cb4af95e9560da9a2ee/lib/gitlab/database/migrations/batched_background_migration_helpers.rb#L17). This means if there are 1000 records in the table and the batch size is 100, there are 10 jobs. For illustrative purposes, `EachBatch` is used like this:
+
+```ruby
+# PrimaryKeyBatchingStrategy
+Project.all.each_batch(of: 100) do |relation|
+ relation.where(foo: nil).update_all(foo: 'bar') # this happens in each background job
+end
+```
+
+There are cases where we only need to look at a subset of records. Perhaps we only need to update 1 out of every 10 of those 1000 records. It would be best if we could apply a filter to the initial relation when the jobs are created:
+
+```ruby
+Project.where(foo: nil).each_batch(of: 100) do |relation|
+ relation.update_all(foo: 'bar')
+end
+```
+
+In the `PrimaryKeyBatchingStrategy` example, we do not know how many records will be updated in each batch. In the filtered example, we know exactly 100 will be updated with each batch.
+
+The `PrimaryKeyBatchingStrategy` contains [a method that can be overwritten](https://gitlab.com/gitlab-org/gitlab/-/blob/dd1e70d3676891025534dc4a1e89ca9383178fe7/lib/gitlab/background_migration/batching_strategies/primary_key_batching_strategy.rb#L38-52) to apply additional filtering on the initial `EachBatch`.
+
+We can accomplish this as follows:
+
+1. Create a new class that inherits from `PrimaryKeyBatchingStrategy` and overrides the method using the desired filter (this may be the same filter used in the sub-batch):
+
+ ```ruby
+ # frozen_string_literal: true
+
+   module Gitlab
+ module BackgroundMigration
+ module BatchingStrategies
+ class FooStrategy < PrimaryKeyBatchingStrategy
+ def apply_additional_filters(relation, job_arguments: [], job_class: nil)
+ relation.where(foo: nil)
+ end
+ end
+ end
+ end
+ end
+ ```
+
+1. In the post-deployment migration that queues the batched background migration, specify the new batching strategy using the `batch_class_name` parameter:
+
+ ```ruby
+ class BackfillProjectsFoo < Gitlab::Database::Migration[2.0]
+ MIGRATION = 'BackfillProjectsFoo'
+ DELAY_INTERVAL = 2.minutes
+ BATCH_CLASS_NAME = 'FooStrategy'
+
+ restrict_gitlab_migration gitlab_schema: :gitlab_main
+
+ def up
+ queue_batched_background_migration(
+ MIGRATION,
+ :routes,
+ :id,
+ job_interval: DELAY_INTERVAL,
+ batch_class_name: BATCH_CLASS_NAME
+ )
+ end
+
+ def down
+ delete_batched_background_migration(MIGRATION, :routes, :id, [])
+ end
+ end
+ ```
+
+When applying a batching strategy, it is important to ensure the filter is properly covered by an index to optimize `EachBatch` performance. See [the `EachBatch` docs for more information](../iterating_tables_in_batches.md).
+
## Testing
Writing tests is required for:
@@ -357,7 +486,7 @@ You can view failures in two ways:
- Via GitLab logs:
1. After running a batched background migration, if any jobs fail,
- view the logs in [Kibana](https://log.gprd.gitlab.net/goto/5f06a57f768c6025e1c65aefb4075694).
+ view the logs in [Kibana](https://log.gprd.gitlab.net/goto/4cb43f40-f861-11ec-b86b-d963a1a6788e).
View the production Sidekiq log and filter for:
- `json.new_state: failed`
diff --git a/doc/development/database/constraint_naming_convention.md b/doc/development/database/constraint_naming_convention.md
index 72f16c20559..2f0b8bf0463 100644
--- a/doc/development/database/constraint_naming_convention.md
+++ b/doc/development/database/constraint_naming_convention.md
@@ -25,5 +25,7 @@ The intent is not to retroactively change names in existing databases but rather
## Observations
+- Check `db/structure.sql` for conflicts.
- Prefixes are preferred over suffixes because they make it easier to identify the type of a given constraint quickly, as well as group them alphabetically;
- The `_and_` that joins column names can be omitted to keep the identifiers under the 63-character length limit defined by PostgreSQL. Additionally, the notation may be abbreviated to the best of our ability if struggling to keep under this limit.
+- For indexes added to solve a very specific problem, it may make sense for the name to reflect their use.
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index dec51d484fd..6aa1b9c40ff 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -66,8 +66,6 @@ The tool ensures that all aspects of swapping a foreign key are covered. This in
- Updating `db/structure.sql` with the new migration.
- Updating `lib/gitlab/database/gitlab_loose_foreign_keys.yml` to add the new loose foreign key.
- Creating or updating a model's specs to ensure that the loose foreign key is properly supported.
-- Creating a new branch, commit, push, and creating a merge request on GitLab.com.
-- Creating a merge request template with all the necessary details to validate the safety of the foreign key removal.
The tool is located at `scripts/decomposition/generate-loose-foreign-key`:
@@ -77,9 +75,7 @@ $ scripts/decomposition/generate-loose-foreign-key -h
Usage: scripts/decomposition/generate-loose-foreign-key [options] <filters...>
-c, --cross-schema Show only cross-schema foreign keys
-n, --dry-run Do not execute any commands (dry run)
- -b, --[no-]branch Create or not a new branch
-r, --[no-]rspec Create or not a rspecs automatically
- -m, --milestone MILESTONE Specify custom milestone (current: 14.8)
-h, --help Prints this help
```
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 7badd7f76fa..9641ea37002 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -6,8 +6,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Multiple Databases
-To scale GitLab, the we are
-[decomposing the GitLab application database into multiple databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
+To allow GitLab to scale further, we
+[decomposed the GitLab application database into multiple databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
+The two databases are `main` and `ci`. GitLab supports being run with either one
+database or two databases. On GitLab.com we are using two separate databases.
## GitLab Schema
@@ -23,6 +26,7 @@ Each table of GitLab needs to have a `gitlab_schema` assigned:
- `gitlab_main`: describes all tables that are being stored in the `main:` database (for example, `projects` and `users`).
- `gitlab_ci`: describes all CI tables that are being stored in the `ci:` database (for example, `ci_pipelines`, `ci_builds`).
+- `gitlab_geo`: describes all Geo tables that are being stored in the `geo:` database (for example, `project_registry` and `secondary_usage_data`).
- `gitlab_shared`: describes all application tables that contain data across all decomposed databases (for example, `loose_foreign_keys_deleted_records`) for models that inherit from `Gitlab::Database::SharedModel`.
- `gitlab_internal`: describes all internal tables of Rails and PostgreSQL (for example, `ar_internal_metadata`, `schema_migrations`, `pg_*`).
- `...`: more schemas to be introduced with additional decomposed databases
@@ -31,6 +35,7 @@ The usage of schema enforces the base class to be used:
- `ApplicationRecord` for `gitlab_main`
- `Ci::ApplicationRecord` for `gitlab_ci`
+- `Geo::TrackingBase` for `gitlab_geo`
- `Gitlab::Database::SharedModel` for `gitlab_shared`
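+
+For example, a model backed by a `gitlab_ci` table must inherit from `Ci::ApplicationRecord` so it uses the `ci:` connection. A minimal sketch (the model and table are hypothetical):
+
+```ruby
+# Hypothetical model for a table assigned the `gitlab_ci` schema.
+module Ci
+  class RunnerNote < Ci::ApplicationRecord
+    self.table_name = 'ci_runner_notes'
+  end
+end
+```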
### The impact of `gitlab_schema`
@@ -40,7 +45,7 @@ The `gitlab_schema` primary purpose is to introduce a barrier between different
This is used as a primary source of classification for:
-- [Discovering cross-joins across tables from different schemas](#removing-joins-between-ci_-and-non-ci_-tables)
+- [Discovering cross-joins across tables from different schemas](#removing-joins-between-ci-and-non-ci-tables)
- [Discovering cross-database transactions across tables from different schemas](#removing-cross-database-transactions)
### The special purpose of `gitlab_shared`
@@ -72,10 +77,6 @@ Read [Migrations for Multiple Databases](migrations_for_multiple_databases.md).
## CI/CD Database
-> Support for configuring the GitLab Rails application to use a distinct
-database for CI/CD tables was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64289)
-in GitLab 14.1. This feature is still under development, and is not ready for production use.
-
### Configure single database
By default, GDK is configured to run with multiple databases.
@@ -107,32 +108,14 @@ NOTE: The `validate_cross_joins!` method in `spec/support/database/prevent_cross
the corresponding documentation URL used in `spec/support/database/prevent_cross_joins.rb`.
-->
-### Removing joins between `ci_*` and non `ci_*` tables
+### Removing joins between `ci` and non `ci` tables
Queries that join across databases raise an error. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/68620)
in GitLab 14.3, for new queries only. Pre-existing queries do not raise an error.
-We are planning on moving all the `ci_*` tables to a separate database, so
-referencing `ci_*` tables with other tables will not be possible. This means,
-that using any kind of `JOIN` in SQL queries will not work. We have identified
-already many such examples that need to be fixed in
-<https://gitlab.com/groups/gitlab-org/-/epics/6289> .
-
-#### Path to removing cross-database joins
-
-The following steps are the process to remove cross-database joins between
-`ci_*` and non `ci_*` tables:
-
-1. **{check-circle}** Add all failing specs to the [`cross-join-allowlist.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/database/cross-join-allowlist.yml)
- file.
-1. **{check-circle}** Find the code that caused the spec failure and wrap the isolated code
- in [`allow_cross_joins_across_databases`](#allowlist-for-existing-cross-joins).
- Link to a new issue assigned to the correct team to remove the specs from the
- `cross-join-allowlist.yml` file.
-1. **{dotted-circle}** Remove the `cross-join-allowlist.yml` file and stop allowing
- whole test files.
-1. **{dotted-circle}** Fix the problem and remove the `allow_cross_joins_across_databases` call.
-1. **{dotted-circle}** Fix all the cross-joins and remove the `allow_cross_joins_across_databases` method.
+Because GitLab can be run with multiple separate databases, joining `ci`
+tables with non `ci` tables in a single query is not possible. Therefore,
+using any kind of cross-database `JOIN` in SQL queries does not work.
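+
+For example (a minimal sketch; splitting the query is only one of the patterns, more of which are suggested below):
+
+```ruby
+# Raises a cross-join error when `ci:` and `main:` are separate databases:
+Ci::Build.joins(:project).where(projects: { archived: false })
+
+# One possible fix: query each database separately.
+project_ids = Project.where(archived: false).pluck(:id)
+builds = Ci::Build.where(project_id: project_ids)
+```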
#### Suggestions for removing cross-database joins
@@ -416,13 +399,10 @@ query or look at other patterns described above for removing cross-joins.
#### How to validate you have correctly removed a cross-join
-Using RSpec tests, you can validate all SQL queries within a code block to
-ensure that none of them are joining across the two databases. This is a useful
-tool to confirm you have correctly fixed an existing cross-join.
-
-At some point in the future we will have fixed all cross-joins and this tool
-will run by default in all tests. For now, the tool needs to be explicitly enabled
-for your test.
+RSpec is configured to automatically validate that SQL queries do not join
+across databases. If this validation is disabled for a spec file in
+`spec/support/database/cross-join-allowlist.yml`, you can still validate an
+isolated code block using `with_cross_joins_prevented`.
You can use this method like so:
@@ -553,12 +533,11 @@ The `MyAsyncConsistencyJob` would also attempt to update the timestamp if they d
At this point, we don't have the tooling (we might not even need it) to ensure similar consistency
characteristics as we had with one database. If you think that the code you're working on requires
-these properties, then you can disable the cross-database modification check by wrapping to
-offending database queries with a block and create a follow-up issue mentioning the sharding group
-(`gitlab-org/sharding-group`).
+these properties, then you can disable the cross-database modification check in your tests by wrapping the
+offending test code with a block and creating a follow-up issue.
```ruby
-Gitlab::Database.allow_cross_joins_across_databases(url: 'gitlab issue URL') do
+allow_cross_database_modification_within_transaction(url: 'gitlab issue URL') do
ApplicationRecord.transaction do
ci_build.update!(updated_at: Time.current) # UPDATE on CI DB
ci_build.project.update!(updated_at: Time.current) # UPDATE on Main DB
@@ -567,7 +546,7 @@ end
```
Don't hesitate to reach out to the
-[sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding/)
+[pods group](https://about.gitlab.com/handbook/engineering/development/enablement/data_stores/pods/)
for advice.
##### Avoid `dependent: :nullify` and `dependent: :destroy` across databases
diff --git a/doc/development/database/transaction_guidelines.md b/doc/development/database/transaction_guidelines.md
index d96d11f05a5..255de19a420 100644
--- a/doc/development/database/transaction_guidelines.md
+++ b/doc/development/database/transaction_guidelines.md
@@ -132,12 +132,12 @@ end
build_1 = Ci::Build.find(1)
build_2 = Ci::Build.find(2)
-ActiveRecord::Base.transaction do
+ApplicationRecord.transaction do
build_1.touch
build_2.touch
end
```
-The `ActiveRecord::Base` class uses a different database connection than the `Ci::Build` records.
+The `ApplicationRecord` class uses a different database connection than the `Ci::Build` records.
The two statements in the transaction block are not part of the transaction and are
not rolled back if something goes wrong. They act as third-party calls.
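+
+To make the transaction actually cover both updates, open it on the connection that owns the records. A minimal sketch:
+
+```ruby
+# Both builds use the ci: connection, so open the transaction through
+# Ci::ApplicationRecord (or any Ci:: model class).
+Ci::ApplicationRecord.transaction do
+  build_1.touch
+  build_2.touch
+end
+```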
diff --git a/doc/development/deprecation_guidelines/img/deprecation_removal_process.png b/doc/development/deprecation_guidelines/img/deprecation_removal_process.png
index 99642ebbae0..594e15861b0 100644
--- a/doc/development/deprecation_guidelines/img/deprecation_removal_process.png
+++ b/doc/development/deprecation_guidelines/img/deprecation_removal_process.png
Binary files differ
diff --git a/doc/development/deprecation_guidelines/index.md b/doc/development/deprecation_guidelines/index.md
index 7fbe2261f4d..4e1d2e22e78 100644
--- a/doc/development/deprecation_guidelines/index.md
+++ b/doc/development/deprecation_guidelines/index.md
@@ -41,7 +41,9 @@ changes](../contributing/index.md#breaking-changes) to GitLab features.
Deprecations should be announced on the [Deprecated feature removal schedule](../../update/deprecations.md).
-For steps to create a deprecation entry, see [Deprecations](https://about.gitlab.com/handbook/marketing/blog/release-posts/#deprecations).
+Do not include the deprecation announcement in the merge request that introduces a code change for the deprecation.
+Use a separate MR to create a deprecation entry. For steps to create a deprecation entry, see
+[Deprecations](https://about.gitlab.com/handbook/marketing/blog/release-posts/#deprecations).
## When can a feature be removed/changed?
diff --git a/doc/development/development_processes.md b/doc/development/development_processes.md
new file mode 100644
index 00000000000..e199aedd3f5
--- /dev/null
+++ b/doc/development/development_processes.md
@@ -0,0 +1,124 @@
+---
+stage: none
+group: Development
+info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
+---
+
+# Development processes
+
+Consult these topics for information on development processes for contributing to GitLab.
+
+## Processes
+
+Must-reads:
+
+- [Guide on adapting existing and introducing new components](architecture.md#adapting-existing-and-introducing-new-components)
+- [Code review guidelines](code_review.md) for reviewing code and having code
+ reviewed
+- [Database review guidelines](database_review.md) for reviewing
+ database-related changes and complex SQL queries, and having them reviewed
+- [Secure coding guidelines](secure_coding_guidelines.md)
+- [Pipelines for the GitLab project](pipelines.md)
+
+Complementary reads:
+
+- [GitLab core team & GitLab Inc. contribution process](https://gitlab.com/gitlab-org/gitlab/-/blob/master/PROCESS.md)
+- [Security process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/security/developer.md#security-releases-critical-non-critical-as-a-developer)
+- [Patch release process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/patch/process.md#process-for-developers)
+- [Guidelines for implementing Enterprise Edition features](ee_features.md)
+- [Adding a new service component to GitLab](adding_service_component.md)
+- [Guidelines for changelogs](changelog.md)
+- [Dependencies](dependencies.md)
+- [Danger bot](dangerbot.md)
+- [Requesting access to ChatOps on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLab team members)
+
+### Development guidelines review
+
+When you submit a change to the GitLab development guidelines, who
+you ask for reviews depends on the level of change.
+
+#### Wording, style, or link changes
+
+Not all changes require extensive review. For example, MRs that don't change the
+content's meaning or function can be reviewed, approved, and merged by any
+maintainer or Technical Writer. These can include:
+
+- Typo fixes.
+- Clarifying links, such as to external programming language documentation.
+- Changes to comply with the [Documentation Style Guide](documentation/index.md)
+ that don't change the intent of the documentation page.
+
+#### Specific changes
+
+If the MR proposes changes that are limited to a particular stage, group, or team,
+request a review and approval from an experienced GitLab Team Member in that
+group. For example, if you're documenting a new internal API used exclusively by
+a given group, request an engineering review from one of the group's members.
+
+After the engineering review is complete, assign the MR to the
+[Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)
+in the modified documentation page's metadata.
+If the page is not assigned to a specific group, follow the
+[Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
+
+#### Broader changes
+
+Some changes affect more than one group. For example:
+
+- Changes to [code review guidelines](code_review.md).
+- Changes to [commit message guidelines](contributing/merge_request_workflow.md#commit-messages-guidelines).
+- Changes to guidelines in [feature flags in development of GitLab](feature_flags/).
+- Changes to [feature flags documentation guidelines](documentation/feature_flags.md).
+
+In these cases, use the following workflow:
+
+1. Request a peer review from a member of your team.
+1. Request a review and approval of an Engineering Manager (EM)
+ or Staff Engineer who's responsible for the area in question:
+
+ - [Frontend](https://about.gitlab.com/handbook/engineering/frontend/)
+ - [Backend](https://about.gitlab.com/handbook/engineering/)
+ - [Database](https://about.gitlab.com/handbook/engineering/development/database/)
+ - [User Experience (UX)](https://about.gitlab.com/handbook/engineering/ux/)
+ - [Security](https://about.gitlab.com/handbook/engineering/security/)
+ - [Quality](https://about.gitlab.com/handbook/engineering/quality/)
+ - [Engineering Productivity](https://about.gitlab.com/handbook/engineering/quality/engineering-productivity/)
+ - [Infrastructure](https://about.gitlab.com/handbook/engineering/infrastructure/)
+ - [Technical Writing](https://about.gitlab.com/handbook/engineering/ux/technical-writing/)
+
+ You can skip this step for MRs authored by EMs or Staff Engineers responsible
+ for their area.
+
+ If there are several affected groups, you may need approvals at the
+ EM/Staff Engineer level from each affected area.
+
+1. After completing the reviews, consult with the EM/Staff Engineer
+ author or approver of the MR.
+
+ If this is a significant change across multiple areas, request final review
+ and approval from the VP of Development, the DRI for Development Guidelines,
+ @clefelhocz1.
+
+1. After all approvals are complete, assign the MR to the
+ [Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)
+ in the modified documentation page's metadata.
+ If the page is not assigned to a specific group, follow the
+ [Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
+ The Technical Writer may ask for additional approvals as previously suggested before merging the MR.
+
+### Reviewer values
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57293) in GitLab 14.1.
+
+As a reviewer or as a reviewee, make sure to familiarize yourself with
+the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/reviewer-values/) we strive for at GitLab.
+
+## Language-specific guides
+
+### Go guides
+
+- [Go Guidelines](go_guide/index.md)
+
+### Shell Scripting guides
+
+- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md)
diff --git a/doc/development/documentation/restful_api_styleguide.md b/doc/development/documentation/restful_api_styleguide.md
index 1f270a2b5ee..92c34c01e5d 100644
--- a/doc/development/documentation/restful_api_styleguide.md
+++ b/doc/development/documentation/restful_api_styleguide.md
@@ -66,7 +66,8 @@ Supported attributes:
| `attribute` | datatype | **{dotted-circle}** No | Detailed description. |
| `attribute` | datatype | **{dotted-circle}** No | Detailed description. |
-Response body attributes:
+If successful, returns [`<status_code>`](../../api/index.md#status-codes) and the following
+response attributes:
| Attribute | Type | Description |
|:-------------------------|:---------|:----------------------|
@@ -103,7 +104,10 @@ for the section. For example:
> `widget_message` [introduced](<link-to-issue>) in GitLab 14.3.
```
-## Attribute deprecation
+## Deprecations
+
+To document the deprecation of an API endpoint, follow the steps to
+[deprecate a page or topic](versions.md#deprecate-a-page-or-topic).
To deprecate an attribute:
@@ -121,8 +125,8 @@ To deprecate an attribute:
| `widget_name` | string | **{dotted-circle}** No | [Deprecated](<link-to-issue>) in GitLab 14.7 and is planned for removal in 15.4. Use `widget_id` instead. The name of the widget. |
```
-1. Optional. To widely announce the change, or if it's a breaking change,
- [update the deprecations and removals documentation](../deprecation_guidelines/#update-the-deprecations-and-removals-documentation).
+To widely announce a deprecation, or if it's a breaking change,
+[update the deprecations and removals documentation](../deprecation_guidelines/#update-the-deprecations-and-removals-documentation).
## Method description
@@ -151,6 +155,14 @@ For information about writing attribute descriptions, see the [GraphQL API descr
## Response body description
+Start the description with the following sentence, replacing `status code` with the
+relevant [HTTP status code](../../api/index.md#status-codes), for example:
+
+```markdown
+If successful, returns [`200 OK`](../../api/index.md#status-codes) and the
+following response attributes:
+```
+
Use the following table headers to describe the response bodies. Attributes should
always be in code blocks using backticks (`` ` ``).
diff --git a/doc/development/documentation/site_architecture/folder_structure.md b/doc/development/documentation/site_architecture/folder_structure.md
index e960a6491c7..0e8065d794f 100644
--- a/doc/development/documentation/site_architecture/folder_structure.md
+++ b/doc/development/documentation/site_architecture/folder_structure.md
@@ -85,6 +85,15 @@ place for it.
Do not include the same information in multiple places.
[Link to a single source of truth instead.](../styleguide/index.md#link-instead-of-repeating-text)
+For example, if you have code and documentation in a repository other than the
+[primary repositories](index.md#architecture), you can keep the documentation in that repository.
+
+Then you can either:
+
+- Publish it to <https://docs.gitlab.com>.
+- Link to it from <https://docs.gitlab.com> by adding an entry in the global navigation.
+ View [an example](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/fedb6378a3c92274ba3b6031df0d34455594e4cc/content/_data/navigation.yaml#L2944).
+
## References across documents
- Give each folder an `index.md` page that introduces the topic, and both introduces
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index 05015fe7c5f..af24fbe303b 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -59,6 +59,19 @@ product, and all together are pulled to generate the docs website:
Learn more about [the docs folder structure](folder_structure.md).
+### Documentation in other repositories
+
+If you have code and documentation in a repository other than the [primary repositories](#architecture),
+you should keep the documentation with the code in that repository.
+
+Then you can either:
+
+- [Add the repository to the list of products](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/development.md#add-a-new-product)
+ published at <https://docs.gitlab.com>.
+- [Add an entry in the global navigation](global_nav.md#add-a-navigation-entry) for
+ <https://docs.gitlab.com> that links to the documentation in that repository.
+ View [an example](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/fedb6378a3c92274ba3b6031df0d34455594e4cc/content/_data/navigation.yaml#L2944-L2946).
+
## Assets
To provide an optimized site structure, design, and a search-engine friendly
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 700d64c30d1..1af0cb72055 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -49,7 +49,7 @@ GitLab adds all troubleshooting information to the documentation, no matter how
unlikely a user is to encounter a situation.
GitLab Support maintains their own
-[troubleshooting content](../../../administration/index.md#support-team-docs)
+[troubleshooting content](../../../administration/index.md#support-team-documentation)
in the GitLab documentation.
### The documentation includes all media types
@@ -1096,14 +1096,15 @@ copy of `https://gitlab.com/gitlab-org/gitlab`, run in a terminal:
### Animated images
-Sometimes an image with animation (such as an animated GIF)
-can help the reader understand a complicated interaction with the user interface.
+Avoid using animated images (such as animated GIFs). They can be distracting
+and annoying for users.
-However, you should use them sparingly and avoid them when you can.
-Do not use them to replace written descriptions of processes or the product.
+If you're describing a complicated interaction in the user interface and want to
+include a visual representation to help readers understand it, you can:
-If you include an animated image, follow the same size and naming conventions we use for images. If the animated image loops, add at least a three
-second pause to the end of the loop.
+- Use a static image (screenshot) and, if necessary, add callouts to emphasize
+ an area of the screen.
+- Create a short video of the interaction and link to it.
## Videos
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index feb10845aea..d55cbe28d9b 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -361,6 +361,7 @@ To configure Vale in your editor, install one of the following as appropriate:
- Sublime Text [`SublimeLinter-contrib-vale` package](https://packagecontrol.io/packages/SublimeLinter-contrib-vale).
- Visual Studio Code [`errata-ai.vale-server` extension](https://marketplace.visualstudio.com/items?itemName=errata-ai.vale-server).
You can configure the plugin to [display only a subset of alerts](#show-subset-of-vale-alerts).
+- Atom [`atomic-vale` package](https://atom.io/packages/atomic-vale).
- Vim [ALE plugin](https://github.com/dense-analysis/ale).
- JetBrains IDEs - No plugin exists, but
[this issue comment](https://github.com/errata-ai/vale-server/issues/39#issuecomment-751714451)
diff --git a/doc/development/event_store.md b/doc/development/event_store.md
index fa7208ead04..ffde51216cf 100644
--- a/doc/development/event_store.md
+++ b/doc/development/event_store.md
@@ -293,6 +293,8 @@ in the `handle_event` method of the subscriber worker.
## Testing
+### Testing the publisher
+
The publisher's responsibility is to ensure that the event is published correctly.
To test that an event has been published correctly, we can use the RSpec matcher `:publish_event`:
@@ -308,6 +310,25 @@ it 'publishes a ProjectDeleted event with project id and namespace id' do
end
```
+It is also possible to compose matchers inside the `:publish_event` matcher.
+This could be useful when we want to assert that an event is created with a certain kind of value,
+but we do not know the value in advance. An example of this is when publishing an event
+after creating a new record.
+
+```ruby
+it 'publishes a ProjectCreatedEvent with project id and namespace id' do
+  # The project ID is only generated when `create_project`
+  # is called in the expect block.
+ expected_data = { project_id: kind_of(Numeric), namespace_id: group_id }
+
+ expect { create_project(user, name: 'Project', path: 'project', namespace_id: group_id) }
+ .to publish_event(Projects::ProjectCreatedEvent)
+ .with(expected_data)
+end
+```
+
+### Testing the subscriber
+
The subscriber must ensure that a published event can be consumed correctly. For this purpose
we have added helpers and shared examples to standardize the way we test subscribers:
diff --git a/doc/development/experiment_guide/experiment_code_reviews.md b/doc/development/experiment_guide/experiment_code_reviews.md
index eda316db9d4..07bc0f59463 100644
--- a/doc/development/experiment_guide/experiment_code_reviews.md
+++ b/doc/development/experiment_guide/experiment_code_reviews.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Adoption
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/experiment_guide/experiment_rollout.md b/doc/development/experiment_guide/experiment_rollout.md
index ff0844f9d3c..e68520f7812 100644
--- a/doc/development/experiment_guide/experiment_rollout.md
+++ b/doc/development/experiment_guide/experiment_rollout.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Adoption
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/experiment_guide/implementing_experiments.md b/doc/development/experiment_guide/implementing_experiments.md
index c9e277873dc..19200d48637 100644
--- a/doc/development/experiment_guide/implementing_experiments.md
+++ b/doc/development/experiment_guide/implementing_experiments.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Adoption
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/experiment_guide/index.md b/doc/development/experiment_guide/index.md
index 163cd009c51..e11e516485a 100644
--- a/doc/development/experiment_guide/index.md
+++ b/doc/development/experiment_guide/index.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Activation
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/experiment_guide/testing_experiments.md b/doc/development/experiment_guide/testing_experiments.md
index a73896c8436..78a5d606490 100644
--- a/doc/development/experiment_guide/testing_experiments.md
+++ b/doc/development/experiment_guide/testing_experiments.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Activation
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 67b53fa0299..10db332d64c 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -597,7 +597,7 @@ export default {
Note that, even if the directive evaluates to `false`, the guarded entity is sent to the backend and
matched against the GraphQL schema. So this approach requires that the feature-flagged entity
exists in the schema, even if the feature flag is disabled. When the feature flag is turned off, it
-is recommended that the resolver returns `null` at the very least.
+is recommended that the resolver at the very least returns `null`, gated by the same feature flag as the frontend. See the [API GraphQL guide](../api_graphql_styleguide.md#frontend-and-backend-feature-flag-strategies).
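+
+A minimal sketch of such a resolver-side guard (the field and flag names are hypothetical):
+
+```ruby
+# Hypothetical GraphQL field method: return nil while the feature flag
+# is disabled, using the same flag that gates the frontend query.
+def new_widget
+  return unless Feature.enabled?(:new_widget_flag, object)
+
+  object.new_widget
+end
+```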
##### Different versions of a query
@@ -617,8 +617,10 @@ export default {
};
```
-This approach is not recommended as it results in bigger merge requests and requires maintaining
-two similar queries for as long as the feature flag exists. This can be used in cases where the new
+##### Avoiding multiple query versions
+
+The multiple version approach is not recommended as it results in bigger merge requests and requires maintaining
+two similar queries for as long as the feature flag exists. Multiple versions can be used in cases where the new
GraphQL entities are not yet part of the schema, or if they are feature-flagged at the schema level
(`new_entity: :feature_flag`).
diff --git a/doc/development/fe_guide/haml.md b/doc/development/fe_guide/haml.md
index 00096ce7fdc..7e72570454e 100644
--- a/doc/development/fe_guide/haml.md
+++ b/doc/development/fe_guide/haml.md
@@ -8,15 +8,20 @@ info: To determine the technical writer assigned to the Stage/Group associated w
[HAML](https://haml.info/) is the [Ruby on Rails](https://rubyonrails.org/) template language that GitLab uses.
-## GitLab UI form builder
+## HAML and our Pajamas Design System
[GitLab UI](https://gitlab-org.gitlab.io/gitlab-ui/) is a Vue component library that conforms
-to the [Pajamas design system](https://design.gitlab.com/). Most of these components
+to the [Pajamas design system](https://design.gitlab.com/). Many of these components
rely on JavaScript and therefore can only be used in Vue.
-However, some of the simpler components (checkboxes, radio buttons, form inputs) can be
-used in HAML by applying the correct CSS classes to the elements. A custom
-[Ruby on Rails form builder](https://gitlab.com/gitlab-org/gitlab/-/blob/7c108df101e86d8a27d69df2b5b1ff1fc24133c5/lib/gitlab/form_builders/gitlab_ui_form_builder.rb) exists to help use GitLab UI components in HAML.
+However, some of the simpler components (such as buttons, checkboxes, or form inputs) can be
+used in HAML:
+
+- Some of the Pajamas components are available as a [ViewComponent](view_component.md#pajamas-components). Use these when possible.
+- If no ViewComponent exists yet, consider creating one. Talk to the Foundations team if you need help.
+- As a fallback, apply the correct CSS classes directly to the plain HAML elements.
+- A custom
+[Ruby on Rails form builder](https://gitlab.com/gitlab-org/gitlab/-/blob/7c108df101e86d8a27d69df2b5b1ff1fc24133c5/lib/gitlab/form_builders/gitlab_ui_form_builder.rb) exists to help use GitLab UI components in HAML forms.
### Use the GitLab UI form builder
@@ -100,7 +105,7 @@ Currently only the listed components are available but more components are plann
This component supports [ViewComponent slots](https://viewcomponent.org/guide/slots.html).
-| Slot | Description
+| Slot | Description
|---|---|
| `label` | Checkbox label content. This slot can be used instead of the `label` argument. |
| `help_text` | Help text content displayed below the checkbox. This slot can be used instead of the `help_text` argument. |
@@ -128,7 +133,7 @@ This component supports [ViewComponent slots](https://viewcomponent.org/guide/sl
This component supports [ViewComponent slots](https://viewcomponent.org/guide/slots.html).
-| Slot | Description
+| Slot | Description
|---|---|
| `label` | Radio button label content. This slot can be used instead of the `label` argument. |
| `help_text` | Help text content displayed below the radio button. This slot can be used instead of the `help_text` argument. |
diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md
index 9ef4375d795..544985d7edc 100644
--- a/doc/development/fe_guide/index.md
+++ b/doc/development/fe_guide/index.md
@@ -89,6 +89,10 @@ How to use [GraphQL](graphql.md).
How to use [HAML](haml.md).
+## ViewComponent
+
+How we use [ViewComponent](view_component.md).
+
## Icons and Illustrations
How we use SVG for our [Icons and Illustrations](icons.md).
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index 5d5b250e9a9..451b0c8a4c6 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -153,7 +153,7 @@ Usage of the `extend` at-rule is prohibited due to [memory leaks](https://gitlab
}
// Good
-@mixing gl-pt-3 {
+@mixin gl-pt-3 {
padding-top: 12px;
}
diff --git a/doc/development/fe_guide/view_component.md b/doc/development/fe_guide/view_component.md
new file mode 100644
index 00000000000..f4bb7ac3a2e
--- /dev/null
+++ b/doc/development/fe_guide/view_component.md
@@ -0,0 +1,174 @@
+---
+stage: Ecosystem
+group: Foundations
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# ViewComponent
+
+ViewComponent is a framework for creating reusable, testable, and encapsulated view
+components with Ruby on Rails, without the need for a JavaScript framework like Vue.
+They are rendered server-side and can be seamlessly used with template languages like [Haml](haml.md).
+
+Refer to the official [documentation](https://viewcomponent.org/) to learn more or
+watch this [introduction video](https://youtu.be/akRhUbvtnmo).
+
+## Pajamas components
+
+Some of the components of our [Pajamas](https://design.gitlab.com) design system are
+available as a ViewComponent in `app/components/pajamas`.
+
+NOTE:
+We have a small but growing number of Pajamas components. Reach out to the
+[Foundations team](https://about.gitlab.com/handbook/engineering/development/dev/ecosystem/foundations/)
+if the component you are looking for is not yet available.
+
+### Available components
+
+#### Alert
+
+The `Pajamas::AlertComponent` follows the [Pajamas Alert](https://design.gitlab.com/components/alert) specification.
+
+**Examples:**
+
+By default, this creates a dismissible info alert with an icon:
+
+```haml
+= render Pajamas::AlertComponent.new(title: "Almost done!")
+```
+
+You can set the variant, hide the icon, and more:
+
+```haml
+= render Pajamas::AlertComponent.new(title: "All done!",
+ variant: :success,
+ dismissible: false,
+ show_icon: false)
+```
+
+For the full list of options, see its
+[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/alert_component.rb).
+
+#### Banner
+
+The `Pajamas::BannerComponent` follows the [Pajamas Banner](https://design.gitlab.com/components/banner) specification.
+
+**Examples:**
+
+In its simplest form the banner component looks like this:
+
+```haml
+= render Pajamas::BannerComponent.new(button_text: 'Learn more', button_link: example_path,
+ svg_path: 'illustrations/example.svg') do |c|
+ - c.title { 'Hello world!' }
+ %p Content of your banner goes here...
+```
+
+If you need more control, you can also use the `illustration` slot
+instead of `svg_path` and the `primary_action` slot instead of `button_text` and `button_link`:
+
+```haml
+= render Pajamas::BannerComponent.new do |c|
+ - c.illustration do
+ = custom_icon('my_inline_svg')
+ - c.title do
+ Hello world!
+ - c.primary_action do
+ = render 'my_button_in_a_partial'
+```
+
+For the full list of options, see its
+[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/banner_component.rb).
+
+#### Button
+
+The `Pajamas::ButtonComponent` follows the [Pajamas Button](https://design.gitlab.com/components/button) specification.
+
+**Examples:**
+
+The button component has many options, but all of them have good defaults,
+so the simplest button looks like this:
+
+```haml
+= render Pajamas::ButtonComponent.new do |c|
+ = _('Button text goes here')
+```
+
+The following example shows most of the available options:
+
+```haml
+= render Pajamas::ButtonComponent.new(category: :secondary,
+ variant: :danger,
+ size: :small,
+ type: :submit,
+ disabled: true,
+ loading: false,
+ block: true) do |c|
+ Button text goes here
+```
+
+You can also create button-like `<a>` tags:
+
+```haml
+= render Pajamas::ButtonComponent.new(href: root_path) do |c|
+ Go home
+```
+
+For the full list of options, see its
+[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/button_component.rb).
+
+#### Card
+
+The `Pajamas::CardComponent` follows the [Pajamas Card](https://design.gitlab.com/components/card) specification.
+
+**Examples:**
+
+The card has one mandatory `body` slot and optional `header` and `footer` slots:
+
+```haml
+= render Pajamas::CardComponent.new do |c|
+ - c.header do
+ I'm the header.
+ - c.body do
+ %p Multiple line
+ %p body content.
+ - c.footer do
+ Footer goes here.
+```
+
+If you want to add custom attributes to any of these or the card itself, use the following options:
+
+```haml
+= render Pajamas::CardComponent.new(card_options: {id: "my-id"}, body_options: {data: { count: 1 }})
+```
+
+`header_options` and `footer_options` are available, too.
+
+For the full list of options, see its
+[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/card_component.rb).
+
+#### Toggle
+
+The `Pajamas::ToggleComponent` follows the [Pajamas Toggle](https://design.gitlab.com/components/toggle) specification.
+
+```haml
+= render Pajamas::ToggleComponent.new(classes: 'js-force-push-toggle',
+ label: s_("ProtectedBranch|Toggle allowed to force push"),
+ is_checked: protected_branch.allow_force_push,
+ label_position: :hidden)
+ Leverage this block to render a rich help text. To render a plain text help text, prefer the `help` parameter.
+```
+
+NOTE:
+**The toggle ViewComponent is special as it depends on the Vue.js component.**
+To actually initialize this component, make sure to call the `initToggle` helper from `~/toggles`.
+
+For the full list of options, see its
+[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/toggle_component.rb).
+
+### Best practices
+
+- If you are about to create a new view in Haml, prefer the available components
+ over plain Haml tags with CSS classes.
+- If you are making changes to an existing Haml view and see, for example, a
+ button that is still implemented with plain Haml, consider migrating it to use a ViewComponent.
diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md
index ae13e3fc8c5..7943ae119be 100644
--- a/doc/development/fe_guide/vue.md
+++ b/doc/development/fe_guide/vue.md
@@ -19,8 +19,8 @@ What is described in the following sections can be found in these examples:
## Vue architecture
All new features built with Vue.js must follow a [Flux architecture](https://facebook.github.io/flux/).
-The main goal we are trying to achieve is to have only one data flow and only one data entry.
-In order to achieve this goal we use [Vuex](#vuex).
+The main goal we are trying to achieve is to have only one data flow, and only one data entry.
+To achieve this goal we use [Vuex](#vuex).
You can also read about this architecture in Vue documentation about
[state management](https://vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
@@ -48,8 +48,8 @@ Let's look into each of them:
### An `index.js` file
-This is the index file of your new feature. This is where the root Vue instance
-of the new feature should be.
+This file is the index file of your new feature. The root Vue instance
+of the new feature should be here.
The Store and the Service should be imported and initialized in this file and
provided as a prop to the main component.
@@ -62,17 +62,16 @@ Be sure to read about [page-specific JavaScript](performance.md#page-specific-ja
While mounting a Vue application, you might need to provide data from Rails to JavaScript.
To do that, you can use the `data` attributes in the HTML element and query them while mounting the application.
-
You should only do this while initializing the application, because the mounted element is replaced
with a Vue-generated DOM.
The advantage of providing data from the DOM to the Vue instance through `props` or
`provide` in the `render` function, instead of querying the DOM inside the main Vue
-component, is that you avoid the need to create a fixture or an HTML element in the unit test.
+component, is that you avoid creating a fixture or an HTML element in the unit test.
-##### provide/inject
+##### `provide` and `inject`
-Vue supports dependency injection through [provide/inject](https://vuejs.org/v2/api/#provide-inject).
+Vue supports dependency injection through [`provide` and `inject`](https://vuejs.org/v2/api/#provide-inject).
In the component the `inject` configuration accesses the values `provide` passes down.
This example of a Vue app initialization shows how the `provide` configuration passes a value from HAML to the component:
@@ -119,13 +118,16 @@ Using dependency injection to provide values from HAML is ideal when:
- The injected value doesn't need an explicit validation against its data type or contents.
- The value doesn't need to be reactive.
-- There are multiple components in the hierarchy that need access to this value where
+- Multiple components exist in the hierarchy that need access to this value where
prop-drilling becomes an inconvenience. Prop-drilling is when the same prop is passed
through all components in the hierarchy until it reaches the component that genuinely uses it.
-Dependency injection can potentially break a child component (either an immediate child or multiple levels deep) if the value declared in the `inject` configuration doesn't have defaults defined and the parent component has not provided the value using the `provide` configuration.
+Dependency injection can potentially break a child component (either an immediate child or multiple levels deep) if both conditions are true:
+
+- The value declared in the `inject` configuration doesn't have defaults defined.
+- The parent component has not provided the value using the `provide` configuration.
-- A [default value](https://vuejs.org/guide/components/provide-inject.html#injection-default-values) might be useful in contexts where it makes sense.
+A [default value](https://vuejs.org/guide/components/provide-inject.html#injection-default-values) might be useful in contexts where it makes sense.
##### props
@@ -155,7 +157,8 @@ return new Vue({
});
```
-> When adding an `id` attribute to mount a Vue application, please make sure this `id` is unique
+NOTE:
+When adding an `id` attribute to mount a Vue application, make sure this `id` is unique
across the codebase.
For more information on why we explicitly declare the data being passed into the Vue app,
@@ -165,9 +168,9 @@ refer to our [Vue style guide](style/vue.md#basic-rules).
When composing a form with Rails, the `name`, `id`, and `value` attributes of form inputs are generated
to match the backend. It can be helpful to have access to these generated attributes when converting
-a Rails form to Vue, or when [integrating components (datepicker, project selector, etc)](https://gitlab.com/gitlab-org/gitlab/-/blob/8956ad767d522f37a96e03840595c767de030968/app/assets/javascripts/access_tokens/index.js#L15) into it.
+a Rails form to Vue, or when [integrating components](https://gitlab.com/gitlab-org/gitlab/-/blob/8956ad767d522f37a96e03840595c767de030968/app/assets/javascripts/access_tokens/index.js#L15) (such as a date picker or project selector) into it.
The [`parseRailsFormFields`](https://gitlab.com/gitlab-org/gitlab/-/blob/fe88797f682c7ff0b13f2c2223a3ff45ada751c1/app/assets/javascripts/lib/utils/forms.js#L107) utility can be used to parse the generated form input attributes so they can be passed to the Vue application.
-This allows us to easily integrate Vue components without changing how the form submits.
+This enables us to integrate Vue components without changing how the form submits.
```haml
-# form.html.haml
@@ -245,7 +248,7 @@ export default {
We query the `gl` object for data that doesn't change during the application's life
cycle in the same place we query the DOM. By following this practice, we can
-avoid the need to mock the `gl` object, which makes tests easier. It should be done while
+avoid mocking the `gl` object, which makes tests easier. It should be done while
initializing our Vue instance, and the data should be provided as `props` to the main component:
```javascript
@@ -263,8 +266,8 @@ return new Vue({
#### Accessing feature flags
-Use Vue's [provide/inject](https://vuejs.org/v2/api/#provide-inject) mechanism
-to make feature flags available to any descendant components in a Vue
+Use the [`provide` and `inject`](https://vuejs.org/v2/api/#provide-inject) mechanisms
+in Vue to make feature flags available to any descendant components in a Vue
application. The `glFeatures` object is already provided in `commons/vue.js`, so
only the mixin is required to use the flags:
@@ -303,14 +306,14 @@ This approach has a few benefits:
});
```
-- No need to access a global variable, except in the application's
+- Accessing a global variable is not required, except in the application's
[entry point](#accessing-the-gl-object).
### A folder for Components
This folder holds all components that are specific to this new feature.
-If you need to use or create a component that is likely to be used somewhere
-else, please refer to `vue_shared/components`.
+To use or create a component that is likely to be used somewhere
+else, refer to `vue_shared/components`.
A good guideline for deciding when to create a component is to consider whether
it could be reusable elsewhere.
@@ -330,7 +333,7 @@ Check this [page](vuex.md) for more details.
### Mixing Vue and jQuery
- Mixing Vue and jQuery is not recommended.
-- If you need to use a specific jQuery plugin in Vue, [create a wrapper around it](https://vuejs.org/v2/examples/select2.html).
+- To use a specific jQuery plugin in Vue, [create a wrapper around it](https://vuejs.org/v2/examples/select2.html).
- It is acceptable for Vue to listen to existing jQuery events using jQuery event listeners.
- It is not recommended to add new jQuery events for Vue to interact with jQuery.
@@ -356,22 +359,171 @@ cannot use primitives or objects.
#### Why
-There are additional reasons why having a JavaScript class presents maintainability issues on a huge codebase:
+Additional reasons why having a JavaScript class presents maintainability issues on a huge codebase:
- After a class is created, it can be extended in a way that can infringe Vue reactivity and best practices.
- A class adds a layer of abstraction, which makes the component API and its inner workings less clear.
- It makes it harder to test. Because the class is instantiated by the component data function, it is
harder to 'manage' component and class separately.
-- Adding Object Oriented Principles (OOP) to a functional codebase adds yet another way of writing code, reducing consistency and clarity.
+- Adding Object Oriented Principles (OOP) to a functional codebase adds another way of writing code, reducing consistency and clarity.
## Style guide
-Please refer to the Vue section of our [style guide](style/vue.md)
+Refer to the Vue section of our [style guide](style/vue.md)
for best practices while writing and testing your Vue components and templates.
+## Composition API
+
+With Vue 2.7, it is possible to use the [Composition API](https://vuejs.org/guide/introduction.html#api-styles) in Vue components and as standalone composables.
+
+### Prefer `<script>` over `<script setup>`
+
+The Composition API allows you to place the logic in the `<script>` section of the component or in a dedicated `<script setup>` section. We should use `<script>` and add the Composition API to components by using the `setup()` property:
+
+```html
+<script>
+  import { computed } from 'vue';
+
+  export default {
+    name: 'MyComponent',
+    setup(props) {
+      const doubleCount = computed(() => props.count * 2);
+
+      return { doubleCount };
+    },
+  };
+</script>
+```
+
+### Aim to have one API style per component
+
+When adding the `setup()` property to a Vue component, consider refactoring it to the Composition API entirely. It's not always feasible, especially for large components, but we should aim to have one API style per component for readability and maintainability.
+
+### Composables
+
+With the Composition API, we have a new way of abstracting logic, including reactive state, into _composables_. A composable is a function that can accept parameters and return reactive properties and methods to be used in a Vue component.
+
+```javascript
+// useCount.js
+import { ref } from 'vue';
+
+export function useCount(initialValue) {
+  const count = ref(initialValue)
+
+  function incrementCount() {
+    count.value += 1
+  }
+
+  function decrementCount() {
+    count.value -= 1
+  }
+
+  return { count, incrementCount, decrementCount }
+}
+```
+
+```javascript
+// MyComponent.vue
+import { useCount } from './use_count'
+
+export default {
+  name: 'MyComponent',
+  setup() {
+    const { count, incrementCount, decrementCount } = useCount(5)
+
+    return { count, incrementCount, decrementCount }
+  }
+}
+```
+
+#### Prefix function and file names with `use`
+
+A common naming convention in Vue for composables is to prefix them with `use` and then briefly describe the composable's functionality (for example, `useBreakpoints` or `useGeolocation`). The same rule applies to the `.js` files containing composables: they should start with `use_`, even if the file contains more than one composable.
+
+#### Avoid lifecycle pitfalls
+
+When building a composable, we should aim to keep it as simple as possible. Lifecycle hooks add complexity to composables and might lead to unexpected side effects. To avoid that, we should follow these principles:
+
+- Minimize lifecycle hook usage whenever possible; prefer accepting and returning callbacks instead.
+- If your composable needs lifecycle hooks, make sure it also performs a cleanup. If we add a listener in `onMounted`, we should remove it in `onUnmounted` within the same composable.
+- Always set up lifecycle hooks immediately:
+
+```javascript
+// bad
+const useAsyncLogic = () => {
+ const action = async () => {
+ await doSomething();
+ onMounted(doSomethingElse);
+ };
+ return { action };
+};
+
+// OK
+const useAsyncLogic = () => {
+ const done = ref(false);
+ onMounted(() => {
+ watch(
+ done,
+ () => done.value && doSomethingElse(),
+ { immediate: true },
+ );
+ });
+ const action = async () => {
+ await doSomething();
+ done.value = true;
+ };
+ return { action };
+};
+```
+
+#### Avoid escape hatches
+
+It might be tempting to write a composable that does everything as a black box, using some of the escape hatches that Vue provides. But in most cases this makes composables too complex and hard to maintain. One escape hatch is the `getCurrentInstance` method, which returns the instance of the currently rendering component. Instead of using that method, prefer passing the data or methods down to a composable via arguments.
+
+```javascript
+const useSomeLogic = () => {
+ doSomeLogic();
+ getCurrentInstance().emit('done'); // bad
+};
+```
+
+```javascript
+const done = () => emit('done');
+
+const useSomeLogic = (done) => {
+ doSomeLogic();
+ done(); // good, composable doesn't try to be too smart
+}
+```
+
+#### Composables and Vuex
+
+Avoid using Vuex state in composables whenever possible. If that's not possible, we should use props to receive the state, and emit events from `setup` to update the Vuex state. A parent component should be responsible for getting that state from Vuex and mutating it on events emitted from a child. You should **never mutate state that comes down from a prop**. If a composable must mutate Vuex state, it should use a callback to emit an event.
+
+```javascript
+const useAsyncComposable = ({ state, update }) => {
+ const start = async () => {
+ const newState = await doSomething(state);
+ update(newState);
+ };
+ return { start };
+};
+
+const ComponentWithComposable = {
+ setup(props, { emit }) {
+ const update = (data) => emit('update', data);
+ const state = computed(() => props.state); // state from Vuex
+ const { start } = useAsyncComposable({ state, update });
+ start();
+ },
+};
+```
+
+#### Testing composables
+
+<!-- TBD -->
+
## Testing Vue Components
-Please refer to the [Vue testing style guide](style/vue.md#vue-testing)
+Refer to the [Vue testing style guide](style/vue.md#vue-testing)
for guidelines and best practices for testing your Vue components.
Each Vue component has a unique output. This output is always present in the render function.
@@ -500,8 +652,8 @@ component under test, with the `computed` property, for example). Remember to us
### Events
-We should test for events emitted in response to an action in our component. This is used to
-verify the correct events are being fired with the correct arguments.
+We should test for events emitted in response to an action in our component. This testing
+verifies that the correct events are fired with the correct arguments.
For any DOM events we should use [`trigger`](https://v1.test-utils.vuejs.org/api/wrapper/#trigger)
to fire our event.
@@ -519,8 +671,7 @@ it('should fire the click event', () => {
})
```
-When we need to fire a Vue event, we should use [`emit`](https://vuejs.org/v2/guide/components-custom-events.html)
-to fire our event.
+When firing a Vue event, use [`emit`](https://vuejs.org/v2/guide/components-custom-events.html).
```javascript
wrapper = shallowMount(DropdownItem);
diff --git a/doc/development/feature_development.md b/doc/development/feature_development.md
new file mode 100644
index 00000000000..a5d74a0bfd9
--- /dev/null
+++ b/doc/development/feature_development.md
@@ -0,0 +1,197 @@
+---
+stage: none
+group: Development
+info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
+---
+
+# Feature development
+
+Consult these topics for information on contributing to specific GitLab features.
+
+## UX and Frontend guides
+
+- [GitLab Design System](https://design.gitlab.com/), for building GitLab with
+ existing CSS styles and elements
+- [Frontend guidelines](fe_guide/index.md)
+- [Emoji guide](fe_guide/emojis.md)
+
+## Backend guides
+
+### General
+
+- [Directory structure](directory_structure.md)
+- [GitLab EventStore](event_store.md) to publish/subscribe to domain events
+- [GitLab utilities](utilities.md)
+- [Newlines style guide](newlines_styleguide.md)
+- [Logging](logging.md)
+- [Dealing with email/mailers](emails.md)
+- [Kubernetes integration guidelines](kubernetes.md)
+- [Permissions](permissions.md)
+- [Code comments](code_comments.md)
+- [Windows Development on GCP](windows.md)
+- [FIPS compliance](fips_compliance.md)
+- [`Gemfile` guidelines](gemfile.md)
+- [Ruby upgrade guidelines](ruby_upgrade.md)
+
+### Things to be aware of
+
+- [Gotchas](gotchas.md) to avoid
+- [Avoid modules with instance variables](module_with_instance_variables.md), if
+ possible
+- [Guidelines for reusing abstractions](reusing_abstractions.md)
+- [Ruby 3 gotchas](ruby3_gotchas.md)
+
+### Rails Framework related
+
+- [Routing](routing.md)
+- [Rails initializers](rails_initializers.md)
+- [Mass Inserting Models](mass_insert.md)
+- [Issuable-like Rails models](issuable-like-models.md)
+- [Issue types vs first-class types](issue_types.md)
+- [DeclarativePolicy framework](policies.md)
+- [Rails update guidelines](rails_update.md)
+
+### Debugging
+
+- [Pry debugging](pry_debugging.md)
+- [Sidekiq debugging](../administration/troubleshooting/sidekiq.md)
+
+### Git specifics
+
+- [How Git object deduplication works in GitLab](git_object_deduplication.md)
+- [Git LFS](lfs.md)
+
+### API
+
+- [API style guide](api_styleguide.md) for contributing to the API
+- [GraphQL API style guide](api_graphql_styleguide.md) for contributing to the
+ [GraphQL API](../api/graphql/index.md)
+
+### GitLab components and features
+
+- [Developing against interacting components or features](interacting_components.md)
+- [Manage feature flags](feature_flags/index.md)
+- [Licensed feature availability](licensed_feature_availability.md)
+- [Accessing session data](session.md)
+- [How to dump production data to staging](db_dump.md)
+- [Geo development](geo.md)
+- [Redis guidelines](redis.md)
+ - [Adding a new Redis instance](redis/new_redis_instance.md)
+- [Sidekiq guidelines](sidekiq/index.md) for working with Sidekiq workers
+- [Working with Gitaly](gitaly.md)
+- [Elasticsearch integration docs](elasticsearch.md)
+- [Working with merge request diffs](diffs.md)
+- [Approval Rules](approval_rules.md)
+- [Repository mirroring](repository_mirroring.md)
+- [Uploads development guide](uploads/index.md)
+- [Auto DevOps development guide](auto_devops.md)
+- [Renaming features](renaming_features.md)
+- [Code Intelligence](code_intelligence/index.md)
+- [Feature categorization](feature_categorization/index.md)
+- [Wikis development guide](wikis.md)
+- [Image scaling guide](image_scaling.md)
+- [Cascading Settings](cascading_settings.md)
+- [Shell commands](shell_commands.md) in the GitLab codebase
+- [Value Stream Analytics development guide](value_stream_analytics.md)
+- [Application limits](application_limits.md)
+
+### Import and Export
+
+- [Working with the GitHub importer](github_importer.md)
+- [Import/Export development documentation](import_export.md)
+- [Test Import Project](import_project.md)
+- [Group migration](bulk_import.md)
+- [Export to CSV](export_csv.md)
+
+## Performance guides
+
+- [Performance guidelines](performance.md) for writing code, benchmarks, and
+ certain patterns to avoid.
+- [Caching guidelines](caching.md) for using caching in Rails under a GitLab environment.
+- [Merge request performance guidelines](merge_request_performance_guidelines.md)
+ for ensuring merge requests do not negatively impact GitLab performance
+- [Profiling](profiling.md) a URL or tracking down N+1 queries using Bullet.
+- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries
+  masked by query caching, memory profiling, and why we should avoid cached
+  queries.
+
+## Database guides
+
+See [database guidelines](database/index.md).
+
+## Integration guides
+
+- [Integrations development guide](integrations/index.md)
+- [Jira Connect app](integrations/jira_connect.md)
+- [Security Scanners](integrations/secure.md)
+- [Secure Partner Integration](integrations/secure_partner_integration.md)
+- [How to run Jenkins in development environment](integrations/jenkins.md)
+- [How to run local `Codesandbox` integration for Web IDE Live Preview](integrations/codesandbox.md)
+
+## Testing guides
+
+- [Testing standards and style guidelines](testing_guide/index.md)
+- [Frontend testing standards and style guidelines](testing_guide/frontend_testing.md)
+
+## Refactoring guides
+
+- [Refactoring guidelines](refactoring_guide/index.md)
+
+## Deprecation guides
+
+- [Deprecation guidelines](deprecation_guidelines/index.md)
+
+## Documentation guides
+
+- [Writing documentation](documentation/index.md)
+- [Documentation style guide](documentation/styleguide/index.md)
+- [Markdown](../user/markdown.md)
+
+## Internationalization (i18n) guides
+
+- [Introduction](i18n/index.md)
+- [Externalization](i18n/externalization.md)
+- [Translation](i18n/translation.md)
+
+## Product Intelligence guides
+
+- [Product Intelligence guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
+- [Service Ping guide](service_ping/index.md)
+- [Snowplow guide](snowplow/index.md)
+
+## Experiment guide
+
+- [Introduction](experiment_guide/index.md)
+
+## Build guides
+
+- [Building a package for testing purposes](build_test_package.md)
+
+## Compliance
+
+- [Licensing](licensing.md) for ensuring license compliance
+
+## Domain-specific guides
+
+- [CI/CD development documentation](cicd/index.md)
+
+## Technical Reference by Group
+
+- [Create: Source Code BE](backend/create_source_code_be/index.md)
+
+## Other development guides
+
+- [Defining relations between files using projections](projections.md)
+- [Reference processing](reference_processing.md)
+- [Compatibility with multiple versions of the application running at the same time](multi_version_compatibility.md)
+- [Features inside `.gitlab/`](features_inside_dot_gitlab.md)
+- [Dashboards for stage groups](stage_group_observability/index.md)
+- [Preventing transient bugs](transient/prevention-patterns.md)
+- [GitLab Application SLIs](application_slis/index.md)
+- [Spam protection and CAPTCHA development guide](spam_protection_and_captcha/index.md)
+
+## Other GitLab Development Kit (GDK) guides
+
+- [Run full Auto DevOps cycle in a GDK instance](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/auto_devops.md)
+- [Using GitLab Runner with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/runner.md)
+- [Using the Web IDE terminal with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/web_ide_terminal_gdk_setup.md)
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index d21a46142a2..140d5f826cf 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -236,6 +236,11 @@ command. For example:
/chatops run feature list --staging
```
+## Toggle a feature flag
+
+See [rolling out changes](controls.md#rolling-out-changes) for more information about toggling
+feature flags.
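+
+For example, you can toggle a flag from the Rails console (a minimal sketch;
+`:my_feature_flag` is a placeholder name):
+
+```ruby
+Feature.enable(:my_feature_flag)   # enable the flag instance-wide
+Feature.disable(:my_feature_flag)  # turn it back off
+```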
+
## Delete a feature flag
See [cleaning up feature flags](controls.md#cleaning-up) for more information about
@@ -520,6 +525,8 @@ Feature.remove(:feature_flag_name)
```
- Any change behind a feature flag that is **enabled** by default **should** have a changelog entry.
+- The changelog for a feature flag should describe the feature, not the
+  flag itself, unless a flag that defaults to enabled is removed while keeping the new code (`other` in the flowchart above).
## Feature flags in tests
diff --git a/doc/development/fips_compliance.md b/doc/development/fips_compliance.md
index 5b6f6ba0d98..6261b2fda6f 100644
--- a/doc/development/fips_compliance.md
+++ b/doc/development/fips_compliance.md
@@ -25,26 +25,37 @@ mean FIPS 140-2.
## Current status
-GitLab Inc has not committed to making GitLab FIPS-compliant at this time. We are
-performing initial investigations to see how much work such an effort would be.
-
Read [Epic &5104](https://gitlab.com/groups/gitlab-org/-/epics/5104) for more
information on the status of the investigation.
-## FIPS compliance at GitLab
-
-In a FIPS context, compliance is a form of self-certification - if we say we are
-"FIPS compliant", we mean that we *believe* we are. There are no external
-certifications to acquire, but if we are aware of non-compliant areas
-in GitLab, we cannot self-certify in good faith.
+GitLab is actively working towards FIPS compliance.
-The known areas of non-compliance are tracked in [Epic &5104](https://gitlab.com/groups/gitlab-org/-/epics/5104).
+## FIPS compliance at GitLab
To be compliant, all components (GitLab itself, Gitaly, etc) must be compliant,
along with the communication between those components, and any storage used by
them. Where functionality cannot be brought into compliance, it must be disabled
when FIPS mode is enabled.
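+
+In code, this gating typically checks `Gitlab::FIPS.enabled?` (see
+[Detect FIPS enablement in code](#detect-fips-enablement-in-code)); a minimal sketch:
+
+```ruby
+# Hypothetical guard: bail out of a non-compliant code path under FIPS mode.
+raise NotImplementedError, 'disabled in FIPS mode' if Gitlab::FIPS.enabled?
+```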
+### Leveraged cryptographic modules
+
+| Cryptographic module name | CMVP number | Instance type | Software component used |
+|----------------------------------------------------------|-------------------------------------------------------------------------------------------------|---------------|-------------------------|
+| Ubuntu 20.04 AWS Kernel Crypto API Cryptographic Module | [4132](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4132) | EC2 | Linux kernel |
+| Ubuntu 20.04 OpenSSL Cryptographic Module | [3966](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3966) | EC2 | Gitaly, Rails (Puma/Sidekiq) |
+| Ubuntu 20.04 Libgcrypt Cryptographic Module | [3902](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3902) | EC2 instances | `gpg`, `sshd` |
+| Amazon Linux 2 Kernel Crypto API Cryptographic Module | [3709](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3709) | EKS nodes | Linux kernel |
+| Amazon Linux 2 OpenSSL Cryptographic Module | [3553](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3553) | EKS nodes | NGINX |
+| RedHat Enterprise Linux 8 OpenSSL Cryptographic Module | [3852](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3852) | EKS nodes | UBI containers: Workhorse, Pages, Container Registry, Rails (Puma/Sidekiq), Security Analyzers |
+| RedHat Enterprise Linux 8 Libgcrypt Cryptographic Module | [3784](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/3784) | EKS nodes | UBI containers: GitLab Shell, `gpg` |
+
+### Supported operating systems
+
+The supported hybrid environments are:
+
+- Omnibus: Ubuntu 20.04 FIPS
+- EKS: Amazon Linux 2
+
## FIPS validation at GitLab
Unlike FIPS compliance, FIPS validation is a formal declaration of compliance by
@@ -55,81 +66,24 @@ A list of FIPS-validated modules can be found at the
NIST (National Institute of Standards and Technology)
[cryptographic module validation program](https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules).
-## Setting up a FIPS-enabled development environment
-
-The simplest approach is to set up a virtual machine running
-[Red Hat Enterprise Linux 8](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening#switching-the-system-to-fips-mode_using-the-system-wide-cryptographic-policies).
-
-Red Hat provide free licenses to developers, and permit the CD image to be
-downloaded from the [Red Hat developer's portal](https://developers.redhat.com).
-Registration is required.
-
-After the virtual machine is set up, you can follow the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit)
-installation instructions, including the [advanced instructions for RHEL](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/advanced.md#red-hat-enterprise-linux).
-Note that `asdf` is not used for dependency management because it's essential to
-use the RedHat-provided Go compiler and other system dependencies.
-
-### Enable FIPS mode
-
-After GDK and its dependencies are installed, run this command (as
-root) and restart the virtual machine:
-
-```shell
-fips-mode-setup --enable
-```
-
-You can check whether it's taken effect by running:
-
-```shell
-fips-mode-setup --check
-```
-
-In this environment, OpenSSL refuses to perform cryptographic operations
-forbidden by the FIPS standards. This enables you to reproduce FIPS-related bugs,
-and validate fixes.
-
-You should be able to open a web browser inside the virtual machine and log in
-to the GitLab instance.
-
-You can disable FIPS mode again by running this command, then restarting the
-virtual machine:
-
-```shell
-fips-mode-setup --disable
-```
-
-#### Detect FIPS enablement in code
-
-You can query `Gitlab::FIPS` in Ruby code to determine if the instance is FIPS-enabled:
-
-```ruby
-def default_min_key_size(name)
- if Gitlab::FIPS.enabled?
- Gitlab::SSHPublicKey.supported_sizes(name).select(&:positive?).min || -1
- else
- 0
- end
-end
-```
+## Install GitLab with FIPS compliance
-## Nightly Omnibus FIPS builds
+This guide is for public users or GitLab team members who must run a production instance of GitLab
+that is FIPS-compliant. It outlines a hybrid deployment using elements from both Omnibus and our
+Cloud Native GitLab installations.
-The Distribution team has created [nightly FIPS Omnibus builds](https://packages.gitlab.com/gitlab/nightly-fips-builds). These
-GitLab builds are compiled to use the system OpenSSL instead of the Omnibus-embedded version of OpenSSL.
+### Prerequisites
-See [the section on how FIPS builds are created](#how-fips-builds-are-created).
+- Amazon Web Services account. Our first target environment runs on AWS and uses other FIPS-compliant AWS resources.
+- Ability to run Ubuntu 20.04 machines for GitLab. Our first target environment uses the hybrid architecture.
-## Runner
-
-See the [documentation on installing a FIPS-compliant GitLab Runner](https://docs.gitlab.com/runner/install/#fips-compliant-gitlab-runner).
-
-## Set up a FIPS-enabled cluster
+### Set up a FIPS-enabled cluster
You can use the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit) to spin
-up a FIPS-enabled cluster for development and testing. These instructions use Amazon Web Services (AWS)
-because that is the first target environment, but you can adapt them for other providers.
+up a FIPS-enabled cluster for development and testing. As mentioned in the prerequisites, these instructions use Amazon Web Services (AWS)
+because that is the first target environment.
-### Set up your environment
+#### Set up your environment
To get started, your AWS account must subscribe to a FIPS-enabled Amazon
Machine Image (AMI) in the [AWS Marketplace console](https://aws.amazon.com/premiumsupport/knowledge-center/launch-ec2-marketplace-subscription/).
@@ -138,13 +92,13 @@ This example assumes that the `Ubuntu Pro 20.04 FIPS LTS` AMI by
`Canonical Group Limited` has been added to your account. This operating
system is used for virtual machines running in Amazon EC2.
-### Omnibus
+#### Omnibus
The simplest way to get a FIPS-enabled GitLab cluster is to use an Omnibus reference architecture.
See the [GET Quick Start Guide](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/docs/environment_quick_start_guide.md)
for more details. The following instructions build on the Quick Start and are also necessary for [Cloud Native Hybrid](#cloud-native-hybrid) installations.
-#### Terraform: Use a FIPS AMI
+##### Terraform: Use a FIPS AMI
1. Follow the guide to set up Terraform and Ansible.
1. After [step 2b](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/docs/environment_quick_start_guide.md#2b-setup-config),
@@ -189,7 +143,7 @@ an instance, this would result in data loss: not only would disks be
destroyed, but also GitLab secrets would be lost. There is a [Terraform lifecycle rule](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/blob/2aaeaff8ac8067f23cd7b6bb5bf131061649089d/terraform/modules/gitlab_aws_instance/main.tf#L40)
to ignore AMI changes.
-#### Ansible: Specify the FIPS Omnibus builds
+##### Ansible: Specify the FIPS Omnibus builds
The standard Omnibus GitLab releases build their own OpenSSL library, which is
not FIPS-validated. However, we have nightly builds that create Omnibus packages
@@ -203,11 +157,11 @@ in this way:
all:
vars:
...
- gitlab_repo_script_url: "https://packages.gitlab.com/install/repositories/gitlab/nightly-fips-builds/script.deb.sh"
+ gitlab_repo_script_url: "https://packages.gitlab.com/install/repositories/gitlab/gitlab-fips/script.deb.sh"
gitlab_edition: "gitlab-fips"
```
-### Cloud Native Hybrid
+#### Cloud Native Hybrid
A Cloud Native Hybrid install uses both Omnibus and Cloud Native GitLab
(CNG) images. The previous instructions cover the Omnibus part, but two
@@ -216,7 +170,7 @@ additional steps are needed to enable FIPS in CNG:
1. Use a custom Amazon Elastic Kubernetes Service (EKS) AMI.
1. Use GitLab containers built with RedHat's Universal Base Image (UBI).
-#### Build a custom EKS AMI
+##### Build a custom EKS AMI
Because Amazon does not yet publish a FIPS-enabled AMI, you have to
build one yourself with Packer.
@@ -259,7 +213,7 @@ be different.
Building a RHEL-based system with FIPS enabled should be possible, but
there is [an outstanding issue preventing the Packer build from completing](https://github.com/aws-samples/amazon-eks-custom-amis/issues/51).
-#### Terraform: Use a custom EKS AMI
+##### Terraform: Use a custom EKS AMI
Now you can set the custom EKS AMI.
@@ -286,7 +240,7 @@ Now you can set the custom EKS AMI.
}
```
-#### Ansible: Use UBI images
+##### Ansible: Use UBI images
CNG uses a Helm Chart to manage which container images to deploy. By default, GET
deploys the latest released versions that use Debian-based containers.
@@ -396,6 +350,107 @@ gitlab:
tag: v15.1.0-fips
```
+## FIPS performance benchmarking
+
+The Quality Engineering Enablement team assists these efforts by benchmarking FIPS-enabled environments against non-FIPS environments.
+
+Testing shows a performance impact in some places, such as Gitaly SSL, but it is not large enough to affect customers.
+
+You can find more information on FIPS performance benchmarking in the following issue:
+
+- [Benchmark performance of FIPS reference architecture](https://gitlab.com/gitlab-org/gitlab/-/issues/364051#note_1010450415)
+
+## Setting up a FIPS-enabled development environment
+
+The simplest approach is to set up a virtual machine running
+[Red Hat Enterprise Linux 8](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening#switching-the-system-to-fips-mode_using-the-system-wide-cryptographic-policies).
+
+Red Hat provides free licenses to developers and permits the CD image to be
+downloaded from the [Red Hat developer's portal](https://developers.redhat.com).
+Registration is required.
+
+After the virtual machine is set up, you can follow the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit)
+installation instructions, including the [advanced instructions for RHEL](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/advanced.md#red-hat-enterprise-linux).
+Note that `asdf` is not used for dependency management because it's essential to
+use the RedHat-provided Go compiler and other system dependencies.
+
+### Enable FIPS mode
+
+After GDK and its dependencies are installed, run this command (as
+root) and restart the virtual machine:
+
+```shell
+fips-mode-setup --enable
+```
+
+You can check whether it's taken effect by running:
+
+```shell
+fips-mode-setup --check
+```
+
+In this environment, OpenSSL refuses to perform cryptographic operations
+forbidden by the FIPS standards. This enables you to reproduce FIPS-related bugs,
+and validate fixes.
+
+You should be able to open a web browser inside the virtual machine and log in
+to the GitLab instance.
+
+You can disable FIPS mode again by running this command, then restarting the
+virtual machine:
+
+```shell
+fips-mode-setup --disable
+```
+
+#### Detect FIPS enablement in code
+
+You can query `Gitlab::FIPS` in Ruby code to determine if the instance is FIPS-enabled:
+
+```ruby
+def default_min_key_size(name)
+ if Gitlab::FIPS.enabled?
+ Gitlab::SSHPublicKey.supported_sizes(name).select(&:positive?).min || -1
+ else
+ 0
+ end
+end
+```
+
+#### Unsupported features in FIPS mode
+
+Some GitLab features do not work when FIPS mode is enabled. The following features
+are known not to work in FIPS mode, though other unlisted features may also be
+affected:
+
+- [Container Scanning](../user/application_security/container_scanning/index.md) support for scanning images in repositories that require authentication.
+- [Code Quality](../ci/testing/code_quality.md) does not support operating in FIPS-compliant mode.
+- [Dependency scanning](../user/application_security/dependency_scanning/index.md) support for Gradle.
+- [Dynamic Application Security Testing (DAST)](../user/application_security/dast/index.md)
+ does not support operating in FIPS-compliant mode.
+- [License compliance](../user/compliance/license_compliance/index.md).
+- [Solutions for vulnerabilities](../user/application_security/vulnerabilities/index.md#resolve-a-vulnerability)
+ for yarn projects.
+- [Static Application Security Testing (SAST)](../user/application_security/sast/index.md)
+  supports a reduced set of [analyzers](../user/application_security/sast/index.md#fips-enabled-images)
+ when operating in FIPS-compliant mode.
+
+Additionally, these package repositories are disabled in FIPS mode:
+
+- [Conan package repository](../user/packages/conan_repository/index.md).
+- [Debian package repository](../user/packages/debian_repository/index.md).
+
+## Nightly Omnibus FIPS builds
+
+The Distribution team has created [nightly FIPS Omnibus builds](https://packages.gitlab.com/gitlab/nightly-fips-builds). These
+GitLab builds are compiled to use the system OpenSSL instead of the Omnibus-embedded version of OpenSSL.
+
+See [the section on how FIPS builds are created](#how-fips-builds-are-created).
+
+## Runner
+
+See the [documentation on installing a FIPS-compliant GitLab Runner](https://docs.gitlab.com/runner/install/#fips-compliant-gitlab-runner).
+
## Verify FIPS
The following sections describe ways you can verify if FIPS is enabled.
diff --git a/doc/development/foreign_keys.md b/doc/development/foreign_keys.md
index 77df6fbfb0d..e0dd0fe8e7c 100644
--- a/doc/development/foreign_keys.md
+++ b/doc/development/foreign_keys.md
@@ -28,9 +28,80 @@ Guide](migration_style_guide.md) for more information.
Keep in mind that you can only safely add foreign keys to existing tables after
you have removed any orphaned rows. The method `add_concurrent_foreign_key`
-does not take care of this so you need to do so manually. See
+does not take care of this so you must do so manually. See
[adding foreign key constraint to an existing column](database/add_foreign_key_to_existing_column.md).
+## Updating Foreign Keys In Migrations
+
+Sometimes a foreign key constraint must be changed, preserving the column
+but updating the constraint condition. For example, moving from
+`ON DELETE CASCADE` to `ON DELETE SET NULL` or vice-versa.
+
+PostgreSQL does not prevent you from adding overlapping foreign keys. It
+honors the most recently added constraint. This allows us to replace foreign keys without
+ever losing foreign key protection on a column.
+
+To replace a foreign key:
+
+1. [Add the new foreign key without validation](database/add_foreign_key_to_existing_column.md#prevent-invalid-records)
+
+   The new foreign key must be given a different constraint name, because the
+   old foreign key still exists when the new one is added.
+
+ ```ruby
+ class ReplaceFkOnPackagesPackagesProjectId < Gitlab::Database::Migration[2.0]
+ disable_ddl_transaction!
+
+ NEW_CONSTRAINT_NAME = 'fk_new'
+
+ def up
+ add_concurrent_foreign_key(:packages_packages, :projects, column: :project_id, on_delete: :nullify, validate: false, name: NEW_CONSTRAINT_NAME)
+ end
+
+ def down
+ with_lock_retries do
+ remove_foreign_key_if_exists(:packages_packages, column: :project_id, on_delete: :nullify, name: NEW_CONSTRAINT_NAME)
+ end
+ end
+ end
+ ```
+
+1. [Validate the new foreign key](database/add_foreign_key_to_existing_column.md#validate-the-foreign-key)
+
+ ```ruby
+ class ValidateFkNew < Gitlab::Database::Migration[2.0]
+ NEW_CONSTRAINT_NAME = 'fk_new'
+
+ # foreign key added in <link to MR or path to migration adding new FK>
+ def up
+ validate_foreign_key(:packages_packages, name: NEW_CONSTRAINT_NAME)
+ end
+
+ def down
+ # no-op
+ end
+ end
+ ```
+
+1. Remove the old foreign key:
+
+ ```ruby
+ class RemoveFkOld < Gitlab::Database::Migration[2.0]
+ OLD_CONSTRAINT_NAME = 'fk_old'
+
+ # new foreign key added in <link to MR or path to migration adding new FK>
+ # and validated in <link to MR or path to migration validating new FK>
+ def up
+ remove_foreign_key_if_exists(:packages_packages, column: :project_id, on_delete: :cascade, name: OLD_CONSTRAINT_NAME)
+ end
+
+ def down
+ # Validation is skipped here, so if rolled back, this will need to be revalidated in a separate migration
+ add_concurrent_foreign_key(:packages_packages, :projects, column: :project_id, on_delete: :cascade, validate: false, name: OLD_CONSTRAINT_NAME)
+ end
+ end
+ ```
+
## Cascading Deletes
Every foreign key must define an `ON DELETE` clause, and in 99% of the cases
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 18dffe42177..9e9bd85ecd8 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -19,11 +19,25 @@ Geo handles replication for different components:
- [Database](#database-replication): includes the entire application, except cache and jobs.
- [Git repositories](#repository-replication): includes both projects and wikis.
-- [Uploaded blobs](#uploads-replication): includes anything from images attached on issues
+- [Blobs](#blob-replication): includes anything from images attached on issues
to raw logs and assets from CI.
With the exception of the Database replication, on a *secondary* site, everything is coordinated
-by the [Geo Log Cursor](#geo-log-cursor).
+by the [Geo Log Cursor](#geo-log-cursor-daemon).
+
+### Replication states
+
+The following diagram illustrates how replication works. Some allowed transitions are omitted for clarity.
+
+```mermaid
+stateDiagram-v2
+ Pending --> Started
+ Started --> Synced
+ Started --> Failed
+ Synced --> Pending: Mark for resync
+ Failed --> Pending: Mark for resync
+ Failed --> Started: Retry
+```
### Geo Log Cursor daemon
@@ -66,7 +80,7 @@ the state of every repository in the [tracking database](#tracking-database).
There are a few ways a repository gets replicated by the:
- [Repository Sync worker](#repository-sync-worker).
-- [Geo Log Cursor](#geo-log-cursor).
+- [Geo Log Cursor](#geo-log-cursor-daemon).
#### Project Registry
@@ -97,26 +111,211 @@ projects that need updating. Those projects can be:
timestamp that is more recent than the `last_repository_successful_sync_at`
timestamp in the `Geo::ProjectRegistry` model.
- Manual: The administrator can manually flag a repository to resync in the
- [Geo Admin Area](../user/admin_area/geo_nodes.md).
+ [Geo Admin Area](../user/admin_area/geo_sites.md).
When we fail to fetch a repository on the secondary `RETRIES_BEFORE_REDOWNLOAD`
times, Geo does a so-called _re-download_. It will do a clean clone
into the `@geo-temporary` directory in the root of the storage. When
it's successful, we replace the main repository with the newly cloned one.
-### Uploads replication
+### Blob replication
+
+Blobs such as [uploads](uploads/index.md), LFS objects, and CI job artifacts are replicated to the **secondary** site with the [Self-Service Framework](geo/framework.md). To track the state of syncing, each model has a corresponding registry table, for example `Upload` has `Geo::UploadRegistry` in the [PostgreSQL Geo Tracking Database](#tracking-database).
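+
+For example, on a secondary site you can inspect one registry row in the Rails
+console (a hedged sketch; the attribute names shown are illustrative):
+
+```ruby
+registry = Geo::UploadRegistry.last
+registry.state # sync state, for example: pending, started, synced, failed
+```
+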
-File uploads are also being replicated to the **secondary** site. To
-track the state of syncing, the `Geo::UploadRegistry` model is used.
+#### Blob replication happy path workflows between services
+
+Job artifacts are used in the diagrams below, as one example of a blob.
+
+##### Replicating a new job artifact
+
+Primary site:
+
+```mermaid
+sequenceDiagram
+ participant R as Runner
+ participant P as Puma
+ participant DB as PostgreSQL
+ participant SsP as Secondary site PostgreSQL
+ R->>P: Upload artifact
+ P->>DB: Insert `ci_job_artifacts` row
+ P->>DB: Insert `geo_events` row
+ P->>DB: Insert `geo_event_log` row
+ DB->>SsP: Replicate rows
+```
-#### Upload Registry
+- A [Runner](https://docs.gitlab.com/runner/) uploads an artifact
+- [Puma](architecture.md#puma) inserts `ci_job_artifacts` row
+- Puma inserts `geo_events` row with data like "Job Artifact with ID 123 was updated"
+- Puma inserts `geo_event_log` row pointing to the `geo_events` row (because we built SSF on top of some legacy logic)
+- [PostgreSQL](architecture.md#postgresql) streaming replication inserts the rows in the read replica
-Similar to the [Project Registry](#project-registry), there is a
-`Geo::UploadRegistry` model that tracks the synced uploads.
+Secondary site, after the PostgreSQL DB rows have been replicated:
+
+```mermaid
+sequenceDiagram
+ participant DB as PostgreSQL
+ participant GLC as Geo Log Cursor
+ participant R as Redis
+ participant S as Sidekiq
+ participant TDB as PostgreSQL Tracking DB
+ participant PP as Primary site Puma
+ GLC->>DB: Query `geo_event_log`
+ GLC->>DB: Query `geo_events`
+ GLC->>R: Enqueue `Geo::EventWorker`
+ S->>R: Pick up `Geo::EventWorker`
+ S->>TDB: Insert to `job_artifact_registry`, "starting sync"
+ S->>PP: GET <primary site internal URL>/geo/retrieve/job_artifact/123
+ S->>TDB: Update `job_artifact_registry`, "synced"
+```
+
+- [Geo Log Cursor](#geo-log-cursor-daemon) loop finds the new `geo_event_log` row
+- Geo Log Cursor processes the `geo_events` row
+ - Geo Log Cursor enqueues `Geo::EventWorker` job passing through the `geo_events` row data
+- [Sidekiq](architecture.md#sidekiq) picks up `Geo::EventWorker` job
+ - Sidekiq inserts `job_artifact_registry` row in the [PostgreSQL Geo Tracking Database](#tracking-database) because it doesn't exist, and marks it "started sync"
+ - Sidekiq does a GET request on an API endpoint at the primary Geo site and downloads the file
+ - Sidekiq marks the `job_artifact_registry` row as "synced" and "pending verification"
+
+##### Backfilling existing job artifacts
+
+- Sysadmin has an existing GitLab site without Geo
+- There are existing CI jobs and job artifacts
+- Sysadmin sets up a new GitLab site and configures it to be a secondary Geo site
+
+Secondary site:
+
+There are two cronjobs running every minute: `Geo::Secondary::RegistryConsistencyWorker` and `Geo::RegistrySyncWorker`. The workflow below is split into two, along those lines.
+
+```mermaid
+sequenceDiagram
+ participant SC as Sidekiq-cron
+ participant R as Redis
+ participant S as Sidekiq
+ participant DB as PostgreSQL
+ participant TDB as PostgreSQL Tracking DB
+ SC->>R: Enqueue `Geo::Secondary::RegistryConsistencyWorker`
+ S->>R: Pick up `Geo::Secondary::RegistryConsistencyWorker`
+ S->>DB: Query `ci_job_artifacts`
+ S->>TDB: Query `job_artifact_registry`
+ S->>TDB: Insert to `job_artifact_registry`
+```
-CI Job Artifacts and LFS objects are synced in a similar way as uploads,
-but they are tracked by `Geo::JobArtifactRegistry`, and `Geo::LfsObjectRegistry`
-models respectively.
+- [Sidekiq-cron](https://github.com/ondrejbartas/sidekiq-cron) enqueues a `Geo::Secondary::RegistryConsistencyWorker` job every minute. As long as it is actively doing work (creating and deleting rows), this job immediately reenqueues itself. This job uses an exclusive lease to prevent multiple instances of itself from running simultaneously.
+- [Sidekiq](architecture.md#sidekiq) picks up `Geo::Secondary::RegistryConsistencyWorker` job
+ - Sidekiq queries `ci_job_artifacts` table for up to 10000 rows
+ - Sidekiq queries `job_artifact_registry` table for up to 10000 rows
+ - Sidekiq inserts a `job_artifact_registry` row in the [PostgreSQL Geo Tracking Database](#tracking-database) corresponding to the existing Job Artifact
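+
+The consistency check can be pictured as follows (a simplified sketch; the real
+worker batches both inserts and deletes, and the `artifact_id` column name is an
+assumption for illustration):
+
+```ruby
+# Find job artifacts that have no registry row yet, then backfill them.
+untracked = Ci::JobArtifact
+  .where.not(id: Geo::JobArtifactRegistry.select(:artifact_id))
+  .limit(10_000)
+untracked.each { |artifact| Geo::JobArtifactRegistry.create!(artifact_id: artifact.id) }
+```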
+
+```mermaid
+sequenceDiagram
+ participant SC as Sidekiq-cron
+ participant R as Redis
+ participant S as Sidekiq
+ participant DB as PostgreSQL
+ participant TDB as PostgreSQL Tracking DB
+ participant PP as Primary site Puma
+ SC->>R: Enqueue `Geo::RegistrySyncWorker`
+ S->>R: Pick up `Geo::RegistrySyncWorker`
+ S->>TDB: Query `*_registry` tables
+ S->>R: Enqueue `Geo::EventWorker`s
+ S->>R: Pick up `Geo::EventWorker`
+ S->>TDB: Insert to `job_artifact_registry`, "starting sync"
+ S->>PP: GET <primary site internal URL>/geo/retrieve/job_artifact/123
+ S->>TDB: Update `job_artifact_registry`, "synced"
+```
+
+- [Sidekiq-cron](https://github.com/ondrejbartas/sidekiq-cron) enqueues a `Geo::RegistrySyncWorker` job every minute. As long as it is actively doing work, this job loops for up to an hour scheduling sync jobs. This job uses an exclusive lease to prevent multiple instances of itself from running simultaneously.
+- [Sidekiq](architecture.md#sidekiq) picks up `Geo::RegistrySyncWorker` job
+ - Sidekiq queries all `registry` tables in the [PostgreSQL Geo Tracking Database](#tracking-database) for "never attempted sync" rows. It interleaves rows from each table and adds them to an in-memory queue.
+  - If the previous step yielded fewer than 1000 rows, Sidekiq queries all `registry` tables for "failed sync and ready to retry" rows, interleaves them, and adds them to the in-memory queue.
+ - Sidekiq enqueues `Geo::EventWorker` jobs with arguments like "Job Artifact with ID 123 was updated" for each item in the queue, and tracks the enqueued Sidekiq job IDs.
+ - Sidekiq stops enqueuing `Geo::EventWorker` jobs when "maximum concurrency limit" settings are reached
+ - Sidekiq loops doing this kind of work until it has no more to do
+- Sidekiq picks up `Geo::EventWorker` job
+ - Sidekiq marks the `job_artifact_registry` row as "started sync"
+ - Sidekiq does a GET request on an API endpoint at the primary Geo site and downloads the file
+ - Sidekiq marks the `job_artifact_registry` row as "synced" and "pending verification"
+
+##### Verifying a new job artifact
+
+Primary site:
+
+```mermaid
+sequenceDiagram
+ participant Ru as Runner
+ participant P as Puma
+ participant DB as PostgreSQL
+ participant SC as Sidekiq-cron
+ participant Rd as Redis
+ participant S as Sidekiq
+ participant F as Filesystem
+ Ru->>P: Upload artifact
+ P->>DB: Insert `ci_job_artifacts`
+ P->>DB: Insert `ci_job_artifact_states`
+ SC->>Rd: Enqueue `Geo::VerificationCronWorker`
+ S->>Rd: Pick up `Geo::VerificationCronWorker`
+ S->>DB: Query `ci_job_artifact_states`
+ S->>Rd: Enqueue `Geo::VerificationBatchWorker`
+ S->>Rd: Pick up `Geo::VerificationBatchWorker`
+ S->>DB: Query `ci_job_artifact_states`
+ S->>DB: Update `ci_job_artifact_states` row, "started"
+ S->>F: Checksum file
+ S->>DB: Update `ci_job_artifact_states` row, "succeeded"
+```
+
+- A [Runner](https://docs.gitlab.com/runner/) uploads an artifact
+- [Puma](architecture.md#puma) creates a `ci_job_artifacts` row
+- Puma creates a `ci_job_artifact_states` row to store verification state.
+ - The row is marked "pending verification"
+- [Sidekiq-cron](https://github.com/ondrejbartas/sidekiq-cron) enqueues a `Geo::VerificationCronWorker` job every minute
+- [Sidekiq](architecture.md#sidekiq) picks up the `Geo::VerificationCronWorker` job
+ - Sidekiq queries `ci_job_artifact_states` for the number of rows marked "pending verification" or "failed verification and ready to retry"
+ - Sidekiq enqueues one or more `Geo::VerificationBatchWorker` jobs, limited by the "maximum verification concurrency" setting
+- Sidekiq picks up `Geo::VerificationBatchWorker` job
+ - Sidekiq queries `ci_job_artifact_states` for rows marked "pending verification"
+  - If the previous step yielded fewer than 10 rows, Sidekiq queries `ci_job_artifact_states` for rows marked "failed verification and ready to retry"
+ - For each row
+ - Sidekiq marks it "started verification"
+ - Sidekiq gets the SHA256 checksum of the file
+ - Sidekiq saves the checksum in the row and marks it "succeeded verification"
+ - Now secondary Geo sites can compare against this checksum
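+
+The checksum step itself boils down to hashing the file on disk, which can be
+sketched with Ruby's standard library (the real worker also records timing and
+state around it; the path is a placeholder):
+
+```ruby
+require "digest"
+
+# Compute the SHA256 checksum that secondary sites later compare against.
+checksum = Digest::SHA256.file("/path/to/artifact.zip").hexdigest
+```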
+
+Secondary site:
+
+```mermaid
+sequenceDiagram
+ participant SC as Sidekiq-cron
+ participant R as Redis
+ participant S as Sidekiq
+ participant TDB as PostgreSQL Tracking DB
+ participant F as Filesystem
+ participant DB as PostgreSQL
+ SC->>R: Enqueue `Geo::VerificationCronWorker`
+ S->>R: Pick up `Geo::VerificationCronWorker`
+ S->>TDB: Query `job_artifact_registry`
+ S->>R: Enqueue `Geo::VerificationBatchWorker`
+ S->>R: Pick up `Geo::VerificationBatchWorker`
+ S->>TDB: Query `job_artifact_registry`
+ S->>TDB: Update `job_artifact_registry` row, "started"
+ S->>F: Checksum file
+ S->>DB: Query `ci_job_artifact_states`
+ S->>TDB: Update `job_artifact_registry` row, "succeeded"
+```
+
+- After the artifact is successfully synced, it becomes "pending verification"
+- [Sidekiq-cron](https://github.com/ondrejbartas/sidekiq-cron) enqueues a `Geo::VerificationCronWorker` job every minute
+- [Sidekiq](architecture.md#sidekiq) picks up the `Geo::VerificationCronWorker` job
+ - Sidekiq queries `job_artifact_registry` in the [PostgreSQL Geo Tracking Database](#tracking-database) for the number of rows marked "pending verification" or "failed verification and ready to retry"
+ - Sidekiq enqueues one or more `Geo::VerificationBatchWorker` jobs, limited by the "maximum verification concurrency" setting
+- Sidekiq picks up `Geo::VerificationBatchWorker` job
+  - Sidekiq queries `job_artifact_registry` in the PostgreSQL Geo Tracking Database for rows marked "pending verification"
+  - If the previous step yielded fewer than 10 rows, Sidekiq queries `job_artifact_registry` for rows marked "failed verification and ready to retry"
+ - For each row
+ - Sidekiq marks it "started verification"
+ - Sidekiq gets the SHA256 checksum of the file
+ - Sidekiq saves the checksum in the row
+ - Sidekiq compares the checksum against the checksum in the `ci_job_artifact_states` row which was replicated by PostgreSQL
+ - If the checksum matches, then Sidekiq marks the `job_artifact_registry` row "succeeded verification"
## Authentication
@@ -241,6 +440,22 @@ ignores items in object storage. Either:
## Verification
+### Verification states
+
+The following diagram illustrates how verification works. Some allowed transitions are omitted for clarity.
+
+```mermaid
+stateDiagram-v2
+ Pending --> Started
+ Pending --> Disabled: No primary checksum
+ Disabled --> Started: Primary checksum succeeded
+ Started --> Succeeded
+ Started --> Failed
+ Succeeded --> Pending: Mark for reverify
+ Failed --> Pending: Mark for reverify
+ Failed --> Started: Retry
+```
+
### Repository verification
Repositories are verified with a checksum.
@@ -252,7 +467,12 @@ basically hashes all Git refs together and stores that hash in the
The **secondary** site does the same to calculate the hash of its
clone, and compares the hash with the value the **primary** site
calculated. If there is a mismatch, Geo will mark this as a mismatch
-and the administrator can see this in the [Geo Admin Area](../user/admin_area/geo_nodes.md).
+and the administrator can see this in the [Geo Admin Area](../user/admin_area/geo_sites.md).
+
+## Geo proxying
+
+Geo secondaries can proxy web requests to the primary.
+Read more on the [Geo proxying (development) page](geo/proxying.md).
## Glossary
@@ -303,10 +523,7 @@ events include:
- Job Artifact Deleted event
- Upload Deleted event
-### Geo Log Cursor
-
-The process running on the **secondary** site that looks for new
-`Geo::EventLog` rows.
+See [Geo Log Cursor daemon](#geo-log-cursor-daemon).
## Code features
@@ -415,7 +632,7 @@ We switch and filter from each event by the `event_name` field.
### Geo Log Cursor (GitLab 10.0 and up)
In GitLab 10.0 and later, [System Webhooks](#system-hooks-gitlab-87-to-95) are no longer
-used and Geo Log Cursor is used instead. The Log Cursor traverses the
+used and [Geo Log Cursor](#geo-log-cursor-daemon) is used instead. The Log Cursor traverses the
`Geo::EventLog` rows to see if there are changes since the last time
the log was checked and will handle repository updates, deletes,
changes, and renames.
diff --git a/doc/development/geo/proxying.md b/doc/development/geo/proxying.md
new file mode 100644
index 00000000000..41c7f426c6f
--- /dev/null
+++ b/doc/development/geo/proxying.md
@@ -0,0 +1,356 @@
+---
+stage: Systems
+group: Geo
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Geo proxying
+
+With Geo proxying, secondaries proxy web requests through Workhorse to the primary, so users navigating to the
+secondary see a read-write UI and can perform all operations that they can on the primary.
+
+## Request life cycle
+
+### Top-level view
+
+The proxying interaction can be explained at a high level through the following diagram:
+
+```mermaid
+sequenceDiagram
+actor client
+participant secondary
+participant primary
+
+client->>secondary: GET /explore
+secondary-->>primary: GET /explore (proxied)
+primary-->>secondary: HTTP/1.1 200 OK [..]
+secondary->>client: HTTP/1.1 200 OK [..]
+```
+
+### Proxy detection mechanism
+
+To determine whether it should proxy requests to the primary, and to obtain the primary's URL (as stored in
+the database), Workhorse polls the internal API when Geo is enabled. When proxying should be enabled, the internal
+API responds with the primary URL and JWT-signed data that is passed on to the primary for every request.
+
+```mermaid
+sequenceDiagram
+ participant W as Workhorse (secondary)
+ participant API as Internal Rails API
+ W->API: GET /api/v4/geo/proxy (internal)
+ loop Poll every 10 seconds
+ API-->W: {geo_proxy_primary_url, geo_proxy_extra_data}, update config
+ end
+```
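+
+The response can be sketched as the following Ruby hash (field names come from
+the diagram above; the values are illustrative):
+
+```ruby
+{
+  geo_proxy_primary_url: "http://primary.internal",
+  geo_proxy_extra_data: "<JWT-signed data passed to the primary on every request>"
+}
+```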
+
+### In-depth request flow and local data acceleration compared with proxying
+
+In more detail: Workhorse on the secondary (requested) site decides whether or not to proxy the data. If it
+can "accelerate" the data type (that is, serve it locally to save a roundtrip request), it returns the data
+immediately. Otherwise, traffic is sent to the primary's internal URL, served by Workhorse on the primary exactly
+as a direct request would be. The response is then proxied back to the user through the secondary Workhorse over
+the same connection.
+
+```mermaid
+flowchart LR
+ A[Client]--->W1["Workhorse (secondary)"]
+ W1 --> W1C[Serve data locally?]
+ W1C -- "Yes" ----> W1
+ W1C -- "No (proxy)" ----> W2["Workhorse (primary)"]
+ W2 --> W1 ----> A
+```
+
+## Sign-in
+
+### Requests proxied to the primary requiring authorization
+
+```mermaid
+sequenceDiagram
+autoNumber
+participant Client
+participant Secondary
+participant Primary
+
+Client->>Secondary: `/group/project` request
+Secondary->>Primary: proxy /group/project
+opt primary not signed in
+Primary-->>Secondary: 302 redirect
+Secondary-->>Client: proxy 302 redirect
+Client->>Secondary: /users/sign_in
+Secondary->>Primary: proxy /users/sign_in
+Note right of Primary: authentication happens, POST to same URL etc
+Primary-->>Secondary: 302 redirect
+Secondary-->>Client: proxy 302 redirect
+Client->>Secondary: /group/project
+Secondary->>Primary: proxy /group/project
+end
+Primary-->>Secondary: /group/project logged in response (session on primary created)
+Secondary-->>Client: proxy full response
+```
+
+### Requests requiring a user session on the secondary
+
+At the moment, this flow only applies to Project Replication Details and Design Replication Details in the Geo Admin
+Area. For more context, see
+[View replication data on the primary site](../../administration/geo/index.md#view-replication-data-on-the-primary-site).
+
+```mermaid
+sequenceDiagram
+autoNumber
+participant Client
+participant Secondary
+participant Primary
+
+Client->>Secondary: `admin/geo/replication/projects` request
+opt secondary not signed in
+Secondary-->>Client: 302 redirect
+Client->>Secondary: /users/auth/geo/sign_in
+Secondary-->>Client: 302 redirect
+Client->>Secondary: /oauth/geo/auth/geo/sign_in
+Secondary-->>Client: 302 redirect
+Client->>Secondary: /oauth/authorize
+Secondary->>Primary: proxy /oauth/authorize
+opt primary not signed in
+Primary-->>Secondary: 302 redirect
+Secondary-->>Client: proxy 302 redirect
+Client->>Secondary: /users/sign_in
+Secondary->>Primary: proxy /users/sign_in
+Note right of Primary: authentication happens, POST to same URL etc
+end
+Primary-->>Secondary: 302 redirect
+Secondary-->>Client: proxy 302 redirect
+Client->>Secondary: /oauth/geo/callback
+Secondary-->>Client: 302 redirect
+Client->>Secondary: admin/geo/replication/projects
+end
+Secondary-->>Client: admin/geo/replication/projects logged in response (session on both primary and secondary)
+```
+
+## Git pull
+
+For historical reasons, the `push_from_secondary` path is used to forward a Git pull. There is [an issue proposing to
+rename this route](https://gitlab.com/gitlab-org/gitlab/-/issues/292690) to avoid confusion.
+
+### Git pull over HTTP(s)
+
+#### Accelerated repositories
+
+When a repository exists on the secondary and we detect that it is up to date with the primary, we serve it directly
+instead of proxying.
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant Wsec as "Workhorse (secondary)"
+participant Rsec as "Rails (secondary)"
+participant Gsec as "Gitaly (secondary)"
+C->>Wsec: GET /foo/bar.git/info/refs/?service=git-upload-pack
+Wsec->>Rsec: <internal API check>
+note over Rsec: decide that the repo is synced and up to date
+Rsec-->>Wsec: 401 Unauthorized
+Wsec-->>C: <response>
+C->>Wsec: GET /foo/bar.git/info/refs/?service=git-upload-pack
+Wsec->>Rsec: <internal API check>
+Rsec-->>Wsec: Render Workhorse OK
+Wsec-->>C: 200 OK
+C->>Wsec: POST /foo/bar.git/git-upload-pack
+Wsec->>Rsec: GitHttpController#git_upload_pack
+Rsec-->>Wsec: Render Workhorse OK
+Wsec->>Gsec: Workhorse gets the connection details from Rails, connects to Gitaly: SmartHTTP Service, UploadPack RPC (check the proto for details)
+Gsec-->>Wsec: Return a stream of Proto messages
+Wsec-->>C: Pipe messages to the Git client
+```
+
+#### Proxied repositories
+
+If a requested repository isn't synced, or we detect that it is not up to date, the request is proxied to the primary
+to get the latest version of the changes.
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant Wsec as "Workhorse (secondary)"
+participant Rsec as "Rails (secondary)"
+participant W as "Workhorse (primary)"
+participant R as "Rails (primary)"
+participant G as "Gitaly (primary)"
+C->>Wsec: GET /foo/bar.git/info/refs/?service=git-upload-pack
+Wsec->>Rsec: <internal API check>
+note over Rsec: decide that the repo is out of date
+Rsec-->>Wsec: 302 Redirect to /-/push_from_secondary/2/foo/bar.git/info/refs?service=git-upload-pack
+Wsec-->>C: <response>
+C->>Wsec: GET /-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-upload-pack
+Wsec->>W: <proxied request>
+W->>R: <data>
+R-->>W: 401 Unauthorized
+W-->>Wsec: <proxied response>
+Wsec-->>C: <response>
+C->>Wsec: GET /-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-upload-pack
+note over W: proxied
+Wsec->>W: <proxied request>
+W->>R: <data>
+R-->>W: Render Workhorse OK
+W-->>Wsec: <proxied response>
+Wsec-->>C: <response>
+C->>Wsec: POST /-/push_from_secondary/2/foo/bar.git/git-upload-pack
+Wsec->>W: <proxied request>
+W->>R: GitHttpController#git_upload_pack
+R-->>W: Render Workhorse OK
+W->>G: Workhorse gets the connection details from Rails, connects to Gitaly: SmartHTTP Service, UploadPack RPC (check the proto for details)
+G-->>W: Return a stream of Proto messages
+W-->>Wsec: Pipe messages to the Git client
+Wsec-->>C: Return piped messages from Git
+```
+
+### Git pull over SSH
+
+As SSH operations go through GitLab Shell instead of Workhorse, they are not proxied through the mechanism used for
+Workhorse requests. Instead, SSH operations are proxied as Git HTTP requests to the primary site by the secondary
+Rails internal API.
+
+#### Accelerated repositories
+
+When a repository exists on the secondary and we detect that it is up to date with the primary, we serve it directly
+instead of proxying.
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant S as GitLab Shell (secondary)
+participant I as Internal API (secondary Rails)
+participant G as Gitaly (secondary)
+C->>S: git pull
+S->>I: SSH key validation (api/v4/internal/authorized_keys?key=..)
+I-->>S: HTTP/1.1 200 OK
+S->>G: InfoRefs:UploadPack RPC
+G-->>S: stream Git response back
+S-->>C: stream Git response back
+C-->>S: stream Git data to fetch
+S->>G: UploadPack RPC
+G-->>S: stream Git response back
+S-->>C: stream Git response back
+```
+
+#### Proxied repositories
+
+If a requested repository isn't synced, or we detect that it is not up to date, the request is proxied to the primary
+to get the latest version of the changes.
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant S as GitLab Shell (secondary)
+participant I as Internal API (secondary Rails)
+participant P as Primary API
+C->>S: git pull
+S->>I: SSH key validation (api/v4/internal/authorized_keys?key=..)
+I-->>S: HTTP/1.1 300 (custom action status) with {endpoint, msg, primary_repo}
+S->>I: POST /api/v4/geo/proxy_git_ssh/info_refs_upload_pack
+I->>P: POST $PRIMARY/foo/bar.git/info/refs/?service=git-upload-pack
+P-->>I: HTTP/1.1 200 OK
+I-->>S: <response>
+S-->>C: return Git response from primary
+C-->>S: stream Git data to fetch
+S->>I: POST /api/v4/geo/proxy_git_ssh/upload_pack
+I->>P: POST $PRIMARY/foo/bar.git/git-upload-pack
+P-->>I: HTTP/1.1 200 OK
+I-->>S: <response>
+S-->>C: return Git response from primary
+```
+
+## Git push
+
+### Unified URLs
+
+With unified URLs, a push will redirect to a local path formatted as `/-/push_from_secondary/$SECONDARY_ID/*`. Further
+requests through this path will be proxied to the primary, which will handle the push.
+
+#### Git push over SSH
+
+As SSH operations go through GitLab Shell instead of Workhorse, they are not proxied through the mechanism used for
+Workhorse requests. Instead, SSH operations are proxied as Git HTTP requests to the primary site by the secondary
+Rails internal API.
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant S as GitLab Shell (secondary)
+participant I as Internal API (secondary Rails)
+participant P as Primary API
+C->>S: git push
+S->>I: SSH key validation (api/v4/internal/authorized_keys?key=..)
+I-->>S: HTTP/1.1 300 (custom action status) with {endpoint, msg, primary_repo}
+S->>I: POST /api/v4/geo/proxy_git_ssh/info_refs_receive_pack
+I->>P: POST $PRIMARY/foo/bar.git/info/refs/?service=git-receive-pack
+P-->>I: HTTP/1.1 200 OK
+I-->>S: <response>
+S-->>C: return Git response from primary
+C-->>S: stream Git data to push
+S->>I: POST /api/v4/geo/proxy_git_ssh/receive_pack
+I->>P: POST $PRIMARY/foo/bar.git/git-receive-pack
+P-->>I: HTTP/1.1 200 OK
+I-->>S: <response>
+S-->>C: return Git response from primary
+```
+
+#### Git push over HTTP(s)
+
+```mermaid
+sequenceDiagram
+participant C as Git client
+participant Wsec as Workhorse (secondary)
+participant W as Workhorse (primary)
+participant R as Rails (primary)
+participant G as Gitaly (primary)
+C->>Wsec: GET /foo/bar.git/info/refs/?service=git-receive-pack
+Wsec->>C: 302 Redirect to /-/push_from_secondary/2/foo/bar.git/info/refs?service=git-receive-pack
+C->>Wsec: GET /-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-receive-pack
+Wsec->>W: <proxied request>
+W->>R: <data>
+R-->>W: 401 Unauthorized
+W-->>Wsec: <proxied response>
+Wsec-->>C: <response>
+C->>Wsec: GET /-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-receive-pack
+Wsec->>W: <proxied request>
+W->>R: <data>
+R-->>W: Render Workhorse OK
+W-->>Wsec: <proxied response>
+Wsec-->>C: <response>
+C->>Wsec: POST /-/push_from_secondary/2/foo/bar.git/git-receive-pack
+Wsec->>W: <proxied request>
+W->>R: GitHttpController#git_receive_pack
+R-->>W: Render Workhorse OK
+W->>G: Get connection details from Rails and connects to SmartHTTP Service, ReceivePack RPC
+G-->>W: Return a stream of Proto messages
+W-->>Wsec: Pipe messages to the Git client
+Wsec-->>C: Return piped messages from Git
+```
+
+### Git push over HTTP with Separate URLs
+
+With separate URLs, the secondary will redirect to a URL formatted like `$PRIMARY/-/push_from_secondary/$SECONDARY_ID/*`.
+
+```mermaid
+sequenceDiagram
+participant Wsec as Workhorse (secondary)
+participant C as Git client
+participant W as Workhorse (primary)
+participant R as Rails (primary)
+participant G as Gitaly (primary)
+C->>Wsec: GET $SECONDARY/foo/bar.git/info/refs/?service=git-receive-pack
+Wsec->>C: 302 Redirect to $PRIMARY/-/push_from_secondary/2/foo/bar.git/info/refs?service=git-receive-pack
+C->>W: GET $PRIMARY/-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-receive-pack
+W->>R: <data>
+R-->>W: 401 Unauthorized
+W-->>C: <response>
+C->>W: GET /-/push_from_secondary/2/foo/bar.git/info/refs/?service=git-receive-pack
+W->>R: <data>
+R-->>W: Render Workhorse OK
+W-->>C: <response>
+C->>W: POST /-/push_from_secondary/2/foo/bar.git/git-receive-pack
+W->>R: GitHttpController#git_receive_pack
+R-->>W: Render Workhorse OK
+W->>G: Get connection details from Rails and connects to SmartHTTP Service, ReceivePack RPC
+G-->>W: Return a stream of Proto messages
+W-->>C: Pipe messages to the Git client
+```
diff --git a/doc/development/git_object_deduplication.md b/doc/development/git_object_deduplication.md
index 3ac24b19fc2..1a864ef81f0 100644
--- a/doc/development/git_object_deduplication.md
+++ b/doc/development/git_object_deduplication.md
@@ -1,8 +1,7 @@
---
-stage: Create
+stage: Systems
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
-type: reference
---
# How Git object deduplication works in GitLab
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 0743a03ddac..8a0cf8e7717 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -1,8 +1,7 @@
---
-stage: Create
+stage: Systems
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
-type: reference
---
# Gitaly developers guide
@@ -388,3 +387,28 @@ When you are using GDK, you can set it up with:
1. Start the database: `gdk start db`
1. Load the environment from GDK: `eval $(cd ../gitaly && gdk env)`
1. Create the database: `createdb --encoding=UTF8 --locale=C --echo praefect_test`
+
+## Git references used by Gitaly
+
+Gitaly uses many Git references ([refs](https://git-scm.com/docs/gitglossary#Documentation/gitglossary.txt-aiddefrefaref)) to provide Git services to GitLab.
+
+### Standard Git references
+
+These standard Git references are used by GitLab (through Gitaly) in any Git repository:
+
+- `refs/heads/`. Used for branches. See the [`git branch`](https://git-scm.com/docs/git-branch) documentation.
+- `refs/tags/`. Used for tags. See the [`git tag`](https://git-scm.com/docs/git-tag) documentation.
+
+### GitLab-specific references
+
+These GitLab-specific references are used exclusively by GitLab (through Gitaly):
+
+- `refs/keep-around/<object-id>`. References to commits that have pipeline jobs or merge requests. The `object-id` points to the commit the pipeline was run on.
+- `refs/merge-requests/<merge-request-iid>/`. [Merges](https://git-scm.com/docs/git-merge) merge two histories together. This ref namespace tracks information about a
+ merge using the following refs under it:
+ - `head`. Current `HEAD` of the merge request.
+ - `merge`. Commit for the merge request. Every merge request creates a commit object under `refs/keep-around`.
+ - If [merge trains are enabled](../ci/pipelines/merge_trains.md): `train`. Commit for the merge train.
+- `refs/pipelines/<pipeline-iid>`. References to pipelines. Temporarily used to store the pipeline commit object ID.
+- `refs/environments/<environment-slug>`. References to commits where deployments to environments were performed.
+- `refs/heads/revert-<source-commit-short-object-id>`. References to the commit's object ID created when [reverting changes](../user/project/merge_requests/revert_changes.md).
diff --git a/doc/development/gitlab_flavored_markdown/specification_guide/index.md b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
index cedf44cf1fc..80837506037 100644
--- a/doc/development/gitlab_flavored_markdown/specification_guide/index.md
+++ b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
@@ -69,11 +69,11 @@ serve as input to automated conformance tests. It is
> This document attempts to specify Markdown syntax unambiguously. It contains many
> examples with side-by-side Markdown and HTML. These examples are intended to double as conformance tests.
-The HTML-rendered versions of the specifications:
+Here are the HTML-rendered versions of the specifications:
- [GitLab Flavored Markdown (GLFM) specification](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/output/spec.html), which extends the:
-- [GitHub Flavored Markdown (GFM) specification](https://github.github.com/gfm/), which extends the:
-- [CommonMark specification](https://spec.commonmark.org/0.30/)
+- [GitHub Flavored Markdown (GFM) specification](https://github.github.com/gfm/) (rendered from the [source `spec.txt` for GFM specification](https://github.com/github/cmark-gfm/blob/master/test/spec.txt)), which extends the:
+- [CommonMark specification](https://spec.commonmark.org/0.30/) (rendered from the [source `spec.txt` for CommonMark specification](https://github.com/commonmark/commonmark-spec/blob/master/spec.txt))
NOTE:
The creation of the
@@ -136,10 +136,10 @@ NOTE:
#### Markdown snapshot testing
_Markdown snapshot testing_ refers to the automated testing performed in
-the GitLab codebase, which is driven by snapshot fixture data derived from the
-GLFM specification. It consists of both backend RSpec tests and frontend Jest tests
-which use the fixture data. This fixture data is contained in YAML files. These files
-can be generated and updated based on the Markdown examples in the specification,
+the GitLab codebase, which is driven by "example_snapshots" fixture data derived from all of
+the examples in the GLFM specification. It consists of both backend RSpec tests and frontend Jest
+tests which use the fixture data. This fixture data is contained in YAML files. These files
+are generated and updated based on the Markdown examples in the specification,
and the existing GLFM parser and render implementations. They may also be
manually updated as necessary to test-drive incomplete implementations.
Regarding the terminology used here:
@@ -159,7 +159,7 @@ Regarding the terminology used here:
[Jest snapshot testing](https://jestjs.io/docs/snapshot-testing), as used elsewhere
in the GitLab frontend testing suite. However, the Markdown snapshot testing does
follow the same philosophy and patterns as Jest snapshot testing:
- 1. Snapshot fixture data is represented as files which are checked into source control.
+ 1. Snapshot example fixture data is represented as files which are checked into source control.
1. The files can be automatically generated and updated based on the implementation
of the code under tests.
1. The files can also be manually updated when necessary, for example, to test-drive
@@ -168,9 +168,15 @@ Regarding the terminology used here:
[Rails database fixture files](https://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html).
It instead refers to _test fixtures_ in the
[more generic definition](https://en.wikipedia.org/wiki/Test_fixture#Software),
- as input data to support automated testing. However, fixture files still exist, so
- they are colocated under the `spec/fixtures` directory with the rest of
- the fixture data for the GitLab Rails application.
+ as input data to support automated testing.
+1. These example snapshot fixture files are generated from and closely related to the rest of the
+ GLFM specification. Therefore, the `example_snapshots` directory is colocated under the
+ `glfm_specification` directory with the rest of the
+ GLFM [specification files](#specification-files). They are intentionally _not_
+ located under the `spec/fixtures` directory with the rest of
+ the fixture data for the GitLab Rails application. In practice, developers have found
+ it simpler and more understandable to have everything under the `glfm_specification` directory
+ rather than splitting these files into the `spec/fixtures` directory.
<!-- vale gitlab.InclusionCultural = YES -->
@@ -395,9 +401,10 @@ The documentation on the implementation is split into three sections:
1. [Scripts](#scripts).
1. [Specification files](#specification-files).
-1. Example snapshot files: These YAML files are used as input data
+1. [Example snapshot files](#example-snapshot-files):
+ These YAML files are used as input data
or fixtures to drive the various tests, and are located under
- `spec/fixtures/glfm/example_snapshots`. All example snapshot files are automatically
+ `glfm_specification/example_snapshots`. All example snapshot files are automatically
generated based on the specification files and the implementation of the parsers and renderers.
However, they can also be directly edited if necessary, such as to
test-drive an incomplete implementation.
@@ -662,16 +669,16 @@ controls the behavior of the [scripts](#scripts) and [tests](#types-of-markdown-
The following optional entries are supported for each example. They all default to `false`:
- `skip_update_example_snapshots`: When true, skips any addition or update of any of this example's entries
- in the [`spec/fixtures/glfm/example_snapshots/html.yml`](#specfixturesglfmexample_snapshotshtmlyml) file
- or the [`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`](#specfixturesglfmexample_snapshotsprosemirror_jsonyml) file.
+ in the [`glfm_specification/example_snapshots/html.yml`](#glfm_specificationexample_snapshotshtmlyml) file
+ or the [`glfm_specification/example_snapshots/prosemirror_json.yml`](#glfm_specificationexample_snapshotsprosemirror_jsonyml) file.
If this value is truthy, then no other `skip_update_example_snapshot_*` entries can be truthy,
and an error is raised if any of them are.
- `skip_update_example_snapshot_html_static`: When true, skips addition or update of this example's [static HTML](#static-html)
- entry in the [`spec/fixtures/glfm/example_snapshots/html.yml`](#specfixturesglfmexample_snapshotshtmlyml) file.
+ entry in the [`glfm_specification/example_snapshots/html.yml`](#glfm_specificationexample_snapshotshtmlyml) file.
- `skip_update_example_snapshot_html_wysiwyg`: When true, skips addition or update of this example's [WYSIWYG HTML](#wysiwyg-html)
- entry in the [`spec/fixtures/glfm/example_snapshots/html.yml`](#specfixturesglfmexample_snapshotshtmlyml) file.
+ entry in the [`glfm_specification/example_snapshots/html.yml`](#glfm_specificationexample_snapshotshtmlyml) file.
- `skip_update_example_snapshot_prosemirror_json`: When true, skips addition or update of this example's
- entry in the [`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`](#specfixturesglfmexample_snapshotsprosemirror_jsonyml) file.
+ entry in the [`glfm_specification/example_snapshots/prosemirror_json.yml`](#glfm_specificationexample_snapshotsprosemirror_jsonyml) file.
- `skip_running_conformance_static_tests`: When true, skips running the [Markdown conformance tests](#markdown-conformance-testing)
of the [static HTML](#static-html) for this example.
- `skip_running_conformance_wysiwyg_tests`: When true, skips running the [Markdown conformance tests](#markdown-conformance-testing)
@@ -681,7 +688,7 @@ The following optional entries are supported for each example. They all default
- `skip_running_snapshot_wysiwyg_html_tests`: When true, skips running the [Markdown snapshot tests](#markdown-snapshot-testing)
of the [WYSIWYG HTML](#wysiwyg-html) for this example.
- `skip_running_snapshot_prosemirror_json_tests`: When true, skips running the [Markdown snapshot tests](#markdown-snapshot-testing)
- of the [ProseMirror JSON](#specfixturesglfmexample_snapshotsprosemirror_jsonyml) for this example.
+ of the [ProseMirror JSON](#glfm_specificationexample_snapshotsprosemirror_jsonyml) for this example.
`glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml` sample entry:
@@ -808,9 +815,9 @@ key in `glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.ym
can be used to disable automatic generation of some examples. They can instead
be manually edited as necessary to help drive the implementations.
-#### `spec/fixtures/glfm/example_snapshots/examples_index.yml`
+#### `glfm_specification/example_snapshots/examples_index.yml`
-[`spec/fixtures/glfm/example_snapshots/examples_index.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/examples_index.yml)
+[`glfm_specification/example_snapshots/examples_index.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/example_snapshots/examples_index.yml)
is the main list of all
CommonMark, GFM, and GLFM example names, each with a unique canonical name.
@@ -836,7 +843,7 @@ CommonMark, GFM, and GLFM example names, each with a unique canonical name.
examples where multiple examples exist for the same Section 7 subsection are
added to the end of the sub-section.
-`spec/fixtures/glfm/example_snapshots/examples_index.yml` sample entries:
+`glfm_specification/example_snapshots/examples_index.yml` sample entries:
```yaml
02_01_preliminaries_characters_and_lines_1:
@@ -856,10 +863,10 @@ CommonMark, GFM, and GLFM example names, each with a unique canonical name.
source_specification: gitlab
```
-#### `spec/fixtures/glfm/example_snapshots/markdown.yml`
+#### `glfm_specification/example_snapshots/markdown.yml`
-[`spec/fixtures/glfm/example_snapshots/markdown.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/markdown.yml) contains the original Markdown
-for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
+[`glfm_specification/example_snapshots/markdown.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/example_snapshots/markdown.yml) contains the original Markdown
+for each entry in `glfm_specification/example_snapshots/examples_index.yml`.
- For CommonMark and GFM Markdown,
it is generated (or updated) from the standard GFM
@@ -868,17 +875,17 @@ for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
`glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt`
input specification file.
-`spec/fixtures/glfm/example_snapshots/markdown.yml` sample entry:
+`glfm_specification/example_snapshots/markdown.yml` sample entry:
```yaml
06_04_inlines_emphasis_and_strong_emphasis_1: |
*foo bar*
```
-#### `spec/fixtures/glfm/example_snapshots/html.yml`
+#### `glfm_specification/example_snapshots/html.yml`
-[`spec/fixtures/glfm/example_snapshots/html.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/html.yml)
-contains the HTML for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
+[`glfm_specification/example_snapshots/html.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/example_snapshots/html.yml)
+contains the HTML for each entry in `glfm_specification/example_snapshots/examples_index.yml`.
Three types of entries exist, with different HTML for each:
@@ -889,13 +896,13 @@ Three types of entries exist, with different HTML for each:
`glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt`.
- **Static**
- This is the static (backend (Ruby)-generated) HTML for each entry in
- `spec/fixtures/glfm/example_snapshots/examples_index.yml`.
+ `glfm_specification/example_snapshots/examples_index.yml`.
- It is generated/updated from backend [Markdown API](../../../api/markdown.md)
(or the underlying internal classes) via the `update-example-snapshots.rb` script,
but can be manually updated for static examples with incomplete implementations.
- **WYSIWYG**
- The WYSIWYG (frontend, JavaScript-generated) HTML for each entry in
- `spec/fixtures/glfm/example_snapshots/examples_index.yml`.
+ `glfm_specification/example_snapshots/examples_index.yml`.
- It is generated (or updated) from the frontend Content Editor implementation via the
`update-example-snapshots.rb` script. It can be manually updated for WYSIWYG
examples with incomplete implementations.
@@ -903,7 +910,7 @@ Three types of entries exist, with different HTML for each:
Any exceptions or failures which occur when generating HTML are replaced with an
`Error - check implementation` value.
-`spec/fixtures/glfm/example_snapshots/html.yml` sample entry:
+`glfm_specification/example_snapshots/html.yml` sample entry:
```yaml
06_04_inlines_emphasis_and_strong_emphasis_1:
@@ -919,16 +926,16 @@ NOTE:
The actual `static` or `WYSIWYG` entries may differ from the example `html.yml`,
depending on how the implementations evolve.
-#### `spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`
+#### `glfm_specification/example_snapshots/prosemirror_json.yml`
-[`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/prosemirror_json.yml)
-contains the ProseMirror JSON for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
+[`glfm_specification/example_snapshots/prosemirror_json.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/example_snapshots/prosemirror_json.yml)
+contains the ProseMirror JSON for each entry in `glfm_specification/example_snapshots/examples_index.yml`.
- It is generated (or updated) from the frontend code via the `update-example-snapshots.rb`
script, but can be manually updated for examples with incomplete implementations.
- Any exceptions or failures when generating are replaced with an `Error - check implementation` value.
-`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml` sample entry:
+`glfm_specification/example_snapshots/prosemirror_json.yml` sample entry:
```yaml
06_04_inlines_emphasis_and_strong_emphasis_1: |-
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index f5b0da2f162..1a11321b70f 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -443,6 +443,43 @@ of the Code Review Comments page on the Go wiki for more details.
Most editors/IDEs allow you to run commands before/after saving a file. You can set it up
to run `goimports -local gitlab.com/gitlab-org` so that it's applied to every file when saving.
+### Initializing slices
+
+If initializing a slice, provide a capacity where possible to avoid extra
+allocations.
+
+<table>
+<tr><th>:white_check_mark: Do</th><th>:x: Don't</th></tr>
+<tr>
+ <td>
+
+ ```golang
+ s2 := make([]string, 0, size)
+ for _, val := range s1 {
+ s2 = append(s2, val)
+ }
+ ```
+
+ </td>
+ <td>
+
+ ```golang
+ var s2 []string
+ for _, val := range s1 {
+ s2 = append(s2, val)
+ }
+ ```
+
+ </td>
+</tr>
+</table>
+
+If no capacity is passed to `make` when creating a new slice, `append`
+continuously resizes the slice's backing array whenever it cannot hold
+the values. Providing the capacity keeps allocations to a minimum.
+We recommend using the [`prealloc`](https://github.com/alexkohler/prealloc)
+golangci-lint linter to check for this automatically.
+
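+If a project wants this check enforced automatically, a minimal sketch of a
+`.golangci.yml` entry (assuming the standard golangci-lint configuration format):
+
+```yaml
+# Enable the prealloc linter, which flags slice declarations
+# that could be preallocated.
+linters:
+  enable:
+    - prealloc
+```
+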
### Analyzer Tests
The conventional Secure [analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/) has a [`convert` function](https://gitlab.com/gitlab-org/security-products/analyzers/command/-/blob/main/convert.go#L15-17) that converts SAST/DAST scanner reports into [GitLab Security Reports](https://gitlab.com/gitlab-org/security-products/security-report-schemas). When writing tests for the `convert` function, we should make use of [test fixtures](https://dave.cheney.net/2016/05/10/test-fixtures-in-go) using a `testdata` directory at the root of the analyzer's repository. The `testdata` directory should contain two subdirectories: `expect` and `reports`. The `reports` directory should contain sample SAST/DAST scanner reports which are passed into the `convert` function during the test setup. The `expect` directory should contain the expected GitLab Security Report that the `convert` returns. See Secret Detection for an [example](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/160424589ef1eed7b91b59484e019095bc7233bd/convert_test.go#L13-66).
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index 2aea15de443..18704fc2b60 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -411,6 +411,56 @@ use `%{created_at}` in Ruby but `%{createdAt}` in JavaScript. Make sure to
// => When x == 2: 'Last 2 days'
```
+- In Vue:
+
+ One of [the recommended ways to organize translated strings for Vue files](#vue-files) is to extract them into a `constants.js` file.
+  That can be difficult to do when there are pluralized strings, because the `count` variable isn't known inside the constants file.
+  To overcome this, we recommend creating a function that takes a `count` argument:
+
+ ```javascript
+ // .../feature/constants.js
+  import { __, n__ } from '~/locale';
+
+ export const I18N = {
+ // Strings that are only singular don't need to be a function
+ someDaysRemain: __('Some days remain'),
+ daysRemaining(count) { return n__('%d day remaining', '%d days remaining', count); },
+ };
+ ```
+
+ Then within a Vue component the function can be used to retrieve the correct pluralization form of the string:
+
+ ```javascript
+  <!-- .../feature/components/days_remaining.vue -->
+  <script>
+  import { I18N } from '../constants';
+
+  export default {
+    props: {
+      days: {
+        type: Number,
+        required: true,
+      },
+    },
+    i18n: I18N,
+  };
+  </script>
+
+ <template>
+ <div>
+ <span>
+ A singular string:
+ {{ $options.i18n.someDaysRemain }}
+ </span>
+ <span>
+ A plural string:
+ {{ $options.i18n.daysRemaining(days) }}
+ </span>
+ </div>
+ </template>
+ ```
+
The `n_` and `n__` methods should only be used to fetch pluralized translations of the same
string, not to control the logic of showing different strings for different
quantities. Some languages have different quantities of target plural forms.
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index 8231cf4316b..cee078ca891 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -46,6 +46,8 @@ are very appreciative of the work done by translators and proofreaders!
- scootergrisen - [GitLab](https://gitlab.com/scootergrisen), [Crowdin](https://crowdin.com/profile/scootergrisen)
- Dutch
- Emily Hendle - [GitLab](https://gitlab.com/pundachan), [Crowdin](https://crowdin.com/profile/pandachan)
+- English (UK)
+ - Andrew Smith - [GitLab](https://gitlab.com/espadav8), [Crowdin](https://crowdin.com/profile/espadav8)
- Esperanto
- Lyubomir Vasilev - [Crowdin](https://crowdin.com/profile/lyubomirv)
- Estonian
@@ -54,6 +56,7 @@ are very appreciative of the work done by translators and proofreaders!
- Andrei Jiroh Halili - [GitLab](https://gitlab.com/ajhalili2006), [Crowdin](https://crowdin.com/profile/AndreiJirohHaliliDev2006)
- French
- Davy Defaud - [GitLab](https://gitlab.com/DevDef), [Crowdin](https://crowdin.com/profile/DevDef)
+ - Germain Gorisse - [GitLab](https://gitlab.com/ggorisse), [Crowdin](https://crowdin.com/profile/germaingorisse)
- Galician
- Antón Méixome - [Crowdin](https://crowdin.com/profile/meixome)
- Pedro Garcia - [GitLab](https://gitlab.com/pedgarrod), [Crowdin](https://crowdin.com/profile/breaking_pitt)
@@ -61,6 +64,7 @@ are very appreciative of the work done by translators and proofreaders!
- Michael Hahnle - [GitLab](https://gitlab.com/mhah), [Crowdin](https://crowdin.com/profile/mhah)
- Katrin Leinweber - [GitLab](https://gitlab.com/katrinleinweber), [Crowdin](https://crowdin.com/profile/katrinleinweber)
- Justman10000 - [GitLab](https://gitlab.com/Justman10000), [Crowdin](https://crowdin.com/profile/Justman10000)
+ - Vladislav Wanner - [GitLab](https://gitlab.com/RumBugen), [Crowdin](https://crowdin.com/profile/RumBugen)
- Greek
- Proofreaders needed.
- Hebrew
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index e910983997c..c63ba229921 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -80,6 +80,14 @@ If you're running Omnibus, run the following Rake task:
gitlab-rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file.tar.gz]"
```
+#### Enable verbose output
+
+To make the import Rake task more verbose, use the `IMPORT_DEBUG` environment variable:
+
+```shell
+IMPORT_DEBUG=true gitlab-rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file.tar.gz]"
+```
+
#### Troubleshooting
Check the common errors listed below, what they mean, and how to fix them.
@@ -133,6 +141,51 @@ During import, the tarball is cached in your configured `shared_path` directory.
disk has enough free space to accommodate both the cached tarball and the unpacked
project files on disk.
+##### Import is successful, but with a `Total number of not imported relations: XX` message, and issues are not created during the import
+
+If you receive a `Total number of not imported relations: XX` message, and issues
+aren't created during the import, check [exceptions_json.log](../administration/logs.md#exceptions_jsonlog).
+You might see an error like `N is out of range for ActiveModel::Type::Integer with limit 4 bytes`,
+where `N` is the integer exceeding the 4-byte integer limit. If that's the case, you
+are likely hitting an issue with the rebalancing of the `relative_position` field of issues.
+
+The feature flag that enables the automatic rebalancing is enabled on GitLab.com.
+We intend to enable it by default on self-managed instances when the issue
+[Rebalance issues FF rollout](https://gitlab.com/gitlab-org/gitlab/-/issues/343368)
+is implemented.
+
+If the feature is not enabled by default on your GitLab version, run the following
+commands in the [Rails console](../administration/operations/rails_console.md) as
+a workaround. Replace `ID` with the ID of the project you were trying to import:
+
+```ruby
+# Check if the feature is enabled on your instance. If it is, rebalancing should happen automatically
+Feature.enabled?(:rebalance_issues, Project.find(ID).root_namespace)
+
+# Check the current maximum value of relative_position
+Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
+
+# Enable the `rebalance_issues` feature and check that it was successfully enabled
+Feature.enable(:rebalance_issues, Project.find(ID).root_namespace)
+Feature.enabled?(:rebalance_issues, Project.find(ID).root_namespace)
+
+# Run the rebalancing process and check if the maximum value of relative_position has changed
+Issues::RelativePositionRebalancingService.new(Project.find(ID).root_namespace.all_projects).execute
+Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
+```
+
+Repeat the import attempt after that and check if the issues are imported successfully.
+
+##### Gitaly calls error when importing
+
+If you're attempting to import a large project into a development environment, you may see Gitaly throw an error about too many calls or invocations, for example:
+
+```plaintext
+Error importing repository into qa-perf-testing/gitlabhq - GitalyClient#call called 31 times from single request. Potential n+1?
+```
+
+This is due to an [n+1 calls limit set for development setups](gitaly.md#toomanyinvocationserror-errors). You can work around this by setting `GITALY_DISABLE_REQUEST_LIMITS=1` as an environment variable, restarting your development environment, and importing again.
+
### Importing via the Rails console
The last option is to import a project using a Rails console:
@@ -206,20 +259,6 @@ You can execute the script from the `gdk/gitlab` directory like this:
bundle exec rails r /path_to_script/script.rb project_name /path_to_extracted_project request_store_enabled
```
-## Troubleshooting
-
-This section details known issues we've seen when trying to import a project and how to manage them.
-
-### Gitaly calls error when importing
-
-If you're attempting to import a large project into a development environment, you may see Gitaly throw an error about too many calls or invocations, for example:
-
-```plaintext
-Error importing repository into qa-perf-testing/gitlabhq - GitalyClient#call called 31 times from single request. Potential n+1?
-```
-
-This is due to a [n+1 calls limit being set for development setups](gitaly.md#toomanyinvocationserror-errors). You can work around this by setting `GITALY_DISABLE_REQUEST_LIMITS=1` as an environment variable, restarting your development environment and importing again.
-
## Access token setup
Many of the tests also require a GitLab Personal Access Token. This is due to numerous endpoints themselves requiring authentication.
diff --git a/doc/development/index.md b/doc/development/index.md
index 3d5ec24d3e2..1b897db5097 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -46,307 +46,3 @@ GitLab instance, see the [Administrator documentation](../administration/index.m
- [Implement design & UI elements](contributing/design.md)
- [GitLab Architecture Overview](architecture.md)
- [Rake tasks](rake_tasks.md) for development
-
-## Processes
-
-**Must-reads:**
-
-- [Guide on adapting existing and introducing new components](architecture.md#adapting-existing-and-introducing-new-components)
-- [Code review guidelines](code_review.md) for reviewing code and having code
- reviewed
-- [Database review guidelines](database_review.md) for reviewing
- database-related changes and complex SQL queries, and having them reviewed
-- [Secure coding guidelines](secure_coding_guidelines.md)
-- [Pipelines for the GitLab project](pipelines.md)
-
-Complementary reads:
-
-- [GitLab core team & GitLab Inc. contribution process](https://gitlab.com/gitlab-org/gitlab/-/blob/master/PROCESS.md)
-- [Security process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/security/developer.md#security-releases-critical-non-critical-as-a-developer)
-- [Patch release process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/patch/process.md#process-for-developers)
-- [Guidelines for implementing Enterprise Edition features](ee_features.md)
-- [Adding a new service component to GitLab](adding_service_component.md)
-- [Guidelines for changelogs](changelog.md)
-- [Dependencies](dependencies.md)
-- [Danger bot](dangerbot.md)
-- [Requesting access to ChatOps on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLab team members)
-
-### Development guidelines review
-
-When you submit a change to the GitLab development guidelines, who
-you ask for reviews depends on the level of change.
-
-#### Wording, style, or link changes
-
-Not all changes require extensive review. For example, MRs that don't change the
-content's meaning or function can be reviewed, approved, and merged by any
-maintainer or Technical Writer. These can include:
-
-- Typo fixes.
-- Clarifying links, such as to external programming language documentation.
-- Changes to comply with the [Documentation Style Guide](documentation/index.md)
- that don't change the intent of the documentation page.
-
-#### Specific changes
-
-If the MR proposes changes that are limited to a particular stage, group, or team,
-request a review and approval from an experienced GitLab Team Member in that
-group. For example, if you're documenting a new internal API used exclusively by
-a given group, request an engineering review from one of the group's members.
-
-After the engineering review is complete, assign the MR to the
-[Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)
-in the modified documentation page's metadata.
-If the page is not assigned to a specific group, follow the
-[Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
-
-#### Broader changes
-
-Some changes affect more than one group. For example:
-
-- Changes to [code review guidelines](code_review.md).
-- Changes to [commit message guidelines](contributing/merge_request_workflow.md#commit-messages-guidelines).
-- Changes to guidelines in [feature flags in development of GitLab](feature_flags/).
-- Changes to [feature flags documentation guidelines](documentation/feature_flags.md).
-
-In these cases, use the following workflow:
-
-1. Request a peer review from a member of your team.
-1. Request a review and approval of an Engineering Manager (EM)
- or Staff Engineer who's responsible for the area in question:
-
- - [Frontend](https://about.gitlab.com/handbook/engineering/frontend/)
- - [Backend](https://about.gitlab.com/handbook/engineering/)
- - [Database](https://about.gitlab.com/handbook/engineering/development/database/)
- - [User Experience (UX)](https://about.gitlab.com/handbook/engineering/ux/)
- - [Security](https://about.gitlab.com/handbook/engineering/security/)
- - [Quality](https://about.gitlab.com/handbook/engineering/quality/)
- - [Engineering Productivity](https://about.gitlab.com/handbook/engineering/quality/engineering-productivity/)
- - [Infrastructure](https://about.gitlab.com/handbook/engineering/infrastructure/)
- - [Technical Writing](https://about.gitlab.com/handbook/engineering/ux/technical-writing/)
-
- You can skip this step for MRs authored by EMs or Staff Engineers responsible
- for their area.
-
- If there are several affected groups, you may need approvals at the
- EM/Staff Engineer level from each affected area.
-
-1. After completing the reviews, consult with the EM/Staff Engineer
- author / approver of the MR.
-
- If this is a significant change across multiple areas, request final review
- and approval from the VP of Development, the DRI for Development Guidelines,
- @clefelhocz1.
-
-1. After all approvals are complete, assign the MR to the
- [Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)
- in the modified documentation page's metadata.
- If the page is not assigned to a specific group, follow the
- [Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
- The Technical Writer may ask for additional approvals as previously suggested before merging the MR.
-
-### Reviewer values
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57293) in GitLab 14.1.
-
-As a reviewer or as a reviewee, make sure to familiarize yourself with
-the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/reviewer-values/) we strive for at GitLab.
-
-## UX and Frontend guides
-
-- [GitLab Design System](https://design.gitlab.com/), for building GitLab with
- existing CSS styles and elements
-- [Frontend guidelines](fe_guide/index.md)
-- [Emoji guide](fe_guide/emojis.md)
-
-## Backend guides
-
-### General
-
-- [Directory structure](directory_structure.md)
-- [GitLab EventStore](event_store.md) to publish/subscribe to domain events
-- [GitLab utilities](utilities.md)
-- [Newlines style guide](newlines_styleguide.md)
-- [Logging](logging.md)
-- [Dealing with email/mailers](emails.md)
-- [Kubernetes integration guidelines](kubernetes.md)
-- [Permissions](permissions.md)
-- [Code comments](code_comments.md)
-- [Windows Development on GCP](windows.md)
-- [FIPS compliance](fips_compliance.md)
-- [`Gemfile` guidelines](gemfile.md)
-- [Ruby upgrade guidelines](ruby_upgrade.md)
-
-### Things to be aware of
-
-- [Gotchas](gotchas.md) to avoid
-- [Avoid modules with instance variables](module_with_instance_variables.md), if
- possible
-- [Guidelines for reusing abstractions](reusing_abstractions.md)
-- [Ruby 3 gotchas](ruby3_gotchas.md)
-
-### Rails Framework related
-
-- [Routing](routing.md)
-- [Rails initializers](rails_initializers.md)
-- [Mass Inserting Models](mass_insert.md)
-- [Issuable-like Rails models](issuable-like-models.md)
-- [Issue types vs first-class types](issue_types.md)
-- [DeclarativePolicy framework](policies.md)
-- [Rails update guidelines](rails_update.md)
-
-### Debugging
-
-- [Pry debugging](pry_debugging.md)
-- [Sidekiq debugging](../administration/troubleshooting/sidekiq.md)
-
-### Git specifics
-
-- [How Git object deduplication works in GitLab](git_object_deduplication.md)
-- [Git LFS](lfs.md)
-
-### API
-
-- [API style guide](api_styleguide.md) for contributing to the API
-- [GraphQL API style guide](api_graphql_styleguide.md) for contributing to the
- [GraphQL API](../api/graphql/index.md)
-
-### GitLab components and features
-
-- [Developing against interacting components or features](interacting_components.md)
-- [Manage feature flags](feature_flags/index.md)
-- [Licensed feature availability](licensed_feature_availability.md)
-- [Accessing session data](session.md)
-- [How to dump production data to staging](db_dump.md)
-- [Geo development](geo.md)
-- [Redis guidelines](redis.md)
- - [Adding a new Redis instance](redis/new_redis_instance.md)
-- [Sidekiq guidelines](sidekiq/index.md) for working with Sidekiq workers
-- [Working with Gitaly](gitaly.md)
-- [Elasticsearch integration docs](elasticsearch.md)
-- [Working with merge request diffs](diffs.md)
-- [Approval Rules](approval_rules.md)
-- [Repository mirroring](repository_mirroring.md)
-- [Uploads development guide](uploads/index.md)
-- [Auto DevOps development guide](auto_devops.md)
-- [Renaming features](renaming_features.md)
-- [Code Intelligence](code_intelligence/index.md)
-- [Feature categorization](feature_categorization/index.md)
-- [Wikis development guide](wikis.md)
-- [Image scaling guide](image_scaling.md)
-- [Cascading Settings](cascading_settings.md)
-- [Shell commands](shell_commands.md) in the GitLab codebase
-- [Value Stream Analytics development guide](value_stream_analytics.md)
-- [Application limits](application_limits.md)
-
-### Import/Export
-
-- [Working with the GitHub importer](github_importer.md)
-- [Import/Export development documentation](import_export.md)
-- [Test Import Project](import_project.md)
-- [Group migration](bulk_import.md)
-- [Export to CSV](export_csv.md)
-
-## Language-specific guides
-
-### Go guides
-
-- [Go Guidelines](go_guide/index.md)
-
-### Shell Scripting guides
-
-- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md)
-
-## Performance guides
-
-- [Performance guidelines](performance.md) for writing code, benchmarks, and
- certain patterns to avoid.
-- [Caching guidelines](caching.md) for using caching in Rails under a GitLab environment.
-- [Merge request performance guidelines](merge_request_performance_guidelines.md)
- for ensuring merge requests do not negatively impact GitLab performance
-- [Profiling](profiling.md) a URL or tracking down N+1 queries using Bullet.
-- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries
- masked by query caching, memory profiling and why should we avoid cached
- queries.
-
-## Database guides
-
-See [database guidelines](database/index.md).
-
-## Integration guides
-
-- [Integrations development guide](integrations/index.md)
-- [Jira Connect app](integrations/jira_connect.md)
-- [Security Scanners](integrations/secure.md)
-- [Secure Partner Integration](integrations/secure_partner_integration.md)
-- [How to run Jenkins in development environment](integrations/jenkins.md)
-- [How to run local `Codesandbox` integration for Web IDE Live Preview](integrations/codesandbox.md)
-
-## Testing guides
-
-- [Testing standards and style guidelines](testing_guide/index.md)
-- [Frontend testing standards and style guidelines](testing_guide/frontend_testing.md)
-
-## Refactoring guides
-
-- [Refactoring guidelines](refactoring_guide/index.md)
-
-## Deprecation guides
-
-- [Deprecation guidelines](deprecation_guidelines/index.md)
-
-## Documentation guides
-
-- [Writing documentation](documentation/index.md)
-- [Documentation style guide](documentation/styleguide/index.md)
-- [Markdown](../user/markdown.md)
-
-## Internationalization (i18n) guides
-
-- [Introduction](i18n/index.md)
-- [Externalization](i18n/externalization.md)
-- [Translation](i18n/translation.md)
-
-## Product Intelligence guides
-
-- [Product Intelligence guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
-- [Service Ping guide](service_ping/index.md)
-- [Snowplow guide](snowplow/index.md)
-
-## Experiment guide
-
-- [Introduction](experiment_guide/index.md)
-
-## Build guides
-
-- [Building a package for testing purposes](build_test_package.md)
-
-## Compliance
-
-- [Licensing](licensing.md) for ensuring license compliance
-
-## Domain-specific guides
-
-- [CI/CD development documentation](cicd/index.md)
-- [AppSec development documentation](appsec/index.md)
-
-## Technical Reference by Group
-
-- [Create: Source Code BE](backend/create_source_code_be/index.md)
-
-## Other Development guides
-
-- [Defining relations between files using projections](projections.md)
-- [Reference processing](reference_processing.md)
-- [Compatibility with multiple versions of the application running at the same time](multi_version_compatibility.md)
-- [Features inside `.gitlab/`](features_inside_dot_gitlab.md)
-- [Dashboards for stage groups](stage_group_dashboards.md)
-- [Preventing transient bugs](transient/prevention-patterns.md)
-- [GitLab Application SLIs](application_slis/index.md)
-- [Spam protection and CAPTCHA development guide](spam_protection_and_captcha/index.md)
-
-## Other GitLab Development Kit (GDK) guides
-
-- [Run full Auto DevOps cycle in a GDK instance](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/auto_devops.md)
-- [Using GitLab Runner with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/runner.md)
-- [Using the Web IDE terminal with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/web_ide_terminal_gdk_setup.md)
diff --git a/doc/development/integrations/index.md b/doc/development/integrations/index.md
index 604e481a809..5d1bd5ad61c 100644
--- a/doc/development/integrations/index.md
+++ b/doc/development/integrations/index.md
@@ -148,6 +148,7 @@ This method should return an array of hashes for each field, where the keys can
| `title:` | string | false | Capitalized value of `name:` | The label for the form field.
| `placeholder:` | string | false | | A placeholder for the form field.
| `help:` | string | false | | A help text that displays below the form field.
+| `api_only:` | boolean | false | `false` | Specify whether the field should only be available through the API, and excluded from the frontend form.
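+
+For example, a hedged sketch of a `fields` definition that uses `api_only:` (the
+integration name and field names are illustrative, not from a real integration):
+
+```ruby
+def fields
+  [
+    {
+      type: 'text',
+      name: 'project_url',
+      title: s_('MyIntegration|Project URL'),
+      help: s_('MyIntegration|The URL of the project to integrate with.')
+    },
+    {
+      type: 'text',
+      name: 'webhook_token',
+      # Settable through the API, excluded from the frontend form.
+      api_only: true
+    }
+  ]
+end
+```
+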
#### Additional keys for `type: 'checkbox'`
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 1a51ee88c58..0a0c5e4d2a6 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -316,11 +316,12 @@ and [Container Scanning](../../user/application_security/container_scanning/inde
You can find the schemas for these scanners here:
-- [SAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json)
-- [DAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dast-report-format.json)
-- [Dependency Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json)
+- [Cluster Image Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/cluster-image-scanning-report-format.json)
- [Container Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json)
- [Coverage Fuzzing](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json)
+- [DAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dast-report-format.json)
+- [Dependency Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json)
+- [SAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json)
- [Secret Detection](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json)
### Retention period for vulnerabilities
@@ -360,6 +361,41 @@ Ongoing improvements to report validation are tracked [in this epic](https://git
In the meantime, you can see which versions are supported in the
[source code](https://gitlab.com/gitlab-org/gitlab/-/blob/08dd756429731a0cca1e27ca9d59eea226398a7d/lib/gitlab/ci/parsers/security/validators/schema_validator.rb#L9-27).
+#### Validate locally
+
+Before running your analyzer in GitLab, you should validate the report it produces to
+ensure it complies with the declared schema version.
+
+Use the script below to validate JSON files against a given schema.
+
+```ruby
+require 'bundler/inline'
+
+gemfile do
+ source 'https://rubygems.org'
+ gem 'json_schemer'
+end
+
+require 'json'
+require 'pathname'
+
+raise 'Usage: ruby script.rb <security schema file name> <report file name>' unless ARGV.size == 2
+
+schema = JSONSchemer.schema(Pathname.new(ARGV[0]))
+report = JSON.parse(File.read(ARGV[1]))
+schema_validation_errors = schema.validate(report).map { |error| JSONSchemer::Errors.pretty(error) }
+puts(schema_validation_errors)
+```
+
+1. Download the appropriate schema that matches your report type and declared version. For
+ example, you can find version `14.0.6` of the `container_scanning` report schema at
+ `https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/raw/v14.0.6/dist/container-scanning-report-format.json?inline=false`.
+1. Save the Ruby script above in a file, for example, `validate.rb`.
+1. Run the script, passing the schema and report file names as arguments in order. For example:
+   - Using your local Ruby interpreter: `ruby validate.rb container-scanning-format_14-0-6.json gl-container-scanning-report.json`.
+   - Using Docker: `docker run -it --rm -v $(pwd):/ci ruby:3-slim ruby /ci/validate.rb /ci/container-scanning-format_14-0-6.json /ci/gl-container-scanning-report.json`.
+1. Validation errors are shown on the screen. You must resolve these errors before GitLab can ingest your report.
+
### Report Fields
#### Version
@@ -451,7 +487,7 @@ The `identifiers` array describes the detected vulnerability. An identifier obje
`value` fields are used to tell if two identifiers are the same. The user interface uses the
object's `name` and `url` fields to display the identifier.
-It is recommended to reuse the identifiers the GitLab scanners already define:
+We recommend that you use the identifiers the GitLab scanners already define:
| Identifier | Type | Example value |
|------------|------|---------------|
@@ -479,7 +515,7 @@ Not all vulnerabilities have CVEs, and a CVE can be identified multiple times. A
isn't a stable identifier and you shouldn't assume it as such when tracking vulnerabilities.
The maximum number of identifiers for a vulnerability is set as 20. If a vulnerability has more than 20 identifiers,
-the system saves only the first 20 of them. Note that vulnerabilities in the [Pipeline Security](../../user/application_security/security_dashboard/#view-vulnerabilities-in-a-pipeline)
+the system saves only the first 20 of them. Note that vulnerabilities in the [Pipeline Security](../../user/application_security/vulnerability_report/pipeline.md#view-vulnerabilities-in-a-pipeline)
tab do not enforce this limit and all identifiers present in the report artifact are displayed.
#### Details
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index 288c0056821..13e095b4a83 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -334,14 +334,15 @@ Example response:
## Authenticate Error Tracking requests
This endpoint is called by the error tracking Go REST API application to authenticate a project.
+> [Introduced](https://gitlab.com/gitlab-org/opstrace/opstrace/-/issues/1693) in GitLab 15.1.
| Attribute | Type | Required | Description |
|:-------------|:--------|:---------|:-------------------------------------------------------------------|
| `project_id` | integer | yes | The ID of the project which has the associated key. |
-| `public_key` | string | yes | The public key generated by the integrated error tracking feature. |
+| `public_key` | string | yes | The [public key](../../api/error_tracking.md#error-tracking-client-keys) generated by the integrated Error Tracking feature. |
```plaintext
-POST /internal/error_tracking_allowed
+POST /internal/error_tracking/allowed
```
Example request:
@@ -349,7 +350,7 @@ Example request:
```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" \
--data "project_id=111&public_key=generated-error-tracking-key" \
- "http://localhost:3001/api/v4/internal/error_tracking_allowed"
+ "http://localhost:3001/api/v4/internal/error_tracking/allowed"
```
Example response:
diff --git a/doc/development/iterating_tables_in_batches.md b/doc/development/iterating_tables_in_batches.md
index b4459b53efa..1159e3755e5 100644
--- a/doc/development/iterating_tables_in_batches.md
+++ b/doc/development/iterating_tables_in_batches.md
@@ -42,20 +42,20 @@ The API of this method is similar to `in_batches`, though it doesn't support
all of the arguments that `in_batches` supports. You should always use
`each_batch` _unless_ you have a specific need for `in_batches`.
-## Avoid iterating over non-unique columns
+## Iterating over non-unique columns
-One should proceed with extra caution, and possibly avoid iterating over a column that can contain
-duplicate values. When you iterate over an attribute that is not unique, even with the applied max
-batch size, there is no guarantee that the resulting batches do not surpass it. The following
-snippet demonstrates this situation when one attempt to select `Ci::Build` entries for users with
-`id` between `1` and `10,000`, the database returns `1 215 178` matching rows.
+One should proceed with extra caution when iterating over a column that can contain duplicate
+values. Even with the applied max batch size, there is no guarantee that the resulting batches
+do not surpass it. The following snippet demonstrates this situation: when attempting to select
+`Ci::Build` entries for users with `id` between `1` and `10,000`, the database returns
+`1,215,178` matching rows.
```ruby
[ gstg ] production> Ci::Build.where(user_id: (1..10_000)).size
=> 1215178
```
-This happens because built relation is translated into the following query
+This happens because the built relation is translated into the following query:
```ruby
[ gstg ] production> puts Ci::Build.where(user_id: (1..10_000)).to_sql
@@ -69,6 +69,27 @@ threshold does not translate to the size of the returned dataset. That happens b
`n` possible values of attributes, one can't tell for sure that the number of records that contains
them is less than `n`.
+### Loose-index scan with `distinct_each_batch`
+
+When iterating over a non-unique column is necessary, use the `distinct_each_batch` helper
+method. The helper uses the [loose-index scan technique](https://wiki.postgresql.org/wiki/Loose_indexscan)
+(skip-index scan) to skip duplicated values within a database index.
+
+Example: iterating over the distinct `author_id` values in the `Issue` model:
+
+```ruby
+Issue.distinct_each_batch(column: :author_id, of: 1000) do |relation|
+ users = User.where(id: relation.select(:author_id)).to_a
+end
+```
+
+The technique provides stable performance between the batches regardless of the data distribution.
+The yielded `relation` is an ActiveRecord scope in which only the given `column` is available;
+other columns are not loaded.
+
+The underlying database queries use recursive CTEs, which adds extra overhead. We therefore advise
+using smaller batch sizes than those used for a standard `each_batch` iteration.
+
## Column definition
`EachBatch` uses the primary key of the model by default for the iteration. This works most of the
diff --git a/doc/development/licensed_feature_availability.md b/doc/development/licensed_feature_availability.md
index 09c32fc4244..21b07ae89b5 100644
--- a/doc/development/licensed_feature_availability.md
+++ b/doc/development/licensed_feature_availability.md
@@ -61,3 +61,12 @@ before_action do
push_licensed_feature(:feature_symbol, project)
end
```
+
+## Allow use of licensed EE features
+
+To enable plans per namespace, turn on the **Allow use of licensed EE features** option on the settings page.
+This makes licensed EE features available to projects only if the project namespace's plan includes the feature,
+or if the project is public. To enable it:
+
+1. If you are developing locally, follow the steps in [simulate SaaS](ee_features.md#act-as-saas) to make the option available.
+1. Select **Admin > Settings > General > Account and limit** and enable **Allow use of licensed EE features**.
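+
+A hedged Rails console sketch of the same toggle; this assumes the UI option maps to the
+`check_namespace_plan` application setting, so verify the setting name on your instance:
+
+```ruby
+# Turn the option on (equivalent to the checkbox in the admin settings).
+ApplicationSetting.current.update!(check_namespace_plan: true)
+
+# Afterwards, licensed feature checks consult the namespace's plan.
+# `:feature_symbol` is the placeholder used in the examples above.
+project = Project.find_by_full_path('group/project')
+project.licensed_feature_available?(:feature_symbol)
+```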
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index c9b59ba66b5..e0e21319f47 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -90,6 +90,14 @@ Keep in mind that all durations should be measured against GitLab.com.
| Post-deployment migrations | `<= 10 minutes` | A valid exception are schema changes, since they must not happen in background migrations. |
| Background migrations | `> 10 minutes` | Since these are suitable for larger tables, it's not possible to set a precise timing guideline, however, any single query must stay below [`1 second` execution time](query_performance.md#timing-guidelines-for-queries) with cold caches. |
+## Decide which database to target
+
+GitLab connects to two different PostgreSQL databases: `main` and `ci`. This split can affect migrations,
+as they may run on either or both of these databases.
+
+Read [Migrations for Multiple databases](database/migrations_for_multiple_databases.md) to understand
+whether and how a migration you add should account for this.
+
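+A sketch of a data migration restricted to the `ci` database, assuming the
+`restrict_gitlab_migration` helper described in that guide (the class and the work done
+are illustrative):
+
+```ruby
+class SweepStaleCiRows < Gitlab::Database::Migration[2.0]
+  # Run this migration only against the database hosting the `gitlab_ci` schema.
+  restrict_gitlab_migration gitlab_schema: :gitlab_ci
+
+  def up
+    # Data manipulation that touches only CI tables goes here.
+  end
+
+  def down
+    # No-op: the sweep cannot be reversed.
+  end
+end
+```
+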
## Create a regular schema migration
To create a migration you can use the following Rails generator:
@@ -569,6 +577,12 @@ class MyMigration < Gitlab::Database::Migration[2.0]
end
```
+Verify the index is not being used anymore with this Thanos query:
+
+```plaintext
+sum(rate(pg_stat_user_indexes_idx_tup_read{env="gprd", indexrelname="index_ci_name", type="patroni-ci"}[5m]))
+```
+
Note that it is not necessary to check if the index exists prior to
removing it, however it is required to specify the name of the
index that is being removed. This can be done either by passing the name
diff --git a/doc/development/omnibus.md b/doc/development/omnibus.md
index b62574e34e5..4e2f2b0c763 100644
--- a/doc/development/omnibus.md
+++ b/doc/development/omnibus.md
@@ -21,7 +21,7 @@ For example, the `git` user is allowed to write in the `log/` directory, in
`public/uploads`, and they are allowed to rewrite the `db/structure.sql` file.
In other cases, the reconfigure script tricks GitLab into not trying to write a
-file. For instance, GitLab will generate a `.secret` file if it cannot find one
+file. For instance, GitLab generates a `.secret` file if it cannot find one
and writes it to the Rails root. In the Omnibus packages, reconfigure writes the
`.secret` file first, so that GitLab never tries to write it.
diff --git a/doc/development/packages/debian_repository.md b/doc/development/packages/debian_repository.md
new file mode 100644
index 00000000000..a417ced2e65
--- /dev/null
+++ b/doc/development/packages/debian_repository.md
@@ -0,0 +1,151 @@
+---
+stage: Package
+group: Package
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Debian Repository
+
+This guide explains:
+
+1. A basic overview of how Debian packages are structured
+1. What package managers, clients, and tools are used to manage Debian packages
+1. How the GitLab Debian repository functions
+
+## Debian package basics
+
+There are two types of [Debian packages](https://www.debian.org/doc/manuals/debian-faq/pkg-basics.en.html): binary and source.
+
+- **Binary** - These are usually `.deb` files and contain executables, config files, and other data. A binary package must match your OS or architecture since it is already compiled. These are usually installed using `dpkg`. Dependencies must already exist on the system when installing a binary package.
+- **Source** - These are usually made up of `.dsc` files and `.gz` files. A source package is compiled on your system. These are fetched and installed with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html), which then uses `dpkg` after the package is compiled. When you use `apt`, it fetches and installs the necessary dependencies.
+
+The `.deb` file follows the naming convention `<PackageName>_<VersionNumber>-<DebianRevisionNumber>_<DebianArchitecture>.deb`.
+
+It includes a `control file` that contains metadata about the package. You can view the control file by using `dpkg --info <deb_file>`.
+
+The [`.changes` file](https://www.debian.org/doc/debian-policy/ch-controlfields.html#debian-changes-files-changes) is used to tell the Debian repository how to process updates to packages. It contains a variety of metadata for the package, including architecture, distribution, and version. In addition to the metadata, it contains three lists of checksums: `md5` in the `Files` section, and `sha1` and `sha256` in the `Checksums-Sha1` and `Checksums-Sha256` sections. Refer to [sample_1.2.3~alpha2_amd64.changes](https://gitlab.com/gitlab-org/gitlab/-/blob/dd1e70d3676891025534dc4a1e89ca9383178fe7/spec/fixtures/packages/debian/sample_1.2.3~alpha2_amd64.changes) for an example of how these files are structured.
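+
+For illustration, the three checksum lists look roughly like this (digests and sizes are
+placeholders):
+
+```plaintext
+Files:
+ <md5sum> <size> <section> <priority> sample_1.2.3~alpha2_amd64.deb
+Checksums-Sha1:
+ <sha1> <size> sample_1.2.3~alpha2_amd64.deb
+Checksums-Sha256:
+ <sha256> <size> sample_1.2.3~alpha2_amd64.deb
+```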
+
+## How do people get Debian packages?
+
+While you can download a single `.deb` file and install it with [`dpkg`](https://manpages.debian.org/bullseye/dpkg/dpkg.1.en.html), most users consume Debian packages with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html) using `apt-get`. `apt` wraps `dpkg`, adding dependency management and compilation.
+
+## How do people publish Debian packages?
+
+It is not uncommon to use `curl` to publish packages, depending on the type of Debian repository you are working with. However, `dput-ng` is the best tool to use because it uploads the relevant files based on the `.changes` file.
+
+## What is all this distribution business?
+
+When it comes to Debian, packages don't exist on their own. They belong to a _distribution_. This can mean many things, but the main thing to note is that users expect to specify a distribution when working with packages.
+
+## What does a Debian Repository look like?
+
+- A [Debian repository](https://wiki.debian.org/DebianRepository) is made up of many releases.
+- Each release is given a **codename**. For the public Debian repository, these are things like "bullseye" and "jessie".
+  - There is also the concept of **suites**, which are aliases of codenames that correspond to release channels like "stable" and "unstable".
+- Each release has many **components**. In the public repository, these are "main", "contrib", and "non-free".
+- Each release has many **architectures** such as "amd64", "arm64", or "i386".
+- Each release has a signed **Release** file (see below about [GPG signing](#what-are-gpg-keys-and-what-are-signed-releases))
+
+A standard directory-based Debian repository would be organized as:
+
+```plaintext
+dists/
+ |--jessie/
+ |--bullseye/
+    |Changelog
+    |Release
+    |InRelease
+    |Release.gpg
+    |--main/
+       |--amd64/
+       |--arm64/
+    |--contrib/
+    |--non-free/
+pool/
+ |--this is where the .deb files for all releases live
+```
+
+You can explore a mirror of the public Debian repository here: <http://ftp.us.debian.org/debian/>
+
+In the public Debian repository, the entire directory structure, release files, GPG keys, and other files are all generated by a series of scripts called the [Debian Archive Kit, or dak](https://salsa.debian.org/ftp-team/dak).
+
+In the GitLab Debian repository, we don't deal with specific file directories. Instead, we use code and an underlying [PostgreSQL database to organize the relationships](structure.md#debian-packages) between these different pieces.
+
+## What does a Debian Repository do?
+
+The Debian community created many package repository systems before things like object storage existed, and they used FTP to upload artifacts to a remote server. Most current package repositories and registries are just directories on a server somewhere. Packages added to the [official Debian distribution](https://www.debian.org/distrib/packages) exist in a central public repository that a group of open source maintainers curates. The package maintainers use the [Debian Archive Kit, or dak](https://salsa.debian.org/ftp-team/dak) scripts to generate release files and do other maintenance tasks. So, in addition to storing and serving files, a complete Debian repository needs to accomplish the same behavior that dak provides. This behavior is what the GitLab Debian registry aims to do.
+
+## What are GPG keys, and what are signed releases
+
+A [GPG key](https://www.gnupg.org/) is a public/private key pair for secure data transmission. Similar to an SSH key, there is a private and public key. Whoever has the _public key can encrypt data_, and whoever has the _private key can decrypt data_ that was encrypted using the public key. You can also use GPG keys to sign data. Whoever has the private key can sign data or a file, and whoever has the public key can then check the signature and trust it came from the person with the matching private key.
+
+We use GPG to sign the release file for the Debian packages. The release file is an index of all packages within a given distribution and their respective digests.
+
+In the GitLab Debian registry, a background process generates a new release file whenever a user publishes a new package to their Debian repository. A GPG key is created for each distribution. If a user requests a release for that distribution, they can request the signed version and the public GPG key to verify the authenticity of that release file.
+
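+For example, a client can manually check a downloaded release file against the distribution's public key. A minimal sketch, assuming the key and the signed files have already been downloaded:
+
+```shell
+# Import the distribution's public GPG key
+gpg --import public.asc
+
+# Verify the detached signature over the Release file
+gpg --verify Release.gpg Release
+
+# Or verify the inline-signed variant
+gpg --verify InRelease
+```
+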
+## GitLab repository internals
+
+When a [file upload](../../api/packages/debian.md#upload-a-package-file) occurs:
+
+1. A new "incoming" package record is found or created. All new files are assigned to the "incoming" package. It is a holding area used until we know what package the file is actually associated with.
+1. A new "unknown" file is stored. It is unknown because we do not yet know if this file belongs to an existing package or not.
+
+Once we know which package the file belongs to, it is associated with that package, the file's "unknown" status is updated to the correct file type, and the "incoming" package is removed if no files remain in it.
+
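+As an illustration, the upload that starts this flow might look like the following sketch. The project ID, file name, and credentials are placeholders; see the [file upload](../../api/packages/debian.md#upload-a-package-file) API documentation for the authoritative form:
+
+```shell
+curl --request PUT \
+     --user "<username>:<personal_access_token>" \
+     --upload-file path/to/libsample0_1.2.3_amd64.deb \
+     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/debian/libsample0_1.2.3_amd64.deb"
+```
+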
+Next, if the file is in the `.changes` format:
+
+1. The `.changes` file is parsed and any files listed within it are updated. All uploaded non-`.changes` files are correctly associated with various distributions and packages.
+1. `::Packages::Debian::GenerateDistributionWorker` is scheduled, which runs `::Packages::Debian::GenerateDistributionService`:
+ 1. Component files are created or updated. Since we just updated package files that were listed in the `.changes` file, we now check the component/architecture files based on the changed checksum values.
+ 1. A new release is generated:
+ 1. A new GPG key is generated if one does not already exist for the distribution.
+ 1. A [Release file](https://wiki.debian.org/DebianRepository/Format#A.22Release.22_files) is written, signed by the GPG key, and then stored.
+ 1. Old component files are destroyed.
+
+This diagram shows the path taken after a file is uploaded to the Debian API:
+
+```mermaid
+sequenceDiagram
+ Client->>+DebianProjectPackages: PUT projects/:id/packages/debian/:file_name
+ DebianProjectPackages->>+FindOrCreateIncomingService: Create "incoming" package
+ DebianProjectPackages->>+CreatePackageFileService: Create "unknown" file
+ Note over DebianProjectPackages: If `.changes` file
+ DebianProjectPackages->>+ProcessChangesWorker: Schedule worker to process the file
+ DebianProjectPackages->>+Client: 202 Accepted
+ ProcessChangesWorker->>+ProcessChangesService: Start service
+ ProcessChangesService->>+ExtractChangesMetadataService: Extract changes metadata
+ ExtractChangesMetadataService->>+ExtractMetadataService: Extract file metadata
+ ExtractMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
+ ExtractMetadataService->>+ExtractDebMetadataService: If .deb or .udeb
+ ExtractDebMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
+ ParseDebian822Service-->>-ExtractDebMetadataService: Parse String as Debian RFC822 control data format
+ ExtractDebMetadataService-->>-ExtractMetadataService: Return the parsed control file
+ ExtractMetadataService->>+ParseDebian822Service: if .dsc, .changes, or buildinfo
+ ParseDebian822Service-->>-ExtractMetadataService: Parse String as Debian RFC822 control data format
+ ExtractMetadataService-->>-ExtractChangesMetadataService: Parse Metadata file
+ ExtractChangesMetadataService-->>-ProcessChangesService: Return list of files and hashes from the .changes file
+ loop process files listed in .changes
+ ProcessChangesService->>+ExtractMetadataService: Process file
+ ExtractMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
+ ExtractMetadataService->>+ExtractDebMetadataService: If .deb or .udeb
+ ExtractDebMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
+ ParseDebian822Service-->>-ExtractDebMetadataService: Parse String as Debian RFC822 control data format
+ ExtractDebMetadataService-->>-ExtractMetadataService: Return the parsed control file
+ ExtractMetadataService->>+ParseDebian822Service: if .dsc, .changes, or buildinfo
+ ParseDebian822Service-->>-ExtractMetadataService: Parse String as Debian RFC822 control data format
+ ExtractMetadataService-->>-ProcessChangesService: Use parsed metadata to update "unknown" (or known) file
+ end
+ ProcessChangesService->>+GenerateDistributionWorker: Find distribution and start service
+ GenerateDistributionWorker->>+GenerateDistributionService: Generate distribution
+ GenerateDistributionService->>+GenerateDistributionService: generate component files based on new archs and updates from .changes
+ GenerateDistributionService->>+GenerateDistributionKeyService: generate GPG key for distribution
+ GenerateDistributionKeyService-->>-GenerateDistributionService: GPG key
+ GenerateDistributionService-->>-GenerateDistributionService: Generate distribution file
+ GenerateDistributionService->>+SignDistributionService: Sign release file with GPG key
+ SignDistributionService-->>-GenerateDistributionService: Save the signed release file
+ GenerateDistributionWorker->>+GenerateDistributionService: destroy no longer used component files
+```
+
+### Distributions
+
+You must create a distribution before publishing a package to it. When you create or update a distribution using the project or group distribution API, in addition to creating the initial backing records in the database, `GenerateDistributionService` runs as shown in the sequence diagram above.
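+
+For example, a project-level distribution can be created before any package is published. The ID, token, and codename are placeholders; see the Debian distributions API documentation for the authoritative form:
+
+```shell
+curl --request POST \
+     --header "PRIVATE-TOKEN: <your_access_token>" \
+     "https://gitlab.example.com/api/v4/projects/<project_id>/debian_distributions?codename=sid"
+```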
diff --git a/doc/development/packages/structure.md b/doc/development/packages/structure.md
index a2716232b11..f8d9da2cc73 100644
--- a/doc/development/packages/structure.md
+++ b/doc/development/packages/structure.md
@@ -39,7 +39,6 @@ erDiagram
projects }|--|| namespaces : ""
packages_packages }|--|| projects : ""
packages_package_files }o--|| packages_packages : ""
- package_debian_file_metadatum |o--|| packages_package_files : ""
packages_debian_group_architectures }|--|| packages_debian_group_distributions : ""
packages_debian_group_component_files }|--|| packages_debian_group_components : ""
packages_debian_group_component_files }|--|| packages_debian_group_architectures : ""
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 6d0b833a2da..d7cbef0a211 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -26,10 +26,10 @@ consistent performance of GitLab. Refer to the [Index](#performance-documentatio
- Frontend:
- [Performance guidelines](../development/fe_guide/performance.md)
- [Performance dashboards and monitoring guidelines](../development/new_fe_guide/development/performance.md)
- - [Browser performance testing guidelines](../user/project/merge_requests/browser_performance_testing.md)
+ - [Browser performance testing guidelines](../ci/testing/browser_performance_testing.md)
- [`gdk measure` and `gdk measure-workflow`](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/gdk_commands.md#measure-performance)
- QA:
- - [Load performance testing](../user/project/merge_requests/load_performance_testing.md)
+ - [Load performance testing](../ci/testing/load_performance_testing.md)
- [GitLab Performance Tool project](https://gitlab.com/gitlab-org/quality/performance)
- [Review apps performance metrics](../development/testing_guide/review_apps.md#performance-metrics)
- Monitoring & Overview:
@@ -581,7 +581,7 @@ called `memory-on-boot`. ([Read an example job.](https://gitlab.com/gitlab-org/g
You may find the results:
- On the merge request **Overview** tab, in the merge request reports area, in the
- **Metrics Reports** [dropdown list](../ci/metrics_reports.md).
+ **Metrics Reports** [dropdown list](../ci/testing/metrics_reports.md).
- In the `memory-on-boot` artifacts for a full report and a dependency breakdown.
`derailed_benchmarks` also provides other methods to investigate memory. To learn more,
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index 436977a7f38..2bf1e5a315a 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -37,7 +37,7 @@ flowchart LR
subgraph backend
be["Backend code"]--tested with-->rspec
end
-
+
be--generates-->fixtures["frontend fixtures"]
fixtures--used in-->jest
```
@@ -171,7 +171,7 @@ Our current RSpec tests parallelization setup is as follows:
1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
`knapsack/report-master.json` file:
- The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
- (for now it's the 2-hourly scheduled master pipeline), if it's not here we initialize the file with `{}`.
+ (for now it's the 2-hourly `maintenance` scheduled master pipeline), if it's not here we initialize the file with `{}`.
1. Each `[rspec|rspec-ee] [migration|unit|integration|system|geo] n m` job are run with
`knapsack rspec` and should have an evenly distributed share of tests:
- It works because the jobs have access to the `knapsack/report-master.json`
@@ -275,6 +275,19 @@ rather than from the default branch `main-jh`.
NOTE:
For now, CI will try to fetch the branch on the [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab), so it might take some time for the new JH branch to propagate to the mirror.
+## Ruby 3.0 jobs
+
+You can add the `pipeline:run-in-ruby3` label to the merge request to switch
+the Ruby version used for running the whole test suite to 3.0. When you do
+this, the test suite will no longer run in Ruby 2.7 (default), and an
+additional job `verify-ruby-2.7` will also run and always fail to remind us to
+remove the label and run in Ruby 2.7 before merging the merge request.
+
+This should let us:
+
+- Test changes for Ruby 3.0
+- Make sure it will not break anything when it's merged into the default branch
+
## `undercover` RSpec test
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/74859) in GitLab 14.6.
@@ -292,20 +305,20 @@ fail.
### Troubleshooting `rspec:undercoverage` failures
The `rspec:undercoverage` job has [known bugs](https://gitlab.com/groups/gitlab-org/-/epics/8254)
-that can cause false positive failures. You can locally test coverage locally to determine if it's
+that can cause false positive failures. You can test coverage locally to determine if it's
safe to apply `~"pipeline:skip-undercoverage"`. For example, using `<spec>` as the name of the
test causing the failure:
1. Run `SIMPLECOV=1 bundle exec rspec <spec>`.
1. Run `scripts/undercoverage`.
-If these commands return `undercover: ✅ No coverage is missing in latest changes` then you can apply `~"pipeline:skip-undercoverage"` to bypass pipeline failures.
+If these commands return `undercover: ✅ No coverage is missing in latest changes` then you can apply `~"pipeline:skip-undercoverage"` to bypass pipeline failures.
## Ruby versions testing
Our test suite runs against Ruby 2 in merge requests and default branch pipelines.
-We do run our test suite against Ruby 3 on 2-hourly scheduled pipelines, as GitLab.com will soon run on Ruby 3.
+We also run our test suite against Ruby 3 on another 2-hourly scheduled pipelines, as GitLab.com will soon run on Ruby 3.
## PostgreSQL versions testing
@@ -318,12 +331,13 @@ We also run our test suite against PG11 upon specific database library changes i
### Current versions testing
-| Where? | PostgreSQL version |
-| ------ | ------------------ |
-| MRs | 12, 11 for DB library changes |
-| `main` (non-scheduled pipelines) | 12, 11 for DB library changes |
-| 2-hourly scheduled pipelines | 12, 11 for DB library changes |
-| `nightly` scheduled pipelines | 12, 11, 13 |
+| Where? | PostgreSQL version | Ruby version |
+| ------ | ------------------ | ------------ |
+| Merge requests | 12 (default version), 11 for DB library changes | 2.7 (default version) |
+| `master` branch commits | 12 (default version), 11 for DB library changes | 2.7 (default version) |
+| `maintenance` scheduled pipelines (every 2 hours at even hour) | 12 (default version), 11 for DB library changes | 2.7 (default version) |
+| `maintenance` scheduled pipelines (every 2 hours at odd hour) | 12 (default version), 11 for DB library changes | 3.0 (set in the schedule variables) |
+| `nightly` scheduled pipelines | 12 (default version), 11, 13 | 2.7 (default version) |
### Long-term plan
@@ -618,7 +632,7 @@ and included in `rules` definitions via [YAML anchors](../ci/yaml/yaml_optimizat
| `if-default-refs` | Matches if the pipeline is for `master`, `main`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs aren't created for branches with this default configuration. |
| `if-master-refs` | Matches if the current branch is `master` or `main`. | |
| `if-master-push` | Matches if the current branch is `master` or `main` and pipeline source is `push`. | |
-| `if-master-schedule-2-hourly` | Matches if the current branch is `master` or `main` and pipeline runs on a 2-hourly schedule. | |
+| `if-master-schedule-maintenance` | Matches if the current branch is `master` or `main` and pipeline runs on a 2-hourly schedule. | |
| `if-master-schedule-nightly` | Matches if the current branch is `master` or `main` and pipeline runs on a nightly schedule. | |
| `if-auto-deploy-branches` | Matches if the current branch is an auto-deploy one. | |
| `if-master-or-tag` | Matches if the pipeline is for the `master` or `main` branch or for a tag. | |
@@ -660,6 +674,7 @@ and included in `rules` definitions via [YAML anchors](../ci/yaml/yaml_optimizat
| `code-backstage-patterns` | Combination of `code-patterns` and `backstage-patterns`. |
| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
+| `static-analysis-patterns` | Only create jobs for static analysis configuration-related changes. |
## Performance
@@ -704,7 +719,7 @@ This works well for the following reasons:
- `.yarn-cache`
- `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
1. These cache definitions are composed of [multiple atomic caches](../ci/caching/index.md#use-multiple-caches).
-1. Only the following jobs, running in 2-hourly scheduled pipelines, are pushing (that is, updating) to the caches:
+1. Only the following jobs, running in 2-hourly `maintenance` scheduled pipelines, are pushing (that is, updating) to the caches:
- `update-setup-test-env-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-gitaly-binaries-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-rubocop-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
diff --git a/doc/development/product_qualified_lead_guide/index.md b/doc/development/product_qualified_lead_guide/index.md
index 25634876aef..90b8d905264 100644
--- a/doc/development/product_qualified_lead_guide/index.md
+++ b/doc/development/product_qualified_lead_guide/index.md
@@ -1,6 +1,6 @@
---
stage: Growth
-group: Conversion
+group: Acquisition
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/rails_initializers.md b/doc/development/rails_initializers.md
index 68f3c07e45a..fca24cf8c01 100644
--- a/doc/development/rails_initializers.md
+++ b/doc/development/rails_initializers.md
@@ -29,12 +29,18 @@ query) from an initializer means that tasks like `db:drop`, and
`db:test:prepare` will fail because an active session prevents the database from
being dropped.
-To help detect when database connections are opened from initializers, we now
-warn in `STDERR`. For example:
+To prevent this, we stop database connections from being opened during
+routes loading. Doing so results in an error:
```shell
-DEPRECATION WARNING: Database connection should not be called during initializers (called from block in <module:HasVariable> at app/models/concerns/ci/has_variable.rb:22)
+RuntimeError:
+ Database connection should not be called during initializers.
+# ./config/initializers/00_connection_logger.rb:15:in `new_client'
+# ./lib/gitlab/database/load_balancing/load_balancer.rb:112:in `block in read_write'
+# ./lib/gitlab/database/load_balancing/load_balancer.rb:184:in `retry_with_backoff'
+# ./lib/gitlab/database/load_balancing/load_balancer.rb:111:in `read_write'
+# ./lib/gitlab/database/load_balancing/connection_proxy.rb:119:in `write_using_load_balancer'
+# ./lib/gitlab/database/load_balancing/connection_proxy.rb:89:in `method_missing'
+# ./config/routes.rb:10:in `block in <main>'
+# ./config/routes.rb:9:in `<main>'
```
-
-If you wish to print out the full backtrace, set the
-`DEBUG_INITIALIZER_CONNECTIONS` environment variable.
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index 13c4bdaedca..c13f1195df3 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -313,11 +313,11 @@ run:
```shell
# Validate all queries
-bundle exec rake gitlab::graphql:validate
+bundle exec rake gitlab:graphql:validate
# Validate one query
-bundle exec rake gitlab::graphql:validate[path/to/query.graphql]
+bundle exec rake gitlab:graphql:validate[path/to/query.graphql]
# Validate a directory
-bundle exec rake gitlab::graphql:validate[path/to/queries]
+bundle exec rake gitlab:graphql:validate[path/to/queries]
```
This prints out a report with an entry for each query, explaining why
@@ -335,11 +335,11 @@ Usage:
```shell
# Analyze all queries
-bundle exec rake gitlab::graphql:analyze
+bundle exec rake gitlab:graphql:analyze
# Analyze one query
-bundle exec rake gitlab::graphql:analyze[path/to/query.graphql]
+bundle exec rake gitlab:graphql:analyze[path/to/query.graphql]
# Analyze a directory
-bundle exec rake gitlab::graphql:analyze[path/to/queries]
+bundle exec rake gitlab:graphql:analyze[path/to/queries]
```
This prints out a report for each query, including the complexity
@@ -393,3 +393,21 @@ The following command combines the intent of [Update GraphQL documentation and s
```shell
bundle exec rake gitlab:graphql:update_all
```
+
+## Update OpenAPI client for Error Tracking feature
+
+NOTE:
+This Rake task needs `docker` to be installed.
+
+To update generated code for OpenAPI client located in
+`vendor/gems/error_tracking_open_api` run the following commands:
+
+```shell
+# Run rake task
+bundle exec rake gems:error_tracking_open_api:generate
+
+# Review and test the changes
+
+# Commit the changes
+git commit -m 'Update ErrorTrackingOpenAPI from OpenAPI definition' vendor/gems/error_tracking_open_api
+```
diff --git a/doc/development/reusing_abstractions.md b/doc/development/reusing_abstractions.md
index ccf82dc6c77..f3eb1ebcc0c 100644
--- a/doc/development/reusing_abstractions.md
+++ b/doc/development/reusing_abstractions.md
@@ -190,6 +190,10 @@ Everything in `app/finders`, typically used for retrieving data from a database.
Finders can not reuse other finders in an attempt to better control the SQL
queries they produce.
+Finders' `execute` method should return `ActiveRecord::Relation`. Exceptions
+can be added to `spec/support/finder_collection_allowlist.yml`.
+See [`#298771`](https://gitlab.com/gitlab-org/gitlab/-/issues/298771) for more details.
+
### Presenters
Everything in `app/presenters`, used for exposing complex data to a Rails view,
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index d8e2352bd93..9048da77071 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -1278,3 +1278,31 @@ This sensitive data must be handled carefully to avoid leaks which could lead to
- Avoid sending credentials in URL parameters, as these can be more easily logged inadvertently during transit.
In the event of credential leak through an MR, issue, or any other medium, [reach out to SIRT team](https://about.gitlab.com/handbook/engineering/security/security-operations/sirt/#-engaging-sirt).
+
+## Serialization
+
+Serialization of active record models can leak sensitive attributes if they are not protected.
+
+Using the [`prevent_from_serialization`](https://gitlab.com/gitlab-org/gitlab/-/blob/d7b85128c56cc3e669f72527d9f9acc36a1da95c/app/models/concerns/sensitive_serializable_hash.rb#L11)
+method protects the attributes when the object is serialized with `serializable_hash`.
+When an attribute is protected with `prevent_from_serialization`, it is not included with
+`serializable_hash`, `to_json`, or `as_json`.
+
+For more guidance on serialization:
+
+- [Why using a serializer is important](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/serializers/README.md#why-using-a-serializer-is-important).
+- Always use [Grape entities](../../ee/development/api_styleguide.md#entities) for the API.
+
+To `serialize` an `ActiveRecord` column:
+
+- You can use `app/serializers`.
+- You cannot use `to_json / as_json`.
+- You cannot use `serialize :some_column`.
+
+### Serialization example
+
+The following is an example used for the [`TokenAuthenticatable`](https://gitlab.com/gitlab-org/gitlab/-/blob/9b15c6621588fce7a80e0438a39eeea2500fa8cd/app/models/concerns/token_authenticatable.rb#L30) class:
+
+```ruby
+prevent_from_serialization(*strategy.token_fields) if respond_to?(:prevent_from_serialization)
+```
diff --git a/doc/development/service_ping/implement.md b/doc/development/service_ping/implement.md
index 6948eb20e00..3263ba6458e 100644
--- a/doc/development/service_ping/implement.md
+++ b/doc/development/service_ping/implement.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -830,7 +830,7 @@ However, it has the following limitations:
## Aggregated metrics
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/45979) in GitLab 13.6.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/45979) in GitLab 13.6.
WARNING:
This feature is intended solely for internal GitLab use.
diff --git a/doc/development/service_ping/index.md b/doc/development/service_ping/index.md
index e776b78b710..cd8af3e9152 100644
--- a/doc/development/service_ping/index.md
+++ b/doc/development/service_ping/index.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -113,15 +113,15 @@ sequenceDiagram
1. Finally, the timing metadata information that is used for diagnostic purposes is submitted to the Versions application. It consists of a list of metric identifiers and the time it took to calculate the metrics:
- > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37911) in GitLab 15.0 [with a flag(../../user/feature_flags.md), enabled by default.
-
-FLAG:
-On self-managed GitLab, by default this feature is available. To hide the feature, ask an administrator to [disable the feature flag](../../administration/feature_flags.md) named `measure_service_ping_metric_collection`.
-On GitLab.com, this feature is available.
+ > - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37911) in GitLab 15.0 [with a flag](../../user/feature_flags.md), enabled by default.
+ > - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/295289) in GitLab 15.2. [Feature flag `measure_service_ping_metric_collection`](https://gitlab.com/gitlab-org/gitlab/-/issues/358128) removed.
```ruby
- {"metadata"=>
- {"metrics"=>
+ {
+ "metadata"=>
+ {
+ "uuid"=>"0000000-0000-0000-0000-000000000000",
+ "metrics"=>
[{"name"=>"version", "time_elapsed"=>1.1811964213848114e-05},
{"name"=>"installation_type", "time_elapsed"=>0.00017242692410945892},
{"name"=>"license_billable_users", "time_elapsed"=>0.009520471096038818},
@@ -133,9 +133,7 @@ On GitLab.com, this feature is available.
{"name"=>"counts.clusters_platforms_user",
"time_elapsed"=>0.06410990096628666},
{"name"=>"counts.clusters_management_project",
- "time_elapsed"=>0.24020783510059118},
- {"name"=>"counts.clusters_integrations_elastic_stack",
- "time_elapsed"=>0.03484998410567641}
+ "time_elapsed"=>0.24020783510059118}
]
}
}
@@ -163,25 +161,6 @@ We also collect metrics specific to [Geo](../../administration/geo/index.md) sec
]
```
-### Enable or disable service ping metadata reporting
-
-Service Ping timing metadata reporting is under development but ready for production use.
-It is deployed behind a feature flag that is **enabled by default**.
-[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
-can opt to disable it.
-
-To enable it:
-
-```ruby
-Feature.enable(:measure_service_ping_metric_collection)
-```
-
-To disable it:
-
-```ruby
-Feature.disable(:measure_service_ping_metric_collection)
-```
-
## Implementing Service Ping
See the [implement Service Ping](implement.md) guide.
@@ -200,6 +179,7 @@ The following is example content of the Service Ping payload.
"recorded_at": "2020-04-17T07:43:54.162+00:00",
"edition": "EEU",
"license_md5": "00000000000000000000000000000000",
+ "license_sha256: "0000000000000000000000000000000000000000000000000000000000000000",
"license_id": null,
"historical_max_users": 999,
"licensee": {
diff --git a/doc/development/service_ping/metrics_dictionary.md b/doc/development/service_ping/metrics_dictionary.md
index fee3bb571c2..2adba5d8095 100644
--- a/doc/development/service_ping/metrics_dictionary.md
+++ b/doc/development/service_ping/metrics_dictionary.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -207,7 +207,7 @@ description: GitLab instance unique identifier
product_category: collection
product_section: growth
product_stage: growth
-product_group: group::product intelligence
+product_group: product_intelligence
value_type: string
status: active
milestone: 9.1
@@ -240,15 +240,17 @@ The generator takes a list of key paths and 3 options as arguments. It creates m
```shell
bundle exec rails generate gitlab:usage_metric_definition counts.issues --dir=7d --class_name=CountIssues
-create config/metrics/counts_7d/issues.yml
+# Creates 1 file
+# create config/metrics/counts_7d/issues.yml
```
**Multiple metrics example**
```shell
bundle exec rails generate gitlab:usage_metric_definition counts.issues counts.users --dir=7d --class_name=CountUsersCreatingIssues
-create config/metrics/counts_7d/issues.yml
-create config/metrics/counts_7d/users.yml
+# Creates 2 files
+# create config/metrics/counts_7d/issues.yml
+# create config/metrics/counts_7d/users.yml
```
NOTE:
@@ -256,7 +258,8 @@ To create a metric definition used in EE, add the `--ee` flag.
```shell
bundle exec rails generate gitlab:usage_metric_definition counts.issues --ee --dir=7d --class_name=CountUsersCreatingIssues
-create ee/config/metrics/counts_7d/issues.yml
+# Creates 1 file
+# create ee/config/metrics/counts_7d/issues.yml
```
### Metrics added dynamic to Service Ping payload
@@ -265,20 +268,35 @@ The [Redis HLL metrics](implement.md#known-events-are-added-automatically-in-ser
A YAML metric definition is required for each metric. A dedicated generator is provided to create metric definitions for Redis HLL events.
-The generator takes `category` and `event` arguments, as the root key is `redis_hll_counters`, and creates two metric definitions for weekly and monthly time frames:
+The generator takes `category` and `events` arguments (the root key is always `redis_hll_counters`) and creates two metric definitions for each event, one for the weekly and one for the monthly time frame:
+
+**Single metric example**
```shell
bundle exec rails generate gitlab:usage_metric_definition:redis_hll issues count_users_closing_issues
-create config/metrics/counts_7d/count_users_closing_issues_weekly.yml
-create config/metrics/counts_28d/count_users_closing_issues_monthly.yml
+# Creates 2 files
+# create config/metrics/counts_7d/count_users_closing_issues_weekly.yml
+# create config/metrics/counts_28d/count_users_closing_issues_monthly.yml
+```
+
+**Multiple metrics example**
+
+```shell
+bundle exec rails generate gitlab:usage_metric_definition:redis_hll issues count_users_closing_issues count_users_reopening_issues
+# Creates 4 files
+# create config/metrics/counts_7d/count_users_closing_issues_weekly.yml
+# create config/metrics/counts_28d/count_users_closing_issues_monthly.yml
+# create config/metrics/counts_7d/count_users_reopening_issues_weekly.yml
+# create config/metrics/counts_28d/count_users_reopening_issues_monthly.yml
```
To create a metric definition used in EE, add the `--ee` flag.
```shell
bundle exec rails generate gitlab:usage_metric_definition:redis_hll issues users_closing_issues --ee
-create config/metrics/counts_7d/i_closed_weekly.yml
-create config/metrics/counts_28d/i_closed_monthly.yml
+# Creates 2 files
+# create config/metrics/counts_7d/i_closed_weekly.yml
+# create config/metrics/counts_28d/i_closed_monthly.yml
```
## Metrics Dictionary
diff --git a/doc/development/service_ping/metrics_instrumentation.md b/doc/development/service_ping/metrics_instrumentation.md
index 4fd03eea84f..e1c51713f3c 100644
--- a/doc/development/service_ping/metrics_instrumentation.md
+++ b/doc/development/service_ping/metrics_instrumentation.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/service_ping/metrics_lifecycle.md b/doc/development/service_ping/metrics_lifecycle.md
index c9cc9a4f2d2..28f77b6f587 100644
--- a/doc/development/service_ping/metrics_lifecycle.md
+++ b/doc/development/service_ping/metrics_lifecycle.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/service_ping/performance_indicator_metrics.md b/doc/development/service_ping/performance_indicator_metrics.md
index 48c123171fa..bdd4c319d41 100644
--- a/doc/development/service_ping/performance_indicator_metrics.md
+++ b/doc/development/service_ping/performance_indicator_metrics.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/service_ping/review_guidelines.md b/doc/development/service_ping/review_guidelines.md
index ee2d8f4f4a1..4ce5b2d577c 100644
--- a/doc/development/service_ping/review_guidelines.md
+++ b/doc/development/service_ping/review_guidelines.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/service_ping/troubleshooting.md b/doc/development/service_ping/troubleshooting.md
index 2764ef41f98..29ab334f867 100644
--- a/doc/development/service_ping/troubleshooting.md
+++ b/doc/development/service_ping/troubleshooting.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -26,6 +26,16 @@ You can use [this query](https://gitlab.com/gitlab-org/gitlab/-/issues/347298#no
For results about an investigation conducted into an unexpected drop in Service ping Payload events volume, see [this issue](https://gitlab.com/gitlab-data/analytics/-/issues/11071).
+### Troubleshoot VersionApp layer
+
+Check if the [export jobs](https://gitlab.com/gitlab-services/version-gitlab-com#data-export-using-pipeline-schedules) are successful.
+
+Check [Service Ping errors](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health?widget=14609989&udv=0) in the [Service Ping Health Dashboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
+
+### Troubleshoot Google Storage layer
+
+Check if the files are present in [Google Storage](https://console.cloud.google.com/storage/browser/cloudsql-gs-production-efd5e8-cloudsql-exports;tab=objects?project=gs-production-efd5e8&prefix=&forceOnObjectsSortingFiltering=false).
+
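+If you have bucket access, a command-line spot check is also possible. A sketch, assuming the `gsutil` CLI is installed and your account has read permission on the bucket:
+
+```shell
+# List exported files in the bucket referenced above
+gsutil ls gs://cloudsql-gs-production-efd5e8-cloudsql-exports/
+```
+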
### Troubleshoot the data warehouse layer
Reach out to the [Data team](https://about.gitlab.com/handbook/business-technology/data-team/) to ask about current state of data warehouse. On their handbook page there is a [section with contact details](https://about.gitlab.com/handbook/business-technology/data-team/#how-to-connect-with-us).
diff --git a/doc/development/service_ping/usage_data.md b/doc/development/service_ping/usage_data.md
index a25ad5f62be..a659bbf2265 100644
--- a/doc/development/service_ping/usage_data.md
+++ b/doc/development/service_ping/usage_data.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/sidekiq/compatibility_across_updates.md b/doc/development/sidekiq/compatibility_across_updates.md
index 35f4b88351e..96a3573d11a 100644
--- a/doc/development/sidekiq/compatibility_across_updates.md
+++ b/doc/development/sidekiq/compatibility_across_updates.md
@@ -142,7 +142,10 @@ When renaming queues, use the `sidekiq_queue_migrate` helper migration method
in a **post-deployment migration**:
```ruby
-class MigrateTheRenamedSidekiqQueue < Gitlab::Database::Migration[1.0]
+class MigrateTheRenamedSidekiqQueue < Gitlab::Database::Migration[2.0]
+ restrict_gitlab_migration gitlab_schema: :gitlab_main
+ disable_ddl_transaction!
+
def up
sidekiq_queue_migrate 'old_queue_name', to: 'new_queue_name'
end
diff --git a/doc/development/snowplow/event_dictionary_guide.md b/doc/development/snowplow/event_dictionary_guide.md
index 5ae81c3426d..7980395b1a9 100644
--- a/doc/development/snowplow/event_dictionary_guide.md
+++ b/doc/development/snowplow/event_dictionary_guide.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/snowplow/implementation.md b/doc/development/snowplow/implementation.md
index 88fb1d5cfe4..f8e37aee1e0 100644
--- a/doc/development/snowplow/implementation.md
+++ b/doc/development/snowplow/implementation.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -464,7 +464,7 @@ Page titles are hardcoded as `GitLab` for the same reason.
#### Snowplow Inspector Chrome Extension
-Snowplow Inspector Chrome Extension is a browser extension for testing frontend events. This works in production, staging, and local development environments.
+Snowplow Inspector Chrome Extension is a browser extension for testing frontend events. This works in production, staging, and local development environments.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a video tutorial, see the [Snowplow plugin walk through](https://www.youtube.com/watch?v=g4rqnIZ1Mb4).
@@ -505,16 +505,22 @@ To install and run Snowplow Micro, complete these steps to modify the
1. Set the environment variable to tell the GDK to use Snowplow Micro in development. This overrides two `application_settings` options:
- `snowplow_enabled` setting will instead return `true` from `Gitlab::Tracking.enabled?`
- - `snowplow_collector_hostname` setting will instead always return `localhost:9090` (or whatever is set for `SNOWPLOW_MICRO_URI`) from `Gitlab::Tracking.collector_hostname`.
+ - `snowplow_collector_hostname` setting will instead always return `localhost:9090` (or whatever port is set for `snowplow_micro.port` GDK setting) from `Gitlab::Tracking.collector_hostname`.
```shell
- export SNOWPLOW_MICRO_ENABLE=1
+ gdk config set snowplow_micro.enabled true
```
- Optionally, you can set the URI for you Snowplow Micro instance as well (the default value is `http://localhost:9090`):
+ Optionally, you can set the port for your Snowplow Micro instance as well (the default value is `9090`):
```shell
- export SNOWPLOW_MICRO_URI=https://127.0.0.1:8080
+ gdk config set snowplow_micro.port 8080
+ ```
+
+1. Regenerate the project YAML config:
+
+ ```shell
+ gdk reconfigure
```
1. Restart GDK:
diff --git a/doc/development/snowplow/index.md b/doc/development/snowplow/index.md
index d6a7b900629..155ce87b8d9 100644
--- a/doc/development/snowplow/index.md
+++ b/doc/development/snowplow/index.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/snowplow/infrastructure.md b/doc/development/snowplow/infrastructure.md
index 28541874e98..758c850e89f 100644
--- a/doc/development/snowplow/infrastructure.md
+++ b/doc/development/snowplow/infrastructure.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/snowplow/review_guidelines.md b/doc/development/snowplow/review_guidelines.md
index 0359636380b..673166452b7 100644
--- a/doc/development/snowplow/review_guidelines.md
+++ b/doc/development/snowplow/review_guidelines.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/snowplow/schemas.md b/doc/development/snowplow/schemas.md
index 4066151600d..799f8335000 100644
--- a/doc/development/snowplow/schemas.md
+++ b/doc/development/snowplow/schemas.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/snowplow/troubleshooting.md b/doc/development/snowplow/troubleshooting.md
index 2a6db80a6f2..42a433e6a94 100644
--- a/doc/development/snowplow/troubleshooting.md
+++ b/doc/development/snowplow/troubleshooting.md
@@ -1,5 +1,5 @@
---
-stage: Growth
+stage: Analytics
group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/stage_group_dashboards.md b/doc/development/stage_group_dashboards.md
deleted file mode 100644
index 8e3e6982430..00000000000
--- a/doc/development/stage_group_dashboards.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: 'stage_group_observability/index.md'
-remove_date: '2022-06-15'
----
-
-This document was moved to [another location](stage_group_observability/index.md).
-
-<!-- This redirect file can be deleted after <2022-06-15>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index eda1c8c3d10..ea36214f6b7 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -155,16 +155,6 @@ FPROF=1 bin/rspec spec/[path]/[to]/[spec].rb
FPROF=flamegraph bin/rspec spec/[path]/[to]/[spec].rb
```
-A common change is to use [`let_it_be`](#common-test-setup):
-
-```ruby
-# Old
-let(:project) { create(:project) }
-
-# New
-let_it_be(:project) { create(:project) }
-```
-
A common cause of a large number of created factories is [factory cascades](https://github.com/test-prof/test-prof/blob/master/docs/profilers/factory_prof.md#factory-flamegraph), which result when factories create and recreate associations.
They can be identified by a noticeable difference between `total time` and `top-level time` numbers:
@@ -215,6 +205,56 @@ In this case, the `total time` and `top-level time` numbers match more closely:
8 8 0.0477s 0.0477s 0.0477s namespace
```
+##### Let's talk about `let`
+
+There are various ways to create objects and store them in variables in your tests. They are, from least efficient to most efficient:
+
+- `let!` creates the object before each example runs. It also creates a new object for every example. You should only use this option if you need to create a clean object before each example without explicitly referring to it.
+- `let` lazily creates the object. The object isn't created until it is first referenced. `let` is generally inefficient because it creates a new object for every example. `let` is fine for simple values. However, the more efficient variants of `let` below are best when dealing with database models such as factories.
+- `let_it_be_with_refind` works similarly to `let_it_be_with_reload`, but the [former calls `ActiveRecord::Base#find`](https://github.com/test-prof/test-prof/blob/936b29f87b36f88a134e064aa6d8ade143ae7a13/lib/test_prof/ext/active_record_refind.rb#L15) instead of `ActiveRecord::Base#reload`. `reload` is usually faster than `refind`.
+- `let_it_be_with_reload` creates an object one time for all examples in the same context, but after each example, the database changes are rolled back, and `object.reload` will be called to restore the object to its original state. This means you can make changes to the object before or during an example. However, there are cases where [state leaks across other models](https://github.com/test-prof/test-prof/blob/master/docs/recipes/let_it_be.md#state-leakage-detection) can occur. In these cases, `let` may be an easier option, especially if only a few examples exist.
+- `let_it_be` creates an immutable object one time for all of the examples in the same context. This is a great alternative to `let` and `let!` for objects that do not need to change from one example to another. Using `let_it_be` can dramatically speed up tests that create database models. See <https://github.com/test-prof/test-prof/blob/master/docs/recipes/let_it_be.md#let-it-be> for more details and examples.
+
+`let_it_be` is the most optimized option since it instantiates an object once and does not change it. If you find yourself needing `let` instead of `let_it_be`, try `let_it_be_with_reload`.
+
+```ruby
+# Old
+let(:project) { create(:project) }
+
+# New
+let_it_be(:project) { create(:project) }
+
+# If you need to expect changes to the object in the test
+let_it_be_with_reload(:project) { create(:project) }
+```
+
+Here is an example of when `let_it_be` cannot be used, but `let_it_be_with_reload` allows for more efficiency than `let`:
+
+```ruby
+let_it_be(:user) { create(:user) }
+let_it_be_with_reload(:project) { create(:project) } # The test will fail if `let_it_be` is used
+
+context 'with a developer' do
+ before do
+ project.add_developer(user)
+ end
+
+ it 'project has an owner and a developer' do
+ expect(project.members.map(&:access_level)).to match_array([Gitlab::Access::OWNER, Gitlab::Access::DEVELOPER])
+ end
+end
+
+context 'with a maintainer' do
+ before do
+ project.add_maintainer(user)
+ end
+
+ it 'project has an owner and a maintainer' do
+ expect(project.members.map(&:access_level)).to match_array([Gitlab::Access::OWNER, Gitlab::Access::MAINTAINER])
+ end
+end
+```
+
#### Stubbing methods within factories
You should avoid using `allow(object).to receive(:method)` in factories, as this makes the factory unable to be used with `let_it_be`.
diff --git a/doc/development/testing_guide/contract/consumer_tests.md b/doc/development/testing_guide/contract/consumer_tests.md
index b4d6882a655..df7c9ee0abd 100644
--- a/doc/development/testing_guide/contract/consumer_tests.md
+++ b/doc/development/testing_guide/contract/consumer_tests.md
@@ -10,13 +10,15 @@ This tutorial guides you through writing a consumer test from scratch. To start,
## Create the skeleton
-Start by creating the skeleton of a consumer test. Create a file under `spec/contracts/consumer/specs` called `discussions.spec.js`.
+Start by creating the skeleton of a consumer test. Create a file under `spec/contracts/consumer/specs/project/merge_request` called `discussions.spec.js`.
Then, populate it with the following function and parameters:
- [`pactWith`](#the-pactwith-function)
- [`PactOptions`](#the-pactoptions-parameter)
- [`PactFn`](#the-pactfn-parameter)
+To learn more about how the contract test directory is structured, see the contract testing [test suite folder structure](index.md#test-suite-folder-structure).
+
### The `pactWith` function
The Pact consumer test is defined through the `pactWith` function that takes `PactOptions` and the `PactFn`.
@@ -36,15 +38,17 @@ const { pactWith } = require('jest-pact');
pactWith(
{
- consumer: 'Merge Request Page',
+ consumer: 'MergeRequest#show',
provider: 'Merge Request Discussions Endpoint',
log: '../logs/consumer.log',
- dir: '../contracts',
+ dir: '../contracts/project/merge_request/show',
},
PactFn
);
```
+To learn more about how to name the consumers and providers, see contract testing [naming conventions](index.md#naming-conventions).
+
### The `PactFn` parameter
The `PactFn` is where your tests are defined. This is where you set up the mock provider and where you can use the standard Jest methods like [`Jest.describe`](https://jestjs.io/docs/api#describename-fn), [`Jest.beforeEach`](https://jestjs.io/docs/api#beforeeachfn-timeout), and [`Jest.it`](https://jestjs.io/docs/api#testname-fn-timeout). For more information, see [https://jestjs.io/docs/api](https://jestjs.io/docs/api).
@@ -54,20 +58,20 @@ const { pactWith } = require('jest-pact');
pactWith(
{
- consumer: 'Merge Request Page',
+ consumer: 'MergeRequest#show',
provider: 'Merge Request Discussions Endpoint',
log: '../logs/consumer.log',
dir: '../contracts',
},
(provider) => {
- describe('Discussions Endpoint', () => {
+ describe('Merge Request Discussions Endpoint', () => {
beforeEach(() => {
-
+
});
it('return a successful body', () => {
-
+
});
});
},
@@ -93,14 +97,14 @@ const { Matchers } = require('@pact-foundation/pact');
pactWith(
{
- consumer: 'Merge Request Page',
+ consumer: 'MergeRequest#show',
provider: 'Merge Request Discussions Endpoint',
log: '../logs/consumer.log',
- dir: '../contracts',
+ dir: '../contracts/project/merge_request/show',
},
(provider) => {
- describe('Discussions Endpoint', () => {
+ describe('Merge Request Discussions Endpoint', () => {
beforeEach(() => {
const interaction = {
state: 'a merge request with discussions exists',
@@ -144,7 +148,7 @@ Notice how we use `Matchers` in the `body` of the expected response. This allows
After the mock provider is set up, you can write the test. For this test, you make a request and expect a particular response.
-First, set up the client that makes the API request. To do that, either create or find an existing file under `spec/contracts/consumer/endpoints` and add the following API request.
+First, set up the client that makes the API request. To do that, create `spec/contracts/consumer/endpoints/project/merge_requests.js` and add the following API request.
```javascript
const axios = require('axios');
@@ -169,18 +173,18 @@ After that's set up, import it to the test file and call it to make the request.
const { pactWith } = require('jest-pact');
const { Matchers } = require('@pact-foundation/pact');
-const { getDiscussions } = require('../endpoints/merge_requests');
+const { getDiscussions } = require('../endpoints/project/merge_requests');
pactWith(
{
- consumer: 'Merge Request Page',
+ consumer: 'MergeRequest#show',
provider: 'Merge Request Discussions Endpoint',
log: '../logs/consumer.log',
- dir: '../contracts',
+ dir: '../contracts/project/merge_request/show',
},
(provider) => {
- describe('Discussions Endpoint', () => {
+ describe('Merge Request Discussions Endpoint', () => {
beforeEach(() => {
const interaction = {
state: 'a merge request with discussions exists',
@@ -230,7 +234,7 @@ There we have it! The consumer test is now set up. You can now try [running this
As you may have noticed, the request and response definitions can get large. This results in the test being difficult to read, with a lot of scrolling to find what you want. You can make the test easier to read by extracting these out to a `fixture`.
-Create a file under `spec/contracts/consumer/fixtures` called `discussions.fixture.js`. You place the `request` and `response` definitions here.
+Create a file under `spec/contracts/consumer/fixtures/project/merge_request` called `discussions.fixture.js` where you will place the `request` and `response` definitions.
```javascript
const { Matchers } = require('@pact-foundation/pact');
@@ -274,18 +278,18 @@ With all of that moved to the `fixture`, you can simplify the test to the follow
const { pactWith } = require('jest-pact');
const { Discussions } = require('../fixtures/discussions.fixture');
-const { getDiscussions } = require('../endpoints/merge_requests');
+const { getDiscussions } = require('../endpoints/project/merge_requests');
pactWith(
{
- consumer: 'Merge Request Page',
+ consumer: 'MergeRequest#show',
provider: 'Merge Request Discussions Endpoint',
log: '../logs/consumer.log',
- dir: '../contracts',
+ dir: '../contracts/project/merge_request/show',
},
(provider) => {
- describe('Discussions Endpoint', () => {
+ describe('Merge Request Discussions Endpoint', () => {
beforeEach(() => {
const interaction = {
state: 'a merge request with discussions exists',
diff --git a/doc/development/testing_guide/contract/index.md b/doc/development/testing_guide/contract/index.md
index 6556bd85624..8e12eea2874 100644
--- a/doc/development/testing_guide/contract/index.md
+++ b/doc/development/testing_guide/contract/index.md
@@ -37,3 +37,42 @@ rake contracts:mr:pact:verify:discussions # Verify provider against the
rake contracts:mr:pact:verify:metadata # Verify provider against the consumer pacts for metadata
rake contracts:mr:test:merge_request[contract_mr] # Run all merge request contract tests
```
+
+## Test suite folder structure and naming conventions
+
+To keep the consumer and provider test suite organized and maintainable, tests must be structured consistently and consumers and providers must be named consistently. Adhere to the following conventions.
+
+### Test suite folder structure
+
+Having an organized and sensible folder structure for the test suite makes it easier to find relevant files when reviewing, debugging, or introducing tests.
+
+#### Consumer tests
+
+The consumer tests are grouped according to the different pages in the application. Each file contains various types of requests found in a page. As such, the consumer test files are named using the Rails standards of how pages are referenced. For example, the project pipelines page would be the `Project::Pipeline#index` page so the equivalent consumer test would be located in `consumer/specs/project/pipelines/index.spec.js`.
+
+When defining the location to output the contract generated by the test, we want to follow the same file structure which would be `contracts/project/pipelines/` for this example. This is the structure in `consumer/endpoints` and `consumer/fixtures` as well.
+
+#### Provider tests
+
+The provider tests are grouped similarly to our controllers. Each of these tests contains various tests for an API endpoint. For example, the API endpoint to get a list of pipelines for a project would be located in `provider/pact_helpers/project/pipelines/get_list_project_pipelines_helper.rb`. The provider states are structured the same way.
+
+### Naming conventions
+
+When writing the consumer and provider tests, there are parts where a name is required for the consumer and provider. Since there are no restrictions imposed by Pact on how these should be named, a naming convention is important to make it easy to figure out which consumer and provider tests are involved during debugging. Pact also uses the consumer and provider names to name the generated contracts in the `#{consumer_name}-#{provider_name}` format.
+
+#### Consumer naming
+
+As mentioned in the [folder structure section](#consumer-tests), consumer tests are grouped according to the different pages in the application. As such, consumer names should follow the same naming format using the Rails standard. For example, the consumer test for `Project::Pipeline#index` would be `ProjectPipeline#index` as the consumer name. Since Pact uses this name to name the contracts it generates, the colons (`::`) are dropped as colons are not valid characters in file names.
+
+#### Provider naming
+
+Provider tests are named after the API endpoints that provide the data to the consumer. Make sure the name is as descriptive as possible. For example, if we're writing a test for the `GET /groups/:id/projects` endpoint, we don't want to simply name it "Projects endpoint", as there is also a `GET /projects` endpoint that fetches a list of projects the user has access to across all of GitLab. An easy way to name them is by checking our [API documentation](../../../api/api_resources.md) and using the same name. So [`GET /groups/:id/projects`](../../../api/groups.md#list-a-groups-projects) would be called `List a group’s projects` and [`GET /projects`](../../../api/projects.md#list-all-projects) would be called `List all projects`. The test files are then named `list_a_groups_projects_helper.rb` and `list_all_projects_helper.rb` respectively.
+
+There are some cases where the provider being tested may not be documented so, in those cases, fall back to choosing a name that is as descriptive as possible to ensure it's easy to tell what the provider is for.
+
+#### Conventions summary
+
+| Tests | Folder structure | Naming convention |
+| ----- | ---------------- | ----------------- |
+| Consumer Test | Follows the Rails reference standards. For example, `Project::Pipeline#index` would be `consumer/specs/project/pipelines/index.spec.js` | Follows the Rails naming standard. For example, `Project::Pipeline#index` would be `ProjectPipeline#index` |
+| Provider Test | Grouped like the Rails controllers. For example, the [`List project pipelines` API endpoint](../../../api/pipelines.md#list-project-pipelines) would be `provider/pact_helpers/project/pipelines/get_list_project_pipelines_helper.rb` | Follows the API documentation naming scheme. For example, [`GET /projects/:id/pipelines`](../../../api/pipelines.md#list-project-pipelines) would be called `List project pipelines`. |
diff --git a/doc/development/testing_guide/contract/provider_tests.md b/doc/development/testing_guide/contract/provider_tests.md
index 0da5bcb4aef..92ac4c4ed71 100644
--- a/doc/development/testing_guide/contract/provider_tests.md
+++ b/doc/development/testing_guide/contract/provider_tests.md
@@ -6,23 +6,25 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Writing provider tests
-This tutorial guides you through writing a provider test from scratch. It is a continuation of the [consumer test tutorial](consumer_tests.md). To start, the provider tests are written using [`pact-ruby`](https://github.com/pact-foundation/pact-ruby). In this tutorial, you write a provider test that addresses the contract generated by `discussions.spec.js`.
+This tutorial guides you through writing a provider test from scratch. It is a continuation of the [consumer test tutorial](consumer_tests.md). The provider tests are written using [`pact-ruby`](https://github.com/pact-foundation/pact-ruby). In this tutorial, you write a provider test that addresses the contract generated by `discussions.spec.js`. As Pact is a consumer-driven testing tool, this tutorial assumes that there is an existing consumer test that has already generated a contract for us to work with.
## Create the skeleton
-Provider tests are quite simple. The goal is to set up the test data and then link that with the corresponding contract. Start by creating a file called `discussions_helper.rb` under `spec/contracts/provider/specs`. Note that the files are called `helpers` to match how they are called by Pact in the Rake tasks, which are set up at the end of this tutorial.
+Provider tests are quite simple. The goal is to set up the test data and then link it to the corresponding contract. Start by creating a file called `discussions_helper.rb` under `spec/contracts/provider/pact_helpers/project/merge_request`. Note that the files are named `helpers` to match how Pact refers to them in the Rake tasks, which are set up at the end of this tutorial.
+
+To learn more about how the contract test directory is structured, see the contract testing [test suite folder structure](index.md#test-suite-folder-structure).
### The `service_provider` block
The `service_provider` block is where the provider test is defined. For this block, put in a description of the service provider. Name it exactly as it is called in the contracts that are derived from the consumer tests.
```ruby
-require_relative '../spec_helper'
+require_relative '../../../spec_helper'
module Provider
module DiscussionsHelper
Pact.service_provider 'Merge Request Discussions Endpoint' do
-
+
end
end
end
@@ -33,33 +35,35 @@ end
The `honours_pact_with` block describes which consumer this provider test is addressing. Similar to the `service_provider` block, name this exactly the same as it's called in the contracts that are derived from the consumer tests.
```ruby
-require_relative '../spec_helper'
+require_relative '../../../spec_helper'
module Provider
module DiscussionsHelper
Pact.service_provider 'Merge Request Discussions Endpoint' do
- honours_pact_with 'Merge Request Page' do
-
+ honours_pact_with 'MergeRequest#show' do
+
end
end
end
end
```
+To learn more about how to name the consumers and providers, see contract testing [naming conventions](index.md#naming-conventions).
+
## Configure the test app
For the provider tests to verify the contracts, you must hook it up to a test app that makes the actual request and return a response to verify against the contract. To do this, configure the `app` the test uses as `Environment::Test.app`, which is defined in [`spec/contracts/provider/environments/test.rb`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/contracts/provider/environments/test.rb).
```ruby
-require_relative '../spec_helper'
+require_relative '../../../spec_helper'
module Provider
module DiscussionsHelper
Pact.service_provider 'Merge Request Discussions Endpoint' do
app { Environment::Test.app }
-
- honours_pact_with 'Merge Request Page' do
-
+
+ honours_pact_with 'MergeRequest#show' do
+
end
end
end
@@ -71,15 +75,15 @@ end
Now that the test app is configured, all that is left is to define which contract this provider test is verifying. To do this, set the `pact_uri`.
```ruby
-require_relative '../spec_helper'
+require_relative '../../../spec_helper'
module Provider
module DiscussionsHelper
Pact.service_provider 'Merge Request Discussions Endpoint' do
app { Environment::Test.app }
-
- honours_pact_with 'Merge Request Page' do
- pact_uri '../contracts/merge_request_page-merge_request_discussions_endpoint.json'
+
+ honours_pact_with 'MergeRequest#show' do
+      pact_uri '../contracts/project/merge_request/show/merge_request#show-merge_request_discussions_endpoint.json'
end
end
end
@@ -95,8 +99,8 @@ Under the `contracts:mr` namespace, introduce the Rake task to run this new test
```ruby
Pact::VerificationTask.new(:discussions) do |pact|
pact.uri(
- "#{contracts}/contracts/merge_request_page-merge_request_discussions_endpoint.json",
- pact_helper: "#{provider}/specs/discussions_helper.rb"
+ "#{contracts}/contracts/project/merge_request/show/merge_request#show-merge_request_discussions_endpoint.json",
+ pact_helper: "#{provider}/pact_helpers/project/merge_request/discussions_helper.rb"
)
end
```
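+
+Assuming the task is defined under the `contracts:mr` namespace as described above, `Pact::VerificationTask.new(:discussions)` generates a `pact:verify:discussions` task, so the new test can be run with something like:
+
+```shell
+bundle exec rake contracts:mr:pact:verify:discussions
+```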
@@ -109,7 +113,7 @@ As the last step, create the test data that allows the provider test to return t
You can read more about [provider states](https://docs.pact.io/implementation_guides/ruby/provider_states). We can do global provider states but for this tutorial, the provider state is for one specific `state`.
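+
+For reference, a global provider state is defined outside of a `provider_states_for` block so that it applies to every consumer. A minimal sketch, assuming the `Pact.provider_state` API from the linked provider states documentation and a hypothetical state name:
+
+```ruby
+# Global provider state: available to all consumers (the state name is illustrative).
+Pact.provider_state 'a merge request exists' do
+  set_up do
+    # Shared test data setup goes here.
+  end
+end
+```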
-To create the test data, create `discussions_state.rb` under `spec/contracts/provider/states`. As a quick aside, make sure to also import this state file in the `discussions_helper.rb` file.
+To create the test data, create `discussions_state.rb` under `spec/contracts/provider/states/project/merge_request`. Be sure to also import this state file in the `discussions_helper.rb` file.
### Default user in `spec/contracts/provider/spec_helper.rb`
@@ -118,10 +122,13 @@ Before you create the test data, note that a default user is created in the [`sp
```ruby
RSpec.configure do |config|
config.include Devise::Test::IntegrationHelpers
+ config.include FactoryBot::Syntax::Methods
+
config.before do
- user = FactoryBot.create(:user, name: "Contract Test").tap do |user|
+ user = create(:user, name: Provider::UsersHelper::CONTRACT_USER_NAME).tap do |user|
user.current_sign_in_at = Time.current
end
+
sign_in user
end
end
@@ -134,7 +141,7 @@ Any further modifications to the user that's needed can be done through the indi
In the state file, you must define which consumer this provider state is for. You can do that with `provider_states_for`. Make sure that the `name` provided matches the name defined for the consumer.
```ruby
-Pact.provider_states_for 'Merge Request Page' do
+Pact.provider_states_for 'MergeRequest#show' do
end
```
@@ -143,7 +150,7 @@ end
In the `provider_states_for` block, you then define the state the test data is for. These states are also defined in the consumer test. In this case, there is a `'a merge request with discussions exists'` state.
```ruby
-Pact.provider_states_for "Merge Request Page" do
+Pact.provider_states_for "MergeRequest#show" do
provider_state "a merge request with discussions exists" do
end
@@ -155,7 +162,7 @@ end
This is where you define the test data creation steps. Use `FactoryBot` to create the data. As you create the test data, you can keep [running the provider test](index.md#run-the-provider-tests) to check on the status of the test and figure out what else is missing in your data setup.
```ruby
-Pact.provider_states_for "Merge Request Page" do
+Pact.provider_states_for "MergeRequest#show" do
provider_state "a merge request with discussions exists" do
set_up do
user = User.find_by(name: Provider::UsersHelper::CONTRACT_USER_NAME)
@@ -172,6 +179,28 @@ Pact.provider_states_for "Merge Request Page" do
end
```
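+
+For reference, a finished state file might look like the following sketch. The factory names and attributes here are assumptions for illustration, not the exact setup:
+
+```ruby
+# Illustrative sketch only: factories and attributes are assumed.
+Pact.provider_states_for "MergeRequest#show" do
+  provider_state "a merge request with discussions exists" do
+    set_up do
+      user = User.find_by(name: Provider::UsersHelper::CONTRACT_USER_NAME)
+      project = FactoryBot.create(:project, :repository, name: 'contract-testing', namespace: user.namespace)
+      merge_request = FactoryBot.create(:merge_request_with_diffs, source_project: project)
+
+      FactoryBot.create(:discussion_note_on_merge_request, noteable: merge_request, project: project, author: user)
+    end
+  end
+end
+```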
-Note the `Provider::UsersHelper::CONTRACT_USER_NAME` here to fetch a user is a user that is from the [`spec_helper`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/contracts/provider/spec_helper.rb) that sets up a user before any of these tests run.
+## Using the test data
+
+Now that the provider state file is created, you need to import the state file into the provider test.
+
+```ruby
+# frozen_string_literal: true
+
+require_relative '../../../spec_helper'
+require_relative '../../../states/project/merge_request/discussions_state'
+
+module Provider
+ module DiscussionsHelper
+    Pact.service_provider 'Merge Request Discussions Endpoint' do
+      app { Environment::Test.app }
+
+      honours_pact_with 'MergeRequest#show' do
+ pact_uri '../contracts/project/merge_request/show/merge_request#show-merge_request_discussions_endpoint.json'
+ end
+ end
+ end
+end
+```
-And with that, the provider tests for `discussion_helper.rb` should now pass with this.
+And there we have it. The provider test for `discussions_helper.rb` should now pass.
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 85f8beeacad..00b843ffdbe 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -220,7 +220,7 @@ For example, if you encapsulate some actions and expectations in a private metho
it "has Owner role with Owner permissions" do
Page::Dashboard::Projects.perform do |projects|
projects.filter_by_name(project.name)
-
+
expect(projects).to have_project_with_access_role(project.name, 'Owner')
end
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 9730115fd9f..06359d612ad 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -174,7 +174,7 @@ See [Review Apps](../review_apps.md) for more details about Review Apps.
To run tests in parallel on CI, the [Knapsack](https://github.com/KnapsackPro/knapsack)
gem is used. Knapsack reports are generated automatically and stored in the `GCS` bucket
-`knapsack-reports` in the `gitlab-qa-resources` project. The [`KnapsackReport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/tools/knapsack_report.rb)
+`knapsack-reports` in the `gitlab-qa-resources` project. The [`KnapsackReport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/support/knapsack_report.rb)
helper handles automated report generation and upload.
## Test metrics
diff --git a/doc/development/testing_guide/end_to_end/resources.md b/doc/development/testing_guide/end_to_end/resources.md
index dacc428aec6..a519d1ecb47 100644
--- a/doc/development/testing_guide/end_to_end/resources.md
+++ b/doc/development/testing_guide/end_to_end/resources.md
@@ -568,6 +568,16 @@ def unique_identifiers
end
```
+### Resources cleanup
+
+We have a mechanism to [collect](https://gitlab.com/gitlab-org/gitlab/-/blob/44345381e89d6bbd440f7b4c680d03e8b75b86de/qa/qa/tools/test_resource_data_processor.rb#L32)
+all resources created during test executions, and another to [handle](https://gitlab.com/gitlab-org/gitlab/-/blob/44345381e89d6bbd440f7b4c680d03e8b75b86de/qa/qa/tools/test_resources_handler.rb#L44)
+these resources. On [dotcom environments](https://about.gitlab.com/handbook/engineering/infrastructure/environments/#environments), after a test suite finishes in the [QA pipelines](https://about.gitlab.com/handbook/engineering/quality/quality-engineering/debugging-qa-test-failures/#scheduled-qa-test-pipelines), resources from all passing tests are
+automatically deleted in the same pipeline run. Resources from all failed tests are reserved for investigation,
+and won't be deleted until the following Saturday by a scheduled pipeline. When introducing new resources,
+make sure to add any resource that cannot be deleted to the [IGNORED_RESOURCES](https://gitlab.com/gitlab-org/gitlab/-/blob/44345381e89d6bbd440f7b4c680d03e8b75b86de/qa/qa/tools/test_resources_handler.rb#L29)
+list.
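+
+A minimal sketch of such an entry, assuming the list holds resource class names as strings; the resource here is hypothetical:
+
+```ruby
+# In qa/qa/tools/test_resources_handler.rb (illustrative sketch only).
+IGNORED_RESOURCES = [
+  'QA::Resource::MyUndeletableResource' # hypothetical resource that cannot be deleted
+].freeze
+```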
+
## Where to ask for help?
If you need more information, ask for help on `#quality` channel on Slack
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index f1083c23406..532bb9fcdef 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -45,7 +45,7 @@ Maintainers can elect to use the [process for merging during broken `master`](ht
On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage, the
`review-performance` job is automatically started: this job does basic
browser performance testing using a
-[Sitespeed.io Container](../../user/project/merge_requests/browser_performance_testing.md).
+[Sitespeed.io Container](../../ci/testing/browser_performance_testing.md).
## Sample Data for Review Apps
diff --git a/doc/development/uploads.md b/doc/development/uploads.md
deleted file mode 100644
index 1860f898a26..00000000000
--- a/doc/development/uploads.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: 'uploads/index.md'
-remove_date: '2022-04-30'
----
-
-This document was moved to [another location](uploads/index.md).
-
-<!-- This redirect file can be deleted after 2022-04-30. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md b/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
index aef85107cd9..79262e2d0dc 100644
--- a/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
+++ b/doc/development/value_stream_analytics/value_stream_analytics_aggregated_backend.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Aggregated Value Stream Analytics
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/335391) in GitLab 14.7.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/335391) in GitLab 14.7.
DISCLAIMER:
This page contains information related to upcoming products, features, and functionality.
diff --git a/doc/development/workhorse/new_features.md b/doc/development/workhorse/new_features.md
index 3ad15c1de16..5c00903497a 100644
--- a/doc/development/workhorse/new_features.md
+++ b/doc/development/workhorse/new_features.md
@@ -74,5 +74,5 @@ The Workhorse maintainers can help you assess the situation.
- In 2020, `@nolith` presented the talk
["Speed up the monolith. Building a smart reverse proxy in Go"](https://archive.fosdem.org/2020/schedule/event/speedupmonolith/)
at FOSDEM. The talk includes more details on the history of Workhorse and the NFS removal.
-- The [uploads development documentation](../uploads.md) contains the most common
+- The [uploads development documentation](../uploads/index.md) contains the most common
use cases for adding a new type of upload.