
gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2023-11-14 11:41:52 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2023-11-14 11:41:52 +0300
commit    585826cb22ecea5998a2c2a4675735c94bdeedac (patch)
tree      5b05f0b30d33cef48963609e8a18a4dff260eab3 /doc/development
parent    df221d036e5d0c6c0ee4d55b9c97f481ee05dee8 (diff)
Add latest changes from gitlab-org/gitlab@16-6-stable-eev16.6.0-rc42
Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/ai_architecture.md | 5
-rw-r--r--  doc/development/ai_features/duo_chat.md | 37
-rw-r--r--  doc/development/ai_features/index.md | 134
-rw-r--r--  doc/development/api_graphql_styleguide.md | 9
-rw-r--r--  doc/development/backend/create_source_code_be/gitaly_touch_points.md | 6
-rw-r--r--  doc/development/bulk_import.md | 9
-rw-r--r--  doc/development/cells/index.md | 1
-rw-r--r--  doc/development/code_review.md | 10
-rw-r--r--  doc/development/contributing/first_contribution.md | 2
-rw-r--r--  doc/development/contributing/img/bot_ready.png | bin 9367 -> 0 bytes
-rw-r--r--  doc/development/contributing/img/bot_ready_v16_6.png | bin 0 -> 7163 bytes
-rw-r--r--  doc/development/dangerbot.md | 7
-rw-r--r--  doc/development/database/avoiding_downtime_in_migrations.md | 11
-rw-r--r--  doc/development/database/clickhouse/clickhouse_within_gitlab.md | 45
-rw-r--r--  doc/development/database/database_lab.md | 2
-rw-r--r--  doc/development/database/iterating_tables_in_batches.md | 4
-rw-r--r--  doc/development/database/loose_foreign_keys.md | 6
-rw-r--r--  doc/development/database/multiple_databases.md | 10
-rw-r--r--  doc/development/database/understanding_explain_plans.md | 1
-rw-r--r--  doc/development/development_processes.md | 57
-rw-r--r--  doc/development/distributed_tracing.md | 4
-rw-r--r--  doc/development/documentation/styleguide/index.md | 36
-rw-r--r--  doc/development/documentation/styleguide/word_list.md | 30
-rw-r--r--  doc/development/documentation/versions.md | 5
-rw-r--r--  doc/development/documentation/workflow.md | 12
-rw-r--r--  doc/development/ee_features.md | 24
-rw-r--r--  doc/development/experiment_guide/implementing_experiments.md | 2
-rw-r--r--  doc/development/export_csv.md | 2
-rw-r--r--  doc/development/fe_guide/graphql.md | 37
-rw-r--r--  doc/development/fe_guide/security.md | 51
-rw-r--r--  doc/development/fe_guide/sentry.md | 5
-rw-r--r--  doc/development/fe_guide/storybook.md | 34
-rw-r--r--  doc/development/fe_guide/style/scss.md | 96
-rw-r--r--  doc/development/fe_guide/style/typescript.md | 215
-rw-r--r--  doc/development/fe_guide/type_hinting.md | 215
-rw-r--r--  doc/development/feature_flags/controls.md | 11
-rw-r--r--  doc/development/feature_flags/index.md | 6
-rw-r--r--  doc/development/gems.md | 5
-rw-r--r--  doc/development/gitaly.md | 43
-rw-r--r--  doc/development/github_importer.md | 46
-rw-r--r--  doc/development/i18n/externalization.md | 2
-rw-r--r--  doc/development/i18n/proofreader.md | 1
-rw-r--r--  doc/development/img/runner_fleet_dashboard.png | bin 0 -> 38440 bytes
-rw-r--r--  doc/development/index.md | 2
-rw-r--r--  doc/development/internal_analytics/index.md | 53
-rw-r--r--  doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md | 53
-rw-r--r--  doc/development/internal_analytics/internal_event_instrumentation/quick_start.md | 24
-rw-r--r--  doc/development/internal_analytics/metrics/metrics_dictionary.md | 2
-rw-r--r--  doc/development/internal_analytics/service_ping/index.md | 119
-rw-r--r--  doc/development/internal_api/index.md | 4
-rw-r--r--  doc/development/migration_style_guide.md | 20
-rw-r--r--  doc/development/permissions/custom_roles.md | 4
-rw-r--r--  doc/development/pipelines/index.md | 33
-rw-r--r--  doc/development/repository_storage_moves/index.md | 102
-rw-r--r--  doc/development/rubocop_development_guide.md | 48
-rw-r--r--  doc/development/ruby_upgrade.md | 14
-rw-r--r--  doc/development/runner_fleet_dashboard.md | 245
-rw-r--r--  doc/development/testing_guide/end_to_end/beginners_guide.md | 10
-rw-r--r--  doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md | 38
-rw-r--r--  doc/development/utilities.md | 2
-rw-r--r--  doc/development/wikis.md | 3
61 files changed, 1406 insertions, 608 deletions
diff --git a/doc/development/ai_architecture.md b/doc/development/ai_architecture.md
index f03ffa748fa..54ad52f0c39 100644
--- a/doc/development/ai_architecture.md
+++ b/doc/development/ai_architecture.md
@@ -55,9 +55,8 @@ It is possible to utilize other models or technologies, however they will need t
The following models have been approved for use:
-- [OpenAI models](https://platform.openai.com/docs/models)
- Google's [Vertex AI](https://cloud.google.com/vertex-ai) and [model garden](https://cloud.google.com/model-garden)
-- [AI Code Suggestions](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/tree/main)
+- [Anthropic models](https://docs.anthropic.com/claude/reference/selecting-a-model)
- [Suggested reviewer](https://gitlab.com/gitlab-org/modelops/applied-ml/applied-ml-updates/-/issues/10)
### Vector stores
@@ -77,7 +76,7 @@ A [draft MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/122035) has b
The index function has been updated to improve search quality. This was tested locally by setting the `ivfflat.probes` value to `10` with the following SQL command:
```ruby
-Embedding::TanukiBotMvc.connection.execute("SET ivfflat.probes = 10")
+::Embedding::Vertex::GitlabDocumentation.connection.execute("SET ivfflat.probes = 10")
```
Setting the `probes` value for indexing improves results, as per the neighbor [documentation](https://github.com/ankane/neighbor#indexing).
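What the `ivfflat` index approximates can be sketched in plain Ruby (a toy illustration, not GitLab code; the documents and vectors are made up): exact nearest-neighbor search scans every embedding, while an IVF index only scans the `probes` closest clustered lists, trading recall for speed.

```ruby
# Toy exact nearest-neighbor search by cosine distance. An ivfflat index
# approximates this scan by probing only a subset of clustered lists.
def cosine_distance(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  1.0 - dot / (norm.call(a) * norm.call(b))
end

embeddings = {
  "reset password"   => [0.9, 0.1, 0.0],
  "configure runner" => [0.1, 0.8, 0.3],
  "install gitlab"   => [0.2, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Scan all embeddings and keep the closest document.
nearest = embeddings.min_by { |_doc, vec| cosine_distance(query, vec) }
puts nearest.first # => "reset password"
```

Raising `probes` makes the index scan more lists, moving the results closer to this exact scan at the cost of query time.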
diff --git a/doc/development/ai_features/duo_chat.md b/doc/development/ai_features/duo_chat.md
index 841123c803a..ad044f4a923 100644
--- a/doc/development/ai_features/duo_chat.md
+++ b/doc/development/ai_features/duo_chat.md
@@ -12,7 +12,6 @@ NOTE:
Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
1. [Enable Anthropic API features](index.md#configure-anthropic-access).
-1. [Enable OpenAI support](index.md#configure-openai-access).
1. [Ensure the embedding database is configured](index.md#set-up-the-embedding-database).
1. Ensure that your current branch is up-to-date with `master`.
1. To access the GitLab Duo Chat interface, in the lower-left corner of any page, select **Help** and **Ask GitLab Duo Chat**.
@@ -86,19 +85,45 @@ gdk start
tail -f log/llm.log
```
-## Testing GitLab Duo Chat with predefined questions
+## Testing GitLab Duo Chat against real LLMs
-Because success of answers to user questions in GitLab Duo Chat heavily depends on toolchain and prompts of each tool, it's common that even a minor change in a prompt or a tool impacts processing of some questions. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following rspecs to validate answers to some predefined questions:
+Because the success of answers to user questions in GitLab Duo Chat depends
+heavily on the toolchain and prompts of each tool, even a minor change in a
+prompt or a tool can impact the processing of some questions.
+
+To make sure that a change in the toolchain doesn't break existing
+functionality, you can use the following RSpec tests to validate answers to some
+predefined questions when using real LLMs:
```shell
-export OPENAI_API_KEY='<key>'
-export ANTHROPIC_API_KEY='<key>'
-REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
+export VERTEX_AI_EMBEDDINGS='true' # if using Vertex embeddings
+export ANTHROPIC_API_KEY='<key>' # can use dev value of Gitlab::CurrentSettings
+export VERTEX_AI_CREDENTIALS='<vertex-ai-credentials>' # can set as dev value of Gitlab::CurrentSettings.vertex_ai_credentials
+export VERTEX_AI_PROJECT='<vertex-project-name>' # can use dev value of Gitlab::CurrentSettings.vertex_ai_project
+
+REAL_AI_REQUEST=1 bundle exec rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_real_requests_spec.rb
```
When you need to update the test questions that require documentation embeddings,
make sure a new fixture is generated and committed together with the change.
+## Running the RSpec tests tagged with `real_ai_request`
+
+The RSpec tests tagged with the metadata `real_ai_request` can be run in the GitLab project's CI by triggering
+the `rspec-ee unit gitlab-duo-chat` job.
+This job runs with Vertex APIs enabled. The CI jobs are optional and allowed to fail to account for
+the non-deterministic nature of LLM responses.
+
+### Management of credentials and API keys for CI jobs
+
+All API keys required to run the RSpec tests should be [masked](../../ci/variables/index.md#mask-a-cicd-variable).
+
+The exception is GCP credentials as they contain characters that prevent them from being masked.
+Because `rspec-ee unit gitlab-duo-chat` needs to run on MR branches, GCP credentials cannot be added as a protected variable
+and must be added as a regular CI variable.
+For security, the GCP credentials added to the GitLab project's CI, and the associated GCP project,
+must be sandboxed and must not have access to any production infrastructure.
+
## GraphQL Subscription
The GraphQL Subscription for Chat behaves slightly different because it's user-centric. A user could have Chat open on multiple browser tabs, or also on their IDE.
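The `REAL_AI_REQUEST=1` gate used by these specs can be sketched as a plain predicate (a toy illustration; the helper name and the exact checks are assumptions, not the actual spec code):

```ruby
# Sketch of an env-var gate: real-LLM specs only run when the flag is set
# and credentials are present, so the default suite stays deterministic.
def real_ai_enabled?(env = ENV)
  env["REAL_AI_REQUEST"] == "1" && !env["ANTHROPIC_API_KEY"].to_s.empty?
end

puts real_ai_enabled?({})                                                     # => false
puts real_ai_enabled?("REAL_AI_REQUEST" => "1", "ANTHROPIC_API_KEY" => "key") # => true
```

With no credentials configured, the real-request specs are skipped rather than failing.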
diff --git a/doc/development/ai_features/index.md b/doc/development/ai_features/index.md
index 4401a7e3fb1..df1627f2dc3 100644
--- a/doc/development/ai_features/index.md
+++ b/doc/development/ai_features/index.md
@@ -15,7 +15,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- Background workers execute
- GraphQL subscriptions deliver results back in real time
- Abstraction for
- - OpenAI
- Google Vertex AI
- Anthropic
- Rate Limiting
@@ -28,7 +27,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- Automatic Markdown Rendering of responses
- Centralised Group Level settings for experiment and 3rd party
- Experimental API endpoints for exploration of AI APIs by GitLab team members without the need for credentials
- - OpenAI
- Google Vertex AI
- Anthropic
@@ -36,7 +34,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
Apply the following two feature flags to any AI feature work:
-- A general that applies to all AI features.
+- A general flag (`ai_global_switch`) that applies to all AI features.
- A flag specific to that feature. The feature flag name [must be different](../feature_flags/index.md#feature-flags-for-licensed-features) than the licensed feature name.
See the [feature flag tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/405161) for the list of all feature flags and how to use them.
@@ -58,20 +56,19 @@ Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for
1. Enable the required general feature flags:
```ruby
- Feature.enable(:openai_experimentation)
+ Feature.enable(:ai_global_switch, type: :ops)
```
1. Ensure you have followed [the process to obtain an EE license](https://about.gitlab.com/handbook/developer-onboarding/#working-on-gitlab-ee-developer-licenses) for your local instance
1. Simulate the GDK to [simulate SaaS](../ee_features.md#simulate-a-saas-instance) and ensure the group you want to test has an Ultimate license
-1. Enable `Experimental features` and `Third-party AI services`
+1. Enable `Experimental features`:
1. Go to the group with the Ultimate license
1. **Group Settings** > **General** -> **Permissions and group features**
1. Enable **Experiment features**
- 1. Enable **Third-party AI services**
1. Enable the specific feature flag for the feature you want to test
1. Set the required access token. To receive an access token:
1. For Vertex, follow the [instructions below](#configure-gcp-vertex-access).
- 1. For all other providers, like Anthropic or OpenAI, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
+ 1. For all other providers, like Anthropic, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
### Set up the embedding database
@@ -117,12 +114,6 @@ In order to obtain a GCP service key for local development, please follow the st
Gitlab::CurrentSettings.update(vertex_ai_project: PROJECT_ID)
```
-### Configure OpenAI access
-
-```ruby
-Gitlab::CurrentSettings.update(openai_api_key: "<open-ai-key>")
-```
-
### Configure Anthropic access
```ruby
@@ -131,36 +122,9 @@ Gitlab::CurrentSettings.update!(anthropic_api_key: <insert API key>)
### Populating embeddings and using embeddings fixture
-Currently we have embeddings generate both with OpenAI and VertexAI. Bellow sections explain how to populate
+Embeddings are generated through the VertexAI text embeddings endpoint. The sections below explain how to populate
embeddings in the DB or extract embeddings to be used in specs.
-FLAG:
-We are moving towards having VertexAI embeddings only, so eventually the OpenAI embeddings support will be drop
-as well as the section bellow will be removed.
-
-#### OpenAI embeddings
-
-To seed your development database with the embeddings for GitLab Documentation,
-you may use the pre-generated embeddings and a Rake task.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:seed_pre_generated
-```
-
-The DBCleaner gem we use clear the database tables before each test runs.
-Instead of fully populating the table `tanuki_bot_mvc` where we store OpenAI embeddings for the documentations,
-we can add a few selected embeddings to the table from a pre-generated fixture.
-
-For instance, to test that the question "How can I reset my password" is correctly
-retrieving the relevant embeddings and answered, we can extract the top N closet embeddings
-to the question into a fixture and only restore a small number of embeddings quickly.
-To facilitate an extraction process, a Rake task been written.
-You can add or remove the questions needed to be tested in the Rake task and run the task to generate a new fixture.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:extract_embeddings
-```
-
#### VertexAI embeddings
To seed your development database with the embeddings for GitLab Documentation,
@@ -210,9 +174,6 @@ Use the [experimental REST API endpoints](https://gitlab.com/gitlab-org/gitlab/-
The endpoints are:
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/embeddings`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/chat/completions`
- `https://gitlab.example.com/api/v4/ai/experimentation/anthropic/complete`
- `https://gitlab.example.com/api/v4/ai/experimentation/vertex/chat`
@@ -257,11 +218,9 @@ mutation {
}
```
-The GraphQL API then uses the [OpenAI Client](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/open_ai/client.rb)
+The GraphQL API then uses the [Anthropic Client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/anthropic/client.rb)
to send the response.
-Remember that other clients are available and you should not use OpenAI.
-
#### How to receive a response
The API requests to AI providers are handled in a background job. We therefore do not keep the request alive and the Frontend needs to match the request to the response from the subscription.
@@ -302,7 +261,7 @@ To not have many concurrent subscriptions, you should also only subscribe to the
#### Current abstraction layer flow
-The following graph uses OpenAI as an example. You can use different providers.
+The following graph uses VertexAI as an example. You can use different providers.
```mermaid
flowchart TD
@@ -311,9 +270,9 @@ B --> C[Llm::ExecuteMethodService]
C --> D[One of services, for example: Llm::GenerateSummaryService]
D -->|scheduled| E[AI worker:Llm::CompletionWorker]
E -->F[::Gitlab::Llm::Completions::Factory]
-F -->G[`::Gitlab::Llm::OpenAi::Completions::...` class using `::Gitlab::Llm::OpenAi::Templates::...` class]
-G -->|calling| H[Gitlab::Llm::OpenAi::Client]
-H --> |response| I[::Gitlab::Llm::OpenAi::ResponseService]
+F -->G[`::Gitlab::Llm::VertexAi::Completions::...` class using `::Gitlab::Llm::Templates::...` class]
+G -->|calling| H[Gitlab::Llm::VertexAi::Client]
+H --> |response| I[::Gitlab::Llm::GraphqlSubscriptionResponseService]
I --> J[GraphqlTriggers.ai_completion_response]
J --> K[::GitlabSchema.subscriptions.trigger]
```
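The flow in the graph above can be condensed into a toy sketch (all class names below are stand-ins, not the actual GitLab classes): a factory looks up a completion class, the completion builds a prompt, calls a client, and hands the response onward.

```ruby
# Stand-in for an AI provider client.
class FakeClient
  def text(content:)
    "response to: #{content}"
  end
end

# Stand-in for a completion class: builds the prompt and calls the client.
class SummaryCompletion
  def initialize(client)
    @client = client
  end

  def execute(input)
    prompt = "Summarize: #{input}"
    @client.text(content: prompt)
  end
end

# Stand-in for the completions factory: maps a feature name to its class.
class CompletionFactory
  COMPLETIONS = { summarize: SummaryCompletion }.freeze

  def self.build(name, client)
    COMPLETIONS.fetch(name).new(client)
  end
end

result = CompletionFactory.build(:summarize, FakeClient.new).execute("an issue")
puts result # => "response to: Summarize: an issue"
```

In the real flow, the response is then wrapped by a response modifier and delivered through the GraphQL subscription response service.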
@@ -419,11 +378,11 @@ end
We recommend to use [policies](../policies.md) to deal with authorization for a feature. Currently we need to make sure to cover the following checks:
-1. General AI feature flag is enabled
+1. General AI feature flag (`ai_global_switch`) is enabled
1. Feature specific feature flag is enabled
1. The namespace has the required license for the feature
1. User is a member of the group/project
-1. `experiment_features_enabled` and `third_party_ai_features_enabled` flags are set on the `Namespace`
+1. `experiment_features_enabled` settings are set on the `Namespace`
For our example, we need to implement the `allowed?(:amazing_new_ai_feature)` call. As an example, you can look at the [Issue Policy for the summarize comments feature](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb). In our example case, we want to implement the feature for Issues as well:
@@ -436,7 +395,7 @@ module EE
prepended do
with_scope :subject
condition(:ai_available) do
- ::Feature.enabled?(:openai_experimentation)
+ ::Feature.enabled?(:ai_global_switch, type: :ops)
end
with_scope :subject
@@ -501,10 +460,9 @@ Caching has following limitations:
### Check if feature is allowed for this resource based on namespace settings
-There are two settings allowed on root namespace level that restrict the use of AI features:
+There is one setting allowed on the root namespace level that restricts the use of AI features:
- `experiment_features_enabled`
-- `third_party_ai_features_enabled`.
To check if that feature is allowed for a given namespace, call:
@@ -512,46 +470,39 @@ To check if that feature is allowed for a given namespace, call:
Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
```
-Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate
-between experimental and beta features.
+Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are
+arrays there that differentiate between experimental and beta features.
This way we are ready for the following different cases:
-- If the feature is not in any array, the check will return `true`. For example, the feature was moved to GA and does not use a third-party setting.
-- If feature is in GA, but uses a third-party setting, the class will return a proper answer based on the namespace third-party setting.
+- If the feature is not in any array, the check will return `true`. For example, the feature was moved to GA.
To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
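The stage-check idea can be sketched as follows (the array names mirror the ones mentioned above, but the lookup logic is an assumption, not the actual class):

```ruby
# Toy stage check: a feature absent from both arrays is treated as GA and
# always allowed; otherwise the namespace setting decides.
class StageCheck
  EXPERIMENTAL_FEATURES = [:amazing_new_ai_feature].freeze
  BETA_FEATURES = [:summarize_comments].freeze

  def self.available?(namespace_settings, feature)
    return true unless EXPERIMENTAL_FEATURES.include?(feature) || BETA_FEATURES.include?(feature)

    namespace_settings[:experiment_features_enabled]
  end
end

puts StageCheck.available?({ experiment_features_enabled: false }, :ga_feature)             # => true
puts StageCheck.available?({ experiment_features_enabled: false }, :amazing_new_ai_feature) # => false
```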
### Implement calls to AI APIs and the prompts
The `CompletionWorker` will call the `Completions::Factory` which will initialize the Service and execute the actual call to the API.
-In our example, we will use OpenAI and implement two new classes:
+In our example, we will use VertexAI and implement two new classes:
```ruby
-# /ee/lib/gitlab/llm/open_ai/completions/amazing_new_ai_feature.rb
+# /ee/lib/gitlab/llm/vertex_ai/completions/amazing_new_ai_feature.rb
module Gitlab
module Llm
- module OpenAi
+ module VertexAi
module Completions
- class AmazingNewAiFeature
- def initialize(ai_prompt_class)
- @ai_prompt_class = ai_prompt_class
- end
+ class AmazingNewAiFeature < Gitlab::Llm::Completions::Base
+ def execute
+ prompt = ai_prompt_class.new(options[:user_input]).to_prompt
- def execute(user, issue, options)
- options = ai_prompt_class.get_options(options[:messages])
+ response = Gitlab::Llm::VertexAi::Client.new(user).text(content: prompt)
- ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)
+ response_modifier = ::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions.new(response)
- ::Gitlab::Llm::OpenAi::ResponseService.new(user, issue, ai_response, options: {}).execute(
- Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
- )
+ ::Gitlab::Llm::GraphqlSubscriptionResponseService.new(
+ user, nil, response_modifier, options: response_options
+ ).execute
end
-
- private
-
- attr_reader :ai_prompt_class
end
end
end
@@ -560,28 +511,23 @@ end
```
```ruby
-# /ee/lib/gitlab/llm/open_ai/templates/amazing_new_ai_feature.rb
+# /ee/lib/gitlab/llm/vertex_ai/templates/amazing_new_ai_feature.rb
module Gitlab
module Llm
- module OpenAi
+ module VertexAi
module Templates
class AmazingNewAiFeature
- TEMPERATURE = 0.3
-
- def self.get_options(messages)
- system_content = <<-TEMPLATE
- You are an assistant that writes code for the following input:
- """
- TEMPLATE
-
- {
- messages: [
- { role: "system", content: system_content },
- { role: "user", content: messages },
- ],
- temperature: TEMPERATURE
- }
+ def initialize(user_input)
+ @user_input = user_input
+ end
+
+ def to_prompt
+ <<-PROMPT
+ You are an assistant that writes code for the following context:
+
+      context: #{@user_input}
+ PROMPT
end
end
end
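A standalone re-implementation of the template above shows how `to_prompt` interpolates the user input (the class name is copied from the example; this is illustrative, not the shipped class):

```ruby
# Toy prompt template: stores the user input and interpolates it into a
# heredoc prompt, mirroring the Templates class sketched above.
class AmazingNewAiFeature
  def initialize(user_input)
    @user_input = user_input
  end

  def to_prompt
    <<~PROMPT
      You are an assistant that writes code for the following context:

      context: #{@user_input}
    PROMPT
  end
end

puts AmazingNewAiFeature.new("sort a list in Ruby").to_prompt
```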
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 3662b21eb9e..318f9bed6d3 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -154,7 +154,14 @@ developers must familiarize themselves with our [Deprecation and Removal process
Breaking changes are:
- Removing or renaming a field, argument, enum value, or mutation.
-- Changing the type of a field, argument or enum value.
+- Changing the type or type name of an argument. The type of an argument
+ is declared by the client when [using variables](https://graphql.org/learn/queries/#variables),
+ and a change would cause a query using the old type name to be rejected by the API.
+- Changing the [_scalar type_](https://graphql.org/learn/schema/#scalar-types) of a field or enum
+ value where it results in a change to how the value serializes to JSON.
+ For example, a change from a JSON String to a JSON Number, or a change to how a String is formatted.
+ A change to another [_object type_](https://graphql.org/learn/schema/#object-types-and-fields) can be
+ allowed so long as all scalar type fields of the object continue to serialize in the same way.
- Raising the [complexity](#max-complexity) of a field or complexity multipliers in a resolver.
- Changing a field from being _not_ nullable (`null: false`) to nullable (`null: true`), as
discussed in [Nullable fields](#nullable-fields).
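The scalar-serialization point can be illustrated outside GraphQL (a toy JSON example, not GitLab code): two values that look interchangeable in Ruby serialize to different JSON payloads, which is exactly the kind of change a client parser breaks on.

```ruby
require "json"

# The same logical value serializes differently depending on scalar type:
as_string = { "weight" => "3" }.to_json
as_number = { "weight" => 3 }.to_json

puts as_string # => {"weight":"3"}
puts as_number # => {"weight":3}
```

A client that parses the field as a JSON String fails once the server starts emitting a JSON Number, even though the semantic value is unchanged.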
diff --git a/doc/development/backend/create_source_code_be/gitaly_touch_points.md b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
index c689af2f150..98607c7f6c7 100644
--- a/doc/development/backend/create_source_code_be/gitaly_touch_points.md
+++ b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
@@ -19,9 +19,3 @@ All access to Gitaly from other parts of GitLab are through Create: Source Code
After a call is made to Gitaly, Git `commit` information is stored in memory. This information is wrapped by the [Ruby `Commit` Model](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/commit.rb), which is a wrapper around [`Gitlab::Git::Commit`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/git/commit.rb).
The `Commit` model acts like an ActiveRecord object, but it does not have a PostgreSQL backend. Instead, it maps back to Gitaly RPCs.
-
-## Rugged Patches
-
-Historically in GitLab, access to the server-based `git` repositories was provided through the [rugged](https://github.com/libgit2/rugged) RubyGem, which provides Ruby bindings to `libgit2`. This was further extended by what is termed "Rugged Patches", [a set of extensions to the Rugged library](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317). Rugged implementations of some of the most commonly-used RPCs can be [enabled via feature flags](../../gitaly.md#legacy-rugged-code).
-
-Rugged access requires the use of a NFS file system, a direction GitLab is moving away from in favor of Gitaly Cluster. Rugged has been proposed for [deprecation and removal](https://gitlab.com/gitlab-org/gitaly/-/issues/1690). Several large customers are still using NFS, and a specific removal date is not planned at this point.
diff --git a/doc/development/bulk_import.md b/doc/development/bulk_import.md
index 081af2b4e17..502bee97c9c 100644
--- a/doc/development/bulk_import.md
+++ b/doc/development/bulk_import.md
@@ -51,3 +51,12 @@ and its users.
The migration process starts with the creation of a [`BulkImport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/bulk_import.rb)
record to keep track of the migration. From there all the code related to the
GitLab Group Migration can be found under the new `BulkImports` namespace in all the application layers.
+
+### Idempotency
+
+To ensure we don't get duplicate entries when re-running the same Sidekiq job, we cache each entry as it's processed and skip entries if they're present in the cache.
+
+There are two different strategies:
+
+- `BulkImports::Pipeline::HexdigestCacheStrategy`, which caches a hexdigest representation of the data.
+- `BulkImports::Pipeline::IndexCacheStrategy`, which caches the last processed index of an entry in a pipeline.
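The hexdigest strategy can be sketched in a few lines (a toy illustration with an in-memory cache, not the actual `BulkImports` code, which uses a shared cache store): each processed entry's digest is recorded, and re-runs skip entries whose digest is already present.

```ruby
require "digest"

# Toy idempotency cache: skip any entry whose SHA-256 digest was seen before.
class HexdigestCache
  def initialize
    @seen = {}
  end

  def process(entry)
    key = Digest::SHA256.hexdigest(entry.to_s)
    return :skipped if @seen[key]

    @seen[key] = true
    :processed
  end
end

cache = HexdigestCache.new
puts cache.process("issue-1") # => processed
puts cache.process("issue-1") # => skipped (idempotent re-run)
```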
diff --git a/doc/development/cells/index.md b/doc/development/cells/index.md
index 30dccd91c9d..1ab88e0d8c6 100644
--- a/doc/development/cells/index.md
+++ b/doc/development/cells/index.md
@@ -16,6 +16,7 @@ To make the application work within the GitLab Cells architecture, we need to fi
Here is the suggested approach:
1. Pick a workflow to fix.
+1. First, find out which tables are affected while performing the chosen workflow. For example, [this note](https://gitlab.com/gitlab-org/gitlab/-/issues/428600#note_1610331742) describes how to figure out the list of all tables that are affected when a project is created in a group.
1. For each table affected for the chosen workflow, choose the appropriate
[GitLab schema](../database/multiple_databases.md#gitlab-schema).
1. Identify all cross-joins, cross-transactions, and cross-database foreign keys for
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 8e6ea3d68e9..c2f2a7643ae 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -115,10 +115,10 @@ It picks reviewers and maintainers from the list at the
page, with these behaviors:
- It doesn't pick people whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status):
- - Contains the string `OOO`, `PTO`, `Parental Leave`, or `Friends and Family`.
+ - Contains the string `OOO`, `PTO`, `Parental Leave`, `Friends and Family`, or `Conference`.
- GitLab user **Busy** indicator is set to `True`.
- Emoji is from one of these categories:
- - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`
+ - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`, 🏙 `:cityscape:`
- **Out sick** - 🌡️ `:thermometer:`, 🤒 `:face_with_thermometer:`
- **At capacity** - 🔴 `:red_circle:`
- **Focus mode** - 💡 `:bulb:` (focusing on their team's work)
@@ -295,6 +295,10 @@ up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
+If you know you'll need many merge requests to deliver a feature (for example, you created a proof of concept and it is clear the feature will consist of 10+ merge requests),
+consider identifying reviewers and maintainers who already have the necessary understanding of the feature (you share the context with them), and direct all merge requests to them.
+The best DRI for finding these reviewers is the EM or Staff Engineer. Having stable reviewer counterparts for multiple merge requests with the same context improves efficiency.
+
If your merge request touches more than one domain (for example, Dynamic Analysis and GraphQL), ask for reviews from an expert from each domain.
If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion,
@@ -764,7 +768,7 @@ A merge request may benefit from being considered a customer critical priority b
Properties of customer critical merge requests:
-- The [VP of Development](https://about.gitlab.com/job-families/engineering/development/management/vp/) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the approver for deciding if a merge request qualifies as customer critical. Also, if two of his direct reports approve, that can also serve as approval.
+- A senior director or higher in Development must approve that a merge request qualifies as customer-critical. Alternatively, if two of their direct reports approve, that can also serve as approval.
- The DRI applies the `customer-critical-merge-request` label to the merge request.
- It is required that the reviewers and maintainers involved with a customer critical merge request are engaged as soon as this decision is made.
- It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
diff --git a/doc/development/contributing/first_contribution.md b/doc/development/contributing/first_contribution.md
index 3477590f40b..834f34328bc 100644
--- a/doc/development/contributing/first_contribution.md
+++ b/doc/development/contributing/first_contribution.md
@@ -343,7 +343,7 @@ Now you're ready to push changes from the community fork to the main GitLab repo
1. If you're happy with this merge request and want to start the review process, type
`@gitlab-bot ready` in a comment and then select **Comment**.
- ![GitLab bot ready comment](img/bot_ready.png)
+ ![GitLab bot ready comment](img/bot_ready_v16_6.png)
Someone from GitLab will look at your request and let you know what the next steps are.
diff --git a/doc/development/contributing/img/bot_ready.png b/doc/development/contributing/img/bot_ready.png
deleted file mode 100644
index 85116c8957b..00000000000
--- a/doc/development/contributing/img/bot_ready.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/contributing/img/bot_ready_v16_6.png b/doc/development/contributing/img/bot_ready_v16_6.png
new file mode 100644
index 00000000000..a26971eefad
--- /dev/null
+++ b/doc/development/contributing/img/bot_ready_v16_6.png
Binary files differ
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index ef1e563b668..476d370e7ee 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -158,10 +158,9 @@ To enable the Dangerfile on another existing GitLab project, complete the follow
- if: $CI_SERVER_HOST == "gitlab.com"
```
-1. If your project is in the `gitlab-org` group, you don't need to set up any token as the `DANGER_GITLAB_API_TOKEN`
- variable is available at the group level. If not, follow these last steps:
- 1. Create a [Project access tokens](../user/project/settings/project_access_tokens.md).
- 1. Add the token as a CI/CD project variable named `DANGER_GITLAB_API_TOKEN`.
+1. Create a [project access token](../user/project/settings/project_access_tokens.md) with the `api` scope,
+   the `Developer` role (so that it can add labels), and no expiration date (which in practice means the maximum of one year).
+1. Add the token as a CI/CD project variable named `DANGER_GITLAB_API_TOKEN`.
You should add the ~"Danger bot" label to the merge request before sending it
for review.
diff --git a/doc/development/database/avoiding_downtime_in_migrations.md b/doc/development/database/avoiding_downtime_in_migrations.md
index 27ffd356df6..3b4b45935b9 100644
--- a/doc/development/database/avoiding_downtime_in_migrations.md
+++ b/doc/development/database/avoiding_downtime_in_migrations.md
@@ -583,7 +583,7 @@ visualized in Thanos ([see an example](https://thanos-query.ops.gitlab.net/graph
### Swap the columns (release N + 1)
-After the background is completed and the new `bigint` columns are populated for all records, we can
+After the background migration is complete and the new `bigint` columns are populated for all records, we can
swap the columns. Swapping is done with a post-deployment migration. The exact process depends on the
table being converted, but in general it's done in the following steps:
@@ -591,8 +591,11 @@ table being converted, but in general it's done in the following steps:
migration has finished ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L13-18)).
If the migration has not completed, the subsequent steps fail anyway. By checking in advance we
aim to have a more helpful error message.
-1. Create indexes using the `bigint` columns that match the existing indexes using the `integer`
-column ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L28-34)).
+1. Use the `add_bigint_column_indexes` helper method from the `Gitlab::Database::MigrationHelpers::ConvertToBigint` module
+   to create indexes with the `bigint` columns that match the existing indexes using the `integer` column.
+   - The helper method is expected to create all required `bigint` indexes, but it's advised to double-check that
+     none of the existing indexes are missed. More information about the helper can be
+     found in merge request [135781](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135781).
1. Create foreign keys (FK) using the `bigint` columns that match the existing FK using the
`integer` column. Do this both for FK referencing other tables, and FK that reference the table
that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L36-43)).
@@ -603,6 +606,8 @@ that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/
1. Swap the defaults ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L59-62)).
1. Swap the PK constraint (if any) ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L64-68)).
1. Remove old indexes and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L70-72)).
+   - Names of the `bigint` indexes created using the `add_bigint_column_indexes` helper can be retrieved by calling
+     `bigint_index_name` from the `Gitlab::Database::MigrationHelpers::ConvertToBigint` module.
1. Remove old foreign keys (if still present) and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L74)).
See example [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66088), and [migration](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb).
diff --git a/doc/development/database/clickhouse/clickhouse_within_gitlab.md b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
index 297776429d7..2f7a3c4dfe0 100644
--- a/doc/development/database/clickhouse/clickhouse_within_gitlab.md
+++ b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
@@ -45,22 +45,39 @@ ClickHouse::Client.select('SELECT 1', :main)
## Database schema and migrations
-For the ClickHouse database there are no established schema migration procedures yet. We have very basic tooling to build up the database schema in the test environment from scratch using timestamp-prefixed SQL files.
-
-You can create a table by placing a new SQL file in the `db/click_house/main` folder:
-
-```sql
-// 20230811124511_create_issues.sql
-CREATE TABLE issues
-(
- id UInt64 DEFAULT 0,
- title String DEFAULT ''
-)
-ENGINE = MergeTree
-PRIMARY KEY (id)
+There are `bundle exec rake gitlab:clickhouse:migrate` and `bundle exec rake gitlab:clickhouse:rollback` tasks
+(introduced in [!136103](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136103)).
+
+You can create a migration by adding a Ruby migration file in the `db/click_house/migrate` folder. Name the file with a timestamp prefix, in the format `YYYYMMDDHHMMSS_description_of_migration.rb`.
+
+```ruby
+# 20230811124511_create_issues.rb
+# frozen_string_literal: true
+
+class CreateIssues < ClickHouse::Migration
+ def up
+ execute <<~SQL
+ CREATE TABLE issues
+ (
+ id UInt64 DEFAULT 0,
+ title String DEFAULT ''
+ )
+ ENGINE = MergeTree
+ PRIMARY KEY (id)
+ SQL
+ end
+
+ def down
+ execute <<~SQL
+      DROP TABLE issues
+ SQL
+ end
+end
```
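For illustration, a migration class like the one above can be exercised in isolation with a small stand-in for `ClickHouse::Migration` (an assumption made for this sketch; the real class executes SQL against a ClickHouse connection), with `down` reversing `up`:

```ruby
# Stand-in for ClickHouse::Migration, for illustration only: the real class
# runs SQL against a ClickHouse connection, this one just records the SQL.
module ClickHouse
  class Migration
    def executed_sql
      @executed_sql ||= []
    end

    def execute(sql)
      executed_sql << sql
    end
  end
end

class CreateIssues < ClickHouse::Migration
  def up
    execute <<~SQL
      CREATE TABLE issues
      (
        id UInt64 DEFAULT 0,
        title String DEFAULT ''
      )
      ENGINE = MergeTree
      PRIMARY KEY (id)
    SQL
  end

  def down
    # The down migration reverses up by dropping the table it created
    execute <<~SQL
      DROP TABLE issues
    SQL
  end
end

migration = CreateIssues.new
migration.up
migration.executed_sql.last.start_with?('CREATE TABLE issues') # => true
```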
-When you're working locally in your development environment, you can create or re-create your table schema by executing the respective `CREATE TABLE` statement. Alternatively, you can use the following snippet in the Rails console:
+When you're working locally in your development environment, you can create or re-create your table schema by
+executing `rake gitlab:clickhouse:rollback` and `rake gitlab:clickhouse:migrate`.
+Alternatively, you can use the following snippet in the Rails console:
```ruby
require_relative 'spec/support/database/click_house/hooks.rb'
diff --git a/doc/development/database/database_lab.md b/doc/development/database/database_lab.md
index 7edb8ab4de5..7cdf034844d 100644
--- a/doc/development/database/database_lab.md
+++ b/doc/development/database/database_lab.md
@@ -18,7 +18,7 @@ schema changes, like additional indexes or columns, in an isolated copy of produ
1. Select **Sign in with Google**. (Not GitLab, as you need Google SSO to connect with our project.)
1. After you sign in, select the GitLab organization and then visit "Ask Joe" in the sidebar.
1. Select the database you're testing against:
- - Most queries for the GitLab project run against `gitlab-production-tunnel-pg12`.
+ - Most queries for the GitLab project run against `gitlab-production-main`.
- If the query is for a CI table, select `gitlab-production-ci`.
- If the query is for the container registry, select `gitlab-production-registry`.
1. Type `explain <Query Text>` in the chat box to get a plan.
diff --git a/doc/development/database/iterating_tables_in_batches.md b/doc/development/database/iterating_tables_in_batches.md
index 84b82b16255..44a8c72ea2c 100644
--- a/doc/development/database/iterating_tables_in_batches.md
+++ b/doc/development/database/iterating_tables_in_batches.md
@@ -523,14 +523,14 @@ and resumed at any point. This capability is demonstrated in the following code
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count do
- Time.current > stop_at # condition for stopping the counting
+ stop_at.past? # condition for stopping the counting
end
# Continue the counting later
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count(last_count: count, last_value: last_value) do
- Time.current > stop_at
+ stop_at.past?
end
```
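The stop-and-resume pattern above can be sketched in plain Ruby. This is a simplified in-memory stand-in for the `each_batch_count` helper, not the real implementation; it only illustrates how returning `count` and `last_value` lets counting resume later:

```ruby
# Simplified stand-in for each_batch_count: count records in batches,
# yielding after each batch so the caller can stop early. Returns the
# running count and the last value processed, so counting can resume.
def each_batch_count(records, batch_size:, last_count: 0, last_value: nil)
  count = last_count
  start = last_value ? records.index(last_value) + 1 : 0

  records[start..].each_slice(batch_size) do |batch|
    count += batch.size
    last_value = batch.last
    return [count, last_value] if yield # condition for stopping the counting
  end

  [count, last_value]
end

records = (1..10).to_a
calls = 0

# Stop after the second batch, simulating the timeout in the example above
count, last_value = each_batch_count(records, batch_size: 3) { (calls += 1) >= 2 }

# Continue the counting later from where it stopped
count, last_value = each_batch_count(records, batch_size: 3,
                                     last_count: count, last_value: last_value) { false }
```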
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index fd380bee385..08d618a26ae 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -251,8 +251,12 @@ When the loose foreign key definition is no longer needed (parent table is remov
we need to remove the definition from the YAML file and ensure that we don't leave pending deleted
records in the database.
-1. Remove the deletion tracking trigger from the parent table (if the parent table is still there).
1. Remove the loose foreign key definition from the configuration (`config/gitlab_loose_foreign_keys.yml`).
+
+The deletion tracking trigger needs to be removed only when the parent table no longer uses loose foreign keys.
+If the model still has at least one `loose_foreign_key` definition remaining, skip the following steps:
+
+1. Remove the trigger from the parent table (if the parent table is still there).
1. Remove leftover deleted records from the `loose_foreign_keys_deleted_records` table.
Migration for removing the trigger:
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 79e1d3c0578..a045d8ad144 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -49,11 +49,21 @@ The usage of schema enforces the base class to be used:
### Guidelines on choosing between `gitlab_main_cell` and `gitlab_main_clusterwide` schema
+Depending on the use case, your feature may be [cell-local or clusterwide](../../architecture/blueprints/cells/index.md#how-do-i-decide-whether-to-move-my-feature-to-the-cluster-cell-or-organization-level), so the tables used by the feature should use the appropriate schema.
+
When you choose the appropriate schema for tables, consider the following guidelines as part of the [Cells](../../architecture/blueprints/cells/index.md) architecture:
- Default to `gitlab_main_cell`: We expect most tables to be assigned to the `gitlab_main_cell` schema by default. Choose this schema if the data in the table is related to `projects` or `namespaces`.
- Consult with the Tenant Scale group: If you believe that the `gitlab_main_clusterwide` schema is more suitable for a table, seek approval from the Tenant Scale group. This is crucial because it has scaling implications and may require reconsideration of the schema choice.
+To understand how existing tables are classified, you can use [this dashboard](https://manojmj.gitlab.io/tenant-scale-schema-progress/).
+
+After a schema has been assigned, the merge request pipeline might fail for one or more of the following reasons, which can be rectified by following the linked guidelines:
+
+- [Cross-database joins](#suggestions-for-removing-cross-database-joins)
+- [Cross-database transactions](#fixing-cross-database-transactions)
+- [Cross-database foreign keys](#foreign-keys-that-cross-databases)
+
### The impact of `gitlab_schema`
The usage of `gitlab_schema` has a significant impact on the application.
diff --git a/doc/development/database/understanding_explain_plans.md b/doc/development/database/understanding_explain_plans.md
index 92688eb01dc..3e8978e1046 100644
--- a/doc/development/database/understanding_explain_plans.md
+++ b/doc/development/database/understanding_explain_plans.md
@@ -352,7 +352,6 @@ Indexes:
"index_users_on_static_object_token" UNIQUE, btree (static_object_token)
"index_users_on_unlock_token" UNIQUE, btree (unlock_token)
"index_on_users_name_lower" btree (lower(name::text))
- "index_users_on_accepted_term_id" btree (accepted_term_id)
"index_users_on_admin" btree (admin)
"index_users_on_created_at" btree (created_at)
"index_users_on_email_trigram" gin (email gin_trgm_ops)
diff --git a/doc/development/development_processes.md b/doc/development/development_processes.md
index 1cdf667a35f..fa221d5b51f 100644
--- a/doc/development/development_processes.md
+++ b/doc/development/development_processes.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines"
+info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/ee/development/development_processes.html#development-guidelines-review.
---
# Development processes
@@ -35,32 +35,12 @@ Complementary reads:
### Development guidelines review
-When you submit a change to the GitLab development guidelines, who
-you ask for reviews depends on the level of change.
+For changes to development guidelines, request review and approval from an experienced GitLab Team Member.
-#### Wording, style, or link changes
-
-Not all changes require extensive review. For example, MRs that don't change the
-content's meaning or function can be reviewed, approved, and merged by any
-maintainer or Technical Writer. These can include:
-
-- Typo fixes.
-- Clarifying links, such as to external programming language documentation.
-- Changes to comply with the [Documentation Style Guide](documentation/index.md)
- that don't change the intent of the documentation page.
-
-#### Specific changes
-
-If the MR proposes changes that are limited to a particular stage, group, or team,
-request a review and approval from an experienced GitLab Team Member in that
-group. For example, if you're documenting a new internal API used exclusively by
+For example, if you're documenting a new internal API used exclusively by
a given group, request an engineering review from one of the group's members.
-After the engineering review is complete, assign the MR to the
-[Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments)
-in the modified documentation page's metadata.
-If the page is not assigned to a specific group, follow the
-[Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines).
+Small fixes, like typos, can be merged by any user with at least the Maintainer role.
#### Broader changes
@@ -85,7 +65,6 @@ In these cases, use the following workflow:
- [Quality](https://about.gitlab.com/handbook/engineering/quality/)
- [Engineering Productivity](https://about.gitlab.com/handbook/engineering/quality/engineering-productivity/)
- [Infrastructure](https://about.gitlab.com/handbook/engineering/infrastructure/)
- - [Technical Writing](https://about.gitlab.com/handbook/product/ux/technical-writing/)
You can skip this step for MRs authored by EMs or Staff Engineers responsible
for their area.
@@ -97,15 +76,15 @@ In these cases, use the following workflow:
author / approver of the MR.
If this is a significant change across multiple areas, request final review
- and approval from the VP of Development, the DRI for Development Guidelines,
- @clefelhocz1.
+ and approval from the VP of Development, who is the DRI for development guidelines.
+
+Any Maintainer can merge the MR.
-1. After all approvals are complete, assign the MR to the
- [Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments)
- in the modified documentation page's metadata.
- If the page is not assigned to a specific group, follow the
- [Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines).
- The Technical Writer may ask for additional approvals as previously suggested before merging the MR.
+#### Technical writing reviews
+
+If you would like a review by a technical writer, post a message in the `#docs` Slack channel.
+However, technical writers do not need to review the content, and any Maintainer
+other than the MR author can merge.
### Reviewer values
@@ -114,6 +93,8 @@ In these cases, use the following workflow:
As a reviewer or as a reviewee, make sure to familiarize yourself with
the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/reviewer-values/) we strive for at GitLab.
+Also, any documentation content should follow the [Documentation Style Guide](documentation/index.md).
+
## Language-specific guides
### Go guides
@@ -123,3 +104,13 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
### Shell Scripting guides
- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md)
+
+## Clear written communication
+
+When writing a comment in an issue or merge request, or in any other mode of communication,
+follow the [IETF standard](https://www.ietf.org/rfc/rfc2119.txt) when using terms like
+"MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",
+and "OPTIONAL".
+
+This ensures that different team members from different cultures have a clear understanding of
+the terms being used.
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index da6af8b95ef..56c114ba8de 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -221,8 +221,8 @@ This configuration string uses the Jaeger driver `opentracing://jaeger` with the
| Name | Example | Description |
|------|-------|-------------|
| `udp_endpoint` | `localhost:6831` | This is the default. Configures Jaeger to send trace information to the UDP listener on port `6831` using compact thrift protocol. Note that we've experienced some issues with the [Jaeger Client for Ruby](https://github.com/salemove/jaeger-client-ruby) when using this protocol. |
-| `sampler` | `probabalistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
-| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabalistic` sampler to randomly sample _1%_ of traces. |
+| `sampler` | `probabilistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
+| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabilistic` sampler to randomly sample _1%_ of traces. |
| `service_name` | `api` | Override the service name used by the Jaeger backend. This parameter takes precedence over the application-supplied value. |
NOTE:
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index c3df15f1890..6158d60a0ba 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -1281,11 +1281,10 @@ You can use an automatic screenshot generator to take and compress screenshots.
#### Extending the tool
-To add an additional **screenshot generator**, complete the following steps:
+To add an additional screenshot generator:
-1. Locate the `spec/docs_screenshots` directory.
-1. Add a new file with a `_docs.rb` extension.
-1. Be sure to include the following information in the file:
+1. In the `spec/docs_screenshots` directory, add a new file with a `_docs.rb` extension.
+1. Add the following information to your file:
```ruby
require 'spec_helper'
@@ -1298,29 +1297,29 @@ To add an additional **screenshot generator**, complete the following steps:
end
```
-1. In addition, every `it` block must include the path where the screenshot is saved:
+1. To each `it` block, add the path where the screenshot is saved:
```ruby
- it 'user/packages/container_registry/img/project_image_repositories_list'
+   it '<path/to/image>'
```
-##### Full page screenshots
+You can take a screenshot of a page with `visit <path>`.
+To avoid blank screenshots, use `expect` to wait for the content to load.
-To take a full page screenshot, `visit the page` and perform any expectation on real content (to have capybara wait till the page is ready and not take a white screenshot).
+##### Single-element screenshots
-##### Element screenshot
+You can take a screenshot of a single element.
-To have the screenshot focuses few more steps are needed:
+- Add the following to your screenshot generator file:
-- **find the area**: `screenshot_area = find('#js-registry-policies')`
-- **scroll the area in focus**: `scroll_to screenshot_area`
-- **wait for the content**: `expect(screenshot_area).to have_content 'Expiration interval'`
-- **set the crop area**: `set_crop_data(screenshot_area, 20)`
-
-In particular, `set_crop_data` accepts as arguments: a `DOM` element and a
-padding. The padding is added around the element, enlarging the screenshot area.
+ ```ruby
+ screenshot_area = find('<element>') # Find the element
+ scroll_to screenshot_area # Scroll to the element
+ expect(screenshot_area).to have_content '<content>' # Wait for the content you want to capture
+ set_crop_data(screenshot_area, <padding>) # Capture the element with added padding
+ ```
-Use `spec/docs_screenshots/container_registry_docs.rb` as a guide and as an example to create your own scripts.
+Use `spec/docs_screenshots/container_registry_docs.rb` as a guide to create your own scripts.
## Emoji
@@ -1731,6 +1730,7 @@ Some pages won't have a tier badge, because no obvious tier badge applies. For e
- Tutorials.
- Pages that compare features from different tiers.
- Pages in the `/development` folder. These pages are automatically assigned a `Contribute` badge.
+- Pages in the `/solutions` folder. These pages are automatically assigned a `Solutions` badge.
##### Administrator documentation tier badges
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index ad2cbee974b..1888d72f991 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -26,6 +26,15 @@ For guidance not on this page, we defer to these style guides:
<!-- Disable trailing punctuation in heading rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md026---trailing-punctuation-in-heading -->
<!-- markdownlint-disable MD026 -->
+## `.gitlab-ci.yml` file
+
+Use backticks and lowercase for **the `.gitlab-ci.yml` file**.
+
+When possible, use the full phrase: **the `.gitlab-ci.yml` file**.
+
+Although users can specify another name for their CI/CD configuration file,
+in most cases, still refer to **the `.gitlab-ci.yml` file**.
+
## `&`
Do not use Latin abbreviations. Use **and** instead, unless you are documenting a UI element that uses an `&`.
@@ -383,9 +392,14 @@ Use **confirmation dialog** to describe the dialog that asks you to confirm an a
Do not use **confirmation box** or **confirmation dialog box**. See also [**dialog**](#dialog).
-## Container Registry
+## container registry
+
+When documenting GitLab container registry features and functionality, use lowercase.
+
+Use:
-Use title case for the GitLab Container Registry.
+- The GitLab container registry supports A, B, and C.
+- You can push a Docker image to your project's container registry.
## currently
@@ -783,7 +797,9 @@ Do not use **handy**. If the user doesn't find the feature or process to be hand
## high availability, HA
-Do not use **high availability** or **HA**. Instead, direct readers to the GitLab [reference architectures](../../../administration/reference_architectures/index.md) for information about configuring GitLab for handling greater amounts of users.
+Do not use **high availability** or **HA**, except in the GitLab [reference architectures](../../../administration/reference_architectures/index.md#high-availability-ha). Instead, direct readers to the reference architectures for more information about configuring GitLab to handle greater numbers of users.
+
+Do not use phrases like **high availability setup** to mean a multi-node environment. Instead, use **multi-node setup** or similar.
## higher
@@ -1303,6 +1319,14 @@ For example, you might write something like:
Use lowercase for **push rules**.
+## `README` file
+
+Use backticks and lowercase for **the `README` file**, or **the `README.md` file**.
+
+When possible, use the full phrase: **the `README` file**.
+
+For plural, use **`README` files**.
+
## recommend, we recommend
Instead of **we recommend**, use **you should**. We want to talk to the user the way
diff --git a/doc/development/documentation/versions.md b/doc/development/documentation/versions.md
index dadae134f4c..bd83ed7eff2 100644
--- a/doc/development/documentation/versions.md
+++ b/doc/development/documentation/versions.md
@@ -119,9 +119,8 @@ To deprecate a page or topic:
You can add any additional context-specific details that might help users.
-1. Add the following HTML comments above and below the content.
- For `remove_date`, set a date three months after the release where it
- will be removed.
+1. Add the following HTML comments above and below the content. For `remove_date`,
+ set a date three months after the [release where it will be removed](https://about.gitlab.com/releases/).
```markdown
<!--- start_remove The following content will be removed on remove_date: 'YYYY-MM-DD' -->
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index eb1ea28d3b8..5c99f5c48df 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -36,6 +36,13 @@ A member of the Technical Writing team adds these labels:
`docs::` prefix. For example, `~docs::improvement`.
- The [`~Technical Writing` team label](../labels/index.md#team-labels).
+NOTE:
+With the exception of `doc/development/documentation`,
+technical writers do not review content in the `doc/development` directory.
+Any Maintainer can merge content in the `doc/development` directory.
+If you would like a technical writer review of content in the `doc/development` directory,
+ask in the `#docs` Slack channel.
+
## Post-merge reviews
If not assigned to a Technical Writer for review prior to merging, a review must be scheduled
@@ -65,6 +72,11 @@ Remember:
- The Technical Writer can also help decide that documentation can be merged without Technical
writer review, with the review to occur soon after merge.
+## Pages with no technical writer review
+
+The documentation under `/doc/solutions` is created, maintained, copy edited,
+and merged by the Solutions Architect team.
+
## Do not use ChatGPT or AI-generated content for the docs
GitLab documentation is distributed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), which presupposes that GitLab owns the documentation.
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 10943b2d135..d05249f3d3f 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -38,10 +38,10 @@ context rich definitions around the reason the feature is SaaS-only.
1. Add the new feature to `FEATURES` in `ee/lib/ee/gitlab/saas.rb`.
```ruby
- FEATURES = %w[purchases/additional_minutes some_domain/new_feature_name].freeze
+ FEATURES = %i[purchases_additional_minutes some_domain_new_feature_name].freeze
```
-1. Use the new feature in code with `Gitlab::Saas.feature_available?('some_domain/new_feature_name')`.
+1. Use the new feature in code with `Gitlab::Saas.feature_available?(:some_domain_new_feature_name)`.
#### SaaS-only feature definition and validation
@@ -68,7 +68,7 @@ Each SaaS feature is defined in a separate YAML file consisting of a number of f
Prepend the `ee/lib/ee/gitlab/saas.rb` module and override the `Gitlab::Saas.feature_available?` method.
```ruby
-JH_DISABLED_FEATURES = %w[some_domain/new_feature_name].freeze
+JH_DISABLED_FEATURES = %i[some_domain_new_feature_name].freeze
override :feature_available?
def feature_available?(feature)
@@ -78,7 +78,7 @@ end
### Do not use SaaS-only features for functionality in CE
-`Gitlab::Saas.feature_vailable?` must not appear in CE.
+`Gitlab::Saas.feature_available?` must not appear in CE.
See [extending CE with EE guide](#extend-ce-features-with-ee-backend-code).
### SaaS-only features in tests
@@ -88,30 +88,30 @@ It is strongly advised to include automated tests for all code affected by a Saa
to ensure the feature works properly.
To enable a SaaS-only feature in a test, use the `stub_saas_features`
-helper. For example, to globally disable the `purchases/additional_minutes` feature
+helper. For example, to globally disable the `purchases_additional_minutes` feature
in a test:
```ruby
-stub_saas_features('purchases/additional_minutes' => false)
+stub_saas_features(purchases_additional_minutes: false)
-::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => false
+::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => false
```
A common pattern of testing both paths looks like:
```ruby
it 'purchases_additional_minutes is not available' do
- # tests assuming purchases/additional_minutes is not enabled by default
- ::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => false
+ # tests assuming purchases_additional_minutes is not enabled by default
+ ::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => false
end
-context 'when purchases/additional_minutes is available' do
+context 'when purchases_additional_minutes is available' do
before do
- stub_saas_features('purchases/additional_minutes' => true)
+ stub_saas_features(purchases_additional_minutes: true)
end
it 'returns true' do
- ::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => true
+ ::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => true
end
end
```
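For illustration only, the symbol-based check above can be approximated with a self-contained sketch. This is an assumption of the shape, not the real `ee/lib/ee/gitlab/saas.rb` implementation:

```ruby
# Minimal sketch of the SaaS feature check described above: features are
# declared as symbols and looked up with feature_available?.
module Gitlab
  module Saas
    FEATURES = %i[purchases_additional_minutes some_domain_new_feature_name].freeze

    def self.feature_available?(feature)
      FEATURES.include?(feature)
    end
  end
end

Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => true
Gitlab::Saas.feature_available?(:some_missing_feature)         # => false
```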
diff --git a/doc/development/experiment_guide/implementing_experiments.md b/doc/development/experiment_guide/implementing_experiments.md
index 83369ad8e34..15b8f8fc192 100644
--- a/doc/development/experiment_guide/implementing_experiments.md
+++ b/doc/development/experiment_guide/implementing_experiments.md
@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Implementing an experiment
-[Examples](https://gitlab.com/gitlab-org/growth/growth/-/wikis/GLEX-Framework-code-examples)
+[Examples](https://gitlab.com/groups/gitlab-org/growth/-/wikis/GLEX-How-Tos)
Start by generating a feature flag using the `bin/feature-flag` command as you
usually would for a development feature flag, making sure to use `experiment` for
diff --git a/doc/development/export_csv.md b/doc/development/export_csv.md
index 9b0205166bf..ce0a6e026ff 100644
--- a/doc/development/export_csv.md
+++ b/doc/development/export_csv.md
@@ -10,7 +10,7 @@ This document lists the different implementations of CSV export in GitLab codeba
| Export type | How it works | Advantages | Disadvantages | Existing examples |
|---|---|---|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Streaming | - Query and yield data in batches to a response stream.<br>- Download starts immediately. | - Report available immediately. | - No progress indicator.<br>- Requires a reliable connection. | [Export Audit Event Log](../administration/audit_events.md#export-to-csv) |
+| Streaming | - Query and yield data in batches to a response stream.<br>- Download starts immediately. | - Report available immediately. | - No progress indicator.<br>- Requires a reliable connection. | [Export Audit Event Log](../administration/audit_events.md#exporting-audit-events) |
| Downloading | - Query and write data in batches to a temporary file.<br>- Loads the file into memory.<br>- Sends the file to the client. | - Report available immediately. | - Large amount of data might cause request timeout.<br>- Memory intensive.<br>- Request expires when user navigates to a different page. | - [Export Chain of Custody Report](../user/compliance/compliance_center/index.md#chain-of-custody-report)<br>- [Export License Usage File](../subscriptions/self_managed/index.md#export-your-license-usage) |
| As email attachment | - Asynchronously process the query with a background job.<br>- Email uses the export as an attachment. | - Asynchronous processing. | - Requires users to use a different app (email) to download the CSV.<br>- Email providers may limit attachment size. | - [Export issues](../user/project/issues/csv_export.md)<br>- [Export merge requests](../user/project/merge_requests/csv_export.md) |
| As downloadable link in email (*) | - Asynchronously process the query with a background job.<br>- Email uses an export link. | - Asynchronous processing.<br>- Bypasses email provider attachment size limit. | - Requires users to use a different app (email).<br>- Requires additional storage and cleanup. | [Export User Permissions](https://gitlab.com/gitlab-org/gitlab/-/issues/1772) |
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 99070f3d31c..5807c9c5621 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -974,28 +974,6 @@ const data = store.readQuery({
Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/caching/advanced-topics/#the-connection-directive).
-### Managing performance
-
-The Apollo client batches queries by default. Given 3 deferred queries,
-Apollo groups them into one request, sends the single request to the server, and
-responds after all 3 queries have completed.
-
-If you need to have queries sent as individual requests, additional context can be provided
-to tell Apollo to do this.
-
-```javascript
-export default {
- apollo: {
- user: {
- query: QUERY_IMPORT,
- context: {
- isSingleRequest: true,
- }
- }
- },
-};
-```
-
#### Polling and Performance
While the Apollo client has support for simple polling, for performance reasons, our [ETag-based caching](../polling.md) is preferred to hitting the database each time.
@@ -1081,21 +1059,6 @@ await this.$apollo.mutate({
});
```
-ETags depend on the request being a `GET` instead of GraphQL's usual `POST`. Our default link library does not support `GET` requests, so we must let our default Apollo client know to use a different library. Keep in mind, this means your app cannot batch queries.
-
-```javascript
-/* componentMountIndex.js */
-
-const apolloProvider = new VueApollo({
- defaultClient: createDefaultClient(
- {},
- {
- useGet: true,
- },
- ),
-});
-```
-
Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
```javascript
diff --git a/doc/development/fe_guide/security.md b/doc/development/fe_guide/security.md
index d578449e578..4e06c22b383 100644
--- a/doc/development/fe_guide/security.md
+++ b/doc/development/fe_guide/security.md
@@ -12,57 +12,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
[Qualys SSL Labs Server Test](https://www.ssllabs.com/ssltest/analyze.html) are good resources for finding
potential problems and ensuring compliance with security best practices.
-<!-- Uncomment these sections when CSP/SRI are implemented.
-### Content Security Policy (CSP)
-
-Content Security Policy is a web standard that intends to mitigate certain
-forms of Cross-Site Scripting (XSS) as well as data injection.
-
-Content Security Policy rules should be taken into consideration when
-implementing new features, especially those that may rely on connection with
-external services.
-
-GitLab's CSP is used for the following:
-
-- Blocking plugins like Flash and Silverlight from running at all on our pages.
-- Blocking the use of scripts and stylesheets downloaded from external sources.
-- Upgrading `http` requests to `https` when possible.
-- Preventing `iframe` elements from loading in most contexts.
-
-Some exceptions include:
-
-- Scripts from Google Analytics and Matomo if either is enabled.
-- Connecting with GitHub, Bitbucket, GitLab.com, etc. to allow project importing.
-- Connecting with Google, Twitter, GitHub, etc. to allow OAuth authentication.
-
-We use [the Secure Headers gem](https://github.com/twitter/secureheaders) to enable Content
-Security Policy headers in the GitLab Rails app.
-
-Some resources on implementing Content Security Policy:
-
-- [MDN Article on CSP](https://developer.mozilla.org/en-US/docs/Web/Security/CSP)
-- [GitHub's CSP Journey on the GitHub Engineering Blog](https://github.blog/2016-04-12-githubs-csp-journey/)
-- The Dropbox Engineering Blog's series on CSP: [1](https://blogs.dropbox.com/tech/2015/09/on-csp-reporting-and-filtering/), [2](https://blogs.dropbox.com/tech/2015/09/unsafe-inline-and-nonce-deployment/), [3](https://blogs.dropbox.com/tech/2015/09/csp-the-unexpected-eval/), [4](https://blogs.dropbox.com/tech/2015/09/csp-third-party-integrations-and-privilege-separation/)
-
-### Subresource Integrity (SRI)
-
-Subresource Integrity prevents malicious assets from being provided by a CDN by
-guaranteeing that the asset downloaded is identical to the asset the server
-is expecting.
-
-The Rails app generates a unique hash of the asset, which is used as the
-asset's `integrity` attribute. The browser generates the hash of the asset
-on-load and will reject the asset if the hashes do not match.
-
-All CSS and JavaScript assets should use Subresource Integrity.
-
-Some resources on implementing Subresource Integrity:
-
-- [MDN Article on SRI](https://developer.mozilla.org/en-us/docs/web/security/subresource_integrity)
-- [Subresource Integrity on the GitHub Engineering Blog](https://github.blog/2015-09-19-subresource-integrity/)
-
--->
-
## Including external resources
External fonts, CSS, and JavaScript should never be used with the exception of
diff --git a/doc/development/fe_guide/sentry.md b/doc/development/fe_guide/sentry.md
index 929de1499c7..95a170b7976 100644
--- a/doc/development/fe_guide/sentry.md
+++ b/doc/development/fe_guide/sentry.md
@@ -39,7 +39,7 @@ to our Sentry instance under the project
The most common way to report errors to Sentry is to call `captureException(error)`, for example:
```javascript
-import * as Sentry from '@sentry/browser';
+import * as Sentry from '~/sentry/sentry_browser_wrapper';
try {
// Code that may fail in runtime
@@ -53,6 +53,9 @@ about, or have no control over. For example, we shouldn't report validation erro
out a form incorrectly. However, if that form submission fails because of a server error,
this is an error we want Sentry to know about.
+By default, your local development instance does not have Sentry configured. Calls to Sentry are
+stubbed and shown in the console with a `[Sentry stub]` prefix for debugging.
+
### Unhandled/unknown errors
Additionally, we capture unhandled errors automatically in all of our pages.
diff --git a/doc/development/fe_guide/storybook.md b/doc/development/fe_guide/storybook.md
index 6049dd7c7d3..cbda9d5efa2 100644
--- a/doc/development/fe_guide/storybook.md
+++ b/doc/development/fe_guide/storybook.md
@@ -135,3 +135,37 @@ export const Default = Template.bind({});
Default.args = {};
```
+
+## Using a Vuex store
+
+To write a story for a component that requires access to a Vuex store, use the `createVuexStore` method provided in
+the Story context.
+
+```javascript
+import Vue from 'vue';
+import { withVuexStore } from 'storybook_addons/vuex_store';
+import DurationChart from './duration-chart.vue';
+
+const Template = (_, { argTypes, createVuexStore }) => {
+ return {
+ components: { DurationChart },
+ store: createVuexStore({
+ state: {},
+ getters: {},
+ modules: {},
+ }),
+ props: Object.keys(argTypes),
+ template: '<duration-chart />',
+ };
+};
+
+export default {
+ component: DurationChart,
+ title: 'ee/analytics/cycle_analytics/components/duration_chart',
+ decorators: [withVuexStore],
+};
+
+export const Default = Template.bind({});
+
+Default.args = {};
+```
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index e760b0adaaa..400b178d9a4 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -6,18 +6,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# SCSS style guide
-This style guide recommends best practices for SCSS to make styles easy to read,
-easy to maintain, and performant for the end-user.
-
-## Rules
-
-Our CSS is a mixture of current and legacy approaches. That means sometimes it may be difficult to follow this guide to the letter; it means you are likely to run into exceptions, where following the guide is difficult to impossible without major effort. In those cases, you may work with your reviewers and maintainers to identify an approach that does not fit these rules. Try to limit these cases.
-
-### Utility Classes
+## Utility Classes
To limit the generation of new CSS as our site grows, prefer the use of utility classes over adding new CSS. In complex cases, CSS can be addressed by adding component classes.
-#### Where are utility classes defined?
+### Where are utility classes defined?
Prefer the use of [utility classes defined in GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/main/doc/css.md#utilities).
@@ -27,6 +20,8 @@ An easy list of classes can also be [seen on Unpkg](https://unpkg.com/browse/@gi
<!-- vale gitlab.Spelling = YES -->
+You can also use an extension like [CSS Class completion](https://marketplace.visualstudio.com/items?itemName=Zignd.html-css-class-completion).
+
Classes in [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) and [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss) are being deprecated.
Classes in [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss) that use non-design-system values should be avoided. Use classes with conforming values instead.
@@ -40,13 +35,13 @@ GitLab differs from the scale used in the Bootstrap library. For a Bootstrap pad
utility, you may need to double the size of the applied utility to achieve the same visual
result (such as `ml-1` becoming `gl-ml-2`).
-#### Where should you put new utility classes?
+### Where should you put new utility classes?
If a class you need has not been added to GitLab UI, you get to add it! Follow the naming patterns documented in the [utility files](https://gitlab.com/gitlab-org/gitlab-ui/-/tree/main/src/scss/utility-mixins) and refer to the [GitLab UI CSS documentation](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/main/doc/contributing/adding_css.md#adding-utility-mixins) for more details, especially about adding responsive and stateful rules.
If it is not possible to wait for a GitLab UI update (generally one day), add the class to [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) following the same naming conventions documented in GitLab UI. A follow-up issue to backport the class to GitLab UI and delete it from GitLab should be opened.
-#### When should you create component classes?
+### When should you create component classes?
We recommend a "utility-first" approach.
@@ -60,7 +55,7 @@ Inspiration:
- <https://tailwindcss.com/docs/utility-first>
- <https://tailwindcss.com/docs/extracting-components>
-#### Utility mixins
+### Utility mixins
In addition to utility classes GitLab UI provides utility mixins named after the utility classes.
@@ -95,7 +90,7 @@ For example prefer `display: flex` over `@include gl-display-flex`. Utility mixi
}
```
-### Naming
+## Naming
Filenames should use `snake_case`.
@@ -119,6 +114,23 @@ CSS classes should use the `lowercase-hyphenated` format rather than
}
```
+Avoid creating compound class names with the SCSS `&` feature. It makes
+searching for usages harder and provides limited benefit.
+
+```scss
+// Bad
+.class {
+ &-name {
+ color: orange;
+ }
+}
+
+// Good
+.class-name {
+ color: #fff;
+}
+```
+
Class names should be used instead of tag name selectors.
Using tag name selectors is discouraged because they can affect
unintended elements in the hierarchy.
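For instance (a sketch; `.item-list` is an illustrative class name, not one from the codebase):

```scss
// Bad: styles every list on the page
ul {
  margin-bottom: 0;
}

// Good: styles only the elements that opt in to the class
.item-list {
  margin-bottom: 0;
}
```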
@@ -154,53 +166,47 @@ the page.
}
```
-### Selectors with a `js-` Prefix
-
-Do not use any selector prefixed with `js-` for styling purposes. These
-selectors are intended for use only with JavaScript to allow for removal or
-renaming without breaking styling.
-
-### Variables
-
-Before adding a new variable for a color or a size, guarantee:
-
-- There isn't an existing one.
-- There isn't a similar one we can use instead.
-
-### Using `extend` at-rule
+## Nesting
-Usage of the `extend` at-rule is prohibited due to [memory leaks](https://gitlab.com/gitlab-org/gitlab/-/issues/323021) and [the rule doesn't work as it should to](https://sass-lang.com/documentation/breaking-changes/extend-compound). Use mixins instead:
+Avoid unnecessary nesting. The extra specificity of a wrapper component
+makes things harder to override.
```scss
// Bad
-.gl-pt-3 {
- padding-top: 12px;
-}
-
-.my-element {
- @extend .gl-pt-3;
-}
+.component-container {
+ .component-header {
+ /* ... */
+ }
-// compiles to
-.gl-pt-3, .my-element {
- padding-top: 12px;
+ .component-body {
+ /* ... */
+ }
}
// Good
-@mixin gl-pt-3 {
- padding-top: 12px;
+.component-container {
+ /* ... */
}
-.my-element {
- @include gl-pt-3;
+.component-header {
+ /* ... */
}
-// compiles to
-.my-element {
- padding-top: 12px;
+.component-body {
+ /* ... */
}
```
+## Selectors with a `js-` Prefix
+
+Do not use any selector prefixed with `js-` for styling purposes. These
+selectors are intended for use only with JavaScript to allow for removal or
+renaming without breaking styling.
+
+## Using `extend` at-rule
+
+Usage of the `extend` at-rule is prohibited due to [memory leaks](https://gitlab.com/gitlab-org/gitlab/-/issues/323021) and [the rule doesn't work as it should](https://sass-lang.com/documentation/breaking-changes/extend-compound).
+
## Linting
We use [stylelint](https://stylelint.io) to check for style guide conformity. It uses the
diff --git a/doc/development/fe_guide/style/typescript.md b/doc/development/fe_guide/style/typescript.md
new file mode 100644
index 00000000000..529459097b4
--- /dev/null
+++ b/doc/development/fe_guide/style/typescript.md
@@ -0,0 +1,215 @@
+---
+type: reference, dev
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# TypeScript
+
+## History with GitLab
+
+TypeScript has been [considered](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/35),
+discussed, promoted, and rejected for years at GitLab. The general
+conclusion is that we are unable to integrate TypeScript into the main
+project because the costs outweigh the benefits.
+
+- The main project has **a lot** of pre-existing code that is not strongly typed.
+- Not all contributors to the main project are familiar with TypeScript.
+
+Apart from the main project, TypeScript has been profitably employed in
+a handful of satellite projects.
+
+## Projects using TypeScript
+
+The following GitLab projects use TypeScript:
+
+- [`gitlab-web-ide`](https://gitlab.com/gitlab-org/gitlab-web-ide/)
+- [`gitlab-vscode-extension`](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
+- [`gitlab-language-server-for-code-suggestions`](https://gitlab.com/gitlab-org/editor-extensions/gitlab-language-server-for-code-suggestions)
+- [`gitlab-org/cluster-integration/javascript-client`](https://gitlab.com/gitlab-org/cluster-integration/javascript-client)
+
+## Recommendations
+
+### Set up ESLint and TypeScript configuration
+
+When setting up a new TypeScript project, configure strict type-safety rules for
+ESLint and TypeScript. This ensures that the project remains as type-safe as possible.
+
+The [GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
+project is a good model for a TypeScript project's boilerplate and configuration.
+Consider copying the `tsconfig.json` and `.eslintrc.json` from there.
+
+For `tsconfig.json`:
+
+- Use [`"strict": true`](https://www.typescriptlang.org/tsconfig#strict).
+ This enforces the strongest type-checking capabilities in the project and
+ prohibits overriding type-safety.
+- Use [`"skipLibCheck": true`](https://www.typescriptlang.org/tsconfig#skipLibCheck).
+  This improves compile time by only checking referenced `.d.ts`
+  files as opposed to all `.d.ts` files in `node_modules`.
+
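Taken together, a minimal `tsconfig.json` reflecting these two options might look like the following (an illustrative sketch, not the canonical GitLab Workflow Extension file):

```json
{
  "compilerOptions": {
    "strict": true,
    "skipLibCheck": true
  }
}
```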
+For `.eslintrc.json` (or `.eslintrc.js`):
+
+- Make sure that TypeScript-specific parsing and linting are placed in an `overrides` entry
+ for `**/*.ts` files. This way, linting regular `.js` files
+ remains unaffected by the TypeScript-specific rules.
+- Extend from [`plugin:@typescript-eslint/recommended`](https://typescript-eslint.io/rules?supported-rules=recommended)
+ which has some very sensible defaults, such as:
+ - [`"@typescript-eslint/no-explicit-any": "error"`](https://typescript-eslint.io/rules/no-explicit-any/)
+ - [`"@typescript-eslint/no-unsafe-assignment": "error"`](https://typescript-eslint.io/rules/no-unsafe-assignment/)
+ - [`"@typescript-eslint/no-unsafe-return": "error"`](https://typescript-eslint.io/rules/no-unsafe-return)
+
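A corresponding `.eslintrc.json` skeleton might look like this (an illustrative sketch; the real configuration in the GitLab Workflow Extension project is more complete):

```json
{
  "overrides": [
    {
      "files": ["**/*.ts"],
      "parser": "@typescript-eslint/parser",
      "extends": ["plugin:@typescript-eslint/recommended"]
    }
  ]
}
```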
+### Avoid `any`
+
+Avoid `any` at all costs. This should already be configured in the project's linter,
+but it's worth calling out here.
+
+Developers commonly resort to `any` when dealing with data structures that cross
+domain boundaries, such as handling HTTP responses or interacting with untyped
+libraries. This appears convenient at first. However, opting for a well-defined type (or using
+`unknown` and employing type narrowing through predicates) carries substantial benefits.
+
+```typescript
+// Bad :(
+function handleMessage(data: any) {
+ console.log("We don't know what data is. This could blow up!", data.special.stuff);
+}
+
+// Good :)
+function handleMessage(data: unknown) {
+ console.log("Sometimes it's okay that it remains unknown.", JSON.stringify(data));
+}
+
+// Also good :)
+function isFooMessage(data: unknown): data is { foo: string } {
+  return typeof data === 'object' && data !== null && 'foo' in data;
+}
+
+function handleMessage(data: unknown) {
+ if (isFooMessage(data)) {
+ console.log("We know it's a foo now. This is safe!", data.foo);
+ }
+}
+```
+
+### Avoid casting with `<>` or `as`
+
+Avoid casting with `<>` or `as` as much as possible.
+
+Type casting explicitly circumvents type-safety. Consider using
+[type predicates](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#using-type-predicates).
+
+```typescript
+// Bad :(
+function handler(data: unknown) {
+ console.log((data as StuffContainer).stuff);
+}
+
+// Good :)
+function hasStuff(data: unknown): data is StuffContainer {
+ if (data && typeof data === 'object') {
+ return 'stuff' in data;
+ }
+
+ return false;
+}
+
+function handler(data: unknown) {
+  if (hasStuff(data)) {
+    // No casting needed :)
+    console.log(data.stuff);
+    return;
+  }
+
+  throw new Error('Expected data to have stuff. Catastrophic consequences might follow...');
+}
+
+```
+
+There are some rare cases where this might be acceptable (consider
+[this test utility](https://gitlab.com/gitlab-org/gitlab-web-ide/-/blob/3ea8191ed066811caa4fb108713e7538b8d8def1/packages/vscode-extension-web-ide/test-utils/createFakePartial.ts#L1)). However, 99% of the
+time, there's a better way.
+
+### Prefer `interface` over `type` for new structures
+
+Prefer declaring a new `interface` over declaring a new `type` alias when defining new structures.
+
+Interfaces and type aliases have a lot of cross-over, but only interfaces support declaration
+merging and can be extended later. A class also cannot `implement` a union `type` (only an object
+type or an `interface`), so using a `type` alias can restrict the usability of the structure.
+
+```typescript
+// Bad :(
+type Fooer = {
+ foo: () => string;
+}
+
+// Good :)
+interface Fooer {
+ foo: () => string;
+}
+```
+
+From the [TypeScript guide](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#differences-between-type-aliases-and-interfaces):
+
+> If you would like a heuristic, use `interface` until you need to use features from `type`.
+
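As a sketch of why this matters, a class can `implement` the `Fooer` interface from the example above directly (`Widget` here is a hypothetical class, not one from the codebase):

```typescript
interface Fooer {
  foo: () => string;
}

// Because Fooer is declared as an interface, a class can
// use it with the `implements` keyword.
class Widget implements Fooer {
  foo = () => 'widget';
}

const fooer: Fooer = new Widget();
console.log(fooer.foo()); // 'widget'
```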
+### Use `type` to define aliases for existing types
+
+Use `type` to define aliases for existing types, classes, or interfaces. Use
+the TypeScript [Utility Types](https://www.typescriptlang.org/docs/handbook/utility-types.html)
+to provide transformations.
+
+```typescript
+interface Config {
+ foo: string;
+
+ isBad: boolean;
+}
+
+// Bad :(
+type PartialConfig = {
+ foo?: string;
+
+ isBad?: boolean;
+}
+
+// Good :)
+type PartialConfig = Partial<Config>;
+```
+
+### Use union types to improve inference
+
+```typescript
+// Bad :(
+interface Foo { type: string }
+interface FooBar extends Foo { bar: string }
+interface FooZed extends Foo { zed: string }
+
+const doThing = (foo: Foo) => {
+ if (foo.type === 'bar') {
+ // Casting bad :(
+ console.log((foo as FooBar).bar);
+ }
+}
+
+// Good :)
+interface FooBar { type: 'bar', bar: string }
+interface FooZed { type: 'zed', zed: string }
+type Foo = FooBar | FooZed;
+
+const doThing = (foo: Foo) => {
+ if (foo.type === 'bar') {
+ // No casting needed :) - TS knows we are FooBar now
+ console.log(foo.bar);
+ }
+}
+```
+
+## Future plans
+
+- Shared ESLint configuration to reuse across TypeScript projects.
+
+## Related topics
+
+- [TypeScript Handbook](https://www.typescriptlang.org/docs/handbook/intro.html)
+- [TypeScript notes in GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/docs/developer/coding-guidelines.md?ref_type=heads#typescript)
diff --git a/doc/development/fe_guide/type_hinting.md b/doc/development/fe_guide/type_hinting.md
new file mode 100644
index 00000000000..026bf855e27
--- /dev/null
+++ b/doc/development/fe_guide/type_hinting.md
@@ -0,0 +1,215 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Type hinting overview
+
+The frontend codebase of the GitLab project neither requires nor enforces types. Adding
+type annotations is optional, and we don't currently enforce any type safety in the JavaScript
+codebase. However, type annotations might be very helpful in adding clarity to the codebase,
+especially in shared utilities code. This document aims to cover how type hinting currently works,
+how to add new type annotations, and how to set up type hinting in the GitLab project.
+
+## JSDoc
+
+[JSDoc](https://jsdoc.app/) is a tool to document and describe types in JavaScript code, using
+specially formed comments. JSDoc's types vocabulary is relatively limited, but it is widely
+supported [by many IDEs](https://en.wikipedia.org/wiki/JSDoc#JSDoc_in_use).
+
+### Examples
+
+#### Describing functions
+
+Use [`@param`](https://jsdoc.app/tags-param.html) and [`@returns`](https://jsdoc.app/tags-returns.html)
+to describe a function's types:
+
+```javascript
+/**
+ * Adds two numbers
+ * @param {number} a first number
+ * @param {number} b second number
+ * @returns {number} sum of two numbers
+ */
+function add(a, b) {
+ return a + b;
+}
+```
+
+##### Optional parameters
+
+Use square brackets `[]` around a parameter name to mark it as optional. A default value can be
+provided by using the `[name=value]` syntax:
+
+```javascript
+/**
+ * Increments a value
+ * @param {number} value
+ * @param {number} [step=1] optional step
+ * @returns {number} the incremented value
+ */
+function increment(value, step = 1) {
+  return value + step;
+}
+```
+
+##### Object parameters
+
+Functions that accept objects can be typed by using `object.field` notation in `@param` names:
+
+```javascript
+/**
+ * Creates a URL from a config object
+ * @param {object} config
+ * @param {string} config.path path
+ * @param {string} [config.anchor] anchor
+ * @returns {string}
+ */
+function createUrl(config) {
+  if (config.anchor) {
+    return config.path + '#' + config.anchor;
+  }
+  return config.path;
+}
+```
+
+#### Annotating types of variables that are not immediately assigned a value
+
+It's hard for tools and IDEs to infer the type of a variable that isn't immediately assigned a
+value. We can use the [`@type`](https://jsdoc.app/tags-type.html) notation to annotate such variables:
+
+```javascript
+/** @type {number} */
+let value;
+```
+
+Consult the [official JSDoc website](https://jsdoc.app/) for more syntax details.
+
+### Tips for using JSDoc
+
+#### Use lower-case names for basic types
+
+While both uppercase `Boolean` and lowercase `boolean` are acceptable, in most cases when we need a
+primitive or a plain object, the lowercase versions are the right choice: `boolean`, `number`,
+`string`, `symbol`, `object`.
+
+```javascript
+/**
+ * Translates `text`.
+ * @param {string} text - The text to be translated
+ * @returns {string} The translated text
+ */
+const gettext = (text) => locale.gettext(ensureSingleLine(text));
+```
+
+#### Use well-known types
+
+Well-known types, like `HTMLDivElement` or `Intl`, are available and can be used directly:
+
+```javascript
+/** @type {HTMLDivElement} */
+let element;
+```
+
+```javascript
+/**
+ * Creates an instance of Intl.DateTimeFormat for the current locale.
+ * @param {Intl.DateTimeFormatOptions} [formatOptions] - for available options, please see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DateTimeFormat
+ * @returns {Intl.DateTimeFormat}
+ */
+const createDateTimeFormat = (formatOptions) =>
+ Intl.DateTimeFormat(getPreferredLocales(), formatOptions);
+```
+
+#### Import existing type definitions via `import('path/to/module')`
+
+Here are examples of how to annotate the type of Vue Test Utils wrapper variables that are not
+immediately assigned:
+
+```javascript
+/** @type {import('helpers/vue_test_utils_helper').ExtendedWrapper} */
+let wrapper;
+// ...
+wrapper = mountExtended(/* ... */);
+```
+
+```javascript
+/** @type {import('@vue/test-utils').Wrapper} */
+let wrapper;
+// ...
+wrapper = shallowMount(/* ... */);
+```
+
+NOTE:
+`import()` is [not a native JSDoc construct](https://github.com/jsdoc/jsdoc/issues/1645), but it is
+recognized by many IDEs and tools. In this case we're aiming for better clarity in the code and
+improved Developer Experience with an IDE.
+
+#### JSDoc is limited
+
+As stated above, JSDoc has a limited vocabulary, so it may not describe a type fully.
+Sometimes, however, it's possible to reuse a third-party library's type definitions to make type
+inference work for our code. Here's an example of this approach:
+
+```diff
+- export const mountExtended = (...args) => extendedWrapper(mount(...args));
++ import { compose } from 'lodash/fp';
++ export const mountExtended = compose(extendedWrapper, mount);
+```
+
+Here we reuse the TypeScript type definitions of the `compose` function to add inferred types to
+the `mountExtended` function. The `mountExtended` arguments have the same types as the `mount`
+arguments, and the return type is the same as the `extendedWrapper` return type.
+
+We can still use JSDoc's syntax to add a description to the function, for example:
+
+```javascript
+/** Mounts a component and returns an extended wrapper for it */
+export const mountExtended = compose(extendedWrapper, mount);
+```
+
+## System requirements
+
+Some setup might be required for type definitions from the GitLab codebase and from third-party
+packages to be properly displayed in IDEs and tools.
+
+### Aliases
+
+Our codebase uses many aliases for imports. For example, `import Api from '~/api';` imports the
+`app/assets/javascripts/api.js` file. But IDEs might not know that alias, and thus might not know
+the type of `Api`. To fix that for most IDEs, we need to create a
+[`jsconfig.json`](https://code.visualstudio.com/docs/languages/jsconfig) file.
+
+There is a script in the GitLab project that can generate a `jsconfig.json` file based on the
+webpack configuration and current environment variables. To generate or update the `jsconfig.json`
+file, run this from the GitLab project root:
+
+```shell
+node scripts/frontend/create_jsconfig.js
+```
+
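The generated file maps webpack aliases to filesystem paths. A simplified sketch of what it might contain (the actual contents come from the script and vary by environment):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "~/*": ["app/assets/javascripts/*"]
    }
  }
}
```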
+`jsconfig.json` is in the `.gitignore` list, so creating or changing it does not cause Git changes
+in the GitLab project. This also means it is not tracked by Git, so it must be generated or updated
+manually.
+
+### 3rd party TypeScript definitions
+
+While more and more libraries use TypeScript for type definitions, some still might have JSDoc
+annotated types or no types at all. To cover that gap, the TypeScript community started the
+[DefinitelyTyped](https://github.com/DefinitelyTyped/DefinitelyTyped) initiative, which creates and
+maintains standalone type definitions for popular JavaScript libraries. We can use those definitions
+by either explicitly installing the type packages (`yarn add -D "@types/lodash"`) or by using a
+feature called [Automatic Type Acquisition (ATA)](https://www.typescriptlang.org/tsconfig#typeAcquisition),
+which is available in some language services
+(for example, [ATA in VS Code](https://github.com/microsoft/TypeScript/wiki/JavaScript-Language-Service-in-Visual-Studio#user-content--automatic-acquisition-of-type-definitions)).
+
+Automatic Type Acquisition (ATA) automatically fetches type definitions from the DefinitelyTyped
+list. For ATA to work, a globally installed `npm` might be required. IDEs may provide fallback
+configuration options to set the location of the `npm` executable. Consult your IDE's documentation
+for details.
+
+Because ATA is not guaranteed to work, and Lodash is a backbone for many of our utility functions,
+we have [DefinitelyTyped definitions for Lodash](https://www.npmjs.com/package/@types/lodash)
+explicitly added to our `devDependencies` in the `package.json`. This ensures that everyone gets
+type hints for `lodash`-based functions out of the box.
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 6c46780a5d7..6e0f0e8dbcf 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -507,15 +507,8 @@ Once the above MR has been merged, you should:
When a feature gate has been removed from the codebase, the feature
record still exists in each database that the flag was deployed to.
-The record can be deleted once the MR is deployed to each environment:
+The record can be deleted once the MR is deployed to all the environments:
```shell
-/chatops run feature delete some_feature --dev
-/chatops run feature delete some_feature --staging
-```
-
-Then, you can delete it from production after the MR is deployed to prod:
-
-```shell
-/chatops run feature delete some_feature
+/chatops run feature delete <feature-flag-name> --dev --ops --pre --staging --staging-ref --production
```
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 552a4ccc84b..c1a5963e97f 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -203,7 +203,7 @@ Only feature flags that have a YAML definition file can be used when running the
```shell
$ bin/feature-flag my_feature_flag
>> Specify the group introducing the feature flag, like `group::project management`:
-?> group::application performance
+?> group::cloud connector
>> URL of the MR introducing the feature flag (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
@@ -218,7 +218,7 @@ create config/feature_flags/development/my_feature_flag.yml
name: my_feature_flag
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/232533
-group: group::application performance
+group: group::cloud connector
type: development
default_enabled: false
```
@@ -625,7 +625,7 @@ A common pattern of testing both paths looks like:
```ruby
it 'ci_live_trace works' do
# tests assuming ci_live_trace is enabled in tests by default
- Feature.enabled?(:ci_live_trace) # => true
+ Feature.enabled?(:ci_live_trace) # => true
end
context 'when ci_live_trace is disabled' do
diff --git a/doc/development/gems.md b/doc/development/gems.md
index c9672483e8d..54d6e6dc30d 100644
--- a/doc/development/gems.md
+++ b/doc/development/gems.md
@@ -254,13 +254,12 @@ The project for a new Gem should always be created in [`gitlab-org/ruby/gems` na
1. Create a project in the [`gitlab-org/ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/) (or in a subgroup of it):
1. Follow the [instructions for new projects](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#creating-a-new-project).
1. Follow the instructions for setting up a [CI/CD configuration](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration).
- 1. Use the [shared CI/CD config](https://gitlab.com/gitlab-org/quality/pipeline-common/-/blob/master/ci/gem-release.yml)
+ 1. Use the [gem-release CI component](https://gitlab.com/gitlab-org/quality/pipeline-common/-/tree/master/gem-release)
to release and publish new gem versions by adding the following to their `.gitlab-ci.yml`:
```yaml
include:
- - project: 'gitlab-org/quality/pipeline-common'
- file: '/ci/gem-release.yml'
+ - component: gitlab.com/gitlab-org/quality/pipeline-common/gem-release@<REPLACE WITH LATEST TAG FROM https://gitlab.com/gitlab-org/quality/pipeline-common/-/releases>
```
This job will handle building and publishing the gem (it uses a `gitlab_rubygems` Rubygems.org
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index e6a853c107e..ed7fb6325d6 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -41,8 +41,8 @@ To read or write Git data, a request has to be made to Gitaly. This means that
if you're developing a new feature where you need data that's not yet available
in `lib/gitlab/git` changes have to be made to Gitaly.
-There should be no new code that touches Git repositories via disk access (for example,
-Rugged, `git`, `rm -rf`) anywhere in the `gitlab` repository. Anything that
+There should be no new code that touches Git repositories via disk access
+anywhere in the `gitlab` repository. Anything that
needs direct access to the Git repository *must* be implemented in Gitaly, and
exposed via an RPC.
@@ -64,45 +64,6 @@ rm -rf tmp/tests/gitaly
During RSpec tests, the Gitaly instance writes logs to `gitlab/log/gitaly-test.log`.
-## Legacy Rugged code
-
-While Gitaly can handle all Git access, many of GitLab customers still
-run Gitaly atop NFS. The legacy Rugged implementation for Git calls may
-be faster than the Gitaly RPC due to N+1 Gitaly calls and other
-reasons. See [the issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317) for more
-details.
-
-Until GitLab has eliminated most of these inefficiencies or the use of
-NFS is discontinued for Git data, Rugged implementations of some of the
-most commonly-used RPCs can be enabled via feature flags:
-
-- `rugged_find_commit`
-- `rugged_get_tree_entries`
-- `rugged_tree_entry`
-- `rugged_commit_is_ancestor`
-- `rugged_commit_tree_entry`
-- `rugged_list_commits_by_oid`
-
-A convenience Rake task can be used to enable or disable these flags
-all together. To enable:
-
-```shell
-bundle exec rake gitlab:features:enable_rugged
-```
-
-To disable:
-
-```shell
-bundle exec rake gitlab:features:disable_rugged
-```
-
-Most of this code exists in the `lib/gitlab/git/rugged_impl` directory.
-
-NOTE:
-You should *not* have to add or modify code related to Rugged unless explicitly discussed with the
-[Gitaly Team](https://gitlab.com/groups/gl-gitaly/group_members). This code does not work on GitLab.com or other GitLab
-instances that do not use NFS.
-
## `TooManyInvocationsError` errors
During development and testing, you may experience `Gitlab::GitalyClient::TooManyInvocationsError` failures.
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index 45554ae465d..9ce95cf7da1 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -34,21 +34,42 @@ The importer's codebase is broken up into the following directories:
## Architecture overview
-When a GitHub project is imported, we schedule and execute a job for the
-`RepositoryImportWorker` worker as all other importers. However, unlike other
-importers, we don't immediately perform the work necessary. Instead work is
-divided into separate stages, with each stage consisting out of a set of Sidekiq
-jobs that are executed. Between every stage a job is scheduled that periodically
-checks if all work of the current stage is completed, advancing the import
-process to the next stage when this is the case. The worker handling this is
-called `Gitlab::GithubImport::AdvanceStageWorker`.
+When a GitHub project is imported, work is divided into separate stages, with
+each stage consisting of a set of Sidekiq jobs that are executed. Between
+every stage a job is scheduled that periodically checks if all work of the
+current stage is completed, advancing the import process to the next stage when
+this is the case. The worker handling this is called
+`Gitlab::GithubImport::AdvanceStageWorker`.
+
+- An import is initiated via an API request to
+ [`POST /import/github`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/api/import_github.rb#L42)
+- The API endpoint calls [`Import::GitHubService`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/api/import_github.rb#L43).
+- Which calls
+ [`Gitlab::LegacyGithubImport::ProjectCreator`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/app/services/import/github_service.rb#L31-38)
+- Which calls
+ [`Projects::CreateService`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/gitlab/legacy_github_import/project_creator.rb#L30)
+- Which calls
+ [`@project.import_state.schedule`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/app/services/projects/create_service.rb#L325)
+- Which calls
+ [`project.add_import_job`](https://gitlab.com/gitlab-org/gitlab/-/blob/1d154fa0b9121566aebf3afe3d28808d025cc5af/app/models/project_import_state.rb#L43)
+- Which calls
+ [`RepositoryImportWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/1d154fa0b9121566aebf3afe3d28808d025cc5af/app/models/project.rb#L1105)
## Stages
### 1. RepositoryImportWorker
-This worker starts the import process by scheduling a job for the
-next worker.
+This worker calls
+[`Projects::ImportService.new.execute`](https://gitlab.com/gitlab-org/gitlab/-/blob/651e6a0139396ed6fa9ce73e27587ca88f9f4d96/app/workers/repository_import_worker.rb#L23-24),
+which calls
+[`importer.execute`](https://gitlab.com/gitlab-org/gitlab/-/blob/fcccaaac8d62191ad233cebeffc67111145b1ad7/app/services/projects/import_service.rb#L143).
+
+In this context, `importer` is an instance of
+[`Gitlab::ImportSources.importer(project.import_type)`](https://gitlab.com/gitlab-org/gitlab/-/blob/fcccaaac8d62191ad233cebeffc67111145b1ad7/app/services/projects/import_service.rb#L149),
+which for `github` import types maps to
+[`ParallelImporter`](https://gitlab.com/gitlab-org/gitlab/-/blob/651e6a0139396ed6fa9ce73e27587ca88f9f4d96/lib/gitlab/import_sources.rb#L13).
+
+`ParallelImporter` schedules a job for the next worker.
### 2. Stage::ImportRepositoryWorker
@@ -222,9 +243,8 @@ them to GitLab users. Other data such as issue pages and comments typically only
We handle the rate limit by doing the following:
-1. After we hit the rate limit, we either:
- - Automatically reschedule jobs in such a way that they are not executed until the rate limit has been reset.
- - Move onto another GitHub access token if multiple GitHub access tokens were passed to the API.
+1. After we hit the rate limit, we automatically reschedule jobs in such a way that they are not executed until the rate
+ limit has been reset.
1. We cache the mapping of GitHub users to GitLab users in Redis.
More information on user caching can be found below.
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index 68c2778eabe..1ce35b254f1 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -232,7 +232,7 @@ If strings are reused throughout a component, it can be useful to define these s
If we are reusing the same translated string in multiple components, it is tempting to add them to a `constants.js` file instead and import them across our components. However, there are multiple pitfalls to this approach:
- It creates distance between the HTML template and the copy, adding an additional level of complexity while navigating our codebase.
-- Copy strings are rarely, if ever, truly the same entity. The benefit of having a reusable variable is to have one easy place to go to update a value, but for copy it is quite common to have similar strings that aren't quite the same.
+- The benefit of having a reusable variable is to have one easy place to go to update a value, but for copy it is quite common to have similar strings that aren't quite the same.
Another practice to avoid when exporting copy strings is to import them in specs. While it might seem like a much more efficient test (if we change the copy, the test will still pass!) it creates additional problems:
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index cea59bae41b..f24ebacab18 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -140,7 +140,6 @@ are very appreciative of the work done by translators and proofreaders!
- Rıfat Ünalmış (Rifat Unalmis) - [GitLab](https://gitlab.com/runalmis), [Crowdin](https://crowdin.com/profile/runalmis)
- İsmail Arılık - [GitLab](https://gitlab.com/ismailarilik), [Crowdin](https://crowdin.com/profile/ismailarilik)
- Ukrainian
- - Volodymyr Sobotovych - [GitLab](https://gitlab.com/wheleph), [Crowdin](https://crowdin.com/profile/wheleph)
- Andrew Vityuk - [GitLab](https://gitlab.com/3_1_3_u), [Crowdin](https://crowdin.com/profile/andruwa13)
- Welsh
- Delyth Prys - [GitLab](https://gitlab.com/Delyth), [Crowdin](https://crowdin.com/profile/DelythPrys)
diff --git a/doc/development/img/runner_fleet_dashboard.png b/doc/development/img/runner_fleet_dashboard.png
new file mode 100644
index 00000000000..242ebf4aea9
--- /dev/null
+++ b/doc/development/img/runner_fleet_dashboard.png
Binary files differ
diff --git a/doc/development/index.md b/doc/development/index.md
index 71ab54c8a73..abc19645ecb 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -10,7 +10,7 @@ description: "Development Guidelines: learn how to contribute to GitLab."
Learn how to contribute to the development of the GitLab product.
-This content is intended for GitLab team members as well as members of the wider community.
+This content is intended for both GitLab team members and members of the wider community.
- [Contribute to GitLab development](contributing/index.md)
- [Contribute to GitLab Runner development](https://docs.gitlab.com/runner/development/)
diff --git a/doc/development/internal_analytics/index.md b/doc/development/internal_analytics/index.md
index 64b9c7af037..b0e47233777 100644
--- a/doc/development/internal_analytics/index.md
+++ b/doc/development/internal_analytics/index.md
@@ -14,6 +14,13 @@ when developing new features or instrumenting existing ones.
## Fundamental concepts
+<div class="video-fallback">
+ See the video about <a href="https://www.youtube.com/watch?v=GtFNXbjygWo">the concepts of events and metrics.</a>
+</div>
+<figure class="video_container">
+ <iframe src="https://www.youtube-nocookie.com/embed/GtFNXbjygWo" frameborder="0" allowfullscreen="true"> </iframe>
+</figure>
+
Events and metrics are the foundation of the internal analytics system.
Understanding the difference between the two concepts is vital to using the system.
@@ -50,9 +57,53 @@ such as the value of a setting or the count of rows in a database table.
- To instrument an event-based metric, see the [internal event tracking quick start guide](internal_event_instrumentation/quick_start.md).
- To instrument a metric that observes the GitLab instances state, see [the metrics instrumentation](metrics/metrics_instrumentation.md).
-## Data flow
+## Data availability
For GitLab there is an essential difference in analytics setup between SaaS and self-managed or GitLab Dedicated instances.
+On our SaaS instance, both individual events and pre-computed metrics are available for analysis.
+Additionally, for SaaS, page views are automatically instrumented.
+For self-managed instances, only the metrics instrumented in the version installed on the instance are available.
+
+## Data discovery
+
+The data visualization tools [Sisense](https://about.gitlab.com/handbook/business-technology/data-team/platform/sisensecdt/) and [Tableau](https://about.gitlab.com/handbook/business-technology/data-team/platform/tableau/),
+which have access to our Data Warehouse, can be used to query the internal analytics data.
+
+### Querying metrics
+
+The following example query returns all values reported within the last six months for a given metric (here `counts.users_visiting_dashboard_weekly`) and the corresponding `instance_id`:
+
+```sql
+SELECT
+ date_trunc('week', ping_created_at),
+ dim_instance_id,
+ metric_value
+FROM common.fct_ping_instance_metric_rolling_6_months --model limited to last 6 months for performance
+WHERE metrics_path = 'counts.users_visiting_dashboard_weekly' --set to metric of interest
+ORDER BY ping_created_at DESC
+```
+
+For a list of other metrics tables refer to the [Data Models Cheat Sheet](https://about.gitlab.com/handbook/product/product-analysis/data-model-cheat-sheet/#commonly-used-data-models).
+
+### Querying events
+
+The following example query returns the number of daily event occurrences for the `feature_used` event.
+
+```sql
+SELECT
+ behavior_date,
+  COUNT(*) AS event_occurrences
+FROM common_mart.mart_behavior_structured_event
+WHERE event_action = 'feature_used'
+AND event_category = 'InternalEventTracking'
+AND behavior_date > '2023-08-01' --restricted minimum date for performance
+GROUP BY 1 ORDER BY 1 DESC
+```
+
+For a list of other event tables refer to the [Data Models Cheat Sheet](https://about.gitlab.com/handbook/product/product-analysis/data-model-cheat-sheet/#commonly-used-data-models-2).
+
+## Data flow
+
On SaaS event records are directly sent to a collection system, called Snowplow, and imported into our data warehouse.
Self-managed and GitLab Dedicated instances record event counts locally. Every week, a process called Service Ping sends the current
values for all pre-defined and active metrics to our data warehouse. For GitLab.com, metrics are calculated directly in the data warehouse.
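To make the data flow distinction concrete, the self-managed path can be sketched as a local counter that is rolled up into a weekly payload. This is an illustration only: the class and field names below are hypothetical and do not reflect GitLab's actual implementation.

```python
# Hypothetical sketch of the self-managed Service Ping flow; not GitLab code.
from collections import Counter
from datetime import datetime, timezone


class InstanceMetrics:
    """Simulates a self-managed instance that counts events locally."""

    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.counts = Counter()

    def track_event(self, event_name):
        # Self-managed instances only increment local counters per event;
        # individual event records are not sent anywhere.
        self.counts[event_name] += 1

    def build_service_ping(self):
        # Once a week, the current values of all active metrics are
        # assembled into a single payload and sent to the data warehouse.
        return {
            "instance_id": self.instance_id,
            "ping_created_at": datetime.now(timezone.utc).isoformat(),
            "metrics": dict(self.counts),
        }


instance = InstanceMetrics("dim-instance-123")
for _ in range(3):
    instance.track_event("feature_used")
payload = instance.build_service_ping()
print(payload["metrics"])  # {'feature_used': 3}
```

On SaaS, by contrast, each `track_event` call would correspond to an individual event record sent to the collector, with no weekly rollup step needed.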
diff --git a/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md b/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
index d68e5565775..d9f45a2d93e 100644
--- a/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
+++ b/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
@@ -14,7 +14,7 @@ Internal events are using a tool called Snowplow under the hood. To develop and
| Snowplow Micro | Yes | Yes | Yes | No | No |
For local development you will have to either [setup a local event collector](#setup-local-event-collector) or [configure a remote event collector](#configure-a-remote-event-collector).
-We recommend the local setup when actively developing new events.
+We recommend using the local setup together with the [internal events monitor](#internal-events-monitor) when actively developing new events.
## Setup local event collector
@@ -68,6 +68,57 @@ You can configure your self-managed GitLab instance to use a custom Snowplow col
1. Select **Save changes**.
+## Internal Events Monitor
+
+<div class="video-fallback">
+ Watch the demo video about the <a href="https://www.youtube.com/watch?v=R7vT-VEzZOI">Internal Events Tracking Monitor</a>
+</div>
+<figure class="video_container">
+ <iframe src="https://www.youtube-nocookie.com/embed/R7vT-VEzZOI" frameborder="0" allowfullscreen="true"> </iframe>
+</figure>
+
+To understand how events are triggered and metrics are updated while you use the Rails app locally or in a `rails console`,
+you can use the monitor.
+
+Start the monitor, listing one or more events that you would like to watch. In this example, we monitor `i_code_review_user_create_mr`.
+
+```shell
+rails runner scripts/internal_events/monitor.rb i_code_review_user_create_mr
+```
+
+The monitor shows two tables. The top table lists all the metrics that are defined on the `i_code_review_user_create_mr` event.
+The second-to-last column shows the value of each metric when the monitor was started, and the rightmost column shows the current value of each metric.
+The bottom table lists selected properties of all Snowplow events that match the event name.
+
+If a new `i_code_review_user_create_mr` event is fired, the metric values are updated and a new event appears in the `SNOWPLOW EVENTS` table.
+
+The monitor output looks like this:
+
+```plaintext
+Updated at 2023-10-11 10:17:59 UTC
+Monitored events: i_code_review_user_create_mr
+
++--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| RELEVANT METRICS |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
+| Key Path | Monitored Events | Instrumentation Class | Initial Value | Current Value |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
+| counts_monthly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 13 | 14 |
+| counts_monthly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 13 | 14 |
+| counts_weekly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 0 | 1 |
+| counts_weekly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 0 | 1 |
+| redis_hll_counters.code_review.i_code_review_user_create_mr_monthly | i_code_review_user_create_mr | RedisHLLMetric | 8 | 9 |
+| redis_hll_counters.code_review.i_code_review_user_create_mr_weekly | i_code_review_user_create_mr | RedisHLLMetric | 0 | 1 |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
++---------------------------------------------------------------------------------------------------------+
+| SNOWPLOW EVENTS |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+| Event Name | Collector Timestamp | user_id | namespace_id | project_id | plan |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+| i_code_review_user_create_mr | 2023-10-11T10:17:15.504Z | 29 | 93 | | default |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+```
+
## Snowplow Analytics Debugger Chrome Extension
[Snowplow Analytics Debugger](https://chrome.google.com/webstore/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) is a browser extension for testing frontend events.
diff --git a/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md b/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
index 271cb5f98a6..15ad4266d1b 100644
--- a/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
+++ b/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
@@ -148,3 +148,27 @@ Sometimes we want to send internal events when the component is rendered or load
= render Pajamas::ButtonComponent.new(button_options: { data: { event_tracking_load: 'true', event_tracking: 'i_devops' } }) do
= _("New project")
```
+
+### Props
+
+Apart from `eventName`, the `trackEvent` method also supports `extra` and `context` props.
+
+- `extra`: Use this property to append supplementary information to the GitLab standard context.
+- `context`: Use this property to attach an additional context, if needed.
+
+The following example shows how to use the `extra` and `context` props with the `trackEvent` method:
+
+```javascript
+this.trackEvent('i_code_review_user_apply_suggestion', {
+ extra: {
+  projectId: 123,
+ },
+ context: {
+ schema: 'iglu:com.gitlab/design_management_context/jsonschema/1-0-0',
+ data: {
+ 'design-version-number': '1.0.0',
+ 'design-is-current-version': '1.0.1',
+ },
+ },
+});
+```
diff --git a/doc/development/internal_analytics/metrics/metrics_dictionary.md b/doc/development/internal_analytics/metrics/metrics_dictionary.md
index afdbd17c63b..6a3291eaba5 100644
--- a/doc/development/internal_analytics/metrics/metrics_dictionary.md
+++ b/doc/development/internal_analytics/metrics/metrics_dictionary.md
@@ -104,7 +104,7 @@ A metric's time frame is calculated based on the `time_frame` field and the `dat
We use the following categories to classify a metric:
- `operational`: Required data for operational purposes.
-- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../../administration/settings/usage_statistics.md#enable-or-disable-usage-statistics) in the Admin Area.
+- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) in the Admin Area.
- `subscription`: Data related to licensing.
- `standard`: Standard set of identifiers that are included when collecting data.
diff --git a/doc/development/internal_analytics/service_ping/index.md b/doc/development/internal_analytics/service_ping/index.md
index bae4e35149d..f010884272b 100644
--- a/doc/development/internal_analytics/service_ping/index.md
+++ b/doc/development/internal_analytics/service_ping/index.md
@@ -22,7 +22,7 @@ and sales teams understand how GitLab is used. The data helps to:
Service Ping information is not anonymous. It's linked to the instance's hostname, but does
not contain project names, usernames, or any other specific data.
-Service Ping is enabled by default. However, you can [disable](../../../administration/settings/usage_statistics.md#enable-or-disable-usage-statistics) it on any self-managed instance. When Service Ping is enabled, GitLab gathers data from the other instances and can show your instance's usage statistics to your users.
+Service Ping is enabled by default. However, you can [disable](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) certain metrics on any self-managed instance. When Service Ping is enabled, GitLab gathers data from the other instances and can show your instance's usage statistics to your users.
## Service Ping terminology
@@ -38,13 +38,8 @@ We use the following terminology to describe the Service Ping components:
### Limitations
-- Service Ping does not track frontend events things like page views, link clicks, or user sessions.
-- Service Ping focuses only on aggregated backend events.
-
-Because of these limitations we recommend you:
-
-- Instrument your products with Snowplow for more detailed analytics on GitLab.com.
-- Use Service Ping to track aggregated backend events on self-managed instances.
+- Service Ping delivers only [metrics](../index.md#metric), not individual events.
+- A metric must be present and instrumented in the codebase of a GitLab version to be delivered in Service Ping for that version.
## Service Ping request flow
@@ -358,14 +353,6 @@ The following is example content of the Service Ping payload.
}
```
-## Notable changes
-
-In GitLab 14.6, [`flavor`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/75587) was added to try to detect the underlying managed database variant.
-Possible values are "Amazon Aurora PostgreSQL", "PostgreSQL on Amazon RDS", "Cloud SQL for PostgreSQL",
-"Azure Database for PostgreSQL - Flexible Server", or "null".
-
-In GitLab 13.5, `pg_system_id` was added to send the [PostgreSQL system identifier](https://www.2ndquadrant.com/en/blog/support-for-postgresqls-system-identifier-in-barman/).
-
## Export Service Ping data
Rake tasks exist to export Service Ping data in different formats.
@@ -390,105 +377,7 @@ bin/rake gitlab:usage_data:dump_non_sql_in_json
bin/rake gitlab:usage_data:dump_sql_in_yaml > ~/Desktop/usage-metrics-2020-09-02.yaml
```
-## Generate Service Ping
-
-To generate Service Ping, use [Teleport](https://goteleport.com/docs/) or a detached screen session on a remote server.
-
-### Triggering
-
-#### Trigger Service Ping with Teleport
-
-1. Request temporary [access](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#how-to-use-teleport-to-connect-to-rails-console) to the required environment.
-1. After your approval is issued, [access the Rails console](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#access-approval).
-1. Run `GitlabServicePingWorker.new.perform('triggered_from_cron' => false)`.
-
-#### Trigger Service Ping with a detached screen session
-
-1. Connect to bastion with agent forwarding:
-
- ```shell
- ssh -A lb-bastion.gprd.gitlab.com
- ```
-
-1. Create named screen:
-
- ```shell
- screen -S <username>_usage_ping_<date>
- ```
-
-1. Connect to console host:
-
- ```shell
- ssh $USER-rails@console-01-sv-gprd.c.gitlab-production.internal
- ```
-
-1. Run:
-
- ```shell
- GitlabServicePingWorker.new.perform('triggered_from_cron' => false)
- ```
-
-1. To detach from screen, press `ctrl + A`, `ctrl + D`.
-1. Exit from bastion:
-
- ```shell
- exit
- ```
-
-1. Get the metrics duration from logs:
-
-Search in Google Console logs for `time_elapsed`. [Query example](https://cloudlogging.app.goo.gl/nWheZvD8D3nWazNe6).
-
-### Verification (After approx 30 hours)
-
-#### Verify with Teleport
-
-1. Follow [the steps](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#how-to-use-teleport-to-connect-to-rails-console) to request a new access to the required environment and connect to the Rails console
-1. Check the last payload in `raw_usage_data` table: `RawUsageData.last.payload`
-1. Check the when the payload was sent: `RawUsageData.last.sent_at`
-
-#### Verify using detached screen session
-
-1. Reconnect to bastion:
-
- ```shell
- ssh -A lb-bastion.gprd.gitlab.com
- ```
-
-1. Find your screen session:
-
- ```shell
- screen -ls
- ```
-
-1. Attach to your screen session:
-
- ```shell
- screen -x 14226.mwawrzyniak_usage_ping_2021_01_22
- ```
-
-1. Check the last payload in `raw_usage_data` table:
-
- ```shell
- RawUsageData.last.payload
- ```
-
-1. Check the when the payload was sent:
-
- ```shell
- RawUsageData.last.sent_at
- ```
-
-### Skip database write operations
-
-To skip database write operations, DevOps report creation, and storage of usage data payload, pass an optional argument:
-
-```shell
-skip_db_write:
-GitlabServicePingWorker.new.perform('triggered_from_cron' => false, 'skip_db_write' => true)
-```
-
-### Fallback values for Service Ping
+## Fallback values for Service Ping
We return fallback values in these cases:
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index f9b494b80c2..9b5bafaad8f 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -1215,7 +1215,7 @@ Example response:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/9388) in GitLab 11.10.
-The group SCIM API implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). As this API is for
+The group SCIM API partially implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). This API provides the `/groups/:group_path/Users` and `/groups/:group_path/Users/:id` endpoints. The base URL is `<http|https>://<GitLab host>/api/scim/v2`. Because this API is for
**system** use for SCIM provider integration, it is subject to change without notice.
To use this API, enable [Group SSO](../../user/group/saml_sso/index.md) for the group.
@@ -1452,7 +1452,7 @@ Returns an empty response with a `204` status code if successful.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378599) in GitLab 15.8.
-The Instance SCIM API implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). As this API is for
+The instance SCIM API partially implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). This API provides the `/application/Users` and `/application/Users/:id` endpoints. The base URL is `<http|https>://<GitLab host>/api/scim/v2`. Because this API is for
**system** use for SCIM provider integration, it is subject to change without notice.
To use this API, enable [SAML SSO](../../integration/saml.md) for the instance.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 29181dd1b9d..afb36519b8d 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -1563,3 +1563,23 @@ Any table which has some high read operation compared to current [high-traffic t
As a general rule, we discourage adding columns to high-traffic tables that are purely for
analytics or reporting of GitLab.com. This can have negative performance impacts for all
self-managed instances without providing direct feature value to them.
+
+## Milestone
+
+Beginning in GitLab 16.6, all new migrations must specify a milestone, using the following syntax:
+
+```ruby
+class AddFooToBar < Gitlab::Database::Migration[2.2]
+ milestone '16.6'
+
+ def change
+ # Your migration here
+ end
+end
+```
+
+Adding the correct milestone to a migration enables us to logically partition migrations into
+their corresponding GitLab minor versions. This:
+
+- Simplifies the upgrade process.
+- Alleviates potential migration ordering issues that arise when we rely solely on the migration's timestamp for ordering.
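Conceptually, milestone-aware ordering sorts migrations first by the minor version they belong to, and only then by timestamp. The following is a minimal sketch of that idea; the data shapes and function are hypothetical, not GitLab's actual migration scheduler:

```python
# Hypothetical sketch of milestone-aware migration ordering; GitLab's actual
# scheduler is more involved.

def migration_sort_key(migration):
    """Order by milestone (parsed as a version tuple) first, then timestamp."""
    major, minor = (int(part) for part in migration["milestone"].split("."))
    return (major, minor, migration["timestamp"])


migrations = [
    {"name": "add_foo_to_bar", "milestone": "16.6", "timestamp": 20231001000000},
    {"name": "add_baz", "milestone": "16.5", "timestamp": 20231015000000},
    {"name": "drop_qux", "milestone": "16.6", "timestamp": 20230920000000},
]

# Even though add_baz has a later timestamp than drop_qux, it runs first
# because it belongs to the earlier milestone.
ordered = sorted(migrations, key=migration_sort_key)
print([m["name"] for m in ordered])  # ['add_baz', 'drop_qux', 'add_foo_to_bar']
```

Note that parsing the milestone as a version tuple (rather than comparing strings) is what keeps, for example, `16.10` sorting after `16.9`.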
diff --git a/doc/development/permissions/custom_roles.md b/doc/development/permissions/custom_roles.md
index a060d7a740b..1630ea7b9ab 100644
--- a/doc/development/permissions/custom_roles.md
+++ b/doc/development/permissions/custom_roles.md
@@ -200,6 +200,10 @@ Examples of merge requests adding new abilities to custom roles:
You should make sure a new custom roles ability is under a feature flag.
+### Privilege escalation consideration
+
+A base role typically has permissions that allow creation or management of artifacts corresponding to the base role when interacting with that artifact. For example, when a `Developer` creates an access token for a project, it is created with `Developer` access encoded into that credential. It is important to keep in mind that as new custom permissions are created, there might be a risk of elevated privileges when interacting with GitLab artifacts, and appropriate safeguards or base role checks should be added.
+
### Consuming seats
If a new user with a role `Guest` is added to a member role that includes enablement of an ability that is **not** in the `CUSTOMIZABLE_PERMISSIONS_EXEMPT_FROM_CONSUMING_SEAT` array, a seat is consumed. We simply want to make sure we are charging Ultimate customers for guest users, who have "elevated" abilities. This only applies to billable users on SaaS (billable users that are counted towards namespace subscription). More details about this topic can be found in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390269).
diff --git a/doc/development/pipelines/index.md b/doc/development/pipelines/index.md
index 2266bdbe459..77f91300a57 100644
--- a/doc/development/pipelines/index.md
+++ b/doc/development/pipelines/index.md
@@ -610,15 +610,26 @@ Exceptions to this general guideline should be motivated and documented.
### Ruby versions testing
-We're running Ruby 3.0 on GitLab.com, as well as for merge requests and the default branch.
-To prepare for the next release, Ruby 3.1, we also run our test suite against Ruby 3.1 on
-a dedicated 2-hourly scheduled pipelines.
+We're running Ruby 3.0 on GitLab.com, as well as for the default branch.
+To prepare for the next Ruby version, we run merge requests in Ruby 3.1.
-For merge requests, you can add the `pipeline:run-in-ruby3_1` label to switch
-the Ruby version used for running the whole test suite to 3.1. When you do
-this, the test suite will no longer run in Ruby 3.0 (default), and an
-additional job `verify-ruby-3.0` will also run and always fail to remind us to
-remove the label and run in Ruby 3.0 before merging the merge request.
+This takes effect when
+[Run merge requests in Ruby 3.1 by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134290)
+is merged. See
+[Ruby 3.1 epic](https://gitlab.com/groups/gitlab-org/-/epics/10034)
+for the roadmap to fully make Ruby 3.1 the default.
+
+To make sure both Ruby versions are working, we also run our test suite
+against both Ruby 3.0 and Ruby 3.1 on dedicated 2-hourly scheduled pipelines.
+
+For merge requests, you can add the `pipeline:run-in-ruby3_0` label to switch
+the Ruby version to 3.0. When you do this, the test suite will no longer run
+in Ruby 3.1 (default for merge requests).
+
+When the pipeline is running in a Ruby version not considered default, an
+additional job `verify-default-ruby` will also run and always fail to remind
+us to remove the label and run in default Ruby before merging the merge
+request. At the moment, both Ruby 3.0 and Ruby 3.1 are considered default.
This should let us:
@@ -632,17 +643,17 @@ Our test suite runs against PostgreSQL 14 as GitLab.com runs on PostgreSQL 14 an
We do run our test suite against PostgreSQL 14 on nightly scheduled pipelines.
-We also run our test suite against PostgreSQL 12 and PostgreSQL 13 upon specific database library changes in merge requests and `main` pipelines (with the `rspec db-library-code pg12` and `rspec db-library-code pg13` jobs).
+We also run our test suite against PostgreSQL 13 upon specific database library changes in merge requests and `main` pipelines (with the `rspec db-library-code pg13` job).
#### Current versions testing
| Where? | PostgreSQL version | Ruby version |
|--------------------------------------------------------------------------------------------------|-------------------------------------------------|-----------------------|
-| Merge requests | 14 (default version), 13 for DB library changes | 3.0 (default version) |
+| Merge requests | 14 (default version), 13 for DB library changes | 3.1 |
| `master` branch commits | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `master` branch (every even-numbered hour) | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `ruby3_1` branch (every odd-numbered hour), see below. | 14 (default version), 13 for DB library changes | 3.1 |
-| `nightly` scheduled pipelines for the `master` branch | 14 (default version), 12, 13, 15 | 3.0 (default version) |
+| `nightly` scheduled pipelines for the `master` branch | 14 (default version), 13, 15 | 3.0 (default version) |
There are 2 pipeline schedules used for testing Ruby 3.1. One is triggering a
pipeline in `ruby3_1-sync` branch, which updates the `ruby3_1` branch with latest
diff --git a/doc/development/repository_storage_moves/index.md b/doc/development/repository_storage_moves/index.md
new file mode 100644
index 00000000000..578bc1eabee
--- /dev/null
+++ b/doc/development/repository_storage_moves/index.md
@@ -0,0 +1,102 @@
+---
+stage: Create
+group: Source Code
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Project Repository Storage Moves
+
+This document was created to help contributors understand the code design of
+[project repository storage moves](../../api/project_repository_storage_moves.md).
+Read this document before making changes to the code for this feature.
+
+This document is intentionally limited to an overview of how the code is
+designed, as code can change often. To understand how a specific part of the
+feature works, view the code and the specs. The details here explain how the
+major components of this feature work.
+
+NOTE:
+This document should be updated when parts of the codebase referenced in this
+document are updated, removed, or new parts are added.
+
+## Business logic
+
+- `Projects::RepositoryStorageMove`: Tracks the move, includes state machine.
+ - Defined in `app/models/projects/repository_storage_move.rb`.
+- `RepositoryStorageMovable`: Contains the state machine logic, validators, and some helper methods.
+ - Defined in `app/models/concerns/repository_storage_movable.rb`.
+- `Project`: The project model.
+ - Defined in `app/models/project.rb`.
+- `CanMoveRepositoryStorage`: Contains helper methods that are mixed into `Project`.
+ - Defined in `app/models/concerns/can_move_repository_storage.rb`.
+- `API::ProjectRepositoryStorageMoves`: API class for project repository storage moves.
+ - Defined in `lib/api/project_repository_storage_moves.rb`.
+- `Entities::Projects::RepositoryStorageMove`: API entity for serializing the `Projects::RepositoryStorageMove` model.
+ - Defined in `lib/api/entities/projects/repository_storage_moves.rb`.
+- `Projects::ScheduleBulkRepositoryShardMovesService`: Service to schedule bulk moves.
+ - Defined in `app/services/projects/schedule_bulk_repository_shard_moves_service.rb`.
+- `ScheduleBulkRepositoryShardMovesMethods`: Generic methods for bulk moves.
+ - Defined in `app/services/concerns/schedule_bulk_repository_shard_moves_methods.rb`.
+- `Projects::ScheduleBulkRepositoryShardMovesWorker`: Worker to handle bulk moves.
+ - Defined in `app/workers/projects/schedule_bulk_repository_shard_moves_worker.rb`.
+- `Projects::UpdateRepositoryStorageWorker`: Finds repository storage move and then calls the update storage service.
+ - Defined in `app/workers/projects/update_repository_storage_worker.rb`.
+- `UpdateRepositoryStorageWorker`: Module containing generic logic for `Projects::UpdateRepositoryStorageWorker`.
+ - Defined in `app/workers/concerns/update_repository_storage_worker.rb`.
+- `Projects::UpdateRepositoryStorageService`: Performs the move.
+ - Defined in `app/services/projects/update_repository_storage_service.rb`.
+- `UpdateRepositoryStorageMethods`: Module with generic methods included in `Projects::UpdateRepositoryStorageService`.
+ - Defined in `app/services/concerns/update_repository_storage_methods.rb`.
+- `Projects::UpdateService`: Schedules move if the passed parameters request a move.
+ - Defined in `app/services/projects/update_service.rb`.
+- `PoolRepository`: Ruby object representing Gitaly `ObjectPool`.
+ - Defined in `app/models/pool_repository.rb`.
+- `ObjectPool::CreateWorker`: Worker to create an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/create_worker.rb`.
+- `ObjectPool::JoinWorker`: Worker to join an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/join_worker.rb`.
+- `ObjectPool::ScheduleJoinWorker`: Worker to schedule an `ObjectPool::JoinWorker`.
+ - Defined in `app/workers/object_pool/schedule_join_worker.rb`.
+- `ObjectPool::DestroyWorker`: Worker to destroy an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/destroy_worker.rb`.
+- `ObjectPoolQueue`: Module to configure `ObjectPool` workers.
+ - Defined in `app/workers/concerns/object_pool_queue.rb`.
+- `Repositories::ReplicateService`: Handles replication of data from one repository to another.
+ - Defined in `app/services/repositories/replicate_service.rb`.
+
+## Flow
+
+These flowcharts should help explain the flow from the endpoints down to the
+models for different features.
+
+### Schedule a repository storage move via the API
+
+```mermaid
+graph TD
+ A[<code>POST /api/:version/project_repository_storage_moves</code>] --> C
+ B[<code>POST /api/:version/projects/:id/repository_storage_moves</code>] --> D
+ C[Schedule move for each project in shard] --> D[Set state to scheduled]
+ D --> E[<code>after_transition callback</code>]
+ E --> F{<code>set_repository_read_only!</code>}
+ F -->|success| H[Schedule repository update worker]
+ F -->|error| G[Set state to failed]
+```
+
+### Moving the storage after being scheduled
+
+```mermaid
+graph TD
+ A[Repository update worker scheduled] --> B{State is scheduled?}
+ B -->|Yes| C[Set state to started]
+ B -->|No| D[Return success]
+ C --> E{Same filesystem?}
+ E -.-> G[Set project repo to writable]
+ E -->|Yes| F["Mirror repositories (project, wiki, design, & pool)"]
+ G --> H[Update repo storage value]
+ H --> I[Set state to finished]
+ I --> J[Associate project with new pool repository]
+ J --> K[Unlink old pool repository]
+ K --> L[Update project repository storage values]
+ L --> N[Remove old paths if same filesystem]
+ N --> M[Set state to finished]
+```
diff --git a/doc/development/rubocop_development_guide.md b/doc/development/rubocop_development_guide.md
index 6568d025ca5..807544b71d4 100644
--- a/doc/development/rubocop_development_guide.md
+++ b/doc/development/rubocop_development_guide.md
@@ -28,15 +28,51 @@ discussions, nitpicking, or back-and-forth in reviews. The
[GitLab Ruby style guide](backend/ruby_style_guide.md) includes a non-exhaustive
list of styles that commonly come up in reviews and are not enforced.
-By default, we should not
-[disable a RuboCop rule inline](https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code), because it negates agreed-upon code standards that the rule is attempting to apply to the codebase.
-
-If you must use inline disable, provide the reason on the MR and ensure the reviewers agree
-before merging.
-
Additionally, we have dedicated
[test-specific style guides and best practices](testing_guide/index.md).
+## Disabling rules inline
+
+By default, RuboCop rules should not be
+[disabled inline](https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code),
+because it negates agreed-upon code standards that the rule is attempting to
+apply to the codebase.
+
+If you must use an inline disable, provide the reason as a code comment on
+the same line where the rule is disabled.
+
+More context can go into code comments above this inline disable comment. To
+reduce verbose code comments, link to a resource (issue, epic, and so on) that
+provides detailed context.
+
+For example:
+
+```ruby
+# bad
+module Types
+ module Domain
+ # rubocop:disable Graphql/AuthorizeTypes
+ class SomeType < BaseObject
+ object.public_send(action) # rubocop:disable GitlabSecurity/PublicSend
+ end
+ # rubocop:enable Graphql/AuthorizeTypes
+ end
+end
+
+# good
+module Types
+ module Domain
+ # rubocop:disable Graphql/AuthorizeTypes -- already authorized in parent entity
+ class SomeType < BaseObject
+ # At this point `action` is safe to be used in `public_send`.
+ # See https://gitlab.com/gitlab-org/gitlab/-/issues/123457890.
+ object.public_send(action) # rubocop:disable GitlabSecurity/PublicSend -- User input verified
+ end
+ # rubocop:enable Graphql/AuthorizeTypes
+ end
+end
+```
+
## Creating new RuboCop cops
Typically it is better for the linting rules to be enforced programmatically as it
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
index 52f0f72e72a..61bc629e8c8 100644
--- a/doc/development/ruby_upgrade.md
+++ b/doc/development/ruby_upgrade.md
@@ -84,6 +84,8 @@ order reversed as described above.
Tracking this work in an epic is useful to get a sense of progress. For larger upgrades, include a
timeline in the epic description so stakeholders know when the final switch is expected to go live.
+Include the designated [performance testing template](https://gitlab.com/gitlab-org/quality/performance-testing/ruby-rollout-performance-testing)
+to help ensure performance standards are maintained during the upgrade.
Break changes to individual repositories into separate issues under this epic.
@@ -141,14 +143,13 @@ A [build matrix definition](../ci/yaml/index.md#parallelmatrix) can do this effi
#### Decide which repositories to update
-When upgrading Ruby, consider updating the following repositories:
+When upgrading Ruby, consider updating the repositories in the [`ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/) as well.
+For reference, here is a list of merge requests that have updated Ruby for some of these projects in the past:
-- [Gitaly](https://gitlab.com/gitlab-org/gitaly) ([example](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3771))
- [GitLab LabKit](https://gitlab.com/gitlab-org/labkit-ruby) ([example](https://gitlab.com/gitlab-org/labkit-ruby/-/merge_requests/79))
- [GitLab Exporter](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter) ([example](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/-/merge_requests/150))
- [GitLab Experiment](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment) ([example](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment/-/merge_requests/128))
- [Gollum Lib](https://gitlab.com/gitlab-org/gollum-lib) ([example](https://gitlab.com/gitlab-org/gollum-lib/-/merge_requests/21))
-- [GitLab Helm Chart](https://gitlab.com/gitlab-org/charts/gitlab) ([example](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2162))
- [GitLab Sidekiq fetcher](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch) ([example](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/-/merge_requests/33))
- [Prometheus Ruby Mmap Client](https://gitlab.com/gitlab-org/prometheus-client-mmap) ([example](https://gitlab.com/gitlab-org/prometheus-client-mmap/-/merge_requests/59))
- [GitLab-mail_room](https://gitlab.com/gitlab-org/gitlab-mail_room) ([example](https://gitlab.com/gitlab-org/gitlab-mail_room/-/merge_requests/16))
@@ -213,8 +214,6 @@ the new Ruby to be the new default.
The last step is to use the new Ruby in production. This
requires updating Omnibus and production Docker images to use the new version.
-Helm charts may also have to be updated if there were changes to related systems that maintain
-their own charts (such as `gitlab-exporter`.)
To use the new Ruby in production, update the following projects:
@@ -222,6 +221,11 @@ To use the new Ruby in production, update the following projects:
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) ([example](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5545))
- [Self-compiled installations](../install/installation.md): update the [Ruby system version check](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/system_check/app/ruby_version_check.rb)
+Charts like the [GitLab Helm Chart](https://gitlab.com/gitlab-org/charts/gitlab) should also be updated if
+they use Ruby in some capacity, for example
+to run tests (see [this example](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2162)), though
+this may not strictly be necessary.
+
If you submit a change management request, coordinate the rollout with infrastructure
engineers. When dealing with larger upgrades, involve [Release Managers](https://about.gitlab.com/community/release-managers/)
in the rollout plan.
diff --git a/doc/development/runner_fleet_dashboard.md b/doc/development/runner_fleet_dashboard.md
new file mode 100644
index 00000000000..2a7c7d05453
--- /dev/null
+++ b/doc/development/runner_fleet_dashboard.md
@@ -0,0 +1,245 @@
+---
+stage: Verify
+group: Runner
+info: >-
+ To determine the technical writer assigned to the Stage/Group associated with
+ this page, see
+ https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+# Runner Fleet Dashboard **(ULTIMATE BETA)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/424495) in GitLab 16.6 behind several [feature flags](#enable-feature-flags).
+
+This feature is in [BETA](../policy/experiment-beta-support.md).
+To join the list of users testing this feature, contact us in
+[epic 11180](https://gitlab.com/groups/gitlab-org/-/epics/11180).
+
+GitLab administrators can use the Runner Fleet Dashboard to assess the health of their instance runners.
+The Runner Fleet Dashboard shows:
+
+- Recent CI errors caused by runner infrastructure.
+- Number of concurrent jobs executed on the busiest runners.
+- Histogram of job queue times (available only with ClickHouse).
+
+There is a proposal to introduce [more features](#whats-next) to the Runner Fleet Dashboard.
+
+![Runner Fleet Dashboard](img/runner_fleet_dashboard.png)
+
+## View the Runner Fleet Dashboard
+
+Prerequisites:
+
+- You must be an administrator.
+
+To view the runner fleet dashboard:
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Runners**.
+1. Select **Fleet dashboard**.
+
+Most of the dashboard works without any additional actions, with the
+exception of the **Wait time to pick a job** chart and [proposed features](#whats-next).
+These features require additional infrastructure, described on this page.
+
+To test the Runner Fleet Dashboard and gather feedback, we have launched an early adopters program
+for some customers to try this feature.
+
+## Requirements
+
+To test the Runner Fleet Dashboard as part of the early adopters program, you must:
+
+- Run GitLab 16.6 or above.
+- Have an [Ultimate license](https://about.gitlab.com/pricing/).
+- Be able to run a ClickHouse database. We recommend using [ClickHouse Cloud](https://clickhouse.cloud/).
+
+## Setup
+
+To set up ClickHouse as the GitLab data storage:
+
+1. [Run ClickHouse Cluster and configure database](#run-and-configure-clickhouse).
+1. [Configure the GitLab connection to ClickHouse](#configure-the-gitlab-connection-to-clickhouse).
+1. [Enable the feature flags](#enable-feature-flags).
+
+### Run and configure ClickHouse
+
+The most straightforward way to run ClickHouse is with [ClickHouse Cloud](https://clickhouse.cloud/).
+You can also [run ClickHouse on your own server](https://clickhouse.com/docs/en/install). Refer to the ClickHouse
+documentation regarding [recommendations for self-managed instances](https://clickhouse.com/docs/en/install#recommendations-for-self-managed-clickhouse).
+
+When you run ClickHouse on a hosted server, various factors might affect resource consumption, such as the number
+of builds that run on your instance each month, the selected hardware, the data center chosen to host ClickHouse, and more.
+Regardless, the cost should not be significant.
+
+NOTE:
+ClickHouse is a secondary data store for GitLab. All your data is still stored in PostgreSQL,
+and only duplicated in ClickHouse for analytics purposes.
+
+To create the necessary user and database objects:
+
+1. Generate a secure password and save it.
+1. Sign in to the ClickHouse SQL console.
+1. Execute the following commands. Replace `PASSWORD_HERE` with the generated password.
+
+ ```sql
+ CREATE DATABASE gitlab_clickhouse_main_production;
+ CREATE USER gitlab IDENTIFIED WITH sha256_password BY 'PASSWORD_HERE';
+ CREATE ROLE gitlab_app;
+ GRANT SELECT, INSERT, ALTER, CREATE, UPDATE, DROP, TRUNCATE, OPTIMIZE ON gitlab_clickhouse_main_production.* TO gitlab_app;
+ GRANT gitlab_app TO gitlab;
+ ```
+
+1. Connect to the `gitlab_clickhouse_main_production` database (or switch to it in the ClickHouse Cloud UI).
+
+1. To create the required database objects, execute:
+
+ ```sql
+ CREATE TABLE ci_finished_builds
+ (
+ id UInt64 DEFAULT 0,
+ project_id UInt64 DEFAULT 0,
+ pipeline_id UInt64 DEFAULT 0,
+ status LowCardinality(String) DEFAULT '',
+ created_at DateTime64(6, 'UTC') DEFAULT now(),
+ queued_at DateTime64(6, 'UTC') DEFAULT now(),
+ finished_at DateTime64(6, 'UTC') DEFAULT now(),
+ started_at DateTime64(6, 'UTC') DEFAULT now(),
+ runner_id UInt64 DEFAULT 0,
+ runner_manager_system_xid String DEFAULT '',
+ runner_run_untagged Boolean DEFAULT FALSE,
+ runner_type UInt8 DEFAULT 0,
+ runner_manager_version LowCardinality(String) DEFAULT '',
+ runner_manager_revision LowCardinality(String) DEFAULT '',
+ runner_manager_platform LowCardinality(String) DEFAULT '',
+ runner_manager_architecture LowCardinality(String) DEFAULT '',
+ duration Int64 MATERIALIZED age('ms', started_at, finished_at),
+ queueing_duration Int64 MATERIALIZED age('ms', queued_at, started_at)
+ )
+ ENGINE = ReplacingMergeTree
+ ORDER BY (status, runner_type, project_id, finished_at, id)
+ PARTITION BY toYear(finished_at);
+
+ CREATE TABLE ci_finished_builds_aggregated_queueing_delay_percentiles
+ (
+ status LowCardinality(String) DEFAULT '',
+ runner_type UInt8 DEFAULT 0,
+ started_at_bucket DateTime64(6, 'UTC') DEFAULT now(),
+
+ count_builds AggregateFunction(count),
+ queueing_duration_quantile AggregateFunction(quantile, Int64)
+ )
+ ENGINE = AggregatingMergeTree()
+ ORDER BY (started_at_bucket, status, runner_type);
+
+ CREATE MATERIALIZED VIEW ci_finished_builds_aggregated_queueing_delay_percentiles_mv
+ TO ci_finished_builds_aggregated_queueing_delay_percentiles
+ AS
+ SELECT
+ status,
+ runner_type,
+ toStartOfInterval(started_at, INTERVAL 5 minute) AS started_at_bucket,
+
+ countState(*) as count_builds,
+ quantileState(queueing_duration) AS queueing_duration_quantile
+ FROM ci_finished_builds
+ GROUP BY status, runner_type, started_at_bucket;
+ ```
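To read percentiles back out of the aggregated table, the partial aggregation states must be finalized with the corresponding `-Merge` combinators. For example, a query along these lines (a sketch for exploration, not part of the required setup) returns the median queueing duration per 5-minute bucket:

```sql
SELECT
    started_at_bucket,
    countMerge(count_builds) AS builds,
    quantileMerge(queueing_duration_quantile) AS median_queueing_ms
FROM ci_finished_builds_aggregated_queueing_delay_percentiles
GROUP BY started_at_bucket
ORDER BY started_at_bucket DESC
LIMIT 12;
```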
+
+### Configure the GitLab connection to ClickHouse
+
+::Tabs
+
+:::TabTitle Linux package
+
+To provide GitLab with ClickHouse credentials:
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ gitlab_rails['clickhouse_databases']['main']['database'] = 'gitlab_clickhouse_main_production'
+ gitlab_rails['clickhouse_databases']['main']['url'] = 'https://example.com/path'
+ gitlab_rails['clickhouse_databases']['main']['username'] = 'gitlab'
+ gitlab_rails['clickhouse_databases']['main']['password'] = 'PASSWORD_HERE' # replace with the actual password
+ ```
+
+1. Save the file and reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+:::TabTitle Helm chart (Kubernetes)
+
+1. Save the ClickHouse password as a Kubernetes Secret:
+
+ ```shell
+ kubectl create secret generic gitlab-clickhouse-password --from-literal="main_password=PASSWORD_HERE"
+ ```
+
+1. Export the Helm values:
+
+ ```shell
+ helm get values gitlab > gitlab_values.yaml
+ ```
+
+1. Edit `gitlab_values.yaml`:
+
+ ```yaml
+ global:
+ clickhouse:
+ enabled: true
+ main:
+ username: default
+ password:
+ secret: gitlab-clickhouse-password
+ key: main_password
+ database: gitlab_clickhouse_main_production
+ url: 'http://example.com'
+ ```
+
+1. Save the file and apply the new values:
+
+ ```shell
+ helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
+ ```
+
+::EndTabs
+
+To verify that your connection is set up successfully:
+
+1. Log in to [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session)
+1. Execute the following:
+
+ ```ruby
+ ClickHouse::Client.select('SELECT 1', :main)
+ ```
+
+ If successful, the command returns `[{"1"=>1}]`.
+
+### Enable feature flags
+
+Features that use ClickHouse are currently under development and are disabled by feature flags.
+
+To enable these features, [enable](../administration/feature_flags.md#how-to-enable-and-disable-features-behind-flags)
+the following feature flags:
+
+| Feature flag name | Purpose |
+|------------------------------------|---------------------------------------------------------------------------|
+| `ci_data_ingestion_to_click_house` | Enables synchronization of new finished CI builds to the ClickHouse database. |
+| `clickhouse_ci_analytics` | Enables the **Wait time to pick a job** chart. |
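For example, on a self-managed instance, both flags can be enabled from a [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session) session using the standard `Feature` API (shown here as a sketch; the linked feature flag documentation covers other methods):

```ruby
Feature.enable(:ci_data_ingestion_to_click_house)
Feature.enable(:clickhouse_ci_analytics)

# Confirm the flags are enabled:
Feature.enabled?(:ci_data_ingestion_to_click_house)
Feature.enabled?(:clickhouse_ci_analytics)
```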
+
+## What's next
+
+Support for usage and cost analysis is proposed in
+[epic 11183](https://gitlab.com/groups/gitlab-org/-/epics/11183).
+
+## Feedback
+
+To help us improve the Runner Fleet Dashboard, you can provide feedback in
+[issue 421737](https://gitlab.com/gitlab-org/gitlab/-/issues/421737).
+In particular:
+
+- How easy or difficult it was to set up GitLab to make the dashboard work.
+- How useful you found the dashboard.
+- What other information you would like to see on that dashboard.
+- Any other related thoughts and ideas.
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index 12f90e0d88c..4a3aec97d29 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -127,7 +127,7 @@ Assign `product_group` metadata and specify what product group this test belongs
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
end
end
@@ -142,7 +142,7 @@ writing end-to-end tests is to write test case descriptions as `it` blocks:
```ruby
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
end
@@ -166,7 +166,7 @@ Begin by logging in.
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
Flow::Login.sign_in
@@ -189,7 +189,7 @@ should answer the question "What do we test?"
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
Flow::Login.sign_in
@@ -236,7 +236,7 @@ a call to `sign_in`.
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
before do
Flow::Login.sign_in
end
diff --git a/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md b/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
index 7bac76c88e8..025f998c0c9 100644
--- a/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
+++ b/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
@@ -35,44 +35,6 @@ Given the view:
| ------ | ----- |
| ![before](img/gl-capybara_V13_12.png) | ![after](img/gl-chemlab_V13_12.png) |
-<!--
-```ruby
-# frozen_string_literal: true
-
-module QA
- module Page
- class Form < Page::Base
- view '_form.html' do
- element :first_name
- element :last_name
- element :company_name
- element :user_name
- element :password
- element :continue
- end
- end
- end
-end
-```
-```ruby
-# frozen_string_literal: true
-
-module QA
- module Page
- class Form < Chemlab::Page
- text_field :first_name
- text_field :last_name
- text_field :company_name
- text_field :user_name
- text_field :password
-
- button :continue
- end
- end
-end
-```
--->
-
## Key Differences
### Page Library Design vs Page Object Design
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index 343d03b9d68..83b87d6d289 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -206,7 +206,7 @@ Refer to [`strong_memoize.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/maste
# good
def expensive_method(arg)
- strong_memoize_with(:expensive_method, arg)
+ strong_memoize_with(:expensive_method, arg) do
# ...
end
end
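For context, argument-aware memoization of this kind can be sketched in plain Ruby (a simplified stand-in for illustration only, not GitLab's `StrongMemoize` implementation):

```ruby
# Simplified stand-in for strong_memoize_with: cache a block's result
# per (name, arguments) key, so falsy results are also memoized.
module TinyMemoize
  def strong_memoize_with(name, *args)
    @memo ||= {}
    key = [name, args]
    # Use key? so memoized nil/false values are not recomputed.
    @memo.key?(key) ? @memo[key] : (@memo[key] = yield)
  end
end

class Pricing
  include TinyMemoize

  attr_reader :calls

  def initialize
    @calls = 0
  end

  def expensive_method(arg)
    strong_memoize_with(:expensive_method, arg) do
      @calls += 1  # counts how often the expensive work actually runs
      arg * 2
    end
  end
end
```

Calling `expensive_method` twice with the same argument runs the block once; a different argument gets its own cache entry.
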
diff --git a/doc/development/wikis.md b/doc/development/wikis.md
index a814fa76ec9..eca43f6df03 100644
--- a/doc/development/wikis.md
+++ b/doc/development/wikis.md
@@ -28,9 +28,6 @@ Some notable gems that are used for wikis are:
| Component | Description | Gem name | GitLab project | Upstream project |
|:--------------|:-----------------------------------------------|:-------------------------------|:--------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| `gitlab` | Markup renderer, depends on various other gems | `gitlab-markup` | [`gitlab-org/gitlab-markup`](https://gitlab.com/gitlab-org/gitlab-markup) | [`github/markup`](https://github.com/github/markup) |
-| `gollum-lib` | Main Gollum library | `gitlab-gollum-lib` | [`gitlab-org/gollum-lib`](https://gitlab.com/gitlab-org/gollum-lib) | [`gollum/gollum-lib`](https://github.com/gollum/gollum-lib) |
-| | Gollum Git adapter for Rugged | `gitlab-gollum-rugged_adapter` | [`gitlab-org/gitlab-gollum-rugged_adapter`](https://gitlab.com/gitlab-org/gitlab-gollum-rugged_adapter) | [`gollum/rugged_adapter`](https://github.com/gollum/rugged_adapter) |
-| | Rugged (also used in Gitaly itself) | `rugged` | - | [`libgit2/rugged`](https://github.com/libgit2/rugged) |
### Notes on Gollum