
gitlab.com/gitlab-org/gitlab-foss.git
Diffstat (limited to 'doc/development')
 doc/development/activitypub/actor.md (renamed from doc/development/snowplow/index.md) | 10
 doc/development/activitypub/actors/group.md | 205
 doc/development/activitypub/actors/index.md | 148
 doc/development/activitypub/actors/project.md | 640
 doc/development/activitypub/actors/releases.md | 85
 doc/development/activitypub/actors/topic.md | 91
 doc/development/activitypub/actors/user.md | 47
 doc/development/activitypub/index.md | 216
 doc/development/adding_service_component.md | 2
 doc/development/ai_architecture.md | 23
 doc/development/ai_features.md | 662
 doc/development/ai_features/duo_chat.md | 139
 doc/development/ai_features/index.md | 577
 doc/development/ai_features/prompts.md | 28
 doc/development/api_graphql_styleguide.md | 11
 doc/development/audit_event_guide/index.md | 24
 doc/development/avoiding_required_stops.md | 12
 doc/development/build_test_package.md | 19
 doc/development/cascading_settings.md | 2
 doc/development/cloud_connector/code_suggestions_for_sm.md | 259
 doc/development/cloud_connector/img/code_suggestions_components.png | Bin 0 -> 44296 bytes
 doc/development/code_review.md | 6
 doc/development/code_suggestions/index.md | 56
 doc/development/contributing/index.md | 2
 doc/development/database/avoiding_downtime_in_migrations.md | 3
 doc/development/database/batched_background_migrations.md | 3
 doc/development/database/clickhouse/clickhouse_within_gitlab.md | 237
 doc/development/database/foreign_keys.md | 2
 doc/development/database/index.md | 2
 doc/development/database/multiple_databases.md | 116
 doc/development/database/not_null_constraints.md | 2
 doc/development/database/poc_tree_iterator.md | 475
 doc/development/database_review.md | 16
 doc/development/development_seed_files.md | 26
 doc/development/documentation/alpha_beta.md | 52
 doc/development/documentation/experiment_beta.md | 49
 doc/development/documentation/help.md | 3
 doc/development/documentation/restful_api_styleguide.md | 8
 doc/development/documentation/review_apps.md | 1
 doc/development/documentation/site_architecture/automation.md | 77
 doc/development/documentation/site_architecture/deployment_process.md | 42
 doc/development/documentation/styleguide/index.md | 102
 doc/development/documentation/styleguide/word_list.md | 129
 doc/development/documentation/testing.md | 16
 doc/development/documentation/topic_types/task.md | 2
 doc/development/ee_features.md | 6
 doc/development/event_store.md | 15
 doc/development/experiment_guide/implementing_experiments.md | 2
 doc/development/fe_guide/accessibility.md | 39
 doc/development/fe_guide/architecture.md | 18
 doc/development/fe_guide/axios.md | 4
 doc/development/fe_guide/customizable_dashboards.md | 11
 doc/development/fe_guide/dark_mode.md | 2
 doc/development/fe_guide/design_anti_patterns.md | 220
 doc/development/fe_guide/design_patterns.md | 220
 doc/development/fe_guide/development_process.md | 125
 doc/development/fe_guide/getting_started.md | 54
 doc/development/fe_guide/guides.md | 13
 doc/development/fe_guide/img/boards_diagram.png | Bin 9518 -> 0 bytes
 doc/development/fe_guide/index.md | 125
 doc/development/fe_guide/principles.md | 21
 doc/development/fe_guide/sentry.md | 34
 doc/development/fe_guide/tech_stack.md | 11
 doc/development/fe_guide/tips_and_tricks.md | 31
 doc/development/feature_development.md | 6
 doc/development/feature_flags/index.md | 9
 doc/development/features_inside_dot_gitlab.md | 2
 doc/development/file_storage.md | 2
 doc/development/fips_compliance.md | 2
 doc/development/gems.md | 9
 doc/development/git_object_deduplication.md | 4
 doc/development/github_importer.md | 11
 doc/development/go_guide/index.md | 61
 doc/development/gotchas.md | 58
 doc/development/i18n/externalization.md | 4
 doc/development/i18n/proofreader.md | 2
 doc/development/img/build_package_v12_6.png | Bin 39482 -> 0 bytes
 doc/development/img/trigger_build_package_v12_6.png | Bin 44149 -> 0 bytes
 doc/development/img/trigger_omnibus_v16_3.png | Bin 0 -> 34918 bytes
 doc/development/img/triggered_ee_pipeline_v16_3.png | Bin 0 -> 47309 bytes
 doc/development/integrations/index.md | 75
 doc/development/integrations/jenkins.md | 6
 doc/development/integrations/secure.md | 2
 doc/development/internal_analytics/index.md | 2
 doc/development/internal_analytics/internal_event_tracking/architecture.md | 2
 doc/development/internal_analytics/internal_event_tracking/event_definition_guide.md | 2
 doc/development/internal_analytics/internal_event_tracking/index.md | 2
 doc/development/internal_analytics/internal_event_tracking/introduction.md | 2
 doc/development/internal_analytics/internal_event_tracking/migration.md | 155
 doc/development/internal_analytics/internal_event_tracking/quick_start.md | 46
 doc/development/internal_analytics/service_ping/implement.md | 2
 doc/development/internal_analytics/service_ping/index.md | 2
 doc/development/internal_analytics/service_ping/metrics_dictionary.md | 6
 doc/development/internal_analytics/service_ping/metrics_instrumentation.md | 2
 doc/development/internal_analytics/service_ping/metrics_lifecycle.md | 2
 doc/development/internal_analytics/service_ping/performance_indicator_metrics.md | 2
 doc/development/internal_analytics/service_ping/review_guidelines.md | 2
 doc/development/internal_analytics/service_ping/troubleshooting.md | 6
 doc/development/internal_analytics/service_ping/usage_data.md | 2
 doc/development/internal_analytics/snowplow/event_dictionary_guide.md | 2
 doc/development/internal_analytics/snowplow/implementation.md | 2
 doc/development/internal_analytics/snowplow/index.md | 4
 doc/development/internal_analytics/snowplow/infrastructure.md | 2
 doc/development/internal_analytics/snowplow/review_guidelines.md | 2
 doc/development/internal_analytics/snowplow/schemas.md | 2
 doc/development/internal_analytics/snowplow/troubleshooting.md | 2
 doc/development/internal_api/index.md | 79
 doc/development/internal_users.md | 2
 doc/development/merge_request_concepts/performance.md | 10
 doc/development/migration_style_guide.md | 42
 doc/development/packages/debian_repository.md | 12
 doc/development/performance.md | 2
 doc/development/permissions.md | 2
 doc/development/permissions/authorizations.md | 2
 doc/development/permissions/custom_roles.md | 2
 doc/development/permissions/predefined_roles.md | 16
 doc/development/pipelines/index.md | 4
 doc/development/pipelines/internals.md | 29
 doc/development/pipelines/performance.md | 2
 doc/development/policies.md | 2
 doc/development/rails_update.md | 6
 doc/development/rake_tasks.md | 17
 doc/development/redis.md | 8
 doc/development/rubocop_development_guide.md | 10
 doc/development/ruby_upgrade.md | 2
 doc/development/sec/token_revocation_api.md | 4
 doc/development/secure_coding_guidelines.md | 2
 doc/development/service_ping/implement.md | 11
 doc/development/service_ping/index.md | 11
 doc/development/service_ping/metrics_dictionary.md | 11
 doc/development/service_ping/metrics_instrumentation.md | 11
 doc/development/service_ping/metrics_lifecycle.md | 11
 doc/development/service_ping/performance_indicator_metrics.md | 11
 doc/development/service_ping/review_guidelines.md | 11
 doc/development/service_ping/troubleshooting.md | 11
 doc/development/service_ping/usage_data.md | 11
 doc/development/shell_commands.md | 2
 doc/development/snowplow/event_dictionary_guide.md | 11
 doc/development/snowplow/implementation.md | 11
 doc/development/snowplow/infrastructure.md | 11
 doc/development/snowplow/review_guidelines.md | 11
 doc/development/snowplow/schemas.md | 11
 doc/development/snowplow/troubleshooting.md | 11
 doc/development/testing_guide/best_practices.md | 7
 doc/development/testing_guide/end_to_end/beginners_guide.md | 9
 doc/development/testing_guide/end_to_end/best_practices.md | 10
 doc/development/testing_guide/end_to_end/index.md | 8
 doc/development/testing_guide/end_to_end/resources.md | 76
 doc/development/testing_guide/flaky_tests.md | 18
 doc/development/testing_guide/frontend_testing.md | 4
 doc/development/testing_guide/review_apps.md | 12
 doc/development/value_stream_analytics.md | 34
 doc/development/work_items.md | 174
 153 files changed, 5111 insertions(+), 1747 deletions(-)
diff --git a/doc/development/snowplow/index.md b/doc/development/activitypub/actor.md
index c0e53fe3b1b..1d10e421df7 100644
--- a/doc/development/snowplow/index.md
+++ b/doc/development/activitypub/actor.md
@@ -1,11 +1,11 @@
---
-redirect_to: '../internal_analytics/snowplow/index.md'
-remove_date: '2023-08-20'
+redirect_to: 'actors/index.md'
+remove_date: '2023-12-08'
---
-This document was moved to [another location](../internal_analytics/snowplow/index.md).
+This document was moved to [another location](actors/index.md).
-<!-- This redirect file can be deleted after <2023-08-20>. -->
+<!-- This redirect file can be deleted after <2023-12-08>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/activitypub/actors/group.md b/doc/development/activitypub/actors/group.md
new file mode 100644
index 00000000000..dad02298170
--- /dev/null
+++ b/doc/development/activitypub/actors/group.md
@@ -0,0 +1,205 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Activities for group actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+  projects. Requires the `activity_pub` flag to also be enabled.
+
+## Profile
+
+```javascript
+{
+ "@context": "https://www.w3.org/ns/activitystreams",
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "summary": GROUP_DESCRIPTION,
+ "url": GROUP_URL,
+ "outbox": GROUP_OUTBOX_URL,
+ "inbox": null,
+}
+```
+
+## Outbox
+
+The various activities for a group are:
+
+- [The group was created](#the-group-was-created).
+- All project activities for projects in that group, and its subgroups.
+- [A user joined the group](#a-user-joined-the-group).
+- [A user left the group](#a-user-left-the-group).
+- [The group was deleted](#the-group-was-deleted).
+- [A subgroup was created](#a-subgroup-was-created).
+- [A subgroup was deleted](#a-subgroup-was-deleted).
+
+### The group was created
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Create",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ }
+}
+```
+
+### A user joined the group
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Join",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ },
+}
+```
+
+### A user left the group
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Leave",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ },
+}
+```
+
+### The group was deleted
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Delete",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ }
+}
+```
+
+### A subgroup was created
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Create",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ "context": {
+ "id": PARENT_GROUP_URL,
+ "type": "Group",
+ "name": PARENT_GROUP_NAME,
+ "url": PARENT_GROUP_URL,
+ }
+ }
+}
+```
+
+### A subgroup was deleted
+
+```javascript
+{
+ "id": GROUP_OUTBOX_URL#event_id,
+ "type": "Delete",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": GROUP_URL,
+ "type": "Group",
+ "name": GROUP_NAME,
+ "url": GROUP_URL,
+ "context": {
+ "id": PARENT_GROUP_URL,
+ "type": "Group",
+ "name": PARENT_GROUP_NAME,
+ "url": PARENT_GROUP_URL,
+ }
+ }
+}
+```
diff --git a/doc/development/activitypub/actors/index.md b/doc/development/activitypub/actors/index.md
new file mode 100644
index 00000000000..032cb26587a
--- /dev/null
+++ b/doc/development/activitypub/actors/index.md
@@ -0,0 +1,148 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Implement an ActivityPub actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+  projects. Requires the `activity_pub` flag to also be enabled.
+
+ActivityPub is based on three standard documents:
+
+- [ActivityPub](https://www.w3.org/TR/activitypub/) defines the HTTP
+ requests happening to implement federation.
+- [ActivityStreams](https://www.w3.org/TR/activitystreams-core/) defines the
+ format of the JSON messages exchanged by the users of the protocol.
+- [Activity Vocabulary](https://www.w3.org/TR/activitystreams-vocabulary/)
+ defines the various messages recognized by default.
+
+The first is typically handled by controllers, while the other two relate to
+what happens in serializers.
+
+To implement an ActivityPub actor, you must:
+
+- Implement the profile page of the resource.
+- Implement the outbox page.
+- Handle incoming requests on the inbox.
+
+All requests are made using
+`application/ld+json; profile="https://www.w3.org/ns/activitystreams"` as `Accept` HTTP header.
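
As a non-authoritative sketch, building such a request with Ruby's standard library might look like this (the host and path are illustrative):

```ruby
require "net/http"
require "uri"

# Build, but do not send, a GET request for an actor's profile with the
# ActivityStreams profile media type in the Accept header.
uri = URI("https://gitlab.example.com/user/project/-/releases")
request = Net::HTTP::Get.new(uri)
request["Accept"] = 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'

# Sending it would be:
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```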
+
+The actors we're implementing for the social features are:
+
+- [Releases](releases.md)
+- [Topics](topic.md)
+- [Projects](project.md)
+- [Groups](group.md)
+- [Users](user.md)
+
+## Profile page
+
+Query the profile page to retrieve:
+
+- General information about the actor, like its name and description.
+- URLs for the inbox and the outbox.
+
+To implement a profile page, create an ActivityStreams
+serializer in `app/serializers/activity_pub/` that inherits
+from `ActivityStreamsSerializer`. See the serializers section
+below for the mandatory fields.
+
+To call your serializer in your controller:
+
+```ruby
+opts = {
+ inbox: nil,
+ outbox: outbox_project_releases_url(project)
+}
+
+render json: ActivityPub::ReleasesActorSerializer.new.represent(project, opts)
+```
+
+- `outbox` is the endpoint where the activity feed for this actor can be
+  found.
+- `inbox` is the endpoint to POST to when subscribing to the feed. Not yet implemented, so pass `nil`.
+
+## Outbox page
+
+The outbox is the list of activities for the resource. It's a feed for the
+resource, and it allows ActivityPub clients to show public activities for
+this actor without having yet subscribed to it.
+
+To implement an outbox page, create an ActivityStreams
+serializer in `app/serializers/activity_pub/` that inherits
+from `ActivityStreamsSerializer`. See the serializers section
+below for the mandatory fields.
+
+Call your serializer in your controller like this:
+
+```ruby
+serializer = ActivityPub::ReleasesOutboxSerializer.new.with_pagination(request, response)
+render json: serializer.represent(releases)
+```
+
+This converts the response to an `OrderedCollection`
+ActivityPub type, with all the correct fields.
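
For reference, this is the general shape of an ActivityStreams ordered collection page as defined by the spec, not GitLab's exact output (the items here are illustrative):

```ruby
require "json"

# Sketch of an ActivityStreams OrderedCollectionPage: a paginated,
# ordered list of activities, as a plain Ruby hash.
collection = {
  "@context" => "https://www.w3.org/ns/activitystreams",
  "type" => "OrderedCollectionPage",
  "totalItems" => 2,
  "orderedItems" => [
    { "type" => "Create", "object" => { "type" => "Application" } },
    { "type" => "Update", "object" => { "type" => "Application" } }
  ]
}

puts JSON.pretty_generate(collection)
```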
+
+## Inbox
+
+Not yet implemented.
+
+The inbox is where ActivityPub-compatible third parties make their
+requests, either to subscribe to the actor or to send it messages.
+
+## ActivityStreams serializers
+
+The serializers implement half the core of ActivityPub support: they're all
+about [ActivityStreams](https://www.w3.org/TR/activitystreams-core/), the
+message format used by ActivityPub.
+
+To leverage the features doing most of the formatting for you, your
+serializer should inherit from `ActivityPub::ActivityStreamsSerializer`.
+
+To use it, call the `#represent` method. It requires you to provide
+`inbox` and `outbox` options (as mentioned above) if your serializer
+is for an actor profile page. You don't need those options if your
+serializer represents an object that is only meant to be embedded in
+actors, like the object representing a user's contact information.
+
+Each serialized resource (including other objects embedded in your
+actor) must provide an `id` and a `type` field.
+
+`id` is a URL. It's meant to be a unique identifier for the resource, and
+it must point to an existing page: ideally, an actor. Otherwise, you can
+just reference the closest actor and use an anchor, like this:
+
+```plaintext
+https://gitlab.com/user/project/-/releases#release-1
+```
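
This convention can be sketched as a small hypothetical helper (`activity_pub_id` is ours, not a GitLab API):

```ruby
# When a resource has no page of its own, build its unique id by
# anchoring it to the closest actor's URL.
def activity_pub_id(actor_url, anchor)
  "#{actor_url}##{anchor}"
end

activity_pub_id("https://gitlab.com/user/project/-/releases", "release-1")
# => "https://gitlab.com/user/project/-/releases#release-1"
```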
+
+`type` should be taken from ActivityStreams core vocabulary:
+
+- [Activity types](https://www.w3.org/TR/activitystreams-vocabulary/#activity-types)
+- [Actor types](https://www.w3.org/TR/activitystreams-vocabulary/#actor-types)
+- [Object types](https://www.w3.org/TR/activitystreams-vocabulary/#object-types)
+
+The properties you can use are all documented in
+[the ActivityStreams vocabulary document](https://www.w3.org/TR/activitystreams-vocabulary).
+Given the type you have chosen for your resource, find its
+`Properties` list, which shows all available properties, direct or
+inherited.
+
+Mastodon adds one more property, `preferredUsername`.
+Mastodon expects it to be set on any actor; without it, the actor is
+not recognized by Mastodon.
diff --git a/doc/development/activitypub/actors/project.md b/doc/development/activitypub/actors/project.md
new file mode 100644
index 00000000000..4f876b9e3fa
--- /dev/null
+++ b/doc/development/activitypub/actors/project.md
@@ -0,0 +1,640 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Activities for project actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+  projects. Requires the `activity_pub` flag to also be enabled.
+
+## Profile
+
+```javascript
+{
+ "@context": "https://www.w3.org/ns/activitystreams",
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ "outbox": PROJECT_OUTBOX_URL,
+ "inbox": null,
+}
+```
+
+## Outbox
+
+For a project, we map the events that appear on the project activity
+timeline in GitLab, when a user:
+
+- [Creates the repository](#user-creates-the-repository).
+- [Pushes commits](#user-pushes-commits).
+- [Pushes a tag](#user-pushes-a-tag).
+- [Opens a merge request](#user-opens-a-merge-request).
+- [Accepts a merge request](#user-accepts-a-merge-request).
+- [Closes a merge request](#user-closes-a-merge-request).
+- [Opens an issue](#user-opens-an-issue).
+- [Closes an issue](#user-closes-an-issue).
+- [Reopens an issue](#user-reopens-an-issue).
+- [Comments on a merge request](#user-comments-on-a-merge-request).
+- [Comments on an issue](#user-comments-on-an-issue).
+- [Creates a wiki page](#user-creates-a-wiki-page).
+- [Updates a wiki page](#user-updates-a-wiki-page).
+- [Destroys a wiki page](#user-destroys-a-wiki-page).
+- [Joins the project](#user-joins-the-project).
+- [Leaves the project](#user-leaves-the-project).
+- [Deletes the repository](#user-deletes-the-repository).
+
+The project activity page also shows a Design tab, but it was empty in
+every project we checked, and nothing related to it appears in the
+project sidebar. It may be a paid-tier feature; if so, it's of no
+concern for public following through ActivityPub.
+
+### User creates the repository
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Create",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ }
+}
+```
+
+### User pushes commits
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Update",
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ "result": COMMITS_DIFF_URL,
+}
+```
+
+### User pushes a tag
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Update",
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ "name": TAG_NAME,
+ "result": COMMIT_URL,
+}
+```
+
+### User opens a merge request
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": MERGE_REQUEST_URL,
+ "type": "Application",
+ "name": MERGE_REQUEST_TITLE,
+ "url": MERGE_REQUEST_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+ "target": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+### User accepts a merge request
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Accept",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": MERGE_REQUEST_URL,
+ "type": "Application",
+ "name": MERGE_REQUEST_TITLE,
+ "url": MERGE_REQUEST_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+ "target": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+### User closes a merge request
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Remove",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": MERGE_REQUEST_URL,
+ "type": "Application",
+ "name": MERGE_REQUEST_TITLE,
+ "url": MERGE_REQUEST_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+ "origin": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+### User opens an issue
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": ISSUE_URL,
+ "type": "Page",
+ "name": ISSUE_TITLE,
+ "url": ISSUE_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ }
+ },
+ "target": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ }
+}
+```
+
+Why add the project both as `object.context` and as `target`? For
+several consistency reasons:
+
+- The **Add** activity is more commonly used with a `target`.
+- The **Remove** activity used to close the issue is more
+ commonly used with an `origin`.
+- The **Update** activity used to reopen an issue specifies that
+ `target` and `origin` have no specific meaning, making `context` better
+ suited for that.
+- We could use `context` only with **Update**, but merge requests
+ must be taken into consideration.
+
+Merge requests are very similar to issues, so we want their activities
+to be similar. While the best type for issues is `Page`, the type chosen
+for merge requests is `Application`, both to distinguish them from issues
+and because they contain code.
+
+To distinguish merge requests from projects (which are also `Application`),
+merge requests are an `Application` with another `Application` (the project)
+as context. Given that merge requests have a `context` even with the **Add**
+and **Remove** activities, the same is done with issues for consistency.
+
+An alternative that was considered, but dismissed: instead of **Add** for issues,
+use **Create**. That would have allowed us to always use `context`, but
+it creates more problems than it solves. **Accept** and **Reject** could work quite
+well for closing merge requests, but what would we use to close issues?
+**Delete** is incorrect, as the issue is not deleted, just closed.
+Reopening the issue later would require an **Update** after a
+**Delete**.
+
+Using **Create** for opening issues and **Remove** for closing
+issues would be asymmetrical:
+
+- **Create** is mirrored by **Delete**.
+- **Add** is mirrored by **Remove**.
+
+To minimize pain for those who will build on top of those resources, it's best
+to duplicate the project information as `context` and `target` / `origin`.
+
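The mapping argued in this section can be summarized as follows (the constant and keys are illustrative, not GitLab code):

```ruby
# Which ActivityStreams activity type each issue or merge request event
# maps to, following the reasoning above.
EVENT_ACTIVITY_TYPES = {
  issue_opened: "Add",              # target: the project
  issue_closed: "Remove",           # origin: the project
  issue_reopened: "Update",         # context on the object: the project
  merge_request_opened: "Add",      # target: the project
  merge_request_accepted: "Accept", # target: the project
  merge_request_closed: "Remove"    # origin: the project
}.freeze
```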
+### User closes an issue
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Remove",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": ISSUE_URL,
+ "type": "Page",
+ "name": ISSUE_TITLE,
+ "url": ISSUE_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+ "origin": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+### User reopens an issue
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Update",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": ISSUE_URL,
+ "type": "Page",
+ "name": ISSUE_TITLE,
+ "url": ISSUE_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+}
+```
+
+### User comments on a merge request
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": NOTE_URL,
+ "type": "Note",
+ "content": NOTE_NOTE,
+ },
+ "target": {
+ "id": MERGE_REQUEST_URL,
+ "type": "Application",
+ "name": MERGE_REQUEST_TITLE,
+ "url": MERGE_REQUEST_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+}
+```
+
+### User comments on an issue
+
+```javascript
+{
+ "id": PROJECT_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": NOTE_URL,
+ "type": "Note",
+ "content": NOTE_NOTE,
+ },
+ "target": {
+ "id": ISSUE_URL,
+ "type": "Page",
+ "name": ISSUE_TITLE,
+ "url": ISSUE_URL,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+}
+```
+
+### User creates a wiki page
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Create",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": WIKI_PAGE_URL,
+ "type": "Page",
+ "name": WIKI_PAGE_HUMAN_TITLE,
+ "url": WIKI_PAGE_URL,
+ }
+}
+```
+
+### User updates a wiki page
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Update",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": WIKI_PAGE_URL,
+ "type": "Page",
+ "name": WIKI_PAGE_HUMAN_TITLE,
+ "url": WIKI_PAGE_URL,
+ }
+}
+```
+
+### User destroys a wiki page
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Delete",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": WIKI_PAGE_URL,
+ "type": "Page",
+ "name": WIKI_PAGE_HUMAN_TITLE,
+ "url": WIKI_PAGE_URL,
+ }
+}
+```
+
+### User joins the project
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "target": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+The GitLab project timeline does not mention who added a member to the
+project, so this activity does the same. However, the **Add** activity
+requires an actor. For that reason, we use the same person as both actor and object.
+
+The **Members** page of a project contains a `source` attribute.
+While it sometimes mentions who added the user, it is mainly used
+to distinguish whether the user is a member of the project directly, or
+through a group. It would not be a good "actor" (it would rather be an
+`origin` for the membership).
+
+### User leaves the project
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Remove",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "target": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+}
+```
+
+See [User joins the project](#user-joins-the-project).
+
+### User deletes the repository
+
+```javascript
+{
+ "id": PROJECT_OUTBOX_URL#event_id,
+ "type": "Delete",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ }
+}
+```
diff --git a/doc/development/activitypub/actors/releases.md b/doc/development/activitypub/actors/releases.md
new file mode 100644
index 00000000000..009b98b6adf
--- /dev/null
+++ b/doc/development/activitypub/actors/releases.md
@@ -0,0 +1,85 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Activities for following releases actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+ projects. Requires the `activity_pub` flag to also be enabled.
+
+## Profile
+
+The profile of this actor is a bit different from other actors'. We don't want
+to show activities for a given release, but rather the releases of a given
+project.
+
+The profile endpoint is handled by `Projects::ReleasesController#index`, which
+serves the list of releases, and should reply with something like this:
+
+```javascript
+{
+ "@context": "https://www.w3.org/ns/activitystreams",
+ "id": PROJECT_RELEASES_URL,
+ "type": "Application",
+ "name": PROJECT_NAME + " releases",
+ "url": PROJECT_RELEASES_URL,
+ "content": PROJECT_DESCRIPTION,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ "outbox": PROJECT_RELEASES_OUTBOX_URL,
+ "inbox": null,
+}
+```
+
+## Outbox
+
+The release actor is relatively simple: the only activity happening is the
+**Create release** event.
+
+```javascript
+{
+ "id": PROJECT_RELEASES_OUTBOX_URL#release_id,
+ "type": "Create",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ },
+ "object": {
+ "id": RELEASE_URL,
+ "type": "Application",
+ "name": RELEASE_TITLE,
+ "url": RELEASE_URL,
+ "content": RELEASE_DESCRIPTION,
+ "context": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "summary": PROJECT_DESCRIPTION,
+ "url": PROJECT_URL,
+ },
+ },
+}
+```
diff --git a/doc/development/activitypub/actors/topic.md b/doc/development/activitypub/actors/topic.md
new file mode 100644
index 00000000000..f99a6e0569a
--- /dev/null
+++ b/doc/development/activitypub/actors/topic.md
@@ -0,0 +1,91 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Activities for topic actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+ projects. Requires the `activity_pub` flag to also be enabled.
+
+## Profile
+
+```javascript
+{
+ "@context": "https://www.w3.org/ns/activitystreams",
+ "id": TOPIC_URL,
+ "type": "Group",
+ "name": TOPIC_NAME,
+ "url": TOPIC_URL,
+ "summary": TOPIC_DESCRIPTION,
+ "outbox": TOPIC_OUTBOX_URL,
+ "inbox": null,
+}
+```
+
+## Outbox
+
+Like the release actor, the topic actor is not complex. It generates an
+activity only when a new project is added to the given topic.
+
+```javascript
+{
+ "id": TOPIC_OUTBOX_URL#event_id,
+ "type": "Add",
+ "to": [
+ "https://www.w3.org/ns/activitystreams#Public"
+ ],
+ "actor": {
+ "id": PROJECT_URL,
+ "type": "Application",
+ "name": PROJECT_NAME,
+ "url": PROJECT_URL,
+ },
+ "object": {
+ "id": TOPIC_URL,
+ "type": "Group",
+ "name": TOPIC_NAME,
+ "url": TOPIC_URL,
+  },
+}
+```
+
+## Possible difficulties
+
+There is hidden complexity here.
+
+The simplest way to build this endpoint is to take the projects associated
+with a topic, and sort them by descending creation date. However,
+if we do that, discrepancies will occur when implementing the
+activity push part of the standard.
+
+Adding a project to a topic doesn't happen at project creation time. It
+happens when a project's topics are _edited_, which can be a very long time
+after the project creation date. In that case, a push activity is
+created and sent to federated instances when the topic is added to the
+project. However, the outbox endpoint, which sorts projects by descending
+creation date, doesn't show the project, because it was created long ago.
+
+No special logic happens when a topic is added to a project, except:
+
+- Cleaning up the topic list.
+- Creating the topic in the database, if it doesn't exist yet.
+
+No event is generated. We should add such an event so that the activity
+push can rely on it, ideally in `Projects::UpdateService`. The outbox endpoint
+can then list those events to be sure to match what was sent. When doing that,
+we should verify that it doesn't affect other pages or endpoints dealing with events.
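The outbox would then need to know which topics were newly added during a project update. A minimal sketch of that diffing step, in plain Ruby and independent of the actual `Projects::UpdateService` internals (the normalization rule here is an assumption, not the real cleanup logic):

```ruby
# Compute which topics were newly added to a project, given the topic
# lists before and after an update. Each newly added topic would produce
# one "Add" activity in that topic's outbox.
def added_topics(before, after)
  # Hypothetical normalization: trim whitespace and ignore case, the way
  # a topic list cleanup might deduplicate entries.
  normalize = ->(list) { list.map { |t| t.strip.downcase }.uniq }
  normalize.call(after) - normalize.call(before)
end

# Example: editing topics from ["rails"] to ["Rails", "activitypub"]
# generates an Add activity only for "activitypub".
```

Events produced this way would give the outbox endpoint the same source of truth as the push activities, avoiding the creation-date discrepancy described above.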
diff --git a/doc/development/activitypub/actors/user.md b/doc/development/activitypub/actors/user.md
new file mode 100644
index 00000000000..9fe4f8ec88e
--- /dev/null
+++ b/doc/development/activitypub/actors/user.md
@@ -0,0 +1,47 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# Activities for following user actor **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+ projects. Requires the `activity_pub` flag to also be enabled.
+
+## Profile
+
+This actor is the primary use case ActivityPub has in mind:
+
+```javascript
+{
+ "@context": "https://www.w3.org/ns/activitystreams",
+ "id": USER_PROFILE_URL,
+ "type": "Person",
+ "name": USER_NAME,
+ "url": USER_PROFILE_URL,
+ "outbox": USER_OUTBOX_URL,
+ "inbox": null,
+}
+```
+
+## Outbox
+
+The user actor is special because it can be linked to all events happening on the platform.
+It's a join of events from other resources:
+
+- All release activities.
+- All project activities.
+- All group activities.
diff --git a/doc/development/activitypub/index.md b/doc/development/activitypub/index.md
new file mode 100644
index 00000000000..d89f18080f0
--- /dev/null
+++ b/doc/development/activitypub/index.md
@@ -0,0 +1,216 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
+---
+
+# ActivityPub **(EXPERIMENT)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127023) in GitLab 16.5 [with two flags](../../administration/feature_flags.md) named `activity_pub` and `activity_pub_project`. Disabled by default. This feature is an [Experiment](../../policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available,
+an administrator can [enable the feature flags](../../administration/feature_flags.md)
+named `activity_pub` and `activity_pub_project`.
+On GitLab.com, this feature is not available.
+The feature is not ready for production use.
+
+Usage of ActivityPub in GitLab is governed by the
+[GitLab Testing Agreement](https://about.gitlab.com/handbook/legal/testing-agreement/).
+
+The goal of these documents is to provide an implementation path for adding
+Fediverse capabilities to GitLab.
+
+This page describes the conceptual and high-level point of view, while
+sub-pages discuss implementation in more technical depth (as in, how to
+implement this in the actual Rails codebase of GitLab).
+
+This feature requires two feature flags:
+
+- `activity_pub`: Enables or disables all ActivityPub-related features.
+- `activity_pub_project`: Enables or disables ActivityPub features specific to
+ projects. Requires the `activity_pub` flag to also be enabled.
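For local testing, both flags can be enabled from the Rails console; a minimal sketch, assuming a running GDK or Rails environment:

```ruby
Feature.enable(:activity_pub)
Feature.enable(:activity_pub_project)
```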
+
+## What
+
+Feel free to jump to [the Why section](#why) if you already know what
+ActivityPub and the Fediverse are.
+
+Among the push for [decentralization of the web](https://en.wikipedia.org/wiki/Decentralized_web),
+several projects tried different protocols with different ideals behind their reasoning.
+Some examples:
+
+- [Secure Scuttlebutt](https://en.wikipedia.org/wiki/Secure_Scuttlebutt) (or SSB for short)
+- [Dat](https://en.wikipedia.org/wiki/Dat_%28software%29)
+- [IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System)
+- [Solid](https://en.wikipedia.org/wiki/Solid_%28web_decentralization_project%29)
+
+One gained traction recently: [ActivityPub](https://en.wikipedia.org/wiki/ActivityPub),
+better known through the colloquial [Fediverse](https://en.wikipedia.org/wiki/Fediverse) built
+on top of it, with applications like
+[Mastodon](https://en.wikipedia.org/wiki/Mastodon_%28social_network%29)
+(which could be described as some sort of decentralized Facebook) or
+[Lemmy](https://en.wikipedia.org/wiki/Lemmy_%28software%29) (which could be
+described as some sort of decentralized Reddit).
+
+ActivityPub has several advantages that make it attractive
+to implementers and could explain its current success:
+
+- **It's built on top of HTTP**. You don't need to install new software or
+  to tinker with TCP/UDP to implement ActivityPub: if you have a web server
+  or an application that provides an HTTP API (like a Rails application),
+  you already have everything you need.
+- **It's built on top of JSON**. All communications are basically JSON
+  objects, which web developers are already used to, which simplifies adoption.
+- **It's a W3C standard and already has multiple implementations**. Being
+  piloted by the W3C is a guarantee of stability and quality work. They
+  have amply demonstrated in the past, through their work on HTML, CSS,
+  and other web standards, that we can build on top of their work without
+  fear of it becoming deprecated or irrelevant after a few years.
+
+### The Fediverse
+
+The core idea behind Mastodon and Lemmy is called the Fediverse. Rather
+than full decentralization, those applications rely on federation, in the
+sense that there still are servers and clients. It's not P2P like SSB,
+Dat, and IPFS, but a galaxy of servers chatting with each other,
+rather than central servers controlled by a single entity.
+
+The user signs up to one of those servers (called **instances**), and can
+then interact with users either on that instance, or on other ones.
+From the user's perspective, they access a global network, not
+only their instance. They see the articles posted on other instances, they
+can comment on them, upvote them, and so on.
+
+What happens behind the scenes:
+their instance knows where the user they reply to is hosted. It
+contacts that other instance to let it know there is a message for its
+user - somewhat similar to SMTP. Similarly, when a user subscribes
+to a feed, their instance informs the instance where the feed is
+hosted of this subscription. That target instance then posts back
+messages when new activities are created. This allows for a push model,
+rather than a constant polling model like RSS. Of course, what was just
+described is the happy path; moderation, validation, and fault tolerance
+happen all along the way.
+
+### ActivityPub
+
+Behind the Fediverse is the ActivityPub protocol. It's an HTTP API
+attempting to be as general a social network implementation as possible,
+while leaving room for extension.
+
+The basic idea is that an `actor` sends and receives `activities`. Activities
+are structured JSON messages with well-defined properties, but are extensible
+to cover any need. An actor is defined by four endpoints, which are
+contacted with the
+`application/ld+json; profile="https://www.w3.org/ns/activitystreams"` HTTP Accept header:
+
+- `GET /inbox`: used by the actor to find new activities intended for them.
+- `POST /inbox`: used by instances to push new activities intended for the actor.
+- `GET /outbox`: used by anyone to read the activities created by the actor.
+- `POST /outbox`: used by the actor to publish new activities.
+
+Among those, Mastodon and Lemmy only use `POST /inbox` and `GET /outbox`, which
+are the minimum needed to implement federation:
+
+- Instances push new activities for the actor on the inbox.
+- Reading the outbox allows reading the feed of an actor.
+
+Additionally, Mastodon and Lemmy implement a `GET /` endpoint (with the
+mentioned Accept header). This endpoint responds with general information about
+the actor, like its name and the URLs of its inbox and outbox. While not
+required by the standard, it makes discovery easier.
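As a sketch, the discovery document served on that `GET /` endpoint could be assembled as follows. The field names come from the JSON examples in these pages; the helper itself and the URLs are illustrative, not actual GitLab code:

```ruby
require 'json'

# Build a minimal actor profile, as served on `GET /` when the request
# carries the ActivityStreams Accept header. The inbox is null because
# the push part of the standard is not implemented yet.
def actor_profile(type:, name:, url:, outbox:)
  {
    '@context' => 'https://www.w3.org/ns/activitystreams',
    'id' => url,
    'type' => type,
    'name' => name,
    'url' => url,
    'outbox' => outbox,
    'inbox' => nil
  }
end

profile = actor_profile(type: 'Person', name: 'Alice',
                        url: 'https://gitlab.example.org/alice',
                        outbox: 'https://gitlab.example.org/alice/outbox')
puts JSON.generate(profile)
```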
+
+While a person is the main use case for an actor, an actor does not
+necessarily map to a person. Anything can be an actor: a topic, a
+subreddit, a group, an event. For GitLab, anything with activities (in the sense
+of what GitLab means by "activity") can be an ActivityPub actor. This includes
+items like projects, groups, and releases. In those more abstract examples,
+an actor can be thought of as an actionable feed.
+
+ActivityPub by itself does not cover everything that is needed to implement
+the Fediverse. Most notably, these are left for the implementers to figure out:
+
+- Finding a way to deal with spam. Spam is handled by authorizing or
+ blocking ("defederating") other instances.
+- Discovering new instances.
+- Performing network-wide searches.
+
+## Why
+
+Why would a social media protocol be useful for GitLab? People want a single,
+global GitLab network to interact between various projects, without having to
+register on each of their hosts.
+
+Several very popular discussions around this have already happened:
+
+- [Share events externally via ActivityPub](https://gitlab.com/gitlab-org/gitlab/-/issues/21582)
+- [Implement cross-server (federated) merge requests](https://gitlab.com/gitlab-org/gitlab/-/issues/14116)
+- [Distributed merge requests](https://gitlab.com/groups/gitlab-org/-/epics/260)
+
+The ideal workflow would be:
+
+1. Alice registers to her favorite GitLab instance, like `gitlab.example.org`.
+1. She looks for a project on a given topic, and sees Bob's project, even though
+ Bob is on `gitlab.com`.
+1. Alice selects **Fork**, and `gitlab.com/Bob/project.git` is
+   forked to `gitlab.example.org/Alice/project.git`.
+1. She makes her edits, and opens a merge request, which appears in Bob's
+ project on `gitlab.com`.
+1. Alice and Bob discuss the merge request, each one from their own GitLab
+ instance.
+1. Bob can send additional commits, which are picked up by Alice's instance.
+1. When Bob accepts the merge request, his instance picks up the code from
+ Alice's instance.
+
+In this process, ActivityPub would help in:
+
+- Letting Bob know a fork happened.
+- Sending the merge request to Bob.
+- Enabling Alice and Bob to discuss the merge request.
+- Letting Alice know the code was merged.
+
+It does _not_ help in these cases, which need specific implementations:
+
+- Implementing a network-wide search.
+- Implementing cross-instance forks. (Not needed, thanks to Git.)
+
+Why use ActivityPub here rather than implementing cross-instance merge requests
+in a custom way? Two reasons:
+
+1. **Building on top of a standard helps reach beyond GitLab**.
+ While the workflow presented above only mentions GitLab, building on top
+ of a W3C standard means other forges can follow GitLab
+ there, and build a massive Fediverse of code sharing.
+1. **An opportunity to make GitLab more social**. To prepare the
+ architecture for the workflow above, smaller steps can be taken, allowing
+ people to subscribe to activity feeds from their Fediverse social
+   network. Anything that has an RSS feed could become an ActivityPub feed.
+ People on Mastodon could follow their favorite developer, project, or topic
+ from GitLab and see the news in their feed on Mastodon, hopefully raising
+ engagement with GitLab.
+
+## How
+
+The idea of this implementation path is not to take the fastest route to
+the feature with the most value added (cross-instance merge requests), but
+to proceed with the smallest useful step at each iteration, making sure each
+step brings value immediately.
+
+1. **Implement ActivityPub for social following**.
+ After this, the Fediverse can follow activities on GitLab instances.
+ 1. ActivityPub to subscribe to project releases.
+ 1. ActivityPub to subscribe to project creation in topics.
+ 1. ActivityPub to subscribe to project activities.
+ 1. ActivityPub to subscribe to group activities.
+ 1. ActivityPub to subscribe to user activities.
+1. **Implement cross-instance search** to enable discovering projects on other instances.
+1. **Implement cross-instance forks** to enable forking a project from another instance.
+1. **Implement ActivityPub for cross-instance discussions** to enable discussing
+ issues and merge requests from another instance:
+ 1. In issues.
+ 1. In merge requests.
+1. **Implement ActivityPub to submit cross-instance merge requests** to enable
+ submitting merge requests to other instances.
+
+For now, see [how to implement an ActivityPub actor](actors/index.md).
diff --git a/doc/development/adding_service_component.md b/doc/development/adding_service_component.md
index 6e47d2991dc..3ce303d429a 100644
--- a/doc/development/adding_service_component.md
+++ b/doc/development/adding_service_component.md
@@ -23,7 +23,7 @@ The following outline re-uses the [maturity metric](https://about.gitlab.com/dir
- [Release management](#release-management)
- [Enabled on GitLab.com](feature_flags/controls.md#enabling-a-feature-for-gitlabcom)
- Complete
- - [Configurable by the GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit)
+ - [Validated by the Reference Architecture group and scaled out recommendations made](https://about.gitlab.com/handbook/engineering/quality/quality-engineering/self-managed-excellence/#reference-architectures)
- Lovable
- Enabled by default for the majority of users
diff --git a/doc/development/ai_architecture.md b/doc/development/ai_architecture.md
index 84a2635b13c..28483b943d1 100644
--- a/doc/development/ai_architecture.md
+++ b/doc/development/ai_architecture.md
@@ -11,7 +11,7 @@ GitLab has created a common set of tools to support our product groups and their
1. Increase the velocity of feature teams by providing a set of high quality, ready to use tools
1. Ability to switch underlying technologies quickly and easily
-AI is moving very quickly, and we need to be able to keep pace with changes in the area. We have built an [abstraction layer](../../ee/development/ai_features.md) to do this, allowing us to take a more "pluggable" approach to the underlying models, data stores, and other technologies.
+AI is moving very quickly, and we need to be able to keep pace with changes in the area. We have built an [abstraction layer](../../ee/development/ai_features/index.md) to do this, allowing us to take a more "pluggable" approach to the underlying models, data stores, and other technologies.
The following diagram from the [architecture blueprint](../architecture/blueprints/ai_gateway/index.md) shows a simplified view of how the different components in GitLab interact. The abstraction layer helps avoid code duplication within the REST APIs within the `AI API` block.
@@ -27,6 +27,25 @@ There are two primary reasons for this: the best AI models are cloud-based as th
The AI Gateway (formerly the [model gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist)) is a standalone-service that will give access to AI features to all users of GitLab, no matter which instance they are using: self-managed, dedicated or GitLab.com. The SaaS-based AI abstraction layer will transition to connecting to this gateway, rather than accessing cloud-based providers directly.
+Calls to the AI-gateway from GitLab-rails can be made using the
+[Abstraction Layer](ai_features/index.md#abstraction-layer).
+By default, these actions are performed asynchronously via a Sidekiq
+job to prevent long-running requests in Puma. This path should be used for
+non-latency-sensitive actions, because of the latency Sidekiq adds.
+
+At the time of writing, the Abstraction Layer still directly calls the AI providers. This will be
+changed [in the future](https://gitlab.com/gitlab-org/gitlab/-/issues/424614).
+
+When a certain action is latency sensitive, we can decide to call the
+AI-gateway directly. This avoids the latency added by Sidekiq.
+[We already do this for `code_suggestions`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/code_suggestions.rb)
+which get handled by API endpoints nested in
+`/api/v4/code_suggestions`. Any new endpoints we add should be
+nested within the `/api/v4/ai_assisted` namespace. Doing this
+automatically routes the requests on GitLab.com to the `ai-assisted`
+fleet, isolating the workload from the regular API and
+making it easier to scale if needed.
+
## Supported technologies
As part of the AI working group, we have been investigating various technologies and vetting them. Below is a list of the tools which have been reviewed and already approved for use within the GitLab application.
@@ -98,7 +117,7 @@ The following table documents functionality that Code Suggestions offers today,
#### Code Suggestions Latency
-Code Suggestions acceptance rates are _highly_ sensitive to latency. While writing code with an AI assistant, a user will pause only for a short duration before continuing on with manually typing out a block of code. As soon as the user has pressed a subsequent keypress, the existing suggestion will be invalidated and a new request will need to be issued to the code suggestions endpoint. In turn, this request will also be highly sensitive to latency.
+Code Suggestions acceptance rates are _highly_ sensitive to latency. While writing code with an AI assistant, a user will pause only for a short duration before continuing on with manually typing out a block of code. As soon as the user has pressed a subsequent keypress, the existing suggestion will be invalidated and a new request will need to be issued to the Code Suggestions endpoint. In turn, this request will also be highly sensitive to latency.
In a worst case with sufficient latency, the IDE could be issuing a string of requests, each of which is then ignored as the user proceeds without waiting for the response. This adds no value for the user, while still putting load on our services.
diff --git a/doc/development/ai_features.md b/doc/development/ai_features.md
index ffe151f3876..a952d8f2804 100644
--- a/doc/development/ai_features.md
+++ b/doc/development/ai_features.md
@@ -1,659 +1,11 @@
---
-stage: AI-powered
-group: AI Framework
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: 'ai_features/index.md'
+remove_date: '2023-12-01'
---
-# AI features based on 3rd-party integrations
+This document was moved to [another location](ai_features/index.md).
-[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/117296) in GitLab 15.11.
-
-## Features
-
-- Async execution of the long running API requests
- - GraphQL Action starts the request
- - Background workers execute
- - GraphQL subscriptions deliver results back in real time
-- Abstraction for
- - OpenAI
- - Google Vertex AI
- - Anthropic
-- Rate Limiting
-- Circuit Breaker
-- Multi-Level feature flags
-- License checks on group level
-- Snowplow execution tracking
-- Tracking of Token Spent on Prometheus
-- Configuration for Moderation check of inputs
-- Automatic Markdown Rendering of responses
-- Centralised Group Level settings for experiment and 3rd party
-- Experimental API endpoints for exploration of AI APIs by GitLab team members without the need for credentials
- - OpenAI
- - Google Vertex AI
- - Anthropic
-
-## Feature flags
-
-Apply the following two feature flags to any AI feature work:
-
-- A general that applies to all AI features.
-- A flag specific to that feature. The feature flag name [must be different](feature_flags/index.md#feature-flags-for-licensed-features) than the licensed feature name.
-
-See the [feature flag tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/405161) for the list of all feature flags and how to use them.
-
-## Implement a new AI action
-
-To implement a new AI action, connect to the preferred AI provider. You can connect to this API using either the:
-
-- Experimental REST API.
-- Abstraction layer.
-
-All AI features are experimental.
-
-## Test AI features locally
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-1. Enable the required general feature flags:
-
- ```ruby
- Feature.enable(:ai_related_settings)
- Feature.enable(:openai_experimentation)
- Feature.enable(:tofa_experimentation_main_flag)
- Feature.enable(:anthropic_experimentation)
- ```
-
-1. Simulate the GDK to [simulate SaaS](ee_features.md#simulate-a-saas-instance) and ensure the group you want to test has an Ultimate license
-1. Enable `Experimental features` and `Third-party AI services`
- 1. Go to the group with the Ultimate license
- 1. **Group Settings** > **General** -> **Permissions and group features**
- 1. Enable **Experiment features**
- 1. Enable **Third-party AI services**
-1. Enable the specific feature flag for the feature you want to test
-1. Set the required access token. To receive an access token:
- 1. For Vertex, follow the [instructions below](#configure-gcp-vertex-access).
- 1. For all other providers, like Anthropic or OpenAI, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
-
-### Set up the embedding database
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-For features that use the embedding database, additional setup is needed.
-
-1. Enable [pgvector](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/pgvector.md#enable-pgvector-in-the-gdk) in GDK
-1. Enable the embedding database in GDK
-
- ```shell
- gdk config set gitlab.rails.databases.embedding.enabled true
- ```
-
-1. Run `gdk reconfigure`
-1. Run database migrations to create the embedding database
-
-### Set up GitLab Duo Chat
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-1. [Enable Anthropic API features](#configure-anthropic-access).
-1. [Enable OpenAI support](#configure-openai-access).
-1. [Ensure the embedding database is configured](#set-up-the-embedding-database).
-1. Enable the feature-specific feature flags:
-
- ```ruby
- Feature.enable(:gitlab_duo)
- Feature.enable(:tanuki_bot)
- Feature.enable(:ai_redis_cache)
- ```
-
-1. Ensure that your current branch is up-to-date with `master`.
-1. To access the GitLab Duo Chat interface, in the lower-left corner of any page, select **Help** and **Ask GitLab Duo Chat**.
-
-#### Tips for local development
-
-1. When responses are taking too long to appear in the user interface, consider restarting Sidekiq by running `gdk restart rails-background-jobs`. If that doesn't work, try `gdk kill` and then `gdk start`.
-1. Alternatively, bypass Sidekiq entirely and run the chat service synchronously. This can help with debugging errors as GraphQL errors are now available in the network inspector instead of the Sidekiq logs.
-
-```diff
-diff --git a/ee/app/services/llm/chat_service.rb b/ee/app/services/llm/chat_service.rb
-index 5fa7ae8a2bc1..5fe996ba0345 100644
---- a/ee/app/services/llm/chat_service.rb
-+++ b/ee/app/services/llm/chat_service.rb
-@@ -5,7 +5,7 @@ class ChatService < BaseService
- private
-
- def perform
-- worker_perform(user, resource, :chat, options)
-+ worker_perform(user, resource, :chat, options.merge(sync: true))
- end
-
- def valid?
-```
-
-### Working with GitLab Duo Chat
-
-Prompts are the most vital part of the GitLab Duo Chat system. Prompts are the instructions sent to the Large Language Model to perform certain tasks.
-
-The state of the prompts is the result of weeks of iteration. If you want to change any prompt in the current tool, you must put it behind a feature flag.
-
-If you have any new or updated prompts, ask members of the AI Framework team to review them, because they have significant experience with prompts.
-
-### Setup for GitLab documentation chat (legacy chat)
-
-To populate the embedding database for GitLab chat:
-
-1. Open a rails console
-1. Run [this script](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/10588#note_1373586079) to populate the embedding database
-
-### Contributing to GitLab Duo Chat
-
-The Chat feature uses a [zero-shot agent](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb) that includes a system prompt explaining how the large language model should interpret the question and provide an
-answer. The system prompt defines available tools that can be used to gather
-information to answer the user's question.
-
-The zero-shot agent receives the user's question and decides which tools to use to gather information to answer it.
-It then makes a request to the large language model, which decides if it can answer directly or if it needs to use one
-of the defined tools.
-
-The tools each have their own prompt that provides instructions to the large language model on how to use that tool to
-gather information. The tools are designed to be self-sufficient and avoid multiple requests back and forth to
-the large language model.
-
-After the tools have gathered the required information, it is returned to the zero-shot agent, which asks the large language
-model if enough information has been gathered to provide the final answer to the user's question.
-
-#### Adding a new tool
-
-To add a new tool:
-
-1. Create files for the tool in the `ee/lib/gitlab/llm/chain/tools/` folder. Use existing tools like `issue_identifier` or
-`resource_reader` as a template.
-
-1. Write a class for the tool that includes:
-
-   - Name and description of what the tool does
-   - Example questions that would use this tool
-   - Instructions for the large language model on how to use the tool to gather information, that is, the main prompts this tool uses.
-
-1. Test and iterate on the prompt using RSpec tests that make real requests to the large language model.
-   - Prompts require trial and error; the non-deterministic nature of working with LLMs can be surprising.
-   - Anthropic provides a good [guide](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design) to prompt design.
-
-1. Implement code in the tool to parse the response from the large language model and return it to the zero-shot agent.
-
-1. Add the new tool name to the `tools` array in `ee/lib/gitlab/llm/completions/chat.rb` so the zero-shot agent knows about it.
-
-1. Add tests by adding questions that the new tool should answer to the test suite. Iterate on the prompts as needed.
-
-The key things to keep in mind are properly instructing the large language model through prompts and tool descriptions,
-keeping tools self-sufficient, and returning responses to the zero-shot agent. With some trial and error on prompts,
-adding new tools can expand the capabilities of the chat feature.
-
-There are available short [videos](https://www.youtube.com/playlist?list=PL05JrBw4t0KoOK-bm_bwfHaOv-1cveh8i) covering this topic.
-
-### Debugging
-
-To gather more insights about the full request, use the `Gitlab::Llm::Logger` file to debug logs.
-The default logging level on production is `INFO` and **must not** be used to log any data that could contain personal identifying information.
-
-To follow the debugging messages related to the AI requests on the abstraction layer, you can use:
-
-```shell
-export LLM_DEBUG=1
-gdk start
-tail -f log/llm.log
-```
-
-### Configure GCP Vertex access
-
-To obtain a GCP service key for local development, follow these steps:
-
-- Create a sandbox GCP environment by visiting [this page](https://about.gitlab.com/handbook/infrastructure-standards/#individual-environment) and following the instructions, or by requesting access to our existing group environment by using [this template](https://gitlab.com/gitlab-com/it/infra/issue-tracker/-/issues/new?issuable_template=gcp_group_account_iam_update_request).
-- In the GCP console, go to **IAM & Admin** > **Service Accounts** and select **Create new service account**.
-- Name the service account something specific to what you're using it for. Select **Create and Continue**. Under **Grant this service account access to project**, select the role **Vertex AI User**. Select **Continue**, then **Done**.
-- Select your new service account, then select **Manage keys** > **Add Key** > **Create new key**. This downloads the **private** JSON credentials for your service account.
-- Open the Rails console. Update the settings to:
-
-```ruby
-Gitlab::CurrentSettings.update(vertex_ai_credentials: File.read('/YOUR_FILE.json'))
-
-# Note: These credential examples will not work locally for all models
-Gitlab::CurrentSettings.update(vertex_ai_host: "<root-domain>") # Example: us-central1-aiplatform.googleapis.com
-Gitlab::CurrentSettings.update(vertex_ai_project: "<project-id>") # Example: cloud-large-language-models
-```
-
-Internal team members can [use this snippet](https://gitlab.com/gitlab-com/gl-infra/production/-/snippets/2541742) for help configuring these endpoints.
-
-### Configure OpenAI access
-
-```ruby
-Gitlab::CurrentSettings.update(openai_api_key: "<open-ai-key>")
-```
-
-### Configure Anthropic access
-
-```ruby
-Feature.enable(:anthropic_experimentation)
-Gitlab::CurrentSettings.update!(anthropic_api_key: <insert API key>)
-```
-
-### Testing GitLab Duo Chat with predefined questions
-
-Because the success of answers to user questions in GitLab Duo Chat depends heavily on the toolchain and prompts of each tool, it's common that even a minor change in a prompt or a tool impacts the processing of some questions. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following RSpec tests to validate answers to some predefined questions:
-
-```shell
-export OPENAI_API_KEY='<key>'
-export ANTHROPIC_API_KEY='<key>'
-REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
-```
-
-When you need to update the test questions that require documentation embeddings,
-make sure a new fixture is generated and committed together with the change.
-
-#### Populating embeddings and using embeddings fixture
-
-To seed your development database with the embeddings for GitLab Documentation,
-you may use the pre-generated embeddings and a Rake task.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:seed_pre_generated
-```
-
-The DBCleaner gem we use clears the database tables before each test runs.
-Instead of fully populating the `tanuki_bot_mvc` table, where we store embeddings for the documentation,
-we can add a few selected embeddings to the table from a pre-generated fixture.
-
-For instance, to test that the question "How can I reset my password" correctly
-retrieves the relevant embeddings and is answered, we can extract the top N closest embeddings
-to the question into a fixture and restore only a small number of embeddings quickly.
-To facilitate the extraction process, a Rake task has been written.
-You can add or remove the questions to be tested in the Rake task, then run the task to generate a new fixture.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:extract_embeddings
-```
-
-In the specs where you need to use the embeddings,
-use the RSpec config hook `:ai_embedding_fixtures` on a context.
-
-```ruby
-context 'when asking about how to use GitLab', :ai_embedding_fixtures do
- # ...examples
-end
-```
-
-## Experimental REST API
-
-Use the [experimental REST API endpoints](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/ai/experimentation) to quickly experiment and prototype AI features.
-
-The endpoints are:
-
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/embeddings`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/chat/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/anthropic/complete`
-- `https://gitlab.example.com/api/v4/ai/experimentation/tofa/chat`
-
-These endpoints are only for prototyping, not for rolling features out to customers.
-The experimental endpoint is only available to GitLab team members on production. Use the
-[GitLab API token](../user/profile/personal_access_tokens.md) to authenticate.
-
-## Abstraction layer
-
-### GraphQL API
-
-To connect to the AI provider API using the Abstraction Layer, use an extendable GraphQL API called
-[`aiAction`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/graphql/mutations/ai/action.rb).
-The `input` accepts key/value pairs, where the `key` is the action that needs to be performed.
-We only allow one AI action per mutation request.
-
-Example of a mutation:
-
-```graphql
-mutation {
- aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
- clientMutationId
- }
-}
-```
-
-As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key,
-`explainCode`. The mutation would look like this:
-
-```graphql
-mutation {
- aiAction(input: {explainCode: {resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log()" }}) {
- clientMutationId
- }
-}
-```
-
-The GraphQL API then uses the [OpenAI Client](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/open_ai/client.rb)
-to send the response.
-
-Remember that other clients are available; you are not limited to OpenAI.
-
-#### How to receive a response
-
-As the OpenAI API requests are handled in a background job, we do not keep the request alive and
-the response is sent through the `aiCompletionResponse` subscription:
-
-```graphql
-subscription aiCompletionResponse($userId: UserID, $resourceId: AiModelID!) {
- aiCompletionResponse(userId: $userId, resourceId: $resourceId) {
- responseBody
- errors
- }
-}
-```
-
-WARNING:
-You should only subscribe to the subscription after the mutation is sent. If multiple subscriptions are active on the same page, they currently all receive updates, because our identifier is the user and the resource. To mitigate this, subscribe only when the mutation is sent; you can use [`skip()`](https://apollo.vuejs.org/guide/apollo/subscriptions.html#skipping-the-subscription) for this case. To prevent this problem in the future, we are implementing a [request identifier](https://gitlab.com/gitlab-org/gitlab/-/issues/408196).
-
-#### Current abstraction layer flow
-
-The following graph uses OpenAI as an example. You can use different providers.
-
-```mermaid
-flowchart TD
-A[GitLab frontend] -->B[AiAction GraphQL mutation]
-B --> C[Llm::ExecuteMethodService]
-C --> D[One of services, for example: Llm::GenerateSummaryService]
-D -->|scheduled| E[AI worker:Llm::CompletionWorker]
-E -->F[::Gitlab::Llm::Completions::Factory]
-F -->G[`::Gitlab::Llm::OpenAi::Completions::...` class using `::Gitlab::Llm::OpenAi::Templates::...` class]
-G -->|calling| H[Gitlab::Llm::OpenAi::Client]
-H --> |response| I[::Gitlab::Llm::OpenAi::ResponseService]
-I --> J[GraphqlTriggers.ai_completion_response]
-J --> K[::GitlabSchema.subscriptions.trigger]
-```
-
-## CircuitBreaker
-
-The CircuitBreaker concern is a reusable module that you can include in any class that needs to run code with circuit breaker protection. The concern provides a `run_with_circuit` method that wraps a code block with circuit breaker functionality, which helps prevent cascading failures and improves system resilience. For more information about the circuit breaker pattern, see:
-
-- [What is Circuit breaker](https://martinfowler.com/bliki/CircuitBreaker.html).
-- [The Hystrix documentation on CircuitBreaker](https://github.com/Netflix/Hystrix/wiki/How-it-Works#circuit-breaker).
-
-### Use CircuitBreaker
-
-To use the CircuitBreaker concern, you need to include it in a class. For example:
-
-```ruby
-class MyService
- include Gitlab::Llm::Concerns::CircuitBreaker
-
- def call_external_service
- run_with_circuit do
- # Code that interacts with external service goes here
-
- raise InternalServerError
- end
- end
-end
-```
-
-The `call_external_service` method is an example method that interacts with an external service.
-By wrapping the code that interacts with the external service with `run_with_circuit`, the method is executed within the circuit breaker.
-The circuit breaker is created and configured by the `circuit` method, which is called automatically when the `CircuitBreaker` module is included.
-The code block should raise `InternalServerError`, which is counted towards the error threshold when raised during execution of the block.
-
-The circuit breaker tracks the number of errors and the rate of requests,
-and opens the circuit if it reaches the configured error threshold or volume threshold.
-If the circuit is open, subsequent requests fail fast without executing the code block, and the circuit breaker periodically allows a small number of requests through to test the service's availability before closing the circuit again.
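
This open/fail-fast behavior can be illustrated with a minimal, self-contained sketch. `TinyCircuit` is a hypothetical stand-in, not the actual `Circuitbox`-backed implementation, and it omits the periodic half-open probing described above:

```ruby
# Minimal illustration of circuit breaker behavior (hypothetical class).
# After the error threshold is reached, the circuit opens and subsequent
# calls fail fast without executing the block.
class TinyCircuit
  def initialize(error_threshold:)
    @error_threshold = error_threshold
    @errors = 0
    @open = false
  end

  def run
    return :circuit_open if @open # fail fast without executing the block

    begin
      yield
    rescue StandardError
      @errors += 1
      @open = true if @errors >= @error_threshold # open the circuit
      nil
    end
  end
end

breaker = TinyCircuit.new(error_threshold: 2)
breaker.run { raise "boom" } # first error
breaker.run { raise "boom" } # second error opens the circuit
breaker.run { "never runs" } # => :circuit_open
```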
-
-### Configuration
-
-The circuit breaker is configured with two constants which control the number of errors and requests at which the circuit will open:
-
-- `ERROR_THRESHOLD`
-- `VOLUME_THRESHOLD`
-
-You can adjust these values as needed for the specific service and usage pattern.
-`InternalServerError` is the exception class that is counted towards the error threshold and triggers the circuit breaker when raised by the code that interacts with the external service.
-
-NOTE:
-The `CircuitBreaker` module depends on the `Circuitbox` gem to provide the circuit breaker implementation. By default, the service name is inferred from the class name where the concern module is included. Override the `service_name` method if the name needs to be different.
-
-### Testing
-
-To test code that uses the `CircuitBreaker` concern, you can use `RSpec` shared examples and pass the `service` and `subject` variables:
-
-```ruby
-it_behaves_like 'has circuit breaker' do
- let(:service) { dummy_class.new }
- let(:subject) { service.dummy_method }
-end
-```
-
-## How to implement a new action
-
-### Register a new method
-
-Go to the `Llm::ExecuteMethodService` and add a new method with the new service class you will create.
-
-```ruby
-class ExecuteMethodService < BaseService
- METHODS = {
- # ...
- amazing_new_ai_feature: Llm::AmazingNewAiFeatureService
- }.freeze
-```
-
-### Create a Service
-
-1. Create a new service under `ee/app/services/llm/` and inherit it from the `BaseService`.
-1. The `resource` is the object we want to act on. It can be any object that includes the `Ai::Model` concern. For example, it could be a `Project`, `MergeRequest`, or `Issue`.
-
-```ruby
-# ee/app/services/llm/amazing_new_ai_feature_service.rb
-
-module Llm
- class AmazingNewAiFeatureService < BaseService
- private
-
- def perform
- ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :amazing_new_ai_feature)
- success
- end
-
- def valid?
- super && Ability.allowed?(user, :amazing_new_ai_feature, resource)
- end
- end
-end
-```
-
-### Authorization
-
-We recommend using [policies](policies.md) to handle authorization for a feature. Currently, the following checks must be covered:
-
-1. The general AI feature flag is enabled.
-1. The feature-specific feature flag is enabled.
-1. The namespace has the required license for the feature.
-1. The user is a member of the group or project.
-1. The `experiment_features_enabled` and `third_party_ai_features_enabled` flags are set on the `Namespace`.
-
-For our example, we need to implement the `allowed?(:amazing_new_ai_feature)` call. As an example, you can look at the [Issue Policy for the summarize comments feature](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb). In our example case, we want to implement the feature for Issues as well:
-
-```ruby
-# ee/app/policies/ee/issue_policy.rb
-
-module EE
- module IssuePolicy
- extend ActiveSupport::Concern
- prepended do
- with_scope :subject
- condition(:ai_available) do
- ::Feature.enabled?(:openai_experimentation)
- end
-
- with_scope :subject
- condition(:amazing_new_ai_feature_enabled) do
- ::Feature.enabled?(:amazing_new_ai_feature, subject_container) &&
- subject_container.licensed_feature_available?(:amazing_new_ai_feature)
- end
-
- rule do
- ai_available & amazing_new_ai_feature_enabled & is_project_member
- end.enable :amazing_new_ai_feature
- end
- end
-end
-```
-
-### Pairing requests with responses
-
-Because multiple users' requests can be processed in parallel, when receiving responses,
-it can be difficult to pair a response with its original request. The `requestId`
-field can be used for this purpose, because both the request and the response are guaranteed
-to have the same `requestId` UUID.
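
For illustration, a client can keep a map of pending requests keyed by `requestId`. This is a hypothetical client-side sketch, not code from the GitLab codebase:

```ruby
require "securerandom"

# Client-side pairing of requests and responses by requestId (illustrative).
pending = {}

# Sending a request: remember what it was about under its requestId.
request_id = SecureRandom.uuid
pending[request_id] = { action: :summarize_comments, resource: "gid://gitlab/Issue/52" }

# Receiving a response: the same requestId identifies the original request.
response = { "requestId" => request_id, "responseBody" => "Summary..." }
original = pending.delete(response["requestId"])
# original[:action] == :summarize_comments, and no requests remain pending
```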
-
-### Caching
-
-AI requests and responses can be cached. The cached conversation is used to
-display the user's interaction with AI features. In the current implementation, this cache
-is not used to skip consecutive calls to the AI service when a user repeats
-their requests.
-
-```graphql
-query {
- aiMessages {
- nodes {
- id
- requestId
- content
- role
- errors
- timestamp
- }
- }
-}
-```
-
-This cache is especially useful for chat functionality. For other services,
-caching is disabled. (It can be enabled for a service by using the `cache_response: true`
-option.)
-
-Caching has the following limitations:
-
-- Messages are stored in a Redis stream.
-- There is a single stream of messages per user. This means that all services
-  currently share the same cache. If needed, this could be extended to multiple
-  streams per user (after checking with the infrastructure team that Redis can handle
-  the estimated amount of messages).
-- Only the last 50 messages (requests + responses) are kept.
-- The stream expires 3 days after the last message is added.
-- Users can access only their own messages. There is no authorization at the caching
-  level; any authorization (when accessed by someone other than the current user) is
-  expected at the service layer.
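
The "last 50 messages" limitation can be illustrated with a small sketch, using a plain Ruby array as a stand-in for the Redis stream:

```ruby
# Illustrative: a per-user message stream capped at the last 50 entries,
# mirroring the Redis stream trimming described above.
MAX_MESSAGES = 50

def push_message(stream, message)
  stream << message
  stream.shift while stream.size > MAX_MESSAGES # drop the oldest entries
  stream
end

stream = []
60.times { |i| push_message(stream, "message #{i}") }
stream.size  # => 50
stream.first # => "message 10" (the 10 oldest messages were dropped)
```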
-
-### Check if feature is allowed for this resource based on namespace settings
-
-There are two settings allowed on the root namespace level that restrict the use of AI features:
-
-- `experiment_features_enabled`
-- `third_party_ai_features_enabled`
-
-To check if that feature is allowed for a given namespace, call:
-
-```ruby
-Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
-```
-
-Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate
-between experimental and beta features.
-
-This way we are ready for the following cases:
-
-- If the feature is not in any array, the check returns `true`. For example, the feature was moved to GA and does not use a third-party setting.
-- If the feature is in GA but uses a third-party setting, the class returns the proper answer based on the namespace's third-party setting.
-
-To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
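
The check can be sketched as follows. This is illustrative only: which namespace setting gates which array is an assumption here, and the real logic lives in `Gitlab::Llm::StageCheck`:

```ruby
# Simplified stage check (illustrative): features listed in an array are
# gated by a namespace setting; unlisted features pass unconditionally.
EXPERIMENTAL_FEATURES = [:amazing_new_ai_feature].freeze
BETA_FEATURES = [].freeze

def stage_check_available?(settings, feature)
  if EXPERIMENTAL_FEATURES.include?(feature)
    settings[:experiment_features_enabled]
  elsif BETA_FEATURES.include?(feature)
    settings[:third_party_ai_features_enabled]
  else
    true # feature moved to GA and does not use extra settings
  end
end

settings = { experiment_features_enabled: false, third_party_ai_features_enabled: true }
stage_check_available?(settings, :amazing_new_ai_feature) # => false
stage_check_available?(settings, :some_ga_feature)        # => true
```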
-
-### Implement calls to AI APIs and the prompts
-
-The `CompletionWorker` calls the `Completions::Factory`, which initializes the service and executes the actual call to the API.
-In our example, we use OpenAI and implement two new classes:
-
-```ruby
-# /ee/lib/gitlab/llm/open_ai/completions/amazing_new_ai_feature.rb
-
-module Gitlab
- module Llm
- module OpenAi
- module Completions
- class AmazingNewAiFeature
- def initialize(ai_prompt_class)
- @ai_prompt_class = ai_prompt_class
- end
-
- def execute(user, issue, options)
- options = ai_prompt_class.get_options(options[:messages])
-
- ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)
-
- ::Gitlab::Llm::OpenAi::ResponseService.new(user, issue, ai_response, options: {}).execute(
- Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
- )
- end
-
- private
-
- attr_reader :ai_prompt_class
- end
- end
- end
- end
-end
-```
-
-```ruby
-# /ee/lib/gitlab/llm/open_ai/templates/amazing_new_ai_feature.rb
-
-module Gitlab
- module Llm
- module OpenAi
- module Templates
- class AmazingNewAiFeature
- TEMPERATURE = 0.3
-
- def self.get_options(messages)
- system_content = <<-TEMPLATE
- You are an assistant that writes code for the following input:
- """
- TEMPLATE
-
- {
- messages: [
- { role: "system", content: system_content },
- { role: "user", content: messages },
- ],
- temperature: TEMPERATURE
- }
- end
- end
- end
- end
- end
-end
-```
-
-Because we support multiple AI providers, you may also use those providers for the same example:
-
-```ruby
-Gitlab::Llm::VertexAi::Client.new(user)
-Gitlab::Llm::Anthropic::Client.new(user)
-```
-
-### Add Ai Action to GraphQL
-
-TODO
-
-## Security
-
-Refer to the [secure coding guidelines for Artificial Intelligence (AI) features](secure_coding_guidelines.md#artificial-intelligence-ai-features).
+<!-- This redirect file can be deleted after <2023-12-01>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/ai_features/duo_chat.md b/doc/development/ai_features/duo_chat.md
new file mode 100644
index 00000000000..5c7359eca9f
--- /dev/null
+++ b/doc/development/ai_features/duo_chat.md
@@ -0,0 +1,139 @@
+---
+stage: AI-powered
+group: Duo Chat
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# GitLab Duo Chat
+
+## Set up GitLab Duo Chat
+
+NOTE:
+Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
+
+1. [Enable Anthropic API features](index.md#configure-anthropic-access).
+1. [Enable OpenAI support](index.md#configure-openai-access).
+1. [Ensure the embedding database is configured](index.md#set-up-the-embedding-database).
+1. Enable the feature-specific feature flags:
+
+ ```ruby
+ Feature.enable(:gitlab_duo)
+ Feature.enable(:tanuki_bot)
+ Feature.enable(:ai_redis_cache)
+ ```
+
+1. Ensure that your current branch is up-to-date with `master`.
+1. To access the GitLab Duo Chat interface, in the lower-left corner of any page, select **Help** and **Ask GitLab Duo Chat**.
+
+### Tips for local development
+
+1. When responses are taking too long to appear in the user interface, consider restarting Sidekiq by running `gdk restart rails-background-jobs`. If that doesn't work, try `gdk kill` and then `gdk start`.
+1. Alternatively, bypass Sidekiq entirely and run the chat service synchronously. This can help with debugging errors as GraphQL errors are now available in the network inspector instead of the Sidekiq logs.
+
+```diff
+diff --git a/ee/app/services/llm/chat_service.rb b/ee/app/services/llm/chat_service.rb
+index 5fa7ae8a2bc1..5fe996ba0345 100644
+--- a/ee/app/services/llm/chat_service.rb
++++ b/ee/app/services/llm/chat_service.rb
+@@ -5,7 +5,7 @@ class ChatService < BaseService
+ private
+
+ def perform
+- worker_perform(user, resource, :chat, options)
++ worker_perform(user, resource, :chat, options.merge(sync: true))
+ end
+
+ def valid?
+```
+
+## Working with GitLab Duo Chat
+
+Prompts are the most vital part of the GitLab Duo Chat system. Prompts are the instructions sent to the Large Language Model to perform certain tasks.
+
+The state of the prompts is the result of weeks of iteration. If you want to change any prompt in the current tool, you must put it behind a feature flag.
+
+If you have any new or updated prompts, ask members of the AI Framework team to review them, because they have significant experience with prompts.
+
+## Contributing to GitLab Duo Chat
+
+The Chat feature uses a [zero-shot agent](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb) that includes a system prompt explaining how the large language model should interpret the question and provide an
+answer. The system prompt defines available tools that can be used to gather
+information to answer the user's question.
+
+The zero-shot agent receives the user's question and decides which tools to use to gather information to answer it.
+It then makes a request to the large language model, which decides if it can answer directly or if it needs to use one
+of the defined tools.
+
+The tools each have their own prompt that provides instructions to the large language model on how to use that tool to
+gather information. The tools are designed to be self-sufficient and avoid multiple requests back and forth to
+the large language model.
+
+After the tools have gathered the required information, it is returned to the zero-shot agent, which asks the large language
+model if enough information has been gathered to provide the final answer to the user's question.
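
The agent loop described above can be sketched in plain Ruby. The names here are hypothetical; the real agent lives in `ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb`, and the `llm` lambda stands in for requests to the large language model:

```ruby
# Simplified sketch of the zero-shot agent loop (hypothetical names).
def answer_question(question, tools:, llm:)
  gathered = []

  loop do
    # Ask the model whether it can answer or needs to use a tool first.
    decision = llm.call(question: question, context: gathered)

    return decision[:answer] if decision[:final]

    # Run the chosen tool and feed its observation back to the agent.
    tool = tools.fetch(decision[:tool])
    gathered << tool.call(decision[:tool_input])
  end
end

# Fake model: use the "issue_reader" tool once, then answer.
fake_llm = lambda do |question:, context:|
  if context.empty?
    { final: false, tool: "issue_reader", tool_input: "#52" }
  else
    { final: true, answer: "Issue #52 is about #{context.first}" }
  end
end

tools = { "issue_reader" => ->(input) { "login failures (#{input})" } }
answer_question("What is issue 52 about?", tools: tools, llm: fake_llm)
# => "Issue #52 is about login failures (#52)"
```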
+
+### Adding a new tool
+
+To add a new tool:
+
+1. Create files for the tool in the `ee/lib/gitlab/llm/chain/tools/` folder. Use existing tools like `issue_identifier` or
+ `resource_reader` as a template.
+
+1. Write a class for the tool that includes:
+
+   - Name and description of what the tool does
+   - Example questions that would use this tool
+   - Instructions for the large language model on how to use the tool to gather information, that is, the main prompts this tool uses.
+
+1. Test and iterate on the prompt using RSpec tests that make real requests to the large language model.
+   - Prompts require trial and error; the non-deterministic nature of working with LLMs can be surprising.
+   - Anthropic provides a good [guide](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design) to prompt design.
+   - GitLab also has a [guide](prompts.md) on working with prompts.
+
+1. Implement code in the tool to parse the response from the large language model and return it to the zero-shot agent.
+
+1. Add the new tool name to the `tools` array in `ee/lib/gitlab/llm/completions/chat.rb` so the zero-shot agent knows about it.
+
+1. Add tests by adding questions that the new tool should answer to the test suite. Iterate on the prompts as needed.
+
+The key things to keep in mind are properly instructing the large language model through prompts and tool descriptions,
+keeping tools self-sufficient, and returning responses to the zero-shot agent. With some trial and error on prompts,
+adding new tools can expand the capabilities of the Chat feature.
+
+There are available short [videos](https://www.youtube.com/playlist?list=PL05JrBw4t0KoOK-bm_bwfHaOv-1cveh8i) covering this topic.
+
+## Debugging
+
+To gather more insights about the full request, use the `Gitlab::Llm::Logger` file to debug logs.
+The default logging level on production is `INFO` and **must not** be used to log any data that could contain personal identifying information.
+
+To follow the debugging messages related to the AI requests on the abstraction layer, you can use:
+
+```shell
+export LLM_DEBUG=1
+gdk start
+tail -f log/llm.log
+```
+
+## Testing GitLab Duo Chat with predefined questions
+
+Because the success of answers to user questions in GitLab Duo Chat depends heavily on the toolchain and prompts of each tool, it's common that even a minor change in a prompt or a tool impacts the processing of some questions. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following RSpec tests to validate answers to some predefined questions:
+
+```shell
+export OPENAI_API_KEY='<key>'
+export ANTHROPIC_API_KEY='<key>'
+REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
+```
+
+When you need to update the test questions that require documentation embeddings,
+make sure a new fixture is generated and committed together with the change.
+
+## GraphQL Subscription
+
+The GraphQL Subscription for Chat behaves slightly differently because it's user-centric. A user could have Chat open in multiple browser tabs, or in their IDE.
+We therefore need to broadcast messages to multiple clients to keep them in sync. The `aiAction` mutation with the `chat` action behaves as follows:
+
+1. All complete Chat messages (including messages from the user) are broadcast with the `userId` and the `resourceId` from the mutation as the identifier, ignoring the `clientSubscriptionId`.
+1. Chunks from streamed Chat messages are broadcast with the `userId`, `resourceId`, and `clientSubscriptionId` as the identifier.
+
+To truly sync messages between all clients of a user, we need to remove the `resourceId` as well, which will be fixed by [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/420296).
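
These two identifier rules can be sketched as follows. This is illustrative only; the field names mirror the GraphQL arguments, not the internal implementation:

```ruby
# Illustrative: identifiers used to broadcast Chat messages.
# Complete messages ignore the clientSubscriptionId; streamed chunks include it.
def full_message_topic(user_id:, resource_id:)
  { user_id: user_id, resource_id: resource_id }
end

def chunk_topic(user_id:, resource_id:, client_subscription_id:)
  { user_id: user_id, resource_id: resource_id, client_subscription_id: client_subscription_id }
end

# Two tabs of the same user, chatting on the same resource:
tab_a = chunk_topic(user_id: 1, resource_id: 42, client_subscription_id: "tab-a")
tab_b = chunk_topic(user_id: 1, resource_id: 42, client_subscription_id: "tab-b")

# Complete messages share one topic across tabs, so every client stays in sync;
# streamed chunks go only to the tab that initiated the request.
full_message_topic(user_id: 1, resource_id: 42) == full_message_topic(user_id: 1, resource_id: 42) # => true
tab_a == tab_b # => false
```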
diff --git a/doc/development/ai_features/index.md b/doc/development/ai_features/index.md
new file mode 100644
index 00000000000..e1d3ae36570
--- /dev/null
+++ b/doc/development/ai_features/index.md
@@ -0,0 +1,577 @@
+---
+stage: AI-powered
+group: AI Framework
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# AI features based on 3rd-party integrations
+
+[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/117296) in GitLab 15.11.
+
+## Features
+
+- Async execution of long-running API requests
+ - GraphQL Action starts the request
+ - Background workers execute
+ - GraphQL subscriptions deliver results back in real time
+- Abstraction for
+ - OpenAI
+ - Google Vertex AI
+ - Anthropic
+- Rate Limiting
+- Circuit Breaker
+- Multi-Level feature flags
+- License checks on group level
+- Snowplow execution tracking
+- Tracking of tokens spent on Prometheus
+- Configuration for Moderation check of inputs
+- Automatic Markdown Rendering of responses
+- Centralized group-level settings for experiment and 3rd party
+- Experimental API endpoints for exploration of AI APIs by GitLab team members without the need for credentials
+ - OpenAI
+ - Google Vertex AI
+ - Anthropic
+
+## Feature flags
+
+Apply the following two feature flags to any AI feature work:
+
+- A general flag that applies to all AI features.
+- A flag specific to that feature. The feature flag name [must be different](../feature_flags/index.md#feature-flags-for-licensed-features) than the licensed feature name.
+
+See the [feature flag tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/405161) for the list of all feature flags and how to use them.
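+
+In practice, gating a feature on both flags might look like the following sketch. The general flag `openai_experimentation` appears elsewhere in this page; the feature-specific flag `summarize_comments` is a hypothetical example:
+
+```ruby
+# Hypothetical guard combining the general AI flag with a feature-specific flag.
+# `summarize_comments` is an illustrative flag name, not a real one.
+def ai_feature_available?(container)
+  ::Feature.enabled?(:openai_experimentation) &&
+    ::Feature.enabled?(:summarize_comments, container)
+end
+```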
+
+## Implement a new AI action
+
+To implement a new AI action, connect to the preferred AI provider. You can connect to the provider's API using either the:
+
+- Experimental REST API.
+- Abstraction layer.
+
+All AI features are experimental.
+
+## Test AI features locally
+
+NOTE:
+Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
+
+1. Enable the required general feature flags:
+
+ ```ruby
+ Feature.enable(:ai_related_settings)
+ Feature.enable(:openai_experimentation)
+ ```
+
+1. Set up the GDK to [simulate SaaS](../ee_features.md#simulate-a-saas-instance) and ensure the group you want to test has an Ultimate license.
+1. Enable **Experiment features** and **Third-party AI services**:
+ 1. Go to the group with the Ultimate license
+ 1. **Group Settings** > **General** > **Permissions and group features**
+ 1. Enable **Experiment features**
+ 1. Enable **Third-party AI services**
+1. Enable the specific feature flag for the feature you want to test
+1. Set the required access token. To receive an access token:
+ 1. For Vertex, follow the [instructions below](#configure-gcp-vertex-access).
+ 1. For all other providers, like Anthropic or OpenAI, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
+
+### Set up the embedding database
+
+NOTE:
+Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
+
+For features that use the embedding database, additional setup is needed.
+
+1. Enable [pgvector](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/pgvector.md#enable-pgvector-in-the-gdk) in GDK
+1. Enable the embedding database in GDK
+
+ ```shell
+ gdk config set gitlab.rails.databases.embedding.enabled true
+ ```
+
+1. Run `gdk reconfigure`
+1. Run database migrations to create the embedding database
+
+### Setup for GitLab documentation chat (legacy chat)
+
+To populate the embedding database for GitLab chat:
+
+1. Open a rails console
+1. Run [this script](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/10588#note_1373586079) to populate the embedding database
+
+### Configure GCP Vertex access
+
+To obtain a GCP service key for local development, follow these steps:
+
+- Create a sandbox GCP environment by visiting [this page](https://about.gitlab.com/handbook/infrastructure-standards/#individual-environment) and following the instructions, or by requesting access to our existing group environment by using [this template](https://gitlab.com/gitlab-com/it/infra/issue-tracker/-/issues/new?issuable_template=gcp_group_account_iam_update_request).
+- In the GCP console, go to **IAM & Admin** > **Service Accounts** and select **Create new service account**.
+- Name the service account something specific to what you're using it for. Select **Create and Continue**. Under **Grant this service account access to project**, select the role **Vertex AI User**. Select **Continue**, then **Done**.
+- Select your new service account, then select **Manage keys** > **Add Key** > **Create new key**. This downloads the **private** JSON credentials for your service account.
+- If you are using your own project, you may also need to enable the Vertex AI API:
+ 1. Go to **APIs & Services > Enabled APIs & services**.
+ 1. Select **+ Enable APIs and Services**.
+ 1. Search for `Vertex AI API`.
+ 1. Select **Vertex AI API**, then select **Enable**.
+- Open the Rails console. Update the settings to:
+
+```ruby
+Gitlab::CurrentSettings.update(vertex_ai_credentials: File.read('/YOUR_FILE.json'))
+
+# Note: These credential examples will not work locally for all models
+Gitlab::CurrentSettings.update(vertex_ai_host: "<root-domain>") # Example: us-central1-aiplatform.googleapis.com
+Gitlab::CurrentSettings.update(vertex_ai_project: "<project-id>") # Example: cloud-large-language-models
+```
+
+Internal team members can [use this snippet](https://gitlab.com/gitlab-com/gl-infra/production/-/snippets/2541742) for help configuring these endpoints.
+
+### Configure OpenAI access
+
+```ruby
+Gitlab::CurrentSettings.update(openai_api_key: "<open-ai-key>")
+```
+
+### Configure Anthropic access
+
+```ruby
+Gitlab::CurrentSettings.update!(anthropic_api_key: "<insert API key>")
+```
+
+#### Populating embeddings and using embeddings fixture
+
+To seed your development database with the embeddings for GitLab Documentation,
+you may use the pre-generated embeddings and a Rake task.
+
+```shell
+RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:seed_pre_generated
+```
+
+The DBCleaner gem we use clears the database tables before each test runs.
+Instead of fully populating the table `tanuki_bot_mvc`, where we store embeddings for the documentation,
+we can add a few selected embeddings to the table from a pre-generated fixture.
+
+For instance, to test that the question "How can I reset my password" correctly
+retrieves the relevant embeddings and is answered, we can extract the top N closest embeddings
+to the question into a fixture and restore only a small number of embeddings quickly.
+To facilitate this extraction process, a Rake task has been written.
+You can add or remove the questions that need to be tested in the Rake task, then run the task to generate a new fixture.
+
+```shell
+RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:extract_embeddings
+```
+
+In the specs where you need to use the embeddings,
+use the RSpec config hook `:ai_embedding_fixtures` on a context.
+
+```ruby
+context 'when asking about how to use GitLab', :ai_embedding_fixtures do
+ # ...examples
+end
+```
+
+### Working with GitLab Duo Chat
+
+View [guidelines](duo_chat.md) for working with GitLab Duo Chat.
+
+## Experimental REST API
+
+Use the [experimental REST API endpoints](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/ai/experimentation) to quickly experiment and prototype AI features.
+
+The endpoints are:
+
+- `https://gitlab.example.com/api/v4/ai/experimentation/openai/completions`
+- `https://gitlab.example.com/api/v4/ai/experimentation/openai/embeddings`
+- `https://gitlab.example.com/api/v4/ai/experimentation/openai/chat/completions`
+- `https://gitlab.example.com/api/v4/ai/experimentation/anthropic/complete`
+- `https://gitlab.example.com/api/v4/ai/experimentation/vertex/chat`
+
+These endpoints are only for prototyping, not for rolling features out to customers.
+
+In your local development environment, you can experiment with these endpoints with the feature flag enabled:
+
+```ruby
+Feature.enable(:ai_experimentation_api)
+```
+
+On production, the experimental endpoints are only available to GitLab team members. Use a
+[GitLab API token](../../user/profile/personal_access_tokens.md) to authenticate.
+
+## Abstraction layer
+
+### GraphQL API
+
+To connect to the AI provider API using the Abstraction Layer, use an extendable GraphQL API called
+[`aiAction`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/graphql/mutations/ai/action.rb).
+The `input` accepts key/value pairs, where the `key` is the action that needs to be performed.
+We only allow one AI action per mutation request.
+
+Example of a mutation:
+
+```graphql
+mutation {
+ aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
+ clientMutationId
+ }
+}
+```
+
+As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key,
+`explainCode`. The mutation would look like this:
+
+```graphql
+mutation {
+ aiAction(input: {explainCode: {resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log()" }}) {
+ clientMutationId
+ }
+}
+```
+
+The GraphQL API then uses the [OpenAI Client](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/open_ai/client.rb)
+to send the response.
+
+Remember that clients for other AI providers are available, so you are not limited to OpenAI.
+
+#### How to receive a response
+
+The API requests to AI providers are handled in a background job. We therefore do not keep the request alive, and the frontend needs to match the request to the response from the subscription.
+
+WARNING:
+Determining the right response to a request can cause problems when only `userId` and `resourceId` are used. For example, when two AI features use the same `userId` and `resourceId`, both subscriptions receive each other's responses. To prevent this interference, we introduced the `clientSubscriptionId`.
+
+To match a response on the `aiCompletionResponse` subscription, you can provide a `clientSubscriptionId` to the `aiAction` mutation.
+
+- The `clientSubscriptionId` should be unique per feature and within a page, so it does not interfere with other AI features. We recommend using a `UUID`.
+- Only when the `clientSubscriptionId` is provided as part of the `aiAction` mutation is it used for broadcasting the `aiCompletionResponse`.
+- If the `clientSubscriptionId` is not provided, only `userId` and `resourceId` are used for the `aiCompletionResponse`.
+
+As an example mutation for summarizing comments, we provide a `randomId` as part of the mutation:
+
+```graphql
+mutation {
+ aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}, clientSubscriptionId: "randomId"}) {
+ clientMutationId
+ }
+}
+```
+
+In our component, we then listen on the `aiCompletionResponse` using the `userId`, `resourceId` and `clientSubscriptionId` (`"randomId"`):
+
+```graphql
+subscription aiCompletionResponse($userId: UserID, $resourceId: AiModelID, $clientSubscriptionId: String) {
+ aiCompletionResponse(userId: $userId, resourceId: $resourceId, clientSubscriptionId: $clientSubscriptionId) {
+ content
+ errors
+ }
+}
+```
+
+Note that the [subscription for chat](duo_chat.md#graphql-subscription) behaves differently.
+
+To avoid many concurrent subscriptions, you should also subscribe to the subscription only after the mutation is sent, by using [`skip()`](https://apollo.vuejs.org/guide/apollo/subscriptions.html#skipping-the-subscription).
+
+#### Current abstraction layer flow
+
+The following graph uses OpenAI as an example. You can use different providers.
+
+```mermaid
+flowchart TD
+A[GitLab frontend] -->B[AiAction GraphQL mutation]
+B --> C[Llm::ExecuteMethodService]
+C --> D[One of services, for example: Llm::GenerateSummaryService]
+D -->|scheduled| E[AI worker:Llm::CompletionWorker]
+E -->F[::Gitlab::Llm::Completions::Factory]
+F -->G[`::Gitlab::Llm::OpenAi::Completions::...` class using `::Gitlab::Llm::OpenAi::Templates::...` class]
+G -->|calling| H[Gitlab::Llm::OpenAi::Client]
+H --> |response| I[::Gitlab::Llm::OpenAi::ResponseService]
+I --> J[GraphqlTriggers.ai_completion_response]
+J --> K[::GitlabSchema.subscriptions.trigger]
+```
+
+## CircuitBreaker
+
+The CircuitBreaker concern is a reusable module that you can include in any class that needs to run code with circuit breaker protection. The concern provides a `run_with_circuit` method that wraps a code block with circuit breaker functionality, which helps prevent cascading failures and improves system resilience. For more information about the circuit breaker pattern, see:
+
+- [What is Circuit breaker](https://martinfowler.com/bliki/CircuitBreaker.html).
+- [The Hystrix documentation on CircuitBreaker](https://github.com/Netflix/Hystrix/wiki/How-it-Works#circuit-breaker).
+
+### Use CircuitBreaker
+
+To use the CircuitBreaker concern, you need to include it in a class. For example:
+
+```ruby
+class MyService
+ include Gitlab::Llm::Concerns::CircuitBreaker
+
+ def call_external_service
+ run_with_circuit do
+ # Code that interacts with external service goes here
+
+ raise InternalServerError
+ end
+ end
+end
+```
+
+The `call_external_service` method is an example method that interacts with an external service.
+By wrapping the code that interacts with the external service with `run_with_circuit`, the method is executed within the circuit breaker.
+The circuit breaker is created and configured by the `circuit` method, which is called automatically when the `CircuitBreaker` module is included.
+The method should raise an `InternalServerError` error, which is counted toward the error threshold if raised during the execution of the code block.
+
+The circuit breaker tracks the number of errors and the rate of requests,
+and opens the circuit if it reaches the configured error threshold or volume threshold.
+If the circuit is open, subsequent requests fail fast without executing the code block, and the circuit breaker periodically allows a small number of requests through to test the service's availability before closing the circuit again.
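+
+The behavior described above can be sketched in plain Ruby. This is a simplified illustration of the pattern, not the `Circuitbox`-backed implementation GitLab uses:
+
+```ruby
+# Minimal circuit breaker sketch: opens after too many consecutive errors,
+# then fails fast until the cool-off period has passed.
+class TinyCircuit
+  OpenCircuitError = Class.new(StandardError)
+
+  def initialize(error_threshold: 3, cool_off: 60)
+    @error_threshold = error_threshold
+    @cool_off = cool_off
+    @errors = 0
+    @opened_at = nil
+  end
+
+  def run
+    raise OpenCircuitError, "circuit open" if open?
+
+    begin
+      result = yield
+      @errors = 0 # a success resets the error count
+      result
+    rescue StandardError
+      @errors += 1
+      @opened_at = Time.now if @errors >= @error_threshold
+      raise
+    end
+  end
+
+  def open?
+    # After the cool-off, the next call is let through to probe the service.
+    @opened_at && (Time.now - @opened_at) < @cool_off
+  end
+end
+```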
+
+### Configuration
+
+The circuit breaker is configured with two constants, which control the number of errors and requests at which the circuit opens:
+
+- `ERROR_THRESHOLD`
+- `VOLUME_THRESHOLD`
+
+You can adjust these values as needed for the specific service and usage pattern.
+`InternalServerError` is the exception class that is counted toward the error threshold and triggers the circuit breaker when raised by the code that interacts with the external service.
+
+NOTE:
+The `CircuitBreaker` module depends on the `Circuitbox` gem to provide the circuit breaker implementation. By default, the service name is inferred from the class name where the concern module is included. Override the `service_name` method if the name needs to be different.
+
+### Testing
+
+To test code that uses the `CircuitBreaker` concern, you can use `RSpec` shared examples and pass the `service` and `subject` variables:
+
+```ruby
+it_behaves_like 'has circuit breaker' do
+ let(:service) { dummy_class.new }
+ let(:subject) { service.dummy_method }
+end
+```
+
+## How to implement a new action
+
+### Register a new method
+
+Go to the `Llm::ExecuteMethodService` and add a new method with the new service class you will create.
+
+```ruby
+class ExecuteMethodService < BaseService
+ METHODS = {
+ # ...
+ amazing_new_ai_feature: Llm::AmazingNewAiFeatureService
+ }.freeze
+```
+
+### Create a Service
+
+1. Create a new service under `ee/app/services/llm/` and inherit it from the `BaseService`.
+1. The `resource` is the object we want to act on. It can be any object that includes the `Ai::Model` concern. For example, it could be a `Project`, `MergeRequest`, or `Issue`.
+
+```ruby
+# ee/app/services/llm/amazing_new_ai_feature_service.rb
+
+module Llm
+ class AmazingNewAiFeatureService < BaseService
+ private
+
+ def perform
+ ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :amazing_new_ai_feature)
+ success
+ end
+
+ def valid?
+ super && Ability.allowed?(user, :amazing_new_ai_feature, resource)
+ end
+ end
+end
+```
+
+### Authorization
+
+We recommend using [policies](../policies.md) to handle authorization for a feature. Currently, we need to make sure the following checks are covered:
+
+1. General AI feature flag is enabled
+1. Feature-specific feature flag is enabled
+1. The namespace has the required license for the feature
+1. User is a member of the group/project
+1. `experiment_features_enabled` and `third_party_ai_features_enabled` flags are set on the `Namespace`
+
+For our example, we need to implement the `allowed?(:amazing_new_ai_feature)` call. As an example, you can look at the [Issue Policy for the summarize comments feature](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb). In our example case, we want to implement the feature for Issues as well:
+
+```ruby
+# ee/app/policies/ee/issue_policy.rb
+
+module EE
+ module IssuePolicy
+ extend ActiveSupport::Concern
+ prepended do
+ with_scope :subject
+ condition(:ai_available) do
+ ::Feature.enabled?(:openai_experimentation)
+ end
+
+ with_scope :subject
+ condition(:amazing_new_ai_feature_enabled) do
+ ::Feature.enabled?(:amazing_new_ai_feature, subject_container) &&
+ subject_container.licensed_feature_available?(:amazing_new_ai_feature)
+ end
+
+ rule do
+ ai_available & amazing_new_ai_feature_enabled & is_project_member
+ end.enable :amazing_new_ai_feature
+ end
+ end
+end
+```
+
+### Pairing requests with responses
+
+Because multiple users' requests can be processed in parallel, when receiving responses,
+it can be difficult to pair a response with its original request. The `requestId`
+field can be used for this purpose, because both the request and response are guaranteed
+to have the same `requestId` UUID.
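+
+A sketch of how a client might pair responses with their requests by `requestId`. The hash-based bookkeeping here is illustrative, not GitLab's implementation:
+
+```ruby
+# Keep pending requests keyed by requestId, and look each response up by it.
+pending = {} # requestId => original request payload
+
+pending["uuid-1"] = { action: "summarize_comments" }
+
+response = { "requestId" => "uuid-1", "content" => "Summary..." }
+request = pending.delete(response["requestId"])
+# `request` is now the original request that produced this response.
+```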
+
+### Caching
+
+AI requests and responses can be cached. The cached conversation is used to
+display the user's interaction with AI features. In the current implementation, this cache
+is not used to skip consecutive calls to the AI service when a user repeats
+their requests.
+
+```graphql
+query {
+ aiMessages {
+ nodes {
+ id
+ requestId
+ content
+ role
+ errors
+ timestamp
+ }
+ }
+}
+```
+
+This cache is especially useful for chat functionality. For other services,
+caching is disabled. (It can be enabled for a service by using `cache_response: true`
+option.)
+
+Caching has the following limitations:
+
+- Messages are stored in a Redis stream.
+- There is a single stream of messages per user. This means that all services
+ currently share the same cache. If needed, this could be extended to multiple
+ streams per user (after checking with the infrastructure team that Redis can handle
+ the estimated amount of messages).
+- Only the last 50 messages (requests and responses) are kept.
+- The stream expires three days after the last message is added.
+- Users can access only their own messages. There is no authorization at the caching
+ level; any authorization (if messages are accessed by someone other than the current
+ user) is expected at the service layer.
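+
+The per-user message cap can be illustrated with a plain Ruby sketch. The real implementation uses a capped Redis stream; this array-based version only mirrors the trimming behavior:
+
+```ruby
+MAX_MESSAGES = 50
+
+# Append a message and keep only the newest MAX_MESSAGES entries,
+# mimicking the capped per-user stream.
+def push_message(messages, message)
+  messages << message
+  messages.shift(messages.size - MAX_MESSAGES) if messages.size > MAX_MESSAGES
+  messages
+end
+```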
+
+### Check if feature is allowed for this resource based on namespace settings
+
+There are two settings allowed on root namespace level that restrict the use of AI features:
+
+- `experiment_features_enabled`
+- `third_party_ai_features_enabled`
+
+To check if that feature is allowed for a given namespace, call:
+
+```ruby
+Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
+```
+
+Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate
+between experimental and beta features.
+
+This way we are ready for the following cases:
+
+- If the feature is not in any array, the check returns `true`. For example, the feature was moved to GA and does not use a third-party setting.
+- If the feature is in GA but uses a third-party setting, the class returns the proper answer based on the namespace's third-party setting.
+
+To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
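+
+A simplified sketch of the kind of check `Gitlab::Llm::StageCheck` performs. The array contents and the single-method shape are illustrative assumptions, not the actual class:
+
+```ruby
+EXPERIMENTAL_FEATURES = [:amazing_new_ai_feature].freeze
+BETA_FEATURES = [].freeze
+
+# Hypothetical simplification: experimental and beta features require both
+# namespace settings; features in neither array are treated as generally available.
+def available?(namespace_settings, feature)
+  return true unless EXPERIMENTAL_FEATURES.include?(feature) || BETA_FEATURES.include?(feature)
+
+  namespace_settings[:experiment_features_enabled] &&
+    namespace_settings[:third_party_ai_features_enabled]
+end
+```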
+
+### Implement calls to AI APIs and the prompts
+
+The `CompletionWorker` calls the `Completions::Factory`, which initializes the service and executes the actual call to the API.
+In our example, we use OpenAI and implement two new classes:
+
+```ruby
+# /ee/lib/gitlab/llm/open_ai/completions/amazing_new_ai_feature.rb
+
+module Gitlab
+ module Llm
+ module OpenAi
+ module Completions
+ class AmazingNewAiFeature
+ def initialize(ai_prompt_class)
+ @ai_prompt_class = ai_prompt_class
+ end
+
+ def execute(user, issue, options)
+ options = ai_prompt_class.get_options(options[:messages])
+
+ ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)
+
+ ::Gitlab::Llm::OpenAi::ResponseService.new(user, issue, ai_response, options: {}).execute(
+ Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
+ )
+ end
+
+ private
+
+ attr_reader :ai_prompt_class
+ end
+ end
+ end
+ end
+end
+```
+
+```ruby
+# /ee/lib/gitlab/llm/open_ai/templates/amazing_new_ai_feature.rb
+
+module Gitlab
+ module Llm
+ module OpenAi
+ module Templates
+ class AmazingNewAiFeature
+ TEMPERATURE = 0.3
+
+ def self.get_options(messages)
+ system_content = <<-TEMPLATE
+ You are an assistant that writes code for the following input:
+ """
+ TEMPLATE
+
+ {
+ messages: [
+ { role: "system", content: system_content },
+ { role: "user", content: messages },
+ ],
+ temperature: TEMPERATURE
+ }
+ end
+ end
+ end
+ end
+ end
+end
+```
+
+Because we support multiple AI providers, you may also use those providers for the same example:
+
+```ruby
+Gitlab::Llm::VertexAi::Client.new(user)
+Gitlab::Llm::Anthropic::Client.new(user)
+```
+
+### Monitoring AI actions
+
+- Error ratio and response latency apdex for each AI action can be found on the [Sidekiq Service dashboard](https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1) under "SLI Detail: llm_completion".
+- Spent tokens, usage of each AI feature, and other statistics can be found on the [Periscope dashboard](https://app.periscopedata.com/app/gitlab/1137231/Ai-Features).
+
+### Add AI action to GraphQL
+
+TODO
+
+## Security
+
+Refer to the [secure coding guidelines for Artificial Intelligence (AI) features](../secure_coding_guidelines.md#artificial-intelligence-ai-features).
diff --git a/doc/development/ai_features/prompts.md b/doc/development/ai_features/prompts.md
new file mode 100644
index 00000000000..f4738055f6b
--- /dev/null
+++ b/doc/development/ai_features/prompts.md
@@ -0,0 +1,28 @@
+---
+stage: AI-powered
+group: AI Framework
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Working with AI prompts
+
+This documentation provides some tips and guidelines for working with AI prompts, particularly aimed at GitLab engineers. The tips are:
+
+1. Set the tone - Describe how the AI assistant should respond, e.g. "You're a helpful assistant specialized in DevSecOps". Giving context helps the AI provide better answers. This establishes expectations for how the AI should communicate.
+1. Be specific - When describing a task, provide lots of details and context to help the AI understand. Give as much specific information as possible. For example, don't just say "summarize this text", provide context like "You are an AI assistant named GitLab Duo. Please read the following text and summarize it in 3 concise sentences focusing on the key points." The more details you provide, the better the AI will perform.
+1. Give examples - Provide examples of potential questions and desired answers. This helps the AI give better responses. For instance, you can provide a sample question like "What is the main idea of this text?" and then give the ideal concise summary as an example response. Always give the instructions first, and then provide illustrative examples.
+1. Guide the input - Use delimiters to clearly indicate where the user's input starts and ends. The AI needs to know what is input. Make it obvious to the model what text is the user input.
+1. Step-by-step reasoning - Ask the AI to explain its reasoning step-by-step. This produces more accurate results. You can get better responses by explicitly asking the model to think through its reasoning step-by-step and show the full explanation. Say something like "Please explain your reasoning step-by-step for how you arrived at your summary:"
+1. Allow uncertainty - Tell the AI to say "I don't know" if it is unsure, to avoid hallucinating answers. Give the model an explicit way out if it does not know the answer to avoid false responses. Say "If you do not know the answer, please respond with 'I don't know'".
+1. Use positive phrasing - Say what the AI should do, not what it shouldn't do, even when prohibiting actions. Although tricky, use positive language as much as possible, even when restricting behavior. For example, say "Please provide helpful, honest responses" rather than "Do not provide harmful or dishonest responses".
+1. Correct language - Use proper English grammar and syntax to help the AI understand. Having technically accurate language and grammar will enable the model to better comprehend the prompt. This is why working with technical writers is very helpful for crafting prompts.
+1. Test different models - Prompts are provider specific. Test new models before fully switching. It's important to recognize prompts do not work equally across different AI providers. Make sure to test performance carefully when changing to a new model, don't assume it will work the same.
+1. Build quality control - Automate testing prompts with RSpec or a Rake task to catch differences. Develop automated checks to regularly test prompts and catch regressions. Use frameworks like RSpec or Rake tasks to build test cases with sample inputs and desired outputs.
+1. Iterate - Refine prompts gradually, testing changes to see their impact. Treat prompt engineering as an iterative process. Make small changes, then test results before continuing. Build up prompts incrementally while continually evaluating effects.
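+
+Several of the tips above can be combined into a single prompt. The following is an illustrative example, not a prompt used by GitLab; `user_input` is a placeholder variable:
+
+```ruby
+# Illustrative prompt applying tone, specificity, delimiters,
+# step-by-step reasoning, and an uncertainty escape hatch.
+user_input = "Text to summarize goes here." # placeholder for the user's text
+
+prompt = <<~PROMPT
+  You are an AI assistant named GitLab Duo, specialized in DevSecOps.
+  Read the user's text between the triple quotes and summarize it in
+  3 concise sentences, explaining your reasoning step-by-step.
+  If you do not know the answer, respond with "I don't know".
+  """
+  #{user_input}
+  """
+PROMPT
+```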
+
+## Further Resources
+
+For more comprehensive prompt engineering guides, see:
+
+- [Prompt Engineering Guide 1](https://www.promptingguide.ai/)
+- [Prompt Engineering Guide 2](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/)
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 440068d55c2..3662b21eb9e 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -1058,6 +1058,17 @@ A taxonomic genus. See: [Wikipedia page on genera](https://wikipedia.org/wiki/Ge
Multiple documentation references can be provided. The syntax for this property
is a `HashMap` where the keys are textual descriptions, and the values are URLs.
+### Subscription tier badges
+
+If a field or argument is available to higher subscription tiers than the other fields,
+add the [tier badge](documentation/styleguide/index.md#product-tier-badges) inline.
+
+For example:
+
+```ruby
+description: '**(ULTIMATE ALL)** Full path of a custom template.'
+```
+
## Authorization
See: [GraphQL Authorization](graphql_guide/authorization.md)
diff --git a/doc/development/audit_event_guide/index.md b/doc/development/audit_event_guide/index.md
index b8af1341919..aeab188fa76 100644
--- a/doc/development/audit_event_guide/index.md
+++ b/doc/development/audit_event_guide/index.md
@@ -55,7 +55,7 @@ With `Gitlab::Audit::Auditor` service, we can instrument audit events in two way
### Using block to record multiple events
-This method is useful when events are emitted deep in the call stack.
+You can use this method when events are emitted deep in the call stack.
For example, we can record multiple audit events when the user updates a merge
request approval rule. As part of this user flow, we would like to audit changes
@@ -118,7 +118,7 @@ end
### Data volume considerations
Because every audit event is persisted to the database, consider the amount of data we expect to generate, and the rate of generation, for new
-audit events. For new audit events that will produce a lot of data in the database, consider adding a
+audit events. For new audit events that produce a lot of data in the database, consider adding a
[streaming-only audit event](#event-streaming) instead. If you have questions about this, feel free to ping
`@gitlab-org/govern/compliance/backend` in an issue or merge request.
@@ -225,6 +225,22 @@ To add a new audit event type:
| `saved_to_database` | yes | Indicate whether to persist events to database and JSON logs |
| `streamed` | yes | Indicate that events should be streamed to external services (if configured) |
+### Generate documentation
+
+Audit event types documentation is automatically generated and [published](../../administration/audit_event_streaming/audit_event_types.md)
+to the GitLab documentation site.
+
+If you add a new audit event type, run the
+[`gitlab:audit_event_types:compile_docs` Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/audit_event_types/audit_event_types.rake)
+to update the documentation:
+
+```shell
+bundle exec rake gitlab:audit_event_types:compile_docs
+```
+
+Run the [`gitlab:audit_event_types:check_docs` Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/audit_event_types/audit_event_types.rake)
+to check if the documentation is up-to-date.
+
## Event streaming
All events where the entity is a `Group` or `Project` are recorded in the audit log, and also streamed to one or more
@@ -233,7 +249,7 @@ All events where the entity is a `Group` or `Project` are recorded in the audit
- `Group`, events are streamed to the group's root ancestor's event streaming destinations.
- `Project`, events are streamed to the project's root ancestor's event streaming destinations.
-You can add streaming-only events that are not stored in the GitLab database. This is primarily intended to be used for actions that generate
+You can add streaming-only events that are not stored in the GitLab database. Streaming-only events are primarily intended to be used for actions that generate
a large amount of data. See [this merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/76719/diffs#d56e47632f0384722d411ed3ab5b15e947bd2265_26_36)
for an example.
This feature is under heavy development. Follow the [parent epic](https://gitlab.com/groups/gitlab-org/-/epics/5925) for updates on feature
@@ -243,5 +259,5 @@ development.
We intentionally do not translate audit event messages because translated messages would be saved in the database and served to users, regardless of their locale settings.
-This could mean, for example, that we use the locale for the currently authenticated user to record an audit event message and stream the message to an external streaming
+For example, this could mean that we use the locale for the authenticated user to record an audit event message and stream the message to an external streaming
destination in the wrong language for that destination. Users could find that confusing.
diff --git a/doc/development/avoiding_required_stops.md b/doc/development/avoiding_required_stops.md
index 0308e0c30c0..5c2197048b0 100644
--- a/doc/development/avoiding_required_stops.md
+++ b/doc/development/avoiding_required_stops.md
@@ -10,7 +10,7 @@ Required stops are any changes to GitLab [components](architecture.md) or
dependencies that result in the need to upgrade to and stop at a specific
`major.minor` version when [upgrading GitLab](../update/index.md).
-While Development maintains a [maintainenance policy](../policy/maintenance.md)
+While Development maintains a [maintenance policy](../policy/maintenance.md)
that results in a three-release (3 month) backport window - GitLab maintains a
much longer window of [version support](https://about.gitlab.com/support/statement-of-support/#version-support)
that includes the current major version, as well as the two previous major
@@ -31,8 +31,16 @@ across greater than 1-3 minor releases.
Wherever possible, a required stop should be avoided. If it can't be avoided,
the required stop should be aligned to a _scheduled_ required stop.
+In cases where we are considering retroactively declaring an unplanned required stop,
+please contact the [Distribution team product manager](https://about.gitlab.com/handbook/product/categories/#distributionbuild-group) to advise on next steps. If there
+is uncertainty about whether we should declare a required stop, the Distribution product
+manager may escalate to GitLab product leadership (VP or Chief Product Officer) to make
+a final determination. This may happen, for example, if a change might require a stop for
+a small subset of very large self-managed installations and there are well-defined workarounds
+if customers run into issues.
+
Scheduled required stops are often implemented for the previous `major`.`minor`
-release just prior to a `major` version release in order to accomodate multiple
+release just prior to a `major` version release in order to accommodate multiple
[planned deprecations](../update/terminology.md#deprecation) and known
[breaking changes](../update/terminology.md#breaking-change).
diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md
index 70aa328bf8a..c4ae0ac5b71 100644
--- a/doc/development/build_test_package.md
+++ b/doc/development/build_test_package.md
@@ -19,12 +19,23 @@ that will create:
commit which triggered the pipeline).
When you push a commit to either the GitLab CE or GitLab EE project, the
-pipeline for that commit will have a `build-package` manual action you can
-trigger.
+pipeline for that commit will have a `trigger-omnibus` job in the `qa` stage you
+can trigger manually (if it hasn't already been triggered).
-![Manual actions](img/build_package_v12_6.png)
+![Trigger omnibus QA job](img/trigger_omnibus_v16_3.png)
-![Build package manual action](img/trigger_build_package_v12_6.png)
+After the child pipeline has started, you can select `trigger-omnibus` to go to
+the child pipeline named `TRIGGERED_EE_PIPELINE`.
+
+![Triggered child pipeline](img/triggered_ee_pipeline_v16_3.png)
+
+Next, select the `Trigger:package` job in the `trigger-package` stage.
+
+When the `Trigger:package` job is finished, it uploads its artifacts to GitLab.
+You can then `Browse` the artifacts and download the `.deb` file, or use the
+GitLab API to download the file straight to your VM. Keep in mind that these
+artifacts have a short expiry, so they are deleted automatically within a day
+or so.
## Specifying versions of components
diff --git a/doc/development/cascading_settings.md b/doc/development/cascading_settings.md
index 42f4b5dd6f3..16ad7b3eab6 100644
--- a/doc/development/cascading_settings.md
+++ b/doc/development/cascading_settings.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/cloud_connector/code_suggestions_for_sm.md b/doc/development/cloud_connector/code_suggestions_for_sm.md
new file mode 100644
index 00000000000..bd8a39bc0d6
--- /dev/null
+++ b/doc/development/cloud_connector/code_suggestions_for_sm.md
@@ -0,0 +1,259 @@
+---
+stage: Data Stores
+group: Cloud Connector
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Cloud Connector MVC: Code Suggestions for Self-Managed/GitLab Dedicated
+
+This document presents the systems and their interactions involved in delivering
+the [Code Suggestions AI feature](../../user/project/repository/code_suggestions/index.md)
+to self-managed and GitLab Dedicated customers. It was considered the MVC
+or initial iteration toward a more extensive vision called [GitLab Cloud Connector](https://gitlab.com/groups/gitlab-org/-/epics/308)
+(formerly "GitLab Plus"), which will allow self-managed customers to benefit from
+features operated by us.
+
+In the context of this document, and any architectural discussions around Cloud Connector features,
+it is important to understand that **Cloud Connector is not a system**. It is an umbrella term
+for all the projects we engage in that make existing SaaS-only features available to
+self-managed and GitLab Dedicated customers. Some of these may be sufficiently similar to share
+solutions, including new systems that deliver these features, but this may not extend to all
+features covered by the Cloud Connector umbrella project.
+
+For the remainder of this document we do not distinguish between self-managed and GitLab Dedicated
+use cases, as at the time of this writing, they are identical. We are exploring any necessary
+deviations needed specifically for GitLab Dedicated in [issue 410394](https://gitlab.com/gitlab-org/gitlab/-/issues/410394).
+
+## Problem statement
+
+Code Suggestions for self-managed users work almost identically to GitLab SaaS:
+
+1. The user:
+ 1. Connects their IDE to a GitLab instance.
+ 1. Requests a code suggestion through their IDE.
+1. The GitLab instance:
+ 1. Authenticates the user and performs instance-local permission and configuration checks.
+ 1. Produces a [JSON Web Token](https://jwt.io/) to use with the [AI gateway](../../architecture/blueprints/ai_gateway/index.md).
+ 1. Forwards the request to the AI gateway with this access token.
+ 1. Returns the response back to the user's IDE.
+
+The unique challenge we had to solve in the context of self-managed instances is step 2b:
+For GitLab SaaS, we can make an instance-local decision about whether a user is allowed to use Code Suggestions
+(or any other feature for that matter) and contact the AI gateway, but for self-managed users we cannot.
+This is because we cannot trust a self-managed instance to produce such a token, as we have no control over these instances.
+In the context of Cloud Connector, this means providing a system for permission delegation that takes into account
+self-managed instance billing and license data, so that once we establish that an instance is eligible
+for this feature based on its payment history, we can issue an access token to this instance.
+
+The architecture and data flow for this solution are outlined in the following section.
+
+## Architecture
+
+NOTE:
+This section covers the architectural details relevant to Code Suggestions in the context of Cloud Connector.
+A more high-level overview can be found in [AI Architecture](../ai_architecture.md).
+
+The Code Suggestions architecture for Cloud Connector serves as a blueprint to implement
+similar features in the future. As mentioned above, the primary problem to solve is verifying that a request
+coming from a self-managed instance is eligible to obtain Code Suggestions from the AI gateway. This can
+be broken down further into two smaller problems:
+
+1. **Instance eligibility.** Code suggestions are a paid feature. The source of truth for subscription state is the
+ customers portal (CustomersDot). A self-managed GitLab instance must therefore involve
+ CustomersDot in the decision for whether an instance is allowed to request code suggestions
+ on behalf of a user before it forwards this request to the AI gateway.
+1. **AI gateway authentication.** Because the AI gateway has no concept of self-managed instance or user identity,
+ the AI gateway must verify that the instance from which a request originates is legitimate.
+ This is handled almost identically to GitLab SaaS. The relevant differences are covered in later sections.
+
+Three systems are involved in the entire flow, with the following responsibilities:
+
+1. **GitLab Rails application:**
+ - Authenticates the current user requesting a code suggestion, as it would any other API request.
+ - Manages and checks instance-level configuration related to Code Suggestions for self-managed installations.
+ - [Runs a background cron job](#gitlabcustomersdot-token-synchronization) that regularly fetches a JWT access token from CustomersDot for use with the AI gateway.
+ - Calls the AI gateway with the given JWT access token, potentially enriching the call
+ with instance-local context.
+1. **CustomersDot:**
+ - Provides a user interface for customers to purchase Code Suggestions.
+ - When a GitLab instance syncs with CustomersDot, checks whether it
+ has an active Code Suggestions purchase, and issues a cryptographically signed JWT access token scoped
+ to the Code Suggestions feature. This mechanism is extensible to other Cloud Connector features.
+ - Acts as an [OIDC Provider](https://openid.net/developers/how-connect-works/) by offering discovery and configuration endpoints
+ that serve the public keys required to validate a Code Suggestions JWT. The AI gateway
+ calls these endpoints (see below).
+1. **AI gateway:**
+ - Services Code Suggestions requests coming from the GitLab application.
+ - Synchronizes with CustomersDot OIDC endpoints to obtain token validation keys.
+ - Validates the JWT access token sent by a GitLab instance using these keys.
+ This establishes the necessary trust relationship between any GitLab instance
+ and the AI gateway, which is hosted by us.
+
+It is important to highlight that from the perspective of the AI gateway, all requests are equal.
+As long as the access token can be verified to be authentic, the request succeeds, whether it
+comes from a GitLab SaaS or a self-managed user.
+
+The following diagram shows how these systems interact at a high level:
+
+<!--
+ Component architecture sources for PlantUML diagram source code:
+ https://gitlab.com/gitlab-org/gitlab/-/snippets/3597299
+
  The comments do not render correctly in various tested browsers, so the diagram is included as a binary image instead.
+-->
+![Code Suggestions architecture and components](img/code_suggestions_components.png)
+
+## Implementation details and data flow
+
+This section breaks down the three primary mechanisms used to deliver Code Suggestions
+to self-managed instances:
+
+1. GitLab/CustomersDot token synchronization.
+1. Proxying AI gateway requests.
+1. AI gateway access token validation.
+
+The sequence diagram below shows what a typical flow looks like.
+
+```mermaid
+sequenceDiagram
+ autonumber
+ participant U as User
+ participant VS as IDE
+ participant SM as GitLab
+ participant CD as CustomersDot
+ participant AI as AI gateway<br/>(aka Model Gateway)
+
+ Note over SM,CD: AI gateway token synchronization
+ loop Sidekiq cron job
+ SM->>CD: sync subscription seat data
+ CD->>CD: verify eligibility
+ Note over CD,SM: Token validity tied to subscription
+ CD-->>SM: seat data + AI gateway access token (JWT)
+ SM->>SM: store seat data + JWT
+ end
+ Note over U,AI: Developer persona
+ U->>SM: create PAT
+ SM-->>U: PAT
+ U->>VS: configure with PAT
+ loop Use code suggestions
+ Note over VS,AI: All requests via AI abstraction layer
+ VS->>SM: get code suggestions with PAT
+ SM->>SM: auth user with PAT
+ SM->>AI: fetch code suggestions with JWT
+ alt Validation key missing
+ AI->>CD: fetch JWKS
+ AI->>AI: cache key set in Redis
+ else
+ AI->>AI: load key set
+ end
+ AI->>AI: validate JWT with<br/>cached JWKS keys
+ AI-->>SM: code suggestions
+ SM-->>VS: code suggestions
+ end
+```
+
+### GitLab/CustomersDot token synchronization
+
+The first problem we had to solve was to determine whether any given self-managed GitLab is
+allowed to use Code Suggestions. The mechanism described below was built for Code
+Suggestions but could serve any other Cloud Connector feature tied to a Cloud License
+subscription.
+
+The source of truth for this from the perspective of the GitLab Rails application is CustomersDot,
+which itself converses with Zuora (a third party subscription service) to determine which
+subscriptions or add-ons are active for a given customer.
+
+Unlike with GitLab SaaS, CustomersDot does not call back into self-managed GitLab instances when
+subscriptions or add-ons are purchased. Instead, a daily synchronization job is scheduled in
+Sidekiq that compares purchases against actual seat usage using the `/api/v1/seat_links`
+REST endpoint on CustomersDot. An instance authenticates itself by posting
+its license key as part of the request, which CustomersDot uses to look up subscription data
+connected to this license, therefore using the license key as a form of authentication token.
+
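The shape of that sync request can be sketched with the Ruby standard library. Only the endpoint path comes from the description above; the host and body fields are illustrative assumptions, not the exact GitLab payload:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical host; the endpoint path is the one described above.
uri = URI('https://customers.gitlab.com/api/v1/seat_links')

request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/json'
request.body = {
  license_key: '<instance license key>', # doubles as the authentication token
  max_historical_user_count: 42          # illustrative seat usage data
}.to_json

# The daily Sidekiq job would then send it, roughly:
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```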
+If CustomersDot deems the instance eligible, it embeds a Code Suggestions token in the response
+payload:
+
+```json
+"service_tokens": {
+ "code_suggestions": {
+ "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI0M2FlOWI4...qmvMVhRS01YRc6a5LaBbhU_m5tw",
+ "expires_at": 1695121894
+ }
+}
+```
+
+- `token` is an [encoded JSON Web Token](https://jwt.io/) signed with RS256, an asymmetric
+  signing algorithm. This allows other services that hold the corresponding public key
+ to validate the authenticity of this token. The token carries [claims](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-token-claims#registered-claims) that receivers
+ can verify, most importantly the `aud` (audience) claim. We use the audience claim to scope
+ access to particular features, such as Code Suggestions. Each paid feature requires a separate
+ token to be issued with a corresponding claim.
+- `expires_at`: UNIX epoch timestamp of the token's expiration time. Tokens currently have
+ an expiration time of 3 days. This TTL was chosen to strike a balance between regularly
+ rolling over access tokens and some leeway in case the token sync fails.
+
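The issuing and verification flow for such a token can be sketched with only the Ruby standard library. This is an illustrative sketch of the mechanism, not the actual CustomersDot or AI gateway code; the claim values mirror the description above:

```ruby
require 'openssl'
require 'json'
require 'base64'

def b64url(data)
  Base64.urlsafe_encode64(data, padding: false)
end

# Issuer side (CustomersDot in the flow above): sign header and claims
# with the RSA private key. The `aud` claim scopes the token to a feature.
key = OpenSSL::PKey::RSA.new(2048)
header = { typ: 'JWT', alg: 'RS256' }
claims = { aud: 'code_suggestions', exp: Time.now.to_i + 3 * 24 * 3600 }
signing_input = "#{b64url(header.to_json)}.#{b64url(claims.to_json)}"
token = "#{signing_input}.#{b64url(key.sign(OpenSSL::Digest.new('SHA256'), signing_input))}"

# Verifier side (the AI gateway): only the public key is needed to check
# authenticity; the audience and expiry claims are then checked explicitly.
input, _, sig = token.rpartition('.')
payload = JSON.parse(Base64.urlsafe_decode64(input.split('.')[1]))
valid = key.public_key.verify(OpenSSL::Digest.new('SHA256'), Base64.urlsafe_decode64(sig), input) &&
        payload['aud'] == 'code_suggestions' &&
        payload['exp'] > Time.now.to_i
```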
+NOTE:
+To sign tokens, CustomersDot maintains a private key.
+For security reasons, we rotate this key on a regular basis. For more information, refer
+to [the key rotation process for CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/main/doc/architecture/add_ons/code_suggestions/authorization_for_self_managed.md#jwk-signing-key-rotation).
+
+Upon receiving a response, GitLab Rails stores this token in Postgres and removes any
+other tokens it may have previously stored. When users request a code suggestion, GitLab
+can then load this token and attach it to AI gateway requests, which is described next.
+
+### Proxying AI gateway requests
+
+Given the JWT access token described above, a GitLab instance is ready to serve Code Suggestions
+requests to users. Upon receiving a request to `/api/v4/code_suggestions/completions`, GitLab:
+
+1. Authenticates this request with a user's Personal Access Token (PAT), as it would any REST API call.
+ Users configure this token in their IDE settings.
+1. Verifies that the administrator has the Code Suggestions feature enabled and the instance
+ license tier includes this feature.
+1. Loads the JWT access token from the database.
+1. Forwards the request to the AI gateway with the token attached.
+
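The four steps can be condensed into a runnable sketch. All class, method, and token names below are hypothetical stand-ins for illustration, not the actual GitLab Rails implementation:

```ruby
require 'set'

class CompletionsEndpoint
  def initialize(pats:, settings:, jwt_store:, gateway:)
    @pats = pats            # valid personal access tokens
    @settings = settings    # instance-level Code Suggestions configuration
    @jwt_store = jwt_store  # JWT persisted by the CustomersDot sync job
    @gateway = gateway      # callable standing in for the AI gateway
  end

  def call(pat:, prompt:)
    raise 'unauthorized' unless @pats.include?(pat)               # 1. authenticate the user's PAT
    raise 'forbidden' unless @settings[:code_suggestions_enabled] # 2. feature and license checks
    jwt = @jwt_store.fetch(:code_suggestions)                     # 3. load the stored JWT
    @gateway.call(prompt, "Bearer #{jwt}")                        # 4. forward with the token attached
  end
end

endpoint = CompletionsEndpoint.new(
  pats: Set['glpat-example'],
  settings: { code_suggestions_enabled: true },
  jwt_store: { code_suggestions: 'eyJ<...>' },
  gateway: ->(prompt, auth) { { 'choices' => ["suggestion for #{prompt}"], 'auth' => auth } }
)
response = endpoint.call(pat: 'glpat-example', prompt: 'def sum(a, b)')
```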
+As with GitLab SaaS requests, the downstream call uses Workhorse's `senddata` feature. This
+mechanism yields control to Workhorse by responding with a `send-url` header field. Workhorse
+then intercepts this response and calls into the AI gateway given the request URL, headers
+and payload provided through `send-url`. This prevents the Puma process from stalling on downstream I/O,
+removing a scalability bottleneck.
+
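A sketch of what that handoff looks like, assuming illustrative URL and parameter fields (see the Workhorse senddata documentation for the real format):

```ruby
require 'base64'
require 'json'

# Rails encodes the downstream request parameters and yields to Workhorse
# via a response header. The URL and field names here are illustrative.
params = {
  'URL' => 'http://ai-gateway.example.com/v2/code/completions',
  'Header' => { 'Authorization' => ['Bearer <JWT access token>'] }
}
send_data = "send-url:#{Base64.urlsafe_encode64(params.to_json)}"
# In a controller: response.headers['Gitlab-Workhorse-Send-Data'] = send_data

# Workhorse splits on the first colon and decodes the payload before
# performing the downstream call itself:
kind, payload = send_data.split(':', 2)
decoded = JSON.parse(Base64.urlsafe_decode64(payload))
```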
+This process is mostly identical to GitLab SaaS. The biggest difference is
+how GitLab Rails resolves user permissions and loads the access token. GitLab SaaS can self-issue the access token
+because billing is not handled by CustomersDot.
+
+### AI gateway access token validation
+
+The next problem we had to solve was authenticating the Code Suggestions requests that a self-managed GitLab sends
+to the AI gateway on behalf of a user. The AI gateway does not and should not have any knowledge
+of a customer's instance. Instead, the gateway verifies the request's authenticity by decoding the JWT
+access token enclosed in the request using a public key it fetches from CustomersDot, which is
+the original issuer of the token.
+
+The AI gateway accomplishes this by first requesting a [JSON Web Key Set](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets)
+from CustomersDot. The keyset obtained from the JWKS endpoint is then cached for 24 hours and used to decode any
+JWTs attached to Code Suggestions requests. Note that token expiration is implicitly enforced
+in that expired tokens fail to decode, in which case the AI gateway rejects this request.
+
+The steps to obtain the JWKS and verify tokens are detailed in [the AI service verification sequence diagram](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/main/doc/architecture/add_ons/code_suggestions/authorization_for_self_managed.md#gitlab-hosted-ai-service-flow-to-verify-jwt-token) for CustomersDot.
+
+This process is mostly identical to GitLab SaaS. The only difference is that the AI gateway obtains validation keys
+from CustomersDot instead of GitLab SaaS, which self-issues its own tokens.
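The 24-hour caching behavior can be modeled in a few lines. The real AI gateway is a separate service (not written in Ruby); this only sketches the TTL logic described above, with hypothetical names:

```ruby
# Caches the JWKS fetched from CustomersDot and refreshes it once the
# 24-hour TTL elapses; tokens are validated against the cached keys.
class KeysetCache
  TTL = 24 * 3600

  def initialize(&fetcher)
    @fetcher = fetcher # callable that requests the JWKS from CustomersDot
  end

  def keys(now:)
    if @keys.nil? || now - @fetched_at > TTL
      @keys = @fetcher.call
      @fetched_at = now
    end
    @keys
  end
end

fetches = 0
cache = KeysetCache.new { fetches += 1; [{ 'kid' => 'example-key-id' }] }
cache.keys(now: 0)          # first request: fetches the key set
cache.keys(now: 3_600)      # within the TTL: served from cache
cache.keys(now: 25 * 3_600) # TTL elapsed: fetched again
```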
+
+## Gaps and outlook
+
+Code Suggestions was the first Cloud Connector feature we looked at, but we expect many more
+to be designed and built, some of which may require different technical approaches from what is
+documented here.
+
+Some areas that are not currently well-defined or understood include:
+
+- Support for GitLab Dedicated and regional deployments. This is currently being investigated in
+ [issue 410394](https://gitlab.com/gitlab-org/gitlab/-/issues/410394).
+- The impact on end-user experience when a GitLab instance is deployed to a geographic region that
+ has high latency overhead when connecting to a GitLab service in US-east.
+- There are some known usability issues with relying solely on a daily Sidekiq job to fetch access
+ tokens. We are exploring ways to improve this in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/11289).
+- Rate-limiting requests at the GitLab instance level was out of scope for the MVC. We are exploring
+ this idea in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/420123).
diff --git a/doc/development/cloud_connector/img/code_suggestions_components.png b/doc/development/cloud_connector/img/code_suggestions_components.png
new file mode 100644
index 00000000000..3f41d1d1a6b
--- /dev/null
+++ b/doc/development/cloud_connector/img/code_suggestions_components.png
Binary files differ
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index a8c527ad30e..17c16e79232 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -161,12 +161,6 @@ The [Roulette dashboard](https://gitlab-org.gitlab.io/gitlab-roulette/) contains
For more information, review [the roulette README](https://gitlab.com/gitlab-org/gitlab-roulette).
-As an experiment, we want to introduce a `local` reviewer status for database reviews. Local reviewers are reviewers
-focusing on work from a team/stage, but not outside of it. This helps to focus and build great domain
-knowledge. We are not introducing changes to the reviewer roulette till we evaluate the impact and feedback from this
-experiment. We ask to respect reviewers who decline reviews based on their focus on `local` reviews. For tracking purposes,
-please use in your personal YAML file entry: `- reviewer database local` instead of `- reviewer database`.
-
### Approval guidelines
As described in the section on the responsibility of the maintainer below, you
diff --git a/doc/development/code_suggestions/index.md b/doc/development/code_suggestions/index.md
new file mode 100644
index 00000000000..38fd6200ace
--- /dev/null
+++ b/doc/development/code_suggestions/index.md
@@ -0,0 +1,56 @@
+---
+stage: Create
+group: Code Creation
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Code Suggestions development guidelines
+
+## Code Suggestions development setup
+
+The recommended setup for locally developing and debugging Code Suggestions is to have all three components running:
+
+- IDE Extension (e.g. VSCode Extension)
+- Main application configured correctly
+- [Model gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist)
+
+With this setup, you can see locally how a change in the IDE is sent to the main application, transformed into a prompt, and then sent to the respective model.
+
+### Setup instructions
+
+1. Install and run locally the [VSCode Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/CONTRIBUTING.md#configuring-development-environment)
+ 1. Add ```"gitlab.debug": true``` to the Code Suggestions development configuration
+ 1. In VSCode, navigate to the Extensions page and find "GitLab Workflow" in the list
+ 1. Open the extension settings by selecting the small cog icon, then select the "Extension Settings" option
+ 1. Select the "GitLab: Debug" checkbox.
+1. Main Application
+ 1. Enable Feature Flags ```code_suggestions_completion_api``` and ```code_suggestions_tokens_api```
+ 1. In your terminal, navigate to the `gitlab` directory inside your `gitlab-development-kit` directory
+ 1. Run `bundle exec rails c` to start a Rails console
+ 1. Call `Feature.enable(:code_suggestions_completion_api)` and `Feature.enable(:code_suggestions_tokens_api)` from the console
+ 1. Run the GDK with ```export CODE_SUGGESTIONS_BASE_URL=http://localhost:5052```
+1. [Setup Model Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist#how-to-run-the-server-locally)
+ 1. Build tree sitter libraries ```poetry run scripts/build-tree-sitter-lib.py```
+ 1. Make extra `.env` changes for more debugging insight:
+ 1. LOG_LEVEL=DEBUG
+ 1. LOG_FORMAT_JSON=false
+ 1. LOG_TO_FILE=true
+ 1. Watch the new log file ```modelgateway_debug.log```, for example ```tail -f modelgateway_debug.log | fblog -a prefix -a suffix -a current_file_name -a suggestion -a language -a input -a parameters -a score -a exception```
+
+### Setup instructions to use staging Model Gateway
+
+When testing interactions with the Model Gateway, you might want to integrate your local GDK
+with the deployed staging Model Gateway. To do this:
+
+1. You need a [cloud staging license](../../user/project/repository/code_suggestions/self_managed.md#update-gitlab) that has the Code Suggestions add-on, because add-ons are enabled on staging. Drop a note in the `#s_fulfillment` internal Slack channel to request an add-on to your license. See this [handbook page](https://about.gitlab.com/handbook/developer-onboarding/#working-on-gitlab-ee-developer-licenses) for how to request a license for local development.
+1. Set env variables to point customers-dot to staging, and the Model Gateway to staging:
+
+ ```shell
+ export GITLAB_LICENSE_MODE=test
+ export CUSTOMER_PORTAL_URL=https://customers.staging.gitlab.com
+ export CODE_SUGGESTIONS_BASE_URL=https://codesuggestions.staging.gitlab.com
+ ```
+
+1. Restart the GDK.
+1. Ensure you followed the necessary [steps to enable the Code Suggestions feature](../../user/project/repository/code_suggestions/self_managed.md#gitlab-163-and-later).
+1. Test out the Code Suggestions feature by opening the Web IDE for a project.
diff --git a/doc/development/contributing/index.md b/doc/development/contributing/index.md
index d2063a6836d..4f941b798c1 100644
--- a/doc/development/contributing/index.md
+++ b/doc/development/contributing/index.md
@@ -61,7 +61,7 @@ To write and test your code, you will use the GitLab Development Kit.
- To run a pre-configured GDK instance in the cloud, use [GDK with Gitpod](../../integration/gitpod.md).
From a project repository:
- 1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
+ 1. On the left sidebar, select **Search or go to** and find your project.
1. In the upper right, select **Edit > Gitpod**.
1. If you want to contribute to the [website](https://about.gitlab.com/) or the [handbook](https://about.gitlab.com/handbook/),
go to the footer of any page and select **Edit in Web IDE** to open the [Web IDE](../../user/project/web_ide/index.md).
diff --git a/doc/development/database/avoiding_downtime_in_migrations.md b/doc/development/database/avoiding_downtime_in_migrations.md
index 371df5b45ff..5a2343e883c 100644
--- a/doc/development/database/avoiding_downtime_in_migrations.md
+++ b/doc/development/database/avoiding_downtime_in_migrations.md
@@ -14,8 +14,7 @@ requiring downtime.
## Dropping columns
-Removing columns is tricky because running GitLab processes may still be using
-the columns. To work around this safely, you need three steps in three releases:
+Removing columns is tricky because running GitLab processes expect these columns to exist: ActiveRecord caches the table schema, so even columns that the application no longer references can cause errors unless they are explicitly marked as ignored. To work around this safely, you need three steps in three releases:
1. [Ignoring the column](#ignoring-the-column-release-m) (release M)
1. [Dropping the column](#dropping-the-column-release-m1) (release M+1)
diff --git a/doc/development/database/batched_background_migrations.md b/doc/development/database/batched_background_migrations.md
index 10490df7b5e..a6d827df820 100644
--- a/doc/development/database/batched_background_migrations.md
+++ b/doc/development/database/batched_background_migrations.md
@@ -193,9 +193,10 @@ Because batched migrations are update heavy and there were few incidents in the
These database indicators are checked to throttle a migration. On getting a
stop signal, the migration is paused for a set time (10 minutes):
-- WAL queue pending archival crossing a threshold.
+- WAL queue pending archival crossing the threshold.
 - Active autovacuum on the tables that the migration works on.
- Patroni apdex SLI dropping below the SLO.
+- WAL rate crossing the threshold.
It's an ongoing effort to add more indicators to further enhance the
database health check framework. For more details, see
diff --git a/doc/development/database/clickhouse/clickhouse_within_gitlab.md b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
new file mode 100644
index 00000000000..297776429d7
--- /dev/null
+++ b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
@@ -0,0 +1,237 @@
+---
+stage: Data Stores
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# ClickHouse within GitLab
+
+This document gives a high-level overview of how to develop features using ClickHouse in the GitLab Rails application.
+
+NOTE:
+Most of the tooling and APIs are considered unstable.
+
+## GDK setup
+
+For instructions on how to set up a ClickHouse server locally, see the [ClickHouse installation documentation](https://clickhouse.com/docs/en/install).
+
+### Configure your Rails application
+
+1. Copy the example file and configure the credentials:
+
+ ```shell
+   cp config/click_house.yml.example config/click_house.yml
+ ```
+
+1. Create the database using the `clickhouse-client` CLI tool:
+
+ ```shell
+ clickhouse-client --password
+ ```
+
+ ```sql
+ create database gitlab_clickhouse_development;
+ ```
+
+### Validate your setup
+
+Run the Rails console and invoke a simple query:
+
+```ruby
+ClickHouse::Client.select('SELECT 1', :main)
+# => [{"1"=>1}]
+```
+
+## Database schema and migrations
+
+For the ClickHouse database there are no established schema migration procedures yet. We have very basic tooling to build up the database schema in the test environment from scratch using timestamp-prefixed SQL files.
+
+You can create a table by placing a new SQL file in the `db/click_house/main` folder:
+
+```sql
+-- 20230811124511_create_issues.sql
+CREATE TABLE issues
+(
+ id UInt64 DEFAULT 0,
+ title String DEFAULT ''
+)
+ENGINE = MergeTree
+PRIMARY KEY (id)
+```
+
+When you're working locally in your development environment, you can create or re-create your table schema by executing the respective `CREATE TABLE` statement. Alternatively, you can use the following snippet in the Rails console:
+
+```ruby
+require_relative 'spec/support/database/click_house/hooks.rb'
+
+# Drops and re-creates all tables
+ClickHouseTestRunner.new.ensure_schema
+```
+
+## Writing database queries
+
+For the ClickHouse database we don't use an ORM (Object Relational Mapping). The main reason is that the GitLab application has many customizations for the `ActiveRecord` PostgreSQL adapter, and the application generally assumes that all databases are using PostgreSQL. Because ClickHouse-related features are still in a very early stage of development, we decided to implement a simple HTTP client to avoid hard-to-discover bugs and long debugging times when dealing with multiple `ActiveRecord` adapters.
+
+Additionally, ClickHouse might not be used the same way as other adapters for `ActiveRecord`. The access patterns differ from traditional transactional databases, in that ClickHouse:
+
+- Uses nested aggregation `SELECT` queries with `GROUP BY` clauses.
+- Doesn't use single `INSERT` statements. Data is inserted in batches via background jobs.
+- Has different consistency characteristics and no transactions.
+- Has very few database-level validations.
+
+Database queries are written and executed with the help of the `ClickHouse::Client` gem.
+
+A simple query from the `events` table:
+
+```ruby
+rows = ClickHouse::Client.select('SELECT * FROM events', :main)
+```
+
+When working with queries with placeholders, you can use the `ClickHouse::Client::Query` object, where you specify the placeholder name and its data type. The actual variable replacement, quoting, and escaping are done by the ClickHouse server.
+
+```ruby
+raw_query = 'SELECT * FROM events WHERE id > {min_id:UInt64}'
+placeholders = { min_id: Integer(100) }
+query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
+
+rows = ClickHouse::Client.select(query, :main)
+```
+
+When using placeholders, the client can produce a redacted version of the query, with the placeholder values masked, that can be ingested by our logging system. You can see the redacted version of your query by calling the `to_redacted_sql` method:
+
+```ruby
+puts query.to_redacted_sql
+```
+
+ClickHouse allows only one statement per request. This means that the common SQL injection vulnerability where the statement is closed with a `;` character and then another query is "injected" cannot be exploited:
+
+```ruby
+ClickHouse::Client.select('SELECT 1; SELECT 2', :main)
+
+# ClickHouse::Client::DatabaseError: Code: 62. DB::Exception: Syntax error (Multi-statements are not allowed): failed at position 9 (end of query): ; SELECT 2. . (SYNTAX_ERROR) (version 23.4.2.11 (official build))
+```
+
+### Subqueries
+
+You can compose complex queries with the `ClickHouse::Client::Query` class by specifying the query placeholder with the special `Subquery` type. The library will make sure to correctly merge the queries and the placeholders:
+
+```ruby
+subquery = ClickHouse::Client::Query.new(raw_query: 'SELECT id FROM events WHERE id = {id:UInt64}', placeholders: { id: Integer(10) })
+
+raw_query = 'SELECT * FROM events WHERE id > {id:UInt64} AND id IN ({q:Subquery})'
+placeholders = { id: Integer(10), q: subquery }
+
+query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
+rows = ClickHouse::Client.select(query, :main)
+
+# ClickHouse will replace the placeholders
+puts query.to_sql # SELECT * FROM events WHERE id > {id:UInt64} AND id IN (SELECT id FROM events WHERE id = {id:UInt64})
+
+puts query.to_redacted_sql # SELECT * FROM events WHERE id > $1 AND id IN (SELECT id FROM events WHERE id = $2)
+
+puts query.placeholders # { id: 10 }
+```
+
+If placeholders with the same name have different values, building the query raises an error.
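+
+Conceptually, the merge behaves like the following sketch (hypothetical code, not the gem's actual implementation):
+
+```ruby
+# Hypothetical sketch of placeholder merging: identical name/value pairs
+# are merged, while conflicting values for the same name raise an error.
+def merge_placeholders(*placeholder_hashes)
+  placeholder_hashes.reduce({}) do |merged, placeholders|
+    placeholders.each do |name, value|
+      if merged.key?(name) && merged[name] != value
+        raise ArgumentError, "conflicting values for placeholder #{name}"
+      end
+      merged[name] = value
+    end
+    merged
+  end
+end
+
+merge_placeholders({ id: 10 }, { id: 10, q: 5 }) # => { id: 10, q: 5 }
+merge_placeholders({ id: 10 }, { id: 20 })       # raises ArgumentError
+```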
+
+### Writing query conditions
+
+When working with complex forms where multiple filter conditions are present, building queries by concatenating query fragments as strings can get out of hand very quickly. For queries with several conditions, you can use the `ClickHouse::QueryBuilder` class. The class uses the `Arel` gem to generate queries and provides a query interface similar to `ActiveRecord`.
+
+```ruby
+builder = ClickHouse::QueryBuilder.new('events')
+
+query = builder
+ .where(builder.table[:created_at].lteq(Date.today))
+ .where(id: [1,2,3])
+
+rows = ClickHouse::Client.select(query, :main)
+```
+
+## Inserting data
+
+The ClickHouse client supports inserting data through the standard query interface:
+
+```ruby
+raw_query = 'INSERT INTO events (id, target_type) VALUES ({id:UInt64}, {target_type:String})'
+placeholders = { id: 1, target_type: 'Issue' }
+
+query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
+rows = ClickHouse::Client.execute(query, :main)
+```
+
+Inserting data this way is acceptable if:
+
+- The table contains settings or configuration data where we only need to add one row.
+- Test data has to be prepared in the database for testing.
+
+When inserting data, we should always try to use batch processing where multiple rows are inserted at once. Building large `INSERT` queries in memory is discouraged because of the increased memory usage. Additionally, values specified within such queries cannot be redacted automatically by the client.
+
+To compress data and reduce memory usage, insert CSV data. You can do this with the internal [`CsvBuilder`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/gems/csv_builder) gem:
+
+```ruby
+iterator = Event.find_each
+
+# insert from events table using only the id and the target_type columns
+column_mapping = {
+ id: :id,
+ target_type: :target_type
+}
+
+CsvBuilder::Gzip.new(iterator, column_mapping).render do |tempfile|
+ query = 'INSERT INTO events (id, target_type) FORMAT CSV'
+ ClickHouse::Client.insert_csv(query, File.open(tempfile.path), :main)
+end
+```
+
+NOTE:
+It's important to test and verify efficient batching of database records from PostgreSQL. Consider using the techniques described in [Iterating tables in batches](../iterating_tables_in_batches.md).
+
+## Testing
+
+ClickHouse is enabled in CI/CD, but to avoid significantly affecting the pipeline runtime, the ClickHouse server runs only for test cases tagged with `:click_house`.
+
+The `:click_house` tag ensures that the database schema is properly set up before every test case.
+
+```ruby
+RSpec.describe MyClickHouseFeature, :click_house do
+ it 'returns rows' do
+ rows = ClickHouse::Client.select('SELECT 1', :main)
+ expect(rows.size).to eq(1)
+ end
+end
+```
+
+## Multiple databases
+
+By design, the `ClickHouse::Client` library supports configuring multiple databases. Because we're still at a very early stage of development, we only have one database called `main`.
+
+Multi-database configuration example:
+
+```yaml
+development:
+ main:
+ database: gitlab_clickhouse_main_development
+ url: 'http://localhost:8123'
+ username: clickhouse
+ password: clickhouse
+
+ user_analytics: # made up database
+ database: gitlab_clickhouse_user_analytics_development
+ url: 'http://localhost:8123'
+ username: clickhouse
+ password: clickhouse
+```
+
+## Observability
+
+All queries executed via the `ClickHouse::Client` library expose the query, together with performance metrics (timings, read bytes), via `ActiveSupport::Notifications`.
+
+```ruby
+ActiveSupport::Notifications.subscribe('sql.click_house') do |_, _, _, _, data|
+ puts data.inspect
+end
+```
+
+Additionally, to view the ClickHouse queries executed during web interactions, open the performance bar and select the count next to the `ch` label.
diff --git a/doc/development/database/foreign_keys.md b/doc/development/database/foreign_keys.md
index 5dda3dd55a3..84ab32d0c0b 100644
--- a/doc/development/database/foreign_keys.md
+++ b/doc/development/database/foreign_keys.md
@@ -195,5 +195,5 @@ end
```
Using a foreign key as primary key saves space but can make
-[batch counting](../internal_analytics/service_ping/implement.md#batch-counters) in [Service Ping](../service_ping/index.md) less efficient.
+[batch counting](../internal_analytics/service_ping/implement.md#batch-counters) in [Service Ping](../internal_analytics/service_ping/index.md) less efficient.
Consider using a regular `id` column if the table is relevant for Service Ping.
diff --git a/doc/development/database/index.md b/doc/development/database/index.md
index 1ee6aeaa213..70681994229 100644
--- a/doc/development/database/index.md
+++ b/doc/development/database/index.md
@@ -109,6 +109,7 @@ including the major methods:
## ClickHouse
- [Introduction](clickhouse/index.md)
+- [ClickHouse within GitLab](clickhouse/clickhouse_within_gitlab.md)
- [Optimizing query execution](clickhouse/optimization.md)
- [Rebuild GitLab features using ClickHouse 1: Activity data](clickhouse/gitlab_activity_data.md)
- [Rebuild GitLab features using ClickHouse 2: Merge Request analytics](clickhouse/merge_request_analytics.md)
@@ -118,3 +119,4 @@ including the major methods:
- [Maintenance operations](maintenance_operations.md)
- [Update multiple database objects](setting_multiple_values.md)
+- [Batch iteration in a tree hierarchy proof of concept](poc_tree_iterator.md)
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 7037ab22983..4387e19b6df 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -11,6 +11,9 @@ To allow GitLab to scale further we
The two databases are `main` and `ci`. GitLab supports being run with either one database or two databases.
On GitLab.com we are using two separate databases.
+For the purpose of building the [Cells](../../architecture/blueprints/cells/index.md) architecture, we are decomposing
+the databases further to introduce another database, `gitlab_main_clusterwide`.
+
## GitLab Schema
For properly discovering allowed patterns between different databases
@@ -23,17 +26,22 @@ that we cannot use PostgreSQL schema due to complex migration procedures. Instea
the concept of application-level classification.
Each table of GitLab needs to have a `gitlab_schema` assigned:
-- `gitlab_main`: describes all tables that are being stored in the `main:` database (for example, like `projects`, `users`).
-- `gitlab_ci`: describes all CI tables that are being stored in the `ci:` database (for example, `ci_pipelines`, `ci_builds`).
-- `gitlab_geo`: describes all Geo tables that are being stored in the `geo:` database (for example, like `project_registry`, `secondary_usage_data`).
-- `gitlab_shared`: describes all application tables that contain data across all decomposed databases (for example, `loose_foreign_keys_deleted_records`) for models that inherit from `Gitlab::Database::SharedModel`.
-- `gitlab_internal`: describes all internal tables of Rails and PostgreSQL (for example, `ar_internal_metadata`, `schema_migrations`, `pg_*`).
-- `gitlab_pm`: describes all tables that store `package_metadata` (it is an alias for `gitlab_main`).
-- `...`: more schemas to be introduced with additional decomposed databases
+| Schema | Description | Notes |
+| ------ | ----------- | ----- |
+| `gitlab_main` | All tables that are being stored in the `main:` database. | Currently, this is being replaced with `gitlab_main_cell`, for the purpose of building the [Cells](../../architecture/blueprints/cells/index.md) architecture. The `gitlab_main_cell` schema describes all tables that are local to a cell in a GitLab installation (for example, `projects` and `groups`). |
+| `gitlab_main_clusterwide` | All tables that are stored cluster-wide in a GitLab installation, in the [Cells](../../architecture/blueprints/cells/index.md) architecture (for example, `users` and `application_settings`). | |
+| `gitlab_ci` | All CI tables that are being stored in the `ci:` database (for example, `ci_pipelines`, `ci_builds`). | |
+| `gitlab_geo` | All Geo tables that are being stored in the `geo:` database (for example, `project_registry`, `secondary_usage_data`). | |
+| `gitlab_shared` | All application tables that contain data across all decomposed databases (for example, `loose_foreign_keys_deleted_records`) for models that inherit from `Gitlab::Database::SharedModel`. | |
+| `gitlab_internal` | All internal tables of Rails and PostgreSQL (for example, `ar_internal_metadata`, `schema_migrations`, `pg_*`). | |
+| `gitlab_pm` | All tables that store `package_metadata`. | It is an alias for `gitlab_main`. |
+
+More schemas are expected to be introduced with additional decomposed databases.
The usage of schema enforces the base class to be used:
-- `ApplicationRecord` for `gitlab_main`
+- `ApplicationRecord` for `gitlab_main` and `gitlab_main_cell`.
+- `MainClusterwide::ApplicationRecord` for `gitlab_main_clusterwide`.
- `Ci::ApplicationRecord` for `gitlab_ci`
- `Geo::TrackingBase` for `gitlab_geo`
- `Gitlab::Database::SharedModel` for `gitlab_shared`
@@ -465,6 +473,20 @@ You can see a real example of using this method for fixing a cross-join in
#### Allowlist for existing cross-joins
+The easiest way of identifying a cross-join is via failing pipelines.
+
+As an example, in [!130038](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130038/diffs) we moved the `notification_settings` table to the `gitlab_main_cell` schema, by marking it as such in the `db/docs/notification_settings.yml` file.
+
+The pipeline failed with the following [error](https://gitlab.com/gitlab-org/gitlab/-/jobs/4929130983):
+
+```ruby
+Database::PreventCrossJoins::CrossJoinAcrossUnsupportedTablesError:
+
+Unsupported cross-join across 'users, notification_settings' querying 'gitlab_main_clusterwide, gitlab_main_cell' discovered when executing query 'SELECT "users".* FROM "users" WHERE "users"."id" IN (SELECT "notification_settings"."user_id" FROM ((SELECT "notification_settings"."user_id" FROM "notification_settings" WHERE "notification_settings"."source_id" = 119 AND "notification_settings"."source_type" = 'Project' AND (("notification_settings"."level" = 3 AND EXISTS (SELECT true FROM "notification_settings" "notification_settings_2" WHERE "notification_settings_2"."user_id" = "notification_settings"."user_id" AND "notification_settings_2"."source_id" IS NULL AND "notification_settings_2"."source_type" IS NULL AND "notification_settings_2"."level" = 2)) OR "notification_settings"."level" = 2))) notification_settings)'
+```
+
+To make the pipeline green, this cross-join query must be allow-listed.
+
A cross-join across databases can be explicitly allowed by wrapping the code in the
`::Gitlab::Database.allow_cross_joins_across_databases` helper method. Alternative
way is to mark a given relation as `relation.allow_cross_joins_across_databases`.
@@ -494,6 +516,30 @@ def find_actual_head_pipeline
end
```
+In model associations or scopes, this can be used as in the following example:
+
+```ruby
+class Group < Namespace
+ has_many :users, -> {
+ allow_cross_joins_across_databases(url: "https://gitlab.com/gitlab-org/gitlab/-/issues/422405")
+ }, through: :group_members
+end
+```
+
+WARNING:
+Overriding an association can have unintended consequences and may even lead to data loss, as we noticed in [issue 424307](https://gitlab.com/gitlab-org/gitlab/-/issues/424307). Do not override existing ActiveRecord associations to mark a cross-join as allowed, as in the example below.
+
+```ruby
+class Group < Namespace
+ has_many :users, through: :group_members
+
+ # DO NOT override an association like this.
+ def users
+ super.allow_cross_joins_across_databases(url: "https://gitlab.com/gitlab-org/gitlab/-/issues/422405")
+ end
+end
+```
+
The `url` parameter should point to an issue with a milestone for when we intend
to fix the cross-join. If the cross-join is being used in a migration, we do not
need to fix the code. See <https://gitlab.com/gitlab-org/gitlab/-/issues/340017>
@@ -530,7 +576,42 @@ more information, look at the
[transaction guidelines](transaction_guidelines.md#dangerous-example-third-party-api-calls)
page.
-#### Fixing cross-database errors
+#### Fixing cross-database transactions
+
+A transaction across databases can be explicitly allowed by wrapping the code in the
+`Gitlab::Database::QueryAnalyzers::PreventCrossDatabaseModification.temporary_ignore_tables_in_transaction` helper method.
+
+For cross-database transactions in Rails callbacks, the `cross_database_ignore_tables` method can be used.
+
+These methods should only be used for existing code.
+
+The `temporary_ignore_tables_in_transaction` helper method can be used as follows:
+
+```ruby
+class GroupMember < Member
+ def update_two_factor_requirement
+ return unless user
+
+ # To mark and ignore cross-database transactions involving members and users/user_details/user_preferences
+ Gitlab::Database::QueryAnalyzers::PreventCrossDatabaseModification.temporary_ignore_tables_in_transaction(
+ %w[users user_details user_preferences], url: 'https://gitlab.com/gitlab-org/gitlab/-/issues/424288'
+ ) do
+ user.update_two_factor_requirement
+ end
+ end
+end
+```
+
+The `cross_database_ignore_tables` method can be used as follows:
+
+```ruby
+class Namespace < ApplicationRecord
+ include CrossDatabaseIgnoredTables
+
+ # To mark and ignore cross-database transactions involving namespaces and routes/redirect_routes happening within Rails callbacks.
+ cross_database_ignore_tables %w[routes redirect_routes], url: 'https://gitlab.com/gitlab-org/gitlab/-/issues/424277'
+end
+```
##### Removing the transaction block
@@ -616,6 +697,23 @@ or records that point to nowhere, which might lead to bugs. As such we created
["loose foreign keys"](loose_foreign_keys.md) which is an asynchronous
process of cleaning up orphaned records.
+### Allowlist for existing cross-database foreign keys
+
+The easiest way of identifying a cross-database foreign key is via failing pipelines.
+
+As an example, in [!130038](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130038/diffs) we moved the `notification_settings` table to the `gitlab_main_cell` schema, by marking it in the `db/docs/notification_settings.yml` file.
+
+`notification_settings.user_id` is a column that points to the `users` table, which belongs to a different database, so this is now treated as a cross-database foreign key.
+
+We have a spec to capture such cases of cross-database foreign keys in [`no_cross_db_foreign_keys_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/01d3a1e41513200368a22bbab5d4312174762ee0/spec/lib/gitlab/database/no_cross_db_foreign_keys_spec.rb), which would fail if such a cross-database foreign key is encountered.
+
+To make the pipeline green, this cross-database foreign key must be allow-listed.
+
+To do this, explicitly allow the existing cross-database foreign key to exist by adding it as an exception in the same spec (as in [this example](https://gitlab.com/gitlab-org/gitlab/-/blob/7d99387f399c548af24d93d564b35f2f9510662d/spec/lib/gitlab/database/no_cross_db_foreign_keys_spec.rb#L26)).
+This way, the spec will not fail.
+
+Later, this foreign key can be converted to a loose foreign key, like we did in [!130080](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130080/diffs).
+
## Testing for multiple databases
In our testing CI pipelines, we test GitLab by default with multiple databases set up, using
diff --git a/doc/development/database/not_null_constraints.md b/doc/development/database/not_null_constraints.md
index e1b6868c68e..05b1081fc4d 100644
--- a/doc/development/database/not_null_constraints.md
+++ b/doc/development/database/not_null_constraints.md
@@ -72,8 +72,6 @@ The steps required are:
Depending on the size of the table, a background migration for cleanup could be required in the next release.
See the [`NOT NULL` constraints on large tables](not_null_constraints.md#not-null-constraints-on-large-tables) section for more information.
- - Create an issue for the next milestone to validate the `NOT NULL` constraint.
-
1. Release `N.M+1` (next release)
1. Make sure all existing records on GitLab.com have attribute set. If not, go back to step 1 from Release `N.M`.
diff --git a/doc/development/database/poc_tree_iterator.md b/doc/development/database/poc_tree_iterator.md
new file mode 100644
index 00000000000..453f77f0cde
--- /dev/null
+++ b/doc/development/database/poc_tree_iterator.md
@@ -0,0 +1,475 @@
+---
+stage: Data Stores
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Batch iteration in a tree hierarchy (proof of concept)
+
+The group hierarchy in GitLab is represented with a tree, where the root element
+is the top-level namespace, and the child elements are the subgroups or the
+recently introduced `Namespaces::ProjectNamespace` records.
+
+The tree is implemented in the `namespaces` table via the `parent_id` column.
+The column points to the parent namespace record. The top-level namespace has no
+`parent_id`.
+
+Partial hierarchy of `gitlab-org`:
+
+```mermaid
+flowchart TD
+ A("gitlab-org (9979)") --- B("quality (2750817)")
+ B --- C("engineering-productivity (16947798)")
+ B --- D("performance-testing (9453799)")
+ A --- F("charts (5032027)")
+ A --- E("ruby (14018648)")
+```
+
+Efficiently iterating over the group hierarchy has several potential use cases,
+especially in background jobs that need to query the group hierarchy, where
+stable and safe execution is more important than fast runtime. Batch iteration
+requires more network round-trips, but each batch provides similar performance
+characteristics.
+
+A few examples:
+
+- For each subgroup, do something.
+- For each project in the hierarchy, do something.
+- For each issue in the hierarchy, do something.
+
+## Problem statement
+
+A group hierarchy could grow so big that a single query would not be able to load
+it in time. The query would fail with a statement timeout error.
+
+Addressing scalability issues related to very large groups requires us to store
+the same data in different formats (de-normalization). However, if we're unable
+to load the group hierarchy, then de-normalization cannot be implemented.
+
+One de-normalization technique would be to store all descendant group IDs for a
+given group. This would speed up queries where we need to load the group and its
+subgroups. Example:
+
+```mermaid
+flowchart TD
+ A(1) --- B(2)
+ A --- C(3)
+ C --- D(4)
+```
+
+| GROUP_ID | DESCENDANT_GROUP_IDS |
+|----------|------------------------|
+| 1 | `[2,3,4]` |
+| 2 | `[]` |
+| 3 | `[4]` |
+| 4 | `[]` |
+
+With this structure, determining all the subgroups would require us to read only
+one row from the database, instead of 4 rows. For a hierarchy as big as 1000 groups,
+this could make a huge difference.
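+
+In plain Ruby, the de-normalized lists for the example tree above could be computed like this (illustrative only, using the made-up IDs from the diagram):
+
+```ruby
+# Illustrative only: compute descendant ID lists from a child_id => parent_id
+# mapping, mirroring the table above.
+parents = { 2 => 1, 3 => 1, 4 => 3 }
+
+children = Hash.new { |hash, key| hash[key] = [] }
+parents.each { |child, parent| children[parent] << child }
+
+def descendants(children, id)
+  children[id].flat_map { |child| [child] + descendants(children, child) }.sort
+end
+
+descendants(children, 1) # => [2, 3, 4]
+descendants(children, 3) # => [4]
+```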
+
+This de-normalization solves the problem of reading the hierarchy. However,
+we still need a way to persist this data in a table. Because a group and
+its hierarchy could grow very large, we cannot expect a single query to work here.
+
+```sql
+SELECT id FROM namespaces WHERE traversal_ids && ARRAY[9970]
+```
+
+The query above could time out for large groups, so we need to process the data in batches.
+
+Implementing batching logic in a tree is not something we've looked at before,
+and it's fairly complex to implement. An `EachBatch` or `find_in_batches` based
+solution would not work because:
+
+- The group IDs are not sorted within the hierarchy.
+- Groups in subgroups don't know the top-level group ID.
+
+## Algorithm
+
+The batching query is implemented as a recursive CTE SQL query, where one batch
+would read a maximum of N rows. Due to the tree structure, reading N rows might
+not necessarily mean that we're reading N group IDs. If the tree is structured in
+a non-optimal way, a batch could return fewer (but never more) group IDs.
+
+The query implements a [depth-first](https://en.wikipedia.org/wiki/Depth-first_search)
+tree walking logic, where the DB scans the first branch of the tree until the leaf
+element. We implement a depth-first algorithm because, when a batch is finished,
+the query must return enough information for the next batch (cursor). In GitLab,
+we limit the depth of the tree to 20, which means that in the worst case, the
+query would return a cursor containing 19 elements.
+
+Implementing a [breadth-first](https://en.wikipedia.org/wiki/Breadth-first_search)
+tree walking algorithm would be impractical, because a group can have an unbounded
+number of descendants, so we might end up with a huge cursor.
+
+1. Create an initializer row that contains:
+   1. The currently processed group ID (the top-level group ID).
+   1. Two arrays (the tree depth and the collected IDs).
+   1. A counter for tracking the number of row reads in the query.
+1. Recursively process the row and do one of the following (whichever condition matches first):
+   - Load the first child namespace and update the currently processed namespace
+     ID if we're not at the leaf node (walking down a branch).
+   - Load the next namespace record on the current depth if there are any rows left.
+   - Walk up one node and process rows one level higher.
+1. Continue the processing until the number of reads reaches our `LIMIT` (batch size).
+1. Find the last processed row which contains the data for the cursor, and all the collected record IDs.
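+
+The same depth-first walking logic can be sketched in plain Ruby over an in-memory child lookup (a simplified model of the recursive CTE; the IDs match the example trace further below):
+
+```ruby
+# Plain-Ruby sketch of the depth-first walk. `depth` plays the role of the
+# cursor, and `current` stays on the last visited node, so a finished subtree
+# is never walked down again.
+def walk(children, root)
+  current = root
+  depth = [root]   # cursor: path from the root to the current level
+  ids = [root]     # collected ids
+  until depth.empty?
+    if (child = children.fetch(current, []).min)
+      depth << child                     # walkdown
+      ids << child
+      current = child
+    elsif (parent = depth[-2]) &&
+        (sibling = children.fetch(parent, []).select { |c| c > depth[-1] }.min)
+      depth[-1] = sibling                # next element on the same level
+      ids << sibling
+      current = sibling
+    else
+      depth.pop                          # jump up one level
+    end
+  end
+  ids
+end
+
+children = { 24 => [25, 26, 112, 113], 113 => [114] }
+walk(children, 24) # => [24, 25, 26, 112, 113, 114]
+```
+
+The production query additionally stops after a fixed number of reads and resumes from the persisted `depth` array (the cursor).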
+
+```sql
+WITH RECURSIVE result AS (
+ (
+ SELECT
+ 9970 AS current_id, /* current namespace id we're processing */
+ ARRAY[9970]::int[] AS depth, /* cursor */
+ ARRAY[9970]::int[] AS ids, /* collected ids */
+ 1::bigint AS reads,
+ 'initialize' AS action
+ ) UNION ALL
+ (
+ WITH cte AS ( /* trick for referencing the result cte multiple times */
+ select * FROM result
+ )
+ SELECT * FROM (
+ (
+ SELECT /* walk down the branch */
+ namespaces.id,
+ cte.depth || namespaces.id,
+ cte.ids || namespaces.id,
+ cte.reads + 1,
+ 'walkdown'
+ FROM namespaces, cte
+ WHERE
+ namespaces.parent_id = cte.current_id
+ ORDER BY namespaces.id ASC
+ LIMIT 1
+ ) UNION ALL
+ (
+ SELECT /* find next element on the same level */
+ namespaces.id,
+ cte.depth[:array_length(cte.depth, 1) - 1] || namespaces.id,
+ cte.ids || namespaces.id,
+ cte.reads + 1,
+ 'next'
+ FROM namespaces, cte
+ WHERE
+ namespaces.parent_id = cte.depth[array_length(cte.depth, 1) - 1] AND
+ namespaces.id > cte.depth[array_length(cte.depth, 1)]
+ ORDER BY namespaces.id ASC
+ LIMIT 1
+ ) UNION ALL
+ (
+ SELECT /* jump up one node when finished with the current level */
+ cte.current_id,
+ cte.depth[:array_length(cte.depth, 1) - 1],
+ cte.ids,
+ cte.reads + 1,
+ 'jump'
+ FROM cte
+ WHERE cte.depth <> ARRAY[]::int[]
+ LIMIT 1
+ )
+ ) next_row LIMIT 1
+ )
+)
+SELECT current_id, depth, ids, action
+FROM result
+```
+
+```plaintext
+ current_id | depth | ids | action
+------------+--------------+------------------------+------------
+ 24 | {24} | {24} | initialize
+ 25 | {24,25} | {24,25} | walkdown
+ 26 | {24,26} | {24,25,26} | next
+ 112 | {24,112} | {24,25,26,112} | next
+ 113 | {24,113} | {24,25,26,112,113} | next
+ 114 | {24,113,114} | {24,25,26,112,113,114} | walkdown
+ 114 | {24,113} | {24,25,26,112,113,114} | jump
+ 114 | {24} | {24,25,26,112,113,114} | jump
+ 114 | {} | {24,25,26,112,113,114} | jump
+```
+
+NOTE:
+Using this query to find all the namespace IDs in a group hierarchy is likely slower
+than other querying methods, such as the current `self_and_descendants` implementation
+based on the `traversal_ids` column. The query above should only be used when
+implementing batch iteration over the group hierarchy.
+
+Rudimentary batching implementation in Ruby:
+
+```ruby
+class NamespaceEachBatch
+ def initialize(namespace_id:, cursor: nil)
+ @namespace_id = namespace_id
+ @cursor = cursor || { current_id: namespace_id, depth: [namespace_id] }
+ end
+
+ def each_batch(of: 500)
+ current_cursor = cursor.dup
+
+ first_iteration = true
+ loop do
+ new_cursor, ids = load_batch(cursor: current_cursor, of: of, first_iteration: first_iteration)
+ first_iteration = false
+ current_cursor = new_cursor
+
+ yield ids
+
+ break if new_cursor[:depth].empty?
+ end
+ end
+
+ private
+
+ # yields array of namespace ids
+ def load_batch(cursor:, of:, first_iteration: false)
+ recursive_cte = Gitlab::SQL::RecursiveCTE.new(:result,
+ union_args: { remove_order: false, remove_duplicates: false })
+
+ ids = first_iteration ? namespace_id.to_s : ""
+
+ recursive_cte << Namespace.select(
+ Arel.sql(Integer(cursor.fetch(:current_id)).to_s).as('current_id'),
+ Arel.sql("ARRAY[#{cursor.fetch(:depth).join(',')}]::int[]").as('depth'),
+ Arel.sql("ARRAY[#{ids}]::int[]").as('ids'),
+ Arel.sql("1::bigint AS count")
+ ).from('(VALUES (1)) AS does_not_matter').limit(1)
+
+ cte = Gitlab::SQL::CTE.new(:cte, Namespace.select('*').from('result'))
+
+ union_query = Namespace.with(cte.to_arel).from_union(
+ walk_down,
+ next_elements,
+ up_one_level,
+ remove_duplicates: false,
+ remove_order: false
+ ).select('current_id', 'depth', 'ids', 'count').limit(1)
+
+ recursive_cte << union_query
+
+ scope = Namespace.with
+ .recursive(recursive_cte.to_arel)
+ .from(recursive_cte.alias_to(Namespace.arel_table))
+ .limit(of)
+ row = Namespace.from(scope.arel.as('namespaces')).order(count: :desc).limit(1).first
+
+ [
+ { current_id: row[:current_id], depth: row[:depth] },
+ row[:ids]
+ ]
+ end
+
+ attr_reader :namespace_id, :cursor
+
+ def walk_down
+ Namespace.select(
+ Arel.sql('namespaces.id').as('current_id'),
+ Arel.sql('cte.depth || namespaces.id').as('depth'),
+ Arel.sql('cte.ids || namespaces.id').as('ids'),
+ Arel.sql('cte.count + 1').as('count')
+ ).from('cte, LATERAL (SELECT id FROM namespaces WHERE parent_id = cte.current_id ORDER BY id LIMIT 1) namespaces')
+ end
+
+ def next_elements
+ Namespace.select(
+ Arel.sql('namespaces.id').as('current_id'),
+ Arel.sql('cte.depth[:array_length(cte.depth, 1) - 1] || namespaces.id').as('depth'),
+ Arel.sql('cte.ids || namespaces.id').as('ids'),
+ Arel.sql('cte.count + 1').as('count')
+ ).from('cte, LATERAL (SELECT id FROM namespaces WHERE namespaces.parent_id = cte.depth[array_length(cte.depth, 1) - 1] AND namespaces.id > cte.depth[array_length(cte.depth, 1)] ORDER BY id LIMIT 1) namespaces')
+ end
+
+ def up_one_level
+ Namespace.select(
+ Arel.sql('cte.current_id').as('current_id'),
+ Arel.sql('cte.depth[:array_length(cte.depth, 1) - 1]').as('depth'),
+ Arel.sql('cte.ids').as('ids'),
+ Arel.sql('cte.count + 1').as('count')
+ ).from('cte')
+ .where('cte.depth <> ARRAY[]::int[]')
+ .limit(1)
+ end
+end
+
+iterator = NamespaceEachBatch.new(namespace_id: 9970)
+all_ids = []
+iterator.each_batch do |ids|
+ all_ids.concat(ids)
+end
+
+# Test
+puts all_ids.count
+puts all_ids.sort == Namespace.where('traversal_ids && ARRAY[9970]').pluck(:id).sort
+```
+
+Example batch query:
+
+```sql
+SELECT
+ "namespaces".*
+FROM ( WITH RECURSIVE "result" AS ((
+ SELECT
+ 15847356 AS current_id,
+ ARRAY[9970,
+ 12061481,
+ 12128714,
+ 12445111,
+ 15847356]::int[] AS depth,
+ ARRAY[]::int[] AS ids,
+ 1::bigint AS count
+ FROM (
+ VALUES (1)) AS does_not_matter
+ LIMIT 1)
+ UNION ALL ( WITH "cte" AS MATERIALIZED (
+ SELECT
+ *
+ FROM
+ result
+)
+ SELECT
+ current_id,
+ depth,
+ ids,
+ count
+ FROM ((
+ SELECT
+ namespaces.id AS current_id,
+ cte.depth || namespaces.id AS depth,
+ cte.ids || namespaces.id AS ids,
+ cte.count + 1 AS count
+ FROM
+ cte,
+ LATERAL (
+ SELECT
+ id
+ FROM
+ namespaces
+ WHERE
+ parent_id = cte.current_id
+ ORDER BY
+ id
+ LIMIT 1
+) namespaces
+)
+ UNION ALL (
+ SELECT
+ namespaces.id AS current_id,
+ cte.depth[:array_length(
+ cte.depth, 1
+) - 1] || namespaces.id AS depth,
+ cte.ids || namespaces.id AS ids,
+ cte.count + 1 AS count
+ FROM
+ cte,
+ LATERAL (
+ SELECT
+ id
+ FROM
+ namespaces
+ WHERE
+ namespaces.parent_id = cte.depth[array_length(
+ cte.depth, 1
+) - 1]
+ AND namespaces.id > cte.depth[array_length(
+ cte.depth, 1
+)]
+ ORDER BY
+ id
+ LIMIT 1
+) namespaces
+)
+ UNION ALL (
+ SELECT
+ cte.current_id AS current_id,
+ cte.depth[:array_length(
+ cte.depth, 1
+) - 1] AS depth,
+ cte.ids AS ids,
+ cte.count + 1 AS count
+ FROM
+ cte
+ WHERE (
+ cte.depth <> ARRAY[]::int[]
+)
+ LIMIT 1
+)
+) namespaces
+ LIMIT 1
+))
+SELECT
+ "namespaces".*
+FROM
+ "result" AS "namespaces"
+LIMIT 500) namespaces
+ORDER BY
+ "count" DESC
+LIMIT 1
+```
+
+Execution plan:
+
+```plaintext
+ Limit (cost=16.36..16.36 rows=1 width=76) (actual time=436.963..436.970 rows=1 loops=1)
+ Buffers: shared hit=3721 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ -> Sort (cost=16.36..16.39 rows=11 width=76) (actual time=436.961..436.968 rows=1 loops=1)
+ Sort Key: namespaces.count DESC
+ Sort Method: top-N heapsort Memory: 27kB
+ Buffers: shared hit=3721 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ -> Limit (cost=15.98..16.20 rows=11 width=76) (actual time=0.005..436.394 rows=500 loops=1)
+ Buffers: shared hit=3718 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ CTE result
+ -> Recursive Union (cost=0.00..15.98 rows=11 width=76) (actual time=0.003..432.924 rows=500 loops=1)
+ Buffers: shared hit=3718 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ -> Limit (cost=0.00..0.01 rows=1 width=76) (actual time=0.002..0.003 rows=1 loops=1)
+ I/O Timings: read=0.000 write=0.000
+ -> Result (cost=0.00..0.01 rows=1 width=76) (actual time=0.001..0.002 rows=1 loops=1)
+ I/O Timings: read=0.000 write=0.000
+ -> Limit (cost=0.76..1.57 rows=1 width=76) (actual time=0.862..0.862 rows=1 loops=499)
+ Buffers: shared hit=3718 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ CTE cte
+ -> WorkTable Scan on result (cost=0.00..0.20 rows=10 width=76) (actual time=0.000..0.000 rows=1 loops=499)
+ I/O Timings: read=0.000 write=0.000
+ -> Append (cost=0.56..17.57 rows=21 width=76) (actual time=0.862..0.862 rows=1 loops=499)
+ Buffers: shared hit=3718 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+ -> Nested Loop (cost=0.56..7.77 rows=10 width=76) (actual time=0.675..0.675 rows=0 loops=499)
+ Buffers: shared hit=1693 read=357 dirtied=1
+ I/O Timings: read=327.812 write=0.000
+ -> CTE Scan on cte (cost=0.00..0.20 rows=10 width=76) (actual time=0.001..0.001 rows=1 loops=499)
+ I/O Timings: read=0.000 write=0.000
+ -> Limit (cost=0.56..0.73 rows=1 width=4) (actual time=0.672..0.672 rows=0 loops=499)
+ Buffers: shared hit=1693 read=357 dirtied=1
+ I/O Timings: read=327.812 write=0.000
+ -> Index Only Scan using index_namespaces_on_parent_id_and_id on public.namespaces namespaces_1 (cost=0.56..5.33 rows=29 width=4) (actual time=0.671..0.671 rows=0 loops=499)
+ Index Cond: (namespaces_1.parent_id = cte.current_id)
+ Heap Fetches: 7
+ Buffers: shared hit=1693 read=357 dirtied=1
+ I/O Timings: read=327.812 write=0.000
+ -> Nested Loop (cost=0.57..9.45 rows=10 width=76) (actual time=0.208..0.208 rows=1 loops=442)
+ Buffers: shared hit=2025 read=66 dirtied=7
+ I/O Timings: read=84.778 write=0.000
+ -> CTE Scan on cte cte_1 (cost=0.00..0.20 rows=10 width=72) (actual time=0.000..0.000 rows=1 loops=442)
+ I/O Timings: read=0.000 write=0.000
+ -> Limit (cost=0.57..0.89 rows=1 width=4) (actual time=0.203..0.203 rows=1 loops=442)
+ Buffers: shared hit=2025 read=66 dirtied=7
+ I/O Timings: read=84.778 write=0.000
+ -> Index Only Scan using index_namespaces_on_parent_id_and_id on public.namespaces namespaces_2 (cost=0.57..3.77 rows=10 width=4) (actual time=0.201..0.201 rows=1 loops=442)
+ Index Cond: ((namespaces_2.parent_id = (cte_1.depth)[(array_length(cte_1.depth, 1) - 1)]) AND (namespaces_2.id > (cte_1.depth)[array_length(cte_1.depth, 1)]))
+ Heap Fetches: 35
+ Buffers: shared hit=2025 read=66 dirtied=6
+ I/O Timings: read=84.778 write=0.000
+ -> Limit (cost=0.00..0.03 rows=1 width=76) (actual time=0.003..0.003 rows=1 loops=59)
+ I/O Timings: read=0.000 write=0.000
+ -> CTE Scan on cte cte_2 (cost=0.00..0.29 rows=9 width=76) (actual time=0.002..0.002 rows=1 loops=59)
+ Filter: (cte_2.depth <> '{}'::integer[])
+ Rows Removed by Filter: 0
+ I/O Timings: read=0.000 write=0.000
+ -> CTE Scan on result namespaces (cost=0.00..0.22 rows=11 width=76) (actual time=0.005..436.240 rows=500 loops=1)
+ Buffers: shared hit=3718 read=423 dirtied=8
+ I/O Timings: read=412.590 write=0.000
+```
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index bb0bfbc759b..ba0423a1a0d 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -173,10 +173,11 @@ Include in the MR description:
##### Query Plans
- The query plan for each raw SQL query included in the merge request along with the link to the query plan following each raw SQL snippet.
-- Provide a link to the plan generated using the `explain` command in the [postgres.ai](database/database_lab.md) chatbot.
- - If it's not possible to get an accurate picture in Database Lab, you may need to seed a development environment, and instead provide links
- from [explain.depesz.com](https://explain.depesz.com) or [explain.dalibo.com](https://explain.dalibo.com). Be sure to paste both the plan
- and the query used in the form.
+- Provide a link to the plan generated using the `explain` command in the [postgres.ai](database/database_lab.md) chatbot. The `explain` command runs
+ `EXPLAIN ANALYZE`.
+ - If it's not possible to get an accurate picture in Database Lab, you may need to
+ seed a development environment, and instead provide output
+ from `EXPLAIN ANALYZE`. Create links to the plan using [explain.depesz.com](https://explain.depesz.com) or [explain.dalibo.com](https://explain.dalibo.com). Be sure to paste both the plan and the query used in the form.
- When providing query plans, make sure it hits enough data:
- To produce a query plan with enough data, you can use the IDs of:
- The `gitlab-org` namespace (`namespace_id = 9970`), for queries involving a group.
@@ -192,6 +193,13 @@ Include in the MR description:
plan _before_ and _after_ the change. This helps spot differences quickly.
- Include data that shows the performance improvement, preferably in
the form of a benchmark.
+- When evaluating a query plan, we need the final query as it is
+  executed against the database. We don't need to analyze the intermediate
+  queries returned as `ActiveRecord::Relation` objects from finders and scopes.
+  PostgreSQL query plans depend on all the final parameters,
+  including limits and other values that may be added before final execution.
+  One way to be sure of the actual query executed is to check
+  `log/development.log`.
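The log check above can be sketched as follows. This is a minimal, self-contained sketch: it simulates one statement as Rails would log it (the query text is hypothetical), then filters for executed SQL the way you would when verifying the final query a finder actually produced. On a real checkout, point the `grep` at `log/development.log` instead.

```shell
# Simulate a single statement as Rails logs it. The query shown is
# hypothetical; a real log also interleaves request and rendering lines,
# which is why the grep filter below is useful.
printf '%s\n' '  Namespace Load (0.4ms)  SELECT "namespaces".* FROM "namespaces" WHERE "namespaces"."parent_id" = 9970 LIMIT 1' > development.log

# Keep only executed SQL statements, and take the most recent ones.
grep -E 'SELECT|INSERT|UPDATE|DELETE' development.log | tail -n 5
```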
#### Preparation when adding foreign keys to existing tables
diff --git a/doc/development/development_seed_files.md b/doc/development/development_seed_files.md
new file mode 100644
index 00000000000..2bf3688fd48
--- /dev/null
+++ b/doc/development/development_seed_files.md
@@ -0,0 +1,26 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Development seed files
+
+Development seed files are located in the `gitlab/db/fixtures/development/` and `gitlab/ee/db/fixtures/development/`
+folders. These files populate the database with records that help you verify that features, like charts, work as expected locally.
+
+The `rake db:seed_fu` task runs all development seeds, except those behind a flag, which is usually passed as an environment variable.
+
+The following table summarizes the seeds and tasks that can be used to generate
+data for features.
+
+| Feature | Command | Seed |
+|-------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| DevOps Adoption | `FILTER=devops_adoption bundle exec rake db:seed_fu` | [31_devops_adoption.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/db/fixtures/development/31_devops_adoption.rb) |
+| Value Streams Dashboard | `FILTER=cycle_analytics SEED_VSA=1 bundle exec rake db:seed_fu` | [17_cycle_analytics.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/fixtures/development/17_cycle_analytics.rb) |
+| Value Stream Analytics | `FILTER=customizable_cycle_analytics SEED_CUSTOMIZABLE_CYCLE_ANALYTICS=1 bundle exec rake db:seed_fu` | [30_customizable_cycle_analytics](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/db/fixtures/development/30_customizable_cycle_analytics.rb) |
+| CI/CD analytics | `FILTER=ci_cd_analytics SEED_CI_CD_ANALYTICS=1 bundle exec rake db:seed_fu` | [38_ci_cd_analytics](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/fixtures/development/38_ci_cd_analytics.rb?ref_type=heads) |
+| Contributions Analytics<br><br>Productivity Analytics<br><br>Code review Analytics<br><br>Merge Request Analytics | `FILTER=productivity_analytics SEED_PRODUCTIVITY_ANALYTICS=1 bundle exec rake db:seed_fu` | [90_productivity_analytics](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/db/fixtures/development/90_productivity_analytics.rb) |
+| Repository Analytics | `FILTER=14_pipelines NEW_PROJECT=1 bundle exec rake db:seed_fu` | [14_pipelines](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/fixtures/development/14_pipelines.rb?ref_type=heads) |
+| Issue Analytics<br><br>Insights | `NEW_PROJECT=1 bin/rake gitlab:seed:insights:issues` | [insights Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/tasks/gitlab/seed/insights.rake) |
+| DORA metrics | `SEED_DORA=1 FILTER=dora_metrics bundle exec rake db:seed_fu` | [92_dora_metrics](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/db/fixtures/development/92_dora_metrics.rb) |
diff --git a/doc/development/documentation/alpha_beta.md b/doc/development/documentation/alpha_beta.md
index 61f07e79e12..4579c57b448 100644
--- a/doc/development/documentation/alpha_beta.md
+++ b/doc/development/documentation/alpha_beta.md
@@ -1,49 +1,11 @@
---
-info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects
-stage: none
-group: unassigned
+redirect_to: 'experiment_beta.md'
+remove_date: '2023-11-29'
---
-# Documenting Experiment and Beta features
+This document was moved to [another location](experiment_beta.md).
-Some features are not generally available and are instead considered
-[Experiment or Beta](../../policy/experiment-beta-support.md).
-
-When you document a feature in one of these three statuses:
-
-- Add `(Experiment)` or `(Beta)` in parentheses after the page or topic title.
-- Do not include `(Experiment)` or `(Beta)` in the left nav.
-- Ensure the version history lists the feature's status.
-
-These features are usually behind a feature flag, which follow [these documentation guidelines](feature_flags.md).
-
-If you add details of how users should enroll, or how to contact the team with issues,
-the `FLAG:` note should be above these details.
-
-For example:
-
-```markdown
-## Great new feature (Experiment)
-
-> [Introduced](link) in GitLab 15.10. This feature is an [Experiment](<link_to>/policy/experiment-beta-support.md).
-
-FLAG:
-On self-managed GitLab, by default this feature is not available.
-To make it available, an administrator can enable the feature flag named `example_flag`.
-On GitLab.com, this feature is not available. This feature is not ready for production use.
-
-Use this great new feature when you need to do this new thing.
-
-This feature is an [Experiment](<link_to>/policy/experiment-beta-support.md). To join
-the list of users testing this feature, do this thing. If you find a bug,
-[open an issue](link).
-```
-
-When the feature is ready for production, remove:
-
-- The text in parentheses.
-- Any language about the feature not being ready for production in the body
- description.
-- The feature flag information if available.
-
-Ensure the version history is up-to-date by adding a note about the production release.
+<!-- This redirect file can be deleted after <2023-11-29>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/documentation/experiment_beta.md b/doc/development/documentation/experiment_beta.md
new file mode 100644
index 00000000000..fab78082cb5
--- /dev/null
+++ b/doc/development/documentation/experiment_beta.md
@@ -0,0 +1,49 @@
+---
+info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects
+stage: none
+group: unassigned
+---
+
+# Documenting Experiment and Beta features
+
+Some features are not generally available and are instead considered
+[Experiment or Beta](../../policy/experiment-beta-support.md).
+
+When you document a feature in one of these statuses:
+
+- Add the tier badge after the page or topic title.
+- Do not include `(Experiment)` or `(Beta)` in the left nav.
+- Ensure the version history lists the feature's status.
+
+These features are usually behind a feature flag, which follows [these documentation guidelines](feature_flags.md).
+
+If you add details of how users should enroll, or how to contact the team with issues,
+the `FLAG:` note should be above these details.
+
+For example:
+
+```markdown
+## Great new feature **(EXPERIMENT)**
+
+> [Introduced](link) in GitLab 15.10. This feature is an [Experiment](<link_to>/policy/experiment-beta-support.md).
+
+FLAG:
+On self-managed GitLab, by default this feature is not available.
+To make it available, an administrator can enable the feature flag named `example_flag`.
+On GitLab.com, this feature is not available. This feature is not ready for production use.
+
+Use this great new feature when you need to do this new thing.
+
+This feature is an [Experiment](<link_to>/policy/experiment-beta-support.md). To join
+the list of users testing this feature, do this thing. If you find a bug,
+[open an issue](link).
+```
+
+When the feature is ready for production, remove:
+
+- The text in parentheses.
+- Any language about the feature not being ready for production in the body
+ description.
+- The feature flag information, if available.
+
+Ensure the version history is up-to-date by adding a note about the production release.
diff --git a/doc/development/documentation/help.md b/doc/development/documentation/help.md
index fb730aca6f0..a921429bf49 100644
--- a/doc/development/documentation/help.md
+++ b/doc/development/documentation/help.md
@@ -28,9 +28,6 @@ For example:
1. The change shows up in the 14.5 self-managed release, due to missing the release cutoff
for 14.4.
-The exact cutoff date for each release is flexible, and can be sooner or later
-than expected due to holidays, weekends or other events. In general, MRs merged
-by the 17th should be present in the release on the 22nd, though it is not guaranteed.
If it is important that a documentation update is present in that month's release,
merge it as early as possible.
diff --git a/doc/development/documentation/restful_api_styleguide.md b/doc/development/documentation/restful_api_styleguide.md
index a5d565ffa79..a53330a7e63 100644
--- a/doc/development/documentation/restful_api_styleguide.md
+++ b/doc/development/documentation/restful_api_styleguide.md
@@ -146,7 +146,7 @@ Sort the table by required attributes first, then alphabetically.
| Attribute | Type | Required | Description |
|------------------------------|---------------|----------|-----------------------------------------------------|
| `title` | string | Yes | Title of the issue. |
-| `assignee_ids` **(PREMIUM)** | integer array | No | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM ALL)** | integer array | No | IDs of the users to assign the issue to. |
| `confidential` | boolean | No | Sets the issue to confidential. Default is `false`. |
```
@@ -155,7 +155,7 @@ Rendered example:
| Attribute | Type | Required | Description |
|------------------------------|---------------|----------|-----------------------------------------------------|
| `title` | string | Yes | Title of the issue. |
-| `assignee_ids` **(PREMIUM)** | integer array | No | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM ALL)** | integer array | No | IDs of the users to assign the issue to. |
| `confidential` | boolean | No | Sets the issue to confidential. Default is `false`. |
For information about writing attribute descriptions, see the [GraphQL API description style guide](../api_graphql_styleguide.md#description-style-guide).
@@ -181,7 +181,7 @@ Sort the table alphabetically.
```markdown
| Attribute | Type | Description |
|------------------------------|---------------|-------------------------------------------|
-| `assignee_ids` **(PREMIUM)** | integer array | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM ALL)** | integer array | IDs of the users to assign the issue to. |
| `confidential` | boolean | Whether the issue is confidential or not. |
| `title` | string | Title of the issue. |
```
@@ -190,7 +190,7 @@ Rendered example:
| Attribute | Type | Description |
|------------------------------|---------------|-------------------------------------------|
-| `assignee_ids` **(PREMIUM)** | integer array | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM ALL)** | integer array | IDs of the users to assign the issue to. |
| `confidential` | boolean | Whether the issue is confidential or not. |
| `title` | string | Title of the issue. |
diff --git a/doc/development/documentation/review_apps.md b/doc/development/documentation/review_apps.md
index adc9d727844..483145d1f44 100644
--- a/doc/development/documentation/review_apps.md
+++ b/doc/development/documentation/review_apps.md
@@ -16,6 +16,7 @@ Review apps are enabled for the following projects:
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab)
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner)
- [GitLab Charts](https://gitlab.com/gitlab-org/charts/gitlab)
+- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator)
Alternatively, check the [`gitlab-docs` development guide](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/README.md#development-when-contributing-to-gitlab-documentation)
or [the GDK documentation](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/gitlab_docs.md)
diff --git a/doc/development/documentation/site_architecture/automation.md b/doc/development/documentation/site_architecture/automation.md
new file mode 100644
index 00000000000..5b2b02ad97e
--- /dev/null
+++ b/doc/development/documentation/site_architecture/automation.md
@@ -0,0 +1,77 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Automated pages
+
+Most pages in the GitLab documentation are written manually in Markdown.
+However, some pages are created by automated processes.
+
+Two primary categories of automation exist in the GitLab documentation:
+
+- Content that is generated by using a standard process and structured data (for example, YAML or JSON files).
+- Content that is generated by any other means.
+
+Automation helps with consistency and speed. But content that is automated in a
+non-standard way causes difficulty with:
+
+- Frontend changes.
+- Site troubleshooting and maintenance.
+- The contributor experience.
+
+Ideally, any automation should be done in a standard way, which helps alleviate some of the downsides.
+
+## Pages generated from structured data
+
+Some functionality on the docs site uses structured data:
+
+- Hierarchical global navigation (YAML)
+- Survey banner (YAML)
+- Badges (YAML)
+- Homepage content lists (YAML)
+- Redirects (YAML)
+- Versions menu (JSON)
+
+## Pages generated otherwise
+
+Other pages are generated by using non-standard processes. These pages often use solutions
+that are coded across multiple repositories.
+
+| Page | Details | Owner |
+|---|---|---|
+| [All feature flags in GitLab](../../../user/feature_flags.md) | [Generated during docs build](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/raketasks.md#generate-the-feature-flag-tables) | [Technical Writing](https://about.gitlab.com/handbook/product/ux/technical-writing/) |
+| [GitLab Runner feature flags](https://docs.gitlab.com/runner/configuration/feature-flags.html) | [Page source](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/ec6e1797d2173a95c8ac7f726bd62f6f110b7211/docs/configuration/feature-flags.md?plain=1#L39) | [Runner](https://about.gitlab.com/handbook/engineering/development/ops/verify/runner/) |
+| [Deprecations and removals by version](../../../update/deprecations.md) | [Deprecating GitLab features](../../deprecation_guidelines/index.md) | |
+| [GraphQL API resources](../../../api/graphql/reference/index.md) | [GraphQL API style guide](../../api_graphql_styleguide.md#documentation-and-schema) | [Import and Integrate](https://about.gitlab.com/handbook/engineering/development/dev/manage/import-and-integrate/) |
+| [Audit event types](../../../administration/audit_event_streaming/audit_event_types.md) | [Audit event development guidelines](../../audit_event_guide/index.md) | [Compliance](https://about.gitlab.com/handbook/engineering/development/sec/govern/compliance/) |
+| DAST vulnerability check documentation ([Example](../../../user/application_security/dast/checks/798.19.md)) | [How to generate the Markdown](https://gitlab.com/gitlab-org/security-products/dast-cwe-checks/-/blob/main/doc/how-to-generate-the-markdown-documentation.md) | [Dynamic Analysis](https://about.gitlab.com/handbook/product/categories/#dynamic-analysis-group) |
+| Blueprints ([Example](../../../architecture/blueprints/ci_data_decay/pipeline_partitioning.md)) | | |
+| [The docs homepage](../../../index.md) | | [Technical Writing](https://about.gitlab.com/handbook/product/ux/technical-writing/) |
+
+## Make an automation request
+
+If you want to automate a page on the docs site:
+
+1. Review [issue 823](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/823)
+ and consider adding feedback there.
+1. If that issue does not describe what you need, contact
+ [the DRI for the docs site backend](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects).
+
+Because automation adds extra complexity and a support burden, we
+review it on a case-by-case basis.
+
+## Document the automation
+
+If you do add automation, you must document:
+
+- The list of files that are included.
+- The `.gitlab-ci.yml` updates and any pipeline requirements.
+- The steps needed to troubleshoot.
+
+Other GitLab team members should be able to easily find information about how to maintain the automation.
+You should announce the change widely, including, at a minimum:
+
+- In Slack, in `#whats-happening-at-gitlab`.
+- In the Technical Writer team meeting agenda.
diff --git a/doc/development/documentation/site_architecture/deployment_process.md b/doc/development/documentation/site_architecture/deployment_process.md
index 2ba69ca0987..767fdf907d6 100644
--- a/doc/development/documentation/site_architecture/deployment_process.md
+++ b/doc/development/documentation/site_architecture/deployment_process.md
@@ -66,12 +66,12 @@ graph TD
C["14.2 MR merged"]
D["13.12 MR merged"]
E["12.10 MR merged"]
- F{{"Container registry on `gitlab-docs` project"}}
- A--"`image:docs-single`<br>job runs and pushes<br>`gitlab-docs:14.4` image"-->F
- B--"`image:docs-single`<br>job runs and pushes<br>`gitlab-docs:14.3` image"-->F
- C--"`image:docs-single`<br>job runs and pushes<br>`gitlab-docs:14.2` image"-->F
- D--"`image:docs-single`<br>job runs and pushes<br>`gitlab-docs:13.12` image"-->F
- E--"`image:docs-single`<br>job runs and pushes<br>`gitlab-docs:12.10` image"-->F
+ F{{"Container registry on gitlab-docs project"}}
+ A--"image:docs-single<br>job runs and pushes<br>gitlab-docs:14.4 image"-->F
+ B--"image:docs-single<br>job runs and pushes<br>gitlab-docs:14.3 image"-->F
+ C--"image:docs-single<br>job runs and pushes<br>gitlab-docs:14.2 image"-->F
+ D--"image:docs-single<br>job runs and pushes<br>gitlab-docs:13.12 image"-->F
+ E--"image:docs-single<br>job runs and pushes<br>gitlab-docs:12.10 image"-->F
```
### Rebuild stable documentation images
@@ -104,23 +104,23 @@ For example, [a pipeline](https://gitlab.com/gitlab-org/gitlab-docs/-/pipelines/
```mermaid
graph TD
- A["Latest `gitlab`, `gitlab-runner`<br>`omnibus-gitlab`, and `charts`"]
- subgraph "Container registry on `gitlab-docs` project"
- B["14.4 versioned docs<br>`gitlab-docs:14.4`"]
- C["14.3 versioned docs<br>`gitlab-docs:14.3`"]
- D["14.2 versioned docs<br>`gitlab-docs:14.2`"]
- E["13.12 versioned docs<br>`gitlab-docs:13.12`"]
- F["12.10 versioned docs<br>`gitlab-docs:12.10`"]
+ A["Latest gitlab, gitlab-runner<br>omnibus-gitlab, and charts"]
+ subgraph "Container registry on gitlab-docs project"
+ B["14.4 versioned docs<br>gitlab-docs:14.4"]
+ C["14.3 versioned docs<br>gitlab-docs:14.3"]
+ D["14.2 versioned docs<br>gitlab-docs:14.2"]
+ E["13.12 versioned docs<br>gitlab-docs:13.12"]
+ F["12.10 versioned docs<br>gitlab-docs:12.10"]
end
- G[["Scheduled pipeline<br>`image:docs-latest` job<br>combines all these"]]
+ G[["Scheduled pipeline<br>image:docs-latest job<br>combines all these"]]
A--"Default branches<br>pulled down"-->G
- B--"`gitlab-docs:14.4` image<br>pulled down"-->G
- C--"`gitlab-docs:14.3` image<br>pulled down"-->G
- D--"`gitlab-docs:14.2` image<br>pulled down"-->G
- E--"`gitlab-docs:13.12` image<br>pulled down"-->G
- F--"`gitlab-docs:12.10` image<br>pulled down"-->G
+ B--"gitlab-docs:14.4 image<br>pulled down"-->G
+ C--"gitlab-docs:14.3 image<br>pulled down"-->G
+ D--"gitlab-docs:14.2 image<br>pulled down"-->G
+ E--"gitlab-docs:13.12 image<br>pulled down"-->G
+ F--"gitlab-docs:12.10 image<br>pulled down"-->G
H{{"Container registry on gitlab-docs project"}}
- G--"Latest `gitlab-docs:latest` image<br>pushed up"-->H
+ G--"Latest gitlab-docs:latest image<br>pushed up"-->H
```
## Pages deploy job
@@ -144,7 +144,7 @@ graph LR
A{{"Container registry on gitlab-docs project"}}
B[["Scheduled pipeline<br>`pages` and<br>`pages:deploy` job"]]
C([docs.gitlab.com])
- A--"`gitlab-docs:latest`<br>pulled"-->B
+ A--"gitlab-docs:latest<br>pulled"-->B
B--"Unpacked documentation uploaded"-->C
```
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 94bc6bba240..0fa5819acae 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -62,20 +62,45 @@ the documentation helps others efficiently accomplish tasks and solve problems.
## Writing for localization
-The GitLab documentation is not localized, but we follow guidelines that
-help benefit translation. For example, we:
+The GitLab documentation is not localized, but we follow guidelines that help us write for a global audience.
-- Write in [active voice](word_list.md#active-voice).
-- Write in [present tense](word_list.md#future-tense).
-- Avoid words that can be translated incorrectly, like:
- - [since and because](word_list.md#since)
- - [once and after](word_list.md#once)
- - [it](word_list.md#it)
-- Avoid [-ing](word_list.md#-ing-words) words.
+[The GitLab voice](#the-gitlab-voice) dictates that we write clearly and directly with translation in mind.
+Our style guide, [word list](word_list.md), and [Vale rules](../testing.md) ensure consistency in the documentation.
-[The GitLab voice](#the-gitlab-voice) dictates that we write clearly and directly,
-and with translation in mind. [The word list](word_list.md) and our Vale rules
-also aid in consistency, which is important for localization.
+When documentation is translated into other languages, the meaning of each word must be clear.
+The increasing use of machine translation, GitLab Duo Chat, and other AI tools
+means that consistency is even more important.
+
+The following rules can help documentation be translated more efficiently.
+
+Avoid:
+
+- Phrases that hide the subject like [**there is** and **there are**](word_list.md#there-is-there-are).
+- Ambiguous pronouns like [**it**](word_list.md#it).
+- Words that end in [**-ing**](word_list.md#-ing-words).
+- Words that can be confused with one another like [**since**](word_list.md#since) and **because**.
+- Latin abbreviations like [**e.g.**](word_list.md#eg) and [**i.e.**](word_list.md#ie).
+- Culture-specific references like **kill two birds with one stone**.
+
+Use:
+
+- Standard [text for links](#text-for-links).
+- [Lists](#lists) and [tables](#tables) instead of complex sentences and paragraphs.
+- Common abbreviations like [**AI**](word_list.md#ai-artificial-intelligence) and
+ [**CI/CD**](word_list.md#cicd) and abbreviations you've previously spelled out.
+
+Also, keep the following guidance in mind:
+
+- Be consistent with [feature names](#feature-names) and how to interact with them.
+- Break up noun strings. For example, instead of **project integration custom settings**,
+ use **custom settings for project integrations**.
+- Format [dates and times](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/date-time-terms)
+ consistently and for an international audience.
+- Use [images](#images), including screenshots, sparingly.
+- For [UI text](#ui-text), allow for up to 30% expansion and contraction in translation.
+ To see how much a string expands or contracts in another language, paste the string
+ into [Google Translate](https://translate.google.com/) and review the results.
+ You can ask a colleague who speaks the language to verify if the translation is clear.
## Markdown
@@ -240,12 +265,13 @@ create an issue or an MR to propose a change to the user interface text.
#### Feature names
-- Feature names are typically lowercase.
-- Some features require title case, typically nouns that name GitLab-specific capabilities or tools. Features requiring
- title case should be:
- - Added as a proper name to markdownlint [configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.yml),
- so that it can be consistently applied across all documentation.
- - Added to the [word list](word_list.md).
+Feature names should be lowercase.
+
+However, in a few rare cases, features can be title case. These exceptions are:
+
+- Added as a proper name to [markdownlint](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.yml),
+ so they can be consistently applied across all documentation.
+- Added to the [word list](word_list.md).
If the term is not in the word list, ask a GitLab Technical Writer for advice.
@@ -449,7 +475,7 @@ For example:
cp <your_source_directory> <your_destination_directory>
```
-If the placeholder is not in a code block, use [`<` and `>`] and wrap the placeholder
+If the placeholder is not in a code block, use `<` and `>` and wrap the placeholder
in a single backtick. For example:
```plaintext
@@ -738,7 +764,7 @@ For example:
```html
<html>
-<small>Footnotes
+<small>Footnotes:
<ol>
<li>This is the footnote.</li>
<li>This is the other footnote.</li>
@@ -755,7 +781,7 @@ This text renders as this output:
| App B | Description text. <sup>2</sup> |
<html>
-<small>Footnotes
+<small>Footnotes:
<ol>
<li>This is the footnote.</li>
<li>This is the other footnote.</li>
@@ -984,7 +1010,7 @@ To be consistent, use these templates when you write navigation steps in a task
To open project settings:
```markdown
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
+1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **General pipelines**.
```
@@ -992,7 +1018,7 @@ To open project settings:
To open group settings:
```markdown
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your group.
+1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > CI/CD**.
1. Expand **General pipelines**.
```
@@ -1000,7 +1026,7 @@ To open group settings:
To open either project or group settings:
```markdown
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project or group.
+1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Settings > CI/CD**.
1. Expand **General pipelines**.
```
@@ -1020,14 +1046,14 @@ To create a group:
To open the Admin Area:
```markdown
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
```
To open the **Your work** menu item:
```markdown
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+1. On the left sidebar, select **Search or go to**.
1. Select **Your work**.
```
@@ -1049,15 +1075,15 @@ To save the selection in some dropdown lists:
To view all your projects:
```markdown
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
-1. Select **View all your projects**.
+1. On the left sidebar, select **Search or go to**.
+1. Select **View all my projects**.
```
To view all your groups:
```markdown
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
-1. Select **View all your groups**.
+1. On the left sidebar, select **Search or go to**.
+1. Select **View all my groups**.
```
### Optional steps
@@ -1089,7 +1115,7 @@ Use the phrase **Complete the fields**.
For example:
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
+1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > Repository**.
1. Expand **Push rules**.
1. Complete the fields.
@@ -1641,7 +1667,7 @@ The H1 tier badge should be the badge that applies to the lowest tier for the fe
#### Available product tier badges
-Tier badges must include two components, in this order: a subscription tier and an offering.
+Tier badges should include two components, in this order: a subscription tier and an offering.
These components are surrounded by bold and parentheses, for example `**(ULTIMATE SAAS)**`.
Subscription tiers:
@@ -1661,10 +1687,14 @@ You can also add a third component for the feature's status:
- `EXPERIMENT`
- `BETA`
+For example, `**(FREE ALL EXPERIMENT)**`.
+
+A tier or status can stand alone. An offering should always have a tier.
+
#### Add a tier badge
To add a tier badge to a topic title, add the two relevant components
-after the title text. You must include the subscription tier first, and then the offering.
+after the title text. You should include the subscription tier first, and then the offering.
For example:
```markdown
@@ -1677,6 +1707,12 @@ Optionally, you can add the feature status as the last part of the badge:
# Topic title **(FREE ALL EXPERIMENT)**
```
+Or add the status by itself:
+
+```markdown
+# Topic title **(EXPERIMENT)**
+```
+
##### Inline tier badges
Do not add tier badges inline with other text, except for [API attributes](../restful_api_styleguide.md).
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index d65df0b56c8..509cabbe631 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -22,7 +22,9 @@ For guidance not on this page, we defer to these style guides:
- [Google Developer Documentation Style Guide](https://developers.google.com/style)
<!-- vale off -->
-<!-- markdownlint-disable -->
+
+<!-- Disable trailing punctuation in heading rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md026---trailing-punctuation-in-heading -->
+<!-- markdownlint-disable MD026 -->
## `&`
@@ -85,7 +87,7 @@ is passive. `Zombies select the button` is active.
## Admin Area
-Use title case for **Admin Area** to refer to the area of the UI that you access when you select **Main menu > Admin**.
+Use title case for **Admin Area**.
This area of the UI says **Admin Area** at the top of the page and on the menu.
## administrator
@@ -233,7 +235,6 @@ Use **text box** to refer to the UI field. Do not use **field** or **box**. For
- In the **Variable name** text box, enter a value.
-
## bullet
Don't refer to individual items in an ordered or unordered list as **bullets**. Use **list item** instead. If you need to be less ambiguous, you can use:
@@ -263,6 +264,9 @@ See also [contractions](index.md#contractions).
Use **Chat** with a capital `c` for **Chat** or **GitLab Duo Chat**.
+On first use on a page, use **GitLab Duo Chat**.
+Thereafter, use **Chat** by itself.
+
## checkbox
Use one word for **checkbox**. Do not use **check box**.
@@ -306,9 +310,30 @@ This version is different than the larger, more monolithic **Linux package** tha
You can also use **cloud-native GitLab** for short. It should be hyphenated and lowercase.
+## Code explanation
+
+Use sentence case for **Code explanation**.
+
+On first mention on a page, use **GitLab Duo Code explanation**.
+Thereafter, use **Code explanation** by itself.
+
+## Code review summary
+
+Use sentence case for **Code review summary**.
+
+On first mention on a page, use **GitLab Duo Code review summary**.
+Thereafter, use **Code review summary** by itself.
+
## Code Suggestions
-Use title case for **Code Suggestions**.
+Use title case for **Code Suggestions**. On first mention on a page, use **GitLab Duo Code Suggestions**.
+
+**Code Suggestions** should always be plural and capitalized, even when the use is generic.
+
+Examples:
+
+- Use Code Suggestions to display suggestions as you type. (This phrase describes the feature.)
+- As you type, Code Suggestions are displayed. (This phrase is generic but still uses capital letters.)
## collapse
@@ -438,6 +463,13 @@ Use **inactive** or **off** instead. ([Vale](../testing.md#vale) rule: [`Inclusi
Use **prevent** instead of **disallow**. ([Vale](../testing.md#vale) rule: [`Substitutions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/Substitutions.yml))
+## Discussion summary
+
+Use sentence case for **Discussion summary**.
+
+On first mention on a page, use **GitLab Duo Discussion summary**.
+Thereafter, use **Discussion summary** by itself.
+
## Docker-in-Docker, `dind`
Use **Docker-in-Docker** when you are describing running a Docker container by using the Docker executor.
@@ -580,7 +612,7 @@ Instead of:
However, you can make an exception when you are writing a task and you need to refer to all
of the fields at once. For example:
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
+1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **General pipelines**.
1. Complete the fields.
@@ -621,6 +653,13 @@ For **GB** and **MB**, follow the [Microsoft guidance](https://learn.microsoft.c
Use title case for **Geo**.
+## Git suggestions
+
+Use sentence case for **Git suggestions**.
+
+On first mention on a page, use **GitLab Duo Git suggestions**.
+Thereafter, use **Git suggestions** by itself.
+
## GitLab
Do not make **GitLab** possessive (GitLab's). This guidance follows [GitLab Trademark Guidelines](https://about.gitlab.com/handbook/marketing/brand-and-product-marketing/brand/brand-activation/trademark-guidelines/).
@@ -633,6 +672,24 @@ Do not use **Dedicated** by itself. Always use **GitLab Dedicated**.
Do not use **Duo** by itself. Always use **GitLab Duo**.
+On first use on a page, use **GitLab Duo `<featurename>`**. For example:
+
+- GitLab Duo Chat
+- GitLab Duo Code Suggestions
+- GitLab Duo Suggested Reviewers
+- GitLab Duo Value stream forecasting
+- GitLab Duo Discussion summary
+- GitLab Duo Merge request summary
+- GitLab Duo Code review summary
+- GitLab Duo Code explanation
+- GitLab Duo Vulnerability summary
+- GitLab Duo Test generation
+- GitLab Duo Git suggestions
+- GitLab Duo Root cause analysis
+- GitLab Duo Issue description generation
+
+After the first use, use the feature name without **GitLab Duo**.
+
## GitLab Flavored Markdown
When possible, spell out [**GitLab Flavored Markdown**](../../../user/markdown.md).
@@ -757,7 +814,7 @@ Do not use **in order to**. Use **to** instead. ([Vale](../testing.md#vale) rule
For the plural of **index**, use **indexes**.
-However, for ElasticSearch, use [**indices**](https://www.elastic.co/blog/what-is-an-elasticsearch-index).
+However, for Elasticsearch, use [**indices**](https://www.elastic.co/blog/what-is-an-elasticsearch-index).
## Installation from source
@@ -792,6 +849,13 @@ Use lowercase for **issue**.
Use lowercase for **issue board**.
+## Issue description generation
+
+Use sentence case for **Issue description generation**.
+
+On first mention on a page, use **GitLab Duo Issue description generation**.
+Thereafter, use **Issue description generation** by itself.
+
## issue weights
Use lowercase for **issue weights**.
@@ -860,13 +924,13 @@ When writing about licenses:
Use:
- - Add a license to your instance.
- - Purchase a subscription.
+- Add a license to your instance.
+- Purchase a subscription.
Instead of:
- - Buy a license.
- - Purchase a license.
+- Buy a license.
+- Purchase a license.
## limitations
@@ -956,6 +1020,13 @@ the user account becomes a **member**.
Use lowercase for **merge requests**. If you use **MR** as the acronym, spell it out on first use.
+## Merge request summary
+
+Use sentence case for **Merge request summary**.
+
+On first mention on a page, use **GitLab Duo Merge request summary**.
+Thereafter, use **Merge request summary** by itself.
+
## milestones
Use lowercase for **milestones**.
@@ -1277,6 +1348,13 @@ Do not use **roles** and [**permissions**](#permissions) interchangeably. Each u
Roles are not the same as [**access levels**](#access-level).
+## Root cause analysis
+
+Use sentence case for **Root cause analysis**.
+
+On first mention on a page, use **GitLab Duo Root cause analysis**.
+Thereafter, use **Root cause analysis** by itself.
+
## roll back
Use **roll back** for changing a GitLab version to an earlier one.
@@ -1454,6 +1532,17 @@ To describe tiers:
| In the Premium tier or higher | In the Premium and Ultimate tier |
| In the Premium tier or lower | In the Free and Premium tier |
+## Suggested Reviewers
+
+Use title case for **Suggested Reviewers**. On first mention on a page, use **GitLab Duo Suggested Reviewers**.
+
+**Suggested Reviewers** should always be plural and capitalized, even when the use is generic.
+
+Examples:
+
+- Suggested Reviewers can recommend a person to review your merge request. (This phrase describes the feature.)
+- As you type, Suggested Reviewers are displayed. (This phrase is generic but still uses capital letters.)
+
## that
Do not use **that** when describing a noun. For example:
@@ -1482,6 +1571,13 @@ talking about non-specific modules. For example:
- You can publish a Terraform module to your project's Terraform Module Registry.
+## Test generation
+
+Use sentence case for **Test generation**.
+
+On first mention on a page, use **GitLab Duo Test generation**.
+Thereafter, use **Test generation** by itself.
+
## text box
Use **text box** instead of **field** or **box** when referring to the UI element.
@@ -1620,10 +1716,23 @@ When you add a **user account** to a group or project, the user account becomes
Do not use **utilize**. Use **use** instead. It's more succinct and easier for non-native English speakers to understand.
([Vale](../testing.md#vale) rule: [`SubstitutionSuggestions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/SubstitutionSuggestions.yml))
+## Value stream forecasting
+
+Use sentence case for **Value stream forecasting**.
+
+On first mention on a page, use **GitLab Duo Value stream forecasting**.
+Thereafter, use **Value stream forecasting** by itself.
+
## via
Do not use Latin abbreviations. Use **with**, **through**, or **by using** instead. ([Vale](../testing.md#vale) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml))
+## Vulnerability summary
+
+Use sentence case for **Vulnerability summary**.
+
+On first mention on a page, use **GitLab Duo Vulnerability summary**.
+Thereafter, use **Vulnerability summary** by itself.
+
## we
Try to avoid **we** and focus instead on how the user can accomplish something in GitLab.
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index 0c65e008436..c0f1d0028f9 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -507,7 +507,21 @@ To configure markdownlint in your editor, install one of the following as approp
To configure Vale in your editor, install one of the following as appropriate:
-- Sublime Text [`SublimeLinter-vale` package](https://packagecontrol.io/packages/SublimeLinter-vale).
+- Sublime Text [`SublimeLinter-vale` package](https://packagecontrol.io/packages/SublimeLinter-vale). To have Vale
+  suggestions appear as blue instead of red (which is how errors appear), add `vale` configuration to your
+ [SublimeLinter](http://sublimelinter.readthedocs.org) configuration:
+
+ ```json
+ "vale": {
+ "styles": [{
+ "mark_style": "outline",
+ "scope": "region.bluish",
+ "types": ["suggestion"]
+ }]
+ }
+ ```
+
+- [LSP for Sublime Text](https://lsp.sublimetext.io) package [`LSP-vale-ls`](https://packagecontrol.io/packages/LSP-vale-ls).
- Visual Studio Code [`ChrisChinchilla.vale-vscode` extension](https://marketplace.visualstudio.com/items?itemName=ChrisChinchilla.vale-vscode).
You can configure the plugin to [display only a subset of alerts](#show-subset-of-vale-alerts).
- Vim [ALE plugin](https://github.com/dense-analysis/ale).
diff --git a/doc/development/documentation/topic_types/task.md b/doc/development/documentation/topic_types/task.md
index 87ce4d770f5..7fb4201ac40 100644
--- a/doc/development/documentation/topic_types/task.md
+++ b/doc/development/documentation/topic_types/task.md
@@ -43,7 +43,7 @@ Prerequisites:
To create an issue:
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
+1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Plan > Issues**.
1. In the upper-right corner, select **New issue**.
1. Complete the fields. (If you have reference content that lists each field, link to it here.)
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 9219fcd6710..2bf8ad81ba4 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -49,7 +49,7 @@ version of the product:
1. Enable **Allow use of licensed EE features** to make licensed EE features available to projects
only if the project namespace's plan includes the feature.
- 1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+ 1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Settings > General**.
1. Expand **Account and limit**.
@@ -57,7 +57,7 @@ version of the product:
1. Select **Save changes**.
1. Ensure the group you want to test the EE feature for is actually using an EE plan:
- 1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+ 1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Overview > Groups**.
1. Identify the group you want to modify, and select **Edit**.
@@ -463,7 +463,7 @@ end
When it's not possible/logical to modify the implementation of a method, then
wrap it in a self-descriptive method and use that method.
-For example, in GitLab-FOSS, the only user created by the system is `User.ghost`
+For example, in GitLab-FOSS, the only user created by the system is `Users::Internal.ghost`
but in EE there are several types of bot-users that aren't really users. It would
be incorrect to override the implementation of `User#ghost?`, so instead we add
a method `#internal?` to `app/models/user.rb`. The implementation:
diff --git a/doc/development/event_store.md b/doc/development/event_store.md
index c54e6ae2d07..918da8fb738 100644
--- a/doc/development/event_store.md
+++ b/doc/development/event_store.md
@@ -300,6 +300,21 @@ executed synchronously every time the given event is published.
For complex conditions it's best to subscribe to all the events and then handle the logic
in the `handle_event` method of the subscriber worker.
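For example, a subscriber that handles a complex condition inside `handle_event` might look like the following. This is a simplified sketch, not the actual `Gitlab::EventStore` API; the `Event` struct and the worker shown here are hypothetical stand-ins:

```ruby
# Simplified sketch: instead of a complex subscription condition,
# subscribe to all events of a type and filter inside handle_event.

# Hypothetical stand-in for an event object carrying a data hash.
Event = Struct.new(:data)

class UpdateHeadPipelineWorker
  def handle_event(event)
    # The complex condition lives here, in one place, instead of in
    # the subscription declaration.
    return :skipped unless event.data[:source] == 'push'

    :processed
  end
end

worker = UpdateHeadPipelineWorker.new
worker.handle_event(Event.new({ source: 'push' }))     # => :processed
worker.handle_event(Event.new({ source: 'schedule' })) # => :skipped
```
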
+### Delayed dispatching of events
+
+A subscription can specify a delay before it receives a published event:
+
+```ruby
+store.subscribe ::MergeRequests::UpdateHeadPipelineWorker,
+ to: ::Ci::PipelineCreatedEvent,
+ delay: 1.minute
+```
+
+The `delay` parameter switches the dispatching of the event to use the `perform_in` method
+on the subscriber Sidekiq worker, instead of `perform_async`.
+
+This technique is useful when publishing many events, because it lets you leverage Sidekiq deduplication.
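The dispatch switch can be sketched in plain Ruby. This is a simplified illustration, not the actual `Gitlab::EventStore` implementation; `FakeWorker` and the `Subscription` struct here are hypothetical stand-ins:

```ruby
# Hypothetical worker that records how it was invoked,
# standing in for a Sidekiq worker class.
class FakeWorker
  def self.calls
    @calls ||= []
  end

  def self.perform_async(*args)
    calls << [:perform_async, args]
  end

  def self.perform_in(interval, *args)
    calls << [:perform_in, interval, args]
  end
end

Subscription = Struct.new(:worker, :delay, keyword_init: true) do
  def dispatch(event_name, data)
    if delay
      # Schedules the job to run after the delay, which also gives
      # Sidekiq deduplication a window to drop duplicate jobs.
      worker.perform_in(delay, event_name, data)
    else
      worker.perform_async(event_name, data)
    end
  end
end

Subscription.new(worker: FakeWorker, delay: 60)
  .dispatch('Ci::PipelineCreatedEvent', { 'pipeline_id' => 1 })
FakeWorker.calls.first.first # => :perform_in
```
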
+
## Testing
### Testing the publisher
diff --git a/doc/development/experiment_guide/implementing_experiments.md b/doc/development/experiment_guide/implementing_experiments.md
index 6fe58a1da54..61a46397390 100644
--- a/doc/development/experiment_guide/implementing_experiments.md
+++ b/doc/development/experiment_guide/implementing_experiments.md
@@ -281,7 +281,7 @@ about contexts now.
We can assume we run the experiment in one or a few places, but
track events potentially in many places. The tracking call remains the same, with
the arguments you would usually use when
-[tracking events using snowplow](../snowplow/index.md). The easiest example
+[tracking events using snowplow](../internal_analytics/snowplow/index.md). The easiest example
of tracking an event in Ruby would be:
```ruby
diff --git a/doc/development/fe_guide/accessibility.md b/doc/development/fe_guide/accessibility.md
index 65b50bedb0c..a9f82e85493 100644
--- a/doc/development/fe_guide/accessibility.md
+++ b/doc/development/fe_guide/accessibility.md
@@ -531,10 +531,43 @@ We aim to have full coverage for all the views.
One of the advantages of testing in feature tests is that we can check different states, not only
single components in isolation.
-Make sure to add assertions, when the view you are working on:
+You can find some examples of how to approach accessibility checks below.
-- Has an empty state,
-- Has significant changes in page structure, for example an alert is shown, or a new section is rendered.
+#### Empty state
+
+Some views have an empty state that results in a page structure that's different from the default view.
+They may also offer some actions, for example to create a first issue or to enable a feature.
+In this case, add assertions for both an empty state and a default view.
+
+#### Ensure compliance before user interactions
+
+Often we test against a number of steps we expect our users to perform.
+In this case, make sure to include the check early on, before any of them has been simulated.
+This way we ensure there are no barriers to what we expect of users.
+
+#### Ensure compliance after changed page structure
+
+User interactions may result in significant changes in page structure. For example, a modal is shown, or a new section is rendered.
+In that case, add an assertion after any such change.
+We want to make sure that users are able to interact with all available components.
+
+#### Separate file for extensive test suites
+
+For some views, feature tests span multiple files.
+Take a look at our [feature tests for a merge request](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/features/merge_request).
+The number of user interactions that need to be covered is too big to fit into one test file.
+As a result, multiple feature tests cover one view, with different user privileges or data sets.
+If we were to include accessibility checks in all of them, there is a chance we would cover the same states of a view multiple times and significantly increase the run time.
+It would also make it harder to determine accessibility coverage if assertions were scattered across many files.
+
+In that case, consider creating one test file dedicated to accessibility.
+Place it in the same directory and name it `accessibility_spec.rb`, for example `spec/features/merge_request/accessibility_spec.rb`.
+
+#### Shared examples
+
+Often feature tests include shared examples for a number of scenarios.
+If they differ only by provided data, but are based on the same user interaction, you can check for accessibility compliance outside the shared examples.
+This way we only run the check once and save resources.
### How to add accessibility tests
diff --git a/doc/development/fe_guide/architecture.md b/doc/development/fe_guide/architecture.md
index 0c85a21fdf4..810d9af2de7 100644
--- a/doc/development/fe_guide/architecture.md
+++ b/doc/development/fe_guide/architecture.md
@@ -6,16 +6,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Architecture
-When developing a feature that requires architectural design, or changing the fundamental design of an existing feature, discuss it with a Frontend Architecture Expert.
+When building new features, consider reaching out to relevant stakeholders as early as possible in the process.
-A Frontend Architect is an expert who makes high-level Frontend design decisions
-and decides on technical standards, including coding standards and frameworks.
-
-Architectural decisions should be accessible to everyone, so document
-them in the relevant Merge Request discussion or by updating our documentation
-when appropriate.
-
-You can find the Frontend Architecture experts on the [team page](https://about.gitlab.com/company/team/).
+Architectural decisions should be accessible to everyone. Document
+them in the relevant Merge Request discussions or, when appropriate,
+update our documentation by adding an entry to this section.
## Widget Architecture
@@ -23,8 +18,3 @@ The [Plan stage](https://about.gitlab.com/handbook/engineering/development/dev/p
is refactoring the right sidebar to consist of **widgets**. They have a specific architecture to be
reusable and to expose an interface that can be used by external Vue applications on the page.
Learn more about the [widget architecture](widgets.md).
-
-## Examples
-
-You can find [documentation about the desired architecture](vue.md) for a new
-feature built with Vue.js.
diff --git a/doc/development/fe_guide/axios.md b/doc/development/fe_guide/axios.md
index f90a2b37a1c..876855b807c 100644
--- a/doc/development/fe_guide/axios.md
+++ b/doc/development/fe_guide/axios.md
@@ -6,9 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Axios
-We use [Axios](https://github.com/axios/axios) to communicate with the server in Vue applications and most new code.
+In older parts of our codebase that use the REST API, we used [Axios](https://github.com/axios/axios) to communicate with the server, but you should not use Axios in new applications. Instead, rely on `apollo-client` to query the GraphQL API. For more details, see [our GraphQL documentation](graphql.md).
-In order to guarantee all defaults are set you *should not use Axios directly*, you should import Axios from `axios_utils`.
+To guarantee all defaults are set, import Axios from `axios_utils`. Do not use Axios directly.
## CSRF token
diff --git a/doc/development/fe_guide/customizable_dashboards.md b/doc/development/fe_guide/customizable_dashboards.md
index ac8b0b8a1ab..476a8acabd0 100644
--- a/doc/development/fe_guide/customizable_dashboards.md
+++ b/doc/development/fe_guide/customizable_dashboards.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Product Analytics
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
@@ -74,15 +74,15 @@ export const pageViewsOverTime = {
dimensions: [],
filters: [
{
- member: 'SnowplowTrackedEvents.event',
+ member: 'TrackedEvents.event',
operator: 'equals',
values: ['page_view']
}
],
- measures: ['SnowplowTrackedEvents.pageViewsCount'],
+ measures: ['TrackedEvents.pageViewsCount'],
timeDimensions: [
{
- dimension: 'SnowplowTrackedEvents.derivedTstamp',
+ dimension: 'TrackedEvents.derivedTstamp',
granularity: 'day',
},
],
@@ -123,6 +123,7 @@ import { pageViewsOverTime } from './visualizations';
export const dashboard = {
slug: 'my_dashboard', // Used to set the URL path for the dashboard.
title: 'My dashboard title', // The title to display.
+ description: 'This is a description of the dashboard', // A description of the dashboard
// Each dashboard consists of an array of panels to display.
panels: [
{
@@ -143,7 +144,7 @@ export const dashboard = {
// Here we override the Cube.js query to get page views per week instead of days.
queryOverrides: {
timeDimensions: {
- dimension: 'SnowplowTrackedEvents.derivedTstamp',
+ dimension: 'TrackedEvents.derivedTstamp',
granularity: 'week',
},
},
diff --git a/doc/development/fe_guide/dark_mode.md b/doc/development/fe_guide/dark_mode.md
index 5e8f844172a..144418f72cd 100644
--- a/doc/development/fe_guide/dark_mode.md
+++ b/doc/development/fe_guide/dark_mode.md
@@ -5,7 +5,7 @@ group: Development
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-This page is about developing dark mode for GitLab. For more information on how to enable dark mode, see [Change the syntax highlighting theme]](../../user/profile/preferences.md#change-the-syntax-highlighting-theme).
+This page is about developing dark mode for GitLab. For more information on how to enable dark mode, see [Change the syntax highlighting theme](../../user/profile/preferences.md#change-the-syntax-highlighting-theme).
# How dark mode works
diff --git a/doc/development/fe_guide/design_anti_patterns.md b/doc/development/fe_guide/design_anti_patterns.md
index f087fbd8235..e0d3a80d9c8 100644
--- a/doc/development/fe_guide/design_anti_patterns.md
+++ b/doc/development/fe_guide/design_anti_patterns.md
@@ -1,218 +1,10 @@
---
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: 'design_patterns.md'
+remove_date: '2023-12-07'
---
-# Design Anti-patterns
+This document was moved to [another location](design_patterns.md).
-Anti-patterns may seem like good approaches at first, but it has been shown that they bring more ills than benefits. These should
-generally be avoided.
-
-Throughout the GitLab codebase, there may be historic uses of these anti-patterns. [Use discretion](https://about.gitlab.com/handbook/engineering/development/principles/#balance-refactoring-and-velocity)
-when figuring out whether or not to refactor, when touching code that uses one of these legacy patterns.
-
-NOTE:
-For new features, anti-patterns are not necessarily prohibited, but it is **strongly suggested** to find another approach.
-
-## Shared Global Object (Anti-pattern)
-
-A shared global object is an instance of something that can be accessed from anywhere and therefore has no clear owner.
-
-Here's an example of this pattern applied to a Vuex Store:
-
-```javascript
-const createStore = () => new Vuex.Store({
- actions,
- state,
- mutations
-});
-
-// Notice that we are forcing all references to this module to use the same single instance of the store.
-// We are also creating the store at import-time and there is nothing which can automatically dispose of it.
-//
-// As an alternative, we should export the `createStore` and let the client manage the
-// lifecycle and instance of the store.
-export default createStore();
-```
-
-### What problems do Shared Global Objects cause?
-
-Shared Global Objects are convenient because they can be accessed from anywhere. However,
-the convenience does not always outweigh their heavy cost:
-
-- **No ownership.** There is no clear owner to these objects and therefore they assume a non-deterministic
- and permanent lifecycle. This can be especially problematic for tests.
-- **No access control.** When Shared Global Objects manage some state, this can create some very buggy and difficult
- coupling situations because there is no access control to this object.
-- **Possible circular references.** Shared Global Objects can also create some circular referencing situations since submodules
- of the Shared Global Object can reference modules that reference itself (see
- [this MR for an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33366)).
-
-Here are some historic examples where this pattern was identified to be problematic:
-
-- [Reference to global Vuex store in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36401)
-- [Docs update to discourage singleton Vuex store](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36952)
-
-### When could the Shared Global Object pattern be actually appropriate?
-
-Shared Global Object's solve the problem of making something globally accessible. This pattern
-could be appropriate:
-
-- When a responsibility is truly global and should be referenced across the application
- (for example, an application-wide Event Bus).
-
-Even in these scenarios, consider avoiding the Shared Global Object pattern because the
-side-effects can be notoriously difficult to reason with.
-
-### References
-
-For more information, see [Global Variables Are Bad on the C2 wiki](https://wiki.c2.com/?GlobalVariablesAreBad).
-
-## Singleton (Anti-pattern)
-
-The classic [Singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) is an approach to ensure that only one
-instance of a thing exists.
-
-Here's an example of this pattern:
-
-```javascript
-class MyThing {
- constructor() {
- // ...
- }
-
- // ...
-}
-
-MyThing.instance = null;
-
-export const getThingInstance = () => {
- if (MyThing.instance) {
- return MyThing.instance;
- }
-
- const instance = new MyThing();
- MyThing.instance = instance;
- return instance;
-};
-```
-
-### What problems do Singletons cause?
-
-It is a big assumption that only one instance of a thing should exist. More often than not,
-a Singleton is misused and causes very tight coupling amongst itself and the modules that reference it.
-
-Here are some historic examples where this pattern was identified to be problematic:
-
-- [Test issues caused by singleton class in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/30398#note_331174190)
-- [Implicit Singleton created by module's shared variables](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/merge_requests/97#note_417515776)
-- [Complexity caused by Singletons](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/29461#note_324585814)
-
-Here are some ills that Singletons often produce:
-
-1. **Non-deterministic tests.** Singletons encourage non-deterministic tests because the single instance is shared across
- individual tests, often causing the state of one test to bleed into another.
-1. **High coupling.** Under the hood, clients of a singleton class all share a single specific
- instance of an object, which means this pattern inherits all the [problems of Shared Global Object](#what-problems-do-shared-global-objects-cause)
- such as no clear ownership and no access control. These leads to high coupling situations that can
- be buggy and difficult to untangle.
-1. **Infectious.** Singletons are infectious, especially when they manage state. Consider the component
- [RepoEditor](https://gitlab.com/gitlab-org/gitlab/-/blob/27ad6cb7b76430fbcbaf850df68c338d6719ed2b/app%2Fassets%2Fjavascripts%2Fide%2Fcomponents%2Frepo_editor.vue#L0-1)
- used in the Web IDE. This component interfaces with a Singleton [Editor](https://gitlab.com/gitlab-org/gitlab/-/blob/862ad57c44ec758ef3942ac2e7a2bd40a37a9c59/app%2Fassets%2Fjavascripts%2Fide%2Flib%2Feditor.js#L21)
- which manages some state for working with Monaco. Because of the Singleton nature of the Editor class,
- the component `RepoEditor` is now forced to be a Singleton as well. Multiple instances of this component
- would cause production issues because no one truly owns the instance of `Editor`.
-
-### Why is the Singleton pattern popular in other languages like Java?
-
-This is because of the limitations of languages like Java where everything has to be wrapped
-in a class. In JavaScript we have things like object and function literals where we can solve
-many problems with a module that exports utility functions.
-
-### When could the Singleton pattern be actually appropriate?**
-
-Singletons solve the problem of enforcing there to be only 1 instance of a thing. It's possible
-that a Singleton could be appropriate in the following rare cases:
-
-- We need to manage some resource that **MUST** have just 1 instance (that is, some hardware restriction).
-- There is a real [cross-cutting concern](https://en.wikipedia.org/wiki/Cross-cutting_concern) (for example, logging) and a Singleton provides the simplest API.
-
-Even in these scenarios, consider avoiding the Singleton pattern.
-
-### What alternatives are there to the Singleton pattern?
-
-#### Utility Functions
-
-When no state needs to be managed, we can export utility functions from a module without
-messing with any class instantiation.
-
-```javascript
-// bad - Singleton
-export class ThingUtils {
- static create() {
- if(this.instance) {
- return this.instance;
- }
-
- this.instance = new ThingUtils();
- return this.instance;
- }
-
- bar() { /* ... */ }
-
- fuzzify(id) { /* ... */ }
-}
-
-// good - Utility functions
-export const bar = () => { /* ... */ };
-
-export const fuzzify = (id) => { /* ... */ };
-```
-
-#### Dependency Injection
-
-[Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection) is an approach which breaks
-coupling by declaring a module's dependencies to be injected from outside the module (for example, through constructor parameters, a bona-fide Dependency Injection framework, and even in Vue `provide/inject`).
-
-```javascript
-// bad - Vue component coupled to Singleton
-export default {
- created() {
- this.mediator = MyFooMediator.getInstance();
- },
-};
-
-// good - Vue component declares dependency
-export default {
- inject: ['mediator']
-};
-```
-
-```javascript
-// bad - We're not sure where the singleton is in it's lifecycle so we init it here.
-export class Foo {
- constructor() {
- Bar.getInstance().init();
- }
-
- stuff() {
- return Bar.getInstance().doStuff();
- }
-}
-
-// good - Lets receive this dependency as a constructor argument.
-// It's also not our responsibility to manage the lifecycle.
-export class Foo {
- constructor(bar) {
- this.bar = bar;
- }
-
- stuff() {
- return this.bar.doStuff();
- }
-}
-```
-
-In this example, the lifecycle and implementation details of `mediator` are all managed
-**outside** the component (most likely the page entrypoint).
+<!-- This redirect file can be deleted after <2023-12-07>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/fe_guide/design_patterns.md b/doc/development/fe_guide/design_patterns.md
index 3c273ab18e9..44238ff5dc5 100644
--- a/doc/development/fe_guide/design_patterns.md
+++ b/doc/development/fe_guide/design_patterns.md
@@ -6,12 +6,226 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Design Patterns
+This page covers suggested design patterns and anti-patterns.
+
+NOTE:
+When adding a design pattern to this document, be sure to clearly state the **problem it solves**.
+When adding a design anti-pattern, clearly state **the problem it prevents**.
+
+## Patterns
+
The following design patterns are suggested approaches for solving common problems. Use discretion when evaluating
if a certain pattern makes sense in your situation. Just because it is a pattern, doesn't mean it is a good one for your problem.
+## Anti-patterns
+
+Anti-patterns may seem like good approaches at first, but experience has shown that they bring more
+problems than benefits. These should generally be avoided.
+
+Throughout the GitLab codebase, there may be historic uses of these anti-patterns. [Use discretion](https://about.gitlab.com/handbook/engineering/development/principles/#balance-refactoring-and-velocity)
+when figuring out whether or not to refactor, when touching code that uses one of these legacy patterns.
+
NOTE:
-When adding a design pattern to this document, be sure to clearly state the **problem it solves**.
+For new features, anti-patterns are not necessarily prohibited, but it is **strongly suggested** to find another approach.
+
+### Shared Global Object
+
+A shared global object is an instance of something that can be accessed from anywhere and therefore has no clear owner.
+
+Here's an example of this pattern applied to a Vuex Store:
+
+```javascript
+const createStore = () => new Vuex.Store({
+ actions,
+ state,
+ mutations
+});
+
+// Notice that we are forcing all references to this module to use the same single instance of the store.
+// We are also creating the store at import-time and there is nothing which can automatically dispose of it.
+//
+// As an alternative, we should export the `createStore` and let the client manage the
+// lifecycle and instance of the store.
+export default createStore();
+```
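+
+As a sketch of the alternative suggested in the comment, the module could export the factory and let
+the page entrypoint own the instance (file names here are illustrative):
+
+```javascript
+// store.js - export the factory, not a shared instance
+export const createStore = () => new Vuex.Store({
+  actions,
+  state,
+  mutations,
+});
+
+// index.js (page entrypoint) - the client creates, owns, and can dispose of the store
+import { createStore } from './store';
+
+const store = createStore();
+new Vue({ store, render: (h) => h(App) });
+```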
+
+#### What problems do Shared Global Objects cause?
+
+Shared Global Objects are convenient because they can be accessed from anywhere. However,
+the convenience does not always outweigh their heavy cost:
+
+- **No ownership.** There is no clear owner of these objects, and therefore they assume a non-deterministic
+  and permanent lifecycle. This can be especially problematic for tests.
+- **No access control.** When Shared Global Objects manage some state, this can create some very buggy and difficult
+ coupling situations because there is no access control to this object.
+- **Possible circular references.** Shared Global Objects can also create circular referencing situations, because submodules
+  of the Shared Global Object can reference modules that reference the Shared Global Object itself (see
+ [this MR for an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33366)).
+
+Here are some historic examples where this pattern was identified to be problematic:
+
+- [Reference to global Vuex store in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36401)
+- [Docs update to discourage singleton Vuex store](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36952)
+
+#### When could the Shared Global Object pattern be actually appropriate?
+
+Shared Global Objects solve the problem of making something globally accessible. This pattern
+could be appropriate:
+
+- When a responsibility is truly global and should be referenced across the application
+ (for example, an application-wide Event Bus).
+
+Even in these scenarios, consider avoiding the Shared Global Object pattern because the
+side-effects can be notoriously difficult to reason about.
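+
+As an illustration of the Event Bus case, a minimal sketch (the file name and approach are
+illustrative, using the common Vue 2 pattern of a plain Vue instance dedicated to events) could be:
+
+```javascript
+// event_bus.js - a deliberate, documented global used only for cross-cutting events
+import Vue from 'vue';
+
+// Consumers call eventBus.$emit('name', payload) and eventBus.$on('name', handler).
+export default new Vue();
+```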
+
+#### References
+
+For more information, see [Global Variables Are Bad on the C2 wiki](https://wiki.c2.com/?GlobalVariablesAreBad).
+
+### Singleton
+
+The classic [Singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) is an approach to ensure that only one
+instance of a thing exists.
+
+Here's an example of this pattern:
+
+```javascript
+class MyThing {
+ constructor() {
+ // ...
+ }
+
+ // ...
+}
+
+MyThing.instance = null;
+
+export const getThingInstance = () => {
+ if (MyThing.instance) {
+ return MyThing.instance;
+ }
+
+ const instance = new MyThing();
+ MyThing.instance = instance;
+ return instance;
+};
+```
+
+#### What problems do Singletons cause?
+
+It is a big assumption that only one instance of a thing should exist. More often than not,
+a Singleton is misused and causes very tight coupling between itself and the modules that reference it.
+
+Here are some historic examples where this pattern was identified to be problematic:
+
+- [Test issues caused by singleton class in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/30398#note_331174190)
+- [Implicit Singleton created by module's shared variables](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/merge_requests/97#note_417515776)
+- [Complexity caused by Singletons](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/29461#note_324585814)
+
+Here are some ills that Singletons often produce:
+
+1. **Non-deterministic tests.** Singletons encourage non-deterministic tests because the single instance is shared across
+ individual tests, often causing the state of one test to bleed into another.
+1. **High coupling.** Under the hood, clients of a singleton class all share a single specific
+ instance of an object, which means this pattern inherits all the [problems of Shared Global Object](#what-problems-do-shared-global-objects-cause)
+   such as no clear ownership and no access control. This leads to high coupling situations that can
+   be buggy and difficult to untangle.
+1. **Infectious.** Singletons are infectious, especially when they manage state. Consider the component
+ [RepoEditor](https://gitlab.com/gitlab-org/gitlab/-/blob/27ad6cb7b76430fbcbaf850df68c338d6719ed2b/app%2Fassets%2Fjavascripts%2Fide%2Fcomponents%2Frepo_editor.vue#L0-1)
+ used in the Web IDE. This component interfaces with a Singleton [Editor](https://gitlab.com/gitlab-org/gitlab/-/blob/862ad57c44ec758ef3942ac2e7a2bd40a37a9c59/app%2Fassets%2Fjavascripts%2Fide%2Flib%2Feditor.js#L21)
+ which manages some state for working with Monaco. Because of the Singleton nature of the Editor class,
+ the component `RepoEditor` is now forced to be a Singleton as well. Multiple instances of this component
+ would cause production issues because no one truly owns the instance of `Editor`.
+
+#### Why is the Singleton pattern popular in other languages like Java?
+
+This is because of the limitations of languages like Java where everything has to be wrapped
+in a class. In JavaScript we have things like object and function literals where we can solve
+many problems with a module that exports utility functions.
+
+#### When could the Singleton pattern be actually appropriate?
+
+Singletons solve the problem of enforcing that only one instance of a thing exists. It's possible
+that a Singleton could be appropriate in the following rare cases:
+
+- We need to manage some resource that **MUST** have only one instance (for example, due to a hardware restriction).
+- There is a real [cross-cutting concern](https://en.wikipedia.org/wiki/Cross-cutting_concern) (for example, logging) and a Singleton provides the simplest API.
+
+Even in these scenarios, consider avoiding the Singleton pattern.
+
+#### What alternatives are there to the Singleton pattern?
+
+##### Utility Functions
+
+When no state needs to be managed, we can export utility functions from a module without
+messing with any class instantiation.
+
+```javascript
+// bad - Singleton
+export class ThingUtils {
+ static create() {
+ if(this.instance) {
+ return this.instance;
+ }
+
+ this.instance = new ThingUtils();
+ return this.instance;
+ }
+
+ bar() { /* ... */ }
+
+ fuzzify(id) { /* ... */ }
+}
+
+// good - Utility functions
+export const bar = () => { /* ... */ };
+
+export const fuzzify = (id) => { /* ... */ };
+```
+
+##### Dependency Injection
+
+[Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection) is an approach that breaks
+coupling by declaring a module's dependencies so they can be injected from outside the module (for example, through constructor parameters, a dependency injection framework, or Vue's `provide/inject`).
+
+```javascript
+// bad - Vue component coupled to Singleton
+export default {
+ created() {
+ this.mediator = MyFooMediator.getInstance();
+ },
+};
+
+// good - Vue component declares dependency
+export default {
+ inject: ['mediator']
+};
+```
+
+```javascript
+// bad - We're not sure where the singleton is in its lifecycle, so we initialize it here.
+export class Foo {
+ constructor() {
+ Bar.getInstance().init();
+ }
+
+ stuff() {
+ return Bar.getInstance().doStuff();
+ }
+}
+
+// good - Let's receive this dependency as a constructor argument.
+// It's also not our responsibility to manage the lifecycle.
+export class Foo {
+ constructor(bar) {
+ this.bar = bar;
+ }
-## TBD
+ stuff() {
+ return this.bar.doStuff();
+ }
+}
+```
-Stay tuned!
+In this example, the lifecycle and implementation details of `mediator` are all managed
+**outside** the component (most likely the page entrypoint).
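+
+A sketch of such an entrypoint (assuming a Vue 2 setup; the `createMediator` factory and file names
+are illustrative) could look like:
+
+```javascript
+// index.js (page entrypoint) - owns the mediator's lifecycle and provides it to the component tree
+import Vue from 'vue';
+import MyComponent from './my_component.vue';
+import { createMediator } from './mediator';
+
+const mediator = createMediator();
+
+new Vue({
+  provide: { mediator },
+  render: (h) => h(MyComponent),
+});
+```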
diff --git a/doc/development/fe_guide/development_process.md b/doc/development/fe_guide/development_process.md
deleted file mode 100644
index 232689080ea..00000000000
--- a/doc/development/fe_guide/development_process.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
----
-
-# Frontend Development Process
-
-You can find more about the organization of the frontend team in the [handbook](https://about.gitlab.com/handbook/engineering/frontend/).
-
-## Development Checklist
-
-The idea is to remind us about specific topics during the time we build a new feature or start something. This is a common practice in other industries (like pilots) that also use standardized checklists to reduce problems early on.
-
-Copy the content over to your issue or merge request and if something doesn't apply, remove it from your current list.
-
-This checklist is intended to help us during development of bigger features/refactorings. It is not a "use it always and every point always matches" list.
-
-Use your best judgment when to use it and contribute new points through merge requests if something comes to your mind.
-
-```markdown
-### Frontend development
-
-#### Planning development
-
-- [ ] Check the current set weight of the issue, does it fit your estimate?
-- [ ] Are all [departments](https://about.gitlab.com/handbook/engineering/#engineering-teams) that are needed from your perspective already involved in the issue? (For example is UX missing?)
-- [ ] Is the specification complete? Are you missing decisions? How about error handling/defaults/edge cases? Take your time to understand the needed implementation and go through its flow.
-- [ ] Are all necessary UX specifications available that you will need to implement? Are there new UX components/patterns in the designs? Then contact the UI component team early on. How should error messages or validation be handled?
-- [ ] **Library usage** Use Vuex as soon as you have even a medium state to manage, use Vue router if you need to have different views internally and want to link from the outside. Check what libraries we already have for which occasions.
-- [ ] **Plan your implementation:**
- - [ ] **Architecture plan:** Create a plan aligned with GitLab's architecture, how you are going to do the implementation, for example Vue application setup and its components (through [onion skinning](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/35873#note_39994091)), Store structure and data flow, which existing Vue components can you reuse. It's a good idea to go through your plan with another engineer to refine it.
- - [ ] **Backend:** The best way is to kickoff the implementation in a call and discuss with the assigned Backend engineer what you will need from the backend and also when. Can you reuse existing API's? How is the performance with the planned architecture? Maybe create together a JSON mock object to already start with development.
- - [ ] **Communication:** It also makes sense to have for bigger features an own slack channel (normally called #f_{feature_name}) and even weekly demo calls with all people involved.
- - [ ] **Dependency Plan:** Are there big dependencies in the plan between you and others, then maybe create an execution diagram to show what is blocking which part and the order of the different parts.
- - [ ] **Task list:** Create a simple checklist of the subtasks that are needed for the implementation, also consider creating even sub issues. (for example show a comment, delete a comment, update a comment, etc.). This helps you and also everyone else following the implementation
-- [ ] **Keep it small** To make it easier for you and also all reviewers try to keep merge requests small and merge into a feature branch if needed. To accomplish that you need to plan that from the start. Different methods are:
- - [ ] **Skeleton based plan** Start with an MR that has the skeleton of the components with placeholder content. In following MRs you can fill the components with interactivity. This also makes it easier to spread out development on multiple people.
- - [ ] **Cookie Mode** Think about hiding the feature behind a cookie flag if the implementation is on top of existing features
- - [ ] **New route** Are you refactoring something big then you might consider adding a new route where you implement the new feature and when finished delete the current route and rename the new one. (for example 'merge_request' and 'new_merge_request')
-- [ ] **Setup** Is there any specific setup needed for your implementation (for example a kubernetes cluster)? Then let everyone know if it is not already mentioned where they can find documentation (if it doesn't exist - create it)
-- [ ] **Security** Are there any new security relevant implementations? Then contact the security team for an app security review. If you are not sure ask our [domain expert](https://about.gitlab.com/handbook/engineering/frontend/#frontend-domain-experts)
-
-#### During development
-
-- [ ] Check off tasks on your created task list to keep everyone updated on the progress
-- [ ] [Share your work early with reviewers/maintainers](#share-your-work-early)
-- [ ] Share your work with UXer and Product Manager with Screenshots and/or [GIF images](https://about.gitlab.com/handbook/product/making-gifs/). They are easy to create for you and keep them up to date.
-- [ ] If you are blocked on something let everyone on the issue know through a comment.
-- [ ] Are you unable to work on this issue for a longer period of time, also let everyone know.
-- [ ] **Documentation** Update/add docs for the new feature, see `docs/`. Ping one of the documentation experts/reviewers
-
-#### Finishing development + Review
-
-- [ ] **Keep it in the scope** Try to focus on the actual scope and avoid a scope creep during review and keep new things to new issues.
-- [ ] **Performance** Have you checked performance? For example do the same thing with 500 comments instead of 1. Document the tests and possible findings in the MR so a reviewer can directly see it.
-- [ ] Have you tested with a variety of our [supported browsers](../../install/requirements.md#supported-web-browsers)? You can use [browserstack](https://www.browserstack.com/) to be able to access a wide variety of browsers and operating systems.
-- [ ] Did you check the mobile view?
-- [ ] Check the built webpack bundle (For the report run `WEBPACK_REPORT=true gdk start`, then open `webpack-report/index.html`) if we have unnecessary bloat due to wrong references, including libraries multiple times, etc.. If you need help contact the webpack [domain expert](https://about.gitlab.com/handbook/engineering/frontend/#frontend-domain-experts)
-- [ ] **Tests** Not only greenfield tests - Test also all bad cases that come to your mind.
-- [ ] If you have multiple MRs then also smoke test against the final merge.
-- [ ] Are there any big changes on how and especially how frequently we use the API then let production know about it
-- [ ] Smoke test of the RC on dev., staging., canary deployments and .com
-- [ ] Follow up on issues that came out of the review. Create issues for discovered edge cases that should be covered in future iterations.
-```
-
-### Code deletion checklist
-
-When your merge request deletes code, it's important to also delete all
-related code that is no longer used.
-When deleting Haml and Vue code, check whether it contains the following types of
-code that is unused:
-
-- CSS.
-
- For example, we've deleted a Vue component that contained the `.mr-card` class, which is now unused.
- The `.mr-card` CSS rule set should then be deleted from `merge_requests.scss`.
-
-- Ruby variables.
-
- Deleting unused Ruby variables is important so we don't continue instantiating them with
- potentially expensive code.
-
- For example, we've deleted a Haml template that used the `@total_count` Ruby variable.
- The `@total_count` variable was no longer used in the remaining templates for the page.
- The instantiation of `@total_count` in `issues_controller.rb` should then be deleted so that we
- don't make unnecessary database calls to calculate the count of issues.
-
-- Ruby methods.
-
-### Merge Request Review
-
-With the purpose of being [respectful of others' time](https://about.gitlab.com/handbook/values/#be-respectful-of-others-time), follow these guidelines when asking for a review:
-
-- Make sure your Merge Request:
- - milestone is set
- - at least the labels suggested by danger-bot are set
- - has a clear description
- - includes before/after screenshots if there is a UI change
- - pipeline is green
- - includes tests
- - includes a changelog entry (when necessary)
-- Before assigning to a maintainer, assign to a reviewer.
-- If you assigned a merge request or pinged someone directly, be patient because we work in different timezones and asynchronously. Unless the merge request is urgent (like fixing a broken default branch), don't DM or reassign the merge request before waiting for a 24-hour window.
-- If you have a question regarding your merge request/issue, make it on the merge request/issue. When we DM each other, we no longer have a SSOT and [no one else is able to contribute](https://about.gitlab.com/handbook/values/#public-by-default).
-- When you have a big **Draft** merge request with many changes, you're advised to get the review started before adding/removing significant code. Make sure it is assigned well before the release cut-off, as the reviewers/maintainers would always prioritize reviewing finished MRs before the **Draft** ones.
-- Make sure to remove the `Draft:` title before the last round of review.
-
-### Share your work early
-
-1. Before writing code, ensure your vision of the architecture is aligned with
- GitLab architecture.
-1. Add a diagram to the issue and ask a frontend maintainer in the Slack channel `#frontend_maintainers` about it.
-
- ![Diagram of issue boards architecture](img/boards_diagram.png)
-
-1. Don't take more than one week between starting work on a feature and
- sharing a Merge Request with a reviewer or a maintainer.
-
-### Vue features
-
-1. Follow the steps in [Vue.js Best Practices](vue.md)
-1. Follow the style guide.
-1. Only a handful of people are allowed to merge Vue related features.
- Reach out to one of Vue experts early in this process.
diff --git a/doc/development/fe_guide/getting_started.md b/doc/development/fe_guide/getting_started.md
new file mode 100644
index 00000000000..bb59bf7b8ee
--- /dev/null
+++ b/doc/development/fe_guide/getting_started.md
@@ -0,0 +1,54 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Getting started
+
+This page will guide you through the Frontend development process and show you what a normal Merge Request cycle looks like. You can find more about the organization of the frontend team in the [handbook](https://about.gitlab.com/handbook/engineering/frontend/).
+
+There are a lot of things to consider for a first merge request and it can feel overwhelming. The [Frontend onboarding course](onboarding_course/index.md) provides a 6-week structured curriculum to learn how to contribute to the GitLab frontend.
+
+## Development life cycle
+
+### Step 1: Preparing the issue
+
+Before tackling any work, read through the issue that has been assigned to you and make sure that all [required departments](https://about.gitlab.com/handbook/engineering/#engineering-teams) have been involved as needed. Read through the comments and, if anything is unclear, post a comment in the issue summarizing **what you think the work is** and ping your Engineering or Product Manager to confirm. Once everything is clarified, apply the correct workflow labels to the issue and create a merge request branch. If the branch is created directly from the issue, the issue and the merge request are linked by default.
+
+### Step 2: Plan your implementation
+
+Before writing code, make sure to ask yourself the following questions and have clear answers before you start developing:
+
+- What API data is required? Is it already available in our API or should I ask a Backend counterpart?
+ - If this is GraphQL, write a query proposal and ask your BE counterpart to confirm they are in agreement.
+- Can I use [GitLab UI components](https://gitlab-org.gitlab.io/gitlab-ui/?path=/docs/base-accordion--docs)? Which components are appropriate and do they have all of the functionality that I need?
+- Are there existing components or utils in the GitLab project that I could use?
+- [Should this change live behind a Feature Flag](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags)?
+- In which directory should this code live?
+- Should I build part of this feature as reusable? If so, where should it live in the codebase and how do I make it discoverable?
+  - Note: This is still under consideration, but the `vue_shared` folder remains the preferred directory for GitLab-wide components.
+- What kinds of tests will it require? Consider unit tests **and** [feature tests](../testing_guide/frontend_testing.md#get-started-with-feature-tests). Should I reach out to a [SET](https://handbook.gitlab.com/job-families/engineering/software-engineer-in-test/) for guidance, or am I comfortable implementing the tests?
+- How big will this change be? Try to keep diffs to **about 500 changed lines at most**.
+
+If all of these questions have an answer, then you can safely move on to writing code.
+
+### Step 3: Writing code
+
+Make sure to communicate with your team as you progress or if you are unable to work on a planned issue for a long period of time.
+
+If you require assistance, push your branch and share your Merge Request, either directly with a teammate or in the Slack channel `#frontend`, to get advice on how to move forward. You can [mark your Merge Request as a draft](../../user/project/merge_requests/drafts.md), which clearly communicates that it is not ready for a full review. Always remember to have a [low level of shame](https://handbook.gitlab.com/handbook/values/#low-level-of-shame) and **ask for help when you need it**.
+
+As you write code, test your change thoroughly. It is the author's responsibility to test their code, ensure that it works as expected, and ensure that it does not break existing behavior. Reviewers may help in that regard, but **do not expect it**. Make sure to check different browsers, mobile viewports, and unexpected user flows.
+
+### Step 4: Review
+
+When it's time to send your code for review, it can be quite stressful. It is recommended to read through [the code review guidelines](../code_review.md) to get a better sense of what to expect. One of the most valuable pieces of advice, which is **essential**, is simply:
+
+> ... to avoid unnecessary back-and-forth with reviewers, ... perform a self-review of your own merge request, and follow the Code Review guidelines.
+
+This is key to having a great merge request experience because you will catch small mistakes and leave comments in areas where your reviewer might be uncertain and have questions. This speeds up the process tremendously.
+
+### Step 5: Verifying
+
+After your code has merged (congratulations!), make sure to verify that it works on the production environment and does not cause any errors.
diff --git a/doc/development/fe_guide/guides.md b/doc/development/fe_guide/guides.md
new file mode 100644
index 00000000000..dc2fffcf10a
--- /dev/null
+++ b/doc/development/fe_guide/guides.md
@@ -0,0 +1,13 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Guides
+
+This section contains guides to help our developers.
+For example, you can find information about how to accomplish a specific task,
+or how to get proficient with a tool.
+
+Guidelines related to one specific technology, like Vue, should not be added to this section. Instead, add them to the `Tech Stack` section.
diff --git a/doc/development/fe_guide/img/boards_diagram.png b/doc/development/fe_guide/img/boards_diagram.png
deleted file mode 100644
index 856c9b05bbf..00000000000
--- a/doc/development/fe_guide/img/boards_diagram.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md
index 405e830406e..70f7aad207b 100644
--- a/doc/development/fe_guide/index.md
+++ b/doc/development/fe_guide/index.md
@@ -86,135 +86,48 @@ Now that our values have been defined, we can base our goals on these values and
We have detailed description on how we see GitLab frontend in the future in [Frontend Goals](frontend_goals.md) section
-### Frontend onboarding course
+### First time contributors
-The [Frontend onboarding course](onboarding_course/index.md) provides a 6-week structured curriculum to learn how to contribute to the GitLab frontend.
+If you're a first-time contributor, see [Contribute to GitLab development](../contributing/index.md).
-### Browser Support
+When you're ready to create your first merge request, or need to review the GitLab frontend workflow, see [Getting started](getting_started.md).
-For supported browsers, see our [requirements](../../install/requirements.md#supported-web-browsers).
+For a guided introduction to frontend development at GitLab, you can watch the [Frontend onboarding course](onboarding_course/index.md) which provides a six-week structured curriculum.
-Use [BrowserStack](https://www.browserstack.com/) to test with our supported browsers.
-Sign in to BrowserStack with the credentials saved in the **Engineering** vault of the GitLab
-[shared 1Password account](https://about.gitlab.com/handbook/security/#1password-guide).
+### Helpful links
-## Initiatives
+#### Initiatives
You can find current frontend initiatives with a cross-functional impact on epics
with the label [frontend-initiative](https://gitlab.com/groups/gitlab-org/-/epics?state=opened&page=1&sort=UPDATED_AT_DESC&label_name[]=frontend-initiative).
-## Principles
-
-[High-level guidelines](principles.md) for contributing to GitLab.
-
-## Development Process
-
-How we [plan and execute](development_process.md) the work on the frontend.
-
-## Architecture
-
-How we go about [making fundamental design decisions](architecture.md) in the GitLab frontend team
-or make changes to our frontend development guidelines.
-
-## Testing
+#### Testing
How we write [frontend tests](../testing_guide/frontend_testing.md), run the GitLab test suite, and debug test related
issues.
-## Pajamas Design System
+#### Pajamas Design System
Reusable components with technical and usage guidelines can be found in our
[Pajamas Design System](https://design.gitlab.com/).
-## Design Patterns
-
-JavaScript [design patterns](design_patterns.md) in the GitLab codebase.
-
-## Design Anti-patterns
-
-JavaScript [design anti-patterns](design_anti_patterns.md) we try to avoid.
-
-## Vue.js Best Practices
-
-Vue specific [design patterns and practices](vue.md).
-
-## Vuex
-
-[Vuex](vuex.md) specific design patterns and practices.
-
-## Axios
-
-[Axios](axios.md) specific practices and gotchas.
-
-## GraphQL
-
-How to use [GraphQL](graphql.md).
-
-## HAML
-
-How to use [HAML](haml.md).
-
-## ViewComponent
-
-How we use [ViewComponent](view_component.md).
-
-## Icons and Illustrations
-
-How we use SVG for our [Icons and Illustrations](icons.md).
-
-## Dependencies
-
-General information about frontend [dependencies](dependencies.md) and how we manage them.
-
-## Keyboard Shortcuts
-
-How we implement [keyboard shortcuts](keyboard_shortcuts.md) that can be customized and disabled.
-
-## Editors
-
-GitLab text editing experiences are provided by the [source editor](source_editor.md) and
-the [rich text editor](content_editor.md).
-
-## Frontend FAQ
+#### Frontend FAQ
Read the [frontend's FAQ](frontend_faq.md) for common small pieces of helpful information.
-## Style Guides
-
-See the relevant style guides for our guidelines and for information on linting:
-
-- [JavaScript](style/javascript.md). Our guide is based on
-the excellent [Airbnb](https://github.com/airbnb/javascript) style guide with a few small
-changes.
-- [SCSS](style/scss.md): [our SCSS conventions](https://gitlab.com/gitlab-org/frontend/gitlab-stylelint-config) which are enforced through [`stylelint`](https://stylelint.io).
-- [HTML](style/html.md). Guidelines for writing HTML code consistent with the rest of the codebase.
-- [Vue](style/vue.md). Guidelines and conventions for Vue code may be found here.
-
-## [Tooling](tooling.md)
-
-Our code is automatically formatted with [Prettier](https://prettier.io) to follow our guidelines. Read our [Tooling guide](tooling.md) for more detail.
-
-## [Performance](performance.md)
-
-Best practices for monitoring and maximizing frontend performance.
-
-## [Security](security.md)
-
-Frontend security practices.
-
-## Accessibility
-
-Our [accessibility standards and resources](accessibility.md).
-
-## Logging
-
-Best practices for [client-side logging](logging.md) for GitLab frontend development.
-
-## [Internationalization (i18n) and Translations](../i18n/externalization.md)
+#### [Internationalization (i18n) and Translations](../i18n/externalization.md)
Frontend internationalization support is described in [this document](../i18n/index.md).
The [externalization part of the guide](../i18n/externalization.md) explains the helpers/methods available.
-## [Troubleshooting](troubleshooting.md)
+#### [Troubleshooting](troubleshooting.md)
Running into a Frontend development problem? Check out [this guide](troubleshooting.md) to help resolve your issue.
+
+#### Browser support
+
+For supported browsers, see our [requirements](../../install/requirements.md#supported-web-browsers).
+
+Use [BrowserStack](https://www.browserstack.com/) to test with our supported browsers.
+Sign in to BrowserStack with the credentials saved in the **Engineering** vault of the GitLab
+[shared 1Password account](https://about.gitlab.com/handbook/security/#1password-guide).
diff --git a/doc/development/fe_guide/principles.md b/doc/development/fe_guide/principles.md
deleted file mode 100644
index 6d1a3238b73..00000000000
--- a/doc/development/fe_guide/principles.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
----
-
-# Principles
-
-These principles ensure that your frontend contribution starts off in the right direction.
-
-## Discuss architecture before implementation
-
-Discuss your architecture design in an issue before writing code. This helps decrease the review time and also provides good practice for writing and thinking about system design.
-
-## Be consistent
-
-There are multiple ways of writing code to accomplish the same results. We should be as consistent as possible in how we write code across our codebases. This makes it easier for us to maintain our code across GitLab.
-
-## Improve code [iteratively](https://about.gitlab.com/handbook/values/#iteration)
-
-Whenever you see existing code that does not follow our current style guide, update it proactively. You don't need to fix everything, but each merge request should iteratively improve our codebase, and reduce technical debt where possible.
diff --git a/doc/development/fe_guide/sentry.md b/doc/development/fe_guide/sentry.md
new file mode 100644
index 00000000000..af4a006c7ea
--- /dev/null
+++ b/doc/development/fe_guide/sentry.md
@@ -0,0 +1,34 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Sentry
+
+As part of the [Frontend Observability Working Group](https://about.gitlab.com/handbook/company/working-groups/) we're looking to provide documentation on how to use Sentry effectively.
+If left unchecked, Sentry can get noisy and become unreliable.
+This page aims to help guide us toward more sensible Sentry usage.
+
+## Which errors to report to Sentry explicitly and which to show only to users (for example, as alerts)
+
+If we send all errors to Sentry, it gets very noisy, very quickly.
+We want to filter out the errors that we either don't care about, or have no control over.
+For example, if a user fills out a form incorrectly, this is not something we want to send to Sentry.
+If that form fails because it's hitting a dead endpoint, this is an error we want Sentry to know about.
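
The distinction can be sketched as a tiny classifier. The sketch below is illustrative only (written in Go for a self-contained example; the error values and the `shouldReportToSentry` helper are hypothetical, not part of any GitLab or Sentry API): user mistakes become alerts, while failures we control get reported.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical error categories, for illustration only.
var (
	errValidation   = errors.New("form filled out incorrectly") // the user's mistake
	errDeadEndpoint = errors.New("endpoint returned 404")       // our bug
)

// shouldReportToSentry filters out errors we have no control over,
// so Sentry only receives actionable failures.
func shouldReportToSentry(err error) bool {
	return !errors.Is(err, errValidation)
}

func main() {
	fmt.Println(shouldReportToSentry(errValidation))   // false: show an alert to the user instead
	fmt.Println(shouldReportToSentry(errDeadEndpoint)) // true: report it, we can fix this
}
```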
+
+## How to catch errors correctly so Sentry can display them reliably
+
+TBD
+
+## How to catch special cases you want to track (like we did with the pipeline graph)
+
+TBD
+
+## How to navigate Sentry and find errors
+
+TBD
+
+## How to debug Sentry errors effectively
+
+TBD
diff --git a/doc/development/fe_guide/tech_stack.md b/doc/development/fe_guide/tech_stack.md
new file mode 100644
index 00000000000..9c0d50ea7bd
--- /dev/null
+++ b/doc/development/fe_guide/tech_stack.md
@@ -0,0 +1,11 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Tech Stack
+
+For an exhaustive list of the technologies we use, see our [latest `package.json` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/package.json?ref_type=heads).
+
+Each navigation item in this section is a guide for that specific technology.
diff --git a/doc/development/fe_guide/tips_and_tricks.md b/doc/development/fe_guide/tips_and_tricks.md
new file mode 100644
index 00000000000..dcacdb8387b
--- /dev/null
+++ b/doc/development/fe_guide/tips_and_tricks.md
@@ -0,0 +1,31 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Tips and tricks
+
+## Code deletion checklist
+
+When your merge request deletes code, it's important to also delete all
+related code that is no longer used.
+When deleting Haml and Vue code, check for the following types of unused code:
+
+- CSS.
+
+ For example, we've deleted a Vue component that contained the `.mr-card` class, which is now unused.
+ The `.mr-card` CSS rule set should then be deleted from `merge_requests.scss`.
+
+- Ruby variables.
+
+ Deleting unused Ruby variables is important so we don't continue instantiating them with
+ potentially expensive code.
+
+ For example, we've deleted a Haml template that used the `@total_count` Ruby variable.
+ The `@total_count` variable was no longer used in the remaining templates for the page.
+ The instantiation of `@total_count` in `issues_controller.rb` should then be deleted so that we
+ don't make unnecessary database calls to calculate the count of issues.
+
+- Ruby methods.
diff --git a/doc/development/feature_development.md b/doc/development/feature_development.md
index c4a5c996ab1..2f013f698dc 100644
--- a/doc/development/feature_development.md
+++ b/doc/development/feature_development.md
@@ -95,7 +95,7 @@ Consult these topics for information on contributing to specific GitLab features
- [Shell commands](shell_commands.md) in the GitLab codebase
- [Value Stream Analytics development guide](value_stream_analytics.md)
- [Application limits](application_limits.md)
-- [AI features](ai_features.md)
+- [AI features](ai_features/index.md)
### Import and Export
@@ -163,8 +163,8 @@ The following integration guides are internal. Some integrations require access
## Analytics Instrumentation guides
- [Analytics Instrumentation guide](https://about.gitlab.com/handbook/product/analytics-instrumentation-guide/)
-- [Service Ping guide](service_ping/index.md)
-- [Snowplow guide](snowplow/index.md)
+- [Service Ping guide](internal_analytics/service_ping/index.md)
+- [Snowplow guide](internal_analytics/snowplow/index.md)
## Experiment guide
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 8c0f7faab28..af40fd8b945 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -420,9 +420,18 @@ The actor is a second parameter of the `Feature.enabled?` call. The
same actor type must be used consistently for all invocations of `Feature.enabled?`.
```ruby
+# Bad
Feature.enabled?(:feature_flag, project)
Feature.enabled?(:feature_flag, group)
Feature.enabled?(:feature_flag, user)
+
+# Good
+Feature.enabled?(:feature_flag, group_a)
+Feature.enabled?(:feature_flag, group_b)
+
+# Also good - using separate flags for each actor type
+Feature.enabled?(:feature_flag_group, group)
+Feature.enabled?(:feature_flag_user, user)
```
See [Feature flags in the development of GitLab](controls.md#process) for details on how to use ChatOps
diff --git a/doc/development/features_inside_dot_gitlab.md b/doc/development/features_inside_dot_gitlab.md
index a1f5111d6f4..f8de93a2243 100644
--- a/doc/development/features_inside_dot_gitlab.md
+++ b/doc/development/features_inside_dot_gitlab.md
@@ -16,6 +16,6 @@ When implementing new features, please refer to these existing features to avoid
- [Route Maps](../ci/review_apps/index.md#route-maps): `.gitlab/route-map.yml`.
- [Customize Auto DevOps Helm Values](../topics/autodevops/customize.md#customize-helm-chart-values): `.gitlab/auto-deploy-values.yaml`.
- [Insights](../user/project/insights/index.md#configure-project-insights): `.gitlab/insights.yml`.
-- [Service Desk Templates](../user/project/service_desk/index.md#customize-emails-sent-to-the-requester): `.gitlab/service_desk_templates/`.
+- [Service Desk Templates](../user/project/service_desk/configure.md#customize-emails-sent-to-the-requester): `.gitlab/service_desk_templates/`.
- [Secret Detection Custom Rulesets](../user/application_security/secret_detection/index.md#disable-predefined-analyzer-rules): `.gitlab/secret-detection-ruleset.toml`
- [Static Analysis Custom Rulesets](../user/application_security/sast/customize_rulesets.md#create-the-configuration-file): `.gitlab/sast-ruleset.toml`
diff --git a/doc/development/file_storage.md b/doc/development/file_storage.md
index c346d55f639..39833a441ee 100644
--- a/doc/development/file_storage.md
+++ b/doc/development/file_storage.md
@@ -56,7 +56,7 @@ they are still not 100% standardized. You can see them below:
CI Artifacts and LFS Objects behave differently in CE and EE. In CE they inherit the `GitlabUploader`
while in EE they inherit the `ObjectStorage` and store files in an S3 API-compatible object store.
-In the case of Issues/MR/Notes Markdown attachments, there is a different approach using the [Hashed Storage](../administration/repository_storage_types.md) layout,
+In the case of Issues/MR/Notes Markdown attachments, there is a different approach using the [Hashed Storage](../administration/repository_storage_paths.md) layout,
instead of basing the path into a mutable variable `:project_path_with_namespace`, it's possible to use the
hash of the project ID instead, if project migrates to the new approach (introduced in 10.2).
diff --git a/doc/development/fips_compliance.md b/doc/development/fips_compliance.md
index 4f6a9feb191..60677abf292 100644
--- a/doc/development/fips_compliance.md
+++ b/doc/development/fips_compliance.md
@@ -59,7 +59,7 @@ listed here that also do not work properly in FIPS mode:
- [Container Scanning](../user/application_security/container_scanning/index.md) support for scanning images in repositories that require authentication.
- [Code Quality](../ci/testing/code_quality.md) does not support operating in FIPS-compliant mode.
- [Dependency scanning](../user/application_security/dependency_scanning/index.md) support for Gradle.
-- [Dynamic Application Security Testing (DAST)](../user/application_security/dast/proxy-based.md) supports a reduced set of analyzers. The proxy-based analyzer is not available in FIPS mode today, however browser-based DAST, DAST API, and DAST API Fuzzing images are available.
+- [Dynamic Application Security Testing (DAST)](../user/application_security/dast/proxy-based.md) supports a reduced set of analyzers. The proxy-based analyzer and on-demand scanning are not available in FIPS mode today; however, browser-based DAST, DAST API, and DAST API Fuzzing images are available.
- [Solutions for vulnerabilities](../user/application_security/vulnerabilities/index.md#resolve-a-vulnerability)
for yarn projects.
- [Static Application Security Testing (SAST)](../user/application_security/sast/index.md)
diff --git a/doc/development/gems.md b/doc/development/gems.md
index c061b33b5e4..132bf931da8 100644
--- a/doc/development/gems.md
+++ b/doc/development/gems.md
@@ -238,11 +238,10 @@ The project for a new Gem should always be created in [`gitlab-org/ruby/gems` na
the gem name with `gitlab-`. For example, `gitlab-sidekiq-fetcher`.
1. Locally create the gem or fork as necessary.
1. [Publish an empty `0.0.1` version of the gem to rubygems.org](https://guides.rubygems.org/publishing/#publishing-to-rubygemsorg) to ensure the gem name is reserved.
-1. Add the [`gitlab_rubygems`](https://rubygems.org/profiles/gitlab_rubygems) and [`gitlab-qa`](https://rubygems.org/profiles/gitlab-qa) users as owners of the new gem by running:
+1. Add the [`gitlab_rubygems`](https://rubygems.org/profiles/gitlab_rubygems) user as owner of the new gem by running:
```shell
gem owner <gem-name> --add gitlab_rubygems
- gem owner <gem-name> --add gitlab-qa
```
1. Optional. Add some or all of the following users as co-owners:
@@ -251,8 +250,8 @@ The project for a new Gem should always be created in [`gitlab-org/ruby/gems` na
- [Stan Hu](https://rubygems.org/profiles/stanhu)
1. Optional. Add any other relevant developers as co-owners.
1. Visit `https://rubygems.org/gems/<gem-name>` and verify that the gem was published
- successfully and `gitlab_rubygems` & `gitlab-qa` are also owners.
-1. Create a project in the [`gitlab-org/ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/). To create this project:
+ successfully and `gitlab_rubygems` is also an owner.
+1. Create a project in the [`gitlab-org/ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/) (or in a subgroup of it):
1. Follow the [instructions for new projects](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#creating-a-new-project).
1. Follow the instructions for setting up a [CI/CD configuration](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration).
1. Use the [shared CI/CD config](https://gitlab.com/gitlab-org/quality/pipeline-common/-/blob/master/ci/gem-release.yml)
@@ -264,7 +263,7 @@ The project for a new Gem should always be created in [`gitlab-org/ruby/gems` na
file: '/ci/gem-release.yml'
```
- This job will handle building and publishing the gem (it uses a `gilab-qa` Rubygems.org
+ This job will handle building and publishing the gem (it uses a `gitlab_rubygems` Rubygems.org
API token inherited from the `gitlab-org/ruby/gems` group, in order to publish the gem
package), as well as creating the tag, release and populating its release notes by
using the
diff --git a/doc/development/git_object_deduplication.md b/doc/development/git_object_deduplication.md
index 961bfca0d9b..65d2338cd65 100644
--- a/doc/development/git_object_deduplication.md
+++ b/doc/development/git_object_deduplication.md
reliably decide if an object is no longer needed.
### Git alternates in GitLab: pool repositories
-GitLab organizes this object borrowing by [creating special **pool repositories**](../administration/repository_storage_types.md)
+GitLab organizes this object borrowing by [creating special **pool repositories**](../administration/repository_storage_paths.md)
which are hidden from the user. We then use Git
alternates to let a collection of project repositories borrow from a
single pool repository. We call such a collection of project
@@ -101,7 +101,7 @@ are as follows:
### Assumptions
-- All repositories in a pool must use [hashed storage](../administration/repository_storage_types.md).
+- All repositories in a pool must use [hashed storage](../administration/repository_storage_paths.md).
This is so that we don't have to ever worry about updating paths in
`object/info/alternates` files.
- All repositories in a pool must be on the same Gitaly storage shard.
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index d38be071f39..45554ae465d 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -243,11 +243,13 @@ To avoid mismatching users, the search by GitHub user ID is not done when import
Enterprise.
Because this process is quite expensive we cache the result of these lookups in
-Redis. For every user looked up we store three keys:
+Redis. For every user looked up we store five keys:
- A Redis key mapping GitHub usernames to their email addresses.
- A Redis key mapping a GitHub email address to a GitLab user ID.
- A Redis key mapping a GitHub user ID to GitLab user ID.
+- A Redis key mapping a GitHub username to an ETag header.
+- A Redis key indicating whether an email lookup has been done for a project.
We cache two types of lookups:
@@ -260,9 +262,12 @@ The expiration time of these keys is 24 hours. When retrieving the cache of a
positive lookup, we refresh the TTL automatically. The TTL of false lookups is
never refreshed.
+If an email lookup returns an empty or negative result, a [Conditional Request](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests) is made once for every project, with a cached ETag in the header.
+Conditional Requests do not count towards the GitHub API rate limit.
+
Because of this caching layer, it's possible newly registered GitLab accounts
aren't linked to their corresponding GitHub accounts. This, however, is resolved
-after the cached keys expire.
+after the cached keys expire or if a new project is imported.
The user cache lookup is shared across projects. This means that the greater the number of
projects that are imported, the fewer GitHub API calls are needed.
@@ -287,7 +292,7 @@ The code for this resides in:
- `lib/gitlab/github_import/label_finder.rb`
- `lib/gitlab/github_import/milestone_finder.rb`
-- `lib/gitlab/github_import/caching.rb`
+- `lib/gitlab/cache/import/caching.rb`
## Logs
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index 7648e84f5e8..c6d7b231b72 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -9,8 +9,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
This document describes various guidelines and best practices for GitLab
projects using the [Go language](https://go.dev/).
-## Overview
-
GitLab is built on top of [Ruby on Rails](https://rubyonrails.org/), but we're
also using Go for projects where it makes sense. Go is a very powerful
language, with many advantages, and is best suited for projects with a lot of
@@ -73,7 +71,7 @@ of possible security breaches in our code:
Remember to run
[SAST](../../user/application_security/sast/index.md) and [Dependency Scanning](../../user/application_security/dependency_scanning/index.md)
-**(ULTIMATE)** on your project (or at least the
+**(ULTIMATE ALL)** on your project (or at least the
[`gosec` analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/gosec)),
and to follow our [Security requirements](../code_review.md#security).
@@ -142,6 +140,12 @@ become available, you can share job templates like this
Go GitLab linter plugins are maintained in the [`gitlab-org/language-tools/go/linters`](https://gitlab.com/gitlab-org/language-tools/go/linters/) namespace.
+### Help text style guide
+
+If your Go project produces help text for users, consider following the advice given in the
+[Help text style guide](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/help_text_style_guide.md) in the
+`gitaly` project.
+
## Dependencies
Dependencies should be kept to the minimum. The introduction of a new
@@ -340,21 +344,34 @@ which documents and centralizes at the same time all the possible command line
interactions with the program. Don't use `os.Getenv`; it hides variables deep
in the code.
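
A minimal sketch of that advice using the standard `flag` package (the flag names and defaults are illustrative): every option is declared in one place and shows up in `--help` output, instead of hiding in `os.Getenv` calls scattered through the code.

```go
package main

import (
	"flag"
	"fmt"
)

// parseConfig declares all command-line options in one place, so
// `--help` documents every possible interaction with the program.
func parseConfig(args []string) (listenAddr string, debug bool, err error) {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	fs.StringVar(&listenAddr, "listen-addr", "localhost:8080", "address to listen on")
	fs.BoolVar(&debug, "debug", false, "enable debug logging")
	err = fs.Parse(args)
	return listenAddr, debug, err
}

func main() {
	addr, debug, err := parseConfig([]string{"-listen-addr", "0.0.0.0:9090"})
	if err != nil {
		panic(err)
	}
	fmt.Println(addr, debug) // 0.0.0.0:9090 false
}
```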
-## Daemons
+## Libraries
+
+### LabKit
-### Logging
+[LabKit](https://gitlab.com/gitlab-org/labkit) is a place to keep common
+libraries for Go services. For examples of using LabKit, see [`workhorse`](https://gitlab.com/gitlab-org/gitlab/tree/master/workhorse)
+and [`gitaly`](https://gitlab.com/gitlab-org/gitaly). LabKit exports three related pieces of functionality:
-The usage of a logging library is strongly recommended for daemons. Even
-though there is a `log` package in the standard library, we generally use
-[Logrus](https://github.com/sirupsen/logrus). Its plugin ("hooks") system
-makes it a powerful logging library, with the ability to add notifiers and
-formatters at the logger level directly.
+- [`gitlab.com/gitlab-org/labkit/correlation`](https://gitlab.com/gitlab-org/labkit/tree/master/correlation):
+ for propagating and extracting correlation ids between services.
+- [`gitlab.com/gitlab-org/labkit/tracing`](https://gitlab.com/gitlab-org/labkit/tree/master/tracing):
+ for instrumenting Go libraries for distributed tracing.
+- [`gitlab.com/gitlab-org/labkit/log`](https://gitlab.com/gitlab-org/labkit/tree/master/log):
+ for structured logging using Logrus.
+
+This gives us a thin abstraction over underlying implementations that is
+consistent across Workhorse, Gitaly, and possibly other Go servers. For
+example, in the case of `gitlab.com/gitlab-org/labkit/tracing` we can switch
+from using `Opentracing` directly to using `Zipkin` or the Go kit's own tracing wrapper
+without changes to the application code, while still keeping the same
+consistent configuration mechanism (that is, the `GITLAB_TRACING` environment
+variable).
#### Structured (JSON) logging
Every binary ideally must have structured (JSON) logging in place as it helps
-with searching and filtering the logs. At GitLab we use structured logging in
-JSON format, as all our infrastructure assumes that. When using
+with searching and filtering the logs. LabKit provides an abstraction over [Logrus](https://github.com/sirupsen/logrus).
+We use structured logging in JSON format, because all our infrastructure assumes that. When using
[Logrus](https://github.com/sirupsen/logrus) you can turn on structured
logging by using the built-in [JSON formatter](https://github.com/sirupsen/logrus#formatters). This follows the
same logging type we use in our [Ruby applications](../logging.md#use-structured-json-logging).
@@ -375,26 +392,6 @@ There are a few guidelines one should follow when using the
have to log multiple keys, always use `WithFields` instead of calling
`WithField` more than once.
-### Tracing and Correlation
-
-[LabKit](https://gitlab.com/gitlab-org/labkit) is a place to keep common
-libraries for Go services. Currently it's vendored into two projects:
-Workhorse and Gitaly, and it exports two main (but related) pieces of
-functionality:
-
-- [`gitlab.com/gitlab-org/labkit/correlation`](https://gitlab.com/gitlab-org/labkit/tree/master/correlation):
- for propagating and extracting correlation ids between services.
-- [`gitlab.com/gitlab-org/labkit/tracing`](https://gitlab.com/gitlab-org/labkit/tree/master/tracing):
- for instrumenting Go libraries for distributed tracing.
-
-This gives us a thin abstraction over underlying implementations that is
-consistent across Workhorse, Gitaly, and, in future, other Go servers. For
-example, in the case of `gitlab.com/gitlab-org/labkit/tracing` we can switch
-from using `Opentracing` directly to using `Zipkin` or the Go kit's own tracing wrapper
-without changes to the application code, while still keeping the same
-consistent configuration mechanism (that is, the `GITLAB_TRACING` environment
-variable).
-
### Context
Since daemons are long-running applications, they should have mechanisms to
diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md
index 25651639170..59362dc33c0 100644
--- a/doc/development/gotchas.md
+++ b/doc/development/gotchas.md
@@ -221,7 +221,7 @@ Assets that need to be served to the user are stored under the `app/assets` dire
However, you cannot access the content of any file from within `app/assets` from the application code, as we do not include that folder in production installations as a [space saving measure](https://gitlab.com/gitlab-org/omnibus-gitlab/-/commit/ca049f990b223f5e1e412830510a7516222810be).
```ruby
-support_bot = User.support_bot
+support_bot = Users::Internal.support_bot
# accessing a file from the `app/assets` folder
support_bot.avatar = Rails.root.join('app', 'assets', 'images', 'bot_avatars', 'support_bot.png').open
@@ -244,3 +244,59 @@ Use `app/assets` for storing any asset that needs to be precompiled and served t
Use `lib/assets` for storing any asset that does not need to be served to the end user directly, but is still required to be accessed by the application code.
MR for reference: [!37671](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37671)
+
+## Do not override `has_many through:` or `has_one through:` associations
+
+Associations with the `:through` option should not be overridden as we could accidentally
+destroy the wrong object.
+
+This is because the `destroy()` method behaves differently when acting on
+`has_many through:` and `has_one through:` associations.
+
+```ruby
+group.users.destroy(id)
+```
+
+The code example above reads as if we are destroying a `User` record, but behind the scenes, it is destroying a `Member` record. This is because the `users` association is defined on `Group` as a `has_many through:` association:
+
+```ruby
+class Group < Namespace
+ has_many :group_members, -> { where(requested_at: nil).where.not(members: { access_level: Gitlab::Access::MINIMAL_ACCESS }) }, dependent: :destroy, as: :source
+
+ has_many :users, through: :group_members
+end
+```
+
+And Rails has the following [behavior](https://api.rubyonrails.org/classes/ActiveRecord/Associations/ClassMethods.html#method-i-has_many) on using `destroy()` on such associations:
+
+> If the :through option is used, then the join records are destroyed instead, not the objects themselves.
+
+This is why a `Member` record, which is the join record connecting a `User` and `Group`, is being destroyed.
+
+Now, suppose we override the `users` association like this:
+
+```ruby
+class Group < Namespace
+ has_many :group_members, -> { where(requested_at: nil).where.not(members: { access_level: Gitlab::Access::MINIMAL_ACCESS }) }, dependent: :destroy, as: :source
+
+ has_many :users, through: :group_members
+
+ def users
+ super.where(admin: false)
+ end
+end
+```
+
+The overridden method now changes the above behavior of `destroy()`, such that if we execute
+
+```ruby
+group.users.destroy(id)
+```
+
+a `User` record will be deleted, which can lead to data loss.
+
+In short, overriding a `has_many through:` or `has_one through:` association can prove dangerous.
+To prevent this from happening, we are introducing an
+automated check in [!131455](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131455).
+
+For more information, see [issue 424536](https://gitlab.com/gitlab-org/gitlab/-/issues/424536).
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index f4ace7491eb..68c2778eabe 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -204,9 +204,9 @@ Example:
```javascript
// Bad. Not necessary in Frontend environment.
-expect(findText()).toBe(__('Lorem ipsum dolar sit'));
+expect(findText()).toBe(__('Lorem ipsum dolor sit'));
// Good.
-expect(findText()).toBe('Lorem ipsum dolar sit');
+expect(findText()).toBe('Lorem ipsum dolor sit');
```
#### Recommendations
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index cf50e417278..65cde363e98 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -31,7 +31,7 @@ are very appreciative of the work done by translators and proofreaders!
- Huang Tao - [GitLab](https://gitlab.com/htve), [Crowdin](https://crowdin.com/profile/htve)
- Victor Wu - [GitLab](https://gitlab.com/_victorwu_), [Crowdin](https://crowdin.com/profile/victorwu)
- Xiaogang Wen - [GitLab](https://gitlab.com/xiaogang_cn), [Crowdin](https://crowdin.com/profile/xiaogang_gitlab)
- - Shuang Zhang - [GitLab](https://gitlab.com/tonygodspeed92), [Crowdin](https://crowdin.com/profile/tonygodspeed92)
+ - Qi Zhao - [GitLab](https://gitlab.com/zhaoqi01), [Crowdin](https://crowdin.com/profile/zhaoqi01)
- Chinese Traditional 繁體中文
- Weizhe Ding - [GitLab](https://gitlab.com/d.weizhe), [Crowdin](https://crowdin.com/profile/d.weizhe)
- Yi-Jyun Pan - [GitLab](https://gitlab.com/pan93412), [Crowdin](https://crowdin.com/profile/pan93412)
diff --git a/doc/development/img/build_package_v12_6.png b/doc/development/img/build_package_v12_6.png
deleted file mode 100644
index 32a3ebedba4..00000000000
--- a/doc/development/img/build_package_v12_6.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/img/trigger_build_package_v12_6.png b/doc/development/img/trigger_build_package_v12_6.png
deleted file mode 100644
index ca6797ebf65..00000000000
--- a/doc/development/img/trigger_build_package_v12_6.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/img/trigger_omnibus_v16_3.png b/doc/development/img/trigger_omnibus_v16_3.png
new file mode 100644
index 00000000000..d5348333faa
--- /dev/null
+++ b/doc/development/img/trigger_omnibus_v16_3.png
Binary files differ
diff --git a/doc/development/img/triggered_ee_pipeline_v16_3.png b/doc/development/img/triggered_ee_pipeline_v16_3.png
new file mode 100644
index 00000000000..be91f35b9ab
--- /dev/null
+++ b/doc/development/img/triggered_ee_pipeline_v16_3.png
Binary files differ
diff --git a/doc/development/integrations/index.md b/doc/development/integrations/index.md
index dd73256ce11..f84375b3b77 100644
--- a/doc/development/integrations/index.md
+++ b/doc/development/integrations/index.md
@@ -12,7 +12,7 @@ which are part of our [main Rails project](https://gitlab.com/gitlab-org/gitlab)
Also see our [direction page](https://about.gitlab.com/direction/manage/import_and_integrate/integrations/) for an overview of our strategy around integrations.
-This guide is a work in progress. You're welcome to ping `@gitlab-org/manage/integrations`
+This guide is a work in progress. You're welcome to ping `@gitlab-org/manage/import-and-integrate`
if you need clarification or spot any outdated information.
## Add a new integration
@@ -39,9 +39,9 @@ if you need clarification or spot any outdated information.
has_one :foo_bar_integration, class_name: 'Integrations::FooBar'
```
-### Define properties
+### Define fields
-Integrations can define arbitrary properties to store their configuration with the class method `Integration.prop_accessor`.
+Integrations can define arbitrary fields to store their configuration with the class method `Integration.field`.
The values are stored as an encrypted JSON hash in the `integrations.encrypted_properties` column.
For example:
@@ -49,25 +49,26 @@ For example:
```ruby
module Integrations
class FooBar < Integration
- prop_accessor :url
- prop_accessor :tags
+ field :url
+ field :tags
end
end
```
-`Integration.prop_accessor` installs accessor methods on the class. Here we would have `#url`, `#url=` and `#url_changed?`, to manage the `url` field. Fields stored in `Integration#properties` should be accessed by these accessors directly on the model, just like other ActiveRecord attributes.
+`Integration.field` installs accessor methods on the class.
+Here we would have `#url`, `#url=`, and `#url_changed?` to manage the `url` field.
+These accessors should access the fields stored in `Integration#properties` directly on the model, just like other `ActiveRecord` attributes.
-You should always access the properties through their `getters`, and not interact with the `properties` hash directly.
+You should always access the fields through their `getters` and not interact with the `properties` hash directly.
You **must not** write to the `properties` hash, you **must** use the generated setter method instead. Direct writes to this
hash are not persisted.
You should also define validations for all your properties.
+To see how these fields are exposed in the frontend form for the integration,
+see [Customize the frontend form](#customize-the-frontend-form).
-Also refer to the section [Customize the frontend form](#customize-the-frontend-form) below to see how these properties
-are exposed in the frontend form for the integration.
-
-There is an alternative approach using `Integration.data_field`, which you may see in other integrations.
-With data fields the values are stored in a separate table per integration. At the moment we don't recommend using this for new integrations.
+Other approaches include using `Integration.prop_accessor` or `Integration.data_field`, which you might see in earlier versions of integrations.
+You should not use these approaches for new integrations.
### Define trigger events
@@ -94,7 +95,7 @@ The following events are supported for integrations:
| [Pipeline event](../../user/project/integrations/webhook_events.md#pipeline-events) | | `pipeline` | A pipeline status changes.
| [Push event](../../user/project/integrations/webhook_events.md#push-events) | ✓ | `push` | A push is made to the repository.
| [Tag push event](../../user/project/integrations/webhook_events.md#tag-events) | ✓ | `tag_push` | New tags are pushed to the repository.
-| Vulnerability event **(ULTIMATE)** | | `vulnerability` | A new, unique vulnerability is recorded.
+| Vulnerability event **(ULTIMATE ALL)** | | `vulnerability` | A new, unique vulnerability is recorded.
| [Wiki page event](../../user/project/integrations/webhook_events.md#wiki-page-events) | ✓ | `wiki_page` | A wiki page is created or updated.
#### Event examples
@@ -191,8 +192,8 @@ This method should return an array of hashes for each field, where the keys can
| Key | Type | Required | Default | Description
|:---------------|:--------|:---------|:-----------------------------|:--
-| `type:` | string | true | | The type of the form field. Can be `text`, `textarea`, `password`, `checkbox`, or `select`.
-| `name:` | string | true | | The property name for the form field. This must match a `prop_accessor` [defined on the class](#define-properties).
+| `type:` | symbol | false | `:text` | The type of the form field. Can be `:text`, `:textarea`, `:password`, `:checkbox`, or `:select`.
+| `name:` | string | true | | The property name for the form field.
| `required:` | boolean | false | `false` | Specify if the form field is required or optional.
| `title:` | string | false | Capitalized value of `name:` | The label for the form field.
| `placeholder:` | string | false | | A placeholder for the form field.
@@ -200,19 +201,19 @@ This method should return an array of hashes for each field, where the keys can
| `api_only:` | boolean | false | `false` | Specify if the field should only be available through the API, and excluded from the frontend form.
| `if:` | boolean or lambda | false | `true` | Specify if the field should be available. The value can be a boolean or a lambda.
-### Additional keys for `type: 'checkbox'`
+### Additional keys for `type: :checkbox`
| Key | Type | Required | Default | Description
|:------------------|:-------|:---------|:------------------|:--
| `checkbox_label:` | string | false | Value of `title:` | A custom label that displays next to the checkbox.
-### Additional keys for `type: 'select'`
+### Additional keys for `type: :select`
| Key | Type | Required | Default | Description
|:-----------|:------|:---------|:--------|:--
| `choices:` | array | true | | A nested array of `[label, value]` tuples.
-### Additional keys for `type: 'password'`
+### Additional keys for `type: :password`
| Key | Type | Required | Default | Description
|:----------------------------|:-------|:---------|:------------------|:--
@@ -226,30 +227,20 @@ This example defines a required `url` field, and optional `username` and `passwo
```ruby
module Integrations
class FooBar < Integration
- prop_accessor :url, :username, :password
-
- def fields
- [
- {
- type: 'text',
- name: 'url',
- title: s_('FooBarIntegration|Server URL'),
- placeholder: 'https://example.com/',
- required: true
- },
- {
- type: 'text',
- name: 'username',
- title: s_('FooBarIntegration|Username'),
- },
- {
- type: 'password',
- name: 'password',
- title: s_('FoobarIntegration|Password'
- non_empty_password_title: s_('FooBarIntegration|Enter new password')
- }
- ]
- end
+ field :url,
+ type: :text,
+ title: s_('FooBarIntegration|Server URL'),
+ placeholder: 'https://example.com/',
+ required: true
+
+ field :username,
+ type: :text,
+ title: s_('FooBarIntegration|Username')
+
+  field :password,
+    type: :password,
+    title: s_('FooBarIntegration|Password'),
+    non_empty_password_title: s_('FooBarIntegration|Enter new password')
end
end
```
diff --git a/doc/development/integrations/jenkins.md b/doc/development/integrations/jenkins.md
index 65194a04a62..f52098394dc 100644
--- a/doc/development/integrations/jenkins.md
+++ b/doc/development/integrations/jenkins.md
@@ -24,7 +24,7 @@ brew services start jenkins
GitLab does not allow requests to localhost or the local network by default. When running Jenkins on your local machine, you need to enable local access.
1. Log into your GitLab instance as an administrator.
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Settings > Network**.
1. Expand **Outbound requests**, and select the following checkboxes:
@@ -55,8 +55,8 @@ To set up the Jenkins project you intend to run your build on, read
You can configure your integration between Jenkins and GitLab:
-- With the [recommended approach for Jenkins integration](../../integration/jenkins.md#configure-a-jenkins-integration-recommended).
-- [Using a webhook](../../integration/jenkins.md#configure-a-webhook).
+- With the [recommended approach for Jenkins integration](../../integration/jenkins.md#with-a-jenkins-server-url).
+- [Using a webhook](../../integration/jenkins.md#with-a-webhook).
## Test your setup
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 8fda6042fcf..8c6e3145000 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -181,7 +181,7 @@ See also [Docker Tagging: Best practices for tagging and versioning Docker image
## Command line
-A scanner is a command line tool that takes environment variables as inputs,
+A scanner is a command-line tool that takes environment variables as inputs,
and generates a file that is uploaded as a report (based on the job definition).
It also generates text output on the standard output and standard error streams, and exits with a status code.
diff --git a/doc/development/internal_analytics/index.md b/doc/development/internal_analytics/index.md
index c7cf907ca92..d24ecf5a99c 100644
--- a/doc/development/internal_analytics/index.md
+++ b/doc/development/internal_analytics/index.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/internal_event_tracking/architecture.md b/doc/development/internal_analytics/internal_event_tracking/architecture.md
index de5672a4895..0265e39745a 100644
--- a/doc/development/internal_analytics/internal_event_tracking/architecture.md
+++ b/doc/development/internal_analytics/internal_event_tracking/architecture.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/internal_event_tracking/event_definition_guide.md b/doc/development/internal_analytics/internal_event_tracking/event_definition_guide.md
index 7e4222ead2e..591c6672810 100644
--- a/doc/development/internal_analytics/internal_event_tracking/event_definition_guide.md
+++ b/doc/development/internal_analytics/internal_event_tracking/event_definition_guide.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/internal_event_tracking/index.md b/doc/development/internal_analytics/internal_event_tracking/index.md
index 73e9e2d1a4c..e35d5f6f084 100644
--- a/doc/development/internal_analytics/internal_event_tracking/index.md
+++ b/doc/development/internal_analytics/internal_event_tracking/index.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/internal_event_tracking/introduction.md b/doc/development/internal_analytics/internal_event_tracking/introduction.md
index ebb3caa198a..e776691fdf0 100644
--- a/doc/development/internal_analytics/internal_event_tracking/introduction.md
+++ b/doc/development/internal_analytics/internal_event_tracking/introduction.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/internal_event_tracking/migration.md b/doc/development/internal_analytics/internal_event_tracking/migration.md
new file mode 100644
index 00000000000..4b8a726768f
--- /dev/null
+++ b/doc/development/internal_analytics/internal_event_tracking/migration.md
@@ -0,0 +1,155 @@
+---
+stage: Analyze
+group: Analytics Instrumentation
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Migrating existing tracking to internal event tracking
+
+GitLab Internal Events Tracking exposes a unified API on top of the existing tracking options. Currently RedisHLL and Snowplow are supported.
+
+This page describes how you can switch from tracking using a single method to using Internal Events Tracking.
+
+## Migrating from tracking with Snowplow
+
+If you are already tracking events in Snowplow, you can also start collecting metrics from self-managed instances by switching to Internal Events Tracking.
+
+Note that the Snowplow event you trigger after switching to Internal Events Tracking looks slightly different from your current event.
+
+Make sure that you are okay with this change before you migrate.
+
+### Backend
+
+If you are already tracking Snowplow events using `Gitlab::Tracking.event` and you want to migrate to Internal Events Tracking, you might start with something like this:
+
+```ruby
+Gitlab::Tracking.event(name, 'ci_templates_unique', namespace: namespace,
+ project: project, context: [context], user: user, label: label)
+```
+
+The code above can be replaced by something like this:
+
+```ruby
+Gitlab::InternalEvents.track_event('ci_templates_unique', namespace: namespace, project: project, user: user)
+```
+
+In addition, you have to create definitions for the metrics that you would like to track.
+
+To generate metric definitions, you can use the generator like this:
+
+```shell
+bin/rails g gitlab:analytics:internal_events \
+  --time_frames=7d 28d \
+ --group=project_management \
+ --stage=plan \
+ --section=dev \
+ --event=ci_templates_unique \
+ --unique=user.id \
+ --mr=https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121544
+```
+
+### Frontend
+
+If you are using the `Tracking` mixin in the Vue component, you can replace it with the `InternalEvents` mixin.
+
+For example, if your current Vue component looks like this:
+
+```vue
+import Tracking from '~/tracking';
+...
+mixins: [Tracking.mixin()]
+...
+...
+this.track('some_label', options)
+```
+
+After converting it to Internal Events Tracking, it should look like this:
+
+```vue
+import { InternalEvents } from '~/tracking';
+...
+mixins: [InternalEvents.mixin()]
+...
+...
+this.trackEvent('action')
+```
+
+You can use [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123901/diffs) as an example. It migrates the `devops_adoption_app` component to use Internal Events Tracking.
+
+If you are using `data-track-action` in the component, you have to change it to `data-event-tracking` to migrate to Internal Events Tracking.
+
+For example, if a button is defined like this:
+
+```vue
+ <gl-button
+ :href="diffFile.external_url"
+ :title="externalUrlLabel"
+ :aria-label="externalUrlLabel"
+ target="_blank"
+ data-track-action="click_toggle_external_button"
+ data-track-label="diff_toggle_external_button"
+ data-track-property="diff_toggle_external"
+ icon="external-link"
+/>
+```
+
+This can be converted to Internal Events Tracking like this:
+
+```vue
+ <gl-button
+ :href="diffFile.external_url"
+ :title="externalUrlLabel"
+ :aria-label="externalUrlLabel"
+ target="_blank"
+ data-event-tracking="click_toggle_external_button"
+ icon="external-link"
+/>
+```
+
+Notice that only the action needs to be passed in the `data-event-tracking` attribute; it is passed to both Snowplow and RedisHLL.
+
+## Migrating from tracking with RedisHLL
+
+### Backend
+
+If you are currently tracking a metric in `RedisHLL` like this:
+
+```ruby
+ Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
+```
+
+To start using Internal Events Tracking, follow these steps:
+
+1. Create an event definition that describes `git_write_action` ([guide](../snowplow/event_dictionary_guide.md#create-a-new-event-definition)).
+1. Find metric definitions that list `git_write_action` in the events section (`20210216182041_action_monthly_active_users_git_write.yml` and `20210216184045_git_write_action_weekly.yml`).
+1. Change the `data_source` from `redis_hll` to `internal_events` in the metric definition files.
+1. Add an `events` section to both metric definition files.
+
+ ```yaml
+ events:
+ - name: git_write_action
+ unique: user.id
+ ```
+
+ Use `project.id` or `namespace.id` instead of `user.id` if your metric is counting something other than unique users.
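   For example, a metric that counts unique projects rather than unique users might declare (a sketch reusing the same event name):

   ```yaml
   events:
     - name: git_write_action
       unique: project.id
   ```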
+1. Call `InternalEvents.track_event` instead of `HLLRedisCounter.track_event`:
+
+ ```diff
+ - Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
+    + Gitlab::InternalEvents.track_event('git_write_action', user: current_user)
+ ```
+
+1. Optional. Add additional values to the event. You typically want to add `project` and `namespace` as it is useful information to have in the data warehouse.
+
+ ```diff
+ - Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
+    + Gitlab::InternalEvents.track_event('git_write_action', user: current_user, project: project, namespace: namespace)
+ ```
+
+1. Update your test to use the `internal event tracking` shared example.
+
+### Frontend
+
+If you are calling `trackRedisHllUserEvent` in the frontend to track an event, you can convert this to Internal Events Tracking by using a mixin, raw JavaScript, or a data tracking attribute.
+
+The [Quick start guide](quick_start.md#frontend-tracking) has an example for each method.
diff --git a/doc/development/internal_analytics/internal_event_tracking/quick_start.md b/doc/development/internal_analytics/internal_event_tracking/quick_start.md
index 84926657a3b..19c76ecc045 100644
--- a/doc/development/internal_analytics/internal_event_tracking/quick_start.md
+++ b/doc/development/internal_analytics/internal_event_tracking/quick_start.md
@@ -1,15 +1,39 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# Quick start for internal event tracking
+# Quick start for Internal Event Tracking
In an effort to provide a more efficient, scalable, and unified tracking API, GitLab is deprecating existing RedisHLL and Snowplow tracking. Instead, we're implementing a new `track_event` method.
-With this approach, we can both update RedisHLL counters and send Snowplow events simultaneously, streamlining the tracking process.
+With this approach, we can update both RedisHLL counters and send Snowplow events without worrying about the underlying implementation.
-## Create and trigger events
+To instrument your code with Internal Events Tracking, you need three things:
+
+1. Define an event.
+1. Define one or more metrics.
+1. Trigger the event.
+
+## Define events and metrics
+
+To create event and metric definitions, you can use the `internal_events` generator.
+
+This example creates an event definition for an event called `project_created` and two metric definitions, which are aggregated over 7-day and 28-day time frames.
+
+```shell
+bin/rails g gitlab:analytics:internal_events \
+--time_frames=7d 28d \
+--group=project_management \
+--stage=plan --section=dev \
+--event=project_created \
+--unique=user.id \
+--mr=https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121544
+```
+
+## Trigger events
+
+Triggering an event, and thereby updating a metric, differs slightly between the backend and frontend. Refer to the relevant section below.
### Backend tracking
@@ -18,9 +42,9 @@ To trigger an event, call the `Gitlab::InternalEvents.track_event` method with t
```ruby
Gitlab::InternalEvents.track_event(
"i_code_review_user_apply_suggestion",
- user_id: user_id,
- namespace_id: namespace_id,
- project_id: project_id
+ user: user,
+ namespace: namespace,
+ project: project
)
```
@@ -46,7 +70,7 @@ To implement Vue component tracking:
```javascript
export default {
mixins: [trackingMixin],
-
+
data() {
return {
expanded: false,
@@ -97,7 +121,7 @@ This attribute ensures that if we want to track GitLab internal events for a but
</gl-button>
```
-For Haml
+#### Haml
```haml
= render Pajamas::ButtonComponent.new(button_options: { class: 'js-settings-toggle', data: { event_tracking: 'action' }}) do
@@ -111,3 +135,7 @@ Sometimes we want to send internal events when the component is rendered or load
= render Pajamas::ButtonComponent.new(button_options: { data: { event_tracking_load: 'true', event_tracking: 'i_devops' } }) do
= _("New project")
```
+
+### Limitations
+
+The only values we allow for `unique` are `user.id`, `project.id`, and `namespace.id`, as they are logged as part of the standard context. We currently don't have anywhere to put a value like `merge_request.id`. That will change with self-describing events.
diff --git a/doc/development/internal_analytics/service_ping/implement.md b/doc/development/internal_analytics/service_ping/implement.md
index 9dbfa02854d..c6da26f86c2 100644
--- a/doc/development/internal_analytics/service_ping/implement.md
+++ b/doc/development/internal_analytics/service_ping/implement.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/index.md b/doc/development/internal_analytics/service_ping/index.md
index 22e66a247c9..f532bb1ac31 100644
--- a/doc/development/internal_analytics/service_ping/index.md
+++ b/doc/development/internal_analytics/service_ping/index.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/metrics_dictionary.md b/doc/development/internal_analytics/service_ping/metrics_dictionary.md
index 8103db5113f..e677118fff6 100644
--- a/doc/development/internal_analytics/service_ping/metrics_dictionary.md
+++ b/doc/development/internal_analytics/service_ping/metrics_dictionary.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
@@ -46,9 +46,9 @@ Each metric is defined in a separate YAML file consisting of a number of fields:
| `performance_indicator_type` | no | `array`; may be set to one of [`gmau`, `smau`, `paid_gmau`, `umau` or `customer_health_score`](https://about.gitlab.com/handbook/business-technology/data-team/data-catalog/xmau-analysis/). |
| `tier` | yes | `array`; may contain one or a combination of `free`, `premium` or `ultimate`. The [tier](https://about.gitlab.com/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/tiers/#definitions) where the tracked feature is available. This should be verbose and contain all tiers where a metric is available. |
| `milestone` | yes | The milestone when the metric is introduced and when it's available to self-managed instances with the official GitLab release. |
-| `milestone_removed` | no | The milestone when the metric is removed. |
+| `milestone_removed` | no | The milestone when the metric is removed. Required for removed metrics. |
| `introduced_by_url` | no | The URL to the merge request that introduced the metric to be available for self-managed instances. |
-| `removed_by_url` | no | The URL to the merge request that removed the metric. |
+| `removed_by_url` | no | The URL to the merge request that removed the metric. Required for removed metrics. |
| `repair_issue_url` | no | The URL of the issue that was created to repair a metric with a `broken` status. |
| `options` | no | `object`: options information needed to calculate the metric value. |
| `skip_validation` | no | This should **not** be set. [Used for imported metrics until we review, update and make them valid](https://gitlab.com/groups/gitlab-org/-/epics/5425). |
diff --git a/doc/development/internal_analytics/service_ping/metrics_instrumentation.md b/doc/development/internal_analytics/service_ping/metrics_instrumentation.md
index dc225a40d1b..ada7cc566a1 100644
--- a/doc/development/internal_analytics/service_ping/metrics_instrumentation.md
+++ b/doc/development/internal_analytics/service_ping/metrics_instrumentation.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/metrics_lifecycle.md b/doc/development/internal_analytics/service_ping/metrics_lifecycle.md
index bb3d6797011..4980a8cf63d 100644
--- a/doc/development/internal_analytics/service_ping/metrics_lifecycle.md
+++ b/doc/development/internal_analytics/service_ping/metrics_lifecycle.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/performance_indicator_metrics.md b/doc/development/internal_analytics/service_ping/performance_indicator_metrics.md
index d7811c52bb1..63177f093e2 100644
--- a/doc/development/internal_analytics/service_ping/performance_indicator_metrics.md
+++ b/doc/development/internal_analytics/service_ping/performance_indicator_metrics.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/review_guidelines.md b/doc/development/internal_analytics/service_ping/review_guidelines.md
index 8a46de7086e..c816c905097 100644
--- a/doc/development/internal_analytics/service_ping/review_guidelines.md
+++ b/doc/development/internal_analytics/service_ping/review_guidelines.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/service_ping/troubleshooting.md b/doc/development/internal_analytics/service_ping/troubleshooting.md
index 8f5e94506cd..e685635c5f7 100644
--- a/doc/development/internal_analytics/service_ping/troubleshooting.md
+++ b/doc/development/internal_analytics/service_ping/troubleshooting.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
@@ -58,7 +58,7 @@ checking the configuration file of your GitLab instance:
- Using the Admin Area:
- 1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+ 1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Settings > Metrics and profiling**.
1. Expand **Usage statistics**.
@@ -116,7 +116,7 @@ To work around this bug, you have two options:
sudo gitlab-ctl reconfigure
```
- 1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+ 1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Settings > Metrics and profiling**.
1. Expand **Usage statistics**.
diff --git a/doc/development/internal_analytics/service_ping/usage_data.md b/doc/development/internal_analytics/service_ping/usage_data.md
index b6ec3e00670..8742bc03fbb 100644
--- a/doc/development/internal_analytics/service_ping/usage_data.md
+++ b/doc/development/internal_analytics/service_ping/usage_data.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/event_dictionary_guide.md b/doc/development/internal_analytics/snowplow/event_dictionary_guide.md
index 6e8947e0210..c0d5e3efdfa 100644
--- a/doc/development/internal_analytics/snowplow/event_dictionary_guide.md
+++ b/doc/development/internal_analytics/snowplow/event_dictionary_guide.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/implementation.md b/doc/development/internal_analytics/snowplow/implementation.md
index 5ad97cf528c..5d328f22ca5 100644
--- a/doc/development/internal_analytics/snowplow/implementation.md
+++ b/doc/development/internal_analytics/snowplow/implementation.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/index.md b/doc/development/internal_analytics/snowplow/index.md
index 8265bceaf06..17d3f3f2cfc 100644
--- a/doc/development/internal_analytics/snowplow/index.md
+++ b/doc/development/internal_analytics/snowplow/index.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
@@ -32,7 +32,7 @@ instances do not have a collector configured and do not collect data via Snowplo
You can configure your self-managed GitLab instance to use a custom Snowplow collector.
-1. On the left sidebar, expand the top-most chevron (**{chevron-down}**).
+1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Settings > General**.
1. Expand **Snowplow**.
diff --git a/doc/development/internal_analytics/snowplow/infrastructure.md b/doc/development/internal_analytics/snowplow/infrastructure.md
index 462dee2c39b..b856ccec78e 100644
--- a/doc/development/internal_analytics/snowplow/infrastructure.md
+++ b/doc/development/internal_analytics/snowplow/infrastructure.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/review_guidelines.md b/doc/development/internal_analytics/snowplow/review_guidelines.md
index 03d1812cbfc..a0bdad8fafb 100644
--- a/doc/development/internal_analytics/snowplow/review_guidelines.md
+++ b/doc/development/internal_analytics/snowplow/review_guidelines.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/schemas.md b/doc/development/internal_analytics/snowplow/schemas.md
index 21142f68d39..2cc09500c36 100644
--- a/doc/development/internal_analytics/snowplow/schemas.md
+++ b/doc/development/internal_analytics/snowplow/schemas.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_analytics/snowplow/troubleshooting.md b/doc/development/internal_analytics/snowplow/troubleshooting.md
index b531c6dcd56..5eabba04792 100644
--- a/doc/development/internal_analytics/snowplow/troubleshooting.md
+++ b/doc/development/internal_analytics/snowplow/troubleshooting.md
@@ -1,5 +1,5 @@
---
-stage: Analytics
+stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index 538b66124ba..81fd78d1d27 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -178,6 +178,40 @@ Example response:
- GitLab Shell
+## Authorized Certs
+
+This endpoint is called by GitLab Shell to get the namespace that has a particular CA SSH certificate
+configured. It also accepts `user_identifier` to return the GitLab user for the specified identifier.
+
+| Attribute | Type | Required | Description |
+|:----------------------|:-------|:---------|:------------|
+| `key` | string | yes | The fingerprint of the SSH certificate. |
+| `user_identifier` | string | yes | The identifier of the user to whom the SSH certificate has been issued (username or primary email). |
+
+```plaintext
+GET /internal/authorized_certs
+```
+
+Example request:
+
+```shell
+curl --request GET --header "Gitlab-Shell-Api-Request: <JWT token>" "http://localhost:3001/api/v4/internal/authorized_certs?key=<key>&user_identifier=<user_identifier>"
+```
+
+Example response:
+
+```json
+{
+ "success": true,
+ "namespace": "gitlab-org",
+ "username": "root"
+}
+```
+
+### Known consumers
+
+- GitLab Shell
+
## Get user for user ID or key
This endpoint is used when a user performs `ssh git@gitlab.com`. It
@@ -492,21 +526,24 @@ curl --request GET --header "Gitlab-Kas-Api-Request: <JWT token>" \
Called from GitLab agent server (`kas`) to increase the usage
metric counters.
-| Attribute | Type | Required | Description |
-|:--------------------------------------------------------------------------|:--------------|:---------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `counters` | hash | no | Hash of counters |
-| `counters["k8s_api_proxy_request"]` | integer | no | The number to increase the `k8s_api_proxy_request` counter by |
-| `counters["gitops_sync"]` | integer | no | The number to increase the `gitops_sync` counter by |
-| `counters["flux_git_push_notifications_total"]` | integer | no | The number to increase the `flux_git_push_notifications_total` counter by |
-| `counters["k8s_api_proxy_requests_via_ci_access"]` | integer | no | The number to increase the `k8s_api_proxy_requests_via_ci_access` counter by |
-| `counters["k8s_api_proxy_requests_via_user_access"]` | integer | no | The number to increase the `k8s_api_proxy_requests_via_user_access` counter by |
-| `unique_counters` | hash | no | Array of unique numbers |
-| `unique_counters["agent_users_using_ci_tunnel"]` | integer array | no | The set of unique user ids that have interacted a CI Tunnel to track the `agent_users_using_ci_tunnel` metric event |
-| `unique_counters["k8s_api_proxy_requests_unique_users_via_ci_access"]` | integer array | no | The set of unique user ids that have interacted a CI Tunnel via `ci_access` to track the `k8s_api_proxy_requests_unique_users_via_ci_access` metric event |
-| `unique_counters["k8s_api_proxy_requests_unique_agents_via_ci_access"]` | integer array | no | The set of unique user ids that have interacted a CI Tunnel via `ci_access` to track the `k8s_api_proxy_requests_unique_agents_via_ci_access` metric event |
-| `unique_counters["k8s_api_proxy_requests_unique_users_via_user_access"]` | integer array | no | The set of unique user ids that have interacted a CI Tunnel via `user_access` to track the `k8s_api_proxy_requests_unique_users_via_user_access` metric event |
-| `unique_counters["k8s_api_proxy_requests_unique_agents_via_user_access"]` | integer array | no | The set of unique user ids that have interacted a CI Tunnel via `user_access` to track the `k8s_api_proxy_requests_unique_agents_via_user_access` metric event |
-| `unique_counters["flux_git_push_notified_unique_projects"]` | integer array | no | The set of unique projects ids that have been notified to reconcile their Flux workloads to track the `flux_git_push_notified_unique_projects` metric event |
+| Attribute | Type | Required | Description |
+|:--------------------------------------------------------------------------|:--------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `counters` | hash | no | Hash of counters |
+| `counters["k8s_api_proxy_request"]` | integer | no | The number to increase the `k8s_api_proxy_request` counter by |
+| `counters["gitops_sync"]` | integer | no | The number to increase the `gitops_sync` counter by |
+| `counters["flux_git_push_notifications_total"]` | integer | no | The number to increase the `flux_git_push_notifications_total` counter by |
+| `counters["k8s_api_proxy_requests_via_ci_access"]` | integer | no | The number to increase the `k8s_api_proxy_requests_via_ci_access` counter by |
+| `counters["k8s_api_proxy_requests_via_user_access"]` | integer | no | The number to increase the `k8s_api_proxy_requests_via_user_access` counter by |
+| `counters["k8s_api_proxy_requests_via_pat_access"]` | integer | no | The number to increase the `k8s_api_proxy_requests_via_pat_access` counter by |
+| `unique_counters` | hash | no | Hash of arrays of unique values |
+| `unique_counters["agent_users_using_ci_tunnel"]` | integer array | no | The set of unique user ids that have interacted with a CI Tunnel to track the `agent_users_using_ci_tunnel` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_users_via_ci_access"]` | integer array | no | The set of unique user ids that have interacted with a CI Tunnel via `ci_access` to track the `k8s_api_proxy_requests_unique_users_via_ci_access` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_agents_via_ci_access"]` | integer array | no | The set of unique agent ids that have interacted with a CI Tunnel via `ci_access` to track the `k8s_api_proxy_requests_unique_agents_via_ci_access` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_users_via_user_access"]` | integer array | no | The set of unique user ids that have interacted with a CI Tunnel via `user_access` to track the `k8s_api_proxy_requests_unique_users_via_user_access` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_agents_via_user_access"]` | integer array | no | The set of unique agent ids that have interacted with a CI Tunnel via `user_access` to track the `k8s_api_proxy_requests_unique_agents_via_user_access` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_users_via_pat_access"]` | integer array | no | The set of unique user ids that have used the KAS Kubernetes API proxy via PAT to track the `k8s_api_proxy_requests_unique_users_via_pat_access` metric event |
+| `unique_counters["k8s_api_proxy_requests_unique_agents_via_pat_access"]` | integer array | no | The set of unique agent ids that have used the KAS Kubernetes API proxy via PAT to track the `k8s_api_proxy_requests_unique_agents_via_pat_access` metric event |
+| `unique_counters["flux_git_push_notified_unique_projects"]` | integer array | no | The set of unique project ids that have been notified to reconcile their Flux workloads to track the `flux_git_push_notified_unique_projects` metric event |
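A hypothetical request payload combining both kinds of counters (attribute names are from the table above; the values are illustrative only):

```json
{
  "counters": {
    "gitops_sync": 1,
    "k8s_api_proxy_request": 5
  },
  "unique_counters": {
    "agent_users_using_ci_tunnel": [42, 43]
  }
}
```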
```plaintext
POST /internal/kubernetes/usage_metrics
@@ -819,7 +856,7 @@ Example response:
## Subscription add-on purchases (excluding storage and compute packs)
-The subscription add-on purchase endpoint is used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`) to apply subscription add-on purchases like code suggestions for personal namespaces, or top-level groups within GitLab.com. It is not used to apply storage and compute pack purchases.
+The subscription add-on purchase endpoint is used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`) to apply subscription add-on purchases, like Code Suggestions, for personal namespaces or top-level groups within GitLab.com. It is not used to apply storage and compute pack purchases.
### Create a subscription add-on purchase
@@ -831,9 +868,9 @@ POST /namespaces/:id/subscription_add_on_purchase/:add_on_name
| Attribute | Type | Required | Description |
|:------------|:--------|:---------|:------------|
-| `quantity` | integer | yes | Amount of units in the subscription add-on purchase (Example: Number of seats for a code suggestions add-on) |
+| `quantity` | integer | yes | Number of units in the subscription add-on purchase (Example: Number of seats for a Code Suggestions add-on) |
| `expires_on` | date | yes | Expiration date of the subscription add-on purchase |
-| `purchase_xid` | string | yes | Identifier for the subscription add-on purchase (Example: Subscription name for a code suggestions add-on) |
+| `purchase_xid` | string | yes | Identifier for the subscription add-on purchase (Example: Subscription name for a Code Suggestions add-on) |
Example request:
@@ -864,9 +901,9 @@ PUT /namespaces/:id/subscription_add_on_purchase/:add_on_name
| Attribute | Type | Required | Description |
|:------------|:--------|:---------|:------------|
-| `quantity` | integer | no | Amount of units in the subscription add-on purchase (Example: Number of seats for a code suggestions add-on) |
+| `quantity` | integer | no | Number of units in the subscription add-on purchase (Example: Number of seats for a Code Suggestions add-on) |
| `expires_on` | date | yes | Expiration date of the subscription add-on purchase |
-| `purchase_xid` | string | no | Identifier for the subscription add-on purchase (Example: Subscription name for a code suggestions add-on) |
+| `purchase_xid` | string | no | Identifier for the subscription add-on purchase (Example: Subscription name for a Code Suggestions add-on) |
Example request:
@@ -1365,7 +1402,7 @@ Example request:
```shell
curl --verbose --request PATCH "https://gitlab.example.com/api/scim/v2/groups/test_group/Users/f0b1d561c-21ff-4092-beab-8154b17f82f2" \
- --data '{ "Operations": [{"op":"Update","path":"name.formatted","value":"New Name"}] }' \
+ --data '{ "Operations": [{"op":"replace","path":"id","value":"1234abcd"}] }' \
--header "Authorization: Bearer <your_scim_token>" --header "Content-Type: application/scim+json"
```
diff --git a/doc/development/internal_users.md b/doc/development/internal_users.md
index 1c12e541149..cf45cf941d0 100644
--- a/doc/development/internal_users.md
+++ b/doc/development/internal_users.md
@@ -43,7 +43,7 @@ Other examples of internal users:
- [GitLab Admin Bot](https://gitlab.com/gitlab-org/gitlab/-/blob/278bc9018dd1515a10cbf15b6c6cd55cb5431407/app/models/user.rb#L950-960)
- [Alert Bot](../operations/incident_management/alerts.md#trigger-actions-from-alerts)
- [Ghost User](../user/profile/account/delete_account.md#associated-records)
-- [Support Bot](../user/project/service_desk/index.md#support-bot-user)
+- [Support Bot](../user/project/service_desk/configure.md#support-bot-user)
- Visual Review Bot
- Resource access tokens, including:
- [Project access tokens](../user/project/settings/project_access_tokens.md).
diff --git a/doc/development/merge_request_concepts/performance.md b/doc/development/merge_request_concepts/performance.md
index 665b84b2c40..7e26bf982b2 100644
--- a/doc/development/merge_request_concepts/performance.md
+++ b/doc/development/merge_request_concepts/performance.md
@@ -195,20 +195,18 @@ costly, time-consuming query to the replicas.
Read about [complex queries on the relation object](../database/iterating_tables_in_batches.md#complex-queries-on-the-relation-object)
for considerations on how to use CTEs. We have found in some situations that CTEs can become
-problematic in use (similar to the n+1 problem above). In particular, hierarchical recursive
+problematic in use (similar to the N+1 problem above). In particular, hierarchical recursive
CTE queries such as the CTE in [AuthorizedProjectsWorker](https://gitlab.com/gitlab-org/gitlab/-/issues/325688)
are very difficult to optimize and don't scale. We should avoid them when implementing new features
that require any kind of hierarchical structure.
CTEs have been effectively used as an optimization fence in many simpler cases,
such as this [example](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/43242#note_61416277).
-Beginning in PostgreSQL 12, CTEs are inlined then [optimized by default](https://paquier.xyz/postgresql-2/postgres-12-with-materialize/).
-Keeping the old behavior requires marking CTEs with the keyword `MATERIALIZED`.
+With the currently supported PostgreSQL versions, the optimization fence behavior must be enabled
+with the `MATERIALIZED` keyword. By default, CTEs are [inlined and then optimized](https://paquier.xyz/postgresql-2/postgres-12-with-materialize/).
When building CTE statements, use the `Gitlab::SQL::CTE` class [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/56976) in GitLab 13.11.
-By default, this `Gitlab::SQL::CTE` class forces materialization through adding the `MATERIALIZED` keyword for PostgreSQL 12 and higher.
-`Gitlab::SQL::CTE` automatically omits materialization when PostgreSQL 11 is running
-(this behavior is implemented using a custom Arel node `Gitlab::Database::AsWithMaterialized` under the surface).
+By default, this `Gitlab::SQL::CTE` class forces materialization through adding the `MATERIALIZED` keyword.
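As a sketch, the difference is whether the generated SQL carries the keyword (the query below is illustrative, not actual GitLab output):

```sql
-- Materialized: PostgreSQL evaluates the CTE once, keeping it as an
-- optimization fence instead of inlining it into the outer query.
WITH project_ids AS MATERIALIZED (
  SELECT id FROM projects WHERE namespace_id = 9970
)
SELECT * FROM issues WHERE project_id IN (SELECT id FROM project_ids);
```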
WARNING:
Upgrading to GitLab 14.0 requires PostgreSQL 12 or later.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 65dc4de30d1..9be322812e3 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -1290,6 +1290,48 @@ class BuildMetadata
end
```
+Additionally, you can expose the keys in a `JSONB` column as
+ActiveRecord attributes. Do this when you need complex validations
+or ActiveRecord change tracking. This feature is provided by the
+[`jsonb_accessor`](https://github.com/madeintandem/jsonb_accessor) gem,
+and does not replace `JsonSchemaValidator`.
+
+```ruby
+module Organizations
+ class OrganizationSetting < ApplicationRecord
+ belongs_to :organization
+
+ validates :settings, json_schema: { filename: "organization_settings" }
+
+ jsonb_accessor :settings,
+ restricted_visibility_levels: [:integer, { array: true }]
+
+ validates_each :restricted_visibility_levels do |record, attr, value|
+ value&.each do |level|
+ unless Gitlab::VisibilityLevel.options.value?(level)
+ record.errors.add(attr, format(_("'%{level}' is not a valid visibility level"), level: level))
+ end
+ end
+ end
+ end
+end
+```
+
+You can now use `restricted_visibility_levels` as an ActiveRecord attribute:
+
+```ruby
+> s = Organizations::OrganizationSetting.find(1)
+=> #<Organizations::OrganizationSetting:0x0000000148d67628>
+> s.settings
+=> {"restricted_visibility_levels"=>[20]}
+> s.restricted_visibility_levels
+=> [20]
+> s.restricted_visibility_levels = [0]
+=> [0]
+> s.changes
+=> {"settings"=>[{"restricted_visibility_levels"=>[20]}, {"restricted_visibility_levels"=>[0]}], "restricted_visibility_levels"=>[[20], [0]]}
+```
+
## Encrypted attributes
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227779) in GitLab 14.0.
diff --git a/doc/development/packages/debian_repository.md b/doc/development/packages/debian_repository.md
index b3e9bedfdd4..2d8ba98f9ad 100644
--- a/doc/development/packages/debian_repository.md
+++ b/doc/development/packages/debian_repository.md
@@ -17,11 +17,13 @@ This guide explains:
There are two types of [Debian packages](https://www.debian.org/doc/manuals/debian-faq/pkg-basics.en.html): binary and source.
- **Binary** - These are usually `.deb` files and contain executables, config files, and other data. A binary package must match your OS or architecture since it is already compiled. These are usually installed using `dpkg`. Dependencies must already exist on the system when installing a binary package.
-- **Source** - These are usual made up of `.dsc` files and `.gz` files. A source package is compiled on your system. These are fetched and installed with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html), which then uses `dpkg` after the package is compiled. When you use `apt`, it will fetch and install the necessary dependencies.
+- **Source** - These are usually made up of `.dsc` files and compressed `.tar` files. A source package may be compiled on your system.
-The `.deb` file follows the naming convention `<PackageName>_<VersionNumber>-<DebianRevisionNumber>_<DebianArchitecture>.deb`
+Packages are fetched with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html) and installed with `dpkg`. When you use `apt`, it also fetches and installs any dependencies.
-It includes a `control file` that contains metadata about the package. You can view the control file by using `dpkg --info <deb_file>`
+The `.deb` file follows the naming convention `<PackageName>_<VersionNumber>-<DebianRevisionNumber>_<DebianArchitecture>.deb`.
+
+It includes a `control file` that contains metadata about the package. You can view the control file by using `dpkg --info <deb_file>`.
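The naming convention above can be unpacked with plain shell parameter expansion (the filename is hypothetical, for illustration only):

```shell
deb="sample_1.2.3-1_amd64.deb"

# <PackageName> is everything before the first underscore.
name="${deb%%_*}"
rest="${deb#*_}"

# <VersionNumber>-<DebianRevisionNumber> is the middle segment.
version="${rest%%_*}"

# <DebianArchitecture> is the last segment, without the extension.
arch="${rest#*_}"
arch="${arch%.deb}"

echo "${name} ${version} ${arch}"
```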
The [`.changes` file](https://www.debian.org/doc/debian-policy/ch-controlfields.html#debian-changes-files-changes) is used to tell the Debian repository how to process updates to packages. It contains a variety of metadata for the package, including architecture, distribution, and version. In addition to the metadata, they contain three lists of checksums: `sha1`, `sha256`, and `md5` in the `Files` section. Refer to [sample_1.2.3~alpha2_amd64.changes](https://gitlab.com/gitlab-org/gitlab/-/blob/dd1e70d3676891025534dc4a1e89ca9383178fe7/spec/fixtures/packages/debian/sample_1.2.3~alpha2_amd64.changes) for an example of how these files are structured.
@@ -40,8 +42,8 @@ When it comes to Debian, packages don't exist on their own. They belong to a _di
## What does a Debian Repository look like?
- A [Debian repository](https://wiki.debian.org/DebianRepository) is made up of many releases.
-- Each release is given a **codename**. For the public Debian repository, these are things like "bullseye" and "jesse".
- - There is also the concept of **suites** which are essentially aliases of codenames synonymous with release channels like "stable" and "edge".
+- Each release is given a stable **codename**. For the public Debian repository, these are names like "bullseye" and "jessie".
+ - There is also the concept of **suites**, which are essentially aliases of codenames that act as release channels, such as "stable" and "testing". Over time they change to point to different _codenames_.
- Each release has many **components**. In the public repository, these are "main", "contrib", and "non-free".
- Each release has many **architectures** such as "amd64", "arm64", or "i386".
- Each release has a signed **Release** file (see below about [GPG signing](#what-are-gpg-keys-and-what-are-signed-releases))
diff --git a/doc/development/performance.md b/doc/development/performance.md
index fd7e9a85fba..428d5637aa9 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -250,7 +250,7 @@ the timeout.
Once profiling stops, the profile is written out to disk at
`$STACKPROF_FILE_PREFIX/stackprof.$PID.$RAND.profile`. It can then be inspected
-further through the `stackprof` command line tool, as described in the
+further through the `stackprof` command-line tool, as described in the
[Reading a Stackprof profile section](#reading-a-stackprof-profile).
Currently supported profiling targets are:
diff --git a/doc/development/permissions.md b/doc/development/permissions.md
index aa58447b818..35fd0f1c440 100644
--- a/doc/development/permissions.md
+++ b/doc/development/permissions.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/permissions/authorizations.md b/doc/development/permissions/authorizations.md
index 8d8944562a8..7580b7c473b 100644
--- a/doc/development/permissions/authorizations.md
+++ b/doc/development/permissions/authorizations.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/permissions/custom_roles.md b/doc/development/permissions/custom_roles.md
index 337c8f6d96b..d317d943cd3 100644
--- a/doc/development/permissions/custom_roles.md
+++ b/doc/development/permissions/custom_roles.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/permissions/predefined_roles.md b/doc/development/permissions/predefined_roles.md
index 9afc5966e93..50e8fbfd5b3 100644
--- a/doc/development/permissions/predefined_roles.md
+++ b/doc/development/permissions/predefined_roles.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
@@ -96,6 +96,20 @@ NOTE:
In [GitLab 14.9](https://gitlab.com/gitlab-org/gitlab/-/issues/351211) and later, projects in personal namespaces have a maximum role of Owner.
Because of a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/219299) in GitLab 14.8 and earlier, projects in personal namespaces have a maximum role of Maintainer.
+#### Guest role
+
+A user with the Guest role in GitLab can view project plans, blockers, and other
+progress indicators. While unable to modify data they have not created, Guests
+can contribute to a project by creating and linking project work items. Guests
+can also view high-level project information such as:
+
+- Analytics.
+- Incident information.
+- Issues and epics.
+- Licenses.
+
+For more information, see [project member permissions](../../user/permissions.md#project-members-permissions).
+
### Confidential issues
[Confidential issues](../../user/project/issues/confidential_issues.md) can be accessed
diff --git a/doc/development/pipelines/index.md b/doc/development/pipelines/index.md
index 316ae3c83cd..a5b654e96e2 100644
--- a/doc/development/pipelines/index.md
+++ b/doc/development/pipelines/index.md
@@ -278,8 +278,8 @@ See `.review:rules:start-review-app-pipeline` in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml) for
the specific list of rules.
-If you want to force a Review App to be deployed regardless of your changes, you can add the
-`pipeline:run-review-app` label to the merge request.
+If you want to deploy a Review App in a merge request, you can either trigger the `start-review-app-pipeline` manual job in the CI/CD pipeline, or add the
+`pipeline:run-review-app` label to the merge request and run a new CI/CD pipeline.
Consult the [Review Apps](../testing_guide/review_apps.md) dedicated page for more information.
diff --git a/doc/development/pipelines/internals.md b/doc/development/pipelines/internals.md
index 1a4e4e738a8..0e2c1c991fd 100644
--- a/doc/development/pipelines/internals.md
+++ b/doc/development/pipelines/internals.md
@@ -152,7 +152,7 @@ that are scoped to a single [configuration keyword](../../ci/yaml/index.md#job-k
| `.setup-test-env-cache` | Allows a job to use a default `cache` definition suitable for setting up test environment for subsequent Ruby/Rails tasks. |
| `.ruby-cache` | Allows a job to use a default `cache` definition suitable for Ruby tasks. |
| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
-| `.coverage-cache` | Allows a job to use a default `cache` definition suitable for coverage tasks. |
+| `.ruby-gems-coverage-cache` | Allows a job to use a default `cache` definition suitable for coverage tasks. |
| `.qa-cache` | Allows a job to use a default `cache` definition suitable for QA tasks. |
| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
@@ -164,7 +164,7 @@ that are scoped to a single [configuration keyword](../../ci/yaml/index.md#job-k
| `.use-pg15-ee` | Same as `.use-pg15` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-kaniko` | Allows a job to use the `kaniko` tool to build Docker images. |
| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` CI/CD variable. |
-| `.use-docker-in-docker` | Allows a job to use Docker in Docker. |
+| `.use-docker-in-docker` | Allows a job to use Docker in Docker. For more details, see the [handbook about CI/CD configuration](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration). |
## `rules`, `if:` conditions and `changes:` patterns
@@ -198,7 +198,7 @@ and included in `rules` definitions via [YAML anchors](../../ci/yaml/yaml_optimi
| `if-merge-request` | Matches if the pipeline is for a merge request. | |
| `if-merge-request-title-as-if-foss` | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:run-as-if-foss" | |
| `if-merge-request-title-update-caches` | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:update-cache". | |
-| `if-merge-request-title-run-all-rspec` | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:run-all-rspec". | |
+| `if-merge-request-labels-run-all-rspec` | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:run-all-rspec". | |
| `if-security-merge-request` | Matches if the pipeline is for a security merge request. | |
| `if-security-schedule` | Matches if the pipeline is for a security scheduled pipeline. | |
| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
@@ -396,7 +396,6 @@ For this scenario, you have to:
- `GITALY_SERVER_VERSION`
- `GITLAB_ELASTICSEARCH_INDEXER_VERSION`
- `GITLAB_KAS_VERSION`
- - `GITLAB_METRICS_EXPORTER_VERSION`
- `GITLAB_PAGES_VERSION`
- `GITLAB_SHELL_VERSION`
- `scripts/trigger-build.rb`
@@ -417,3 +416,25 @@ For this scenario, you have to:
- `spec/simplecov_env.rb`
Additionally, `scripts/utils.sh` is always downloaded from the API when this pattern is used (this file contains the code for `.fast-no-clone-job`).
+
+#### Runner tags
+
+On GitLab.com, both unprivileged and privileged runners are
+available. For projects in the `gitlab-org` group and forks of those
+projects, only one of the following tags should be added to a job:
+
+- `gitlab-org`: Jobs randomly use privileged and unprivileged runners.
+- `gitlab-org-docker`: Jobs must use a privileged runner. If you need [Docker-in-Docker support](../../ci/docker/using_docker_build.md#use-docker-in-docker),
+use `gitlab-org-docker` instead of `gitlab-org`.
+
+The `gitlab-org-docker` tag is added by the `.use-docker-in-docker` job
+definition above.
+
+To ensure compatibility with forks, avoid using both `gitlab-org` and
+`gitlab-org-docker` simultaneously. No instance runners
+have both `gitlab-org` and `gitlab-org-docker` tags. For forks of
+`gitlab-org` projects, jobs get stuck if both tags are supplied because
+no matching runners are available.
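For example, a job that needs Docker-in-Docker might be tagged like this (the job name and script are illustrative, not from the codebase):

```yaml
# Illustrative job: `.use-docker-in-docker` supplies the
# `gitlab-org-docker` tag, so don't also add `gitlab-org`.
build-image:
  extends:
    - .use-docker-in-docker
  script:
    - docker build --tag my-image .
```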
+
+See [the GitLab Repositories handbook page](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration)
+for more information.
diff --git a/doc/development/pipelines/performance.md b/doc/development/pipelines/performance.md
index 2dbed640fbb..7db498adc02 100644
--- a/doc/development/pipelines/performance.md
+++ b/doc/development/pipelines/performance.md
@@ -40,7 +40,7 @@ This works well for the following reasons:
- `.ruby-cache`
- `.static-analysis-cache`
- `.rubocop-cache`
- - `.coverage-cache`
+ - `.ruby-gems-coverage-cache`
- `.ruby-node-cache`
- `.qa-cache`
- `.yarn-cache`
diff --git a/doc/development/policies.md b/doc/development/policies.md
index 38f1fc54bf4..faf5b32985f 100644
--- a/doc/development/policies.md
+++ b/doc/development/policies.md
@@ -1,5 +1,5 @@
---
-stage: Manage
+stage: Govern
group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/development/rails_update.md b/doc/development/rails_update.md
index 32295cc0e43..772206c2d73 100644
--- a/doc/development/rails_update.md
+++ b/doc/development/rails_update.md
@@ -4,11 +4,11 @@ group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# Rails update guidelines
+# Rails upgrade guidelines
We strive to run GitLab using the latest Rails releases to benefit from performance, security updates, and new features.
-## Rails update approach
+## Rails upgrade approach
1. [Prepare an MR for GitLab](#prepare-an-mr-for-gitlab).
1. [Prepare an MR for Gitaly](#prepare-an-mr-for-gitaly).
@@ -40,7 +40,7 @@ We strive to run GitLab using the latest Rails releases to benefit from performa
### Create patch releases and backports for security patches
-If the Rails update was over a patch release and it contains important security fixes,
+If the Rails upgrade was over a patch release and it contains important security fixes,
make sure to release it in a
GitLab patch release to self-managed customers. Consult with our [release managers](https://about.gitlab.com/community/release-managers/)
for how to proceed.
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index b15c4eca5ae..d879668649d 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -30,7 +30,9 @@ See also [Mass inserting Rails models](mass_insert.md).
**LARGE_PROJECTS**: Create large projects (through import) from a predefined set of URLs.
-### Seeding issues for all or a given project
+### Seeding data
+
+#### Seeding issues for all projects or a single project
You can seed issues for all or a given project with the `gitlab:seed:issues`
task:
@@ -197,6 +199,14 @@ bundle exec rake "gitlab:seed:ci_variables_instance"
bundle exec rake "gitlab:seed:ci_variables_instance[25, CI_VAR_]"
```
+#### Seed a project for merge train development
+
+Seeds a project with merge trains configured and 20 merge requests (each with 3 commits). To seed the project, run:
+
+```shell
+rake gitlab:seed:merge_trains:project
+```
+
### Automation
If you're very sure that you want to **wipe the current database** and refill
@@ -520,6 +530,11 @@ The following command combines the intent of [Update GraphQL documentation and s
bundle exec rake gitlab:graphql:update_all
```
+## Update audit event types documentation
+
+For information on updating audit event types documentation, see
+[Generate documentation](audit_event_guide/index.md#generate-documentation).
+
## Update OpenAPI client for Error Tracking feature
NOTE:
diff --git a/doc/development/redis.md b/doc/development/redis.md
index ebc7c0271a1..19a6b8e75d4 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -56,7 +56,7 @@ the entry, instead of relying on the key changing.
### Multi-key commands
-GitLab supports Redis Cluster only for the Redis [rate-limiting](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/redis/rate_limiting.rb) type, introduced in [epic 823](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/823).
+GitLab supports Redis Cluster for [cache-related workloads](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/redis/cache.rb), introduced in [epic 878](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/878).
This imposes an additional constraint on naming: where GitLab is performing
operations that require several keys to be held on the same Redis server - for
@@ -81,6 +81,12 @@ Developers are highly encouraged to use [hash-tags](https://redis.io/docs/refere
where appropriate to facilitate future adoption of Redis Cluster in more Redis types. For example, the Namespace model uses hash-tags
for its [config cache keys](https://gitlab.com/gitlab-org/gitlab/-/blob/1a12337058f260d38405886d82da5e8bb5d8da0b/app/models/namespace.rb#L786).
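As a sketch of why hash-tags help (the key names below are hypothetical, not actual GitLab cache keys): Redis Cluster routes a key by hashing only the text between the first `{` and `}`, so keys sharing a hash-tag land in the same slot and can be used together in multi-key commands.

```ruby
namespace_id = 42

# Both keys share the `{namespace:42}` hash-tag, so Redis Cluster
# assigns them to the same slot.
keys = [
  "cache:{namespace:#{namespace_id}}:settings",
  "cache:{namespace:#{namespace_id}}:members"
]

# The routing key is the substring between the first `{` and `}`.
hash_tags = keys.map { |key| key[/\{([^}]*)\}/, 1] }

raise "keys would map to different slots" unless hash_tags.uniq.one?
puts hash_tags.first # => "namespace:42"
```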
+To perform multi-key commands, developers may use the [`Gitlab::Redis::CrossSlot::Pipeline`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/redis/cross_slot.rb) wrapper.
+However, this does not work for [transactions](https://redis.io/docs/interact/transactions/) as Redis Cluster does not support cross-slot transactions.
+
+For `Rails.cache`, we handle the `MGET` command found in `read_multi_get` by [patching it](https://gitlab.com/gitlab-org/gitlab/-/blob/c2bad2aac25e2f2778897bd4759506a72b118b15/lib/gitlab/patch/redis_cache_store.rb#L10) to use the `Gitlab::Redis::CrossSlot::Pipeline` wrapper.
+The minimum size of the pipeline is set to 1,000 commands, and it can be adjusted with the `GITLAB_REDIS_CLUSTER_PIPELINE_BATCH_LIMIT` environment variable.
+
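The hash-tag convention above can be illustrated outside of GitLab: Redis Cluster assigns each key to one of 16,384 slots by CRC16-hashing either the whole key or, when present, only the substring between the first `{` and the following `}`. A minimal plain-Ruby sketch of that mapping (the CRC16 variant is XMODEM, as in the Redis Cluster specification; key names are illustrative):

```ruby
# Sketch of the Redis Cluster key-to-slot mapping, showing why keys that
# share a hash-tag such as `{namespace}` always land on the same node.
def crc16_xmodem(str)
  crc = 0
  str.each_byte do |byte|
    crc ^= byte << 8
    8.times do
      crc = ((crc << 1) ^ ((crc & 0x8000).zero? ? 0 : 0x1021)) & 0xFFFF
    end
  end
  crc
end

def key_slot(key)
  # Hash-tag rule: if a non-empty substring sits between the first `{`
  # and the next `}`, only that substring is hashed.
  open_brace = key.index('{')
  if open_brace
    close_brace = key.index('}', open_brace + 1)
    key = key[(open_brace + 1)...close_brace] if close_brace && close_brace > open_brace + 1
  end
  crc16_xmodem(key) % 16_384
end

puts key_slot('{namespace}:config')  # same slot as the next key,
puts key_slot('{namespace}:members') # because only `namespace` is hashed
```

Keys without a common hash-tag can map to different slots, which is exactly the case where the `CrossSlot::Pipeline` wrapper is needed and transactions are not possible.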
## Redis in structured logging
For GitLab Team Members: There are <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
diff --git a/doc/development/rubocop_development_guide.md b/doc/development/rubocop_development_guide.md
index 1fdc0fbe78c..f6c11a0c7e3 100644
--- a/doc/development/rubocop_development_guide.md
+++ b/doc/development/rubocop_development_guide.md
@@ -28,6 +28,12 @@ discussions, nitpicking, or back-and-forth in reviews. The
[GitLab Ruby style guide](backend/ruby_style_guide.md) includes a non-exhaustive
list of styles that commonly come up in reviews and are not enforced.
+By default, we should not
+[disable a RuboCop rule inline](https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code), because doing so negates the agreed-upon code standards the rule enforces across the codebase.
+
+If you must disable a rule inline, explain why in the merge request and ensure the reviewers agree
+before merging.
+
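When an inline disable is truly unavoidable, the conventional shape pairs a `disable` directive with a matching `enable` and keeps the scope as tight as possible. A sketch (the cop name and justification are illustrative; the `-- reason` suffix is supported by recent RuboCop versions):

```ruby
# rubocop:disable Style/GlobalVars -- justification agreed in the MR discussion
$legacy_counter = 0
# rubocop:enable Style/GlobalVars
```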
Additionally, we have dedicated
[test-specific style guides and best practices](testing_guide/index.md).
@@ -53,7 +59,7 @@ A cop is in a _grace period_ if it is enabled and has `Details: grace period` de
On the default branch, offenses from cops in the [grace period](rake_tasks.md#run-rubocop-in-graceful-mode) do not fail the RuboCop CI job. Instead, the job notifies the `#f_rubocop` Slack channel. However, on other branches, the RuboCop job fails.
-A grace period can safely be lifted as soon as there are no warnings for 2 weeks in the `#f_rubocop` channel on Slack.
+A grace period can safely be lifted as soon as there are no warnings for 1 week in the `#f_rubocop` channel on Slack.
## Enabling a new cop
@@ -61,7 +67,7 @@ A grace period can safely be lifted as soon as there are no warnings for 2 weeks
1. [Generate TODOs for the new cop](rake_tasks.md#generate-initial-rubocop-todo-list).
1. [Set the new cop to `grace period`](#cop-grace-period).
1. Create an issue to fix TODOs and encourage community contributions (via ~"quick win" and/or ~"Seeking community contributions"). [See some examples](https://gitlab.com/gitlab-org/gitlab/-/issues/?sort=created_date&state=opened&label_name%5B%5D=quick%20wins&label_name%5B%5D=static%20code%20analysis&first_page_size=20).
-1. Create an issue to remove `grace period` after 2 weeks of silence in the `#f_rubocop` Slack channel. [See an example](https://gitlab.com/gitlab-org/gitlab/-/issues/374903).
+1. Create an issue to remove `grace period` after 1 week of silence in the `#f_rubocop` Slack channel. [See an example](https://gitlab.com/gitlab-org/gitlab/-/issues/374903).
## Silenced offenses
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
index 21c19c31b0a..d3629bc8dba 100644
--- a/doc/development/ruby_upgrade.md
+++ b/doc/development/ruby_upgrade.md
@@ -272,5 +272,5 @@ These experimental branches are not intended to be merged; they can be closed on
and merged back independently.
- **Give yourself enough time to fix problems ahead of a milestone release.** GitLab moves fast.
As a Ruby upgrade requires many MRs to be sent and reviewed, make sure all changes are merged at least a week
-before the 22nd. This gives us extra time to act if something breaks. If in doubt, it is better to
+before release day. This gives us extra time to act if something breaks. If in doubt, it is better to
postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/development/principles/#prioritizing-technical-decisions).
diff --git a/doc/development/sec/token_revocation_api.md b/doc/development/sec/token_revocation_api.md
index e2006ba519c..860faf30cf6 100644
--- a/doc/development/sec/token_revocation_api.md
+++ b/doc/development/sec/token_revocation_api.md
@@ -108,8 +108,8 @@ For example, to configure these values in the
```ruby
::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_token: 'MYSECRETTOKEN')
-::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_url: 'https://example.gitlab.com/revocation_service/v1/revoke_tokens')
-::Gitlab::CurrentSettings.update!(secret_detection_revocation_token_types_url: 'https://example.gitlab.com/revocation_service/v1/revocable_token_types')
+::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_url: 'https://gitlab.example.com/revocation_service/v1/revoke_tokens')
+::Gitlab::CurrentSettings.update!(secret_detection_revocation_token_types_url: 'https://gitlab.example.com/revocation_service/v1/revocable_token_types')
::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_enabled: true)
```
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index 186239cc547..806fbd8d1f6 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -1279,7 +1279,7 @@ Credentials can be:
- Login details like username and password.
- Private keys.
-- Tokens (PAT, runner tokens, JWT token, CSRF tokens, project access tokens, etc).
+- Tokens (PAT, runner authentication tokens, JWT token, CSRF tokens, project access tokens, etc).
- Session cookies.
- Any other piece of information that can be used for authentication or authorization purposes.
diff --git a/doc/development/service_ping/implement.md b/doc/development/service_ping/implement.md
deleted file mode 100644
index c1077793fb9..00000000000
--- a/doc/development/service_ping/implement.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/implement.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/implement.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/index.md b/doc/development/service_ping/index.md
deleted file mode 100644
index d0806ed375b..00000000000
--- a/doc/development/service_ping/index.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/index.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/index.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/metrics_dictionary.md b/doc/development/service_ping/metrics_dictionary.md
deleted file mode 100644
index fecab4916f5..00000000000
--- a/doc/development/service_ping/metrics_dictionary.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/metrics_dictionary.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/metrics_dictionary.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/metrics_instrumentation.md b/doc/development/service_ping/metrics_instrumentation.md
deleted file mode 100644
index 5a4dfc325e2..00000000000
--- a/doc/development/service_ping/metrics_instrumentation.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/metrics_instrumentation.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/metrics_instrumentation.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/metrics_lifecycle.md b/doc/development/service_ping/metrics_lifecycle.md
deleted file mode 100644
index 520b18139ff..00000000000
--- a/doc/development/service_ping/metrics_lifecycle.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/metrics_lifecycle.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/metrics_lifecycle.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/performance_indicator_metrics.md b/doc/development/service_ping/performance_indicator_metrics.md
deleted file mode 100644
index eda7224732d..00000000000
--- a/doc/development/service_ping/performance_indicator_metrics.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/performance_indicator_metrics.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/performance_indicator_metrics.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/review_guidelines.md b/doc/development/service_ping/review_guidelines.md
deleted file mode 100644
index d5805f615e2..00000000000
--- a/doc/development/service_ping/review_guidelines.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/review_guidelines.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/review_guidelines.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/troubleshooting.md b/doc/development/service_ping/troubleshooting.md
deleted file mode 100644
index 31b04c1a6bc..00000000000
--- a/doc/development/service_ping/troubleshooting.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/troubleshooting.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/troubleshooting.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/service_ping/usage_data.md b/doc/development/service_ping/usage_data.md
deleted file mode 100644
index 94ae90273d0..00000000000
--- a/doc/development/service_ping/usage_data.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/service_ping/usage_data.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/service_ping/usage_data.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/shell_commands.md b/doc/development/shell_commands.md
index 25f62fbcc98..321bd7aeadd 100644
--- a/doc/development/shell_commands.md
+++ b/doc/development/shell_commands.md
@@ -88,7 +88,7 @@ cat: illegal option -- l
usage: cat [-benstuv] [file ...]
```
-In the example above, the argument parser of `cat` assumes that `-l` is an option. The solution in the example above is to make it clear to `cat` that `-l` is really an argument, not an option. Many Unix command line tools follow the convention of separating options from arguments with `--`.
+In the example above, the argument parser of `cat` assumes that `-l` is an option. The solution in the example above is to make it clear to `cat` that `-l` is really an argument, not an option. Many Unix command-line tools follow the convention of separating options from arguments with `--`.
```shell
# Example (continued)
diff --git a/doc/development/snowplow/event_dictionary_guide.md b/doc/development/snowplow/event_dictionary_guide.md
deleted file mode 100644
index 2bea681bf59..00000000000
--- a/doc/development/snowplow/event_dictionary_guide.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/event_dictionary_guide.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/event_dictionary_guide.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/snowplow/implementation.md b/doc/development/snowplow/implementation.md
deleted file mode 100644
index a9e4e252a53..00000000000
--- a/doc/development/snowplow/implementation.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/implementation.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/implementation.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/snowplow/infrastructure.md b/doc/development/snowplow/infrastructure.md
deleted file mode 100644
index 6374af40ffe..00000000000
--- a/doc/development/snowplow/infrastructure.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/infrastructure.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/infrastructure.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/snowplow/review_guidelines.md b/doc/development/snowplow/review_guidelines.md
deleted file mode 100644
index f4752e08dde..00000000000
--- a/doc/development/snowplow/review_guidelines.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/review_guidelines.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/review_guidelines.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/snowplow/schemas.md b/doc/development/snowplow/schemas.md
deleted file mode 100644
index 7e00ddd976d..00000000000
--- a/doc/development/snowplow/schemas.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/schemas.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/schemas.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/snowplow/troubleshooting.md b/doc/development/snowplow/troubleshooting.md
deleted file mode 100644
index ed1f5033239..00000000000
--- a/doc/development/snowplow/troubleshooting.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: '../internal_analytics/snowplow/troubleshooting.md'
-remove_date: '2023-08-20'
----
-
-This document was moved to [another location](../internal_analytics/snowplow/troubleshooting.md).
-
-<!-- This redirect file can be deleted after <2023-08-20>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 65787f7a355..49739d7c8e9 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -911,7 +911,8 @@ so we need to set some guidelines for their use going forward:
### Common test setup
NOTE:
-`before_all` does not work with the `:delete` strategy. For more information, see [issue 420379](https://gitlab.com/gitlab-org/gitlab/-/issues/420379).
+`let_it_be` and `before_all` do not work with DatabaseCleaner's deletion strategy. This includes migration specs, Rake task specs, and specs that have the `:delete` RSpec metadata tag.
+For more information, see [issue 420379](https://gitlab.com/gitlab-org/gitlab/-/issues/420379).
In some cases, there is no need to recreate the same object for tests
again for each example. For example, a project and a guest of that project
@@ -1060,7 +1061,7 @@ Failing with an error along the lines of:
```shell
expected {
"assignee_id" => nil, "...1 +0000 } to include {"customer_relations_contacts" => [{:created_at => "2023-08-04T13:30:20Z", :first_name => "Sidney Jones3" }]}
-
+
Diff:
@@ -1,35 +1,69 @@
-"customer_relations_contacts" => [{:created_at=>"2023-08-04T13:30:20Z", :first_name=>"Sidney Jones3" }],
@@ -1225,7 +1226,7 @@ specs, so created repositories accumulate in this directory over the
lifetime of the process. Deleting them is expensive, but this could lead to
pollution unless carefully managed.
-To avoid this, [hashed storage](../../administration/repository_storage_types.md)
+To avoid this, [hashed storage](../../administration/repository_storage_paths.md)
is enabled in the test suite. This means that repositories are given a unique
path that depends on their project's ID. Because the project IDs are not reset
between specs, each spec gets its own repository on disk,
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index 4627d5d29cb..12f90e0d88c 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -266,7 +266,7 @@ ensuring we now sign in at the beginning of each test.
Next, let's test something other than Login. Let's test Issues, which are owned by the Plan
stage and the Project Management Group, so [create a file](#identify-the-devops-stage) in
-`qa/specs/features/browser_ui/3_create/issues` called `issues_spec.rb`.
+`qa/specs/features/browser_ui/2_plan/issue` called `issues_spec.rb`.
```ruby
# frozen_string_literal: true
@@ -274,12 +274,7 @@ stage and the Project Management Group, so [create a file](#identify-the-devops-
module QA
RSpec.describe 'Plan' do
describe 'Issues', product_group: :project_management do
- let(:issue) do
- Resource::Issue.fabricate_via_api! do |issue|
- issue.title = 'My issue'
- issue.description = 'This is an issue specific to this test'
- end
- end
+ let(:issue) { create(:issue) }
before do
Flow::Login.sign_in
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 96bd02d235b..ab4dd9acb63 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -169,18 +169,14 @@ Page::Main::Menu.perform do |menu|
end
#=> Good
-issue = Resource::Issue.fabricate_via_api! do |issue|
- issue.name = 'issue-name'
-end
+issue = create(:issue, name: 'issue-name')
Project::Issues::Index.perform do |index|
expect(index).to have_issue(issue)
end
#=> Bad
-issue = Resource::Issue.fabricate_via_api! do |issue|
- issue.name = 'issue-name'
-end
+issue = create(:issue, name: 'issue-name')
Project::Issues::Index.perform do |index|
expect(index).to have_issue(issue)
@@ -371,7 +367,7 @@ If the _only_ action in the test that requires administrator access is to toggle
In line with [using the API](#prefer-api-over-ui), use a `Commit` resource whenever possible.
-`ProjectPush` uses raw shell commands via the Git Command Line Interface (CLI) whereas the `Commit` resource makes an HTTP request.
+`ProjectPush` uses raw shell commands from the Git command-line interface (CLI), and the `Commit` resource makes an HTTP request.
```ruby
# Using a commit resource
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 4e7ef6f29a2..30fb1abd7df 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -55,17 +55,17 @@ graph TB
A2 -.->|1. Triggers an `omnibus-gitlab-mirror` pipeline<br>and wait for it to be done| B1
B2[`Trigger-qa` stage<br>`Trigger:qa-test` job] -.->|2. Triggers a `gitlab-qa-mirror` pipeline<br>and wait for it to be done| C1
-subgraph "`gitlab-org/gitlab` pipeline"
+subgraph " `gitlab-org/gitlab` pipeline"
A1[`build-images` stage<br>`build-qa-image` and `build-assets-image` jobs]
A2[`qa` stage<br>`e2e:package-and-test` job]
end
-subgraph "`gitlab-org/build/omnibus-gitlab-mirror` pipeline"
+subgraph " `gitlab-org/build/omnibus-gitlab-mirror` pipeline"
B1[`Trigger-docker` stage<br>`Trigger:gitlab-docker` job] -->|once done| B2
end
-subgraph "`gitlab-org/gitlab-qa-mirror` pipeline"
- C1>End-to-end jobs run]
+subgraph " `gitlab-org/gitlab-qa-mirror` pipeline"
+ C1[End-to-end jobs run]
end
```
diff --git a/doc/development/testing_guide/end_to_end/resources.md b/doc/development/testing_guide/end_to_end/resources.md
index becdc375c63..bf3f1c25f5e 100644
--- a/doc/development/testing_guide/end_to_end/resources.md
+++ b/doc/development/testing_guide/end_to_end/resources.md
@@ -392,7 +392,7 @@ In this case, the result is similar to calling `Resource::Shirt.fabricate!`.
### Factories
-You may also use FactoryBot invocations to create resources within your tests.
+You may also use [FactoryBot](https://github.com/thoughtbot/factory_bot/) invocations to create, build, and fetch resources within your tests.
```ruby
# create a project via the API to use in the test
@@ -403,9 +403,81 @@ let(:issue) { create(:issue, project: project) }
# create a private project via the API with a specific name
let(:project) { create(:project, :private, name: 'my-project-name', add_name_uuid: false) }
+
+###
+
+# instantiate an Issue but don't create it via API yet
+let(:issue) { build(:issue) }
+
+# instantiate a Project and perform some actions before creating
+let(:project) do
+ build(:project) do |p|
+ p.name = 'Test'
+ p.add_name_uuid = false
+ end
+end
+
+# fetch an existing issue via the API with attributes
+let(:existing_issue) { build(:issue, project: project, iid: issue.iid).reload! }
+```
+
+All factories are defined in [`qa/qa/factories`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/factories/) and correspond to
+their respective `QA::Resource::Base` subclass.
+
+For example, the `:issue` factory corresponds to `QA::Resource::Issue` in `qa/qa/resource/issue.rb`, and the `:project` factory to `QA::Resource::Project` in `qa/qa/resource/project.rb`.
+
+#### Create a new Factory
+
+Given a resource:
+
+```ruby
+# qa/resource/shirt.rb
+module QA
+ module Resource
+ class Shirt < Base
+ attr_accessor :name
+ attr_reader :read_only
+
+ attribute :brand
+
+ def api_post_body
+ { name: name, brand: brand }
+ end
+ end
+ end
+end
```
-All factories are defined in [`qa/qa/factories`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/factories/).
+Define a factory with defaults and overrides:
+
+```ruby
+# qa/factories/shirts.rb
+module QA
+ FactoryBot.define do
+ factory :shirt, class: 'QA::Resource::Shirt' do
+ brand { 'BrandName' }
+
+ trait :with_name do
+ name { 'Shirt Name' }
+ end
+ end
+ end
+end
+```
+
+In the test, create the resource via the API:
+
+```ruby
+let(:my_shirt) { create(:shirt, brand: 'AnotherBrand') } #<Resource::Shirt @brand="AnotherBrand" @name=nil>
+let(:named_shirt) { create(:shirt, :with_name) } #<Resource::Shirt @brand="BrandName" @name="Shirt Name">
+let(:invalid_shirt) { create(:shirt, read_only: true) } # NoMethodError: tries to call Resource::Shirt#read_only=
+
+it 'creates a shirt' do
+  expect(my_shirt.brand).to eq('AnotherBrand')
+  expect(named_shirt.name).to eq('Shirt Name')
+  expect { invalid_shirt }.to raise_error(NoMethodError)
+end
+```
### Resources cleanup
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index 2d36377967e..cd5e32bc0ad 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -38,6 +38,18 @@ it's reset to a pristine test after each test.
- [Example 4](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/103434#note_1172316521): A test for a database query passes in a fresh database, but in a
CI/CD pipeline where the database is used to process previous test sequences, the test fails. This likely
means that the query itself needs to be updated to work in a non-clean database.
+- [Example 5](https://gitlab.com/gitlab-org/gitlab/-/issues/416663#note_1457867234): Unrelated database connections
+ in asynchronous requests checked back in, causing the tests to accidentally
+ use these unrelated database connections. The failure was resolved in this
+ [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125742).
+- [Example 6](https://gitlab.com/gitlab-org/gitlab/-/issues/418757#note_1502138269): The maximum time to live
+ for a database connection causes these connections to be disconnected, which
+ in turn causes tests that rely on the transactions on these connections to
+ fail. The issue was fixed in this [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128567).
+- [Example 7](https://gitlab.com/gitlab-org/quality/engineering-productivity/master-broken-incidents/-/issues/3389#note_1534827164):
+ A TCP socket used in a test was not closed before the next test, which also used
+ the same port with another TCP socket.
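For socket leaks like Example 7, a defensive pattern is to bind an ephemeral port (port `0`, so the OS picks a free one) and close the socket in an `ensure` block so it is released even when the example fails. A stdlib-only sketch (helper name is illustrative):

```ruby
require 'socket'

# Bind an ephemeral port, hand the server to the block, and always
# release it so the next test can bind without EADDRINUSE.
def with_tcp_server
  server = TCPServer.new('127.0.0.1', 0) # port 0 => OS picks a free port
  yield server
ensure
  server&.close # runs even if the block raises
end

with_tcp_server do |server|
  port = server.addr[1]
  client = TCPSocket.new('127.0.0.1', port)
  client.write('ping')
  client.close_write # signal EOF so the server's read returns
  conn = server.accept
  raise 'unexpected payload' unless conn.read == 'ping'
  conn.close
  client.close
end
```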
### Dataset-specific
@@ -172,10 +184,12 @@ it 'succeeds', quarantine: 'https://gitlab.com/gitlab-org/gitlab/-/issues/12345'
end
```
-This means it is skipped unless run with `--tag quarantine`:
+This means it is skipped in CI. By default, quarantined tests still run locally.
+
+To skip them in local development as well, run with `--tag ~quarantine`:
```shell
-bin/rspec --tag quarantine
+bin/rspec --tag ~quarantine
```
After the long-term quarantining MR has reached production, you should revert the fast-quarantine MR you created earlier.
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 8da4350074d..3800e22b2f9 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -51,7 +51,7 @@ The default timeout for Jest is set in
If your test exceeds that time, it fails.
If you cannot improve the performance of the tests, you can increase the timeout
-for the whole suite using [`jest.setTimeout`](https://jestjs.io/docs/28.x/jest-object#jestsettimeouttimeout)
+for the whole suite using [`jest.setTimeout`](https://jestjs.io/docs/next/jest-object#jestsettimeouttimeout)
```javascript
jest.setTimeout(500);
@@ -63,7 +63,7 @@ describe('Component', () => {
});
```
-or for a specific test by providing a third argument to [`it`](https://jestjs.io/docs/28.x/api#testname-fn-timeout)
+or for a specific test by providing a third argument to [`it`](https://jestjs.io/docs/next/api#testname-fn-timeout)
```javascript
describe('Component', () => {
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index b4ae23336d5..ba13ca0c05a 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -6,21 +6,17 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Using review apps in the development of GitLab
-Review apps are deployed using the `start-review-app-pipeline` job which triggers a child pipeline containing a series of jobs to perform the various tasks needed to deploy a review app.
+Review apps are deployed using the `start-review-app-pipeline` manual job which triggers a child pipeline containing a series of jobs to perform the various tasks needed to deploy a review app.
![start-review-app-pipeline job](img/review-app-parent-pipeline.png)
For any of the following scenarios, the `start-review-app-pipeline` job would be automatically started:
-- for merge requests with CI configuration changes
-- for merge requests with frontend changes
-- for merge requests with changes to `{,ee/,jh/}{app/controllers}/**/*`
-- for merge requests with changes to `{,ee/,jh/}{app/models}/**/*`
-- for merge requests with changes to `{,ee/,jh/}lib/{,ee/,jh/}gitlab/**/*`
-- for merge requests with QA changes
- for scheduled pipelines
- the MR has the `pipeline:run-review-app` label set
+For all other scenarios, the `start-review-app-pipeline` job can be triggered manually.
+
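A manual job triggering a child pipeline generally takes this shape (a hypothetical sketch, not the actual `.gitlab-ci.yml` used in the GitLab project; the job name, stage, and included file path are illustrative):

```yaml
# Hypothetical sketch of a manual job that triggers a child pipeline.
start-review-app-pipeline:
  stage: review
  when: manual            # run only when a developer clicks "play"
  allow_failure: true     # so the manual job does not block the parent pipeline
  trigger:
    include: .gitlab/ci/review-apps/main.gitlab-ci.yml
    strategy: depend      # parent pipeline mirrors the child pipeline's status
```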
## E2E test runs on review apps
On every pipeline in the `qa` stage (which comes after the `review` stage), the `review-qa-smoke` and `review-qa-blocking` jobs are automatically started.
@@ -278,7 +274,7 @@ find a way to limit it to only us.**
- [Review apps integration for CE/EE (presentation)](https://docs.google.com/presentation/d/1QPLr6FO4LduROU8pQIPkX1yfGvD13GEJIBOenqoKxR8/edit?usp=sharing)
- [Stability issues](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/212)
-### Helpful command line tools
+### Helpful command-line tools
- [K9s](https://github.com/derailed/k9s) - enables a CLI dashboard across pods and filtering by labels
- [Stern](https://github.com/wercker/stern) - enables cross-pod log tailing based on label/field selectors
diff --git a/doc/development/value_stream_analytics.md b/doc/development/value_stream_analytics.md
index 955fc88c713..5aa6aecd9db 100644
--- a/doc/development/value_stream_analytics.md
+++ b/doc/development/value_stream_analytics.md
@@ -354,36 +354,4 @@ Analytics::CycleAnalytics::ReaggregationWorker.new.perform
#### Value stream analytics
-Seed issues and merge requests for value stream analytics:
-
- ```shell
- // Seed 10 issues for the project specified by <project-id>
- $ VSA_SEED_PROJECT_ID=<project-id> VSA_ISSUE_COUNT=10 SEED_VSA=true FILTER=cycle_analytics rake db:seed_fu
- ```
-
-#### DORA metrics
-
-Seed DORA daily metrics for value stream, insights and CI/CD analytics:
-
-1. On the left sidebar, at the top, select **Search GitLab** (**{search}**) to find your project.
-1. On the project's homepage, in the upper-left corner, copy the **Project ID**. You need it in a later step.
-1. [Create an environment for your selected project from the UI](../ci/environments/index.md#create-a-static-environment) named `production`.
-1. Open the rails console:
-
- ```shell
- rails c
- ```
-
-1. In the rails console, find the created environment by searching for the project ID:
-
- ```shell
- e = Environment.find_by(project_id: <project-id>, name: "production")
- ```
-
-1. To seed data for the past 100 days for the environment, run the following command:
-
- ```shell
- 100.times { |i| Dora::DailyMetrics.create(environment_id: e.id, date: (i + 1).days.ago, deployment_frequency: rand(50), incidents_count: rand(5), lead_time_for_changes_in_seconds: rand(50000), time_to_restore_service_in_seconds: rand(100000)) }
- ```
-
-DORA metric data should now be available for your selected project and any group or subgroup it belongs to.
+For instructions on how to seed data for value stream analytics, see [development seed files](../development/development_seed_files.md).
diff --git a/doc/development/work_items.md b/doc/development/work_items.md
index 90e454bec85..2b28b2cd4f2 100644
--- a/doc/development/work_items.md
+++ b/doc/development/work_items.md
@@ -45,14 +45,15 @@ Here are some problems with current issues usage and why we are looking into wor
## Work item terminology
-To avoid confusion and ensure communication is efficient, we will use the following terms exclusively when discussing work items.
+To avoid confusion and ensure [communication is efficient](https://handbook.gitlab.com/handbook/communication/#mecefu-terms), we will use the following terms exclusively when discussing work items. This list is the [single source of truth (SSoT)](https://handbook.gitlab.com/handbook/values/#single-source-of-truth) for Work Item terminology.
| Term | Description | Example of misuse | Should be |
| --- | --- | --- | --- |
| work item type | Classes of work item; for example: issue, requirement, test case, incident, or task | _Epics will eventually become issues_ | _Epics will eventually become a **work item type**_ |
| work item | An instance of a work item type | | |
-| work item view | The new frontend view that renders work items of any type | | |
-| legacy issue view | The existing view used to render issues and incidents | | |
+| work item view | The new frontend view that renders work items of any type | _This should be rendered in the new view_ | _This should be rendered in the work item view_ |
+| legacy object | An object that has been or will be converted to a Work Item Type | _Epics will be migrated from a standalone/old/former object to a work item type_ | _Epics will be converted from a legacy object to a work item type_ |
+| legacy issue view | The existing view used to render issues and incidents | _Issues continue to be rendered in the old view_ | _Issues continue to be rendered in the legacy issue view_ |
| issue | The existing issue model | | |
| issuable | Any model currently using the issuable module (issues, epics and MRs) | _Incidents are an **issuable**_ | _Incidents are a **work item type**_ |
| widget | A UI element to present or allow interaction with specific work item data | | |
@@ -84,9 +85,10 @@ end
We already use the concept of WITs within `issues` table through `issue_type`
column. There are `issue`, `incident`, and `test_case` issue types. To extend this
-so that in future we can allow users to define custom WITs, we will move the
-`issue_type` to a separate table: `work_item_types`. The migration process of `issue_type`
-to `work_item_types` will involve creating the set of WITs for all root-level groups.
+so that in future we can allow users to define custom WITs, we will
+move the `issue_type` to a separate table: `work_item_types`. The migration process of `issue_type`
+to `work_item_types` will involve creating the set of WITs for all root-level groups as described in
+[this epic](https://gitlab.com/groups/gitlab-org/-/epics/6536).
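Conceptually, the move replaces a per-row enum with a reference to a shared `work_item_types` record. A minimal sketch of the before/after shape (plain Ruby stand-ins, not the actual GitLab models):

```ruby
# Before the move: each issue row stored its own type as an enum integer.
legacy_issue = { title: 'Server down', issue_type: 1 } # 1 == incident

# After: the type lives in a shared work_item_types record that issues reference.
WorkItemType = Struct.new(:base_type, :title)
Issue = Struct.new(:title, :work_item_type)

incident_type = WorkItemType.new(1, 'Incident')
issue = Issue.new(legacy_issue[:title], incident_type)

issue.work_item_type.title # => "Incident"
```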
NOTE:
At first, defining a WIT will only be possible at the root-level group, which would then be inherited by subgroups.
@@ -99,7 +101,7 @@ assume the following base types: `issue: 0`, `incident: 1`, `test_case: 2`.
The respective `work_item_types` records:
-| `group_id` | `base_type` | `title` |
+| `namespace_id` | `base_type` | `title` |
| -------------- | ----------- | --------- |
| 11 | 0 | Issue |
| 11 | 1 | Incident |
@@ -191,6 +193,164 @@ Until the architecture of WIT widgets is finalized, we are holding off on the cr
types. If a new work item type is absolutely necessary, please reach out to a
member of the [Project Management Engineering Team](https://gitlab.com/gitlab-org/gitlab/-/issues/370599).
+### Creating a new work item type in the database
+
+We have completed the removal of the `issue_type` column from the issues table, in favor of using the new
+`work_item_types` table as described in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/6536).
+
+After the introduction of the `work_item_types` table, we added more `work_item_types`, and we want to make it
+easier for other teams to do so. To introduce a new `work_item_type`, you must:
+
+1. Write a database migration to create a new record in the `work_item_types` table.
+1. Update `Gitlab::DatabaseImporters::WorkItems::BaseTypeImporter`.
+
+The following MRs demonstrate how to introduce new `work_item_types`:
+
+- [MR example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127482)
+- [MR example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127917)
+
+#### Write a database migration
+
+First, write a database migration that creates the new record in the `work_item_types` table.
+
+Keep the following in mind when you write your migration:
+
+- **Important:** Exclude the new type from existing APIs.
+  - We probably want to exclude newly created work items of this type from showing
+    up in existing features (like issue lists) until the feature is fully released. For this reason,
+    we have to add the new type to
+    [this exclude list](https://gitlab.com/gitlab-org/gitlab/-/blob/a0a52dd05b5d3c6ca820b672f9c0626840d2429b/app/models/work_items/type.rb#L84),
+ unless it is expected that users can create new issues and work items with the new type as soon as the migration
+ is executed.
+- Use a regular migration, not a post-deploy.
+ - We believe it would be beneficial to use
+ [regular migrations](migration_style_guide.md#choose-an-appropriate-migration-type)
+ to add new work item types instead of a
+ [post deploy migration](database/post_deployment_migrations.md).
+ This way, follow-up MRs that depend on the type being created can assume it exists right away,
+ instead of having to wait for the next release.
+- Migrations should avoid failures.
+ - We expect data related to `work_item_types` to be in a certain state when running the migration that will create a new
+ type. At the moment, we write migrations that check the data and don't fail in the event we find
+ it in an inconsistent state. There's a discussion about how much we can rely on the state of data based on seeds and
+ migrations in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/423483). We can only
+ have a successful pipeline if we write the migration so it doesn't fail if data exists in an inconsistent
+ state. We probably need to update some of the database jobs in order to change this.
+- Add widget definitions for the new type.
+ - The migration adds the new work item type as well as the widget definitions that are required for each work item.
+ The widgets you choose depend on the feature the new work item supports, but there are some that probably
+ all new work items need, like `Description`.
+- Optional. Create hierarchy restrictions.
+ - In one of the example MRs we also insert records in the `work_item_hierarchy_restrictions` table. This is only
+   necessary if the new work item type is going to use the `Hierarchy` widget. In this table, you must specify
+   which work item types can have children and of which types. You should also specify the maximum hierarchy
+   depth for work items of the same type.
+
+##### Example of adding a ticket work item
+
+The `Ticket` work item type already exists in the database, but we use it here as an example migration.
+Note that a genuinely new type needs a new name and enum value.
+
+```ruby
+class AddTicketWorkItemType < Gitlab::Database::Migration[2.1]
+ disable_ddl_transaction!
+ restrict_gitlab_migration gitlab_schema: :gitlab_main
+
+ ISSUE_ENUM_VALUE = 0
+ # Enum value comes from the model where the enum is defined in
+ # https://gitlab.com/gitlab-org/gitlab/-/blob/1253f12abddb69cd1418c9e13e289d828b489f36/app/models/work_items/type.rb#L30.
+ # A new work item type should simply pick the next integer value.
+ TICKET_ENUM_VALUE = 8
+ TICKET_NAME = 'Ticket'
+ # Widget definitions also have an enum defined in
+ # https://gitlab.com/gitlab-org/gitlab/-/blob/1253f12abddb69cd1418c9e13e289d828b489f36/app/models/work_items/widget_definition.rb#L17.
+ # We need to provide both the enum and name as we plan to support custom widget names in the future.
+ TICKET_WIDGETS = {
+ 'Assignees' => 0,
+ 'Description' => 1,
+ 'Hierarchy' => 2,
+ 'Labels' => 3,
+ 'Milestone' => 4,
+ 'Notes' => 5,
+ 'Start and due date' => 6,
+ 'Health status' => 7,
+ 'Weight' => 8,
+ 'Iteration' => 9,
+ 'Notifications' => 14,
+ 'Current user todos' => 15,
+ 'Award emoji' => 16
+ }.freeze
+
+ class MigrationWorkItemType < MigrationRecord
+ self.table_name = 'work_item_types'
+ end
+
+ class MigrationWidgetDefinition < MigrationRecord
+ self.table_name = 'work_item_widget_definitions'
+ end
+
+ class MigrationHierarchyRestriction < MigrationRecord
+ self.table_name = 'work_item_hierarchy_restrictions'
+ end
+
+ def up
+ existing_ticket_work_item_type = MigrationWorkItemType.find_by(base_type: TICKET_ENUM_VALUE, namespace_id: nil)
+
+ return say('Ticket work item type record exists, skipping creation') if existing_ticket_work_item_type
+
+ new_ticket_work_item_type = MigrationWorkItemType.create(
+ name: TICKET_NAME,
+ namespace_id: nil,
+ base_type: TICKET_ENUM_VALUE,
+ icon_name: 'issue-type-issue'
+ )
+
+    return say('Ticket work item type creation failed, skipping widget creation') if new_ticket_work_item_type.new_record?
+
+ widgets = TICKET_WIDGETS.map do |widget_name, widget_enum_value|
+ {
+ work_item_type_id: new_ticket_work_item_type.id,
+ name: widget_name,
+ widget_type: widget_enum_value
+ }
+ end
+
+ MigrationWidgetDefinition.upsert_all(
+ widgets,
+ unique_by: :index_work_item_widget_definitions_on_default_witype_and_name
+ )
+
+ issue_type = MigrationWorkItemType.find_by(base_type: ISSUE_ENUM_VALUE, namespace_id: nil)
+ return say('Issue work item type not found, skipping hierarchy restrictions creation') unless issue_type
+
+ # This part of the migration is only necessary if the new type uses the `Hierarchy` widget.
+ restrictions = [
+ { parent_type_id: new_ticket_work_item_type.id, child_type_id: new_ticket_work_item_type.id, maximum_depth: 1 },
+ { parent_type_id: new_ticket_work_item_type.id, child_type_id: issue_type.id, maximum_depth: 1 }
+ ]
+
+ MigrationHierarchyRestriction.upsert_all(
+ restrictions,
+ unique_by: :index_work_item_hierarchy_restrictions_on_parent_and_child
+ )
+ end
+
+ def down
+ # There's the remote possibility that issues could already be
+  # using this issue type, with a tight foreign key constraint.
+ # Therefore we will not attempt to remove any data.
+ end
+end
+```
+
+<!-- markdownlint-disable-next-line MD044 -->
+#### Update Gitlab::DatabaseImporters::WorkItems::BaseTypeImporter
+
+The [BaseTypeImporter](https://gitlab.com/gitlab-org/gitlab/-/blob/f816a369d7d6bbd1d8d53d6c0bca4ca3389fdba7/lib/gitlab/database_importers/work_items/base_type_importer.rb)
+is where we can clearly visualize the structure of the existing types and the widgets associated with each of them.
+`BaseTypeImporter` is the single source of truth for fresh GitLab installs and for our test suite, so it must always
+reflect the changes we make with migrations.
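Conceptually, the importer behaves like a declarative map from each type to its widget definitions. The sketch below is illustrative only; the names and structure are assumptions, not the actual importer code:

```ruby
# Hypothetical, heavily simplified shape of the importer's data (illustrative
# only; the real structure lives in base_type_importer.rb and differs).
WIDGETS_FOR_TYPE = {
  issue: [:assignees, :description, :labels, :notes],
  ticket: [:assignees, :description, :hierarchy, :labels, :milestone, :notes]
}.freeze

# Any widget a migration adds for a type must also be reflected here, so that
# fresh installs and the test suite match migrated databases.
def widgets_for(type)
  WIDGETS_FOR_TYPE.fetch(type) { raise ArgumentError, "unknown work item type: #{type}" }
end
```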
+
### Custom work item types
With the WIT widget metadata and the workflow around mapping WIT to specific